diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ciel Gestion Commerciale 16.0 Crack..md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ciel Gestion Commerciale 16.0 Crack..md deleted file mode 100644 index 149841a69d3e77e5153dc31e0e3760325ad3d73a..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ciel Gestion Commerciale 16.0 Crack..md +++ /dev/null @@ -1,107 +0,0 @@ -
-
- Risks and drawbacks of using a cracked version of Ciel gestion commerciale 16.0
- Benefits and advantages of using a licensed version of Ciel gestion commerciale 16.0 | | H2: What is Ciel gestion commerciale and what is a crack? | - Definition and features of Ciel gestion commerciale 16.0
- Definition and types of cracks
- How cracks work and why they are illegal | | H2: Risks and drawbacks of using a cracked version of Ciel gestion commerciale 16.0 | - Security risks: malware, viruses, spyware, ransomware, etc.
- Legal risks: fines, lawsuits, penalties, etc.
- Functional risks: errors, bugs, crashes, data loss, etc.
- Ethical risks: unfair competition, piracy, theft, etc. | | H2: Benefits and advantages of using a licensed version of Ciel gestion commerciale 16.0 | - Security benefits: protection, updates, support, etc.
- Legal benefits: compliance, warranty, rights, etc.
- Functional benefits: performance, reliability, compatibility, etc.
- Ethical benefits: respect, trust, reputation, etc. | | H1: Conclusion | - Summary of the main points
- Recommendation to avoid cracks and use licensed software
- Call to action to buy Ciel gestion commerciale 16.0 from the official website | Article with HTML formatting:

Ciel gestion commerciale 16.0 crack: What is it and why you should avoid it

-

If you are looking for software to manage your business activities, you may have heard of Ciel gestion commerciale 16.0. This is a popular program that helps you create invoices, manage stocks, track payments, generate reports, and more. But you may also have heard of Ciel gestion commerciale 16.0 crack, which is a modified version of the software that bypasses the activation process and allows you to use it for free.

-

Ciel gestion commerciale 16.0 crack.


DOWNLOAD »»» https://byltly.com/2uKAgf



-

In this article, we will explain what Ciel gestion commerciale 16.0 is and what a crack is, and why you should avoid using a cracked version of this software. We will also show you the benefits and advantages of using a licensed version of Ciel gestion commerciale 16.0.

-

What is Ciel gestion commerciale and what is a crack?

-

Ciel gestion commerciale 16.0 is a program developed by Ciel, a French company that specializes in accounting and business management software. It is designed for small and medium-sized businesses that need a simple and efficient tool to manage their daily operations.

-

Ciel gestion commerciale 16.0 allows you to:

- -

A crack is a program that modifies another program to remove or disable its security features, such as activation codes or serial numbers. A crack can also be a patch that changes some parts of the original program's code to alter its behavior or functionality.

-

A crack is usually created by hackers or crackers who want to use software without paying for it or without following its terms and conditions. A crack can also be distributed by websites or forums that offer illegal downloads of software.

-

Ciel gestion commerciale 16.0 serial key
-Ciel gestion commerciale 16.0 activation code
-Ciel gestion commerciale 16.0 license key
-Ciel gestion commerciale 16.0 patch
-Ciel gestion commerciale 16.0 keygen
-Ciel gestion commerciale 16.0 full version download
-Ciel gestion commerciale 16.0 torrent
-Ciel gestion commerciale 16.0 free download
-Ciel gestion commerciale 16.0 crack download
-Ciel gestion commerciale 16.0 cracked version
-Ciel gestion commerciale 16.0 crack mac
-Ciel gestion commerciale 16.0 crack windows
-Ciel gestion commerciale 16.0 crack francais
-Ciel gestion commerciale 16.0 crack gratuit
-Ciel gestion commerciale 16.0 crack telecharger
-Ciel gestion commerciale 16.0 crack mega
-Ciel gestion commerciale 16.0 crack mediafire
-Ciel gestion commerciale 16.0 crack zippyshare
-Ciel gestion commerciale 16.0 crack rar
-Ciel gestion commerciale 16.0 crack zip
-Ciel gestion commerciale 16.0 crack no survey
-Ciel gestion commerciale 16.0 crack no password
-Ciel gestion commerciale 16.0 crack online
-Ciel gestion commerciale 16.0 crack offline
-Ciel gestion commerciale 16.0 crack generator
-Ciel gestion commerciale 16.0 crack software
-Ciel gestion commerciale 16.0 crack tool
-Ciel gestion commerciale 16.0 crack apk
-Ciel gestion commerciale 16.0 crack ios
-Ciel gestion commerciale 16.0 crack android
-Ciel gestion commerciale 16.0 crack review
-Ciel gestion commerciale 16.0 crack tutorial
-Ciel gestion commerciale 16.0 crack video
-Ciel gestion commerciale 16.0 crack youtube
-Ciel gestion commerciale 16.0 crack reddit
-Ciel gestion commerciale 16.0 crack quora
-Ciel gestion commerciale 16.0 crack forum
-Ciel gestion commerciale 16.0 crack blog
-Ciel gestion commerciale 16.0 crack website
-Ciel gestion commerciale 16.0 crack link
-How to get ciel gestion commerciale 16.0 crack
-How to install ciel gestion commerciale 16.0 crack
-How to use ciel gestion commerciale 16.0 crack
-How to activate ciel gestion commerciale 16.0 crack
-How to update ciel gestion commerciale 16.0 crack
-How to uninstall ciel gestion commerciale 16.0 crack
-How to fix ciel gestion commerciale 16.0 crack
-How to remove ciel gestion commerciale 16.0 crack
-How to download ciel gestion commerciale 16.0 crack
-How to buy ciel gestion commerciale 16.0

-

A crack works by replacing or modifying some files or registry entries of the original program to trick it into thinking that it has been activated or registered legally. A crack can also bypass some checks or validations that the original program performs to verify its authenticity or integrity.

-

Risks and drawbacks of using a cracked version of Ciel gestion commerciale 16.0

-

Using a cracked version of Ciel gestion commerciale 16.0 may seem tempting if you want to save money or try the software before buying it. However, you should be aware of the many risks and drawbacks that come with using a cracked version of this software.

-

Some of the risks and drawbacks are:

- -

Benefits and advantages of using a licensed version of Ciel gestion commerciale 16.0

-

The best way to use Ciel gestion commerciale 16.0 is to buy a licensed version from the official website of Ciel or one of its authorized resellers. By doing so, you will enjoy many benefits and advantages that are not available for users of cracked versions.

-

Some of the benefits and advantages are:

- -

Conclusion

-

In conclusion, Ciel gestion commerciale 16.0 is a great software package for managing your business activities, but using a cracked version of it is not worth it. You will expose yourself to many risks and drawbacks that can harm your computer, your data, your business, and your reputation. On the other hand, using a licensed version of Ciel gestion commerciale 16.0 will bring you many benefits and advantages that will improve your security, your legality, your functionality, and your ethics. Therefore, we strongly recommend that you avoid cracks and use licensed software instead.

-

If you want to buy Ciel gestion commerciale 16.0, you can visit the official website of Ciel at https://www.ciel.com/ or contact one of their authorized resellers near you. You can also request a free trial or a demo to test the software before buying it.

Don't miss this opportunity to use Ciel gestion commerciale 16.0, the best software for your business management.

-

FAQs

-

Here are some frequently asked questions about Ciel gestion commerciale 16.0 and cracks:

-
    -
  1. What is the price of Ciel gestion commerciale 16.0?
     The price of Ciel gestion commerciale 16.0 depends on the number of users and the duration of the subscription. You can choose between a monthly or a yearly subscription, and between one or more users. The prices range from 29€ to 99€ per month, or from 299€ to 999€ per year.
  2. How can I activate Ciel gestion commerciale 16.0?
     To activate Ciel gestion commerciale 16.0, you need to enter the activation code that you received by email after purchasing the software. You can also activate it online by logging in to your Ciel account and following the instructions.
  3. How can I update Ciel gestion commerciale 16.0?
     To update Ciel gestion commerciale 16.0, you need to have an active subscription and an internet connection. You can check for updates from the software itself or from your Ciel account. You will be notified when a new update is available and you can download and install it easily.
  4. How can I contact Ciel for support or assistance?
     To contact Ciel for support or assistance, you can use their online chat service, their phone service, their email service, or their online forum. You can find all the contact details and the opening hours on their website at https://www.ciel.com/contactez-nous/.
  5. How can I report a crack or piracy of Ciel gestion commerciale 16.0?
     To report a crack or piracy of Ciel gestion commerciale 16.0, you can use their online form at https://www.ciel.com/signaler-un-piratage/. You can also contact them by phone or by email and provide them with any evidence or information that you have.
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Arbaeen Nawawi In Urdu Pdf Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Arbaeen Nawawi In Urdu Pdf Download.md deleted file mode 100644 index 74315afcf2efb1f7b65821cc99fd5ad7e6d70861..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Arbaeen Nawawi In Urdu Pdf Download.md +++ /dev/null @@ -1,112 +0,0 @@ - -

Arbaeen Nawawi In Urdu Pdf Download: A Valuable Resource for Learning Hadith

- -

Hadith are the sayings and actions of the Prophet Muhammad (peace be upon him) that are recorded by his companions and transmitted to the later generations. Hadith are one of the primary sources of Islamic knowledge and guidance, along with the Quran. However, not all hadith are authentic and reliable, and some of them are fabricated or weak. Therefore, it is essential to learn hadith from trustworthy and qualified scholars who have verified and explained them.

- -

One of the most famous and respected scholars of hadith is Imam Abu Zakariya Yahya bin Sharaf al-Nawawi (1233-1277 CE), who belonged to the Shafi school of thought. He was a prolific writer and a renowned jurist, theologian, historian, and mystic. He authored many books on various Islamic sciences, such as fiqh, tafsir, usul al-fiqh, tasawwuf, and hadith. Among his most popular works are Riyad al-Salihin (The Gardens of the Righteous), Al-Minhaj fi Sharh Sahih Muslim (The Methodology in Explaining Sahih Muslim), and Arbaeen al-Nawawi (The Forty Hadiths of Nawawi).

-

Arbaeen Nawawi In Urdu Pdf Download


DOWNLOADhttps://imgfil.com/2uxYmE



- -

What is Arbaeen al-Nawawi?

- -

Arbaeen al-Nawawi is a collection of forty hadiths that Imam Nawawi compiled and commented on. He selected these hadiths from various sources, such as Sahih al-Bukhari, Sahih Muslim, Sunan Abu Dawud, Sunan al-Tirmidhi, Sunan al-Nasa'i, Sunan Ibn Majah, Musnad Ahmad, Muwatta Malik, and others. He chose these hadiths because they are comprehensive and fundamental in Islamic teachings and cover various aspects of faith, worship, ethics, manners, and social relations.

- -

Imam Nawawi said in his introduction to the book: "I have chosen these forty hadiths from among the sayings of Allah's Messenger (peace be upon him) that are comprehensive in meaning and convey great benefits. They are sufficient for those who act upon them to attain success in this world and the Hereafter."

- -

Imam Nawawi also explained each hadith in detail, clarifying its meaning, context, chain of narration, authenticity, and implications. He also mentioned the opinions of other scholars and related verses from the Quran. He did this to make the book more useful and beneficial for the readers.

- -

Why should you download Arbaeen Nawawi in Urdu Pdf?

- -

If you want to learn hadith from a reliable and authoritative source, you should download Arbaeen Nawawi in Urdu Pdf. This book will help you to:

- - - -

Downloading Arbaeen Nawawi in Urdu Pdf is easy and convenient. You can access it anytime and anywhere on your device without any hassle. You can also share it with your friends and family who are interested in learning hadith.

- -

How to download Arbaeen Nawawi in Urdu Pdf?

- -

To download Arbaeen Nawawi in Urdu Pdf, you just need to follow these simple steps:

-

- -
    -
  1. Go to any of the websites that offer Arbaeen Nawawi in Urdu Pdf for free download, such as https://archive.org/details/toobaa-research-library-ArbaeenNowviUrdu, https://librarypk.com/sharah-arbaeen-e-nawawi-urdu/, or https://www.emaanlibrary.com/book/urdu-sharah-arbaeen-e-navavi-imam-nawawi/.
  2. Find the link or button that says "Download" or "Download Pdf" and click on it.
  3. Select the folder or location where you want to save the file on your device.
  4. Open the file with a program or app that can read Pdf files and enjoy reading Arbaeen Nawawi in Urdu.
- -
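If you would rather fetch the file from a script than through the browser, the steps above reduce to a single download call. The sketch below uses only the Python standard library; the PDF_URL value is a placeholder, since the direct download link has to be copied from whichever of the sites in step 1 you choose.

```python
# Minimal sketch: save a PDF from a direct download link using the
# Python standard library only. PDF_URL is a placeholder -- paste the
# direct "Download Pdf" link from the chosen site before running.
from urllib.request import urlretrieve

PDF_URL = "https://example.org/arbaeen-nawawi-urdu.pdf"  # placeholder, not a real link
DEST = "arbaeen-nawawi-urdu.pdf"

# urlretrieve fetches the URL and writes it to DEST on disk.
path, headers = urlretrieve(PDF_URL, DEST)
print(f"Saved file to {path} ({headers.get('Content-Length', 'unknown')} bytes)")
```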
Conclusion
- -

Arbaeen Nawawi in Urdu Pdf is a valuable resource for learning hadith from one of the greatest scholars of Islam. It contains forty hadiths that are comprehensive and fundamental in Islamic teachings. It also provides a detailed commentary and explanation of each hadith by Imam Nawawi and other scholars. It is easy and convenient to download Arbaeen Nawawi in Urdu Pdf from various websites for free. By reading this book, you can increase your faith, knowledge, character, and practice of Islam.

-
What are the main themes and topics of Arbaeen Nawawi in Urdu Pdf?
- -

Arbaeen Nawawi in Urdu Pdf covers the main themes and topics of Islam that are derived from the Quran and the Sunnah. These include:

- - - -

These themes and topics are explained and illustrated by the hadiths that are collected and commented on by Imam Nawawi in Arbaeen Nawawi in Urdu Pdf.

- -How to benefit from Arbaeen Nawawi in Urdu Pdf? - -

Arbaeen Nawawi in Urdu Pdf is a book that can benefit anyone who reads it with sincerity and attention. However, to get the most benefit from this book, one should follow some guidelines and tips:

- - - -

With these guidelines and tips, Arbaeen Nawawi in Urdu Pdf can be a source of guidance and inspiration for anyone who wants to learn hadith from one of the greatest scholars of Islam.

-What are the challenges and opportunities of Arbaeen Nawawi in Urdu Pdf? - -

Arbaeen Nawawi in Urdu Pdf is a book that has many challenges and opportunities for the readers and learners of hadith. Some of these are:

- - - -

These challenges and opportunities can be overcome and utilized by reading Arbaeen Nawawi in Urdu Pdf with sincerity, attention, respect, critical thinking, practical approach, regularity, consistency, support, cooperation, and feedback.

- -Conclusion - -

Arbaeen Nawawi in Urdu Pdf is a valuable resource for learning hadith from one of the greatest scholars of Islam. It contains forty hadiths that are comprehensive and fundamental in Islamic teachings. It also provides a detailed commentary and explanation of each hadith by Imam Nawawi and other scholars. It is easy and convenient to download Arbaeen Nawawi in Urdu Pdf from various websites for free. By reading this book, you can increase your faith, knowledge, character, and practice of Islam.

- -

Arbaeen Nawawi in Urdu Pdf covers the main themes and topics of Islam that are derived from the Quran and the Sunnah. These include the fundamentals of faith, the pillars of Islam, the virtues and obligations of worship, the rights and duties of Muslims, the moral and ethical values of Islam, the social and legal aspects of Islam, and the spiritual and mystical aspects of Islam. These themes and topics are explained and illustrated by the hadiths that are collected and commented on by Imam Nawawi in Arbaeen Nawawi in Urdu Pdf.

- -

To benefit from Arbaeen Nawawi in Urdu Pdf, one should follow some guidelines and tips such as reading the book with an open mind and a humble heart; reading the book with a sincere intention to learn and act upon what is taught; reading the book with respect and reverence for the words of Allah and His Messenger (peace be upon him); reading the book with a critical and analytical mind that seeks to understand and verify what is said; reading the book with a practical and realistic approach that applies what is learned to one's life and situations; reading the book with a regular and consistent schedule that allows one to review and reflect on what is read; reading the book with a supportive and cooperative attitude that shares what is learned with others and seeks their feedback and advice.

- -

Arbaeen Nawawi in Urdu Pdf is a book that has many challenges and opportunities for the readers and learners of hadith. These include verifying the authenticity and reliability of the hadiths; understanding their meaning; applying them to one's life; learning from authentic sources; increasing faith; improving character; enhancing knowledge; studying commentary; overcoming difficulties; utilizing resources; seeking guidance; sharing benefits; etc. These challenges can be overcome by reading Arbaeen Nawawi in Urdu Pdf with sincerity, attention, respect, critical thinking, practical approach, regularity, consistency, support, cooperation, feedback.

-Conclusion - -

Arbaeen Nawawi in Urdu Pdf is a book that every Muslim should read and benefit from. It is a collection of forty hadiths that summarize the essence of Islam and its teachings. It is also a commentary and explanation of these hadiths by one of the most eminent scholars of Islam, Imam Nawawi. It is a book that can be downloaded for free from various websites and read on any device. It is a book that can increase one's faith, knowledge, character, and practice of Islam.

- -

Arbaeen Nawawi in Urdu Pdf covers the main themes and topics of Islam, such as the fundamentals of faith, the pillars of Islam, the virtues and obligations of worship, the rights and duties of Muslims, the moral and ethical values of Islam, the social and legal aspects of Islam, and the spiritual and mystical aspects of Islam. It explains and illustrates these themes and topics by the hadiths that are selected and commented on by Imam Nawawi. It also provides the opinions and views of other scholars and related verses from the Quran.

- -

To benefit from Arbaeen Nawawi in Urdu Pdf, one should read it with sincerity, attention, respect, critical thinking, practical approach, regularity, consistency, support, cooperation, feedback. One should also overcome the challenges and utilize the opportunities that this book offers. One should also share what one learns with others and seek their guidance and advice.

- -

Arbaeen Nawawi in Urdu Pdf is a book that can change one's life for the better. It is a book that can guide one to the path of Allah and His Messenger (peace be upon him). It is a book that can make one a better Muslim and a better human being.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Csi Safe 2014 Crack 2015 13 ((EXCLUSIVE)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Csi Safe 2014 Crack 2015 13 ((EXCLUSIVE)).md deleted file mode 100644 index af9cb88326fa4004bdfa8041aae9a687d7b9b3ad..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Csi Safe 2014 Crack 2015 13 ((EXCLUSIVE)).md +++ /dev/null @@ -1,6 +0,0 @@ -

csi safe 2014 crack 2015 13


Download Ziphttps://imgfil.com/2uxX5L



- - . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4fefd39f24
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Electronic Communications Systems By Wayne Tomasi Pdf 5th.rar TOP.md b/spaces/1gistliPinn/ChatGPT4/Examples/Electronic Communications Systems By Wayne Tomasi Pdf 5th.rar TOP.md deleted file mode 100644 index 243b643d12857f8dd1ad5dc7eec3d4d43a6b9c4c..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Electronic Communications Systems By Wayne Tomasi Pdf 5th.rar TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

electronic communications systems by wayne tomasi pdf 5th.rar


Download File 🆓 https://imgfil.com/2uxZZn



-
-Electronic Communications Systems 5th Edition by Wayne Tomasi ... fundamentals through. tomasi torrent rar systems by wayne tomasi. 4d29de3e1b
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Street Join Clubs Defeat Bosses and Build Your Dream Car - Free APK Download.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Street Join Clubs Defeat Bosses and Build Your Dream Car - Free APK Download.md deleted file mode 100644 index 43ffc81c95b576ac2a0dcedd31a62b9fefa6865a..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CarX Street Join Clubs Defeat Bosses and Build Your Dream Car - Free APK Download.md +++ /dev/null @@ -1,102 +0,0 @@ - -

CarX Street: A Free Racing Game for Android Lovers

-

If you are a fan of racing games, you might have heard of CarX Street, a free racing game from CarX Technology for Android devices. CarX Street is an open beta test game that lets you experience the thrill of being a street racer in a dynamic open world. You can choose from a variety of cars, customize them to your liking, and race against other players or the AI in different modes. In this article, we will tell you more about CarX Street, its features, and how to download and install it on your Android device.

-

Introduction

-

What is CarX Street?

-

CarX Street is a racing game that was released in 2023 by CarX Technology, the makers of CarX Drift Racing 2. It is an open beta test game, which means that it is still in development and may have some bugs or glitches. However, it also means that you can play it for free and give your feedback to the developers to improve the game.

-

car x street apkvision


Download Filehttps://urlin.us/2uT2kG



-

CarX Street is set in Sunset City, a huge open world that you can explore freely. You can drive on highways, city streets, or off-road tracks. You can also join clubs, challenge bosses, and compete with other players online or offline. You can also build your own garage, buy houses for your cars, and collect different cars for each race mode.

-

Why should you play CarX Street?

-

CarX Street is a game that will appeal to anyone who loves racing games. Here are some reasons why you should play CarX Street:

- -

Features of CarX Street

-

Career mode

-

In career mode, you can start your journey as a street racer and become the legend of Sunset City. You can join clubs, defeat bosses, and prove your skills to everyone. You can also unlock new cars, parts, houses, and rewards as you progress through the career mode.

-

Improved car tuning

-

In CarX Street, you can tune your car to fit your preferences and needs. You can change the engine, transmission, body, suspension, tires, and more. You can also swap parts and create unique combinations for each race. For example, you can use a V8 engine for speed races or a rotary engine for drift races.

-

Visual car tuning

-

Besides performance tuning, you can also customize your car's appearance by changing the mirrors, headlights, lights, skirt, bumper, rims, and more. You can create a unique look for your car by using different colors, stickers, decals, and accessories.

-

car x street racing game download
-car x street mod apk unlimited money
-car x street open world beta
-car x street android gameplay
-car x street apk obb
-car x street drift racing 2
-car x street sunset city
-car x street apk pure
-car x street hack apk
-car x street latest version
-car x street online multiplayer
-car x street best cars
-car x street cheats codes
-car x street free download for pc
-car x street update 2023
-car x street tuning guide
-car x street review
-car x street ios release date
-car x street apk mirror
-car x street system requirements
-car x street tips and tricks
-car x street gameplay trailer
-car x street offline mode
-car x street customisation options
-car x street apk mod menu
-car x street graphics settings
-car x street new features
-car x street apk revdl
-car x street unlimited coins and gems
-car x street how to play
-car x street apk data download
-car x street realistic physics engine
-car x street apk combo
-car x street filehippo.com download link[^3^]
-car x street google play store[^2^]
-carx technologies official site[^6^]
-carx technologies privacy policy[^5^]
-license agreement for carx technologies[^4^]
-carx technologies support email address[^1^]
-how to install carx technologies games on android devices[^1^]

-

Real

Realistic physics and graphics

-

CarX Street uses the CarX Technology engine, which simulates realistic car behavior and physics. You can feel the difference between front-wheel drive, rear-wheel drive, and all-wheel drive cars. You can also experience the effects of traction, torque, and inertia on your car. The game also has stunning graphics and animations that make the races more immersive and exciting. You can see the details of the cars, the environment, and the weather. You can also enjoy the dynamic day/night cycle and the changing lighting and shadows.

-

Dynamic day/night cycle

-

One of the most impressive features of CarX Street is the dynamic day/night cycle. The game has a realistic time system that changes according to your location and timezone. You can see the sun rise and set, the moon phases, and the stars in the sky. The day/night cycle also affects the gameplay and the atmosphere of the races. For example, you can race in different weather conditions, such as sunny, cloudy, rainy, or foggy. You can also encounter different traffic patterns, pedestrians, and events in the city.

-

How to download and install CarX Street

-

Requirements and compatibility

-

CarX Street is a free racing game for Android devices. However, it is still in beta testing and may not be compatible with all devices or regions. To play CarX Street, you need to have an Android device that meets the following requirements:

- -

You can check if your device is compatible with CarX Street by visiting its official website or its Google Play Store page. You can also join the official Discord server or Facebook group to get updates and support from the developers and other players.

-

Steps to download and install CarX Street APK

-

If you want to download and install CarX Street APK on your Android device, you can follow these steps:

-
    -
  1. Go to the official website of CarX Street and click on the "Download APK" button.
  2. Wait for the APK file to download on your device. You may need to allow unknown sources in your settings to install apps from outside the Google Play Store.
  3. Open the APK file and follow the instructions to install CarX Street on your device.
  4. Launch CarX Street and enjoy the game.
-
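For readers who prefer installing from a computer, the same result can be reached by sideloading the downloaded APK over USB. This is only a sketch: it assumes the Android platform tools (adb) are installed, USB debugging is enabled on the phone, and the APK file name below is a placeholder for whatever the official site actually serves.

```python
# Rough sketch of sideloading the CarX Street APK from a computer.
# Assumes adb is on PATH and the phone is connected with USB debugging on.
import subprocess

APK_PATH = "carx-street.apk"  # placeholder file name, not the real download name

# List connected devices so you can confirm the phone is authorised.
subprocess.run(["adb", "devices"], check=True)

# -r reinstalls/updates if an older build of the game is already present.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```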

Conclusion

-

CarX Street is a free racing game for Android devices that lets you experience the thrill of being a street racer in a dynamic open world. You can choose from a variety of cars, customize them to your liking, and race against other players or the AI in different modes. You can also explore Sunset City, join clubs, challenge bosses, and collect rewards. CarX Street has realistic physics and graphics, a detailed car tuning system, and a dynamic day/night cycle. If you are a fan of racing games, you should definitely try CarX Street.

-

FAQs

-

What is CarX Technology?

-

CarX Technology is a company that develops realistic car physics engines for games. They have created several popular racing games, such as CarX Drift Racing 2, CarX Highway Racing, and CarX Rally.

-

How can I get more cars in CarX Street?

-

You can get more cars in CarX Street by winning races, completing tasks, joining clubs, or buying them with in-game currency or real money.

-

How can I play CarX Street online?

-

You can play CarX Street online by connecting to a Wi-Fi or mobile data network. You can then join online races with other players or create your own lobby.

-

How can I give feedback or report bugs in CarX Street?

-

You can give feedback or report bugs in CarX Street by contacting the developers through their official website, Discord server, Facebook group, or email (support@carx-tech.com).

-

Is CarX Street available for iOS devices?

-

No, CarX Street is currently only available for Android devices. However, the developers have stated that they are working on an iOS version of the game.

- : https://carx-street.com/ : https://play

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Drift Hunters MAX for Mac and Enjoy Over 25 Awesome Drift Cars.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Drift Hunters MAX for Mac and Enjoy Over 25 Awesome Drift Cars.md deleted file mode 100644 index ba98f373d5bc4382a131b73f7c1425f5a2635b99..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Drift Hunters MAX for Mac and Enjoy Over 25 Awesome Drift Cars.md +++ /dev/null @@ -1,139 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - -' - # make_html_tags returns pyparsing expressions for the opening and - # closing tags as a 2-tuple - a, a_end = make_html_tags("A") - link_expr = a + SkipTo(a_end)("link_text") + a_end - - for link in link_expr.search_string(text): - # attributes in the tag (like "href" shown here) are - # also accessible as named results - print(link.link_text, '->', link.href) - - prints:: - - pyparsing -> https://github.com/pyparsing/pyparsing/wiki - """ - return _makeTags(tag_str, False) - - -def make_xml_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for XML, - given a tag name. Matches tags only in the given upper/lower case. - - Example: similar to :class:`make_html_tags` - """ - return _makeTags(tag_str, True) - - -any_open_tag, any_close_tag = make_html_tags( - Word(alphas, alphanums + "_:").set_name("any tag") -) - -_htmlEntityMap = {k.rstrip(";"): v for k, v in html.entities.html5.items()} -common_html_entity = Regex("&(?P" + "|".join(_htmlEntityMap) + ");").set_name( - "common HTML entity" -) - - -def replace_html_entity(t): - """Helper parser action to replace common HTML entities with their special characters""" - return _htmlEntityMap.get(t.entity) - - -class OpAssoc(Enum): - LEFT = 1 - RIGHT = 2 - - -InfixNotationOperatorArgType = Union[ - ParserElement, str, Tuple[Union[ParserElement, str], Union[ParserElement, str]] -] -InfixNotationOperatorSpec = Union[ - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - OptionalType[ParseAction], - ], - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - ], -] - - -def infix_notation( - base_expr: ParserElement, - op_list: List[InfixNotationOperatorSpec], - lpar: Union[str, ParserElement] = Suppress("("), - rpar: Union[str, ParserElement] = Suppress(")"), -) -> ParserElement: - """Helper method for constructing grammars of expressions made up of - operators working in a precedence hierarchy. Operators may be unary - or binary, left- or right-associative. Parse actions can also be - attached to operator expressions. The generated parser will also - recognize the use of parentheses to override operator precedences - (see example below). - - Note: if you define a deep operator list, you may see performance - issues when using infix_notation. See - :class:`ParserElement.enable_packrat` for a mechanism to potentially - improve your parser performance. 
- - Parameters: - - ``base_expr`` - expression representing the most basic operand to - be used in the expression - - ``op_list`` - list of tuples, one for each operator precedence level - in the expression grammar; each tuple is of the form ``(op_expr, - num_operands, right_left_assoc, (optional)parse_action)``, where: - - - ``op_expr`` is the pyparsing expression for the operator; may also - be a string, which will be converted to a Literal; if ``num_operands`` - is 3, ``op_expr`` is a tuple of two expressions, for the two - operators separating the 3 terms - - ``num_operands`` is the number of terms for this operator (must be 1, - 2, or 3) - - ``right_left_assoc`` is the indicator whether the operator is right - or left associative, using the pyparsing-defined constants - ``OpAssoc.RIGHT`` and ``OpAssoc.LEFT``. - - ``parse_action`` is the parse action to be associated with - expressions matching this operator expression (the parse action - tuple member may be omitted); if the parse action is passed - a tuple or list of functions, this is equivalent to calling - ``set_parse_action(*fn)`` - (:class:`ParserElement.set_parse_action`) - - ``lpar`` - expression for matching left-parentheses - (default= ``Suppress('(')``) - - ``rpar`` - expression for matching right-parentheses - (default= ``Suppress(')')``) - - Example:: - - # simple example of four-function arithmetic with ints and - # variable names - integer = pyparsing_common.signed_integer - varname = pyparsing_common.identifier - - arith_expr = infix_notation(integer | varname, - [ - ('-', 1, OpAssoc.RIGHT), - (one_of('* /'), 2, OpAssoc.LEFT), - (one_of('+ -'), 2, OpAssoc.LEFT), - ]) - - arith_expr.run_tests(''' - 5+3*6 - (5+3)*6 - -2--11 - ''', full_dump=False) - - prints:: - - 5+3*6 - [[5, '+', [3, '*', 6]]] - - (5+3)*6 - [[[5, '+', 3], '*', 6]] - - -2--11 - [[['-', 2], '-', ['-', 11]]] - """ - # captive version of FollowedBy that does not do parse actions or capture results names - class _FB(FollowedBy): - def parseImpl(self, instring, loc, doActions=True): - self.expr.try_parse(instring, loc) - return loc, [] - - _FB.__name__ = "FollowedBy>" - - ret = Forward() - lpar = Suppress(lpar) - rpar = Suppress(rpar) - lastExpr = base_expr | (lpar + ret + rpar) - for i, operDef in enumerate(op_list): - opExpr, arity, rightLeftAssoc, pa = (operDef + (None,))[:4] - if isinstance(opExpr, str_type): - opExpr = ParserElement._literalStringClass(opExpr) - if arity == 3: - if not isinstance(opExpr, (tuple, list)) or len(opExpr) != 2: - raise ValueError( - "if numterms=3, opExpr must be a tuple or list of two expressions" - ) - opExpr1, opExpr2 = opExpr - term_name = "{}{} term".format(opExpr1, opExpr2) - else: - term_name = "{} term".format(opExpr) - - if not 1 <= arity <= 3: - raise ValueError("operator must be unary (1), binary (2), or ternary (3)") - - if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT): - raise ValueError("operator must indicate right or left associativity") - - thisExpr = Forward().set_name(term_name) - if rightLeftAssoc is OpAssoc.LEFT: - if arity == 1: - matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...]) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + lastExpr) + Group( - lastExpr + (opExpr + lastExpr)[1, ...] 
- ) - else: - matchExpr = _FB(lastExpr + lastExpr) + Group(lastExpr[2, ...]) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr - ) + Group(lastExpr + OneOrMore(opExpr1 + lastExpr + opExpr2 + lastExpr)) - elif rightLeftAssoc is OpAssoc.RIGHT: - if arity == 1: - # try to avoid LR with this extra test - if not isinstance(opExpr, Opt): - opExpr = Opt(opExpr) - matchExpr = _FB(opExpr.expr + thisExpr) + Group(opExpr + thisExpr) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + thisExpr) + Group( - lastExpr + (opExpr + thisExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + thisExpr) + Group( - lastExpr + thisExpr[1, ...] - ) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr - ) + Group(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) - if pa: - if isinstance(pa, (tuple, list)): - matchExpr.set_parse_action(*pa) - else: - matchExpr.set_parse_action(pa) - thisExpr <<= (matchExpr | lastExpr).setName(term_name) - lastExpr = thisExpr - ret <<= lastExpr - return ret - - -def indentedBlock(blockStatementExpr, indentStack, indent=True, backup_stacks=[]): - """ - (DEPRECATED - use IndentedBlock class instead) - Helper method for defining space-delimited indentation blocks, - such as those used to define block statements in Python source code. - - Parameters: - - - ``blockStatementExpr`` - expression defining syntax of statement that - is repeated within the indented block - - ``indentStack`` - list created by caller to manage indentation stack - (multiple ``statementWithIndentedBlock`` expressions within a single - grammar should share a common ``indentStack``) - - ``indent`` - boolean indicating whether block must be indented beyond - the current level; set to ``False`` for block of left-most statements - (default= ``True``) - - A valid block must contain at least one ``blockStatement``. - - (Note that indentedBlock uses internal parse actions which make it - incompatible with packrat parsing.) 
- - Example:: - - data = ''' - def A(z): - A1 - B = 100 - G = A2 - A2 - A3 - B - def BB(a,b,c): - BB1 - def BBA(): - bba1 - bba2 - bba3 - C - D - def spam(x,y): - def eggs(z): - pass - ''' - - - indentStack = [1] - stmt = Forward() - - identifier = Word(alphas, alphanums) - funcDecl = ("def" + identifier + Group("(" + Opt(delimitedList(identifier)) + ")") + ":") - func_body = indentedBlock(stmt, indentStack) - funcDef = Group(funcDecl + func_body) - - rvalue = Forward() - funcCall = Group(identifier + "(" + Opt(delimitedList(rvalue)) + ")") - rvalue << (funcCall | identifier | Word(nums)) - assignment = Group(identifier + "=" + rvalue) - stmt << (funcDef | assignment | identifier) - - module_body = OneOrMore(stmt) - - parseTree = module_body.parseString(data) - parseTree.pprint() - - prints:: - - [['def', - 'A', - ['(', 'z', ')'], - ':', - [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], - 'B', - ['def', - 'BB', - ['(', 'a', 'b', 'c', ')'], - ':', - [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], - 'C', - 'D', - ['def', - 'spam', - ['(', 'x', 'y', ')'], - ':', - [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]] - """ - backup_stacks.append(indentStack[:]) - - def reset_stack(): - indentStack[:] = backup_stacks[-1] - - def checkPeerIndent(s, l, t): - if l >= len(s): - return - curCol = col(l, s) - if curCol != indentStack[-1]: - if curCol > indentStack[-1]: - raise ParseException(s, l, "illegal nesting") - raise ParseException(s, l, "not a peer entry") - - def checkSubIndent(s, l, t): - curCol = col(l, s) - if curCol > indentStack[-1]: - indentStack.append(curCol) - else: - raise ParseException(s, l, "not a subentry") - - def checkUnindent(s, l, t): - if l >= len(s): - return - curCol = col(l, s) - if not (indentStack and curCol in indentStack): - raise ParseException(s, l, "not an unindent") - if curCol < indentStack[-1]: - indentStack.pop() - - NL = OneOrMore(LineEnd().set_whitespace_chars("\t ").suppress()) - INDENT = (Empty() + Empty().set_parse_action(checkSubIndent)).set_name("INDENT") - PEER = Empty().set_parse_action(checkPeerIndent).set_name("") - UNDENT = Empty().set_parse_action(checkUnindent).set_name("UNINDENT") - if indent: - smExpr = Group( - Opt(NL) - + INDENT - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + UNDENT - ) - else: - smExpr = Group( - Opt(NL) - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + Opt(UNDENT) - ) - - # add a parse action to remove backup_stack from list of backups - smExpr.add_parse_action( - lambda: backup_stacks.pop(-1) and None if backup_stacks else None - ) - smExpr.set_fail_action(lambda a, b, c, d: reset_stack()) - blockStatementExpr.ignore(_bslash + LineEnd()) - return smExpr.set_name("indented block") - - -# it's easy to get these comment structures wrong - they're very common, so may as well make them available -c_style_comment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/").set_name( - "C style comment" -) -"Comment of the form ``/* ... */``" - -html_comment = Regex(r"").set_name("HTML comment") -"Comment of the form ````" - -rest_of_line = Regex(r".*").leave_whitespace().set_name("rest of line") -dbl_slash_comment = Regex(r"//(?:\\\n|[^\n])*").set_name("// comment") -"Comment of the form ``// ... 
(to end of line)``" - -cpp_style_comment = Combine( - Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/" | dbl_slash_comment -).set_name("C++ style comment") -"Comment of either form :class:`c_style_comment` or :class:`dbl_slash_comment`" - -java_style_comment = cpp_style_comment -"Same as :class:`cpp_style_comment`" - -python_style_comment = Regex(r"#.*").set_name("Python style comment") -"Comment of the form ``# ... (to end of line)``" - - -# build list of built-in expressions, for future reference if a global default value -# gets updated -_builtin_exprs = [v for v in vars().values() if isinstance(v, ParserElement)] - - -# pre-PEP8 compatible names -delimitedList = delimited_list -countedArray = counted_array -matchPreviousLiteral = match_previous_literal -matchPreviousExpr = match_previous_expr -oneOf = one_of -dictOf = dict_of -originalTextFor = original_text_for -nestedExpr = nested_expr -makeHTMLTags = make_html_tags -makeXMLTags = make_xml_tags -anyOpenTag, anyCloseTag = any_open_tag, any_close_tag -commonHTMLEntity = common_html_entity -replaceHTMLEntity = replace_html_entity -opAssoc = OpAssoc -infixNotation = infix_notation -cStyleComment = c_style_comment -htmlComment = html_comment -restOfLine = rest_of_line -dblSlashComment = dbl_slash_comment -cppStyleComment = cpp_style_comment -javaStyleComment = java_style_comment -pythonStyleComment = python_style_comment diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/_export_format.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/_export_format.py deleted file mode 100644 index 998a9b0debaaff7f215d4e9d248975c3738a52ea..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/_export_format.py +++ /dev/null @@ -1,76 +0,0 @@ -CONSOLE_HTML_FORMAT = """\ - - - - - - - -
{code}
- - -""" - -CONSOLE_SVG_FORMAT = """\ - - - - - - - - - {lines} - - - {chrome} - - {backgrounds} - - {matrix} - - - -""" - -_SVG_FONT_FAMILY = "Rich Fira Code" -_SVG_CLASSES_PREFIX = "rich-svg" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn-0.23.2.dist-info/licenses/LICENSE.md b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn-0.23.2.dist-info/licenses/LICENSE.md deleted file mode 100644 index a6bba14552c7673a7db57a5750ddb06508264273..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn-0.23.2.dist-info/licenses/LICENSE.md +++ /dev/null @@ -1,27 +0,0 @@ -Copyright © 2017-present, [Encode OSS Ltd](https://www.encode.io/). -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Bosch Kts 650 Crack.md b/spaces/quidiaMuxgu/Expedit-SAM/Bosch Kts 650 Crack.md deleted file mode 100644 index dc2bac66c1e2ad36c60babd9ffd239c451e717b6..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Bosch Kts 650 Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

Bosch Kts 650 Crack


DOWNLOAD 🗸 https://geags.com/2uCrG9



- -Bosch Esi Kts 540 Tronic Crack Patch bosch esi tronic patch ... KTS 650, KTS 670, KTS 520 and KTS 550 discontinued testers can be ... 1fdad05405
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Buku Farmakope Indonesia Edisi 3.md b/spaces/quidiaMuxgu/Expedit-SAM/Buku Farmakope Indonesia Edisi 3.md deleted file mode 100644 index d37d3828e49c660fc91aadf876e4ad530852668a..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Buku Farmakope Indonesia Edisi 3.md +++ /dev/null @@ -1,13 +0,0 @@ -
-

Review Buku Farmakope Indonesia Edisi 3

-

Farmakope Indonesia adalah buku resmi yang berisi standar mutu obat-obatan yang berlaku di Indonesia. Buku ini diterbitkan oleh Kementerian Kesehatan Republik Indonesia dan merupakan acuan bagi industri farmasi, pemerintah, akademisi, praktisi kesehatan, dan masyarakat dalam hal pengembangan, produksi, pengawasan, dan penggunaan obat-obatan.

-

Buku Farmakope Indonesia Edisi 3 adalah edisi ketiga dari buku ini yang diterbitkan pada tahun 1979. Buku ini berisi 1031 halaman dan memiliki sampul keras. Buku ini memuat standar mutu obat-obatan yang terdiri dari monografi obat-obatan kimia sintetis, monografi obat-obatan alami, monografi obat-obatan biologi, dan monografi bahan tambahan farmasi. Buku ini juga memuat metode analisis obat-obatan, metode uji biologis, metode uji mikrobiologis, dan metode uji toksikologi.

-

buku farmakope indonesia edisi 3


Download Ziphttps://geags.com/2uCt0G



-

Buku Farmakope Indonesia Edisi 3 merupakan buku yang penting dan bermanfaat bagi para pelaku di bidang farmasi dan kesehatan di Indonesia. Buku ini dapat dijadikan sebagai sumber informasi dan referensi yang akurat dan terpercaya dalam hal standar mutu obat-obatan. Buku ini juga dapat membantu meningkatkan kualitas dan keamanan obat-obatan yang beredar di masyarakat.

-

Buku Farmakope Indonesia Edisi 3 dapat diperoleh dengan cara membelinya di toko buku online atau offline yang menjual buku-buku farmasi. Harga buku ini bervariasi tergantung dari penjual dan kondisi buku. Beberapa penjual online yang menyediakan buku ini antara lain adalah Ladang_Buku[^2^], Scribd[^3^], dan IDOC.PUB[^1^].

- -

Buku Farmakope Indonesia Edisi 3 merupakan hasil dari kerjasama antara Kementerian Kesehatan Republik Indonesia dengan berbagai lembaga dan organisasi yang terkait dengan bidang farmasi dan kesehatan di Indonesia. Buku ini disusun oleh tim redaksi yang terdiri dari para ahli dan pakar di bidangnya. Buku ini juga telah melalui proses evaluasi dan validasi oleh tim penilai yang independen.

-

Buku Farmakope Indonesia Edisi 3 mengikuti perkembangan ilmu pengetahuan dan teknologi di bidang farmasi dan kesehatan yang terjadi di dunia. Buku ini mencakup berbagai jenis obat-obatan yang baru ditemukan atau dikembangkan, serta obat-obatan yang sudah lama digunakan tetapi memiliki perubahan atau peningkatan dalam hal mutu, efikasi, atau keamanannya. Buku ini juga memperhatikan aspek etika, hukum, dan sosial dalam pengaturan standar mutu obat-obatan.

-

Buku Farmakope Indonesia Edisi 3 memiliki beberapa kelebihan dibandingkan dengan edisi-edisi sebelumnya. Buku ini memiliki tata letak dan desain yang lebih rapi dan menarik, serta menggunakan bahasa yang lebih mudah dipahami dan konsisten. Buku ini juga memiliki indeks yang lebih lengkap dan sistematis, serta menggunakan simbol-simbol kimia dan farmasi yang sesuai dengan standar internasional. Buku ini juga dilengkapi dengan tabel-tabel, gambar-gambar, grafik-grafik, dan rumus-rumus yang membantu memudahkan pemahaman dan penggunaan buku ini.

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Cafezee 4.2.1 Keygen.md b/spaces/raedeXanto/academic-chatgpt-beta/Cafezee 4.2.1 Keygen.md deleted file mode 100644 index 0e1215f5d2b74c37690b6b2a3749e9666ef8888d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Cafezee 4.2.1 Keygen.md +++ /dev/null @@ -1,30 +0,0 @@ - -Here is what I created: - -

Cafezee 4.2.1 Keygen: How to Crack the Software for Free

-

Cafezee is a popular program for managing internet cafes and cybercafes. It allows you to control and monitor the usage of computers, printers, scanners, and other devices in your cafe. You can also manage your customers, employees, inventory, billing, and reports with ease.

-

Cafezee 4.2.1 Keygen


Download File ---> https://tinourl.com/2uL4sO



-

However, Cafezee is not free software. You need to purchase a license to use it for more than 30 days. The license costs $249 for a single cafe or $499 for a multi-cafe license. That's quite expensive for some cafe owners who want to save money and maximize their profits.

-

That's why some people look for a Cafezee 4.2.1 keygen, which is a program that can generate a valid serial number or activation code for the software. By using a keygen, you can bypass the registration process and use Cafezee for free without any limitations.

-

But is it safe and legal to use a Cafezee 4.2.1 keygen? The answer is no. Here are some reasons why you should avoid using a keygen for Cafezee or any other software:

- -

Therefore, it is better to buy a genuine license for Cafezee from their official website or authorized resellers. You will get a valid code that will activate the software for lifetime. You will also get access to technical support, updates, and new features. You will also support the developer and respect their hard work and creativity.

-

So don't waste your time and risk your security by looking for a Cafezee 4.2.1 keygen. Buy Cafezee today and enjoy its benefits for your internet cafe business.

-

-Here is what I created: - -

If you are still not convinced that buying Cafezee is the best option for your internet cafe, here are some of the features and benefits that you will get from the software:

- -

As you can see, Cafezee is a comprehensive and powerful software that can help you run your internet cafe smoothly and efficiently. It is easy to install and use, and it comes with a user manual and online help. It also has a friendly and responsive customer support team that can assist you with any questions or problems.

7b8c122e87
-
-
\ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Celemony Melodyne 3.2.2.2 Keygen Why You Need This Amazing Tool for Your Music Production.md b/spaces/raedeXanto/academic-chatgpt-beta/Celemony Melodyne 3.2.2.2 Keygen Why You Need This Amazing Tool for Your Music Production.md deleted file mode 100644 index 30e3f10ee6a7e12cc0a98a904d78fdb147242e43..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Celemony Melodyne 3.2.2.2 Keygen Why You Need This Amazing Tool for Your Music Production.md +++ /dev/null @@ -1,87 +0,0 @@ -
-

Re-Loader Activator v12.8 FINAL (Windows Office Activator) keygen

-

If you are looking for a simple and effective way to activate your Windows and Office products for free, you might want to try Re-Loader Activator. Re-Loader Activator is a small but powerful tool that can activate all versions and editions of Windows and Office with just a few clicks. In this article, we will tell you everything you need to know about Re-Loader Activator, including its features, how to use it, and its pros and cons.

-

Re-Loader Activator v12.8 FINAL (Windows Office Activator) keygen


Download File » https://tinourl.com/2uL11K



-

Introduction

-

Re-Loader Activator is a software that can activate any Windows or Office product without requiring a license key or a product key. It works by injecting a code into the system files that bypasses the activation process and makes the system think that it is genuine and activated. This way, you can enjoy all the features and benefits of Windows and Office without paying anything.

-

There are many activators available on the internet, but Re-Loader Activator stands out for several reasons. First of all, it is free and offers lifetime activation, which means you don't have to worry about expiration dates or renewal fees. Second, it supports all versions and editions of Windows and Office, from Windows XP to Windows 10, from Office 2003 to Office 2016. Third, it works offline and online, so you don't need an internet connection to activate your products. Fourth, it has a fast and easy activation process that takes only a few seconds. Fifth, it is safe and secure from viruses and malware, as it is tested by many users and experts.

-

To download Re-Loader Activator from the official site, you need to follow these steps:

-

How to use Re-Loader Activator v12.8 FINAL for Windows and Office activation
-Re-Loader Activator v12.8 FINAL crack download free
-Re-Loader Activator v12.8 FINAL review and features
-Re-Loader Activator v12.8 FINAL serial number generator
-Re-Loader Activator v12.8 FINAL patch and license key
-Re-Loader Activator v12.8 FINAL latest version download
-Re-Loader Activator v12.8 FINAL compatibility and system requirements
-Re-Loader Activator v12.8 FINAL alternative and comparison
-Re-Loader Activator v12.8 FINAL troubleshooting and support
-Re-Loader Activator v12.8 FINAL pros and cons
-Re-Loader Activator v12.8 FINAL vs KMSpico vs Microsoft Toolkit
-Re-Loader Activator v12.8 FINAL safe and virus-free download
-Re-Loader Activator v12.8 FINAL installation and activation guide
-Re-Loader Activator v12.8 FINAL best price and discount offer
-Re-Loader Activator v12.8 FINAL testimonials and feedback
-Re-Loader Activator v12.8 FINAL for Windows 10/8/7 and Office 2019/2016/2013
-Re-Loader Activator v12.8 FINAL update and changelog
-Re-Loader Activator v12.8 FINAL official website and download link
-Re-Loader Activator v12.8 FINAL benefits and advantages
-Re-Loader Activator v12.8 FINAL drawbacks and limitations
-Re-Loader Activator v12.8 FINAL FAQ and tips
-Re-Loader Activator v12.8 FINAL video tutorial and demo
-Re-Loader Activator v12.8 FINAL online activation and verification
-Re-Loader Activator v12.8 FINAL offline activation and backup
-Re-Loader Activator v12.8 FINAL error and fix
-Re-Loader Activator v12.8 FINAL for Windows Server and Office 365
-Re-Loader Activator v12.8 FINAL for Mac OS and Linux
-Re-Loader Activator v12.8 FINAL for Android and iOS devices
-Re-Loader Activator v12.8 FINAL free trial and full version download
-Re-Loader Activator v12.8 FINAL lifetime activation and warranty

-
    -
  1. Go to https://www.thepiratecity.co/softwares/re-loader-activator/ or https://lekms.com/en/download-reloader-activator/
  2. -
  3. Click on the download button or link
  4. -
  5. Save the file on your PC
  6. -
  7. Extract the file using WinRAR or any other extraction tool
  8. -
  9. You will get a folder named Re-loader.by.r@1n
  10. -
-

Features of Re-Loader Activator

-

Re-Loader Activator has many features that make it a great tool for activating Windows and Office products. Here are some of them:

- -

How to use Re-Loader Activator

-

To use Re-Loader Activator to activate your Windows or Office products, you need to follow these steps:

-
  1. Install Re-Loader Activator on your PC: After downloading the Re-loader.by.r@1n folder from the official site, open it and double-click on the file named re-loaderbyr@1n.exe.
  2. Run re-loaderbyr@1n.exe as administrator: Right-click on the re-loaderbyr@1n.exe file and select Run as administrator. This will open a window with a green background.
  3. Select the products you want to activate: In the window that opens, you will see several tabs with different icons representing different products. Click on the tabs that correspond to the products you want to activate. For example, if you want to activate the Windows 10 Pro edition, click on the tab with a blue window icon that says Win10 Pro.
  4. Click on the activate button: After selecting the products you want to activate, click on the button that says Activate at the bottom right corner of the window.
  5. Wait for the confirmation message: After clicking on the activate button, you will see a progress bar that shows how much time is left until the activation is complete. When the activation is done, you will see a message that says Activation successful! at the top left corner of the window. You will also hear a sound that confirms that your products are activated.
-

Pros and cons of Re-Loader Activator

-

Re-Loader Activator has many advantages, but it also has some disadvantages. Here are some of them:

-
-

Drift Hunters Download Mac: How to Enjoy the Ultimate Drifting Game on Your Mac

-

If you are a fan of drifting games, you might have heard of Drift Hunters, one of the most popular and realistic drifting games online. But did you know that you can also enjoy this game on your Mac? In this article, we will show you how to download and play Drift Hunters on your Mac, as well as some tips and tricks to master the game.

-


-
-

What is Drift Hunters?

-

Drift Hunters is a free-to-play 3D drifting game with an excellent selection of tracks and cars. You can drift a variety of high-performance tuner cars on different exciting tracks, from racetracks to city streets. The game uses the Unity engine, which means a fully 3D world with realistic physics and a solid frame rate.

-

Drift Hunters also features detailed car tuning, which allows you to customize every aspect of your vehicle, from engine and turbo upgrades to brake balance and camber adjustments. You can also change the color and rims of your car to suit your style. You can earn points by drifting and use them to buy new cars or upgrade existing ones.

-

Drift Hunters has stunning graphics that make the game look amazing on any device. The game also has different modes, such as full screen, theatre, or regular, to fit your preference. You can also sign in to save your progress and access more features, such as cloud save, VIP club, leaderboard, and Discord server.

-

Drift Hunters is available on browser, iOS, and Android platforms. You can play it on any device that supports Unity WebGL technology, which includes most modern browsers. You can also download the game from the App Store or Google Play if you prefer to play it on your mobile device. However, in this article, we will focus on how to play Drift Hunters on your Mac.

-
-

Why Drift Hunters is the best drifting game for Mac users

-

There are many drifting games out there, but Drift Hunters stands out as the best one for Mac users. Here are some of the reasons why:

-

-
  • Drift Hunters is compatible with Mac browsers that support Unity WebGL technology. This means that you don't need to download or install anything to play the game. You just need to visit the official website of Drift Hunters or one of its alternatives, such as Crazy Games or Paco Games, and start playing right away.
  • Drift Hunters offers smooth and responsive gameplay on low-spec devices. The game is optimized for performance and runs well on most Macs, even older models. You can also adjust the graphics quality and resolution to suit your device and internet speed.
  • Drift Hunters has a large and active community of drift enthusiasts. You can join the VIP club to access exclusive cars and tracks, as well as chat with other players on the Discord server. You can also compete with other players on the leaderboard and see how you rank among the best drifters in the world.
-

Drift Hunters is a game that will keep you entertained and challenged for hours. Whether you are a beginner or a pro, you will find something to enjoy in this game.

-
-

How to download and play Drift Hunters on your Mac

-

Downloading and playing Drift Hunters on your Mac is very easy and simple. Just follow these steps:

-
  1. Visit the official website of Drift Hunters or one of its alternatives, such as Crazy Games or Paco Games. You can use any browser that supports Unity WebGL technology, such as Safari, Chrome, or Firefox.
  2. Choose your preferred mode: full screen, theatre, or regular. Full screen mode will fill your entire screen with the game, while theatre mode will leave some space for the browser toolbar and tabs. Regular mode will show the game in a smaller window.
  3. Sign in to save your progress and access more features. You can sign in with your email, Facebook, or Google account. Signing in will allow you to save your cars, tracks, and settings on the cloud, as well as join the VIP club and the Discord server.
  4. Select your car, track, and settings. You can choose from over 25 cars and 10 tracks in the game, each with different characteristics and challenges. You can also customize your car's appearance and performance by tuning it up. You can change the settings for sound, graphics, controls, camera, and units according to your preference.
  5. Start drifting and earning points to unlock more cars and upgrades. You can use the arrow keys or WASD keys to control your car, and the spacebar to use the handbrake. You can also use a controller if you have one connected to your Mac. The game will reward you with points based on how long and how well you drift. You can use these points to buy new cars or upgrade existing ones.
-

That's it! You are now ready to enjoy Drift Hunters on your Mac.

-
-

Tips and tricks to master Drift Hunters on your Mac

-

Drift Hunters is a game that requires skill and practice to master. Here are some tips and tricks that will help you improve your drifting skills on your Mac:

-
  • Use acceleration cautiously when approaching corners mid-drift. If you accelerate too much, you might lose control of your car and spin out. If you accelerate too little, you might lose momentum and end your drift prematurely. Try to find the right balance between speed and stability.
  • Drift from side to side on straight roads to keep the drift alive. If you only drift on corners, you might miss out on some points. To maximize your score, try to drift continuously by switching from left to right on straight roads. This will also help you maintain your speed and prepare for the next corner.
  • Tune up your vehicles to find the sweet spot for maximum drift. Different cars have different settings that affect their drifting performance. For example, some cars might need more power or less weight to drift better. You can adjust these settings by tuning up your car in the garage. Experiment with different combinations until you find the one that works best for your car and style.
  • Drive on maps with plenty of space for long, uninterrupted drifting. Some maps are more suitable for drifting than others. For example, the airport map has a lot of open space and long roads that allow you to drift for a long time without stopping. The mountain map, on the other hand, has many sharp turns and obstacles that might interrupt your drift. Choose the map that matches your skill level and preference.
  • Watch video tutorials and learn from other players on the leaderboard and Discord server. If you want to learn more about drifting techniques and strategies, you can watch some video tutorials on YouTube or other platforms. You can also check out the leaderboard and see how the top players drift. You can even join the Discord server and chat with other drift fans, ask for tips, or challenge them to a friendly competition.
-

Drift Hunters is a game that will challenge you to improve your drifting skills and have fun at the same time. With these tips and tricks, you will be able to master the game on your Mac in no time.

-
-

Conclusion

-

Drift Hunters is a fun, free-to-play game that will test your drifting skills on your Mac. It has a variety of cars, tracks, and features to suit your preferences and style. It is easy to download and play on your Mac browser with Unity WebGL support. It also has a helpful and friendly community of drift fans that you can join and interact with.

-

If you are looking for a game that will give you the thrill of drifting without leaving your Mac, Drift Hunters is the game for you. Download it today and start drifting like a pro!

-
-

FAQs

-

Here are some frequently asked questions about Drift Hunters download Mac:

-
  1. Is Drift Hunters safe to play on Mac?

    Yes, Drift Hunters is safe to play on Mac. The game does not require any downloads or installations, so it does not pose any risk to your device or data. The game also does not contain any viruses, malware, or spyware.

  2. How much does Drift Hunters cost to play on Mac?

    Drift Hunters is free to play on Mac. You do not need to pay anything to access the game or its features. However, if you want to support the developers and get some extra benefits, you can join the VIP club for a small fee.

  3. What are the minimum requirements to play Drift Hunters on Mac?

    The minimum requirements to play Drift Hunters on Mac are:

    • A Mac device with an Intel processor and at least 4 GB of RAM
    • A browser that supports Unity WebGL technology, such as Safari, Chrome, or Firefox
    • A stable internet connection with at least 5 Mbps speed
    • A keyboard or a controller to control your car

  4. Can I play Drift Hunters offline on Mac?

    No, you cannot play Drift Hunters offline on Mac. The game requires an internet connection to load and run properly. You also need an internet connection to save your progress and access the VIP club and Discord server.

  5. Can I play Drift Hunters with friends on Mac?

    Yes, you can play Drift Hunters with friends on Mac. You can invite your friends to join the VIP club and chat with them on the Discord server. You can also challenge them to a drift competition and see who can score higher on the leaderboard.

    -

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Royal Match on Your Android Device and Unlock Amazing Rewards.md b/spaces/1phancelerku/anime-remove-background/Enjoy Royal Match on Your Android Device and Unlock Amazing Rewards.md deleted file mode 100644 index f7141ad89d0ef014c9855d137ee1234836aafd29..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Royal Match on Your Android Device and Unlock Amazing Rewards.md +++ /dev/null @@ -1,157 +0,0 @@ -
-

Royal Match: A Fun and Challenging Match-3 Puzzle Game

-

Introduction

-

If you are looking for a new and exciting match-3 puzzle game to play on your mobile device, you might want to check out Royal Match. This game is developed by Dream Games, a Turkish studio that has a lot of experience in creating addictive and engaging puzzle games. In this game, you will help King Robert to rebuild his castle by solving match-3 puzzles, collecting coins, unlocking boosters, and decorating the rooms. You will also compete with millions of players in various events and climb the leaderboards. Royal Match is a game that combines puzzle-solving and castle-building elements, making it a unique and enjoyable experience for all kinds of players.

-

What is Royal Match?

-


Royal Match is a free-to-play match-3 puzzle game that is available for both Android and iOS devices. You can download it from Google Play or the App Store. The game was released in February 2021 and has quickly become one of the top-grossing puzzle games on both platforms. It has also received positive reviews from critics and users alike, praising its graphics, gameplay, features, and content.

-


-

How to play Royal Match?

-

The gameplay of Royal Match is simple and intuitive. You just need to swipe your finger on the screen to match three or more tiles of the same color. By doing so, you will clear them from the board and complete the level's objective. Each level has a different objective, such as collecting a certain number of tiles, breaking boxes, clearing grass, or saving the king. You will also encounter various obstacles on the board, such as birds, boxes, potions, cupboards, diamonds, magic hats, coin safes, mysterious mailboxes, and piggy banks. You will need to clear them or use boosters to overcome them.

-

You have a limited number of moves for each level, so you need to plan your moves carefully and use them wisely. If you run out of moves before completing the objective, you will lose a life and have to try again. You can also buy extra moves with coins if you are close to finishing the level. Coins are the main currency of the game, which you can earn by completing levels, bonus levels, events, or opening chests. You can also buy coins with real money if you want to.

-

By completing levels, you will earn stars, which are needed to perform castle-decorating tasks. You will help King Robert to restore his castle by choosing from different options for each task. For example, you can choose the color of the walls, the type of furniture, or the style of the garden. Each task costs a certain number of stars, which vary depending on the difficulty of the level. You can also undo your choices if you change your mind later.

-

Features of Royal Match

-

Unique match-3 gameplay and fun levels

-

Royal Match offers unique match-3 gameplay that is different from other games in the genre. It has fun and challenging levels that will test your skills and strategy. You will encounter various types of tiles, such as diamonds, rockets, TNTs, propellers, light balls, jester hats, cannons, keys, locks, chests, crowns, hearts, stars, coins, and more. Each tile has a different effect when matched or activated.

-

For example, matching four tiles in a row or column will create a rocket that can clear an entire row or column when matched or tapped. Matching four tiles in a square will create a TNT that can explode and clear a 3x3 area when matched or tapped. Matching five tiles in a row or column will create a propeller that can clear all the tiles of the same color when matched or tapped. Matching five tiles in an L or T shape will create a light ball that can clear all the tiles in a cross shape when matched or tapped. Matching six tiles in a row or column will create a jester hat that can clear all the tiles on the board when matched or tapped.

-

The game has over 3000 levels to play, each with a different layout, objective, and difficulty. You will never get bored with the variety and challenge of the levels. You will also unlock new areas of the castle as you progress, such as the garden, the library, the kitchen, the bedroom, and more. Each area has its own theme and style, which you can customize according to your preference.

-

-

Powerful boosters and special treasures

-

Royal Match also offers powerful boosters and special treasures that can help you complete the levels faster and easier. You can use these items before or during the level to get an advantage. Some of the boosters and treasures are:

-
  • Hammer: This booster can break any tile or obstacle on the board. You can use it before or during the level.
  • Glove: This booster can swap any two adjacent tiles on the board. You can use it before or during the level.
  • Shuffle: This booster can shuffle all the tiles on the board. You can use it before or during the level.
  • Rocket: This treasure can clear an entire row or column on the board. You can use it during the level by tapping on it.
  • TNT: This treasure can explode and clear a 3x3 area on the board. You can use it during the level by tapping on it.
  • Propeller: This treasure can clear all the tiles of the same color on the board. You can use it during the level by tapping on it.
  • Light Ball: This treasure can clear all the tiles in a cross shape on the board. You can use it during the level by tapping on it.
  • Jester Hat: This treasure can clear all the tiles on the board. You can use it during the level by tapping on it.
-

You can earn these boosters and treasures by completing levels, bonus levels, events, or opening chests. You can also buy them with coins or real money if you want to.

-

Castle decoration and restoration

-

Royal Match is not only a puzzle game, but also a castle-building game. You will help King Robert to restore his castle by completing tasks with stars. You will earn stars by completing levels, bonus levels, events, or opening chests. Each task costs a certain number of stars, which vary depending on the difficulty of the level.

-

You will have different options for each task, such as the color of the walls, the type of furniture, or the style of the garden. You can choose the option that suits your taste and personality. You can also undo your choices if you change your mind later.

-

By decorating and restoring the castle, you will unlock new stories and characters. You will meet King Robert's friends and foes, such as Princess Alice, Prince Arthur, Duke Henry, Lady Violet, and more. You will also discover the secrets and mysteries of the castle, such as the hidden treasure, the ghost, the curse, and more. You will enjoy the fun and humorous dialogues and interactions between the characters.

-

Events and leaderboards

-

Royal Match also offers various events and leaderboards that can make the game more exciting and rewarding. You can participate in these events and leaderboards by completing levels, bonus levels, events, or opening chests. Some of the events and leaderboards are:

-
  • King's Challenge: This event is a special bonus level that appears every few hours. You can play it to earn extra coins, stars, boosters, or treasures. The level is randomly generated and has a different objective and difficulty each time.
  • King's Tournament: This event is a weekly competition where you can compete with other players in a series of levels. You can earn points by completing levels, bonus levels, events, or opening chests. The more points you earn, the higher you rank on the leaderboard. You can win amazing prizes based on your rank at the end of the week.
  • King's Club: This event is a monthly subscription that gives you access to exclusive benefits and rewards. You can get unlimited lives, extra moves, free boosters, special chests, and more. You can also cancel your subscription at any time.
  • King's Guild: This feature allows you to join or create a guild with other players. You can chat with your guild members, share tips and tricks, request or send lives, and more. You can also participate in guild events and challenges to earn guild points and rewards.
-

Tips and tricks for Royal Match

-

Pay attention to the objective and the hints

-

One of the most important tips for Royal Match is to pay attention to the objective and the hints of each level. The objective tells you what you need to do to complete the level, such as collecting a certain number of tiles, breaking boxes, clearing grass, or saving the king. The hints show you which tiles or obstacles you need to focus on or clear first. They also show you which boosters or treasures you can use to help you.

-

You can see the objective and the hints at the top of the screen during the level. You can also tap on them to get more information or reminders. By following the objective and the hints, you can save your moves and time and complete the level faster and easier.

-

Save resources for hard levels and use them wisely

-

Another tip for Royal Match is to save your resources for hard levels and use them wisely. Your resources include your coins, stars, lives, moves, boosters, and treasures. You can earn these resources by completing levels, bonus levels, events, or opening chests. You can also buy them with real money if you want to.

-

However, you should not spend your resources recklessly or unnecessarily. You should save them for hard levels that are more difficult or require more moves to complete. You should also use them wisely and strategically, such as using boosters or treasures at the right time and place, buying extra moves only when you are close to finishing the level, or choosing the best option for each task.

-

By saving and using your resources wisely, you can avoid getting stuck or frustrated on hard levels and enjoy the game more.

-

Mix and match boosters to get amazing results

-

A third tip for Royal Match is to mix and match boosters to get amazing results. Boosters are special tiles that you can create by matching four or more tiles of the same color. They have different effects when matched or tapped, such as clearing rows, columns, areas, or colors. You can also mix and match boosters to create even more powerful effects, such as:

-
  • Rocket + Rocket: This combination will clear two rows or columns in a cross shape.
  • Rocket + TNT: This combination will clear three rows or columns in a cross shape.
  • Rocket + Propeller: This combination will clear all the tiles of the same color as the rocket.
  • Rocket + Light Ball: This combination will clear four rows or columns in a cross shape.
  • Rocket + Jester Hat: This combination will clear all the tiles on the board.
  • TNT + TNT: This combination will explode and clear a 5x5 area on the board.
  • TNT + Propeller: This combination will explode and clear all the tiles of the same color as the TNT.
  • TNT + Light Ball: This combination will explode and clear all the tiles in a cross shape on the board.
  • TNT + Jester Hat: This combination will explode and clear all the tiles on the board.
  • Propeller + Propeller: This combination will clear two colors of tiles on the board.
  • Propeller + Light Ball: This combination will clear three colors of tiles on the board.
  • Propeller + Jester Hat: This combination will clear all the tiles on the board.
  • Light Ball + Light Ball: This combination will clear four colors of tiles on the board.
  • Light Ball + Jester Hat: This combination will clear all the tiles on the board.
-

You can also mix and match boosters with treasures to get even more amazing results. For example, you can match a rocket with a rocket treasure to clear three rows or columns in a cross shape, or match a propeller with a propeller treasure to clear four colors of tiles on the board. You can experiment with different combinations and see what happens.

-

By mixing and matching boosters and treasures, you can clear more tiles and obstacles, complete the objective faster and easier, and earn more points and rewards.

-

Clear obstacles and collect coins as soon as possible

-

A fourth tip for Royal Match is to clear obstacles and collect coins as soon as possible. Obstacles are items that block your way or prevent you from matching tiles. They include birds, boxes, potions, cupboards, diamonds, magic hats, coin safes, mysterious mailboxes, piggy banks, and more. You need to clear them by matching tiles next to them, using boosters or treasures, or completing special tasks. Coins are items that you can collect by matching tiles next to them, using boosters or treasures, or opening chests. They are the main currency of the game, which you can use to buy extra moves, boosters, treasures, or other items.

-

You should try to clear obstacles and collect coins as soon as possible because they can help you in many ways. For example, clearing obstacles can give you more space and options to match tiles, activate boosters or treasures, or complete the objective. Collecting coins can give you more resources to buy extra moves, boosters, treasures, or other items. You can also use coins to decorate and restore the castle.

-

By clearing obstacles and collecting coins as soon as possible, you can make the game easier and more fun.

-

Join a guild and get extra benefits

-

A fifth tip for Royal Match is to join a guild and get extra benefits. A guild is a group of players who can chat, share, and cooperate with each other. You can join or create a guild by tapping on the guild icon at the bottom of the screen. You can also invite your friends to join your guild or search for other guilds to join.

-

By joining a guild, you can get extra benefits such as:

-
  • Lives: You can request or send lives to your guild members. Lives are needed to play levels, and they regenerate over time or can be bought with coins or real money. By requesting or sending lives, you can help yourself or your guild members to play more levels and have more fun.
  • Tips and tricks: You can chat with your guild members and share tips and tricks for the game. You can ask for advice, give suggestions, or exchange opinions about the game. By chatting with your guild members, you can learn more about the game and improve your skills and strategy.
  • Events and challenges: You can participate in guild events and challenges to earn guild points and rewards. Guild events and challenges are special tasks that you can complete with your guild members, such as collecting a certain number of tiles, clearing a certain number of levels, or reaching a certain score. By participating in guild events and challenges, you can compete with other guilds, earn more coins, stars, boosters, treasures, or other items, and have more fun.
-

By joining a guild and getting extra benefits, you can make the game more social and rewarding.

-

Conclusion

-

Royal Match is a fun and challenging match-3 puzzle game that you can play on your mobile device. You can download it from Google Play or the App Store . In this game, you will help King Robert to rebuild his castle by solving match-3 puzzles, collecting coins, unlocking boosters, and decorating the rooms. You will also compete with millions of players in various events and climb the leaderboards. Royal Match is a game that combines puzzle-solving and castle-building elements, making it a unique and enjoyable experience for all kinds of players.

-

If you are looking for some tips and tricks for Royal Match, you can follow these suggestions:

-
  • Pay attention to the objective and the hints
  • Save resources for hard levels and use them wisely
  • Mix and match boosters to get amazing results
  • Clear obstacles and collect coins as soon as possible
  • Join a guild and get extra benefits
-

By following these tips and tricks, you can master the game and have more fun.

-

FAQs

-

Here are some frequently asked questions about Royal Match:

-
  1. How do I download Royal Match?

    You can download Royal Match from Google Play or the App Store. The game is free to play but offers in-app purchases.

  2. How do I update Royal Match?

    You can update Royal Match by going to Google Play or the App Store and tapping on the update button. The game updates regularly with new levels, features, events, and bug fixes.

  3. How do I contact Royal Match support?

    You can contact Royal Match support by tapping on the settings icon at the top right corner of the screen and then tapping on the support button. You can also email them at support@dreamgames.com or visit their website at https://www.dreamgames.com/royalmatch/.

  4. How do I connect Royal Match to Facebook?

    You can connect Royal Match to Facebook by tapping on the settings icon at the top right corner of the screen and then tapping on the connect button. By connecting to Facebook, you can save your progress across different devices, invite your friends to play, or join a guild.

  5. How do I reset Royal Match?

    You can reset Royal Match by tapping on the settings icon at the top right corner of the screen and then tapping on the reset button. By resetting the game, you will lose all your progress, coins, stars, boosters, treasures, and other items. You will also disconnect from Facebook and your guild. You should only reset the game if you want to start over from the beginning.

    -

    I hope you enjoyed this article and found it helpful. If you have any questions or feedback, please leave a comment below. Thank you for reading and happy playing!

    -
    -
    \ No newline at end of file diff --git a/spaces/AGITM/ToneCorrectionRecognition/app.py b/spaces/AGITM/ToneCorrectionRecognition/app.py deleted file mode 100644 index 7615ea1bd81a9289364d4818c87ec51bb55de0e1..0000000000000000000000000000000000000000 --- a/spaces/AGITM/ToneCorrectionRecognition/app.py +++ /dev/null @@ -1,166 +0,0 @@ -import gradio as gr -from spleeter.separator import Separator -from spleeter.audio.adapter import AudioAdapter -import spleeter.utils.logging as logging -import parselmouth -import numpy as np -import matplotlib.pyplot as plt -import seaborn as sns -import time -from matplotlib import rcParams -from you_get import common - - -#主程序 -def main(audio,bg_time,ed_time): - #分离音频 - vocals=spleeter(audio,bg_time,ed_time) - #音高标记 - plt=pitch_mark(vocals) - #返回音高标记图像 - return [plt,vocals] - -#时间检查 -def time_check(bg_time,ed_time): - #当两者都为整数且ed_time>bg_time时返回True - if bg_time.isdigit() and ed_time.isdigit(): - if int(ed_time)>int(bg_time): - return True - return False - -#音频分离 -def spleeter(audio,bg_time,ed_time): - #分离音频并保存 - separator = Separator('spleeter:2stems') - if time_check(bg_time,ed_time): - waveform=AudioAdapter.default().load_tf_waveform(audio,offset=int(bg_time), duration=int(ed_time)-int(bg_time))['waveform'] - else: - waveform=AudioAdapter.default().load_tf_waveform(audio)['waveform'] - vocals = separator.separate(waveform)['vocals'] - #返回Tuple,格式[sample_rate, numpy array] - return (44100,vocals) - -#音高标记 -#计算标准音高和频率 -def frequency(pitch): - return 16.35 * 2 ** (pitch /12 ) -def generate_array(min_pitch, max_pitch): - array = [] - names = ["C", "C#", "D", "D#", - "E", "F", "F#", - 'G', 'G#', 'A', 'A#', 'B'] - - for pitch in range(120): - freq = frequency(pitch) - name = names[pitch % 12] + str(pitch // 12) - if frequency(pitch+1) > min_pitch and frequency(pitch-1) < max_pitch: - array.append([name, freq]) - return array - -def pitch_mark(wav): - config = { - "font.family":'serif', - "font.size": 8 - } - sns.set() - rcParams.update(config) - - wav = wav[1][:, 0] - snd = parselmouth.Sound(wav) - pitch = snd.to_pitch(pitch_floor=50, pitch_ceiling=3000) - plt.figure(figsize=(15,8),dpi=144) - - pitch_values = pitch.selected_array['frequency'] - #异常值修改为0 - pitch_values[pitch_values>np.nanpercentile(pitch_values, 99)] = np.nan - pitch_values[pitch_values音准测试 -

    🛠一个音准测量工具

    -
    -
    -

    📒使用说明

    -
      -
    1. 在下方上传音频/视频文件,或使用在线视频链接,输入需要分析的音频
    2. -
    3. 点击“音准测试”按钮,生成音高图,右侧同时输出提取人声后的音频
    4. -
    5. 输入开始和结束时间可以截取部分音频分析
    6. -
    -
    -
    -

    📝注意:10s音频分析时间不会超过30s,如卡住不动或出现error请尝试刷新

    -
    -
    - """) - with gr.Row(): - with gr.Column(): - with gr.Tabs(): - with gr.Tab("音频文件"): - audio = gr.Audio(type="filepath", label="音频文件") - btn_a=gr.Button("🎵音准测试") - with gr.Tab("视频文件"): - video = gr.Video(type="filepath", label="视频文件") - btn_b=gr.Button("🎦音准测试") - - with gr.Column(): - with gr.Box(elem_id="box"): - gr.HTML("""

    📅开始时间和结束时间,直接输入数字,单位为s,不填默认为全长 建议长度为10秒

    """) - with gr.Row(): - bg_time = gr.Textbox(type="text",label="开始时间") - ed_time = gr.Textbox(type="text",label="结束时间") - with gr.Box(): - audio_output = gr.Audio(type="numpy", label="提取结果") - output_img=gr.Image(type="filepath", label="音准匹配图") - btn_a.click(main, [audio,bg_time,ed_time], [output_img,audio_output]) - btn_b.click(main, [video,bg_time,ed_time], [output_img,audio_output]) - gr.HTML(""" -
    -

    😃Credit:人声提取:Spleeter 音高标注:Parselmouth - 网络视频获取:You-get 方法来源:码农高天 -

    -
    visitor badge
    -
    """) - -app.launch() \ No newline at end of file diff --git a/spaces/AHzizi/WaifuVoiceGen/text/symbols.py b/spaces/AHzizi/WaifuVoiceGen/text/symbols.py deleted file mode 100644 index edfbd24247be8c757275ce80b9ec27a0ffa808f3..0000000000000000000000000000000000000000 --- a/spaces/AHzizi/WaifuVoiceGen/text/symbols.py +++ /dev/null @@ -1,39 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -'''# japanese_cleaners -_pad = '_' -_punctuation = ',.!?-' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ ' -''' - -'''# japanese_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ ' -''' - -'''# korean_cleaners -_pad = '_' -_punctuation = ',.!?…~' -_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ ' -''' - -'''# chinese_cleaners -_pad = '_' -_punctuation = ',。!?—…' -_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ ' -''' - -# zh_ja_mixture_cleaners -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") \ No newline at end of file diff --git a/spaces/AIConsultant/MusicGen/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py b/spaces/AIConsultant/MusicGen/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py deleted file mode 100644 index 12f6d402a3c4a113d4c37be062790fa435b72104..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/grids/audiogen/audiogen_pretrained_16khz_eval.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Evaluation with objective metrics for the pretrained AudioGen models. -This grid takes signature from the training grid and runs evaluation-only stage. - -When running the grid for the first time, please use: -REGEN=1 dora grid audiogen.audiogen_pretrained_16khz_eval -and re-use the REGEN=1 option when the grid is changed to force regenerating it. - -Note that you need the proper metrics external libraries setup to use all -the objective metrics activated in this grid. Refer to the README for more information. -""" - -import os - -from ..musicgen._explorers import GenerationEvalExplorer -from ...environment import AudioCraftEnvironment -from ... 
import train - - -def eval(launcher, batch_size: int = 32): - opts = { - 'dset': 'audio/audiocaps_16khz', - 'solver/audiogen/evaluation': 'objective_eval', - 'execute_only': 'evaluate', - '+dataset.evaluate.batch_size': batch_size, - '+metrics.fad.tf.batch_size': 32, - } - # binary for FAD computation: replace this path with your own path - metrics_opts = { - 'metrics.fad.tf.bin': '/data/home/jadecopet/local/usr/opt/google-research' - } - opt1 = {'generate.lm.use_sampling': True, 'generate.lm.top_k': 250, 'generate.lm.top_p': 0.} - opt2 = {'transformer_lm.two_step_cfg': True} - - sub = launcher.bind(opts) - sub.bind_(metrics_opts) - - # base objective metrics - sub(opt1, opt2) - - -@GenerationEvalExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=4, partition=partitions) - - if 'REGEN' not in os.environ: - folder = train.main.dora.dir / 'grids' / __name__.split('.', 2)[-1] - with launcher.job_array(): - for sig in folder.iterdir(): - if not sig.is_symlink(): - continue - xp = train.main.get_xp_from_sig(sig.name) - launcher(xp.argv) - return - - audiogen_base = launcher.bind(solver="audiogen/audiogen_base_16khz") - audiogen_base.bind_({'autocast': False, 'fsdp.use': True}) - - audiogen_base_medium = audiogen_base.bind({'continue_from': '//pretrained/facebook/audiogen-medium'}) - audiogen_base_medium.bind_({'model/lm/model_scale': 'medium'}) - eval(audiogen_base_medium, batch_size=128) diff --git a/spaces/AIConsultant/MusicGen/audiocraft/grids/compression/encodec_musicgen_32khz.py b/spaces/AIConsultant/MusicGen/audiocraft/grids/compression/encodec_musicgen_32khz.py deleted file mode 100644 index 9da31daa5f009f46e753601a51a06391594b8f9b..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/grids/compression/encodec_musicgen_32khz.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Grid search file, simply list all the exp you want in `explorer`. -Any new exp added there will be scheduled. -You can cancel and experiment by commenting its line. - -This grid shows how to train a MusicGen EnCodec model at 32 kHz. -""" - -from ._explorers import CompressionExplorer -from ...environment import AudioCraftEnvironment - - -@CompressionExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=8, partition=partitions) - # use configuration for MusicGen's EnCodec model trained on monophonic audio sampled at 32 kHz - # MusicGen's EnCodec is trained with a total stride of 640 leading to a frame rate of 50 hz - launcher.bind_(solver='compression/encodec_musicgen_32khz') - # replace this by the desired music dataset - launcher.bind_(dset='internal/music_400k_32khz') - # launch xp - launcher() - launcher({ - 'metrics.visqol.bin': '/data/home/jadecopet/local/usr/opt/visqol', - 'label': 'visqol', - 'evaluate.metrics.visqol': True - }) diff --git a/spaces/AIFILMS/StyleGANEX/models/bisenet/README.md b/spaces/AIFILMS/StyleGANEX/models/bisenet/README.md deleted file mode 100644 index 849d55e2789c8852e01707d1ff755dc74e63a7f5..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/bisenet/README.md +++ /dev/null @@ -1,68 +0,0 @@ -# face-parsing.PyTorch - -

    - - - -

    - -### Contents -- [Training](#training) -- [Demo](#Demo) -- [References](#references) - -## Training - -1. Prepare training data: - -- download [CelebAMask-HQ dataset](https://github.com/switchablenorms/CelebAMask-HQ) - - -- change file path in the `prepropess_data.py` and run -```Shell -python prepropess_data.py -``` - -2. Train the model using CelebAMask-HQ dataset: -Just run the train script: -``` - $ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py -``` - -If you do not wish to train the model, you can download [our pre-trained model](https://drive.google.com/open?id=154JgKpzCPW82qINcVieuPH3fZ2e0P812) and save it in `res/cp`. - - -## Demo -1. Evaluate the trained model using: -```Shell -# evaluate using GPU -python test.py -``` - -## Face makeup using parsing maps -[**face-makeup.PyTorch**](https://github.com/zllrunning/face-makeup.PyTorch) - - - - - - - - - - - - - - - - - - - - - - -
[Demo table removed: Hair and Lip makeup results, with rows for Original Input and Color images.]
    - - -## References -- [BiSeNet](https://github.com/CoinCheung/BiSeNet) \ No newline at end of file diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/__init__.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/audio/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/synta.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/synta.py deleted file mode 100644 index 62cf865bc7d17c62b2c0c71f21d5b0ab596ba312..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/synta.py +++ /dev/null @@ -1,25 +0,0 @@ -import os -import torch -import torch.nn.functional as F -from torch import nn - -from modules.tts.syntaspeech.syntaspeech import SyntaSpeech -from tasks.tts.ps_adv import PortaSpeechAdvTask -from utils.hparams import hparams - - -class SyntaSpeechTask(PortaSpeechAdvTask): - def build_tts_model(self): - ph_dict_size = len(self.token_encoder) - word_dict_size = len(self.word_encoder) - self.model = SyntaSpeech(ph_dict_size, word_dict_size, hparams) - - self.gen_params = [p for p in self.model.parameters() if p.requires_grad] - self.dp_params = [p for k, p in self.model.named_parameters() if (('dur_predictor' in k) and p.requires_grad)] - self.gen_params_except_dp = [p for k, p in self.model.named_parameters() if (('dur_predictor' not in k) and p.requires_grad)] - self.bert_params = [p for k, p in self.model.named_parameters() if (('bert' in k) and p.requires_grad)] - self.gen_params_except_bert_and_dp = [p for k, p in self.model.named_parameters() if ('dur_predictor' not in k) and ('bert' not in k) and p.requires_grad ] - - self.use_bert = True if len(self.bert_params) > 0 else False - - \ No newline at end of file diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/train_vggishish.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/train_vggishish.py deleted file mode 100644 index 205668224ec87a9ce571f6428531080231b1c16b..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/train_vggishish.py +++ /dev/null @@ -1,199 +0,0 @@ -from loss import WeightedCrossEntropy -import random - -import numpy as np -import torch -import torchvision -from omegaconf import OmegaConf -from torch.utils.data.dataloader import DataLoader -from tqdm import tqdm - -from dataset import VGGSound -from transforms import Crop, StandardNormalizeAudio, ToTensor -from logger import LoggerWithTBoard -from metrics import metrics -from model import VGGishish - -if __name__ == "__main__": - cfg_cli = OmegaConf.from_cli() - cfg_yml = OmegaConf.load(cfg_cli.config) - # the latter arguments are prioritized - cfg = OmegaConf.merge(cfg_yml, cfg_cli) - OmegaConf.set_readonly(cfg, True) - print(OmegaConf.to_yaml(cfg)) - - logger = LoggerWithTBoard(cfg) - - random.seed(cfg.seed) - np.random.seed(cfg.seed) - torch.manual_seed(cfg.seed) - torch.cuda.manual_seed_all(cfg.seed) - # makes iterations faster (in this case 30%) if your inputs are of a fixed size - # https://discuss.pytorch.org/t/what-does-torch-backends-cudnn-benchmark-do/5936/3 - torch.backends.cudnn.benchmark = True - - transforms = [ - StandardNormalizeAudio(cfg.mels_path), - ] - if cfg.cropped_size not in [None, 'None', 'none']: - logger.print_logger.info(f'Using cropping {cfg.cropped_size}') - transforms.append(Crop(cfg.cropped_size)) - 
transforms.append(ToTensor()) - transforms = torchvision.transforms.transforms.Compose(transforms) - - datasets = { - 'train': VGGSound('train', cfg.mels_path, transforms), - 'valid': VGGSound('valid', cfg.mels_path, transforms), - 'test': VGGSound('test', cfg.mels_path, transforms), - } - - loaders = { - 'train': DataLoader(datasets['train'], batch_size=cfg.batch_size, shuffle=True, drop_last=True, - num_workers=cfg.num_workers, pin_memory=True), - 'valid': DataLoader(datasets['valid'], batch_size=cfg.batch_size, - num_workers=cfg.num_workers, pin_memory=True), - 'test': DataLoader(datasets['test'], batch_size=cfg.batch_size, - num_workers=cfg.num_workers, pin_memory=True), - } - - device = torch.device(cfg.device if torch.cuda.is_available() else 'cpu') - - model = VGGishish(cfg.conv_layers, cfg.use_bn, num_classes=len(datasets['train'].target2label)) - model = model.to(device) - param_num = logger.log_param_num(model) - - if cfg.optimizer == 'adam': - optimizer = torch.optim.Adam( - model.parameters(), lr=cfg.learning_rate, betas=cfg.betas, weight_decay=cfg.weight_decay) - elif cfg.optimizer == 'sgd': - optimizer = torch.optim.SGD( - model.parameters(), lr=cfg.learning_rate, momentum=cfg.momentum, weight_decay=cfg.weight_decay) - else: - raise NotImplementedError - - if cfg.cls_weights_in_loss: - weights = 1 / datasets['train'].class_counts - else: - weights = torch.ones(len(datasets['train'].target2label)) - criterion = WeightedCrossEntropy(weights.to(device)) - - # loop over the train and validation multiple times (typical PT boilerplate) - no_change_epochs = 0 - best_valid_loss = float('inf') - early_stop_triggered = False - - for epoch in range(cfg.num_epochs): - - for phase in ['train', 'valid']: - if phase == 'train': - model.train() - else: - model.eval() - - running_loss = 0 - preds_from_each_batch = [] - targets_from_each_batch = [] - - prog_bar = tqdm(loaders[phase], f'{phase} ({epoch})', ncols=0) - for i, batch in enumerate(prog_bar): - inputs = batch['input'].to(device) - targets = batch['target'].to(device) - - # zero the parameter gradients - optimizer.zero_grad() - - # forward + backward + optimize - with torch.set_grad_enabled(phase == 'train'): - outputs = model(inputs) - loss = criterion(outputs, targets, to_weight=phase == 'train') - - if phase == 'train': - loss.backward() - optimizer.step() - - # loss - running_loss += loss.item() - - # for metrics calculation later on - preds_from_each_batch += [outputs.detach().cpu()] - targets_from_each_batch += [targets.cpu()] - - # iter logging - if i % 50 == 0: - logger.log_iter_loss(loss.item(), epoch*len(loaders[phase])+i, phase) - # tracks loss in the tqdm progress bar - prog_bar.set_postfix(loss=loss.item()) - - # logging loss - epoch_loss = running_loss / len(loaders[phase]) - logger.log_epoch_loss(epoch_loss, epoch, phase) - - # logging metrics - preds_from_each_batch = torch.cat(preds_from_each_batch) - targets_from_each_batch = torch.cat(targets_from_each_batch) - metrics_dict = metrics(targets_from_each_batch, preds_from_each_batch) - logger.log_epoch_metrics(metrics_dict, epoch, phase) - - # Early stopping - if phase == 'valid': - if epoch_loss < best_valid_loss: - no_change_epochs = 0 - best_valid_loss = epoch_loss - logger.log_best_model(model, epoch_loss, epoch, optimizer, metrics_dict) - else: - no_change_epochs += 1 - logger.print_logger.info( - f'Valid loss hasnt changed for {no_change_epochs} patience: {cfg.patience}' - ) - if no_change_epochs >= cfg.patience: - early_stop_triggered = True - - if 
early_stop_triggered: - logger.print_logger.info(f'Training is early stopped @ {epoch}') - break - - logger.print_logger.info('Finished Training') - - # loading the best model - ckpt = torch.load(logger.best_model_path) - model.load_state_dict(ckpt['model']) - logger.print_logger.info(f'Loading the best model from {logger.best_model_path}') - logger.print_logger.info((f'The model was trained for {ckpt["epoch"]} epochs. Loss: {ckpt["loss"]:.4f}')) - - # Testing the model - model.eval() - running_loss = 0 - preds_from_each_batch = [] - targets_from_each_batch = [] - - for i, batch in enumerate(loaders['test']): - inputs = batch['input'].to(device) - targets = batch['target'].to(device) - - # zero the parameter gradients - optimizer.zero_grad() - - # forward + backward + optimize - with torch.set_grad_enabled(False): - outputs = model(inputs) - loss = criterion(outputs, targets, to_weight=False) - - # loss - running_loss += loss.item() - - # for metrics calculation later on - preds_from_each_batch += [outputs.detach().cpu()] - targets_from_each_batch += [targets.cpu()] - - # logging metrics - preds_from_each_batch = torch.cat(preds_from_each_batch) - targets_from_each_batch = torch.cat(targets_from_each_batch) - test_metrics_dict = metrics(targets_from_each_batch, preds_from_each_batch) - test_metrics_dict['avg_loss'] = running_loss / len(loaders['test']) - test_metrics_dict['param_num'] = param_num - # TODO: I have no idea why tboard doesn't keep metrics (hparams) when - # I run this experiment from cli: `python train_vggishish.py config=./configs/vggish.yaml` - # while when I run it in vscode debugger the metrics are logger (wtf) - logger.log_test_metrics(test_metrics_dict, dict(cfg), ckpt['epoch']) - - logger.print_logger.info('Finished the experiment') diff --git a/spaces/AIWaves/Software_Company/src/agents/Agent/Agent.py b/spaces/AIWaves/Software_Company/src/agents/Agent/Agent.py deleted file mode 100644 index e7f6ecc72682e8aeb74d9f933e6aa721656d350a..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Software_Company/src/agents/Agent/Agent.py +++ /dev/null @@ -1,243 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The AIWaves Inc. team. - -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""LLM autonoumous agent""" -from LLM.base_LLM import * -from Component import * -from Action import Action -from Prompt import * - -headers = { - "Content-Type": "text/event-stream", - "Cache-Control": "no-cache", - "X-Accel-Buffering": "no", -} - - - - -class Agent: - """ - Auto agent, input the JSON of SOP. 
- """ - - # Agent should have args: agents,states - def __init__(self, name, agent_state_roles, **kwargs) -> None: - self.state_roles = agent_state_roles - self.name = name - - self.style = kwargs["style"] - self.LLMs = kwargs["LLMs"] - self.LLM = None - self.is_user = kwargs["is_user"] - self.begins = kwargs["begins"] if "begins" in kwargs else False - self.current_role = "" - self.long_term_memory = [] - self.short_term_memory = "" - self.current_state = None - self.first_speak = True - self.environment = None - - - @classmethod - def from_config(cls, config_path): - """ - Initialize agents based on json file - Return: - agents(dict) : key:agent_name;value:class(Agent) - names_to_roles(dict) : key:state_name value:(dict; (key:agent_name ; value:agent_role)) - roles_to_names(dict) : key:state_name value:(dict; (key:agent_role ; value:agent_name)) - """ - with open(config_path) as f: - config = json.load(f) - - roles_to_names = {} - names_to_roles = {} - agents = {} - user_names = json.loads(os.environ["User_Names"]) if "User_Names" in os.environ else [] - for agent_name, agent_dict in config["agents"].items(): - agent_state_roles = {} - agent_LLMs = {} - agent_begins = {} - for state_name, agent_role in agent_dict["roles"].items(): - - agent_begins[state_name] = {} - - if state_name not in roles_to_names: - roles_to_names[state_name] = {} - if state_name not in names_to_roles: - names_to_roles[state_name] = {} - roles_to_names[state_name][agent_role] = agent_name - names_to_roles[state_name][agent_name] = agent_role - agent_state_roles[state_name] = agent_role - current_state = config["states"][state_name] - - current_state_begin_role = current_state["begin_role"] if "begin_role" in current_state else current_state["roles"][0] - agent_begins[state_name]["is_begin"] = current_state_begin_role==agent_role if "begin_role" in current_state else False - agent_begins[state_name]["begin_query"] = current_state["begin_query"] if "begin_query" in current_state else " " - agent_LLMs[state_name] = init_LLM(f"logs/{agent_name}",**current_state["agent_states"][agent_role]) - agents[agent_name] = cls( - agent_name, - agent_state_roles, - LLMs=agent_LLMs, - is_user=agent_name in user_names, - style = agent_dict["style"], - begins = agent_begins - ) - assert len(config["agents"].keys()) != 2 or (roles_to_names[config["root"]][config["states"][config["root"]]["begin_role"]] not in user_names and "begin_query" in config["states"][config["root"]]),"In a single-agent scenario, there must be an opening statement and it must be the agent" - return agents, roles_to_names, names_to_roles - - def step(self, current_state,input=""): - """ - return actions by current state and environment - Return: action(Action) - """ - - current_state.chat_nums +=1 - state_begin = current_state.is_begin - agent_begin = self.begins[current_state.name]["is_begin"] - self.begins[current_state.name]["is_begin"] = False - current_state.is_begin = False - environment = self.environment - - self.current_state = current_state - # 先根据当前环境更新信息 - # First update the information according to the current environment - - response = " " - res_dict = {} - - if self.is_user: - response = f"{self.name}:{input}" - else: - if len(environment.shared_memory["long_term_memory"])>0: - current_history = self.observe() - self.long_term_memory.append(current_history) - if agent_begin: - response = (char for char in self.begins[current_state.name]["begin_query"]) - else: - response,res_dict = self.act() - - - action_dict = { - "response": response, - 
"res_dict": res_dict, - "role": self.state_roles[current_state.name], - "name": self.name, - "state_begin" : state_begin, - "agent_begin" : agent_begin, - "is_user" : self.is_user - } - return Action(**action_dict) - - def act(self): - """ - return actions by the current state - """ - current_state = self.current_state - chat_history = self.long_term_memory - current_LLM = self.LLMs[current_state.name] - - system_prompt, last_prompt, res_dict = self.compile() - - - - response = current_LLM.get_response( - chat_history, system_prompt, last_prompt, stream=True - ) - return response,res_dict - - def update_memory(self, memory): - self.long_term_memory.append( - {"role": "assistant", "content": memory.content} - ) - - MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"]) - environment = self.environment - current_chat_history_idx = environment.current_chat_history_idx if environment.environment_type == "competive" else 0 - - current_long_term_memory = environment.shared_memory["long_term_memory"][current_chat_history_idx:] - last_conversation_idx = environment._get_agent_last_conversation_idx(self,current_long_term_memory) - if len(current_long_term_memory)-last_conversation_idx >= MAX_CHAT_HISTORY: - current_state = self.current_state - current_role = self.state_roles[current_state.name] - current_component_dict = current_state.components[current_role] - - # get chat history from new conversation - conversations = environment._get_agent_new_memory(self,current_long_term_memory) - - # get summary - summary_prompt = ( - current_state.summary_prompt[current_role] - if current_state.summary_prompt - else f"""your name is {self.name},your role is{current_component_dict["style"].role},your task is {current_component_dict["task"].task}.\n""" - ) - summary_prompt =eval(Agent_summary_system_prompt) - summary = self.LLMs[current_state.name].get_response(None, summary_prompt,stream = False) - self.short_term_memory = summary - - - def compile(self): - """ - get prompt from state depend on your role - Return: - system_prompt:system_prompt for agents's LLM - last_prompt:last_prompt for agents's LLM - res_dict(dict): Other return from tool component.For example: search engine results - """ - current_state = self.current_state - self.current_roles = self.state_roles[current_state.name] - current_state_name = current_state.name - self.LLM = self.LLMs[current_state_name] - components = current_state.components[self.state_roles[current_state_name]] - - system_prompt = self.current_state.environment_prompt - last_prompt = "" - - res_dict = {} - for component in components.values(): - if isinstance(component, (OutputComponent, LastComponent)): - last_prompt = last_prompt + "\n" + component.get_prompt(self) - elif isinstance(component, PromptComponent): - system_prompt = ( - system_prompt + "\n" + component.get_prompt(self) - ) - elif isinstance(component, ToolComponent): - response = component.func(self) - if "prompt" in response and response["prompt"]: - last_prompt = last_prompt + "\n" + response["prompt"] - res_dict.update(response) - - name = self.name - query = self.environment.shared_memory["long_term_memory"][-1] - last_prompt = eval(Agent_last_prompt) - system_prompt = eval(Agent_system_prompt) - return system_prompt, last_prompt, res_dict - - - def observe(self): - """ - Update one's own memory according to the current environment, including: updating short-term memory; updating long-term memory - """ - return self.environment._observe(self) - - - def generate_sop(self): - pass - - def 
reflection(self): - pass - - diff --git a/spaces/AIWaves/Software_Company/src/agents/LLM/__init__.py b/spaces/AIWaves/Software_Company/src/agents/LLM/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/work_dirs/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/work_dirs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AUBADA-ALARABI/AraPoet/app.py b/spaces/AUBADA-ALARABI/AraPoet/app.py deleted file mode 100644 index af769dff8abd1dbf74587cd2d33de416baf01ade..0000000000000000000000000000000000000000 --- a/spaces/AUBADA-ALARABI/AraPoet/app.py +++ /dev/null @@ -1,121 +0,0 @@ -# coding=utf8 - -import json -import torch -import gradio as gr -import pyarabic.araby as araby -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoConfig - -feature_names = [ - "Title", - "Meter", - "Theme", - "Name", - "Era", - "Country", - "Type" -] - -with open("./poet_names.json", 'r', encoding="utf-8") as fin: - poet_names = json.load(fin) - -def normalize_text(text): - text = araby.strip_tatweel(text) - return text - -def generate_poem(country, era, meter, theme, lang_type, poet, num_lines, num_poems, title): - - num_poems = int(num_poems) - prompt = title - prompt = normalize_text(prompt) - - features = [prompt, meter, theme, poet, era, country, lang_type] - - prompt = "" - for name, feat in zip(feature_names, features): - prompt += f"{name}: {feat}; " - prompt += f"Length: {num_lines}; Poem:" - - num_beams = 5 - top_k = 50 - top_p = 0.9 - r_penalty = 5. - - input_ids = torch.tensor(tokenizer.encode(prompt)).unsqueeze(0) - print(f"> Running: {prompt} | {num_poems} Poems") - outputs = model.generate(input_ids=input_ids, - min_length=32, - max_length=256, - do_sample=True, - top_k=top_k, - top_p=top_p, - repetition_penalty=r_penalty, - num_beams=num_beams, - num_return_sequences=num_poems, - early_stopping=True - ) - - poems = [] - print(f"> # of Outputs: {len(outputs)}") - for output in outputs: - raw = tokenizer.decode(output) - raw = raw.replace("", "").replace("", "") - print("="*100) - print(raw) - print("="*100) - poems += ['\n'.join(raw.split(""))] - - return "\n\n".join(poems) - -meters = ['البسيط', 'التفعيله', 'الحداء', 'الخفيف', 'الدوبيت', 'الرجز', 'الرمل', 'السريع', 'السلسلة', 'الصخري', 'الطويل', 'الكامل', 'الكان كان', 'اللويحاني', 'المتدارك', 'المتقارب', 'المجتث', 'المديد', 'المسحوب', 'المضارع', 'المقتضب', 'المنسرح', 'المواليا', 'الموشح', 'الهجيني', 'الهزج', 'الوافر', 'بحر أحذ الكامل', 'بحر أحذ المديد', 'بحر أحذ الوافر', 'بحر البسيط', 'بحر التفعيله', 'بحر الخبب', 'بحر الخفيف', 'بحر الدوبيت', 'بحر الرجز', 'بحر الرمل', 'بحر السريع', 'بحر السلسلة', 'بحر الطويل', 'بحر القوما', 'بحر الكامل', 'بحر الكامل المقطوع', 'بحر المتدارك', 'بحر المتدارك المنهوك', 'بحر المتقارب', 'بحر المجتث', 'بحر المديد', 'بحر المضارع', 'بحر المقتضب', 'بحر المنسرح', 'بحر المواليا', 'بحر الهزج', 'بحر الوافر', 'بحر تفعيلة الرجز', 'بحر تفعيلة الرمل', 'بحر تفعيلة الكامل', 'بحر تفعيلة المتقارب', 'بحر مجزوء البسيط', 'بحر مجزوء الخفيف', 'بحر مجزوء الدوبيت', 'بحر مجزوء الرجز', 'بحر مجزوء الرمل', 'بحر مجزوء الرمل ', 'بحر مجزوء السريع', 'بحر مجزوء الطويل', 'بحر مجزوء الكامل', 'بحر مجزوء المتدارك', 'بحر مجزوء المتقارب', 'بحر مجزوء المجتث', 'بحر مجزوء المديد', 'بحر مجزوء المنسرح', 'بحر مجزوء المواليا', 'بحر مجزوء الهزج', 'بحر مجزوء 
الوافر', 'بحر مجزوء موشح', 'بحر مخلع البسيط', 'بحر مخلع الرجز', 'بحر مخلع الرمل', 'بحر مخلع السريع', 'بحر مخلع الكامل', 'بحر مخلع موشح', 'بحر مربع البسيط', 'بحر مربع الرجز', 'بحر مشطور الرجز', 'بحر مشطور السريع', 'بحر مشطور الطويل', 'بحر منهوك البسيط', 'بحر منهوك الرجز', 'بحر منهوك الكامل', 'بحر منهوك المنسرح', 'بحر موشح', 'بسيط', 'زجل', 'شعر التفعيلة', 'شعر حر', 'عامي', 'عدة أبحر', 'عموديه', 'مجزوء الخفيف', 'نثريه', 'None'] -themes = ['قصيدة اعتذار', 'قصيدة الاناشيد', 'قصيدة المعلقات', 'قصيدة حزينه', 'قصيدة دينية', 'قصيدة ذم', 'قصيدة رثاء', 'قصيدة رومنسيه', 'قصيدة سياسية', 'قصيدة شوق', 'قصيدة عامه', 'قصيدة عتاب', 'قصيدة غزل', 'قصيدة فراق', 'قصيدة قصيره', 'قصيدة مدح', 'قصيدة هجاء', 'قصيدة وطنيه', 'None'] -language_types = ['شعبي', 'عامي', 'فصحى', 'فصيح', '-', 'None'] -poet_era = ['العصر الأموي', 'العصر الأندلسي', 'العصر الأيوبي', 'العصر الإسلامي', 'العصر الجاهلي', 'العصر الحديث', 'العصر العباسي', 'العصر العثماني', 'العصر الفاطمي', 'العصر المملوكي', 'المخضرمين', 'المغرب والأندلس', 'عصر بين الدولتين', 'قبل الإسلام', 'None'] -countries = ['الأردن', 'الإمارات', 'البحرين', 'الجزائر', 'السعودية', 'السنغال', 'السودان', 'الصومال', 'العراق', 'الكويت', 'المغرب', 'اليمن', 'تونس', 'سوريا', 'سورية', 'عمان', 'فلسطين', 'قطر', 'لبنان', 'ليبيا', 'مصر', 'موريتانيا', 'None'] - -tokenizer: AutoTokenizer = AutoTokenizer.from_pretrained("bkhmsi/arapoet-mt5", use_auth_token="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") -model: AutoModelForSeq2SeqLM = AutoModelForSeq2SeqLM.from_pretrained("bkhmsi/arapoet-mt5", use_auth_token="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") -model.eval() - -title = "" -with gr.Blocks(title=title) as demo: - inputs = [] - - gr.Markdown( - """ - # AraPoet: Controlled Arabic Poetry Generation - - The model hosted here is a finetuned version of [mT5-large](https://huggingface.co/google/mt5-large) (∼ 1.2B parameters) on the largest repository of Arabic poems, the [ashaar](https://huggingface.co/datasets/arbml/ashaar) dataset. - The model can be conditioned on a set of attributes to control the style of the generated poem. - Namely: the poet name, country, era, meter, theme, language type, title and the length of the poem. - You can start by clicking on one of the examples below or try your own input. - """ - ) - - with gr.Row(): - inputs += [gr.Dropdown(countries, label="Country", value="مصر")] - inputs += [gr.Dropdown(poet_era, label="Era", value="العصر الحديث")] - with gr.Row(): - inputs += [gr.Dropdown(meters, label="Meter", value="بحر السريع")] - inputs += [gr.Dropdown(themes, label="Theme", value="قصيدة رومنسيه")] - with gr.Row(): - inputs += [gr.Dropdown(language_types, label="Language Type", value="فصحى")] - inputs += [gr.Dropdown(poet_names, label="Poet", value="أحمد شوقي")] - with gr.Row(): - inputs += [gr.Slider(2, 20, value=6, step=1, label="Number of Lines")] - inputs += [gr.Slider(1, 4, value=1, step=1, label="Number of Samples")] - with gr.Row(): - inputs += [gr.Textbox(label="Title", value="إثن عنان القلب واسلم به")] - - btn = gr.Button("Generate") - examples = gr.Examples(examples="./examples", inputs=inputs) - btn.click(generate_poem, inputs, gr.TextArea(label="Generation")) - - - gr.Markdown( - """ - Checkout our [AraPoet Preprint](https://github.com/BKHMSI/BKHMSI.github.io/blob/master/archive/resources/AraPoet.pdf) for more details about the model. 
- """ - ) - -demo.launch() \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/$types.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/$types.d.ts deleted file mode 100644 index d011ba126135bc07a71ff46037ecfcf2bff72810..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/$types.d.ts +++ /dev/null @@ -1,24 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? { [K in keyof O]: O[K] } : never; -type RouteParams = { id: string } -type RouteId = '/conversation/[id]'; -type MaybeWithVoid = {} extends T ? T | void : T; -export type RequiredKeys = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K; }[keyof T]; -type OutputDataShape = MaybeWithVoid> & Partial> & Record> -type EnsureDefined = T extends null | undefined ? {} : T; -type OptionalUnion, A extends keyof U = U extends U ? keyof U : never> = U extends unknown ? { [P in Exclude]?: never } & U : never; -export type Snapshot = Kit.Snapshot; -type PageServerParentData = EnsureDefined; -type PageParentData = EnsureDefined; - -export type EntryGenerator = () => Promise> | Array; -export type PageServerLoad = OutputDataShape> = Kit.ServerLoad; -export type PageServerLoadEvent = Parameters[0]; -export type ActionData = unknown; -export type PageServerData = Expand>>>>>; -export type PageData = Expand & EnsureDefined>; -export type Action | void = Record | void> = Kit.Action -export type Actions | void = Record | void> = Kit.Actions -export type RequestHandler = Kit.RequestHandler; -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/database.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/database.ts deleted file mode 100644 index d91bd445988f2dfe09431c2b803e387ebd911f16..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/server/database.ts +++ /dev/null @@ -1,23 +0,0 @@ -let client = undefined; -export const connectPromise = undefined; - -const db = undefined; - -const conversations = undefined; -const sharedConversations = undefined; -const abortedGenerations = undefined; -const settings = undefined; -const users = undefined; -const webSearches = undefined; -const messageEvents = undefined; - -export { client, db }; -export const collections = { - conversations, - sharedConversations, - abortedGenerations, - settings, - users, - webSearches, - messageEvents, -}; diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/api.py b/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/api.py deleted file mode 100644 index d6968ef9dd4a087c862f8e66b05108eb12f671f4..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/api.py +++ /dev/null @@ -1,269 +0,0 @@ -from enum import Enum, unique - -import cv2 -import torch -from basicsr.utils import img2tensor -from ldm.util import resize_numpy_image -from PIL import Image -from torch import autocast - - -@unique -class ExtraCondition(Enum): - sketch = 0 - keypose = 1 - seg = 2 - depth = 3 - canny = 4 - style = 5 - color = 6 - openpose = 7 - - -def get_cond_model(opt, cond_type: ExtraCondition): - if cond_type == ExtraCondition.sketch: - from ldm.modules.extra_condition.model_edge import pidinet - model = pidinet() - ckp = torch.load('models/table5_pidinet.pth', 
map_location='cpu')['state_dict'] - model.load_state_dict({k.replace('module.', ''): v for k, v in ckp.items()}, strict=True) - model.to(opt.device) - return model - elif cond_type == ExtraCondition.seg: - raise NotImplementedError - elif cond_type == ExtraCondition.keypose: - import mmcv - from mmdet.apis import init_detector - from mmpose.apis import init_pose_model - det_config = 'configs/mm/faster_rcnn_r50_fpn_coco.py' - det_checkpoint = 'models/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth' - pose_config = 'configs/mm/hrnet_w48_coco_256x192.py' - pose_checkpoint = 'models/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth' - det_config_mmcv = mmcv.Config.fromfile(det_config) - det_model = init_detector(det_config_mmcv, det_checkpoint, device=opt.device) - pose_config_mmcv = mmcv.Config.fromfile(pose_config) - pose_model = init_pose_model(pose_config_mmcv, pose_checkpoint, device=opt.device) - return {'pose_model': pose_model, 'det_model': det_model} - elif cond_type == ExtraCondition.depth: - from ldm.modules.extra_condition.midas.api import MiDaSInference - model = MiDaSInference(model_type='dpt_hybrid').to(opt.device) - return model - elif cond_type == ExtraCondition.canny: - return None - elif cond_type == ExtraCondition.style: - from transformers import CLIPProcessor, CLIPVisionModel - version = 'openai/clip-vit-large-patch14' - processor = CLIPProcessor.from_pretrained(version) - clip_vision_model = CLIPVisionModel.from_pretrained(version).to(opt.device) - return {'processor': processor, 'clip_vision_model': clip_vision_model} - elif cond_type == ExtraCondition.color: - return None - elif cond_type == ExtraCondition.openpose: - from ldm.modules.extra_condition.openpose.api import OpenposeInference - model = OpenposeInference().to(opt.device) - return model - else: - raise NotImplementedError - - -def get_cond_sketch(opt, cond_image, cond_inp_type, cond_model=None): - if isinstance(cond_image, str): - edge = cv2.imread(cond_image) - else: - # for gradio input, pay attention, it's rgb numpy - edge = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - edge = resize_numpy_image(edge, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = edge.shape[:2] - if cond_inp_type == 'sketch': - edge = img2tensor(edge)[0].unsqueeze(0).unsqueeze(0) / 255. - edge = edge.to(opt.device) - elif cond_inp_type == 'image': - edge = img2tensor(edge).unsqueeze(0) / 255. - edge = cond_model(edge.to(opt.device))[-1] - else: - raise NotImplementedError - - # edge = 1-edge # for white background - edge = edge > 0.5 - edge = edge.float() - - return edge - - -def get_cond_seg(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - seg = cv2.imread(cond_image) - else: - seg = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - seg = resize_numpy_image(seg, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = seg.shape[:2] - if cond_inp_type == 'seg': - seg = img2tensor(seg).unsqueeze(0) / 255. - seg = seg.to(opt.device) - else: - raise NotImplementedError - - return seg - - -def get_cond_keypose(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - pose = cv2.imread(cond_image) - else: - pose = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - pose = resize_numpy_image(pose, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = pose.shape[:2] - if cond_inp_type == 'keypose': - pose = img2tensor(pose).unsqueeze(0) / 255. 
- pose = pose.to(opt.device) - elif cond_inp_type == 'image': - from ldm.modules.extra_condition.utils import imshow_keypoints - from mmdet.apis import inference_detector - from mmpose.apis import (inference_top_down_pose_model, process_mmdet_results) - - # mmpose seems not compatible with autocast fp16 - with autocast("cuda", dtype=torch.float32): - mmdet_results = inference_detector(cond_model['det_model'], pose) - # keep the person class bounding boxes. - person_results = process_mmdet_results(mmdet_results, 1) - - # optional - return_heatmap = False - dataset = cond_model['pose_model'].cfg.data['test']['type'] - - # e.g. use ('backbone', ) to return backbone feature - output_layer_names = None - pose_results, returned_outputs = inference_top_down_pose_model( - cond_model['pose_model'], - pose, - person_results, - bbox_thr=0.2, - format='xyxy', - dataset=dataset, - dataset_info=None, - return_heatmap=return_heatmap, - outputs=output_layer_names) - - # show the results - pose = imshow_keypoints(pose, pose_results, radius=2, thickness=2) - pose = img2tensor(pose).unsqueeze(0) / 255. - pose = pose.to(opt.device) - else: - raise NotImplementedError - - return pose - - -def get_cond_depth(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - depth = cv2.imread(cond_image) - else: - depth = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - depth = resize_numpy_image(depth, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = depth.shape[:2] - if cond_inp_type == 'depth': - depth = img2tensor(depth).unsqueeze(0) / 255. - depth = depth.to(opt.device) - elif cond_inp_type == 'image': - depth = img2tensor(depth).unsqueeze(0) / 127.5 - 1.0 - depth = cond_model(depth.to(opt.device)).repeat(1, 3, 1, 1) - depth -= torch.min(depth) - depth /= torch.max(depth) - else: - raise NotImplementedError - - return depth - - -def get_cond_canny(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - canny = cv2.imread(cond_image) - else: - canny = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - canny = resize_numpy_image(canny, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = canny.shape[:2] - if cond_inp_type == 'canny': - canny = img2tensor(canny)[0:1].unsqueeze(0) / 255. - canny = canny.to(opt.device) - elif cond_inp_type == 'image': - canny = cv2.Canny(canny, 100, 200)[..., None] - canny = img2tensor(canny).unsqueeze(0) / 255. 
- canny = canny.to(opt.device) - else: - raise NotImplementedError - - return canny - - -def get_cond_style(opt, cond_image, cond_inp_type='image', cond_model=None): - assert cond_inp_type == 'image' - if isinstance(cond_image, str): - style = Image.open(cond_image) - else: - # numpy image to PIL image - style = Image.fromarray(cond_image) - - style_for_clip = cond_model['processor'](images=style, return_tensors="pt")['pixel_values'] - style_feat = cond_model['clip_vision_model'](style_for_clip.to(opt.device))['last_hidden_state'] - - return style_feat - - -def get_cond_color(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - color = cv2.imread(cond_image) - else: - color = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - color = resize_numpy_image(color, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = color.shape[:2] - if cond_inp_type == 'image': - color = cv2.resize(color, (opt.W//64, opt.H//64), interpolation=cv2.INTER_CUBIC) - color = cv2.resize(color, (opt.W, opt.H), interpolation=cv2.INTER_NEAREST) - color = img2tensor(color).unsqueeze(0) / 255. - color = color.to(opt.device) - return color - - -def get_cond_openpose(opt, cond_image, cond_inp_type='image', cond_model=None): - if isinstance(cond_image, str): - openpose_keypose = cv2.imread(cond_image) - else: - openpose_keypose = cv2.cvtColor(cond_image, cv2.COLOR_RGB2BGR) - openpose_keypose = resize_numpy_image( - openpose_keypose, max_resolution=opt.max_resolution, resize_short_edge=opt.resize_short_edge) - opt.H, opt.W = openpose_keypose.shape[:2] - if cond_inp_type == 'openpose': - openpose_keypose = img2tensor(openpose_keypose).unsqueeze(0) / 255. - openpose_keypose = openpose_keypose.to(opt.device) - elif cond_inp_type == 'image': - with autocast('cuda', dtype=torch.float32): - openpose_keypose = cond_model(openpose_keypose) - openpose_keypose = img2tensor(openpose_keypose).unsqueeze(0) / 255. 
- openpose_keypose = openpose_keypose.to(opt.device) - - else: - raise NotImplementedError - - return openpose_keypose - - -def get_adapter_feature(inputs, adapters): - ret_feat_map = None - ret_feat_seq = None - if not isinstance(inputs, list): - inputs = [inputs] - adapters = [adapters] - - for input, adapter in zip(inputs, adapters): - cur_feature = adapter['model'](input) - if isinstance(cur_feature, list): - if ret_feat_map is None: - ret_feat_map = list(map(lambda x: x * adapter['cond_weight'], cur_feature)) - else: - ret_feat_map = list(map(lambda x, y: x + y * adapter['cond_weight'], ret_feat_map, cur_feature)) - else: - if ret_feat_seq is None: - ret_feat_seq = cur_feature - else: - ret_feat_seq = torch.cat([ret_feat_seq, cur_feature], dim=1) - - return ret_feat_map, ret_feat_seq diff --git a/spaces/AgentVerse/agentVerse/agentverse_command/main_simulation_gui.py b/spaces/AgentVerse/agentVerse/agentverse_command/main_simulation_gui.py deleted file mode 100644 index 1634b3fd196a845e8bf2f0e5d24c8aa2ada50bf0..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse_command/main_simulation_gui.py +++ /dev/null @@ -1,21 +0,0 @@ -import os -from agentverse.gui import GUI -from argparse import ArgumentParser - -parser = ArgumentParser() -parser.add_argument("--task", type=str, default="simulation/nlp_classroom_9players") -parser.add_argument( - "--tasks_dir", - type=str, - default=os.path.join(os.path.dirname(__file__), "..", "agentverse", "tasks"), -) -args = parser.parse_args() - - -def cli_main(): - ui = GUI(args.task, args.tasks_dir) - ui.launch() - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/GetExpandedChildHeight.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/GetExpandedChildHeight.js deleted file mode 100644 index 76b5e4880ceef6aed77ab5d5d4e4b1a1190219f7..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridsizer/GetExpandedChildHeight.js +++ /dev/null @@ -1,11 +0,0 @@ -var GetExpandedChildHeight = function (child, rowHeight) { - var childHeight; - var childConfig = child.rexSizer; - if (childConfig.expand) { - var padding = childConfig.padding; - childHeight = rowHeight - padding.top - padding.bottom; - } - return childHeight; -} - -export default GetExpandedChildHeight; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/OverlapSizer.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/OverlapSizer.js deleted file mode 100644 index 871377e81e0159db297ad1bc7a2c6022c9593814..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/OverlapSizer.js +++ /dev/null @@ -1,49 +0,0 @@ -import BaseSizer from '../basesizer/BaseSizer.js'; -import Methods from './Methods.js'; -import Clear from '../../../plugins/utils/object/Clear.js'; -import IndexOf from '../../../plugins/utils/object/IndexOf.js'; - -const IsPlainObject = Phaser.Utils.Objects.IsPlainObject; -const GetValue = Phaser.Utils.Objects.GetValue; - -class OverlapSizer extends BaseSizer { - constructor(scene, x, y, minWidth, minHeight, config) { - if (IsPlainObject(x)) { - config = x; - x = GetValue(config, 'x', 0); - y = GetValue(config, 'y', 0); - minWidth = GetValue(config, 'width', undefined); - minHeight = GetValue(config, 'height', 
undefined); - } else if (IsPlainObject(minWidth)) { - config = minWidth; - minWidth = GetValue(config, 'width', undefined); - minHeight = GetValue(config, 'height', undefined); - } - - super(scene, x, y, minWidth, minHeight, config); - - this.type = 'rexOverlapSizer'; - this.sizerChildren = {}; - - this.addChildrenMap('items', this.sizerChildren); - } - - childToKey(gameObject) { - if (typeof (gameObject) === 'string') { - var key = gameObject; - if (this.sizerChildren.hasOwnPropery(key)) { - return key; - } - } else { - return IndexOf(this.sizerChildren, gameObject); - } - return null; - } -} - -Object.assign( - OverlapSizer.prototype, - Methods -); - -export default OverlapSizer; \ No newline at end of file diff --git a/spaces/AlexWang/lama/saicinpainting/training/losses/segmentation.py b/spaces/AlexWang/lama/saicinpainting/training/losses/segmentation.py deleted file mode 100644 index 3d4a9f94eaae84722db584277dbbf9bc41ede357..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/losses/segmentation.py +++ /dev/null @@ -1,43 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .constants import weights as constant_weights - - -class CrossEntropy2d(nn.Module): - def __init__(self, reduction="mean", ignore_label=255, weights=None, *args, **kwargs): - """ - weight (Tensor, optional): a manual rescaling weight given to each class. - If given, has to be a Tensor of size "nclasses" - """ - super(CrossEntropy2d, self).__init__() - self.reduction = reduction - self.ignore_label = ignore_label - self.weights = weights - if self.weights is not None: - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.weights = torch.FloatTensor(constant_weights[weights]).to(device) - - def forward(self, predict, target): - """ - Args: - predict:(n, c, h, w) - target:(n, 1, h, w) - """ - target = target.long() - assert not target.requires_grad - assert predict.dim() == 4, "{0}".format(predict.size()) - assert target.dim() == 4, "{0}".format(target.size()) - assert predict.size(0) == target.size(0), "{0} vs {1} ".format(predict.size(0), target.size(0)) - assert target.size(1) == 1, "{0}".format(target.size(1)) - assert predict.size(2) == target.size(2), "{0} vs {1} ".format(predict.size(2), target.size(2)) - assert predict.size(3) == target.size(3), "{0} vs {1} ".format(predict.size(3), target.size(3)) - target = target.squeeze(1) - n, c, h, w = predict.size() - target_mask = (target >= 0) * (target != self.ignore_label) - target = target[target_mask] - predict = predict.transpose(1, 2).transpose(2, 3).contiguous() - predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c) - loss = F.cross_entropy(predict, target, weight=self.weights, reduction=self.reduction) - return loss diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/shap_e.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/shap_e.md deleted file mode 100644 index 39f6416b18be861955a7b1c8af0d5d0e82feba56..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/shap_e.md +++ /dev/null @@ -1,190 +0,0 @@ - - -# Shap-E - -The Shap-E model was proposed in [Shap-E: Generating Conditional 3D Implicit Functions](https://huggingface.co/papers/2305.02463) by Alex Nichol and Heewon Jun from [OpenAI](https://github.com/openai). 
- -The abstract from the paper is: - -*We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space.* - -The original codebase can be found at [openai/shap-e](https://github.com/openai/shap-e). - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. - - - -## Usage Examples - -In the following, we will walk you through some examples of how to use Shap-E pipelines to create 3D objects in gif format. - -### Text-to-3D image generation - -We can use [`ShapEPipeline`] to create 3D object based on a text prompt. In this example, we will make a birthday cupcake for :firecracker: diffusers library's 1 year birthday. The workflow to use the Shap-E text-to-image pipeline is same as how you would use other text-to-image pipelines in diffusers. - -```python -import torch - -from diffusers import DiffusionPipeline - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -repo = "openai/shap-e" -pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) -pipe = pipe.to(device) - -guidance_scale = 15.0 -prompt = ["A firecracker", "A birthday cupcake"] - -images = pipe( - prompt, - guidance_scale=guidance_scale, - num_inference_steps=64, - frame_size=256, -).images -``` - -The output of [`ShapEPipeline`] is a list of lists of images frames. Each list of frames can be used to create a 3D object. Let's use the `export_to_gif` utility function in diffusers to make a 3D cupcake! - -```python -from diffusers.utils import export_to_gif - -export_to_gif(images[0], "firecracker_3d.gif") -export_to_gif(images[1], "cake_3d.gif") -``` -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/firecracker_out.gif) -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/cake_out.gif) - - -### Image-to-Image generation - -You can use [`ShapEImg2ImgPipeline`] along with other text-to-image pipelines in diffusers and turn your 2D generation into 3D. 
- -In this example, We will first genrate a cheeseburger with a simple prompt "A cheeseburger, white background" - -```python -from diffusers import DiffusionPipeline -import torch - -pipe_prior = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16) -pipe_prior.to("cuda") - -t2i_pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) -t2i_pipe.to("cuda") - -prompt = "A cheeseburger, white background" - -image_embeds, negative_image_embeds = pipe_prior(prompt, guidance_scale=1.0).to_tuple() -image = t2i_pipe( - prompt, - image_embeds=image_embeds, - negative_image_embeds=negative_image_embeds, -).images[0] - -image.save("burger.png") -``` - -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/burger_in.png) - -we will then use the Shap-E image-to-image pipeline to turn it into a 3D cheeseburger :) - -```python -from PIL import Image -from diffusers.utils import export_to_gif - -repo = "openai/shap-e-img2img" -pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) -pipe = pipe.to("cuda") - -guidance_scale = 3.0 -image = Image.open("burger.png").resize((256, 256)) - -images = pipe( - image, - guidance_scale=guidance_scale, - num_inference_steps=64, - frame_size=256, -).images - -gif_path = export_to_gif(images[0], "burger_3d.gif") -``` -![img](https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/shap_e/burger_out.gif) - -### Generate mesh - -For both [`ShapEPipeline`] and [`ShapEImg2ImgPipeline`], you can generate mesh output by passing `output_type` as `mesh` to the pipeline, and then use the [`ShapEPipeline.export_to_ply`] utility function to save the output as a `ply` file. We also provide a [`ShapEPipeline.export_to_obj`] function that you can use to save mesh outputs as `obj` files. - -```python -import torch - -from diffusers import DiffusionPipeline -from diffusers.utils import export_to_ply - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -repo = "openai/shap-e" -pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16, variant="fp16") -pipe = pipe.to(device) - -guidance_scale = 15.0 -prompt = "A birthday cupcake" - -images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images - -ply_path = export_to_ply(images[0], "3d_cake.ply") -print(f"saved to folder: {ply_path}") -``` - -Huggingface Datasets supports mesh visualization for mesh files in `glb` format. Below we will show you how to convert your mesh file into `glb` format so that you can use the Dataset viewer to render 3D objects. - -We need to install `trimesh` library. - -``` -pip install trimesh -``` - -To convert the mesh file into `glb` format, - -```python -import trimesh - -mesh = trimesh.load("3d_cake.ply") -mesh.export("3d_cake.glb", file_type="glb") -``` - -By default, the mesh output of Shap-E is from the bottom viewpoint; you can change the default viewpoint by applying a rotation transformation - -```python -import trimesh -import numpy as np - -mesh = trimesh.load("3d_cake.ply") -rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) -mesh = mesh.apply_transform(rot) -mesh.export("3d_cake.glb", file_type="glb") -``` - -Now you can upload your mesh file to your dataset and visualize it! 
Here is the link to the 3D cake we just generated -https://huggingface.co/datasets/hf-internal-testing/diffusers-images/blob/main/shap_e/3d_cake.glb - -## ShapEPipeline -[[autodoc]] ShapEPipeline - - all - - __call__ - -## ShapEImg2ImgPipeline -[[autodoc]] ShapEImg2ImgPipeline - - all - - __call__ - -## ShapEPipelineOutput -[[autodoc]] pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/habana.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/habana.md deleted file mode 100644 index 24846615c95ce1ed975822fe3ee854a5b379bcf7..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/habana.md +++ /dev/null @@ -1,79 +0,0 @@ - - -# How to use Stable Diffusion on Habana Gaudi - -🤗 Diffusers is compatible with Habana Gaudi through 🤗 [Optimum Habana](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion). - -## Requirements - -- Optimum Habana 1.6 or later, [here](https://huggingface.co/docs/optimum/habana/installation) is how to install it. -- SynapseAI 1.10. - - -## Inference Pipeline - -To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: -- A pipeline with [`GaudiStableDiffusionPipeline`](https://huggingface.co/docs/optimum/habana/package_reference/stable_diffusion_pipeline). This pipeline supports *text-to-image generation*. -- A scheduler with [`GaudiDDIMScheduler`](https://huggingface.co/docs/optimum/habana/package_reference/stable_diffusion_pipeline#optimum.habana.diffusers.GaudiDDIMScheduler). This scheduler has been optimized for Habana Gaudi. - -When initializing the pipeline, you have to specify `use_habana=True` to deploy it on HPUs. -Furthermore, in order to get the fastest possible generations you should enable **HPU graphs** with `use_hpu_graphs=True`. -Finally, you will need to specify a [Gaudi configuration](https://huggingface.co/docs/optimum/habana/package_reference/gaudi_config) which can be downloaded from the [Hugging Face Hub](https://huggingface.co/Habana). - -```python -from optimum.habana import GaudiConfig -from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline - -model_name = "stabilityai/stable-diffusion-2-base" -scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") -pipeline = GaudiStableDiffusionPipeline.from_pretrained( - model_name, - scheduler=scheduler, - use_habana=True, - use_hpu_graphs=True, - gaudi_config="Habana/stable-diffusion-2", -) -``` - -You can then call the pipeline to generate images by batches from one or several prompts: -```python -outputs = pipeline( - prompt=[ - "High quality photo of an astronaut riding a horse in space", - "Face of a yellow cat, high resolution, sitting on a park bench", - ], - num_images_per_prompt=10, - batch_size=4, -) -``` - -For more information, check out Optimum Habana's [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion) and the [example](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion) provided in the official Github repository. 
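The batched call above returns all generated samples in a single output object. As a minimal sketch of how to persist them, continuing from the `outputs` variable in the previous snippet (this assumes the Gaudi pipeline output exposes an `images` list of PIL images, as other Diffusers pipelines do; check the Optimum Habana documentation for the exact output type of your version):

```python
from pathlib import Path

# Assumption: `outputs.images` is a list of PIL.Image objects, as in other
# Diffusers pipelines. Verify against the Optimum Habana docs for your version.
save_dir = Path("generated_images")
save_dir.mkdir(exist_ok=True)

for idx, image in enumerate(outputs.images):
    # One PNG per generated sample, e.g. generated_images/image_003.png
    image.save(save_dir / f"image_{idx:03d}.png")
```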
- - -## Benchmark - -Here are the latencies for Habana first-generation Gaudi and Gaudi2 with the [Habana/stable-diffusion](https://huggingface.co/Habana/stable-diffusion) and [Habana/stable-diffusion-2](https://huggingface.co/Habana/stable-diffusion-2) Gaudi configurations (mixed precision bf16/fp32): - -- [Stable Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) (512x512 resolution): - -| | Latency (batch size = 1) | Throughput (batch size = 8) | -| ---------------------- |:------------------------:|:---------------------------:| -| first-generation Gaudi | 3.80s | 0.308 images/s | -| Gaudi2 | 1.33s | 1.081 images/s | - -- [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1) (768x768 resolution): - -| | Latency (batch size = 1) | Throughput | -| ---------------------- |:------------------------:|:-------------------------------:| -| first-generation Gaudi | 10.2s | 0.108 images/s (batch size = 4) | -| Gaudi2 | 3.17s | 0.379 images/s (batch size = 8) | diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_unet_2d_blocks.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_unet_2d_blocks.py deleted file mode 100644 index 4d658f2829329a1fd5d26edb0c50c0887d024044..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_unet_2d_blocks.py +++ /dev/null @@ -1,337 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
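# Unit tests for the individual 2D UNet blocks: each test case below instantiates one block type (down, mid, or up) with small dummy inputs and compares a fixed slice of its output against hard-coded expected values.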
-import unittest - -from diffusers.models.unet_2d_blocks import * # noqa F403 -from diffusers.utils import torch_device - -from .test_unet_blocks_common import UNetBlockTesterMixin - - -class DownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = DownBlock2D # noqa F405 - block_type = "down" - - def test_output(self): - expected_slice = [-0.0232, -0.9869, 0.8054, -0.0637, -0.1688, -1.4264, 0.4470, -1.3394, 0.0904] - super().test_output(expected_slice) - - -class ResnetDownsampleBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = ResnetDownsampleBlock2D # noqa F405 - block_type = "down" - - def test_output(self): - expected_slice = [0.0710, 0.2410, -0.7320, -1.0757, -1.1343, 0.3540, -0.0133, -0.2576, 0.0948] - super().test_output(expected_slice) - - -class AttnDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = AttnDownBlock2D # noqa F405 - block_type = "down" - - def test_output(self): - expected_slice = [0.0636, 0.8964, -0.6234, -1.0131, 0.0844, 0.4935, 0.3437, 0.0911, -0.2957] - super().test_output(expected_slice) - - -class CrossAttnDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = CrossAttnDownBlock2D # noqa F405 - block_type = "down" - - def prepare_init_args_and_inputs_for_common(self): - init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() - init_dict["cross_attention_dim"] = 32 - return init_dict, inputs_dict - - def test_output(self): - expected_slice = [0.2238, -0.7396, -0.2255, -0.3829, 0.1925, 1.1665, 0.0603, -0.7295, 0.1983] - super().test_output(expected_slice) - - -class SimpleCrossAttnDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = SimpleCrossAttnDownBlock2D # noqa F405 - block_type = "down" - - @property - def dummy_input(self): - return super().get_dummy_input(include_encoder_hidden_states=True) - - def prepare_init_args_and_inputs_for_common(self): - init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() - init_dict["cross_attention_dim"] = 32 - return init_dict, inputs_dict - - @unittest.skipIf(torch_device == "mps", "MPS result is not consistent") - def test_output(self): - expected_slice = [0.7921, -0.0992, -0.1962, -0.7695, -0.4242, 0.7804, 0.4737, 0.2765, 0.3338] - super().test_output(expected_slice) - - -class SkipDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = SkipDownBlock2D # noqa F405 - block_type = "down" - - @property - def dummy_input(self): - return super().get_dummy_input(include_skip_sample=True) - - def test_output(self): - expected_slice = [-0.0845, -0.2087, -0.2465, 0.0971, 0.1900, -0.0484, 0.2664, 0.4179, 0.5069] - super().test_output(expected_slice) - - -class AttnSkipDownBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = AttnSkipDownBlock2D # noqa F405 - block_type = "down" - - @property - def dummy_input(self): - return super().get_dummy_input(include_skip_sample=True) - - def test_output(self): - expected_slice = [0.5539, 0.1609, 0.4924, 0.0537, -0.1995, 0.4050, 0.0979, -0.2721, -0.0642] - super().test_output(expected_slice) - - -class DownEncoderBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = DownEncoderBlock2D # noqa F405 - block_type = "down" - - @property - def dummy_input(self): - return super().get_dummy_input(include_temb=False) - - def prepare_init_args_and_inputs_for_common(self): - init_dict = { - "in_channels": 32, - "out_channels": 32, - } - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - def 
test_output(self): - expected_slice = [1.1102, 0.5302, 0.4872, -0.0023, -0.8042, 0.0483, -0.3489, -0.5632, 0.7626] - super().test_output(expected_slice) - - -class AttnDownEncoderBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = AttnDownEncoderBlock2D # noqa F405 - block_type = "down" - - @property - def dummy_input(self): - return super().get_dummy_input(include_temb=False) - - def prepare_init_args_and_inputs_for_common(self): - init_dict = { - "in_channels": 32, - "out_channels": 32, - } - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - def test_output(self): - expected_slice = [0.8966, -0.1486, 0.8568, 0.8141, -0.9046, -0.1342, -0.0972, -0.7417, 0.1538] - super().test_output(expected_slice) - - -class UNetMidBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = UNetMidBlock2D # noqa F405 - block_type = "mid" - - def prepare_init_args_and_inputs_for_common(self): - init_dict = { - "in_channels": 32, - "temb_channels": 128, - } - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - def test_output(self): - expected_slice = [-0.1062, 1.7248, 0.3494, 1.4569, -0.0910, -1.2421, -0.9984, 0.6736, 1.0028] - super().test_output(expected_slice) - - -class UNetMidBlock2DCrossAttnTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = UNetMidBlock2DCrossAttn # noqa F405 - block_type = "mid" - - def prepare_init_args_and_inputs_for_common(self): - init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() - init_dict["cross_attention_dim"] = 32 - return init_dict, inputs_dict - - def test_output(self): - expected_slice = [0.0187, 2.4220, 0.4484, 1.1203, -0.6121, -1.5122, -0.8270, 0.7851, 1.8335] - super().test_output(expected_slice) - - -class UNetMidBlock2DSimpleCrossAttnTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = UNetMidBlock2DSimpleCrossAttn # noqa F405 - block_type = "mid" - - @property - def dummy_input(self): - return super().get_dummy_input(include_encoder_hidden_states=True) - - def prepare_init_args_and_inputs_for_common(self): - init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() - init_dict["cross_attention_dim"] = 32 - return init_dict, inputs_dict - - def test_output(self): - expected_slice = [0.7143, 1.9974, 0.5448, 1.3977, 0.1282, -1.1237, -1.4238, 0.5530, 0.8880] - super().test_output(expected_slice) - - -class UpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = UpBlock2D # noqa F405 - block_type = "up" - - @property - def dummy_input(self): - return super().get_dummy_input(include_res_hidden_states_tuple=True) - - def test_output(self): - expected_slice = [-0.2041, -0.4165, -0.3022, 0.0041, -0.6628, -0.7053, 0.1928, -0.0325, 0.0523] - super().test_output(expected_slice) - - -class ResnetUpsampleBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = ResnetUpsampleBlock2D # noqa F405 - block_type = "up" - - @property - def dummy_input(self): - return super().get_dummy_input(include_res_hidden_states_tuple=True) - - def test_output(self): - expected_slice = [0.2287, 0.3549, -0.1346, 0.4797, -0.1715, -0.9649, 0.7305, -0.5864, -0.6244] - super().test_output(expected_slice) - - -class CrossAttnUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = CrossAttnUpBlock2D # noqa F405 - block_type = "up" - - @property - def dummy_input(self): - return super().get_dummy_input(include_res_hidden_states_tuple=True) - - def prepare_init_args_and_inputs_for_common(self): - init_dict, inputs_dict = 
super().prepare_init_args_and_inputs_for_common() - init_dict["cross_attention_dim"] = 32 - return init_dict, inputs_dict - - def test_output(self): - expected_slice = [-0.1403, -0.3515, -0.0420, -0.1425, 0.3167, 0.5094, -0.2181, 0.5931, 0.5582] - super().test_output(expected_slice) - - -class SimpleCrossAttnUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = SimpleCrossAttnUpBlock2D # noqa F405 - block_type = "up" - - @property - def dummy_input(self): - return super().get_dummy_input(include_res_hidden_states_tuple=True, include_encoder_hidden_states=True) - - def prepare_init_args_and_inputs_for_common(self): - init_dict, inputs_dict = super().prepare_init_args_and_inputs_for_common() - init_dict["cross_attention_dim"] = 32 - return init_dict, inputs_dict - - def test_output(self): - expected_slice = [0.2645, 0.1480, 0.0909, 0.8044, -0.9758, -0.9083, 0.0994, -1.1453, -0.7402] - super().test_output(expected_slice) - - -class AttnUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = AttnUpBlock2D # noqa F405 - block_type = "up" - - @property - def dummy_input(self): - return super().get_dummy_input(include_res_hidden_states_tuple=True) - - @unittest.skipIf(torch_device == "mps", "MPS result is not consistent") - def test_output(self): - expected_slice = [0.0979, 0.1326, 0.0021, 0.0659, 0.2249, 0.0059, 0.1132, 0.5952, 0.1033] - super().test_output(expected_slice) - - -class SkipUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = SkipUpBlock2D # noqa F405 - block_type = "up" - - @property - def dummy_input(self): - return super().get_dummy_input(include_res_hidden_states_tuple=True) - - def test_output(self): - expected_slice = [-0.0893, -0.1234, -0.1506, -0.0332, 0.0123, -0.0211, 0.0566, 0.0143, 0.0362] - super().test_output(expected_slice) - - -class AttnSkipUpBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = AttnSkipUpBlock2D # noqa F405 - block_type = "up" - - @property - def dummy_input(self): - return super().get_dummy_input(include_res_hidden_states_tuple=True) - - def test_output(self): - expected_slice = [0.0361, 0.0617, 0.2787, -0.0350, 0.0342, 0.3421, -0.0843, 0.0913, 0.3015] - super().test_output(expected_slice) - - -class UpDecoderBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = UpDecoderBlock2D # noqa F405 - block_type = "up" - - @property - def dummy_input(self): - return super().get_dummy_input(include_temb=False) - - def prepare_init_args_and_inputs_for_common(self): - init_dict = {"in_channels": 32, "out_channels": 32} - - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - def test_output(self): - expected_slice = [0.4404, 0.1998, -0.9886, -0.3320, -0.3128, -0.7034, -0.6955, -0.2338, -0.3137] - super().test_output(expected_slice) - - -class AttnUpDecoderBlock2DTests(UNetBlockTesterMixin, unittest.TestCase): - block_class = AttnUpDecoderBlock2D # noqa F405 - block_type = "up" - - @property - def dummy_input(self): - return super().get_dummy_input(include_temb=False) - - def prepare_init_args_and_inputs_for_common(self): - init_dict = {"in_channels": 32, "out_channels": 32} - - inputs_dict = self.dummy_input - return init_dict, inputs_dict - - def test_output(self): - expected_slice = [0.6738, 0.4491, 0.1055, 1.0710, 0.7316, 0.3339, 0.3352, 0.1023, 0.3568] - super().test_output(expected_slice) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py 
b/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py deleted file mode 100644 index 9bbc86ead7003ab75264f8cf0cd18edb735fe9fd..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py +++ /dev/null @@ -1,17 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py' -# model settings -conv_cfg = dict(type='ConvWS') -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://jhu/resnext50_32x4d_gn_ws', - backbone=dict( - type='ResNeXt', - depth=50, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - style='pytorch', - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) diff --git a/spaces/Andy1621/uniformer_image_detection/tools/model_converters/upgrade_model_version.py b/spaces/Andy1621/uniformer_image_detection/tools/model_converters/upgrade_model_version.py deleted file mode 100644 index 232c8bc4cf010084b817c545ab4e2ef34fdd4549..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/tools/model_converters/upgrade_model_version.py +++ /dev/null @@ -1,209 +0,0 @@ -import argparse -import re -import tempfile -from collections import OrderedDict - -import torch -from mmcv import Config - - -def is_head(key): - valid_head_list = [ - 'bbox_head', 'mask_head', 'semantic_head', 'grid_head', 'mask_iou_head' - ] - - return any(key.startswith(h) for h in valid_head_list) - - -def parse_config(config_strings): - temp_file = tempfile.NamedTemporaryFile() - config_path = f'{temp_file.name}.py' - with open(config_path, 'w') as f: - f.write(config_strings) - - config = Config.fromfile(config_path) - is_two_stage = True - is_ssd = False - is_retina = False - reg_cls_agnostic = False - if 'rpn_head' not in config.model: - is_two_stage = False - # check whether it is SSD - if config.model.bbox_head.type == 'SSDHead': - is_ssd = True - elif config.model.bbox_head.type == 'RetinaHead': - is_retina = True - elif isinstance(config.model['bbox_head'], list): - reg_cls_agnostic = True - elif 'reg_class_agnostic' in config.model.bbox_head: - reg_cls_agnostic = config.model.bbox_head \ - .reg_class_agnostic - temp_file.close() - return is_two_stage, is_ssd, is_retina, reg_cls_agnostic - - -def reorder_cls_channel(val, num_classes=81): - # bias - if val.dim() == 1: - new_val = torch.cat((val[1:], val[:1]), dim=0) - # weight - else: - out_channels, in_channels = val.shape[:2] - # conv_cls for softmax output - if out_channels != num_classes and out_channels % num_classes == 0: - new_val = val.reshape(-1, num_classes, in_channels, *val.shape[2:]) - new_val = torch.cat((new_val[:, 1:], new_val[:, :1]), dim=1) - new_val = new_val.reshape(val.size()) - # fc_cls - elif out_channels == num_classes: - new_val = torch.cat((val[1:], val[:1]), dim=0) - # agnostic | retina_cls | rpn_cls - else: - new_val = val - - return new_val - - -def truncate_cls_channel(val, num_classes=81): - - # bias - if val.dim() == 1: - if val.size(0) % num_classes == 0: - new_val = val[:num_classes - 1] - else: - new_val = val - # weight - else: - out_channels, in_channels = val.shape[:2] - # conv_logits - if out_channels % num_classes == 0: - new_val = val.reshape(num_classes, in_channels, *val.shape[2:])[1:] - new_val = new_val.reshape(-1, *val.shape[1:]) - # agnostic - else: - new_val = val - - return new_val - - -def truncate_reg_channel(val, num_classes=81): - # bias - if val.dim() == 1: - # fc_reg 
| rpn_reg - if val.size(0) % num_classes == 0: - new_val = val.reshape(num_classes, -1)[:num_classes - 1] - new_val = new_val.reshape(-1) - # agnostic - else: - new_val = val - # weight - else: - out_channels, in_channels = val.shape[:2] - # fc_reg | rpn_reg - if out_channels % num_classes == 0: - new_val = val.reshape(num_classes, -1, in_channels, - *val.shape[2:])[1:] - new_val = new_val.reshape(-1, *val.shape[1:]) - # agnostic - else: - new_val = val - - return new_val - - -def convert(in_file, out_file, num_classes): - """Convert keys in checkpoints. - - There can be some breaking changes during the development of mmdetection, - and this tool is used for upgrading checkpoints trained with old versions - to the latest one. - """ - checkpoint = torch.load(in_file) - in_state_dict = checkpoint.pop('state_dict') - out_state_dict = OrderedDict() - meta_info = checkpoint['meta'] - is_two_stage, is_ssd, is_retina, reg_cls_agnostic = parse_config( - '#' + meta_info['config']) - if meta_info['mmdet_version'] <= '0.5.3' and is_retina: - upgrade_retina = True - else: - upgrade_retina = False - - # MMDetection v2.5.0 unifies the class order in RPN - # if the model is trained in version=2.5.0 - if meta_info['mmdet_version'] < '2.5.0': - upgrade_rpn = True - else: - upgrade_rpn = False - - for key, val in in_state_dict.items(): - new_key = key - new_val = val - if is_two_stage and is_head(key): - new_key = 'roi_head.{}'.format(key) - - # classification - if upgrade_rpn: - m = re.search( - r'(conv_cls|retina_cls|rpn_cls|fc_cls|fcos_cls|' - r'fovea_cls).(weight|bias)', new_key) - else: - m = re.search( - r'(conv_cls|retina_cls|fc_cls|fcos_cls|' - r'fovea_cls).(weight|bias)', new_key) - if m is not None: - print(f'reorder cls channels of {new_key}') - new_val = reorder_cls_channel(val, num_classes) - - # regression - if upgrade_rpn: - m = re.search(r'(fc_reg).(weight|bias)', new_key) - else: - m = re.search(r'(fc_reg|rpn_reg).(weight|bias)', new_key) - if m is not None and not reg_cls_agnostic: - print(f'truncate regression channels of {new_key}') - new_val = truncate_reg_channel(val, num_classes) - - # mask head - m = re.search(r'(conv_logits).(weight|bias)', new_key) - if m is not None: - print(f'truncate mask prediction channels of {new_key}') - new_val = truncate_cls_channel(val, num_classes) - - m = re.search(r'(cls_convs|reg_convs).\d.(weight|bias)', key) - # Legacy issues in RetinaNet since V1.x - # Use ConvModule instead of nn.Conv2d in RetinaNet - # cls_convs.0.weight -> cls_convs.0.conv.weight - if m is not None and upgrade_retina: - param = m.groups()[1] - new_key = key.replace(param, f'conv.{param}') - out_state_dict[new_key] = val - print(f'rename the name of {key} to {new_key}') - continue - - m = re.search(r'(cls_convs).\d.(weight|bias)', key) - if m is not None and is_ssd: - print(f'reorder cls channels of {new_key}') - new_val = reorder_cls_channel(val, num_classes) - - out_state_dict[new_key] = new_val - checkpoint['state_dict'] = out_state_dict - torch.save(checkpoint, out_file) - - -def main(): - parser = argparse.ArgumentParser(description='Upgrade model version') - parser.add_argument('in_file', help='input checkpoint file') - parser.add_argument('out_file', help='output checkpoint file') - parser.add_argument( - '--num-classes', - type=int, - default=81, - help='number of classes of the original model') - args = parser.parse_args() - convert(args.in_file, args.out_file, args.num_classes) - - -if __name__ == '__main__': - main() diff --git 
a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r101-d16_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r101-d16_512x1024_40k_cityscapes.py deleted file mode 100644 index aec4254c8f4ae835cdfbe785bb0c375173d1e232..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r101-d16_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fcn_d6_r50-d16_512x1024_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x512_160k_ade20k.py deleted file mode 100644 index a3c86e18ea65c6aaa36a4fb6e2708f08c7ae1698..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x512_160k_ade20k.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../_base_/models/ocrnet_hr18.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict(decode_head=[ - dict( - type='FCNHead', - in_channels=[18, 36, 72, 144], - channels=sum([18, 36, 72, 144]), - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - kernel_size=1, - num_convs=1, - concat_input=False, - dropout_ratio=-1, - num_classes=150, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - in_channels=[18, 36, 72, 144], - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - channels=512, - ocr_channels=256, - dropout_ratio=-1, - num_classes=150, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), -]) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-messenger.css b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-messenger.css deleted file mode 100644 index fb3f65a458e76beddbab532539f56e2132e4a887..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/css/chat_style-messenger.css +++ /dev/null @@ -1,99 +0,0 @@ -.message { - padding-bottom: 25px; - font-size: 15px; - font-family: 'Noto Sans', Helvetica, Arial, sans-serif; - line-height: 1.428571429; -} - -.circle-you { - width: 50px; - height: 50px; - background-color: rgb(238, 78, 59); - border-radius: 50%; -} - -.circle-bot { - width: 50px; - height: 50px; - background-color: rgb(59, 78, 244); - border-radius: 50%; - float: left; - margin-right: 10px; - margin-top: 5px; -} - -.circle-bot img, -.circle-you img { - border-radius: 50%; - width: 100%; - height: 100%; - object-fit: cover; -} - -.circle-you { - margin-top: 5px; - float: right; -} - -.circle-bot + .text, .circle-you + .text { - border-radius: 18px; - padding: 8px 12px; -} - -.circle-bot + .text { - background-color: #E4E6EB; - float: left; -} - -.circle-you + .text { - float: right; - background-color: rgb(0, 132, 255); - margin-right: 10px; -} - -.circle-you + .text div, .circle-you + .text *, .dark .circle-you + .text div, .dark .circle-you + .text * { - color: #FFF !important; -} - -.circle-you + .text .username { - text-align: right; -} - -.dark .circle-bot + .text div, .dark .circle-bot + .text * { - color: #000; -} - -.text { 
- max-width: 80%; -} - -.text p { - margin-top: 5px; -} - -.username { - font-weight: bold; -} - -.message-body { -} - -.message-body img { - max-width: 300px; - max-height: 300px; - border-radius: 20px; -} - -.message-body p { - margin-bottom: 0 !important; - font-size: 15px !important; - line-height: 1.428571429 !important; -} - -.dark .message-body p em { - color: rgb(138, 138, 138) !important; -} - -.message-body p em { - color: rgb(110, 110, 110) !important; -} diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/scale.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/scale.py deleted file mode 100644 index c905fffcc8bf998d18d94f927591963c428025e2..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/scale.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -class Scale(nn.Module): - """A learnable scale parameter. - - This layer scales the input by a learnable factor. It multiplies a - learnable scale parameter of shape (1,) with input of any shape. - - Args: - scale (float): Initial value of scale factor. Default: 1.0 - """ - - def __init__(self, scale=1.0): - super(Scale, self).__init__() - self.scale = nn.Parameter(torch.tensor(scale, dtype=torch.float)) - - def forward(self, x): - return x * self.scale diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install.py deleted file mode 100644 index 55fdb124e8966a859b2655a8e99a9186c8755ba7..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install.py +++ /dev/null @@ -1,139 +0,0 @@ -from distutils.errors import DistutilsArgError -import inspect -import glob -import warnings -import platform -import distutils.command.install as orig - -import setuptools - -# Prior to numpy 1.9, NumPy relies on the '_install' name, so provide it for -# now. See https://github.com/pypa/setuptools/issues/199/ -_install = orig.install - - -class install(orig.install): - """Use easy_install to install the package, w/dependencies""" - - user_options = orig.install.user_options + [ - ('old-and-unmanageable', None, "Try not to use this!"), - ('single-version-externally-managed', None, - "used by system package builders to create 'flat' eggs"), - ] - boolean_options = orig.install.boolean_options + [ - 'old-and-unmanageable', 'single-version-externally-managed', - ] - new_commands = [ - ('install_egg_info', lambda self: True), - ('install_scripts', lambda self: True), - ] - _nc = dict(new_commands) - - def initialize_options(self): - - warnings.warn( - "setup.py install is deprecated. 
" - "Use build and pip and other standards-based tools.", - setuptools.SetuptoolsDeprecationWarning, - ) - - orig.install.initialize_options(self) - self.old_and_unmanageable = None - self.single_version_externally_managed = None - - def finalize_options(self): - orig.install.finalize_options(self) - if self.root: - self.single_version_externally_managed = True - elif self.single_version_externally_managed: - if not self.root and not self.record: - raise DistutilsArgError( - "You must specify --record or --root when building system" - " packages" - ) - - def handle_extra_path(self): - if self.root or self.single_version_externally_managed: - # explicit backward-compatibility mode, allow extra_path to work - return orig.install.handle_extra_path(self) - - # Ignore extra_path when installing an egg (or being run by another - # command without --root or --single-version-externally-managed - self.path_file = None - self.extra_dirs = '' - - def run(self): - # Explicit request for old-style install? Just do it - if self.old_and_unmanageable or self.single_version_externally_managed: - return orig.install.run(self) - - if not self._called_from_setup(inspect.currentframe()): - # Run in backward-compatibility mode to support bdist_* commands. - orig.install.run(self) - else: - self.do_egg_install() - - @staticmethod - def _called_from_setup(run_frame): - """ - Attempt to detect whether run() was called from setup() or by another - command. If called by setup(), the parent caller will be the - 'run_command' method in 'distutils.dist', and *its* caller will be - the 'run_commands' method. If called any other way, the - immediate caller *might* be 'run_command', but it won't have been - called by 'run_commands'. Return True in that case or if a call stack - is unavailable. Return False otherwise. - """ - if run_frame is None: - msg = "Call stack not available. bdist_* commands may fail." - warnings.warn(msg) - if platform.python_implementation() == 'IronPython': - msg = "For best results, pass -X:Frames to enable call stack." - warnings.warn(msg) - return True - - frames = inspect.getouterframes(run_frame) - for frame in frames[2:4]: - caller, = frame[:1] - info = inspect.getframeinfo(caller) - caller_module = caller.f_globals.get('__name__', '') - - if caller_module == "setuptools.dist" and info.function == "run_command": - # Starting from v61.0.0 setuptools overwrites dist.run_command - continue - - return ( - caller_module == 'distutils.dist' - and info.function == 'run_commands' - ) - - def do_egg_install(self): - - easy_install = self.distribution.get_command_class('easy_install') - - cmd = easy_install( - self.distribution, args="x", root=self.root, record=self.record, - ) - cmd.ensure_finalized() # finalize before bdist_egg munges install cmd - cmd.always_copy_from = '.' 
# make sure local-dir eggs get installed - - # pick up setup-dir .egg files only: no .egg-info - cmd.package_index.scan(glob.glob('*.egg')) - - self.run_command('bdist_egg') - args = [self.distribution.get_command_obj('bdist_egg').egg_output] - - if setuptools.bootstrap_install_from: - # Bootstrap self-installation of setuptools - args.insert(0, setuptools.bootstrap_install_from) - - cmd.args = args - cmd.run(show_deprecation=False) - setuptools.bootstrap_install_from = None - - -# XXX Python 3.1 doesn't see _nc if this is inside the class -install.sub_commands = ( - [cmd for cmd in orig.install.sub_commands if cmd[0] not in install._nc] + - install.new_commands -) diff --git a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/fused_bias_act.cpp b/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/models/stylegan2/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_inference.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_inference.py deleted file mode 100644 index deb886c0417285ed1d5ad85eb941fa1ac757cdab..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_inference.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -import numpy as np -from itertools import count -import torch -from caffe2.proto import caffe2_pb2 -from caffe2.python import core - -from .caffe2_modeling import META_ARCH_CAFFE2_EXPORT_TYPE_MAP, convert_batched_inputs_to_c2_format -from .shared import ScopedWS, get_pb_arg_vali, get_pb_arg_vals, infer_device_type - -logger = logging.getLogger(__name__) - - -# ===== ref: mobile-vision predictor's 'Caffe2Wrapper' class ====== -class ProtobufModel(torch.nn.Module): - """ - Wrapper of a caffe2's protobuf model. - It works just like nn.Module, but running caffe2 under the hood. - Input/Output are tuple[tensor] that match the caffe2 net's external_input/output. 
- """ - - _ids = count(0) - - def __init__(self, predict_net, init_net): - logger.info(f"Initializing ProtobufModel for: {predict_net.name} ...") - super().__init__() - assert isinstance(predict_net, caffe2_pb2.NetDef) - assert isinstance(init_net, caffe2_pb2.NetDef) - # create unique temporary workspace for each instance - self.ws_name = "__tmp_ProtobufModel_{}__".format(next(self._ids)) - self.net = core.Net(predict_net) - - logger.info("Running init_net once to fill the parameters ...") - with ScopedWS(self.ws_name, is_reset=True, is_cleanup=False) as ws: - ws.RunNetOnce(init_net) - uninitialized_external_input = [] - for blob in self.net.Proto().external_input: - if blob not in ws.Blobs(): - uninitialized_external_input.append(blob) - ws.CreateBlob(blob) - ws.CreateNet(self.net) - - self._error_msgs = set() - self._input_blobs = uninitialized_external_input - - def _infer_output_devices(self, inputs): - """ - Returns: - list[str]: list of device for each external output - """ - - def _get_device_type(torch_tensor): - assert torch_tensor.device.type in ["cpu", "cuda"] - assert torch_tensor.device.index == 0 - return torch_tensor.device.type - - predict_net = self.net.Proto() - input_device_types = { - (name, 0): _get_device_type(tensor) for name, tensor in zip(self._input_blobs, inputs) - } - device_type_map = infer_device_type( - predict_net, known_status=input_device_types, device_name_style="pytorch" - ) - ssa, versions = core.get_ssa(predict_net) - versioned_outputs = [(name, versions[name]) for name in predict_net.external_output] - output_devices = [device_type_map[outp] for outp in versioned_outputs] - return output_devices - - def forward(self, inputs): - """ - Args: - inputs (tuple[torch.Tensor]) - - Returns: - tuple[torch.Tensor] - """ - assert len(inputs) == len(self._input_blobs), ( - f"Length of inputs ({len(inputs)}) " - f"doesn't match the required input blobs: {self._input_blobs}" - ) - - with ScopedWS(self.ws_name, is_reset=False, is_cleanup=False) as ws: - for b, tensor in zip(self._input_blobs, inputs): - ws.FeedBlob(b, tensor) - - try: - ws.RunNet(self.net.Proto().name) - except RuntimeError as e: - if not str(e) in self._error_msgs: - self._error_msgs.add(str(e)) - logger.warning("Encountered new RuntimeError: \n{}".format(str(e))) - logger.warning("Catch the error and use partial results.") - - c2_outputs = [ws.FetchBlob(b) for b in self.net.Proto().external_output] - # Remove outputs of current run, this is necessary in order to - # prevent fetching the result from previous run if the model fails - # in the middle. - for b in self.net.Proto().external_output: - # Needs to create uninitialized blob to make the net runable. - # This is "equivalent" to: ws.RemoveBlob(b) then ws.CreateBlob(b), - # but there'no such API. 
- ws.FeedBlob(b, f"{b}, a C++ native class of type nullptr (uninitialized).") - - # Cast output to torch.Tensor on the desired device - output_devices = ( - self._infer_output_devices(inputs) - if any(t.device.type != "cpu" for t in inputs) - else ["cpu" for _ in self.net.Proto().external_output] - ) - - outputs = [] - for name, c2_output, device in zip( - self.net.Proto().external_output, c2_outputs, output_devices - ): - if not isinstance(c2_output, np.ndarray): - raise RuntimeError( - "Invalid output for blob {}, received: {}".format(name, c2_output) - ) - outputs.append(torch.tensor(c2_output).to(device=device)) - return tuple(outputs) - - -class ProtobufDetectionModel(torch.nn.Module): - """ - A class works just like a pytorch meta arch in terms of inference, but running - caffe2 model under the hood. - """ - - def __init__(self, predict_net, init_net, *, convert_outputs=None): - """ - Args: - predict_net, init_net (core.Net): caffe2 nets - convert_outptus (callable): a function that converts caffe2 - outputs to the same format of the original pytorch model. - By default, use the one defined in the caffe2 meta_arch. - """ - super().__init__() - self.protobuf_model = ProtobufModel(predict_net, init_net) - self.size_divisibility = get_pb_arg_vali(predict_net, "size_divisibility", 0) - self.device = get_pb_arg_vals(predict_net, "device", b"cpu").decode("ascii") - - if convert_outputs is None: - meta_arch = get_pb_arg_vals(predict_net, "meta_architecture", b"GeneralizedRCNN") - meta_arch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[meta_arch.decode("ascii")] - self._convert_outputs = meta_arch.get_outputs_converter(predict_net, init_net) - else: - self._convert_outputs = convert_outputs - - def _convert_inputs(self, batched_inputs): - # currently all models convert inputs in the same way - return convert_batched_inputs_to_c2_format( - batched_inputs, self.size_divisibility, self.device - ) - - def forward(self, batched_inputs): - c2_inputs = self._convert_inputs(batched_inputs) - c2_results = self.protobuf_model(c2_inputs) - c2_results = dict(zip(self.protobuf_model.net.Proto().external_output, c2_results)) - return self._convert_outputs(batched_inputs, c2_inputs, c2_results) diff --git a/spaces/Benson/text-generation/Examples/Calle Carx 1.74 5 Mod Apk.md b/spaces/Benson/text-generation/Examples/Calle Carx 1.74 5 Mod Apk.md deleted file mode 100644 index 629540dd503989803e42fcf0cc6325a9f3c02d19..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Calle Carx 1.74 5 Mod Apk.md +++ /dev/null @@ -1,64 +0,0 @@ -
CarX Street 1.74.5 Mod APK: A free and fun racing game for Android

If you are a fan of car racing games, you may want to take a look at CarX Street, a free racing game from CarX Technology for Android devices. CarX Street is a realistic, immersive racing game that lets you customize your cars, choose your tracks, and compete with other players online or offline. In this article, we will explain what CarX Street is, how to download and install CarX Street 1.74.5 Mod APK, and what the benefits of using this modified version of the game are.

What is CarX Street?

CarX Street is a racing game that simulates street-racing culture: you can race different types of cars, from classics to modern sports cars, on a variety of urban tracks, from highways to industrial areas. You can also customize your cars with different parts, colors, stickers, and decals so they look unique and match your style, and upgrade them with different engines, transmissions, suspensions, brakes, tires, and more to improve their performance and handling.

carx street 1.74 5 mod apk

Download Zip ✵✵✵ https://bltlly.com/2v6L8s

Features of CarX Street

CarX Street has many features that make it a fun and exciting racing game for Android users. Here are some of them:

Realistic physics and graphics

CarX Street uses the CarX Physics Engine, a proprietary technology that simulates realistic car behavior on different surfaces and in different conditions. You can feel the difference between driving on asphalt, gravel, sand, snow, or ice, as well as the effects of gravity, inertia, friction, and aerodynamics. You can also see realistic damage on your cars, such as scratches, dents, broken windows, or smoke.

Customizable cars and tracks

CarX Street lets you customize your cars with more than 1,000 parts and accessories, such as bumpers, spoilers, hoods, wheels, exhausts, lights, and mirrors. You can also change the color of your cars with more than 100 paint options, or add stickers and decals to make them more personal. With the Track Editor feature you can even build your own custom tracks, choosing the location, length, width, curvature, elevation, surface type, and obstacles.

Online and offline modes

CarX Street lets you play online or offline, depending on your preference. Online, you can race players from around the world in modes such as Quick Race, Time Attack, Drift Race, or Tournament, chat with other players, join clubs, or create your own club. Offline, you can race AI opponents in modes such as Career, Free Ride, or Test Drive, or play with friends on the same device in split-screen mode.

How to download and install CarX Street 1.74.5 Mod APK

If you want to download and install CarX Street 1.74.5 Mod APK, you need to follow these steps:

Requirements and permissions

Before downloading and installing CarX Street 1.74.5 Mod APK, make sure your device meets the following requirements and permissions (a small verification sketch follows this list):

• Your device must run Android 6.0 or higher.
• Your device must have at least 2 GB of RAM and 1 GB of free storage space.
• You need to enable installation of apps from unknown sources in your device settings.
• You need to allow the app to access your device's storage, location, camera, microphone, and network.
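For readers who want to check these requirements from a computer first, here is a minimal, hypothetical Python sketch that queries a USB-connected device through adb. It assumes the Android platform tools (the adb binary) are installed and USB debugging is enabled; the commands, property names, and thresholds shown are illustrative assumptions, not something specified by this article.

# Hypothetical requirements check for a USB-connected Android device via adb.
# Assumes the Android platform tools are installed and USB debugging is on.
import subprocess


def adb_shell(command: str) -> str:
    # Run a shell command on the attached device and return its trimmed output.
    result = subprocess.run(["adb", "shell", command],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()


def check_device() -> None:
    release = adb_shell("getprop ro.build.version.release")
    sdk_level = int(adb_shell("getprop ro.build.version.sdk"))
    # MemTotal in /proc/meminfo is reported in kB.
    mem_kb = int(adb_shell("grep MemTotal /proc/meminfo").split()[1])
    # The last df line for /sdcard lists available space in 1 kB blocks.
    free_kb = int(adb_shell("df /sdcard").splitlines()[-1].split()[3])

    print(f"Android {release} (API {sdk_level}), "
          f"{mem_kb / 1024 ** 2:.1f} GB RAM, {free_kb / 1024 ** 2:.1f} GB free")
    print("Android 6.0 or higher :", sdk_level >= 23)  # Android 6.0 == API 23
    print("At least 2 GB of RAM  :", mem_kb >= 2 * 1024 ** 2)
    print("At least 1 GB free    :", free_kb >= 1024 ** 2)


if __name__ == "__main__":
    check_device()

Note that the unknown-sources setting and the per-app permissions still have to be toggled by hand in the device settings; they cannot be verified this way.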

Steps to download and install

Once you have checked the requirements and permissions, you can follow these steps to download and install CarX Street 1.74.5 Mod APK:

1. Locate the downloaded file in your device's file manager and tap it to start the installation process.
2. Follow the on-screen instructions and wait for the installation to finish.
3. Launch the app and enjoy the game.

What are the benefits of CarX Street 1.74.5 Mod APK?

CarX Street 1.74.5 Mod APK is a modified version of the original game that offers some additional benefits for players. Here are some of them:

Unlimited money and gold

With CarX Street 1.74.5 Mod APK, you get unlimited money and gold in the game, which you can use to buy and upgrade your cars, parts, and tracks. You can also unlock all the cars and tracks in the game without spending real money.

Unlocked cars and tracks

With CarX Street 1.74.5 Mod APK, you can access all the cars and tracks in the game without having to complete any missions or achievements. You can choose from more than 50 cars and more than 20 tracks, each with its own characteristics and challenges.

No ads and no root required

With CarX Street 1.74.5 Mod APK, you can enjoy the game without annoying ads or pop-ups that might interrupt your gameplay or consume your data. You also do not need to root your device to use this modified version of the game, which could otherwise compromise your device's security or warranty.

Conclusion

CarX Street is a free and fun racing game for Android devices that offers realistic physics and graphics, customizable cars and tracks, online and offline modes, and more. If you want to enhance your gaming experience with unlimited money and gold, unlocked cars and tracks, no ads, and no root required, you can download and install CarX Street 1.74.5 Mod APK by following the steps provided in this article.

Frequently asked questions

Here are some frequently asked questions about CarX Street 1.74.5 Mod APK:

1. Is CarX Street 1.74.5 Mod APK safe to use?

Yes, CarX Street 1.74.5 Mod APK is safe to use as long as you download it from a trusted source, such as APKPure or APKDone. However, you should always be careful when installing apps from unknown sources, as they could contain malware or viruses that might harm your device or data.

2. Will I be banned for using CarX Street 1.74.5 Mod APK?

No, you will not be banned for using CarX Street 1.74.5 Mod APK, since this modified version of the game does not interfere with the game's servers or online features. However, you should always respect the game's rules and policies, and avoid using cheats or hacks that could give you an unfair advantage over other players.

3. Can I update CarX Street 1.74.5 Mod APK?

Yes, you can update CarX Street 1.74.5 Mod APK whenever a new version becomes available from the same source you downloaded it from. However, you should always back up your game data before updating, as some updates may overwrite or delete the modded features or your progress.

4. Can I play CarX Street 1.74.5 Mod APK on a PC?

Yes, you can play CarX Street 1.74.5 Mod APK on a PC by using an Android emulator such as BlueStacks or NoxPlayer. An Android emulator is software that lets you run Android apps and games on your PC: you just install the emulator on your PC and then download and install CarX Street 1.74.5 Mod APK inside it.

5. What are some alternatives to CarX Street 1.74.5 Mod APK?

If you are looking for alternatives to CarX Street 1.74.5 Mod APK, you may want to try these other racing games for Android:

• Asphalt 9: Legends: A fast-paced, action-packed racing game featuring more than 60 cars and more than 80 tracks from around the world.
• Need for Speed: No Limits: A thrilling, adrenaline-charged racing game featuring more than 100 cars and more than 1,000 races across different modes and events.

64aa2da5cf
      \ No newline at end of file diff --git a/spaces/Bravefe/Artist_Classification/README.md b/spaces/Bravefe/Artist_Classification/README.md deleted file mode 100644 index 8be6b0f5541d19188ab94e23b5ba235f78b80cb9..0000000000000000000000000000000000000000 --- a/spaces/Bravefe/Artist_Classification/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Artist Classification -emoji: 🎨 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/CVPR/LIVE/sample_boundary.h b/spaces/CVPR/LIVE/sample_boundary.h deleted file mode 100644 index 28af12959f578c9f72872c85b59b957729c5ba68..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/sample_boundary.h +++ /dev/null @@ -1,454 +0,0 @@ -#pragma once - -#include "diffvg.h" -#include "shape.h" -#include "scene.h" -#include "vector.h" -#include "cdf.h" - -struct PathBoundaryData { - int base_point_id; - int point_id; - float t; -}; - -struct BoundaryData { - PathBoundaryData path; - bool is_stroke; -}; - -DEVICE -Vector2f sample_boundary(const Circle &circle, - float t, - Vector2f &normal, - float &pdf, - BoundaryData &, - float stroke_perturb_direction, - float stroke_radius) { - // Parametric form of a circle (t in [0, 1)): - // x = center.x + r * cos(2pi * t) - // y = center.y + r * sin(2pi * t) - auto offset = Vector2f{ - circle.radius * cos(2 * float(M_PI) * t), - circle.radius * sin(2 * float(M_PI) * t) - }; - normal = normalize(offset); - pdf /= (2 * float(M_PI) * circle.radius); - auto ret = circle.center + offset; - if (stroke_perturb_direction != 0.f) { - ret += stroke_perturb_direction * stroke_radius * normal; - if (stroke_perturb_direction < 0) { - // normal should point towards the perturb direction - normal = -normal; - } - } - return ret; -} - -DEVICE -Vector2f sample_boundary(const Ellipse &ellipse, - float t, - Vector2f &normal, - float &pdf, - BoundaryData &, - float stroke_perturb_direction, - float stroke_radius) { - // Parametric form of a ellipse (t in [0, 1)): - // x = center.x + r.x * cos(2pi * t) - // y = center.y + r.y * sin(2pi * t) - const auto &r = ellipse.radius; - auto offset = Vector2f{ - r.x * cos(2 * float(M_PI) * t), - r.y * sin(2 * float(M_PI) * t) - }; - auto dxdt = -r.x * sin(2 * float(M_PI) * t) * 2 * float(M_PI); - auto dydt = r.y * cos(2 * float(M_PI) * t) * 2 * float(M_PI); - // tangent is normalize(dxdt, dydt) - normal = normalize(Vector2f{dydt, -dxdt}); - pdf /= sqrt(square(dxdt) + square(dydt)); - auto ret = ellipse.center + offset; - if (stroke_perturb_direction != 0.f) { - ret += stroke_perturb_direction * stroke_radius * normal; - if (stroke_perturb_direction < 0) { - // normal should point towards the perturb direction - normal = -normal; - } - } - return ret; -} - -DEVICE -Vector2f sample_boundary(const Path &path, - const float *path_length_cdf, - const float *path_length_pmf, - const int *point_id_map, - float path_length, - float t, - Vector2f &normal, - float &pdf, - BoundaryData &data, - float stroke_perturb_direction, - float stroke_radius) { - if (stroke_perturb_direction != 0.f && !path.is_closed) { - // We need to samples the "caps" of the path - // length of a cap is pi * abs(stroke_perturb_direction) - // there are two caps - auto cap_length = 0.f; - if (path.thickness != nullptr) { - auto r0 = path.thickness[0]; - auto r1 = path.thickness[path.num_points - 1]; - cap_length = 
float(M_PI) * (r0 + r1); - } else { - cap_length = 2 * float(M_PI) * stroke_radius; - } - auto cap_prob = cap_length / (cap_length + path_length); - if (t < cap_prob) { - t = t / cap_prob; - pdf *= cap_prob; - auto r0 = stroke_radius; - auto r1 = stroke_radius; - if (path.thickness != nullptr) { - r0 = path.thickness[0]; - r1 = path.thickness[path.num_points - 1]; - } - // HACK: in theory we want to compute the tangent and - // sample the hemi-circle, but here we just sample the - // full circle since it's less typing - if (stroke_perturb_direction < 0) { - // Sample the cap at the beginning - auto p0 = Vector2f{path.points[0], path.points[1]}; - auto offset = Vector2f{ - r0 * cos(2 * float(M_PI) * t), - r0 * sin(2 * float(M_PI) * t) - }; - normal = normalize(offset); - pdf /= (2 * float(M_PI) * r0); - data.path.base_point_id = 0; - data.path.point_id = 0; - data.path.t = 0; - return p0 + offset; - } else { - // Sample the cap at the end - auto p0 = Vector2f{path.points[2 * (path.num_points - 1)], - path.points[2 * (path.num_points - 1) + 1]}; - auto offset = Vector2f{ - r1 * cos(2 * float(M_PI) * t), - r1 * sin(2 * float(M_PI) * t) - }; - normal = normalize(offset); - pdf /= (2 * float(M_PI) * r1); - data.path.base_point_id = path.num_base_points - 1; - data.path.point_id = path.num_points - 2 - - path.num_control_points[data.path.base_point_id]; - data.path.t = 1; - return p0 + offset; - } - } else { - t = (t - cap_prob) / (1 - cap_prob); - pdf *= (1 - cap_prob); - } - } - // Binary search on path_length_cdf - auto sample_id = sample(path_length_cdf, - path.num_base_points, - t, - &t); - assert(sample_id >= 0 && sample_id < path.num_base_points); - auto point_id = point_id_map[sample_id]; - if (path.num_control_points[sample_id] == 0) { - // Straight line - auto i0 = point_id; - auto i1 = (i0 + 1) % path.num_points; - assert(i0 < path.num_points); - auto p0 = Vector2f{path.points[2 * i0], path.points[2 * i0 + 1]}; - auto p1 = Vector2f{path.points[2 * i1], path.points[2 * i1 + 1]}; - data.path.base_point_id = sample_id; - data.path.point_id = point_id; - data.path.t = t; - if (t < -1e-3f || t > 1+1e-3f) { - // return invalid sample - pdf = 0; - return Vector2f{0, 0}; - } - auto tangent = (p1 - p0); - auto tan_len = length(tangent); - if (tan_len == 0) { - // return invalid sample - pdf = 0; - return Vector2f{0, 0}; - } - normal = Vector2f{-tangent.y, tangent.x} / tan_len; - // length of tangent is the Jacobian of the sampling transformation - pdf *= path_length_pmf[sample_id] / tan_len; - auto ret = p0 + t * (p1 - p0); - if (stroke_perturb_direction != 0.f) { - auto r0 = stroke_radius; - auto r1 = stroke_radius; - if (path.thickness != nullptr) { - r0 = path.thickness[i0]; - r1 = path.thickness[i1]; - } - auto r = r0 + t * (r1 - r0); - ret += stroke_perturb_direction * r * normal; - if (stroke_perturb_direction < 0) { - // normal should point towards the perturb direction - normal = -normal; - } - } - return ret; - } else if (path.num_control_points[sample_id] == 1) { - // Quadratic Bezier curve - auto i0 = point_id; - auto i1 = i0 + 1; - auto i2 = (i0 + 2) % path.num_points; - auto p0 = Vector2f{path.points[2 * i0], path.points[2 * i0 + 1]}; - auto p1 = Vector2f{path.points[2 * i1], path.points[2 * i1 + 1]}; - auto p2 = Vector2f{path.points[2 * i2], path.points[2 * i2 + 1]}; - auto eval = [&](float t) -> Vector2f { - auto tt = 1 - t; - return (tt*tt)*p0 + (2*tt*t)*p1 + (t*t)*p2; - }; - data.path.base_point_id = sample_id; - data.path.point_id = point_id; - data.path.t = t; - if (t < 
-1e-3f || t > 1+1e-3f) { - // return invalid sample - pdf = 0; - return Vector2f{0, 0}; - } - auto tangent = 2 * (1 - t) * (p1 - p0) + 2 * t * (p2 - p1); - auto tan_len = length(tangent); - if (tan_len == 0) { - // return invalid sample - pdf = 0; - return Vector2f{0, 0}; - } - normal = Vector2f{-tangent.y, tangent.x} / tan_len; - // length of tangent is the Jacobian of the sampling transformation - pdf *= path_length_pmf[sample_id] / tan_len; - auto ret = eval(t); - if (stroke_perturb_direction != 0.f) { - auto r0 = stroke_radius; - auto r1 = stroke_radius; - auto r2 = stroke_radius; - if (path.thickness != nullptr) { - r0 = path.thickness[i0]; - r1 = path.thickness[i1]; - r2 = path.thickness[i2]; - } - auto tt = 1 - t; - auto r = (tt*tt)*r0 + (2*tt*t)*r1 + (t*t)*r2; - ret += stroke_perturb_direction * r * normal; - if (stroke_perturb_direction < 0) { - // normal should point towards the perturb direction - normal = -normal; - } - } - return ret; - } else if (path.num_control_points[sample_id] == 2) { - // Cubic Bezier curve - auto i0 = point_id; - auto i1 = point_id + 1; - auto i2 = point_id + 2; - auto i3 = (point_id + 3) % path.num_points; - assert(i0 >= 0 && i2 < path.num_points); - auto p0 = Vector2f{path.points[2 * i0], path.points[2 * i0 + 1]}; - auto p1 = Vector2f{path.points[2 * i1], path.points[2 * i1 + 1]}; - auto p2 = Vector2f{path.points[2 * i2], path.points[2 * i2 + 1]}; - auto p3 = Vector2f{path.points[2 * i3], path.points[2 * i3 + 1]}; - auto eval = [&](float t) -> Vector2f { - auto tt = 1 - t; - return (tt*tt*tt)*p0 + (3*tt*tt*t)*p1 + (3*tt*t*t)*p2 + (t*t*t)*p3; - }; - data.path.base_point_id = sample_id; - data.path.point_id = point_id; - data.path.t = t; - if (t < -1e-3f || t > 1+1e-3f) { - // return invalid sample - pdf = 0; - return Vector2f{0, 0}; - } - auto tangent = 3 * square(1 - t) * (p1 - p0) + 6 * (1 - t) * t * (p2 - p1) + 3 * t * t * (p3 - p2); - auto tan_len = length(tangent); - if (tan_len == 0) { - // return invalid sample - pdf = 0; - return Vector2f{0, 0}; - } - normal = Vector2f{-tangent.y, tangent.x} / tan_len; - // length of tangent is the Jacobian of the sampling transformation - pdf *= path_length_pmf[sample_id] / tan_len; - auto ret = eval(t); - if (stroke_perturb_direction != 0.f) { - auto r0 = stroke_radius; - auto r1 = stroke_radius; - auto r2 = stroke_radius; - auto r3 = stroke_radius; - if (path.thickness != nullptr) { - r0 = path.thickness[i0]; - r1 = path.thickness[i1]; - r2 = path.thickness[i2]; - r3 = path.thickness[i3]; - } - auto tt = 1 - t; - auto r = (tt*tt*tt)*r0 + (3*tt*tt*t)*r1 + (3*tt*t*t)*r2 + (t*t*t)*r3; - ret += stroke_perturb_direction * r * normal; - if (stroke_perturb_direction < 0) { - // normal should point towards the perturb direction - normal = -normal; - } - } - return ret; - } else { - assert(false); - } - assert(false); - return Vector2f{0, 0}; -} - -DEVICE -Vector2f sample_boundary(const Rect &rect, - float t, Vector2f &normal, - float &pdf, - BoundaryData &, - float stroke_perturb_direction, - float stroke_radius) { - // Roll a dice to decide whether to sample width or height - auto w = rect.p_max.x - rect.p_min.x; - auto h = rect.p_max.y - rect.p_min.y; - pdf /= (2 * (w +h)); - if (t <= w / (w + h)) { - // Sample width - // reuse t for the next dice - t *= (w + h) / w; - // Roll a dice to decide whether to sample upper width or lower width - if (t < 0.5f) { - // Sample upper width - normal = Vector2f{0, -1}; - auto ret = rect.p_min + 2 * t * Vector2f{rect.p_max.x - rect.p_min.x, 0.f}; - if (stroke_perturb_direction 
!= 0.f) { - ret += stroke_perturb_direction * stroke_radius * normal; - if (stroke_perturb_direction < 0) { - // normal should point towards the perturb direction - normal = -normal; - } - } - return ret; - } else { - // Sample lower width - normal = Vector2f{0, 1}; - auto ret = Vector2f{rect.p_min.x, rect.p_max.y} + - 2 * (t - 0.5f) * Vector2f{rect.p_max.x - rect.p_min.x, 0.f}; - if (stroke_perturb_direction != 0.f) { - ret += stroke_perturb_direction * stroke_radius * normal; - if (stroke_perturb_direction < 0) { - // normal should point towards the perturb direction - normal = -normal; - } - } - return ret; - } - } else { - // Sample height - // reuse t for the next dice - assert(h > 0); - t = (t - w / (w + h)) * (w + h) / h; - // Roll a dice to decide whether to sample left height or right height - if (t < 0.5f) { - // Sample left height - normal = Vector2f{-1, 0}; - auto ret = rect.p_min + 2 * t * Vector2f{0.f, rect.p_max.y - rect.p_min.y}; - if (stroke_perturb_direction != 0.f) { - ret += stroke_perturb_direction * stroke_radius * normal; - if (stroke_perturb_direction < 0) { - // normal should point towards the perturb direction - normal = -normal; - } - } - return ret; - } else { - // Sample right height - normal = Vector2f{1, 0}; - auto ret = Vector2f{rect.p_max.x, rect.p_min.y} + - 2 * (t - 0.5f) * Vector2f{0.f, rect.p_max.y - rect.p_min.y}; - if (stroke_perturb_direction != 0.f) { - ret += stroke_perturb_direction * stroke_radius * normal; - if (stroke_perturb_direction < 0) { - // normal should point towards the perturb direction - normal = -normal; - } - } - return ret; - } - } -} - -DEVICE -Vector2f sample_boundary(const SceneData &scene, - int shape_group_id, - int shape_id, - float t, - Vector2f &normal, - float &pdf, - BoundaryData &data) { - const ShapeGroup &shape_group = scene.shape_groups[shape_group_id]; - const Shape &shape = scene.shapes[shape_id]; - pdf = 1; - // Choose which one to sample: stroke discontinuities or fill discontinuities. 
- // TODO: we don't need to sample fill discontinuities when stroke alpha is 1 and both - // fill and stroke color exists - auto stroke_perturb = false; - if (shape_group.fill_color != nullptr && shape_group.stroke_color != nullptr) { - if (t < 0.5f) { - stroke_perturb = false; - t = 2 * t; - pdf = 0.5f; - } else { - stroke_perturb = true; - t = 2 * (t - 0.5f); - pdf = 0.5f; - } - } else if (shape_group.stroke_color != nullptr) { - stroke_perturb = true; - } - data.is_stroke = stroke_perturb; - auto stroke_perturb_direction = 0.f; - if (stroke_perturb) { - if (t < 0.5f) { - stroke_perturb_direction = -1.f; - t = 2 * t; - pdf *= 0.5f; - } else { - stroke_perturb_direction = 1.f; - t = 2 * (t - 0.5f); - pdf *= 0.5f; - } - } - switch (shape.type) { - case ShapeType::Circle: - return sample_boundary( - *(const Circle *)shape.ptr, t, normal, pdf, data, stroke_perturb_direction, shape.stroke_width); - case ShapeType::Ellipse: - return sample_boundary( - *(const Ellipse *)shape.ptr, t, normal, pdf, data, stroke_perturb_direction, shape.stroke_width); - case ShapeType::Path: - return sample_boundary( - *(const Path *)shape.ptr, - scene.path_length_cdf[shape_id], - scene.path_length_pmf[shape_id], - scene.path_point_id_map[shape_id], - scene.shapes_length[shape_id], - t, - normal, - pdf, - data, - stroke_perturb_direction, - shape.stroke_width); - case ShapeType::Rect: - return sample_boundary( - *(const Rect *)shape.ptr, t, normal, pdf, data, stroke_perturb_direction, shape.stroke_width); - } - assert(false); - return Vector2f{}; -} - diff --git a/spaces/CVPR/WALT/mmdet/__init__.py b/spaces/CVPR/WALT/mmdet/__init__.py deleted file mode 100644 index ce2930f62a0091e06b37575b96db2ae51ca7908e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -import mmcv - -from .version import __version__, short_version - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -mmcv_minimum_version = '1.2.4' -mmcv_maximum_version = '1.4.0' -mmcv_version = digit_version(mmcv.__version__) - - -assert (mmcv_version >= digit_version(mmcv_minimum_version) - and mmcv_version <= digit_version(mmcv_maximum_version)), \ - f'MMCV=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.' 
- -__all__ = ['__version__', 'short_version'] diff --git a/spaces/CVPR/lama-example/saicinpainting/training/modules/ffc.py b/spaces/CVPR/lama-example/saicinpainting/training/modules/ffc.py deleted file mode 100644 index 0e7b84683fccb4bccac97b6371994fa6bb44dbe4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/training/modules/ffc.py +++ /dev/null @@ -1,485 +0,0 @@ -# Fast Fourier Convolution NeurIPS 2020 -# original implementation https://github.com/pkumivision/FFC/blob/main/model_zoo/ffc.py -# paper https://proceedings.neurips.cc/paper/2020/file/2fd5d41ec6cfab47e32164d5624269b1-Paper.pdf - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from saicinpainting.training.modules.base import get_activation, BaseDiscriminator -from saicinpainting.training.modules.spatial_transform import LearnableSpatialTransformWrapper -from saicinpainting.training.modules.squeeze_excitation import SELayer -from saicinpainting.utils import get_shape - - -class FFCSE_block(nn.Module): - - def __init__(self, channels, ratio_g): - super(FFCSE_block, self).__init__() - in_cg = int(channels * ratio_g) - in_cl = channels - in_cg - r = 16 - - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.conv1 = nn.Conv2d(channels, channels // r, - kernel_size=1, bias=True) - self.relu1 = nn.ReLU(inplace=True) - self.conv_a2l = None if in_cl == 0 else nn.Conv2d( - channels // r, in_cl, kernel_size=1, bias=True) - self.conv_a2g = None if in_cg == 0 else nn.Conv2d( - channels // r, in_cg, kernel_size=1, bias=True) - self.sigmoid = nn.Sigmoid() - - def forward(self, x): - x = x if type(x) is tuple else (x, 0) - id_l, id_g = x - - x = id_l if type(id_g) is int else torch.cat([id_l, id_g], dim=1) - x = self.avgpool(x) - x = self.relu1(self.conv1(x)) - - x_l = 0 if self.conv_a2l is None else id_l * \ - self.sigmoid(self.conv_a2l(x)) - x_g = 0 if self.conv_a2g is None else id_g * \ - self.sigmoid(self.conv_a2g(x)) - return x_l, x_g - - -class FourierUnit(nn.Module): - - def __init__(self, in_channels, out_channels, groups=1, spatial_scale_factor=None, spatial_scale_mode='bilinear', - spectral_pos_encoding=False, use_se=False, se_kwargs=None, ffc3d=False, fft_norm='ortho'): - # bn_layer not used - super(FourierUnit, self).__init__() - self.groups = groups - - self.conv_layer = torch.nn.Conv2d(in_channels=in_channels * 2 + (2 if spectral_pos_encoding else 0), - out_channels=out_channels * 2, - kernel_size=1, stride=1, padding=0, groups=self.groups, bias=False) - self.bn = torch.nn.BatchNorm2d(out_channels * 2) - self.relu = torch.nn.ReLU(inplace=True) - - # squeeze and excitation block - self.use_se = use_se - if use_se: - if se_kwargs is None: - se_kwargs = {} - self.se = SELayer(self.conv_layer.in_channels, **se_kwargs) - - self.spatial_scale_factor = spatial_scale_factor - self.spatial_scale_mode = spatial_scale_mode - self.spectral_pos_encoding = spectral_pos_encoding - self.ffc3d = ffc3d - self.fft_norm = fft_norm - - def forward(self, x): - batch = x.shape[0] - - if self.spatial_scale_factor is not None: - orig_size = x.shape[-2:] - x = F.interpolate(x, scale_factor=self.spatial_scale_factor, mode=self.spatial_scale_mode, align_corners=False) - - r_size = x.size() - # (batch, c, h, w/2+1, 2) - fft_dim = (-3, -2, -1) if self.ffc3d else (-2, -1) - ffted = torch.fft.rfftn(x, dim=fft_dim, norm=self.fft_norm) - ffted = torch.stack((ffted.real, ffted.imag), dim=-1) - ffted = ffted.permute(0, 1, 4, 2, 3).contiguous() # (batch, c, 2, h, w/2+1) - ffted = 
ffted.view((batch, -1,) + ffted.size()[3:]) - - if self.spectral_pos_encoding: - height, width = ffted.shape[-2:] - coords_vert = torch.linspace(0, 1, height)[None, None, :, None].expand(batch, 1, height, width).to(ffted) - coords_hor = torch.linspace(0, 1, width)[None, None, None, :].expand(batch, 1, height, width).to(ffted) - ffted = torch.cat((coords_vert, coords_hor, ffted), dim=1) - - if self.use_se: - ffted = self.se(ffted) - - ffted = self.conv_layer(ffted) # (batch, c*2, h, w/2+1) - ffted = self.relu(self.bn(ffted)) - - ffted = ffted.view((batch, -1, 2,) + ffted.size()[2:]).permute( - 0, 1, 3, 4, 2).contiguous() # (batch,c, t, h, w/2+1, 2) - ffted = torch.complex(ffted[..., 0], ffted[..., 1]) - - ifft_shape_slice = x.shape[-3:] if self.ffc3d else x.shape[-2:] - output = torch.fft.irfftn(ffted, s=ifft_shape_slice, dim=fft_dim, norm=self.fft_norm) - - if self.spatial_scale_factor is not None: - output = F.interpolate(output, size=orig_size, mode=self.spatial_scale_mode, align_corners=False) - - return output - - -class SeparableFourierUnit(nn.Module): - - def __init__(self, in_channels, out_channels, groups=1, kernel_size=3): - # bn_layer not used - super(SeparableFourierUnit, self).__init__() - self.groups = groups - row_out_channels = out_channels // 2 - col_out_channels = out_channels - row_out_channels - self.row_conv = torch.nn.Conv2d(in_channels=in_channels * 2, - out_channels=row_out_channels * 2, - kernel_size=(kernel_size, 1), # kernel size is always like this, but the data will be transposed - stride=1, padding=(kernel_size // 2, 0), - padding_mode='reflect', - groups=self.groups, bias=False) - self.col_conv = torch.nn.Conv2d(in_channels=in_channels * 2, - out_channels=col_out_channels * 2, - kernel_size=(kernel_size, 1), # kernel size is always like this, but the data will be transposed - stride=1, padding=(kernel_size // 2, 0), - padding_mode='reflect', - groups=self.groups, bias=False) - self.row_bn = torch.nn.BatchNorm2d(row_out_channels * 2) - self.col_bn = torch.nn.BatchNorm2d(col_out_channels * 2) - self.relu = torch.nn.ReLU(inplace=True) - - def process_branch(self, x, conv, bn): - batch = x.shape[0] - - r_size = x.size() - # (batch, c, h, w/2+1, 2) - ffted = torch.fft.rfft(x, norm="ortho") - ffted = torch.stack((ffted.real, ffted.imag), dim=-1) - ffted = ffted.permute(0, 1, 4, 2, 3).contiguous() # (batch, c, 2, h, w/2+1) - ffted = ffted.view((batch, -1,) + ffted.size()[3:]) - - ffted = self.relu(bn(conv(ffted))) - - ffted = ffted.view((batch, -1, 2,) + ffted.size()[2:]).permute( - 0, 1, 3, 4, 2).contiguous() # (batch,c, t, h, w/2+1, 2) - ffted = torch.complex(ffted[..., 0], ffted[..., 1]) - - output = torch.fft.irfft(ffted, s=x.shape[-1:], norm="ortho") - return output - - - def forward(self, x): - rowwise = self.process_branch(x, self.row_conv, self.row_bn) - colwise = self.process_branch(x.permute(0, 1, 3, 2), self.col_conv, self.col_bn).permute(0, 1, 3, 2) - out = torch.cat((rowwise, colwise), dim=1) - return out - - -class SpectralTransform(nn.Module): - - def __init__(self, in_channels, out_channels, stride=1, groups=1, enable_lfu=True, separable_fu=False, **fu_kwargs): - # bn_layer not used - super(SpectralTransform, self).__init__() - self.enable_lfu = enable_lfu - if stride == 2: - self.downsample = nn.AvgPool2d(kernel_size=(2, 2), stride=2) - else: - self.downsample = nn.Identity() - - self.stride = stride - self.conv1 = nn.Sequential( - nn.Conv2d(in_channels, out_channels // - 2, kernel_size=1, groups=groups, bias=False), - nn.BatchNorm2d(out_channels // 
2), - nn.ReLU(inplace=True) - ) - fu_class = SeparableFourierUnit if separable_fu else FourierUnit - self.fu = fu_class( - out_channels // 2, out_channels // 2, groups, **fu_kwargs) - if self.enable_lfu: - self.lfu = fu_class( - out_channels // 2, out_channels // 2, groups) - self.conv2 = torch.nn.Conv2d( - out_channels // 2, out_channels, kernel_size=1, groups=groups, bias=False) - - def forward(self, x): - - x = self.downsample(x) - x = self.conv1(x) - output = self.fu(x) - - if self.enable_lfu: - n, c, h, w = x.shape - split_no = 2 - split_s = h // split_no - xs = torch.cat(torch.split( - x[:, :c // 4], split_s, dim=-2), dim=1).contiguous() - xs = torch.cat(torch.split(xs, split_s, dim=-1), - dim=1).contiguous() - xs = self.lfu(xs) - xs = xs.repeat(1, 1, split_no, split_no).contiguous() - else: - xs = 0 - - output = self.conv2(x + output + xs) - - return output - - -class FFC(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, - ratio_gin, ratio_gout, stride=1, padding=0, - dilation=1, groups=1, bias=False, enable_lfu=True, - padding_type='reflect', gated=False, **spectral_kwargs): - super(FFC, self).__init__() - - assert stride == 1 or stride == 2, "Stride should be 1 or 2." - self.stride = stride - - in_cg = int(in_channels * ratio_gin) - in_cl = in_channels - in_cg - out_cg = int(out_channels * ratio_gout) - out_cl = out_channels - out_cg - #groups_g = 1 if groups == 1 else int(groups * ratio_gout) - #groups_l = 1 if groups == 1 else groups - groups_g - - self.ratio_gin = ratio_gin - self.ratio_gout = ratio_gout - self.global_in_num = in_cg - - module = nn.Identity if in_cl == 0 or out_cl == 0 else nn.Conv2d - self.convl2l = module(in_cl, out_cl, kernel_size, - stride, padding, dilation, groups, bias, padding_mode=padding_type) - module = nn.Identity if in_cl == 0 or out_cg == 0 else nn.Conv2d - self.convl2g = module(in_cl, out_cg, kernel_size, - stride, padding, dilation, groups, bias, padding_mode=padding_type) - module = nn.Identity if in_cg == 0 or out_cl == 0 else nn.Conv2d - self.convg2l = module(in_cg, out_cl, kernel_size, - stride, padding, dilation, groups, bias, padding_mode=padding_type) - module = nn.Identity if in_cg == 0 or out_cg == 0 else SpectralTransform - self.convg2g = module( - in_cg, out_cg, stride, 1 if groups == 1 else groups // 2, enable_lfu, **spectral_kwargs) - - self.gated = gated - module = nn.Identity if in_cg == 0 or out_cl == 0 or not self.gated else nn.Conv2d - self.gate = module(in_channels, 2, 1) - - def forward(self, x): - x_l, x_g = x if type(x) is tuple else (x, 0) - out_xl, out_xg = 0, 0 - - if self.gated: - total_input_parts = [x_l] - if torch.is_tensor(x_g): - total_input_parts.append(x_g) - total_input = torch.cat(total_input_parts, dim=1) - - gates = torch.sigmoid(self.gate(total_input)) - g2l_gate, l2g_gate = gates.chunk(2, dim=1) - else: - g2l_gate, l2g_gate = 1, 1 - - if self.ratio_gout != 1: - out_xl = self.convl2l(x_l) + self.convg2l(x_g) * g2l_gate - if self.ratio_gout != 0: - out_xg = self.convl2g(x_l) * l2g_gate + self.convg2g(x_g) - - return out_xl, out_xg - - -class FFC_BN_ACT(nn.Module): - - def __init__(self, in_channels, out_channels, - kernel_size, ratio_gin, ratio_gout, - stride=1, padding=0, dilation=1, groups=1, bias=False, - norm_layer=nn.BatchNorm2d, activation_layer=nn.Identity, - padding_type='reflect', - enable_lfu=True, **kwargs): - super(FFC_BN_ACT, self).__init__() - self.ffc = FFC(in_channels, out_channels, kernel_size, - ratio_gin, ratio_gout, stride, padding, dilation, - groups, bias, 
enable_lfu, padding_type=padding_type, **kwargs) - lnorm = nn.Identity if ratio_gout == 1 else norm_layer - gnorm = nn.Identity if ratio_gout == 0 else norm_layer - global_channels = int(out_channels * ratio_gout) - self.bn_l = lnorm(out_channels - global_channels) - self.bn_g = gnorm(global_channels) - - lact = nn.Identity if ratio_gout == 1 else activation_layer - gact = nn.Identity if ratio_gout == 0 else activation_layer - self.act_l = lact(inplace=True) - self.act_g = gact(inplace=True) - - def forward(self, x): - x_l, x_g = self.ffc(x) - x_l = self.act_l(self.bn_l(x_l)) - x_g = self.act_g(self.bn_g(x_g)) - return x_l, x_g - - -class FFCResnetBlock(nn.Module): - def __init__(self, dim, padding_type, norm_layer, activation_layer=nn.ReLU, dilation=1, - spatial_transform_kwargs=None, inline=False, **conv_kwargs): - super().__init__() - self.conv1 = FFC_BN_ACT(dim, dim, kernel_size=3, padding=dilation, dilation=dilation, - norm_layer=norm_layer, - activation_layer=activation_layer, - padding_type=padding_type, - **conv_kwargs) - self.conv2 = FFC_BN_ACT(dim, dim, kernel_size=3, padding=dilation, dilation=dilation, - norm_layer=norm_layer, - activation_layer=activation_layer, - padding_type=padding_type, - **conv_kwargs) - if spatial_transform_kwargs is not None: - self.conv1 = LearnableSpatialTransformWrapper(self.conv1, **spatial_transform_kwargs) - self.conv2 = LearnableSpatialTransformWrapper(self.conv2, **spatial_transform_kwargs) - self.inline = inline - - def forward(self, x): - if self.inline: - x_l, x_g = x[:, :-self.conv1.ffc.global_in_num], x[:, -self.conv1.ffc.global_in_num:] - else: - x_l, x_g = x if type(x) is tuple else (x, 0) - - id_l, id_g = x_l, x_g - - x_l, x_g = self.conv1((x_l, x_g)) - x_l, x_g = self.conv2((x_l, x_g)) - - x_l, x_g = id_l + x_l, id_g + x_g - out = x_l, x_g - if self.inline: - out = torch.cat(out, dim=1) - return out - - -class ConcatTupleLayer(nn.Module): - def forward(self, x): - assert isinstance(x, tuple) - x_l, x_g = x - assert torch.is_tensor(x_l) or torch.is_tensor(x_g) - if not torch.is_tensor(x_g): - return x_l - return torch.cat(x, dim=1) - - -class FFCResNetGenerator(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', activation_layer=nn.ReLU, - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), - init_conv_kwargs={}, downsample_conv_kwargs={}, resnet_conv_kwargs={}, - spatial_transform_layers=None, spatial_transform_kwargs={}, - add_out_act=True, max_features=1024, out_ffc=False, out_ffc_kwargs={}): - assert (n_blocks >= 0) - super().__init__() - - model = [nn.ReflectionPad2d(3), - FFC_BN_ACT(input_nc, ngf, kernel_size=7, padding=0, norm_layer=norm_layer, - activation_layer=activation_layer, **init_conv_kwargs)] - - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - if i == n_downsampling - 1: - cur_conv_kwargs = dict(downsample_conv_kwargs) - cur_conv_kwargs['ratio_gout'] = resnet_conv_kwargs.get('ratio_gin', 0) - else: - cur_conv_kwargs = downsample_conv_kwargs - model += [FFC_BN_ACT(min(max_features, ngf * mult), - min(max_features, ngf * mult * 2), - kernel_size=3, stride=2, padding=1, - norm_layer=norm_layer, - activation_layer=activation_layer, - **cur_conv_kwargs)] - - mult = 2 ** n_downsampling - feats_num_bottleneck = min(max_features, ngf * mult) - - ### resnet blocks - for i in range(n_blocks): - cur_resblock = FFCResnetBlock(feats_num_bottleneck, padding_type=padding_type, activation_layer=activation_layer, - 
norm_layer=norm_layer, **resnet_conv_kwargs) - if spatial_transform_layers is not None and i in spatial_transform_layers: - cur_resblock = LearnableSpatialTransformWrapper(cur_resblock, **spatial_transform_kwargs) - model += [cur_resblock] - - model += [ConcatTupleLayer()] - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(min(max_features, ngf * mult), - min(max_features, int(ngf * mult / 2)), - kernel_size=3, stride=2, padding=1, output_padding=1), - up_norm_layer(min(max_features, int(ngf * mult / 2))), - up_activation] - - if out_ffc: - model += [FFCResnetBlock(ngf, padding_type=padding_type, activation_layer=activation_layer, - norm_layer=norm_layer, inline=True, **out_ffc_kwargs)] - - model += [nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - if add_out_act: - model.append(get_activation('tanh' if add_out_act is True else add_out_act)) - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -class FFCNLayerDiscriminator(BaseDiscriminator): - def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, max_features=512, - init_conv_kwargs={}, conv_kwargs={}): - super().__init__() - self.n_layers = n_layers - - def _act_ctor(inplace=True): - return nn.LeakyReLU(negative_slope=0.2, inplace=inplace) - - kw = 3 - padw = int(np.ceil((kw-1.0)/2)) - sequence = [[FFC_BN_ACT(input_nc, ndf, kernel_size=kw, padding=padw, norm_layer=norm_layer, - activation_layer=_act_ctor, **init_conv_kwargs)]] - - nf = ndf - for n in range(1, n_layers): - nf_prev = nf - nf = min(nf * 2, max_features) - - cur_model = [ - FFC_BN_ACT(nf_prev, nf, - kernel_size=kw, stride=2, padding=padw, - norm_layer=norm_layer, - activation_layer=_act_ctor, - **conv_kwargs) - ] - sequence.append(cur_model) - - nf_prev = nf - nf = min(nf * 2, 512) - - cur_model = [ - FFC_BN_ACT(nf_prev, nf, - kernel_size=kw, stride=1, padding=padw, - norm_layer=norm_layer, - activation_layer=lambda *args, **kwargs: nn.LeakyReLU(*args, negative_slope=0.2, **kwargs), - **conv_kwargs), - ConcatTupleLayer() - ] - sequence.append(cur_model) - - sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]] - - for n in range(len(sequence)): - setattr(self, 'model'+str(n), nn.Sequential(*sequence[n])) - - def get_all_activations(self, x): - res = [x] - for n in range(self.n_layers + 2): - model = getattr(self, 'model' + str(n)) - res.append(model(res[-1])) - return res[1:] - - def forward(self, x): - act = self.get_all_activations(x) - feats = [] - for out in act[:-1]: - if isinstance(out, tuple): - if torch.is_tensor(out[1]): - out = torch.cat(out, dim=1) - else: - out = out[0] - feats.append(out) - return act[-1], feats diff --git a/spaces/CVPR/unicl-zero-shot-img-recog/model/image_encoder/swin_transformer.py b/spaces/CVPR/unicl-zero-shot-img-recog/model/image_encoder/swin_transformer.py deleted file mode 100644 index 6e171663bb49a2c995bd61aaacda6cf5f6223e22..0000000000000000000000000000000000000000 --- a/spaces/CVPR/unicl-zero-shot-img-recog/model/image_encoder/swin_transformer.py +++ /dev/null @@ -1,636 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu -# -------------------------------------------------------- -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as 
checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - r""" Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """ - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}' - - def flops(self, N): - # calculate flops for 1 window with token length of N - flops = 0 - # qkv = self.qkv(x) - flops += N * self.dim * 3 * self.dim - # attn = (q @ k.transpose(-2, -1)) - flops += self.num_heads * N * (self.dim // self.num_heads) * N - # x = (attn @ v) - flops += self.num_heads * N * N * (self.dim // self.num_heads) - # x = self.proj(x) - flops += N * self.dim * self.dim - return flops - - -class SwinTransformerBlock(nn.Module): - r""" Swin Transformer Block. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resulotion. - num_heads (int): Number of attention heads. 
- window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - if min(self.input_resolution) <= self.window_size: - # if window size is larger than input resolution, we don't partition windows - self.shift_size = 0 - self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=to_2tuple(self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - if self.shift_size > 0: - # calculate attention mask for SW-MSA - H, W = self.input_resolution - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - else: - attn_mask = None - - self.register_buffer("attn_mask", attn_mask) - - def forward(self, x, Ph, Pw, attn_mask): - # H, W = self.input_resolution - B, L, C = x.shape - # assert L == H * W, "input feature has wrong size" - - shortcut = x - x = self.norm1(x) - x = x.view(B, Ph, Pw, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - Pw % self.window_size) % self.window_size - pad_b = (self.window_size - Ph % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = attn_mask - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, 
window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :Ph, :Pw, :].contiguous() - - x = x.view(B, Ph * Pw, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ - f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" - - def flops(self): - flops = 0 - H, W = self.input_resolution - # norm1 - flops += self.dim * H * W - # W-MSA/SW-MSA - nW = H * W / self.window_size / self.window_size - flops += nW * self.attn.flops(self.window_size * self.window_size) - # mlp - flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio - # norm2 - flops += self.dim * H * W - return flops - - -class PatchMerging(nn.Module): - r""" Patch Merging Layer. - - Args: - input_resolution (tuple[int]): Resolution of input feature. - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.input_resolution = input_resolution - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, Ph, Pw): - """ - x: B, H*W, C - """ - B, L, C = x.shape - # assert L == H * W, "input feature has wrong size" - # assert Ph % 2 == 0 and Pw % 2 == 0, f"x size ({Ph}*{Pw}) are not even." - - x = x.view(B, Ph, Pw, C) - - # padding - pad_input = (Ph % 2 == 1) or (Pw % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, Pw % 2, 0, Ph % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - def extra_repr(self) -> str: - return f"input_resolution={self.input_resolution}, dim={self.dim}" - - def flops(self): - H, W = self.input_resolution - flops = H * W * self.dim - flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim - return flops - - -class BasicLayer(nn.Module): - """ A basic Swin Transformer layer for one stage. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - num_heads (int): Number of attention heads. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. 
Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__(self, dim, input_resolution, depth, num_heads, window_size, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False): - - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.depth = depth - self.use_checkpoint = use_checkpoint - self.window_size = window_size - self.shift_size = window_size // 2 - - # build blocks - self.blocks = nn.ModuleList([ - SwinTransformerBlock(dim=dim, input_resolution=input_resolution, - num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop, attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer) - for i in range(depth)]) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, Ph, Pw): - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(Ph / self.window_size)) * self.window_size - Wp = int(np.ceil(Pw / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - - for blk in self.blocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x, Ph, Pw, attn_mask) - if self.downsample is not None: - x = self.downsample(x, Ph, Pw) - Ph, Pw = (Ph + 1) // 2, (Pw + 1) // 2 - return x, Ph, Pw - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}" - - def flops(self): - flops = 0 - for blk in self.blocks: - flops += blk.flops() - if self.downsample is not None: - flops += self.downsample.flops() - return flops - - -class PatchEmbed(nn.Module): - r""" Image to Patch Embedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. 
Default: None - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - B, C, H, W = x.shape - # FIXME look at relaxing size constraints - # assert H == self.img_size[0] and W == self.img_size[1], \ - # f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})." - x = self.proj(x) - Ph, Pw = x.shape[2:] - x = x.flatten(2).transpose(1, 2) # B Ph*Pw C - if self.norm is not None: - x = self.norm(x) - return x, Ph, Pw - - def flops(self): - Ho, Wo = self.patches_resolution - flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1]) - if self.norm is not None: - flops += Ho * Wo * self.embed_dim - return flops - - -class SwinTransformer(nn.Module): - r""" Swin Transformer - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - - Args: - img_size (int | tuple(int)): Input image size. Default 224 - patch_size (int | tuple(int)): Patch size. Default: 4 - in_chans (int): Number of input image channels. Default: 3 - num_classes (int): Number of classes for classification head. Default: 1000 - embed_dim (int): Patch embedding dimension. Default: 96 - depths (tuple(int)): Depth of each Swin Transformer layer. - num_heads (tuple(int)): Number of attention heads in different layers. - window_size (int): Window size. Default: 7 - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4 - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None - drop_rate (float): Dropout rate. Default: 0 - attn_drop_rate (float): Attention dropout rate. Default: 0 - drop_path_rate (float): Stochastic depth rate. Default: 0.1 - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False - patch_norm (bool): If True, add normalization after patch embedding. Default: True - use_checkpoint (bool): Whether to use checkpointing to save memory. 
Default: False - """ - - def __init__(self, img_size=224, patch_size=4, in_chans=3, num_classes=1000, - embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24], - window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1, - norm_layer=nn.LayerNorm, ape=False, patch_norm=True, - use_checkpoint=False, **kwargs): - super().__init__() - - self.num_classes = num_classes - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.num_features = int(embed_dim * 2 ** (self.num_layers - 1)) - self.mlp_ratio = mlp_ratio - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None) - num_patches = self.patch_embed.num_patches - patches_resolution = self.patch_embed.patches_resolution - self.patches_resolution = patches_resolution - - # absolute position embedding - if self.ape: - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim)) - trunc_normal_(self.absolute_pos_embed, std=.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer(dim=int(embed_dim * 2 ** i_layer), - input_resolution=(patches_resolution[0] // (2 ** i_layer), - patches_resolution[1] // (2 ** i_layer)), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=self.mlp_ratio, - qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint) - self.layers.append(layer) - - self.norm = norm_layer(self.num_features) - self.avgpool = nn.AdaptiveAvgPool1d(1) - self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() - self.dim_out = self.num_features - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'absolute_pos_embed'} - - @torch.jit.ignore - def no_weight_decay_keywords(self): - return {'relative_position_bias_table'} - - def forward_features(self, x, output_map=False): - x, Ph, Pw = self.patch_embed(x) - if self.ape: - x = x + self.absolute_pos_embed - x = self.pos_drop(x) - - for layer in self.layers: - x, Ph, Pw = layer(x, Ph, Pw) - - x_map = self.norm(x).transpose(1, 2) # B C L - x = self.avgpool(x_map) # B C 1 - x = torch.flatten(x, 1) - - if output_map: - return x, x_map, Ph, Pw - else: - return x - - def forward(self, x): - x = self.forward_features(x) - x = self.head(x) - return x - - def flops(self): - flops = 0 - flops += self.patch_embed.flops() - for i, layer in enumerate(self.layers): - flops += layer.flops() - flops += self.num_features * self.patches_resolution[0] * self.patches_resolution[1] // (2 ** self.num_layers) - flops += self.num_features * self.num_classes - return flops diff --git 
a/spaces/ClassCat/Brain-tumor-3D-segmentation-with-MONAI/README.md b/spaces/ClassCat/Brain-tumor-3D-segmentation-with-MONAI/README.md deleted file mode 100644 index 179d00ccbac1b793588e467b8b4c5056ecfe7135..0000000000000000000000000000000000000000 --- a/spaces/ClassCat/Brain-tumor-3D-segmentation-with-MONAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Brain Tumor 3D Segmentation With MONAI -emoji: 🌖 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Cpp4App/Cpp4App/app.py b/spaces/Cpp4App/Cpp4App/app.py deleted file mode 100644 index ba1e405a53504dc3a5ae9a36a070ec3d4f384d9d..0000000000000000000000000000000000000000 --- a/spaces/Cpp4App/Cpp4App/app.py +++ /dev/null @@ -1,284 +0,0 @@ -import gradio as gr -import cv2 -import numpy as np -import shutil -from bs4 import BeautifulSoup -import requests -import pandas as pd - -from SEM.run_single_sem import run_single_pp -from CDM.run_single import run_single_img - -title = "Cpp4App" -description = "Automated Contextual Privacy Policies Generation for Mobile Apps" - - -def write_and_read(): - # Write - with open('myfile.txt', 'w') as f: - f.write('Hello, World!') - - # Read - with open('myfile.txt', 'r') as f: - data = f.read() - - print("this is data: ", data) - - return data - -def run_demo(img_root, output_root, segment_root, file): - print(type(file)) - - # file_content = file.read().decode('utf-8') - run_single_pp(file) - - output_board, output_data, complete_result = run_single_img(img_root, output_root, segment_root) - - return output_board, output_data, complete_result - -def inference(img, html): - - write_and_read() - - if img is None or html is None: - return None, None - - output_root = "./CDM/result_classification" - segment_root = './SEM/txt' - img_root = "./CDM/input_examples/1-1-write.jpg" - pp_root = "1.txt" - - # output_root = "" - # segment_root = "" - # img_root = "demo_img.jpg" - - img_array = np.array(img) - - cv2.imwrite(img_root, img_array) - - # replace example string with real example - # if html == 'html content 1': - # with open("examples/6.txt", "r") as f: - # html = f.read() - # elif html == 'html content 2': - # with open("examples/11.txt", "r") as f: - # html = f.read() - - # print("string: ", html) - # with open(pp_root, 'w', encoding='utf-8') as file: # Open the destination file in text mode - # file.write(html) # Write the HTML content to the destination file - - try: - response = requests.get(html) - response.raise_for_status() # Will raise an exception if the status is an error - input_text = response.text - except requests.HTTPError: - input_text = "" - # print("input_text: ", input_text) - with open(pp_root, 'w', encoding='utf-8') as file: - file.write(input_text) - - soup = BeautifulSoup(open(pp_root, encoding='utf-8'), features="html.parser") - # print("pp_root soup: ", soup.contents) - - output_board, output_data, complete_result = run_demo(img_root, output_root, segment_root, pp_root) - - # print(output_data) - - return output_board, output_data, complete_result - -# inputs = [ -# gr.inputs.Image(type="pil", label="Image Upload"), -# # gr.inputs.File(label="HTML File Upload"), -# gr.inputs.Textbox(label="Text Input") -# # gr.inputs.Textbox(lines=True, label="HTML Text") -# ] -# output = [ -# gr.outputs.Image(type="pil", label="Result Image"), -# gr.outputs.Dataframe(type="pandas", label="Result Excel") -# ] - -# 
gr.Interface( -# inference, -# # inputs, -# # output, -# inputs=[image_input_row, textbox_input_row], -# outputs=[image_output_row, dataframe_output_row], -# title=title, -# description=description, -# # examples=[['examples/6-8.jpg', 'examples/6.txt'], ['examples/11-9.jpg', 'examples/11.html']], -# # examples=[['examples/6-8.jpg', example_file_content_1], ['examples/11-9.jpg', example_file_content_2]], -# examples=[['examples/6-8.jpg', 'html content 1'], ['examples/11-9.jpg', 'html content 2']], -# enable_queue=True, -# capture_session=True, -# layout='vertical' -# ).launch(debug=False) - -# def example_inference(): -# image_input_bgr = cv2.imread('examples/6-8.jpg') -# image_input = cv2.cvtColor(image_input_bgr, cv2.COLOR_BGR2RGB) -# # text_input = 'html content 1' # example string -# text_input = 'https://www.whatsapp.com/legal/privacy-policy' -# -# out_result, out_segment = inference(image_input, text_input) -# -# return image_input, text_input, out_result, out_segment - -def example_inference_1(): - image_input_bgr = cv2.imread("examples/6-8.jpg") - image_input = cv2.cvtColor(image_input_bgr, cv2.COLOR_BGR2RGB) - text_input = 'https://www.whatsapp.com/legal/privacy-policy' - out_result, out_segment, complete_result = inference(image_input, text_input) - return image_input, text_input, out_result, out_segment, complete_result - -def example_inference_2(): - image_input_bgr = cv2.imread("examples/11-9.jpg") - image_input = cv2.cvtColor(image_input_bgr, cv2.COLOR_BGR2RGB) - text_input = 'https://values.snap.com/privacy/privacy-policy' - out_result, out_segment, complete_result = inference(image_input, text_input) - return image_input, text_input, out_result, out_segment, complete_result - -def example_inference_3(): - image_input_bgr = cv2.imread("examples/1-1.jpg") - image_input = cv2.cvtColor(image_input_bgr, cv2.COLOR_BGR2RGB) - text_input = 'https://mcdonalds.com.au/privacy-policy' - out_result, out_segment, complete_result = inference(image_input, text_input) - return image_input, text_input, out_result, out_segment, complete_result - -def new_example_inference_1(): - image_input_bgr = cv2.imread("examples/6-8.jpg") - image_input = cv2.cvtColor(image_input_bgr, cv2.COLOR_BGR2RGB) - text_input = 'https://www.whatsapp.com/legal/privacy-policy' - - out_result_bgr = cv2.imread("results/result_1.png") - out_result = cv2.cvtColor(out_result_bgr, cv2.COLOR_BGR2RGB) - - out_segment = pd.read_excel("results/result_1_S.xlsx") - complete_result = pd.read_excel("results/result_1_C.xlsx") - - return image_input, text_input, out_result, out_segment, complete_result - -def new_example_inference_2(): - image_input_bgr = cv2.imread("examples/11-9.jpg") - image_input = cv2.cvtColor(image_input_bgr, cv2.COLOR_BGR2RGB) - text_input = 'https://values.snap.com/privacy/privacy-policy' - - out_result_bgr = cv2.imread("results/result_2.png") - out_result = cv2.cvtColor(out_result_bgr, cv2.COLOR_BGR2RGB) - - out_segment = pd.read_excel("results/result_2_S.xlsx") - complete_result = pd.read_excel("results/result_2_C.xlsx") - - return image_input, text_input, out_result, out_segment, complete_result - -def new_example_inference_3(): - image_input_bgr = cv2.imread("examples/1-1.jpg") - image_input = cv2.cvtColor(image_input_bgr, cv2.COLOR_BGR2RGB) - text_input = 'https://mcdonalds.com.au/privacy-policy' - - out_result_bgr = cv2.imread("results/result_3.png") - out_result = cv2.cvtColor(out_result_bgr, cv2.COLOR_BGR2RGB) - - out_segment = pd.read_excel("results/result_3_S.xlsx") - complete_result = 
pd.read_excel("results/result_3_C.xlsx") - - return image_input, text_input, out_result, out_segment, complete_result - -# def toggle_dataframe_callback(): -# complete_result_dataframe.visible = not complete_result_dataframe.visible - -demo = gr.Blocks() -with demo: - gr.Markdown("# Cpp4App\n\n**Automated Contextual Privacy Policies Generation for Mobile Apps**" - "\n\nThere are two inputs to generate CPP for a mobile app: app's privacy policy URL link and a GUI screenshot") - - with gr.Row(): - example_image_1 = gr.Image(value="examples/6-8.jpg", label="Example 1") - example_image_2 = gr.Image(value="examples/11-9.jpg", label="Example 2") - example_image_3 = gr.Image(value="examples/1-1.jpg", label="Example 3") - with gr.Column(): - gr.Markdown("**You can try with three examples we provided:**" - "\n\n- WhatsApp" - "\n\n- Snap" - "\n\n- Mcdonald's" - "\n\n**You can also try with your own example:**" - "\n\nUpload the screenshot and privacy policy URL link, then click 'submit' button" - # "\n\n" - # "\n\nThe three provided examples are pre-run, while your own screenshot needs to run for approximately one minute." - ) - - with gr.Row(): - example_button_1 = gr.Button("Run with Example 1") - example_button_2 = gr.Button("Run with Example 2") - example_button_3 = gr.Button("Run with Example 3") - with gr.Column(): - clear_button = gr.Button("Clear") - submit_button = gr.Button("Submit") - - with gr.Row(): - text_input = gr.inputs.Textbox(label="URL Input for the Privacy Policy of the App") - - with gr.Column(): - image_input = gr.inputs.Image(type="pil", label="Screenshot Upload") - result_image = gr.outputs.Image(type="pil", label="Result Screenshot") - - with gr.Row(): - result_dataframe = gr.outputs.Dataframe(type="pandas", label="Result Excel (Summarized)") - - # with gr.Row(): - # # Create a button to control the display of complete_result_dataframe - # toggle_dataframe_button = gr.Button("Show Complete Result Excel") - - with gr.Row(): - complete_result_dataframe = gr.outputs.Dataframe(type="pandas", label="Result Excel (Complete)") - - # with gr.Row(): - # example_button_1 = gr.Button("Run with Example 1") - # example_button_2 = gr.Button("Run with Example 2") - # example_button_3 = gr.Button("Run with Example 3") - # with gr.Column(): - # clear_button = gr.Button("Clear") - # submit_button = gr.Button("Submit") - # - # with gr.Row(): - # example_image_1 = gr.Image(value="examples/6-8.jpg", label="Example 1") - # example_image_2 = gr.Image(value="examples/11-9.jpg", label="Example 2") - # example_image_3 = gr.Image(value="examples/1-1.jpg", label="Example 3") - # with gr.Column(): - # gr.Markdown("**You can try with three examples we provided:**" - # "\n\n- WhatsApp" - # "\n\n- Snap" - # "\n\n- Mcdonald's" - # "\n\n**You can also try with your own example:**" - # "\n\nUpload the screenshot and privacy policy URL link, then click 'submit' button") - - submit_button.click(inference, inputs=[image_input, text_input], outputs=[result_image, result_dataframe, complete_result_dataframe]) - clear_button.click(lambda: [None, None, None, None, None, None], inputs=[], outputs=[image_input, text_input, result_image, result_dataframe, complete_result_dataframe]) - # example_button.click(example_inference, inputs=[], outputs=[image_input, text_input, result_image, result_dataframe]) - example_button_1.click(new_example_inference_1, - inputs=[], - outputs=[image_input, text_input, result_image, result_dataframe, complete_result_dataframe]) - example_button_2.click(new_example_inference_2, - 
inputs=[], - outputs=[image_input, text_input, result_image, result_dataframe, complete_result_dataframe]) - example_button_3.click(new_example_inference_3, - inputs=[], - outputs=[image_input, text_input, result_image, result_dataframe, complete_result_dataframe]) - - # # Create a unique CSS ID for the dataframe output - # dataframe_id = id(complete_result_dataframe) - # - # # Define CSS styles for hiding/showing the dataframe - # hide_style = f"#{dataframe_id} {{ display: none; }}" - # show_style = f"#{dataframe_id} {{ display: block; }}" - # - # - # def toggle_dataframe_callback(): - # if toggle_dataframe_button.label == "Show Complete Result Excel": - # toggle_dataframe_button.label = "Hide Complete Result Excel" - # gr.Html(style=show_style).show() - # else: - # toggle_dataframe_button.label = "Show Complete Result Excel" - # gr.Html(style=hide_style).show() - -demo.launch() diff --git a/spaces/Cropinky/esrgan/realesrgan/archs/discriminator_arch.py b/spaces/Cropinky/esrgan/realesrgan/archs/discriminator_arch.py deleted file mode 100644 index 4b66ab1226d6793de846bc9828bbe427031a0e2d..0000000000000000000000000000000000000000 --- a/spaces/Cropinky/esrgan/realesrgan/archs/discriminator_arch.py +++ /dev/null @@ -1,67 +0,0 @@ -from basicsr.utils.registry import ARCH_REGISTRY -from torch import nn as nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm - - -@ARCH_REGISTRY.register() -class UNetDiscriminatorSN(nn.Module): - """Defines a U-Net discriminator with spectral normalization (SN) - - It is used in Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - Arg: - num_in_ch (int): Channel number of inputs. Default: 3. - num_feat (int): Channel number of base intermediate features. Default: 64. - skip_connection (bool): Whether to use skip connections between U-Net. Default: True. 
- """ - - def __init__(self, num_in_ch, num_feat=64, skip_connection=True): - super(UNetDiscriminatorSN, self).__init__() - self.skip_connection = skip_connection - norm = spectral_norm - # the first convolution - self.conv0 = nn.Conv2d(num_in_ch, num_feat, kernel_size=3, stride=1, padding=1) - # downsample - self.conv1 = norm(nn.Conv2d(num_feat, num_feat * 2, 4, 2, 1, bias=False)) - self.conv2 = norm(nn.Conv2d(num_feat * 2, num_feat * 4, 4, 2, 1, bias=False)) - self.conv3 = norm(nn.Conv2d(num_feat * 4, num_feat * 8, 4, 2, 1, bias=False)) - # upsample - self.conv4 = norm(nn.Conv2d(num_feat * 8, num_feat * 4, 3, 1, 1, bias=False)) - self.conv5 = norm(nn.Conv2d(num_feat * 4, num_feat * 2, 3, 1, 1, bias=False)) - self.conv6 = norm(nn.Conv2d(num_feat * 2, num_feat, 3, 1, 1, bias=False)) - # extra convolutions - self.conv7 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False)) - self.conv8 = norm(nn.Conv2d(num_feat, num_feat, 3, 1, 1, bias=False)) - self.conv9 = nn.Conv2d(num_feat, 1, 3, 1, 1) - - def forward(self, x): - # downsample - x0 = F.leaky_relu(self.conv0(x), negative_slope=0.2, inplace=True) - x1 = F.leaky_relu(self.conv1(x0), negative_slope=0.2, inplace=True) - x2 = F.leaky_relu(self.conv2(x1), negative_slope=0.2, inplace=True) - x3 = F.leaky_relu(self.conv3(x2), negative_slope=0.2, inplace=True) - - # upsample - x3 = F.interpolate(x3, scale_factor=2, mode='bilinear', align_corners=False) - x4 = F.leaky_relu(self.conv4(x3), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x4 = x4 + x2 - x4 = F.interpolate(x4, scale_factor=2, mode='bilinear', align_corners=False) - x5 = F.leaky_relu(self.conv5(x4), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x5 = x5 + x1 - x5 = F.interpolate(x5, scale_factor=2, mode='bilinear', align_corners=False) - x6 = F.leaky_relu(self.conv6(x5), negative_slope=0.2, inplace=True) - - if self.skip_connection: - x6 = x6 + x0 - - # extra convolutions - out = F.leaky_relu(self.conv7(x6), negative_slope=0.2, inplace=True) - out = F.leaky_relu(self.conv8(out), negative_slope=0.2, inplace=True) - out = self.conv9(out) - - return out diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/roundingPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/roundingPen.py deleted file mode 100644 index 2a7c476c36f4d244d62c92b745dc462d977ba394..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/roundingPen.py +++ /dev/null @@ -1,112 +0,0 @@ -from fontTools.misc.roundTools import otRound -from fontTools.misc.transform import Transform -from fontTools.pens.filterPen import FilterPen, FilterPointPen - - -__all__ = ["RoundingPen", "RoundingPointPen"] - - -class RoundingPen(FilterPen): - """ - Filter pen that rounds point coordinates and component XY offsets to integer. - - >>> from fontTools.pens.recordingPen import RecordingPen - >>> recpen = RecordingPen() - >>> roundpen = RoundingPen(recpen) - >>> roundpen.moveTo((0.4, 0.6)) - >>> roundpen.lineTo((1.6, 2.5)) - >>> roundpen.qCurveTo((2.4, 4.6), (3.3, 5.7), (4.9, 6.1)) - >>> roundpen.curveTo((6.4, 8.6), (7.3, 9.7), (8.9, 10.1)) - >>> roundpen.addComponent("a", (1.5, 0, 0, 1.5, 10.5, -10.5)) - >>> recpen.value == [ - ... ('moveTo', ((0, 1),)), - ... ('lineTo', ((2, 3),)), - ... ('qCurveTo', ((2, 5), (3, 6), (5, 6))), - ... ('curveTo', ((6, 9), (7, 10), (9, 10))), - ... ('addComponent', ('a', (1.5, 0, 0, 1.5, 11, -10))), - ... 
] - True - """ - - def __init__(self, outPen, roundFunc=otRound): - super().__init__(outPen) - self.roundFunc = roundFunc - - def moveTo(self, pt): - self._outPen.moveTo((self.roundFunc(pt[0]), self.roundFunc(pt[1]))) - - def lineTo(self, pt): - self._outPen.lineTo((self.roundFunc(pt[0]), self.roundFunc(pt[1]))) - - def curveTo(self, *points): - self._outPen.curveTo( - *((self.roundFunc(x), self.roundFunc(y)) for x, y in points) - ) - - def qCurveTo(self, *points): - self._outPen.qCurveTo( - *((self.roundFunc(x), self.roundFunc(y)) for x, y in points) - ) - - def addComponent(self, glyphName, transformation): - self._outPen.addComponent( - glyphName, - Transform( - *transformation[:4], - self.roundFunc(transformation[4]), - self.roundFunc(transformation[5]), - ), - ) - - -class RoundingPointPen(FilterPointPen): - """ - Filter point pen that rounds point coordinates and component XY offsets to integer. - - >>> from fontTools.pens.recordingPen import RecordingPointPen - >>> recpen = RecordingPointPen() - >>> roundpen = RoundingPointPen(recpen) - >>> roundpen.beginPath() - >>> roundpen.addPoint((0.4, 0.6), 'line') - >>> roundpen.addPoint((1.6, 2.5), 'line') - >>> roundpen.addPoint((2.4, 4.6)) - >>> roundpen.addPoint((3.3, 5.7)) - >>> roundpen.addPoint((4.9, 6.1), 'qcurve') - >>> roundpen.endPath() - >>> roundpen.addComponent("a", (1.5, 0, 0, 1.5, 10.5, -10.5)) - >>> recpen.value == [ - ... ('beginPath', (), {}), - ... ('addPoint', ((0, 1), 'line', False, None), {}), - ... ('addPoint', ((2, 3), 'line', False, None), {}), - ... ('addPoint', ((2, 5), None, False, None), {}), - ... ('addPoint', ((3, 6), None, False, None), {}), - ... ('addPoint', ((5, 6), 'qcurve', False, None), {}), - ... ('endPath', (), {}), - ... ('addComponent', ('a', (1.5, 0, 0, 1.5, 11, -10)), {}), - ... 
] - True - """ - - def __init__(self, outPen, roundFunc=otRound): - super().__init__(outPen) - self.roundFunc = roundFunc - - def addPoint(self, pt, segmentType=None, smooth=False, name=None, **kwargs): - self._outPen.addPoint( - (self.roundFunc(pt[0]), self.roundFunc(pt[1])), - segmentType=segmentType, - smooth=smooth, - name=name, - **kwargs, - ) - - def addComponent(self, baseGlyphName, transformation, **kwargs): - self._outPen.addComponent( - baseGlyphName, - Transform( - *transformation[:4], - self.roundFunc(transformation[4]), - self.roundFunc(transformation[5]), - ), - **kwargs, - ) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-f90e1963.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-f90e1963.js deleted file mode 100644 index d8559fecd70d523515a920fa6ad2e748b9975c5b..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-f90e1963.js +++ /dev/null @@ -1,13 +0,0 @@ -import{S as si,e as ri,s as oi,J as io,K as se,p as Ce,M as Yt,n as vi,A as Ae,_ as Pe,N as dt,B as Xl,C as Xc,h as xr,k as fe,O as pt,U as yn,o as ue,Q as Zl,z as H,u as _n,v as j,y as Vn,x as de,ai as Ql,Z as ea,ao as bn,m as ta,am as Zc,E as ia,ae as na,j as sa,q as ra,r as oa,t as la}from"./index-3370be2a.js";import"./Blocks-f0129fcd.js";import{f as wn,B as aa}from"./Button-89624748.js";import{B as ha}from"./BlockLabel-56db415e.js";import{E as Qc}from"./Empty-585389a4.js";import{C as ef,a as ca}from"./Copy-6cd42558.js";import{D as tf}from"./Download-fdaaf5d4.js";function nf(n){let e,t;return{c(){e=io("svg"),t=io("path"),se(t,"fill","currentColor"),se(t,"d","m31 16l-7 7l-1.41-1.41L28.17 16l-5.58-5.59L24 9l7 7zM1 16l7-7l1.41 1.41L3.83 16l5.58 5.59L8 23l-7-7zm11.42 9.484L17.64 6l1.932.517L14.352 26z"),se(e,"width","100%"),se(e,"height","100%"),se(e,"viewBox","0 0 32 32")},m(i,s){Ce(i,e,s),Yt(e,t)},p:vi,i:vi,o:vi,d(i){i&&Ae(e)}}}let Sr=class extends si{constructor(e){super(),ri(this,e,null,nf,oi,{})}};class _{constructor(){}lineAt(e){if(e<0||e>this.length)throw new RangeError(`Invalid position ${e} in document of length ${this.length}`);return this.lineInner(e,!1,1,0)}line(e){if(e<1||e>this.lines)throw new RangeError(`Invalid line number ${e} in ${this.lines}-line document`);return this.lineInner(e,!0,1,0)}replace(e,t,i){let s=[];return this.decompose(0,e,s,2),i.length&&i.decompose(0,i.length,s,3),this.decompose(t,this.length,s,1),$e.from(s,this.length-(t-e)+i.length)}append(e){return this.replace(this.length,this.length,e)}slice(e,t=this.length){let i=[];return this.decompose(e,t,i,0),$e.from(i,t-e)}eq(e){if(e==this)return!0;if(e.length!=this.length||e.lines!=this.lines)return!1;let t=this.scanIdentical(e,1),i=this.length-this.scanIdentical(e,-1),s=new xi(this),r=new xi(e);for(let o=t,l=t;;){if(s.next(o),r.next(o),o=0,s.lineBreak!=r.lineBreak||s.done!=r.done||s.value!=r.value)return!1;if(l+=s.value.length,s.done||l>=i)return!0}}iter(e=1){return new xi(this,e)}iterRange(e,t=this.length){return new fa(this,e,t)}iterLines(e,t){let i;if(e==null)i=this.iter();else{t==null&&(t=this.lines+1);let s=this.line(e).from;i=this.iterRange(s,Math.max(s,t==this.lines+1?this.length:t<=1?0:this.line(t-1).to))}return new ua(i)}toString(){return this.sliceString(0)}toJSON(){let e=[];return this.flatten(e),e}static of(e){if(e.length==0)throw new RangeError("A document must have at least one line");return 
l=0;ls.set(a,l)),t=null),s.set(o.value.compartment,o.value.extension)):o.is(R.reconfigure)?(t=null,i=o.value):o.is(R.appendConfig)&&(t=null,i=Gt(i).concat(o.value));let r;t?r=e.startState.values.slice():(t=kn.resolve(i,s,this),r=new N(t,this.doc,this.selection,t.dynamicSlots.map(()=>null),(l,a)=>a.reconfigure(l,this),null).values),new N(t,e.newDoc,e.newSelection,r,(o,l)=>l.update(o,e),e)}replaceSelection(e){return typeof e=="string"&&(e=this.toText(e)),this.changeByRange(t=>({changes:{from:t.from,to:t.to,insert:e},range:w.cursor(t.from+e.length)}))}changeByRange(e){let t=this.selection,i=e(t.ranges[0]),s=this.changes(i.changes),r=[i.range],o=Gt(i.effects);for(let l=1;lo.spec.fromJSON(l,a)))}}return N.create({doc:e.doc,selection:w.fromJSON(e.selection),extensions:t.extensions?s.concat([t.extensions]):s})}static create(e={}){let t=kn.resolve(e.extensions||[],new Map),i=e.doc instanceof _?e.doc:_.of((e.doc||"").split(t.staticFacet(N.lineSeparator)||Bs)),s=e.selection?e.selection instanceof w?e.selection:w.single(e.selection.anchor,e.selection.head):w.single(0);return ba(s,i.length),t.staticFacet(va)||(s=s.asSingle()),new N(t,i,s,t.dynamicSlots.map(()=>null),(r,o)=>o.create(r),null)}get tabSize(){return this.facet(N.tabSize)}get lineBreak(){return this.facet(N.lineSeparator)||` -`}get readOnly(){return this.facet(Ma)}phrase(e,...t){for(let i of this.facet(N.phrases))if(Object.prototype.hasOwnProperty.call(i,e)){e=i[e];break}return t.length&&(e=e.replace(/\$(\$|\d*)/g,(i,s)=>{if(s=="$")return"$";let r=+(s||1);return!r||r>t.length?i:t[r-1]})),e}languageDataAt(e,t,i=-1){let s=[];for(let r of this.facet(ka))for(let o of r(this,t,i))Object.prototype.hasOwnProperty.call(o,e)&&s.push(o[e]);return s}charCategorizer(e){return bf(this.languageDataAt("wordChars",e).join(""))}wordAt(e){let{text:t,from:i,length:s}=this.doc.lineAt(e),r=this.charCategorizer(e),o=e-i,l=e-i;for(;o>0;){let a=Oe(t,o,!1);if(r(t.slice(a,o))!=Re.Word)break;o=a}for(;ln.length?n[0]:4});N.lineSeparator=xa;N.readOnly=Ma;N.phrases=D.define({compare(n,e){let t=Object.keys(n),i=Object.keys(e);return t.length==i.length&&t.every(s=>n[s]==e[s])}});N.languageData=ka;N.changeFilter=Sa;N.transactionFilter=Ca;N.transactionExtender=Aa;Fn.reconfigure=R.define();function _t(n,e,t={}){let i={};for(let s of n)for(let r of Object.keys(s)){let o=s[r],l=i[r];if(l===void 0)i[r]=o;else if(!(l===o||o===void 0))if(Object.hasOwnProperty.call(t,r))i[r]=t[r](l,o);else throw new Error("Config merge conflict for field "+r)}for(let s in e)i[s]===void 0&&(i[s]=e[s]);return i}class Bt{eq(e){return this==e}range(e,t=e){return _s.create(e,t,this)}}Bt.prototype.startSide=Bt.prototype.endSide=0;Bt.prototype.point=!1;Bt.prototype.mapMode=ce.TrackDel;let _s=class Oa{constructor(e,t,i){this.from=e,this.to=t,this.value=i}static create(e,t,i){return new Oa(e,t,i)}};function Vs(n,e){return n.from-e.from||n.value.startSide-e.value.startSide}class Mr{constructor(e,t,i,s){this.from=e,this.to=t,this.value=i,this.maxPoint=s}get length(){return this.to[this.to.length-1]}findIndex(e,t,i,s=0){let r=i?this.to:this.from;for(let o=s,l=r.length;;){if(o==l)return o;let a=o+l>>1,h=r[a]-e||(i?this.value[a].endSide:this.value[a].startSide)-t;if(a==o)return h>=0?o:l;h>=0?l=a:o=a+1}}between(e,t,i,s){for(let r=this.findIndex(t,-1e9,!0),o=this.findIndex(i,1e9,!1,r);rd||u==d&&h.startSide>0&&h.endSide<=0)continue;(d-u||h.endSide-h.startSide)<0||(o<0&&(o=u),h.point&&(l=Math.max(l,d-u)),i.push(h),s.push(u-o),r.push(d-o))}return{mapped:i.length?new Mr(s,r,i,l):null,pos:o}}}class 
F{constructor(e,t,i,s){this.chunkPos=e,this.chunk=t,this.nextLayer=i,this.maxPoint=s}static create(e,t,i,s){return new F(e,t,i,s)}get length(){let e=this.chunk.length-1;return e<0?0:Math.max(this.chunkEnd(e),this.nextLayer.length)}get size(){if(this.isEmpty)return 0;let e=this.nextLayer.size;for(let t of this.chunk)e+=t.value.length;return e}chunkEnd(e){return this.chunkPos[e]+this.chunk[e].length}update(e){let{add:t=[],sort:i=!1,filterFrom:s=0,filterTo:r=this.length}=e,o=e.filter;if(t.length==0&&!o)return this;if(i&&(t=t.slice().sort(Vs)),this.isEmpty)return t.length?F.of(t):this;let l=new Ba(this,null,-1).goto(0),a=0,h=[],c=new Pt;for(;l.value||a=0){let f=t[a++];c.addInner(f.from,f.to,f.value)||h.push(f)}else l.rangeIndex==1&&l.chunkIndexthis.chunkEnd(l.chunkIndex)||rl.to||r=r&&e<=r+o.length&&o.between(r,e-r,t-r,i)===!1)return}this.nextLayer.between(e,t,i)}}iter(e=0){return Ti.from([this]).goto(e)}get isEmpty(){return this.nextLayer==this}static iter(e,t=0){return Ti.from(e).goto(t)}static compare(e,t,i,s,r=-1){let o=e.filter(f=>f.maxPoint>0||!f.isEmpty&&f.maxPoint>=r),l=t.filter(f=>f.maxPoint>0||!f.isEmpty&&f.maxPoint>=r),a=ao(o,l,i),h=new fi(o,a,r),c=new fi(l,a,r);i.iterGaps((f,u,d)=>ho(h,f,c,u,d,s)),i.empty&&i.length==0&&ho(h,0,c,0,0,s)}static eq(e,t,i=0,s){s==null&&(s=1e9);let r=e.filter(c=>!c.isEmpty&&t.indexOf(c)<0),o=t.filter(c=>!c.isEmpty&&e.indexOf(c)<0);if(r.length!=o.length)return!1;if(!r.length)return!0;let l=ao(r,o),a=new fi(r,l,0).goto(i),h=new fi(o,l,0).goto(i);for(;;){if(a.to!=h.to||!Fs(a.active,h.active)||a.point&&(!h.point||!a.point.eq(h.point)))return!1;if(a.to>s)return!0;a.next(),h.next()}}static spans(e,t,i,s,r=-1){let o=new fi(e,null,r).goto(t),l=t,a=o.openStart;for(;;){let h=Math.min(o.to,i);if(o.point?(s.point(l,h,o.point,o.activeForPoint(o.to),a,o.pointRank),a=o.openEnd(h)+(o.to>h?1:0)):h>l&&(s.span(l,h,o.active,a),a=o.openEnd(h)),o.to>i)break;l=o.to,o.next()}return a}static of(e,t=!1){let i=new Pt;for(let s of e instanceof _s?[e]:t?wf(e):e)i.add(s.from,s.to,s.value);return i.finish()}}F.empty=new F([],[],null,-1);function wf(n){if(n.length>1)for(let e=n[0],t=1;t0)return n.slice().sort(Vs);e=i}return n}F.empty.nextLayer=F.empty;class Pt{constructor(){this.chunks=[],this.chunkPos=[],this.chunkStart=-1,this.last=null,this.lastFrom=-1e9,this.lastTo=-1e9,this.from=[],this.to=[],this.value=[],this.maxPoint=-1,this.setMaxPoint=-1,this.nextLayer=null}finishChunk(e){this.chunks.push(new Mr(this.from,this.to,this.value,this.maxPoint)),this.chunkPos.push(this.chunkStart),this.chunkStart=-1,this.setMaxPoint=Math.max(this.setMaxPoint,this.maxPoint),this.maxPoint=-1,e&&(this.from=[],this.to=[],this.value=[])}add(e,t,i){this.addInner(e,t,i)||(this.nextLayer||(this.nextLayer=new Pt)).add(e,t,i)}addInner(e,t,i){let s=e-this.lastTo||i.startSide-this.last.endSide;if(s<=0&&(e-this.lastFrom||i.startSide-this.last.startSide)<0)throw new Error("Ranges must be added sorted by `from` position and `startSide`");return s<0?!1:(this.from.length==250&&this.finishChunk(!0),this.chunkStart<0&&(this.chunkStart=e),this.from.push(e-this.chunkStart),this.to.push(t-this.chunkStart),this.last=i,this.lastFrom=e,this.lastTo=t,this.value.push(i),i.point&&(this.maxPoint=Math.max(this.maxPoint,t-e)),!0)}addChunk(e,t){if((e-this.lastTo||t.value[0].startSide-this.last.endSide)<0)return!1;this.from.length&&this.finishChunk(!0),this.setMaxPoint=Math.max(this.setMaxPoint,t.maxPoint),this.chunks.push(t),this.chunkPos.push(e);let i=t.value.length-1;return 
this.last=t.value[i],this.lastFrom=t.from[i]+e,this.lastTo=t.to[i]+e,!0}finish(){return this.finishInner(F.empty)}finishInner(e){if(this.from.length&&this.finishChunk(!1),this.chunks.length==0)return e;let t=F.create(this.chunkPos,this.chunks,this.nextLayer?this.nextLayer.finishInner(e):e,this.setMaxPoint);return this.from=null,t}}function ao(n,e,t){let i=new Map;for(let r of n)for(let o=0;o=this.minPoint)break}}setRangeIndex(e){if(e==this.layer.chunk[this.chunkIndex].value.length){if(this.chunkIndex++,this.skip)for(;this.chunkIndex=i&&s.push(new Ba(o,t,i,r));return s.length==1?s[0]:new Ti(s)}get startSide(){return this.value?this.value.startSide:0}goto(e,t=-1e9){for(let i of this.heap)i.goto(e,t);for(let i=this.heap.length>>1;i>=0;i--)es(this.heap,i);return this.next(),this}forward(e,t){for(let i of this.heap)i.forward(e,t);for(let i=this.heap.length>>1;i>=0;i--)es(this.heap,i);(this.to-e||this.value.endSide-t)<0&&this.next()}next(){if(this.heap.length==0)this.from=this.to=1e9,this.value=null,this.rank=-1;else{let e=this.heap[0];this.from=e.from,this.to=e.to,this.value=e.value,this.rank=e.rank,e.value&&e.next(),es(this.heap,0)}}}function es(n,e){for(let t=n[e];;){let i=(e<<1)+1;if(i>=n.length)break;let s=n[i];if(i+1=0&&(s=n[i+1],i++),t.compare(s)<0)break;n[i]=t,n[e]=s,e=i}}class fi{constructor(e,t,i){this.minPoint=i,this.active=[],this.activeTo=[],this.activeRank=[],this.minActive=-1,this.point=null,this.pointFrom=0,this.pointRank=0,this.to=-1e9,this.endSide=0,this.openStart=-1,this.cursor=Ti.from(e,t,i)}goto(e,t=-1e9){return this.cursor.goto(e,t),this.active.length=this.activeTo.length=this.activeRank.length=0,this.minActive=-1,this.to=e,this.endSide=t,this.openStart=-1,this.next(),this}forward(e,t){for(;this.minActive>-1&&(this.activeTo[this.minActive]-e||this.active[this.minActive].endSide-t)<0;)this.removeActive(this.minActive);this.cursor.forward(e,t)}removeActive(e){Ki(this.active,e),Ki(this.activeTo,e),Ki(this.activeRank,e),this.minActive=co(this.active,this.activeTo)}addActive(e){let t=0,{value:i,to:s,rank:r}=this.cursor;for(;t-1&&(this.activeTo[r]-this.cursor.from||this.active[r].endSide-this.cursor.startSide)<0){if(this.activeTo[r]>e){this.to=this.activeTo[r],this.endSide=this.active[r].endSide;break}this.removeActive(r),i&&Ki(i,r)}else if(this.cursor.value)if(this.cursor.from>e){this.to=this.cursor.from,this.endSide=this.cursor.startSide;break}else{let o=this.cursor.value;if(!o.point)this.addActive(i),this.cursor.frome&&s++,this.cursor.next();else if(t&&this.cursor.to==this.to&&this.cursor.from=0&&!(this.activeRank[i]e||this.activeTo[i]==e&&this.active[i].endSide>=this.point.endSide)&&t.push(this.active[i]);return t.reverse()}openEnd(e){let t=0;for(let i=this.activeTo.length-1;i>=0&&this.activeTo[i]>e;i--)t++;return t}}function ho(n,e,t,i,s,r){n.goto(e),t.goto(i);let o=i+s,l=i,a=i-e;for(;;){let h=n.to+a-t.to||n.endSide-t.endSide,c=h<0?n.to+a:t.to,f=Math.min(c,o);if(n.point||t.point?n.point&&t.point&&(n.point==t.point||n.point.eq(t.point))&&Fs(n.activeForPoint(n.to+a),t.activeForPoint(t.to))||r.comparePoint(l,f,n.point,t.point):f>l&&!Fs(n.active,t.active)&&r.compareRange(l,f,n.active,t.active),c>o)break;l=c,h<=0&&n.next(),h>=0&&t.next()}}function Fs(n,e){if(n.length!=e.length)return!1;for(let t=0;t=e;i--)n[i+1]=n[i];n[e]=t}function co(n,e){let t=-1,i=1e9;for(let s=0;s=e)return s;if(s==n.length)break;r+=n.charCodeAt(s)==9?t-r%t:1,s=Oe(n,s)}return i===!0?-1:n.length}const Ws="ͼ",fo=typeof Symbol>"u"?"__"+Ws:Symbol.for(Ws),zs=typeof 
Symbol>"u"?"__styleSet"+Math.floor(Math.random()*1e8):Symbol("styleSet"),uo=typeof globalThis<"u"?globalThis:typeof window<"u"?window:{};class mt{constructor(e,t){this.rules=[];let{finish:i}=t||{};function s(o){return/^@/.test(o)?[o]:o.split(/,\s*/)}function r(o,l,a,h){let c=[],f=/^@(\w+)\b/.exec(o[0]),u=f&&f[1]=="keyframes";if(f&&l==null)return a.push(o[0]+";");for(let d in l){let p=l[d];if(/&/.test(d))r(d.split(/,\s*/).map(g=>o.map(y=>g.replace(/&/,y))).reduce((g,y)=>g.concat(y)),p,a);else if(p&&typeof p=="object"){if(!f)throw new RangeError("The value of a property ("+d+") should be a primitive value.");r(s(d),p,c,u)}else p!=null&&c.push(d.replace(/_.*/,"").replace(/[A-Z]/g,g=>"-"+g.toLowerCase())+": "+p+";")}(c.length||u)&&a.push((i&&!f&&!h?o.map(i):o).join(", ")+" {"+c.join(" ")+"}")}for(let o in e)r(s(o),e[o],this.rules)}getRules(){return this.rules.join(` -`)}static newName(){let e=uo[fo]||1;return uo[fo]=e+1,Ws+e.toString(36)}static mount(e,t){(e[zs]||new kf(e)).mount(Array.isArray(t)?t:[t])}}let Gi=null;class kf{constructor(e){if(!e.head&&e.adoptedStyleSheets&&typeof CSSStyleSheet<"u"){if(Gi)return e.adoptedStyleSheets=[Gi.sheet].concat(e.adoptedStyleSheets),e[zs]=Gi;this.sheet=new CSSStyleSheet,e.adoptedStyleSheets=[this.sheet].concat(e.adoptedStyleSheets),Gi=this}else{this.styleTag=(e.ownerDocument||e).createElement("style");let t=e.head||e;t.insertBefore(this.styleTag,t.firstChild)}this.modules=[],e[zs]=this}mount(e){let t=this.sheet,i=0,s=0;for(let r=0;r-1&&(this.modules.splice(l,1),s--,l=-1),l==-1){if(this.modules.splice(s++,0,o),t)for(let a=0;a",191:"?",192:"~",219:"{",220:"|",221:"}",222:'"'},po=typeof navigator<"u"&&/Chrome\/(\d+)/.exec(navigator.userAgent),vf=typeof navigator<"u"&&/Mac/.test(navigator.platform),xf=typeof navigator<"u"&&/MSIE \d|Trident\/(?:[7-9]|\d{2,})\..*rv:(\d+)/.exec(navigator.userAgent),Sf=vf||po&&+po[1]<57;for(var he=0;he<10;he++)gt[48+he]=gt[96+he]=String(he);for(var he=1;he<=24;he++)gt[he+111]="F"+he;for(var he=65;he<=90;he++)gt[he]=String.fromCharCode(he+32),Oi[he]=String.fromCharCode(he);for(var ts in gt)Oi.hasOwnProperty(ts)||(Oi[ts]=gt[ts]);function Cf(n){var e=Sf&&(n.ctrlKey||n.altKey||n.metaKey)||xf&&n.shiftKey&&n.key&&n.key.length==1||n.key=="Unidentified",t=!e&&n.key||(n.shiftKey?Oi:gt)[n.keyCode]||n.key||"Unidentified";return t=="Esc"&&(t="Escape"),t=="Del"&&(t="Delete"),t=="Left"&&(t="ArrowLeft"),t=="Up"&&(t="ArrowUp"),t=="Right"&&(t="ArrowRight"),t=="Down"&&(t="ArrowDown"),t}function xn(n){let e;return n.nodeType==11?e=n.getSelection?n:n.ownerDocument:e=n,e.getSelection()}function Xt(n,e){return e?n==e||n.contains(e.nodeType!=1?e.parentNode:e):!1}function Af(n){let e=n.activeElement;for(;e&&e.shadowRoot;)e=e.shadowRoot.activeElement;return e}function dn(n,e){if(!e.anchorNode)return!1;try{return Xt(n,e.anchorNode)}catch{return!1}}function Bi(n){return n.nodeType==3?Zt(n,0,n.nodeValue.length).getClientRects():n.nodeType==1?n.getClientRects():[]}function Sn(n,e,t,i){return t?mo(n,e,t,i,-1)||mo(n,e,t,i,1):!1}function Cn(n){for(var e=0;;e++)if(n=n.previousSibling,!n)return e}function mo(n,e,t,i,s){for(;;){if(n==t&&e==i)return!0;if(e==(s<0?0:Pi(n))){if(n.nodeName=="DIV")return!1;let r=n.parentNode;if(!r||r.nodeType!=1)return!1;e=Cn(n)+(s<0?0:1),n=r}else if(n.nodeType==1){if(n=n.childNodes[e+(s<0?-1:0)],n.nodeType==1&&n.contentEditable=="false")return!1;e=s<0?Pi(n):0}else return!1}}function Pi(n){return n.nodeType==3?n.nodeValue.length:n.childNodes.length}const Pa={left:0,right:0,top:0,bottom:0};function Dr(n,e){let 
t=e?n.left:n.right;return{left:t,right:t,top:n.top,bottom:n.bottom}}function Mf(n){return{left:0,right:n.innerWidth,top:0,bottom:n.innerHeight}}function Df(n,e,t,i,s,r,o,l){let a=n.ownerDocument,h=a.defaultView||window;for(let c=n;c;)if(c.nodeType==1){let f,u=c==a.body;if(u)f=Mf(h);else{if(c.scrollHeight<=c.clientHeight&&c.scrollWidth<=c.clientWidth){c=c.assignedSlot||c.parentNode;continue}let g=c.getBoundingClientRect();f={left:g.left,right:g.left+c.clientWidth,top:g.top,bottom:g.top+c.clientHeight}}let d=0,p=0;if(s=="nearest")e.top0&&e.bottom>f.bottom+p&&(p=e.bottom-f.bottom+p+o)):e.bottom>f.bottom&&(p=e.bottom-f.bottom+o,t<0&&e.top-p0&&e.right>f.right+d&&(d=e.right-f.right+d+r)):e.right>f.right&&(d=e.right-f.right+r,t<0&&e.leftt)return f.domBoundsAround(e,t,h);if(u>=e&&s==-1&&(s=a,r=h),h>t&&f.dom.parentNode==this.dom){o=a,l=c;break}c=u,h=u+f.breakAfter}return{from:r,to:l<0?i+this.length:l,startDOM:(s?this.children[s-1].dom.nextSibling:null)||this.dom.firstChild,endDOM:o=0?this.children[o].dom:null}}markDirty(e=!1){this.dirty|=2,this.markParentsDirty(e)}markParentsDirty(e){for(let t=this.parent;t;t=t.parent){if(e&&(t.dirty|=2),t.dirty&1)return;t.dirty|=1,e=!1}}setParent(e){this.parent!=e&&(this.parent=e,this.dirty&&this.markParentsDirty(!0))}setDOM(e){this.dom&&(this.dom.cmView=null),this.dom=e,e.cmView=this}get rootView(){for(let e=this;;){let t=e.parent;if(!t)return e;e=t}}replaceChildren(e,t,i=Tr){this.markDirty();for(let s=e;sthis.pos||e==this.pos&&(t>0||this.i==0||this.children[this.i-1].breakAfter))return this.off=e-this.pos,this;let i=this.children[--this.i];this.pos-=i.length+i.breakAfter}}}function Ia(n,e,t,i,s,r,o,l,a){let{children:h}=n,c=h.length?h[e]:null,f=r.length?r[r.length-1]:null,u=f?f.breakAfter:o;if(!(e==i&&c&&!o&&!u&&r.length<2&&c.merge(t,s,r.length?f:null,t==0,l,a))){if(i0&&(!o&&r.length&&c.merge(t,c.length,r[0],!1,l,0)?c.breakAfter=r.shift().breakAfter:(t2);var A={mac:ko||/Mac/.test(Te.platform),windows:/Win/.test(Te.platform),linux:/Linux|X11/.test(Te.platform),ie:Hn,ie_version:_a?qs.documentMode||6:Ks?+Ks[1]:js?+js[1]:0,gecko:bo,gecko_version:bo?+(/Firefox\/(\d+)/.exec(Te.userAgent)||[0,0])[1]:0,chrome:!!is,chrome_version:is?+is[1]:0,ios:ko,android:/Android\b/.test(Te.userAgent),webkit:wo,safari:Va,webkit_version:wo?+(/\bAppleWebKit\/(\d+)/.exec(navigator.userAgent)||[0,0])[1]:0,tabSize:qs.documentElement.style.tabSize!=null?"tab-size":"-moz-tab-size"};const Pf=256;class yt extends K{constructor(e){super(),this.text=e}get length(){return this.text.length}createDOM(e){this.setDOM(e||document.createTextNode(this.text))}sync(e){this.dom||this.createDOM(),this.dom.nodeValue!=this.text&&(e&&e.node==this.dom&&(e.written=!0),this.dom.nodeValue=this.text)}reuseDOM(e){e.nodeType==3&&this.createDOM(e)}merge(e,t,i){return i&&(!(i instanceof yt)||this.length-(t-e)+i.length>Pf)?!1:(this.text=this.text.slice(0,e)+(i?i.text:"")+this.text.slice(t),this.markDirty(),!0)}split(e){let t=new yt(this.text.slice(e));return this.text=this.text.slice(0,e),this.markDirty(),t}localPosFromDOM(e,t){return e==this.dom?t:t?this.text.length:0}domAtPos(e){return new ye(this.dom,e)}domBoundsAround(e,t,i){return{from:i,to:i+this.length,startDOM:this.dom,endDOM:this.dom.nextSibling}}coordsAt(e,t){return Us(this.dom,e,t)}}class et extends K{constructor(e,t=[],i=0){super(),this.mark=e,this.children=t,this.length=i;for(let s of t)s.setParent(this)}setAttrs(e){if(Ra(e),this.mark.class&&(e.className=this.mark.class),this.mark.attrs)for(let t in this.mark.attrs)e.setAttribute(t,this.mark.attrs[t]);return 
e}reuseDOM(e){e.nodeName==this.mark.tagName.toUpperCase()&&(this.setDOM(e),this.dirty|=6)}sync(e){this.dom?this.dirty&4&&this.setAttrs(this.dom):this.setDOM(this.setAttrs(document.createElement(this.mark.tagName))),super.sync(e)}merge(e,t,i,s,r,o){return i&&(!(i instanceof et&&i.mark.eq(this.mark))||e&&r<=0||te&&t.push(i=e&&(s=r),i=a,r++}let o=this.length-e;return this.length=e,s>-1&&(this.children.length=s,this.markDirty()),new et(this.mark,t,o)}domAtPos(e){return Wa(this,e)}coordsAt(e,t){return qa(this,e,t)}}function Us(n,e,t){let i=n.nodeValue.length;e>i&&(e=i);let s=e,r=e,o=0;e==0&&t<0||e==i&&t>=0?A.chrome||A.gecko||(e?(s--,o=1):r=0)?0:l.length-1];return A.safari&&!o&&a.width==0&&(a=Array.prototype.find.call(l,h=>h.width)||a),o?Dr(a,o<0):a||null}class ct extends K{constructor(e,t,i){super(),this.widget=e,this.length=t,this.side=i,this.prevWidget=null}static create(e,t,i){return new(e.customView||ct)(e,t,i)}split(e){let t=ct.create(this.widget,this.length-e,this.side);return this.length-=e,t}sync(){(!this.dom||!this.widget.updateDOM(this.dom))&&(this.dom&&this.prevWidget&&this.prevWidget.destroy(this.dom),this.prevWidget=null,this.setDOM(this.widget.toDOM(this.editorView)),this.dom.contentEditable="false")}getSide(){return this.side}merge(e,t,i,s,r,o){return i&&(!(i instanceof ct)||!this.widget.compare(i.widget)||e>0&&r<=0||t0?i.length-1:0;s=i[r],!(e>0?r==0:r==i.length-1||s.top0?-1:1);return this.length?s:Dr(s,this.side>0)}get isEditable(){return!1}destroy(){super.destroy(),this.dom&&this.widget.destroy(this.dom)}}class Fa extends ct{domAtPos(e){let{topView:t,text:i}=this.widget;return t?Gs(e,0,t,i,(s,r)=>s.domAtPos(r),s=>new ye(i,Math.min(s,i.nodeValue.length))):new ye(i,Math.min(e,i.nodeValue.length))}sync(){this.setDOM(this.widget.toDOM())}localPosFromDOM(e,t){let{topView:i,text:s}=this.widget;return i?Ha(e,t,i,s):Math.min(t,this.length)}ignoreMutation(){return!1}get overrideDOMText(){return null}coordsAt(e,t){let{topView:i,text:s}=this.widget;return i?Gs(e,t,i,s,(r,o,l)=>r.coordsAt(o,l),(r,o)=>Us(s,r,o)):Us(s,e,t)}destroy(){var e;super.destroy(),(e=this.widget.topView)===null||e===void 0||e.destroy()}get isEditable(){return!0}canReuseDOM(){return!0}}function Gs(n,e,t,i,s,r){if(t instanceof et){for(let o=t.dom.firstChild;o;o=o.nextSibling){let l=K.get(o);if(!l)return r(n,e);let a=Xt(o,i),h=l.length+(a?i.nodeValue.length:0);if(n0?-1:1);return i&&i.topt.top?{left:t.left,right:t.right,top:i.top,bottom:i.bottom}:t}get overrideDOMText(){return _.empty}}yt.prototype.children=ct.prototype.children=Qt.prototype.children=Tr;function Ef(n,e){let t=n.parent,i=t?t.children.indexOf(n):-1;for(;t&&i>=0;)if(e<0?i>0:ir&&e0;r--){let o=i[r-1];if(o.dom.parentNode==t)return o.domAtPos(o.length)}for(let r=s;r0&&e instanceof et&&s.length&&(i=s[s.length-1])instanceof et&&i.mark.eq(e.mark)?za(i,e.children[0],t-1):(s.push(e),e.setParent(n)),n.length+=e.length}function qa(n,e,t){let i=null,s=-1,r=null,o=-1;function l(h,c){for(let f=0,u=0;f=c&&(d.children.length?l(d,c-u):!r&&(p>c||u==p&&d.getSide()>0)?(r=d,o=c-u):(u0?3e8:-4e8:t>0?1e8:-1e8,new Et(e,t,t,i,e.widget||null,!1)}static replace(e){let t=!!e.block,i,s;if(e.isBlockGap)i=-5e8,s=4e8;else{let{start:r,end:o}=ja(e,t);i=(r?t?-3e8:-1:5e8)-1,s=(o?t?2e8:1:-6e8)+1}return new Et(e,i,s,t,e.widget||null,!0)}static line(e){return new Hi(e)}static set(e,t=!1){return F.of(e,t)}hasHeight(){return this.widget?this.widget.estimatedHeight>-1:!1}}E.none=F.empty;class Wn extends 
E{constructor(e){let{start:t,end:i}=ja(e);super(t?-1:5e8,i?1:-6e8,null,e),this.tagName=e.tagName||"span",this.class=e.class||"",this.attrs=e.attributes||null}eq(e){return this==e||e instanceof Wn&&this.tagName==e.tagName&&this.class==e.class&&Or(this.attrs,e.attrs)}range(e,t=e){if(e>=t)throw new RangeError("Mark decorations may not be empty");return super.range(e,t)}}Wn.prototype.point=!1;class Hi extends E{constructor(e){super(-2e8,-2e8,null,e)}eq(e){return e instanceof Hi&&Or(this.spec.attributes,e.spec.attributes)}range(e,t=e){if(t!=e)throw new RangeError("Line decoration ranges must be zero-length");return super.range(e,t)}}Hi.prototype.mapMode=ce.TrackBefore;Hi.prototype.point=!0;class Et extends E{constructor(e,t,i,s,r,o){super(t,i,r,e),this.block=s,this.isReplace=o,this.mapMode=s?t<=0?ce.TrackBefore:ce.TrackAfter:ce.TrackDel}get type(){return this.startSide=5}eq(e){return e instanceof Et&&Lf(this.widget,e.widget)&&this.block==e.block&&this.startSide==e.startSide&&this.endSide==e.endSide}range(e,t=e){if(this.isReplace&&(e>t||e==t&&this.startSide>0&&this.endSide<=0))throw new RangeError("Invalid range for replacement decoration");if(!this.isReplace&&t!=e)throw new RangeError("Widget decorations can only have zero-length ranges");return super.range(e,t)}}Et.prototype.point=!0;function ja(n,e=!1){let{inclusiveStart:t,inclusiveEnd:i}=n;return t==null&&(t=n.inclusive),i==null&&(i=n.inclusive),{start:t??e,end:i??e}}function Lf(n,e){return n==e||!!(n&&e&&n.compare(e))}function Ys(n,e,t,i=0){let s=t.length-1;s>=0&&t[s]+i>=n?t[s]=Math.max(t[s],e):t.push(n,e)}class ke extends K{constructor(){super(...arguments),this.children=[],this.length=0,this.prevAttrs=void 0,this.attrs=null,this.breakAfter=0}merge(e,t,i,s,r,o){if(i){if(!(i instanceof ke))return!1;this.dom||i.transferDOM(this)}return s&&this.setDeco(i?i.attrs:null),Na(this,e,t,i?i.children:[],r,o),!0}split(e){let t=new ke;if(t.breakAfter=this.breakAfter,this.length==0)return t;let{i,off:s}=this.childPos(e);s&&(t.append(this.children[i].split(s),0),this.children[i].merge(s,this.children[i].length,null,!1,0,0),i++);for(let r=i;r0&&this.children[i-1].length==0;)this.children[--i].destroy();return this.children.length=i,this.markDirty(),this.length=e,t}transferDOM(e){this.dom&&(this.markDirty(),e.setDOM(this.dom),e.prevAttrs=this.prevAttrs===void 0?this.attrs:this.prevAttrs,this.prevAttrs=void 0,this.dom=null)}setDeco(e){Or(this.attrs,e)||(this.dom&&(this.prevAttrs=this.attrs,this.markDirty()),this.attrs=e)}append(e,t){za(this,e,t)}addLineDeco(e){let t=e.spec.attributes,i=e.spec.class;t&&(this.attrs=$s(t,this.attrs||{})),i&&(this.attrs=$s({class:i},this.attrs||{}))}domAtPos(e){return Wa(this,e)}reuseDOM(e){e.nodeName=="DIV"&&(this.setDOM(e),this.dirty|=6)}sync(e){var t;this.dom?this.dirty&4&&(Ra(this.dom),this.dom.className="cm-line",this.prevAttrs=this.attrs?null:void 0):(this.setDOM(document.createElement("div")),this.dom.className="cm-line",this.prevAttrs=this.attrs?null:void 0),this.prevAttrs!==void 0&&(Js(this.dom,this.prevAttrs,this.attrs),this.dom.classList.add("cm-line"),this.prevAttrs=void 0),super.sync(e);let i=this.dom.lastChild;for(;i&&K.get(i)instanceof et;)i=i.lastChild;if(!i||!this.length||i.nodeName!="BR"&&((t=K.get(i))===null||t===void 0?void 0:t.isEditable)==!1&&(!A.ios||!this.children.some(s=>s instanceof yt))){let s=document.createElement("BR");s.cmIgnore=!0,this.dom.appendChild(s)}}measureTextSize(){if(this.children.length==0||this.length>20)return null;let e=0;for(let t of this.children){if(!(t instanceof yt)||/[^ 
-~]/.test(t.text))return null;let i=Bi(t.dom);if(i.length!=1)return null;e+=i[0].width}return e?{lineHeight:this.dom.getBoundingClientRect().height,charWidth:e/this.length}:null}coordsAt(e,t){return qa(this,e,t)}become(e){return!1}get type(){return W.Text}static find(e,t){for(let i=0,s=0;i=t){if(r instanceof ke)return r;if(o>t)break}s=o+r.breakAfter}return null}}class Ot extends K{constructor(e,t,i){super(),this.widget=e,this.length=t,this.type=i,this.breakAfter=0,this.prevWidget=null}merge(e,t,i,s,r,o){return i&&(!(i instanceof Ot)||!this.widget.compare(i.widget)||e>0&&r<=0||t0;){if(this.textOff==this.text.length){let{value:r,lineBreak:o,done:l}=this.cursor.next(this.skip);if(this.skip=0,l)throw new Error("Ran out of text content when drawing inline views");if(o){this.posCovered()||this.getLine(),this.content.length?this.content[this.content.length-1].breakAfter=1:this.breakAtStart=1,this.flushBuffer([]),this.curLine=null,e--;continue}else this.text=r,this.textOff=0}let s=Math.min(this.text.length-this.textOff,e,512);this.flushBuffer(t.slice(0,i)),this.getLine().append($i(new yt(this.text.slice(this.textOff,this.textOff+s)),t),i),this.atCursorPos=!0,this.textOff+=s,e-=s,i=0}}span(e,t,i,s){this.buildText(t-e,i,s),this.pos=t,this.openStart<0&&(this.openStart=s)}point(e,t,i,s,r,o){if(this.disallowBlockEffectsFor[o]&&i instanceof Et){if(i.block)throw new RangeError("Block decorations may not be specified via plugins");if(t>this.doc.lineAt(this.pos).to)throw new RangeError("Decorations that replace line breaks may not be specified via plugins")}let l=t-e;if(i instanceof Et)if(i.block){let{type:a}=i;a==W.WidgetAfter&&!this.posCovered()&&this.getLine(),this.addBlockWidget(new Ot(i.widget||new vo("div"),l,a))}else{let a=ct.create(i.widget||new vo("span"),l,l?0:i.startSide),h=this.atCursorPos&&!a.isEditable&&r<=s.length&&(e0),c=!a.isEditable&&(en.some(e=>e)}),Xa=D.define({combine:n=>n.some(e=>e)});class An{constructor(e,t="nearest",i="nearest",s=5,r=5){this.range=e,this.y=t,this.x=i,this.yMargin=s,this.xMargin=r}map(e){return e.empty?this:new An(this.range.map(e),this.y,this.x,this.yMargin,this.xMargin)}}const xo=R.define({map:(n,e)=>n.map(e)});function He(n,e,t){let i=n.facet($a);i.length?i[0](e):window.onerror?window.onerror(String(e),t,void 0,void 0,e):t?console.error(t+":",e):console.error(e)}const zn=D.define({combine:n=>n.length?n[0]:!0});let If=0;const yi=D.define();class be{constructor(e,t,i,s){this.id=e,this.create=t,this.domEventHandlers=i,this.extension=s(this)}static define(e,t){const{eventHandlers:i,provide:s,decorations:r}=t||{};return new be(If++,e,i,o=>{let l=[yi.of(o)];return r&&l.push(Ei.of(a=>{let h=a.plugin(o);return h?r(h):E.none})),s&&l.push(s(o)),l})}static fromClass(e,t){return be.define(i=>new e(i),t)}}class ns{constructor(e){this.spec=e,this.mustUpdate=null,this.value=null}update(e){if(this.value){if(this.mustUpdate){let t=this.mustUpdate;if(this.mustUpdate=null,this.value.update)try{this.value.update(t)}catch(i){if(He(t.state,i,"CodeMirror plugin crashed"),this.value.destroy)try{this.value.destroy()}catch{}this.deactivate()}}}else if(this.spec)try{this.value=this.spec.create(e)}catch(t){He(e.state,t,"CodeMirror plugin crashed"),this.deactivate()}return this}destroy(e){var t;if(!((t=this.value)===null||t===void 0)&&t.destroy)try{this.value.destroy()}catch(i){He(e.state,i,"CodeMirror plugin crashed")}}deactivate(){this.spec=this.value=null}}const Za=D.define(),Qa=D.define(),Ei=D.define(),eh=D.define(),th=D.define(),bi=D.define();class 
Qe{constructor(e,t,i,s){this.fromA=e,this.toA=t,this.fromB=i,this.toB=s}join(e){return new Qe(Math.min(this.fromA,e.fromA),Math.max(this.toA,e.toA),Math.min(this.fromB,e.fromB),Math.max(this.toB,e.toB))}addToSet(e){let t=e.length,i=this;for(;t>0;t--){let s=e[t-1];if(!(s.fromA>i.toA)){if(s.toAc)break;r+=2}if(!a)return i;new Qe(a.fromA,a.toA,a.fromB,a.toB).addToSet(i),o=a.toA,l=a.toB}}}class Mn{constructor(e,t,i){this.view=e,this.state=t,this.transactions=i,this.flags=0,this.startState=e.state,this.changes=ne.empty(this.startState.doc.length);for(let o of i)this.changes=this.changes.compose(o.changes);let s=[];this.changes.iterChangedRanges((o,l,a,h)=>s.push(new Qe(o,l,a,h))),this.changedRanges=s;let r=e.hasFocus;r!=e.inputState.notifiedFocused&&(e.inputState.notifiedFocused=r,this.flags|=1)}static create(e,t,i){return new Mn(e,t,i)}get viewportChanged(){return(this.flags&4)>0}get heightChanged(){return(this.flags&2)>0}get geometryChanged(){return this.docChanged||(this.flags&10)>0}get focusChanged(){return(this.flags&1)>0}get docChanged(){return!this.changes.empty}get selectionSet(){return this.transactions.some(e=>e.selection)}get empty(){return this.flags==0&&this.transactions.length==0}}var Z=function(n){return n[n.LTR=0]="LTR",n[n.RTL=1]="RTL",n}(Z||(Z={}));const Zs=Z.LTR,Nf=Z.RTL;function ih(n){let e=[];for(let t=0;t=t){if(l.level==i)return o;(r<0||(s!=0?s<0?l.fromt:e[r].level>l.level))&&(r=o)}}if(r<0)throw new RangeError("Index out of range");return r}}const X=[];function Wf(n,e){let t=n.length,i=e==Zs?1:2,s=e==Zs?2:1;if(!n||i==1&&!Hf.test(n))return nh(t);for(let o=0,l=i,a=i;o=0;u-=3)if(ze[u+1]==-c){let d=ze[u+2],p=d&2?i:d&4?d&1?s:i:0;p&&(X[o]=X[ze[u]]=p),l=u;break}}else{if(ze.length==189)break;ze[l++]=o,ze[l++]=h,ze[l++]=a}else if((f=X[o])==2||f==1){let u=f==i;a=u?0:1;for(let d=l-3;d>=0;d-=3){let p=ze[d+2];if(p&2)break;if(u)ze[d+2]|=2;else{if(p&4)break;ze[d+2]|=4}}}for(let o=0;ol;){let c=h,f=X[--h]!=2;for(;h>l&&f==(X[h-1]!=2);)h--;r.push(new Jt(h,c,f?2:1))}else r.push(new Jt(l,o,0))}else for(let o=0;o1)for(let a of this.points)a.node==e&&a.pos>this.text.length&&(a.pos-=o-1);i=r+o}}readNode(e){if(e.cmIgnore)return;let t=K.get(e),i=t&&t.overrideDOMText;if(i!=null){this.findPointInside(e,i.length);for(let s=i.iter();!s.next().done;)s.lineBreak?this.lineBreak():this.append(s.value)}else e.nodeType==3?this.readTextNode(e):e.nodeName=="BR"?e.nextSibling&&this.lineBreak():e.nodeType==1&&this.readRange(e.firstChild,null)}findPointBefore(e,t){for(let i of this.points)i.node==e&&e.childNodes[i.offset]==t&&(i.pos=this.text.length)}findPointInside(e,t){for(let i of this.points)(e.nodeType==3?i.node==e:e.contains(i.node))&&(i.pos=this.text.length+Math.min(t,i.offset))}}function So(n){return n.nodeType==1&&/^(DIV|P|LI|UL|OL|BLOCKQUOTE|DD|DT|H\d|SECTION|PRE)$/.test(n.nodeName)}class Co{constructor(e,t){this.node=e,this.offset=t,this.pos=-1}}class Ao extends K{constructor(e){super(),this.view=e,this.compositionDeco=E.none,this.decorations=[],this.dynamicDecorationMap=[],this.minWidth=0,this.minWidthFrom=0,this.minWidthTo=0,this.impreciseAnchor=null,this.impreciseHead=null,this.forceSelection=!1,this.lastUpdate=Date.now(),this.setDOM(e.contentDOM),this.children=[new ke],this.children[0].setParent(this),this.updateDeco(),this.updateInner([new Qe(0,0,0,e.state.doc.length)],0)}get editorView(){return this.view}get length(){return this.view.state.doc.length}update(e){let 
t=e.changedRanges;this.minWidth>0&&t.length&&(t.every(({fromA:o,toA:l})=>lthis.minWidthTo)?(this.minWidthFrom=e.changes.mapPos(this.minWidthFrom,1),this.minWidthTo=e.changes.mapPos(this.minWidthTo,1)):this.minWidth=this.minWidthFrom=this.minWidthTo=0),this.view.inputState.composing<0?this.compositionDeco=E.none:(e.transactions.length||this.dirty)&&(this.compositionDeco=jf(this.view,e.changes)),(A.ie||A.chrome)&&!this.compositionDeco.size&&e&&e.state.doc.lines!=e.startState.doc.lines&&(this.forceSelection=!0);let i=this.decorations,s=this.updateDeco(),r=$f(i,s,e.changes);return t=Qe.extendWithRanges(t,r),this.dirty==0&&t.length==0?!1:(this.updateInner(t,e.startState.doc.length),e.transactions.length&&(this.lastUpdate=Date.now()),!0)}updateInner(e,t){this.view.viewState.mustMeasureContent=!0,this.updateChildren(e,t);let{observer:i}=this.view;i.ignore(()=>{this.dom.style.height=this.view.viewState.contentHeight+"px",this.dom.style.flexBasis=this.minWidth?this.minWidth+"px":"";let r=A.chrome||A.ios?{node:i.selectionRange.focusNode,written:!1}:void 0;this.sync(r),this.dirty=0,r&&(r.written||i.selectionRange.focusNode!=r.node)&&(this.forceSelection=!0),this.dom.style.height=""});let s=[];if(this.view.viewport.from||this.view.viewport.to=0?e[s]:null;if(!r)break;let{fromA:o,toA:l,fromB:a,toB:h}=r,{content:c,breakAtStart:f,openStart:u,openEnd:d}=Br.build(this.view.state.doc,a,h,this.decorations,this.dynamicDecorationMap),{i:p,off:g}=i.findPos(l,1),{i:y,off:b}=i.findPos(o,-1);Ia(this,y,b,p,g,c,f,u,d)}}updateSelection(e=!1,t=!1){if((e||!this.view.observer.selectionRange.focusNode)&&this.view.observer.readSelectionRange(),!(t||this.mayControlSelection()))return;let i=this.forceSelection;this.forceSelection=!1;let s=this.view.state.selection.main,r=this.domAtPos(s.anchor),o=s.empty?r:this.domAtPos(s.head);if(A.gecko&&s.empty&&qf(r)){let a=document.createTextNode("");this.view.observer.ignore(()=>r.node.insertBefore(a,r.node.childNodes[r.offset]||null)),r=o=new ye(a,0),i=!0}let l=this.view.observer.selectionRange;(i||!l.focusNode||!Sn(r.node,r.offset,l.anchorNode,l.anchorOffset)||!Sn(o.node,o.offset,l.focusNode,l.focusOffset))&&(this.view.observer.ignore(()=>{A.android&&A.chrome&&this.dom.contains(l.focusNode)&&Jf(l.focusNode,this.dom)&&(this.dom.blur(),this.dom.focus({preventScroll:!0}));let a=xn(this.view.root);if(a)if(s.empty){if(A.gecko){let h=Uf(r.node,r.offset);if(h&&h!=3){let c=lh(r.node,r.offset,h==1?1:-1);c&&(r=new ye(c,h==1?0:c.nodeValue.length))}}a.collapse(r.node,r.offset),s.bidiLevel!=null&&l.cursorBidiLevel!=null&&(l.cursorBidiLevel=s.bidiLevel)}else if(a.extend){a.collapse(r.node,r.offset);try{a.extend(o.node,o.offset)}catch{}}else{let h=document.createRange();s.anchor>s.head&&([r,o]=[o,r]),h.setEnd(o.node,o.offset),h.setStart(r.node,r.offset),a.removeAllRanges(),a.addRange(h)}}),this.view.observer.setSelectionRange(r,o)),this.impreciseAnchor=r.precise?null:new ye(l.anchorNode,l.anchorOffset),this.impreciseHead=o.precise?null:new ye(l.focusNode,l.focusOffset)}enforceCursorAssoc(){if(this.compositionDeco.size)return;let{view:e}=this,t=e.state.selection.main,i=xn(e.root),{anchorNode:s,anchorOffset:r}=e.observer.selectionRange;if(!i||!t.empty||!t.assoc||!i.modify)return;let o=ke.find(this,t.head);if(!o)return;let l=o.posAtStart;if(t.head==l||t.head==l+o.length)return;let a=this.coordsAt(t.head,-1),h=this.coordsAt(t.head,1);if(!a||!h||a.bottom>h.top)return;let 
c=this.domAtPos(t.head+t.assoc);i.collapse(c.node,c.offset),i.modify("move",t.assoc<0?"forward":"backward","lineboundary"),e.observer.readSelectionRange();let f=e.observer.selectionRange;e.docView.posFromDOM(f.anchorNode,f.anchorOffset)!=t.from&&i.collapse(s,r)}mayControlSelection(){let e=this.view.root.activeElement;return e==this.dom||dn(this.dom,this.view.observer.selectionRange)&&!(e&&this.dom.contains(e))}nearest(e){for(let t=e;t;){let i=K.get(t);if(i&&i.rootView==this)return i;t=t.parentNode}return null}posFromDOM(e,t){let i=this.nearest(e);if(!i)throw new RangeError("Trying to find position for a DOM position outside of the document");return i.localPosFromDOM(e,t)+i.posAtStart}domAtPos(e){let{i:t,off:i}=this.childCursor().findPos(e,-1);for(;to||e==o&&r.type!=W.WidgetBefore&&r.type!=W.WidgetAfter&&(!s||t==2||this.children[s-1].breakAfter||this.children[s-1].type==W.WidgetBefore&&t>-2))return r.coordsAt(e-o,t);i=o}}measureVisibleLineHeights(e){let t=[],{from:i,to:s}=e,r=this.view.contentDOM.clientWidth,o=r>Math.max(this.view.scrollDOM.clientWidth,this.minWidth)+1,l=-1,a=this.view.textDirection==Z.LTR;for(let h=0,c=0;cs)break;if(h>=i){let d=f.dom.getBoundingClientRect();if(t.push(d.height),o){let p=f.dom.lastChild,g=p?Bi(p):[];if(g.length){let y=g[g.length-1],b=a?y.right-d.left:d.right-y.left;b>l&&(l=b,this.minWidth=r,this.minWidthFrom=h,this.minWidthTo=u)}}}h=u+f.breakAfter}return t}textDirectionAt(e){let{i:t}=this.childPos(e,1);return getComputedStyle(this.children[t].dom).direction=="rtl"?Z.RTL:Z.LTR}measureTextSize(){for(let s of this.children)if(s instanceof ke){let r=s.measureTextSize();if(r)return r}let e=document.createElement("div"),t,i;return e.className="cm-line",e.style.width="99999px",e.textContent="abc def ghi jkl mno pqr stu",this.view.observer.ignore(()=>{this.dom.appendChild(e);let s=Bi(e.firstChild)[0];t=e.getBoundingClientRect().height,i=s?s.width/27:7,e.remove()}),{lineHeight:t,charWidth:i}}childCursor(e=this.length){let t=this.children.length;return t&&(e-=this.children[--t].length),new La(this.children,e,t)}computeBlockGapDeco(){let e=[],t=this.view.viewState;for(let i=0,s=0;;s++){let r=s==t.viewports.length?null:t.viewports[s],o=r?r.from-1:this.length;if(o>i){let l=t.lineBlockAt(o).bottom-t.lineBlockAt(i).top;e.push(E.replace({widget:new Mo(l),block:!0,inclusive:!0,isBlockGap:!0}).range(i,o))}if(!r)break;i=r.to+1}return E.set(e)}updateDeco(){let e=this.view.state.facet(Ei).map((t,i)=>(this.dynamicDecorationMap[i]=typeof t=="function")?t(this.view):t);for(let t=e.length;tt.anchor?-1:1),s;if(!i)return;!t.empty&&(s=this.coordsAt(t.anchor,t.anchor>t.head?-1:1))&&(i={left:Math.min(i.left,s.left),top:Math.min(i.top,s.top),right:Math.max(i.right,s.right),bottom:Math.max(i.bottom,s.bottom)});let r=0,o=0,l=0,a=0;for(let c of this.view.state.facet(th).map(f=>f(this.view)))if(c){let{left:f,right:u,top:d,bottom:p}=c;f!=null&&(r=Math.max(r,f)),u!=null&&(o=Math.max(o,u)),d!=null&&(l=Math.max(l,d)),p!=null&&(a=Math.max(a,p))}let h={left:i.left-r,top:i.top-l,right:i.right+o,bottom:i.bottom+a};Df(this.view.scrollDOM,h,t.head0&&t<=0)n=n.childNodes[e-1],e=Pi(n);else if(n.nodeType==1&&e=0)n=n.childNodes[e],e=0;else return null}}function Uf(n,e){return n.nodeType!=1?0:(e&&n.childNodes[e-1].contentEditable=="false"?1:0)|(e0;){let h=Oe(s.text,o,!1);if(i(s.text.slice(h,o))!=a)break;o=h}for(;ln?e.left-n:Math.max(0,n-e.right)}function Zf(n,e){return e.top>n?e.top-n:Math.max(0,n-e.bottom)}function ss(n,e){return n.tope.top+1}function Do(n,e){return 
en.bottom?{top:n.top,left:n.left,right:n.right,bottom:e}:n}function er(n,e,t){let i,s,r,o,l=!1,a,h,c,f;for(let p=n.firstChild;p;p=p.nextSibling){let g=Bi(p);for(let y=0;yS||o==S&&r>v)&&(i=p,s=b,r=v,o=S,l=!v||(v>0?y0)),v==0?t>b.bottom&&(!c||c.bottomb.top)&&(h=p,f=b):c&&ss(c,b)?c=To(c,b.bottom):f&&ss(f,b)&&(f=Do(f,b.top))}}if(c&&c.bottom>=t?(i=a,s=c):f&&f.top<=t&&(i=h,s=f),!i)return{node:n,offset:0};let u=Math.max(s.left,Math.min(s.right,e));if(i.nodeType==3)return Oo(i,u,t);if(l&&i.contentEditable!="false")return er(i,u,t);let d=Array.prototype.indexOf.call(n.childNodes,i)+(e>=(s.left+s.right)/2?1:0);return{node:n,offset:d}}function Oo(n,e,t){let i=n.nodeValue.length,s=-1,r=1e9,o=0;for(let l=0;lt?c.top-t:t-c.bottom)-1;if(c.left-1<=e&&c.right+1>=e&&f=(c.left+c.right)/2,d=u;if((A.chrome||A.gecko)&&Zt(n,l).getBoundingClientRect().left==c.right&&(d=!u),f<=0)return{node:n,offset:l+(d?1:0)};s=l+(d?1:0),r=f}}}return{node:n,offset:s>-1?s:o>0?n.nodeValue.length:0}}function ah(n,{x:e,y:t},i,s=-1){var r;let o=n.contentDOM.getBoundingClientRect(),l=o.top+n.viewState.paddingTop,a,{docHeight:h}=n.viewState,c=t-l;if(c<0)return 0;if(c>h)return n.state.doc.length;for(let b=n.defaultLineHeight/2,v=!1;a=n.elementAtHeight(c),a.type!=W.Text;)for(;c=s>0?a.bottom+b:a.top-b,!(c>=0&&c<=h);){if(v)return i?null:0;v=!0,s=-s}t=l+c;let f=a.from;if(fn.viewport.to)return n.viewport.to==n.state.doc.length?n.state.doc.length:i?null:Bo(n,o,a,e,t);let u=n.dom.ownerDocument,d=n.root.elementFromPoint?n.root:u,p=d.elementFromPoint(e,t);p&&!n.contentDOM.contains(p)&&(p=null),p||(e=Math.max(o.left+1,Math.min(o.right-1,e)),p=d.elementFromPoint(e,t),p&&!n.contentDOM.contains(p)&&(p=null));let g,y=-1;if(p&&((r=n.docView.nearest(p))===null||r===void 0?void 0:r.isEditable)!=!1){if(u.caretPositionFromPoint){let b=u.caretPositionFromPoint(e,t);b&&({offsetNode:g,offset:y}=b)}else if(u.caretRangeFromPoint){let b=u.caretRangeFromPoint(e,t);b&&({startContainer:g,startOffset:y}=b,(!n.contentDOM.contains(g)||A.safari&&Qf(g,y,e)||A.chrome&&eu(g,y,e))&&(g=void 0))}}if(!g||!n.docView.dom.contains(g)){let b=ke.find(n.docView,f);if(!b)return c>a.top+a.height/2?a.to:a.from;({node:g,offset:y}=er(b.dom,e,t))}return n.docView.posFromDOM(g,y)}function Bo(n,e,t,i,s){let r=Math.round((i-e.left)*n.defaultCharacterWidth);if(n.lineWrapping&&t.height>n.defaultLineHeight*1.5){let l=Math.floor((s-t.top)/n.defaultLineHeight);r+=l*n.viewState.heightOracle.lineLength}let o=n.state.sliceDoc(t.from,t.to);return t.from+Hs(o,r,n.state.tabSize)}function Qf(n,e,t){let i;if(n.nodeType!=3||e!=(i=n.nodeValue.length))return!1;for(let s=n.nextSibling;s;s=s.nextSibling)if(s.nodeType!=1||s.nodeName!="BR")return!1;return Zt(n,i-1,i).getBoundingClientRect().left>t}function eu(n,e,t){if(e!=0)return!1;for(let s=n;;){let r=s.parentNode;if(!r||r.nodeType!=1||r.firstChild!=s)return!1;if(r.classList.contains("cm-line"))break;s=r}let i=n.nodeType==1?n.getBoundingClientRect():Zt(n,0,Math.max(n.nodeValue.length,1)).getBoundingClientRect();return t-i.left>5}function tu(n,e,t,i){let s=n.state.doc.lineAt(e.head),r=!i||!n.lineWrapping?null:n.coordsAtPos(e.assoc<0&&e.head>s.from?e.head-1:e.head);if(r){let a=n.dom.getBoundingClientRect(),h=n.textDirectionAt(s.from),c=n.posAtCoords({x:t==(h==Z.LTR)?a.right-1:a.left+1,y:(r.top+r.bottom)/2});if(c!=null)return w.cursor(c,t?-1:1)}let o=ke.find(n.docView,e.head),l=o?t?o.posAtEnd:o.posAtStart:t?s.to:s.from;return w.cursor(l,t?-1:1)}function Po(n,e,t,i){let s=n.state.doc.lineAt(e.head),r=n.bidiSpans(s),o=n.textDirectionAt(s.from);for(let 
l=e,a=null;;){let h=zf(s,r,o,l,t),c=sh;if(!h){if(s.number==(t?n.state.doc.lines:1))return l;c=` -`,s=n.state.doc.line(s.number+(t?1:-1)),r=n.bidiSpans(s),h=w.cursor(t?s.from:s.to)}if(a){if(!a(c))return l}else{if(!i)return h;a=i(c)}l=h}}function iu(n,e,t){let i=n.state.charCategorizer(e),s=i(t);return r=>{let o=i(r);return s==Re.Space&&(s=o),s==o}}function nu(n,e,t,i){let s=e.head,r=t?1:-1;if(s==(t?n.state.doc.length:0))return w.cursor(s,e.assoc);let o=e.goalColumn,l,a=n.contentDOM.getBoundingClientRect(),h=n.coordsAtPos(s),c=n.documentTop;if(h)o==null&&(o=h.left-a.left),l=r<0?h.top:h.bottom;else{let d=n.viewState.lineBlockAt(s);o==null&&(o=Math.min(a.right-a.left,n.defaultCharacterWidth*(s-d.from))),l=(r<0?d.top:d.bottom)+c}let f=a.left+o,u=i??n.defaultLineHeight>>1;for(let d=0;;d+=10){let p=l+(u+d)*r,g=ah(n,{x:f,y:p},!1,r);if(pa.bottom||(r<0?gs))return w.cursor(g,e.assoc,void 0,o)}}function rs(n,e,t){let i=n.state.facet(eh).map(s=>s(n));for(;;){let s=!1;for(let r of i)r.between(t.from-1,t.from+1,(o,l,a)=>{t.from>o&&t.fromt.from?w.cursor(o,1):w.cursor(l,-1),s=!0)});if(!s)return t}}class su{constructor(e){this.lastKeyCode=0,this.lastKeyTime=0,this.lastTouchTime=0,this.lastFocusTime=0,this.lastScrollTop=0,this.lastScrollLeft=0,this.chromeScrollHack=-1,this.pendingIOSKey=void 0,this.lastSelectionOrigin=null,this.lastSelectionTime=0,this.lastEscPress=0,this.lastContextMenu=0,this.scrollHandlers=[],this.registeredEvents=[],this.customHandlers=[],this.composing=-1,this.compositionFirstChange=null,this.compositionEndedAt=0,this.mouseSelection=null;for(let t in oe){let i=oe[t];e.contentDOM.addEventListener(t,s=>{!Eo(e,s)||this.ignoreDuringComposition(s)||t=="keydown"&&this.keydown(e,s)||(this.mustFlushObserver(s)&&e.observer.forceFlush(),this.runCustomHandlers(t,e,s)?s.preventDefault():i(e,s))},tr[t]),this.registeredEvents.push(t)}A.chrome&&A.chrome_version==102&&e.scrollDOM.addEventListener("wheel",()=>{this.chromeScrollHack<0?e.contentDOM.style.pointerEvents="none":window.clearTimeout(this.chromeScrollHack),this.chromeScrollHack=setTimeout(()=>{this.chromeScrollHack=-1,e.contentDOM.style.pointerEvents=""},100)},{passive:!0}),this.notifiedFocused=e.hasFocus,A.safari&&e.contentDOM.addEventListener("input",()=>null)}setSelectionOrigin(e){this.lastSelectionOrigin=e,this.lastSelectionTime=Date.now()}ensureHandlers(e,t){var i;let s;this.customHandlers=[];for(let r of t)if(s=(i=r.update(e).spec)===null||i===void 0?void 0:i.domEventHandlers){this.customHandlers.push({plugin:r.value,handlers:s});for(let o in s)this.registeredEvents.indexOf(o)<0&&o!="scroll"&&(this.registeredEvents.push(o),e.contentDOM.addEventListener(o,l=>{Eo(e,l)&&this.runCustomHandlers(o,e,l)&&l.preventDefault()}))}}runCustomHandlers(e,t,i){for(let s of this.customHandlers){let r=s.handlers[e];if(r)try{if(r.call(s.plugin,i,t)||i.defaultPrevented)return!0}catch(o){He(t.state,o)}}return!1}runScrollHandlers(e,t){this.lastScrollTop=e.scrollDOM.scrollTop,this.lastScrollLeft=e.scrollDOM.scrollLeft;for(let i of this.customHandlers){let s=i.handlers.scroll;if(s)try{s.call(i.plugin,t,e)}catch(r){He(e.state,r)}}}keydown(e,t){if(this.lastKeyCode=t.keyCode,this.lastKeyTime=Date.now(),t.keyCode==9&&Date.now()s.keyCode==t.keyCode))&&!t.ctrlKey||ru.indexOf(t.key)>-1&&t.ctrlKey&&!t.shiftKey)?(this.pendingIOSKey=i||t,setTimeout(()=>this.flushIOSKey(e),250),!0):!1}flushIOSKey(e){let t=this.pendingIOSKey;return t?(this.pendingIOSKey=void 
0,$t(e.contentDOM,t.key,t.keyCode)):!1}ignoreDuringComposition(e){return/^key/.test(e.type)?this.composing>0?!0:A.safari&&!A.ios&&Date.now()-this.compositionEndedAt<100?(this.compositionEndedAt=0,!0):!1:!1}mustFlushObserver(e){return e.type=="keydown"&&e.keyCode!=229}startMouseSelection(e){this.mouseSelection&&this.mouseSelection.destroy(),this.mouseSelection=e}update(e){this.mouseSelection&&this.mouseSelection.update(e),e.transactions.length&&(this.lastKeyCode=this.lastSelectionTime=0)}destroy(){this.mouseSelection&&this.mouseSelection.destroy()}}const hh=[{key:"Backspace",keyCode:8,inputType:"deleteContentBackward"},{key:"Enter",keyCode:13,inputType:"insertParagraph"},{key:"Delete",keyCode:46,inputType:"deleteContentForward"}],ru="dthko",ch=[16,17,18,20,91,92,224,225];class ou{constructor(e,t,i,s){this.view=e,this.style=i,this.mustSelect=s,this.lastEvent=t;let r=e.contentDOM.ownerDocument;r.addEventListener("mousemove",this.move=this.move.bind(this)),r.addEventListener("mouseup",this.up=this.up.bind(this)),this.extend=t.shiftKey,this.multiple=e.state.facet(N.allowMultipleSelections)&&lu(e,t),this.dragMove=au(e,t),this.dragging=hu(e,t)&&ph(t)==1?null:!1,this.dragging===!1&&(t.preventDefault(),this.select(t))}move(e){if(e.buttons==0)return this.destroy();this.dragging===!1&&this.select(this.lastEvent=e)}up(e){this.dragging==null&&this.select(this.lastEvent),this.dragging||e.preventDefault(),this.destroy()}destroy(){let e=this.view.contentDOM.ownerDocument;e.removeEventListener("mousemove",this.move),e.removeEventListener("mouseup",this.up),this.view.inputState.mouseSelection=null}select(e){let t=this.style.get(e,this.extend,this.multiple);(this.mustSelect||!t.eq(this.view.state.selection)||t.main.assoc!=this.view.state.selection.main.assoc)&&this.view.dispatch({selection:t,userEvent:"select.pointer",scrollIntoView:!0}),this.mustSelect=!1}update(e){e.docChanged&&this.dragging&&(this.dragging=this.dragging.map(e.changes)),this.style.update(e)&&setTimeout(()=>this.select(this.lastEvent),20)}}function lu(n,e){let t=n.state.facet(Ka);return t.length?t[0](e):A.mac?e.metaKey:e.ctrlKey}function au(n,e){let t=n.state.facet(Ua);return t.length?t[0](e):A.mac?!e.altKey:!e.ctrlKey}function hu(n,e){let{main:t}=n.state.selection;if(t.empty)return!1;let i=xn(n.root);if(!i||i.rangeCount==0)return!0;let s=i.getRangeAt(0).getClientRects();for(let r=0;r=e.clientX&&o.top<=e.clientY&&o.bottom>=e.clientY)return!0}return!1}function Eo(n,e){if(!e.bubbles)return!0;if(e.defaultPrevented)return!1;for(let t=e.target,i;t!=n.contentDOM;t=t.parentNode)if(!t||t.nodeType==11||(i=K.get(t))&&i.ignoreEvent(e))return!1;return!0}const oe=Object.create(null),tr=Object.create(null),fh=A.ie&&A.ie_version<15||A.ios&&A.webkit_version<604;function cu(n){let e=n.dom.parentNode;if(!e)return;let t=e.appendChild(document.createElement("textarea"));t.style.cssText="position: fixed; left: -10000px; top: 10px",t.focus(),setTimeout(()=>{n.focus(),t.remove(),uh(n,t.value)},50)}function uh(n,e){let{state:t}=n,i,s=1,r=t.toText(e),o=r.lines==t.selection.ranges.length;if(ir!=null&&t.selection.ranges.every(a=>a.empty)&&ir==r.toString()){let a=-1;i=t.changeByRange(h=>{let c=t.doc.lineAt(h.from);if(c.from==a)return{range:h};a=c.from;let f=t.toText((o?r.line(s++).text:e)+t.lineBreak);return{changes:{from:c.from,insert:f},range:w.cursor(h.from+f.length)}})}else o?i=t.changeByRange(a=>{let 
h=r.line(s++);return{changes:{from:a.from,to:a.to,insert:h.text},range:w.cursor(a.from+h.length)}}):i=t.replaceSelection(r);n.dispatch(i,{userEvent:"input.paste",scrollIntoView:!0})}oe.keydown=(n,e)=>{n.inputState.setSelectionOrigin("select"),e.keyCode==27?n.inputState.lastEscPress=Date.now():ch.indexOf(e.keyCode)<0&&(n.inputState.lastEscPress=0)};oe.touchstart=(n,e)=>{n.inputState.lastTouchTime=Date.now(),n.inputState.setSelectionOrigin("select.pointer")};oe.touchmove=n=>{n.inputState.setSelectionOrigin("select.pointer")};tr.touchstart=tr.touchmove={passive:!0};oe.mousedown=(n,e)=>{if(n.observer.flush(),n.inputState.lastTouchTime>Date.now()-2e3)return;let t=null;for(let i of n.state.facet(Ga))if(t=i(n,e),t)break;if(!t&&e.button==0&&(t=du(n,e)),t){let i=n.root.activeElement!=n.contentDOM;i&&n.observer.ignore(()=>Ea(n.contentDOM)),n.inputState.startMouseSelection(new ou(n,e,t,i))}};function Ro(n,e,t,i){if(i==1)return w.cursor(e,t);if(i==2)return Yf(n.state,e,t);{let s=ke.find(n.docView,e),r=n.state.doc.lineAt(s?s.posAtEnd:e),o=s?s.posAtStart:r.from,l=s?s.posAtEnd:r.to;return ln>=e.top&&n<=e.bottom,Lo=(n,e,t)=>dh(e,t)&&n>=t.left&&n<=t.right;function fu(n,e,t,i){let s=ke.find(n.docView,e);if(!s)return 1;let r=e-s.posAtStart;if(r==0)return 1;if(r==s.length)return-1;let o=s.coordsAt(r,-1);if(o&&Lo(t,i,o))return-1;let l=s.coordsAt(r,1);return l&&Lo(t,i,l)?1:o&&dh(i,o)?-1:1}function Io(n,e){let t=n.posAtCoords({x:e.clientX,y:e.clientY},!1);return{pos:t,bias:fu(n,t,e.clientX,e.clientY)}}const uu=A.ie&&A.ie_version<=11;let No=null,_o=0,Vo=0;function ph(n){if(!uu)return n.detail;let e=No,t=Vo;return No=n,Vo=Date.now(),_o=!e||t>Date.now()-400&&Math.abs(e.clientX-n.clientX)<2&&Math.abs(e.clientY-n.clientY)<2?(_o+1)%3:1}function du(n,e){let t=Io(n,e),i=ph(e),s=n.state.selection,r=t,o=e;return{update(l){l.docChanged&&(t.pos=l.changes.mapPos(t.pos),s=s.map(l.changes),o=null)},get(l,a,h){let c;o&&l.clientX==o.clientX&&l.clientY==o.clientY?c=r:(c=r=Io(n,l),o=l);let f=Ro(n,c.pos,c.bias,i);if(t.pos!=c.pos&&!a){let u=Ro(n,t.pos,t.bias,i),d=Math.min(u.from,f.from),p=Math.max(u.to,f.to);f=d1&&s.ranges.some(u=>u.eq(f))?pu(s,f):h?s.addRange(f):w.create([f])}}}function pu(n,e){for(let t=0;;t++)if(n.ranges[t].eq(e))return w.create(n.ranges.slice(0,t).concat(n.ranges.slice(t+1)),n.mainIndex==t?0:n.mainIndex-(n.mainIndex>t?1:0))}oe.dragstart=(n,e)=>{let{selection:{main:t}}=n.state,{mouseSelection:i}=n.inputState;i&&(i.dragging=t),e.dataTransfer&&(e.dataTransfer.setData("Text",n.state.sliceDoc(t.from,t.to)),e.dataTransfer.effectAllowed="copyMove")};function Fo(n,e,t,i){if(!t)return;let s=n.posAtCoords({x:e.clientX,y:e.clientY},!1);e.preventDefault();let{mouseSelection:r}=n.inputState,o=i&&r&&r.dragging&&r.dragMove?{from:r.dragging.from,to:r.dragging.to}:null,l={from:s,insert:t},a=n.state.changes(o?[o,l]:l);n.focus(),n.dispatch({changes:a,selection:{anchor:a.mapPos(s,-1),head:a.mapPos(s,1)},userEvent:o?"move.drop":"input.drop"})}oe.drop=(n,e)=>{if(!e.dataTransfer)return;if(n.state.readOnly)return e.preventDefault();let t=e.dataTransfer.files;if(t&&t.length){e.preventDefault();let i=Array(t.length),s=0,r=()=>{++s==t.length&&Fo(n,e,i.filter(o=>o!=null).join(n.state.lineBreak),!1)};for(let o=0;o{/[\x00-\x08\x0e-\x1f]{2}/.test(l.result)||(i[o]=l.result),r()},l.readAsText(t[o])}}else Fo(n,e,e.dataTransfer.getData("Text"),!0)};oe.paste=(n,e)=>{if(n.state.readOnly)return e.preventDefault();n.observer.flush();let t=fh?null:e.clipboardData;t?(uh(n,t.getData("text/plain")),e.preventDefault()):cu(n)};function mu(n,e){let 
t=n.dom.parentNode;if(!t)return;let i=t.appendChild(document.createElement("textarea"));i.style.cssText="position: fixed; left: -10000px; top: 10px",i.value=e,i.focus(),i.selectionEnd=e.length,i.selectionStart=0,setTimeout(()=>{i.remove(),n.focus()},50)}function gu(n){let e=[],t=[],i=!1;for(let s of n.selection.ranges)s.empty||(e.push(n.sliceDoc(s.from,s.to)),t.push(s));if(!e.length){let s=-1;for(let{from:r}of n.selection.ranges){let o=n.doc.lineAt(r);o.number>s&&(e.push(o.text),t.push({from:o.from,to:Math.min(n.doc.length,o.to+1)})),s=o.number}i=!0}return{text:e.join(n.lineBreak),ranges:t,linewise:i}}let ir=null;oe.copy=oe.cut=(n,e)=>{let{text:t,ranges:i,linewise:s}=gu(n.state);if(!t&&!s)return;ir=s?t:null;let r=fh?null:e.clipboardData;r?(e.preventDefault(),r.clearData(),r.setData("text/plain",t)):mu(n,t),e.type=="cut"&&!n.state.readOnly&&n.dispatch({changes:i,scrollIntoView:!0,userEvent:"delete.cut"})};function mh(n){setTimeout(()=>{n.hasFocus!=n.inputState.notifiedFocused&&n.update([])},10)}oe.focus=n=>{n.inputState.lastFocusTime=Date.now(),!n.scrollDOM.scrollTop&&(n.inputState.lastScrollTop||n.inputState.lastScrollLeft)&&(n.scrollDOM.scrollTop=n.inputState.lastScrollTop,n.scrollDOM.scrollLeft=n.inputState.lastScrollLeft),mh(n)};oe.blur=n=>{n.observer.clearSelectionRange(),mh(n)};oe.compositionstart=oe.compositionupdate=n=>{n.inputState.compositionFirstChange==null&&(n.inputState.compositionFirstChange=!0),n.inputState.composing<0&&(n.inputState.composing=0)};oe.compositionend=n=>{n.inputState.composing=-1,n.inputState.compositionEndedAt=Date.now(),n.inputState.compositionFirstChange=null,A.chrome&&A.android&&n.observer.flushSoon(),setTimeout(()=>{n.inputState.composing<0&&n.docView.compositionDeco.size&&n.update([])},50)};oe.contextmenu=n=>{n.inputState.lastContextMenu=Date.now()};oe.beforeinput=(n,e)=>{var t;let i;if(A.chrome&&A.android&&(i=hh.find(s=>s.inputType==e.inputType))&&(n.observer.delayAndroidKey(i.key,i.keyCode),i.key=="Backspace"||i.key=="Delete")){let s=((t=window.visualViewport)===null||t===void 0?void 0:t.height)||0;setTimeout(()=>{var r;(((r=window.visualViewport)===null||r===void 0?void 0:r.height)||0)>s+10&&n.hasFocus&&(n.contentDOM.blur(),n.focus())},100)}};const Ho=["pre-wrap","normal","pre-line","break-spaces"];class yu{constructor(){this.doc=_.empty,this.lineWrapping=!1,this.heightSamples={},this.lineHeight=14,this.charWidth=7,this.lineLength=30,this.heightChanged=!1}heightForGap(e,t){let i=this.doc.lineAt(t).number-this.doc.lineAt(e).number+1;return this.lineWrapping&&(i+=Math.ceil((t-e-i*this.lineLength*.5)/this.lineLength)),this.lineHeight*i}heightForLine(e){return this.lineWrapping?(1+Math.max(0,Math.ceil((e-this.lineLength)/(this.lineLength-5))))*this.lineHeight:this.lineHeight}setDoc(e){return this.doc=e,this}mustRefreshForWrapping(e){return Ho.indexOf(e)>-1!=this.lineWrapping}mustRefreshForHeights(e){let t=!1;for(let i=0;i-1,l=Math.round(t)!=Math.round(this.lineHeight)||this.lineWrapping!=o;if(this.lineWrapping=o,this.lineHeight=t,this.charWidth=i,this.lineLength=s,l){this.heightSamples={};for(let a=0;a0}set outdated(e){this.flags=(e?2:0)|this.flags&-3}setHeight(e,t){this.height!=t&&(Math.abs(this.height-t)>pn&&(e.heightChanged=!0),this.height=t)}replace(e,t,i){return ve.of(i)}decomposeLeft(e,t){t.push(this)}decomposeRight(e,t){t.push(this)}applyChanges(e,t,i,s){let r=this;for(let 
o=s.length-1;o>=0;o--){let{fromA:l,toA:a,fromB:h,toB:c}=s[o],f=r.lineAt(l,q.ByPosNoHeight,t,0,0),u=f.to>=a?f:r.lineAt(a,q.ByPosNoHeight,t,0,0);for(c+=u.to-a,a=u.to;o>0&&f.from<=s[o-1].toA;)l=s[o-1].fromA,h=s[o-1].fromB,o--,lr*2){let l=e[t-1];l.break?e.splice(--t,1,l.left,null,l.right):e.splice(--t,1,l.left,l.right),i+=1+l.break,s-=l.size}else if(r>s*2){let l=e[i];l.break?e.splice(i,1,l.left,null,l.right):e.splice(i,1,l.left,l.right),i+=2+l.break,r-=l.size}else break;else if(s=r&&o(this.blockAt(0,i,s,r))}updateHeight(e,t=0,i=!1,s){return s&&s.from<=t&&s.more&&this.setHeight(e,s.heights[s.index++]),this.outdated=!1,this}toString(){return`block(${this.length})`}}class De extends gh{constructor(e,t){super(e,t,W.Text),this.collapsed=0,this.widgetHeight=0}replace(e,t,i){let s=i[0];return i.length==1&&(s instanceof De||s instanceof ae&&s.flags&4)&&Math.abs(this.length-s.length)<10?(s instanceof ae?s=new De(s.length,this.height):s.height=this.height,this.outdated||(s.outdated=!1),s):ve.of(i)}updateHeight(e,t=0,i=!1,s){return s&&s.from<=t&&s.more?this.setHeight(e,s.heights[s.index++]):(i||this.outdated)&&this.setHeight(e,Math.max(this.widgetHeight,e.heightForLine(this.length-this.collapsed))),this.outdated=!1,this}toString(){return`line(${this.length}${this.collapsed?-this.collapsed:""}${this.widgetHeight?":"+this.widgetHeight:""})`}}class ae extends ve{constructor(e){super(e,0)}lines(e,t){let i=e.lineAt(t).number,s=e.lineAt(t+this.length).number;return{firstLine:i,lastLine:s,lineHeight:this.height/(s-i+1)}}blockAt(e,t,i,s){let{firstLine:r,lastLine:o,lineHeight:l}=this.lines(t,s),a=Math.max(0,Math.min(o-r,Math.floor((e-i)/l))),{from:h,length:c}=t.line(r+a);return new ut(h,c,i+l*a,l,W.Text)}lineAt(e,t,i,s,r){if(t==q.ByHeight)return this.blockAt(e,i,s,r);if(t==q.ByPosNoHeight){let{from:f,to:u}=i.lineAt(e);return new ut(f,u-f,0,0,W.Text)}let{firstLine:o,lineHeight:l}=this.lines(i,r),{from:a,length:h,number:c}=i.lineAt(e);return new ut(a,h,s+l*(c-o),l,W.Text)}forEachLine(e,t,i,s,r,o){let{firstLine:l,lineHeight:a}=this.lines(i,r);for(let h=Math.max(e,r),c=Math.min(r+this.length,t);h<=c;){let f=i.lineAt(h);h==e&&(s+=a*(f.number-l)),o(new ut(f.from,f.length,s,a,W.Text)),s+=a,h=f.to+1}}replace(e,t,i){let s=this.length-t;if(s>0){let r=i[i.length-1];r instanceof ae?i[i.length-1]=new ae(r.length+s):i.push(null,new ae(s-1))}if(e>0){let r=i[0];r instanceof ae?i[0]=new ae(e+r.length):i.unshift(new ae(e-1),null)}return ve.of(i)}decomposeLeft(e,t){t.push(new ae(e-1),null)}decomposeRight(e,t){t.push(null,new ae(this.length-e-1))}updateHeight(e,t=0,i=!1,s){let r=t+this.length;if(s&&s.from<=t+this.length&&s.more){let o=[],l=Math.max(t,s.from),a=-1,h=e.heightChanged;for(s.from>t&&o.push(new ae(s.from-t-1).updateHeight(e,t));l<=r&&s.more;){let f=e.doc.lineAt(l).length;o.length&&o.push(null);let u=s.heights[s.index++];a==-1?a=u:Math.abs(u-a)>=pn&&(a=-2);let d=new De(f,u);d.outdated=!1,o.push(d),l+=f+1}l<=r&&o.push(null,new ae(r-l).updateHeight(e,l));let c=ve.of(o);return e.heightChanged=h||a<0||Math.abs(c.height-this.height)>=pn||Math.abs(a-this.lines(e.doc,t).lineHeight)>=pn,c}else(i||this.outdated)&&(this.setHeight(e,e.heightForGap(t,t+this.length)),this.outdated=!1);return this}toString(){return`gap(${this.length})`}}class wu extends ve{constructor(e,t,i){super(e.length+t+i.length,e.height+i.height,t|(e.outdated||i.outdated?2:0)),this.left=e,this.right=i,this.size=e.size+i.size}get break(){return this.flags&1}blockAt(e,t,i,s){let r=i+this.left.height;return el))return h;let 
c=t==q.ByPosNoHeight?q.ByPosNoHeight:q.ByPos;return a?h.join(this.right.lineAt(l,c,i,o,l)):this.left.lineAt(l,c,i,s,r).join(h)}forEachLine(e,t,i,s,r,o){let l=s+this.left.height,a=r+this.left.length+this.break;if(this.break)e=a&&this.right.forEachLine(e,t,i,l,a,o);else{let h=this.lineAt(a,q.ByPos,i,s,r);e=e&&h.from<=t&&o(h),t>h.to&&this.right.forEachLine(h.to+1,t,i,l,a,o)}}replace(e,t,i){let s=this.left.length+this.break;if(tthis.left.length)return this.balanced(this.left,this.right.replace(e-s,t-s,i));let r=[];e>0&&this.decomposeLeft(e,r);let o=r.length;for(let l of i)r.push(l);if(e>0&&Wo(r,o-1),t=i&&t.push(null)),e>i&&this.right.decomposeLeft(e-i,t)}decomposeRight(e,t){let i=this.left.length,s=i+this.break;if(e>=s)return this.right.decomposeRight(e-s,t);e2*t.size||t.size>2*e.size?ve.of(this.break?[e,null,t]:[e,t]):(this.left=e,this.right=t,this.height=e.height+t.height,this.outdated=e.outdated||t.outdated,this.size=e.size+t.size,this.length=e.length+this.break+t.length,this)}updateHeight(e,t=0,i=!1,s){let{left:r,right:o}=this,l=t+r.length+this.break,a=null;return s&&s.from<=t+r.length&&s.more?a=r=r.updateHeight(e,t,i,s):r.updateHeight(e,t,i),s&&s.from<=l+o.length&&s.more?a=o=o.updateHeight(e,l,i,s):o.updateHeight(e,l,i),a?this.balanced(r,o):(this.height=this.left.height+this.right.height,this.outdated=!1,this)}toString(){return this.left+(this.break?" ":"-")+this.right}}function Wo(n,e){let t,i;n[e]==null&&(t=n[e-1])instanceof ae&&(i=n[e+1])instanceof ae&&n.splice(e-1,3,new ae(t.length+1+i.length))}const ku=5;class Pr{constructor(e,t){this.pos=e,this.oracle=t,this.nodes=[],this.lineStart=-1,this.lineEnd=-1,this.covering=null,this.writtenTo=e}get isCovered(){return this.covering&&this.nodes[this.nodes.length-1]==this.covering}span(e,t){if(this.lineStart>-1){let i=Math.min(t,this.lineEnd),s=this.nodes[this.nodes.length-1];s instanceof De?s.length+=i-this.pos:(i>this.pos||!this.isCovered)&&this.nodes.push(new De(i-this.pos,-1)),this.writtenTo=i,t>i&&(this.nodes.push(null),this.writtenTo++,this.lineStart=-1)}this.pos=t}point(e,t,i){if(e=ku)&&this.addLineDeco(s,r)}else t>e&&this.span(e,t);this.lineEnd>-1&&this.lineEnd-1)return;let{from:e,to:t}=this.oracle.doc.lineAt(this.pos);this.lineStart=e,this.lineEnd=t,this.writtenToe&&this.nodes.push(new De(this.pos-e,-1)),this.writtenTo=this.pos}blankContent(e,t){let i=new ae(t-e);return this.oracle.doc.lineAt(e).to==t&&(i.flags|=4),i}ensureLine(){this.enterLine();let e=this.nodes.length?this.nodes[this.nodes.length-1]:null;if(e instanceof De)return e;let t=new De(0,-1);return this.nodes.push(t),t}addBlock(e){this.enterLine(),e.type==W.WidgetAfter&&!this.isCovered&&this.ensureLine(),this.nodes.push(e),this.writtenTo=this.pos=this.pos+e.length,e.type!=W.WidgetBefore&&(this.covering=e)}addLineDeco(e,t){let i=this.ensureLine();i.length+=t,i.collapsed+=t,i.widgetHeight=Math.max(i.widgetHeight,e),this.writtenTo=this.pos=this.pos+t}finish(e){let t=this.nodes.length==0?null:this.nodes[this.nodes.length-1];this.lineStart>-1&&!(t instanceof De)&&!this.isCovered?this.nodes.push(new De(0,-1)):(this.writtenToc.clientHeight||c.scrollWidth>c.clientWidth)&&f.overflow!="visible"){let u=c.getBoundingClientRect();r=Math.max(r,u.left),o=Math.min(o,u.right),l=Math.max(l,u.top),a=h==n.parentNode?u.bottom:Math.min(a,u.bottom)}h=f.position=="absolute"||f.position=="fixed"?c.offsetParent:c.parentNode}else if(h.nodeType==11)h=h.host;else break;return{left:r-t.left,right:Math.max(r,o)-t.left,top:l-(t.top+e),bottom:Math.max(l,a)-(t.top+e)}}function Cu(n,e){let 
t=n.getBoundingClientRect();return{left:0,right:t.right-t.left,top:e,bottom:t.bottom-(t.top+e)}}class os{constructor(e,t,i){this.from=e,this.to=t,this.size=i}static same(e,t){if(e.length!=t.length)return!1;for(let i=0;itypeof t!="function"),this.heightMap=ve.empty().applyChanges(this.stateDeco,_.empty,this.heightOracle.setDoc(e.doc),[new Qe(0,0,0,e.doc.length)]),this.viewport=this.getViewport(0,null),this.updateViewportLines(),this.updateForViewport(),this.lineGaps=this.ensureLineGaps([]),this.lineGapDeco=E.set(this.lineGaps.map(t=>t.draw(!1))),this.computeVisibleRanges()}updateForViewport(){let e=[this.viewport],{main:t}=this.state.selection;for(let i=0;i<=1;i++){let s=i?t.head:t.anchor;if(!e.some(({from:r,to:o})=>s>=r&&s<=o)){let{from:r,to:o}=this.lineBlockAt(s);e.push(new Ji(r,o))}}this.viewports=e.sort((i,s)=>i.from-s.from),this.scaler=this.heightMap.height<=7e6?qo:new Tu(this.heightOracle.doc,this.heightMap,this.viewports)}updateViewportLines(){this.viewportLines=[],this.heightMap.forEachLine(this.viewport.from,this.viewport.to,this.state.doc,0,0,e=>{this.viewportLines.push(this.scaler.scale==1?e:wi(e,this.scaler))})}update(e,t=null){this.state=e.state;let i=this.stateDeco;this.stateDeco=this.state.facet(Ei).filter(h=>typeof h!="function");let s=e.changedRanges,r=Qe.extendWithRanges(s,vu(i,this.stateDeco,e?e.changes:ne.empty(this.state.doc.length))),o=this.heightMap.height;this.heightMap=this.heightMap.applyChanges(this.stateDeco,e.startState.doc,this.heightOracle.setDoc(this.state.doc),r),this.heightMap.height!=o&&(e.flags|=2);let l=r.length?this.mapViewport(this.viewport,e.changes):this.viewport;(t&&(t.range.headl.to)||!this.viewportIsAppropriate(l))&&(l=this.getViewport(0,t));let a=!e.changes.empty||e.flags&2||l.from!=this.viewport.from||l.to!=this.viewport.to;this.viewport=l,this.updateForViewport(),a&&this.updateViewportLines(),(this.lineGaps.length||this.viewport.to-this.viewport.from>2e3<<1)&&this.updateLineGaps(this.ensureLineGaps(this.mapLineGaps(this.lineGaps,e.changes))),e.flags|=this.computeVisibleRanges(),t&&(this.scrollTarget=t),!this.mustEnforceCursorAssoc&&e.selectionSet&&e.view.lineWrapping&&e.state.selection.main.empty&&e.state.selection.main.assoc&&!e.state.facet(Xa)&&(this.mustEnforceCursorAssoc=!0)}measure(e){let t=e.contentDOM,i=window.getComputedStyle(t),s=this.heightOracle,r=i.whiteSpace;this.defaultTextDirection=i.direction=="rtl"?Z.RTL:Z.LTR;let o=this.heightOracle.mustRefreshForWrapping(r),l=o||this.mustMeasureContent||this.contentDOMHeight!=t.clientHeight;this.contentDOMHeight=t.clientHeight,this.mustMeasureContent=!1;let a=0,h=0,c=parseInt(i.paddingTop)||0,f=parseInt(i.paddingBottom)||0;(this.paddingTop!=c||this.paddingBottom!=f)&&(this.paddingTop=c,this.paddingBottom=f,a|=10),this.editorWidth!=e.scrollDOM.clientWidth&&(s.lineWrapping&&(l=!0),this.editorWidth=e.scrollDOM.clientWidth,a|=8);let u=(this.printing?Cu:Su)(t,this.paddingTop),d=u.top-this.pixelViewport.top,p=u.bottom-this.pixelViewport.bottom;this.pixelViewport=u;let g=this.pixelViewport.bottom>this.pixelViewport.top&&this.pixelViewport.right>this.pixelViewport.left;if(g!=this.inView&&(this.inView=g,g&&(l=!0)),!this.inView&&!this.scrollTarget)return 0;let y=t.clientWidth;if((this.contentDOMWidth!=y||this.editorHeight!=e.scrollDOM.clientHeight)&&(this.contentDOMWidth=y,this.editorHeight=e.scrollDOM.clientHeight,a|=8),l){let 
v=e.docView.measureVisibleLineHeights(this.viewport);if(s.mustRefreshForHeights(v)&&(o=!0),o||s.lineWrapping&&Math.abs(y-this.contentDOMWidth)>s.charWidth){let{lineHeight:S,charWidth:k}=e.docView.measureTextSize();o=S>0&&s.refresh(r,S,k,y/k,v),o&&(e.docView.minWidth=0,a|=8)}d>0&&p>0?h=Math.max(d,p):d<0&&p<0&&(h=Math.min(d,p)),s.heightChanged=!1;for(let S of this.viewports){let k=S.from==this.viewport.from?v:e.docView.measureVisibleLineHeights(S);this.heightMap=o?ve.empty().applyChanges(this.stateDeco,_.empty,this.heightOracle,[new Qe(0,0,0,e.state.doc.length)]):this.heightMap.updateHeight(s,0,o,new bu(S.from,k))}s.heightChanged&&(a|=2)}let b=!this.viewportIsAppropriate(this.viewport,h)||this.scrollTarget&&(this.scrollTarget.range.headthis.viewport.to);return b&&(this.viewport=this.getViewport(h,this.scrollTarget)),this.updateForViewport(),(a&2||b)&&this.updateViewportLines(),(this.lineGaps.length||this.viewport.to-this.viewport.from>2e3<<1)&&this.updateLineGaps(this.ensureLineGaps(o?[]:this.lineGaps,e)),a|=this.computeVisibleRanges(),this.mustEnforceCursorAssoc&&(this.mustEnforceCursorAssoc=!1,e.docView.enforceCursorAssoc()),a}get visibleTop(){return this.scaler.fromDOM(this.pixelViewport.top)}get visibleBottom(){return this.scaler.fromDOM(this.pixelViewport.bottom)}getViewport(e,t){let i=.5-Math.max(-.5,Math.min(.5,e/1e3/2)),s=this.heightMap,r=this.state.doc,{visibleTop:o,visibleBottom:l}=this,a=new Ji(s.lineAt(o-i*1e3,q.ByHeight,r,0,0).from,s.lineAt(l+(1-i)*1e3,q.ByHeight,r,0,0).to);if(t){let{head:h}=t.range;if(ha.to){let c=Math.min(this.editorHeight,this.pixelViewport.bottom-this.pixelViewport.top),f=s.lineAt(h,q.ByPos,r,0,0),u;t.y=="center"?u=(f.top+f.bottom)/2-c/2:t.y=="start"||t.y=="nearest"&&h=l+Math.max(10,Math.min(i,250)))&&s>o-2*1e3&&r>1,o=s<<1;if(this.defaultTextDirection!=Z.LTR&&!i)return[];let l=[],a=(h,c,f,u)=>{if(c-hh&&yy.from>=f.from&&y.to<=f.to&&Math.abs(y.from-h)y.fromb));if(!g){if(cy.from<=c&&y.to>=c)){let y=t.moveToLineBoundary(w.cursor(c),!1,!0).head;y>h&&(c=y)}g=new os(h,c,this.gapSize(f,h,c,u))}l.push(g)};for(let h of this.viewportLines){if(h.lengthh.from&&a(h.from,u,h,c),dt.draw(this.heightOracle.lineWrapping))))}computeVisibleRanges(){let e=this.stateDeco;this.lineGaps.length&&(e=e.concat(this.lineGapDeco));let t=[];F.spans(e,this.viewport.from,this.viewport.to,{span(s,r){t.push({from:s,to:r})},point(){}},20);let i=t.length!=this.visibleRanges.length||this.visibleRanges.some((s,r)=>s.from!=t[r].from||s.to!=t[r].to);return this.visibleRanges=t,i?4:0}lineBlockAt(e){return e>=this.viewport.from&&e<=this.viewport.to&&this.viewportLines.find(t=>t.from<=e&&t.to>=e)||wi(this.heightMap.lineAt(e,q.ByPos,this.state.doc,0,0),this.scaler)}lineBlockAtHeight(e){return wi(this.heightMap.lineAt(this.scaler.fromDOM(e),q.ByHeight,this.state.doc,0,0),this.scaler)}elementAtHeight(e){return wi(this.heightMap.blockAt(this.scaler.fromDOM(e),this.state.doc,0,0),this.scaler)}get docHeight(){return this.scaler.toDOM(this.heightMap.height)}get contentHeight(){return this.docHeight+this.paddingTop+this.paddingBottom}}class Ji{constructor(e,t){this.from=e,this.to=t}}function Mu(n,e,t){let i=[],s=n,r=0;return F.spans(t,n,e,{span(){},point(o,l){o>s&&(i.push({from:s,to:o}),r+=o-s),s=l}},20),s=1)return e[e.length-1].to;let i=Math.floor(n*t);for(let s=0;;s++){let{from:r,to:o}=e[s],l=o-r;if(i<=l)return r+i;i-=l}}function Xi(n,e){let t=0;for(let{from:i,to:s}of n.ranges){if(e<=s){t+=e-i;break}t+=s-i}return t/n.total}function Du(n,e){for(let t of n)if(e(t))return t}const qo={toDOM(n){return 
n},fromDOM(n){return n},scale:1};class Tu{constructor(e,t,i){let s=0,r=0,o=0;this.viewports=i.map(({from:l,to:a})=>{let h=t.lineAt(l,q.ByPos,e,0,0).top,c=t.lineAt(a,q.ByPos,e,0,0).bottom;return s+=c-h,{from:l,to:a,top:h,bottom:c,domTop:0,domBottom:0}}),this.scale=(7e6-s)/(t.height-s);for(let l of this.viewports)l.domTop=o+(l.top-r)*this.scale,o=l.domBottom=l.domTop+(l.bottom-l.top),r=l.bottom}toDOM(e){for(let t=0,i=0,s=0;;t++){let r=twi(s,e)):n.type)}const Zi=D.define({combine:n=>n.join(" ")}),nr=D.define({combine:n=>n.indexOf(!0)>-1}),sr=mt.newName(),yh=mt.newName(),bh=mt.newName(),wh={"&light":"."+yh,"&dark":"."+bh};function rr(n,e,t){return new mt(e,{finish(i){return/&/.test(i)?i.replace(/&\w*/,s=>{if(s=="&")return n;if(!t||!t[s])throw new RangeError(`Unsupported selector: ${s}`);return t[s]}):n+" "+i}})}const Ou=rr("."+sr,{"&.cm-editor":{position:"relative !important",boxSizing:"border-box","&.cm-focused":{outline:"1px dotted #212121"},display:"flex !important",flexDirection:"column"},".cm-scroller":{display:"flex !important",alignItems:"flex-start !important",fontFamily:"monospace",lineHeight:1.4,height:"100%",overflowX:"auto",position:"relative",zIndex:0},".cm-content":{margin:0,flexGrow:2,flexShrink:0,minHeight:"100%",display:"block",whiteSpace:"pre",wordWrap:"normal",boxSizing:"border-box",padding:"4px 0",outline:"none","&[contenteditable=true]":{WebkitUserModify:"read-write-plaintext-only"}},".cm-lineWrapping":{whiteSpace_fallback:"pre-wrap",whiteSpace:"break-spaces",wordBreak:"break-word",overflowWrap:"anywhere",flexShrink:1},"&light .cm-content":{caretColor:"black"},"&dark .cm-content":{caretColor:"white"},".cm-line":{display:"block",padding:"0 2px 0 4px"},".cm-selectionLayer":{zIndex:-1,contain:"size style"},".cm-selectionBackground":{position:"absolute"},"&light .cm-selectionBackground":{background:"#d9d9d9"},"&dark .cm-selectionBackground":{background:"#222"},"&light.cm-focused .cm-selectionBackground":{background:"#d7d4f0"},"&dark.cm-focused .cm-selectionBackground":{background:"#233"},".cm-cursorLayer":{zIndex:100,contain:"size style",pointerEvents:"none"},"&.cm-focused .cm-cursorLayer":{animation:"steps(1) cm-blink 1.2s infinite"},"@keyframes cm-blink":{"0%":{},"50%":{opacity:0},"100%":{}},"@keyframes cm-blink2":{"0%":{},"50%":{opacity:0},"100%":{}},".cm-cursor, .cm-dropCursor":{position:"absolute",borderLeft:"1.2px solid black",marginLeft:"-0.6px",pointerEvents:"none"},".cm-cursor":{display:"none"},"&dark .cm-cursor":{borderLeftColor:"#444"},"&.cm-focused .cm-cursor":{display:"block"},"&light .cm-activeLine":{backgroundColor:"#cceeff44"},"&dark .cm-activeLine":{backgroundColor:"#99eeff33"},"&light .cm-specialChar":{color:"red"},"&dark .cm-specialChar":{color:"#f78"},".cm-gutters":{flexShrink:0,display:"flex",height:"100%",boxSizing:"border-box",left:0,zIndex:200},"&light .cm-gutters":{backgroundColor:"#f5f5f5",color:"#6c6c6c",borderRight:"1px solid #ddd"},"&dark .cm-gutters":{backgroundColor:"#333338",color:"#ccc"},".cm-gutter":{display:"flex !important",flexDirection:"column",flexShrink:0,boxSizing:"border-box",minHeight:"100%",overflow:"hidden"},".cm-gutterElement":{boxSizing:"border-box"},".cm-lineNumbers .cm-gutterElement":{padding:"0 3px 0 5px",minWidth:"20px",textAlign:"right",whiteSpace:"nowrap"},"&light .cm-activeLineGutter":{backgroundColor:"#e2f2ff"},"&dark .cm-activeLineGutter":{backgroundColor:"#222227"},".cm-panels":{boxSizing:"border-box",position:"sticky",left:0,right:0},"&light .cm-panels":{backgroundColor:"#f5f5f5",color:"black"},"&light 
.cm-panels-top":{borderBottom:"1px solid #ddd"},"&light .cm-panels-bottom":{borderTop:"1px solid #ddd"},"&dark .cm-panels":{backgroundColor:"#333338",color:"white"},".cm-tab":{display:"inline-block",overflow:"hidden",verticalAlign:"bottom"},".cm-widgetBuffer":{verticalAlign:"text-top",height:"1em",width:0,display:"inline"},".cm-placeholder":{color:"#888",display:"inline-block",verticalAlign:"top"},".cm-button":{verticalAlign:"middle",color:"inherit",fontSize:"70%",padding:".2em 1em",borderRadius:"1px"},"&light .cm-button":{backgroundImage:"linear-gradient(#eff1f5, #d9d9df)",border:"1px solid #888","&:active":{backgroundImage:"linear-gradient(#b4b4b4, #d0d3d6)"}},"&dark .cm-button":{backgroundImage:"linear-gradient(#393939, #111)",border:"1px solid #888","&:active":{backgroundImage:"linear-gradient(#111, #333)"}},".cm-textfield":{verticalAlign:"middle",color:"inherit",fontSize:"70%",border:"1px solid silver",padding:".2em .5em"},"&light .cm-textfield":{backgroundColor:"white"},"&dark .cm-textfield":{border:"1px solid #555",backgroundColor:"inherit"}},wh);class Bu{constructor(e,t,i,s){this.typeOver=s,this.bounds=null,this.text="";let{impreciseHead:r,impreciseAnchor:o}=e.docView;if(t>-1&&!e.state.readOnly&&(this.bounds=e.docView.domBoundsAround(t,i,0))){let l=r||o?[]:Eu(e),a=new rh(l,e.state);a.readRange(this.bounds.startDOM,this.bounds.endDOM),this.text=a.text,this.newSel=Ru(l,this.bounds.from)}else{let l=e.observer.selectionRange,a=r&&r.node==l.focusNode&&r.offset==l.focusOffset||!Xt(e.contentDOM,l.focusNode)?e.state.selection.main.head:e.docView.posFromDOM(l.focusNode,l.focusOffset),h=o&&o.node==l.anchorNode&&o.offset==l.anchorOffset||!Xt(e.contentDOM,l.anchorNode)?e.state.selection.main.anchor:e.docView.posFromDOM(l.anchorNode,l.anchorOffset);this.newSel=w.single(h,a)}}}function kh(n,e){let t,{newSel:i}=e,s=n.state.selection.main;if(e.bounds){let{from:r,to:o}=e.bounds,l=s.from,a=null;(n.inputState.lastKeyCode===8&&n.inputState.lastKeyTime>Date.now()-100||A.android&&e.text.length=s.from&&t.to<=s.to&&(t.from!=s.from||t.to!=s.to)&&s.to-s.from-(t.to-t.from)<=4?t={from:s.from,to:s.to,insert:n.state.doc.slice(s.from,t.from).append(t.insert).append(n.state.doc.slice(t.to,s.to))}:(A.mac||A.android)&&t&&t.from==t.to&&t.from==s.head-1&&/^\. 
?$/.test(t.insert.toString())?(i&&t.insert.length==2&&(i=w.single(i.main.anchor-1,i.main.head-1)),t={from:s.from,to:s.to,insert:_.of([" "])}):A.chrome&&t&&t.from==t.to&&t.from==s.head&&t.insert.toString()==` - `&&n.lineWrapping&&(i&&(i=w.single(i.main.anchor-1,i.main.head-1)),t={from:s.from,to:s.to,insert:_.of([" "])}),t){let r=n.state;if(A.ios&&n.inputState.flushIOSKey(n)||A.android&&(t.from==s.from&&t.to==s.to&&t.insert.length==1&&t.insert.lines==2&&$t(n.contentDOM,"Enter",13)||t.from==s.from-1&&t.to==s.to&&t.insert.length==0&&$t(n.contentDOM,"Backspace",8)||t.from==s.from&&t.to==s.to+1&&t.insert.length==0&&$t(n.contentDOM,"Delete",46)))return!0;let o=t.insert.toString();if(n.state.facet(Ja).some(h=>h(n,t.from,t.to,o)))return!0;n.inputState.composing>=0&&n.inputState.composing++;let l;if(t.from>=s.from&&t.to<=s.to&&t.to-t.from>=(s.to-s.from)/3&&(!i||i.main.empty&&i.main.from==t.from+t.insert.length)&&n.inputState.composing<0){let h=s.fromt.to?r.sliceDoc(t.to,s.to):"";l=r.replaceSelection(n.state.toText(h+t.insert.sliceString(0,void 0,n.state.lineBreak)+c))}else{let h=r.changes(t),c=i&&!r.selection.main.eq(i.main)&&i.main.to<=h.newLength?i.main:void 0;if(r.selection.ranges.length>1&&n.inputState.composing>=0&&t.to<=s.to&&t.to>=s.to-10){let f=n.state.sliceDoc(t.from,t.to),u=oh(n)||n.state.doc.lineAt(s.head),d=s.to-t.to,p=s.to-s.from;l=r.changeByRange(g=>{if(g.from==s.from&&g.to==s.to)return{changes:h,range:c||g.map(h)};let y=g.to-d,b=y-f.length;if(g.to-g.from!=p||n.state.sliceDoc(b,y)!=f||u&&g.to>=u.from&&g.from<=u.to)return{range:g};let v=r.changes({from:b,to:y,insert:t.insert}),S=g.to-s.to;return{changes:v,range:c?w.range(Math.max(0,c.anchor+S),Math.max(0,c.head+S)):g.map(v)}})}else l={changes:h,selection:c&&r.selection.replaceRange(c)}}let a="input.type";return n.composing&&(a+=".compose",n.inputState.compositionFirstChange&&(a+=".start",n.inputState.compositionFirstChange=!1)),n.dispatch(l,{scrollIntoView:!0,userEvent:a}),!0}else if(i&&!i.main.eq(s)){let r=!1,o="select";return n.inputState.lastSelectionTime>Date.now()-50&&(n.inputState.lastSelectionOrigin=="select"&&(r=!0),o=n.inputState.lastSelectionOrigin),n.dispatch({selection:i,scrollIntoView:r,userEvent:o}),!0}else return!1}function Pu(n,e,t,i){let s=Math.min(n.length,e.length),r=0;for(;r0&&l>0&&n.charCodeAt(o-1)==e.charCodeAt(l-1);)o--,l--;if(i=="end"){let a=Math.max(0,r-Math.min(o,l));t-=o+a-r}if(o=o?r-t:0;r-=a,l=r+(l-o),o=r}else if(l=l?r-t:0;r-=a,o=r+(o-l),l=r}return{from:r,toA:o,toB:l}}function Eu(n){let e=[];if(n.root.activeElement!=n.contentDOM)return e;let{anchorNode:t,anchorOffset:i,focusNode:s,focusOffset:r}=n.observer.selectionRange;return t&&(e.push(new Co(t,i)),(s!=t||r!=i)&&e.push(new Co(s,r))),e}function Ru(n,e){if(n.length==0)return null;let t=n[0].pos,i=n.length==2?n[1].pos:t;return t>-1&&i>-1?w.single(t+e,i+e):null}const Lu={childList:!0,characterData:!0,subtree:!0,attributes:!0,characterDataOldValue:!0},ls=A.ie&&A.ie_version<=11;class Iu{constructor(e){this.view=e,this.active=!1,this.selectionRange=new Tf,this.selectionChanged=!1,this.delayedFlush=-1,this.resizeTimeout=-1,this.queue=[],this.delayedAndroidKey=null,this.flushingAndroidKey=-1,this.lastChange=0,this.scrollTargets=[],this.intersection=null,this.resize=null,this.intersecting=!1,this.gapIntersection=null,this.gaps=[],this.parentCheck=-1,this.dom=e.contentDOM,this.observer=new MutationObserver(t=>{for(let i of 
t)this.queue.push(i);(A.ie&&A.ie_version<=11||A.ios&&e.composing)&&t.some(i=>i.type=="childList"&&i.removedNodes.length||i.type=="characterData"&&i.oldValue.length>i.target.nodeValue.length)?this.flushSoon():this.flush()}),ls&&(this.onCharData=t=>{this.queue.push({target:t.target,type:"characterData",oldValue:t.prevValue}),this.flushSoon()}),this.onSelectionChange=this.onSelectionChange.bind(this),this.onResize=this.onResize.bind(this),this.onPrint=this.onPrint.bind(this),this.onScroll=this.onScroll.bind(this),typeof ResizeObserver=="function"&&(this.resize=new ResizeObserver(()=>{var t;((t=this.view.docView)===null||t===void 0?void 0:t.lastUpdate){this.parentCheck<0&&(this.parentCheck=setTimeout(this.listenForScroll.bind(this),1e3)),t.length>0&&t[t.length-1].intersectionRatio>0!=this.intersecting&&(this.intersecting=!this.intersecting,this.intersecting!=this.view.inView&&this.onScrollChanged(document.createEvent("Event")))},{}),this.intersection.observe(this.dom),this.gapIntersection=new IntersectionObserver(t=>{t.length>0&&t[t.length-1].intersectionRatio>0&&this.onScrollChanged(document.createEvent("Event"))},{})),this.listenForScroll(),this.readSelectionRange()}onScrollChanged(e){this.view.inputState.runScrollHandlers(this.view,e),this.intersecting&&this.view.measure()}onScroll(e){this.intersecting&&this.flush(!1),this.onScrollChanged(e)}onResize(){this.resizeTimeout<0&&(this.resizeTimeout=setTimeout(()=>{this.resizeTimeout=-1,this.view.requestMeasure()},50))}onPrint(){this.view.viewState.printing=!0,this.view.measure(),setTimeout(()=>{this.view.viewState.printing=!1,this.view.requestMeasure()},500)}updateGaps(e){if(this.gapIntersection&&(e.length!=this.gaps.length||this.gaps.some((t,i)=>t!=e[i]))){this.gapIntersection.disconnect();for(let t of e)this.gapIntersection.observe(t);this.gaps=e}}onSelectionChange(e){let t=this.selectionChanged;if(!this.readSelectionRange()||this.delayedAndroidKey)return;let{view:i}=this,s=this.selectionRange;if(i.state.facet(zn)?i.root.activeElement!=this.dom:!dn(i.dom,s))return;let r=s.anchorNode&&i.docView.nearest(s.anchorNode);if(r&&r.ignoreEvent(e)){t||(this.selectionChanged=!1);return}(A.ie&&A.ie_version<=11||A.android&&A.chrome)&&!i.state.selection.main.empty&&s.focusNode&&Sn(s.focusNode,s.focusOffset,s.anchorNode,s.anchorOffset)?this.flushSoon():this.flush(!1)}readSelectionRange(){let{view:e}=this,t=A.safari&&e.root.nodeType==11&&Af(this.dom.ownerDocument)==this.dom&&Nu(this.view)||xn(e.root);if(!t||this.selectionRange.eq(t))return!1;let i=dn(this.dom,t);return i&&!this.selectionChanged&&e.inputState.lastFocusTime>Date.now()-200&&e.inputState.lastTouchTime{let r=this.delayedAndroidKey;r&&(this.clearDelayedAndroidKey(),!this.flush()&&r.force&&$t(this.dom,r.key,r.keyCode))};this.flushingAndroidKey=this.view.win.requestAnimationFrame(s)}(!this.delayedAndroidKey||e=="Enter")&&(this.delayedAndroidKey={key:e,keyCode:t,force:this.lastChange{this.delayedFlush=-1,this.flush()}))}forceFlush(){this.delayedFlush>=0&&(this.view.win.cancelAnimationFrame(this.delayedFlush),this.delayedFlush=-1),this.flush()}processRecords(){let e=this.queue;for(let r of this.observer.takeRecords())e.push(r);e.length&&(this.queue=[]);let t=-1,i=-1,s=!1;for(let r of e){let o=this.readMutation(r);o&&(o.typeOver&&(s=!0),t==-1?{from:t,to:i}=o:(t=Math.min(o.from,t),i=Math.max(o.to,i)))}return{from:t,to:i,typeOver:s}}readChange(){let{from:e,to:t,typeOver:i}=this.processRecords(),s=this.selectionChanged&&dn(this.dom,this.selectionRange);return 
e<0&&!s?null:(e>-1&&(this.lastChange=Date.now()),this.view.inputState.lastFocusTime=0,this.selectionChanged=!1,new Bu(this.view,e,t,i))}flush(e=!0){if(this.delayedFlush>=0||this.delayedAndroidKey)return!1;e&&this.readSelectionRange();let t=this.readChange();if(!t)return!1;let i=this.view.state,s=kh(this.view,t);return this.view.state==i&&this.view.update([]),s}readMutation(e){let t=this.view.docView.nearest(e.target);if(!t||t.ignoreMutation(e))return null;if(t.markDirty(e.type=="attributes"),e.type=="attributes"&&(t.dirty|=4),e.type=="childList"){let i=jo(t,e.previousSibling||e.target.previousSibling,-1),s=jo(t,e.nextSibling||e.target.nextSibling,1);return{from:i?t.posAfter(i):t.posAtStart,to:s?t.posBefore(s):t.posAtEnd,typeOver:!1}}else return e.type=="characterData"?{from:t.posAtStart,to:t.posAtEnd,typeOver:e.target.nodeValue==e.oldValue}:null}setWindow(e){e!=this.win&&(this.removeWindowListeners(this.win),this.win=e,this.addWindowListeners(this.win))}addWindowListeners(e){e.addEventListener("resize",this.onResize),e.addEventListener("beforeprint",this.onPrint),e.addEventListener("scroll",this.onScroll),e.document.addEventListener("selectionchange",this.onSelectionChange)}removeWindowListeners(e){e.removeEventListener("scroll",this.onScroll),e.removeEventListener("resize",this.onResize),e.removeEventListener("beforeprint",this.onPrint),e.document.removeEventListener("selectionchange",this.onSelectionChange)}destroy(){var e,t,i;this.stop(),(e=this.intersection)===null||e===void 0||e.disconnect(),(t=this.gapIntersection)===null||t===void 0||t.disconnect(),(i=this.resize)===null||i===void 0||i.disconnect();for(let s of this.scrollTargets)s.removeEventListener("scroll",this.onScroll);this.removeWindowListeners(this.win),clearTimeout(this.parentCheck),clearTimeout(this.resizeTimeout),this.win.cancelAnimationFrame(this.delayedFlush),this.win.cancelAnimationFrame(this.flushingAndroidKey)}}function jo(n,e,t){for(;e;){let i=K.get(e);if(i&&i.parent==n)return i;let s=e.parentNode;e=s!=n.dom?s:t>0?e.nextSibling:e.previousSibling}return null}function Nu(n){let e=null;function t(a){a.preventDefault(),a.stopImmediatePropagation(),e=a.getTargetRanges()[0]}if(n.contentDOM.addEventListener("beforeinput",t,!0),n.dom.ownerDocument.execCommand("indent"),n.contentDOM.removeEventListener("beforeinput",t,!0),!e)return null;let i=e.startContainer,s=e.startOffset,r=e.endContainer,o=e.endOffset,l=n.docView.domAtPos(n.state.selection.main.anchor);return Sn(l.node,l.offset,r,o)&&([i,s,r,o]=[r,o,i,s]),{anchorNode:i,anchorOffset:s,focusNode:r,focusOffset:o}}class O{constructor(e={}){this.plugins=[],this.pluginMap=new Map,this.editorAttrs={},this.contentAttrs={},this.bidiCache=[],this.destroyed=!1,this.updateState=2,this.measureScheduled=-1,this.measureRequests=[],this.contentDOM=document.createElement("div"),this.scrollDOM=document.createElement("div"),this.scrollDOM.tabIndex=-1,this.scrollDOM.className="cm-scroller",this.scrollDOM.appendChild(this.contentDOM),this.announceDOM=document.createElement("div"),this.announceDOM.style.cssText="position: absolute; top: -10000px",this.announceDOM.setAttribute("aria-live","polite"),this.dom=document.createElement("div"),this.dom.appendChild(this.announceDOM),this.dom.appendChild(this.scrollDOM),this._dispatch=e.dispatch||(t=>this.update([t])),this.dispatch=this.dispatch.bind(this),this._root=e.root||Of(e.parent)||document,this.viewState=new zo(e.state||N.create(e)),this.plugins=this.state.facet(yi).map(t=>new ns(t));for(let t of this.plugins)t.update(this);this.observer=new 
Iu(this),this.inputState=new su(this),this.inputState.ensureHandlers(this,this.plugins),this.docView=new Ao(this),this.mountStyles(),this.updateAttrs(),this.updateState=0,this.requestMeasure(),e.parent&&e.parent.appendChild(this.dom)}get state(){return this.viewState.state}get viewport(){return this.viewState.viewport}get visibleRanges(){return this.viewState.visibleRanges}get inView(){return this.viewState.inView}get composing(){return this.inputState.composing>0}get compositionStarted(){return this.inputState.composing>=0}get root(){return this._root}get win(){return this.dom.ownerDocument.defaultView||window}dispatch(...e){this._dispatch(e.length==1&&e[0]instanceof re?e[0]:this.state.update(...e))}update(e){if(this.updateState!=0)throw new Error("Calls to EditorView.update are not allowed while an update is in progress");let t=!1,i=!1,s,r=this.state;for(let h of e){if(h.startState!=r)throw new RangeError("Trying to update state with a transaction that doesn't start from the previous state.");r=h.state}if(this.destroyed){this.viewState.state=r;return}let o=this.observer.delayedAndroidKey,l=null;if(o?(this.observer.clearDelayedAndroidKey(),l=this.observer.readChange(),(l&&!this.state.doc.eq(r.doc)||!this.state.selection.eq(r.selection))&&(l=null)):this.observer.clear(),r.facet(N.phrases)!=this.state.facet(N.phrases))return this.setState(r);s=Mn.create(this,r,e);let a=this.viewState.scrollTarget;try{this.updateState=2;for(let h of e){if(a&&(a=a.map(h.changes)),h.scrollIntoView){let{main:c}=h.state.selection;a=new An(c.empty?c:w.cursor(c.head,c.head>c.anchor?-1:1))}for(let c of h.effects)c.is(xo)&&(a=c.value)}this.viewState.update(s,a),this.bidiCache=Dn.update(this.bidiCache,s.changes),s.empty||(this.updatePlugins(s),this.inputState.update(s)),t=this.docView.update(s),this.state.facet(bi)!=this.styleModules&&this.mountStyles(),i=this.updateAttrs(),this.showAnnouncements(e),this.docView.updateSelection(t,e.some(h=>h.isUserEvent("select.pointer")))}finally{this.updateState=0}if(s.startState.facet(Zi)!=s.state.facet(Zi)&&(this.viewState.mustMeasureContent=!0),(t||i||a||this.viewState.mustEnforceCursorAssoc||this.viewState.mustMeasureContent)&&this.requestMeasure(),!s.empty)for(let h of this.state.facet(Xs))h(s);l&&!kh(this,l)&&o.force&&$t(this.contentDOM,o.key,o.keyCode)}setState(e){if(this.updateState!=0)throw new Error("Calls to EditorView.setState are not allowed while an update is in progress");if(this.destroyed){this.viewState.state=e;return}this.updateState=2;let t=this.hasFocus;try{for(let i of this.plugins)i.destroy(this);this.viewState=new zo(e),this.plugins=e.facet(yi).map(i=>new ns(i)),this.pluginMap.clear();for(let i of this.plugins)i.update(this);this.docView=new Ao(this),this.inputState.ensureHandlers(this,this.plugins),this.mountStyles(),this.updateAttrs(),this.bidiCache=[]}finally{this.updateState=0}t&&this.focus(),this.requestMeasure()}updatePlugins(e){let t=e.startState.facet(yi),i=e.state.facet(yi);if(t!=i){let s=[];for(let r of i){let o=t.indexOf(r);if(o<0)s.push(new ns(r));else{let l=this.plugins[o];l.mustUpdate=e,s.push(l)}}for(let r of this.plugins)r.mustUpdate!=e&&r.destroy(this);this.plugins=s,this.pluginMap.clear(),this.inputState.ensureHandlers(this,this.plugins)}else for(let s of this.plugins)s.mustUpdate=e;for(let s=0;s-1&&cancelAnimationFrame(this.measureScheduled),this.measureScheduled=0,e&&this.observer.forceFlush();let t=null,{scrollHeight:i,scrollTop:s,clientHeight:r}=this.scrollDOM,o=s>i-r-4?i:s;try{for(let l=0;;l++){this.updateState=1;let 
a=this.viewport,h=this.viewState.lineBlockAtHeight(o),c=this.viewState.measure(this);if(!c&&!this.measureRequests.length&&this.viewState.scrollTarget==null)break;if(l>5){console.warn(this.measureRequests.length?"Measure loop restarted more than 5 times":"Viewport failed to stabilize");break}let f=[];c&4||([this.measureRequests,f]=[f,this.measureRequests]);let u=f.map(y=>{try{return y.read(this)}catch(b){return He(this.state,b),Ko}}),d=Mn.create(this,this.state,[]),p=!1,g=!1;d.flags|=c,t?t.flags|=c:t=d,this.updateState=2,d.empty||(this.updatePlugins(d),this.inputState.update(d),this.updateAttrs(),p=this.docView.update(d));for(let y=0;y1||y<-1)&&(this.scrollDOM.scrollTop+=y,g=!0)}if(p&&this.docView.updateSelection(!0),this.viewport.from==a.from&&this.viewport.to==a.to&&!g&&this.measureRequests.length==0)break}}finally{this.updateState=0,this.measureScheduled=-1}if(t&&!t.empty)for(let l of this.state.facet(Xs))l(t)}get themeClasses(){return sr+" "+(this.state.facet(nr)?bh:yh)+" "+this.state.facet(Zi)}updateAttrs(){let e=Uo(this,Za,{class:"cm-editor"+(this.hasFocus?" cm-focused ":" ")+this.themeClasses}),t={spellcheck:"false",autocorrect:"off",autocapitalize:"off",translate:"no",contenteditable:this.state.facet(zn)?"true":"false",class:"cm-content",style:`${A.tabSize}: ${this.state.tabSize}`,role:"textbox","aria-multiline":"true"};this.state.readOnly&&(t["aria-readonly"]="true"),Uo(this,Qa,t);let i=this.observer.ignore(()=>{let s=Js(this.contentDOM,this.contentAttrs,t),r=Js(this.dom,this.editorAttrs,e);return s||r});return this.editorAttrs=e,this.contentAttrs=t,i}showAnnouncements(e){let t=!0;for(let i of e)for(let s of i.effects)if(s.is(O.announce)){t&&(this.announceDOM.textContent=""),t=!1;let r=this.announceDOM.appendChild(document.createElement("div"));r.textContent=s.value}}mountStyles(){this.styleModules=this.state.facet(bi),mt.mount(this.root,this.styleModules.concat(Ou).reverse())}readMeasured(){if(this.updateState==2)throw new Error("Reading the editor layout isn't allowed during an update");this.updateState==0&&this.measureScheduled>-1&&this.measure(!1)}requestMeasure(e){if(this.measureScheduled<0&&(this.measureScheduled=this.win.requestAnimationFrame(()=>this.measure())),e){if(e.key!=null){for(let t=0;ti.spec==e)||null),t&&t.update(this).value}get documentTop(){return this.contentDOM.getBoundingClientRect().top+this.viewState.paddingTop}get documentPadding(){return{top:this.viewState.paddingTop,bottom:this.viewState.paddingBottom}}elementAtHeight(e){return this.readMeasured(),this.viewState.elementAtHeight(e)}lineBlockAtHeight(e){return this.readMeasured(),this.viewState.lineBlockAtHeight(e)}get viewportLineBlocks(){return this.viewState.viewportLines}lineBlockAt(e){return this.viewState.lineBlockAt(e)}get contentHeight(){return this.viewState.contentHeight}moveByChar(e,t,i){return rs(this,e,Po(this,e,t,i))}moveByGroup(e,t){return rs(this,e,Po(this,e,t,i=>iu(this,e.head,i)))}moveToLineBoundary(e,t,i=!0){return tu(this,e,t,i)}moveVertically(e,t,i){return rs(this,e,nu(this,e,t,i))}domAtPos(e){return this.docView.domAtPos(e)}posAtDOM(e,t=0){return this.docView.posFromDOM(e,t)}posAtCoords(e,t=!0){return this.readMeasured(),ah(this,e,t)}coordsAtPos(e,t=1){this.readMeasured();let i=this.docView.coordsAt(e,t);if(!i||i.left==i.right)return i;let s=this.state.doc.lineAt(e),r=this.bidiSpans(s),o=r[Jt.find(r,e-s.from,-1,t)];return Dr(i,o.dir==Z.LTR==t>0)}get defaultCharacterWidth(){return this.viewState.heightOracle.charWidth}get defaultLineHeight(){return 
this.viewState.heightOracle.lineHeight}get textDirection(){return this.viewState.defaultTextDirection}textDirectionAt(e){return!this.state.facet(Ya)||ethis.viewport.to?this.textDirection:(this.readMeasured(),this.docView.textDirectionAt(e))}get lineWrapping(){return this.viewState.heightOracle.lineWrapping}bidiSpans(e){if(e.length>_u)return nh(e.length);let t=this.textDirectionAt(e.from);for(let s of this.bidiCache)if(s.from==e.from&&s.dir==t)return s.order;let i=Wf(e.text,t);return this.bidiCache.push(new Dn(e.from,e.to,t,i)),i}get hasFocus(){var e;return(this.dom.ownerDocument.hasFocus()||A.safari&&((e=this.inputState)===null||e===void 0?void 0:e.lastContextMenu)>Date.now()-3e4)&&this.root.activeElement==this.contentDOM}focus(){this.observer.ignore(()=>{Ea(this.contentDOM),this.docView.updateSelection()})}setRoot(e){this._root!=e&&(this._root=e,this.observer.setWindow((e.nodeType==9?e:e.ownerDocument).defaultView||window),this.mountStyles())}destroy(){for(let e of this.plugins)e.destroy(this);this.plugins=[],this.inputState.destroy(),this.dom.remove(),this.observer.destroy(),this.measureScheduled>-1&&cancelAnimationFrame(this.measureScheduled),this.destroyed=!0}static scrollIntoView(e,t={}){return xo.of(new An(typeof e=="number"?w.cursor(e):e,t.y,t.x,t.yMargin,t.xMargin))}static domEventHandlers(e){return be.define(()=>({}),{eventHandlers:e})}static theme(e,t){let i=mt.newName(),s=[Zi.of(i),bi.of(rr(`.${i}`,e))];return t&&t.dark&&s.push(nr.of(!0)),s}static baseTheme(e){return Vi.lowest(bi.of(rr("."+sr,e,wh)))}static findFromDOM(e){var t;let i=e.querySelector(".cm-content"),s=i&&K.get(i)||K.get(e);return((t=s?.rootView)===null||t===void 0?void 0:t.view)||null}}O.styleModule=bi;O.inputHandler=Ja;O.perLineTextDirection=Ya;O.exceptionSink=$a;O.updateListener=Xs;O.editable=zn;O.mouseSelectionStyle=Ga;O.dragMovesSelection=Ua;O.clickAddsSelectionRange=Ka;O.decorations=Ei;O.atomicRanges=eh;O.scrollMargins=th;O.darkTheme=nr;O.contentAttributes=Qa;O.editorAttributes=Za;O.lineWrapping=O.contentAttributes.of({class:"cm-lineWrapping"});O.announce=R.define();const _u=4096,Ko={};class Dn{constructor(e,t,i,s){this.from=e,this.to=t,this.dir=i,this.order=s}static update(e,t){if(t.empty)return e;let i=[],s=e.length?e[e.length-1].dir:Z.LTR;for(let r=Math.max(0,e.length-10);r=0;s--){let r=i[s],o=typeof r=="function"?r(n):r;o&&$s(o,t)}return t}const Vu=A.mac?"mac":A.windows?"win":A.linux?"linux":"key";function Fu(n,e){const t=n.split(/-(?!$)/);let i=t[t.length-1];i=="Space"&&(i=" ");let s,r,o,l;for(let a=0;ai.concat(s),[]))),t}let at=null;const zu=4e3;function qu(n,e=Vu){let t=Object.create(null),i=Object.create(null),s=(o,l)=>{let a=i[o];if(a==null)i[o]=l;else if(a!=l)throw new Error("Key binding "+o+" is used both as a regular binding and as a multi-stroke prefix")},r=(o,l,a,h)=>{var c,f;let u=t[o]||(t[o]=Object.create(null)),d=l.split(/ (?!$)/).map(y=>Fu(y,e));for(let y=1;y{let S=at={view:v,prefix:b,scope:o};return setTimeout(()=>{at==S&&(at=null)},zu),!0}]})}let p=d.join(" ");s(p,!1);let g=u[p]||(u[p]={preventDefault:!1,run:((f=(c=u._any)===null||c===void 0?void 0:c.run)===null||f===void 0?void 0:f.slice())||[]});a&&g.run.push(a),h&&(g.preventDefault=!0)};for(let o of n){let l=o.scope?o.scope.split(" "):["editor"];if(o.any)for(let h of l){let c=t[h]||(t[h]=Object.create(null));c._any||(c._any={preventDefault:!1,run:[]});for(let f in c)c[f].run.push(o.any)}let a=o[e]||o.key;if(a)for(let h of l)r(h,a,o.run,o.preventDefault),o.shift&&r(h,"Shift-"+a,o.shift,o.preventDefault)}return t}function ju(n,e,t,i){let 
s=Cf(e),r=ge(s,0),o=Ee(r)==s.length&&s!=" ",l="",a=!1;at&&at.view==t&&at.scope==i&&(l=at.prefix+" ",(a=ch.indexOf(e.keyCode)<0)&&(at=null));let h=new Set,c=p=>{if(p){for(let g of p.run)if(!h.has(g)&&(h.add(g),g(t,e)))return!0;p.preventDefault&&(a=!0)}return!1},f=n[i],u,d;if(f){if(c(f[l+Qi(s,e,!o)]))return!0;if(o&&(e.shiftKey||e.altKey||e.metaKey||r>127)&&(u=gt[e.keyCode])&&u!=s){if(c(f[l+Qi(u,e,!0)]))return!0;if(e.shiftKey&&(d=Oi[e.keyCode])!=s&&d!=u&&c(f[l+Qi(d,e,!1)]))return!0}else if(o&&e.shiftKey&&c(f[l+Qi(s,e,!0)]))return!0;if(c(f._any))return!0}return a}const vh=!A.ios,ki=D.define({combine(n){return _t(n,{cursorBlinkRate:1200,drawRangeCursor:!0},{cursorBlinkRate:(e,t)=>Math.min(e,t),drawRangeCursor:(e,t)=>e||t})}});function Ku(n={}){return[ki.of(n),Uu,Gu,Xa.of(!0)]}class xh{constructor(e,t,i,s,r){this.left=e,this.top=t,this.width=i,this.height=s,this.className=r}draw(){let e=document.createElement("div");return e.className=this.className,this.adjust(e),e}adjust(e){e.style.left=this.left+"px",e.style.top=this.top+"px",this.width>=0&&(e.style.width=this.width+"px"),e.style.height=this.height+"px"}eq(e){return this.left==e.left&&this.top==e.top&&this.width==e.width&&this.height==e.height&&this.className==e.className}}const Uu=be.fromClass(class{constructor(n){this.view=n,this.rangePieces=[],this.cursors=[],this.measureReq={read:this.readPos.bind(this),write:this.drawSel.bind(this)},this.selectionLayer=n.scrollDOM.appendChild(document.createElement("div")),this.selectionLayer.className="cm-selectionLayer",this.selectionLayer.setAttribute("aria-hidden","true"),this.cursorLayer=n.scrollDOM.appendChild(document.createElement("div")),this.cursorLayer.className="cm-cursorLayer",this.cursorLayer.setAttribute("aria-hidden","true"),n.requestMeasure(this.measureReq),this.setBlinkRate()}setBlinkRate(){this.cursorLayer.style.animationDuration=this.view.state.facet(ki).cursorBlinkRate+"ms"}update(n){let e=n.startState.facet(ki)!=n.state.facet(ki);(e||n.selectionSet||n.geometryChanged||n.viewportChanged)&&this.view.requestMeasure(this.measureReq),n.transactions.some(t=>t.scrollIntoView)&&(this.cursorLayer.style.animationName=this.cursorLayer.style.animationName=="cm-blink"?"cm-blink2":"cm-blink"),e&&this.setBlinkRate()}readPos(){let{state:n}=this.view,e=n.facet(ki),t=n.selection.ranges.map(s=>s.empty?[]:$u(this.view,s)).reduce((s,r)=>s.concat(r)),i=[];for(let s of n.selection.ranges){let r=s==n.selection.main;if(s.empty?!r||vh:e.drawRangeCursor){let o=Ju(this.view,s,r);o&&i.push(o)}}return{rangePieces:t,cursors:i}}drawSel({rangePieces:n,cursors:e}){if(n.length!=this.rangePieces.length||n.some((t,i)=>!t.eq(this.rangePieces[i]))){this.selectionLayer.textContent="";for(let t of n)this.selectionLayer.appendChild(t.draw());this.rangePieces=n}if(e.length!=this.cursors.length||e.some((t,i)=>!t.eq(this.cursors[i]))){let t=this.cursorLayer.children;if(t.length!==e.length){this.cursorLayer.textContent="";for(const i of e)this.cursorLayer.appendChild(i.draw())}else e.forEach((i,s)=>i.adjust(t[s]));this.cursors=e}}destroy(){this.selectionLayer.remove(),this.cursorLayer.remove()}}),Sh={".cm-line":{"& ::selection":{backgroundColor:"transparent !important"},"&::selection":{backgroundColor:"transparent !important"}}};vh&&(Sh[".cm-line"].caretColor="transparent !important");const Gu=Vi.highest(O.theme(Sh));function Ch(n){let e=n.scrollDOM.getBoundingClientRect();return{left:(n.textDirection==Z.LTR?e.left:e.right-n.scrollDOM.clientWidth)-n.scrollDOM.scrollLeft,top:e.top-n.scrollDOM.scrollTop}}function $o(n,e,t){let 
i=w.cursor(e);return{from:Math.max(t.from,n.moveToLineBoundary(i,!1,!0).from),to:Math.min(t.to,n.moveToLineBoundary(i,!0,!0).from),type:W.Text}}function Jo(n,e){let t=n.lineBlockAt(e);if(Array.isArray(t.type)){for(let i of t.type)if(i.to>e||i.to==e&&(i.to==t.to||i.type==W.Text))return i}return t}function $u(n,e){if(e.to<=n.viewport.from||e.from>=n.viewport.to)return[];let t=Math.max(e.from,n.viewport.from),i=Math.min(e.to,n.viewport.to),s=n.textDirection==Z.LTR,r=n.contentDOM,o=r.getBoundingClientRect(),l=Ch(n),a=window.getComputedStyle(r.firstChild),h=o.left+parseInt(a.paddingLeft)+Math.min(0,parseInt(a.textIndent)),c=o.right-parseInt(a.paddingRight),f=Jo(n,t),u=Jo(n,i),d=f.type==W.Text?f:null,p=u.type==W.Text?u:null;if(n.lineWrapping&&(d&&(d=$o(n,t,d)),p&&(p=$o(n,i,p))),d&&p&&d.from==p.from)return y(b(e.from,e.to,d));{let S=d?b(e.from,null,d):v(f,!1),k=p?b(null,e.to,p):v(u,!0),C=[];return(d||f).to<(p||u).from-1?C.push(g(h,S.bottom,c,k.top)):S.bottomP&&G.from=M)break;J>Q&&I(Math.max(le,Q),S==null&&le<=P,Math.min(J,M),k==null&&J>=V,Y.dir)}if(Q=$.to+1,Q>=M)break}return U.length==0&&I(P,S==null,V,k==null,n.textDirection),{top:T,bottom:B,horizontal:U}}function v(S,k){let C=o.top+(k?S.top:S.bottom);return{top:C,bottom:C,horizontal:[]}}}function Ju(n,e,t){let i=n.coordsAtPos(e.head,e.assoc||1);if(!i)return null;let s=Ch(n);return new xh(i.left-s.left,i.top-s.top,-1,i.bottom-i.top,t?"cm-cursor cm-cursor-primary":"cm-cursor cm-cursor-secondary")}function Yo(n,e,t,i,s){e.lastIndex=0;for(let r=n.iterRange(t,i),o=t,l;!r.next().done;o+=r.value.length)if(!r.lineBreak)for(;l=e.exec(r.value);)s(o+l.index,l)}function Yu(n,e){let t=n.visibleRanges;if(t.length==1&&t[0].from==n.viewport.from&&t[0].to==n.viewport.to)return t;let i=[];for(let{from:s,to:r}of t)s=Math.max(n.state.doc.lineAt(s).from,s-e),r=Math.min(n.state.doc.lineAt(r).to,r+e),i.length&&i[i.length-1].to>=s?i[i.length-1].to=r:i.push({from:s,to:r});return i}class Xu{constructor(e){const{regexp:t,decoration:i,decorate:s,boundary:r,maxLength:o=1e3}=e;if(!t.global)throw new RangeError("The regular expression given to MatchDecorator should have its 'g' flag set");if(this.regexp=t,s)this.addMatch=(l,a,h,c)=>s(c,h,h+l[0].length,l,a);else if(typeof i=="function")this.addMatch=(l,a,h,c)=>{let f=i(l,a,h);f&&c(h,h+l[0].length,f)};else if(i)this.addMatch=(l,a,h,c)=>c(h,h+l[0].length,i);else throw new RangeError("Either 'decorate' or 'decoration' should be provided to MatchDecorator");this.boundary=r,this.maxLength=o}createDeco(e){let t=new Pt,i=t.add.bind(t);for(let{from:s,to:r}of Yu(e,this.maxLength))Yo(e.state.doc,this.regexp,s,r,(o,l)=>this.addMatch(l,e,o,i));return t.finish()}updateDeco(e,t){let i=1e9,s=-1;return e.docChanged&&e.changes.iterChanges((r,o,l,a)=>{a>e.view.viewport.from&&l1e3?this.createDeco(e.view):s>-1?this.updateRange(e.view,t.map(e.changes),i,s):t}updateRange(e,t,i,s){for(let r of e.visibleRanges){let o=Math.max(r.from,i),l=Math.min(r.to,s);if(l>o){let a=e.state.doc.lineAt(o),h=a.toa.from;o--)if(this.boundary.test(a.text[o-1-a.from])){c=o;break}for(;lu.push(b.range(g,y));if(a==h)for(this.regexp.lastIndex=c-a.from;(d=this.regexp.exec(a.text))&&d.indexthis.addMatch(y,e,g,p));t=t.update({filterFrom:c,filterTo:f,filter:(g,y)=>gf,add:u})}}return t}}const or=/x/.unicode!=null?"gu":"g",Zu=new RegExp(`[\0-\b ---Ÿ­؜​‎‏\u2028\u2029‭‮⁦⁧⁩\uFEFF-]`,or),Qu={0:"null",7:"bell",8:"backspace",10:"newline",11:"vertical tab",13:"carriage return",27:"escape",8203:"zero width space",8204:"zero width non-joiner",8205:"zero width joiner",8206:"left-to-right 
mark",8207:"right-to-left mark",8232:"line separator",8237:"left-to-right override",8238:"right-to-left override",8294:"left-to-right isolate",8295:"right-to-left isolate",8297:"pop directional isolate",8233:"paragraph separator",65279:"zero width no-break space",65532:"object replacement"};let as=null;function ed(){var n;if(as==null&&typeof document<"u"&&document.body){let e=document.body.style;as=((n=e.tabSize)!==null&&n!==void 0?n:e.MozTabSize)!=null}return as||!1}const mn=D.define({combine(n){let e=_t(n,{render:null,specialChars:Zu,addSpecialChars:null});return(e.replaceTabs=!ed())&&(e.specialChars=new RegExp(" |"+e.specialChars.source,or)),e.addSpecialChars&&(e.specialChars=new RegExp(e.specialChars.source+"|"+e.addSpecialChars.source,or)),e}});function td(n={}){return[mn.of(n),id()]}let Xo=null;function id(){return Xo||(Xo=be.fromClass(class{constructor(n){this.view=n,this.decorations=E.none,this.decorationCache=Object.create(null),this.decorator=this.makeDecorator(n.state.facet(mn)),this.decorations=this.decorator.createDeco(n)}makeDecorator(n){return new Xu({regexp:n.specialChars,decoration:(e,t,i)=>{let{doc:s}=t.state,r=ge(e[0],0);if(r==9){let o=s.lineAt(i),l=t.state.tabSize,a=Fi(o.text,l,i-o.from);return E.replace({widget:new od((l-a%l)*this.view.defaultCharacterWidth)})}return this.decorationCache[r]||(this.decorationCache[r]=E.replace({widget:new rd(n,r)}))},boundary:n.replaceTabs?void 0:/[^]/})}update(n){let e=n.state.facet(mn);n.startState.facet(mn)!=e?(this.decorator=this.makeDecorator(e),this.decorations=this.decorator.createDeco(n.view)):this.decorations=this.decorator.updateDeco(n,this.decorations)}},{decorations:n=>n.decorations}))}const nd="•";function sd(n){return n>=32?nd:n==10?"␤":String.fromCharCode(9216+n)}class rd extends tt{constructor(e,t){super(),this.options=e,this.code=t}eq(e){return e.code==this.code}toDOM(e){let t=sd(this.code),i=e.state.phrase("Control character")+" "+(Qu[this.code]||"0x"+this.code.toString(16)),s=this.options.render&&this.options.render(this.code,i,t);if(s)return s;let r=document.createElement("span");return r.textContent=t,r.title=i,r.setAttribute("aria-label",i),r.className="cm-specialChar",r}ignoreEvent(){return!1}}class od extends tt{constructor(e){super(),this.width=e}eq(e){return e.width==this.width}toDOM(){let e=document.createElement("span");return e.textContent=" ",e.className="cm-tab",e.style.width=this.width+"px",e}ignoreEvent(){return!1}}class ld extends tt{constructor(e){super(),this.content=e}toDOM(){let e=document.createElement("span");return e.className="cm-placeholder",e.style.pointerEvents="none",e.appendChild(typeof this.content=="string"?document.createTextNode(this.content):this.content),typeof this.content=="string"?e.setAttribute("aria-label","placeholder "+this.content):e.setAttribute("aria-hidden","true"),e}ignoreEvent(){return!1}}function ad(n){return be.fromClass(class{constructor(e){this.view=e,this.placeholder=E.set([E.widget({widget:new ld(n),side:1}).range(0)])}get decorations(){return this.view.state.doc.length?E.none:this.placeholder}},{decorations:e=>e.decorations})}const lr=2e3;function hd(n,e,t){let i=Math.min(e.line,t.line),s=Math.max(e.line,t.line),r=[];if(e.off>lr||t.off>lr||e.col<0||t.col<0){let o=Math.min(e.off,t.off),l=Math.max(e.off,t.off);for(let a=i;a<=s;a++){let h=n.doc.line(a);h.length<=l&&r.push(w.range(h.from+o,h.to+l))}}else{let o=Math.min(e.col,t.col),l=Math.max(e.col,t.col);for(let a=i;a<=s;a++){let h=n.doc.line(a),c=Hs(h.text,o,n.tabSize,!0);if(c<0)r.push(w.cursor(h.to));else{let 
f=Hs(h.text,l,n.tabSize);r.push(w.range(h.from+c,h.from+f))}}}return r}function cd(n,e){let t=n.coordsAtPos(n.viewport.from);return t?Math.round(Math.abs((t.left-e)/n.defaultCharacterWidth)):-1}function Zo(n,e){let t=n.posAtCoords({x:e.clientX,y:e.clientY},!1),i=n.state.doc.lineAt(t),s=t-i.from,r=s>lr?-1:s==i.length?cd(n,e.clientX):Fi(i.text,n.state.tabSize,t-i.from);return{line:i.number,col:r,off:s}}function fd(n,e){let t=Zo(n,e),i=n.state.selection;return t?{update(s){if(s.docChanged){let r=s.changes.mapPos(s.startState.doc.line(t.line).from),o=s.state.doc.lineAt(r);t={line:o.number,col:t.col,off:Math.min(t.off,o.length)},i=i.map(s.changes)}},get(s,r,o){let l=Zo(n,s);if(!l)return i;let a=hd(n.state,t,l);return a.length?o?w.create(a.concat(i.ranges)):w.create(a):i}}:null}function ud(n){let e=n?.eventFilter||(t=>t.altKey&&t.button==0);return O.mouseSelectionStyle.of((t,i)=>e(i)?fd(t,i):null)}const dd={Alt:[18,n=>n.altKey],Control:[17,n=>n.ctrlKey],Shift:[16,n=>n.shiftKey],Meta:[91,n=>n.metaKey]},pd={style:"cursor: crosshair"};function md(n={}){let[e,t]=dd[n.key||"Alt"],i=be.fromClass(class{constructor(s){this.view=s,this.isDown=!1}set(s){this.isDown!=s&&(this.isDown=s,this.view.update([]))}},{eventHandlers:{keydown(s){this.set(s.keyCode==e||t(s))},keyup(s){(s.keyCode==e||!t(s))&&this.set(!1)},mousemove(s){this.set(t(s))}}});return[i,O.contentAttributes.of(s=>{var r;return!((r=s.plugin(i))===null||r===void 0)&&r.isDown?pd:null})]}const hs="-10000px";class Ah{constructor(e,t,i){this.facet=t,this.createTooltipView=i,this.input=e.state.facet(t),this.tooltips=this.input.filter(s=>s),this.tooltipViews=this.tooltips.map(i)}update(e){var t;let i=e.state.facet(this.facet),s=i.filter(o=>o);if(i===this.input){for(let o of this.tooltipViews)o.update&&o.update(e);return!1}let r=[];for(let o=0;o{var e,t,i;return{position:A.ios?"absolute":((e=n.find(s=>s.position))===null||e===void 0?void 0:e.position)||"fixed",parent:((t=n.find(s=>s.parent))===null||t===void 0?void 0:t.parent)||null,tooltipSpace:((i=n.find(s=>s.tooltipSpace))===null||i===void 0?void 0:i.tooltipSpace)||gd}}}),Mh=be.fromClass(class{constructor(n){this.view=n,this.inView=!0,this.lastTransaction=0,this.measureTimeout=-1;let e=n.state.facet(cs);this.position=e.position,this.parent=e.parent,this.classes=n.themeClasses,this.createContainer(),this.measureReq={read:this.readMeasure.bind(this),write:this.writeMeasure.bind(this),key:this},this.manager=new Ah(n,Er,t=>this.createTooltip(t)),this.intersectionObserver=typeof IntersectionObserver=="function"?new IntersectionObserver(t=>{Date.now()>this.lastTransaction-50&&t.length>0&&t[t.length-1].intersectionRatio<1&&this.measureSoon()},{threshold:[1]}):null,this.observeIntersection(),n.win.addEventListener("resize",this.measureSoon=this.measureSoon.bind(this)),this.maybeMeasure()}createContainer(){this.parent?(this.container=document.createElement("div"),this.container.style.position="relative",this.container.className=this.view.themeClasses,this.parent.appendChild(this.container)):this.container=this.view.dom}observeIntersection(){if(this.intersectionObserver){this.intersectionObserver.disconnect();for(let n of this.manager.tooltipViews)this.intersectionObserver.observe(n.dom)}}measureSoon(){this.measureTimeout<0&&(this.measureTimeout=setTimeout(()=>{this.measureTimeout=-1,this.maybeMeasure()},50))}update(n){n.transactions.length&&(this.lastTransaction=Date.now());let e=this.manager.update(n);e&&this.observeIntersection();let 
t=e||n.geometryChanged,i=n.state.facet(cs);if(i.position!=this.position){this.position=i.position;for(let s of this.manager.tooltipViews)s.dom.style.position=this.position;t=!0}if(i.parent!=this.parent){this.parent&&this.container.remove(),this.parent=i.parent,this.createContainer();for(let s of this.manager.tooltipViews)this.container.appendChild(s.dom);t=!0}else this.parent&&this.view.themeClasses!=this.classes&&(this.classes=this.container.className=this.view.themeClasses);t&&this.maybeMeasure()}createTooltip(n){let e=n.create(this.view);if(e.dom.classList.add("cm-tooltip"),n.arrow&&!e.dom.querySelector(".cm-tooltip > .cm-tooltip-arrow")){let t=document.createElement("div");t.className="cm-tooltip-arrow",e.dom.appendChild(t)}return e.dom.style.position=this.position,e.dom.style.top=hs,this.container.appendChild(e.dom),e.mount&&e.mount(this.view),e}destroy(){var n,e;this.view.win.removeEventListener("resize",this.measureSoon);for(let t of this.manager.tooltipViews)t.dom.remove(),(n=t.destroy)===null||n===void 0||n.call(t);(e=this.intersectionObserver)===null||e===void 0||e.disconnect(),clearTimeout(this.measureTimeout)}readMeasure(){let n=this.view.dom.getBoundingClientRect();return{editor:n,parent:this.parent?this.container.getBoundingClientRect():n,pos:this.manager.tooltips.map((e,t)=>{let i=this.manager.tooltipViews[t];return i.getCoords?i.getCoords(e.pos):this.view.coordsAtPos(e.pos)}),size:this.manager.tooltipViews.map(({dom:e})=>e.getBoundingClientRect()),space:this.view.state.facet(cs).tooltipSpace(this.view)}}writeMeasure(n){let{editor:e,space:t}=n,i=[];for(let s=0;s=Math.min(e.bottom,t.bottom)||a.rightMath.min(e.right,t.right)+.1){l.style.top=hs;continue}let c=r.arrow?o.dom.querySelector(".cm-tooltip-arrow"):null,f=c?7:0,u=h.right-h.left,d=h.bottom-h.top,p=o.offset||bd,g=this.view.textDirection==Z.LTR,y=h.width>t.right-t.left?g?t.left:t.right-h.width:g?Math.min(a.left-(c?14:0)+p.x,t.right-u):Math.max(t.left,a.left-u+(c?14:0)-p.x),b=!!r.above;!r.strictSide&&(b?a.top-(h.bottom-h.top)-p.yt.bottom)&&b==t.bottom-a.bottom>a.top-t.top&&(b=!b);let v=b?a.top-d-f-p.y:a.bottom+f+p.y,S=y+u;if(o.overlap!==!0)for(let k of i)k.lefty&&k.topv&&(v=b?k.top-d-2-f:k.bottom+f+2);this.position=="absolute"?(l.style.top=v-n.parent.top+"px",l.style.left=y-n.parent.left+"px"):(l.style.top=v+"px",l.style.left=y+"px"),c&&(c.style.left=`${a.left+(g?p.x:-p.x)-(y+14-7)}px`),o.overlap!==!0&&i.push({left:y,top:v,right:S,bottom:v+d}),l.classList.toggle("cm-tooltip-above",b),l.classList.toggle("cm-tooltip-below",!b),o.positioned&&o.positioned()}}maybeMeasure(){if(this.manager.tooltips.length&&(this.view.inView&&this.view.requestMeasure(this.measureReq),this.inView!=this.view.inView&&(this.inView=this.view.inView,!this.inView)))for(let n of this.manager.tooltipViews)n.dom.style.top=hs}},{eventHandlers:{scroll(){this.maybeMeasure()}}}),yd=O.baseTheme({".cm-tooltip":{zIndex:100},"&light .cm-tooltip":{border:"1px solid #bbb",backgroundColor:"#f5f5f5"},"&light .cm-tooltip-section:not(:first-child)":{borderTop:"1px solid #bbb"},"&dark .cm-tooltip":{backgroundColor:"#333338",color:"white"},".cm-tooltip-arrow":{height:"7px",width:`${7*2}px`,position:"absolute",zIndex:-1,overflow:"hidden","&:before, &:after":{content:"''",position:"absolute",width:0,height:0,borderLeft:"7px solid transparent",borderRight:"7px solid transparent"},".cm-tooltip-above &":{bottom:"-7px","&:before":{borderTop:"7px solid #bbb"},"&:after":{borderTop:"7px solid #f5f5f5",bottom:"1px"}},".cm-tooltip-below &":{top:"-7px","&:before":{borderBottom:"7px 
solid #bbb"},"&:after":{borderBottom:"7px solid #f5f5f5",top:"1px"}}},"&dark .cm-tooltip .cm-tooltip-arrow":{"&:before":{borderTopColor:"#333338",borderBottomColor:"#333338"},"&:after":{borderTopColor:"transparent",borderBottomColor:"transparent"}}}),bd={x:0,y:0},Er=D.define({enables:[Mh,yd]}),Tn=D.define();class Rr{constructor(e){this.view=e,this.mounted=!1,this.dom=document.createElement("div"),this.dom.classList.add("cm-tooltip-hover"),this.manager=new Ah(e,Tn,t=>this.createHostedView(t))}static create(e){return new Rr(e)}createHostedView(e){let t=e.create(this.view);return t.dom.classList.add("cm-tooltip-section"),this.dom.appendChild(t.dom),this.mounted&&t.mount&&t.mount(this.view),t}mount(e){for(let t of this.manager.tooltipViews)t.mount&&t.mount(e);this.mounted=!0}positioned(){for(let e of this.manager.tooltipViews)e.positioned&&e.positioned()}update(e){this.manager.update(e)}}const wd=Er.compute([Tn],n=>{let e=n.facet(Tn).filter(t=>t);return e.length===0?null:{pos:Math.min(...e.map(t=>t.pos)),end:Math.max(...e.filter(t=>t.end!=null).map(t=>t.end)),create:Rr.create,above:e[0].above,arrow:e.some(t=>t.arrow)}});class kd{constructor(e,t,i,s,r){this.view=e,this.source=t,this.field=i,this.setHover=s,this.hoverTime=r,this.hoverTimeout=-1,this.restartTimeout=-1,this.pending=null,this.lastMove={x:0,y:0,target:e.dom,time:0},this.checkHover=this.checkHover.bind(this),e.dom.addEventListener("mouseleave",this.mouseleave=this.mouseleave.bind(this)),e.dom.addEventListener("mousemove",this.mousemove=this.mousemove.bind(this))}update(){this.pending&&(this.pending=null,clearTimeout(this.restartTimeout),this.restartTimeout=setTimeout(()=>this.startHover(),20))}get active(){return this.view.state.field(this.field)}checkHover(){if(this.hoverTimeout=-1,this.active)return;let e=Date.now()-this.lastMove.time;ei.bottom||e.xi.right+this.view.defaultCharacterWidth)return;let s=this.view.bidiSpans(this.view.state.doc.lineAt(t)).find(l=>l.from<=t&&l.to>=t),r=s&&s.dir==Z.RTL?-1:1,o=this.source(this.view,t,e.x{this.pending==l&&(this.pending=null,a&&this.view.dispatch({effects:this.setHover.of(a)}))},a=>He(this.view.state,a,"hover tooltip"))}else o&&this.view.dispatch({effects:this.setHover.of(o)})}mousemove(e){var t;this.lastMove={x:e.clientX,y:e.clientY,target:e.target,time:Date.now()},this.hoverTimeout<0&&(this.hoverTimeout=setTimeout(this.checkHover,this.hoverTime));let i=this.active;if(i&&!vd(this.lastMove.target)||this.pending){let{pos:s}=i||this.pending,r=(t=i?.end)!==null&&t!==void 0?t:s;(s==r?this.view.posAtCoords(this.lastMove)!=s:!xd(this.view,s,r,e.clientX,e.clientY,6))&&(this.view.dispatch({effects:this.setHover.of(null)}),this.pending=null)}}mouseleave(){clearTimeout(this.hoverTimeout),this.hoverTimeout=-1,this.active&&this.view.dispatch({effects:this.setHover.of(null)})}destroy(){clearTimeout(this.hoverTimeout),this.view.dom.removeEventListener("mouseleave",this.mouseleave),this.view.dom.removeEventListener("mousemove",this.mousemove)}}function vd(n){for(let e=n;e;e=e.parentNode)if(e.nodeType==1&&e.classList.contains("cm-tooltip"))return!0;return!1}function xd(n,e,t,i,s,r){let o=document.createRange(),l=n.domAtPos(e),a=n.domAtPos(t);o.setEnd(a.node,a.offset),o.setStart(l.node,l.offset);let h=o.getClientRects();o.detach();for(let c=0;cTn.from(s)});return[i,be.define(s=>new kd(s,n,i,t,e.hoverTime||300)),wd]}function Cd(n,e){let t=n.plugin(Mh);if(!t)return null;let i=t.manager.tooltips.indexOf(e);return i<0?null:t.manager.tooltipViews[i]}const Ad=R.define(),Qo=D.define({combine(n){let e,t;for(let i of 
n)e=e||i.topContainer,t=t||i.bottomContainer;return{topContainer:e,bottomContainer:t}}});function Md(n,e){let t=n.plugin(Dh),i=t?t.specs.indexOf(e):-1;return i>-1?t.panels[i]:null}const Dh=be.fromClass(class{constructor(n){this.input=n.state.facet(ar),this.specs=this.input.filter(t=>t),this.panels=this.specs.map(t=>t(n));let e=n.state.facet(Qo);this.top=new en(n,!0,e.topContainer),this.bottom=new en(n,!1,e.bottomContainer),this.top.sync(this.panels.filter(t=>t.top)),this.bottom.sync(this.panels.filter(t=>!t.top));for(let t of this.panels)t.dom.classList.add("cm-panel"),t.mount&&t.mount()}update(n){let e=n.state.facet(Qo);this.top.container!=e.topContainer&&(this.top.sync([]),this.top=new en(n.view,!0,e.topContainer)),this.bottom.container!=e.bottomContainer&&(this.bottom.sync([]),this.bottom=new en(n.view,!1,e.bottomContainer)),this.top.syncClasses(),this.bottom.syncClasses();let t=n.state.facet(ar);if(t!=this.input){let i=t.filter(a=>a),s=[],r=[],o=[],l=[];for(let a of i){let h=this.specs.indexOf(a),c;h<0?(c=a(n.view),l.push(c)):(c=this.panels[h],c.update&&c.update(n)),s.push(c),(c.top?r:o).push(c)}this.specs=i,this.panels=s,this.top.sync(r),this.bottom.sync(o);for(let a of l)a.dom.classList.add("cm-panel"),a.mount&&a.mount()}else for(let i of this.panels)i.update&&i.update(n)}destroy(){this.top.sync([]),this.bottom.sync([])}},{provide:n=>O.scrollMargins.of(e=>{let t=e.plugin(n);return t&&{top:t.top.scrollMargin(),bottom:t.bottom.scrollMargin()}})});class en{constructor(e,t,i){this.view=e,this.top=t,this.container=i,this.dom=void 0,this.classes="",this.panels=[],this.syncClasses()}sync(e){for(let t of this.panels)t.destroy&&e.indexOf(t)<0&&t.destroy();this.panels=e,this.syncDOM()}syncDOM(){if(this.panels.length==0){this.dom&&(this.dom.remove(),this.dom=void 0);return}if(!this.dom){this.dom=document.createElement("div"),this.dom.className=this.top?"cm-panels cm-panels-top":"cm-panels cm-panels-bottom",this.dom.style[this.top?"top":"bottom"]="0";let t=this.container||this.view.dom;t.insertBefore(this.dom,this.top?t.firstChild:null)}let e=this.dom.firstChild;for(let t of this.panels)if(t.dom.parentNode==this.dom){for(;e!=t.dom;)e=el(e);e=e.nextSibling}else this.dom.insertBefore(t.dom,e);for(;e;)e=el(e)}scrollMargin(){return!this.dom||this.container?0:Math.max(0,this.top?this.dom.getBoundingClientRect().bottom-Math.max(0,this.view.scrollDOM.getBoundingClientRect().top):Math.min(innerHeight,this.view.scrollDOM.getBoundingClientRect().bottom)-this.dom.getBoundingClientRect().top)}syncClasses(){if(!(!this.container||this.classes==this.view.themeClasses)){for(let e of this.classes.split(" "))e&&this.container.classList.remove(e);for(let e of(this.classes=this.view.themeClasses).split(" "))e&&this.container.classList.add(e)}}}function el(n){let e=n.nextSibling;return n.remove(),e}const ar=D.define({enables:Dh});class bt extends Bt{compare(e){return this==e||this.constructor==e.constructor&&this.eq(e)}eq(e){return!1}destroy(e){}}bt.prototype.elementClass="";bt.prototype.toDOM=void 0;bt.prototype.mapMode=ce.TrackBefore;bt.prototype.startSide=bt.prototype.endSide=-1;bt.prototype.point=!0;const fs=D.define(),Dd={class:"",renderEmptyElements:!1,elementStyle:"",markers:()=>F.empty,lineMarker:()=>null,lineMarkerChange:null,initialSpacer:null,updateSpacer:null,domEventHandlers:{}},Ci=D.define();function Td(n){return[Th(),Ci.of(Object.assign(Object.assign({},Dd),n))]}const hr=D.define({combine:n=>n.some(e=>e)});function Th(n){let e=[Od];return n&&n.fixed===!1&&e.push(hr.of(!0)),e}const 
Od=be.fromClass(class{constructor(n){this.view=n,this.prevViewport=n.viewport,this.dom=document.createElement("div"),this.dom.className="cm-gutters",this.dom.setAttribute("aria-hidden","true"),this.dom.style.minHeight=this.view.contentHeight+"px",this.gutters=n.state.facet(Ci).map(e=>new il(n,e));for(let e of this.gutters)this.dom.appendChild(e.dom);this.fixed=!n.state.facet(hr),this.fixed&&(this.dom.style.position="sticky"),this.syncGutters(!1),n.scrollDOM.insertBefore(this.dom,n.contentDOM)}update(n){if(this.updateGutters(n)){let e=this.prevViewport,t=n.view.viewport,i=Math.min(e.to,t.to)-Math.max(e.from,t.from);this.syncGutters(i<(t.to-t.from)*.8)}n.geometryChanged&&(this.dom.style.minHeight=this.view.contentHeight+"px"),this.view.state.facet(hr)!=!this.fixed&&(this.fixed=!this.fixed,this.dom.style.position=this.fixed?"sticky":""),this.prevViewport=n.view.viewport}syncGutters(n){let e=this.dom.nextSibling;n&&this.dom.remove();let t=F.iter(this.view.state.facet(fs),this.view.viewport.from),i=[],s=this.gutters.map(r=>new Bd(r,this.view.viewport,-this.view.documentPadding.top));for(let r of this.view.viewportLineBlocks){let o;if(Array.isArray(r.type)){for(let l of r.type)if(l.type==W.Text){o=l;break}}else o=r.type==W.Text?r:void 0;if(o){i.length&&(i=[]),Oh(t,i,r.from);for(let l of s)l.line(this.view,o,i)}}for(let r of s)r.finish();n&&this.view.scrollDOM.insertBefore(this.dom,e)}updateGutters(n){let e=n.startState.facet(Ci),t=n.state.facet(Ci),i=n.docChanged||n.heightChanged||n.viewportChanged||!F.eq(n.startState.facet(fs),n.state.facet(fs),n.view.viewport.from,n.view.viewport.to);if(e==t)for(let s of this.gutters)s.update(n)&&(i=!0);else{i=!0;let s=[];for(let r of t){let o=e.indexOf(r);o<0?s.push(new il(this.view,r)):(this.gutters[o].update(n),s.push(this.gutters[o]))}for(let r of this.gutters)r.dom.remove(),s.indexOf(r)<0&&r.destroy();for(let r of s)this.dom.appendChild(r.dom);this.gutters=s}return i}destroy(){for(let n of this.gutters)n.destroy();this.dom.remove()}},{provide:n=>O.scrollMargins.of(e=>{let t=e.plugin(n);return!t||t.gutters.length==0||!t.fixed?null:e.textDirection==Z.LTR?{left:t.dom.offsetWidth}:{right:t.dom.offsetWidth}})});function tl(n){return Array.isArray(n)?n:[n]}function Oh(n,e,t){for(;n.value&&n.from<=t;)n.from==t&&e.push(n.value),n.next()}class Bd{constructor(e,t,i){this.gutter=e,this.height=i,this.localMarkers=[],this.i=0,this.cursor=F.iter(e.markers,t.from)}line(e,t,i){this.localMarkers.length&&(this.localMarkers=[]),Oh(this.cursor,this.localMarkers,t.from);let s=i.length?this.localMarkers.concat(i):this.localMarkers,r=this.gutter.config.lineMarker(e,t,s);r&&s.unshift(r);let o=this.gutter;if(s.length==0&&!o.config.renderEmptyElements)return;let l=t.top-this.height;if(this.i==o.elements.length){let a=new Bh(e,t.height,l,s);o.elements.push(a),o.dom.appendChild(a.dom)}else o.elements[this.i].update(e,t.height,l,s);this.height=t.bottom,this.i++}finish(){let e=this.gutter;for(;e.elements.length>this.i;){let t=e.elements.pop();e.dom.removeChild(t.dom),t.destroy()}}}class il{constructor(e,t){this.view=e,this.config=t,this.elements=[],this.spacer=null,this.dom=document.createElement("div"),this.dom.className="cm-gutter"+(this.config.class?" 
"+this.config.class:"");for(let i in t.domEventHandlers)this.dom.addEventListener(i,s=>{let r=e.lineBlockAtHeight(s.clientY-e.documentTop);t.domEventHandlers[i](e,r,s)&&s.preventDefault()});this.markers=tl(t.markers(e)),t.initialSpacer&&(this.spacer=new Bh(e,0,0,[t.initialSpacer(e)]),this.dom.appendChild(this.spacer.dom),this.spacer.dom.style.cssText+="visibility: hidden; pointer-events: none")}update(e){let t=this.markers;if(this.markers=tl(this.config.markers(e.view)),this.spacer&&this.config.updateSpacer){let s=this.config.updateSpacer(this.spacer.markers[0],e);s!=this.spacer.markers[0]&&this.spacer.update(e.view,0,0,[s])}let i=e.view.viewport;return!F.eq(this.markers,t,i.from,i.to)||(this.config.lineMarkerChange?this.config.lineMarkerChange(e):!1)}destroy(){for(let e of this.elements)e.destroy()}}class Bh{constructor(e,t,i,s){this.height=-1,this.above=0,this.markers=[],this.dom=document.createElement("div"),this.dom.className="cm-gutterElement",this.update(e,t,i,s)}update(e,t,i,s){this.height!=t&&(this.dom.style.height=(this.height=t)+"px"),this.above!=i&&(this.dom.style.marginTop=(this.above=i)?i+"px":""),Pd(this.markers,s)||this.setMarkers(e,s)}setMarkers(e,t){let i="cm-gutterElement",s=this.dom.firstChild;for(let r=0,o=0;;){let l=o,a=rr(l,a,h)||o(l,a,h):o}return i}})}});class us extends bt{constructor(e){super(),this.number=e}eq(e){return this.number==e.number}toDOM(){return document.createTextNode(this.number)}}function ds(n,e){return n.state.facet(zt).formatNumber(e,n.state)}const Rd=Ci.compute([zt],n=>({class:"cm-lineNumbers",renderEmptyElements:!1,markers(e){return e.state.facet(Ed)},lineMarker(e,t,i){return i.some(s=>s.toDOM)?null:new us(ds(e,e.state.doc.lineAt(t.from).number))},lineMarkerChange:e=>e.startState.facet(zt)!=e.state.facet(zt),initialSpacer(e){return new us(ds(e,nl(e.state.doc.lines)))},updateSpacer(e,t){let i=ds(t.view,nl(t.view.state.doc.lines));return i==e.number?e:new us(i)},domEventHandlers:n.facet(zt).domEventHandlers}));function Ld(n={}){return[zt.of(n),Th(),Rd]}function nl(n){let e=9;for(;e{throw new Error("This node type doesn't define a deserialize function")})}add(e){if(this.perNode)throw new RangeError("Can't add per-node props to node types");return typeof e!="function"&&(e=xe.match(e)),t=>{let i=e(t);return i===void 0?null:[this,i]}}}L.closedBy=new L({deserialize:n=>n.split(" ")});L.openedBy=new L({deserialize:n=>n.split(" ")});L.group=new L({deserialize:n=>n.split(" ")});L.contextHash=new L({perNode:!0});L.lookAhead=new L({perNode:!0});L.mounted=new L({perNode:!0});class _d{constructor(e,t,i){this.tree=e,this.overlay=t,this.parser=i}}const Vd=Object.create(null);class xe{constructor(e,t,i,s=0){this.name=e,this.props=t,this.id=i,this.flags=s}static define(e){let t=e.props&&e.props.length?Object.create(null):Vd,i=(e.top?1:0)|(e.skipped?2:0)|(e.error?4:0)|(e.name==null?8:0),s=new xe(e.name||"",t,e.id,i);if(e.props){for(let r of e.props)if(Array.isArray(r)||(r=r(s)),r){if(r[0].perNode)throw new RangeError("Can't store a per-node prop on a node type");t[r[0].id]=r[1]}}return s}prop(e){return this.props[e.id]}get isTop(){return(this.flags&1)>0}get isSkipped(){return(this.flags&2)>0}get isError(){return(this.flags&4)>0}get isAnonymous(){return(this.flags&8)>0}is(e){if(typeof e=="string"){if(this.name==e)return!0;let t=this.prop(L.group);return t?t.indexOf(e)>-1:!1}return this.id==e}static match(e){let t=Object.create(null);for(let i in e)for(let s of i.split(" "))t[s]=e[i];return i=>{for(let s=i.prop(L.group),r=-1;r<(s?s.length:0);r++){let 
o=t[r<0?i.name:s[r]];if(o)return o}}}}xe.none=new xe("",Object.create(null),0,8);class Lr{constructor(e){this.types=e;for(let t=0;t=s&&(o.type.isAnonymous||t(o)!==!1)){if(o.firstChild())continue;l=!0}for(;l&&i&&!o.type.isAnonymous&&i(o),!o.nextSibling();){if(!o.parent())return;l=!0}}}prop(e){return e.perNode?this.props?this.props[e.id]:void 0:this.type.prop(e)}get propValues(){let e=[];if(this.props)for(let t in this.props)e.push([+t,this.props[t]]);return e}balance(e={}){return this.children.length<=8?this:_r(xe.none,this.children,this.positions,0,this.children.length,0,this.length,(t,i,s)=>new z(this.type,t,i,s,this.propValues),e.makeTree||((t,i,s)=>new z(xe.none,t,i,s)))}static build(e){return Hd(e)}}z.empty=new z(xe.none,[],[],0);class Ir{constructor(e,t){this.buffer=e,this.index=t}get id(){return this.buffer[this.index-4]}get start(){return this.buffer[this.index-3]}get end(){return this.buffer[this.index-2]}get size(){return this.buffer[this.index-1]}get pos(){return this.index}next(){this.index-=4}fork(){return new Ir(this.buffer,this.index)}}class Vt{constructor(e,t,i){this.buffer=e,this.length=t,this.set=i}get type(){return xe.none}toString(){let e=[];for(let t=0;t0));a=o[a+3]);return l}slice(e,t,i){let s=this.buffer,r=new Uint16Array(t-e),o=0;for(let l=e,a=0;l=e&&te;case 1:return t<=e&&i>e;case 2:return i>e;case 4:return!0}}function Eh(n,e){let t=n.childBefore(e);for(;t;){let i=t.lastChild;if(!i||i.to!=t.to)break;i.type.isError&&i.from==i.to?(n=t,t=i.prevSibling):t=i}return n}function ei(n,e,t,i){for(var s;n.from==n.to||(t<1?n.from>=e:n.from>e)||(t>-1?n.to<=e:n.to0?l.length:-1;e!=h;e+=t){let c=l[e],f=a[e]+o.from;if(Ph(s,i,f,f+c.length)){if(c instanceof Vt){if(r&ee.ExcludeBuffers)continue;let u=c.findChild(0,c.buffer.length,t,i-f,s);if(u>-1)return new Ye(new Fd(o,c,e,f),null,u)}else if(r&ee.IncludeAnonymous||!c.type.isAnonymous||Nr(c)){let u;if(!(r&ee.IgnoreMounts)&&c.props&&(u=c.prop(L.mounted))&&!u.overlay)return new _e(u.tree,f,e,o);let d=new _e(c,f,e,o);return r&ee.IncludeAnonymous||!d.type.isAnonymous?d:d.nextChild(t<0?c.children.length-1:0,t,i,s)}}}if(r&ee.IncludeAnonymous||!o.type.isAnonymous||(o.index>=0?e=o.index+t:e=t<0?-1:o._parent._tree.children.length,o=o._parent,!o))return null}}get firstChild(){return this.nextChild(0,1,0,4)}get lastChild(){return this.nextChild(this._tree.children.length-1,-1,0,4)}childAfter(e){return this.nextChild(0,1,e,2)}childBefore(e){return this.nextChild(this._tree.children.length-1,-1,e,-2)}enter(e,t,i=0){let s;if(!(i&ee.IgnoreOverlays)&&(s=this._tree.prop(L.mounted))&&s.overlay){let r=e-this.from;for(let{from:o,to:l}of s.overlay)if((t>0?o<=r:o=r:l>r))return new _e(s.tree,s.overlay[0].from+this.from,-1,this)}return this.nextChild(0,1,e,t,i)}nextSignificantParent(){let e=this;for(;e.type.isAnonymous&&e._parent;)e=e._parent;return e}get parent(){return this._parent?this._parent.nextSignificantParent():null}get nextSibling(){return this._parent&&this.index>=0?this._parent.nextChild(this.index+1,1,0,4):null}get prevSibling(){return this._parent&&this.index>=0?this._parent.nextChild(this.index-1,-1,0,4):null}cursor(e=0){return new Ri(this,e)}get tree(){return this._tree}toTree(){return this._tree}resolve(e,t=0){return ei(this,e,t,!1)}resolveInner(e,t=0){return ei(this,e,t,!0)}enterUnfinishedNodesBefore(e){return Eh(this,e)}getChild(e,t=null,i=null){let s=On(this,e,t,i);return s.length?s[0]:null}getChildren(e,t=null,i=null){return On(this,e,t,i)}toString(){return this._tree.toString()}get node(){return this}matchContext(e){return 
Bn(this,e)}}function On(n,e,t,i){let s=n.cursor(),r=[];if(!s.firstChild())return r;if(t!=null){for(;!s.type.is(t);)if(!s.nextSibling())return r}for(;;){if(i!=null&&s.type.is(i))return r;if(s.type.is(e)&&r.push(s.node),!s.nextSibling())return i==null?r:[]}}function Bn(n,e,t=e.length-1){for(let i=n.parent;t>=0;i=i.parent){if(!i)return!1;if(!i.type.isAnonymous){if(e[t]&&e[t]!=i.name)return!1;t--}}return!0}class Fd{constructor(e,t,i,s){this.parent=e,this.buffer=t,this.index=i,this.start=s}}class Ye{get name(){return this.type.name}get from(){return this.context.start+this.context.buffer.buffer[this.index+1]}get to(){return this.context.start+this.context.buffer.buffer[this.index+2]}constructor(e,t,i){this.context=e,this._parent=t,this.index=i,this.type=e.buffer.set.types[e.buffer.buffer[i]]}child(e,t,i){let{buffer:s}=this.context,r=s.findChild(this.index+4,s.buffer[this.index+3],e,t-this.context.start,i);return r<0?null:new Ye(this.context,this,r)}get firstChild(){return this.child(1,0,4)}get lastChild(){return this.child(-1,0,4)}childAfter(e){return this.child(1,e,2)}childBefore(e){return this.child(-1,e,-2)}enter(e,t,i=0){if(i&ee.ExcludeBuffers)return null;let{buffer:s}=this.context,r=s.findChild(this.index+4,s.buffer[this.index+3],t>0?1:-1,e-this.context.start,t);return r<0?null:new Ye(this.context,this,r)}get parent(){return this._parent||this.context.parent.nextSignificantParent()}externalSibling(e){return this._parent?null:this.context.parent.nextChild(this.context.index+e,e,0,4)}get nextSibling(){let{buffer:e}=this.context,t=e.buffer[this.index+3];return t<(this._parent?e.buffer[this._parent.index+3]:e.buffer.length)?new Ye(this.context,this._parent,t):this.externalSibling(1)}get prevSibling(){let{buffer:e}=this.context,t=this._parent?this._parent.index+4:0;return this.index==t?this.externalSibling(-1):new Ye(this.context,this._parent,e.findChild(t,this.index,-1,0,4))}cursor(e=0){return new Ri(this,e)}get tree(){return null}toTree(){let e=[],t=[],{buffer:i}=this.context,s=this.index+4,r=i.buffer[this.index+3];if(r>s){let o=i.buffer[this.index+1];e.push(i.slice(s,r,o)),t.push(0)}return new z(this.type,e,t,this.to-this.from)}resolve(e,t=0){return ei(this,e,t,!1)}resolveInner(e,t=0){return ei(this,e,t,!0)}enterUnfinishedNodesBefore(e){return Eh(this,e)}toString(){return this.context.buffer.childString(this.index)}getChild(e,t=null,i=null){let s=On(this,e,t,i);return s.length?s[0]:null}getChildren(e,t=null,i=null){return On(this,e,t,i)}get node(){return this}matchContext(e){return Bn(this,e)}}class Ri{get name(){return this.type.name}constructor(e,t=0){if(this.mode=t,this.buffer=null,this.stack=[],this.index=0,this.bufferNode=null,e instanceof _e)this.yieldNode(e);else{this._tree=e.context.parent,this.buffer=e.context;for(let i=e._parent;i;i=i._parent)this.stack.unshift(i.index);this.bufferNode=e,this.yieldBuf(e.index)}}yieldNode(e){return e?(this._tree=e,this.type=e.type,this.from=e.from,this.to=e.to,!0):!1}yieldBuf(e,t){this.index=e;let{start:i,buffer:s}=this.buffer;return this.type=t||s.set.types[s.buffer[e]],this.from=i+s.buffer[e+1],this.to=i+s.buffer[e+2],!0}yield(e){return e?e instanceof _e?(this.buffer=null,this.yieldNode(e)):(this.buffer=e.context,this.yieldBuf(e.index,e.type)):!1}toString(){return this.buffer?this.buffer.buffer.childString(this.index):this._tree.toString()}enterChild(e,t,i){if(!this.buffer)return 
this.yield(this._tree.nextChild(e<0?this._tree._tree.children.length-1:0,e,t,i,this.mode));let{buffer:s}=this.buffer,r=s.findChild(this.index+4,s.buffer[this.index+3],e,t-this.buffer.start,i);return r<0?!1:(this.stack.push(this.index),this.yieldBuf(r))}firstChild(){return this.enterChild(1,0,4)}lastChild(){return this.enterChild(-1,0,4)}childAfter(e){return this.enterChild(1,e,2)}childBefore(e){return this.enterChild(-1,e,-2)}enter(e,t,i=this.mode){return this.buffer?i&ee.ExcludeBuffers?!1:this.enterChild(1,e,t):this.yield(this._tree.enter(e,t,i))}parent(){if(!this.buffer)return this.yieldNode(this.mode&ee.IncludeAnonymous?this._tree._parent:this._tree.parent);if(this.stack.length)return this.yieldBuf(this.stack.pop());let e=this.mode&ee.IncludeAnonymous?this.buffer.parent:this.buffer.parent.nextSignificantParent();return this.buffer=null,this.yieldNode(e)}sibling(e){if(!this.buffer)return this._tree._parent?this.yield(this._tree.index<0?null:this._tree._parent.nextChild(this._tree.index+e,e,0,4,this.mode)):!1;let{buffer:t}=this.buffer,i=this.stack.length-1;if(e<0){let s=i<0?0:this.stack[i]+4;if(this.index!=s)return this.yieldBuf(t.findChild(s,this.index,-1,0,4))}else{let s=t.buffer[this.index+3];if(s<(i<0?t.buffer.length:t.buffer[this.stack[i]+3]))return this.yieldBuf(s)}return i<0?this.yield(this.buffer.parent.nextChild(this.buffer.index+e,e,0,4,this.mode)):!1}nextSibling(){return this.sibling(1)}prevSibling(){return this.sibling(-1)}atLastNode(e){let t,i,{buffer:s}=this;if(s){if(e>0){if(this.index-1)for(let r=t+e,o=e<0?-1:i._tree.children.length;r!=o;r+=e){let l=i._tree.children[r];if(this.mode&ee.IncludeAnonymous||l instanceof Vt||!l.type.isAnonymous||Nr(l))return!1}return!0}move(e,t){if(t&&this.enterChild(e,0,4))return!0;for(;;){if(this.sibling(e))return!0;if(this.atLastNode(e)||!this.parent())return!1}}next(e=!0){return this.move(1,e)}prev(e=!0){return this.move(-1,e)}moveTo(e,t=0){for(;(this.from==this.to||(t<1?this.from>=e:this.from>e)||(t>-1?this.to<=e:this.to=0;){for(let o=e;o;o=o._parent)if(o.index==s){if(s==this.index)return o;t=o,i=r+1;break e}s=this.stack[--r]}for(let s=i;s=0;r--){if(r<0)return Bn(this.node,e,s);let o=i[t.buffer[this.stack[r]]];if(!o.isAnonymous){if(e[s]&&e[s]!=o.name)return!1;s--}}return!0}}function Nr(n){return n.children.some(e=>e instanceof Vt||!e.type.isAnonymous||Nr(e))}function Hd(n){var e;let{buffer:t,nodeSet:i,maxBufferLength:s=Id,reused:r=[],minRepeatType:o=i.types.length}=n,l=Array.isArray(t)?new Ir(t,t.length):t,a=i.types,h=0,c=0;function f(k,C,T,B,U){let{id:I,start:P,end:V,size:G}=l,Q=c;for(;G<0;)if(l.next(),G==-1){let J=r[I];T.push(J),B.push(P-k);return}else if(G==-3){h=I;return}else if(G==-4){c=I;return}else throw new RangeError(`Unrecognized record size: ${G}`);let M=a[I],$,Y,le=P-k;if(V-P<=s&&(Y=g(l.pos-C,U))){let J=new Uint16Array(Y.size-Y.skip),ie=l.pos-Y.size,nt=J.length;for(;l.pos>ie;)nt=y(Y.start,J,nt);$=new Vt(J,V-Y.start,i),le=Y.start-k}else{let J=l.pos-G;l.next();let ie=[],nt=[],vt=I>=o?I:-1,Ft=0,ji=V;for(;l.pos>J;)vt>=0&&l.id==vt&&l.size>=0?(l.end<=ji-s&&(d(ie,nt,P,Ft,l.end,ji,vt,Q),Ft=ie.length,ji=l.end),l.next()):f(P,J,ie,nt,vt);if(vt>=0&&Ft>0&&Ft-1&&Ft>0){let to=u(M);$=_r(M,ie,nt,0,ie.length,0,V-P,to,to)}else $=p(M,ie,nt,V-P,Q-V)}T.push($),B.push(le)}function u(k){return(C,T,B)=>{let U=0,I=C.length-1,P,V;if(I>=0&&(P=C[I])instanceof z){if(!I&&P.type==k&&P.length==B)return P;(V=P.prop(L.lookAhead))&&(U=T[I]+P.length+V)}return p(k,C,T,B,U)}}function d(k,C,T,B,U,I,P,V){let 
G=[],Q=[];for(;k.length>B;)G.push(k.pop()),Q.push(C.pop()+T-U);k.push(p(i.types[P],G,Q,I-U,V-I)),C.push(U-T)}function p(k,C,T,B,U=0,I){if(h){let P=[L.contextHash,h];I=I?[P].concat(I):[P]}if(U>25){let P=[L.lookAhead,U];I=I?[P].concat(I):[P]}return new z(k,C,T,B,I)}function g(k,C){let T=l.fork(),B=0,U=0,I=0,P=T.end-s,V={size:0,start:0,skip:0};e:for(let G=T.pos-k;T.pos>G;){let Q=T.size;if(T.id==C&&Q>=0){V.size=B,V.start=U,V.skip=I,I+=4,B+=4,T.next();continue}let M=T.pos-Q;if(Q<0||M=o?4:0,Y=T.start;for(T.next();T.pos>M;){if(T.size<0)if(T.size==-3)$+=4;else break e;else T.id>=o&&($+=4);T.next()}U=Y,B+=Q,I+=$}return(C<0||B==k)&&(V.size=B,V.start=U,V.skip=I),V.size>4?V:void 0}function y(k,C,T){let{id:B,start:U,end:I,size:P}=l;if(l.next(),P>=0&&B4){let G=l.pos-(P-4);for(;l.pos>G;)T=y(k,C,T)}C[--T]=V,C[--T]=I-k,C[--T]=U-k,C[--T]=B}else P==-3?h=B:P==-4&&(c=B);return T}let b=[],v=[];for(;l.pos>0;)f(n.start||0,n.bufferStart||0,b,v,-1);let S=(e=n.length)!==null&&e!==void 0?e:b.length?v[0]+b[0].length:0;return new z(a[n.topID],b.reverse(),v.reverse(),S)}const rl=new WeakMap;function gn(n,e){if(!n.isAnonymous||e instanceof Vt||e.type!=n)return 1;let t=rl.get(e);if(t==null){t=1;for(let i of e.children){if(i.type!=n||!(i instanceof z)){t=1;break}t+=gn(n,i)}rl.set(e,t)}return t}function _r(n,e,t,i,s,r,o,l,a){let h=0;for(let p=i;p=c)break;T+=B}if(S==k+1){if(T>c){let B=p[k];d(B.children,B.positions,0,B.children.length,g[k]+v);continue}f.push(p[k])}else{let B=g[S-1]+p[S-1].length-C;f.push(_r(n,p,g,k,S,C,B,null,a))}u.push(C+v-r)}}return d(e,t,i,s,0),(l||a)(f,u,o)}class ty{constructor(){this.map=new WeakMap}setBuffer(e,t,i){let s=this.map.get(e);s||this.map.set(e,s=new Map),s.set(t,i)}getBuffer(e,t){let i=this.map.get(e);return i&&i.get(t)}set(e,t){e instanceof Ye?this.setBuffer(e.context.buffer,e.index,t):e instanceof _e&&this.map.set(e.tree,t)}get(e){return e instanceof Ye?this.getBuffer(e.context.buffer,e.index):e instanceof _e?this.map.get(e.tree):void 0}cursorSet(e,t){e.buffer?this.setBuffer(e.buffer.buffer,e.index,t):this.map.set(e.tree,t)}cursorGet(e){return e.buffer?this.getBuffer(e.buffer.buffer,e.index):this.map.get(e.tree)}}class rt{constructor(e,t,i,s,r=!1,o=!1){this.from=e,this.to=t,this.tree=i,this.offset=s,this.open=(r?1:0)|(o?2:0)}get openStart(){return(this.open&1)>0}get openEnd(){return(this.open&2)>0}static addTree(e,t=[],i=!1){let s=[new rt(0,e.length,e,0,!1,i)];for(let r of t)r.to>e.length&&s.push(r);return s}static applyChanges(e,t,i=128){if(!t.length)return e;let s=[],r=1,o=e.length?e[0]:null;for(let l=0,a=0,h=0;;l++){let c=l=i)for(;o&&o.from=u.from||f<=u.to||h){let d=Math.max(u.from,a)-h,p=Math.min(u.to,f)-h;u=d>=p?null:new rt(d,p,u.tree,u.offset+h,l>0,!!c)}if(u&&s.push(u),o.to>f)break;o=rnew Le(s.from,s.to)):[new Le(0,0)]:[new Le(0,e.length)],this.createParse(e,t||[],i)}parse(e,t,i){let s=this.startParse(e,t,i);for(;;){let r=s.advance();if(r)return r}}}class Wd{constructor(e){this.string=e}get length(){return this.string.length}chunk(e){return this.string.slice(e)}get lineChunks(){return!1}read(e,t){return this.string.slice(e,t)}}function iy(n){return(e,t,i,s)=>new qd(e,n,t,i,s)}class ol{constructor(e,t,i,s,r){this.parser=e,this.parse=t,this.overlay=i,this.target=s,this.ranges=r}}class zd{constructor(e,t,i,s,r,o,l){this.parser=e,this.predicate=t,this.mounts=i,this.index=s,this.start=r,this.target=o,this.prev=l,this.depth=0,this.ranges=[]}}const cr=new L({perNode:!0});class 
qd{constructor(e,t,i,s,r){this.nest=t,this.input=i,this.fragments=s,this.ranges=r,this.inner=[],this.innerDone=0,this.baseTree=null,this.stoppedAt=null,this.baseParse=e}advance(){if(this.baseParse){let i=this.baseParse.advance();if(!i)return null;if(this.baseParse=null,this.baseTree=i,this.startInner(),this.stoppedAt!=null)for(let s of this.inner)s.parse.stopAt(this.stoppedAt)}if(this.innerDone==this.inner.length){let i=this.baseTree;return this.stoppedAt!=null&&(i=new z(i.type,i.children,i.positions,i.length,i.propValues.concat([[cr,this.stoppedAt]]))),i}let e=this.inner[this.innerDone],t=e.parse.advance();if(t){this.innerDone++;let i=Object.assign(Object.create(null),e.target.props);i[L.mounted.id]=new _d(t,e.overlay,e.parser),e.target.props=i}return null}get parsedPos(){if(this.baseParse)return 0;let e=this.input.length;for(let t=this.innerDone;tc.frag.from<=s.from&&c.frag.to>=s.to&&c.mount.overlay);if(h)for(let c of h.mount.overlay){let f=c.from+h.pos,u=c.to+h.pos;f>=s.from&&u<=s.to&&!t.ranges.some(d=>d.fromf)&&t.ranges.push({from:f,to:u})}}l=!1}else if(i&&(o=jd(i.ranges,s.from,s.to)))l=o!=2;else if(!s.type.isAnonymous&&s.fromnew Le(f.from-s.from,f.to-s.from)):null,s.tree,c)),r.overlay?c.length&&(i={ranges:c,depth:0,prev:i}):l=!1}}else t&&(a=t.predicate(s))&&(a===!0&&(a=new Le(s.from,s.to)),a.fromnew Le(c.from-t.start,c.to-t.start)),t.target,h)),t=t.prev}i&&!--i.depth&&(i=i.prev)}}}}function jd(n,e,t){for(let i of n){if(i.from>=t)break;if(i.to>e)return i.from<=e&&i.to>=t?2:1}return 0}function ll(n,e,t,i,s,r){if(e=e.to);i++);let o=s.children[i],l=o.buffer;function a(h,c,f,u,d){let p=h;for(;l[p+2]+r<=e.from;)p=l[p+3];let g=[],y=[];ll(o,h,p,g,y,u);let b=l[p+1],v=l[p+2],S=b+r==e.from&&v+r==e.to&&l[p]==e.type.id;return g.push(S?e.toTree():a(p+4,l[p+3],o.set.types[l[p]],b,v-b)),y.push(b-u),ll(o,l[p+3],c,g,y,u),new z(f,g,y,d)}s.children[i]=a(0,l.length,xe.none,0,o.length);for(let h=0;h<=t;h++)n.childAfter(e.from)}class al{constructor(e,t){this.offset=t,this.done=!1,this.cursor=e.cursor(ee.IncludeAnonymous|ee.IgnoreMounts)}moveTo(e){let{cursor:t}=this,i=e-this.offset;for(;!this.done&&t.from=e&&t.enter(i,1,ee.IgnoreOverlays|ee.ExcludeBuffers)||t.next(!1)||(this.done=!0)}hasNode(e){if(this.moveTo(e.from),!this.done&&this.cursor.from+this.offset==e.from&&this.cursor.tree)for(let t=this.cursor.tree;;){if(t==e.tree)return!0;if(t.children.length&&t.positions[0]==0&&t.children[0]instanceof z)t=t.children[0];else break}return!1}}class Ud{constructor(e){var t;if(this.fragments=e,this.curTo=0,this.fragI=0,e.length){let i=this.curFrag=e[0];this.curTo=(t=i.tree.prop(cr))!==null&&t!==void 0?t:i.to,this.inner=new al(i.tree,-i.offset)}else this.curFrag=this.inner=null}hasNode(e){for(;this.curFrag&&e.from>=this.curTo;)this.nextFrag();return this.curFrag&&this.curFrag.from<=e.from&&this.curTo>=e.to&&this.inner.hasNode(e)}nextFrag(){var e;if(this.fragI++,this.fragI==this.fragments.length)this.curFrag=this.inner=null;else{let t=this.curFrag=this.fragments[this.fragI];this.curTo=(e=t.tree.prop(cr))!==null&&e!==void 0?e:t.to,this.inner=new al(t.tree,-t.offset)}}findMounts(e,t){var i;let s=[];if(this.inner){this.inner.cursor.moveTo(e,1);for(let r=this.inner.cursor.node;r;r=r.parent){let o=(i=r.tree)===null||i===void 0?void 0:i.prop(L.mounted);if(o&&o.parser==t)for(let l=this.fragI;l=r.to)break;a.tree==this.curFrag.tree&&s.push({frag:a,pos:r.from-a.offset,mount:o})}}}return s}}function hl(n,e){let t=null,i=e;for(let s=1,r=0;s=l)break;a.to<=o||(t||(i=t=e.slice()),a.froml&&t.splice(r+1,0,new 
Le(l,a.to))):a.to>l?t[r--]=new Le(l,a.to):t.splice(r--,1))}}return i}function Gd(n,e,t,i){let s=0,r=0,o=!1,l=!1,a=-1e9,h=[];for(;;){let c=s==n.length?1e9:o?n[s].to:n[s].from,f=r==e.length?1e9:l?e[r].to:e[r].from;if(o!=l){let u=Math.max(a,t),d=Math.min(c,f,i);unew Le(u.from+i,u.to+i)),f=Gd(e,c,a,h);for(let u=0,d=a;;u++){let p=u==f.length,g=p?h:f[u].from;if(g>d&&t.push(new rt(d,g,s.tree,-o,r.from>=d||r.openStart,r.to<=g||r.openEnd)),p)break;d=f[u].to}}else t.push(new rt(a,h,s.tree,-o,r.from>=o||r.openStart,r.to<=l||r.openEnd))}return t}let $d=0;class Ge{constructor(e,t,i){this.set=e,this.base=t,this.modified=i,this.id=$d++}static define(e){if(e?.base)throw new Error("Can not derive from a modified tag");let t=new Ge([],null,[]);if(t.set.push(t),e)for(let i of e.set)t.set.push(i);return t}static defineModifier(){let e=new Pn;return t=>t.modified.indexOf(e)>-1?t:Pn.get(t.base||t,t.modified.concat(e).sort((i,s)=>i.id-s.id))}}let Jd=0;class Pn{constructor(){this.instances=[],this.id=Jd++}static get(e,t){if(!t.length)return e;let i=t[0].instances.find(l=>l.base==e&&Yd(t,l.modified));if(i)return i;let s=[],r=new Ge(s,e,t);for(let l of t)l.instances.push(r);let o=Xd(t);for(let l of e.set)if(!l.modified.length)for(let a of o)s.push(Pn.get(l,a));return r}}function Yd(n,e){return n.length==e.length&&n.every((t,i)=>t==e[i])}function Xd(n){let e=[[]];for(let t=0;ti.length-t.length)}function Zd(n){let e=Object.create(null);for(let t in n){let i=n[t];Array.isArray(i)||(i=[i]);for(let s of t.split(" "))if(s){let r=[],o=2,l=s;for(let f=0;;){if(l=="..."&&f>0&&f+3==s.length){o=1;break}let u=/^"(?:[^"\\]|\\.)*?"|[^\/!]+/.exec(l);if(!u)throw new RangeError("Invalid path: "+s);if(r.push(u[0]=="*"?"":u[0][0]=='"'?JSON.parse(u[0]):u[0]),f+=u[0].length,f==s.length)break;let d=s[f++];if(f==s.length&&d=="!"){o=0;break}if(d!="/")throw new RangeError("Invalid path: "+s);l=s.slice(f)}let a=r.length-1,h=r[a];if(!h)throw new RangeError("Invalid path: "+s);let c=new En(i,o,a>0?r.slice(0,a):null);e[h]=c.sort(e[h])}}return Lh.add(e)}const Lh=new L;class En{constructor(e,t,i,s){this.tags=e,this.mode=t,this.context=i,this.next=s}get opaque(){return this.mode==0}get inherit(){return this.mode==1}sort(e){return!e||e.depth{let o=s;for(let l of r)for(let a of l.set){let h=t[a.id];if(h){o=o?o+" "+h:h;break}}return o},scope:i}}function Qd(n,e){let t=null;for(let i of n){let s=i.style(e);s&&(t=t?t+" "+s:s)}return t}function ep(n,e,t,i=0,s=n.length){let r=new tp(i,Array.isArray(e)?e:[e],t);r.highlightRange(n.cursor(),i,s,"",r.highlighters),r.flush(s)}class tp{constructor(e,t,i){this.at=e,this.highlighters=t,this.span=i,this.class=""}startSpan(e,t){t!=this.class&&(this.flush(e),e>this.at&&(this.at=e),this.class=t)}flush(e){e>this.at&&this.class&&this.span(this.at,e,this.class)}highlightRange(e,t,i,s,r){let{type:o,from:l,to:a}=e;if(l>=i||a<=t)return;o.isTop&&(r=this.highlighters.filter(d=>!d.scope||d.scope(o)));let h=s,c=ip(e)||En.empty,f=Qd(r,c.tags);if(f&&(h&&(h+=" "),h+=f,c.mode==1&&(s+=(s?" 
":"")+f)),this.startSpan(e.from,h),c.opaque)return;let u=e.tree&&e.tree.prop(L.mounted);if(u&&u.overlay){let d=e.node.enter(u.overlay[0].from+l,1),p=this.highlighters.filter(y=>!y.scope||y.scope(u.tree.type)),g=e.firstChild();for(let y=0,b=l;;y++){let v=y=S||!e.nextSibling())););if(!v||S>i)break;b=v.to+l,b>t&&(this.highlightRange(d.cursor(),Math.max(t,v.from+l),Math.min(i,b),s,p),this.startSpan(b,h))}g&&e.parent()}else if(e.firstChild()){do if(!(e.to<=t)){if(e.from>=i)break;this.highlightRange(e,t,i,s,r),this.startSpan(Math.min(i,e.to),h)}while(e.nextSibling());e.parent()}}}function ip(n){let e=n.type.prop(Lh);for(;e&&e.context&&!n.matchContext(e.context);)e=e.next;return e||null}const x=Ge.define,nn=x(),ot=x(),fl=x(ot),ul=x(ot),lt=x(),sn=x(lt),ps=x(lt),Ke=x(),xt=x(Ke),qe=x(),je=x(),fr=x(),ui=x(fr),rn=x(),m={comment:nn,lineComment:x(nn),blockComment:x(nn),docComment:x(nn),name:ot,variableName:x(ot),typeName:fl,tagName:x(fl),propertyName:ul,attributeName:x(ul),className:x(ot),labelName:x(ot),namespace:x(ot),macroName:x(ot),literal:lt,string:sn,docString:x(sn),character:x(sn),attributeValue:x(sn),number:ps,integer:x(ps),float:x(ps),bool:x(lt),regexp:x(lt),escape:x(lt),color:x(lt),url:x(lt),keyword:qe,self:x(qe),null:x(qe),atom:x(qe),unit:x(qe),modifier:x(qe),operatorKeyword:x(qe),controlKeyword:x(qe),definitionKeyword:x(qe),moduleKeyword:x(qe),operator:je,derefOperator:x(je),arithmeticOperator:x(je),logicOperator:x(je),bitwiseOperator:x(je),compareOperator:x(je),updateOperator:x(je),definitionOperator:x(je),typeOperator:x(je),controlOperator:x(je),punctuation:fr,separator:x(fr),bracket:ui,angleBracket:x(ui),squareBracket:x(ui),paren:x(ui),brace:x(ui),content:Ke,heading:xt,heading1:x(xt),heading2:x(xt),heading3:x(xt),heading4:x(xt),heading5:x(xt),heading6:x(xt),contentSeparator:x(Ke),list:x(Ke),quote:x(Ke),emphasis:x(Ke),strong:x(Ke),link:x(Ke),monospace:x(Ke),strikethrough:x(Ke),inserted:x(),deleted:x(),changed:x(),invalid:x(),meta:rn,documentMeta:x(rn),annotation:x(rn),processingInstruction:x(rn),definition:Ge.defineModifier(),constant:Ge.defineModifier(),function:Ge.defineModifier(),standard:Ge.defineModifier(),local:Ge.defineModifier(),special:Ge.defineModifier()};Ih([{tag:m.link,class:"tok-link"},{tag:m.heading,class:"tok-heading"},{tag:m.emphasis,class:"tok-emphasis"},{tag:m.strong,class:"tok-strong"},{tag:m.keyword,class:"tok-keyword"},{tag:m.atom,class:"tok-atom"},{tag:m.bool,class:"tok-bool"},{tag:m.url,class:"tok-url"},{tag:m.labelName,class:"tok-labelName"},{tag:m.inserted,class:"tok-inserted"},{tag:m.deleted,class:"tok-deleted"},{tag:m.literal,class:"tok-literal"},{tag:m.string,class:"tok-string"},{tag:m.number,class:"tok-number"},{tag:[m.regexp,m.escape,m.special(m.string)],class:"tok-string2"},{tag:m.variableName,class:"tok-variableName"},{tag:m.local(m.variableName),class:"tok-variableName tok-local"},{tag:m.definition(m.variableName),class:"tok-variableName tok-definition"},{tag:m.special(m.variableName),class:"tok-variableName2"},{tag:m.definition(m.propertyName),class:"tok-propertyName tok-definition"},{tag:m.typeName,class:"tok-typeName"},{tag:m.namespace,class:"tok-namespace"},{tag:m.className,class:"tok-className"},{tag:m.macroName,class:"tok-macroName"},{tag:m.propertyName,class:"tok-propertyName"},{tag:m.operator,class:"tok-operator"},{tag:m.comment,class:"tok-comment"},{tag:m.meta,class:"tok-meta"},{tag:m.invalid,class:"tok-invalid"},{tag:m.punctuation,class:"tok-punctuation"}]);var ms;const Dt=new L;function Nh(n){return D.define({combine:n?e=>e.concat(n):void 
0})}const np=new L;class Ie{constructor(e,t,i=[],s=""){this.data=e,this.name=s,N.prototype.hasOwnProperty("tree")||Object.defineProperty(N.prototype,"tree",{get(){return pe(this)}}),this.parser=t,this.extension=[wt.of(this),N.languageData.of((r,o,l)=>{let a=dl(r,o,l),h=a.type.prop(Dt);if(!h)return[];let c=r.facet(h),f=a.type.prop(np);if(f){let u=a.resolve(o-a.from,l);for(let d of f)if(d.test(u,r)){let p=r.facet(d.facet);return d.type=="replace"?p:p.concat(c)}}return c})].concat(i)}isActiveAt(e,t,i=-1){return dl(e,t,i).type.prop(Dt)==this.data}findRegions(e){let t=e.facet(wt);if(t?.data==this.data)return[{from:0,to:e.doc.length}];if(!t||!t.allowsNesting)return[];let i=[],s=(r,o)=>{if(r.prop(Dt)==this.data){i.push({from:o,to:o+r.length});return}let l=r.prop(L.mounted);if(l){if(l.tree.prop(Dt)==this.data){if(l.overlay)for(let a of l.overlay)i.push({from:a.from+o,to:a.to+o});else i.push({from:o,to:o+r.length});return}else if(l.overlay){let a=i.length;if(s(l.tree,l.overlay[0].from+o),i.length>a)return}}for(let a=0;ai.isTop?t:void 0)]}),e.name)}configure(e,t){return new ur(this.data,this.parser.configure(e),t||this.name)}get allowsNesting(){return this.parser.hasWrappers()}}function pe(n){let e=n.field(Ie.state,!1);return e?e.tree:z.empty}class sp{constructor(e){this.doc=e,this.cursorPos=0,this.string="",this.cursor=e.iter()}get length(){return this.doc.length}syncTo(e){return this.string=this.cursor.next(e-this.cursorPos).value,this.cursorPos=e+this.string.length,this.cursorPos-this.string.length}chunk(e){return this.syncTo(e),this.string}get lineChunks(){return!0}read(e,t){let i=this.cursorPos-this.string.length;return e=this.cursorPos?this.doc.sliceString(e,t):this.string.slice(e-i,t-i)}}let di=null;class ti{constructor(e,t,i=[],s,r,o,l,a){this.parser=e,this.state=t,this.fragments=i,this.tree=s,this.treeLen=r,this.viewport=o,this.skipped=l,this.scheduleOn=a,this.parse=null,this.tempSkipped=[]}static create(e,t,i){return new ti(e,t,[],z.empty,0,i,[],null)}startParse(){return this.parser.startParse(new sp(this.state.doc),this.fragments)}work(e,t){return t!=null&&t>=this.state.doc.length&&(t=void 0),this.tree!=z.empty&&this.isDone(t??this.state.doc.length)?(this.takeTree(),!0):this.withContext(()=>{var i;if(typeof e=="number"){let s=Date.now()+e;e=()=>Date.now()>s}for(this.parse||(this.parse=this.startParse()),t!=null&&(this.parse.stoppedAt==null||this.parse.stoppedAt>t)&&t=this.treeLen&&((this.parse.stoppedAt==null||this.parse.stoppedAt>e)&&this.parse.stopAt(e),this.withContext(()=>{for(;!(t=this.parse.advance()););}),this.treeLen=e,this.tree=t,this.fragments=this.withoutTempSkipped(rt.addTree(this.tree,this.fragments,!0)),this.parse=null)}withContext(e){let t=di;di=this;try{return e()}finally{di=t}}withoutTempSkipped(e){for(let t;t=this.tempSkipped.pop();)e=pl(e,t.from,t.to);return e}changes(e,t){let{fragments:i,tree:s,treeLen:r,viewport:o,skipped:l}=this;if(this.takeTree(),!e.empty){let a=[];if(e.iterChangedRanges((h,c,f,u)=>a.push({fromA:h,toA:c,fromB:f,toB:u})),i=rt.applyChanges(i,a),s=z.empty,r=0,o={from:e.mapPos(o.from,-1),to:e.mapPos(o.to,1)},this.skipped.length){l=[];for(let h of this.skipped){let c=e.mapPos(h.from,1),f=e.mapPos(h.to,-1);ce.from&&(this.fragments=pl(this.fragments,s,r),this.skipped.splice(i--,1))}return this.skipped.length>=t?!1:(this.reset(),!0)}reset(){this.parse&&(this.takeTree(),this.parse=null)}skipUntilInView(e,t){this.skipped.push({from:e,to:t})}static getSkippingParser(e){return new class extends Rh{createParse(t,i,s){let 
r=s[0].from,o=s[s.length-1].to;return{parsedPos:r,advance(){let a=di;if(a){for(let h of s)a.tempSkipped.push(h);e&&(a.scheduleOn=a.scheduleOn?Promise.all([a.scheduleOn,e]):e)}return this.parsedPos=o,new z(xe.none,[],[],o-r)},stoppedAt:null,stopAt(){}}}}}isDone(e){e=Math.min(e,this.state.doc.length);let t=this.fragments;return this.treeLen>=e&&t.length&&t[0].from==0&&t[0].to>=e}static get(){return di}}function pl(n,e,t){return rt.applyChanges(n,[{fromA:e,toA:t,fromB:e,toB:t}])}class ii{constructor(e){this.context=e,this.tree=e.tree}apply(e){if(!e.docChanged&&this.tree==this.context.tree)return this;let t=this.context.changes(e.changes,e.state),i=this.context.treeLen==e.startState.doc.length?void 0:Math.max(e.changes.mapPos(this.context.treeLen),t.viewport.to);return t.work(20,i)||t.takeTree(),new ii(t)}static init(e){let t=Math.min(3e3,e.doc.length),i=ti.create(e.facet(wt).parser,e,{from:0,to:t});return i.work(20,t)||i.takeTree(),new ii(i)}}Ie.state=Me.define({create:ii.init,update(n,e){for(let t of e.effects)if(t.is(Ie.setState))return t.value;return e.startState.facet(wt)!=e.state.facet(wt)?ii.init(e.state):n.apply(e)}});let _h=n=>{let e=setTimeout(()=>n(),500);return()=>clearTimeout(e)};typeof requestIdleCallback<"u"&&(_h=n=>{let e=-1,t=setTimeout(()=>{e=requestIdleCallback(n,{timeout:500-100})},100);return()=>e<0?clearTimeout(t):cancelIdleCallback(e)});const gs=typeof navigator<"u"&&(!((ms=navigator.scheduling)===null||ms===void 0)&&ms.isInputPending)?()=>navigator.scheduling.isInputPending():null,rp=be.fromClass(class{constructor(e){this.view=e,this.working=null,this.workScheduled=0,this.chunkEnd=-1,this.chunkBudget=-1,this.work=this.work.bind(this),this.scheduleWork()}update(e){let t=this.view.state.field(Ie.state).context;(t.updateViewport(e.view.viewport)||this.view.viewport.to>t.treeLen)&&this.scheduleWork(),e.docChanged&&(this.view.hasFocus&&(this.chunkBudget+=50),this.scheduleWork()),this.checkAsyncSchedule(t)}scheduleWork(){if(this.working)return;let{state:e}=this.view,t=e.field(Ie.state);(t.tree!=t.context.tree||!t.context.isDone(e.doc.length))&&(this.working=_h(this.work))}work(e){this.working=null;let t=Date.now();if(this.chunkEnds+1e3,a=r.context.work(()=>gs&&gs()||Date.now()>o,s+(l?0:1e5));this.chunkBudget-=Date.now()-t,(a||this.chunkBudget<=0)&&(r.context.takeTree(),this.view.dispatch({effects:Ie.setState.of(new ii(r.context))})),this.chunkBudget>0&&!(a&&!l)&&this.scheduleWork(),this.checkAsyncSchedule(r.context)}checkAsyncSchedule(e){e.scheduleOn&&(this.workScheduled++,e.scheduleOn.then(()=>this.scheduleWork()).catch(t=>He(this.view.state,t)).then(()=>this.workScheduled--),e.scheduleOn=null)}destroy(){this.working&&this.working()}isWorking(){return!!(this.working||this.workScheduled>0)}},{eventHandlers:{focus(){this.scheduleWork()}}}),wt=D.define({combine(n){return n.length?n[0]:null},enables:n=>[Ie.state,rp,O.contentAttributes.compute([n],e=>{let t=e.facet(n);return t&&t.name?{"data-language":t.name}:{}})]});class sy{constructor(e,t=[]){this.language=e,this.support=t,this.extension=[e,t]}}class Vh{constructor(e,t,i,s,r,o=void 0){this.name=e,this.alias=t,this.extensions=i,this.filename=s,this.loadFunc=r,this.support=o,this.loading=null}load(){return this.loading||(this.loading=this.loadFunc().then(e=>this.support=e,e=>{throw this.loading=null,e}))}static of(e){let{load:t,support:i}=e;if(!t){if(!i)throw new RangeError("Must pass either 'load' or 'support' to LanguageDescription.of");t=()=>Promise.resolve(i)}return new 
Vh(e.name,(e.alias||[]).concat(e.name).map(s=>s.toLowerCase()),e.extensions||[],e.filename,t,i)}static matchFilename(e,t){for(let s of e)if(s.filename&&s.filename.test(t))return s;let i=/\.([^.]+)$/.exec(t);if(i){for(let s of e)if(s.extensions.indexOf(i[1])>-1)return s}return null}static matchLanguageName(e,t,i=!0){t=t.toLowerCase();for(let s of e)if(s.alias.some(r=>r==t))return s;if(i)for(let s of e)for(let r of s.alias){let o=t.indexOf(r);if(o>-1&&(r.length>2||!/\w/.test(t[o-1])&&!/\w/.test(t[o+r.length])))return s}return null}}const Fh=D.define(),jn=D.define({combine:n=>{if(!n.length)return" ";let e=n[0];if(!e||/\S/.test(e)||Array.from(e).some(t=>t!=e[0]))throw new Error("Invalid indent unit: "+JSON.stringify(n[0]));return e}});function Rt(n){let e=n.facet(jn);return e.charCodeAt(0)==9?n.tabSize*e.length:e.length}function Li(n,e){let t="",i=n.tabSize,s=n.facet(jn)[0];if(s==" "){for(;e>=i;)t+=" ",e-=i;s=" "}for(let r=0;r=i.from&&s<=i.to?r&&s==e?{text:"",from:e}:(t<0?s-1&&(r+=o-this.countColumn(i,i.search(/\S|$/))),r}countColumn(e,t=e.length){return Fi(e,this.state.tabSize,t)}lineIndent(e,t=1){let{text:i,from:s}=this.lineAt(e,t),r=this.options.overrideIndentation;if(r){let o=r(s);if(o>-1)return o}return this.countColumn(i,i.search(/\S|$/))}get simulatedBreak(){return this.options.simulateBreak||null}}const op=new L;function lp(n,e,t){return Hh(e.resolveInner(t).enterUnfinishedNodesBefore(t),t,n)}function ap(n){return n.pos==n.options.simulateBreak&&n.options.simulateDoubleBreak}function hp(n){let e=n.type.prop(op);if(e)return e;let t=n.firstChild,i;if(t&&(i=t.type.prop(L.closedBy))){let s=n.lastChild,r=s&&i.indexOf(s.name)>-1;return o=>Wh(o,!0,1,void 0,r&&!ap(o)?s.from:void 0)}return n.parent==null?cp:null}function Hh(n,e,t){for(;n;n=n.parent){let i=hp(n);if(i)return i(Fr.create(t,e,n))}return null}function cp(){return 0}class Fr extends Kn{constructor(e,t,i){super(e.state,e.options),this.base=e,this.pos=t,this.node=i}static create(e,t,i){return new Fr(e,t,i)}get textAfter(){return this.textAfterPos(this.pos)}get baseIndent(){let e=this.state.doc.lineAt(this.node.from);for(;;){let t=this.node.resolve(e.from);for(;t.parent&&t.parent.from==t.from;)t=t.parent;if(fp(t,this.node))break;e=this.state.doc.lineAt(t.from)}return this.lineIndent(e.from)}continue(){let e=this.node.parent;return e?Hh(e,this.pos,this.base):0}}function fp(n,e){for(let t=e;t;t=t.parent)if(n==t)return!0;return!1}function up(n){let e=n.node,t=e.childAfter(e.from),i=e.lastChild;if(!t)return null;let s=n.options.simulateBreak,r=n.state.doc.lineAt(t.from),o=s==null||s<=r.from?r.to:Math.min(r.to,s);for(let l=t.to;;){let a=e.childAfter(l);if(!a||a==i)return null;if(!a.type.isSkipped)return a.fromWh(i,e,t,n)}function Wh(n,e,t,i,s){let r=n.textAfter,o=r.match(/^\s*/)[0].length,l=i&&r.slice(o,o+i.length)==i||s==n.pos+o,a=e?up(n):null;return a?l?n.column(a.from):n.column(a.to):n.baseIndent+(l?0:n.unit*t)}const oy=n=>n.baseIndent;function ly({except:n,units:e=1}={}){return t=>{let i=n&&n.test(t.textAfter);return t.baseIndent+(i?0:e*t.unit)}}const dp=200;function pp(){return N.transactionFilter.of(n=>{if(!n.docChanged||!n.isUserEvent("input.type")&&!n.isUserEvent("input.complete"))return n;let e=n.startState.languageDataAt("indentOnInput",n.startState.selection.main.head);if(!e.length)return n;let t=n.newDoc,{head:i}=n.newSelection.main,s=t.lineAt(i);if(i>s.from+dp)return n;let r=t.sliceString(s.from,i);if(!e.some(h=>h.test(r)))return n;let{state:o}=n,l=-1,a=[];for(let{head:h}of o.selection.ranges){let 
c=o.doc.lineAt(h);if(c.from==l)continue;l=c.from;let f=Vr(o,c.from);if(f==null)continue;let u=/^\s*/.exec(c.text)[0],d=Li(o,f);u!=d&&a.push({from:c.from,to:c.from+u.length,insert:d})}return a.length?[n,{changes:a,sequential:!0}]:n})}const mp=D.define(),gp=new L;function ay(n){let e=n.firstChild,t=n.lastChild;return e&&e.tot)continue;if(r&&o.from=e&&a.to>t&&(r=a)}}return r}function bp(n){let e=n.lastChild;return e&&e.to==n.to&&e.type.isError}function Rn(n,e,t){for(let i of n.facet(mp)){let s=i(n,e,t);if(s)return s}return yp(n,e,t)}function zh(n,e){let t=e.mapPos(n.from,1),i=e.mapPos(n.to,-1);return t>=i?void 0:{from:t,to:i}}const Un=R.define({map:zh}),Wi=R.define({map:zh});function qh(n){let e=[];for(let{head:t}of n.state.selection.ranges)e.some(i=>i.from<=t&&i.to>=t)||e.push(n.lineBlockAt(t));return e}const Lt=Me.define({create(){return E.none},update(n,e){n=n.map(e.changes);for(let t of e.effects)t.is(Un)&&!wp(n,t.value.from,t.value.to)?n=n.update({add:[ml.range(t.value.from,t.value.to)]}):t.is(Wi)&&(n=n.update({filter:(i,s)=>t.value.from!=i||t.value.to!=s,filterFrom:t.value.from,filterTo:t.value.to}));if(e.selection){let t=!1,{head:i}=e.selection.main;n.between(i,i,(s,r)=>{si&&(t=!0)}),t&&(n=n.update({filterFrom:i,filterTo:i,filter:(s,r)=>r<=i||s>=i}))}return n},provide:n=>O.decorations.from(n),toJSON(n,e){let t=[];return n.between(0,e.doc.length,(i,s)=>{t.push(i,s)}),t},fromJSON(n){if(!Array.isArray(n)||n.length%2)throw new RangeError("Invalid JSON for fold state");let e=[];for(let t=0;t{(!s||s.from>r)&&(s={from:r,to:o})}),s}function wp(n,e,t){let i=!1;return n.between(e,e,(s,r)=>{s==e&&r==t&&(i=!0)}),i}function jh(n,e){return n.field(Lt,!1)?e:e.concat(R.appendConfig.of(Gh()))}const kp=n=>{for(let e of qh(n)){let t=Rn(n.state,e.from,e.to);if(t)return n.dispatch({effects:jh(n.state,[Un.of(t),Kh(n,t)])}),!0}return!1},vp=n=>{if(!n.state.field(Lt,!1))return!1;let e=[];for(let t of qh(n)){let i=Ln(n.state,t.from,t.to);i&&e.push(Wi.of(i),Kh(n,i,!1))}return e.length&&n.dispatch({effects:e}),e.length>0};function Kh(n,e,t=!0){let i=n.state.doc.lineAt(e.from).number,s=n.state.doc.lineAt(e.to).number;return O.announce.of(`${n.state.phrase(t?"Folded lines":"Unfolded lines")} ${i} ${n.state.phrase("to")} ${s}.`)}const xp=n=>{let{state:e}=n,t=[];for(let i=0;i{let e=n.state.field(Lt,!1);if(!e||!e.size)return!1;let t=[];return e.between(0,n.state.doc.length,(i,s)=>{t.push(Wi.of({from:i,to:s}))}),n.dispatch({effects:t}),!0},Cp=[{key:"Ctrl-Shift-[",mac:"Cmd-Alt-[",run:kp},{key:"Ctrl-Shift-]",mac:"Cmd-Alt-]",run:vp},{key:"Ctrl-Alt-[",run:xp},{key:"Ctrl-Alt-]",run:Sp}],Ap={placeholderDOM:null,placeholderText:"…"},Uh=D.define({combine(n){return _t(n,Ap)}});function Gh(n){let e=[Lt,Tp];return n&&e.push(Uh.of(n)),e}const ml=E.replace({widget:new class extends tt{toDOM(n){let{state:e}=n,t=e.facet(Uh),i=r=>{let o=n.lineBlockAt(n.posAtDOM(r.target)),l=Ln(n.state,o.from,o.to);l&&n.dispatch({effects:Wi.of(l)}),r.preventDefault()};if(t.placeholderDOM)return t.placeholderDOM(n,i);let s=document.createElement("span");return s.textContent=t.placeholderText,s.setAttribute("aria-label",e.phrase("folded code")),s.title=e.phrase("unfold"),s.className="cm-foldPlaceholder",s.onclick=i,s}}}),Mp={openText:"⌄",closedText:"›",markerDOM:null,domEventHandlers:{},foldingChanged:()=>!1};class ys extends bt{constructor(e,t){super(),this.config=e,this.open=t}eq(e){return this.config==e.config&&this.open==e.open}toDOM(e){if(this.config.markerDOM)return this.config.markerDOM(this.open);let t=document.createElement("span");return 
t.textContent=this.open?this.config.openText:this.config.closedText,t.title=e.state.phrase(this.open?"Fold line":"Unfold line"),t}}function Dp(n={}){let e=Object.assign(Object.assign({},Mp),n),t=new ys(e,!0),i=new ys(e,!1),s=be.fromClass(class{constructor(o){this.from=o.viewport.from,this.markers=this.buildMarkers(o)}update(o){(o.docChanged||o.viewportChanged||o.startState.facet(wt)!=o.state.facet(wt)||o.startState.field(Lt,!1)!=o.state.field(Lt,!1)||pe(o.startState)!=pe(o.state)||e.foldingChanged(o))&&(this.markers=this.buildMarkers(o.view))}buildMarkers(o){let l=new Pt;for(let a of o.viewportLineBlocks){let h=Ln(o.state,a.from,a.to)?i:Rn(o.state,a.from,a.to)?t:null;h&&l.add(a.from,a.from,h)}return l.finish()}}),{domEventHandlers:r}=e;return[s,Td({class:"cm-foldGutter",markers(o){var l;return((l=o.plugin(s))===null||l===void 0?void 0:l.markers)||F.empty},initialSpacer(){return new ys(e,!1)},domEventHandlers:Object.assign(Object.assign({},r),{click:(o,l,a)=>{if(r.click&&r.click(o,l,a))return!0;let h=Ln(o.state,l.from,l.to);if(h)return o.dispatch({effects:Wi.of(h)}),!0;let c=Rn(o.state,l.from,l.to);return c?(o.dispatch({effects:Un.of(c)}),!0):!1}})}),Gh()]}const Tp=O.baseTheme({".cm-foldPlaceholder":{backgroundColor:"#eee",border:"1px solid #ddd",color:"#888",borderRadius:".2em",margin:"0 1px",padding:"0 1px",cursor:"pointer"},".cm-foldGutter span":{padding:"0 1px",cursor:"pointer"}});class li{constructor(e,t){this.specs=e;let i;function s(l){let a=mt.newName();return(i||(i=Object.create(null)))["."+a]=l,a}const r=typeof t.all=="string"?t.all:t.all?s(t.all):void 0,o=t.scope;this.scope=o instanceof Ie?l=>l.prop(Dt)==o.data:o?l=>l==o:void 0,this.style=Ih(e.map(l=>({tag:l.tag,class:l.class||s(Object.assign({},l,{tag:null}))})),{all:r}).style,this.module=i?new mt(i):null,this.themeType=t.themeType}static define(e,t){return new li(e,t||{})}}const dr=D.define(),$h=D.define({combine(n){return n.length?[n[0]]:null}});function bs(n){let e=n.facet(dr);return e.length?e:n.facet($h)}function Hr(n,e){let t=[Bp],i;return n instanceof li&&(n.module&&t.push(O.styleModule.of(n.module)),i=n.themeType),e?.fallback?t.push($h.of(n)):i?t.push(dr.computeN([O.darkTheme],s=>s.facet(O.darkTheme)==(i=="dark")?[n]:[])):t.push(dr.of(n)),t}class Op{constructor(e){this.markCache=Object.create(null),this.tree=pe(e.state),this.decorations=this.buildDeco(e,bs(e.state))}update(e){let t=pe(e.state),i=bs(e.state),s=i!=bs(e.startState);t.length{i.add(o,l,this.markCache[a]||(this.markCache[a]=E.mark({class:a})))},s,r);return i.finish()}}const Bp=Vi.high(be.fromClass(Op,{decorations:n=>n.decorations})),Pp=li.define([{tag:m.meta,color:"#404740"},{tag:m.link,textDecoration:"underline"},{tag:m.heading,textDecoration:"underline",fontWeight:"bold"},{tag:m.emphasis,fontStyle:"italic"},{tag:m.strong,fontWeight:"bold"},{tag:m.strikethrough,textDecoration:"line-through"},{tag:m.keyword,color:"#708"},{tag:[m.atom,m.bool,m.url,m.contentSeparator,m.labelName],color:"#219"},{tag:[m.literal,m.inserted],color:"#164"},{tag:[m.string,m.deleted],color:"#a11"},{tag:[m.regexp,m.escape,m.special(m.string)],color:"#e40"},{tag:m.definition(m.variableName),color:"#00f"},{tag:m.local(m.variableName),color:"#30a"},{tag:[m.typeName,m.namespace],color:"#085"},{tag:m.className,color:"#167"},{tag:[m.special(m.variableName),m.macroName],color:"#256"},{tag:m.definition(m.propertyName),color:"#00c"},{tag:m.comment,color:"#940"},{tag:m.invalid,color:"#f00"}]),Ep=1e4,Rp="()[]{}",Lp=new L;function pr(n,e,t){let i=n.prop(e<0?L.openedBy:L.closedBy);if(i)return 
i;if(n.name.length==1){let s=t.indexOf(n.name);if(s>-1&&s%2==(e<0?1:0))return[t[s+e]]}return null}function mr(n){let e=n.type.prop(Lp);return e?e(n.node):n}function qt(n,e,t,i={}){let s=i.maxScanDistance||Ep,r=i.brackets||Rp,o=pe(n),l=o.resolveInner(e,t);for(let a=l;a;a=a.parent){let h=pr(a.type,t,r);if(h&&a.from0?e>=c.from&&ec.from&&e<=c.to))return Ip(n,e,t,a,c,h,r)}}return Np(n,e,t,o,l.type,s,r)}function Ip(n,e,t,i,s,r,o){let l=i.parent,a={from:s.from,to:s.to},h=0,c=l?.cursor();if(c&&(t<0?c.childBefore(i.from):c.childAfter(i.to)))do if(t<0?c.to<=i.from:c.from>=i.to){if(h==0&&r.indexOf(c.type.name)>-1&&c.from0)return null;let h={from:t<0?e-1:e,to:t>0?e+1:e},c=n.doc.iterRange(e,t>0?n.doc.length:0),f=0;for(let u=0;!c.next().done&&u<=r;){let d=c.value;t<0&&(u+=d.length);let p=e+u*t;for(let g=t>0?0:d.length-1,y=t>0?d.length:-1;g!=y;g+=t){let b=o.indexOf(d[g]);if(!(b<0||i.resolveInner(p+g,1).type!=s))if(b%2==0==t>0)f++;else{if(f==1)return{start:h,end:{from:p+g,to:p+g+1},matched:b>>1==a>>1};f--}}t>0&&(u+=d.length)}return c.done?{start:h,matched:!1}:null}function gl(n,e,t,i=0,s=0){e==null&&(e=n.search(/[^\s\u00a0]/),e==-1&&(e=n.length));let r=s;for(let o=i;o=this.string.length}sol(){return this.pos==0}peek(){return this.string.charAt(this.pos)||void 0}next(){if(this.post}eatSpace(){let e=this.pos;for(;/[\s\u00a0]/.test(this.string.charAt(this.pos));)++this.pos;return this.pos>e}skipToEnd(){this.pos=this.string.length}skipTo(e){let t=this.string.indexOf(e,this.pos);if(t>-1)return this.pos=t,!0}backUp(e){this.pos-=e}column(){return this.lastColumnPosi?o.toLowerCase():o,r=this.string.substr(this.pos,e.length);return s(r)==s(e)?(t!==!1&&(this.pos+=e.length),!0):null}else{let s=this.string.slice(this.pos).match(e);return s&&s.index>0?null:(s&&t!==!1&&(this.pos+=s[0].length),s)}}current(){return this.string.slice(this.start,this.pos)}}function _p(n){return{name:n.name||"",token:n.token,blankLine:n.blankLine||(()=>{}),startState:n.startState||(()=>!0),copyState:n.copyState||Vp,indent:n.indent||(()=>null),languageData:n.languageData||{},tokenTable:n.tokenTable||zr}}function Vp(n){if(typeof n!="object")return n;let e={};for(let t in n){let i=n[t];e[t]=i instanceof Array?i.slice():i}return e}const yl=new WeakMap;class jt extends Ie{constructor(e){let t=Nh(e.languageData),i=_p(e),s,r=new class extends Rh{createParse(o,l,a){return new Hp(s,o,l,a)}};super(t,r,[Fh.of((o,l)=>this.getIndent(o,l))],e.name),this.topNode=qp(t),s=this,this.streamParser=i,this.stateAfter=new L({perNode:!0}),this.tokenTable=e.tokenTable?new Qh(i.tokenTable):zp}static define(e){return new jt(e)}getIndent(e,t){let i=pe(e.state),s=i.resolve(t);for(;s&&s.type!=this.topNode;)s=s.parent;if(!s)return null;let r,{overrideIndentation:o}=e.options;o&&(r=yl.get(e.state),r!=null&&r1e4)return null;for(;a=i&&t+e.length<=s&&e.prop(n.stateAfter);if(r)return{state:n.streamParser.copyState(r),pos:t+e.length};for(let o=e.children.length-1;o>=0;o--){let l=e.children[o],a=t+e.positions[o],h=l instanceof z&&a=e.length)return e;!s&&e.type==n.topNode&&(s=!0);for(let r=e.children.length-1;r>=0;r--){let o=e.positions[r],l=e.children[r],a;if(ot&&Wr(n,s.tree,0-s.offset,t,o),a;if(l&&(a=Yh(n,s.tree,t+s.offset,l.pos+s.offset,!1)))return{state:l.state,tree:a}}return{state:n.streamParser.startState(i?Rt(i):4),tree:z.empty}}class Hp{constructor(e,t,i,s){this.lang=e,this.input=t,this.fragments=i,this.ranges=s,this.stoppedAt=null,this.chunks=[],this.chunkPos=[],this.chunk=[],this.chunkReused=void 0,this.rangeIndex=0,this.to=s[s.length-1].to;let 
r=ti.get(),o=s[0].from,{state:l,tree:a}=Fp(e,i,o,r?.state);this.state=l,this.parsedPos=this.chunkStart=o+a.length;for(let h=0;h=t?this.finish():e&&this.parsedPos>=e.viewport.to?(e.skipUntilInView(this.parsedPos,t),this.finish()):null}stopAt(e){this.stoppedAt=e}lineAfter(e){let t=this.input.chunk(e);if(this.input.lineChunks)t==` -`&&(t="");else{let i=t.indexOf(` -`);i>-1&&(t=t.slice(0,i))}return e+t.length<=this.to?t:t.slice(0,this.to-e)}nextLine(){let e=this.parsedPos,t=this.lineAfter(e),i=e+t.length;for(let s=this.rangeIndex;;){let r=this.ranges[s].to;if(r>=i||(t=t.slice(0,r-(i-t.length)),s++,s==this.ranges.length))break;let o=this.ranges[s].from,l=this.lineAfter(o);t+=l,i=o+l.length}return{line:t,end:i}}skipGapsTo(e,t,i){for(;;){let s=this.ranges[this.rangeIndex].to,r=e+t;if(i>0?s>r:s>=r)break;let o=this.ranges[++this.rangeIndex].from;t+=o-s}return t}moveRangeIndex(){for(;this.ranges[this.rangeIndex].to1){r=this.skipGapsTo(t,r,1),t+=r;let o=this.chunk.length;r=this.skipGapsTo(i,r,-1),i+=r,s+=this.chunk.length-o}return this.chunk.push(e,t,i,s),r}parseLine(e){let{line:t,end:i}=this.nextLine(),s=0,{streamParser:r}=this.lang,o=new Jh(t,e?e.state.tabSize:4,e?Rt(e.state):2);if(o.eol())r.blankLine(this.state,o.indentUnit);else for(;!o.eol();){let l=Xh(r.token,o,this.state);if(l&&(s=this.emitToken(this.lang.tokenTable.resolve(l),this.parsedPos+o.start,this.parsedPos+o.pos,4,s)),o.start>1e4)break}this.parsedPos=i,this.moveRangeIndex(),this.parsedPose.start)return s}throw new Error("Stream parser failed to advance stream.")}const zr=Object.create(null),Ii=[xe.none],Wp=new Lr(Ii),bl=[],Zh=Object.create(null);for(let[n,e]of[["variable","variableName"],["variable-2","variableName.special"],["string-2","string.special"],["def","variableName.definition"],["tag","tagName"],["attribute","attributeName"],["type","typeName"],["builtin","variableName.standard"],["qualifier","modifier"],["error","invalid"],["header","heading"],["property","propertyName"]])Zh[n]=ec(zr,e);class Qh{constructor(e){this.extra=e,this.table=Object.assign(Object.create(null),Zh)}resolve(e){return e?this.table[e]||(this.table[e]=ec(this.extra,e)):0}}const zp=new Qh(zr);function ws(n,e){bl.indexOf(n)>-1||(bl.push(n),console.warn(e))}function ec(n,e){let t=null;for(let r of e.split(".")){let o=n[r]||m[r];o?typeof o=="function"?t?t=o(t):ws(r,`Modifier ${r} used at start of tag`):t?ws(r,`Tag ${r} used as modifier`):t=o:ws(r,`Unknown highlighting tag ${r}`)}if(!t)return 0;let i=e.replace(/ /g,"_"),s=xe.define({id:Ii.length,name:i,props:[Zd({[i]:t})]});return Ii.push(s),s.id}function qp(n){let e=xe.define({id:Ii.length,name:"Document",props:[Dt.add(()=>n)]});return Ii.push(e),e}const jp=n=>{let e=jr(n.state);return e.line?Kp(n):e.block?Gp(n):!1};function qr(n,e){return({state:t,dispatch:i})=>{if(t.readOnly)return!1;let s=n(e,t);return s?(i(t.update(s)),!0):!1}}const Kp=qr(Yp,0),Up=qr(tc,0),Gp=qr((n,e)=>tc(n,e,Jp(e)),0);function jr(n,e=n.selection.main.head){let t=n.languageDataAt("commentTokens",e);return t.length?t[0]:{}}const pi=50;function $p(n,{open:e,close:t},i,s){let r=n.sliceDoc(i-pi,i),o=n.sliceDoc(s,s+pi),l=/\s*$/.exec(r)[0].length,a=/^\s*/.exec(o)[0].length,h=r.length-l;if(r.slice(h-e.length,h)==e&&o.slice(a,a+t.length)==t)return{open:{pos:i-l,margin:l&&1},close:{pos:s+a,margin:a&&1}};let c,f;s-i<=2*pi?c=f=n.sliceDoc(i,s):(c=n.sliceDoc(i,i+pi),f=n.sliceDoc(s-pi,s));let u=/^\s*/.exec(c)[0].length,d=/\s*$/.exec(f)[0].length,p=f.length-d-t.length;return 
c.slice(u,u+e.length)==e&&f.slice(p,p+t.length)==t?{open:{pos:i+u+e.length,margin:/\s/.test(c.charAt(u+e.length))?1:0},close:{pos:s-d-t.length,margin:/\s/.test(f.charAt(p-1))?1:0}}:null}function Jp(n){let e=[];for(let t of n.selection.ranges){let i=n.doc.lineAt(t.from),s=t.to<=i.to?i:n.doc.lineAt(t.to),r=e.length-1;r>=0&&e[r].to>i.from?e[r].to=s.to:e.push({from:i.from,to:s.to})}return e}function tc(n,e,t=e.selection.ranges){let i=t.map(r=>jr(e,r.from).block);if(!i.every(r=>r))return null;let s=t.map((r,o)=>$p(e,i[o],r.from,r.to));if(n!=2&&!s.every(r=>r))return{changes:e.changes(t.map((r,o)=>s[o]?[]:[{from:r.from,insert:i[o].open+" "},{from:r.to,insert:" "+i[o].close}]))};if(n!=1&&s.some(r=>r)){let r=[];for(let o=0,l;os&&(r==o||o>c.from)){s=c.from;let f=jr(e,h).line;if(!f)continue;let u=/^\s*/.exec(c.text)[0].length,d=u==c.length,p=c.text.slice(u,u+f.length)==f?u:-1;ur.comment<0&&(!r.empty||r.single))){let r=[];for(let{line:l,token:a,indent:h,empty:c,single:f}of i)(f||!c)&&r.push({from:l.from+h,insert:a+" "});let o=e.changes(r);return{changes:o,selection:e.selection.map(o,1)}}else if(n!=1&&i.some(r=>r.comment>=0)){let r=[];for(let{line:o,comment:l,token:a}of i)if(l>=0){let h=o.from+l,c=h+a.length;o.text[c-o.from]==" "&&c++,r.push({from:h,to:c})}return{changes:r}}return null}const gr=Nt.define(),Xp=Nt.define(),Zp=D.define(),ic=D.define({combine(n){return _t(n,{minDepth:100,newGroupDelay:500},{minDepth:Math.max,newGroupDelay:Math.min})}});function Qp(n){let e=0;return n.iterChangedRanges((t,i)=>e=i),e}const nc=Me.define({create(){return Xe.empty},update(n,e){let t=e.state.facet(ic),i=e.annotation(gr);if(i){let a=e.docChanged?w.single(Qp(e.changes)):void 0,h=Se.fromTransaction(e,a),c=i.side,f=c==0?n.undone:n.done;return h?f=In(f,f.length,t.minDepth,h):f=oc(f,e.startState.selection),new Xe(c==0?i.rest:f,c==0?f:i.rest)}let s=e.annotation(Xp);if((s=="full"||s=="before")&&(n=n.isolate()),e.annotation(re.addToHistory)===!1)return e.changes.empty?n:n.addMapping(e.changes.desc);let r=Se.fromTransaction(e),o=e.annotation(re.time),l=e.annotation(re.userEvent);return r?n=n.addChanges(r,o,l,t.newGroupDelay,t.minDepth):e.selection&&(n=n.addSelection(e.startState.selection,o,l,t.newGroupDelay)),(s=="full"||s=="after")&&(n=n.isolate()),n},toJSON(n){return{done:n.done.map(e=>e.toJSON()),undone:n.undone.map(e=>e.toJSON())}},fromJSON(n){return new Xe(n.done.map(Se.fromJSON),n.undone.map(Se.fromJSON))}});function em(n={}){return[nc,ic.of(n),O.domEventHandlers({beforeinput(e,t){let i=e.inputType=="historyUndo"?sc:e.inputType=="historyRedo"?yr:null;return i?(e.preventDefault(),i(t)):!1}})]}function Gn(n,e){return function({state:t,dispatch:i}){if(!e&&t.readOnly)return!1;let s=t.field(nc,!1);if(!s)return!1;let r=s.pop(n,t,e);return r?(i(r),!0):!1}}const sc=Gn(0,!1),yr=Gn(1,!1),tm=Gn(0,!0),im=Gn(1,!0);class Se{constructor(e,t,i,s,r){this.changes=e,this.effects=t,this.mapped=i,this.startSelection=s,this.selectionsAfter=r}setSelAfter(e){return new Se(this.changes,this.effects,this.mapped,this.startSelection,e)}toJSON(){var e,t,i;return{changes:(e=this.changes)===null||e===void 0?void 0:e.toJSON(),mapped:(t=this.mapped)===null||t===void 0?void 0:t.toJSON(),startSelection:(i=this.startSelection)===null||i===void 0?void 0:i.toJSON(),selectionsAfter:this.selectionsAfter.map(s=>s.toJSON())}}static fromJSON(e){return new Se(e.changes&&ne.fromJSON(e.changes),[],e.mapped&&Ze.fromJSON(e.mapped),e.startSelection&&w.fromJSON(e.startSelection),e.selectionsAfter.map(w.fromJSON))}static fromTransaction(e,t){let i=Ne;for(let s of 
e.startState.facet(Zp)){let r=s(e);r.length&&(i=i.concat(r))}return!i.length&&e.changes.empty?null:new Se(e.changes.invert(e.startState.doc),i,void 0,t||e.startState.selection,Ne)}static selection(e){return new Se(void 0,Ne,void 0,void 0,e)}}function In(n,e,t,i){let s=e+1>t+20?e-t-1:0,r=n.slice(s,e);return r.push(i),r}function nm(n,e){let t=[],i=!1;return n.iterChangedRanges((s,r)=>t.push(s,r)),e.iterChangedRanges((s,r,o,l)=>{for(let a=0;a=h&&o<=c&&(i=!0)}}),i}function sm(n,e){return n.ranges.length==e.ranges.length&&n.ranges.filter((t,i)=>t.empty!=e.ranges[i].empty).length===0}function rc(n,e){return n.length?e.length?n.concat(e):n:e}const Ne=[],rm=200;function oc(n,e){if(n.length){let t=n[n.length-1],i=t.selectionsAfter.slice(Math.max(0,t.selectionsAfter.length-rm));return i.length&&i[i.length-1].eq(e)?n:(i.push(e),In(n,n.length-1,1e9,t.setSelAfter(i)))}else return[Se.selection([e])]}function om(n){let e=n[n.length-1],t=n.slice();return t[n.length-1]=e.setSelAfter(e.selectionsAfter.slice(0,e.selectionsAfter.length-1)),t}function ks(n,e){if(!n.length)return n;let t=n.length,i=Ne;for(;t;){let s=lm(n[t-1],e,i);if(s.changes&&!s.changes.empty||s.effects.length){let r=n.slice(0,t);return r[t-1]=s,r}else e=s.mapped,t--,i=s.selectionsAfter}return i.length?[Se.selection(i)]:Ne}function lm(n,e,t){let i=rc(n.selectionsAfter.length?n.selectionsAfter.map(l=>l.map(e)):Ne,t);if(!n.changes)return Se.selection(i);let s=n.changes.map(e),r=e.mapDesc(n.changes,!0),o=n.mapped?n.mapped.composeDesc(r):r;return new Se(s,R.mapEffects(n.effects,e),o,n.startSelection.map(r),i)}const am=/^(input\.type|delete)($|\.)/;class Xe{constructor(e,t,i=0,s=void 0){this.done=e,this.undone=t,this.prevTime=i,this.prevUserEvent=s}isolate(){return this.prevTime?new Xe(this.done,this.undone):this}addChanges(e,t,i,s,r){let o=this.done,l=o[o.length-1];return l&&l.changes&&!l.changes.empty&&e.changes&&(!i||am.test(i))&&(!l.selectionsAfter.length&&t-this.prevTime0&&t-this.prevTimet.empty?n.moveByChar(t,e):$n(t,e))}function we(n){return n.textDirectionAt(n.state.selection.main.head)==Z.LTR}const ac=n=>lc(n,!we(n)),hc=n=>lc(n,we(n));function cc(n,e){return We(n,t=>t.empty?n.moveByGroup(t,e):$n(t,e))}const cm=n=>cc(n,!we(n)),fm=n=>cc(n,we(n));function um(n,e,t){if(e.type.prop(t))return!0;let i=e.to-e.from;return i&&(i>2||/[^\s,.;:]/.test(n.sliceDoc(e.from,e.to)))||e.firstChild}function Jn(n,e,t){let i=pe(n).resolveInner(e.head),s=t?L.closedBy:L.openedBy;for(let a=e.head;;){let h=t?i.childAfter(a):i.childBefore(a);if(!h)break;um(n,h,s)?i=h:a=t?h.to:h.from}let r=i.type.prop(s),o,l;return r&&(o=t?qt(n,i.from,1):qt(n,i.to,-1))&&o.matched?l=t?o.end.to:o.end.from:l=t?i.to:i.from,w.cursor(l,t?-1:1)}const dm=n=>We(n,e=>Jn(n.state,e,!we(n))),pm=n=>We(n,e=>Jn(n.state,e,we(n)));function fc(n,e){return We(n,t=>{if(!t.empty)return $n(t,e);let i=n.moveVertically(t,e);return i.head!=t.head?i:n.moveToLineBoundary(t,e)})}const uc=n=>fc(n,!1),dc=n=>fc(n,!0);function pc(n){return Math.max(n.defaultLineHeight,Math.min(n.dom.clientHeight,innerHeight)-5)}function mc(n,e){let{state:t}=n,i=ai(t.selection,l=>l.empty?n.moveVertically(l,e,pc(n)):$n(l,e));if(i.eq(t.selection))return!1;let s=n.coordsAtPos(t.selection.main.head),r=n.scrollDOM.getBoundingClientRect(),o;return s&&s.top>r.top&&s.bottommc(n,!1),br=n=>mc(n,!0);function kt(n,e,t){let i=n.lineBlockAt(e.head),s=n.moveToLineBoundary(e,t);if(s.head==e.head&&s.head!=(t?i.to:i.from)&&(s=n.moveToLineBoundary(e,t,!1)),!t&&s.head==i.from&&i.length){let 
r=/^\s*/.exec(n.state.sliceDoc(i.from,Math.min(i.from+100,i.to)))[0].length;r&&e.head!=i.from+r&&(s=w.cursor(i.from+r))}return s}const mm=n=>We(n,e=>kt(n,e,!0)),gm=n=>We(n,e=>kt(n,e,!1)),ym=n=>We(n,e=>kt(n,e,!we(n))),bm=n=>We(n,e=>kt(n,e,we(n))),wm=n=>We(n,e=>w.cursor(n.lineBlockAt(e.head).from,1)),km=n=>We(n,e=>w.cursor(n.lineBlockAt(e.head).to,-1));function vm(n,e,t){let i=!1,s=ai(n.selection,r=>{let o=qt(n,r.head,-1)||qt(n,r.head,1)||r.head>0&&qt(n,r.head-1,1)||r.headvm(n,e,!1);function Ve(n,e){let t=ai(n.state.selection,i=>{let s=e(i);return w.range(i.anchor,s.head,s.goalColumn)});return t.eq(n.state.selection)?!1:(n.dispatch(it(n.state,t)),!0)}function gc(n,e){return Ve(n,t=>n.moveByChar(t,e))}const yc=n=>gc(n,!we(n)),bc=n=>gc(n,we(n));function wc(n,e){return Ve(n,t=>n.moveByGroup(t,e))}const Sm=n=>wc(n,!we(n)),Cm=n=>wc(n,we(n)),Am=n=>Ve(n,e=>Jn(n.state,e,!we(n))),Mm=n=>Ve(n,e=>Jn(n.state,e,we(n)));function kc(n,e){return Ve(n,t=>n.moveVertically(t,e))}const vc=n=>kc(n,!1),xc=n=>kc(n,!0);function Sc(n,e){return Ve(n,t=>n.moveVertically(t,e,pc(n)))}const kl=n=>Sc(n,!1),vl=n=>Sc(n,!0),Dm=n=>Ve(n,e=>kt(n,e,!0)),Tm=n=>Ve(n,e=>kt(n,e,!1)),Om=n=>Ve(n,e=>kt(n,e,!we(n))),Bm=n=>Ve(n,e=>kt(n,e,we(n))),Pm=n=>Ve(n,e=>w.cursor(n.lineBlockAt(e.head).from)),Em=n=>Ve(n,e=>w.cursor(n.lineBlockAt(e.head).to)),xl=({state:n,dispatch:e})=>(e(it(n,{anchor:0})),!0),Sl=({state:n,dispatch:e})=>(e(it(n,{anchor:n.doc.length})),!0),Cl=({state:n,dispatch:e})=>(e(it(n,{anchor:n.selection.main.anchor,head:0})),!0),Al=({state:n,dispatch:e})=>(e(it(n,{anchor:n.selection.main.anchor,head:n.doc.length})),!0),Rm=({state:n,dispatch:e})=>(e(n.update({selection:{anchor:0,head:n.doc.length},userEvent:"select"})),!0),Lm=({state:n,dispatch:e})=>{let t=Xn(n).map(({from:i,to:s})=>w.range(i,Math.min(s+1,n.doc.length)));return e(n.update({selection:w.create(t),userEvent:"select"})),!0},Im=({state:n,dispatch:e})=>{let t=ai(n.selection,i=>{var s;let r=pe(n).resolveInner(i.head,1);for(;!(r.from=i.to||r.to>i.to&&r.from<=i.from||!(!((s=r.parent)===null||s===void 0)&&s.parent));)r=r.parent;return w.range(r.to,r.from)});return e(it(n,t)),!0},Nm=({state:n,dispatch:e})=>{let t=n.selection,i=null;return t.ranges.length>1?i=w.create([t.main]):t.main.empty||(i=w.create([w.cursor(t.main.head)])),i?(e(it(n,i)),!0):!1};function Yn(n,e){if(n.state.readOnly)return!1;let t="delete.selection",{state:i}=n,s=i.changeByRange(r=>{let{from:o,to:l}=r;if(o==l){let a=e(o);ao&&(t="delete.forward",a=on(n,a,!0)),o=Math.min(o,a),l=Math.max(l,a)}else o=on(n,o,!1),l=on(n,l,!0);return o==l?{range:r}:{changes:{from:o,to:l},range:w.cursor(o)}});return s.changes.empty?!1:(n.dispatch(i.update(s,{scrollIntoView:!0,userEvent:t,effects:t=="delete.selection"?O.announce.of(i.phrase("Selection deleted")):void 0})),!0)}function on(n,e,t){if(n instanceof O)for(let i of n.state.facet(O.atomicRanges).map(s=>s(n)))i.between(e,e,(s,r)=>{se&&(e=t?r:s)});return e}const Cc=(n,e)=>Yn(n,t=>{let{state:i}=n,s=i.doc.lineAt(t),r,o;if(!e&&t>s.from&&tCc(n,!1),Ac=n=>Cc(n,!0),Mc=(n,e)=>Yn(n,t=>{let i=t,{state:s}=n,r=s.doc.lineAt(i),o=s.charCategorizer(i);for(let l=null;;){if(i==(e?r.to:r.from)){i==t&&r.number!=(e?s.doc.lines:1)&&(i+=e?1:-1);break}let a=Oe(r.text,i-r.from,e)+r.from,h=r.text.slice(Math.min(i,a)-r.from,Math.max(i,a)-r.from),c=o(h);if(l!=null&&c!=l)break;(h!=" "||i!=t)&&(l=c),i=a}return i}),Dc=n=>Mc(n,!1),_m=n=>Mc(n,!0),Tc=n=>Yn(n,e=>{let t=n.lineBlockAt(e).to;return eYn(n,e=>{let t=n.lineBlockAt(e).from;return 
e>t?t:Math.max(0,e-1)}),Fm=({state:n,dispatch:e})=>{if(n.readOnly)return!1;let t=n.changeByRange(i=>({changes:{from:i.from,to:i.to,insert:_.of(["",""])},range:w.cursor(i.from)}));return e(n.update(t,{scrollIntoView:!0,userEvent:"input"})),!0},Hm=({state:n,dispatch:e})=>{if(n.readOnly)return!1;let t=n.changeByRange(i=>{if(!i.empty||i.from==0||i.from==n.doc.length)return{range:i};let s=i.from,r=n.doc.lineAt(s),o=s==r.from?s-1:Oe(r.text,s-r.from,!1)+r.from,l=s==r.to?s+1:Oe(r.text,s-r.from,!0)+r.from;return{changes:{from:o,to:l,insert:n.doc.slice(s,l).append(n.doc.slice(o,s))},range:w.cursor(l)}});return t.changes.empty?!1:(e(n.update(t,{scrollIntoView:!0,userEvent:"move.character"})),!0)};function Xn(n){let e=[],t=-1;for(let i of n.selection.ranges){let s=n.doc.lineAt(i.from),r=n.doc.lineAt(i.to);if(!i.empty&&i.to==r.from&&(r=n.doc.lineAt(i.to-1)),t>=s.number){let o=e[e.length-1];o.to=r.to,o.ranges.push(i)}else e.push({from:s.from,to:r.to,ranges:[i]});t=r.number+1}return e}function Oc(n,e,t){if(n.readOnly)return!1;let i=[],s=[];for(let r of Xn(n)){if(t?r.to==n.doc.length:r.from==0)continue;let o=n.doc.lineAt(t?r.to+1:r.from-1),l=o.length+1;if(t){i.push({from:r.to,to:o.to},{from:r.from,insert:o.text+n.lineBreak});for(let a of r.ranges)s.push(w.range(Math.min(n.doc.length,a.anchor+l),Math.min(n.doc.length,a.head+l)))}else{i.push({from:o.from,to:r.from},{from:r.to,insert:n.lineBreak+o.text});for(let a of r.ranges)s.push(w.range(a.anchor-l,a.head-l))}}return i.length?(e(n.update({changes:i,scrollIntoView:!0,selection:w.create(s,n.selection.mainIndex),userEvent:"move.line"})),!0):!1}const Wm=({state:n,dispatch:e})=>Oc(n,e,!1),zm=({state:n,dispatch:e})=>Oc(n,e,!0);function Bc(n,e,t){if(n.readOnly)return!1;let i=[];for(let s of Xn(n))t?i.push({from:s.from,insert:n.doc.slice(s.from,s.to)+n.lineBreak}):i.push({from:s.to,insert:n.lineBreak+n.doc.slice(s.from,s.to)});return e(n.update({changes:i,scrollIntoView:!0,userEvent:"input.copyline"})),!0}const qm=({state:n,dispatch:e})=>Bc(n,e,!1),jm=({state:n,dispatch:e})=>Bc(n,e,!0),Km=n=>{if(n.state.readOnly)return!1;let{state:e}=n,t=e.changes(Xn(e).map(({from:s,to:r})=>(s>0?s--:rn.moveVertically(s,!0)).map(t);return n.dispatch({changes:t,selection:i,scrollIntoView:!0,userEvent:"delete.line"}),!0};function Um(n,e){if(/\(\)|\[\]|\{\}/.test(n.sliceDoc(e-1,e+1)))return{from:e,to:e};let t=pe(n).resolveInner(e),i=t.childBefore(e),s=t.childAfter(e),r;return i&&s&&i.to<=e&&s.from>=e&&(r=i.type.prop(L.closedBy))&&r.indexOf(s.name)>-1&&n.doc.lineAt(i.to).from==n.doc.lineAt(s.from).from?{from:i.to,to:s.from}:null}const Gm=Pc(!1),$m=Pc(!0);function Pc(n){return({state:e,dispatch:t})=>{if(e.readOnly)return!1;let i=e.changeByRange(s=>{let{from:r,to:o}=s,l=e.doc.lineAt(r),a=!n&&r==o&&Um(e,r);n&&(r=o=(o<=l.to?l:e.doc.lineAt(o)).to);let h=new Kn(e,{simulateBreak:r,simulateDoubleBreak:!!a}),c=Vr(h,r);for(c==null&&(c=/^\s*/.exec(e.doc.lineAt(r).text)[0].length);ol.from&&r{let s=[];for(let o=i.from;o<=i.to;){let l=n.doc.lineAt(o);l.number>t&&(i.empty||i.to>l.from)&&(e(l,s,i),t=l.number),o=l.to+1}let r=n.changes(s);return{changes:s,range:w.range(r.mapPos(i.anchor,1),r.mapPos(i.head,1))}})}const Jm=({state:n,dispatch:e})=>{if(n.readOnly)return!1;let t=Object.create(null),i=new Kn(n,{overrideIndentation:r=>{let o=t[r];return o??-1}}),s=Kr(n,(r,o,l)=>{let a=Vr(i,r.from);if(a==null)return;/\S/.test(r.text)||(a=0);let 
h=/^\s*/.exec(r.text)[0],c=Li(n,a);(h!=c||l.fromn.readOnly?!1:(e(n.update(Kr(n,(t,i)=>{i.push({from:t.from,insert:n.facet(jn)})}),{userEvent:"input.indent"})),!0),Rc=({state:n,dispatch:e})=>n.readOnly?!1:(e(n.update(Kr(n,(t,i)=>{let s=/^\s*/.exec(t.text)[0];if(!s)return;let r=Fi(s,n.tabSize),o=0,l=Li(n,Math.max(0,r-Rt(n)));for(;o({mac:n.key,run:n.run,shift:n.shift}))),Zm=[{key:"Alt-ArrowLeft",mac:"Ctrl-ArrowLeft",run:dm,shift:Am},{key:"Alt-ArrowRight",mac:"Ctrl-ArrowRight",run:pm,shift:Mm},{key:"Alt-ArrowUp",run:Wm},{key:"Shift-Alt-ArrowUp",run:qm},{key:"Alt-ArrowDown",run:zm},{key:"Shift-Alt-ArrowDown",run:jm},{key:"Escape",run:Nm},{key:"Mod-Enter",run:$m},{key:"Alt-l",mac:"Ctrl-l",run:Lm},{key:"Mod-i",run:Im,preventDefault:!0},{key:"Mod-[",run:Rc},{key:"Mod-]",run:Ec},{key:"Mod-Alt-\\",run:Jm},{key:"Shift-Mod-k",run:Km},{key:"Shift-Mod-\\",run:xm},{key:"Mod-/",run:jp},{key:"Alt-A",run:Up}].concat(Xm),Qm={key:"Tab",run:Ec,shift:Rc},eg="#2E3235",Ue="#DDDDDD",Ai="#B9D2FF",ln="#b0b0b0",tg="#e0e0e0",Lc="#808080",vs="#000000",ig="#A54543",Ic="#fc6d24",St="#fda331",xs="#8abeb7",Ml="#b5bd68",mi="#6fb3d2",gi="#cc99cc",ng="#6987AF",Dl=Ic,Tl="#292d30",an=Ai+"30",sg=eg,Ss=Ue,rg="#202325",Ol=Ue,og=O.theme({"&":{color:Ue,backgroundColor:sg},".cm-content":{caretColor:Ol},".cm-cursor, .cm-dropCursor":{borderLeftColor:Ol},"&.cm-focused .cm-selectionBackground, .cm-selectionBackground, .cm-content ::selection":{backgroundColor:rg},".cm-panels":{backgroundColor:Tl,color:ln},".cm-panels.cm-panels-top":{borderBottom:"2px solid black"},".cm-panels.cm-panels-bottom":{borderTop:"2px solid black"},".cm-searchMatch":{backgroundColor:Ai,outline:`1px solid ${ln}`,color:vs},".cm-searchMatch.cm-searchMatch-selected":{backgroundColor:tg,color:vs},".cm-activeLine":{backgroundColor:an},".cm-selectionMatch":{backgroundColor:an},"&.cm-focused .cm-matchingBracket, &.cm-focused .cm-nonmatchingBracket":{outline:`1px solid ${ln}`},"&.cm-focused .cm-matchingBracket":{backgroundColor:Ai,color:vs},".cm-gutters":{borderRight:"1px solid #ffffff10",color:Lc,backgroundColor:Tl},".cm-activeLineGutter":{backgroundColor:an},".cm-foldPlaceholder":{backgroundColor:"transparent",border:"none",color:Ai},".cm-tooltip":{border:"none",backgroundColor:Ss},".cm-tooltip .cm-tooltip-arrow:before":{borderTopColor:"transparent",borderBottomColor:"transparent"},".cm-tooltip .cm-tooltip-arrow:after":{borderTopColor:Ss,borderBottomColor:Ss},".cm-tooltip-autocomplete":{"& > ul > 
li[aria-selected]":{backgroundColor:an,color:ln}}},{dark:!0}),lg=li.define([{tag:m.keyword,color:St},{tag:[m.name,m.deleted,m.character,m.propertyName,m.macroName],color:Ml},{tag:[m.variableName],color:mi},{tag:[m.function(m.variableName)],color:St},{tag:[m.labelName],color:Ic},{tag:[m.color,m.constant(m.name),m.standard(m.name)],color:St},{tag:[m.definition(m.name),m.separator],color:gi},{tag:[m.brace],color:gi},{tag:[m.annotation],color:Dl},{tag:[m.number,m.changed,m.annotation,m.modifier,m.self,m.namespace],color:St},{tag:[m.typeName,m.className],color:mi},{tag:[m.operator,m.operatorKeyword],color:gi},{tag:[m.tagName],color:St},{tag:[m.squareBracket],color:gi},{tag:[m.angleBracket],color:gi},{tag:[m.attributeName],color:mi},{tag:[m.regexp],color:St},{tag:[m.quote],color:Ue},{tag:[m.string],color:Ml},{tag:m.link,color:ng,textDecoration:"underline",textUnderlinePosition:"under"},{tag:[m.url,m.escape,m.special(m.string)],color:xs},{tag:[m.meta],color:ig},{tag:[m.comment],color:Lc,fontStyle:"italic"},{tag:m.monospace,color:Ue},{tag:m.strong,fontWeight:"bold",color:St},{tag:m.emphasis,fontStyle:"italic",color:mi},{tag:m.strikethrough,textDecoration:"line-through"},{tag:m.heading,fontWeight:"bold",color:Ue},{tag:m.special(m.heading1),fontWeight:"bold",color:Ue},{tag:m.heading1,fontWeight:"bold",color:Ue},{tag:[m.heading2,m.heading3,m.heading4],fontWeight:"bold",color:Ue},{tag:[m.heading5,m.heading6],color:Ue},{tag:[m.atom,m.bool,m.special(m.variableName)],color:xs},{tag:[m.processingInstruction,m.inserted],color:xs},{tag:[m.contentSeparator],color:mi},{tag:m.invalid,color:Ai,borderBottom:`1px dotted ${Dl}`}]),ag=[og,Hr(lg)],Bl="#2e3440",Ur="#3b4252",Pl="#434c5e",hn="#4c566a",El="#e5e9f0",kr="#eceff4",Cs="#8fbcbb",Rl="#88c0d0",hg="#81a1c1",Fe="#5e81ac",cg="#bf616a",Wt="#d08770",As="#ebcb8b",Ll="#a3be8c",fg="#b48ead",Il="#d30102",Gr=kr,Ms=Gr,ug="#ffffff",Ds=Ur,dg=Gr,Nl=Ur,pg=O.theme({"&":{color:Bl,backgroundColor:ug},".cm-content":{caretColor:Nl},".cm-cursor, .cm-dropCursor":{borderLeftColor:Nl},"&.cm-focused .cm-selectionBackground, .cm-selectionBackground, .cm-content ::selection":{backgroundColor:dg},".cm-panels":{backgroundColor:Gr,color:hn},".cm-panels.cm-panels-top":{borderBottom:"2px solid black"},".cm-panels.cm-panels-bottom":{borderTop:"2px solid black"},".cm-searchMatch":{backgroundColor:"#72a1ff59",outline:`1px solid ${hn}`},".cm-searchMatch.cm-searchMatch-selected":{backgroundColor:El},".cm-activeLine":{backgroundColor:Ms},".cm-selectionMatch":{backgroundColor:El},"&.cm-focused .cm-matchingBracket, &.cm-focused .cm-nonmatchingBracket":{outline:`1px solid ${hn}`},"&.cm-focused .cm-matchingBracket":{backgroundColor:kr},".cm-gutters":{backgroundColor:kr,color:Bl,border:"none"},".cm-activeLineGutter":{backgroundColor:Ms},".cm-foldPlaceholder":{backgroundColor:"transparent",border:"none",color:"#ddd"},".cm-tooltip":{border:"none",backgroundColor:Ds},".cm-tooltip .cm-tooltip-arrow:before":{borderTopColor:"transparent",borderBottomColor:"transparent"},".cm-tooltip .cm-tooltip-arrow:after":{borderTopColor:Ds,borderBottomColor:Ds},".cm-tooltip-autocomplete":{"& > ul > 
li[aria-selected]":{backgroundColor:Ms,color:hn}}},{dark:!1}),mg=li.define([{tag:m.keyword,color:Fe},{tag:[m.name,m.deleted,m.character,m.propertyName,m.macroName],color:Wt},{tag:[m.variableName],color:Wt},{tag:[m.function(m.variableName)],color:Fe},{tag:[m.labelName],color:hg},{tag:[m.color,m.constant(m.name),m.standard(m.name)],color:Fe},{tag:[m.definition(m.name),m.separator],color:Ll},{tag:[m.brace],color:Cs},{tag:[m.annotation],color:Il},{tag:[m.number,m.changed,m.annotation,m.modifier,m.self,m.namespace],color:Rl},{tag:[m.typeName,m.className],color:As},{tag:[m.operator,m.operatorKeyword],color:Ll},{tag:[m.tagName],color:fg},{tag:[m.squareBracket],color:cg},{tag:[m.angleBracket],color:Wt},{tag:[m.attributeName],color:As},{tag:[m.regexp],color:Fe},{tag:[m.quote],color:Ur},{tag:[m.string],color:Wt},{tag:m.link,color:Cs,textDecoration:"underline",textUnderlinePosition:"under"},{tag:[m.url,m.escape,m.special(m.string)],color:Wt},{tag:[m.meta],color:Rl},{tag:[m.comment],color:Pl,fontStyle:"italic"},{tag:m.strong,fontWeight:"bold",color:Fe},{tag:m.emphasis,fontStyle:"italic",color:Fe},{tag:m.strikethrough,textDecoration:"line-through"},{tag:m.heading,fontWeight:"bold",color:Fe},{tag:m.special(m.heading1),fontWeight:"bold",color:Fe},{tag:m.heading1,fontWeight:"bold",color:Fe},{tag:[m.heading2,m.heading3,m.heading4],fontWeight:"bold",color:Fe},{tag:[m.heading5,m.heading6],color:Fe},{tag:[m.atom,m.bool,m.special(m.variableName)],color:Wt},{tag:[m.processingInstruction,m.inserted],color:Cs},{tag:[m.contentSeparator],color:As},{tag:m.invalid,color:Pl,borderBottom:`1px dotted ${Il}`}]),gg=[pg,Hr(mg)];function _l(n){let e=Object.keys(n).join(""),t=/\w/.test(e);return t&&(e=e.replace(/\w/g,"")),`[${t?"\\w":""}${e.replace(/[^\w\s]/g,"\\$&")}]`}function yg(n){let e=Object.create(null),t=Object.create(null);for(let{label:s}of n){e[s[0]]=!0;for(let r=1;rtypeof s=="string"?{label:s}:s),[t,i]=e.every(s=>/^\w+$/.test(s.label))?[/\w*$/,/\w+$/]:yg(e);return s=>{let r=s.matchBefore(i);return r||s.explicit?{from:r?r.from:s.pos,options:e,validFor:t}:null}}function hy(n,e){return t=>{for(let i=pe(t.state).resolveInner(t.pos,-1);i;i=i.parent)if(n.indexOf(i.name)>-1)return null;return e(t)}}class Vl{constructor(e,t,i){this.completion=e,this.source=t,this.match=i}}function vr(n){return n.selection.main.head}function wg(n,e,t,i){return Object.assign(Object.assign({},n.changeByRange(s=>{if(s==n.selection.main)return{changes:{from:t,to:i,insert:e},range:w.cursor(t+e.length)};let r=i-t;return!s.empty||r&&n.sliceDoc(s.from-r,s.from)!=n.sliceDoc(t,i)?{range:s}:{changes:{from:s.from-r,to:s.from,insert:e},range:w.cursor(s.from-r+e.length)}})),{userEvent:"input.complete"})}function Nc(n,e){const t=e.completion.apply||e.completion.label;let i=e.source;typeof t=="string"?n.dispatch(wg(n.state,t,i.from,i.to)):t(n,e.completion,i.from,i.to)}const Fl=new WeakMap;function kg(n){if(!Array.isArray(n))return n;let e=Fl.get(n);return e||Fl.set(n,e=bg(n)),e}class vg{constructor(e){this.pattern=e,this.chars=[],this.folded=[],this.any=[],this.precise=[],this.byWord=[];for(let t=0;t=48&&C<=57||C>=97&&C<=122?2:C>=65&&C<=90?1:0:(T=ga(C))!=T.toLowerCase()?1:T!=T.toUpperCase()?2:0;(!v||B==1&&y||k==0&&B!=0)&&(t[f]==C||i[f]==C&&(u=!0)?o[f++]=v:o.length&&(b=!1)),k=B,v+=Ee(C)}return 
f==a&&o[0]==0&&b?this.result(-100+(u?-200:0),o,e):d==a&&p==0?[-200-e.length,0,g]:l>-1?[-700-e.length,l,l+this.pattern.length]:d==a?[-200+-700-e.length,p,g]:f==a?this.result(-100+(u?-200:0)+-700+(b?0:-1100),o,e):t.length==2?null:this.result((s[0]?-700:0)+-200+-1100,s,e)}result(e,t,i){let s=[e-i.length],r=1;for(let o of t){let l=o+(this.astral?Ee(ge(i,o)):1);r>1&&s[r-1]==o?s[r-1]=l:(s[r++]=o,s[r++]=l)}return s}}const It=D.define({combine(n){return _t(n,{activateOnTyping:!0,selectOnOpen:!0,override:null,closeOnBlur:!0,maxRenderedOptions:100,defaultKeymap:!0,optionClass:()=>"",aboveCursor:!1,icons:!0,addToOptions:[],compareCompletions:(e,t)=>e.label.localeCompare(t.label),interactionDelay:75},{defaultKeymap:(e,t)=>e&&t,closeOnBlur:(e,t)=>e&&t,icons:(e,t)=>e&&t,optionClass:(e,t)=>i=>xg(e(i),t(i)),addToOptions:(e,t)=>e.concat(t)})}});function xg(n,e){return n?e?n+" "+e:n:e}function Sg(n){let e=n.addToOptions.slice();return n.icons&&e.push({render(t){let i=document.createElement("div");return i.classList.add("cm-completionIcon"),t.type&&i.classList.add(...t.type.split(/\s+/g).map(s=>"cm-completionIcon-"+s)),i.setAttribute("aria-hidden","true"),i},position:20}),e.push({render(t,i,s){let r=document.createElement("span");r.className="cm-completionLabel";let{label:o}=t,l=0;for(let a=1;al&&r.appendChild(document.createTextNode(o.slice(l,h)));let f=r.appendChild(document.createElement("span"));f.appendChild(document.createTextNode(o.slice(h,c))),f.className="cm-completionMatchedText",l=c}return lt.position-i.position).map(t=>t.render)}function Hl(n,e,t){if(n<=t)return{from:0,to:n};if(e<0&&(e=0),e<=n>>1){let s=Math.floor(e/t);return{from:s*t,to:(s+1)*t}}let i=Math.floor((n-e)/t);return{from:n-(i+1)*t,to:n-i*t}}class Cg{constructor(e,t){this.view=e,this.stateField=t,this.info=null,this.placeInfo={read:()=>this.measureInfo(),write:l=>this.positionInfo(l),key:this};let i=e.state.field(t),{options:s,selected:r}=i.open,o=e.state.facet(It);this.optionContent=Sg(o),this.optionClass=o.optionClass,this.range=Hl(s.length,r,o.maxRenderedOptions),this.dom=document.createElement("div"),this.dom.className="cm-tooltip-autocomplete",this.dom.addEventListener("mousedown",l=>{for(let a=l.target,h;a&&a!=this.dom;a=a.parentNode)if(a.nodeName=="LI"&&(h=/-(\d+)$/.exec(a.id))&&+h[1]{this.info&&this.view.requestMeasure(this.placeInfo)})}mount(){this.updateSel()}update(e){e.state.field(this.stateField)!=e.startState.field(this.stateField)&&this.updateSel()}positioned(){this.info&&this.view.requestMeasure(this.placeInfo)}updateSel(){let e=this.view.state.field(this.stateField),t=e.open;if((t.selected>-1&&t.selected=this.range.to)&&(this.range=Hl(t.options.length,t.selected,this.view.state.facet(It).maxRenderedOptions),this.list.remove(),this.list=this.dom.appendChild(this.createListBox(t.options,e.id,this.range)),this.list.addEventListener("scroll",()=>{this.info&&this.view.requestMeasure(this.placeInfo)})),this.updateSelectedOption(t.selected)){this.info&&(this.info.remove(),this.info=null);let{completion:i}=t.options[t.selected],{info:s}=i;if(!s)return;let r=typeof s=="string"?document.createTextNode(s):s(i);if(!r)return;"then"in r?r.then(o=>{o&&this.view.state.field(this.stateField,!1)==e&&this.addInfoPane(o)}).catch(o=>He(this.view.state,o,"completion info")):this.addInfoPane(r)}}addInfoPane(e){let t=this.info=document.createElement("div");t.className="cm-tooltip cm-completionInfo",t.appendChild(e),this.dom.appendChild(t),this.view.requestMeasure(this.placeInfo)}updateSelectedOption(e){let t=null;for(let 
i=this.list.firstChild,s=this.range.from;i;i=i.nextSibling,s++)s==e?i.hasAttribute("aria-selected")||(i.setAttribute("aria-selected","true"),t=i):i.hasAttribute("aria-selected")&&i.removeAttribute("aria-selected");return t&&Mg(this.list,t),t}measureInfo(){let e=this.dom.querySelector("[aria-selected]");if(!e||!this.info)return null;let t=this.dom.ownerDocument.defaultView||window,i=this.dom.getBoundingClientRect(),s=this.info.getBoundingClientRect(),r=e.getBoundingClientRect();if(r.top>Math.min(t.innerHeight,i.bottom)-10||r.bottom=s.height||p>i.top?c=r.bottom-i.top+"px":f=i.bottom-r.top+"px"}return{top:c,bottom:f,maxWidth:h,class:a?o?"left-narrow":"right-narrow":l?"left":"right"}}positionInfo(e){this.info&&(e?(this.info.style.top=e.top,this.info.style.bottom=e.bottom,this.info.style.maxWidth=e.maxWidth,this.info.className="cm-tooltip cm-completionInfo cm-completionInfo-"+e.class):this.info.style.top="-1e6px")}createListBox(e,t,i){const s=document.createElement("ul");s.id=t,s.setAttribute("role","listbox"),s.setAttribute("aria-expanded","true"),s.setAttribute("aria-label",this.view.state.phrase("Completions"));for(let r=i.from;rnew Cg(e,n)}function Mg(n,e){let t=n.getBoundingClientRect(),i=e.getBoundingClientRect();i.topt.bottom&&(n.scrollTop+=i.bottom-t.bottom)}function Wl(n){return(n.boost||0)*100+(n.apply?10:0)+(n.info?5:0)+(n.type?1:0)}function Dg(n,e){let t=[],i=0;for(let l of n)if(l.hasResult())if(l.result.filter===!1){let a=l.result.getMatch;for(let h of l.result.options){let c=[1e9-i++];if(a)for(let f of a(h))c.push(f);t.push(new Vl(h,l,c))}}else{let a=new vg(e.sliceDoc(l.from,l.to)),h;for(let c of l.result.options)(h=a.match(c.label))&&(c.boost!=null&&(h[0]+=c.boost),t.push(new Vl(c,l,h)))}let s=[],r=null,o=e.facet(It).compareCompletions;for(let l of t.sort((a,h)=>h.match[0]-a.match[0]||o(a.completion,h.completion)))!r||r.label!=l.completion.label||r.detail!=l.completion.detail||r.type!=null&&l.completion.type!=null&&r.type!=l.completion.type||r.apply!=l.completion.apply?s.push(l):Wl(l.completion)>Wl(r)&&(s[s.length-1]=l),r=l.completion;return s}class Mi{constructor(e,t,i,s,r){this.options=e,this.attrs=t,this.tooltip=i,this.timestamp=s,this.selected=r}setSelected(e,t){return e==this.selected||e>=this.options.length?this:new Mi(this.options,zl(t,e),this.tooltip,this.timestamp,e)}static build(e,t,i,s,r){let o=Dg(e,t);if(!o.length)return null;let l=t.facet(It).selectOnOpen?0:-1;if(s&&s.selected!=l&&s.selected!=-1){let a=s.options[s.selected].completion;for(let h=0;hh.hasResult()?Math.min(a,h.from):a,1e8),create:Ag(zi),above:r.aboveCursor},s?s.timestamp:Date.now(),l)}map(e){return new Mi(this.options,this.attrs,Object.assign(Object.assign({},this.tooltip),{pos:e.mapPos(this.tooltip.pos)}),this.timestamp,this.selected)}}class Nn{constructor(e,t,i){this.active=e,this.id=t,this.open=i}static start(){return new Nn(Bg,"cm-ac-"+Math.floor(Math.random()*2e6).toString(36),null)}update(e){let{state:t}=e,i=t.facet(It),r=(i.override||t.languageDataAt("autocomplete",vr(t)).map(kg)).map(l=>(this.active.find(h=>h.source==l)||new st(l,this.active.some(h=>h.state!=0)?1:0)).update(e,i));r.length==this.active.length&&r.every((l,a)=>l==this.active[a])&&(r=this.active);let o=e.selection||r.some(l=>l.hasResult()&&e.changes.touchesRange(l.from,l.to))||!Tg(r,this.active)?Mi.build(r,t,this.id,this.open,i):this.open&&e.docChanged?this.open.map(e.changes):this.open;!o&&r.every(l=>l.state!=1)&&r.some(l=>l.hasResult())&&(r=r.map(l=>l.hasResult()?new st(l.source,0):l));for(let l of 
e.effects)l.is(Fc)&&(o=o&&o.setSelected(l.value,this.id));return r==this.active&&o==this.open?this:new Nn(r,this.id,o)}get tooltip(){return this.open?this.open.tooltip:null}get attrs(){return this.open?this.open.attrs:Og}}function Tg(n,e){if(n==e)return!0;for(let t=0,i=0;;){for(;t-1&&(t["aria-activedescendant"]=n+"-"+e),t}const Bg=[];function Pg(n){return n.isUserEvent("input.type")?"input":n.isUserEvent("delete.backward")?"delete":null}class st{constructor(e,t,i=-1){this.source=e,this.state=t,this.explicitPos=i}hasResult(){return!1}update(e,t){let i=Pg(e),s=this;i?s=s.handleUserEvent(e,i,t):e.docChanged?s=s.handleChange(e):e.selection&&s.state!=0&&(s=new st(s.source,0));for(let r of e.effects)if(r.is(_c))s=new st(s.source,1,r.value?vr(e.state):-1);else if(r.is(Vc))s=new st(s.source,0);else if(r.is(Eg))for(let o of r.value)o.source==s.source&&(s=o);return s}handleUserEvent(e,t,i){return t=="delete"||!i.activateOnTyping?this.map(e.changes):new st(this.source,1)}handleChange(e){return e.changes.touchesRange(vr(e.startState))?new st(this.source,0):this.map(e.changes)}map(e){return e.empty||this.explicitPos<0?this:new st(this.source,this.state,e.mapPos(this.explicitPos))}}const _c=R.define(),Vc=R.define(),Eg=R.define({map(n,e){return n.map(t=>t.map(e))}}),Fc=R.define(),zi=Me.define({create(){return Nn.start()},update(n,e){return n.update(e)},provide:n=>[Er.from(n,e=>e.tooltip),O.contentAttributes.from(n,e=>e.attrs)]});function cn(n,e="option"){return t=>{let i=t.state.field(zi,!1);if(!i||!i.open||Date.now()-i.open.timestamp-1?i.open.selected+s*(n?1:-1):n?0:o-1;return l<0?l=e=="page"?0:o-1:l>=o&&(l=e=="page"?o-1:0),t.dispatch({effects:Fc.of(l)}),!0}}const Rg=n=>{let e=n.state.field(zi,!1);return n.state.readOnly||!e||!e.open||e.open.selected<0||Date.now()-e.open.timestampn.state.field(zi,!1)?(n.dispatch({effects:_c.of(!0)}),!0):!1,Ig=n=>{let e=n.state.field(zi,!1);return!e||!e.active.some(t=>t.state!=0)?!1:(n.dispatch({effects:Vc.of(null)}),!0)},Ng=O.baseTheme({".cm-tooltip.cm-tooltip-autocomplete":{"& > ul":{fontFamily:"monospace",whiteSpace:"nowrap",overflow:"hidden auto",maxWidth_fallback:"700px",maxWidth:"min(700px, 95vw)",minWidth:"250px",maxHeight:"10em",listStyle:"none",margin:0,padding:0,"& > li":{overflowX:"hidden",textOverflow:"ellipsis",cursor:"pointer",padding:"1px 3px",lineHeight:1.2}}},"&light .cm-tooltip-autocomplete ul li[aria-selected]":{background:"#17c",color:"white"},"&dark .cm-tooltip-autocomplete ul li[aria-selected]":{background:"#347",color:"white"},".cm-completionListIncompleteTop:before, .cm-completionListIncompleteBottom:after":{content:'"···"',opacity:.5,display:"block",textAlign:"center"},".cm-tooltip.cm-completionInfo":{position:"absolute",padding:"3px 9px",width:"max-content",maxWidth:"400px",boxSizing:"border-box"},".cm-completionInfo.cm-completionInfo-left":{right:"100%"},".cm-completionInfo.cm-completionInfo-right":{left:"100%"},".cm-completionInfo.cm-completionInfo-left-narrow":{right:"30px"},".cm-completionInfo.cm-completionInfo-right-narrow":{left:"30px"},"&light .cm-snippetField":{backgroundColor:"#00000022"},"&dark .cm-snippetField":{backgroundColor:"#ffffff22"},".cm-snippetFieldPosition":{verticalAlign:"text-top",width:0,height:"1.15em",display:"inline-block",margin:"0 -0.7px -.7em",borderLeft:"1.4px dotted 
#888"},".cm-completionMatchedText":{textDecoration:"underline"},".cm-completionDetail":{marginLeft:"0.5em",fontStyle:"italic"},".cm-completionIcon":{fontSize:"90%",width:".8em",display:"inline-block",textAlign:"center",paddingRight:".6em",opacity:"0.6"},".cm-completionIcon-function, .cm-completionIcon-method":{"&:after":{content:"'ƒ'"}},".cm-completionIcon-class":{"&:after":{content:"'○'"}},".cm-completionIcon-interface":{"&:after":{content:"'◌'"}},".cm-completionIcon-variable":{"&:after":{content:"'𝑥'"}},".cm-completionIcon-constant":{"&:after":{content:"'𝐶'"}},".cm-completionIcon-type":{"&:after":{content:"'𝑡'"}},".cm-completionIcon-enum":{"&:after":{content:"'∪'"}},".cm-completionIcon-property":{"&:after":{content:"'□'"}},".cm-completionIcon-keyword":{"&:after":{content:"'🔑︎'"}},".cm-completionIcon-namespace":{"&:after":{content:"'▢'"}},".cm-completionIcon-text":{"&:after":{content:"'abc'",fontSize:"50%",verticalAlign:"middle"}}});class _g{constructor(e,t,i,s){this.field=e,this.line=t,this.from=i,this.to=s}}class $r{constructor(e,t,i){this.field=e,this.from=t,this.to=i}map(e){let t=e.mapPos(this.from,-1,ce.TrackDel),i=e.mapPos(this.to,1,ce.TrackDel);return t==null||i==null?null:new $r(this.field,t,i)}}class Jr{constructor(e,t){this.lines=e,this.fieldPositions=t}instantiate(e,t){let i=[],s=[t],r=e.doc.lineAt(t),o=/^\s*/.exec(r.text)[0];for(let a of this.lines){if(i.length){let h=o,c=/^\t*/.exec(a)[0].length;for(let f=0;fnew $r(a.field,s[a.line]+a.from,s[a.line]+a.to));return{text:i,ranges:l}}static parse(e){let t=[],i=[],s=[],r;for(let o of e.split(/\r\n?|\n/)){for(;r=/[#$]\{(?:(\d+)(?::([^}]*))?|([^}]*))\}/.exec(o);){let l=r[1]?+r[1]:null,a=r[2]||r[3]||"",h=-1;for(let c=0;c=h&&f.field++}s.push(new _g(h,i.length,r.index,r.index+a.length)),o=o.slice(0,r.index)+a+o.slice(r.index+r[0].length)}for(let l;l=/([$#])\\{/.exec(o);){o=o.slice(0,l.index)+l[1]+"{"+o.slice(l.index+l[0].length);for(let a of s)a.line==i.length&&a.from>l.index&&(a.from--,a.to--)}i.push(o)}return new Jr(i,s)}}let Vg=E.widget({widget:new class extends tt{toDOM(){let n=document.createElement("span");return n.className="cm-snippetFieldPosition",n}ignoreEvent(){return!1}}}),Fg=E.mark({class:"cm-snippetField"});class hi{constructor(e,t){this.ranges=e,this.active=t,this.deco=E.set(e.map(i=>(i.from==i.to?Vg:Fg).range(i.from,i.to)))}map(e){let t=[];for(let i of this.ranges){let s=i.map(e);if(!s)return null;t.push(s)}return new hi(t,this.active)}selectionInsideField(e){return e.ranges.every(t=>this.ranges.some(i=>i.field==this.active&&i.from<=t.from&&i.to>=t.to))}}const qi=R.define({map(n,e){return n&&n.map(e)}}),Hg=R.define(),Ni=Me.define({create(){return null},update(n,e){for(let t of e.effects){if(t.is(qi))return t.value;if(t.is(Hg)&&n)return new hi(n.ranges,t.value)}return n&&e.docChanged&&(n=n.map(e.changes)),n&&e.selection&&!n.selectionInsideField(e.selection)&&(n=null),n},provide:n=>O.decorations.from(n,e=>e?e.deco:E.none)});function Yr(n,e){return w.create(n.filter(t=>t.field==e).map(t=>w.range(t.from,t.to)))}function Wg(n){let e=Jr.parse(n);return(t,i,s,r)=>{let{text:o,ranges:l}=e.instantiate(t.state,s),a={changes:{from:s,to:r,insert:_.of(o)},scrollIntoView:!0};if(l.length&&(a.selection=Yr(l,0)),l.length>1){let h=new hi(l,0),c=a.effects=[qi.of(h)];t.state.field(Ni,!1)===void 0&&c.push(R.appendConfig.of([Ni,Ug,Gg,Ng]))}t.dispatch(t.state.update(a))}}function Hc(n){return({state:e,dispatch:t})=>{let i=e.field(Ni,!1);if(!i||n<0&&i.active==0)return!1;let s=i.active+n,r=n>0&&!i.ranges.some(o=>o.field==s+n);return 
t(e.update({selection:Yr(i.ranges,s),effects:qi.of(r?null:new hi(i.ranges,s))})),!0}}const zg=({state:n,dispatch:e})=>n.field(Ni,!1)?(e(n.update({effects:qi.of(null)})),!0):!1,qg=Hc(1),jg=Hc(-1),Kg=[{key:"Tab",run:qg,shift:jg},{key:"Escape",run:zg}],ql=D.define({combine(n){return n.length?n[0]:Kg}}),Ug=Vi.highest(qn.compute([ql],n=>n.facet(ql)));function cy(n,e){return Object.assign(Object.assign({},e),{apply:Wg(n)})}const Gg=O.domEventHandlers({mousedown(n,e){let t=e.state.field(Ni,!1),i;if(!t||(i=e.posAtCoords({x:n.clientX,y:n.clientY}))==null)return!1;let s=t.ranges.find(r=>r.from<=i&&r.to>=i);return!s||s.field==t.active?!1:(e.dispatch({selection:Yr(t.ranges,s.field),effects:qi.of(t.ranges.some(r=>r.field>s.field)?new hi(t.ranges,s.field):null)}),!0)}}),_i={brackets:["(","[","{","'",'"'],before:")]}:;>",stringPrefixes:[]},Tt=R.define({map(n,e){let t=e.mapPos(n,-1,ce.TrackAfter);return t??void 0}}),Xr=R.define({map(n,e){return e.mapPos(n)}}),Zr=new class extends Bt{};Zr.startSide=1;Zr.endSide=-1;const Wc=Me.define({create(){return F.empty},update(n,e){if(e.selection){let t=e.state.doc.lineAt(e.selection.main.head).from,i=e.startState.doc.lineAt(e.startState.selection.main.head).from;t!=e.changes.mapPos(i,-1)&&(n=F.empty)}n=n.map(e.changes);for(let t of e.effects)t.is(Tt)?n=n.update({add:[Zr.range(t.value,t.value+1)]}):t.is(Xr)&&(n=n.update({filter:i=>i!=t.value}));return n}});function $g(){return[Yg,Wc]}const Ts="()[]{}<>";function zc(n){for(let e=0;e{if((Jg?n.composing:n.compositionStarted)||n.state.readOnly)return!1;let s=n.state.selection.main;if(i.length>2||i.length==2&&Ee(ge(i,0))==1||e!=s.from||t!=s.to)return!1;let r=Qg(n.state,i);return r?(n.dispatch(r),!0):!1}),Xg=({state:n,dispatch:e})=>{if(n.readOnly)return!1;let i=qc(n,n.selection.main.head).brackets||_i.brackets,s=null,r=n.changeByRange(o=>{if(o.empty){let l=e0(n.doc,o.head);for(let a of i)if(a==l&&Zn(n.doc,o.head)==zc(ge(a,0)))return{changes:{from:o.head-a.length,to:o.head+a.length},range:w.cursor(o.head-a.length)}}return{range:s=o}});return s||e(n.update(r,{scrollIntoView:!0,userEvent:"delete.backward"})),!s},Zg=[{key:"Backspace",run:Xg}];function Qg(n,e){let t=qc(n,n.selection.main.head),i=t.brackets||_i.brackets;for(let s of i){let r=zc(ge(s,0));if(e==s)return r==s?n0(n,s,i.indexOf(s+s+s)>-1,t):t0(n,s,r,t.before||_i.before);if(e==r&&jc(n,n.selection.main.from))return i0(n,s,r)}return null}function jc(n,e){let t=!1;return n.field(Wc).between(0,n.doc.length,i=>{i==e&&(t=!0)}),t}function Zn(n,e){let t=n.sliceString(e,e+2);return t.slice(0,Ee(ge(t,0)))}function e0(n,e){let t=n.sliceString(e-2,e);return Ee(ge(t,0))==t.length?t:t.slice(1)}function t0(n,e,t,i){let s=null,r=n.changeByRange(o=>{if(!o.empty)return{changes:[{insert:e,from:o.from},{insert:t,from:o.to}],effects:Tt.of(o.to+e.length),range:w.range(o.anchor+e.length,o.head+e.length)};let l=Zn(n.doc,o.head);return!l||/\s/.test(l)||i.indexOf(l)>-1?{changes:{insert:e+t,from:o.head},effects:Tt.of(o.head+e.length),range:w.cursor(o.head+e.length)}:{range:s=o}});return s?null:n.update(r,{scrollIntoView:!0,userEvent:"input.type"})}function i0(n,e,t){let i=null,s=n.selection.ranges.map(r=>r.empty&&Zn(n.doc,r.head)==t?w.cursor(r.head+t.length):i=r);return i?null:n.update({selection:w.create(s,n.selection.mainIndex),scrollIntoView:!0,effects:n.selection.ranges.map(({from:r})=>Xr.of(r))})}function n0(n,e,t,i){let 
s=i.stringPrefixes||_i.stringPrefixes,r=null,o=n.changeByRange(l=>{if(!l.empty)return{changes:[{insert:e,from:l.from},{insert:e,from:l.to}],effects:Tt.of(l.to+e.length),range:w.range(l.anchor+e.length,l.head+e.length)};let a=l.head,h=Zn(n.doc,a),c;if(h==e){if(jl(n,a))return{changes:{insert:e+e,from:a},effects:Tt.of(a+e.length),range:w.cursor(a+e.length)};if(jc(n,a)){let f=t&&n.sliceDoc(a,a+e.length*3)==e+e+e;return{range:w.cursor(a+e.length*(f?3:1)),effects:Xr.of(a)}}}else{if(t&&n.sliceDoc(a-2*e.length,a)==e+e&&(c=Kl(n,a-2*e.length,s))>-1&&jl(n,c))return{changes:{insert:e+e+e+e,from:a},effects:Tt.of(a+e.length),range:w.cursor(a+e.length)};if(n.charCategorizer(a)(h)!=Re.Word&&Kl(n,a,s)>-1&&!s0(n,a,e,s))return{changes:{insert:e+e,from:a},effects:Tt.of(a+e.length),range:w.cursor(a+e.length)}}return{range:r=l}});return r?null:n.update(o,{scrollIntoView:!0,userEvent:"input.type"})}function jl(n,e){let t=pe(n).resolveInner(e+1);return t.parent&&t.from==e}function s0(n,e,t,i){let s=pe(n).resolveInner(e,-1),r=i.reduce((o,l)=>Math.max(o,l.length),0);for(let o=0;o<5;o++){let l=n.sliceDoc(s.from,Math.min(s.to,s.from+t.length+r)),a=l.indexOf(t);if(!a||a>-1&&i.indexOf(l.slice(0,a))>-1){let c=s.firstChild;for(;c&&c.from==s.from&&c.to-c.from>t.length+a;){if(n.sliceDoc(c.to-t.length,c.to)==t)return!1;c=c.firstChild}return!0}let h=s.to==e&&s.parent;if(!h)break;s=h}return!1}function Kl(n,e,t){let i=n.charCategorizer(e);if(i(n.sliceDoc(e-1,e))!=Re.Word)return e;for(let s of t){let r=e-s.length;if(n.sliceDoc(r,e)==s&&i(n.sliceDoc(r-1,r))!=Re.Word)return r}return-1}const r0=[{key:"Ctrl-Space",run:Lg},{key:"Escape",run:Ig},{key:"ArrowDown",run:cn(!0)},{key:"ArrowUp",run:cn(!1)},{key:"PageDown",run:cn(!0,"page")},{key:"PageUp",run:cn(!1,"page")},{key:"Enter",run:Rg}];function Je(){var n=arguments[0];typeof n=="string"&&(n=document.createElement(n));var e=1,t=arguments[1];if(t&&typeof t=="object"&&t.nodeType==null&&!Array.isArray(t)){for(var i in t)if(Object.prototype.hasOwnProperty.call(t,i)){var s=t[i];typeof s=="string"?n.setAttribute(i,s):s!=null&&(n[i]=s)}e++}for(;el.from==l.to||l.from==l.to-1&&i.doc.lineAt(l.from).to==l.from?E.widget({widget:new g0(l),diagnostic:l}).range(l.from):E.mark({attributes:{class:"cm-lintRange cm-lintRange-"+l.severity},diagnostic:l}).range(l.from,l.to)),!0);return new At(o,t,ni(o))}}function ni(n,e=null,t=0){let i=null;return n.between(t,1e9,(s,r,{spec:o})=>{if(!(e&&o.diagnostic!=e))return i=new o0(s,r,o.diagnostic),!1}),i}function l0(n,e){return!!(n.effects.some(t=>t.is(Qr))||n.changes.touchesRange(e.pos))}function Uc(n,e){return n.field(Be,!1)?e:e.concat(R.appendConfig.of([Be,O.decorations.compute([Be],t=>{let{selected:i,panel:s}=t.field(Be);return!i||!s||i.from==i.to?E.none:E.set([h0.range(i.from,i.to)])}),Sd(c0,{hideOn:l0}),b0]))}function a0(n,e){return{effects:Uc(n,[Qr.of(e)])}}const Qr=R.define(),eo=R.define(),Gc=R.define(),Be=Me.define({create(){return new At(E.none,null,null)},update(n,e){if(e.docChanged){let t=n.diagnostics.map(e.changes),i=null;if(n.selected){let s=e.changes.mapPos(n.selected.from,1);i=ni(t,n.selected.diagnostic,s)||ni(t,null,s)}n=new At(t,n.panel,i)}for(let t of e.effects)t.is(Qr)?n=At.init(t.value,n.panel,e.state):t.is(eo)?n=new At(n.diagnostics,t.value?Qn.open:null,n.selected):t.is(Gc)&&(n=new At(n.diagnostics,n.panel,t.value));return n},provide:n=>[ar.from(n,e=>e.panel),O.decorations.from(n,e=>e.diagnostics)]}),h0=E.mark({class:"cm-lintRange cm-lintRange-active"});function 
c0(n,e,t){let{diagnostics:i}=n.state.field(Be),s=[],r=2e8,o=0;i.between(e-(t<0?1:0),e+(t>0?1:0),(a,h,{spec:c})=>{e>=a&&e<=h&&(a==h||(e>a||t>0)&&(eJc(n,t,!1)))}const u0=n=>{let e=n.state.field(Be,!1);(!e||!e.panel)&&n.dispatch({effects:Uc(n.state,[eo.of(!0)])});let t=Md(n,Qn.open);return t&&t.dom.querySelector(".cm-panel-lint ul").focus(),!0},Ul=n=>{let e=n.state.field(Be,!1);return!e||!e.panel?!1:(n.dispatch({effects:eo.of(!1)}),!0)},d0=n=>{let e=n.state.field(Be,!1);if(!e)return!1;let t=n.state.selection.main,i=e.diagnostics.iter(t.to+1);return!i.value&&(i=e.diagnostics.iter(0),!i.value||i.from==t.from&&i.to==t.to)?!1:(n.dispatch({selection:{anchor:i.from,head:i.to},scrollIntoView:!0}),!0)},p0=[{key:"Mod-Shift-m",run:u0},{key:"F8",run:d0}],m0=be.fromClass(class{constructor(n){this.view=n,this.timeout=-1,this.set=!0;let{delay:e}=n.state.facet(Kt);this.lintTime=Date.now()+e,this.run=this.run.bind(this),this.timeout=setTimeout(this.run,e)}run(){let n=Date.now();if(nPromise.resolve(i(this.view)))).then(i=>{let s=i.reduce((r,o)=>r.concat(o));this.view.state.doc==e.doc&&this.view.dispatch(a0(this.view.state,s))},i=>{He(this.view.state,i)})}}update(n){let e=n.state.facet(Kt);(n.docChanged||e!=n.startState.facet(Kt))&&(this.lintTime=Date.now()+e.delay,this.set||(this.set=!0,this.timeout=setTimeout(this.run,e.delay)))}force(){this.set&&(this.lintTime=Date.now(),this.run())}destroy(){clearTimeout(this.timeout)}}),Kt=D.define({combine(n){return Object.assign({sources:n.map(e=>e.source)},_t(n.map(e=>e.config),{delay:750,markerFilter:null,tooltipFilter:null}))},enables:m0});function $c(n){let e=[];if(n)e:for(let{name:t}of n){for(let i=0;ir.toLowerCase()==s.toLowerCase())){e.push(s);continue e}}e.push("")}return e}function Jc(n,e,t){var i;let s=t?$c(e.actions):[];return Je("li",{class:"cm-diagnostic cm-diagnostic-"+e.severity},Je("span",{class:"cm-diagnosticText"},e.renderMessage?e.renderMessage():e.message),(i=e.actions)===null||i===void 0?void 0:i.map((r,o)=>{let l=f=>{f.preventDefault();let u=ni(n.state.field(Be).diagnostics,e);u&&r.apply(n,u.from,u.to)},{name:a}=r,h=s[o]?a.indexOf(s[o]):-1,c=h<0?a:[a.slice(0,h),Je("u",a.slice(h,h+1)),a.slice(h+1)];return Je("button",{type:"button",class:"cm-diagnosticAction",onclick:l,onmousedown:l,"aria-label":` Action: ${a}${h<0?"":` (access key "${s[o]})"`}.`},c)}),e.source&&Je("div",{class:"cm-diagnosticSource"},e.source))}class g0 extends tt{constructor(e){super(),this.diagnostic=e}eq(e){return e.diagnostic==this.diagnostic}toDOM(){return Je("span",{class:"cm-lintPoint cm-lintPoint-"+this.diagnostic.severity})}}class Gl{constructor(e,t){this.diagnostic=t,this.id="item_"+Math.floor(Math.random()*4294967295).toString(16),this.dom=Jc(e,t,!0),this.dom.id=this.id,this.dom.setAttribute("role","option")}}class Qn{constructor(e){this.view=e,this.items=[];let t=s=>{if(s.keyCode==27)Ul(this.view),this.view.focus();else if(s.keyCode==38||s.keyCode==33)this.moveSelection((this.selectedIndex-1+this.items.length)%this.items.length);else if(s.keyCode==40||s.keyCode==34)this.moveSelection((this.selectedIndex+1)%this.items.length);else if(s.keyCode==36)this.moveSelection(0);else if(s.keyCode==35)this.moveSelection(this.items.length-1);else if(s.keyCode==13)this.view.focus();else if(s.keyCode>=65&&s.keyCode<=90&&this.selectedIndex>=0){let{diagnostic:r}=this.items[this.selectedIndex],o=$c(r.actions);for(let l=0;l{for(let r=0;rUl(this.view)},"×")),this.update()}get selectedIndex(){let e=this.view.state.field(Be).selected;if(!e)return-1;for(let t=0;t{let h=-1,c;for(let 
f=i;fi&&(this.items.splice(i,h-i),s=!0)),t&&c.diagnostic==t.diagnostic?c.dom.hasAttribute("aria-selected")||(c.dom.setAttribute("aria-selected","true"),r=c):c.dom.hasAttribute("aria-selected")&&c.dom.removeAttribute("aria-selected"),i++});i({sel:r.dom.getBoundingClientRect(),panel:this.list.getBoundingClientRect()}),write:({sel:o,panel:l})=>{o.topl.bottom&&(this.list.scrollTop+=o.bottom-l.bottom)}})):this.selectedIndex<0&&this.list.removeAttribute("aria-activedescendant"),s&&this.sync()}sync(){let e=this.list.firstChild;function t(){let i=e;e=i.nextSibling,i.remove()}for(let i of this.items)if(i.dom.parentNode==this.list){for(;e!=i.dom;)t();e=i.dom.nextSibling}else this.list.insertBefore(i.dom,e);for(;e;)t()}moveSelection(e){if(this.selectedIndex<0)return;let t=this.view.state.field(Be),i=ni(t.diagnostics,this.items[e].diagnostic);i&&this.view.dispatch({selection:{anchor:i.from,head:i.to},scrollIntoView:!0,effects:Gc.of(i)})}static open(e){return new Qn(e)}}function y0(n,e='viewBox="0 0 40 40"'){return`url('data:image/svg+xml,${encodeURIComponent(n)}')`}function Os(n){return y0(``,'width="6" height="3"')}const b0=O.baseTheme({".cm-diagnostic":{padding:"3px 6px 3px 8px",marginLeft:"-1px",display:"block",whiteSpace:"pre-wrap"},".cm-diagnostic-error":{borderLeft:"5px solid #d11"},".cm-diagnostic-warning":{borderLeft:"5px solid orange"},".cm-diagnostic-info":{borderLeft:"5px solid #999"},".cm-diagnosticAction":{font:"inherit",border:"none",padding:"2px 4px",backgroundColor:"#444",color:"white",borderRadius:"3px",marginLeft:"8px"},".cm-diagnosticSource":{fontSize:"70%",opacity:.7},".cm-lintRange":{backgroundPosition:"left bottom",backgroundRepeat:"repeat-x",paddingBottom:"0.7px"},".cm-lintRange-error":{backgroundImage:Os("#d11")},".cm-lintRange-warning":{backgroundImage:Os("orange")},".cm-lintRange-info":{backgroundImage:Os("#999")},".cm-lintRange-active":{backgroundColor:"#ffdd9980"},".cm-tooltip-lint":{padding:0,margin:0},".cm-lintPoint":{position:"relative","&:after":{content:'""',position:"absolute",bottom:0,left:"-2px",borderLeft:"3px solid transparent",borderRight:"3px solid transparent",borderBottom:"4px solid #d11"}},".cm-lintPoint-warning":{"&:after":{borderBottomColor:"orange"}},".cm-lintPoint-info":{"&:after":{borderBottomColor:"#999"}},".cm-panel.cm-panel-lint":{position:"relative","& ul":{maxHeight:"100px",overflowY:"auto","& [aria-selected]":{backgroundColor:"#ddd","& u":{textDecoration:"underline"}},"&:focus [aria-selected]":{background_fallback:"#bdf",backgroundColor:"Highlight",color_fallback:"white",color:"HighlightText"},"& u":{textDecoration:"none"},padding:0,margin:0},"& [name=close]":{position:"absolute",top:"0",right:"2px",background:"inherit",border:"none",font:"inherit",padding:0,margin:0}}}),w0=(()=>[Ld(),td(),em(),Dp(),Ku(),N.allowMultipleSelections.of(!0),pp(),Hr(Pp,{fallback:!0}),$g(),ud(),md(),qn.of([...Zg,...Zm,...hm,...Cp,...r0,...p0])])(),$l={python:()=>Pe(()=>import("./index-f8a15c0a.js"),["./index-f8a15c0a.js","./index-ae57ca19.js","./index-3370be2a.js","./index-f2292b12.css","./Blocks-f0129fcd.js","./Button-89624748.js","./Button-9b719f62.css","./Blocks-f08d137e.css","./BlockLabel-56db415e.js","./Empty-585389a4.js","./Copy-6cd42558.js","./Download-fdaaf5d4.js"],import.meta.url).then(n=>n.python()),markdown:async()=>{const[n,e]=await 
Promise.all([Pe(()=>import("./index-98c587a9.js"),["./index-98c587a9.js","./index-218a3021.js","./index-ae57ca19.js","./index-c5e2dbc1.js","./index-3370be2a.js","./index-f2292b12.css","./Blocks-f0129fcd.js","./Button-89624748.js","./Button-9b719f62.css","./Blocks-f08d137e.css","./BlockLabel-56db415e.js","./Empty-585389a4.js","./Copy-6cd42558.js","./Download-fdaaf5d4.js","./index-0644e979.js"],import.meta.url),Pe(()=>import("./frontmatter-d26451dd.js"),["./frontmatter-d26451dd.js","./yaml-95012b83.js","./index-3370be2a.js","./index-f2292b12.css","./Blocks-f0129fcd.js","./Button-89624748.js","./Button-9b719f62.css","./Blocks-f08d137e.css","./BlockLabel-56db415e.js","./Empty-585389a4.js","./Copy-6cd42558.js","./Download-fdaaf5d4.js"],import.meta.url)]);return n.markdown({extensions:[e.frontmatter]})},json:()=>Pe(()=>import("./index-da80a9a6.js"),["./index-da80a9a6.js","./index-ae57ca19.js","./index-3370be2a.js","./index-f2292b12.css","./Blocks-f0129fcd.js","./Button-89624748.js","./Button-9b719f62.css","./Blocks-f08d137e.css","./BlockLabel-56db415e.js","./Empty-585389a4.js","./Copy-6cd42558.js","./Download-fdaaf5d4.js"],import.meta.url).then(n=>n.json()),html:()=>Pe(()=>import("./index-218a3021.js"),["./index-218a3021.js","./index-ae57ca19.js","./index-c5e2dbc1.js","./index-3370be2a.js","./index-f2292b12.css","./Blocks-f0129fcd.js","./Button-89624748.js","./Button-9b719f62.css","./Blocks-f08d137e.css","./BlockLabel-56db415e.js","./Empty-585389a4.js","./Copy-6cd42558.js","./Download-fdaaf5d4.js","./index-0644e979.js"],import.meta.url).then(n=>n.html()),css:()=>Pe(()=>import("./index-c5e2dbc1.js"),["./index-c5e2dbc1.js","./index-ae57ca19.js","./index-3370be2a.js","./index-f2292b12.css","./Blocks-f0129fcd.js","./Button-89624748.js","./Button-9b719f62.css","./Blocks-f08d137e.css","./BlockLabel-56db415e.js","./Empty-585389a4.js","./Copy-6cd42558.js","./Download-fdaaf5d4.js"],import.meta.url).then(n=>n.css()),javascript:()=>Pe(()=>import("./index-0644e979.js"),["./index-0644e979.js","./index-ae57ca19.js","./index-3370be2a.js","./index-f2292b12.css","./Blocks-f0129fcd.js","./Button-89624748.js","./Button-9b719f62.css","./Blocks-f08d137e.css","./BlockLabel-56db415e.js","./Empty-585389a4.js","./Copy-6cd42558.js","./Download-fdaaf5d4.js"],import.meta.url).then(n=>n.javascript()),typescript:()=>Pe(()=>import("./index-0644e979.js"),["./index-0644e979.js","./index-ae57ca19.js","./index-3370be2a.js","./index-f2292b12.css","./Blocks-f0129fcd.js","./Button-89624748.js","./Button-9b719f62.css","./Blocks-f08d137e.css","./BlockLabel-56db415e.js","./Empty-585389a4.js","./Copy-6cd42558.js","./Download-fdaaf5d4.js"],import.meta.url).then(n=>n.javascript({typescript:!0})),yaml:()=>Pe(()=>import("./yaml-95012b83.js"),[],import.meta.url).then(n=>jt.define(n.yaml)),dockerfile:()=>Pe(()=>import("./dockerfile-d67bbd50.js"),[],import.meta.url).then(n=>jt.define(n.dockerFile)),shell:()=>Pe(()=>import("./shell-86dd1d99.js"),[],import.meta.url).then(n=>jt.define(n.shell)),r:()=>Pe(()=>import("./r-3ca97919.js"),[],import.meta.url).then(n=>jt.define(n.r))},k0={py:"python",md:"markdown",js:"javascript",ts:"typescript",sh:"shell"};async function v0(n){const e=$l[n]||$l[k0[n]]||void 0;if(e)return e()}function x0(n){let e,t,i;return{c(){e=dt("div"),t=dt("div"),se(t,"class",i="codemirror-wrapper "+n[0]+" svelte-1sc8eck"),se(e,"class","wrap svelte-1sc8eck")},m(s,r){Ce(s,e,r),Yt(e,t),n[12](t)},p(s,[r]){r&1&&i!==(i="codemirror-wrapper "+s[0]+" svelte-1sc8eck")&&se(t,"class",i)},i:vi,o:vi,d(s){s&&Ae(e),n[12](null)}}}function S0(n){let 
e=n.dom.querySelectorAll(".cm-gutterElement");if(e.length===0)return null;for(var t=0;t(y=k(),()=>y?.destroy()));function Q(M){xr[M?"unshift":"push"](()=>{g=M,t(1,g)})}return n.$$set=M=>{"classNames"in M&&t(0,i=M.classNames),"value"in M&&t(2,s=M.value),"dark_mode"in M&&t(3,r=M.dark_mode),"basic"in M&&t(4,o=M.basic),"language"in M&&t(5,l=M.language),"lines"in M&&t(6,a=M.lines),"extensions"in M&&t(7,h=M.extensions),"useTab"in M&&t(8,c=M.useTab),"readonly"in M&&t(9,f=M.readonly),"placeholder"in M&&t(10,u=M.placeholder)},n.$$.update=()=>{n.$$.dirty&32&&b(l),n.$$.dirty&2048&&G(),n.$$.dirty&4&&v(s),n.$$.dirty&64&&S()},[i,g,s,r,o,l,a,h,c,f,u,p,Q]}class Yc extends si{constructor(e){super(),ri(this,e,C0,x0,oi,{classNames:0,value:2,dark_mode:3,basic:4,language:5,lines:6,extensions:7,useTab:8,readonly:9,placeholder:10})}}function Jl(n){let e,t,i,s;return t=new ca({}),{c(){e=dt("span"),fe(t.$$.fragment),se(e,"class","check svelte-qi7jcw")},m(r,o){Ce(r,e,o),ue(t,e,null),s=!0},i(r){s||(H(t.$$.fragment,r),r&&ea(()=>{s&&(i||(i=bn(e,wn,{},!0)),i.run(1))}),s=!0)},o(r){j(t.$$.fragment,r),r&&(i||(i=bn(e,wn,{},!1)),i.run(0)),s=!1},d(r){r&&Ae(e),de(t),r&&i&&i.end()}}}function A0(n){let e,t,i,s,r,o,l;i=new ef({});let a=n[0]&&Jl();return{c(){e=dt("button"),t=dt("span"),fe(i.$$.fragment),s=pt(),a&&a.c(),se(t,"class","copy-text"),yn(t,"copied",n[0]),se(e,"title","copy"),se(e,"class","svelte-qi7jcw")},m(h,c){Ce(h,e,c),Yt(e,t),ue(i,t,null),Yt(e,s),a&&a.m(e,null),r=!0,o||(l=Zl(e,"click",n[1]),o=!0)},p(h,[c]){(!r||c&1)&&yn(t,"copied",h[0]),h[0]?a?c&1&&H(a,1):(a=Jl(),a.c(),H(a,1),a.m(e,null)):a&&(_n(),j(a,1,1,()=>{a=null}),Vn())},i(h){r||(H(i.$$.fragment,h),H(a),r=!0)},o(h){j(i.$$.fragment,h),j(a),r=!1},d(h){h&&Ae(e),de(i),a&&a.d(),o=!1,l()}}}function M0(n,e,t){let i=!1,{value:s}=e,r;function o(){t(0,i=!0),r&&clearTimeout(r),r=setTimeout(()=>{t(0,i=!1)},2e3)}async function l(){"clipboard"in navigator&&(await navigator.clipboard.writeText(s),o())}return Ql(()=>{r&&clearTimeout(r)}),n.$$set=a=>{"value"in a&&t(2,s=a.value)},[i,l,s]}class D0 extends si{constructor(e){super(),ri(this,e,M0,A0,oi,{value:2})}}function Yl(n){let e,t,i,s;return t=new ca({}),{c(){e=dt("span"),fe(t.$$.fragment),se(e,"class","check svelte-14d303a")},m(r,o){Ce(r,e,o),ue(t,e,null),s=!0},i(r){s||(H(t.$$.fragment,r),r&&ea(()=>{s&&(i||(i=bn(e,wn,{},!0)),i.run(1))}),s=!0)},o(r){j(t.$$.fragment,r),r&&(i||(i=bn(e,wn,{},!1)),i.run(0)),s=!1},d(r){r&&Ae(e),de(t),r&&i&&i.end()}}}function T0(n){let e,t,i,s,r,o,l;t=new tf({});let a=n[0]&&Yl();return{c(){e=dt("a"),fe(t.$$.fragment),i=pt(),a&&a.c(),se(e,"download",s="file."+n[2]),se(e,"href",n[1]),se(e,"class","svelte-14d303a"),yn(e,"copied",n[0])},m(h,c){Ce(h,e,c),ue(t,e,null),Yt(e,i),a&&a.m(e,null),r=!0,o||(l=Zl(e,"click",n[3]),o=!0)},p(h,[c]){h[0]?a?c&1&&H(a,1):(a=Yl(),a.c(),H(a,1),a.m(e,null)):a&&(_n(),j(a,1,1,()=>{a=null}),Vn()),(!r||c&4&&s!==(s="file."+h[2]))&&se(e,"download",s),(!r||c&2)&&se(e,"href",h[1]),(!r||c&1)&&yn(e,"copied",h[0])},i(h){r||(H(t.$$.fragment,h),H(a),r=!0)},o(h){j(t.$$.fragment,h),j(a),r=!1},d(h){h&&Ae(e),de(t),a&&a.d(),o=!1,l()}}}function O0(n){return{py:"py",python:"py",md:"md",markdown:"md",json:"json",html:"html",css:"css",js:"js",javascript:"js",ts:"ts",typescript:"ts",yaml:"yaml",yml:"yml",dockerfile:"dockerfile",sh:"sh",shell:"sh",r:"r"}[n]||"txt"}function B0(n,e,t){let i,s,{value:r}=e,{language:o}=e,l=!1,a;function h(){t(0,l=!0),a&&clearTimeout(a),a=setTimeout(()=>{t(0,l=!1)},2e3)}return Ql(()=>{a&&clearTimeout(a)}),n.$$set=c=>{"value"in c&&t(4,r=c.value),"language"in 
c&&t(5,o=c.language)},n.$$.update=()=>{n.$$.dirty&32&&t(2,i=O0(o)),n.$$.dirty&16&&t(1,s=URL.createObjectURL(new Blob([r])))},[l,s,i,h,r,o]}class P0 extends si{constructor(e){super(),ri(this,e,B0,T0,oi,{value:4,language:5})}}function E0(n){let e,t,i,s,r;return t=new P0({props:{value:n[0],language:n[1]}}),s=new D0({props:{value:n[0]}}),{c(){e=dt("div"),fe(t.$$.fragment),i=pt(),fe(s.$$.fragment),se(e,"class","svelte-1yin446")},m(o,l){Ce(o,e,l),ue(t,e,null),Yt(e,i),ue(s,e,null),r=!0},p(o,[l]){const a={};l&1&&(a.value=o[0]),l&2&&(a.language=o[1]),t.$set(a);const h={};l&1&&(h.value=o[0]),s.$set(h)},i(o){r||(H(t.$$.fragment,o),H(s.$$.fragment,o),r=!0)},o(o){j(t.$$.fragment,o),j(s.$$.fragment,o),r=!1},d(o){o&&Ae(e),de(t),de(s)}}}function R0(n,e,t){let{value:i}=e,{language:s}=e;return n.$$set=r=>{"value"in r&&t(0,i=r.value),"language"in r&&t(1,s=r.language)},[i,s]}class L0 extends si{constructor(e){super(),ri(this,e,R0,E0,oi,{value:0,language:1})}}function I0(n){let e,t;return e=new aa({props:{variant:"solid",padding:!1,elem_id:n[3],elem_classes:n[4],visible:n[5],$$slots:{default:[_0]},$$scope:{ctx:n}}}),{c(){fe(e.$$.fragment)},m(i,s){ue(e,i,s),t=!0},p(i,s){const r={};s&8&&(r.elem_id=i[3]),s&16&&(r.elem_classes=i[4]),s&32&&(r.visible=i[5]),s&131975&&(r.$$scope={dirty:s,ctx:i}),e.$set(r)},i(i){t||(H(e.$$.fragment,i),t=!0)},o(i){j(e.$$.fragment,i),t=!1},d(i){de(e,i)}}}function N0(n){let e,t;return e=new aa({props:{variant:"solid",padding:!1,elem_id:n[3],elem_classes:n[4],visible:n[5],$$slots:{default:[W0]},$$scope:{ctx:n}}}),{c(){fe(e.$$.fragment)},m(i,s){ue(e,i,s),t=!0},p(i,s){const r={};s&8&&(r.elem_id=i[3]),s&16&&(r.elem_classes=i[4]),s&32&&(r.visible=i[5]),s&131975&&(r.$$scope={dirty:s,ctx:i}),e.$set(r)},i(i){t||(H(e.$$.fragment,i),t=!0)},o(i){j(e.$$.fragment,i),t=!1},d(i){de(e,i)}}}function _0(n){let e,t,i,s,r,o,l;const a=[n[9]];let h={};for(let u=0;usa(r,"value",c)),{c(){fe(e.$$.fragment),t=pt(),fe(i.$$.fragment),s=pt(),fe(r.$$.fragment)},m(u,d){ue(e,u,d),Ce(u,t,d),ue(i,u,d),Ce(u,s,d),ue(r,u,d),l=!0},p(u,d){const p=d&512?ra(a,[oa(u[9])]):{};e.$set(p);const g={};d&256&&(g.show_label=u[8]),d&128&&(g.label=u[7]),i.$set(g);const y={};d&2&&(y.language=u[1]),d&4&&(y.lines=u[2]),!o&&d&1&&(o=!0,y.value=u[0],la(()=>o=!1)),r.$set(y)},i(u){l||(H(e.$$.fragment,u),H(i.$$.fragment,u),H(r.$$.fragment,u),l=!0)},o(u){j(e.$$.fragment,u),j(i.$$.fragment,u),j(r.$$.fragment,u),l=!1},d(u){u&&(Ae(t),Ae(s)),de(e,u),de(i,u),de(r,u)}}}function V0(n){let e,t,i,s,r;e=new L0({props:{language:n[1],value:n[0]}});function o(a){n[13](a)}let l={language:n[1],lines:n[2],dark_mode:n[10],readonly:!0};return n[0]!==void 0&&(l.value=n[0]),i=new Yc({props:l}),xr.push(()=>sa(i,"value",o)),{c(){fe(e.$$.fragment),t=pt(),fe(i.$$.fragment)},m(a,h){ue(e,a,h),Ce(a,t,h),ue(i,a,h),r=!0},p(a,h){const c={};h&2&&(c.language=a[1]),h&1&&(c.value=a[0]),e.$set(c);const f={};h&2&&(f.language=a[1]),h&4&&(f.lines=a[2]),!s&&h&1&&(s=!0,f.value=a[0],la(()=>s=!1)),i.$set(f)},i(a){r||(H(e.$$.fragment,a),H(i.$$.fragment,a),r=!0)},o(a){j(e.$$.fragment,a),j(i.$$.fragment,a),r=!1},d(a){a&&Ae(t),de(e,a),de(i,a)}}}function F0(n){let e,t;return e=new Qc({props:{unpadded_box:!0,size:"large",$$slots:{default:[H0]},$$scope:{ctx:n}}}),{c(){fe(e.$$.fragment)},m(i,s){ue(e,i,s),t=!0},p(i,s){const r={};s&131072&&(r.$$scope={dirty:s,ctx:i}),e.$set(r)},i(i){t||(H(e.$$.fragment,i),t=!0)},o(i){j(e.$$.fragment,i),t=!1},d(i){de(e,i)}}}function H0(n){let e,t;return e=new 
Sr({}),{c(){fe(e.$$.fragment)},m(i,s){ue(e,i,s),t=!0},i(i){t||(H(e.$$.fragment,i),t=!0)},o(i){j(e.$$.fragment,i),t=!1},d(i){de(e,i)}}}function W0(n){let e,t,i,s,r,o,l,a;const h=[n[9]];let c={};for(let p=0;p{u[v]=null}),Vn(),o=u[r],o?o.p(p,g):(o=u[r]=f[r](p),o.c()),H(o,1),o.m(l.parentNode,l))},i(p){a||(H(e.$$.fragment,p),H(i.$$.fragment,p),H(o),a=!0)},o(p){j(e.$$.fragment,p),j(i.$$.fragment,p),j(o),a=!1},d(p){p&&(Ae(t),Ae(s),Ae(l)),de(e,p),de(i,p),u[r].d(p)}}}function z0(n){let e,t,i,s;const r=[N0,I0],o=[];function l(a,h){return a[6]==="static"?0:1}return e=l(n),t=o[e]=r[e](n),{c(){t.c(),i=ta()},m(a,h){o[e].m(a,h),Ce(a,i,h),s=!0},p(a,[h]){let c=e;e=l(a),e===c?o[e].p(a,h):(_n(),j(o[c],1,1,()=>{o[c]=null}),Vn(),t=o[e],t?t.p(a,h):(t=o[e]=r[e](a),t.c()),H(t,1),t.m(i.parentNode,i))},i(a){s||(H(t),s=!0)},o(a){j(t),s=!1},d(a){a&&Ae(i),o[e].d(a)}}}function q0(n,e,t){const i=Xl();let{value:s=""}=e,{value_is_output:r=!1}=e,{language:o=""}=e,{lines:l=5}=e,{target:a}=e,{elem_id:h=""}=e,{elem_classes:c=[]}=e,{visible:f=!0}=e,{mode:u}=e,{label:d="Code"}=e,{show_label:p=!0}=e,{loading_status:g}=e,y=a.classList.contains("dark");function b(){i("change",s),r||i("input")}Zc(()=>{t(11,r=!1)});function v(k){s=k,t(0,s)}function S(k){s=k,t(0,s)}return n.$$set=k=>{"value"in k&&t(0,s=k.value),"value_is_output"in k&&t(11,r=k.value_is_output),"language"in k&&t(1,o=k.language),"lines"in k&&t(2,l=k.lines),"target"in k&&t(12,a=k.target),"elem_id"in k&&t(3,h=k.elem_id),"elem_classes"in k&&t(4,c=k.elem_classes),"visible"in k&&t(5,f=k.visible),"mode"in k&&t(6,u=k.mode),"label"in k&&t(7,d=k.label),"show_label"in k&&t(8,p=k.show_label),"loading_status"in k&&t(9,g=k.loading_status)},n.$$.update=()=>{n.$$.dirty&1&&b()},[s,o,l,h,c,f,u,d,p,g,y,r,a,v,S]}class j0 extends si{constructor(e){super(),ri(this,e,q0,z0,oi,{value:0,value_is_output:11,language:1,lines:2,target:12,elem_id:3,elem_classes:4,visible:5,mode:6,label:7,show_label:8,loading_status:9})}}const K0=j0,U0=["static","dynamic"],fy=Object.freeze(Object.defineProperty({__proto__:null,Component:K0,modes:U0},Symbol.toStringTag,{value:"Module"}));export{np as A,hy as B,bg as C,Id as D,w as E,fy as F,ee as I,ur as L,Lr as N,Rh as P,jt as S,z as T,ay as a,sy as b,xe as c,ry as d,L as e,gp as f,Ge as g,pe as h,op as i,Vi as j,qn as k,Ie as l,Nh as m,Dt as n,mp as o,iy as p,Vh as q,ti as r,Zd as s,m as t,Lp as u,O as v,ly as w,ty as x,cy as y,oy as z}; -//# sourceMappingURL=index-f90e1963.js.map diff --git a/spaces/Datasculptor/DescriptionGPT/detic/data/custom_dataset_mapper.py b/spaces/Datasculptor/DescriptionGPT/detic/data/custom_dataset_mapper.py deleted file mode 100644 index c7727dded3f93f5eeafdcd72e257197e3fdc817b..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/detic/data/custom_dataset_mapper.py +++ /dev/null @@ -1,280 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -import copy -import logging -import numpy as np -from typing import List, Optional, Union -import torch -import pycocotools.mask as mask_util - -from detectron2.config import configurable - -from detectron2.data import detection_utils as utils -from detectron2.data.detection_utils import transform_keypoint_annotations -from detectron2.data import transforms as T -from detectron2.data.dataset_mapper import DatasetMapper -from detectron2.structures import Boxes, BoxMode, Instances -from detectron2.structures import Keypoints, PolygonMasks, BitMasks -from fvcore.transforms.transform import TransformList -from .custom_build_augmentation import build_custom_augmentation -from .tar_dataset import DiskTarDataset - -__all__ = ["CustomDatasetMapper"] - -class CustomDatasetMapper(DatasetMapper): - @configurable - def __init__(self, is_train: bool, - with_ann_type=False, - dataset_ann=[], - use_diff_bs_size=False, - dataset_augs=[], - is_debug=False, - use_tar_dataset=False, - tarfile_path='', - tar_index_dir='', - **kwargs): - """ - add image labels - """ - self.with_ann_type = with_ann_type - self.dataset_ann = dataset_ann - self.use_diff_bs_size = use_diff_bs_size - if self.use_diff_bs_size and is_train: - self.dataset_augs = [T.AugmentationList(x) for x in dataset_augs] - self.is_debug = is_debug - self.use_tar_dataset = use_tar_dataset - if self.use_tar_dataset: - print('Using tar dataset') - self.tar_dataset = DiskTarDataset(tarfile_path, tar_index_dir) - super().__init__(is_train, **kwargs) - - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - ret = super().from_config(cfg, is_train) - ret.update({ - 'with_ann_type': cfg.WITH_IMAGE_LABELS, - 'dataset_ann': cfg.DATALOADER.DATASET_ANN, - 'use_diff_bs_size': cfg.DATALOADER.USE_DIFF_BS_SIZE, - 'is_debug': cfg.IS_DEBUG, - 'use_tar_dataset': cfg.DATALOADER.USE_TAR_DATASET, - 'tarfile_path': cfg.DATALOADER.TARFILE_PATH, - 'tar_index_dir': cfg.DATALOADER.TAR_INDEX_DIR, - }) - if ret['use_diff_bs_size'] and is_train: - if cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop': - dataset_scales = cfg.DATALOADER.DATASET_INPUT_SCALE - dataset_sizes = cfg.DATALOADER.DATASET_INPUT_SIZE - ret['dataset_augs'] = [ - build_custom_augmentation(cfg, True, scale, size) \ - for scale, size in zip(dataset_scales, dataset_sizes)] - else: - assert cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge' - min_sizes = cfg.DATALOADER.DATASET_MIN_SIZES - max_sizes = cfg.DATALOADER.DATASET_MAX_SIZES - ret['dataset_augs'] = [ - build_custom_augmentation( - cfg, True, min_size=mi, max_size=ma) \ - for mi, ma in zip(min_sizes, max_sizes)] - else: - ret['dataset_augs'] = [] - - return ret - - def __call__(self, dataset_dict): - """ - include image labels - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - # USER: Write your own image loading if it's not from a file - if 'file_name' in dataset_dict: - ori_image = utils.read_image( - dataset_dict["file_name"], format=self.image_format) - else: - ori_image, _, _ = self.tar_dataset[dataset_dict["tar_index"]] - ori_image = utils._apply_exif_orientation(ori_image) - ori_image = utils.convert_PIL_to_numpy(ori_image, self.image_format) - utils.check_image_size(dataset_dict, ori_image) - - # USER: Remove if you don't do semantic/panoptic segmentation. 
- if "sem_seg_file_name" in dataset_dict: - sem_seg_gt = utils.read_image( - dataset_dict.pop("sem_seg_file_name"), "L").squeeze(2) - else: - sem_seg_gt = None - - if self.is_debug: - dataset_dict['dataset_source'] = 0 - - not_full_labeled = 'dataset_source' in dataset_dict and \ - self.with_ann_type and \ - self.dataset_ann[dataset_dict['dataset_source']] != 'box' - - aug_input = T.AugInput(copy.deepcopy(ori_image), sem_seg=sem_seg_gt) - if self.use_diff_bs_size and self.is_train: - transforms = \ - self.dataset_augs[dataset_dict['dataset_source']](aug_input) - else: - transforms = self.augmentations(aug_input) - image, sem_seg_gt = aug_input.image, aug_input.sem_seg - - image_shape = image.shape[:2] # h, w - dataset_dict["image"] = torch.as_tensor( - np.ascontiguousarray(image.transpose(2, 0, 1))) - - if sem_seg_gt is not None: - dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long")) - - # USER: Remove if you don't use pre-computed proposals. - # Most users would not need this feature. - if self.proposal_topk is not None: - utils.transform_proposals( - dataset_dict, image_shape, transforms, - proposal_topk=self.proposal_topk - ) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. - dataset_dict.pop("annotations", None) - dataset_dict.pop("sem_seg_file_name", None) - return dataset_dict - - if "annotations" in dataset_dict: - # USER: Modify this if you want to keep them for some reason. - for anno in dataset_dict["annotations"]: - if not self.use_instance_mask: - anno.pop("segmentation", None) - if not self.use_keypoint: - anno.pop("keypoints", None) - - # USER: Implement additional transformations if you have other types of data - all_annos = [ - (utils.transform_instance_annotations( - obj, transforms, image_shape, - keypoint_hflip_indices=self.keypoint_hflip_indices, - ), obj.get("iscrowd", 0)) - for obj in dataset_dict.pop("annotations") - ] - annos = [ann[0] for ann in all_annos if ann[1] == 0] - instances = utils.annotations_to_instances( - annos, image_shape, mask_format=self.instance_mask_format - ) - - del all_annos - if self.recompute_boxes: - instances.gt_boxes = instances.gt_masks.get_bounding_boxes() - dataset_dict["instances"] = utils.filter_empty_instances(instances) - if self.with_ann_type: - dataset_dict["pos_category_ids"] = dataset_dict.get( - 'pos_category_ids', []) - dataset_dict["ann_type"] = \ - self.dataset_ann[dataset_dict['dataset_source']] - if self.is_debug and (('pos_category_ids' not in dataset_dict) or \ - (dataset_dict['pos_category_ids'] == [])): - dataset_dict['pos_category_ids'] = [x for x in sorted(set( - dataset_dict['instances'].gt_classes.tolist() - ))] - return dataset_dict - -# DETR augmentation -def build_transform_gen(cfg, is_train): - """ - """ - if is_train: - min_size = cfg.INPUT.MIN_SIZE_TRAIN - max_size = cfg.INPUT.MAX_SIZE_TRAIN - sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING - else: - min_size = cfg.INPUT.MIN_SIZE_TEST - max_size = cfg.INPUT.MAX_SIZE_TEST - sample_style = "choice" - if sample_style == "range": - assert len(min_size) == 2, "more than 2 ({}) min_size(s) are provided for ranges".format(len(min_size)) - - logger = logging.getLogger(__name__) - tfm_gens = [] - if is_train: - tfm_gens.append(T.RandomFlip()) - tfm_gens.append(T.ResizeShortestEdge(min_size, max_size, sample_style)) - if is_train: - logger.info("TransformGens used in training: " + str(tfm_gens)) - return tfm_gens - - -class DetrDatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset 
format, - and map it into a format used by DETR. - The callable currently does the following: - 1. Read the image from "file_name" - 2. Applies geometric transforms to the image and annotation - 3. Find and applies suitable cropping to the image and annotation - 4. Prepare image and annotation to Tensors - """ - - def __init__(self, cfg, is_train=True): - if cfg.INPUT.CROP.ENABLED and is_train: - self.crop_gen = [ - T.ResizeShortestEdge([400, 500, 600], sample_style="choice"), - T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE), - ] - else: - self.crop_gen = None - - self.mask_on = cfg.MODEL.MASK_ON - self.tfm_gens = build_transform_gen(cfg, is_train) - logging.getLogger(__name__).info( - "Full TransformGens used in training: {}, crop: {}".format(str(self.tfm_gens), str(self.crop_gen)) - ) - - self.img_format = cfg.INPUT.FORMAT - self.is_train = is_train - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - image = utils.read_image(dataset_dict["file_name"], format=self.img_format) - utils.check_image_size(dataset_dict, image) - - if self.crop_gen is None: - image, transforms = T.apply_transform_gens(self.tfm_gens, image) - else: - if np.random.rand() > 0.5: - image, transforms = T.apply_transform_gens(self.tfm_gens, image) - else: - image, transforms = T.apply_transform_gens( - self.tfm_gens[:-1] + self.crop_gen + self.tfm_gens[-1:], image - ) - - image_shape = image.shape[:2] # h, w - - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. - dataset_dict.pop("annotations", None) - return dataset_dict - - if "annotations" in dataset_dict: - # USER: Modify this if you want to keep them for some reason. 
- for anno in dataset_dict["annotations"]: - if not self.mask_on: - anno.pop("segmentation", None) - anno.pop("keypoints", None) - - # USER: Implement additional transformations if you have other types of data - annos = [ - utils.transform_instance_annotations(obj, transforms, image_shape) - for obj in dataset_dict.pop("annotations") - if obj.get("iscrowd", 0) == 0 - ] - instances = utils.annotations_to_instances(annos, image_shape) - dataset_dict["instances"] = utils.filter_empty_instances(instances) - return dataset_dict \ No newline at end of file diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/models/__init__.py b/spaces/Dorado607/ChuanhuChatGPT/modules/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/psp.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/psp.py deleted file mode 100644 index 4fa715ae86a6280a7cdb8640ef9192608a5b7e30..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/psp.py +++ /dev/null @@ -1,109 +0,0 @@ -from pti.pti_models.e4e.stylegan2.model import Generator -from pti.pti_models.e4e.encoders import psp_encoders -from torch import nn -import torch -import matplotlib -from pti.pti_configs import paths_config -matplotlib.use('Agg') - - -def get_keys(d, name): - if 'state_dict' in d: - d = d['state_dict'] - d_filt = {k[len(name) + 1:]: v for k, v in d.items() - if k[:len(name)] == name} - return d_filt - - -class pSp(nn.Module): - - def __init__(self, opts): - super(pSp, self).__init__() - self.opts = opts - # Define architecture - self.encoder = self.set_encoder() - self.decoder = Generator( - opts.stylegan_size, 512, 8, channel_multiplier=2) - self.face_pool = torch.nn.AdaptiveAvgPool2d((256, 256 // 2)) - # Load weights if needed - self.load_weights() - - def set_encoder(self): - if self.opts.encoder_type == 'GradualStyleEncoder': - encoder = psp_encoders.GradualStyleEncoder(50, 'ir_se', self.opts) - elif self.opts.encoder_type == 'Encoder4Editing': - encoder = psp_encoders.Encoder4Editing(50, 'ir_se', self.opts) - elif self.opts.encoder_type == 'SingleStyleCodeEncoder': - encoder = psp_encoders.BackboneEncoderUsingLastLayerIntoW( - 50, 'ir_se', self.opts) - else: - raise Exception('{} is not a valid encoders'.format( - self.opts.encoder_type)) - return encoder - - def load_weights(self): - if self.opts.checkpoint_path is not None: - print('Loading e4e over the pSp framework from checkpoint: {}'.format( - self.opts.checkpoint_path)) - ckpt = torch.load(self.opts.checkpoint_path, map_location='cpu') - self.encoder.load_state_dict( - get_keys(ckpt, 'encoder'), strict=True) - self.decoder.load_state_dict( - get_keys(ckpt, 'decoder'), strict=True) - self.__load_latent_avg(ckpt) - else: - print('Loading encoders weights from irse50!') - encoder_ckpt = torch.load(model_paths['ir_se50']) - self.encoder.load_state_dict(encoder_ckpt, strict=False) - print('Loading decoder weights from pretrained!') - ckpt = torch.load(self.opts.stylegan_weights) - self.decoder.load_state_dict(ckpt['g_ema'], strict=False) - self.__load_latent_avg(ckpt, repeat=self.encoder.style_count) - - def forward(self, x, resize=True, latent_mask=None, input_code=False, randomize_noise=True, - inject_latent=None, return_latents=False, alpha=None): - if input_code: - codes = x - else: - codes = self.encoder(x) - # normalize with respect to the center of an 
average face - if self.opts.start_from_latent_avg: - if codes.ndim == 2: - codes = codes + \ - self.latent_avg.repeat(codes.shape[0], 1, 1)[:, 0, :] - else: - codes = codes + \ - self.latent_avg.repeat(codes.shape[0], 1, 1) - - if latent_mask is not None: - for i in latent_mask: - if inject_latent is not None: - if alpha is not None: - codes[:, i] = alpha * inject_latent[:, i] + \ - (1 - alpha) * codes[:, i] - else: - codes[:, i] = inject_latent[:, i] - else: - codes[:, i] = 0 - - input_is_latent = not input_code - images, result_latent = self.decoder([codes], - input_is_latent=input_is_latent, - randomize_noise=randomize_noise, - return_latents=return_latents) - - if resize: - images = self.face_pool(images) - - if return_latents: - return images, result_latent - else: - return images - - def __load_latent_avg(self, ckpt, repeat=None): - if 'latent_avg' in ckpt: - self.latent_avg = ckpt['latent_avg'].to(self.opts.device) - if repeat is not None: - self.latent_avg = self.latent_avg.repeat(repeat, 1) - else: - self.latent_avg = None diff --git a/spaces/ECCV2022/bytetrack/tutorials/transtrack/save_track.py b/spaces/ECCV2022/bytetrack/tutorials/transtrack/save_track.py deleted file mode 100644 index 7a0517c8620d2868b056b7b84c3e5c41713d06f3..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tutorials/transtrack/save_track.py +++ /dev/null @@ -1,52 +0,0 @@ -""" -Copyright (c) https://github.com/xingyizhou/CenterTrack -Modified by Peize Sun, Rufeng Zhang -""" -# coding: utf-8 -import os -import json -import logging -from collections import defaultdict - - -def save_track(results, out_root, video_to_images, video_names, data_split='val'): - assert out_root is not None - out_dir = os.path.join(out_root, data_split) - if not os.path.exists(out_dir): - os.mkdir(out_dir) - - # save json. - # json_path = os.path.join(out_dir, "track_results.json") - # with open(json_path, "w") as f: - # f.write(json.dumps(results)) - # f.flush() - - # save it in standard format. 
- track_dir = os.path.join(out_dir, "tracks") - if not os.path.exists(track_dir): - os.mkdir(track_dir) - for video_id in video_to_images.keys(): - video_infos = video_to_images[video_id] - video_name = video_names[video_id] - file_path = os.path.join(track_dir, "{}.txt".format(video_name)) - f = open(file_path, "w") - tracks = defaultdict(list) - for video_info in video_infos: - image_id, frame_id = video_info["image_id"], video_info["frame_id"] - result = results[image_id] - for item in result: - if not ("tracking_id" in item): - raise NotImplementedError - tracking_id = item["tracking_id"] - bbox = item["bbox"] - bbox = [bbox[0], bbox[1], bbox[2], bbox[3], item['score'], item['active']] - tracks[tracking_id].append([frame_id] + bbox) - - rename_track_id = 0 - for track_id in sorted(tracks): - rename_track_id += 1 - for t in tracks[track_id]: - if t[6] > 0: - f.write("{},{},{:.2f},{:.2f},{:.2f},{:.2f},-1,-1,-1,-1\n".format( - t[0], rename_track_id, t[1], t[2], t[3] - t[1], t[4] - t[2])) - f.close() diff --git a/spaces/Eli-chan/Test03/README.md b/spaces/Eli-chan/Test03/README.md deleted file mode 100644 index 459fc1f123ad346e6636ab69b130d8ffc9126250..0000000000000000000000000000000000000000 --- a/spaces/Eli-chan/Test03/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Test03 -emoji: 💻 -colorFrom: gray -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EmilyBrat/bratty-space-needs-correction/Dockerfile b/spaces/EmilyBrat/bratty-space-needs-correction/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/EmilyBrat/bratty-space-needs-correction/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/Felladrin/MiniSearch/src/modules/transformers.ts b/spaces/Felladrin/MiniSearch/src/modules/transformers.ts deleted file mode 100644 index 9656fe8e1865129e88ac7d89b17dd7fb759676d2..0000000000000000000000000000000000000000 --- a/spaces/Felladrin/MiniSearch/src/modules/transformers.ts +++ /dev/null @@ -1,59 +0,0 @@ -import { pipeline, env } from "@xenova/transformers"; -import ortWasmUrl from "@xenova/transformers/dist/ort-wasm.wasm?url"; -import ortWasmThreadedUrl from "@xenova/transformers/dist/ort-wasm-threaded.wasm?url"; -import ortWasmSimdUrl from "@xenova/transformers/dist/ort-wasm-simd.wasm?url"; -import ortWasmSimdThreadedUrl from "@xenova/transformers/dist/ort-wasm-simd-threaded.wasm?url"; - -env.backends.onnx.wasm.wasmPaths = { - "ort-wasm.wasm": ortWasmUrl, - "ort-wasm-threaded.wasm": ortWasmThreadedUrl, - "ort-wasm-simd.wasm": ortWasmSimdUrl, - "ort-wasm-simd-threaded.wasm": ortWasmSimdThreadedUrl, -}; - -export async function runTextToTextGenerationPipeline< - T extends string | string[], ->(params: { - handleModelLoadingProgress?: (event: { - file: string; - progress: number; - }) => void; - textToTextGenerationModel: string; - quantized: boolean; - input: T; -}): Promise { - const generator = await pipeline( - "text2text-generation", - params.textToTextGenerationModel, - { - quantized: params.quantized, - progress_callback: - params.handleModelLoadingProgress ?? 
- ((event: { file: string; progress: number }) => { - self.postMessage({ - type: "model-loading-progress", - payload: event, - }); - }), - }, - ); - - const responses = await generator(params.input, { - min_length: 32, - max_new_tokens: 512, - do_sample: true, - no_repeat_ngram_size: 2, - num_beams: 3, - }); - - await generator.dispose(); - - if (Array.isArray(params.input)) { - return responses.map( - ({ generated_text }: { generated_text: string }) => generated_text, - ); - } - - const [response] = responses; - return response.generated_text; -} diff --git a/spaces/Ferion/image-matting-app/ppmatting/transforms/__init__.py b/spaces/Ferion/image-matting-app/ppmatting/transforms/__init__.py deleted file mode 100644 index 7986cdd642998fb0638a81c9ea22615faf8bad0b..0000000000000000000000000000000000000000 --- a/spaces/Ferion/image-matting-app/ppmatting/transforms/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .transforms import * diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/monotonic_align/__init__.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/monotonic_align/__init__.py deleted file mode 100644 index e97eecc595dd3bd97d0104ec62799e2e5efea57c..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/GAS17/Dream-awAI-Image-Upscaling/app.py b/spaces/GAS17/Dream-awAI-Image-Upscaling/app.py deleted file mode 100644 index b49230389e95fc56d8c1fa670d2a21d038c3cb87..0000000000000000000000000000000000000000 --- a/spaces/GAS17/Dream-awAI-Image-Upscaling/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import numpy as np -import cv2 -import onnxruntime -import gradio as gr - - -def pre_process(img: np.array) -> np.array: - # H, W, C -> C, H, W - img = np.transpose(img[:, :, 0:3], (2, 0, 1)) - # C, H, W -> 1, C, H, W - img = np.expand_dims(img, axis=0).astype(np.float32) - return img - - -def post_process(img: np.array) -> np.array: - # 1, C, H, W -> C, H, W - img = np.squeeze(img) - # C, H, W -> H, W, C - img = np.transpose(img, (1, 2, 0))[:, :, ::-1].astype(np.uint8) - return img - - -def inference(model_path: str, img_array: np.array) -> np.array: - options = onnxruntime.SessionOptions() - options.intra_op_num_threads = 1 - options.inter_op_num_threads = 1 - ort_session = onnxruntime.InferenceSession(model_path, options) - ort_inputs = {ort_session.get_inputs()[0].name: img_array} - ort_outs = ort_session.run(None, ort_inputs) - - return ort_outs[0] - - -def convert_pil_to_cv2(image): - # pil_image = image.convert("RGB") - open_cv_image = np.array(image) - # RGB to BGR - open_cv_image = open_cv_image[:, :, ::-1].copy() - return open_cv_image - - -def upscale(image, model): - model_path = f"models/{model}.ort" - img = convert_pil_to_cv2(image) - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - if img.shape[2] == 4: - alpha = img[:, :, 3] # 
GRAY - alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2BGR) # BGR - alpha_output = post_process(inference(model_path, pre_process(alpha))) # BGR - alpha_output = cv2.cvtColor(alpha_output, cv2.COLOR_BGR2GRAY) # GRAY - - img = img[:, :, 0:3] # BGR - image_output = post_process(inference(model_path, pre_process(img))) # BGR - image_output = cv2.cvtColor(image_output, cv2.COLOR_BGR2BGRA) # BGRA - image_output[:, :, 3] = alpha_output - - elif img.shape[2] == 3: - image_output = post_process(inference(model_path, pre_process(img))) # BGR - - return image_output - - -css = ".output-image, .input-image, .image-preview {height: 480px !important} " -model_choices = ["modelx2", "modelx2 25 JXL", "modelx4", "minecraft_modelx4"] - -gr.Interface( - fn=upscale, - inputs=[ - gr.inputs.Image(type="pil", label="Input Image"), - gr.inputs.Radio( - model_choices, - type="value", - default=None, - label="Elegir Upscaler", - optional=False, - ), - ], - outputs="image", - title="Dream awAI | image upscaling ✨💫", - description="", - allow_flagging="never", - css=css, -).launch() diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/README.md b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/README.md deleted file mode 100644 index 3aefc423c9166783f309221b0cb1480bab8a61d5..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/README.md +++ /dev/null @@ -1,67 +0,0 @@ -# Tasks - -### Descriptions - -This folder contains a total of 10 goal-conditioned (language or image) and 8 demo-conditioned (original TransporterNets) tasks. 8 out of the 10 goal-conditioned tasks contain two splits: **seen** and **unseen**. The **full** version is a union of both **seen** and **unseen** attributes made specifically for multi-attr training. Sequential tasks that involve following instructions in a specific order are indicated by **seq** in their names. 
- -See [__init__.py](__init__.py) for the full list of demo-conditioned and goal-conditioned (language or image) tasks: - -```python -# demo conditioned -'align-box-corner': AlignBoxCorner, -'assembling-kits': AssemblingKits, -'assembling-kits-easy': AssemblingKitsEasy, -'block-insertion': BlockInsertion, -'block-insertion-easy': BlockInsertionEasy, -'block-insertion-nofixture': BlockInsertionNoFixture, -'block-insertion-sixdof': BlockInsertionSixDof, -'block-insertion-translation': BlockInsertionTranslation, -'manipulating-rope': ManipulatingRope, -'packing-boxes': PackingBoxes, -'palletizing-boxes': PalletizingBoxes, -'place-red-in-green': PlaceRedInGreen, -'stack-block-pyramid': StackBlockPyramid, -'sweeping-piles': SweepingPiles, -'towers-of-hanoi': TowersOfHanoi, - -# goal conditioned -'align-rope': AlignRope, -'assembling-kits-seq-seen-colors': AssemblingKitsSeqSeenColors, -'assembling-kits-seq-unseen-colors': AssemblingKitsSeqUnseenColors, -'assembling-kits-seq-full': AssemblingKitsSeqFull, -'packing-shapes': PackingShapes, -'packing-boxes-pairs-seen-colors': PackingBoxesPairsSeenColors, -'packing-boxes-pairs-unseen-colors': PackingBoxesPairsUnseenColors, -'packing-boxes-pairs-full': PackingBoxesPairsFull, -'packing-seen-google-objects-seq': PackingSeenGoogleObjectsSeq, -'packing-unseen-google-objects-seq': PackingUnseenGoogleObjectsSeq, -'packing-seen-google-objects-group': PackingSeenGoogleObjectsGroup, -'packing-unseen-google-objects-group': PackingUnseenGoogleObjectsGroup, -'put-block-in-bowl-seen-colors': PutBlockInBowlSeenColors, -'put-block-in-bowl-unseen-colors': PutBlockInBowlUnseenColors, -'put-block-in-bowl-full': PutBlockInBowlFull, -'stack-block-pyramid-seq-seen-colors': StackBlockPyramidSeqSeenColors, -'stack-block-pyramid-seq-unseen-colors': StackBlockPyramidSeqUnseenColors, -'stack-block-pyramid-seq-full': StackBlockPyramidSeqFull, -'separating-piles-seen-colors': SeparatingPilesSeenColors, -'separating-piles-unseen-colors': SeparatingPilesUnseenColors, -'separating-piles-full': SeparatingPilesFull, -'towers-of-hanoi-seq-seen-colors': TowersOfHanoiSeqSeenColors, -'towers-of-hanoi-seq-unseen-colors': TowersOfHanoiSeqUnseenColors, -'towers-of-hanoi-seq-full': TowersOfHanoiSeqFull, -``` - -### Generated Tasks by GPT -1. All of them should be automatically imported and exist in `generated_tasks`. - -### Adding New Tasks - -See [put_block_in_bowl.py](put_block_in_bowl.py) for an example of how a task is specified. Creating a new task involves: (1) setting up a scene with the desired objects, (2) specifying goals with a language instruction and target "zones" or "poses", (3) defining an evaluation metric that is either sequential or non-sequential. See the original [Ravens codebase](https://github.com/google-research/ravens) for more details on task specification and organization. A minimal, hypothetical sketch of this structure is included below. - -### Correcting COM for Google Scanned Objects - -By default all [Google Scanned Objects](https://app.ignitionrobotics.org/GoogleResearch/fuel/collections/Google%20Scanned%20Objects) have COMs (Centers of Mass) at the base of the object, which leads to unrealistic behavior in the physics engine. To correct this, I manually edited the COM of each `.obj` file to be the geometric center of the mesh with [Blender](https://www.blender.org/). See this [guide on editing COMs](https://blender.stackexchange.com/questions/14294/how-to-recenter-an-objects-origin) for reference. After correction, the original `.obj` can be overwritten using Blender's Export option.
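As a rough illustration of the three task-creation steps above (scene setup, a goal pairing a language instruction with target poses, and an evaluation metric), here is a minimal, hypothetical sketch. The class name, asset paths, and exact goal-tuple layout are assumptions modeled on the Ravens/CLIPort `Task` API rather than code from this repository; refer to an existing task such as [put_block_in_bowl.py](put_block_in_bowl.py) for the authoritative structure.

```python
import numpy as np

from cliport.tasks.task import Task


class PutBlockOnPad(Task):
    """Hypothetical example task: place one block on a square pad."""

    def __init__(self):
        super().__init__()
        self.max_steps = 3
        self.lang_template = "put the block on the pad"

    def reset(self, env):
        super().reset(env)

        # (1) Set up the scene with the desired objects.
        pad_size = (0.12, 0.12, 0)
        pad_pose = self.get_random_pose(env, pad_size)
        env.add_object('zone/zone.urdf', pad_pose, 'fixed')  # asset path is an assumption

        block_size = (0.04, 0.04, 0.04)
        block_pose = self.get_random_pose(env, block_size)
        block_id = env.add_object('stacking/block.urdf', block_pose)  # asset path is an assumption

        # (2) Specify the goal: one language instruction plus one target pose.
        self.lang_goals.append(self.lang_template)
        self.goals.append((
            [(block_id, (np.pi / 2, None))],  # objects and their rotational symmetry
            np.ones((1, 1)),                  # object-to-target match matrix
            [pad_pose],                       # target poses
            False, True,                      # replace objects, allow rotations
            'pose',                           # (3) evaluation metric ('pose' or 'zone')
            None, 1))                         # metric params, max reward
```

In this sketch the metric is non-sequential; sequential tasks typically append one goal (and one language goal) per step so that the instructions must be completed in order.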
- -## Credit - -All demo-conditioned are from [Ravens](https://github.com/google-research/ravens). The language-conditioned tasks were built-off the same PyBullet environments. \ No newline at end of file diff --git a/spaces/Gmq-x/gpt-academic/toolbox.py b/spaces/Gmq-x/gpt-academic/toolbox.py deleted file mode 100644 index 038d7be858f3b7fbc6ff62f5031dcacdebe4d70c..0000000000000000000000000000000000000000 --- a/spaces/Gmq-x/gpt-academic/toolbox.py +++ /dev/null @@ -1,507 +0,0 @@ -import markdown -import importlib -import traceback -import inspect -import re -from latex2mathml.converter import convert as tex2mathml -from functools import wraps, lru_cache -############################### 插件输入输出接驳区 ####################################### -class ChatBotWithCookies(list): - def __init__(self, cookie): - self._cookies = cookie - - def write_list(self, list): - for t in list: - self.append(t) - - def get_list(self): - return [t for t in self] - - def get_cookies(self): - return self._cookies - -def ArgsGeneralWrapper(f): - """ - 装饰器函数,用于重组输入参数,改变输入参数的顺序与结构。 - """ - def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, *args): - txt_passon = txt - if txt == "" and txt2 != "": txt_passon = txt2 - # 引入一个有cookie的chatbot - cookies.update({ - 'top_p':top_p, - 'temperature':temperature, - }) - llm_kwargs = { - 'api_key': cookies['api_key'], - 'llm_model': llm_model, - 'top_p':top_p, - 'max_length': max_length, - 'temperature':temperature, - } - plugin_kwargs = { - # 目前还没有 - } - chatbot_with_cookie = ChatBotWithCookies(cookies) - chatbot_with_cookie.write_list(chatbot) - yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args) - return decorated - -def update_ui(chatbot, history, msg='正常', **kwargs): # 刷新界面 - """ - 刷新用户界面 - """ - assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。" - yield chatbot.get_cookies(), chatbot, history, msg - -def CatchException(f): - """ - 装饰器函数,捕捉函数f中的异常并封装到一个生成器中返回,并显示到聊天当中。 - """ - @wraps(f) - def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - try: - yield from f(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT) - except Exception as e: - from check_proxy import check_proxy - from toolbox import get_conf - proxies, = get_conf('proxies') - tb_str = '```\n' + traceback.format_exc() + '```' - if chatbot is None or len(chatbot) == 0: - chatbot = [["插件调度异常", "异常原因"]] - chatbot[-1] = (chatbot[-1][0], - f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}") - yield from update_ui(chatbot=chatbot, history=history, msg=f'异常 {e}') # 刷新界面 - return decorated - - -def HotReload(f): - """ - HotReload的装饰器函数,用于实现Python函数插件的热更新。 - 函数热更新是指在不停止程序运行的情况下,更新函数代码,从而达到实时更新功能。 - 在装饰器内部,使用wraps(f)来保留函数的元信息,并定义了一个名为decorated的内部函数。 - 内部函数通过使用importlib模块的reload函数和inspect模块的getmodule函数来重新加载并获取函数模块, - 然后通过getattr函数获取函数名,并在新模块中重新加载函数。 - 最后,使用yield from语句返回重新加载过的函数,并在被装饰的函数上执行。 - 最终,装饰器函数返回内部函数。这个内部函数可以将函数的原始定义更新为最新版本,并执行函数的新版本。 - """ - @wraps(f) - def decorated(*args, **kwargs): - fn_name = f.__name__ - f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name) - yield from f_hot_reload(*args, **kwargs) - return decorated - - -####################################### 其他小工具 ##################################### - -def get_reduce_token_percent(text): - """ - * 此函数未来将被弃用 - """ - try: - # text = "maximum context length is 4097 tokens. 
However, your messages resulted in 4870 tokens" - pattern = r"(\d+)\s+tokens\b" - match = re.findall(pattern, text) - EXCEED_ALLO = 500 # 稍微留一点余地,否则在回复时会因余量太少出问题 - max_limit = float(match[0]) - EXCEED_ALLO - current_tokens = float(match[1]) - ratio = max_limit/current_tokens - assert ratio > 0 and ratio < 1 - return ratio, str(int(current_tokens-max_limit)) - except: - return 0.5, '不详' - - - -def write_results_to_file(history, file_name=None): - """ - 将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。 - """ - import os - import time - if file_name is None: - # file_name = time.strftime("chatGPT分析报告%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md' - file_name = 'chatGPT分析报告' + \ - time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md' - os.makedirs('./gpt_log/', exist_ok=True) - with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f: - f.write('# chatGPT 分析报告\n') - for i, content in enumerate(history): - try: # 这个bug没找到触发条件,暂时先这样顶一下 - if type(content) != str: - content = str(content) - except: - continue - if i % 2 == 0: - f.write('## ') - f.write(content) - f.write('\n\n') - res = '以上材料已经被写入' + os.path.abspath(f'./gpt_log/{file_name}') - print(res) - return res - - -def regular_txt_to_markdown(text): - """ - 将普通文本转换为Markdown格式的文本。 - """ - text = text.replace('\n', '\n\n') - text = text.replace('\n\n\n', '\n\n') - text = text.replace('\n\n\n', '\n\n') - return text - - - - -def report_execption(chatbot, history, a, b): - """ - 向chatbot中添加错误信息 - """ - chatbot.append((a, b)) - history.append(a) - history.append(b) - - -def text_divide_paragraph(text): - """ - 将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。 - """ - if '```' in text: - # careful input - return text - else: - # wtf input - lines = text.split("\n") - for i, line in enumerate(lines): - lines[i] = lines[i].replace(" ", " ") - text = "
      ".join(lines) - return text - - -def markdown_convertion(txt): - """ - 将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。 - """ - pre = '
      ' - suf = '
      ' - markdown_extension_configs = { - 'mdx_math': { - 'enable_dollar_delimiter': True, - 'use_gitlab_delimiters': False, - }, - } - find_equation_pattern = r'\n', '') - return content - - - if ('$' in txt) and ('```' not in txt): # 有$标识的公式符号,且没有代码段```的标识 - # convert everything to html format - split = markdown.markdown(text='---') - convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs) - convert_stage_1 = markdown_bug_hunt(convert_stage_1) - # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s). - # 1. convert to easy-to-copy tex (do not render math) - convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL) - # 2. convert to rendered equation - convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL) - # cat them together - return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf - else: - return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf - - -def close_up_code_segment_during_stream(gpt_reply): - """ - 在gpt输出代码的中途(输出了前面的```,但还没输出完后面的```),补上后面的``` - - Args: - gpt_reply (str): GPT模型返回的回复字符串。 - - Returns: - str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。 - - """ - if '```' not in gpt_reply: - return gpt_reply - if gpt_reply.endswith('```'): - return gpt_reply - - # 排除了以上两个情况,我们 - segments = gpt_reply.split('```') - n_mark = len(segments) - 1 - if n_mark % 2 == 1: - # print('输出代码片段中!') - return gpt_reply+'\n```' - else: - return gpt_reply - - -def format_io(self, y): - """ - 将输入和输出解析为HTML格式。将y中最后一项的输入部分段落化,并将输出部分的Markdown和数学公式转换为HTML格式。 - """ - if y is None or y == []: - return [] - i_ask, gpt_reply = y[-1] - i_ask = text_divide_paragraph(i_ask) # 输入部分太自由,预处理一波 - gpt_reply = close_up_code_segment_during_stream(gpt_reply) # 当代码输出半截的时候,试着补上后个``` - y[-1] = ( - None if i_ask is None else markdown.markdown(i_ask, extensions=['fenced_code', 'tables']), - None if gpt_reply is None else markdown_convertion(gpt_reply) - ) - return y - - -def find_free_port(): - """ - 返回当前系统中可用的未使用端口。 - """ - import socket - from contextlib import closing - with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s: - s.bind(('', 0)) - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - return s.getsockname()[1] - - -def extract_archive(file_path, dest_dir): - import zipfile - import tarfile - import os - # Get the file extension of the input file - file_extension = os.path.splitext(file_path)[1] - - # Extract the archive based on its extension - if file_extension == '.zip': - with zipfile.ZipFile(file_path, 'r') as zipobj: - zipobj.extractall(path=dest_dir) - print("Successfully extracted zip archive to {}".format(dest_dir)) - - elif file_extension in ['.tar', '.gz', '.bz2']: - with tarfile.open(file_path, 'r:*') as tarobj: - tarobj.extractall(path=dest_dir) - print("Successfully extracted tar archive to {}".format(dest_dir)) - - # 第三方库,需要预先pip install rarfile - # 此外,Windows上还需要安装winrar软件,配置其Path环境变量,如"C:\Program Files\WinRAR"才可以 - elif file_extension == '.rar': - try: - import rarfile - with rarfile.RarFile(file_path) as rf: - rf.extractall(path=dest_dir) - print("Successfully extracted rar archive to {}".format(dest_dir)) - except: - print("Rar format requires additional dependencies to install") - 
return '\n\n需要安装pip install rarfile来解压rar文件' - - # 第三方库,需要预先pip install py7zr - elif file_extension == '.7z': - try: - import py7zr - with py7zr.SevenZipFile(file_path, mode='r') as f: - f.extractall(path=dest_dir) - print("Successfully extracted 7z archive to {}".format(dest_dir)) - except: - print("7z format requires additional dependencies to install") - return '\n\n需要安装pip install py7zr来解压7z文件' - else: - return '' - return '' - - -def find_recent_files(directory): - """ - me: find files that is created with in one minutes under a directory with python, write a function - gpt: here it is! - """ - import os - import time - current_time = time.time() - one_minute_ago = current_time - 60 - recent_files = [] - - for filename in os.listdir(directory): - file_path = os.path.join(directory, filename) - if file_path.endswith('.log'): - continue - created_time = os.path.getmtime(file_path) - if created_time >= one_minute_ago: - if os.path.isdir(file_path): - continue - recent_files.append(file_path) - - return recent_files - - -def on_file_uploaded(files, chatbot, txt, txt2, checkboxes): - if len(files) == 0: - return chatbot, txt - import shutil - import os - import time - import glob - from toolbox import extract_archive - try: - shutil.rmtree('./private_upload/') - except: - pass - time_tag = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) - os.makedirs(f'private_upload/{time_tag}', exist_ok=True) - err_msg = '' - for file in files: - file_origin_name = os.path.basename(file.orig_name) - shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}') - err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}', - dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract') - moved_files = [fp for fp in glob.glob( - 'private_upload/**/*', recursive=True)] - if "底部输入区" in checkboxes: - txt = "" - txt2 = f'private_upload/{time_tag}' - else: - txt = f'private_upload/{time_tag}' - txt2 = "" - moved_files_str = '\t\n\n'.join(moved_files) - chatbot.append(['我上传了文件,请查收', - f'[Local Message] 收到以下文件: \n\n{moved_files_str}' + - f'\n\n调用路径参数已自动修正到: \n\n{txt}' + - f'\n\n现在您点击任意“红颜色”标识的函数插件时,以上文件将被作为输入参数'+err_msg]) - return chatbot, txt, txt2 - - -def on_report_generated(files, chatbot): - from toolbox import find_recent_files - report_files = find_recent_files('gpt_log') - if len(report_files) == 0: - return None, chatbot - # files.extend(report_files) - chatbot.append(['汇总报告如何远程获取?', '汇总报告已经添加到右侧“文件上传区”(可能处于折叠状态),请查收。']) - return report_files, chatbot - -def is_openai_api_key(key): - API_MATCH = re.match(r"sk-[a-zA-Z0-9]{48}$", key) - return bool(API_MATCH) - -def is_api2d_key(key): - if key.startswith('fk') and len(key) == 41: - return True - else: - return False - -def is_any_api_key(key): - if ',' in key: - keys = key.split(',') - for k in keys: - if is_any_api_key(k): return True - return False - else: - return is_openai_api_key(key) or is_api2d_key(key) - - -def select_api_key(keys, llm_model): - import random - avail_key_list = [] - key_list = keys.split(',') - - if llm_model.startswith('gpt-'): - for k in key_list: - if is_openai_api_key(k): avail_key_list.append(k) - - if llm_model.startswith('api2d-'): - for k in key_list: - if is_api2d_key(k): avail_key_list.append(k) - - if len(avail_key_list) == 0: - raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。") - - api_key = random.choice(avail_key_list) # 随机负载均衡 - return api_key - -@lru_cache(maxsize=128) -def read_single_conf_with_lru_cache(arg): - from colorful import print亮红, print亮绿 - try: - r = 
getattr(importlib.import_module('config_private'), arg) - except: - r = getattr(importlib.import_module('config'), arg) - # 在读取API_KEY时,检查一下是不是忘了改config - if arg == 'API_KEY': - if is_any_api_key(r): - print亮绿(f"[API_KEY] 您的 API_KEY 是: {r[:15]}*** API_KEY 导入成功") - else: - print亮红( "[API_KEY] 正确的 API_KEY 是'sk'开头的51位密钥(OpenAI),或者 'fk'开头的41位密钥,请在config文件中修改API密钥之后再运行。") - if arg == 'proxies': - if r is None: - print亮红('[PROXY] 网络代理状态:未配置。无代理状态下很可能无法访问OpenAI家族的模型。建议:检查USE_PROXY选项是否修改。') - else: - print亮绿('[PROXY] 网络代理状态:已配置。配置信息如下:', r) - assert isinstance(r, dict), 'proxies格式错误,请注意proxies选项的格式,不要遗漏括号。' - return r - - -def get_conf(*args): - # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到 - res = [] - for arg in args: - r = read_single_conf_with_lru_cache(arg) - res.append(r) - return res - - -def clear_line_break(txt): - txt = txt.replace('\n', ' ') - txt = txt.replace(' ', ' ') - txt = txt.replace(' ', ' ') - return txt - - -class DummyWith(): - """ - 这段代码定义了一个名为DummyWith的空上下文管理器, - 它的作用是……额……没用,即在代码结构不变得情况下取代其他的上下文管理器。 - 上下文管理器是一种Python对象,用于与with语句一起使用, - 以确保一些资源在代码块执行期间得到正确的初始化和清理。 - 上下文管理器必须实现两个方法,分别为 __enter__()和 __exit__()。 - 在上下文执行开始的情况下,__enter__()方法会在代码块被执行前被调用, - 而在上下文执行结束时,__exit__()方法则会被调用。 - """ - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - return diff --git a/spaces/Godrose0728/Aisound02/text/korean.py b/spaces/Godrose0728/Aisound02/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/Godrose0728/Aisound02/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - 
'''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/scripts/generate_multiscale_DF2K.py b/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/scripts/generate_multiscale_DF2K.py deleted file mode 100644 index 7ae6484e5c7a325bc55fdfb490ce4acd394f721a..0000000000000000000000000000000000000000 --- a/spaces/GolDNenex/Super-Resolution-Anime-Diffusion/RealESRGANv030/scripts/generate_multiscale_DF2K.py +++ /dev/null @@ -1,57 
+0,0 @@ -import argparse -import glob -import os -from PIL import Image - - -def main(args): - # For DF2K, we consider the following three scales, - # and the smallest image whose shortest edge is 400 - scale_list = [0.75, 0.5, 1 / 3] - shortest_edge = 400 - - path_list = sorted(glob.glob(os.path.join(args.input, "*"))) - for path in path_list: - print(path) - basename = os.path.splitext(os.path.basename(path))[0] - - img = Image.open(path) - width, height = img.size - for idx, scale in enumerate(scale_list): - print(f"\t{scale:.2f}") - rlt = img.resize( - (int(width * scale), int(height * scale)), resample=Image.LANCZOS - ) - rlt.save(os.path.join(args.output, f"{basename}T{idx}.png")) - - # save the smallest image which the shortest edge is 400 - if width < height: - ratio = height / width - width = shortest_edge - height = int(width * ratio) - else: - ratio = width / height - height = shortest_edge - width = int(height * ratio) - rlt = img.resize((int(width), int(height)), resample=Image.LANCZOS) - rlt.save(os.path.join(args.output, f"{basename}T{idx+1}.png")) - - -if __name__ == "__main__": - """Generate multi-scale versions for GT images with LANCZOS resampling. - It is now used for DF2K dataset (DIV2K + Flickr 2K) - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "--input", type=str, default="datasets/DF2K/DF2K_HR", help="Input folder" - ) - parser.add_argument( - "--output", - type=str, - default="datasets/DF2K/DF2K_multiscale", - help="Output folder", - ) - args = parser.parse_args() - - os.makedirs(args.output, exist_ok=True) - main(args) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/detectors_htc_r50_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/detectors_htc_r50_1x_coco.py deleted file mode 100644 index 0d2fc4f77fcca715c1dfb613306d214b636aa0c0..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detectors/detectors_htc_r50_1x_coco.py +++ /dev/null @@ -1,28 +0,0 @@ -_base_ = '../htc/htc_r50_fpn_1x_coco.py' - -model = dict( - backbone=dict( - type='DetectoRS_ResNet', - conv_cfg=dict(type='ConvAWS'), - sac=dict(type='SAC', use_deform=True), - stage_with_sac=(False, True, True, True), - output_img=True), - neck=dict( - type='RFP', - rfp_steps=2, - aspp_out_channels=64, - aspp_dilations=(1, 3, 6, 1), - rfp_backbone=dict( - rfp_inplanes=256, - type='DetectoRS_ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - conv_cfg=dict(type='ConvAWS'), - sac=dict(type='SAC', use_deform=True), - stage_with_sac=(False, True, True, True), - pretrained='torchvision://resnet50', - style='pytorch'))) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/README.md deleted file mode 100644 index 32768030d61019ea9302d4d734183b06040d3d95..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/lvis/README.md +++ /dev/null @@ -1,51 +0,0 @@ -# LVIS dataset - -## Introduction - -[DATASET] - -```latex -@inproceedings{gupta2019lvis, - title={{LVIS}: A Dataset for Large Vocabulary Instance Segmentation}, - author={Gupta, Agrim and Dollar, Piotr and Girshick, Ross}, - booktitle={Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition}, - year={2019} -} -``` - -## Common Setting - -* Please follow [install 
guide](../../docs/install.md#install-mmdetection) to install open-mmlab forked cocoapi first. -* Run following scripts to install our forked lvis-api. - - ```shell - # mmlvis is fully compatible with official lvis - pip install mmlvis - ``` - - or - - ```shell - pip install -r requirements/optional.txt - ``` - -* All experiments use oversample strategy [here](../../docs/tutorials/new_dataset.md#class-balanced-dataset) with oversample threshold `1e-3`. -* The size of LVIS v0.5 is half of COCO, so schedule `2x` in LVIS is roughly the same iterations as `1x` in COCO. - -## Results and models of LVIS v0.5 - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :------: |:--------: | -| R-50-FPN | pytorch | 2x | - | - | 26.1 | 25.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis-dbd06831.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_20200531_160435.log.json) | -| R-101-FPN | pytorch | 2x | - | - | 27.1 | 27.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_2x_lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_2x_lvis-54582ee2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_2x_lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_2x_lvis_20200601_134748.log.json) | -| X-101-32x4d-FPN | pytorch | 2x | - | - | 26.7 | 26.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis-3cf55ea2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_20200531_221749.log.json) | -| X-101-64x4d-FPN | pytorch | 2x | - | - | 26.4 | 26.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_2x_lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_2x_lvis-1c99a5ad.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_2x_lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_2x_lvis_20200601_194651.log.json) | - -## Results and models of LVIS v1 - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :--------: | -| R-50-FPN | pytorch | 1x | 9.1 | - | 22.5 | 21.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py) | 
[model](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1-aa78ac3d.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1-20200829_061305.log.json) | -| R-101-FPN | pytorch | 1x | 10.8 | - | 24.6 | 23.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_1x_lvis_v1.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_1x_lvis_v1/mask_rcnn_r101_fpn_sample1e-3_mstrain_1x_lvis_v1-ec55ce32.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_1x_lvis_v1/mask_rcnn_r101_fpn_sample1e-3_mstrain_1x_lvis_v1-20200829_070959.log.json) | -| X-101-32x4d-FPN | pytorch | 1x | 11.8 | - | 26.7 | 25.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1-ebbc5c81.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1-20200829_071317.log.json) | -| X-101-64x4d-FPN | pytorch | 1x | 14.6 | - | 27.2 | 25.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_1x_lvis_v1/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_1x_lvis_v1-43d9edfe.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_1x_lvis_v1/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_1x_lvis_v1-20200830_060206.log.json) | diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_40k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_40k_voc12aug.py deleted file mode 100644 index 492bd3dfdce331070cb9645dbe55142e9b662da1..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x512_40k_voc12aug.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3_r50-d8.py', - '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21)) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/Makefile b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/Makefile deleted file mode 100644 index 5bfd89dd833d7448b21073eb6ee7cfac1d5157dd..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/Makefile +++ /dev/null @@ -1,21 +0,0 @@ -default: linter tests - -install: - pip install -U pip - pip install -U -e '.[dev]' - -linter: - flake8 audiocraft && mypy audiocraft - flake8 tests && mypy tests - -tests: - coverage run -m pytest tests - coverage report --include 'audiocraft/*' - -docs: - pdoc3 --html -o docs -f audiocraft - -dist: - python setup.py sdist - -.PHONY: linter tests docs dist diff --git 
a/spaces/GurudattaBS/GenDiseasePrediction/code/DiseaseModel.py b/spaces/GurudattaBS/GenDiseasePrediction/code/DiseaseModel.py deleted file mode 100644 index c30e6473ac6e5bbb6e6ee82c9dcf2693c2190989..0000000000000000000000000000000000000000 --- a/spaces/GurudattaBS/GenDiseasePrediction/code/DiseaseModel.py +++ /dev/null @@ -1,76 +0,0 @@ -import xgboost as xgb -import pandas as pd - -class DiseaseModel: - - def __init__(self): - self.all_symptoms = None - self.symptoms = None - self.pred_disease = None - self.model = xgb.XGBClassifier() - self.diseases = self.disease_list('data/dataset.csv') - - def load_xgboost(self, model_path): - self.model.load_model(model_path) - - def save_xgboost(self, model_path): - self.model.save_model(model_path) - - def predict(self, X): - self.symptoms = X - disease_pred_idx = self.model.predict(self.symptoms) - self.pred_disease = self.diseases[disease_pred_idx].values[0] - disease_probability_array = self.model.predict_proba(self.symptoms) - disease_probability = disease_probability_array[0, disease_pred_idx[0]] - return self.pred_disease, disease_probability - - - def describe_disease(self, disease_name): - - if disease_name not in self.diseases: - return "That disease is not contemplated in this model" - - # Read disease dataframe - desc_df = pd.read_csv('data/symptom_Description.csv') - desc_df = desc_df.apply(lambda col: col.str.strip()) - - return desc_df[desc_df['Disease'] == disease_name]['Description'].values[0] - - def describe_predicted_disease(self): - - if self.pred_disease is None: - return "No predicted disease yet" - - return self.describe_disease(self.pred_disease) - - def disease_precautions(self, disease_name): - - if disease_name not in self.diseases: - return "That disease is not contemplated in this model" - - # Read precautions dataframe - prec_df = pd.read_csv('data/symptom_precaution.csv') - prec_df = prec_df.apply(lambda col: col.str.strip()) - - return prec_df[prec_df['Disease'] == disease_name].filter(regex='Precaution').values.tolist()[0] - - def predicted_disease_precautions(self): - - if self.pred_disease is None: - return "No predicted disease yet" - - return self.disease_precautions(self.pred_disease) - - def disease_list(self, kaggle_dataset): - - df = pd.read_csv('data/clean_dataset.tsv', sep='\t') - # Preprocessing - y_data = df.iloc[:,-1] - X_data = df.iloc[:,:-1] - - self.all_symptoms = X_data.columns - - # Convert y to categorical values - y_data = y_data.astype('category') - - return y_data.cat.categories \ No newline at end of file diff --git a/spaces/HI915/Test02/README.md b/spaces/HI915/Test02/README.md deleted file mode 100644 index 28489beea8b219d9b49bdabc61942161ae1f4242..0000000000000000000000000000000000000000 --- a/spaces/HI915/Test02/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Test02 -emoji: 💻 -colorFrom: green -colorTo: purple -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/train.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/train.py deleted file mode 100644 index 321de3d9b53f8194b58c26f5cb2c03281afc2bb1..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/train.py +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-""" -Legacy entry point. Use fairseq_cli/train.py or fairseq-train instead. -""" - -from fairseq_cli.train import cli_main - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/attentions.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/attentions.py deleted file mode 100644 index 62b8c83acbd3150b6d6686f21f3627781107c1ba..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/attentions.py +++ /dev/null @@ -1,378 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=None, - block_length=None, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - self.block_length = block_length - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - window_size=window_size, - p_dropout=p_dropout, - block_length=block_length, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - for i in range(self.n_layers): - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class CouplingBlock(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - sigmoid_scale=False, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - self.sigmoid_scale = sigmoid_scale - - start = torch.nn.Conv1d(in_channels // 2, hidden_channels, 1) - start = torch.nn.utils.weight_norm(start) - self.start = start - # Initializing last layer to 0 makes the affine coupling layers - # do nothing at first. It helps to stabilze training. 
- end = torch.nn.Conv1d(hidden_channels, in_channels, 1) - end.weight.data.zero_() - end.bias.data.zero_() - self.end = end - - self.wn = modules.WN( - in_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels, - p_dropout, - ) - - def forward(self, x, x_mask=None, reverse=False, g=None, **kwargs): - b, c, t = x.size() - if x_mask is None: - x_mask = 1 - x_0, x_1 = x[:, : self.in_channels // 2], x[:, self.in_channels // 2 :] - - x = self.start(x_0) * x_mask - x = self.wn(x, x_mask, g) - out = self.end(x) - - z_0 = x_0 - m = out[:, : self.in_channels // 2, :] - logs = out[:, self.in_channels // 2 :, :] - if self.sigmoid_scale: - logs = torch.log(1e-6 + torch.sigmoid(logs + 2)) - - if reverse: - z_1 = (x_1 - m) * torch.exp(-logs) * x_mask - logdet = None - else: - z_1 = (m + torch.exp(logs) * x_1) * x_mask - logdet = torch.sum(logs * x_mask, [1, 2]) - - z = torch.cat([z_0, z_1], 1) - return z, logdet - - def store_inverse(self): - self.wn.remove_weight_norm() - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - window_size=None, - heads_share=True, - p_dropout=0.0, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.p_dropout = p_dropout - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels ** -0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - if proximal_init: - self.conv_k.weight.data.copy_(self.conv_q.weight.data) - self.conv_k.bias.data.copy_(self.conv_q.bias.data) - nn.init.xavier_uniform_(self.conv_v.weight) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(self.k_channels) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." 
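# Windowed relative-position attention in the style of Shaw et al. (2018):
# learned embeddings for offsets clipped to [-window_size, window_size] are
# matched against the queries and added to the content-based logits below.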
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query, key_relative_embeddings) - rel_logits = self._relative_position_to_absolute_position(rel_logits) - scores_local = rel_logits / math.sqrt(self.k_channels) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores * block_mask + -1e4 * (1 - block_mask) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. 
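# Viewing the padded buffer as [b, h, l+1, 2*l-1] skews each row by one slot,
# so taking rows :length and columns length-1: recovers absolute-position
# scores of shape [b, h, l, l] without an explicit gather.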
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length ** 2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - - self.conv_1 = nn.Conv1d( - in_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.conv_2 = nn.Conv1d( - filter_channels, out_channels, kernel_size, padding=kernel_size // 2 - ) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(x * x_mask) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - return x * x_mask diff --git a/spaces/Hila/RobustViT/SegmentationTest/utils/saver.py b/spaces/Hila/RobustViT/SegmentationTest/utils/saver.py deleted file mode 100644 index f767d288f662a9685d90cab8eb188d7b0ae920ce..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/SegmentationTest/utils/saver.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import torch -from collections import OrderedDict -import glob - - -class Saver(object): - - def __init__(self, args): - self.args = args - self.directory = os.path.join('run', args.train_dataset, args.checkname) - self.runs = sorted(glob.glob(os.path.join(self.directory, 'experiment_*'))) - run_id = int(self.runs[-1].split('_')[-1]) + 1 if self.runs else 0 - - self.experiment_dir = os.path.join(self.directory, 'experiment_{}'.format(str(run_id))) - if not os.path.exists(self.experiment_dir): - os.makedirs(self.experiment_dir) - - def save_checkpoint(self, state, filename='checkpoint.pth.tar'): - """Saves checkpoint to disk""" - filename = os.path.join(self.experiment_dir, filename) - torch.save(state, filename) - - def save_experiment_config(self): - logfile = os.path.join(self.experiment_dir, 'parameters.txt') - log_file = open(logfile, 'w') - p = OrderedDict() - p['train_dataset'] = self.args.train_dataset - p['lr'] = self.args.lr - p['epoch'] = self.args.epochs - - for key, val in p.items(): - log_file.write(key + ':' + str(val) + '\n') - log_file.close() diff --git a/spaces/HuggingFaceH4/Falcon-vs-LLaMA/README.md b/spaces/HuggingFaceH4/Falcon-vs-LLaMA/README.md deleted file mode 100644 index 
5b49404a71c72e98e871578dad61304062af1d92..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceH4/Falcon-vs-LLaMA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Falcon Vs LLaMA -emoji: 🦅 🦙 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HuguesdeF/moulinette/Dockerfile b/spaces/HuguesdeF/moulinette/Dockerfile deleted file mode 100644 index 037cb254d5cfa6c6a936232bd5a1e9d197708e5b..0000000000000000000000000000000000000000 --- a/spaces/HuguesdeF/moulinette/Dockerfile +++ /dev/null @@ -1,29 +0,0 @@ -FROM python:3.9-slim - -EXPOSE 7860 - -RUN apt-get update && apt-get install -y \ - build-essential \ - software-properties-common \ - git \ - libcairo2 \ - libcairo2-dev \ - imagemagick \ - && rm -rf /var/lib/apt/lists/* - -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -WORKDIR $HOME/moulinette -RUN mkdir $HOME/moulinette/images -COPY --chown=user . $HOME/moulinette -RUN echo $(ls -1 .. ) - -RUN pip3 install -r requirements.txt - -# For Windows Docker execution, uncomment below: -#ENTRYPOINT ["streamlit", "run", "Corriger.py", "--server.port=8501", "--server.address=0.0.0.0"] -# For HuggingFace execution, uncommment below: -ENTRYPOINT ["streamlit", "run", "Corriger.py", "--server.port=7860"] diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/ops/fused_act/fused_act.py b/spaces/Iceclear/StableSR/StableSR/basicsr/ops/fused_act/fused_act.py deleted file mode 100644 index 88edc445484b71119dc22a258e83aef49ce39b07..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/ops/fused_act/fused_act.py +++ /dev/null @@ -1,95 +0,0 @@ -# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501 - -import os -import torch -from torch import nn -from torch.autograd import Function - -BASICSR_JIT = os.getenv('BASICSR_JIT') -if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - fused_act_ext = load( - 'fused', - sources=[ - os.path.join(module_path, 'src', 'fused_bias_act.cpp'), - os.path.join(module_path, 'src', 'fused_bias_act_kernel.cu'), - ], - ) -else: - try: - from . import fused_act_ext - except ImportError: - pass - # avoid annoying print output - # print(f'Cannot import deform_conv_ext. Error: {error}. You may need to: \n ' - # '1. compile with BASICSR_EXT=True. or\n ' - # '2. 
set BASICSR_JIT=True during running') - - -class FusedLeakyReLUFunctionBackward(Function): - - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused_act_ext.fused_bias_act(grad_output, empty, out, 3, 1, negative_slope, scale) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused_act_ext.fused_bias_act(gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, - ctx.scale) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused_act_ext.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(grad_output, out, ctx.negative_slope, ctx.scale) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - - def __init__(self, channel, negative_slope=0.2, scale=2**0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2**0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/flask_api.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros/flask_api.py deleted file mode 100644 index 8cc236a1c34c9ddeddea99bcea13024fb0ccc90b..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros/flask_api.py +++ /dev/null @@ -1,56 +0,0 @@ -import io -import logging - -import soundfile -import torch -import torchaudio -from flask import Flask, request, send_file -from flask_cors import CORS - -from inference.infer_tool import Svc, RealTimeVC - -app = Flask(__name__) - -CORS(app) - -logging.getLogger('numba').setLevel(logging.WARNING) - - -@app.route("/voiceChangeModel", methods=["POST"]) -def voice_change_model(): - request_form = request.form - wave_file = request.files.get("sample", None) - # 变调信息 - f_pitch_change = float(request_form.get("fPitchChange", 0)) - # DAW所需的采样率 - daw_sample = int(float(request_form.get("sampleRate", 0))) - speaker_id = int(float(request_form.get("sSpeakId", 0))) - # http获得wav文件并转换 - input_wav_path = io.BytesIO(wave_file.read()) - - # 模型推理 - if raw_infer: - out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample) - else: - out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample) - # 返回音频 - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav") - out_wav_path.seek(0) - return send_file(out_wav_path, download_name="temp.wav", 
as_attachment=True) - - -if __name__ == '__main__': - # 启用则为直接切片合成,False为交叉淡化方式 - # vst插件调整0.3-0.5s切片时间可以降低延迟,直接切片方法会有连接处爆音、交叉淡化会有轻微重叠声音 - # 自行选择能接受的方法,或将vst最大切片时间调整为1s,此处设为Ture,延迟大音质稳定一些 - raw_infer = True - # 每个模型和config是唯一对应的 - model_name = "logs/32k/G_174000-Copy1.pth" - config_name = "configs/config.json" - svc_model = Svc(model_name, config_name) - svc = RealTimeVC() - # 此处与vst插件对应,不建议更改 - app.run(port=6842, host="0.0.0.0", debug=False, threaded=False) diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/client/api.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/client/api.py deleted file mode 100644 index 0e1eb7734350b700bfeeeb1edadb7ba1998f330a..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/client/api.py +++ /dev/null @@ -1,72 +0,0 @@ -from typing import Dict, List, Optional -import asyncio -import os - -import httpx -from fastchat.protocol.chat_completion import ( - ChatCompletionRequest, - ChatCompletionResponse, -) - -_BASE_URL = "http://localhost:8000" - -if os.environ.get("FASTCHAT_BASE_URL"): - _BASE_URL = os.environ.get("FASTCHAT_BASE_URL") - - -def set_baseurl(base_url: str): - global _BASE_URL - _BASE_URL = base_url - - -class ChatCompletionClient: - def __init__(self, base_url: str): - self.base_url = base_url - - async def request_completion( - self, request: ChatCompletionRequest, timeout: Optional[float] = None - ) -> ChatCompletionResponse: - async with httpx.AsyncClient() as client: - response = await client.post( - f"{self.base_url}/v1/chat/completions", - json=request.dict(), - timeout=timeout, - ) - response.raise_for_status() - return ChatCompletionResponse.parse_obj(response.json()) - - -class ChatCompletion: - OBJECT_NAME = "chat.completions" - - @classmethod - def create(cls, *args, **kwargs) -> ChatCompletionResponse: - """Creates a new chat completion for the provided messages and parameters. - - See `acreate` for more details. - """ - return asyncio.run(cls.acreate(*args, **kwargs)) - - @classmethod - async def acreate( - cls, - model: str, - messages: List[Dict[str, str]], - temperature: Optional[float] = 0.7, - n: int = 1, - max_tokens: Optional[int] = None, - stop: Optional[str] = None, - timeout: Optional[float] = None, - ) -> ChatCompletionResponse: - """Creates a new chat completion for the provided messages and parameters.""" - request = ChatCompletionRequest( - model=model, - messages=messages, - temperature=temperature, - n=n, - max_tokens=max_tokens, - stop=stop, - ) - client = ChatCompletionClient(_BASE_URL) - response = await client.request_completion(request, timeout=timeout) - return response diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/register_worker.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/register_worker.py deleted file mode 100644 index 2c2c40295e0351f25709ba25554c9329f15bf0d2..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/register_worker.py +++ /dev/null @@ -1,26 +0,0 @@ -""" -Manually register workers. 
- -Usage: -python3 -m fastchat.serve.register_worker --controller http://localhost:21001 --worker-name http://localhost:21002 -""" - -import argparse - -import requests - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--controller-address", type=str) - parser.add_argument("--worker-name", type=str) - parser.add_argument("--check-heart-beat", action="store_true") - args = parser.parse_args() - - url = args.controller_address + "/register_worker" - data = { - "worker_name": args.worker_name, - "check_heart_beat": args.check_heart_beat, - "worker_status": None, - } - r = requests.post(url, json=data) - assert r.status_code == 200 diff --git a/spaces/Jackflack09/diffuse-custom/README.md b/spaces/Jackflack09/diffuse-custom/README.md deleted file mode 100644 index 8a70a256e7e5f3ca2e54098c79fa08978ad72c03..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/README.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: Super Resolution Anime Diffusion -emoji: 📊 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: YeOldHermit/Super-Resolution-Anime-Diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -# Super Resolution Anime Diffusion -This is demo forked from https://huggingface.co/Linaqruf/anything-v3.0. - -## Super Resolution Anime Diffusion -At this moment, many diffusion models can only generate <1024 width and length pictures. -I integrated the Super Resolution with [Anything diffusion model](https://huggingface.co/Linaqruf/anything-v3.0) to produce high resolution pictures. -Thanks to the open-source project: https://github.com/yu45020/Waifu2x - -## Modifications -1. Disable the safety checker to save time and memory. You need to abide the original rules of the model. -2. Add the Super Resolution function to the model. -3. Add batch generation function to the model (see inference.py). -4. -# Origin README ---- -language: -- en -license: creativeml-openrail-m -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- diffusers -inference: true ---- - -# Anything V3 - -Welcome to Anything V3 - a latent diffusion model for weebs. This model is intended to produce high-quality, highly detailed anime style with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images. - -e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_** - -## Gradio - -We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Anything-V3.0: - -[Open in Spaces](https://huggingface.co/spaces/akhaliq/anything-v3.0) - - - -## 🧨 Diffusers - -This model can be used just like any other Stable Diffusion model. For more information, -please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). - -You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). 
- -```python -from diffusers import StableDiffusionPipeline -import torch - -model_id = "Linaqruf/anything-v3.0" -pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) -pipe = pipe.to("cuda") - -prompt = "pikachu" -image = pipe(prompt).images[0] - -image.save("./pikachu.png") -``` - -## Examples - -Below are some examples of images generated using this model: - -**Anime Girl:** -![Anime Girl](https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/1girl.png) -``` -1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden -Steps: 50, Sampler: DDIM, CFG scale: 12 -``` -**Anime Boy:** -![Anime Boy](https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/1boy.png) -``` -1boy, medium hair, blonde hair, blue eyes, bishounen, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden -Steps: 50, Sampler: DDIM, CFG scale: 12 -``` -**Scenery:** -![Scenery](https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/scenery.png) -``` -scenery, shibuya tokyo, post-apocalypse, ruins, rust, sky, skyscraper, abandoned, blue sky, broken window, building, cloud, crane machine, outdoors, overgrown, pillar, sunset -Steps: 50, Sampler: DDIM, CFG scale: 12 -``` - -## License - -This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. -The CreativeML OpenRAIL License specifies: - -1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content -2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license -3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) -[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) \ No newline at end of file diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/ddpm/__init__.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/ddpm/__init__.py deleted file mode 100644 index 8889bdae1224e91916e0f8454bafba0ee566f3b9..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/ddpm/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# flake8: noqa -from .pipeline_ddpm import DDPMPipeline diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py deleted file mode 100644 index 5345c4e5625ee519a411b4fd80468fc991757165..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Optional, Tuple, Union - -import torch - -from ...models import UNet2DModel, VQModel -from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from ...schedulers import DDIMScheduler - - -class LDMPipeline(DiffusionPipeline): - r""" - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Parameters: - vqvae ([`VQModel`]): - Vector-quantized (VQ) Model to encode and decode images to and from latent representations. - unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - [`DDIMScheduler`] is to be used in combination with `unet` to denoise the encoded image latents. - """ - - def __init__(self, vqvae: VQModel, unet: UNet2DModel, scheduler: DDIMScheduler): - super().__init__() - self.register_modules(vqvae=vqvae, unet=unet, scheduler=scheduler) - - @torch.no_grad() - def __call__( - self, - batch_size: int = 1, - generator: Optional[torch.Generator] = None, - eta: float = 0.0, - num_inference_steps: int = 50, - output_type: Optional[str] = "pil", - return_dict: bool = True, - **kwargs, - ) -> Union[Tuple, ImagePipelineOutput]: - r""" - Args: - batch_size (`int`, *optional*, defaults to 1): - Number of images to generate. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple. - - Returns: - [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if - `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the - generated images. 
- """ - - latents = torch.randn( - (batch_size, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size), - generator=generator, - ) - latents = latents.to(self.device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - - self.scheduler.set_timesteps(num_inference_steps) - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - - extra_kwargs = {} - if accepts_eta: - extra_kwargs["eta"] = eta - - for t in self.progress_bar(self.scheduler.timesteps): - latent_model_input = self.scheduler.scale_model_input(latents, t) - # predict the noise residual - noise_prediction = self.unet(latent_model_input, t).sample - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_prediction, t, latents, **extra_kwargs).prev_sample - - # decode the image latents with the VAE - image = self.vqvae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/Kelpy-Codos.js b/spaces/JohnSmith9982/ChuanhuChatGPT/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. 
-// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/autodiff/compressor.py b/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/autodiff/compressor.py deleted file mode 100644 index 2e81cae7a1c9ecad08e8bacd4e9ca770259c5efc..0000000000000000000000000000000000000000 --- a/spaces/JohnnyPittt/audio-styling/deepafx_st/processors/autodiff/compressor.py +++ /dev/null @@ -1,169 +0,0 @@ -import math -import torch -import scipy.signal - -import deepafx_st.processors.autodiff.signal -from deepafx_st.processors.processor import Processor - - -@torch.jit.script -def compressor( - x: torch.Tensor, - sample_rate: float, - threshold: torch.Tensor, - ratio: torch.Tensor, - attack_time: torch.Tensor, - release_time: torch.Tensor, - knee_dB: torch.Tensor, - makeup_gain_dB: torch.Tensor, - eps: float = 1e-8, -): - """Note the `release` parameter is not used.""" - # print(f"autodiff comp fs = {sample_rate}") - - s = x.size() # should be one 1d - - threshold = threshold.squeeze() - ratio = ratio.squeeze() - attack_time = attack_time.squeeze() - makeup_gain_dB = makeup_gain_dB.squeeze() - - # uni-polar dB signal - # Turn the input signal into a uni-polar signal on the dB scale - x_G = 20 * torch.log10(torch.abs(x) + 1e-8) # x_uni casts type - - # Ensure there are no values of negative infinity - x_G = torch.clamp(x_G, min=-96) - - # Static characteristics with knee - y_G = torch.zeros(s).type_as(x) - - ratio = ratio.view(-1) - threshold = threshold.view(-1) - attack_time = attack_time.view(-1) - release_time = release_time.view(-1) - knee_dB = knee_dB.view(-1) - makeup_gain_dB = makeup_gain_dB.view(-1) - - # Below knee - idx = 
torch.where((2 * (x_G - threshold)) < -knee_dB)[0] - y_G[idx] = x_G[idx] - - # At knee - idx = torch.where((2 * torch.abs(x_G - threshold)) <= knee_dB)[0] - y_G[idx] = x_G[idx] + ( - (1 / ratio) * (((x_G[idx] - threshold + knee_dB) / 2) ** 2) - ) / (2 * knee_dB) - - # Above knee threshold - idx = torch.where((2 * (x_G - threshold)) > knee_dB)[0] - y_G[idx] = threshold + ((x_G[idx] - threshold) / ratio) - - x_L = x_G - y_G - - # design 1-pole butterworth lowpass - fc = 1.0 / (attack_time * sample_rate) - b, a = deepafx_st.processors.autodiff.signal.butter(fc) - - # apply FIR approx of IIR filter - y_L = deepafx_st.processors.autodiff.signal.approx_iir_filter(b, a, x_L) - - lin_y_L = torch.pow(10.0, -y_L / 20.0) # convert back to linear - y = lin_y_L * x # apply gain - - # apply makeup gain - y *= torch.pow(10.0, makeup_gain_dB / 20.0) - - return y - - -class Compressor(Processor): - def __init__( - self, - sample_rate, - max_threshold=0.0, - min_threshold=-80, - max_ratio=20.0, - min_ratio=1.0, - max_attack=0.1, - min_attack=0.0001, - max_release=1.0, - min_release=0.005, - max_knee=12.0, - min_knee=0.0, - max_mkgain=48.0, - min_mkgain=-48.0, - eps=1e-8, - ): - """ """ - super().__init__() - self.sample_rate = sample_rate - self.eps = eps - self.ports = [ - { - "name": "Threshold", - "min": min_threshold, - "max": max_threshold, - "default": -12.0, - "units": "dB", - }, - { - "name": "Ratio", - "min": min_ratio, - "max": max_ratio, - "default": 2.0, - "units": "", - }, - { - "name": "Attack", - "min": min_attack, - "max": max_attack, - "default": 0.001, - "units": "s", - }, - { - # this is a dummy parameter - "name": "Release (dummy)", - "min": min_release, - "max": max_release, - "default": 0.045, - "units": "s", - }, - { - "name": "Knee", - "min": min_knee, - "max": max_knee, - "default": 6.0, - "units": "dB", - }, - { - "name": "Makeup Gain", - "min": min_mkgain, - "max": max_mkgain, - "default": 0.0, - "units": "dB", - }, - ] - - self.num_control_params = len(self.ports) - - def forward(self, x, p, sample_rate=24000, **kwargs): - """ - - Assume that parameters in p are normalized between 0 and 1. - - x (tensor): Shape batch x 1 x samples - p (tensor): shape batch x params - - """ - bs, ch, s = x.size() - - inputs = torch.split(x, 1, 0) - params = torch.split(p, 1, 0) - - y = [] # loop over batch dimension - for input, param in zip(inputs, params): - denorm_param = self.denormalize_params(param.view(-1)) - y.append(compressor(input.view(-1), sample_rate, *denorm_param)) - - return torch.stack(y, dim=0).view(bs, 1, -1) diff --git a/spaces/Kaludi/Food-Category-Classification-And-Recipes-Recommender_App/pages/Meal_Planner.py b/spaces/Kaludi/Food-Category-Classification-And-Recipes-Recommender_App/pages/Meal_Planner.py deleted file mode 100644 index d9ef8ebecf13c18344dabf67b55070da2b7a8d83..0000000000000000000000000000000000000000 --- a/spaces/Kaludi/Food-Category-Classification-And-Recipes-Recommender_App/pages/Meal_Planner.py +++ /dev/null @@ -1,151 +0,0 @@ -import streamlit as st -import requests -import json -import random -import re - -def main(): - st.title("Meal Planner") - st.markdown("The Meal Planner app helps users plan a meal out for the day which will include breakfast, lunch, and dinner options that fit their dietary needs, cuisine preferences, specific ingredients, and calorie limits. After submitting their choices, the app retrieves recipe options from an API and randomly selects one recipe from each of the following categories: breakfast, lunch, and dinner. 
The app also displays the selected recipes' nutrition information and calculates the total nutrition of all three recipes combined.") - - # Dropdown for Diet - diet_options = ['All', 'Gluten-Free', 'Vegan', 'Vegetarian', 'Dairy-Free'] - diet = st.selectbox('Diet', diet_options) - - # Dropdown for Cuisine - cuisine_options = ['All', 'African', 'Asian', 'Caribbean', 'Central American', 'Europe', 'Middle Eastern', 'North American', 'Oceanic', 'South American'] - cuisine = st.selectbox('Cuisine', cuisine_options) - - # Text input for ingredients - ingredients = st.text_input("Enter ingredients (Separated By Commas)", placeholder="Enter Atleast One Ingredient", value="") - - # Slider for Calories - calories = st.slider("Select Max Calories for All Three Recipes", 25, 2500, 1500) - st.write("Selected: **{}** Max Calories.".format(calories)) - - # Submit button - if st.button("Submit"): - if not ingredients: # Check if ingredients text input field is empty - st.error("Please enter at least one ingredient.") - return - url = "https://alcksyjrmd.execute-api.us-east-2.amazonaws.com/default/nutrients_response" - - params = {"k": str(calories)} - - if diet != "All": - params["d"] = diet - - if cuisine != "All": - params["c"] = cuisine - - if ingredients: - params["i"] = ingredients - - response = requests.get(url, params=params) - if len(response.content) < 180: - st.error("The query was too large, please decrease the calories or fine-tune your search.") - return - response_json = json.loads(response.content) - - - - - # Convert response_json to a list - response_json = list(response_json) - - # Find 3 recipes that add up to the target calorie limit - recipes = [] - total_calories = 0 - - # Breakfast Section - st.markdown("## Breakfast Recipe") - breakfast_recipes = [recipe for recipe in response_json if "breakfast" in recipe['Course Keywords']] - if len(breakfast_recipes) > 0: - random_recipe = random.choice(breakfast_recipes) - recipe_calories = random_recipe['Calories'] - if total_calories + recipe_calories <= calories: - total_calories += recipe_calories - recipes.append(random_recipe) - st.write("**Title:** ", random_recipe['Title']) - st.write("**Calories:** ", recipe_calories) - st.write("**Total Fat:** ", random_recipe['Total Fat']) - st.write("**Total Carbohydrate:** ", random_recipe['Total Carbohydrate']) - st.write("**Protein:** ", random_recipe['Protein']) - if random_recipe['Image Link'].endswith(".jpg") or random_recipe['Image Link'].endswith(".jpeg") or random_recipe['Image Link'].endswith(".png"): - st.image(random_recipe['Image Link'], width=300) - else: - st.write("**Image Link:** ", random_recipe['Image Link']) - st.write("**Recipe URL:** ", random_recipe['Recipe URLs']) - st.markdown("---") - - # Brunch Section - st.markdown("## Lunch Recipe") - brunch_recipes = [recipe for recipe in response_json if "main" in recipe['Course Keywords']] - if len(brunch_recipes) > 0: - random_recipe = random.choice(brunch_recipes) - recipe_calories = random_recipe['Calories'] - if total_calories + recipe_calories <= calories: - total_calories += recipe_calories - recipes.append(random_recipe) - st.write("**Title:** ", random_recipe['Title']) - st.write("**Calories:** ", recipe_calories) - st.write("**Total Fat:** ", random_recipe['Total Fat']) - st.write("**Total Carbohydrate:** ", random_recipe['Total Carbohydrate']) - st.write("**Protein:** ", random_recipe['Protein']) - if random_recipe['Image Link'].endswith(".jpg") or random_recipe['Image Link'].endswith(".jpeg") or random_recipe['Image 
Link'].endswith(".png"): - st.image(random_recipe['Image Link'], width=300) - else: - st.write("**Image Link:** ", random_recipe['Image Link']) - st.write("**Recipe URL:** ", random_recipe['Recipe URLs']) - st.markdown("---") - - # Main Section - st.markdown("## Dinner Recipe") - main_recipes = [recipe for recipe in response_json if "main" in recipe['Course Keywords']] - if len(main_recipes) > 0: - random_recipe = random.choice(main_recipes) - recipe_calories = random_recipe['Calories'] - if total_calories + recipe_calories <= calories: - total_calories += recipe_calories - recipes.append(random_recipe) - st.write("**Title:** ", random_recipe['Title']) - st.write("**Calories:** ", recipe_calories) - st.write("**Total Fat:** ", random_recipe['Total Fat']) - st.write("**Total Carbohydrate:** ", random_recipe['Total Carbohydrate']) - st.write("**Protein:** ", random_recipe['Protein']) - if random_recipe['Image Link'].endswith(".jpg") or random_recipe['Image Link'].endswith(".jpeg") or random_recipe['Image Link'].endswith(".png"): - st.image(random_recipe['Image Link'], width=300) - else: - st.write("**Image Link:** ", random_recipe['Image Link']) - st.write("**Recipe URL:** ", random_recipe['Recipe URLs']) - else: - st.markdown("### Not Enough Recipes Found:") - st.write("**Not enough recipes found that match your search criteria. Please adjust your search criteria.**") - - if len(recipes) < 3: - st.markdown("### Not Enough Recipes Found:") - st.write("**Not enough recipes found that match your search criteria. Please adjust your search criteria.**") - else: - st.markdown("---") - - # Calculate total Calories, Total Fat, Total Carbohydrate, and Protein of all three recipes - total_calories = 0 - total_fat = 0 - total_carbs = 0 - total_protein = 0 - for recipe in recipes: - total_calories += recipe['Calories'] - total_fat += float(re.sub(r'[^\d.]+', '', recipe['Total Fat'])) - total_carbs += float(re.sub(r'[^\d.]+', '', recipe['Total Carbohydrate'])) - total_protein += float(re.sub(r'[^\d.]+', '', recipe['Protein'])) - - st.markdown("## Total Nutrition of All Three Recipes") - st.write("Total Calories:", total_calories) - st.write("Total Fat:", total_fat, "g") - st.write("Total Carbohydrate:", total_carbs, "g") - st.write("Total Protein:", total_protein, "g") - st.write("") - st.write("*To download this recipe as a PDF, open the hamburger menu on the top right and click on Print.*") - -if __name__ == '__main__': - main() diff --git a/spaces/Kimata/Sanskrit-TTS/model_modules/commons.py b/spaces/Kimata/Sanskrit-TTS/model_modules/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/Kimata/Sanskrit-TTS/model_modules/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - 
idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/dynamic_mask_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/dynamic_mask_head.py deleted file mode 100644 index f33612b1b141668d0463435975c14a26fbe5a0cd..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/mask_heads/dynamic_mask_head.py +++ /dev/null @@ -1,166 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List - -import torch -import torch.nn as nn -from mmengine.config import ConfigDict -from torch import Tensor - -from mmdet.models.task_modules import SamplingResult -from mmdet.registry import MODELS -from mmdet.utils import ConfigType, InstanceList, OptConfigType, reduce_mean -from .fcn_mask_head import FCNMaskHead - - -@MODELS.register_module() -class DynamicMaskHead(FCNMaskHead): - r"""Dynamic Mask Head for - `Instances as Queries `_ - - Args: - num_convs (int): Number of convolution layer. - Defaults to 4. - roi_feat_size (int): The output size of RoI extractor, - Defaults to 14. - in_channels (int): Input feature channels. - Defaults to 256. - conv_kernel_size (int): Kernel size of convolution layers. - Defaults to 3. - conv_out_channels (int): Output channels of convolution layers. - Defaults to 256. - num_classes (int): Number of classes. - Defaults to 80 - class_agnostic (int): Whether generate class agnostic prediction. - Defaults to False. - dropout (float): Probability of drop the channel. - Defaults to 0.0 - upsample_cfg (:obj:`ConfigDict` or dict): The config for - upsample layer. - conv_cfg (:obj:`ConfigDict` or dict, optional): The convolution - layer config. - norm_cfg (:obj:`ConfigDict` or dict, optional): The norm layer config. - dynamic_conv_cfg (:obj:`ConfigDict` or dict): The dynamic convolution - layer config. 
- loss_mask (:obj:`ConfigDict` or dict): The config for mask loss. - """ - - def __init__(self, - num_convs: int = 4, - roi_feat_size: int = 14, - in_channels: int = 256, - conv_kernel_size: int = 3, - conv_out_channels: int = 256, - num_classes: int = 80, - class_agnostic: bool = False, - upsample_cfg: ConfigType = dict( - type='deconv', scale_factor=2), - conv_cfg: OptConfigType = None, - norm_cfg: OptConfigType = None, - dynamic_conv_cfg: ConfigType = dict( - type='DynamicConv', - in_channels=256, - feat_channels=64, - out_channels=256, - input_feat_shape=14, - with_proj=False, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')), - loss_mask: ConfigType = dict( - type='DiceLoss', loss_weight=8.0), - **kwargs) -> None: - super().__init__( - num_convs=num_convs, - roi_feat_size=roi_feat_size, - in_channels=in_channels, - conv_kernel_size=conv_kernel_size, - conv_out_channels=conv_out_channels, - num_classes=num_classes, - class_agnostic=class_agnostic, - upsample_cfg=upsample_cfg, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - loss_mask=loss_mask, - **kwargs) - assert class_agnostic is False, \ - 'DynamicMaskHead only support class_agnostic=False' - self.fp16_enabled = False - - self.instance_interactive_conv = MODELS.build(dynamic_conv_cfg) - - def init_weights(self) -> None: - """Use xavier initialization for all weight parameter and set - classification head bias as a specific value when use focal loss.""" - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - nn.init.constant_(self.conv_logits.bias, 0.) - - def forward(self, roi_feat: Tensor, proposal_feat: Tensor) -> Tensor: - """Forward function of DynamicMaskHead. - - Args: - roi_feat (Tensor): Roi-pooling features with shape - (batch_size*num_proposals, feature_dimensions, - pooling_h , pooling_w). - proposal_feat (Tensor): Intermediate feature get from - diihead in last stage, has shape - (batch_size*num_proposals, feature_dimensions) - - Returns: - mask_preds (Tensor): Predicted foreground masks with shape - (batch_size*num_proposals, num_classes, pooling_h*2, pooling_w*2). - """ - - proposal_feat = proposal_feat.reshape(-1, self.in_channels) - proposal_feat_iic = self.instance_interactive_conv( - proposal_feat, roi_feat) - - x = proposal_feat_iic.permute(0, 2, 1).reshape(roi_feat.size()) - - for conv in self.convs: - x = conv(x) - if self.upsample is not None: - x = self.upsample(x) - if self.upsample_method == 'deconv': - x = self.relu(x) - mask_preds = self.conv_logits(x) - return mask_preds - - def loss_and_target(self, mask_preds: Tensor, - sampling_results: List[SamplingResult], - batch_gt_instances: InstanceList, - rcnn_train_cfg: ConfigDict) -> dict: - """Calculate the loss based on the features extracted by the mask head. - - Args: - mask_preds (Tensor): Predicted foreground masks, has shape - (num_pos, num_classes, h, w). - sampling_results (List[obj:SamplingResult]): Assign results of - all images in a batch after sampling. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes``, ``labels``, and - ``masks`` attributes. - rcnn_train_cfg (obj:ConfigDict): `train_cfg` of RCNN. - - Returns: - dict: A dictionary of loss and targets components. 
- """ - mask_targets = self.get_targets( - sampling_results=sampling_results, - batch_gt_instances=batch_gt_instances, - rcnn_train_cfg=rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - - num_pos = pos_labels.new_ones(pos_labels.size()).float().sum() - avg_factor = torch.clamp(reduce_mean(num_pos), min=1.).item() - loss = dict() - if mask_preds.size(0) == 0: - loss_mask = mask_preds.sum() - else: - loss_mask = self.loss_mask( - mask_preds[torch.arange(num_pos).long(), pos_labels, - ...].sigmoid(), - mask_targets, - avg_factor=avg_factor) - loss['loss_mask'] = loss_mask - return dict(loss_mask=loss, mask_targets=mask_targets) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/samplers/random_sampler.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/samplers/random_sampler.py deleted file mode 100644 index fa03665fc36cc6a0084431324b16727b2dc8993e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/samplers/random_sampler.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Union - -import torch -from numpy import ndarray -from torch import Tensor - -from mmdet.registry import TASK_UTILS -from ..assigners import AssignResult -from .base_sampler import BaseSampler - - -@TASK_UTILS.register_module() -class RandomSampler(BaseSampler): - """Random sampler. - - Args: - num (int): Number of samples - pos_fraction (float): Fraction of positive samples - neg_pos_up (int): Upper bound number of negative and - positive samples. Defaults to -1. - add_gt_as_proposals (bool): Whether to add ground truth - boxes as proposals. Defaults to True. - """ - - def __init__(self, - num: int, - pos_fraction: float, - neg_pos_ub: int = -1, - add_gt_as_proposals: bool = True, - **kwargs): - from .sampling_result import ensure_rng - super().__init__( - num=num, - pos_fraction=pos_fraction, - neg_pos_ub=neg_pos_ub, - add_gt_as_proposals=add_gt_as_proposals) - self.rng = ensure_rng(kwargs.get('rng', None)) - - def random_choice(self, gallery: Union[Tensor, ndarray, list], - num: int) -> Union[Tensor, ndarray]: - """Random select some elements from the gallery. - - If `gallery` is a Tensor, the returned indices will be a Tensor; - If `gallery` is a ndarray or list, the returned indices will be a - ndarray. - - Args: - gallery (Tensor | ndarray | list): indices pool. - num (int): expected sample num. - - Returns: - Tensor or ndarray: sampled indices. - """ - assert len(gallery) >= num - - is_tensor = isinstance(gallery, torch.Tensor) - if not is_tensor: - if torch.cuda.is_available(): - device = torch.cuda.current_device() - else: - device = 'cpu' - gallery = torch.tensor(gallery, dtype=torch.long, device=device) - # This is a temporary fix. We can revert the following code - # when PyTorch fixes the abnormal return of torch.randperm. - # See: https://github.com/open-mmlab/mmdetection/pull/5014 - perm = torch.randperm(gallery.numel())[:num].to(device=gallery.device) - rand_inds = gallery[perm] - if not is_tensor: - rand_inds = rand_inds.cpu().numpy() - return rand_inds - - def _sample_pos(self, assign_result: AssignResult, num_expected: int, - **kwargs) -> Union[Tensor, ndarray]: - """Randomly sample some positive samples. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - num_expected (int): The number of expected positive samples - - Returns: - Tensor or ndarray: sampled indices. 
- """ - pos_inds = torch.nonzero(assign_result.gt_inds > 0, as_tuple=False) - if pos_inds.numel() != 0: - pos_inds = pos_inds.squeeze(1) - if pos_inds.numel() <= num_expected: - return pos_inds - else: - return self.random_choice(pos_inds, num_expected) - - def _sample_neg(self, assign_result: AssignResult, num_expected: int, - **kwargs) -> Union[Tensor, ndarray]: - """Randomly sample some negative samples. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - num_expected (int): The number of expected positive samples - - Returns: - Tensor or ndarray: sampled indices. - """ - neg_inds = torch.nonzero(assign_result.gt_inds == 0, as_tuple=False) - if neg_inds.numel() != 0: - neg_inds = neg_inds.squeeze(1) - if len(neg_inds) <= num_expected: - return neg_inds - else: - return self.random_choice(neg_inds, num_expected) diff --git a/spaces/Latryna/roop/roop/metadata.py b/spaces/Latryna/roop/roop/metadata.py deleted file mode 100644 index 35b0f0245a38eb9ec024f2ed2c829044f6051c29..0000000000000000000000000000000000000000 --- a/spaces/Latryna/roop/roop/metadata.py +++ /dev/null @@ -1,2 +0,0 @@ -name = 'roop' -version = '1.1.0' diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/transforms.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = 
inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - 
(input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/LegacyLeague/Legacy_League/setup.sh b/spaces/LegacyLeague/Legacy_League/setup.sh deleted file mode 100644 index e221c60655cf9d06bd304bc6395c60f761ef174d..0000000000000000000000000000000000000000 --- a/spaces/LegacyLeague/Legacy_League/setup.sh +++ /dev/null @@ -1,2 +0,0 @@ -export GRADIO_SERVER_NAME=0.0.0.0 -export GRADIO_SERVER_PORT="$PORT" diff --git a/spaces/Leozin11/openai-reverse-proxy/server.js b/spaces/Leozin11/openai-reverse-proxy/server.js deleted file mode 100644 index 04a48b7a429c4d0ad0b772ba1edf503e349eda21..0000000000000000000000000000000000000000 --- a/spaces/Leozin11/openai-reverse-proxy/server.js +++ /dev/null @@ -1,32 +0,0 @@ -const express = require('express'); -const proxy = require('express-http-proxy'); -const app = express(); -const targetUrl = 'https://api.openai.com'; -const openaiKey = process.env.OPENAI_KEY -const port = 7860; -const baseUrl = getExternalUrl(process.env.SPACE_ID); - -app.use('/api', proxy(targetUrl, { - proxyReqOptDecorator: (proxyReqOpts, srcReq) => { - // Modify the request headers if necessary - proxyReqOpts.headers['Authorization'] = 'Bearer '+openaiKey; - return proxyReqOpts; - }, -})); - -app.get("/", (req, res) => { - res.send(`This is your OpenAI Reverse Proxy URL: ${baseUrl}`); -}); - -function getExternalUrl(spaceId) { - try { - const [username, spacename] = spaceId.split("/"); - return `https://${username}-${spacename.replace(/_/g, "-")}.hf.space/api/v1`; - } catch (e) { - return ""; - } -} - -app.listen(port, () => { - console.log(`Reverse proxy server running on ${baseUrl}`); -}); \ No newline at end of file diff --git a/spaces/MAMADREZAMORADIam/Hgyukhfgtffftt/README.md b/spaces/MAMADREZAMORADIam/Hgyukhfgtffftt/README.md deleted file mode 100644 index adf15b1e397433c6455c1649663c7c370fdbd60f..0000000000000000000000000000000000000000 --- a/spaces/MAMADREZAMORADIam/Hgyukhfgtffftt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hgyukhfgtffftt -emoji: 🏆 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/group_points.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/group_points.py deleted file mode 100644 index 6c3ec9d758ebe4e1c2205882af4be154008253a5..0000000000000000000000000000000000000000 --- 
a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/ops/group_points.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch -from torch import nn as nn -from torch.autograd import Function - -from ..utils import ext_loader -from .ball_query import ball_query -from .knn import knn - -ext_module = ext_loader.load_ext( - '_ext', ['group_points_forward', 'group_points_backward']) - - -class QueryAndGroup(nn.Module): - """Groups points with a ball query of radius. - - Args: - max_radius (float): The maximum radius of the balls. - If None is given, we will use kNN sampling instead of ball query. - sample_num (int): Maximum number of features to gather in the ball. - min_radius (float, optional): The minimum radius of the balls. - Default: 0. - use_xyz (bool, optional): Whether to use xyz. - Default: True. - return_grouped_xyz (bool, optional): Whether to return grouped xyz. - Default: False. - normalize_xyz (bool, optional): Whether to normalize xyz. - Default: False. - uniform_sample (bool, optional): Whether to sample uniformly. - Default: False - return_unique_cnt (bool, optional): Whether to return the count of - unique samples. Default: False. - return_grouped_idx (bool, optional): Whether to return grouped idx. - Default: False. - """ - - def __init__(self, - max_radius, - sample_num, - min_radius=0, - use_xyz=True, - return_grouped_xyz=False, - normalize_xyz=False, - uniform_sample=False, - return_unique_cnt=False, - return_grouped_idx=False): - super().__init__() - self.max_radius = max_radius - self.min_radius = min_radius - self.sample_num = sample_num - self.use_xyz = use_xyz - self.return_grouped_xyz = return_grouped_xyz - self.normalize_xyz = normalize_xyz - self.uniform_sample = uniform_sample - self.return_unique_cnt = return_unique_cnt - self.return_grouped_idx = return_grouped_idx - if self.return_unique_cnt: - assert self.uniform_sample, \ - 'uniform_sample should be True when ' \ - 'returning the count of unique samples' - if self.max_radius is None: - assert not self.normalize_xyz, \ - 'can not normalize grouped xyz when max_radius is None' - - def forward(self, points_xyz, center_xyz, features=None): - """ - Args: - points_xyz (Tensor): (B, N, 3) xyz coordinates of the features. - center_xyz (Tensor): (B, npoint, 3) coordinates of the centriods. - features (Tensor): (B, C, N) Descriptors of the features. - - Returns: - Tensor: (B, 3 + C, npoint, sample_num) Grouped feature. 
- """ - # if self.max_radius is None, we will perform kNN instead of ball query - # idx is of shape [B, npoint, sample_num] - if self.max_radius is None: - idx = knn(self.sample_num, points_xyz, center_xyz, False) - idx = idx.transpose(1, 2).contiguous() - else: - idx = ball_query(self.min_radius, self.max_radius, self.sample_num, - points_xyz, center_xyz) - - if self.uniform_sample: - unique_cnt = torch.zeros((idx.shape[0], idx.shape[1])) - for i_batch in range(idx.shape[0]): - for i_region in range(idx.shape[1]): - unique_ind = torch.unique(idx[i_batch, i_region, :]) - num_unique = unique_ind.shape[0] - unique_cnt[i_batch, i_region] = num_unique - sample_ind = torch.randint( - 0, - num_unique, (self.sample_num - num_unique, ), - dtype=torch.long) - all_ind = torch.cat((unique_ind, unique_ind[sample_ind])) - idx[i_batch, i_region, :] = all_ind - - xyz_trans = points_xyz.transpose(1, 2).contiguous() - # (B, 3, npoint, sample_num) - grouped_xyz = grouping_operation(xyz_trans, idx) - grouped_xyz_diff = grouped_xyz - \ - center_xyz.transpose(1, 2).unsqueeze(-1) # relative offsets - if self.normalize_xyz: - grouped_xyz_diff /= self.max_radius - - if features is not None: - grouped_features = grouping_operation(features, idx) - if self.use_xyz: - # (B, C + 3, npoint, sample_num) - new_features = torch.cat([grouped_xyz_diff, grouped_features], - dim=1) - else: - new_features = grouped_features - else: - assert (self.use_xyz - ), 'Cannot have not features and not use xyz as a feature!' - new_features = grouped_xyz_diff - - ret = [new_features] - if self.return_grouped_xyz: - ret.append(grouped_xyz) - if self.return_unique_cnt: - ret.append(unique_cnt) - if self.return_grouped_idx: - ret.append(idx) - if len(ret) == 1: - return ret[0] - else: - return tuple(ret) - - -class GroupAll(nn.Module): - """Group xyz with feature. - - Args: - use_xyz (bool): Whether to use xyz. - """ - - def __init__(self, use_xyz: bool = True): - super().__init__() - self.use_xyz = use_xyz - - def forward(self, - xyz: torch.Tensor, - new_xyz: torch.Tensor, - features: torch.Tensor = None): - """ - Args: - xyz (Tensor): (B, N, 3) xyz coordinates of the features. - new_xyz (Tensor): new xyz coordinates of the features. - features (Tensor): (B, C, N) features to group. - - Returns: - Tensor: (B, C + 3, 1, N) Grouped feature. - """ - grouped_xyz = xyz.transpose(1, 2).unsqueeze(2) - if features is not None: - grouped_features = features.unsqueeze(2) - if self.use_xyz: - # (B, 3 + C, 1, N) - new_features = torch.cat([grouped_xyz, grouped_features], - dim=1) - else: - new_features = grouped_features - else: - new_features = grouped_xyz - - return new_features - - -class GroupingOperation(Function): - """Group feature with given index.""" - - @staticmethod - def forward(ctx, features: torch.Tensor, - indices: torch.Tensor) -> torch.Tensor: - """ - Args: - features (Tensor): (B, C, N) tensor of features to group. - indices (Tensor): (B, npoint, nsample) the indices of - features to group with. - - Returns: - Tensor: (B, C, npoint, nsample) Grouped features. 
- """ - features = features.contiguous() - indices = indices.contiguous() - - B, nfeatures, nsample = indices.size() - _, C, N = features.size() - output = torch.cuda.FloatTensor(B, C, nfeatures, nsample) - - ext_module.group_points_forward(B, C, N, nfeatures, nsample, features, - indices, output) - - ctx.for_backwards = (indices, N) - return output - - @staticmethod - def backward(ctx, - grad_out: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Args: - grad_out (Tensor): (B, C, npoint, nsample) tensor of the gradients - of the output from forward. - - Returns: - Tensor: (B, C, N) gradient of the features. - """ - idx, N = ctx.for_backwards - - B, C, npoint, nsample = grad_out.size() - grad_features = torch.cuda.FloatTensor(B, C, N).zero_() - - grad_out_data = grad_out.data.contiguous() - ext_module.group_points_backward(B, C, N, npoint, nsample, - grad_out_data, idx, - grad_features.data) - return grad_features, None - - -grouping_operation = GroupingOperation.apply diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/hooks/ema.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/hooks/ema.py deleted file mode 100644 index 15c7e68088f019802a59e7ae41cc1fe0c7f28f96..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/runner/hooks/ema.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...parallel import is_module_wrapper -from ..hooks.hook import HOOKS, Hook - - -@HOOKS.register_module() -class EMAHook(Hook): - r"""Exponential Moving Average Hook. - - Use Exponential Moving Average on all parameters of model in training - process. All parameters have a ema backup, which update by the formula - as below. EMAHook takes priority over EvalHook and CheckpointSaverHook. - - .. math:: - - \text{Xema\_{t+1}} = (1 - \text{momentum}) \times - \text{Xema\_{t}} + \text{momentum} \times X_t - - Args: - momentum (float): The momentum used for updating ema parameter. - Defaults to 0.0002. - interval (int): Update ema parameter every interval iteration. - Defaults to 1. - warm_up (int): During first warm_up steps, we may use smaller momentum - to update ema parameters more slowly. Defaults to 100. - resume_from (str): The checkpoint path. Defaults to None. - """ - - def __init__(self, - momentum=0.0002, - interval=1, - warm_up=100, - resume_from=None): - assert isinstance(interval, int) and interval > 0 - self.warm_up = warm_up - self.interval = interval - assert momentum > 0 and momentum < 1 - self.momentum = momentum**interval - self.checkpoint = resume_from - - def before_run(self, runner): - """To resume model with it's ema parameters more friendly. - - Register ema parameter as ``named_buffer`` to model - """ - model = runner.model - if is_module_wrapper(model): - model = model.module - self.param_ema_buffer = {} - self.model_parameters = dict(model.named_parameters(recurse=True)) - for name, value in self.model_parameters.items(): - # "." 
is not allowed in module's buffer name - buffer_name = f"ema_{name.replace('.', '_')}" - self.param_ema_buffer[name] = buffer_name - model.register_buffer(buffer_name, value.data.clone()) - self.model_buffers = dict(model.named_buffers(recurse=True)) - if self.checkpoint is not None: - runner.resume(self.checkpoint) - - def after_train_iter(self, runner): - """Update ema parameter every self.interval iterations.""" - curr_step = runner.iter - # We warm up the momentum considering the instability at beginning - momentum = min(self.momentum, - (1 + curr_step) / (self.warm_up + curr_step)) - if curr_step % self.interval != 0: - return - for name, parameter in self.model_parameters.items(): - buffer_name = self.param_ema_buffer[name] - buffer_parameter = self.model_buffers[buffer_name] - buffer_parameter.mul_(1 - momentum).add_(momentum, parameter.data) - - def after_train_epoch(self, runner): - """We load parameter values from ema backup to model before the - EvalHook.""" - self._swap_ema_parameters() - - def before_train_epoch(self, runner): - """We recover model's parameter from ema backup after last epoch's - EvalHook.""" - self._swap_ema_parameters() - - def _swap_ema_parameters(self): - """Swap the parameter of model with parameter in ema_buffer.""" - for name, value in self.model_parameters.items(): - temp = value.data.clone() - ema_buffer = self.model_buffers[self.param_ema_buffer[name]] - value.data.copy_(ema_buffer.data) - ema_buffer.data.copy_(temp) diff --git a/spaces/Michael2008S/flowise/README.md b/spaces/Michael2008S/flowise/README.md deleted file mode 100644 index 665631172580ec7d0c0efb46264f77bde097830d..0000000000000000000000000000000000000000 --- a/spaces/Michael2008S/flowise/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Flowise -emoji: 💻 -colorFrom: pink -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MrYXJ/calculate-model-flops/model_utils.py b/spaces/MrYXJ/calculate-model-flops/model_utils.py deleted file mode 100644 index 0ff9ef8a50de14d457693a3af88c1d11732a11c7..0000000000000000000000000000000000000000 --- a/spaces/MrYXJ/calculate-model-flops/model_utils.py +++ /dev/null @@ -1,151 +0,0 @@ -# !usr/bin/env python -# -*- coding:utf-8 -*- - -''' - Description : - Version : 1.0 - Author : MrYXJ - Mail : yxj2017@gmail.com - Github : https://github.com/MrYxJ - Date : 2023-09-05 23:28:32 - LastEditTime : 2023-09-09 19:14:20 - Copyright (C) 2023 mryxj. All rights reserved. 
-''' - - -import gradio as gr -import torch - -from accelerate.commands.estimate import check_has_model -from urllib.parse import urlparse -from huggingface_hub.utils import GatedRepoError -from huggingface_hub.utils import RepositoryNotFoundError - -from calflops import create_empty_model -from calflops import calculate_flops_hf -from calflops import flops_to_string -from calflops import macs_to_string -from calflops import params_to_string - -def calculate_flops_in_hugging_space(model_name: str, - empty_model: torch.nn.modules, - access_token: str, - input_shape: tuple, - bp_factor: float, - output_unit: str): - - "Calculates the FLOPs and Params usage for a model init on `meta` device" - - try: - flops, macs, params, return_print = calculate_flops_hf(model_name=model_name, - empty_model=empty_model, - access_token=access_token, - input_shape=input_shape, - return_results=True, - output_as_string=False) - except Exception as e: - print("Error info:", e) - raise gr.Error( - f"Model `{model_name}` does not support inference on the meta device, You can download the complete model parameters to your local and using the python package calflops to calculate FLOPs and Params of model `{model_name}`." - ) - - fw_bp_flops = flops * (1.0 + bp_factor) - fw_bp_macs = macs * (1.0 + bp_factor) - - if output_unit == "": - pass - elif output_unit == "auto": - params = params_to_string(params, units=None, precision=3) - flops = flops_to_string(flops, units=None, precision=3) - macs = macs_to_string(macs, units=None, precision=3) - fw_bp_flops = flops_to_string(fw_bp_flops, units=None, precision=3) - fw_bp_macs = macs_to_string(fw_bp_macs, units=None, precision=3) - elif output_unit == "T" or output_unit == "G" or output_unit == "M" or output_unit == "K" or output_unit == "m" or output_unit == "u": - params = params_to_string(params, units=output_unit, precision=3) - flops = flops_to_string(flops, units=output_unit, precision=3) - macs = macs_to_string(macs, units=output_unit, precision=3) - fw_bp_flops = flops_to_string(fw_bp_flops, units=output_unit, precision=3) - fw_bp_macs = macs_to_string(fw_bp_macs, units=output_unit, precision=3) - - return_lines = return_print.split("\n") - return_start = False - return_print = "" - for line in return_lines[:-2]: - if return_start: - return_print += line + "\n" - if "Detailed" in line: - return_start = True - - data = [] - data.append( - { "Total Training Params": params, - "Forward FLOPs": flops, - "Forward MACs": macs, - "Forward+Backward FLOPs": fw_bp_flops, - "Forward+Backward MACs": fw_bp_macs - } - ) - return data, return_print - - -def extract_from_url(name: str): - "Checks if `name` is a URL, and if so converts it to a model name" - is_url = False - try: - result = urlparse(name) - is_url = all([result.scheme, result.netloc]) - except Exception: - is_url = False - # Pass through if not a URL - if not is_url: - return name - else: - path = result.path - return path[1:] - - -def translate_llama2(text): - "Translates llama-2 to its hf counterpart" - if not text.endswith("-hf"): - return text + "-hf" - return text - - -def get_mode_from_hf(model_name: str, library: str, access_token: str): - "Finds and grabs model from the Hub, and initializes on `meta`" - if "meta-llama" in model_name: - model_name = translate_llama2(model_name) - if library == "auto": - library = None - model_name = extract_from_url(model_name) - try: - model = create_empty_model(model_name, library_name=library, trust_remote_code=True, access_token=access_token) - except GatedRepoError: - 
raise gr.Error( - f"Model `{model_name}` is a gated model, please ensure to pass in your access token and try again if you have access. You can find your access token here : https://huggingface.co/settings/tokens. " - ) - except RepositoryNotFoundError: - raise gr.Error(f"Model `{model_name}` was not found on the Hub, please try another model name.") - except ValueError: - raise gr.Error( - f"Model `{model_name}` does not have any library metadata on the Hub, please manually select a library_name to use (such as `transformers`)" - ) - except (RuntimeError, OSError) as e: - library = check_has_model(e) - if library != "unknown": - raise gr.Error( - f"Tried to load `{model_name}` with `{library}` but a possible model to load was not found inside the repo." - ) - raise gr.Error( - f"Model `{model_name}` had an error, please open a discussion on the model's page with the error message and name: `{e}`" - ) - except ImportError: - # hacky way to check if it works with `trust_remote_code=False` - model = create_empty_model( - model_name, library_name=library, trust_remote_code=False, access_token=access_token - ) - except Exception as e: - raise gr.Error( - f"Model `{model_name}` had an error, please open a discussion on the model's page with the error message and name: `{e}`" - ) - return model diff --git a/spaces/NCTCMumbai/NCTC/models/official/modeling/activations/swish_test.py b/spaces/NCTCMumbai/NCTC/models/official/modeling/activations/swish_test.py deleted file mode 100644 index 22042e9a290a420805fc75bbfca6ded6e917d9eb..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/modeling/activations/swish_test.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Tests for the customized Swish activation.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import numpy as np -import tensorflow as tf - -from tensorflow.python.keras import keras_parameterized # pylint: disable=g-direct-tensorflow-import -from official.modeling import activations - - -@keras_parameterized.run_all_keras_modes -class CustomizedSwishTest(keras_parameterized.TestCase): - - def _hard_swish_np(self, x): - x = np.float32(x) - return x * np.clip(x + 3, 0, 6) / 6 - - def test_simple_swish(self): - features = [[.25, 0, -.25], [-1, -2, 3]] - customized_swish_data = activations.simple_swish(features) - swish_data = tf.nn.swish(features) - self.assertAllClose(customized_swish_data, swish_data) - - def test_hard_swish(self): - features = [[.25, 0, -.25], [-1, -2, 3]] - customized_swish_data = activations.hard_swish(features) - swish_data = self._hard_swish_np(features) - self.assertAllClose(customized_swish_data, swish_data) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NeuralInternet/Text-Generation_Playground/modules/html_generator.py b/spaces/NeuralInternet/Text-Generation_Playground/modules/html_generator.py deleted file mode 100644 index 162040bac68c2e987b33a02ccb12e90b51a63b2d..0000000000000000000000000000000000000000 --- a/spaces/NeuralInternet/Text-Generation_Playground/modules/html_generator.py +++ /dev/null @@ -1,357 +0,0 @@ -''' - -This is a library for formatting GPT-4chan and chat outputs as nice HTML. - -''' - -import os -import re -from pathlib import Path - -from PIL import Image - -# This is to store the paths to the thumbnails of the profile pictures -image_cache = {} - -def generate_basic_html(s): - css = """ - .container { - max-width: 600px; - margin-left: auto; - margin-right: auto; - background-color: rgb(31, 41, 55); - padding:3em; - } - .container p { - font-size: 16px !important; - color: white !important; - margin-bottom: 22px; - line-height: 1.4 !important; - } - """ - s = '\n'.join([f'

<p>{line}</p>' for line in s.split('\n')]) - s = f'<style>{css}</style><div class="container">{s}</div>
      ' - return s - -def process_post(post, c): - t = post.split('\n') - number = t[0].split(' ')[1] - if len(t) > 1: - src = '\n'.join(t[1:]) - else: - src = '' - src = re.sub('>', '>', src) - src = re.sub('(>>[0-9]*)', '\\1', src) - src = re.sub('\n', '
<br>\n', src) - src = f'
      {src}\n' - src = f'Anonymous No.{number}\n{src}' - return src - -def generate_4chan_html(f): - css = """ - - #parent #container { - background-color: #eef2ff; - padding: 17px; - } - #parent #container .reply { - background-color: rgb(214, 218, 240); - border-bottom-color: rgb(183, 197, 217); - border-bottom-style: solid; - border-bottom-width: 1px; - border-image-outset: 0; - border-image-repeat: stretch; - border-image-slice: 100%; - border-image-source: none; - border-image-width: 1; - border-left-color: rgb(0, 0, 0); - border-left-style: none; - border-left-width: 0px; - border-right-color: rgb(183, 197, 217); - border-right-style: solid; - border-right-width: 1px; - border-top-color: rgb(0, 0, 0); - border-top-style: none; - border-top-width: 0px; - color: rgb(0, 0, 0); - display: table; - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - margin-bottom: 4px; - margin-left: 0px; - margin-right: 0px; - margin-top: 4px; - overflow-x: hidden; - overflow-y: hidden; - padding-bottom: 4px; - padding-left: 2px; - padding-right: 2px; - padding-top: 4px; - } - - #parent #container .number { - color: rgb(0, 0, 0); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - width: 342.65px; - margin-right: 7px; - } - - #parent #container .op { - color: rgb(0, 0, 0); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - margin-bottom: 8px; - margin-left: 0px; - margin-right: 0px; - margin-top: 4px; - overflow-x: hidden; - overflow-y: hidden; - } - - #parent #container .op blockquote { - margin-left: 0px !important; - } - - #parent #container .name { - color: rgb(17, 119, 67); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - font-weight: 700; - margin-left: 7px; - } - - #parent #container .quote { - color: rgb(221, 0, 0); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - text-decoration-color: rgb(221, 0, 0); - text-decoration-line: underline; - text-decoration-style: solid; - text-decoration-thickness: auto; - } - - #parent #container .greentext { - color: rgb(120, 153, 34); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - } - - #parent #container blockquote { - margin: 0px !important; - margin-block-start: 1em; - margin-block-end: 1em; - margin-inline-start: 40px; - margin-inline-end: 40px; - margin-top: 13.33px !important; - margin-bottom: 13.33px !important; - margin-left: 40px !important; - margin-right: 40px !important; - } - - #parent #container .message { - color: black; - border: none; - } - """ - - posts = [] - post = '' - c = -2 - for line in f.splitlines(): - line += "\n" - if line == '-----\n': - continue - elif line.startswith('--- '): - c += 1 - if post != '': - src = process_post(post, c) - posts.append(src) - post = line - else: - post += line - if post != '': - src = process_post(post, c) - posts.append(src) - - for i in range(len(posts)): - if i == 0: - posts[i] = f'
<div class="op">{posts[i]}</div>\n' - else: - posts[i] = f'<div class="reply">{posts[i]}</div>\n' - - output = '' - output += f'<style>{css}</style><div id="parent"><div id="container">' - for post in posts: - output += post - output += '</div></div>' - output = output.split('\n') - for i in range(len(output)): - output[i] = re.sub(r'^(>(.*?)(<br>|</div>))', r'<span class="greentext">\1</span>', output[i]) - output[i] = re.sub(r'^<blockquote class="message">(>(.*?)(<br>|</div>))', r'<blockquote class="message"><span class="greentext">
      \1', output[i]) - output = '\n'.join(output) - - return output - -def get_image_cache(path): - cache_folder = Path("cache") - if not cache_folder.exists(): - cache_folder.mkdir() - - mtime = os.stat(path).st_mtime - if (path in image_cache and mtime != image_cache[path][0]) or (path not in image_cache): - img = Image.open(path) - img.thumbnail((200, 200)) - output_file = Path(f'cache/{path.name}_cache.png') - img.convert('RGB').save(output_file, format='PNG') - image_cache[path] = [mtime, output_file.as_posix()] - - return image_cache[path][1] - -def generate_chat_html(history, name1, name2, character): - css = """ - .chat { - margin-left: auto; - margin-right: auto; - max-width: 800px; - height: 66.67vh; - overflow-y: auto; - padding-right: 20px; - display: flex; - flex-direction: column-reverse; - } - - .message { - display: grid; - grid-template-columns: 60px 1fr; - padding-bottom: 25px; - font-size: 15px; - font-family: Helvetica, Arial, sans-serif; - line-height: 1.428571429; - } - - .circle-you { - width: 50px; - height: 50px; - background-color: rgb(238, 78, 59); - border-radius: 50%; - } - - .circle-bot { - width: 50px; - height: 50px; - background-color: rgb(59, 78, 244); - border-radius: 50%; - } - - .circle-bot img, .circle-you img { - border-radius: 50%; - width: 100%; - height: 100%; - object-fit: cover; - } - - .text { - } - - .text p { - margin-top: 5px; - } - - .username { - font-weight: bold; - } - - .message-body { - } - - .message-body img { - max-width: 300px; - max-height: 300px; - border-radius: 20px; - } - - .message-body p { - margin-bottom: 0 !important; - font-size: 15px !important; - line-height: 1.428571429 !important; - } - - .dark .message-body p em { - color: rgb(138, 138, 138) !important; - } - - .message-body p em { - color: rgb(110, 110, 110) !important; - } - - """ - - output = '' - output += f'
      ' - img = '' - - for i in [ - f"characters/{character}.png", - f"characters/{character}.jpg", - f"characters/{character}.jpeg", - "img_bot.png", - "img_bot.jpg", - "img_bot.jpeg" - ]: - - path = Path(i) - if path.exists(): - img = f'' - break - - img_me = '' - for i in ["img_me.png", "img_me.jpg", "img_me.jpeg"]: - path = Path(i) - if path.exists(): - img_me = f'' - break - - for i,_row in enumerate(history[::-1]): - row = _row.copy() - row[0] = re.sub(r"(\*\*)([^\*\n]*)(\*\*)", r"\2", row[0]) - row[1] = re.sub(r"(\*\*)([^\*\n]*)(\*\*)", r"\2", row[1]) - row[0] = re.sub(r"(\*)([^\*\n]*)(\*)", r"\2", row[0]) - row[1] = re.sub(r"(\*)([^\*\n]*)(\*)", r"\2", row[1]) - p = '\n'.join([f"

<p>{x}</p>" for x in row[1].split('\n')]) - output += f""" - <div class="message"> - <div class="circle-bot"> - {img} - </div> - <div class="text"> - <div class="username"> - {name2} - </div> - <div class="message-body"> - {p} - </div> - </div> - </div>
      - """ - - if not (i == len(history)-1 and len(row[0]) == 0): - p = '\n'.join([f"

<p>{x}</p>" for x in row[0].split('\n')]) - output += f""" - <div class="message"> - <div class="circle-you"> - {img_me} - </div> - <div class="text"> - <div class="username"> - {name1} - </div> - <div class="message-body"> - {p} - </div> - </div> - </div>
- """ - - output += "</div>
      " - return output diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/criss/mining/mine_example.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/criss/mining/mine_example.sh deleted file mode 100644 index ace995ac44665f99d904b6a89d7fbbce24103afe..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/criss/mining/mine_example.sh +++ /dev/null @@ -1,103 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# -source_lang=kk_KZ -target_lang=en_XX -MODEL=criss_checkpoints/criss.3rd.pt -SPM=criss_checkpoints/sentence.bpe.model -SPLIT=test -LANG_DICT=criss_checkpoints/lang_dict.txt -SPM_ENCODE=flores/scripts/spm_encode.py -SAVE_ENCODER=save_encoder.py -ENCODER_SAVE_ROOT=sentence_embeddings/$MODEL -DICT=criss_checkpoints/dict.txt -THRESHOLD=1.02 -MIN_COUNT=500 - -DATA_DIR=data_tmp -SAVE_DIR=mining/${source_lang}_${target_lang}_mined -ENCODER_SAVE_DIR=${ENCODER_SAVE_ROOT}/${source_lang}-${target_lang} -INPUT_DIR=$DATA_DIR/${source_lang}-${target_lang}-tatoeba - -mkdir -p $ENCODER_SAVE_DIR/${target_lang} -mkdir -p $ENCODER_SAVE_DIR/${source_lang} -mkdir -p $SAVE_DIR - -## Save encoder outputs - -# Save encoder outputs for source sentences -python $SAVE_ENCODER \ - ${INPUT_DIR} \ - --path ${MODEL} \ - --task translation_multi_simple_epoch \ - --lang-pairs ${source_lang}-${target_lang} \ - --lang-dict ${LANG_DICT} \ - --gen-subset ${SPLIT} \ - --bpe 'sentencepiece' \ - -s ${source_lang} -t ${target_lang} \ - --sentencepiece-model ${SPM} \ - --remove-bpe 'sentencepiece' \ - --beam 1 \ - --lang-tok-style mbart \ - --encoder-save-dir ${ENCODER_SAVE_DIR}/${source_lang} - -## Save encoder outputs for target sentences -python $SAVE_ENCODER \ - ${INPUT_DIR} \ - --path ${MODEL} \ - --lang-pairs ${source_lang}-${target_lang} \ - --lang-dict ${LANG_DICT} \ - --task translation_multi_simple_epoch \ - --gen-subset ${SPLIT} \ - --bpe 'sentencepiece' \ - -t ${source_lang} -s ${target_lang} \ - --sentencepiece-model ${SPM} \ - --remove-bpe 'sentencepiece' \ - --beam 1 \ - --lang-tok-style mbart \ - --encoder-save-dir ${ENCODER_SAVE_DIR}/${target_lang} - -## Mining -python mining/mine.py \ - --src-lang ${source_lang} \ - --tgt-lang ${target_lang} \ - --dim 1024 \ - --mem 10 \ - --neighborhood 4 \ - --src-dir ${ENCODER_SAVE_DIR}/${source_lang} \ - --tgt-dir ${ENCODER_SAVE_DIR}/${target_lang} \ - --output $SAVE_DIR \ - --threshold ${THRESHOLD} \ - --min-count ${MIN_COUNT} \ - --valid-size 100 \ - --dict-path ${DICT} \ - --spm-path ${SPM} \ - - -## Process and binarize mined data -python $SPM_ENCODE \ - --model ${SPM} \ - --output_format=piece \ - --inputs mining/${source_lang}_${target_lang}_mined/train.${source_lang} mining/${source_lang}_${target_lang}_mined/train.${target_lang} \ - --outputs mining/${source_lang}_${target_lang}_mined/train.bpe.${source_lang} mining/${source_lang}_${target_lang}_mined/train.bpe.${target_lang} - -python $SPM_ENCODE \ - --model ${SPM} \ - --output_format=piece \ - --inputs mining/${source_lang}_${target_lang}_mined/valid.${source_lang} mining/${source_lang}_${target_lang}_mined/valid.${target_lang} \ - --outputs mining/${source_lang}_${target_lang}_mined/valid.bpe.${source_lang} mining/${source_lang}_${target_lang}_mined/valid.bpe.${target_lang} - - -fairseq-preprocess \ - --source-lang ${source_lang} \ - --target-lang ${target_lang} \ - --trainpref 
mining/${source_lang}_${target_lang}_mined/train.bpe \ - --validpref mining/${source_lang}_${target_lang}_mined/valid.bpe \ - --destdir mining/${source_lang}_${target_lang}_mined \ - --srcdict ${DICT} \ - --joined-dictionary \ - --workers 8 diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py deleted file mode 100644 index 6fff4faf44a92d42504559ecea8ec1047d2e5f14..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import csv -import io -import logging -import os -import os.path as op -import sys - -from dump_hubert_feature import HubertFeatureReader -from feature_utils import get_shard_range, dump_feature -from fairseq.data.audio.audio_utils import get_waveform -from fairseq.data.audio.speech_to_text_dataset import ( - read_from_uncompressed_zip, -) - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_hubert_feature_s2t") - - -class HubertFeatureReaderS2T(HubertFeatureReader): - def read_audio(self, path, ref_len=None): - path, *extra = path.split(":") - assert len(extra) == 2 - assert path.endswith(".zip") - - data = read_from_uncompressed_zip(path, int(extra[0]), int(extra[1])) - f = io.BytesIO(data) - wav, sr = get_waveform(f) - assert sr == self.task.cfg.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - -def get_path_iterator(root, tsv, nshard, rank): - with open(tsv) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - subpaths = [op.join(root, e["audio"]) for e in reader] - start, end = get_shard_range(len(subpaths), nshard, rank) - subpaths = subpaths[start:end] - def iterate(): - for subpath in subpaths: - yield op.join(root, subpath), None - return iterate, len(subpaths) - - -def main( - root, tsv_path, ckpt_path, layer, nshard, rank, feat_dir, split, max_chunk -): - reader = HubertFeatureReaderS2T(ckpt_path, layer, max_chunk) - generator, num = get_path_iterator(root, tsv_path, nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("root") - parser.add_argument("tsv_path") - parser.add_argument("ckpt_path") - parser.add_argument("layer", type=int) - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("split") - parser.add_argument("--max_chunk", type=int, default=1600000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/byte_level_bpe/README.md b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/byte_level_bpe/README.md deleted file mode 100644 index 
657092660eae42d20f67647417623b8b8cb7b66c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/byte_level_bpe/README.md +++ /dev/null @@ -1,88 +0,0 @@ -# Neural Machine Translation with Byte-Level Subwords - -https://arxiv.org/abs/1909.03341 - -We provide an implementation of byte-level byte-pair encoding (BBPE), taking IWSLT 2017 Fr-En translation as -example. - -## Data -Get data and generate fairseq binary dataset: -```bash -bash ./get_data.sh -``` - -## Model Training -Train Transformer model with Bi-GRU embedding contextualization (implemented in `gru_transformer.py`): -```bash -# VOCAB=bytes -# VOCAB=chars -VOCAB=bbpe2048 -# VOCAB=bpe2048 -# VOCAB=bbpe4096 -# VOCAB=bpe4096 -# VOCAB=bpe16384 -``` -```bash -fairseq-train "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --arch gru_transformer --encoder-layers 2 --decoder-layers 2 --dropout 0.3 --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --log-format 'simple' --log-interval 100 --save-dir "checkpoints/${VOCAB}" \ - --batch-size 100 --max-update 100000 --update-freq 2 -``` - -## Generation -`fairseq-generate` requires bytes (BBPE) decoder to convert byte-level representation back to characters: -```bash -# BPE=--bpe bytes -# BPE=--bpe characters -BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe2048.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe2048.model -# BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe16384.model -``` - -```bash -fairseq-generate "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --source-lang fr --gen-subset test --sacrebleu --path "checkpoints/${VOCAB}/checkpoint_last.pt" \ - --tokenizer moses --moses-target-lang en ${BPE} -``` -When using `fairseq-interactive`, bytes (BBPE) encoder/decoder is required to tokenize input data and detokenize model predictions: -```bash -fairseq-interactive "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --path "checkpoints/${VOCAB}/checkpoint_last.pt" --input data/test.fr --tokenizer moses --moses-source-lang fr \ - --moses-target-lang en ${BPE} --buffer-size 1000 --max-tokens 10000 -``` - -## Results -| Vocabulary | Model | BLEU | -|:-------------:|:-------------:|:-------------:| -| Joint BPE 16k ([Kudo, 2018](https://arxiv.org/abs/1804.10959)) | 512d LSTM 2+2 | 33.81 | -| Joint BPE 16k | Transformer base 2+2 (w/ GRU) | 36.64 (36.72) | -| Joint BPE 4k | Transformer base 2+2 (w/ GRU) | 35.49 (36.10) | -| Joint BBPE 4k | Transformer base 2+2 (w/ GRU) | 35.61 (35.82) | -| Joint BPE 2k | Transformer base 2+2 (w/ GRU) | 34.87 (36.13) | -| Joint BBPE 2k | Transformer base 2+2 (w/ GRU) | 34.98 (35.43) | -| Characters | Transformer base 2+2 (w/ GRU) | 31.78 (33.30) | -| Bytes | Transformer base 2+2 (w/ GRU) | 31.57 (33.62) | - - -## Citation -``` -@misc{wang2019neural, - title={Neural Machine Translation with Byte-Level Subwords}, - author={Changhan Wang and Kyunghyun Cho and Jiatao Gu}, - year={2019}, - eprint={1909.03341}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` - - -## Contact -Changhan Wang ([changhan@fb.com](mailto:changhan@fb.com)), -Kyunghyun Cho 
([kyunghyuncho@fb.com](mailto:kyunghyuncho@fb.com)), -Jiatao Gu ([jgu@fb.com](mailto:jgu@fb.com)) diff --git a/spaces/OFA-Sys/OFA-vqa/models/ofa/resnet.py b/spaces/OFA-Sys/OFA-vqa/models/ofa/resnet.py deleted file mode 100644 index 9ad8ee87de4bb579d745ab8302a368ca1749a1fe..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/models/ofa/resnet.py +++ /dev/null @@ -1,225 +0,0 @@ -import torch -import torch.nn as nn - - -def drop_path(x, drop_prob: float = 0., training: bool = False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - This is the same as the DropConnect impl I created for EfficientNet, etc networks, however, - the original name is misleading as 'Drop Connect' is a.sh different form of dropout in a.sh separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for - changing the layer and argument names to 'drop path' rather than mix DropConnect as a.sh layer name and use - 'survival rate' as the argument. - """ - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(keep_prob) * random_tensor - return output - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - """ - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, - base_width=64, dilation=1, norm_layer=None): - super(BasicBlock, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - assert False - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2) - # while original implementation places the stride at the first 1x1 convolution(self.conv1) - # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385. 
- # This variant is also known as ResNet V1.5 and improves accuracy according to - # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch. - - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1, - base_width=64, dilation=1, norm_layer=None, drop_path_rate=0.0): - super(Bottleneck, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - width = int(planes * (base_width / 64.)) * groups - # Both self.conv2 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv1x1(inplanes, width) - self.bn1 = norm_layer(width) - self.conv2 = conv3x3(width, width, stride, groups, dilation) - self.bn2 = norm_layer(width) - self.conv3 = conv1x1(width, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity() - - def forward(self, x): - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out = identity + self.drop_path(out) - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__(self, layers, zero_init_residual=False, - groups=1, width_per_group=64, replace_stride_with_dilation=None, - norm_layer=None, drop_path_rate=0.0): - super(ResNet, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - # each element in the tuple indicates if we should replace - # the 2x2 stride with a dilated convolution instead - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(Bottleneck, 64, layers[0], drop_path_rate=drop_path_rate) - self.layer2 = self._make_layer(Bottleneck, 128, layers[1], stride=2, - dilate=replace_stride_with_dilation[0], drop_path_rate=drop_path_rate) - self.layer3 = self._make_layer(Bottleneck, 256, layers[2], stride=2, - dilate=replace_stride_with_dilation[1], drop_path_rate=drop_path_rate) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.SyncBatchNorm, nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. 
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - nn.init.constant_(m.bn3.weight, 0) - elif isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer(self, block, planes, blocks, stride=1, dilate=False, drop_path_rate=0.0): - norm_layer = self._norm_layer - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - norm_layer(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation, norm_layer)) - self.inplanes = planes * block.expansion - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, blocks)] - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=self.groups, - base_width=self.base_width, dilation=self.dilation, - norm_layer=norm_layer, drop_path_rate=dpr[i])) - - return nn.Sequential(*layers) - - def _forward_impl(self, x): - # See note [TorchScript super()] - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - - return x - - def forward(self, x): - return self._forward_impl(x) \ No newline at end of file diff --git a/spaces/ORI-Muchim/BlueArchiveTTS/README.md b/spaces/ORI-Muchim/BlueArchiveTTS/README.md deleted file mode 100644 index 21f7c46bba9270043c550007b949b46a6e6e7167..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/BlueArchiveTTS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BlueArchiveTTS -emoji: 📉 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/models/quantize.py b/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/models/quantize.py deleted file mode 100644 index 81800337f30b993df1ebd46f536cfe7ecf88737b..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/inpainting_src/ldm_inpainting/ldm/models/quantize.py +++ /dev/null @@ -1,445 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -from torch import einsum -from einops import rearrange - - -class VectorQuantizer(nn.Module): - """ - see https://github.com/MishaLaskin/vqvae/blob/d761a999e2267766400dc646d82d3ac3657771d4/models/quantizer.py - ____________________________________________ - Discretization bottleneck part of the VQ-VAE. - Inputs: - - n_e : number of embeddings - - e_dim : dimension of embedding - - beta : commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2 - _____________________________________________ - """ - - # NOTE: this class contains a bug regarding beta; see VectorQuantizer2 for - # a fix and use legacy=False to apply that fix. VectorQuantizer2 can be - # used wherever VectorQuantizer has been used before and is additionally - # more efficient. 
- def __init__(self, n_e, e_dim, beta): - super(VectorQuantizer, self).__init__() - self.n_e = n_e - self.e_dim = e_dim - self.beta = beta - - self.embedding = nn.Embedding(self.n_e, self.e_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - def forward(self, z): - """ - Inputs the output of the encoder network z and maps it to a discrete - one-hot vector that is the index of the closest embedding vector e_j - z (continuous) -> z_q (discrete) - z.shape = (batch, channel, height, width) - quantization pipeline: - 1. get encoder input (B,C,H,W) - 2. flatten input to (B*H*W,C) - """ - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.e_dim) - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.matmul(z_flattened, self.embedding.weight.t()) - - ## could possible replace this here - # #\start... - # find closest encodings - min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1) - - min_encodings = torch.zeros( - min_encoding_indices.shape[0], self.n_e).to(z) - min_encodings.scatter_(1, min_encoding_indices, 1) - - # dtype min encodings: torch.float32 - # min_encodings shape: torch.Size([2048, 512]) - # min_encoding_indices.shape: torch.Size([2048, 1]) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape) - #.........\end - - # with: - # .........\start - #min_encoding_indices = torch.argmin(d, dim=1) - #z_q = self.embedding(min_encoding_indices) - # ......\end......... (TODO) - - # compute loss for embedding - loss = torch.mean((z_q.detach()-z)**2) + self.beta * \ - torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # perplexity - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10))) - - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - # TODO: check for more easy handling with nn.Embedding - min_encodings = torch.zeros(indices.shape[0], self.n_e).to(indices) - min_encodings.scatter_(1, indices[:,None], 1) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings.float(), self.embedding.weight) - - if shape is not None: - z_q = z_q.view(shape) - - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q - - -class GumbelQuantize(nn.Module): - """ - credit to @karpathy: https://github.com/karpathy/deep-vector-quantization/blob/main/model.py (thanks!) - Gumbel Softmax trick quantizer - Categorical Reparameterization with Gumbel-Softmax, Jang et al. 
2016 - https://arxiv.org/abs/1611.01144 - """ - def __init__(self, num_hiddens, embedding_dim, n_embed, straight_through=True, - kl_weight=5e-4, temp_init=1.0, use_vqinterface=True, - remap=None, unknown_index="random"): - super().__init__() - - self.embedding_dim = embedding_dim - self.n_embed = n_embed - - self.straight_through = straight_through - self.temperature = temp_init - self.kl_weight = kl_weight - - self.proj = nn.Conv2d(num_hiddens, n_embed, 1) - self.embed = nn.Embedding(n_embed, embedding_dim) - - self.use_vqinterface = use_vqinterface - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed+1 - print(f"Remapping {self.n_embed} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_embed - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - match = (inds[:,:,None]==used[None,None,...]).long() - new = match.argmax(-1) - unknown = match.sum(2)<1 - if self.unknown_index == "random": - new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds>=self.used.shape[0]] = 0 # simply set to zero - back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds) - return back.reshape(ishape) - - def forward(self, z, temp=None, return_logits=False): - # force hard = True when we are in eval mode, as we must quantize. actually, always true seems to work - hard = self.straight_through if self.training else True - temp = self.temperature if temp is None else temp - - logits = self.proj(z) - if self.remap is not None: - # continue only with used logits - full_zeros = torch.zeros_like(logits) - logits = logits[:,self.used,...] - - soft_one_hot = F.gumbel_softmax(logits, tau=temp, dim=1, hard=hard) - if self.remap is not None: - # go back to all entries but unused set to zero - full_zeros[:,self.used,...] = soft_one_hot - soft_one_hot = full_zeros - z_q = einsum('b n h w, n d -> b d h w', soft_one_hot, self.embed.weight) - - # + kl divergence to the prior loss - qy = F.softmax(logits, dim=1) - diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.n_embed + 1e-10), dim=1).mean() - - ind = soft_one_hot.argmax(dim=1) - if self.remap is not None: - ind = self.remap_to_used(ind) - if self.use_vqinterface: - if return_logits: - return z_q, diff, (None, None, ind), logits - return z_q, diff, (None, None, ind) - return z_q, diff, ind - - def get_codebook_entry(self, indices, shape): - b, h, w, c = shape - assert b*h*w == indices.shape[0] - indices = rearrange(indices, '(b h w) -> b h w', b=b, h=h, w=w) - if self.remap is not None: - indices = self.unmap_to_all(indices) - one_hot = F.one_hot(indices, num_classes=self.n_embed).permute(0, 3, 1, 2).float() - z_q = einsum('b n h w, n d -> b d h w', one_hot, self.embed.weight) - return z_q - - -class VectorQuantizer2(nn.Module): - """ - Improved version over VectorQuantizer, can be used as a drop-in replacement. 
Mostly - avoids costly matrix multiplications and allows for post-hoc remapping of indices. - """ - # NOTE: due to a bug the beta term was applied to the wrong term. for - # backwards compatibility we use the buggy version by default, but you can - # specify legacy=False to fix it. - def __init__(self, n_e, e_dim, beta, remap=None, unknown_index="random", - sane_index_shape=False, legacy=True): - super().__init__() - self.n_e = n_e - self.e_dim = e_dim - self.beta = beta - self.legacy = legacy - - self.embedding = nn.Embedding(self.n_e, self.e_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed+1 - print(f"Remapping {self.n_e} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_e - - self.sane_index_shape = sane_index_shape - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - match = (inds[:,:,None]==used[None,None,...]).long() - new = match.argmax(-1) - unknown = match.sum(2)<1 - if self.unknown_index == "random": - new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds>=self.used.shape[0]] = 0 # simply set to zero - back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds) - return back.reshape(ishape) - - def forward(self, z, temp=None, rescale_logits=False, return_logits=False): - assert temp is None or temp==1.0, "Only for interface compatible with Gumbel" - assert rescale_logits==False, "Only for interface compatible with Gumbel" - assert return_logits==False, "Only for interface compatible with Gumbel" - # reshape z -> (batch, height, width, channel) and flatten - z = rearrange(z, 'b c h w -> b h w c').contiguous() - z_flattened = z.view(-1, self.e_dim) - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.einsum('bd,dn->bn', z_flattened, rearrange(self.embedding.weight, 'n d -> d n')) - - min_encoding_indices = torch.argmin(d, dim=1) - z_q = self.embedding(min_encoding_indices).view(z.shape) - perplexity = None - min_encodings = None - - # compute loss for embedding - if not self.legacy: - loss = self.beta * torch.mean((z_q.detach()-z)**2) + \ - torch.mean((z_q - z.detach()) ** 2) - else: - loss = torch.mean((z_q.detach()-z)**2) + self.beta * \ - torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - z_q = rearrange(z_q, 'b h w c -> b c h w').contiguous() - - if self.remap is not None: - min_encoding_indices = min_encoding_indices.reshape(z.shape[0],-1) # add batch axis - min_encoding_indices = self.remap_to_used(min_encoding_indices) - min_encoding_indices = min_encoding_indices.reshape(-1,1) # flatten - - if self.sane_index_shape: - 
min_encoding_indices = min_encoding_indices.reshape( - z_q.shape[0], z_q.shape[2], z_q.shape[3]) - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - if self.remap is not None: - indices = indices.reshape(shape[0],-1) # add batch axis - indices = self.unmap_to_all(indices) - indices = indices.reshape(-1) # flatten again - - # get quantized latent vectors - z_q = self.embedding(indices) - - if shape is not None: - z_q = z_q.view(shape) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q - -class EmbeddingEMA(nn.Module): - def __init__(self, num_tokens, codebook_dim, decay=0.99, eps=1e-5): - super().__init__() - self.decay = decay - self.eps = eps - weight = torch.randn(num_tokens, codebook_dim) - self.weight = nn.Parameter(weight, requires_grad = False) - self.cluster_size = nn.Parameter(torch.zeros(num_tokens), requires_grad = False) - self.embed_avg = nn.Parameter(weight.clone(), requires_grad = False) - self.update = True - - def forward(self, embed_id): - return F.embedding(embed_id, self.weight) - - def cluster_size_ema_update(self, new_cluster_size): - self.cluster_size.data.mul_(self.decay).add_(new_cluster_size, alpha=1 - self.decay) - - def embed_avg_ema_update(self, new_embed_avg): - self.embed_avg.data.mul_(self.decay).add_(new_embed_avg, alpha=1 - self.decay) - - def weight_update(self, num_tokens): - n = self.cluster_size.sum() - smoothed_cluster_size = ( - (self.cluster_size + self.eps) / (n + num_tokens * self.eps) * n - ) - #normalize embedding average with smoothed cluster size - embed_normalized = self.embed_avg / smoothed_cluster_size.unsqueeze(1) - self.weight.data.copy_(embed_normalized) - - -class EMAVectorQuantizer(nn.Module): - def __init__(self, n_embed, embedding_dim, beta, decay=0.99, eps=1e-5, - remap=None, unknown_index="random"): - super().__init__() - self.codebook_dim = codebook_dim - self.num_tokens = num_tokens - self.beta = beta - self.embedding = EmbeddingEMA(self.num_tokens, self.codebook_dim, decay, eps) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed+1 - print(f"Remapping {self.n_embed} indices to {self.re_embed} indices. 
" - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_embed - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - match = (inds[:,:,None]==used[None,None,...]).long() - new = match.argmax(-1) - unknown = match.sum(2)<1 - if self.unknown_index == "random": - new[unknown]=torch.randint(0,self.re_embed,size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape)>1 - inds = inds.reshape(ishape[0],-1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds>=self.used.shape[0]] = 0 # simply set to zero - back=torch.gather(used[None,:][inds.shape[0]*[0],:], 1, inds) - return back.reshape(ishape) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - #z, 'b c h w -> b h w c' - z = rearrange(z, 'b c h w -> b h w c') - z_flattened = z.reshape(-1, self.codebook_dim) - - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - d = z_flattened.pow(2).sum(dim=1, keepdim=True) + \ - self.embedding.weight.pow(2).sum(dim=1) - 2 * \ - torch.einsum('bd,nd->bn', z_flattened, self.embedding.weight) # 'n d -> d n' - - - encoding_indices = torch.argmin(d, dim=1) - - z_q = self.embedding(encoding_indices).view(z.shape) - encodings = F.one_hot(encoding_indices, self.num_tokens).type(z.dtype) - avg_probs = torch.mean(encodings, dim=0) - perplexity = torch.exp(-torch.sum(avg_probs * torch.log(avg_probs + 1e-10))) - - if self.training and self.embedding.update: - #EMA cluster size - encodings_sum = encodings.sum(0) - self.embedding.cluster_size_ema_update(encodings_sum) - #EMA embedding average - embed_sum = encodings.transpose(0,1) @ z_flattened - self.embedding.embed_avg_ema_update(embed_sum) - #normalize embed_avg and update weight - self.embedding.weight_update(self.num_tokens) - - # compute loss for embedding - loss = self.beta * F.mse_loss(z_q.detach(), z) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - #z_q, 'b h w c -> b c h w' - z_q = rearrange(z_q, 'b h w c -> b c h w') - return z_q, loss, (perplexity, encodings, encoding_indices) \ No newline at end of file diff --git a/spaces/PAIR/Text2Video-Zero/app_depth.py b/spaces/PAIR/Text2Video-Zero/app_depth.py deleted file mode 100644 index 5931891bc321131cbd8a35064b9641fe221cc811..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/app_depth.py +++ /dev/null @@ -1,91 +0,0 @@ -import gradio as gr -from model import Model -import os -on_huggingspace = os.environ.get("SPACE_AUTHOR_NAME") == "PAIR" - - -def create_demo(model: Model): - - examples = [ - ["__assets__/depth_videos_depth/girl_dancing.mp4", - "A stormtrooper, masterpiece, a high-quality, detailed, and professional photo"], - ["__assets__/depth_videos_depth/girl_dancing.mp4", - "Oil painting of a catwoman, masterpiece, a high-quality, detailed, and professional photo"], - ["__assets__/depth_videos_depth/girl_dancing.mp4", - "Oil painting of a girl dancing closed eyes, masterpiece, a high-quality, detailed, and professional photo"], - - ["__assets__/depth_videos_depth/woman.mp4", - "A robot is dancing in the Sahara desert, detailed, and professional photo"], - ["__assets__/depth_videos_depth/woman.mp4", - "Wonder woman is dancing, a high-quality, detailed, and professional photo"], - 
["__assets__/depth_videos_depth/woman.mp4", - "Oil painting of a girl dancing close-up, masterpiece, a high-quality, detailed, and professional photo"], - - ["__assets__/depth_videos_depth/man.mp4", - "An astronaut is Dancing in space, a high-quality, detailed, and professional photo"], - ["__assets__/depth_videos_depth/man.mp4", - "Iron Man is dancing, a high-quality, detailed, and professional photo"], - ["__assets__/depth_videos_depth/man.mp4", - "Spiderman is Dancing, a high-quality, detailed, and professional photo"], - - ["__assets__/depth_videos_depth/halloween.mp4", - "Beautiful blonde girl, a high-quality, detailed, and professional photo"], - ["__assets__/depth_videos_depth/halloween.mp4", - "Beautiful brunette girl, a high-quality, detailed, and professional photo"], - ["__assets__/depth_videos_depth/halloween.mp4", - "Beautiful red-haired girl, a high-quality, detailed, and professional photo"], - ] - - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown('## Text and Depth Conditional Video Generation') - with gr.Row(): - gr.HTML( - """ -
      - Description: For performance purposes, our current preview release supports any input videos but caps output videos after 80 frames and the input videos are scaled down before processing.
      - """) - - with gr.Row(): - with gr.Column(): - input_video = gr.Video( - label="Input Video", source='upload', format="mp4", visible=True).style(height="auto") - with gr.Column(): - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Accordion('Advanced options', open=False): - watermark = gr.Radio(["Picsart AI Research", "Text2Video-Zero", - "None"], label="Watermark", value='Picsart AI Research') - chunk_size = gr.Slider( - label="Chunk size", minimum=2, maximum=16, value=2, step=1, visible=not on_huggingspace, - info="Number of frames processed at once. Reduce for lower memory usage.") - merging_ratio = gr.Slider( - label="Merging ratio", minimum=0.0, maximum=0.9, step=0.1, value=0.0, visible=not on_huggingspace, - info="Ratio of how many tokens are merged. The higher the more compression (less memory and faster inference).") - with gr.Column(): - result = gr.Video(label="Generated Video").style(height="auto") - - inputs = [ - input_video, - prompt, - chunk_size, - watermark, - merging_ratio, - ] - - gr.Examples(examples=examples, - inputs=inputs, - outputs=result, - fn=model.process_controlnet_depth, - # cache_examples=on_huggingspace, - cache_examples=False, - run_on_click=False, - ) - - run_button.click(fn=model.process_controlnet_depth, - inputs=inputs, - outputs=result,) - return demo diff --git a/spaces/PKUWilliamYang/StyleGANEX/configs/transforms_config.py b/spaces/PKUWilliamYang/StyleGANEX/configs/transforms_config.py deleted file mode 100644 index 0af0404f4f59c79e5f672205031470bdab013622..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/configs/transforms_config.py +++ /dev/null @@ -1,242 +0,0 @@ -from abc import abstractmethod -import torchvision.transforms as transforms -from datasets import augmentations - - -class TransformsConfig(object): - - def __init__(self, opts): - self.opts = opts - - @abstractmethod - def get_transforms(self): - pass - - -class EncodeTransforms(TransformsConfig): - - def __init__(self, opts): - super(EncodeTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': None, - 'transform_test': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class FrontalizationTransforms(TransformsConfig): - - def __init__(self, opts): - super(FrontalizationTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - 
transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class SketchToImageTransforms(TransformsConfig): - - def __init__(self, opts): - super(SketchToImageTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor()]), - 'transform_test': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor()]), - } - return transforms_dict - - -class SegToImageTransforms(TransformsConfig): - - def __init__(self, opts): - super(SegToImageTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.ToOneHot(self.opts.label_nc), - transforms.ToTensor()]), - 'transform_test': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.ToOneHot(self.opts.label_nc), - transforms.ToTensor()]) - } - return transforms_dict - - -class SuperResTransforms(TransformsConfig): - - def __init__(self, opts): - super(SuperResTransforms, self).__init__(opts) - - def get_transforms(self): - if self.opts.resize_factors is None: - self.opts.resize_factors = '1,2,4,8,16,32' - factors = [int(f) for f in self.opts.resize_factors.split(",")] - print("Performing down-sampling with factors: {}".format(factors)) - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((1280, 1280)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((1280, 1280)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class SuperResTransforms_320(TransformsConfig): - - def __init__(self, opts): - super(SuperResTransforms_320, self).__init__(opts) - - def get_transforms(self): - if self.opts.resize_factors is None: - self.opts.resize_factors = '1,2,4,8,16,32' - factors = [int(f) for f in self.opts.resize_factors.split(",")] - print("Performing down-sampling with factors: {}".format(factors)) - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - 
transforms.Resize((320, 320)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class ToonifyTransforms(TransformsConfig): - - def __init__(self, opts): - super(ToonifyTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((1024, 1024)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((1024, 1024)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - -class EditingTransforms(TransformsConfig): - - def __init__(self, opts): - super(EditingTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((1280, 1280)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((1280, 1280)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict \ No newline at end of file diff --git a/spaces/PKaushik/HumanCounter/README.md b/spaces/PKaushik/HumanCounter/README.md deleted file mode 100644 index 37e98a609c77f64ac8688bf2950a656ec24bd5f7..0000000000000000000000000000000000000000 --- a/spaces/PKaushik/HumanCounter/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Human Counting -emoji: 📊 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.0.5 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Patt/demo_eng_ara_translate/README.md b/spaces/Patt/demo_eng_ara_translate/README.md deleted file mode 100644 index 5f3c7659415876a72f0ccaa72c28f6c6327207cc..0000000000000000000000000000000000000000 --- a/spaces/Patt/demo_eng_ara_translate/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Demo Eng Ara Translate -emoji: 🦀 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/streams.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/streams.go deleted file mode 100644 index 755e602a37ff2ccc217078f4688fb53389d29104..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/streams.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/Bark-Voice-Cloning/Dockerfile b/spaces/PeepDaSlan9/Bark-Voice-Cloning/Dockerfile deleted file mode 100644 index 00b1196aa099cc58dbbc3bc37d09af3d1e7031e6..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/Bark-Voice-Cloning/Dockerfile +++ /dev/null @@ -1,38 +0,0 @@ -FROM debian:stable - -# Install system packages -RUN apt update && apt install -y git pip - -# Create non-root user -RUN useradd -m -d /bark bark - -# Run as new user -USER bark -WORKDIR /bark - -# Clone git repo -RUN git clone https://github.com/C0untFloyd/bark-gui - -# Switch to git directory -WORKDIR /bark/bark-gui - -# Append pip bin path to PATH -ENV PATH=$PATH:/bark/.local/bin - -# Install dependancies -RUN pip install . -RUN pip install -r requirements.txt - -# List on all addresses, since we are in a container. -RUN sed -i "s/server_name: ''/server_name: 0.0.0.0/g" ./config.yaml - -# Suggested volumes -VOLUME /bark/bark-gui/assets/prompts/custom -VOLUME /bark/bark-gui/models -VOLUME /bark/.cache/huggingface/hub - -# Default port for web-ui -EXPOSE 7860/tcp - -# Start script -CMD python3 webui.py diff --git a/spaces/Pengyey/bingo-chuchu/src/pages/api/create.ts b/spaces/Pengyey/bingo-chuchu/src/pages/api/create.ts deleted file mode 100644 index 30d47d2ea34d72b669e01d04281302fd6105f764..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/pages/api/create.ts +++ /dev/null @@ -1,50 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { fetch, debug } from '@/lib/isomorphic' -import { createHeaders, randomIP } from '@/lib/utils' -import { sleep } from '@/lib/bots/bing/utils' - -const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create' -// const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create'; - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - let count = 0 - let { BING_IP, ...cookies } = req.cookies - do { - const headers = createHeaders({ - ...cookies, - BING_IP: BING_IP || randomIP(), - }) - const response = await fetch(API_ENDPOINT, { method: 'GET', headers }) - if (response.status === 200) { - res.setHeader('set-cookie', [headers.cookie, `BING_IP=${headers['x-forwarded-for']}`] - .map(cookie => `${cookie}; Max-Age=${86400 * 30}; Path=/; SameSite=None; Secure`)) - debug('headers', headers) - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - res.end(await response.text()) - return - } - BING_IP = '' - await sleep(2000) - debug('loop', count) - } while(count++ < 10) - res.end(JSON.stringify({ - result: { - value: 'TryLater', - message: `Please try again after a while` - } - })) - } catch (e) { - console.log('error', e) - return res.end(JSON.stringify({ - result: { - value: 'UnauthorizedRequest', - message: `${e}` - } - })) - } -} diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/swish.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/swish.py deleted file mode 100644 index 
e2ca8ed7b749413f011ae54aac0cab27e6f0b51f..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/swish.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class Swish(nn.Module): - """Swish Module. - - This module applies the swish function: - - .. math:: - Swish(x) = x * Sigmoid(x) - - Returns: - Tensor: The output tensor. - """ - - def __init__(self): - super(Swish, self).__init__() - - def forward(self, x): - return x * torch.sigmoid(x) diff --git a/spaces/Pie31415/control-animation/webui/app_pose.py b/spaces/Pie31415/control-animation/webui/app_pose.py deleted file mode 100644 index df61763a2fc2f8fa66165131ea4b85ad7f6eb62a..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/webui/app_pose.py +++ /dev/null @@ -1,105 +0,0 @@ -from text_to_animation.model import ControlAnimationModel -import gradio as gr -import os - -huggingspace_name = os.environ.get("SPACE_AUTHOR_NAME") -on_huggingspace = huggingspace_name if huggingspace_name is not None else False - -examples = [ - ["Motion 1", "An astronaut dancing in the outer space"], - ["Motion 2", "An astronaut dancing in the outer space"], - ["Motion 3", "An astronaut dancing in the outer space"], - ["Motion 4", "An astronaut dancing in the outer space"], - ["Motion 5", "An astronaut dancing in the outer space"], -] - - -def create_demo(model: ControlAnimationModel): - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown("## Text and Pose Conditional Video Generation") - - with gr.Row(): - gr.Markdown( - "Selection: **one motion** and a **prompt**, or use the examples below." - ) - with gr.Column(): - gallery_pose_sequence = gr.Gallery( - label="Pose Sequence", - value=[ - ("__assets__/dance1.gif", "Motion 1"), - ("__assets__/dance2.gif", "Motion 2"), - ("__assets__/dance3.gif", "Motion 3"), - ("__assets__/dance4.gif", "Motion 4"), - ("__assets__/dance5.gif", "Motion 5"), - ], - ).style(grid=[2], height="auto") - input_video_path = gr.Textbox( - label="Pose Sequence", visible=False, value="Motion 1" - ) - gr.Markdown("## Selection") - pose_sequence_selector = gr.Markdown("Pose Sequence: **Motion 1**") - with gr.Column(): - prompt = gr.Textbox(label="Prompt") - run_button = gr.Button(label="Run") - with gr.Accordion("Advanced options", open=False): - watermark = gr.Radio( - ["Picsart AI Research", "Text2Video-Zero", "None"], - label="Watermark", - value="Picsart AI Research", - ) - chunk_size = gr.Slider( - label="Chunk size", - minimum=2, - maximum=16, - value=8, - step=1, - visible=not on_huggingspace, - info="Number of frames processed at once. Reduce for lower memory usage.", - ) - merging_ratio = gr.Slider( - label="Merging ratio", - minimum=0.0, - maximum=0.9, - step=0.1, - value=0.0, - visible=not on_huggingspace, - info="Ratio of how many tokens are merged. 
The higher the more compression (less memory and faster inference).", - ) - with gr.Column(): - result = gr.Image(label="Generated Video") - - input_video_path.change(on_video_path_update, None, pose_sequence_selector) - gallery_pose_sequence.select(pose_gallery_callback, None, input_video_path) - inputs = [ - input_video_path, - prompt, - chunk_size, - # watermark, - # merging_ratio, - ] - - gr.Examples( - examples=examples, - inputs=inputs, - outputs=result, - fn=model.process_controlnet_pose, - cache_examples=on_huggingspace, - run_on_click=False, - ) - - run_button.click( - fn=model.process_controlnet_pose, - inputs=inputs, - outputs=result, - ) - - return demo - - -def on_video_path_update(evt: gr.EventData): - return f"Selection: **{evt._data}**" - - -def pose_gallery_callback(evt: gr.SelectData): - return f"Motion {evt.index+1}" diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/comm.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/comm.py deleted file mode 100644 index 9a5a69bd8005ff649329d5b8fb46b87ceac2b8ae..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/comm.py +++ /dev/null @@ -1,157 +0,0 @@ -""" -This file contains primitives for multi-gpu communication. -This is useful when doing distributed training. -""" - -import pickle -import time -import functools -import logging -import torch -import torch.distributed as dist -import numpy as np - - -def get_world_size(): - if not dist.is_available(): - return 1 - if not dist.is_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank(): - if not dist.is_available(): - return 0 - if not dist.is_initialized(): - return 0 - return dist.get_rank() - - -def is_main_process(): - return get_rank() == 0 - - -def synchronize(): - """ - Helper function to synchronize (barrier) among all processes when - using distributed training - """ - if not dist.is_available(): - return - if not dist.is_initialized(): - return - world_size = dist.get_world_size() - if world_size == 1: - return - dist.barrier() - - -def all_gather(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to("cuda") - - # obtain Tensor size of each rank - local_size = torch.LongTensor([tensor.numel()]).to("cuda") - size_list = [torch.LongTensor([0]).to("cuda") for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.ByteTensor(size=(max_size,)).to("cuda")) - if local_size != max_size: - padding = torch.ByteTensor(size=(max_size - local_size,)).to("cuda") - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_dict(input_dict, average=True): - """ - Args: - 
input_dict (dict): all the values will be reduced - average (bool): whether to do average or sum - Reduce the values in the dictionary from all processes so that process with rank - 0 has the averaged results. Returns a dict with the same fields as - input_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.reduce(values, dst=0) - if dist.get_rank() == 0 and average: - # only main process gets accumulated, so only divide by - # world_size in this case - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict - - -def broadcast_data(data): - if not torch.distributed.is_initialized(): - return data - rank = dist.get_rank() - if rank == 0: - data_tensor = torch.tensor(data + [0], device="cuda") - else: - data_tensor = torch.tensor(data + [1], device="cuda") - torch.distributed.broadcast(data_tensor, 0) - while data_tensor.cpu().numpy()[-1] == 1: - time.sleep(1) - - return data_tensor.cpu().numpy().tolist()[:-1] - - -def reduce_sum(tensor): - if get_world_size() <= 1: - return tensor - - tensor = tensor.clone() - dist.all_reduce(tensor, op=dist.ReduceOp.SUM) - return tensor - - -def shared_random_seed(): - """ - Returns: - int: a random number that is the same across all workers. - If workers need a shared RNG, they can use this shared seed to - create one. - - All workers must call this function, otherwise it will deadlock. - """ - ints = np.random.randint(2 ** 31) - all_ints = all_gather(ints) - return all_ints[0] \ No newline at end of file diff --git a/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/components/body/langchain_PDF.py b/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/components/body/langchain_PDF.py deleted file mode 100644 index 72c1c158f23f23dc4ae9e48f53831729401607ca..0000000000000000000000000000000000000000 --- a/spaces/PrabhuKiranKonda/Streamlit-PDF-Assistant-Docker/components/body/langchain_PDF.py +++ /dev/null @@ -1,51 +0,0 @@ -from PyPDF2 import PdfReader -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.text_splitter import CharacterTextSplitter -from langchain.vectorstores import FAISS -from langchain.chains.question_answering import load_qa_chain -from langchain.llms import OpenAI -import streamlit as st - -def get_response_from_OpenAI_LangChain(uploaded_file, prompt): - - try: - reader = PdfReader(uploaded_file) - - raw_text = "" - for page in reader.pages: - text = page.extract_text() - if text: - raw_text += text - - text_splitter = CharacterTextSplitter(separator = "\n", - chunk_size = 1000, - chunk_overlap = 200, - length_function = len) - - texts = text_splitter.split_text(raw_text) - with st.spinner('Processing Embeddings...'): - embeddings = OpenAIEmbeddings() - doc_search = FAISS.from_texts(texts, embeddings) - chain = load_qa_chain(OpenAI(), chain_type='map_reduce') - - query = prompt - docs = doc_search.similarity_search(query) - - with st.spinner('Generating Answer...'): - response = chain.run(input_documents=docs, question=query) # response - from components.sidebar.Auth import upload_data - - data = {"prompt": prompt, - "response": response} - - st.session_state['response'] = response - upload_data(st.session_state['uuid'], data, uploaded_file.name[:-4]) - return response - - except 
Exception as e: - if "You exceeded your current quota" in str(e): - st.error('Oops! You may have exceeded your API rate limit.\nPlease check you OpenAI API key usage at https://platform.openai.com/account/usage') - else: - st.error("Oops! Something went wrong. Please try again. Please check your OpenAI API key in the sidebar.") - st.stop() - return \ No newline at end of file diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/utils/autocast.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. - kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tomli/_parser.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tomli/_parser.py deleted file mode 100644 index f1bb0aa19a556725aa2ae2b8cea95489c99a9078..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/tomli/_parser.py +++ /dev/null @@ -1,691 +0,0 @@ -# SPDX-License-Identifier: MIT -# SPDX-FileCopyrightText: 2021 Taneli Hukkinen -# Licensed to PSF under a Contributor Agreement. - -from __future__ import annotations - -from collections.abc import Iterable -import string -from types import MappingProxyType -from typing import Any, BinaryIO, NamedTuple - -from ._re import ( - RE_DATETIME, - RE_LOCALTIME, - RE_NUMBER, - match_to_datetime, - match_to_localtime, - match_to_number, -) -from ._types import Key, ParseFloat, Pos - -ASCII_CTRL = frozenset(chr(i) for i in range(32)) | frozenset(chr(127)) - -# Neither of these sets include quotation mark or backslash. They are -# currently handled as separate cases in the parser functions. 
-ILLEGAL_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t") -ILLEGAL_MULTILINE_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t\n") - -ILLEGAL_LITERAL_STR_CHARS = ILLEGAL_BASIC_STR_CHARS -ILLEGAL_MULTILINE_LITERAL_STR_CHARS = ILLEGAL_MULTILINE_BASIC_STR_CHARS - -ILLEGAL_COMMENT_CHARS = ILLEGAL_BASIC_STR_CHARS - -TOML_WS = frozenset(" \t") -TOML_WS_AND_NEWLINE = TOML_WS | frozenset("\n") -BARE_KEY_CHARS = frozenset(string.ascii_letters + string.digits + "-_") -KEY_INITIAL_CHARS = BARE_KEY_CHARS | frozenset("\"'") -HEXDIGIT_CHARS = frozenset(string.hexdigits) - -BASIC_STR_ESCAPE_REPLACEMENTS = MappingProxyType( - { - "\\b": "\u0008", # backspace - "\\t": "\u0009", # tab - "\\n": "\u000A", # linefeed - "\\f": "\u000C", # form feed - "\\r": "\u000D", # carriage return - '\\"': "\u0022", # quote - "\\\\": "\u005C", # backslash - } -) - - -class TOMLDecodeError(ValueError): - """An error raised if a document is not valid TOML.""" - - -def load(__fp: BinaryIO, *, parse_float: ParseFloat = float) -> dict[str, Any]: - """Parse TOML from a binary file object.""" - b = __fp.read() - try: - s = b.decode() - except AttributeError: - raise TypeError( - "File must be opened in binary mode, e.g. use `open('foo.toml', 'rb')`" - ) from None - return loads(s, parse_float=parse_float) - - -def loads(__s: str, *, parse_float: ParseFloat = float) -> dict[str, Any]: # noqa: C901 - """Parse TOML from a string.""" - - # The spec allows converting "\r\n" to "\n", even in string - # literals. Let's do so to simplify parsing. - src = __s.replace("\r\n", "\n") - pos = 0 - out = Output(NestedDict(), Flags()) - header: Key = () - parse_float = make_safe_parse_float(parse_float) - - # Parse one statement at a time - # (typically means one line in TOML source) - while True: - # 1. Skip line leading whitespace - pos = skip_chars(src, pos, TOML_WS) - - # 2. Parse rules. Expect one of the following: - # - end of file - # - end of line - # - comment - # - key/value pair - # - append dict to list (and move to its namespace) - # - create dict (and move to its namespace) - # Skip trailing whitespace when applicable. - try: - char = src[pos] - except IndexError: - break - if char == "\n": - pos += 1 - continue - if char in KEY_INITIAL_CHARS: - pos = key_value_rule(src, pos, out, header, parse_float) - pos = skip_chars(src, pos, TOML_WS) - elif char == "[": - try: - second_char: str | None = src[pos + 1] - except IndexError: - second_char = None - out.flags.finalize_pending() - if second_char == "[": - pos, header = create_list_rule(src, pos, out) - else: - pos, header = create_dict_rule(src, pos, out) - pos = skip_chars(src, pos, TOML_WS) - elif char != "#": - raise suffixed_err(src, pos, "Invalid statement") - - # 3. Skip comment - pos = skip_comment(src, pos) - - # 4. Expect end of line or end of file - try: - char = src[pos] - except IndexError: - break - if char != "\n": - raise suffixed_err( - src, pos, "Expected newline or end of document after a statement" - ) - pos += 1 - - return out.data.dict - - -class Flags: - """Flags that map to parsed keys/namespaces.""" - - # Marks an immutable namespace (inline array or inline table). - FROZEN = 0 - # Marks a nest that has been explicitly created and can no longer - # be opened using the "[table]" syntax. 
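    # For example, this flag is what makes a duplicated header such as
    #
    #   [tool]
    #   name = "a"
    #   [tool]
    #   name = "b"
    #
    # fail with a "Cannot declare ... twice" error instead of silently merging
    # the two tables.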
- EXPLICIT_NEST = 1 - - def __init__(self) -> None: - self._flags: dict[str, dict] = {} - self._pending_flags: set[tuple[Key, int]] = set() - - def add_pending(self, key: Key, flag: int) -> None: - self._pending_flags.add((key, flag)) - - def finalize_pending(self) -> None: - for key, flag in self._pending_flags: - self.set(key, flag, recursive=False) - self._pending_flags.clear() - - def unset_all(self, key: Key) -> None: - cont = self._flags - for k in key[:-1]: - if k not in cont: - return - cont = cont[k]["nested"] - cont.pop(key[-1], None) - - def set(self, key: Key, flag: int, *, recursive: bool) -> None: # noqa: A003 - cont = self._flags - key_parent, key_stem = key[:-1], key[-1] - for k in key_parent: - if k not in cont: - cont[k] = {"flags": set(), "recursive_flags": set(), "nested": {}} - cont = cont[k]["nested"] - if key_stem not in cont: - cont[key_stem] = {"flags": set(), "recursive_flags": set(), "nested": {}} - cont[key_stem]["recursive_flags" if recursive else "flags"].add(flag) - - def is_(self, key: Key, flag: int) -> bool: - if not key: - return False # document root has no flags - cont = self._flags - for k in key[:-1]: - if k not in cont: - return False - inner_cont = cont[k] - if flag in inner_cont["recursive_flags"]: - return True - cont = inner_cont["nested"] - key_stem = key[-1] - if key_stem in cont: - cont = cont[key_stem] - return flag in cont["flags"] or flag in cont["recursive_flags"] - return False - - -class NestedDict: - def __init__(self) -> None: - # The parsed content of the TOML document - self.dict: dict[str, Any] = {} - - def get_or_create_nest( - self, - key: Key, - *, - access_lists: bool = True, - ) -> dict: - cont: Any = self.dict - for k in key: - if k not in cont: - cont[k] = {} - cont = cont[k] - if access_lists and isinstance(cont, list): - cont = cont[-1] - if not isinstance(cont, dict): - raise KeyError("There is no nest behind this key") - return cont - - def append_nest_to_list(self, key: Key) -> None: - cont = self.get_or_create_nest(key[:-1]) - last_key = key[-1] - if last_key in cont: - list_ = cont[last_key] - if not isinstance(list_, list): - raise KeyError("An object other than list found behind this key") - list_.append({}) - else: - cont[last_key] = [{}] - - -class Output(NamedTuple): - data: NestedDict - flags: Flags - - -def skip_chars(src: str, pos: Pos, chars: Iterable[str]) -> Pos: - try: - while src[pos] in chars: - pos += 1 - except IndexError: - pass - return pos - - -def skip_until( - src: str, - pos: Pos, - expect: str, - *, - error_on: frozenset[str], - error_on_eof: bool, -) -> Pos: - try: - new_pos = src.index(expect, pos) - except ValueError: - new_pos = len(src) - if error_on_eof: - raise suffixed_err(src, new_pos, f"Expected {expect!r}") from None - - if not error_on.isdisjoint(src[pos:new_pos]): - while src[pos] not in error_on: - pos += 1 - raise suffixed_err(src, pos, f"Found invalid character {src[pos]!r}") - return new_pos - - -def skip_comment(src: str, pos: Pos) -> Pos: - try: - char: str | None = src[pos] - except IndexError: - char = None - if char == "#": - return skip_until( - src, pos + 1, "\n", error_on=ILLEGAL_COMMENT_CHARS, error_on_eof=False - ) - return pos - - -def skip_comments_and_array_ws(src: str, pos: Pos) -> Pos: - while True: - pos_before_skip = pos - pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE) - pos = skip_comment(src, pos) - if pos == pos_before_skip: - return pos - - -def create_dict_rule(src: str, pos: Pos, out: Output) -> tuple[Pos, Key]: - pos += 1 # Skip "[" - pos = 
skip_chars(src, pos, TOML_WS) - pos, key = parse_key(src, pos) - - if out.flags.is_(key, Flags.EXPLICIT_NEST) or out.flags.is_(key, Flags.FROZEN): - raise suffixed_err(src, pos, f"Cannot declare {key} twice") - out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False) - try: - out.data.get_or_create_nest(key) - except KeyError: - raise suffixed_err(src, pos, "Cannot overwrite a value") from None - - if not src.startswith("]", pos): - raise suffixed_err(src, pos, "Expected ']' at the end of a table declaration") - return pos + 1, key - - -def create_list_rule(src: str, pos: Pos, out: Output) -> tuple[Pos, Key]: - pos += 2 # Skip "[[" - pos = skip_chars(src, pos, TOML_WS) - pos, key = parse_key(src, pos) - - if out.flags.is_(key, Flags.FROZEN): - raise suffixed_err(src, pos, f"Cannot mutate immutable namespace {key}") - # Free the namespace now that it points to another empty list item... - out.flags.unset_all(key) - # ...but this key precisely is still prohibited from table declaration - out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False) - try: - out.data.append_nest_to_list(key) - except KeyError: - raise suffixed_err(src, pos, "Cannot overwrite a value") from None - - if not src.startswith("]]", pos): - raise suffixed_err(src, pos, "Expected ']]' at the end of an array declaration") - return pos + 2, key - - -def key_value_rule( - src: str, pos: Pos, out: Output, header: Key, parse_float: ParseFloat -) -> Pos: - pos, key, value = parse_key_value_pair(src, pos, parse_float) - key_parent, key_stem = key[:-1], key[-1] - abs_key_parent = header + key_parent - - relative_path_cont_keys = (header + key[:i] for i in range(1, len(key))) - for cont_key in relative_path_cont_keys: - # Check that dotted key syntax does not redefine an existing table - if out.flags.is_(cont_key, Flags.EXPLICIT_NEST): - raise suffixed_err(src, pos, f"Cannot redefine namespace {cont_key}") - # Containers in the relative path can't be opened with the table syntax or - # dotted key/value syntax in following table sections. 
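        # Illustration: after `a.b.c = 1` under `[tbl]`, the same section may keep
        # extending the dotted path (e.g. `a.b.d = 2`), but a later `[tbl.a.b]`
        # header has to fail; that is why the flag is only queued here and applied
        # by `finalize_pending()` when the next section header is reached.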
- out.flags.add_pending(cont_key, Flags.EXPLICIT_NEST) - - if out.flags.is_(abs_key_parent, Flags.FROZEN): - raise suffixed_err( - src, pos, f"Cannot mutate immutable namespace {abs_key_parent}" - ) - - try: - nest = out.data.get_or_create_nest(abs_key_parent) - except KeyError: - raise suffixed_err(src, pos, "Cannot overwrite a value") from None - if key_stem in nest: - raise suffixed_err(src, pos, "Cannot overwrite a value") - # Mark inline table and array namespaces recursively immutable - if isinstance(value, (dict, list)): - out.flags.set(header + key, Flags.FROZEN, recursive=True) - nest[key_stem] = value - return pos - - -def parse_key_value_pair( - src: str, pos: Pos, parse_float: ParseFloat -) -> tuple[Pos, Key, Any]: - pos, key = parse_key(src, pos) - try: - char: str | None = src[pos] - except IndexError: - char = None - if char != "=": - raise suffixed_err(src, pos, "Expected '=' after a key in a key/value pair") - pos += 1 - pos = skip_chars(src, pos, TOML_WS) - pos, value = parse_value(src, pos, parse_float) - return pos, key, value - - -def parse_key(src: str, pos: Pos) -> tuple[Pos, Key]: - pos, key_part = parse_key_part(src, pos) - key: Key = (key_part,) - pos = skip_chars(src, pos, TOML_WS) - while True: - try: - char: str | None = src[pos] - except IndexError: - char = None - if char != ".": - return pos, key - pos += 1 - pos = skip_chars(src, pos, TOML_WS) - pos, key_part = parse_key_part(src, pos) - key += (key_part,) - pos = skip_chars(src, pos, TOML_WS) - - -def parse_key_part(src: str, pos: Pos) -> tuple[Pos, str]: - try: - char: str | None = src[pos] - except IndexError: - char = None - if char in BARE_KEY_CHARS: - start_pos = pos - pos = skip_chars(src, pos, BARE_KEY_CHARS) - return pos, src[start_pos:pos] - if char == "'": - return parse_literal_str(src, pos) - if char == '"': - return parse_one_line_basic_str(src, pos) - raise suffixed_err(src, pos, "Invalid initial character for a key part") - - -def parse_one_line_basic_str(src: str, pos: Pos) -> tuple[Pos, str]: - pos += 1 - return parse_basic_str(src, pos, multiline=False) - - -def parse_array(src: str, pos: Pos, parse_float: ParseFloat) -> tuple[Pos, list]: - pos += 1 - array: list = [] - - pos = skip_comments_and_array_ws(src, pos) - if src.startswith("]", pos): - return pos + 1, array - while True: - pos, val = parse_value(src, pos, parse_float) - array.append(val) - pos = skip_comments_and_array_ws(src, pos) - - c = src[pos : pos + 1] - if c == "]": - return pos + 1, array - if c != ",": - raise suffixed_err(src, pos, "Unclosed array") - pos += 1 - - pos = skip_comments_and_array_ws(src, pos) - if src.startswith("]", pos): - return pos + 1, array - - -def parse_inline_table(src: str, pos: Pos, parse_float: ParseFloat) -> tuple[Pos, dict]: - pos += 1 - nested_dict = NestedDict() - flags = Flags() - - pos = skip_chars(src, pos, TOML_WS) - if src.startswith("}", pos): - return pos + 1, nested_dict.dict - while True: - pos, key, value = parse_key_value_pair(src, pos, parse_float) - key_parent, key_stem = key[:-1], key[-1] - if flags.is_(key, Flags.FROZEN): - raise suffixed_err(src, pos, f"Cannot mutate immutable namespace {key}") - try: - nest = nested_dict.get_or_create_nest(key_parent, access_lists=False) - except KeyError: - raise suffixed_err(src, pos, "Cannot overwrite a value") from None - if key_stem in nest: - raise suffixed_err(src, pos, f"Duplicate inline table key {key_stem!r}") - nest[key_stem] = value - pos = skip_chars(src, pos, TOML_WS) - c = src[pos : pos + 1] - if c == "}": - return pos + 1, 
nested_dict.dict - if c != ",": - raise suffixed_err(src, pos, "Unclosed inline table") - if isinstance(value, (dict, list)): - flags.set(key, Flags.FROZEN, recursive=True) - pos += 1 - pos = skip_chars(src, pos, TOML_WS) - - -def parse_basic_str_escape( - src: str, pos: Pos, *, multiline: bool = False -) -> tuple[Pos, str]: - escape_id = src[pos : pos + 2] - pos += 2 - if multiline and escape_id in {"\\ ", "\\\t", "\\\n"}: - # Skip whitespace until next non-whitespace character or end of - # the doc. Error if non-whitespace is found before newline. - if escape_id != "\\\n": - pos = skip_chars(src, pos, TOML_WS) - try: - char = src[pos] - except IndexError: - return pos, "" - if char != "\n": - raise suffixed_err(src, pos, "Unescaped '\\' in a string") - pos += 1 - pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE) - return pos, "" - if escape_id == "\\u": - return parse_hex_char(src, pos, 4) - if escape_id == "\\U": - return parse_hex_char(src, pos, 8) - try: - return pos, BASIC_STR_ESCAPE_REPLACEMENTS[escape_id] - except KeyError: - raise suffixed_err(src, pos, "Unescaped '\\' in a string") from None - - -def parse_basic_str_escape_multiline(src: str, pos: Pos) -> tuple[Pos, str]: - return parse_basic_str_escape(src, pos, multiline=True) - - -def parse_hex_char(src: str, pos: Pos, hex_len: int) -> tuple[Pos, str]: - hex_str = src[pos : pos + hex_len] - if len(hex_str) != hex_len or not HEXDIGIT_CHARS.issuperset(hex_str): - raise suffixed_err(src, pos, "Invalid hex value") - pos += hex_len - hex_int = int(hex_str, 16) - if not is_unicode_scalar_value(hex_int): - raise suffixed_err(src, pos, "Escaped character is not a Unicode scalar value") - return pos, chr(hex_int) - - -def parse_literal_str(src: str, pos: Pos) -> tuple[Pos, str]: - pos += 1 # Skip starting apostrophe - start_pos = pos - pos = skip_until( - src, pos, "'", error_on=ILLEGAL_LITERAL_STR_CHARS, error_on_eof=True - ) - return pos + 1, src[start_pos:pos] # Skip ending apostrophe - - -def parse_multiline_str(src: str, pos: Pos, *, literal: bool) -> tuple[Pos, str]: - pos += 3 - if src.startswith("\n", pos): - pos += 1 - - if literal: - delim = "'" - end_pos = skip_until( - src, - pos, - "'''", - error_on=ILLEGAL_MULTILINE_LITERAL_STR_CHARS, - error_on_eof=True, - ) - result = src[pos:end_pos] - pos = end_pos + 3 - else: - delim = '"' - pos, result = parse_basic_str(src, pos, multiline=True) - - # Add at maximum two extra apostrophes/quotes if the end sequence - # is 4 or 5 chars long instead of just 3. 
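    # Example: the multi-line literal '''one quote:'''' parses to "one quote:'",
    # because the final run of four apostrophes both closes the string and leaves
    # one apostrophe in the value.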
- if not src.startswith(delim, pos): - return pos, result - pos += 1 - if not src.startswith(delim, pos): - return pos, result + delim - pos += 1 - return pos, result + (delim * 2) - - -def parse_basic_str(src: str, pos: Pos, *, multiline: bool) -> tuple[Pos, str]: - if multiline: - error_on = ILLEGAL_MULTILINE_BASIC_STR_CHARS - parse_escapes = parse_basic_str_escape_multiline - else: - error_on = ILLEGAL_BASIC_STR_CHARS - parse_escapes = parse_basic_str_escape - result = "" - start_pos = pos - while True: - try: - char = src[pos] - except IndexError: - raise suffixed_err(src, pos, "Unterminated string") from None - if char == '"': - if not multiline: - return pos + 1, result + src[start_pos:pos] - if src.startswith('"""', pos): - return pos + 3, result + src[start_pos:pos] - pos += 1 - continue - if char == "\\": - result += src[start_pos:pos] - pos, parsed_escape = parse_escapes(src, pos) - result += parsed_escape - start_pos = pos - continue - if char in error_on: - raise suffixed_err(src, pos, f"Illegal character {char!r}") - pos += 1 - - -def parse_value( # noqa: C901 - src: str, pos: Pos, parse_float: ParseFloat -) -> tuple[Pos, Any]: - try: - char: str | None = src[pos] - except IndexError: - char = None - - # IMPORTANT: order conditions based on speed of checking and likelihood - - # Basic strings - if char == '"': - if src.startswith('"""', pos): - return parse_multiline_str(src, pos, literal=False) - return parse_one_line_basic_str(src, pos) - - # Literal strings - if char == "'": - if src.startswith("'''", pos): - return parse_multiline_str(src, pos, literal=True) - return parse_literal_str(src, pos) - - # Booleans - if char == "t": - if src.startswith("true", pos): - return pos + 4, True - if char == "f": - if src.startswith("false", pos): - return pos + 5, False - - # Arrays - if char == "[": - return parse_array(src, pos, parse_float) - - # Inline tables - if char == "{": - return parse_inline_table(src, pos, parse_float) - - # Dates and times - datetime_match = RE_DATETIME.match(src, pos) - if datetime_match: - try: - datetime_obj = match_to_datetime(datetime_match) - except ValueError as e: - raise suffixed_err(src, pos, "Invalid date or datetime") from e - return datetime_match.end(), datetime_obj - localtime_match = RE_LOCALTIME.match(src, pos) - if localtime_match: - return localtime_match.end(), match_to_localtime(localtime_match) - - # Integers and "normal" floats. - # The regex will greedily match any type starting with a decimal - # char, so needs to be located after handling of dates and times. 
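    # Example: given the input `1979-05-27`, the number regex would match the
    # leading `1979` as an integer, so the datetime patterns above must be tried
    # first for the value to come back as a date.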
- number_match = RE_NUMBER.match(src, pos) - if number_match: - return number_match.end(), match_to_number(number_match, parse_float) - - # Special floats - first_three = src[pos : pos + 3] - if first_three in {"inf", "nan"}: - return pos + 3, parse_float(first_three) - first_four = src[pos : pos + 4] - if first_four in {"-inf", "+inf", "-nan", "+nan"}: - return pos + 4, parse_float(first_four) - - raise suffixed_err(src, pos, "Invalid value") - - -def suffixed_err(src: str, pos: Pos, msg: str) -> TOMLDecodeError: - """Return a `TOMLDecodeError` where error message is suffixed with - coordinates in source.""" - - def coord_repr(src: str, pos: Pos) -> str: - if pos >= len(src): - return "end of document" - line = src.count("\n", 0, pos) + 1 - if line == 1: - column = pos + 1 - else: - column = pos - src.rindex("\n", 0, pos) - return f"line {line}, column {column}" - - return TOMLDecodeError(f"{msg} (at {coord_repr(src, pos)})") - - -def is_unicode_scalar_value(codepoint: int) -> bool: - return (0 <= codepoint <= 55295) or (57344 <= codepoint <= 1114111) - - -def make_safe_parse_float(parse_float: ParseFloat) -> ParseFloat: - """A decorator to make `parse_float` safe. - - `parse_float` must not return dicts or lists, because these types - would be mixed with parsed TOML tables and arrays, thus confusing - the parser. The returned decorated callable raises `ValueError` - instead of returning illegal types. - """ - # The default `float` callable never returns illegal types. Optimize it. - if parse_float is float: # type: ignore[comparison-overlap] - return float - - def safe_parse_float(float_str: str) -> Any: - float_value = parse_float(float_str) - if isinstance(float_value, (dict, list)): - raise ValueError("parse_float must not return dicts or lists") - return float_value - - return safe_parse_float diff --git a/spaces/Ricecake123/RVC-demo/Dockerfile b/spaces/Ricecake123/RVC-demo/Dockerfile deleted file mode 100644 index 49f62d5f9c0901931de6523721b3a97b40f34219..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -# syntax=docker/dockerfile:1 - -FROM python:3.10-bullseye - -EXPOSE 7865 - -WORKDIR /app - -COPY . . - -RUN pip3 install -r requirements.txt - -CMD ["python3", "infer-web.py"] \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/utils/flops_counter.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/utils/flops_counter.py deleted file mode 100644 index d10af5feca7f4b8c0ba359b7b1c826f754e048be..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/utils/flops_counter.py +++ /dev/null @@ -1,599 +0,0 @@ -# Modified from flops-counter.pytorch by Vladislav Sovrasov -# original repo: https://github.com/sovrasov/flops-counter.pytorch - -# MIT License - -# Copyright (c) 2018 Vladislav Sovrasov - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. 
- -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import sys -from functools import partial - -import numpy as np -import torch -import torch.nn as nn - -import annotator.uniformer.mmcv as mmcv - - -def get_model_complexity_info(model, - input_shape, - print_per_layer_stat=True, - as_strings=True, - input_constructor=None, - flush=False, - ost=sys.stdout): - """Get complexity information of a model. - - This method can calculate FLOPs and parameter counts of a model with - corresponding input shape. It can also print complexity information for - each layer in a model. - - Supported layers are listed as below: - - Convolutions: ``nn.Conv1d``, ``nn.Conv2d``, ``nn.Conv3d``. - - Activations: ``nn.ReLU``, ``nn.PReLU``, ``nn.ELU``, ``nn.LeakyReLU``, - ``nn.ReLU6``. - - Poolings: ``nn.MaxPool1d``, ``nn.MaxPool2d``, ``nn.MaxPool3d``, - ``nn.AvgPool1d``, ``nn.AvgPool2d``, ``nn.AvgPool3d``, - ``nn.AdaptiveMaxPool1d``, ``nn.AdaptiveMaxPool2d``, - ``nn.AdaptiveMaxPool3d``, ``nn.AdaptiveAvgPool1d``, - ``nn.AdaptiveAvgPool2d``, ``nn.AdaptiveAvgPool3d``. - - BatchNorms: ``nn.BatchNorm1d``, ``nn.BatchNorm2d``, - ``nn.BatchNorm3d``, ``nn.GroupNorm``, ``nn.InstanceNorm1d``, - ``InstanceNorm2d``, ``InstanceNorm3d``, ``nn.LayerNorm``. - - Linear: ``nn.Linear``. - - Deconvolution: ``nn.ConvTranspose2d``. - - Upsample: ``nn.Upsample``. - - Args: - model (nn.Module): The model for complexity calculation. - input_shape (tuple): Input shape used for calculation. - print_per_layer_stat (bool): Whether to print complexity information - for each layer in a model. Default: True. - as_strings (bool): Output FLOPs and params counts in a string form. - Default: True. - input_constructor (None | callable): If specified, it takes a callable - method that generates input. otherwise, it will generate a random - tensor with input shape to calculate FLOPs. Default: None. - flush (bool): same as that in :func:`print`. Default: False. - ost (stream): same as ``file`` param in :func:`print`. - Default: sys.stdout. - - Returns: - tuple[float | str]: If ``as_strings`` is set to True, it will return - FLOPs and parameter counts in a string format. otherwise, it will - return those in a float number format. - """ - assert type(input_shape) is tuple - assert len(input_shape) >= 1 - assert isinstance(model, nn.Module) - flops_model = add_flops_counting_methods(model) - flops_model.eval() - flops_model.start_flops_count() - if input_constructor: - input = input_constructor(input_shape) - _ = flops_model(**input) - else: - try: - batch = torch.ones(()).new_empty( - (1, *input_shape), - dtype=next(flops_model.parameters()).dtype, - device=next(flops_model.parameters()).device) - except StopIteration: - # Avoid StopIteration for models which have no parameters, - # like `nn.Relu()`, `nn.AvgPool2d`, etc. 
- batch = torch.ones(()).new_empty((1, *input_shape)) - - _ = flops_model(batch) - - flops_count, params_count = flops_model.compute_average_flops_cost() - if print_per_layer_stat: - print_model_with_flops( - flops_model, flops_count, params_count, ost=ost, flush=flush) - flops_model.stop_flops_count() - - if as_strings: - return flops_to_string(flops_count), params_to_string(params_count) - - return flops_count, params_count - - -def flops_to_string(flops, units='GFLOPs', precision=2): - """Convert FLOPs number into a string. - - Note that Here we take a multiply-add counts as one FLOP. - - Args: - flops (float): FLOPs number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'GFLOPs', - 'MFLOPs', 'KFLOPs', 'FLOPs'. If set to None, it will automatically - choose the most suitable unit for FLOPs. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted FLOPs number with units. - - Examples: - >>> flops_to_string(1e9) - '1.0 GFLOPs' - >>> flops_to_string(2e5, 'MFLOPs') - '0.2 MFLOPs' - >>> flops_to_string(3e-9, None) - '3e-09 FLOPs' - """ - if units is None: - if flops // 10**9 > 0: - return str(round(flops / 10.**9, precision)) + ' GFLOPs' - elif flops // 10**6 > 0: - return str(round(flops / 10.**6, precision)) + ' MFLOPs' - elif flops // 10**3 > 0: - return str(round(flops / 10.**3, precision)) + ' KFLOPs' - else: - return str(flops) + ' FLOPs' - else: - if units == 'GFLOPs': - return str(round(flops / 10.**9, precision)) + ' ' + units - elif units == 'MFLOPs': - return str(round(flops / 10.**6, precision)) + ' ' + units - elif units == 'KFLOPs': - return str(round(flops / 10.**3, precision)) + ' ' + units - else: - return str(flops) + ' FLOPs' - - -def params_to_string(num_params, units=None, precision=2): - """Convert parameter number into a string. - - Args: - num_params (float): Parameter number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'M', - 'K' and ''. If set to None, it will automatically choose the most - suitable unit for Parameter number. Default: None. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted parameter number with units. - - Examples: - >>> params_to_string(1e9) - '1000.0 M' - >>> params_to_string(2e5) - '200.0 k' - >>> params_to_string(3e-9) - '3e-09' - """ - if units is None: - if num_params // 10**6 > 0: - return str(round(num_params / 10**6, precision)) + ' M' - elif num_params // 10**3: - return str(round(num_params / 10**3, precision)) + ' k' - else: - return str(num_params) - else: - if units == 'M': - return str(round(num_params / 10.**6, precision)) + ' ' + units - elif units == 'K': - return str(round(num_params / 10.**3, precision)) + ' ' + units - else: - return str(num_params) - - -def print_model_with_flops(model, - total_flops, - total_params, - units='GFLOPs', - precision=3, - ost=sys.stdout, - flush=False): - """Print a model with FLOPs for each layer. - - Args: - model (nn.Module): The model to be printed. - total_flops (float): Total FLOPs of the model. - total_params (float): Total parameter counts of the model. - units (str | None): Converted FLOPs units. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 3. - ost (stream): same as `file` param in :func:`print`. - Default: sys.stdout. - flush (bool): same as that in :func:`print`. Default: False. 
- - Example: - >>> class ExampleModel(nn.Module): - - >>> def __init__(self): - >>> super().__init__() - >>> self.conv1 = nn.Conv2d(3, 8, 3) - >>> self.conv2 = nn.Conv2d(8, 256, 3) - >>> self.conv3 = nn.Conv2d(256, 8, 3) - >>> self.avg_pool = nn.AdaptiveAvgPool2d((1, 1)) - >>> self.flatten = nn.Flatten() - >>> self.fc = nn.Linear(8, 1) - - >>> def forward(self, x): - >>> x = self.conv1(x) - >>> x = self.conv2(x) - >>> x = self.conv3(x) - >>> x = self.avg_pool(x) - >>> x = self.flatten(x) - >>> x = self.fc(x) - >>> return x - - >>> model = ExampleModel() - >>> x = (3, 16, 16) - to print the complexity information state for each layer, you can use - >>> get_model_complexity_info(model, x) - or directly use - >>> print_model_with_flops(model, 4579784.0, 37361) - ExampleModel( - 0.037 M, 100.000% Params, 0.005 GFLOPs, 100.000% FLOPs, - (conv1): Conv2d(0.0 M, 0.600% Params, 0.0 GFLOPs, 0.959% FLOPs, 3, 8, kernel_size=(3, 3), stride=(1, 1)) # noqa: E501 - (conv2): Conv2d(0.019 M, 50.020% Params, 0.003 GFLOPs, 58.760% FLOPs, 8, 256, kernel_size=(3, 3), stride=(1, 1)) - (conv3): Conv2d(0.018 M, 49.356% Params, 0.002 GFLOPs, 40.264% FLOPs, 256, 8, kernel_size=(3, 3), stride=(1, 1)) - (avg_pool): AdaptiveAvgPool2d(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.017% FLOPs, output_size=(1, 1)) - (flatten): Flatten(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.000% FLOPs, ) - (fc): Linear(0.0 M, 0.024% Params, 0.0 GFLOPs, 0.000% FLOPs, in_features=8, out_features=1, bias=True) - ) - """ - - def accumulate_params(self): - if is_supported_instance(self): - return self.__params__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_params() - return sum - - def accumulate_flops(self): - if is_supported_instance(self): - return self.__flops__ / model.__batch_counter__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_flops() - return sum - - def flops_repr(self): - accumulated_num_params = self.accumulate_params() - accumulated_flops_cost = self.accumulate_flops() - return ', '.join([ - params_to_string( - accumulated_num_params, units='M', precision=precision), - '{:.3%} Params'.format(accumulated_num_params / total_params), - flops_to_string( - accumulated_flops_cost, units=units, precision=precision), - '{:.3%} FLOPs'.format(accumulated_flops_cost / total_flops), - self.original_extra_repr() - ]) - - def add_extra_repr(m): - m.accumulate_flops = accumulate_flops.__get__(m) - m.accumulate_params = accumulate_params.__get__(m) - flops_extra_repr = flops_repr.__get__(m) - if m.extra_repr != flops_extra_repr: - m.original_extra_repr = m.extra_repr - m.extra_repr = flops_extra_repr - assert m.extra_repr != m.original_extra_repr - - def del_extra_repr(m): - if hasattr(m, 'original_extra_repr'): - m.extra_repr = m.original_extra_repr - del m.original_extra_repr - if hasattr(m, 'accumulate_flops'): - del m.accumulate_flops - - model.apply(add_extra_repr) - print(model, file=ost, flush=flush) - model.apply(del_extra_repr) - - -def get_model_parameters_number(model): - """Calculate parameter number of a model. - - Args: - model (nn.module): The model for parameter number calculation. - - Returns: - float: Parameter number of the model. 
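    Example:
        >>> # an illustrative count: nn.Linear(8, 4) has 8 * 4 weights + 4 biases
        >>> get_model_parameters_number(nn.Linear(8, 4))
        36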
- """ - num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - return num_params - - -def add_flops_counting_methods(net_main_module): - # adding additional methods to the existing module object, - # this is done this way so that each function has access to self object - net_main_module.start_flops_count = start_flops_count.__get__( - net_main_module) - net_main_module.stop_flops_count = stop_flops_count.__get__( - net_main_module) - net_main_module.reset_flops_count = reset_flops_count.__get__( - net_main_module) - net_main_module.compute_average_flops_cost = compute_average_flops_cost.__get__( # noqa: E501 - net_main_module) - - net_main_module.reset_flops_count() - - return net_main_module - - -def compute_average_flops_cost(self): - """Compute average FLOPs cost. - - A method to compute average FLOPs cost, which will be available after - `add_flops_counting_methods()` is called on a desired net object. - - Returns: - float: Current mean flops consumption per image. - """ - batches_count = self.__batch_counter__ - flops_sum = 0 - for module in self.modules(): - if is_supported_instance(module): - flops_sum += module.__flops__ - params_sum = get_model_parameters_number(self) - return flops_sum / batches_count, params_sum - - -def start_flops_count(self): - """Activate the computation of mean flops consumption per image. - - A method to activate the computation of mean flops consumption per image. - which will be available after ``add_flops_counting_methods()`` is called on - a desired net object. It should be called before running the network. - """ - add_batch_counter_hook_function(self) - - def add_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - return - - else: - handle = module.register_forward_hook( - get_modules_mapping()[type(module)]) - - module.__flops_handle__ = handle - - self.apply(partial(add_flops_counter_hook_function)) - - -def stop_flops_count(self): - """Stop computing the mean flops consumption per image. - - A method to stop computing the mean flops consumption per image, which will - be available after ``add_flops_counting_methods()`` is called on a desired - net object. It can be called to pause the computation whenever. - """ - remove_batch_counter_hook_function(self) - self.apply(remove_flops_counter_hook_function) - - -def reset_flops_count(self): - """Reset statistics computed so far. - - A method to Reset computed statistics, which will be available after - `add_flops_counting_methods()` is called on a desired net object. 
- """ - add_batch_counter_variables_or_reset(self) - self.apply(add_flops_counter_variable_or_reset) - - -# ---- Internal functions -def empty_flops_counter_hook(module, input, output): - module.__flops__ += 0 - - -def upsample_flops_counter_hook(module, input, output): - output_size = output[0] - batch_size = output_size.shape[0] - output_elements_count = batch_size - for val in output_size.shape[1:]: - output_elements_count *= val - module.__flops__ += int(output_elements_count) - - -def relu_flops_counter_hook(module, input, output): - active_elements_count = output.numel() - module.__flops__ += int(active_elements_count) - - -def linear_flops_counter_hook(module, input, output): - input = input[0] - output_last_dim = output.shape[ - -1] # pytorch checks dimensions, so here we don't care much - module.__flops__ += int(np.prod(input.shape) * output_last_dim) - - -def pool_flops_counter_hook(module, input, output): - input = input[0] - module.__flops__ += int(np.prod(input.shape)) - - -def norm_flops_counter_hook(module, input, output): - input = input[0] - - batch_flops = np.prod(input.shape) - if (getattr(module, 'affine', False) - or getattr(module, 'elementwise_affine', False)): - batch_flops *= 2 - module.__flops__ += int(batch_flops) - - -def deconv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - input_height, input_width = input.shape[2:] - - kernel_height, kernel_width = conv_module.kernel_size - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = ( - kernel_height * kernel_width * in_channels * filters_per_channel) - - active_elements_count = batch_size * input_height * input_width - overall_conv_flops = conv_per_position_flops * active_elements_count - bias_flops = 0 - if conv_module.bias is not None: - output_height, output_width = output.shape[2:] - bias_flops = out_channels * batch_size * output_height * output_height - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def conv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - output_dims = list(output.shape[2:]) - - kernel_dims = list(conv_module.kernel_size) - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = int( - np.prod(kernel_dims)) * in_channels * filters_per_channel - - active_elements_count = batch_size * int(np.prod(output_dims)) - - overall_conv_flops = conv_per_position_flops * active_elements_count - - bias_flops = 0 - - if conv_module.bias is not None: - - bias_flops = out_channels * active_elements_count - - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def batch_counter_hook(module, input, output): - batch_size = 1 - if len(input) > 0: - # Can have multiple inputs, getting the first one - input = input[0] - batch_size = len(input) - else: - pass - print('Warning! 
No positional inputs found for a module, ' - 'assuming batch size is 1.') - module.__batch_counter__ += batch_size - - -def add_batch_counter_variables_or_reset(module): - - module.__batch_counter__ = 0 - - -def add_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - return - - handle = module.register_forward_hook(batch_counter_hook) - module.__batch_counter_handle__ = handle - - -def remove_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - module.__batch_counter_handle__.remove() - del module.__batch_counter_handle__ - - -def add_flops_counter_variable_or_reset(module): - if is_supported_instance(module): - if hasattr(module, '__flops__') or hasattr(module, '__params__'): - print('Warning: variables __flops__ or __params__ are already ' - 'defined for the module' + type(module).__name__ + - ' ptflops can affect your code!') - module.__flops__ = 0 - module.__params__ = get_model_parameters_number(module) - - -def is_supported_instance(module): - if type(module) in get_modules_mapping(): - return True - return False - - -def remove_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - module.__flops_handle__.remove() - del module.__flops_handle__ - - -def get_modules_mapping(): - return { - # convolutions - nn.Conv1d: conv_flops_counter_hook, - nn.Conv2d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv2d: conv_flops_counter_hook, - nn.Conv3d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv3d: conv_flops_counter_hook, - # activations - nn.ReLU: relu_flops_counter_hook, - nn.PReLU: relu_flops_counter_hook, - nn.ELU: relu_flops_counter_hook, - nn.LeakyReLU: relu_flops_counter_hook, - nn.ReLU6: relu_flops_counter_hook, - # poolings - nn.MaxPool1d: pool_flops_counter_hook, - nn.AvgPool1d: pool_flops_counter_hook, - nn.AvgPool2d: pool_flops_counter_hook, - nn.MaxPool2d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool2d: pool_flops_counter_hook, - nn.MaxPool3d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool3d: pool_flops_counter_hook, - nn.AvgPool3d: pool_flops_counter_hook, - nn.AdaptiveMaxPool1d: pool_flops_counter_hook, - nn.AdaptiveAvgPool1d: pool_flops_counter_hook, - nn.AdaptiveMaxPool2d: pool_flops_counter_hook, - nn.AdaptiveAvgPool2d: pool_flops_counter_hook, - nn.AdaptiveMaxPool3d: pool_flops_counter_hook, - nn.AdaptiveAvgPool3d: pool_flops_counter_hook, - # normalizations - nn.BatchNorm1d: norm_flops_counter_hook, - nn.BatchNorm2d: norm_flops_counter_hook, - nn.BatchNorm3d: norm_flops_counter_hook, - nn.GroupNorm: norm_flops_counter_hook, - nn.InstanceNorm1d: norm_flops_counter_hook, - nn.InstanceNorm2d: norm_flops_counter_hook, - nn.InstanceNorm3d: norm_flops_counter_hook, - nn.LayerNorm: norm_flops_counter_hook, - # FC - nn.Linear: linear_flops_counter_hook, - mmcv.cnn.bricks.Linear: linear_flops_counter_hook, - # Upscale - nn.Upsample: upsample_flops_counter_hook, - # Deconvolution - nn.ConvTranspose2d: deconv_flops_counter_hook, - mmcv.cnn.bricks.ConvTranspose2d: deconv_flops_counter_hook, - } diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/detr.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/detr.py deleted file mode 100644 index 5ff82a280daa0a015f662bdf2509fa11542d46d4..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/detr.py +++ /dev/null @@ -1,46 +0,0 @@ -from mmdet.core import bbox2result 
-from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class DETR(SingleStageDetector): - r"""Implementation of `DETR: End-to-End Object Detection with - Transformers `_""" - - def __init__(self, - backbone, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(DETR, self).__init__(backbone, None, bbox_head, train_cfg, - test_cfg, pretrained) - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - batch_size = len(img_metas) - assert batch_size == 1, 'Currently only batch_size 1 for inference ' \ - f'mode is supported. Found batch_size {batch_size}.' - x = self.extract_feat(img) - outs = self.bbox_head(x, img_metas) - bbox_list = self.bbox_head.get_bboxes( - *outs, img_metas, rescale=rescale) - - bbox_results = [ - bbox2result(det_bboxes, det_labels, self.bbox_head.num_classes) - for det_bboxes, det_labels in bbox_list - ] - return bbox_results diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/grid_roi_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/grid_roi_head.py deleted file mode 100644 index 4c52c79863ebaf17bd023382c7e5d4c237b4da77..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/roi_heads/grid_roi_head.py +++ /dev/null @@ -1,176 +0,0 @@ -import torch - -from mmdet.core import bbox2result, bbox2roi -from ..builder import HEADS, build_head, build_roi_extractor -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class GridRoIHead(StandardRoIHead): - """Grid roi head for Grid R-CNN. - - https://arxiv.org/abs/1811.12030 - """ - - def __init__(self, grid_roi_extractor, grid_head, **kwargs): - assert grid_head is not None - super(GridRoIHead, self).__init__(**kwargs) - if grid_roi_extractor is not None: - self.grid_roi_extractor = build_roi_extractor(grid_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.grid_roi_extractor = self.bbox_roi_extractor - self.grid_head = build_head(grid_head) - - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - super(GridRoIHead, self).init_weights(pretrained) - self.grid_head.init_weights() - if not self.share_roi_extractor: - self.grid_roi_extractor.init_weights() - - def _random_jitter(self, sampling_results, img_metas, amplitude=0.15): - """Ramdom jitter positive proposals for training.""" - for sampling_result, img_meta in zip(sampling_results, img_metas): - bboxes = sampling_result.pos_bboxes - random_offsets = bboxes.new_empty(bboxes.shape[0], 4).uniform_( - -amplitude, amplitude) - # before jittering - cxcy = (bboxes[:, 2:4] + bboxes[:, :2]) / 2 - wh = (bboxes[:, 2:4] - bboxes[:, :2]).abs() - # after jittering - new_cxcy = cxcy + wh * random_offsets[:, :2] - new_wh = wh * (1 + random_offsets[:, 2:]) - # xywh to xyxy - new_x1y1 = (new_cxcy - new_wh / 2) - new_x2y2 = (new_cxcy + new_wh / 2) - new_bboxes = torch.cat([new_x1y1, new_x2y2], dim=1) - # clip bboxes - max_shape = img_meta['img_shape'] - if max_shape is not None: - new_bboxes[:, 0::2].clamp_(min=0, max=max_shape[1] - 1) - new_bboxes[:, 1::2].clamp_(min=0, max=max_shape[0] - 1) - - sampling_result.pos_bboxes = new_bboxes - return sampling_results - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - - # grid head - grid_rois = rois[:100] - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], grid_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - grid_pred = self.grid_head(grid_feats) - outs = outs + (grid_pred, ) - - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward function and calculate loss for box head in training.""" - bbox_results = super(GridRoIHead, - self)._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - - # Grid head forward and loss - sampling_results = self._random_jitter(sampling_results, img_metas) - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - - # GN in head does not support zero shape input - if pos_rois.shape[0] == 0: - return bbox_results - - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], pos_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - # Accelerate training - max_sample_num_grid = self.train_cfg.get('max_num_grid', 192) - sample_idx = torch.randperm( - grid_feats.shape[0])[:min(grid_feats.shape[0], max_sample_num_grid - )] - grid_feats = grid_feats[sample_idx] - - grid_pred = self.grid_head(grid_feats) - - grid_targets = self.grid_head.get_targets(sampling_results, - self.train_cfg) - grid_targets = grid_targets[sample_idx] - - loss_grid = self.grid_head.loss(grid_pred, grid_targets) - - bbox_results['loss_bbox'].update(loss_grid) - return bbox_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
- - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=False) - # pack rois into bboxes - grid_rois = bbox2roi([det_bbox[:, :4] for det_bbox in det_bboxes]) - if grid_rois.shape[0] != 0: - grid_feats = self.grid_roi_extractor( - x[:len(self.grid_roi_extractor.featmap_strides)], grid_rois) - self.grid_head.test_mode = True - grid_pred = self.grid_head(grid_feats) - # split batch grid head prediction back to each image - num_roi_per_img = tuple(len(det_bbox) for det_bbox in det_bboxes) - grid_pred = { - k: v.split(num_roi_per_img, 0) - for k, v in grid_pred.items() - } - - # apply bbox post-processing to each image individually - bbox_results = [] - num_imgs = len(det_bboxes) - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - bbox_results.append(grid_rois.new_tensor([])) - else: - det_bbox = self.grid_head.get_bboxes( - det_bboxes[i], grid_pred['fused'][i], [img_metas[i]]) - if rescale: - det_bbox[:, :4] /= img_metas[i]['scale_factor'] - bbox_results.append( - bbox2result(det_bbox, det_labels[i], - self.bbox_head.num_classes)) - else: - bbox_results = [ - grid_rois.new_tensor([]) for _ in range(len(det_bboxes)) - ] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/datasets/pascal_context.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/datasets/pascal_context.py deleted file mode 100644 index ff65bad1b86d7e3a5980bb5b9fc55798dc8df5f4..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/datasets/pascal_context.py +++ /dev/null @@ -1,60 +0,0 @@ -# dataset settings -dataset_type = 'PascalContextDataset' -data_root = 'data/VOCdevkit/VOC2010/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -img_scale = (520, 520) -crop_size = (480, 480) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - 
ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline)) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/handlers/json_handler.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/handlers/json_handler.py deleted file mode 100644 index 18d4f15f74139d20adff18b20be5529c592a66b6..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/fileio/handlers/json_handler.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json - -import numpy as np - -from .base import BaseFileHandler - - -def set_default(obj): - """Set default json values for non-serializable values. - - It helps convert ``set``, ``range`` and ``np.ndarray`` data types to list. - It also converts ``np.generic`` (including ``np.int32``, ``np.float32``, - etc.) into plain numbers of plain python built-in types. - """ - if isinstance(obj, (set, range)): - return list(obj) - elif isinstance(obj, np.ndarray): - return obj.tolist() - elif isinstance(obj, np.generic): - return obj.item() - raise TypeError(f'{type(obj)} is unsupported for json dump') - - -class JsonHandler(BaseFileHandler): - - def load_from_fileobj(self, file): - return json.load(file) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('default', set_default) - json.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('default', set_default) - return json.dumps(obj, **kwargs) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/scatter_gather.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/scatter_gather.py deleted file mode 100644 index 900ff88566f8f14830590459dc4fd16d4b382e47..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/scatter_gather.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import Scatter as OrigScatter - -from ._functions import Scatter -from .data_container import DataContainer - - -def scatter(inputs, target_gpus, dim=0): - """Scatter inputs to target gpus. - - The only difference from original :func:`scatter` is to add support for - :type:`~mmcv.parallel.DataContainer`. - """ - - def scatter_map(obj): - if isinstance(obj, torch.Tensor): - if target_gpus != [-1]: - return OrigScatter.apply(target_gpus, None, dim, obj) - else: - # for CPU inference we use self-implemented scatter - return Scatter.forward(target_gpus, obj) - if isinstance(obj, DataContainer): - if obj.cpu_only: - return obj.data - else: - return Scatter.forward(target_gpus, obj.data) - if isinstance(obj, tuple) and len(obj) > 0: - return list(zip(*map(scatter_map, obj))) - if isinstance(obj, list) and len(obj) > 0: - out = list(map(list, zip(*map(scatter_map, obj)))) - return out - if isinstance(obj, dict) and len(obj) > 0: - out = list(map(type(obj), zip(*map(scatter_map, obj.items())))) - return out - return [obj for targets in target_gpus] - - # After scatter_map is called, a scatter_map cell will exist. This cell - # has a reference to the actual function scatter_map, which has references - # to a closure that has a reference to the scatter_map cell (because the - # fn is recursive). 
To avoid this reference cycle, we set the function to - # None, clearing the cell - try: - return scatter_map(inputs) - finally: - scatter_map = None - - -def scatter_kwargs(inputs, kwargs, target_gpus, dim=0): - """Scatter with support for kwargs dictionary.""" - inputs = scatter(inputs, target_gpus, dim) if inputs else [] - kwargs = scatter(kwargs, target_gpus, dim) if kwargs else [] - if len(inputs) < len(kwargs): - inputs.extend([() for _ in range(len(kwargs) - len(inputs))]) - elif len(kwargs) < len(inputs): - kwargs.extend([{} for _ in range(len(inputs) - len(kwargs))]) - inputs = tuple(inputs) - kwargs = tuple(kwargs) - return inputs, kwargs diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/optimizers/radam.py b/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/optimizers/radam.py deleted file mode 100644 index e805d7e34921bee436e1e7fd9e1f753c7609186b..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/modules/parallel_wavegan/optimizers/radam.py +++ /dev/null @@ -1,91 +0,0 @@ -# -*- coding: utf-8 -*- - -"""RAdam optimizer. - -This code is drived from https://github.com/LiyuanLucasLiu/RAdam. -""" - -import math -import torch - -from torch.optim.optimizer import Optimizer - - -class RAdam(Optimizer): - """Rectified Adam optimizer.""" - - def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0): - """Initilize RAdam optimizer.""" - defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay) - self.buffer = [[None, None, None] for ind in range(10)] - super(RAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - """Set state.""" - super(RAdam, self).__setstate__(state) - - def step(self, closure=None): - """Run one step.""" - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError('RAdam does not support sparse gradients') - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state['step'] = 0 - state['exp_avg'] = torch.zeros_like(p_data_fp32) - state['exp_avg_sq'] = torch.zeros_like(p_data_fp32) - else: - state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32) - state['exp_avg_sq'] = state['exp_avg_sq'].type_as(p_data_fp32) - - exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] - beta1, beta2 = group['betas'] - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - state['step'] += 1 - buffered = self.buffer[int(state['step'] % 10)] - if state['step'] == buffered[0]: - N_sma, step_size = buffered[1], buffered[2] - else: - buffered[0] = state['step'] - beta2_t = beta2 ** state['step'] - N_sma_max = 2 / (1 - beta2) - 1 - N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1 - beta2_t) - buffered[1] = N_sma - - # more conservative since it's an approximated value - if N_sma >= 5: - step_size = math.sqrt( - (1 - beta2_t) * (N_sma - 4) / (N_sma_max - 4) * (N_sma - 2) / N_sma * N_sma_max / (N_sma_max - 2)) / (1 - beta1 ** state['step']) # NOQA - else: - step_size = 1.0 / (1 - beta1 ** state['step']) - buffered[2] = step_size - - if group['weight_decay'] != 0: - p_data_fp32.add_(-group['weight_decay'] * group['lr'], p_data_fp32) - - # more conservative since it's an approximated value - if N_sma >= 5: - denom = exp_avg_sq.sqrt().add_(group['eps']) - p_data_fp32.addcdiv_(-step_size * group['lr'], exp_avg, denom) - else: - 
p_data_fp32.add_(-step_size * group['lr'], exp_avg) - - p.data.copy_(p_data_fp32) - - return loss diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/outpainting_example1.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/outpainting_example1.py deleted file mode 100644 index 0e62fd7903a84a40f87cde3b380105a366c31034..0000000000000000000000000000000000000000 --- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/outpainting_example1.py +++ /dev/null @@ -1,38 +0,0 @@ -# %% -# an example script of how to do outpainting with the diffusers inpainting pipeline -# this is basically just the example from -# https://huggingface.co/runwayml/stable-diffusion-inpainting -#% -from diffusers import StableDiffusionInpaintPipeline - -from PIL import Image -import numpy as np -import torch - -from diffusers import StableDiffusionInpaintPipeline - -pipe = StableDiffusionInpaintPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - revision="fp16", - torch_dtype=torch.float16, -) -pipe.to("cuda") - -# load the image, extract the mask -rgba = Image.open('primed_image_with_alpha_channel.png') -mask_image = Image.fromarray(np.array(rgba)[:, :, 3] == 0) - -# run the pipeline -prompt = "Face of a yellow cat, high resolution, sitting on a park bench." -# image and mask_image should be PIL images. -# The mask structure is white for outpainting and black for keeping as is -image = pipe( - prompt=prompt, - image=rgba, - mask_image=mask_image, -).images[0] -image - -# %% -# the vae does lossy encoding, we could get better quality if we pasted the original image into our result. -# this may yield visible edges diff --git a/spaces/Ryzal/rvc-models-new/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/Ryzal/rvc-models-new/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/Ryzal/rvc-models-new/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/SRankChatGpt/Presentation-Assistant/text2ppt_test.md b/spaces/SRankChatGpt/Presentation-Assistant/text2ppt_test.md deleted file mode 100644 index ef0651fe34edd57a1ef8d097637125aa5bc1dde5..0000000000000000000000000000000000000000 --- a/spaces/SRankChatGpt/Presentation-Assistant/text2ppt_test.md +++ /dev/null @@ -1,87 +0,0 @@ - -# 인공지능 개요 -> 인공지능(Artificial Intelligence)은 인간의 지능을 모방하여 만든 컴퓨터 시스템입니다. - ---- - - -# 인공지능 분류 - - - - - - - - - - - - - - - - - -
| 분류 | 설명 |
| --- | --- |
| 강한 인공지능 | 사람과 같이 모든 인간의 작업을 수행 |
| 약한 인공지능 | 일부분의 작업만 수행 |
      - ---- - - -# 인공지능 적용 분야 -- 자율주행 자동차 -- 음성인식 기술 -- 인공지능 영상 인식 기술 -- 자동 번역 ---- - - -# 인공신경망 -- 생물학적 퍼셉트론 구조 참고 -- 입력 계층, 출력 계층, 은닉 계층으로 구성 -![인공신경망 이미지](https://upload.wikimedia.org/wikipedia/commons/4/46/Colored_neural_network.svg) - ---- - - -# 심층학습 -- 다층퍼셉트론(MLP)과 같은 네트워크 위에 다양한 레이어를 쌓아 올림 -- 이미지/음성/텍스트 분석 분야에서 광범위하게 활용 - ---- - - -# 강화학습 -- 상호작용을 통해 주어진 환경을 학습하는 방식 -- 에이전트가 최적의 행동을 취하도록 보상하는 방식 -- 슈퍼마리오 게임/바둑/장기/알파고 등에서 활용 - ---- - - -# 자연어 처리(NLP) -- 인간의 언어(음성 또는 문자)를 컴퓨터가 이해하고 처리 -- 딥러닝 이용한 자동번역, 챗봇, 텍스트 감정분석 등 - ---- - - -# 딥러닝 -- 심층신경망을 이용한 머신러닝 방법론 -- 컴퓨터가 인간의 학습 능력을 모방하여 자동으로 패턴 인식 달성 -- 이미지/음성/자연어 처리 등에서 활용 - ---- - - -# 인공지능의 미래 -- 기술의 발전과 함께 인간의 여러 분야에서 고도화 -- 더욱 편리하고 효율적인 인간의 일상생활에 보다 많은 기여 - ---- - - -# PA! 🎉 -- 🤗**TEXT2PPT** 서비스 PA!를 이용해주셔서 감사합니다. -- 리뷰나 건의사항은 언제든지 환영합니다! -- 📧문의: pa@pa.com diff --git a/spaces/Smotto/Vocal-Isolator/app.py b/spaces/Smotto/Vocal-Isolator/app.py deleted file mode 100644 index 1d05708179fa977ed8b8123b60ab3bd7112e771b..0000000000000000000000000000000000000000 --- a/spaces/Smotto/Vocal-Isolator/app.py +++ /dev/null @@ -1,82 +0,0 @@ -# Standard Library -import os - -# Third-Party -import streamlit as st -import librosa - -# Local -from src.models.MDX_net.kimvocal import KimVocal -from src.loader import Loader -from src.models.MDX_net.mdx_net import Conv_TDF_net_trimm - -# Constants -from src.constants import ONNX_MODEL_PATH - -INPUT_FOLDER = "./datasets/input" -OUTPUT_FOLDER = "./datasets/output" - - -def main(): - # Set page configuration and theming - st.set_page_config( - page_title="Sing For Me", - page_icon="🎵", - ) - st.title("Vocal Isolator") - - # Upload WAV file - uploaded_file = st.file_uploader( - "Upload an Audio File (WAV, MP3, OGG, FLAC)", - type=["wav", "mp3", "ogg", "flac"], - key="file_uploader", - ) - - if uploaded_file is not None: - # Process the uploaded audio - st.subheader("Audio Processing") - st.write("Processing the uploaded audio file...") - - # Display a progress bar while processing - progress_bar = st.progress(0) - progress_text = st.empty() - - loader = Loader(INPUT_FOLDER, OUTPUT_FOLDER) - music_tensor, samplerate = loader.prepare_uploaded_file( - uploaded_file=uploaded_file - ) - - model_raw_python = Conv_TDF_net_trimm( - model_path=ONNX_MODEL_PATH, - use_onnx=True, - target_name="vocals", - L=11, - l=3, - g=48, - bn=8, - bias=False, - dim_f=11, - dim_t=8, - ) - - kimvocal = KimVocal() - vocals_tensor = kimvocal.demix_vocals( - music_tensor=music_tensor, - sample_rate=samplerate, - model=model_raw_python, - streamlit_progressbar=progress_bar, - ) - vocals_array = vocals_tensor.numpy() - - # Update progress - progress_bar.progress(100) - progress_text.text("Audio processing complete!") - - # Display processed audio - st.subheader("Processed Audio") - # TODO: Is it encoding it wrong? Maybe fix it later. 
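-        # Regarding the TODO above: when st.audio receives a raw numpy array together
-        # with sample_rate, Streamlit encodes it to WAV itself, so the explicit
-        # format="audio/mpeg" below is presumably ignored or misleading. A minimal
-        # alternative sketch (assuming vocals_tensor is a mono torch.Tensor):
-        #   st.audio(vocals_tensor.squeeze().cpu().numpy(), sample_rate=samplerate)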
- st.audio(data=vocals_array, format="audio/mpeg", sample_rate=samplerate) - - -if __name__ == "__main__": - main() diff --git a/spaces/Solis/Solis/llm_src/utils/logger.py b/spaces/Solis/Solis/llm_src/utils/logger.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Sumit7864/Image-Enhancer/scripts/extract_subimages.py b/spaces/Sumit7864/Image-Enhancer/scripts/extract_subimages.py deleted file mode 100644 index 9b969ae0d4adff403f2ad362b9afaaaee58e2cef..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/scripts/extract_subimages.py +++ /dev/null @@ -1,135 +0,0 @@ -import argparse -import cv2 -import numpy as np -import os -import sys -from basicsr.utils import scandir -from multiprocessing import Pool -from os import path as osp -from tqdm import tqdm - - -def main(args): - """A multi-thread tool to crop large images to sub-images for faster IO. - - opt (dict): Configuration dict. It contains: - n_thread (int): Thread number. - compression_level (int): CV_IMWRITE_PNG_COMPRESSION from 0 to 9. A higher value means a smaller size - and longer compression time. Use 0 for faster CPU decompression. Default: 3, same in cv2. - input_folder (str): Path to the input folder. - save_folder (str): Path to save folder. - crop_size (int): Crop size. - step (int): Step for overlapped sliding window. - thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped. - - Usage: - For each folder, run this script. - Typically, there are GT folder and LQ folder to be processed for DIV2K dataset. - After process, each sub_folder should have the same number of subimages. - Remember to modify opt configurations according to your settings. - """ - - opt = {} - opt['n_thread'] = args.n_thread - opt['compression_level'] = args.compression_level - opt['input_folder'] = args.input - opt['save_folder'] = args.output - opt['crop_size'] = args.crop_size - opt['step'] = args.step - opt['thresh_size'] = args.thresh_size - extract_subimages(opt) - - -def extract_subimages(opt): - """Crop images to subimages. - - Args: - opt (dict): Configuration dict. It contains: - input_folder (str): Path to the input folder. - save_folder (str): Path to save folder. - n_thread (int): Thread number. - """ - input_folder = opt['input_folder'] - save_folder = opt['save_folder'] - if not osp.exists(save_folder): - os.makedirs(save_folder) - print(f'mkdir {save_folder} ...') - else: - print(f'Folder {save_folder} already exists. Exit.') - sys.exit(1) - - # scan all images - img_list = list(scandir(input_folder, full_path=True)) - - pbar = tqdm(total=len(img_list), unit='image', desc='Extract') - pool = Pool(opt['n_thread']) - for path in img_list: - pool.apply_async(worker, args=(path, opt), callback=lambda arg: pbar.update(1)) - pool.close() - pool.join() - pbar.close() - print('All processes done.') - - -def worker(path, opt): - """Worker for each process. - - Args: - path (str): Image path. - opt (dict): Configuration dict. It contains: - crop_size (int): Crop size. - step (int): Step for overlapped sliding window. - thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped. - save_folder (str): Path to save folder. - compression_level (int): for cv2.IMWRITE_PNG_COMPRESSION. - - Returns: - process_info (str): Process information displayed in progress bar. 
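-
-        A rough usage sketch (this mirrors the call in extract_subimages above):
-            pool.apply_async(worker, args=(path, opt), callback=lambda arg: pbar.update(1))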
- """ - crop_size = opt['crop_size'] - step = opt['step'] - thresh_size = opt['thresh_size'] - img_name, extension = osp.splitext(osp.basename(path)) - - # remove the x2, x3, x4 and x8 in the filename for DIV2K - img_name = img_name.replace('x2', '').replace('x3', '').replace('x4', '').replace('x8', '') - - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - - h, w = img.shape[0:2] - h_space = np.arange(0, h - crop_size + 1, step) - if h - (h_space[-1] + crop_size) > thresh_size: - h_space = np.append(h_space, h - crop_size) - w_space = np.arange(0, w - crop_size + 1, step) - if w - (w_space[-1] + crop_size) > thresh_size: - w_space = np.append(w_space, w - crop_size) - - index = 0 - for x in h_space: - for y in w_space: - index += 1 - cropped_img = img[x:x + crop_size, y:y + crop_size, ...] - cropped_img = np.ascontiguousarray(cropped_img) - cv2.imwrite( - osp.join(opt['save_folder'], f'{img_name}_s{index:03d}{extension}'), cropped_img, - [cv2.IMWRITE_PNG_COMPRESSION, opt['compression_level']]) - process_info = f'Processing {img_name} ...' - return process_info - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder') - parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_HR_sub', help='Output folder') - parser.add_argument('--crop_size', type=int, default=480, help='Crop size') - parser.add_argument('--step', type=int, default=240, help='Step for overlapped sliding window') - parser.add_argument( - '--thresh_size', - type=int, - default=0, - help='Threshold size. Patches whose size is lower than thresh_size will be dropped.') - parser.add_argument('--n_thread', type=int, default=20, help='Thread number.') - parser.add_argument('--compression_level', type=int, default=3, help='Compression level') - args = parser.parse_args() - - main(args) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageGrab.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageGrab.py deleted file mode 100644 index 982f77f206de28e086af15ad86e52dfd7aa3d2ea..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageGrab.py +++ /dev/null @@ -1,149 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# screen grabber -# -# History: -# 2001-04-26 fl created -# 2001-09-17 fl use builtin driver, if present -# 2002-11-19 fl added grabclipboard support -# -# Copyright (c) 2001-2002 by Secret Labs AB -# Copyright (c) 2001-2002 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import os -import shutil -import subprocess -import sys -import tempfile - -from . 
import Image - - -def grab(bbox=None, include_layered_windows=False, all_screens=False, xdisplay=None): - if xdisplay is None: - if sys.platform == "darwin": - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - args = ["screencapture"] - if bbox: - left, top, right, bottom = bbox - args += ["-R", f"{left},{top},{right-left},{bottom-top}"] - subprocess.call(args + ["-x", filepath]) - im = Image.open(filepath) - im.load() - os.unlink(filepath) - if bbox: - im_resized = im.resize((right - left, bottom - top)) - im.close() - return im_resized - return im - elif sys.platform == "win32": - offset, size, data = Image.core.grabscreen_win32( - include_layered_windows, all_screens - ) - im = Image.frombytes( - "RGB", - size, - data, - # RGB, 32-bit line padding, origin lower left corner - "raw", - "BGR", - (size[0] * 3 + 3) & -4, - -1, - ) - if bbox: - x0, y0 = offset - left, top, right, bottom = bbox - im = im.crop((left - x0, top - y0, right - x0, bottom - y0)) - return im - elif shutil.which("gnome-screenshot"): - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - subprocess.call(["gnome-screenshot", "-f", filepath]) - im = Image.open(filepath) - im.load() - os.unlink(filepath) - if bbox: - im_cropped = im.crop(bbox) - im.close() - return im_cropped - return im - # use xdisplay=None for default display on non-win32/macOS systems - if not Image.core.HAVE_XCB: - msg = "Pillow was built without XCB support" - raise OSError(msg) - size, data = Image.core.grabscreen_x11(xdisplay) - im = Image.frombytes("RGB", size, data, "raw", "BGRX", size[0] * 4, 1) - if bbox: - im = im.crop(bbox) - return im - - -def grabclipboard(): - if sys.platform == "darwin": - fh, filepath = tempfile.mkstemp(".jpg") - os.close(fh) - commands = [ - 'set theFile to (open for access POSIX file "' - + filepath - + '" with write permission)', - "try", - " write (the clipboard as JPEG picture) to theFile", - "end try", - "close access theFile", - ] - script = ["osascript"] - for command in commands: - script += ["-e", command] - subprocess.call(script) - - im = None - if os.stat(filepath).st_size != 0: - im = Image.open(filepath) - im.load() - os.unlink(filepath) - return im - elif sys.platform == "win32": - fmt, data = Image.core.grabclipboard_win32() - if fmt == "file": # CF_HDROP - import struct - - o = struct.unpack_from("I", data)[0] - if data[16] != 0: - files = data[o:].decode("utf-16le").split("\0") - else: - files = data[o:].decode("mbcs").split("\0") - return files[: files.index("")] - if isinstance(data, bytes): - import io - - data = io.BytesIO(data) - if fmt == "png": - from . import PngImagePlugin - - return PngImagePlugin.PngImageFile(data) - elif fmt == "DIB": - from . 
import BmpImagePlugin - - return BmpImagePlugin.DibImageFile(data) - return None - else: - if shutil.which("wl-paste"): - args = ["wl-paste"] - elif shutil.which("xclip"): - args = ["xclip", "-selection", "clipboard", "-t", "image/png", "-o"] - else: - msg = "wl-paste or xclip is required for ImageGrab.grabclipboard() on Linux" - raise NotImplementedError(msg) - fh, filepath = tempfile.mkstemp() - subprocess.call(args, stdout=fh) - os.close(fh) - im = Image.open(filepath) - im.load() - os.unlink(filepath) - return im diff --git a/spaces/Superlang/remove_background/DIS/Inference.py b/spaces/Superlang/remove_background/DIS/Inference.py deleted file mode 100644 index 0b2907ddfc8475d477078b990efb6ae106c8ce1e..0000000000000000000000000000000000000000 --- a/spaces/Superlang/remove_background/DIS/Inference.py +++ /dev/null @@ -1,53 +0,0 @@ -import os -import time -import numpy as np -from skimage import io -import time -from glob import glob -from tqdm import tqdm - -import torch, gc -import torch.nn as nn -from torch.autograd import Variable -import torch.optim as optim -import torch.nn.functional as F -from torchvision.transforms.functional import normalize - -from models import * - - -if __name__ == "__main__": - dataset_path="../demo_datasets/your_dataset" #Your dataset path - model_path="../saved_models/IS-Net/isnet-general-use.pth" # the model path - result_path="../demo_datasets/your_dataset_result" #The folder path that you want to save the results - input_size=[1024,1024] - net=ISNetDIS() - - if torch.cuda.is_available(): - net.load_state_dict(torch.load(model_path)) - net=net.cuda() - else: - net.load_state_dict(torch.load(model_path,map_location="cpu")) - net.eval() - im_list = glob(dataset_path+"/*.jpg")+glob(dataset_path+"/*.JPG")+glob(dataset_path+"/*.jpeg")+glob(dataset_path+"/*.JPEG")+glob(dataset_path+"/*.png")+glob(dataset_path+"/*.PNG")+glob(dataset_path+"/*.bmp")+glob(dataset_path+"/*.BMP")+glob(dataset_path+"/*.tiff")+glob(dataset_path+"/*.TIFF") - with torch.no_grad(): - for i, im_path in tqdm(enumerate(im_list), total=len(im_list)): - print("im_path: ", im_path) - im = io.imread(im_path) - if len(im.shape) < 3: - im = im[:, :, np.newaxis] - im_shp=im.shape[0:2] - im_tensor = torch.tensor(im, dtype=torch.float32).permute(2,0,1) - im_tensor = F.upsample(torch.unsqueeze(im_tensor,0), input_size, mode="bilinear").type(torch.uint8) - image = torch.divide(im_tensor,255.0) - image = normalize(image,[0.5,0.5,0.5],[1.0,1.0,1.0]) - - if torch.cuda.is_available(): - image=image.cuda() - result=net(image) - result=torch.squeeze(F.upsample(result[0][0],im_shp,mode='bilinear'),0) - ma = torch.max(result) - mi = torch.min(result) - result = (result-mi)/(ma-mi) - im_name=im_path.split('/')[-1].split('.')[0] - io.imsave(os.path.join(result_path,im_name+".png"),(result*255).permute(1,2,0).cpu().data.numpy().astype(np.uint8)) diff --git a/spaces/TEnngal/bingo/src/components/ui/dialog.tsx b/spaces/TEnngal/bingo/src/components/ui/dialog.tsx deleted file mode 100644 index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/ui/dialog.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DialogPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - children, - ...props -}: 
DialogPrimitive.DialogPortalProps) => ( - -
      - {children} -
      -
      -) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - {children} - - - Close - - - -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
      -) -DialogHeader.displayName = 'DialogHeader' - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
      -) -DialogFooter.displayName = 'DialogFooter' - -const DialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription -} diff --git a/spaces/TNR-5/libt/app.py b/spaces/TNR-5/libt/app.py deleted file mode 100644 index 1d8a4a719ac1697f03430d72a392ef216e9ad3ba..0000000000000000000000000000000000000000 --- a/spaces/TNR-5/libt/app.py +++ /dev/null @@ -1,118 +0,0 @@ -import streamlit as st -from st_utils import bm25_search, semantic_search, hf_api, paginator -from huggingface_hub import ModelSearchArguments -import webbrowser -from numerize.numerize import numerize -import math - -st.set_page_config( - page_title="TNR LIBRERY", - page_icon="♾️", - layout="wide", - initial_sidebar_state="auto", -) - -### SIDEBAR -search_backend = st.sidebar.selectbox( - "Search method", - ["semantic", "bm25", "hfapi"], - format_func=lambda x: {"hfapi": "Keyword search", "bm25": "BM25 search", "semantic": "Semantic Search"}[x], -) -limit_results = int(st.sidebar.number_input("Limit results", min_value=0, value=10)) -sort_by = st.sidebar.selectbox( - "Sort by", - [None, "downloads", "likes", "lastModified"], - format_func=lambda x: {None: "Relevance", "downloads": "Most downloads", "likes": "Most likes", "lastModified": "Recently updated"}[x], -) - -st.sidebar.markdown("# Filters") -args = ModelSearchArguments() -library = st.sidebar.multiselect( - "Library", args.library.values(), format_func=lambda x: {v: k for k, v in args.library.items()}[x] -) -task = st.sidebar.multiselect( - "Task", args.pipeline_tag.values(), format_func=lambda x: {v: k for k, v in args.pipeline_tag.items()}[x] -) - -### MAIN PAGE -st.markdown( - "

      ♾️ TNR LIBRERY

      ", - unsafe_allow_html=True, -) - -# Search bar -search_query = st.text_input("Search for a model in HuggingFace", value="", max_chars=None, key=None, type="default") - -if search_query != "": - filters = { - "library": library, - "task": task, - } - if search_backend == "hfapi": - res = hf_api(search_query, limit_results, sort_by, filters) - elif search_backend == "semantic": - res = semantic_search(search_query, limit_results, sort_by, filters) - elif search_backend == "bm25": - res = bm25_search(search_query, limit_results, sort_by, filters) - hit_list, hits_count = res["hits"], res["count"] - hit_list = [ - { - "modelId": hit["modelId"], - "tags": hit["tags"], - "downloads": hit["downloads"], - "likes": hit["likes"], - "readme": hit.get("readme", None), - } - for hit in hit_list - ] - - if hit_list: - st.write(f"Search results ({hits_count}):") - - if hits_count > 100: - shown_results = 100 - else: - shown_results = hits_count - - for i, hit in paginator( - f"Select results (showing {shown_results} of {hits_count} results)", - hit_list, - ): - col1, col2, col3 = st.columns([5, 1, 1]) - col1.metric("Model", hit["modelId"]) - col2.metric("N° downloads", numerize(hit["downloads"]) if hit["downloads"] and not math.isnan(hit["downloads"]) else "N/A") - col3.metric("N° likes", numerize(hit["likes"]) if hit["likes"] and not math.isnan(hit["likes"]) else "N/A") - st.button( - f"View model on ♾️", - on_click=lambda hit=hit: webbrowser.open(f"https://libt.lpmotortest.com", new=2), - key=f"{i}-{hit['modelId']}", - ) - st.write(f"**Tags:** {'  •  '.join(hit['tags'])}") - - if hit["readme"]: - with st.expander("See README"): - st.write(hit["readme"]) - - # TODO: embed huggingface spaces - # import streamlit.components.v1 as components - # components.html( - # f""" - # - #
      - # - # - # """, - # height=400, - # ) - - st.markdown("---") - - else: - st.write(f"No Search results 😔") - -st.markdown( - "
Made with ❤️ by TNR Studio - Check out the complete project here
      ", - unsafe_allow_html=True, -) \ No newline at end of file diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/encoders.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/encoders.py deleted file mode 100644 index 72885327200d5e3c026d0715568cd0f1c1f35ac4..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/encoders.py +++ /dev/null @@ -1,225 +0,0 @@ -import math - -import torch -import torch.nn as nn -from utils import normalize_data -import torch.nn.functional as F -from torch.nn import TransformerEncoder, TransformerEncoderLayer - - -class StyleEncoder(nn.Module): - def __init__(self, em_size, hyperparameter_definitions): - super().__init__() - # self.embeddings = {} - self.em_size = em_size - # self.hyperparameter_definitions = {} - # for hp in hyperparameter_definitions: - # self.embeddings[hp] = nn.Linear(1, self.em_size) - # self.embeddings = nn.ModuleDict(self.embeddings) - self.embedding = nn.Linear(hyperparameter_definitions.shape[0], self.em_size) - - def forward(self, hyperparameters): # T x B x num_features - # Make faster by using matrices - # sampled_embeddings = [torch.stack([ - # self.embeddings[hp](torch.tensor([batch[hp]], device=self.embeddings[hp].weight.device, dtype=torch.float)) - # for hp in batch - # ], -1).sum(-1) for batch in hyperparameters] - # return torch.stack(sampled_embeddings, 0) - return self.embedding(hyperparameters) - - -class _PositionalEncoding(nn.Module): - def __init__(self, d_model, dropout=0.): - super().__init__() - self.dropout = nn.Dropout(p=dropout) - self.d_model = d_model - self.device_test_tensor = nn.Parameter(torch.tensor(1.)) - - def forward(self, x):# T x B x num_features - assert self.d_model % x.shape[-1]*2 == 0 - d_per_feature = self.d_model // x.shape[-1] - pe = torch.zeros(*x.shape, d_per_feature, device=self.device_test_tensor.device) - #position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1) - interval_size = 10 - div_term = (1./interval_size) * 2*math.pi*torch.exp(torch.arange(0, d_per_feature, 2, device=self.device_test_tensor.device).float()*math.log(math.sqrt(2))) - #print(div_term/2/math.pi) - pe[..., 0::2] = torch.sin(x.unsqueeze(-1) * div_term) - pe[..., 1::2] = torch.cos(x.unsqueeze(-1) * div_term) - return self.dropout(pe).view(x.shape[0],x.shape[1],self.d_model) - - -Positional = lambda _, emsize: _PositionalEncoding(d_model=emsize) - -class EmbeddingEncoder(nn.Module): - def __init__(self, num_features, em_size, num_embs=100): - super().__init__() - self.num_embs = num_embs - self.embeddings = nn.Embedding(num_embs * num_features, em_size, max_norm=True) - self.init_weights(.1) - self.min_max = (-2,+2) - - @property - def width(self): - return self.min_max[1] - self.min_max[0] - - def init_weights(self, initrange): - self.embeddings.weight.data.uniform_(-initrange, initrange) - - def discretize(self, x): - split_size = self.width / self.num_embs - return (x - self.min_max[0] // split_size).int().clamp(0, self.num_embs - 1) - - def forward(self, x): # T x B x num_features - x_idxs = self.discretize(x) - x_idxs += torch.arange(x.shape[-1], device=x.device).view(1, 1, -1) * self.num_embs - # print(x_idxs,self.embeddings.weight.shape) - return self.embeddings(x_idxs).mean(-2) - - -class Normalize(nn.Module): - def __init__(self, mean, std): - super().__init__() - self.mean = mean - self.std = std - - def forward(self, x): - return (x-self.mean)/self.std - - -def get_normalized_uniform_encoder(encoder_creator): - """ - This can be used to wrap an encoder that is fed uniform 
samples in [0,1] and normalizes these to 0 mean and 1 std. - For example, it can be used as `encoder_creator = get_normalized_uniform_encoder(encoders.Linear)`, now this can - be initialized with `encoder_creator(feature_dim, in_dim)`. - :param encoder: - :return: - """ - return lambda in_dim, out_dim: nn.Sequential(Normalize(.5, math.sqrt(1/12)), encoder_creator(in_dim, out_dim)) - - -Linear = nn.Linear -MLP = lambda num_features, emsize: nn.Sequential(nn.Linear(num_features+1,emsize*2), - nn.ReLU(), - nn.Linear(emsize*2,emsize)) - -class NanHandlingEncoder(nn.Module): - def __init__(self, num_features, emsize, keep_nans=True): - super().__init__() - self.num_features = 2 * num_features if keep_nans else num_features - self.emsize = emsize - self.keep_nans = keep_nans - self.layer = nn.Linear(self.num_features, self.emsize) - - def forward(self, x): - if self.keep_nans: - x = torch.cat([torch.nan_to_num(x, nan=0.0), normalize_data(torch.isnan(x) * -1 - + torch.logical_and(torch.isinf(x), torch.sign(x) == 1) * 1 - + torch.logical_and(torch.isinf(x), torch.sign(x) == -1) * 2 - )], -1) - else: - x = torch.nan_to_num(x, nan=0.0) - return self.layer(x) - -class Linear(nn.Linear): - def __init__(self, num_features, emsize): - super().__init__(num_features, emsize) - self.num_features = num_features - self.emsize = emsize - - def forward(self, x): - x = torch.nan_to_num(x, nan=0.0) - return super().forward(x) - -class SequenceSpanningEncoder(nn.Module): - # Regular Encoder transforms Seq_len, B, S -> Seq_len, B, E attending only to last dimension - # This Encoder accesses the Seq_Len dimension additionally - - # Why would we want this? We can learn normalization and embedding of features - # , this might be more important for e.g. categorical, ordinal feats, nan detection - # However maybe this can be easily learned through transformer as well? - # A problem is to make this work across any sequence length and be independent of ordering - - # We could use average and maximum pooling and use those with a linear layer - - - # Another idea !! 
Similar to this we would like to encode features so that their number is variable - # We would like to embed features, also using knowledge of the features in the entire sequence - - # We could use convolution or another transformer - # Convolution: - - # Transformer/Conv across sequence dimension that encodes and normalizes features - # -> Transformer across feature dimension that encodes features to a constant size - - # Conv with flexible features but no sequence info: S,B,F -(reshape)-> S*B,1,F - # -(Conv1d)-> S*B,N,F -(AvgPool,MaxPool)-> S*B,N,1 -> S,B,N - # This probably won't work since it's missing a way to recognize which feature is encoded - - # Transformer with flexible features: S,B,F -> F,B*S,1 -> F2,B*S,1 -> S,B,F2 - - def __init__(self, num_features, em_size): - super().__init__() - - raise NotImplementedError() - # Seq_len, B, S -> Seq_len, B, E - # - self.convs = torch.nn.ModuleList([nn.Conv1d(64 if i else 1, 64, 3) for i in range(5)]) - # self.linear = nn.Linear(64, emsize) - -class TransformerBasedFeatureEncoder(nn.Module): - def __init__(self, num_features, emsize): - super().__init__() - - hidden_emsize = emsize - encoder = Linear(1, hidden_emsize) - n_out = emsize - nhid = 2*emsize - dropout =0.0 - nhead=4 - nlayers=4 - model = nn.Transformer(nhead=nhead, num_encoder_layers=4, num_decoder_layers=4, d_model=1) - - def forward(self, *input): - # S,B,F -> F,S*B,1 -> F2,S*B,1 -> S,B,F2 - input = input.transpose() - self.model(input) - -class Conv(nn.Module): - def __init__(self, input_size, emsize): - super().__init__() - self.convs = torch.nn.ModuleList([nn.Conv2d(64 if i else 1, 64, 3) for i in range(5)]) - self.linear = nn.Linear(64,emsize) - - - def forward(self, x): - size = math.isqrt(x.shape[-1]) - assert size*size == x.shape[-1] - x = x.reshape(*x.shape[:-1], 1, size, size) - for conv in self.convs: - if x.shape[-1] < 4: - break - x = conv(x) - x.relu_() - x = nn.AdaptiveAvgPool2d((1,1))(x).squeeze(-1).squeeze(-1) - return self.linear(x) - - - - -class CanEmb(nn.Embedding): - def __init__(self, num_features, num_embeddings: int, embedding_dim: int, *args, **kwargs): - assert embedding_dim % num_features == 0 - embedding_dim = embedding_dim // num_features - super().__init__(num_embeddings, embedding_dim, *args, **kwargs) - - def forward(self, x): - lx = x.long() - assert (lx == x).all(), "CanEmb only works with tensors of whole numbers" - x = super().forward(lx) - return x.view(*x.shape[:-2], -1) - -def get_Canonical(num_classes): - return lambda num_features, emsize: CanEmb(num_features, num_classes, emsize) - -def get_Embedding(num_embs_per_feature=100): - return lambda num_features, emsize: EmbeddingEncoder(num_features, emsize, num_embs=num_embs_per_feature) diff --git a/spaces/TencentARC/VLog/models/grit_src/grit/modeling/text/modeling_bert.py b/spaces/TencentARC/VLog/models/grit_src/grit/modeling/text/modeling_bert.py deleted file mode 100644 index 3f8bf2d5d7552ee6c314da86a19a56eb0bdaa03e..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/grit/modeling/text/modeling_bert.py +++ /dev/null @@ -1,529 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""PyTorch BERT model. """ -# Adapted from https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py - -from __future__ import absolute_import, division, print_function, unicode_literals -import copy -import os -import json -import logging -import math -import sys -from io import open -import torch -from torch import nn -import torch.utils.checkpoint as checkpoint -from .file_utils import cached_path - - -logger = logging.getLogger() - - -BERT_PRETRAINED_CONFIG_ARCHIVE_MAP = { - 'bert-base-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json", - 'bert-large-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-config.json", - 'bert-base-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json", - 'bert-large-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-config.json", - 'bert-base-multilingual-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-config.json", - 'bert-base-multilingual-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-config.json", - 'bert-base-chinese': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json", - 'bert-base-german-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-cased-config.json", - 'bert-large-uncased-whole-word-masking': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-config.json", - 'bert-large-cased-whole-word-masking': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-config.json", - 'bert-large-uncased-whole-word-masking-finetuned-squad': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-config.json", - 'bert-large-cased-whole-word-masking-finetuned-squad': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-config.json", - 'bert-base-cased-finetuned-mrpc': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-config.json", -} - - -def qk2attn(query, key, attention_mask, gamma): - query = query / gamma - attention_scores = torch.matmul(query, key.transpose(-1, -2)) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - attention_scores = attention_scores + attention_mask - return attention_scores.softmax(dim=-1) - - -class QK2Attention(nn.Module): - def forward(self, query, key, attention_mask, gamma): - return qk2attn(query, key, attention_mask, gamma) - - -LayerNormClass = torch.nn.LayerNorm - - -class BertSelfAttention(nn.Module): - def __init__(self, config): - super(BertSelfAttention, self).__init__() - if config.hidden_size % config.num_attention_heads != 0: - raise ValueError( - "The hidden size (%d) is not a multiple of the number of attention " - "heads (%d)" % (config.hidden_size, config.num_attention_heads)) - self.output_attentions = 
config.output_attentions - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.softmax = nn.Softmax(dim=-1) - self.qk2attn = QK2Attention() - - def transpose_for_scores(self, x): - if torch._C._get_tracing_state(): - # exporter is not smart enough to detect dynamic size for some paths - x = x.view(x.shape[0], -1, self.num_attention_heads, self.attention_head_size) - else: - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(*new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward(self, hidden_states, attention_mask, head_mask=None, - history_state=None): - if history_state is not None: - x_states = torch.cat([history_state, hidden_states], dim=1) - mixed_query_layer = self.query(hidden_states) - mixed_key_layer = self.key(x_states) - mixed_value_layer = self.value(x_states) - else: - mixed_query_layer = self.query(hidden_states) - mixed_key_layer = self.key(hidden_states) - mixed_value_layer = self.value(hidden_states) - - query_layer = self.transpose_for_scores(mixed_query_layer) - key_layer = self.transpose_for_scores(mixed_key_layer) - value_layer = self.transpose_for_scores(mixed_value_layer) - - attention_probs = self.qk2attn(query_layer, key_layer, attention_mask, math.sqrt(self.attention_head_size)) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
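-        # For reference, the attention weights computed by qk2attn above follow the
-        # usual scaled dot-product form, roughly:
-        #   attn   = softmax(Q @ K^T / sqrt(attention_head_size) + attention_mask)
-        #   output = attn @ V   (applied a few lines below via torch.matmul)
-        # where attention_mask is the additive mask precomputed in BertModel's forward.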
- attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(*new_context_layer_shape) - - outputs = (context_layer, attention_probs) if self.output_attentions else (context_layer,) - return outputs - - -class BertSelfOutput(nn.Module): - def __init__(self, config): - super(BertSelfOutput, self).__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.pre_norm = hasattr(config, 'pre_norm') and config.pre_norm - if not self.pre_norm: - self.LayerNorm = LayerNormClass(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - if not self.pre_norm: - hidden_states = self.LayerNorm(hidden_states + input_tensor) - else: - hidden_states = hidden_states + input_tensor - return hidden_states - - -class BertAttention(nn.Module): - def __init__(self, config): - super(BertAttention, self).__init__() - self.pre_norm = hasattr(config, 'pre_norm') and config.pre_norm - if self.pre_norm: - self.LayerNorm = LayerNormClass(config.hidden_size, eps=config.layer_norm_eps) - self.self = BertSelfAttention(config) - self.output = BertSelfOutput(config) - - def forward(self, input_tensor, attention_mask, head_mask=None, - history_state=None): - if self.pre_norm: - self_outputs = self.self(self.LayerNorm(input_tensor), attention_mask, head_mask, - self.layerNorm(history_state) if history_state else history_state) - else: - self_outputs = self.self(input_tensor, attention_mask, head_mask, - history_state) - attention_output = self.output(self_outputs[0], input_tensor) - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -class BertIntermediate(nn.Module): - def __init__(self, config): - super(BertIntermediate, self).__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - assert config.hidden_act == 'gelu', 'Please implement other activation functions' - self.intermediate_act_fn = _gelu_python - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -class BertOutput(nn.Module): - def __init__(self, config): - super(BertOutput, self).__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.pre_norm = hasattr(config, 'pre_norm') and config.pre_norm - self.dropout = nn.Dropout(config.hidden_dropout_prob) - if not self.pre_norm: - self.LayerNorm = LayerNormClass(config.hidden_size, eps=config.layer_norm_eps) - - def forward(self, hidden_states, input_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - if not self.pre_norm: - hidden_states = self.LayerNorm(hidden_states + input_tensor) - else: - hidden_states = hidden_states + input_tensor - return hidden_states - - -class Mlp(nn.Module): - def __init__(self, config): - super().__init__() - self.pre_norm = hasattr(config, 'pre_norm') and config.pre_norm - self.intermediate = BertIntermediate(config) - if self.pre_norm: - self.LayerNorm = 
LayerNormClass(config.hidden_size, eps=config.layer_norm_eps) - self.output = BertOutput(config) - - def forward(self, attention_output): - if not self.pre_norm: - intermediate_output = self.intermediate(attention_output) - else: - intermediate_output = self.intermediate(self.LayerNorm(attention_output)) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - -class BertLayer(nn.Module): - def __init__(self, config, use_act_checkpoint=True): - super(BertLayer, self).__init__() - self.pre_norm = hasattr(config, 'pre_norm') and config.pre_norm - self.use_mlp_wrapper = hasattr(config, 'use_mlp_wrapper') and config.use_mlp_wrapper - self.attention = BertAttention(config) - self.use_act_checkpoint = use_act_checkpoint - if self.use_mlp_wrapper: - self.mlp = Mlp(config) - else: - self.intermediate = BertIntermediate(config) - if self.pre_norm: - self.LayerNorm = LayerNormClass(config.hidden_size, eps=config.layer_norm_eps) - self.output = BertOutput(config) - - def forward(self, hidden_states, attention_mask, head_mask=None, - history_state=None): - if self.use_act_checkpoint: - attention_outputs = checkpoint.checkpoint(self.attention, hidden_states, - attention_mask, head_mask, history_state) - else: - attention_outputs = self.attention(hidden_states, attention_mask, - head_mask, history_state) - attention_output = attention_outputs[0] - if self.use_mlp_wrapper: - layer_output = self.mlp(attention_output) - else: - if not self.pre_norm: - intermediate_output = self.intermediate(attention_output) - else: - intermediate_output = self.intermediate(self.LayerNorm(attention_output)) - layer_output = self.output(intermediate_output, attention_output) - outputs = (layer_output,) + attention_outputs[1:] # add attentions if we output them - return outputs - - -class BertEncoder(nn.Module): - def __init__(self, config, use_act_checkpoint=True): - super(BertEncoder, self).__init__() - self.output_attentions = config.output_attentions - self.output_hidden_states = config.output_hidden_states - self.layer = nn.ModuleList([BertLayer(config, use_act_checkpoint=use_act_checkpoint) for _ in range(config.num_hidden_layers)]) - self.pre_norm = hasattr(config, 'pre_norm') and config.pre_norm - if self.pre_norm: - self.LayerNorm = LayerNormClass(config.hidden_size, eps=config.layer_norm_eps) - - def forward(self, hidden_states, attention_mask, head_mask=None, - encoder_history_states=None): - all_hidden_states = () - all_attentions = () - for i, layer_module in enumerate(self.layer): - if self.output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - history_state = None if encoder_history_states is None else encoder_history_states[i] - layer_outputs = layer_module( - hidden_states, attention_mask, - (None if head_mask is None else head_mask[i]), - history_state, - ) - hidden_states = layer_outputs[0] - - if self.output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - if self.pre_norm: - hidden_states = self.LayerNorm(hidden_states) - outputs = (hidden_states,) - if self.output_hidden_states: - outputs = outputs + (all_hidden_states,) - if self.output_attentions: - outputs = outputs + (all_attentions,) - return outputs - -CONFIG_NAME = "config.json" - -class PretrainedConfig(object): - """ Base class for all configuration classes. - Handle a few common parameters and methods for loading/downloading/saving configurations. 
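-
-        A minimal round-trip sketch using the methods defined below (the model name is
-        just an example; save_pretrained expects an existing directory):
-            config = BertConfig.from_pretrained('bert-base-uncased')
-            config.save_pretrained('./my_config_dir/')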
- """ - pretrained_config_archive_map = {} - - def __init__(self, **kwargs): - self.finetuning_task = kwargs.pop('finetuning_task', None) - self.num_labels = kwargs.pop('num_labels', 2) - self.output_attentions = kwargs.pop('output_attentions', False) - self.output_hidden_states = kwargs.pop('output_hidden_states', False) - self.torchscript = kwargs.pop('torchscript', False) - - def save_pretrained(self, save_directory): - """ Save a configuration object to a directory, so that it - can be re-loaded using the `from_pretrained(save_directory)` class method. - """ - assert os.path.isdir(save_directory), "Saving path should be a directory where the model and configuration can be saved" - - # If we save using the predefined names, we can load using `from_pretrained` - output_config_file = os.path.join(save_directory, CONFIG_NAME) - - self.to_json_file(output_config_file) - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path, **kwargs): - r""" Instantiate a PretrainedConfig from a pre-trained model configuration. - - Params: - **pretrained_model_name_or_path**: either: - - a string with the `shortcut name` of a pre-trained model configuration to load from cache - or download and cache if not already stored in cache (e.g. 'bert-base-uncased'). - - a path to a `directory` containing a configuration file saved - using the `save_pretrained(save_directory)` method. - - a path or url to a saved configuration `file`. - **cache_dir**: (`optional`) string: - Path to a directory in which a downloaded pre-trained model - configuration should be cached if the standard cache should not be used. - **return_unused_kwargs**: (`optional`) bool: - - If False, then this function returns just the final configuration object. - - If True, then this functions returns a tuple `(config, unused_kwargs)` where `unused_kwargs` - is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: - ie the part of kwargs which has not been used to update `config` and is otherwise ignored. - **kwargs**: (`optional`) dict: - Dictionary of key/value pairs with which to update the configuration object after loading. - - The values in kwargs of any keys which are configuration attributes will be used - to override the loaded values. - - Behavior concerning key/value pairs whose keys are *not* configuration attributes is controlled - by the `return_unused_kwargs` keyword parameter. - - Examples:: - - >>> config = BertConfig.from_pretrained('bert-base-uncased') # Download configuration from S3 and cache. - >>> config = BertConfig.from_pretrained('./test/saved_model/') # E.g. 
config (or model) was saved using `save_pretrained('./test/saved_model/')` - >>> config = BertConfig.from_pretrained('./test/saved_model/my_configuration.json') - >>> config = BertConfig.from_pretrained('bert-base-uncased', output_attention=True, foo=False) - >>> assert config.output_attention == True - >>> config, unused_kwargs = BertConfig.from_pretrained('bert-base-uncased', output_attention=True, - >>> foo=False, return_unused_kwargs=True) - >>> assert config.output_attention == True - >>> assert unused_kwargs == {'foo': False} - - """ - cache_dir = kwargs.pop('cache_dir', None) - return_unused_kwargs = kwargs.pop('return_unused_kwargs', False) - - if pretrained_model_name_or_path in cls.pretrained_config_archive_map: - config_file = cls.pretrained_config_archive_map[pretrained_model_name_or_path] - elif os.path.isdir(pretrained_model_name_or_path): - config_file = os.path.join(pretrained_model_name_or_path, CONFIG_NAME) - else: - config_file = pretrained_model_name_or_path - # redirect to the cache, if necessary - try: - resolved_config_file = cached_path(config_file, cache_dir=cache_dir) - except EnvironmentError: - if pretrained_model_name_or_path in cls.pretrained_config_archive_map: - logger.error( - "Couldn't reach server at '{}' to download pretrained model configuration file.".format( - config_file)) - else: - logger.error( - "Model name '{}' was not found in model name list ({}). " - "We assumed '{}' was a path or url but couldn't find any file " - "associated to this path or url.".format( - pretrained_model_name_or_path, - ', '.join(cls.pretrained_config_archive_map.keys()), - config_file)) - return None - if resolved_config_file == config_file: - logger.info("loading configuration file {}".format(config_file)) - else: - logger.info("loading configuration file {} from cache at {}".format( - config_file, resolved_config_file)) - - # Load config - config = cls.from_json_file(resolved_config_file) - - # Update config with kwargs if needed - to_remove = [] - for key, value in kwargs.items(): - if hasattr(config, key): - setattr(config, key, value) - to_remove.append(key) - # add img_layer_norm_eps, use_img_layernorm - if "img_layer_norm_eps" in kwargs: - setattr(config, "img_layer_norm_eps", kwargs["img_layer_norm_eps"]) - to_remove.append("img_layer_norm_eps") - if "use_img_layernorm" in kwargs: - setattr(config, "use_img_layernorm", kwargs["use_img_layernorm"]) - to_remove.append("use_img_layernorm") - for key in to_remove: - kwargs.pop(key, None) - - logger.info("Model config %s", config) - if return_unused_kwargs: - return config, kwargs - else: - return config - - @classmethod - def from_dict(cls, json_object): - """Constructs a `Config` from a Python dictionary of parameters.""" - config = cls(vocab_size_or_config_json_file=-1) - for key, value in json_object.items(): - config.__dict__[key] = value - return config - - @classmethod - def from_json_file(cls, json_file): - """Constructs a `BertConfig` from a json file of parameters.""" - with open(json_file, "r", encoding='utf-8') as reader: - text = reader.read() - return cls.from_dict(json.loads(text)) - - def __eq__(self, other): - return self.__dict__ == other.__dict__ - - def __repr__(self): - return str(self.to_json_string()) - - def to_dict(self): - """Serializes this instance to a Python dictionary.""" - output = copy.deepcopy(self.__dict__) - return output - - def to_json_string(self): - """Serializes this instance to a JSON string.""" - return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n" - - def 
to_json_file(self, json_file_path): - """ Save this instance to a json file.""" - with open(json_file_path, "w", encoding='utf-8') as writer: - writer.write(self.to_json_string()) - - -class BertConfig(PretrainedConfig): - r""" - :class:`~pytorch_transformers.BertConfig` is the configuration class to store the configuration of a - `BertModel`. - - - Arguments: - vocab_size_or_config_json_file: Vocabulary size of `inputs_ids` in `BertModel`. - hidden_size: Size of the encoder layers and the pooler layer. - num_hidden_layers: Number of hidden layers in the Transformer encoder. - num_attention_heads: Number of attention heads for each attention layer in - the Transformer encoder. - intermediate_size: The size of the "intermediate" (i.e., feed-forward) - layer in the Transformer encoder. - hidden_act: The non-linear activation function (function or string) in the - encoder and pooler. If string, "gelu", "relu" and "swish" are supported. - hidden_dropout_prob: The dropout probabilitiy for all fully connected - layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob: The dropout ratio for the attention - probabilities. - max_position_embeddings: The maximum sequence length that this model might - ever be used with. Typically set this to something large just in case - (e.g., 512 or 1024 or 2048). - type_vocab_size: The vocabulary size of the `token_type_ids` passed into - `BertModel`. - initializer_range: The sttdev of the truncated_normal_initializer for - initializing all weight matrices. - layer_norm_eps: The epsilon used by LayerNorm. - """ - pretrained_config_archive_map = BERT_PRETRAINED_CONFIG_ARCHIVE_MAP - - def __init__(self, - vocab_size_or_config_json_file=30522, - hidden_size=768, - num_hidden_layers=12, - num_attention_heads=12, - intermediate_size=3072, - hidden_act="gelu", - hidden_dropout_prob=0.1, - attention_probs_dropout_prob=0.1, - max_position_embeddings=512, - type_vocab_size=2, - initializer_range=0.02, - layer_norm_eps=1e-12, - **kwargs): - super(BertConfig, self).__init__(**kwargs) - if isinstance(vocab_size_or_config_json_file, str): - with open(vocab_size_or_config_json_file, "r", encoding='utf-8') as reader: - json_config = json.loads(reader.read()) - for key, value in json_config.items(): - self.__dict__[key] = value - elif isinstance(vocab_size_or_config_json_file, int): - self.vocab_size = vocab_size_or_config_json_file - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.hidden_act = hidden_act - self.intermediate_size = intermediate_size - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.max_position_embeddings = max_position_embeddings - self.type_vocab_size = type_vocab_size - self.initializer_range = initializer_range - self.layer_norm_eps = layer_norm_eps - else: - raise ValueError("First argument must be either a vocabulary size (int)" - "or the path to a pretrained model config file (str)") - - -def _gelu_python(x): - - return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))) \ No newline at end of file diff --git a/spaces/ThomasSimonini/Huggy/Build/Huggy.loader.js b/spaces/ThomasSimonini/Huggy/Build/Huggy.loader.js deleted file mode 100644 index 9ba2d32cc3a447f117f2bc0acff70724a145cba6..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/Huggy/Build/Huggy.loader.js +++ /dev/null @@ -1 +0,0 @@ -function createUnityInstance(t,n,l){function 
d(e,t){if(!d.aborted&&n.showBanner)return"error"==t&&(d.aborted=!0),n.showBanner(e,t);switch(t){case"error":console.error(e);break;case"warning":console.warn(e);break;default:console.log(e)}}function r(e){var t=e.reason||e.error,n=t?t.toString():e.message||e.reason||"",r=t&&t.stack?t.stack.toString():"";(n+="\n"+(r=r.startsWith(n)?r.substring(n.length):r).trim())&&c.stackTraceRegExp&&c.stackTraceRegExp.test(n)&&k(n,e.filename||t&&(t.fileName||t.sourceURL)||"",e.lineno||t&&(t.lineNumber||t.line)||0)}function e(e,t,n){var r=e[t];void 0!==r&&r||(console.warn('Config option "'+t+'" is missing or empty. Falling back to default value: "'+n+'". Consider updating your WebGL template to include the missing config option.'),e[t]=n)}l=l||function(){};var o,c={canvas:t,webglContextAttributes:{preserveDrawingBuffer:!1,powerPreference:2},cacheControl:function(e){return e==c.dataUrl?"must-revalidate":"no-store"},streamingAssetsUrl:"StreamingAssets",downloadProgress:{},deinitializers:[],intervals:{},setInterval:function(e,t){e=window.setInterval(e,t);return this.intervals[e]=!0,e},clearInterval:function(e){delete this.intervals[e],window.clearInterval(e)},preRun:[],postRun:[],print:function(e){console.log(e)},printErr:function(e){console.error(e),"string"==typeof e&&-1!=e.indexOf("wasm streaming compile failed")&&(-1!=e.toLowerCase().indexOf("mime")?d('HTTP Response Header "Content-Type" configured incorrectly on the server for file '+c.codeUrl+' , should be "application/wasm". Startup time performance will suffer.',"warning"):d('WebAssembly streaming compilation failed! This can happen for example if "Content-Encoding" HTTP header is incorrectly enabled on the server for file '+c.codeUrl+", but the file is not pre-compressed on disk (or vice versa). Check the Network tab in browser Devtools to debug server header configuration.","warning"))},locateFile:function(e){return e},disabledCanvasEvents:["contextmenu","dragstart"]};for(o in e(n,"companyName","Unity"),e(n,"productName","WebGL Player"),e(n,"productVersion","1.0"),n)c[o]=n[o];c.streamingAssetsUrl=new URL(c.streamingAssetsUrl,document.URL).href;var i=c.disabledCanvasEvents.slice();function a(e){e.preventDefault()}i.forEach(function(e){t.addEventListener(e,a)}),window.addEventListener("error",r),window.addEventListener("unhandledrejection",r),c.deinitializers.push(function(){for(var e in c.disableAccessToMediaDevices(),i.forEach(function(e){t.removeEventListener(e,a)}),window.removeEventListener("error",r),window.removeEventListener("unhandledrejection",r),c.intervals)window.clearInterval(e);c.intervals={}}),c.QuitCleanup=function(){for(var e=0;e>>6:(n<65536?t[o++]=224|n>>>12:(t[o++]=240|n>>>18,t[o++]=128|n>>>12&63),t[o++]=128|n>>>6&63),t[o++]=128|63&n);return t},n.buf2binstring=function(e){return u(e,e.length)},n.binstring2buf=function(e){for(var t=new l.Buf8(e.length),n=0,r=t.length;n>10&1023,i[a++]=56320|1023&n)}return u(i,a)},n.utf8border=function(e,t){for(var n=(t=(t=t||e.length)>e.length?e.length:t)-1;0<=n&&128==(192&e[n]);)n--;return!(n<0)&&0!==n&&n+d[e[n]]>t?n:t}},"zlib/inflate.js":function(e,t,n){"use strict";var L=e("../utils/common"),O=e("./adler32"),I=e("./crc32"),A=e("./inffast"),P=e("./inftrees"),D=0,N=-2,z=1,r=852,o=592;function F(e){return(e>>>24&255)+(e>>>8&65280)+((65280&e)<<8)+((255&e)<<24)}function 
i(){this.mode=0,this.last=!1,this.wrap=0,this.havedict=!1,this.flags=0,this.dmax=0,this.check=0,this.total=0,this.head=null,this.wbits=0,this.wsize=0,this.whave=0,this.wnext=0,this.window=null,this.hold=0,this.bits=0,this.length=0,this.offset=0,this.extra=0,this.lencode=null,this.distcode=null,this.lenbits=0,this.distbits=0,this.ncode=0,this.nlen=0,this.ndist=0,this.have=0,this.next=null,this.lens=new L.Buf16(320),this.work=new L.Buf16(288),this.lendyn=null,this.distdyn=null,this.sane=0,this.back=0,this.was=0}function a(e){var t;return e&&e.state?(t=e.state,e.total_in=e.total_out=t.total=0,e.msg="",t.wrap&&(e.adler=1&t.wrap),t.mode=z,t.last=0,t.havedict=0,t.dmax=32768,t.head=null,t.hold=0,t.bits=0,t.lencode=t.lendyn=new L.Buf32(r),t.distcode=t.distdyn=new L.Buf32(o),t.sane=1,t.back=-1,D):N}function s(e){var t;return e&&e.state?((t=e.state).wsize=0,t.whave=0,t.wnext=0,a(e)):N}function l(e,t){var n,r;return!e||!e.state||(r=e.state,t<0?(n=0,t=-t):(n=1+(t>>4),t<48&&(t&=15)),t&&(t<8||15=e.wsize?(L.arraySet(e.window,t,n-e.wsize,e.wsize,0),e.wnext=0,e.whave=e.wsize):(r<(o=e.wsize-e.wnext)&&(o=r),L.arraySet(e.window,t,n-r,o,e.wnext),(r-=o)?(L.arraySet(e.window,t,n-r,r,0),e.wnext=r,e.whave=e.wsize):(e.wnext+=o,e.wnext===e.wsize&&(e.wnext=0),e.whave>>8&255,n.check=I(n.check,B,2,0),u=d=0,n.mode=2;else if(n.flags=0,n.head&&(n.head.done=!1),!(1&n.wrap)||(((255&d)<<8)+(d>>8))%31)e.msg="incorrect header check",n.mode=30;else if(8!=(15&d))e.msg="unknown compression method",n.mode=30;else{if(u-=4,x=8+(15&(d>>>=4)),0===n.wbits)n.wbits=x;else if(x>n.wbits){e.msg="invalid window size",n.mode=30;break}n.dmax=1<>8&1),512&n.flags&&(B[0]=255&d,B[1]=d>>>8&255,n.check=I(n.check,B,2,0)),u=d=0,n.mode=3;case 3:for(;u<32;){if(0===s)break e;s--,d+=r[i++]<>>8&255,B[2]=d>>>16&255,B[3]=d>>>24&255,n.check=I(n.check,B,4,0)),u=d=0,n.mode=4;case 4:for(;u<16;){if(0===s)break e;s--,d+=r[i++]<>8),512&n.flags&&(B[0]=255&d,B[1]=d>>>8&255,n.check=I(n.check,B,2,0)),u=d=0,n.mode=5;case 5:if(1024&n.flags){for(;u<16;){if(0===s)break e;s--,d+=r[i++]<>>8&255,n.check=I(n.check,B,2,0)),u=d=0}else n.head&&(n.head.extra=null);n.mode=6;case 6:if(1024&n.flags&&((h=s<(h=n.length)?s:h)&&(n.head&&(x=n.head.extra_len-n.length,n.head.extra||(n.head.extra=new Array(n.head.extra_len)),L.arraySet(n.head.extra,r,i,h,x)),512&n.flags&&(n.check=I(n.check,r,h,i)),s-=h,i+=h,n.length-=h),n.length))break e;n.length=0,n.mode=7;case 7:if(2048&n.flags){if(0===s)break e;for(h=0;x=r[i+h++],n.head&&x&&n.length<65536&&(n.head.name+=String.fromCharCode(x)),x&&h>9&1,n.head.done=!0),e.adler=n.check=0,n.mode=12;break;case 10:for(;u<32;){if(0===s)break e;s--,d+=r[i++]<>>=7&u,u-=7&u,n.mode=27;else{for(;u<3;){if(0===s)break e;s--,d+=r[i++]<>>=1)){case 0:n.mode=14;break;case 1:var T,T=R=void 0,R=n;if(H){for(Z=new L.Buf32(512),j=new L.Buf32(32),T=0;T<144;)R.lens[T++]=8;for(;T<256;)R.lens[T++]=9;for(;T<280;)R.lens[T++]=7;for(;T<288;)R.lens[T++]=8;for(P(1,R.lens,0,288,Z,0,R.work,{bits:9}),T=0;T<32;)R.lens[T++]=5;P(2,R.lens,0,32,j,0,R.work,{bits:5}),H=!1}if(R.lencode=Z,R.lenbits=9,R.distcode=j,R.distbits=5,n.mode=20,6!==t)break;d>>>=2,u-=2;break e;case 2:n.mode=17;break;case 3:e.msg="invalid block type",n.mode=30}d>>>=2,u-=2}break;case 14:for(d>>>=7&u,u-=7&u;u<32;){if(0===s)break e;s--,d+=r[i++]<>>16^65535)){e.msg="invalid stored block lengths",n.mode=30;break}if(n.length=65535&d,u=d=0,n.mode=15,6===t)break e;case 15:n.mode=16;case 
16:if(h=n.length){if(0===(h=l<(h=s>>=5,u-=5,n.ndist=1+(31&d),d>>>=5,u-=5,n.ncode=4+(15&d),d>>>=4,u-=4,286>>=3,u-=3}for(;n.have<19;)n.lens[U[n.have++]]=0;if(n.lencode=n.lendyn,n.lenbits=7,S={bits:n.lenbits},_=P(0,n.lens,0,19,n.lencode,0,n.work,S),n.lenbits=S.bits,_){e.msg="invalid code lengths set",n.mode=30;break}n.have=0,n.mode=19;case 19:for(;n.have>>16&255,w=65535&C,!((g=C>>>24)<=u);){if(0===s)break e;s--,d+=r[i++]<>>=g,u-=g,n.lens[n.have++]=w;else{if(16===w){for(E=g+2;u>>=g,u-=g,0===n.have){e.msg="invalid bit length repeat",n.mode=30;break}x=n.lens[n.have-1],h=3+(3&d),d>>>=2,u-=2}else if(17===w){for(E=g+3;u>>=g)),d>>>=3,u=u-g-3}else{for(E=g+7;u>>=g)),d>>>=7,u=u-g-7}if(n.have+h>n.nlen+n.ndist){e.msg="invalid bit length repeat",n.mode=30;break}for(;h--;)n.lens[n.have++]=x}}if(30===n.mode)break;if(0===n.lens[256]){e.msg="invalid code -- missing end-of-block",n.mode=30;break}if(n.lenbits=9,S={bits:n.lenbits},_=P(1,n.lens,0,n.nlen,n.lencode,0,n.work,S),n.lenbits=S.bits,_){e.msg="invalid literal/lengths set",n.mode=30;break}if(n.distbits=6,n.distcode=n.distdyn,S={bits:n.distbits},_=P(2,n.lens,n.nlen,n.ndist,n.distcode,0,n.work,S),n.distbits=S.bits,_){e.msg="invalid distances set",n.mode=30;break}if(n.mode=20,6===t)break e;case 20:n.mode=21;case 21:if(6<=s&&258<=l){e.next_out=a,e.avail_out=l,e.next_in=i,e.avail_in=s,n.hold=d,n.bits=u,A(e,f),a=e.next_out,o=e.output,l=e.avail_out,i=e.next_in,r=e.input,s=e.avail_in,d=n.hold,u=n.bits,12===n.mode&&(n.back=-1);break}for(n.back=0;p=(C=n.lencode[d&(1<>>16&255,w=65535&C,!((g=C>>>24)<=u);){if(0===s)break e;s--,d+=r[i++]<>v)])>>>16&255,w=65535&C,!(v+(g=C>>>24)<=u);){if(0===s)break e;s--,d+=r[i++]<>>=v,u-=v,n.back+=v}if(d>>>=g,u-=g,n.back+=g,n.length=w,0===p){n.mode=26;break}if(32&p){n.back=-1,n.mode=12;break}if(64&p){e.msg="invalid literal/length code",n.mode=30;break}n.extra=15&p,n.mode=22;case 22:if(n.extra){for(E=n.extra;u>>=n.extra,u-=n.extra,n.back+=n.extra}n.was=n.length,n.mode=23;case 23:for(;p=(C=n.distcode[d&(1<>>16&255,w=65535&C,!((g=C>>>24)<=u);){if(0===s)break e;s--,d+=r[i++]<>v)])>>>16&255,w=65535&C,!(v+(g=C>>>24)<=u);){if(0===s)break e;s--,d+=r[i++]<>>=v,u-=v,n.back+=v}if(d>>>=g,u-=g,n.back+=g,64&p){e.msg="invalid distance code",n.mode=30;break}n.offset=w,n.extra=15&p,n.mode=24;case 24:if(n.extra){for(E=n.extra;u>>=n.extra,u-=n.extra,n.back+=n.extra}if(n.offset>n.dmax){e.msg="invalid distance too far back",n.mode=30;break}n.mode=25;case 25:if(0===l)break e;if(n.offset>(h=f-l)){if((h=n.offset-h)>n.whave&&n.sane){e.msg="invalid distance too far back",n.mode=30;break}b=h>n.wnext?(h-=n.wnext,n.wsize-h):n.wnext-h,h>n.length&&(h=n.length),m=n.window}else m=o,b=a-n.offset,h=n.length;for(l-=h=l>>16&65535|0,a=0;0!==n;){for(n-=a=2e3>>1:n>>>1;e[t]=n}return e}();t.exports=function(e,t,n,r){var o=s,i=r+n;e^=-1;for(var a=r;a>>8^o[255&(e^t[a])];return-1^e}},"zlib/inffast.js":function(e,t,n){"use strict";t.exports=function(e,t){var 
n,r,o,i,a,s,l=e.state,d=e.next_in,u=e.input,c=d+(e.avail_in-5),f=e.next_out,h=e.output,b=f-(t-e.avail_out),m=f+(e.avail_out-257),g=l.dmax,p=l.wsize,w=l.whave,v=l.wnext,y=l.window,k=l.hold,x=l.bits,_=l.lencode,S=l.distcode,E=(1<>>=r=n>>>24,x-=r,0==(r=n>>>16&255))h[f++]=65535&n;else{if(!(16&r)){if(0==(64&r)){n=_[(65535&n)+(k&(1<>>=r,x-=r),x<15&&(k+=u[d++]<>>=r=n>>>24,x-=r,!(16&(r=n>>>16&255))){if(0==(64&r)){n=S[(65535&n)+(k&(1<>>=r,x-=r,(r=f-b)>3)<<3))-1,e.next_in=d-=o,e.next_out=f,e.avail_in=dh?(m=O[I+a[v]],U[T+a[v]]):(m=96,0),l=1<<(b=w-S),y=d=1<<_;o[f+(B>>S)+(d-=l)]=b<<24|m<<16|g|0,0!==d;);for(l=1<>=1;if(B=0!==l?(B&l-1)+l:0,v++,0==--R[w]){if(w===k)break;w=t[n+a[v]]}if(xe.length||31!=e[0]||139!=e[1])return!1;var r=e[3];if(4&r){if(t+2>e.length)return!1;if((t+=2+e[t]+(e[t+1]<<8))>e.length)return!1}if(8&r){for(;te.length)return!1;t++}return 16&r&&String.fromCharCode.apply(null,e.subarray(t,t+n.length+1))==n+"\0"}}};function T(n){x(n);var e=c.cacheControl(c[n]),t=c.companyName&&c.productName?c.cachedFetch:c.fetchWithProgress,r=c[n],r=/file:\/\//.exec(r)?"same-origin":void 0;return t(c[n],{method:"GET",companyName:c.companyName,productName:c.productName,control:e,mode:r,onProgress:function(e){x(n,e)}}).then(function(e){return a=e.parsedBody,s=c[n],new Promise(function(e,t){try{for(var n in U){var r,o,i;if(U[n].hasUnityMarker(a))return s&&console.log('You can reduce startup time if you configure your web server to add "Content-Encoding: '+n+'" response header when serving "'+s+'" file.'),(r=U[n]).worker||(o=URL.createObjectURL(new Blob(["this.require = ",r.require.toString(),"; this.decompress = ",r.decompress.toString(),"; this.onmessage = ",function(e){e={id:e.data.id,decompressed:this.decompress(e.data.compressed)};postMessage(e,e.decompressed?[e.decompressed.buffer]:[])}.toString(),"; postMessage({ ready: true });"],{type:"application/javascript"})),r.worker=new Worker(o),r.worker.onmessage=function(e){e.data.ready?URL.revokeObjectURL(o):(this.callbacks[e.data.id](e.data.decompressed),delete this.callbacks[e.data.id])},r.worker.callbacks={},r.worker.nextCallbackId=0),i=r.worker.nextCallbackId++,r.worker.callbacks[i]=e,void r.worker.postMessage({id:i,compressed:a},[a.buffer])}e(a)}catch(e){t(e)}});var a,s}).catch(function(e){var t="Failed to download file "+c[n];"file:"==location.protocol?d(t+". Loading web pages via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host Unity content, or use the Unity Build and Run option.","error"):console.error(t)})}function R(){Promise.all([T("frameworkUrl").then(function(e){var s=URL.createObjectURL(new Blob([e],{type:"application/javascript"}));return new Promise(function(i,e){var a=document.createElement("script");a.src=s,a.onload=function(){if("undefined"==typeof unityFramework||!unityFramework){var e,t=[["br","br"],["gz","gzip"]];for(e in t){var n,r=t[e];if(c.frameworkUrl.endsWith("."+r[0]))return n="Unable to parse "+c.frameworkUrl+"!","file:"==location.protocol?void d(n+" Loading pre-compressed (brotli or gzip) content via a file:// URL without a web server is not supported by this browser. Please use a local development web server to host compressed Unity content, or use the Unity Build and Run option.","error"):(n+=' This can happen if build compression was enabled but web server hosting the content was misconfigured to not serve the file with HTTP Response Header "Content-Encoding: '+r[1]+'" present. 
Check browser Console and Devtools Network tab to debug.',"br"==r[0]&&"http:"==location.protocol&&(r=-1!=["localhost","127.0.0.1"].indexOf(location.hostname)?"":"Migrate your server to use HTTPS.",n=/Firefox/.test(navigator.userAgent)?"Unable to parse "+c.frameworkUrl+'!
      If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported in Firefox over HTTP connections. '+r+' See https://bugzilla.mozilla.org/show_bug.cgi?id=1670675 for more information.':"Unable to parse "+c.frameworkUrl+'!
      If using custom web server, verify that web server is sending .br files with HTTP Response Header "Content-Encoding: br". Brotli compression may not be supported over HTTP connections. Migrate your server to use HTTPS.'),void d(n,"error"))}d("Unable to parse "+c.frameworkUrl+"! The file is corrupt, or compression was misconfigured? (check Content-Encoding HTTP Response Header on web server)","error")}var o=unityFramework;unityFramework=null,a.onload=null,URL.revokeObjectURL(s),i(o)},a.onerror=function(e){d("Unable to load file "+c.frameworkUrl+"! Check that the file exists on the remote server. (also check browser Console and Devtools Network tab to debug)","error")},document.body.appendChild(a),c.deinitializers.push(function(){document.body.removeChild(a)})})}),T("codeUrl")]).then(function(e){c.wasmBinary=e[1],e[0](c)});var e=T("dataUrl");c.preRun.push(function(){c.addRunDependency("dataUrl"),e.then(function(e){var t=new DataView(e.buffer,e.byteOffset,e.byteLength),n=0,r="UnityWebData1.0\0";if(!String.fromCharCode.apply(null,e.subarray(n,n+r.length))==r)throw"unknown data format";var o=t.getUint32(n+=r.length,!0);for(n+=4;n /app/txlib/$TXLIB_VERSION/config.json - -# 运行 -CMD bash bin/unidbg-fetch-qsign --basePath=txlib/$TXLIB_VERSION - -# 暴露端口 -EXPOSE 7860 \ No newline at end of file diff --git a/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000 --- a/spaces/Toritto/Genshin-impact-IA-project-v1/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,90 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), 
f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/VoiceHero69/changer/setup_tools/commands.py b/spaces/VoiceHero69/changer/setup_tools/commands.py deleted file mode 100644 index d0a92b94bff182f2a9e3ca23d2093abaabacc66d..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/setup_tools/commands.py +++ /dev/null @@ -1,26 +0,0 @@ -import sys -from os import system - -from setup_tools.os import is_windows - - -def get_python(): - return sys.executable - - -def run_command(command: list[tuple[str, str]] | str, args='', show_output=True): - extra = (' >nul' if is_windows() else ' >/dev/null') if not show_output else '' - if not isinstance(command, str): - commandstr = '&&'.join([' '.join(cmd) + extra for cmd in command]) - else: - commandstr = f'{command} {args}' + extra - - # Ensure bin/bash shell - if not is_windows(): - commandstr = f'/bin/bash -c "{commandstr}"' - - system(commandstr) - - -def run_pip(package, show_output=False): - run_command('pip', f'install {package}', show_output) diff --git a/spaces/Weyaxi/open-llm-leaderboard-renamer/app.py b/spaces/Weyaxi/open-llm-leaderboard-renamer/app.py deleted file mode 100644 index ae8be6ee94dc9a9a38120a1ef8cf0c6a7438d57c..0000000000000000000000000000000000000000 --- a/spaces/Weyaxi/open-llm-leaderboard-renamer/app.py +++ /dev/null @@ -1,88 +0,0 @@ -from huggingface_hub import * -import os -import json -import gradio as gr - -fs = HfFileSystem() -api = HfApi() - - -def remove_from(text, from_model, to_model): - text = text.replace(from_model, to_model) - return text - - -def return_operation_requests(from_model, to_model): - ls = [i['name'] for i in fs.ls(path=f'datasets/open-llm-leaderboard/requests/{from_model.split("/")[0]}') if from_model in i['name']] - liste=[] - - for i in range(len(ls)): - path_for = ls[i] - will_write = json.loads(fs.read_text(path_for)) - will_write['model'] = to_model - will_write = json.dumps(will_write) - - liste.extend([CommitOperationAdd(path_in_repo="/".join(remove_from(path_for, from_model, to_model).split("/")[3:]), path_or_fileobj=will_write.encode()), - CommitOperationDelete(path_in_repo="/".join(path_for.split("/")[3:]))]) - - return liste - - -def return_operation_results(from_model, to_model): - ls = [i['name'] for i in fs.ls(path=f'datasets/open-llm-leaderboard/results/{from_model}') if from_model in i['name']] - liste=[] - - for i in range(len(ls)): - path_for = ls[i] - - - will_write = json.loads(fs.read_text(path_for)) - will_write['config_general']['model_name'] = to_model - will_write = json.dumps(will_write, indent=2) - - liste.extend([CommitOperationAdd(path_in_repo="/".join(remove_from(path_for, from_model, to_model).split("/")[3:]), path_or_fileobj=will_write.encode()), - CommitOperationDelete(path_in_repo="/".join(path_for.split("/")[3:]))]) - - return liste - - - -def model_name_to_details(model_name): - return 
f"datasets/open-llm-leaderboard/details_{model_name.split('/')[0]}__{model_name.split('/')[1]}" - - -def return_operation_details(from_model, to_model): - ls = [i['name'] for i in fs.ls(path=model_name_to_details(from_model)) if ("results" in i['name'] and ".json" in i['name'])] - liste=[] - - for i in range(len(ls)): - path_for = ls[i] - - will_write = json.loads(fs.read_text(path_for)) - will_write['config_general']['model_name'] = to_model - will_write = json.dumps(will_write, indent=2) - - readme_file = fs.read_text("/".join(path_for.split("/")[:3])+"/README.md").replace(from_model, to_model).replace(model_name_to_details(from_model).split('/')[2], model_name_to_details(to_model).split('/')[2]) - - liste.extend([CommitOperationAdd(path_in_repo="/".join(path_for.split("/")[3:]), path_or_fileobj=will_write.encode()), - CommitOperationAdd(path_in_repo="README.md", path_or_fileobj=readme_file.encode())]) - - return liste - -def commit(liste_requests, liste_results, liste_details, details_path, from_model, to_model): - request_commit = (create_commit(repo_id="open-llm-leaderboard/requests", operations=liste_requests, commit_message=f"Renaming Model {from_model} to {to_model}", repo_type="dataset", create_pr=True).__dict__['pr_url']) - result_commit = (create_commit(repo_id="open-llm-leaderboard/results", operations=liste_results, commit_message=f"Renaming Model {from_model} to {to_model}", repo_type="dataset", create_pr=True).__dict__['pr_url']) - details_commit = (create_commit(repo_id="/".join(details_path.split("/")[1:]), operations=liste_details, commit_message=f"Renaming Model {from_model} to {to_model}", repo_type="dataset", create_pr=True).__dict__['pr_url']) - return f"{request_commit}\n{result_commit}\n{details_commit}" - - -def commit_gradio(from_model, to_model, hf_token): - try: - login(hf_token) - return commit(return_operation_requests(from_model, to_model), return_operation_results(from_model, to_model), return_operation_details(from_model, to_model), model_name_to_details(from_model), from_model, to_model) - except Exception as e: - return e - -demo = gr.Interface(fn=commit_gradio, inputs=["text", "text", "text"], outputs="text") - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/data/test_audio_dataset.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/data/test_audio_dataset.py deleted file mode 100644 index b69c9c397830738b73d6c229009f84b867cda801..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/tests/data/test_audio_dataset.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from functools import partial -from itertools import product -import json -import math -import os -import random -import typing as tp - -import pytest -import torch -from torch.utils.data import DataLoader - -from audiocraft.data.audio_dataset import ( - AudioDataset, - AudioMeta, - _get_audio_meta, - load_audio_meta, - save_audio_meta -) -from audiocraft.data.zip import PathInZip - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestAudioMeta(TempDirMixin): - - def test_get_audio_meta(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. 
- for sample_rate, ch in product(sample_rates, channels): - n_frames = int(duration * sample_rate) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path('sample.wav') - save_wav(path, wav, sample_rate) - m = _get_audio_meta(path, minimal=True) - assert m.path == path, 'path does not match' - assert m.sample_rate == sample_rate, 'sample rate does not match' - assert m.duration == duration, 'duration does not match' - assert m.amplitude is None - assert m.info_path is None - - def test_save_audio_meta(self): - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_audio_meta = [] - for idx, meta in enumerate([audio_meta, empty_audio_meta]): - path = self.get_temp_path(f'data_{idx}_save.jsonl') - save_audio_meta(path, meta) - with open(path, 'r') as f: - lines = f.readlines() - read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines] - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - assert m == read_m - - def test_load_audio_meta(self): - try: - import dora - except ImportError: - dora = None # type: ignore - - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_meta = [] - for idx, meta in enumerate([audio_meta, empty_meta]): - path = self.get_temp_path(f'data_{idx}_load.jsonl') - with open(path, 'w') as f: - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - f.write(json_str) - read_meta = load_audio_meta(path) - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - if dora: - m.path = dora.git_save.to_absolute_path(m.path) - assert m == read_m, f'original={m}, read={read_m}' - - -class TestAudioDataset(TempDirMixin): - - def _create_audio_files(self, - root_name: str, - num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1): - root_dir = self.get_temp_dir(root_name) - for i in range(num_examples): - if isinstance(durations, float): - duration = durations - elif isinstance(durations, tuple) and len(durations) == 1: - duration = durations[0] - elif isinstance(durations, tuple) and len(durations) == 2: - duration = random.uniform(durations[0], durations[1]) - else: - assert False - n_frames = int(duration * sample_rate) - wav = get_white_noise(channels, n_frames) - path = os.path.join(root_dir, f'example_{i}.wav') - save_wav(path, wav, sample_rate) - return root_dir - - def _create_audio_dataset(self, - root_name: str, - total_num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1, - segment_duration: tp.Optional[float] = None, - num_examples: int = 10, - shuffle: bool = True, - return_info: bool = False): - root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels) - dataset = AudioDataset.from_path(root_dir, - minimal_meta=True, - segment_duration=segment_duration, - num_samples=num_examples, - sample_rate=sample_rate, - channels=channels, - shuffle=shuffle, - return_info=return_info) - return dataset - - def test_dataset_full(self): - total_examples = 10 - min_duration, max_duration = 1., 4. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), - sample_rate=sample_rate, channels=channels, segment_duration=None) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] <= int(max_duration * sample_rate) - assert sample.shape[1] >= int(min_duration * sample_rate) - - def test_dataset_segment(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - - def test_dataset_equal_audio_and_segment_durations(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - # the random seek_time adds variability on audio read - sample_1 = dataset[0] - sample_2 = dataset[1] - assert not torch.allclose(sample_1, sample_2) - - def test_dataset_samples(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - - create_dataset = partial( - self._create_audio_dataset, - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, - ) - - dataset = create_dataset(shuffle=True) - # when shuffle = True, we have different inputs for the same index across epoch - sample_1 = dataset[0] - sample_2 = dataset[0] - assert not torch.allclose(sample_1, sample_2) - - dataset_noshuffle = create_dataset(shuffle=False) - # when shuffle = False, we have same inputs for the same index across epoch - sample_1 = dataset_noshuffle[0] - sample_2 = dataset_noshuffle[0] - assert torch.allclose(sample_1, sample_2) - - def test_dataset_return_info(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - assert segment_info.sample_rate == sample_rate - assert segment_info.total_frames == int(segment_duration * sample_rate) - assert segment_info.n_frames <= int(segment_duration * sample_rate) - assert segment_info.seek_time >= 0 - - def test_dataset_return_info_no_segment_duration(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = None - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == segment_info.total_frames - assert segment_info.sample_rate == sample_rate - assert segment_info.n_frames <= segment_info.total_frames - - def test_dataset_collate_fn(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - assert batch.shape[0] == batch_size - - @pytest.mark.parametrize("segment_duration", [1.0, None]) - def test_dataset_with_meta_collate_fn(self, segment_duration): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - collate_fn=dataset.collater, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - wav, infos = batch - assert wav.shape[0] == batch_size - assert len(infos) == batch_size - - @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [ - [1, True, True, 0.5, 0.5, 0.0], - [1, False, True, 0.25, 0.5, 0.25], - [1, True, False, 0.666, 0.333, 0.0], - [1, False, False, 0.333, 0.333, 0.333], - [None, False, False, 0.333, 0.333, 0.333]]) - def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist): - random.seed(1234) - rng = torch.Generator() - rng.manual_seed(1234) - - def _get_histogram(dataset, repetitions=20_000): - counts = {file_meta.path: 0. 
for file_meta in meta} - for _ in range(repetitions): - file_meta = dataset.sample_file(rng) - counts[file_meta.path] += 1 - return {name: count / repetitions for name, count in counts.items()} - - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset( - meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight, - sample_on_duration=sample_on_duration) - hist = _get_histogram(dataset) - assert math.isclose(hist['a'], a_hist, abs_tol=0.01) - assert math.isclose(hist['b'], b_hist, abs_tol=0.01) - assert math.isclose(hist['c'], c_hist, abs_tol=0.01) - - def test_meta_duration_filter_all(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - try: - AudioDataset(meta, segment_duration=11, min_segment_ratio=1) - assert False - except AssertionError: - assert True - - def test_meta_duration_filter_long(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7) - assert len(dataset) == 2 diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/version.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/version.py deleted file mode 100644 index f533a81bcb2dbae1879f59695714ec82f95f384b..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/version.py +++ /dev/null @@ -1,2 +0,0 @@ -__all__ = ['__version__'] -__version__ = '1.0.56.dev0' diff --git a/spaces/Xenova/ai-code-playground/assets/index-3eabce1b.css b/spaces/Xenova/ai-code-playground/assets/index-3eabce1b.css deleted file mode 100644 index 7eeb571b688cb85d5ee8488cd4a15168743b2a4e..0000000000000000000000000000000000000000 --- a/spaces/Xenova/ai-code-playground/assets/index-3eabce1b.css +++ /dev/null @@ -1 +0,0 @@ -.sidebar{background-color:#181818;color:#ccc}body{background-color:#1f1f1f;color:#fff}.progress-container{position:relative;font-size:16px;color:#fff;border-radius:8px;text-align:left;overflow:hidden}.progress-bar{padding:2px 4px;z-index:0;top:0;width:1%;height:100%;overflow:hidden;background-color:#007bff;white-space:nowrap}.progress-text{z-index:2}*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier 
New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-feature-settings:inherit;font-variation-settings:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}dialog{padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: 
;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.static{position:static}.absolute{position:absolute}.left-0{left:0}.top-0{top:0}.z-50{z-index:50}.mb-1{margin-bottom:.25rem}.ml-2{margin-left:.5rem}.mt-3{margin-top:.75rem}.flex{display:flex}.h-4{height:1rem}.h-6{height:1.5rem}.h-full{height:100%}.h-screen{height:100vh}.w-4{width:1rem}.w-6{width:1.5rem}.w-full{width:100%}.w-screen{width:100vw}.flex-grow{flex-grow:1}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.justify-between{justify-content:space-between}.gap-1{gap:.25rem}.gap-2{gap:.5rem}.overflow-y-auto{overflow-y:auto}.rounded{border-radius:.25rem}.rounded-lg{border-radius:.5rem}.border{border-width:1px}.border-gray-200{--tw-border-opacity: 1;border-color:rgb(229 231 235 / var(--tw-border-opacity))}.border-gray-600{--tw-border-opacity: 1;border-color:rgb(75 85 99 / var(--tw-border-opacity))}.bg-gray-50{--tw-bg-opacity: 1;background-color:rgb(249 250 251 / var(--tw-bg-opacity))}.bg-gray-700{--tw-bg-opacity: 1;background-color:rgb(55 65 81 / var(--tw-bg-opacity))}.p-2{padding:.5rem}.p-2\.5{padding:.625rem}.p-3{padding:.75rem}.p-4{padding:1rem}.px-32{padding-left:8rem;padding-right:8rem}.text-center{text-align:center}.text-2xl{font-size:1.5rem;line-height:2rem}.text-3xl{font-size:1.875rem;line-height:2.25rem}.font-medium{font-weight:500}.font-normal{font-weight:400}.font-semibold{font-weight:600}.text-blue-600{--tw-text-opacity: 1;color:rgb(37 99 235 / var(--tw-text-opacity))}.text-gray-900{--tw-text-opacity: 1;color:rgb(17 24 39 / var(--tw-text-opacity))}.text-white{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.underline{text-decoration-line:underline}.underline-offset-1{text-underline-offset:1px}.ring-offset-gray-800{--tw-ring-offset-color: #1f2937}.blur{--tw-blur: blur(8px);filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.filter{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.transition-all{transition-property:all;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}:root{font-family:Inter,system-ui,Avenir,Helvetica,Arial,sans-serif;line-height:1.5;font-weight:400;color-scheme:light dark;color:#ffffffde;background-color:#242424;font-synthesis:none;text-rendering:optimizeLegibility;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;-webkit-text-size-adjust:100%}a{font-weight:500;color:#646cff;text-decoration:inherit}a:hover{color:#535bf2}body{margin:0;display:flex;place-items:center}h1{font-size:3.2em;line-height:1.1}button{border-radius:8px;border:1px solid transparent;padding:.6em 1.2em;font-size:1em;font-weight:500;font-family:inherit;background-color:#1a1a1a;cursor:pointer;transition:border-color .25s}button:hover{border-color:#646cff}button:focus,button:focus-visible{outline:4px auto -webkit-focus-ring-color}@media (prefers-color-scheme: light){:root{color:#213547;background-color:#fff}a:hover{color:#747bff}button{background-color:#f9f9f9}}.focus\:ring-2:focus{--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 
#0000)}.focus\:ring-blue-600:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(37 99 235 / var(--tw-ring-opacity))} diff --git a/spaces/XzJosh/nanami-Bert-VITS2/text/chinese_bert.py b/spaces/XzJosh/nanami-Bert-VITS2/text/chinese_bert.py deleted file mode 100644 index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/nanami-Bert-VITS2/text/chinese_bert.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") -model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device) - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors='pt') - for i in inputs: - inputs[i] = inputs[i].to(device) - res = model(**inputs, output_hidden_states=True) - res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text)+2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - - return phone_level_feature.T - -if __name__ == '__main__': - # feature = get_bert_feature('你好,我是说的道理。') - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) - diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/datasets/prepare_panoptic_fpn.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/datasets/prepare_panoptic_fpn.py deleted file mode 100644 index 597d791afab1bcc0013203a66c7fba225065eebe..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/datasets/prepare_panoptic_fpn.py +++ /dev/null @@ -1,116 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import functools -import json -import multiprocessing as mp -import numpy as np -import os -import time -from fvcore.common.download import download -from panopticapi.utils import rgb2id -from PIL import Image - -from detectron2.data.datasets.builtin_meta import COCO_CATEGORIES - - -def _process_panoptic_to_semantic(input_panoptic, output_semantic, segments, id_map): - panoptic = np.asarray(Image.open(input_panoptic), dtype=np.uint32) - panoptic = rgb2id(panoptic) - output = np.zeros_like(panoptic, dtype=np.uint8) + 255 - for seg in segments: - cat_id = seg["category_id"] - new_cat_id = id_map[cat_id] - output[panoptic == seg["id"]] = new_cat_id - Image.fromarray(output).save(output_semantic) - - -def separate_coco_semantic_from_panoptic(panoptic_json, panoptic_root, sem_seg_root, categories): - """ - Create semantic segmentation annotations from panoptic segmentation - annotations, to be used by PanopticFPN. - - It maps all thing categories to class 0, and maps all unlabeled pixels to class 255. - It maps all stuff categories to contiguous ids starting from 1. - - Args: - panoptic_json (str): path to the panoptic json file, in COCO's format. - panoptic_root (str): a directory with panoptic annotation files, in COCO's format. - sem_seg_root (str): a directory to output semantic annotation files - categories (list[dict]): category metadata. Each dict needs to have: - "id": corresponds to the "category_id" in the json annotations - "isthing": 0 or 1 - """ - os.makedirs(sem_seg_root, exist_ok=True) - - stuff_ids = [k["id"] for k in categories if k["isthing"] == 0] - thing_ids = [k["id"] for k in categories if k["isthing"] == 1] - id_map = {} # map from category id to id in the output semantic annotation - assert len(stuff_ids) <= 254 - for i, stuff_id in enumerate(stuff_ids): - id_map[stuff_id] = i + 1 - for thing_id in thing_ids: - id_map[thing_id] = 0 - id_map[0] = 255 - - with open(panoptic_json) as f: - obj = json.load(f) - - pool = mp.Pool(processes=max(mp.cpu_count() // 2, 4)) - - def iter_annotations(): - for anno in obj["annotations"]: - file_name = anno["file_name"] - segments = anno["segments_info"] - input = os.path.join(panoptic_root, file_name) - output = os.path.join(sem_seg_root, file_name) - yield input, output, segments - - print("Start writing to {} ...".format(sem_seg_root)) - start = time.time() - pool.starmap( - functools.partial(_process_panoptic_to_semantic, id_map=id_map), - iter_annotations(), - chunksize=100, - ) - print("Finished. 
time: {:.2f}s".format(time.time() - start)) - - -if __name__ == "__main__": - dataset_dir = os.path.join(os.getenv("DETECTRON2_DATASETS", "datasets"), "coco") - for s in ["val2017", "train2017"]: - separate_coco_semantic_from_panoptic( - os.path.join(dataset_dir, "annotations/panoptic_{}.json".format(s)), - os.path.join(dataset_dir, "panoptic_{}".format(s)), - os.path.join(dataset_dir, "panoptic_stuff_{}".format(s)), - COCO_CATEGORIES, - ) - - # Prepare val2017_100 for quick testing: - - dest_dir = os.path.join(dataset_dir, "annotations/") - URL_PREFIX = "https://dl.fbaipublicfiles.com/detectron2/" - download(URL_PREFIX + "annotations/coco/panoptic_val2017_100.json", dest_dir) - with open(os.path.join(dest_dir, "panoptic_val2017_100.json")) as f: - obj = json.load(f) - - def link_val100(dir_full, dir_100): - print("Creating " + dir_100 + " ...") - os.makedirs(dir_100, exist_ok=True) - for img in obj["images"]: - basename = os.path.splitext(img["file_name"])[0] - src = os.path.join(dir_full, basename + ".png") - dst = os.path.join(dir_100, basename + ".png") - src = os.path.relpath(src, start=dir_100) - os.symlink(src, dst) - - link_val100( - os.path.join(dataset_dir, "panoptic_val2017"), - os.path.join(dataset_dir, "panoptic_val2017_100"), - ) - - link_val100( - os.path.join(dataset_dir, "panoptic_stuff_val2017"), - os.path.join(dataset_dir, "panoptic_stuff_val2017_100"), - ) diff --git a/spaces/Yuliang/ECON/lib/torch_utils/ops/fma.py b/spaces/Yuliang/ECON/lib/torch_utils/ops/fma.py deleted file mode 100644 index 5c030932fb439b4dcc7b08ad55d0fa2aa9d8f82f..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/torch_utils/ops/fma.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
-"""Fused multiply-add, with slightly faster gradients than `torch.addcmul()`.""" - -import torch - -#---------------------------------------------------------------------------- - - -def fma(a, b, c): # => a * b + c - return _FusedMultiplyAdd.apply(a, b, c) - - -#---------------------------------------------------------------------------- - - -class _FusedMultiplyAdd(torch.autograd.Function): # a * b + c - @staticmethod - def forward(ctx, a, b, c): # pylint: disable=arguments-differ - out = torch.addcmul(c, a, b) - ctx.save_for_backward(a, b) - ctx.c_shape = c.shape - return out - - @staticmethod - def backward(ctx, dout): # pylint: disable=arguments-differ - a, b = ctx.saved_tensors - c_shape = ctx.c_shape - da = None - db = None - dc = None - - if ctx.needs_input_grad[0]: - da = _unbroadcast(dout * b, a.shape) - - if ctx.needs_input_grad[1]: - db = _unbroadcast(dout * a, b.shape) - - if ctx.needs_input_grad[2]: - dc = _unbroadcast(dout, c_shape) - - return da, db, dc - - -#---------------------------------------------------------------------------- - - -def _unbroadcast(x, shape): - extra_dims = x.ndim - len(shape) - assert extra_dims >= 0 - dim = [ - i - for i in range(x.ndim) if x.shape[i] > 1 and (i < extra_dims or shape[i - extra_dims] == 1) - ] - if len(dim): - x = x.sum(dim=dim, keepdim=True) - if extra_dims: - x = x.reshape(-1, *x.shape[extra_dims + 1:]) - assert x.shape == shape - return x - - -#---------------------------------------------------------------------------- diff --git a/spaces/ZettaFi/SeeFood/create_model.py b/spaces/ZettaFi/SeeFood/create_model.py deleted file mode 100644 index a37c5beb2c5639183580a5505adf0670f13bf83f..0000000000000000000000000000000000000000 --- a/spaces/ZettaFi/SeeFood/create_model.py +++ /dev/null @@ -1,39 +0,0 @@ -from duckduckgo_search import ddg_images -from fastai.vision.all import download_images, resize_images, verify_images, get_image_files, ImageBlock, \ - CategoryBlock, RandomSplitter, parent_label, ResizeMethod, Resize, vision_learner, resnet18, error_rate, \ - L, Path, DataBlock - - -def search_images(search_term, max_images=30): - print(f"Searching for '{search_term}'") - return L(ddg_images(search_term, max_results=max_images)).itemgot('image') - - -def search_and_populate(search_term, category, file_path, max_images=30): - dest = (file_path/category) - dest.mkdir(exist_ok=True, parents=True) - download_images(dest, urls=search_images(f'{search_term} photo', max_images=max_images)) - resize_images(file_path/category, max_size=400, dest=file_path/category) - - -path = Path('seefood') -search_and_populate("hotdog", "hotdog", path, max_images=90) -for o in ['burger', 'sandwich', 'fruit', 'chips', 'salad']: - search_and_populate(o, "not_hotdog", path, max_images=30) - -failed = verify_images(get_image_files(path)) -failed.map(Path.unlink) -print(f"{len(failed)} failed images") - -dls = DataBlock( - blocks=(ImageBlock, CategoryBlock), - get_items=get_image_files, - splitter=RandomSplitter(valid_pct=0.2), - get_y=parent_label, - item_tfms=[Resize(256, ResizeMethod.Squish)] -).dataloaders(path, bs=32) - -learn = vision_learner(dls, resnet18, metrics=error_rate) -learn.fine_tune(3) - -learn.export("hotdogModel.pkl") diff --git a/spaces/ZilliaxOfficial/nyaru-svc-3.0/utils.py b/spaces/ZilliaxOfficial/nyaru-svc-3.0/utils.py deleted file mode 100644 index 3733a75111dc89cefa333b34933ae01623550ea7..0000000000000000000000000000000000000000 --- a/spaces/ZilliaxOfficial/nyaru-svc-3.0/utils.py +++ /dev/null @@ -1,338 +0,0 @@ -import os -import glob 
-import sys -import argparse -import logging -import json -import subprocess - -import librosa -import numpy as np -import torchaudio -from scipy.io.wavfile import read -import torch -import torchvision -from torch.nn import functional as F -from commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(rank=None): - - hubert_soft = hubert_model.hubert_soft("hubert/hubert-soft-0d54a1f4.pt") - if rank is not None: - hubert_soft = hubert_soft.cuda(rank) - return hubert_soft - -def get_hubert_content(hmodel, y=None, path=None): - if path is not None: - source, sr = torchaudio.load(path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - else: - source = y - source = source.unsqueeze(0) - with torch.inference_mode(): - units = hmodel.units(source) - return units.transpose(1,2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def transform(mel, height): # 68-92 - #r = np.random.random() - #rate = r * 0.3 + 0.85 # 0.85-1.15 - #height = int(mel.size(-2) * rate) - tgt = torchvision.transforms.functional.resize(mel, (height, mel.size(-1))) - if height >= mel.size(-2): - return tgt[:, :mel.size(-2), :] - else: - silence = tgt[:,-1:,:].repeat(1,mel.size(-2)-height,1) - silence += torch.randn_like(silence) / 10 - return torch.cat((tgt, silence), 1) - - -def stretch(mel, width): # 0.5-2 - return torchvision.transforms.functional.resize(mel, (mel.size(-2), width)) - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if iteration is None: - iteration = 1 - if learning_rate is None: - learning_rate = 0.0002 - if optimizer is not None and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, 
iteration, checkpoint_path): - # ckptname = checkpoint_path.split(os.sep)[-1] - # newest_step = int(ckptname.split(".")[0].split("_")[1]) - # val_steps = 2000 - # last_ckptname = checkpoint_path.replace(str(newest_step), str(newest_step - val_steps*3)) - # if newest_step >= val_steps*3: - # os.system(f"rm {last_ckptname}") - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - 
model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/ZiyadCodes/ArabicGPT/index.html b/spaces/ZiyadCodes/ArabicGPT/index.html deleted file mode 100644 index 9ecbeeac01c509c2b375a9a6e99ef49af4a6e7b4..0000000000000000000000000000000000000000 --- a/spaces/ZiyadCodes/ArabicGPT/index.html +++ /dev/null @@ -1,288 +0,0 @@ - - - - - - - ArabicGPT - - - - - - - - -
      - - - - - - \ No newline at end of file diff --git a/spaces/a-v-bely/russian-task-generator/utilities_language_w2v/rus_sentence_w2v.py b/spaces/a-v-bely/russian-task-generator/utilities_language_w2v/rus_sentence_w2v.py deleted file mode 100644 index 6c8198c2b1dca0b81b6742dca353803c1798fa75..0000000000000000000000000000000000000000 --- a/spaces/a-v-bely/russian-task-generator/utilities_language_w2v/rus_sentence_w2v.py +++ /dev/null @@ -1,245 +0,0 @@ -import copy -import string -from random import random -from random import sample -from utilities_language_general.rus_constants import nlp -from utilities_language_general.rus_utils import get_tags -from utilities_language_general.rus_utils import check_token -from utilities_language_general.rus_constants import PHRASES -from utilities_language_general.rus_utils import define_gender -from utilities_language_general.rus_utils import convert_gender -from utilities_language_general.rus_utils import make_inflection -from utilities_language_general.rus_constants import BAD_USER_TARGET_WORDS -from utilities_language_general.rus_utils import get_distractors_from_model - - -class SENTENCE: - def __init__(self, original: str, n_sentence: int, max_num_distractors): - self.original = original - self.n_sentence = n_sentence - self.max_num_distractors = max_num_distractors - self.parsed = nlp(self.original) - self.sentence_lemma_pos = [] - self.sentence_phrases = [] - self.target_words = [] - - def lemmatize_sentence(self): - for token in self.parsed: - lemma_pos = f'{token.lemma_}_{token.pos_}' - self.sentence_lemma_pos.append((lemma_pos, token)) - - def bind_phrases(self): - previous_was_phrase = False - for i in range(len(self.sentence_lemma_pos) - 1): - phrase_candidate = f'{self.sentence_lemma_pos[i][0]}_{self.sentence_lemma_pos[i + 1][0]}' - if phrase_candidate in PHRASES and not previous_was_phrase: - # phrase is {phrase: {original_token1: spacy.token, original_token2: spacy.token}} - phrase = [ - f'{self.sentence_lemma_pos[i][0]}_{self.sentence_lemma_pos[i + 1][0]}', - { - 'original_token1': self.sentence_lemma_pos[i][1], - 'original_token2': self.sentence_lemma_pos[i + 1][1] - } - ] - self.sentence_phrases.append(phrase) - previous_was_phrase = True - else: - if not previous_was_phrase: - self.sentence_phrases.append(self.sentence_lemma_pos[i][1]) - previous_was_phrase = False - - def search_target_words_automatically(self, model, target_minimum: set, frequency_dict: dict = None): - for token in self.sentence_phrases: - # TODO: Still do not have w2v model with phrases - # therefore cannot come up with the criteria - if isinstance(token, list): # if token is a phrase - original_token1 = token[1]['original_token1'] - original_token2 = token[1]['original_token2'] - original_token1_tags = get_tags(original_token1.text)[0] - original_token2_tags = get_tags(original_token2.text)[0] - tags = original_token1_tags | original_token2_tags - not_ner = True if (original_token1.ent_type == 0 and original_token2.ent_type == 0) else False - target_word = { - 'sentence_number': self.n_sentence, - 'sentence_text': self.original, - 'original_text': f'{original_token1.text} {original_token2.text}', - 'lemma': token[0], - 'pos': ('phrase', [original_token1.pos_, original_token2.pos_]), - 'gender': list({define_gender(original_token1), define_gender(original_token2)})[0], - 'tags': tags, - 'position_in_sentence': self.original.find(original_token1.text), - 'not_named_entity': not_ner, - 'frequency_in_text': 0 - } - self.target_words.append(target_word) - else: # 
if token is just a spacy.nlp token - if check_token(model=model, token=token, lemma_pos='auto', current_minimum=target_minimum): - target_word = { - 'sentence_number': self.n_sentence, - 'sentence_text': self.original, - 'original_text': token.text, - 'lemma': token.lemma_, - 'pos': ('simple', token.pos_), - 'gender': define_gender(token.lemma_), - 'number_children': len([child for child in token.children]), - 'tags': get_tags(token.text)[0], - 'position_in_sentence': self.original.find(token.text), - 'not_named_entity': True if token.ent_type == 0 else False, - 'frequency_in_text': frequency_dict.get(token.lemma_, 1), - } - self.target_words.append(target_word) - - def search_user_target_words(self, model, user_target_words: set = None, frequency_dict: dict = None): - for _utw in user_target_words: - if _utw in self.original: - parse_utw = nlp(_utw) - if ' ' in _utw: - tags = get_tags(parse_utw[0].text)[0] | get_tags(parse_utw[1].text)[0] - user_target_word_lemma = '_'.join([f'{token.lemma_}_{token.pos_}' for token in parse_utw]) - user_target_word_pos = ('phrase', [token.pos_ for token in parse_utw]) - user_target_word_tags = tags - not_ner = True if (parse_utw[0].ent_type == 0 and parse_utw[1].ent_type == 0) else False - else: - user_target_word_lemma = f'{parse_utw[0].lemma_}_{parse_utw[0].pos_}' - user_target_word_pos = ('simple', parse_utw[0].pos_) - user_target_word_tags = get_tags(parse_utw[0].text)[0] - not_ner = parse_utw[0].ent_type == 0 - target_word = { - 'sentence_number': self.n_sentence, - 'sentence_text': self.original, - 'original_text': _utw, - 'lemma': user_target_word_lemma, - 'pos': user_target_word_pos, - 'gender': convert_gender(user_target_word_tags.get('Gender')), - 'tags': user_target_word_tags, - 'position_in_sentence': self.original.find(_utw), - 'not_named_entity': not_ner, - 'frequency_in_text': frequency_dict.get(user_target_word_lemma, 1) - } - if not (model.has_index_for(user_target_word_lemma) - or model.has_index_for(f'{user_target_word_lemma}_{user_target_word_pos[1]}')): - BAD_USER_TARGET_WORDS.append(_utw) - else: - self.target_words.append(target_word) - - def search_target_words(self, model, target_words_automatic_mode: bool, target_minimum, - user_target_words: set = None, - frequency_dict: dict = None): - if target_words_automatic_mode: - self.search_target_words_automatically(model=model, target_minimum=target_minimum, - frequency_dict=frequency_dict) - else: - self.search_user_target_words(model=model, user_target_words=user_target_words, - frequency_dict=frequency_dict) - - def attach_distractors_to_target_word(self, model, global_distractors, distractor_minimum, level_name, - max_frequency, - progress, logs): - n_target_words = len(self.target_words) - bad_target_words = [] - for i, target_word in enumerate(self.target_words): - pos = target_word['pos'][0] if target_word['pos'][0] == 'phrase' else target_word['pos'][1] - distractors = get_distractors_from_model(model, lemma=target_word['lemma'], pos=pos, - gender=target_word['gender'], level_name=level_name, - global_distractors=global_distractors, - distractor_minimum=distractor_minimum, - max_num_distractors=self.max_num_distractors) - if distractors is None or target_word['frequency_in_text'] > max_frequency: - target_word['distractors'] = distractors - bad_target_words.append(target_word) - target_word['distractors'] = distractors - target_word['distractors_number'] = len(distractors) if distractors is not None else 0 - progress.progress(i / n_target_words) - logs.success(f'Обработали 
{i}/{n_target_words} слов в {self.n_sentence + 1}-м предложении') - for btw in bad_target_words: - BAD_USER_TARGET_WORDS.append(btw['original_text']) - self.target_words.remove(btw) - progress.progress(100) - logs.success( - f'Обработали {n_target_words}/{n_target_words} слов в {self.n_sentence + 1}-м предложении') - - def inflect_distractors(self): - bad_target_words = [] - for target_word in self.target_words: - inflected_distractors = [] - for distractor_lemma, distractor_similarity in target_word['distractors']: - if distractor_lemma.count('_') > 1: - # TODO The same. Has to train model and test this code - inflected = make_inflection(text=distractor_lemma, - pos=target_word['pos'][1], tags=target_word['tags']) - else: - inflected = make_inflection(text=distractor_lemma, - pos=target_word['pos'][1], tags=target_word['tags']) - if inflected is not None: - inflected_distractors.append(inflected) - else: - new_tags = copy.deepcopy(target_word['tags']) - if 'NOUN' in target_word['tags'] and 'inan' in target_word['tags']: - new_tags.discard('inan') - new_tags.add('anim') - elif 'NOUN' in target_word['tags'] and 'anim' in target_word['tags']: - new_tags.discard('anim') - new_tags.add('inan') - inflected = make_inflection(text=distractor_lemma, pos=target_word['pos'][1], tags=new_tags) - if inflected is not None: - inflected_distractors.append(inflected) - num_distractors = min(4, self.max_num_distractors) if self.max_num_distractors >= 4 \ - else self.max_num_distractors - if len(inflected_distractors) < num_distractors: - bad_target_words.append(target_word) - else: - target_word['inflected_distractors'] = inflected_distractors - for btw in bad_target_words: - BAD_USER_TARGET_WORDS.append(btw['original_text']) - self.target_words.remove(btw) - - def filter_target_words(self, target_words_automatic_mode): - c_position = 0 - bad_target_words = [] - for target_word in self.target_words: - position_difference = 3 if target_words_automatic_mode else 0 - if not (target_word['position_in_sentence'] == 0 - or abs(target_word['position_in_sentence'] - c_position) >= position_difference): - bad_target_words.append(target_word) - for btw in bad_target_words: - BAD_USER_TARGET_WORDS.append(btw['original_text']) - self.target_words.remove(btw) - - def sample_distractors(self, num_distractors): - for target_word in self.target_words: - len_inflected_distractors = len(target_word['inflected_distractors']) - num_distractors = min(len_inflected_distractors, num_distractors) \ - if num_distractors >= 4 else num_distractors - target_word['inflected_distractors'] = sample(target_word['inflected_distractors'][:min( - len_inflected_distractors, 10)], num_distractors) - - -class TASK: - def __init__(self, task_data): - self.task_data = task_data - - self.original_text = None - self.sentence_text = None - self.inflected_distractors = None - self.sentence_number = task_data['sentence_number'] - self.position_in_sentence = task_data['position_in_sentence'] - self.result = '' - self.variants = [] - for key, value in task_data.items(): - self.__setattr__(key, value) - - def __repr__(self): - return '\n'.join([f'{key}\t=\t{value}' for key, value in self.__dict__.items()]) - - def compile_task(self, max_num_distractors): - len_distractors = len(self.inflected_distractors) - len_variants = min(len_distractors, max_num_distractors) if max_num_distractors > 4 \ - else max_num_distractors - letters = (f'({letter})' for letter in string.ascii_lowercase[:len_variants + 1]) - try: - distractors = 
sample(self.inflected_distractors, len_variants) + [self.original_text, ] - except ValueError: - distractors = self.inflected_distractors + [self.original_text, ] - self.variants.append( - (self.original_text, [f'{item[0]} {item[1].replace("_", " ")}' - for item in zip(letters, sorted(distractors, key=lambda _: random()))])) diff --git a/spaces/a-v-bely/spanish-task-generator/utilities_cookies/src/index.ts b/spaces/a-v-bely/spanish-task-generator/utilities_cookies/src/index.ts deleted file mode 100644 index 7016b6ae68e614903d588c314a669e9258bf30c9..0000000000000000000000000000000000000000 --- a/spaces/a-v-bely/spanish-task-generator/utilities_cookies/src/index.ts +++ /dev/null @@ -1,52 +0,0 @@ -import {RenderData, Streamlit} from "streamlit-component-lib" - -const targetWindow: Window = window.parent || window -const targetDocument = targetWindow.document - -let lastValue: string | null = null - -interface AddCookieSpec { - value: string - expires_at: string - path: string -} - -interface DeleteCookieSpec { - value: null - path: string -} - -type CookieSpec = AddCookieSpec | DeleteCookieSpec - -function onRender(event: Event): void { - const data = (event as CustomEvent).detail - - saveCookies(data.args["queue"]) - - const newValue = targetDocument.cookie - if (lastValue !== newValue && !data.args.saveOnly) { - Streamlit.setComponentValue(newValue) - lastValue = newValue - } -} - -Streamlit.events.addEventListener(Streamlit.RENDER_EVENT, onRender) -Streamlit.setComponentReady() -Streamlit.setFrameHeight(0) - - -function saveCookies(queue: { [k in string]: CookieSpec }) { - Object.keys(queue).forEach((name) => { - const spec = queue[name] - if (spec.value === null) - targetDocument.cookie = `${encodeURIComponent(name)}=; max-age=0; path=${encodeURIComponent(spec.path)}` - else { - const date = new Date(spec.expires_at) - targetDocument.cookie = ( - `${encodeURIComponent(name)}=${encodeURIComponent(spec.value)};` + - ` expires=${date.toUTCString()};` + - ` path=${encodeURIComponent(spec.path)};` - ) - } - }) -} \ No newline at end of file diff --git a/spaces/aTrapDeer/Img2TimDillonRant/app.py b/spaces/aTrapDeer/Img2TimDillonRant/app.py deleted file mode 100644 index 2e36637c55a97670a9f724e4ddac51fb39beb108..0000000000000000000000000000000000000000 --- a/spaces/aTrapDeer/Img2TimDillonRant/app.py +++ /dev/null @@ -1,86 +0,0 @@ -from pydub import AudioSegment -from dotenv import find_dotenv, load_dotenv -from transformers import pipeline -from elevenlabs import generate, play, voices -from elevenlabs import set_api_key -from elevenlabslib import ElevenLabsUser -import streamlit as st -import uuid - -from langchain import LLMChain, PromptTemplate, OpenAI -import os - -# Load environment variables -load_dotenv(find_dotenv()) - -def img2text(url): - image_to_text = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large") - text = image_to_text(url)[0]["generated_text"] - print(text) - return text - -def generated_story(scenario): - template = """ - You are the famous comedian and podcaster Tim Dillon. 
You will generate a short story in the style of his podcast using a funny narrative with explicit language and an angry rant-like tone, the story should be no more than 100 words; - Context: {scenario} - Story: - """ - prompt = PromptTemplate(template=template, input_variables=["scenario"]) - story_llm = LLMChain(llm=OpenAI(model_name="gpt-3.5-turbo", temperature=1), prompt=prompt, verbose=True) - story = story_llm.predict(scenario=scenario) - print(story) - return story - - -def textToSpeech(story): - # Error Handling: Print available user-generated voices - try: - available_voices = user.get_available_voices() - print("Available Voices:", available_voices) - except Exception as e: - print("Error fetching available voices:", e) - - # Set API key for ElevenLabs - set_api_key = os.getenv("ELEVENLABS_API_KEY") - user = ElevenLabsUser(set_api_key) - voice = user.get_voice_by_ID("cgOzEASJmlEWHtXnZJ5q") - - # Generate the audio data - result = voice.generate_audio_v2(story) - - # Assuming the audio data is the first element of the tuple - audio_data = result[0] - - # Save the audio data to a file in the project folder - random_id = str(uuid.uuid4()) - name = f"story_{random_id}.mp3" - - #Save the audio data to a file in the project folder - with open(name, 'wb') as f: - f.write(audio_data) - return name - -def main(): - st.set_page_config(page_title="Tim Dillon Image To Story", page_icon="📖", layout="wide") - st.header("Tim Dillon Image To Story") - uploaded_file = st.file_uploader("Upload an image...", type="jpg") - if uploaded_file is not None: - print(uploaded_file) - bytes_data = uploaded_file.getvalue() - with open (uploaded_file.name, 'wb') as f: - f.write(bytes_data) - st.image(bytes_data, caption='Uploaded Image.', use_column_width=True) - scenario = img2text(uploaded_file.name) - story = generated_story(scenario) - generated_file_name = textToSpeech(story) - - with st.expander("scenario"): - st.write(scenario) - with st.expander("story"): - st.write(story) - - st.audio(generated_file_name) - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/aadnk/whisper-webui/src/diarization/transcriptLoader.py b/spaces/aadnk/whisper-webui/src/diarization/transcriptLoader.py deleted file mode 100644 index 8b17bb6e9b65f712a689dc8b96f8411a6088cd63..0000000000000000000000000000000000000000 --- a/spaces/aadnk/whisper-webui/src/diarization/transcriptLoader.py +++ /dev/null @@ -1,80 +0,0 @@ - -import json -from pathlib import Path - -def load_transcript_json(transcript_file: str): - """ - Parse a Whisper JSON file into a Whisper JSON object - - # Parameters: - transcript_file (str): Path to the Whisper JSON file - """ - with open(transcript_file, "r", encoding="utf-8") as f: - whisper_result = json.load(f) - - # Format of Whisper JSON file: - # { - # "text": " And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.", - # "segments": [ - # { - # "text": " And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.", - # "start": 0.0, - # "end": 10.36, - # "words": [ - # { - # "start": 0.0, - # "end": 0.56, - # "word": " And", - # "probability": 0.61767578125 - # }, - # { - # "start": 0.56, - # "end": 0.88, - # "word": " so", - # "probability": 0.9033203125 - # }, - # etc. 
- - return whisper_result - - -def load_transcript_srt(subtitle_file: str): - import srt - - """ - Parse a SRT file into a Whisper JSON object - - # Parameters: - subtitle_file (str): Path to the SRT file - """ - with open(subtitle_file, "r", encoding="utf-8") as f: - subs = srt.parse(f) - - whisper_result = { - "text": "", - "segments": [] - } - - for sub in subs: - # Subtitle(index=1, start=datetime.timedelta(seconds=33, microseconds=843000), end=datetime.timedelta(seconds=38, microseconds=97000), content='地球上只有3%的水是淡水', proprietary='') - segment = { - "text": sub.content, - "start": sub.start.total_seconds(), - "end": sub.end.total_seconds(), - "words": [] - } - whisper_result["segments"].append(segment) - whisper_result["text"] += sub.content - - return whisper_result - -def load_transcript(file: str): - # Determine file type - file_extension = Path(file).suffix.lower() - - if file_extension == ".json": - return load_transcript_json(file) - elif file_extension == ".srt": - return load_transcript_srt(file) - else: - raise ValueError(f"Unsupported file type: {file_extension}") \ No newline at end of file diff --git a/spaces/aadnk/whisper-webui/src/vadParallel.py b/spaces/aadnk/whisper-webui/src/vadParallel.py deleted file mode 100644 index c2323c0b632c34014ac1fe7ac79141b5bd9c5731..0000000000000000000000000000000000000000 --- a/spaces/aadnk/whisper-webui/src/vadParallel.py +++ /dev/null @@ -1,298 +0,0 @@ -import multiprocessing -from queue import Empty -import threading -import time -from src.hooks.progressListener import ProgressListener -from src.vad import AbstractTranscription, TranscriptionConfig, get_audio_duration - -from multiprocessing import Pool, Queue - -from typing import Any, Dict, List, Union -import os - -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback - -class _ProgressListenerToQueue(ProgressListener): - def __init__(self, progress_queue: Queue): - self.progress_queue = progress_queue - self.progress_total = 0 - self.prev_progress = 0 - - def on_progress(self, current: Union[int, float], total: Union[int, float]): - delta = current - self.prev_progress - self.prev_progress = current - self.progress_total = total - self.progress_queue.put(delta) - - def on_finished(self): - if self.progress_total > self.prev_progress: - delta = self.progress_total - self.prev_progress - self.progress_queue.put(delta) - self.prev_progress = self.progress_total - -class ParallelContext: - def __init__(self, num_processes: int = None, auto_cleanup_timeout_seconds: float = None): - self.num_processes = num_processes - self.auto_cleanup_timeout_seconds = auto_cleanup_timeout_seconds - self.lock = threading.Lock() - - self.ref_count = 0 - self.pool = None - self.cleanup_timer = None - - def get_pool(self): - # Initialize pool lazily - if (self.pool is None): - context = multiprocessing.get_context('spawn') - self.pool = context.Pool(self.num_processes) - - self.ref_count = self.ref_count + 1 - - if (self.auto_cleanup_timeout_seconds is not None): - self._stop_auto_cleanup() - - return self.pool - - def return_pool(self, pool): - if (self.pool == pool and self.ref_count > 0): - self.ref_count = self.ref_count - 1 - - if (self.ref_count == 0): - if (self.auto_cleanup_timeout_seconds is not None): - self._start_auto_cleanup() - - def _start_auto_cleanup(self): - if (self.cleanup_timer is not None): - self.cleanup_timer.cancel() - self.cleanup_timer = threading.Timer(self.auto_cleanup_timeout_seconds, self._execute_cleanup) - self.cleanup_timer.start() - - print("Started 
auto cleanup of pool in " + str(self.auto_cleanup_timeout_seconds) + " seconds") - - def _stop_auto_cleanup(self): - if (self.cleanup_timer is not None): - self.cleanup_timer.cancel() - self.cleanup_timer = None - - print("Stopped auto cleanup of pool") - - def _execute_cleanup(self): - print("Executing cleanup of pool") - - if (self.ref_count == 0): - self.close() - - def close(self): - self._stop_auto_cleanup() - - if (self.pool is not None): - print("Closing pool of " + str(self.num_processes) + " processes") - self.pool.close() - self.pool.join() - self.pool = None - -class ParallelTranscriptionConfig(TranscriptionConfig): - def __init__(self, device_id: str, override_timestamps, initial_segment_index, copy: TranscriptionConfig = None): - super().__init__(copy.non_speech_strategy, copy.segment_padding_left, copy.segment_padding_right, copy.max_silent_period, copy.max_merge_size, copy.max_prompt_window, initial_segment_index) - self.device_id = device_id - self.override_timestamps = override_timestamps - -class ParallelTranscription(AbstractTranscription): - # Silero VAD typically takes about 3 seconds per minute, so there's no need to split the chunks - # into smaller segments than 2 minute (min 6 seconds per CPU core) - MIN_CPU_CHUNK_SIZE_SECONDS = 2 * 60 - - def __init__(self, sampling_rate: int = 16000): - super().__init__(sampling_rate=sampling_rate) - - def transcribe_parallel(self, transcription: AbstractTranscription, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig, - cpu_device_count: int, gpu_devices: List[str], cpu_parallel_context: ParallelContext = None, gpu_parallel_context: ParallelContext = None, - progress_listener: ProgressListener = None): - total_duration = get_audio_duration(audio) - - # First, get the timestamps for the original audio - if (cpu_device_count > 1 and not transcription.is_transcribe_timestamps_fast()): - merged = self._get_merged_timestamps_parallel(transcription, audio, config, total_duration, cpu_device_count, cpu_parallel_context) - else: - timestamp_segments = transcription.get_transcribe_timestamps(audio, config, 0, total_duration) - merged = transcription.get_merged_timestamps(timestamp_segments, config, total_duration) - - # We must make sure the whisper model is downloaded - if (len(gpu_devices) > 1): - whisperCallable.model_container.ensure_downloaded() - - # Split into a list for each device - # TODO: Split by time instead of by number of chunks - merged_split = list(self._split(merged, len(gpu_devices))) - - # Parameters that will be passed to the transcribe function - parameters = [] - segment_index = config.initial_segment_index - - processing_manager = multiprocessing.Manager() - progress_queue = processing_manager.Queue() - - for i in range(len(gpu_devices)): - # Note that device_segment_list can be empty. But we will still create a process for it, - # as otherwise we run the risk of assigning the same device to multiple processes. 
- device_segment_list = list(merged_split[i]) if i < len(merged_split) else [] - device_id = gpu_devices[i] - - print("Device " + str(device_id) + " (index " + str(i) + ") has " + str(len(device_segment_list)) + " segments") - - # Create a new config with the given device ID - device_config = ParallelTranscriptionConfig(device_id, device_segment_list, segment_index, config) - segment_index += len(device_segment_list) - - progress_listener_to_queue = _ProgressListenerToQueue(progress_queue) - parameters.append([audio, whisperCallable, device_config, progress_listener_to_queue]); - - merged = { - 'text': '', - 'segments': [], - 'language': None - } - - created_context = False - - perf_start_gpu = time.perf_counter() - - # Spawn a separate process for each device - try: - if (gpu_parallel_context is None): - gpu_parallel_context = ParallelContext(len(gpu_devices)) - created_context = True - - # Get a pool of processes - pool = gpu_parallel_context.get_pool() - - # Run the transcription in parallel - results_async = pool.starmap_async(self.transcribe, parameters) - total_progress = 0 - - while not results_async.ready(): - try: - delta = progress_queue.get(timeout=5) # Set a timeout of 5 seconds - except Empty: - continue - - total_progress += delta - if progress_listener is not None: - progress_listener.on_progress(total_progress, total_duration) - - results = results_async.get() - - # Call the finished callback - if progress_listener is not None: - progress_listener.on_finished() - - for result in results: - # Merge the results - if (result['text'] is not None): - merged['text'] += result['text'] - if (result['segments'] is not None): - merged['segments'].extend(result['segments']) - if (result['language'] is not None): - merged['language'] = result['language'] - - finally: - # Return the pool to the context - if (gpu_parallel_context is not None): - gpu_parallel_context.return_pool(pool) - # Always close the context if we created it - if (created_context): - gpu_parallel_context.close() - - perf_end_gpu = time.perf_counter() - print("Parallel transcription took " + str(perf_end_gpu - perf_start_gpu) + " seconds") - - return merged - - def _get_merged_timestamps_parallel(self, transcription: AbstractTranscription, audio: str, config: TranscriptionConfig, total_duration: float, - cpu_device_count: int, cpu_parallel_context: ParallelContext = None): - parameters = [] - - chunk_size = max(total_duration / cpu_device_count, self.MIN_CPU_CHUNK_SIZE_SECONDS) - chunk_start = 0 - cpu_device_id = 0 - - perf_start_time = time.perf_counter() - - # Create chunks that will be processed on the CPU - while (chunk_start < total_duration): - chunk_end = min(chunk_start + chunk_size, total_duration) - - if (chunk_end - chunk_start < 1): - # No need to process chunks that are less than 1 second - break - - print("Parallel VAD: Executing chunk from " + str(chunk_start) + " to " + - str(chunk_end) + " on CPU device " + str(cpu_device_id)) - parameters.append([audio, config, chunk_start, chunk_end]); - - cpu_device_id += 1 - chunk_start = chunk_end - - created_context = False - - # Spawn a separate process for each device - try: - if (cpu_parallel_context is None): - cpu_parallel_context = ParallelContext(cpu_device_count) - created_context = True - - # Get a pool of processes - pool = cpu_parallel_context.get_pool() - - # Run the transcription in parallel. Note that transcription must be picklable. 
- results = pool.starmap(transcription.get_transcribe_timestamps, parameters) - - timestamps = [] - - # Flatten the results - for result in results: - timestamps.extend(result) - - merged = transcription.get_merged_timestamps(timestamps, config, total_duration) - - perf_end_time = time.perf_counter() - print("Parallel VAD processing took {} seconds".format(perf_end_time - perf_start_time)) - return merged - - finally: - # Return the pool to the context - if (cpu_parallel_context is not None): - cpu_parallel_context.return_pool(pool) - # Always close the context if we created it - if (created_context): - cpu_parallel_context.close() - - def get_transcribe_timestamps(self, audio: str, config: ParallelTranscriptionConfig, start_time: float, duration: float): - return [] - - def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: ParallelTranscriptionConfig, total_duration: float): - # Override timestamps that will be processed - if (config.override_timestamps is not None): - print("(get_merged_timestamps) Using override timestamps of size " + str(len(config.override_timestamps))) - return config.override_timestamps - return super().get_merged_timestamps(timestamps, config, total_duration) - - def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: ParallelTranscriptionConfig, - progressListener: ProgressListener = None): - # Override device ID the first time - if (os.environ.get("INITIALIZED", None) is None): - os.environ["INITIALIZED"] = "1" - - # Note that this may be None if the user didn't specify a device. In that case, Whisper will - # just use the default GPU device. - if (config.device_id is not None): - print("Using device " + config.device_id) - os.environ["CUDA_VISIBLE_DEVICES"] = config.device_id - - return super().transcribe(audio, whisperCallable, config, progressListener) - - def _split(self, a, n): - """Split a list into n approximately equal parts.""" - k, m = divmod(len(a), n) - return (a[i*k+min(i, m):(i+1)*k+min(i+1, m)] for i in range(n)) - diff --git a/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/references/generate-access-token.md b/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/references/generate-access-token.md deleted file mode 100644 index e75cf63006f9f41053c2dcbf0a6092946ee74f10..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/references/generate-access-token.md +++ /dev/null @@ -1,28 +0,0 @@ -# Generate Access Token -With CURL, you need to provide tokens. To generate token, run the following comand. - -```shell -curl --location --request POST 'http://localhost:8080/api/graphql' \ ---header 'X-DataHub-Actor: urn:li:corpuser:datahub' \ ---header 'Content-Type: application/json' \ ---data-raw '{ "query":"mutation { createAccessToken(input: { type: PERSONAL, actorUrn: \"urn:li:corpuser:datahub\", duration: ONE_HOUR, name: \"my personal token\" } ) { accessToken metadata { id name description} } }", "variables":{}}' -``` - -Expected Response: -```json -{ - "data": { - "createAccessToken": { - "accessToken": , - "metadata": { - "id": , - "name": "my personal token", - "description": null - } - } - }, - "extensions": {} -} -``` - -You can now copy `accessToken` and pass it to header. 
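To use the token on subsequent requests, pass it in the `Authorization` header. Below is a minimal sketch, assuming the standard `Bearer` scheme DataHub expects for personal access tokens; `<access-token>` is a placeholder for the `accessToken` value returned above, and the query is just an illustrative smoke test:

```shell
curl --location --request POST 'http://localhost:8080/api/graphql' \
--header 'Authorization: Bearer <access-token>' \
--header 'Content-Type: application/json' \
--data-raw '{ "query": "{ __typename }", "variables": {} }'
```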
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/pascal_context_59.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/pascal_context_59.py deleted file mode 100644 index 37585abab89834b95cd5bdd993b994fca1db65f6..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/configs/_base_/datasets/pascal_context_59.py +++ /dev/null @@ -1,60 +0,0 @@ -# dataset settings -dataset_type = 'PascalContextDataset59' -data_root = 'data/VOCdevkit/VOC2010/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -img_scale = (520, 520) -crop_size = (480, 480) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', reduce_zero_label=True), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline)) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/necks/multilevel_neck.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/necks/multilevel_neck.py deleted file mode 100644 index 766144d8136326a1fab5906a153a0c0df69b6b60..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/necks/multilevel_neck.py +++ /dev/null @@ -1,70 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule - -from ..builder import NECKS - - -@NECKS.register_module() -class MultiLevelNeck(nn.Module): - """MultiLevelNeck. - - A neck structure connect vit backbone and decoder_heads. - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - scales (List[int]): Scale factors for each input feature map. - norm_cfg (dict): Config dict for normalization layer. Default: None. - act_cfg (dict): Config dict for activation layer in ConvModule. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - scales=[0.5, 1, 2, 4], - norm_cfg=None, - act_cfg=None): - super(MultiLevelNeck, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.scales = scales - self.num_outs = len(scales) - self.lateral_convs = nn.ModuleList() - self.convs = nn.ModuleList() - for in_channel in in_channels: - self.lateral_convs.append( - ConvModule( - in_channel, - out_channels, - kernel_size=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - for _ in range(self.num_outs): - self.convs.append( - ConvModule( - out_channels, - out_channels, - kernel_size=3, - padding=1, - stride=1, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - def forward(self, inputs): - assert len(inputs) == len(self.in_channels) - print(inputs[0].shape) - inputs = [ - lateral_conv(inputs[i]) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - # for len(inputs) not equal to self.num_outs - if len(inputs) == 1: - inputs = [inputs[0] for _ in range(self.num_outs)] - outs = [] - for i in range(self.num_outs): - x_resize = F.interpolate( - inputs[i], scale_factor=self.scales[i], mode='bilinear') - outs.append(self.convs[i](x_resize)) - return tuple(outs) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/correlation.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/correlation.py deleted file mode 100644 index 3d0b79c301b29915dfaf4d2b1846c59be73127d3..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/correlation.py +++ /dev/null @@ -1,196 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import Tensor, nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['correlation_forward', 'correlation_backward']) - - -class CorrelationFunction(Function): - - @staticmethod - def forward(ctx, - input1, - input2, - kernel_size=1, - max_displacement=1, - stride=1, - padding=1, - dilation=1, - dilation_patch=1): - - ctx.save_for_backward(input1, input2) - - kH, kW = ctx.kernel_size = _pair(kernel_size) - patch_size = max_displacement * 2 + 1 - ctx.patch_size = patch_size - dH, dW = ctx.stride = _pair(stride) - padH, padW = ctx.padding = _pair(padding) - dilationH, dilationW = ctx.dilation = _pair(dilation) - dilation_patchH, dilation_patchW = ctx.dilation_patch = _pair( - dilation_patch) - - output_size = CorrelationFunction._output_size(ctx, input1) - - output = input1.new_zeros(output_size) - - ext_module.correlation_forward( - input1, - input2, - output, - kH=kH, - kW=kW, - patchH=patch_size, - patchW=patch_size, - padH=padH, - padW=padW, - dilationH=dilationH, - dilationW=dilationW, - dilation_patchH=dilation_patchH, - dilation_patchW=dilation_patchW, - dH=dH, - dW=dW) - - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input1, input2 = ctx.saved_tensors - - kH, kW = ctx.kernel_size - patch_size = ctx.patch_size - padH, padW = ctx.padding - dilationH, dilationW = ctx.dilation - dilation_patchH, dilation_patchW = ctx.dilation_patch - dH, dW = ctx.stride - grad_input1 = torch.zeros_like(input1) - grad_input2 = torch.zeros_like(input2) - - ext_module.correlation_backward( - grad_output, - input1, - input2, - grad_input1, - grad_input2, - kH=kH, - kW=kW, - patchH=patch_size, - 
patchW=patch_size, - padH=padH, - padW=padW, - dilationH=dilationH, - dilationW=dilationW, - dilation_patchH=dilation_patchH, - dilation_patchW=dilation_patchW, - dH=dH, - dW=dW) - return grad_input1, grad_input2, None, None, None, None, None, None - - @staticmethod - def _output_size(ctx, input1): - iH, iW = input1.size(2), input1.size(3) - batch_size = input1.size(0) - kH, kW = ctx.kernel_size - patch_size = ctx.patch_size - dH, dW = ctx.stride - padH, padW = ctx.padding - dilationH, dilationW = ctx.dilation - dilatedKH = (kH - 1) * dilationH + 1 - dilatedKW = (kW - 1) * dilationW + 1 - - oH = int((iH + 2 * padH - dilatedKH) / dH + 1) - oW = int((iW + 2 * padW - dilatedKW) / dW + 1) - - output_size = (batch_size, patch_size, patch_size, oH, oW) - return output_size - - -class Correlation(nn.Module): - r"""Correlation operator - - This correlation operator works for optical flow correlation computation. - - There are two batched tensors with shape :math:`(N, C, H, W)`, - and the correlation output's shape is :math:`(N, max\_displacement \times - 2 + 1, max\_displacement * 2 + 1, H_{out}, W_{out})` - - where - - .. math:: - H_{out} = \left\lfloor\frac{H_{in} + 2 \times padding - - dilation \times (kernel\_size - 1) - 1} - {stride} + 1\right\rfloor - - .. math:: - W_{out} = \left\lfloor\frac{W_{in} + 2 \times padding - dilation - \times (kernel\_size - 1) - 1} - {stride} + 1\right\rfloor - - the correlation item :math:`(N_i, dy, dx)` is formed by taking the sliding - window convolution between input1 and shifted input2, - - .. math:: - Corr(N_i, dx, dy) = - \sum_{c=0}^{C-1} - input1(N_i, c) \star - \mathcal{S}(input2(N_i, c), dy, dx) - - where :math:`\star` is the valid 2d sliding window convolution operator, - and :math:`\mathcal{S}` means shifting the input features (auto-complete - zero marginal), and :math:`dx, dy` are shifting distance, :math:`dx, dy \in - [-max\_displacement \times dilation\_patch, max\_displacement \times - dilation\_patch]`. - - Args: - kernel_size (int): The size of sliding window i.e. local neighborhood - representing the center points and involved in correlation - computation. Defaults to 1. - max_displacement (int): The radius for computing correlation volume, - but the actual working space can be dilated by dilation_patch. - Defaults to 1. - stride (int): The stride of the sliding blocks in the input spatial - dimensions. Defaults to 1. - padding (int): Zero padding added to all four sides of the input1. - Defaults to 0. - dilation (int): The spacing of local neighborhood that will involved - in correlation. Defaults to 1. - dilation_patch (int): The spacing between position need to compute - correlation. Defaults to 1. 
- """ - - def __init__(self, - kernel_size: int = 1, - max_displacement: int = 1, - stride: int = 1, - padding: int = 0, - dilation: int = 1, - dilation_patch: int = 1) -> None: - super().__init__() - self.kernel_size = kernel_size - self.max_displacement = max_displacement - self.stride = stride - self.padding = padding - self.dilation = dilation - self.dilation_patch = dilation_patch - - def forward(self, input1: Tensor, input2: Tensor) -> Tensor: - return CorrelationFunction.apply(input1, input2, self.kernel_size, - self.max_displacement, self.stride, - self.padding, self.dilation, - self.dilation_patch) - - def __repr__(self) -> str: - s = self.__class__.__name__ - s += f'(kernel_size={self.kernel_size}, ' - s += f'max_displacement={self.max_displacement}, ' - s += f'stride={self.stride}, ' - s += f'padding={self.padding}, ' - s += f'dilation={self.dilation}, ' - s += f'dilation_patch={self.dilation_patch})' - return s diff --git a/spaces/abidlabs/Echocardiogram-Segmentation/README.md b/spaces/abidlabs/Echocardiogram-Segmentation/README.md deleted file mode 100644 index ac03bf13d360431449ceef3f90f41902c3fd5e2a..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/Echocardiogram-Segmentation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Echocardiogram Segmentation -emoji: 📊 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -This is a demo based on a very simplified approach described in the paper, ["High-Throughput Precision Phenotyping of Left Ventricular Hypertrophy with Cardiovascular Deep Learning"](https://arxiv.org/abs/2306.07954) \ No newline at end of file diff --git a/spaces/ajayhk/colorize/start.py b/spaces/ajayhk/colorize/start.py deleted file mode 100644 index f5b9217e0ef966bb596f57a8ec32839f7cd3eafe..0000000000000000000000000000000000000000 --- a/spaces/ajayhk/colorize/start.py +++ /dev/null @@ -1,3 +0,0 @@ -import subprocess - -subprocess.run("uvicorn modules.app:app --host 0.0.0.0 --port 7860", shell=True) diff --git a/spaces/akhaliq/Mask2Former/mask2former_video/data_video/dataset_mapper.py b/spaces/akhaliq/Mask2Former/mask2former_video/data_video/dataset_mapper.py deleted file mode 100644 index b89d9437e8b71aa716d0c1118511c75ccde0dca2..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former_video/data_video/dataset_mapper.py +++ /dev/null @@ -1,383 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/sukjunhwang/IFC - -import copy -import logging -import random -import numpy as np -from typing import List, Union -import torch - -from detectron2.config import configurable -from detectron2.structures import ( - BitMasks, - Boxes, - BoxMode, - Instances, -) - -from detectron2.data import detection_utils as utils -from detectron2.data import transforms as T - -from .augmentation import build_augmentation - -__all__ = ["YTVISDatasetMapper", "CocoClipDatasetMapper"] - - -def filter_empty_instances(instances, by_box=True, by_mask=True, box_threshold=1e-5): - """ - Filter out empty instances in an `Instances` object. - - Args: - instances (Instances): - by_box (bool): whether to filter out instances with empty boxes - by_mask (bool): whether to filter out instances with empty masks - box_threshold (float): minimum width and height to be considered non-empty - - Returns: - Instances: the filtered instances. 
- """ - assert by_box or by_mask - r = [] - if by_box: - r.append(instances.gt_boxes.nonempty(threshold=box_threshold)) - if instances.has("gt_masks") and by_mask: - r.append(instances.gt_masks.nonempty()) - - if not r: - return instances - m = r[0] - for x in r[1:]: - m = m & x - - instances.gt_ids[~m] = -1 - return instances - - -def _get_dummy_anno(num_classes): - return { - "iscrowd": 0, - "category_id": num_classes, - "id": -1, - "bbox": np.array([0, 0, 0, 0]), - "bbox_mode": BoxMode.XYXY_ABS, - "segmentation": [np.array([0.0] * 6)] - } - - -def ytvis_annotations_to_instances(annos, image_size): - """ - Create an :class:`Instances` object used by the models, - from instance annotations in the dataset dict. - - Args: - annos (list[dict]): a list of instance annotations in one image, each - element for one instance. - image_size (tuple): height, width - - Returns: - Instances: - It will contain fields "gt_boxes", "gt_classes", "gt_ids", - "gt_masks", if they can be obtained from `annos`. - This is the format that builtin models expect. - """ - boxes = [BoxMode.convert(obj["bbox"], obj["bbox_mode"], BoxMode.XYXY_ABS) for obj in annos] - target = Instances(image_size) - target.gt_boxes = Boxes(boxes) - - classes = [int(obj["category_id"]) for obj in annos] - classes = torch.tensor(classes, dtype=torch.int64) - target.gt_classes = classes - - ids = [int(obj["id"]) for obj in annos] - ids = torch.tensor(ids, dtype=torch.int64) - target.gt_ids = ids - - if len(annos) and "segmentation" in annos[0]: - segms = [obj["segmentation"] for obj in annos] - masks = [] - for segm in segms: - assert segm.ndim == 2, "Expect segmentation of 2 dimensions, got {}.".format( - segm.ndim - ) - # mask array - masks.append(segm) - # torch.from_numpy does not support array with negative stride. - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x)) for x in masks]) - ) - target.gt_masks = masks - - return target - - -class YTVISDatasetMapper: - """ - A callable which takes a dataset dict in YouTube-VIS Dataset format, - and map it into a format used by the model. - """ - - @configurable - def __init__( - self, - is_train: bool, - *, - augmentations: List[Union[T.Augmentation, T.Transform]], - image_format: str, - use_instance_mask: bool = False, - sampling_frame_num: int = 2, - sampling_frame_range: int = 5, - sampling_frame_shuffle: bool = False, - num_classes: int = 40, - ): - """ - NOTE: this interface is experimental. - Args: - is_train: whether it's used in training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. 
- use_instance_mask: whether to process instance segmentation annotations, if available - """ - # fmt: off - self.is_train = is_train - self.augmentations = T.AugmentationList(augmentations) - self.image_format = image_format - self.use_instance_mask = use_instance_mask - self.sampling_frame_num = sampling_frame_num - self.sampling_frame_range = sampling_frame_range - self.sampling_frame_shuffle = sampling_frame_shuffle - self.num_classes = num_classes - # fmt: on - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}") - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - augs = build_augmentation(cfg, is_train) - - sampling_frame_num = cfg.INPUT.SAMPLING_FRAME_NUM - sampling_frame_range = cfg.INPUT.SAMPLING_FRAME_RANGE - sampling_frame_shuffle = cfg.INPUT.SAMPLING_FRAME_SHUFFLE - - ret = { - "is_train": is_train, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "use_instance_mask": cfg.MODEL.MASK_ON, - "sampling_frame_num": sampling_frame_num, - "sampling_frame_range": sampling_frame_range, - "sampling_frame_shuffle": sampling_frame_shuffle, - "num_classes": cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES, - } - - return ret - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one video, in YTVIS Dataset format. - - Returns: - dict: a format that builtin models in detectron2 accept - """ - # TODO consider examining below deepcopy as it costs huge amount of computations. - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - - video_length = dataset_dict["length"] - if self.is_train: - ref_frame = random.randrange(video_length) - - start_idx = max(0, ref_frame-self.sampling_frame_range) - end_idx = min(video_length, ref_frame+self.sampling_frame_range + 1) - - selected_idx = np.random.choice( - np.array(list(range(start_idx, ref_frame)) + list(range(ref_frame+1, end_idx))), - self.sampling_frame_num - 1, - ) - selected_idx = selected_idx.tolist() + [ref_frame] - selected_idx = sorted(selected_idx) - if self.sampling_frame_shuffle: - random.shuffle(selected_idx) - else: - selected_idx = range(video_length) - - video_annos = dataset_dict.pop("annotations", None) - file_names = dataset_dict.pop("file_names", None) - - if self.is_train: - _ids = set() - for frame_idx in selected_idx: - _ids.update([anno["id"] for anno in video_annos[frame_idx]]) - ids = dict() - for i, _id in enumerate(_ids): - ids[_id] = i - - dataset_dict["image"] = [] - dataset_dict["instances"] = [] - dataset_dict["file_names"] = [] - for frame_idx in selected_idx: - dataset_dict["file_names"].append(file_names[frame_idx]) - - # Read image - image = utils.read_image(file_names[frame_idx], format=self.image_format) - utils.check_image_size(dataset_dict, image) - - aug_input = T.AugInput(image) - transforms = self.augmentations(aug_input) - image = aug_input.image - - image_shape = image.shape[:2] # h, w - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. 
- dataset_dict["image"].append(torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1)))) - - if (video_annos is None) or (not self.is_train): - continue - - # NOTE copy() is to prevent annotations getting changed from applying augmentations - _frame_annos = [] - for anno in video_annos[frame_idx]: - _anno = {} - for k, v in anno.items(): - _anno[k] = copy.deepcopy(v) - _frame_annos.append(_anno) - - # USER: Implement additional transformations if you have other types of data - annos = [ - utils.transform_instance_annotations(obj, transforms, image_shape) - for obj in _frame_annos - if obj.get("iscrowd", 0) == 0 - ] - sorted_annos = [_get_dummy_anno(self.num_classes) for _ in range(len(ids))] - - for _anno in annos: - idx = ids[_anno["id"]] - sorted_annos[idx] = _anno - _gt_ids = [_anno["id"] for _anno in sorted_annos] - - instances = utils.annotations_to_instances(sorted_annos, image_shape, mask_format="bitmask") - instances.gt_ids = torch.tensor(_gt_ids) - if instances.has("gt_masks"): - instances.gt_boxes = instances.gt_masks.get_bounding_boxes() - instances = filter_empty_instances(instances) - else: - instances.gt_masks = BitMasks(torch.empty((0, *image_shape))) - dataset_dict["instances"].append(instances) - - return dataset_dict - - -class CocoClipDatasetMapper: - """ - A callable which takes a COCO image which converts into multiple frames, - and map it into a format used by the model. - """ - - @configurable - def __init__( - self, - is_train: bool, - *, - augmentations: List[Union[T.Augmentation, T.Transform]], - image_format: str, - use_instance_mask: bool = False, - sampling_frame_num: int = 2, - ): - """ - NOTE: this interface is experimental. - Args: - is_train: whether it's used in training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. - use_instance_mask: whether to process instance segmentation annotations, if available - """ - # fmt: off - self.is_train = is_train - self.augmentations = T.AugmentationList(augmentations) - self.image_format = image_format - self.use_instance_mask = use_instance_mask - self.sampling_frame_num = sampling_frame_num - # fmt: on - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}") - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - augs = build_augmentation(cfg, is_train) - - sampling_frame_num = cfg.INPUT.SAMPLING_FRAME_NUM - - ret = { - "is_train": is_train, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "use_instance_mask": cfg.MODEL.MASK_ON, - "sampling_frame_num": sampling_frame_num, - } - - return ret - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. 
- - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - - img_annos = dataset_dict.pop("annotations", None) - file_name = dataset_dict.pop("file_name", None) - original_image = utils.read_image(file_name, format=self.image_format) - - dataset_dict["image"] = [] - dataset_dict["instances"] = [] - dataset_dict["file_names"] = [file_name] * self.sampling_frame_num - for _ in range(self.sampling_frame_num): - utils.check_image_size(dataset_dict, original_image) - - aug_input = T.AugInput(original_image) - transforms = self.augmentations(aug_input) - image = aug_input.image - - image_shape = image.shape[:2] # h, w - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"].append(torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1)))) - - if (img_annos is None) or (not self.is_train): - continue - - _img_annos = [] - for anno in img_annos: - _anno = {} - for k, v in anno.items(): - _anno[k] = copy.deepcopy(v) - _img_annos.append(_anno) - - # USER: Implement additional transformations if you have other types of data - annos = [ - utils.transform_instance_annotations(obj, transforms, image_shape) - for obj in _img_annos - if obj.get("iscrowd", 0) == 0 - ] - _gt_ids = list(range(len(annos))) - for idx in range(len(annos)): - if len(annos[idx]["segmentation"]) == 0: - annos[idx]["segmentation"] = [np.array([0.0] * 6)] - - instances = utils.annotations_to_instances(annos, image_shape, mask_format="bitmask") - instances.gt_ids = torch.tensor(_gt_ids) - if instances.has("gt_masks"): - instances.gt_boxes = instances.gt_masks.get_bounding_boxes() - instances = filter_empty_instances(instances) - else: - instances.gt_masks = BitMasks(torch.empty((0, *image_shape))) - dataset_dict["instances"].append(instances) - - return dataset_dict diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer/utils/text.py b/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer/utils/text.py deleted file mode 100644 index 29372174aec95cd2eac1ea40096fcc148f532b07..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer/utils/text.py +++ /dev/null @@ -1,74 +0,0 @@ -from .symbols import symbols -from . import cleaners -import re - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - -# Regular expression matching text enclosed in curly braces: -_curly_re = re.compile(r"(.*?)\{(.+?)\}(.*)") - - -def text_to_sequence(text, cleaner_names): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - - The text can optionally have ARPAbet sequences enclosed in curly braces embedded - in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street." 
- - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - - Returns: - List of integers corresponding to the symbols in the text - """ - sequence = [] - - # Check for curly braces and treat their contents as ARPAbet: - while len(text): - m = _curly_re.match(text) - if not m: - sequence += _symbols_to_sequence(_clean_text(text, cleaner_names)) - break - sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names)) - sequence += _arpabet_to_sequence(m.group(2)) - text = m.group(3) - - # Append EOS token - sequence.append(_symbol_to_id["~"]) - return sequence - - -def sequence_to_text(sequence): - """Converts a sequence of IDs back to a string""" - result = "" - for symbol_id in sequence: - if symbol_id in _id_to_symbol: - s = _id_to_symbol[symbol_id] - # Enclose ARPAbet back in curly braces: - if len(s) > 1 and s[0] == "@": - s = "{%s}" % s[1:] - result += s - return result.replace("}{", " ") - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception("Unknown cleaner: %s" % name) - text = cleaner(text) - return text - - -def _symbols_to_sequence(symbols): - return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)] - - -def _arpabet_to_sequence(text): - return _symbols_to_sequence(["@" + s for s in text.split()]) - - -def _should_keep_symbol(s): - return s in _symbol_to_id and s not in ("_", "~") diff --git a/spaces/akhaliq/VQMIVC/predict.py b/spaces/akhaliq/VQMIVC/predict.py deleted file mode 100644 index 4cd11ed038c091eae5aee050eaf7e925ed31c9c0..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/predict.py +++ /dev/null @@ -1,147 +0,0 @@ -import argparse -import json -import os -import subprocess -import tempfile -import zipfile -from pathlib import Path - -import cog -import kaldiio -import numpy as np -import pyworld as pw -import resampy -import soundfile as sf -import torch - -from model_decoder import Decoder_ac -from model_encoder import Encoder, Encoder_lf0 -from model_encoder import SpeakerEncoder as Encoder_spk -from spectrogram import logmelspectrogram - - -def extract_logmel(wav_path, mean, std, sr=16000): - # wav, fs = librosa.load(wav_path, sr=sr) - wav, fs = sf.read(wav_path) - if fs != sr: - wav = resampy.resample(wav, fs, sr, axis=0) - fs = sr - # wav, _ = librosa.effects.trim(wav, top_db=15) - # duration = len(wav)/fs - assert fs == 16000 - peak = np.abs(wav).max() - if peak > 1.0: - wav /= peak - mel = logmelspectrogram( - x=wav, - fs=fs, - n_mels=80, - n_fft=400, - n_shift=160, - win_length=400, - window="hann", - fmin=80, - fmax=7600, - ) - mel = (mel - mean) / (std + 1e-8) - tlen = mel.shape[0] - frame_period = 160 / fs * 1000 - f0, timeaxis = pw.dio(wav.astype("float64"), fs, frame_period=frame_period) - f0 = pw.stonemask(wav.astype("float64"), f0, timeaxis, fs) - f0 = f0[:tlen].reshape(-1).astype("float32") - nonzeros_indices = np.nonzero(f0) - lf0 = f0.copy() - lf0[nonzeros_indices] = np.log( - f0[nonzeros_indices] - ) # for f0(Hz), lf0 > 0 when f0 != 0 - mean, std = np.mean(lf0[nonzeros_indices]), np.std(lf0[nonzeros_indices]) - lf0[nonzeros_indices] = (lf0[nonzeros_indices] - mean) / (std + 1e-8) - return mel, lf0 - - -class Predictor(cog.Predictor): - def setup(self): - """Load models""" - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - checkpoint_path = "VQMIVC-pretrained 
models/checkpoints/useCSMITrue_useCPMITrue_usePSMITrue_useAmpTrue/VQMIVC-model.ckpt-500.pt" - mel_stats = np.load("./mel_stats/stats.npy") - - encoder = Encoder( - in_channels=80, channels=512, n_embeddings=512, z_dim=64, c_dim=256 - ) - encoder_lf0 = Encoder_lf0() - encoder_spk = Encoder_spk() - decoder = Decoder_ac(dim_neck=64) - encoder.to(device) - encoder_lf0.to(device) - encoder_spk.to(device) - decoder.to(device) - - checkpoint = torch.load( - checkpoint_path, map_location=lambda storage, loc: storage - ) - encoder.load_state_dict(checkpoint["encoder"]) - encoder_spk.load_state_dict(checkpoint["encoder_spk"]) - decoder.load_state_dict(checkpoint["decoder"]) - - encoder.eval() - encoder_spk.eval() - decoder.eval() - - self.mean = mel_stats[0] - self.std = mel_stats[1] - self.encoder = encoder - self.encoder_spk = encoder_spk - self.encoder_lf0 = encoder_lf0 - self.decoder = decoder - self.device = device - - @cog.input("input_source", type=Path, help="Source voice wav path") - @cog.input("input_reference", type=Path, help="Reference voice wav path") - def predict(self, input_source, input_reference): - """Compute prediction""" - # inference - out_dir = Path(tempfile.mkdtemp()) - out_path = out_dir / Path( - os.path.basename(str(input_source)).split(".")[0] + "_converted_gen.wav" - ) - src_wav_path = input_source - ref_wav_path = input_reference - feat_writer = kaldiio.WriteHelper( - "ark,scp:{o}.ark,{o}.scp".format(o=str(out_dir) + "/feats.1") - ) - src_mel, src_lf0 = extract_logmel(src_wav_path, self.mean, self.std) - ref_mel, _ = extract_logmel(ref_wav_path, self.mean, self.std) - - src_mel = torch.FloatTensor(src_mel.T).unsqueeze(0).to(self.device) - src_lf0 = torch.FloatTensor(src_lf0).unsqueeze(0).to(self.device) - ref_mel = torch.FloatTensor(ref_mel.T).unsqueeze(0).to(self.device) - out_filename = os.path.basename(src_wav_path).split(".")[0] - - with torch.no_grad(): - z, _, _, _ = self.encoder.encode(src_mel) - lf0_embs = self.encoder_lf0(src_lf0) - spk_emb = self.encoder_spk(ref_mel) - output = self.decoder(z, lf0_embs, spk_emb) - - feat_writer[out_filename + "_converted"] = output.squeeze(0).cpu().numpy() - feat_writer[out_filename + "_source"] = src_mel.squeeze(0).cpu().numpy().T - feat_writer[out_filename + "_reference"] = ( - ref_mel.squeeze(0).cpu().numpy().T - ) - - feat_writer.close() - - print("synthesize waveform...") - cmd = [ - "parallel-wavegan-decode", - "--checkpoint", - "./vocoder/checkpoint-3000000steps.pkl", - "--feats-scp", - f"{str(out_dir)}/feats.1.scp", - "--outdir", - str(out_dir), - ] - subprocess.call(cmd) - - return out_path diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/database.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/database.py deleted file mode 100644 index f486994416ec0eaac3778ab30066f26b50348149..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/distlib/database.py +++ /dev/null @@ -1,1345 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012-2017 The Python Software Foundation. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -"""PEP 376 implementation.""" - -from __future__ import unicode_literals - -import base64 -import codecs -import contextlib -import hashlib -import logging -import os -import posixpath -import sys -import zipimport - -from . 
import DistlibException, resources -from .compat import StringIO -from .version import get_scheme, UnsupportedVersionError -from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, - LEGACY_METADATA_FILENAME) -from .util import (parse_requirement, cached_property, parse_name_and_version, - read_exports, write_exports, CSVReader, CSVWriter) - - -__all__ = ['Distribution', 'BaseInstalledDistribution', - 'InstalledDistribution', 'EggInfoDistribution', - 'DistributionPath'] - - -logger = logging.getLogger(__name__) - -EXPORTS_FILENAME = 'pydist-exports.json' -COMMANDS_FILENAME = 'pydist-commands.json' - -DIST_FILES = ('INSTALLER', METADATA_FILENAME, 'RECORD', 'REQUESTED', - 'RESOURCES', EXPORTS_FILENAME, 'SHARED') - -DISTINFO_EXT = '.dist-info' - - -class _Cache(object): - """ - A simple cache mapping names and .dist-info paths to distributions - """ - def __init__(self): - """ - Initialise an instance. There is normally one for each DistributionPath. - """ - self.name = {} - self.path = {} - self.generated = False - - def clear(self): - """ - Clear the cache, setting it to its initial state. - """ - self.name.clear() - self.path.clear() - self.generated = False - - def add(self, dist): - """ - Add a distribution to the cache. - :param dist: The distribution to add. - """ - if dist.path not in self.path: - self.path[dist.path] = dist - self.name.setdefault(dist.key, []).append(dist) - - -class DistributionPath(object): - """ - Represents a set of distributions installed on a path (typically sys.path). - """ - def __init__(self, path=None, include_egg=False): - """ - Create an instance from a path, optionally including legacy (distutils/ - setuptools/distribute) distributions. - :param path: The path to use, as a list of directories. If not specified, - sys.path is used. - :param include_egg: If True, this instance will look for and return legacy - distributions as well as those based on PEP 376. - """ - if path is None: - path = sys.path - self.path = path - self._include_dist = True - self._include_egg = include_egg - - self._cache = _Cache() - self._cache_egg = _Cache() - self._cache_enabled = True - self._scheme = get_scheme('default') - - def _get_cache_enabled(self): - return self._cache_enabled - - def _set_cache_enabled(self, value): - self._cache_enabled = value - - cache_enabled = property(_get_cache_enabled, _set_cache_enabled) - - def clear_cache(self): - """ - Clears the internal cache. - """ - self._cache.clear() - self._cache_egg.clear() - - - def _yield_distributions(self): - """ - Yield .dist-info and/or .egg(-info) distributions. - """ - # We need to check if we've seen some resources already, because on - # some Linux systems (e.g. some Debian/Ubuntu variants) there are - # symlinks which alias other files in the environment. 
- seen = set() - for path in self.path: - finder = resources.finder_for_path(path) - if finder is None: - continue - r = finder.find('') - if not r or not r.is_container: - continue - rset = sorted(r.resources) - for entry in rset: - r = finder.find(entry) - if not r or r.path in seen: - continue - try: - if self._include_dist and entry.endswith(DISTINFO_EXT): - possible_filenames = [METADATA_FILENAME, - WHEEL_METADATA_FILENAME, - LEGACY_METADATA_FILENAME] - for metadata_filename in possible_filenames: - metadata_path = posixpath.join(entry, metadata_filename) - pydist = finder.find(metadata_path) - if pydist: - break - else: - continue - - with contextlib.closing(pydist.as_stream()) as stream: - metadata = Metadata(fileobj=stream, scheme='legacy') - logger.debug('Found %s', r.path) - seen.add(r.path) - yield new_dist_class(r.path, metadata=metadata, - env=self) - elif self._include_egg and entry.endswith(('.egg-info', - '.egg')): - logger.debug('Found %s', r.path) - seen.add(r.path) - yield old_dist_class(r.path, self) - except Exception as e: - msg = 'Unable to read distribution at %s, perhaps due to bad metadata: %s' - logger.warning(msg, r.path, e) - import warnings - warnings.warn(msg % (r.path, e), stacklevel=2) - - def _generate_cache(self): - """ - Scan the path for distributions and populate the cache with - those that are found. - """ - gen_dist = not self._cache.generated - gen_egg = self._include_egg and not self._cache_egg.generated - if gen_dist or gen_egg: - for dist in self._yield_distributions(): - if isinstance(dist, InstalledDistribution): - self._cache.add(dist) - else: - self._cache_egg.add(dist) - - if gen_dist: - self._cache.generated = True - if gen_egg: - self._cache_egg.generated = True - - @classmethod - def distinfo_dirname(cls, name, version): - """ - The *name* and *version* parameters are converted into their - filename-escaped form, i.e. any ``'-'`` characters are replaced - with ``'_'`` other than the one in ``'dist-info'`` and the one - separating the name from the version number. - - :parameter name: is converted to a standard distribution name by replacing - any runs of non- alphanumeric characters with a single - ``'-'``. - :type name: string - :parameter version: is converted to a standard version string. Spaces - become dots, and all other non-alphanumeric characters - (except dots) become dashes, with runs of multiple - dashes condensed to a single dash. - :type version: string - :returns: directory name - :rtype: string""" - name = name.replace('-', '_') - return '-'.join([name, version]) + DISTINFO_EXT - - def get_distributions(self): - """ - Provides an iterator that looks for distributions and returns - :class:`InstalledDistribution` or - :class:`EggInfoDistribution` instances for each one of them. - - :rtype: iterator of :class:`InstalledDistribution` and - :class:`EggInfoDistribution` instances - """ - if not self._cache_enabled: - for dist in self._yield_distributions(): - yield dist - else: - self._generate_cache() - - for dist in self._cache.path.values(): - yield dist - - if self._include_egg: - for dist in self._cache_egg.path.values(): - yield dist - - def get_distribution(self, name): - """ - Looks for a named distribution on the path. - - This function only returns the first result found, as no more than one - value is expected. If nothing is found, ``None`` is returned. 
- - :rtype: :class:`InstalledDistribution`, :class:`EggInfoDistribution` - or ``None`` - """ - result = None - name = name.lower() - if not self._cache_enabled: - for dist in self._yield_distributions(): - if dist.key == name: - result = dist - break - else: - self._generate_cache() - - if name in self._cache.name: - result = self._cache.name[name][0] - elif self._include_egg and name in self._cache_egg.name: - result = self._cache_egg.name[name][0] - return result - - def provides_distribution(self, name, version=None): - """ - Iterates over all distributions to find which distributions provide *name*. - If a *version* is provided, it will be used to filter the results. - - This function only returns the first result found, since no more than - one values are expected. If the directory is not found, returns ``None``. - - :parameter version: a version specifier that indicates the version - required, conforming to the format in ``PEP-345`` - - :type name: string - :type version: string - """ - matcher = None - if version is not None: - try: - matcher = self._scheme.matcher('%s (%s)' % (name, version)) - except ValueError: - raise DistlibException('invalid name or version: %r, %r' % - (name, version)) - - for dist in self.get_distributions(): - # We hit a problem on Travis where enum34 was installed and doesn't - # have a provides attribute ... - if not hasattr(dist, 'provides'): - logger.debug('No "provides": %s', dist) - else: - provided = dist.provides - - for p in provided: - p_name, p_ver = parse_name_and_version(p) - if matcher is None: - if p_name == name: - yield dist - break - else: - if p_name == name and matcher.match(p_ver): - yield dist - break - - def get_file_path(self, name, relative_path): - """ - Return the path to a resource file. - """ - dist = self.get_distribution(name) - if dist is None: - raise LookupError('no distribution named %r found' % name) - return dist.get_resource_path(relative_path) - - def get_exported_entries(self, category, name=None): - """ - Return all of the exported entries in a particular category. - - :param category: The category to search for entries. - :param name: If specified, only entries with that name are returned. - """ - for dist in self.get_distributions(): - r = dist.exports - if category in r: - d = r[category] - if name is not None: - if name in d: - yield d[name] - else: - for v in d.values(): - yield v - - -class Distribution(object): - """ - A base class for distributions, whether installed or from indexes. - Either way, it must have some metadata, so that's all that's needed - for construction. - """ - - build_time_dependency = False - """ - Set to True if it's known to be only a build-time dependency (i.e. - not needed after installation). - """ - - requested = False - """A boolean that indicates whether the ``REQUESTED`` metadata file is - present (in other words, whether the package was installed by user - request or it was installed as a dependency).""" - - def __init__(self, metadata): - """ - Initialise an instance. - :param metadata: The instance of :class:`Metadata` describing this - distribution. 
- """ - self.metadata = metadata - self.name = metadata.name - self.key = self.name.lower() # for case-insensitive comparisons - self.version = metadata.version - self.locator = None - self.digest = None - self.extras = None # additional features requested - self.context = None # environment marker overrides - self.download_urls = set() - self.digests = {} - - @property - def source_url(self): - """ - The source archive download URL for this distribution. - """ - return self.metadata.source_url - - download_url = source_url # Backward compatibility - - @property - def name_and_version(self): - """ - A utility property which displays the name and version in parentheses. - """ - return '%s (%s)' % (self.name, self.version) - - @property - def provides(self): - """ - A set of distribution names and versions provided by this distribution. - :return: A set of "name (version)" strings. - """ - plist = self.metadata.provides - s = '%s (%s)' % (self.name, self.version) - if s not in plist: - plist.append(s) - return plist - - def _get_requirements(self, req_attr): - md = self.metadata - logger.debug('Getting requirements from metadata %r', md.todict()) - reqts = getattr(md, req_attr) - return set(md.get_requirements(reqts, extras=self.extras, - env=self.context)) - - @property - def run_requires(self): - return self._get_requirements('run_requires') - - @property - def meta_requires(self): - return self._get_requirements('meta_requires') - - @property - def build_requires(self): - return self._get_requirements('build_requires') - - @property - def test_requires(self): - return self._get_requirements('test_requires') - - @property - def dev_requires(self): - return self._get_requirements('dev_requires') - - def matches_requirement(self, req): - """ - Say if this instance matches (fulfills) a requirement. - :param req: The requirement to match. - :rtype req: str - :return: True if it matches, else False. - """ - # Requirement may contain extras - parse to lose those - # from what's passed to the matcher - r = parse_requirement(req) - scheme = get_scheme(self.metadata.scheme) - try: - matcher = scheme.matcher(r.requirement) - except UnsupportedVersionError: - # XXX compat-mode if cannot read the version - logger.warning('could not read version %r - using name only', - req) - name = req.split()[0] - matcher = scheme.matcher(name) - - name = matcher.key # case-insensitive - - result = False - for p in self.provides: - p_name, p_ver = parse_name_and_version(p) - if p_name != name: - continue - try: - result = matcher.match(p_ver) - break - except UnsupportedVersionError: - pass - return result - - def __repr__(self): - """ - Return a textual representation of this instance, - """ - if self.source_url: - suffix = ' [%s]' % self.source_url - else: - suffix = '' - return '' % (self.name, self.version, suffix) - - def __eq__(self, other): - """ - See if this distribution is the same as another. - :param other: The distribution to compare with. To be equal to one - another. distributions must have the same type, name, - version and source_url. - :return: True if it is the same, else False. - """ - if type(other) is not type(self): - result = False - else: - result = (self.name == other.name and - self.version == other.version and - self.source_url == other.source_url) - return result - - def __hash__(self): - """ - Compute hash in a way which matches the equality test. 
- """ - return hash(self.name) + hash(self.version) + hash(self.source_url) - - -class BaseInstalledDistribution(Distribution): - """ - This is the base class for installed distributions (whether PEP 376 or - legacy). - """ - - hasher = None - - def __init__(self, metadata, path, env=None): - """ - Initialise an instance. - :param metadata: An instance of :class:`Metadata` which describes the - distribution. This will normally have been initialised - from a metadata file in the ``path``. - :param path: The path of the ``.dist-info`` or ``.egg-info`` - directory for the distribution. - :param env: This is normally the :class:`DistributionPath` - instance where this distribution was found. - """ - super(BaseInstalledDistribution, self).__init__(metadata) - self.path = path - self.dist_path = env - - def get_hash(self, data, hasher=None): - """ - Get the hash of some data, using a particular hash algorithm, if - specified. - - :param data: The data to be hashed. - :type data: bytes - :param hasher: The name of a hash implementation, supported by hashlib, - or ``None``. Examples of valid values are ``'sha1'``, - ``'sha224'``, ``'sha384'``, '``sha256'``, ``'md5'`` and - ``'sha512'``. If no hasher is specified, the ``hasher`` - attribute of the :class:`InstalledDistribution` instance - is used. If the hasher is determined to be ``None``, MD5 - is used as the hashing algorithm. - :returns: The hash of the data. If a hasher was explicitly specified, - the returned hash will be prefixed with the specified hasher - followed by '='. - :rtype: str - """ - if hasher is None: - hasher = self.hasher - if hasher is None: - hasher = hashlib.md5 - prefix = '' - else: - hasher = getattr(hashlib, hasher) - prefix = '%s=' % self.hasher - digest = hasher(data).digest() - digest = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii') - return '%s%s' % (prefix, digest) - - -class InstalledDistribution(BaseInstalledDistribution): - """ - Created with the *path* of the ``.dist-info`` directory provided to the - constructor. It reads the metadata contained in ``pydist.json`` when it is - instantiated., or uses a passed in Metadata instance (useful for when - dry-run mode is being used). 
- """ - - hasher = 'sha256' - - def __init__(self, path, metadata=None, env=None): - self.modules = [] - self.finder = finder = resources.finder_for_path(path) - if finder is None: - raise ValueError('finder unavailable for %s' % path) - if env and env._cache_enabled and path in env._cache.path: - metadata = env._cache.path[path].metadata - elif metadata is None: - r = finder.find(METADATA_FILENAME) - # Temporary - for Wheel 0.23 support - if r is None: - r = finder.find(WHEEL_METADATA_FILENAME) - # Temporary - for legacy support - if r is None: - r = finder.find(LEGACY_METADATA_FILENAME) - if r is None: - raise ValueError('no %s found in %s' % (METADATA_FILENAME, - path)) - with contextlib.closing(r.as_stream()) as stream: - metadata = Metadata(fileobj=stream, scheme='legacy') - - super(InstalledDistribution, self).__init__(metadata, path, env) - - if env and env._cache_enabled: - env._cache.add(self) - - r = finder.find('REQUESTED') - self.requested = r is not None - p = os.path.join(path, 'top_level.txt') - if os.path.exists(p): - with open(p, 'rb') as f: - data = f.read().decode('utf-8') - self.modules = data.splitlines() - - def __repr__(self): - return '' % ( - self.name, self.version, self.path) - - def __str__(self): - return "%s %s" % (self.name, self.version) - - def _get_records(self): - """ - Get the list of installed files for the distribution - :return: A list of tuples of path, hash and size. Note that hash and - size might be ``None`` for some entries. The path is exactly - as stored in the file (which is as in PEP 376). - """ - results = [] - r = self.get_distinfo_resource('RECORD') - with contextlib.closing(r.as_stream()) as stream: - with CSVReader(stream=stream) as record_reader: - # Base location is parent dir of .dist-info dir - #base_location = os.path.dirname(self.path) - #base_location = os.path.abspath(base_location) - for row in record_reader: - missing = [None for i in range(len(row), 3)] - path, checksum, size = row + missing - #if not os.path.isabs(path): - # path = path.replace('/', os.sep) - # path = os.path.join(base_location, path) - results.append((path, checksum, size)) - return results - - @cached_property - def exports(self): - """ - Return the information exported by this distribution. - :return: A dictionary of exports, mapping an export category to a dict - of :class:`ExportEntry` instances describing the individual - export entries, and keyed by name. - """ - result = {} - r = self.get_distinfo_resource(EXPORTS_FILENAME) - if r: - result = self.read_exports() - return result - - def read_exports(self): - """ - Read exports data from a file in .ini format. - - :return: A dictionary of exports, mapping an export category to a list - of :class:`ExportEntry` instances describing the individual - export entries. - """ - result = {} - r = self.get_distinfo_resource(EXPORTS_FILENAME) - if r: - with contextlib.closing(r.as_stream()) as stream: - result = read_exports(stream) - return result - - def write_exports(self, exports): - """ - Write a dictionary of exports to a file in .ini format. - :param exports: A dictionary of exports, mapping an export category to - a list of :class:`ExportEntry` instances describing the - individual export entries. - """ - rf = self.get_distinfo_file(EXPORTS_FILENAME) - with open(rf, 'w') as f: - write_exports(exports, f) - - def get_resource_path(self, relative_path): - """ - NOTE: This API may change in the future. - - Return the absolute path to a resource file with the given relative - path. 
- - :param relative_path: The path, relative to .dist-info, of the resource - of interest. - :return: The absolute path where the resource is to be found. - """ - r = self.get_distinfo_resource('RESOURCES') - with contextlib.closing(r.as_stream()) as stream: - with CSVReader(stream=stream) as resources_reader: - for relative, destination in resources_reader: - if relative == relative_path: - return destination - raise KeyError('no resource file with relative path %r ' - 'is installed' % relative_path) - - def list_installed_files(self): - """ - Iterates over the ``RECORD`` entries and returns a tuple - ``(path, hash, size)`` for each line. - - :returns: iterator of (path, hash, size) - """ - for result in self._get_records(): - yield result - - def write_installed_files(self, paths, prefix, dry_run=False): - """ - Writes the ``RECORD`` file, using the ``paths`` iterable passed in. Any - existing ``RECORD`` file is silently overwritten. - - prefix is used to determine when to write absolute paths. - """ - prefix = os.path.join(prefix, '') - base = os.path.dirname(self.path) - base_under_prefix = base.startswith(prefix) - base = os.path.join(base, '') - record_path = self.get_distinfo_file('RECORD') - logger.info('creating %s', record_path) - if dry_run: - return None - with CSVWriter(record_path) as writer: - for path in paths: - if os.path.isdir(path) or path.endswith(('.pyc', '.pyo')): - # do not put size and hash, as in PEP-376 - hash_value = size = '' - else: - size = '%d' % os.path.getsize(path) - with open(path, 'rb') as fp: - hash_value = self.get_hash(fp.read()) - if path.startswith(base) or (base_under_prefix and - path.startswith(prefix)): - path = os.path.relpath(path, base) - writer.writerow((path, hash_value, size)) - - # add the RECORD file itself - if record_path.startswith(base): - record_path = os.path.relpath(record_path, base) - writer.writerow((record_path, '', '')) - return record_path - - def check_installed_files(self): - """ - Checks that the hashes and sizes of the files in ``RECORD`` are - matched by the files themselves. Returns a (possibly empty) list of - mismatches. Each entry in the mismatch list will be a tuple consisting - of the path, 'exists', 'size' or 'hash' according to what didn't match - (existence is checked first, then size, then hash), the expected - value and the actual value. - """ - mismatches = [] - base = os.path.dirname(self.path) - record_path = self.get_distinfo_file('RECORD') - for path, hash_value, size in self.list_installed_files(): - if not os.path.isabs(path): - path = os.path.join(base, path) - if path == record_path: - continue - if not os.path.exists(path): - mismatches.append((path, 'exists', True, False)) - elif os.path.isfile(path): - actual_size = str(os.path.getsize(path)) - if size and actual_size != size: - mismatches.append((path, 'size', size, actual_size)) - elif hash_value: - if '=' in hash_value: - hasher = hash_value.split('=', 1)[0] - else: - hasher = None - - with open(path, 'rb') as f: - actual_hash = self.get_hash(f.read(), hasher) - if actual_hash != hash_value: - mismatches.append((path, 'hash', hash_value, actual_hash)) - return mismatches - - @cached_property - def shared_locations(self): - """ - A dictionary of shared locations whose keys are in the set 'prefix', - 'purelib', 'platlib', 'scripts', 'headers', 'data' and 'namespace'. - The corresponding value is the absolute path of that category for - this distribution, and takes into account any paths selected by the - user at installation time (e.g. 
via command-line arguments). In the - case of the 'namespace' key, this would be a list of absolute paths - for the roots of namespace packages in this distribution. - - The first time this property is accessed, the relevant information is - read from the SHARED file in the .dist-info directory. - """ - result = {} - shared_path = os.path.join(self.path, 'SHARED') - if os.path.isfile(shared_path): - with codecs.open(shared_path, 'r', encoding='utf-8') as f: - lines = f.read().splitlines() - for line in lines: - key, value = line.split('=', 1) - if key == 'namespace': - result.setdefault(key, []).append(value) - else: - result[key] = value - return result - - def write_shared_locations(self, paths, dry_run=False): - """ - Write shared location information to the SHARED file in .dist-info. - :param paths: A dictionary as described in the documentation for - :meth:`shared_locations`. - :param dry_run: If True, the action is logged but no file is actually - written. - :return: The path of the file written to. - """ - shared_path = os.path.join(self.path, 'SHARED') - logger.info('creating %s', shared_path) - if dry_run: - return None - lines = [] - for key in ('prefix', 'lib', 'headers', 'scripts', 'data'): - path = paths[key] - if os.path.isdir(paths[key]): - lines.append('%s=%s' % (key, path)) - for ns in paths.get('namespace', ()): - lines.append('namespace=%s' % ns) - - with codecs.open(shared_path, 'w', encoding='utf-8') as f: - f.write('\n'.join(lines)) - return shared_path - - def get_distinfo_resource(self, path): - if path not in DIST_FILES: - raise DistlibException('invalid path for a dist-info file: ' - '%r at %r' % (path, self.path)) - finder = resources.finder_for_path(self.path) - if finder is None: - raise DistlibException('Unable to get a finder for %s' % self.path) - return finder.find(path) - - def get_distinfo_file(self, path): - """ - Returns a path located under the ``.dist-info`` directory. Returns a - string representing the path. - - :parameter path: a ``'/'``-separated path relative to the - ``.dist-info`` directory or an absolute path; - If *path* is an absolute path and doesn't start - with the ``.dist-info`` directory path, - a :class:`DistlibException` is raised - :type path: str - :rtype: str - """ - # Check if it is an absolute path # XXX use relpath, add tests - if path.find(os.sep) >= 0: - # it's an absolute path? - distinfo_dirname, path = path.split(os.sep)[-2:] - if distinfo_dirname != self.path.split(os.sep)[-1]: - raise DistlibException( - 'dist-info file %r does not belong to the %r %s ' - 'distribution' % (path, self.name, self.version)) - - # The file must be relative - if path not in DIST_FILES: - raise DistlibException('invalid path for a dist-info file: ' - '%r at %r' % (path, self.path)) - - return os.path.join(self.path, path) - - def list_distinfo_files(self): - """ - Iterates over the ``RECORD`` entries and returns paths for each line if - the path is pointing to a file located in the ``.dist-info`` directory - or one of its subdirectories. 
- - :returns: iterator of paths - """ - base = os.path.dirname(self.path) - for path, checksum, size in self._get_records(): - # XXX add separator or use real relpath algo - if not os.path.isabs(path): - path = os.path.join(base, path) - if path.startswith(self.path): - yield path - - def __eq__(self, other): - return (isinstance(other, InstalledDistribution) and - self.path == other.path) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - __hash__ = object.__hash__ - - -class EggInfoDistribution(BaseInstalledDistribution): - """Created with the *path* of the ``.egg-info`` directory or file provided - to the constructor. It reads the metadata contained in the file itself, or - if the given path happens to be a directory, the metadata is read from the - file ``PKG-INFO`` under that directory.""" - - requested = True # as we have no way of knowing, assume it was - shared_locations = {} - - def __init__(self, path, env=None): - def set_name_and_version(s, n, v): - s.name = n - s.key = n.lower() # for case-insensitive comparisons - s.version = v - - self.path = path - self.dist_path = env - if env and env._cache_enabled and path in env._cache_egg.path: - metadata = env._cache_egg.path[path].metadata - set_name_and_version(self, metadata.name, metadata.version) - else: - metadata = self._get_metadata(path) - - # Need to be set before caching - set_name_and_version(self, metadata.name, metadata.version) - - if env and env._cache_enabled: - env._cache_egg.add(self) - super(EggInfoDistribution, self).__init__(metadata, path, env) - - def _get_metadata(self, path): - requires = None - - def parse_requires_data(data): - """Create a list of dependencies from a requires.txt file. - - *data*: the contents of a setuptools-produced requires.txt file. - """ - reqs = [] - lines = data.splitlines() - for line in lines: - line = line.strip() - if line.startswith('['): - logger.warning('Unexpected line: quitting requirement scan: %r', - line) - break - r = parse_requirement(line) - if not r: - logger.warning('Not recognised as a requirement: %r', line) - continue - if r.extras: - logger.warning('extra requirements in requires.txt are ' - 'not supported') - if not r.constraints: - reqs.append(r.name) - else: - cons = ', '.join('%s%s' % c for c in r.constraints) - reqs.append('%s (%s)' % (r.name, cons)) - return reqs - - def parse_requires_path(req_path): - """Create a list of dependencies from a requires.txt file. - - *req_path*: the path to a setuptools-produced requires.txt file. 
- """ - - reqs = [] - try: - with codecs.open(req_path, 'r', 'utf-8') as fp: - reqs = parse_requires_data(fp.read()) - except IOError: - pass - return reqs - - tl_path = tl_data = None - if path.endswith('.egg'): - if os.path.isdir(path): - p = os.path.join(path, 'EGG-INFO') - meta_path = os.path.join(p, 'PKG-INFO') - metadata = Metadata(path=meta_path, scheme='legacy') - req_path = os.path.join(p, 'requires.txt') - tl_path = os.path.join(p, 'top_level.txt') - requires = parse_requires_path(req_path) - else: - # FIXME handle the case where zipfile is not available - zipf = zipimport.zipimporter(path) - fileobj = StringIO( - zipf.get_data('EGG-INFO/PKG-INFO').decode('utf8')) - metadata = Metadata(fileobj=fileobj, scheme='legacy') - try: - data = zipf.get_data('EGG-INFO/requires.txt') - tl_data = zipf.get_data('EGG-INFO/top_level.txt').decode('utf-8') - requires = parse_requires_data(data.decode('utf-8')) - except IOError: - requires = None - elif path.endswith('.egg-info'): - if os.path.isdir(path): - req_path = os.path.join(path, 'requires.txt') - requires = parse_requires_path(req_path) - path = os.path.join(path, 'PKG-INFO') - tl_path = os.path.join(path, 'top_level.txt') - metadata = Metadata(path=path, scheme='legacy') - else: - raise DistlibException('path must end with .egg-info or .egg, ' - 'got %r' % path) - - if requires: - metadata.add_requirements(requires) - # look for top-level modules in top_level.txt, if present - if tl_data is None: - if tl_path is not None and os.path.exists(tl_path): - with open(tl_path, 'rb') as f: - tl_data = f.read().decode('utf-8') - if not tl_data: - tl_data = [] - else: - tl_data = tl_data.splitlines() - self.modules = tl_data - return metadata - - def __repr__(self): - return '' % ( - self.name, self.version, self.path) - - def __str__(self): - return "%s %s" % (self.name, self.version) - - def check_installed_files(self): - """ - Checks that the hashes and sizes of the files in ``RECORD`` are - matched by the files themselves. Returns a (possibly empty) list of - mismatches. Each entry in the mismatch list will be a tuple consisting - of the path, 'exists', 'size' or 'hash' according to what didn't match - (existence is checked first, then size, then hash), the expected - value and the actual value. - """ - mismatches = [] - record_path = os.path.join(self.path, 'installed-files.txt') - if os.path.exists(record_path): - for path, _, _ in self.list_installed_files(): - if path == record_path: - continue - if not os.path.exists(path): - mismatches.append((path, 'exists', True, False)) - return mismatches - - def list_installed_files(self): - """ - Iterates over the ``installed-files.txt`` entries and returns a tuple - ``(path, hash, size)`` for each line. 
- - :returns: a list of (path, hash, size) - """ - - def _md5(path): - f = open(path, 'rb') - try: - content = f.read() - finally: - f.close() - return hashlib.md5(content).hexdigest() - - def _size(path): - return os.stat(path).st_size - - record_path = os.path.join(self.path, 'installed-files.txt') - result = [] - if os.path.exists(record_path): - with codecs.open(record_path, 'r', encoding='utf-8') as f: - for line in f: - line = line.strip() - p = os.path.normpath(os.path.join(self.path, line)) - # "./" is present as a marker between installed files - # and installation metadata files - if not os.path.exists(p): - logger.warning('Non-existent file: %s', p) - if p.endswith(('.pyc', '.pyo')): - continue - #otherwise fall through and fail - if not os.path.isdir(p): - result.append((p, _md5(p), _size(p))) - result.append((record_path, None, None)) - return result - - def list_distinfo_files(self, absolute=False): - """ - Iterates over the ``installed-files.txt`` entries and returns paths for - each line if the path is pointing to a file located in the - ``.egg-info`` directory or one of its subdirectories. - - :parameter absolute: If *absolute* is ``True``, each returned path is - transformed into a local absolute path. Otherwise the - raw value from ``installed-files.txt`` is returned. - :type absolute: boolean - :returns: iterator of paths - """ - record_path = os.path.join(self.path, 'installed-files.txt') - if os.path.exists(record_path): - skip = True - with codecs.open(record_path, 'r', encoding='utf-8') as f: - for line in f: - line = line.strip() - if line == './': - skip = False - continue - if not skip: - p = os.path.normpath(os.path.join(self.path, line)) - if p.startswith(self.path): - if absolute: - yield p - else: - yield line - - def __eq__(self, other): - return (isinstance(other, EggInfoDistribution) and - self.path == other.path) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - __hash__ = object.__hash__ - -new_dist_class = InstalledDistribution -old_dist_class = EggInfoDistribution - - -class DependencyGraph(object): - """ - Represents a dependency graph between distributions. - - The dependency relationships are stored in an ``adjacency_list`` that maps - distributions to a list of ``(other, label)`` tuples where ``other`` - is a distribution and the edge is labeled with ``label`` (i.e. the version - specifier, if such was provided). Also, for more efficient traversal, for - every distribution ``x``, a list of predecessors is kept in - ``reverse_list[x]``. An edge from distribution ``a`` to - distribution ``b`` means that ``a`` depends on ``b``. If any missing - dependencies are found, they are stored in ``missing``, which is a - dictionary that maps distributions to a list of requirements that were not - provided by any other distributions. - """ - - def __init__(self): - self.adjacency_list = {} - self.reverse_list = {} - self.missing = {} - - def add_distribution(self, distribution): - """Add the *distribution* to the graph. - - :type distribution: :class:`distutils2.database.InstalledDistribution` - or :class:`distutils2.database.EggInfoDistribution` - """ - self.adjacency_list[distribution] = [] - self.reverse_list[distribution] = [] - #self.missing[distribution] = [] - - def add_edge(self, x, y, label=None): - """Add an edge from distribution *x* to distribution *y* with the given - *label*. 
- - :type x: :class:`distutils2.database.InstalledDistribution` or - :class:`distutils2.database.EggInfoDistribution` - :type y: :class:`distutils2.database.InstalledDistribution` or - :class:`distutils2.database.EggInfoDistribution` - :type label: ``str`` or ``None`` - """ - self.adjacency_list[x].append((y, label)) - # multiple edges are allowed, so be careful - if x not in self.reverse_list[y]: - self.reverse_list[y].append(x) - - def add_missing(self, distribution, requirement): - """ - Add a missing *requirement* for the given *distribution*. - - :type distribution: :class:`distutils2.database.InstalledDistribution` - or :class:`distutils2.database.EggInfoDistribution` - :type requirement: ``str`` - """ - logger.debug('%s missing %r', distribution, requirement) - self.missing.setdefault(distribution, []).append(requirement) - - def _repr_dist(self, dist): - return '%s %s' % (dist.name, dist.version) - - def repr_node(self, dist, level=1): - """Prints only a subgraph""" - output = [self._repr_dist(dist)] - for other, label in self.adjacency_list[dist]: - dist = self._repr_dist(other) - if label is not None: - dist = '%s [%s]' % (dist, label) - output.append(' ' * level + str(dist)) - suboutput = self.repr_node(other, level + 1) - subs = suboutput.split('\n') - output.extend(subs[1:]) - return '\n'.join(output) - - def to_dot(self, f, skip_disconnected=True): - """Writes a DOT output for the graph to the provided file *f*. - - If *skip_disconnected* is set to ``True``, then all distributions - that are not dependent on any other distribution are skipped. - - :type f: has to support ``file``-like operations - :type skip_disconnected: ``bool`` - """ - disconnected = [] - - f.write("digraph dependencies {\n") - for dist, adjs in self.adjacency_list.items(): - if len(adjs) == 0 and not skip_disconnected: - disconnected.append(dist) - for other, label in adjs: - if not label is None: - f.write('"%s" -> "%s" [label="%s"]\n' % - (dist.name, other.name, label)) - else: - f.write('"%s" -> "%s"\n' % (dist.name, other.name)) - if not skip_disconnected and len(disconnected) > 0: - f.write('subgraph disconnected {\n') - f.write('label = "Disconnected"\n') - f.write('bgcolor = red\n') - - for dist in disconnected: - f.write('"%s"' % dist.name) - f.write('\n') - f.write('}\n') - f.write('}\n') - - def topological_sort(self): - """ - Perform a topological sort of the graph. - :return: A tuple, the first element of which is a topologically sorted - list of distributions, and the second element of which is a - list of distributions that cannot be sorted because they have - circular dependencies and so form a cycle. - """ - result = [] - # Make a shallow copy of the adjacency list - alist = {} - for k, v in self.adjacency_list.items(): - alist[k] = v[:] - while True: - # See what we can remove in this run - to_remove = [] - for k, v in list(alist.items())[:]: - if not v: - to_remove.append(k) - del alist[k] - if not to_remove: - # What's left in alist (if anything) is a cycle. 
- break - # Remove from the adjacency list of others - for k, v in alist.items(): - alist[k] = [(d, r) for d, r in v if d not in to_remove] - logger.debug('Moving to result: %s', - ['%s (%s)' % (d.name, d.version) for d in to_remove]) - result.extend(to_remove) - return result, list(alist.keys()) - - def __repr__(self): - """Representation of the graph""" - output = [] - for dist, adjs in self.adjacency_list.items(): - output.append(self.repr_node(dist)) - return '\n'.join(output) - - -def make_graph(dists, scheme='default'): - """Makes a dependency graph from the given distributions. - - :parameter dists: a list of distributions - :type dists: list of :class:`distutils2.database.InstalledDistribution` and - :class:`distutils2.database.EggInfoDistribution` instances - :rtype: a :class:`DependencyGraph` instance - """ - scheme = get_scheme(scheme) - graph = DependencyGraph() - provided = {} # maps names to lists of (version, dist) tuples - - # first, build the graph and find out what's provided - for dist in dists: - graph.add_distribution(dist) - - for p in dist.provides: - name, version = parse_name_and_version(p) - logger.debug('Add to provided: %s, %s, %s', name, version, dist) - provided.setdefault(name, []).append((version, dist)) - - # now make the edges - for dist in dists: - requires = (dist.run_requires | dist.meta_requires | - dist.build_requires | dist.dev_requires) - for req in requires: - try: - matcher = scheme.matcher(req) - except UnsupportedVersionError: - # XXX compat-mode if cannot read the version - logger.warning('could not read version %r - using name only', - req) - name = req.split()[0] - matcher = scheme.matcher(name) - - name = matcher.key # case-insensitive - - matched = False - if name in provided: - for version, provider in provided[name]: - try: - match = matcher.match(version) - except UnsupportedVersionError: - match = False - - if match: - graph.add_edge(dist, provider, req) - matched = True - break - if not matched: - graph.add_missing(dist, req) - return graph - - -def get_dependent_dists(dists, dist): - """Recursively generate a list of distributions from *dists* that are - dependent on *dist*. - - :param dists: a list of distributions - :param dist: a distribution, member of *dists* for which we are interested - """ - if dist not in dists: - raise DistlibException('given distribution %r is not a member ' - 'of the list' % dist.name) - graph = make_graph(dists) - - dep = [dist] # dependent distributions - todo = graph.reverse_list[dist] # list of nodes we should inspect - - while todo: - d = todo.pop() - dep.append(d) - for succ in graph.reverse_list[d]: - if succ not in dep: - todo.append(succ) - - dep.pop(0) # remove dist from dep, was there to prevent infinite loops - return dep - - -def get_required_dists(dists, dist): - """Recursively generate a list of distributions from *dists* that are - required by *dist*. - - :param dists: a list of distributions - :param dist: a distribution, member of *dists* for which we are interested - """ - if dist not in dists: - raise DistlibException('given distribution %r is not a member ' - 'of the list' % dist.name) - graph = make_graph(dists) - - req = [] # required distributions - todo = graph.adjacency_list[dist] # list of nodes we should inspect - - while todo: - d = todo.pop()[0] - req.append(d) - for pred in graph.adjacency_list[d]: - if pred not in req: - todo.append(pred) - - return req - - -def make_dist(name, version, **kwargs): - """ - A convenience method for making a dist given just a name and version. 
- """ - summary = kwargs.pop('summary', 'Placeholder for summary') - md = Metadata(**kwargs) - md.name = name - md.version = version - md.summary = summary or 'Placeholder for summary' - return Distribution(md) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/optionaltags.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/optionaltags.py deleted file mode 100644 index 4a865012c16237c3c4dd3404af7ed767b98d235a..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/optionaltags.py +++ /dev/null @@ -1,207 +0,0 @@ -from __future__ import absolute_import, division, unicode_literals - -from . import base - - -class Filter(base.Filter): - """Removes optional tags from the token stream""" - def slider(self): - previous1 = previous2 = None - for token in self.source: - if previous1 is not None: - yield previous2, previous1, token - previous2 = previous1 - previous1 = token - if previous1 is not None: - yield previous2, previous1, None - - def __iter__(self): - for previous, token, next in self.slider(): - type = token["type"] - if type == "StartTag": - if (token["data"] or - not self.is_optional_start(token["name"], previous, next)): - yield token - elif type == "EndTag": - if not self.is_optional_end(token["name"], next): - yield token - else: - yield token - - def is_optional_start(self, tagname, previous, next): - type = next and next["type"] or None - if tagname in 'html': - # An html element's start tag may be omitted if the first thing - # inside the html element is not a space character or a comment. - return type not in ("Comment", "SpaceCharacters") - elif tagname == 'head': - # A head element's start tag may be omitted if the first thing - # inside the head element is an element. - # XXX: we also omit the start tag if the head element is empty - if type in ("StartTag", "EmptyTag"): - return True - elif type == "EndTag": - return next["name"] == "head" - elif tagname == 'body': - # A body element's start tag may be omitted if the first thing - # inside the body element is not a space character or a comment, - # except if the first thing inside the body element is a script - # or style element and the node immediately preceding the body - # element is a head element whose end tag has been omitted. - if type in ("Comment", "SpaceCharacters"): - return False - elif type == "StartTag": - # XXX: we do not look at the preceding event, so we never omit - # the body element's start tag if it's followed by a script or - # a style element. - return next["name"] not in ('script', 'style') - else: - return True - elif tagname == 'colgroup': - # A colgroup element's start tag may be omitted if the first thing - # inside the colgroup element is a col element, and if the element - # is not immediately preceded by another colgroup element whose - # end tag has been omitted. - if type in ("StartTag", "EmptyTag"): - # XXX: we do not look at the preceding event, so instead we never - # omit the colgroup element's end tag when it is immediately - # followed by another colgroup element. See is_optional_end. - return next["name"] == "col" - else: - return False - elif tagname == 'tbody': - # A tbody element's start tag may be omitted if the first thing - # inside the tbody element is a tr element, and if the element is - # not immediately preceded by a tbody, thead, or tfoot element - # whose end tag has been omitted. 
- if type == "StartTag": - # omit the thead and tfoot elements' end tag when they are - # immediately followed by a tbody element. See is_optional_end. - if previous and previous['type'] == 'EndTag' and \ - previous['name'] in ('tbody', 'thead', 'tfoot'): - return False - return next["name"] == 'tr' - else: - return False - return False - - def is_optional_end(self, tagname, next): - type = next and next["type"] or None - if tagname in ('html', 'head', 'body'): - # An html element's end tag may be omitted if the html element - # is not immediately followed by a space character or a comment. - return type not in ("Comment", "SpaceCharacters") - elif tagname in ('li', 'optgroup', 'tr'): - # A li element's end tag may be omitted if the li element is - # immediately followed by another li element or if there is - # no more content in the parent element. - # An optgroup element's end tag may be omitted if the optgroup - # element is immediately followed by another optgroup element, - # or if there is no more content in the parent element. - # A tr element's end tag may be omitted if the tr element is - # immediately followed by another tr element, or if there is - # no more content in the parent element. - if type == "StartTag": - return next["name"] == tagname - else: - return type == "EndTag" or type is None - elif tagname in ('dt', 'dd'): - # A dt element's end tag may be omitted if the dt element is - # immediately followed by another dt element or a dd element. - # A dd element's end tag may be omitted if the dd element is - # immediately followed by another dd element or a dt element, - # or if there is no more content in the parent element. - if type == "StartTag": - return next["name"] in ('dt', 'dd') - elif tagname == 'dd': - return type == "EndTag" or type is None - else: - return False - elif tagname == 'p': - # A p element's end tag may be omitted if the p element is - # immediately followed by an address, article, aside, - # blockquote, datagrid, dialog, dir, div, dl, fieldset, - # footer, form, h1, h2, h3, h4, h5, h6, header, hr, menu, - # nav, ol, p, pre, section, table, or ul, element, or if - # there is no more content in the parent element. - if type in ("StartTag", "EmptyTag"): - return next["name"] in ('address', 'article', 'aside', - 'blockquote', 'datagrid', 'dialog', - 'dir', 'div', 'dl', 'fieldset', 'footer', - 'form', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', - 'header', 'hr', 'menu', 'nav', 'ol', - 'p', 'pre', 'section', 'table', 'ul') - else: - return type == "EndTag" or type is None - elif tagname == 'option': - # An option element's end tag may be omitted if the option - # element is immediately followed by another option element, - # or if it is immediately followed by an optgroup - # element, or if there is no more content in the parent - # element. - if type == "StartTag": - return next["name"] in ('option', 'optgroup') - else: - return type == "EndTag" or type is None - elif tagname in ('rt', 'rp'): - # An rt element's end tag may be omitted if the rt element is - # immediately followed by an rt or rp element, or if there is - # no more content in the parent element. - # An rp element's end tag may be omitted if the rp element is - # immediately followed by an rt or rp element, or if there is - # no more content in the parent element. 
- if type == "StartTag": - return next["name"] in ('rt', 'rp') - else: - return type == "EndTag" or type is None - elif tagname == 'colgroup': - # A colgroup element's end tag may be omitted if the colgroup - # element is not immediately followed by a space character or - # a comment. - if type in ("Comment", "SpaceCharacters"): - return False - elif type == "StartTag": - # XXX: we also look for an immediately following colgroup - # element. See is_optional_start. - return next["name"] != 'colgroup' - else: - return True - elif tagname in ('thead', 'tbody'): - # A thead element's end tag may be omitted if the thead element - # is immediately followed by a tbody or tfoot element. - # A tbody element's end tag may be omitted if the tbody element - # is immediately followed by a tbody or tfoot element, or if - # there is no more content in the parent element. - # A tfoot element's end tag may be omitted if the tfoot element - # is immediately followed by a tbody element, or if there is no - # more content in the parent element. - # XXX: we never omit the end tag when the following element is - # a tbody. See is_optional_start. - if type == "StartTag": - return next["name"] in ['tbody', 'tfoot'] - elif tagname == 'tbody': - return type == "EndTag" or type is None - else: - return False - elif tagname == 'tfoot': - # A tfoot element's end tag may be omitted if the tfoot element - # is immediately followed by a tbody element, or if there is no - # more content in the parent element. - # XXX: we never omit the end tag when the following element is - # a tbody. See is_optional_start. - if type == "StartTag": - return next["name"] == 'tbody' - else: - return type == "EndTag" or type is None - elif tagname in ('td', 'th'): - # A td element's end tag may be omitted if the td element is - # immediately followed by a td or th element, or if there is - # no more content in the parent element. - # A th element's end tag may be omitted if the th element is - # immediately followed by a td or th element, or if there is - # no more content in the parent element. - if type == "StartTag": - return next["name"] in ('td', 'th') - else: - return type == "EndTag" or type is None - return False diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/resolvelib/compat/collections_abc.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/resolvelib/compat/collections_abc.py deleted file mode 100644 index 1becc5093c5ab8e196bb9fee415e2381e7158fc3..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/resolvelib/compat/collections_abc.py +++ /dev/null @@ -1,6 +0,0 @@ -__all__ = ["Mapping", "Sequence"] - -try: - from collections.abc import Mapping, Sequence -except ImportError: - from collections import Mapping, Sequence diff --git a/spaces/ali-ghamdan/deoldify/fastai/tabular/__init__.py b/spaces/ali-ghamdan/deoldify/fastai/tabular/__init__.py deleted file mode 100644 index 3404df06dde4af905a8d82893110a1b285a78250..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/fastai/tabular/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .. import basics -from ..basics import * -from .data import * -from .transform import * -from .models import * -from .. 
import tabular - -__all__ = [*basics.__all__, *data.__all__, *transform.__all__, *models.__all__, 'tabular'] - diff --git a/spaces/allknowingroger/Image-Models-Test113/README.md b/spaces/allknowingroger/Image-Models-Test113/README.md deleted file mode 100644 index 2b69b5bd446a4f4d04f29db6dbde10c6565173f7..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test113/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test112 ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test48/README.md b/spaces/allknowingroger/Image-Models-Test48/README.md deleted file mode 100644 index dd816a900be6e86e0c76e6ab63129d5be678ece4..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test48/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Models -emoji: 👀 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test47 ---- - - \ No newline at end of file diff --git a/spaces/almakedon/faster-whisper-webui/src/whisper/abstractWhisperContainer.py b/spaces/almakedon/faster-whisper-webui/src/whisper/abstractWhisperContainer.py deleted file mode 100644 index 3df2f19ad8c5665b1f09bc3e59943049769b54b7..0000000000000000000000000000000000000000 --- a/spaces/almakedon/faster-whisper-webui/src/whisper/abstractWhisperContainer.py +++ /dev/null @@ -1,107 +0,0 @@ -import abc -from typing import List - -from src.config import ModelConfig, VadInitialPromptMode - -from src.hooks.progressListener import ProgressListener -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache -from src.prompts.abstractPromptStrategy import AbstractPromptStrategy - -class AbstractWhisperCallback: - def __init__(self): - self.__prompt_mode_gpt = None - - @abc.abstractmethod - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - progress_listener: ProgressListener - A callback to receive progress updates. - """ - raise NotImplementedError() - -class AbstractWhisperContainer: - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - self.model_name = model_name - self.device = device - self.compute_type = compute_type - self.download_root = download_root - self.cache = cache - - # Will be created on demand - self.model = None - - # List of known models - self.models = models - - def get_model(self): - if self.model is None: - - if (self.cache is None): - self.model = self._create_model() - else: - model_key = "WhisperContainer." 
+ self.model_name + ":" + (self.device if self.device else '') - self.model = self.cache.get(model_key, self._create_model) - return self.model - - @abc.abstractmethod - def _create_model(self): - raise NotImplementedError() - - def ensure_downloaded(self): - pass - - @abc.abstractmethod - def create_callback(self, language: str = None, task: str = None, - prompt_strategy: AbstractPromptStrategy = None, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - prompt_strategy: AbstractPromptStrategy - The prompt strategy to use for the transcription. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. - """ - raise NotImplementedError() - - # This is required for multiprocessing - def __getstate__(self): - return { - "model_name": self.model_name, - "device": self.device, - "download_root": self.download_root, - "models": self.models, - "compute_type": self.compute_type - } - - def __setstate__(self, state): - self.model_name = state["model_name"] - self.device = state["device"] - self.download_root = state["download_root"] - self.models = state["models"] - self.compute_type = state["compute_type"] - self.model = None - # Depickled objects must use the global cache - self.cache = GLOBAL_MODEL_CACHE \ No newline at end of file diff --git a/spaces/anakin87/who-killed-laura-palmer/crawler/README.md b/spaces/anakin87/who-killed-laura-palmer/crawler/README.md deleted file mode 100644 index d1e1f7c10200ffa35f174defdd94092781744cc6..0000000000000000000000000000000000000000 --- a/spaces/anakin87/who-killed-laura-palmer/crawler/README.md +++ /dev/null @@ -1,15 +0,0 @@ -# Twin Peaks crawler - -This crawler download texts and metadata from [Twin Peaks Fandom Wiki](https://twinpeaks.fandom.com/wiki/Twin_Peaks_Wiki). The output format is JSON. The crawler is based on the combination of [Scrapy](https://github.com/scrapy/scrapy) and [fandom-py](https://github.com/NikolajDanger/fandom-py). 
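To make the Scrapy + fandom-py combination concrete, here is a rough sketch of the idea. This is not the tpcrawler source: the spider name, the example page titles, and the exact fandom-py calls (`set_wiki`, `page`, and the `plain_text` attribute) are assumptions based on that library's documentation.

```python
# Illustrative sketch only -- not the actual tpcrawler spider.
# Assumes fandom-py exposes set_wiki()/page() and that page objects
# carry .title, .url and .plain_text.
import fandom
import scrapy


class TwinPeaksSketchSpider(scrapy.Spider):
    name = "tpcrawler_sketch"
    # Hypothetical article titles, just to show the flow.
    page_titles = ["Laura Palmer", "Dale Cooper"]
    start_urls = ["https://twinpeaks.fandom.com/wiki/Twin_Peaks_Wiki"]

    def parse(self, response):
        fandom.set_wiki("twinpeaks")
        for title in self.page_titles:
            page = fandom.page(title)
            # Each yielded dict becomes one JSON record when run with
            # `scrapy crawl ... -o pages.json`.
            yield {
                "title": page.title,
                "url": page.url,
                "text": page.plain_text,
            }
```

The real crawler also filters out pages unrelated to the plot, as noted below.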
- -*Several wiki pages are discarded, since they are not related to Twin Peaks plot and create noise in the Question Answering index.* - -## Installation -- copy this folder (if needed, see [stackoverflow](https://stackoverflow.com/questions/7106012/download-a-single-folder-or-directory-from-a-github-repo)) -- `pip install -r requirements.txt` - -## Usage -- (if needed, activate the virtual environment) -- `cd tpcrawler` -- `scrapy crawl tpcrawler` -- you can find the downloaded pages in `data` subfolder diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Forefront.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Forefront.py deleted file mode 100644 index e7e89831cc4ec6dc37ea094d9828a7582e981ff1..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/Forefront.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -import json -import requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://forefront.com' -model = ['gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - json_data = { - 'text': messages[-1]['content'], - 'action': 'noauth', - 'id': '', - 'parentId': '', - 'workspaceId': '', - 'messagePersona': '607e41fe-95be-497e-8e97-010a59b2e2c0', - 'model': 'gpt-4', - 'messages': messages[:-1] if len(messages) > 1 else [], - 'internetMode': 'auto' - } - response = requests.post( 'https://streaming.tenant-forefront-default.knative.chi.coreweave.com/free-chat', - json=json_data, stream=True) - for token in response.iter_lines(): - if b'delta' in token: - token = json.loads(token.decode().split('data: ')[1])['delta'] - yield (token) -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/docs/Training-LoRAs.md b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/docs/Training-LoRAs.md deleted file mode 100644 index 3d75ec5aa2bc12e8c13d6a583bd9aefd118f04d7..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/docs/Training-LoRAs.md +++ /dev/null @@ -1,167 +0,0 @@ -## Training Your Own LoRAs - -The WebUI seeks to make training your own LoRAs as easy as possible. It comes down to just a few simple steps: - -### **Step 1**: Make a plan. -- What base model do you want to use? The LoRA you make has to be matched up to a single architecture (eg LLaMA-13B) and cannot be transferred to others (eg LLaMA-7B, StableLM, etc. would all be different). Derivatives of the same model (eg Alpaca finetune of LLaMA-13B) might be transferrable, but even then it's best to train exactly on what you plan to use. -- What model format do you want? At time of writing, 8-bit models are most stable, and 4-bit are supported but experimental. In the near future it is likely that 4-bit will be the best option for most users. -- What are you training it on? Do you want it to learn real information, a simple format, ...? - -### **Step 2**: Gather a dataset. 
-- If you use a dataset similar to the [Alpaca](https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json) format, that is natively supported by the `Formatted Dataset` input in the WebUI, with premade formatter options. -- If you use a dataset that isn't matched to Alpaca's format, but uses the same basic JSON structure, you can make your own format file by copying `training/formats/alpaca-format.json` to a new file and [editing its content](#format-files). -- If you can get the dataset into a simple text file, that works too! You can train using the `Raw text file` input option. - - This means you can for example just copy/paste a chatlog/documentation page/whatever you want, shove it in a plain text file, and train on it. -- If you use a structured dataset not in this format, you may have to find an external way to convert it - or open an issue to request native support. - -### **Step 3**: Do the training. -- **3.1**: Load the WebUI, and your model. - - Make sure you don't have any LoRAs already loaded (unless you want to train for multi-LoRA usage). -- **3.2**: Open the `Training` tab at the top, `Train LoRA` sub-tab. -- **3.3**: Fill in the name lof the LoRA, select your dataset in the dataset options. -- **3.4**: Select other parameters to your preference. See [parameters below](#parameters). -- **3.5**: click `Start LoRA Training`, and wait. - - It can take a few hours for a large dataset, or just a few minute if doing a small run. - - You may want to monitor your [loss value](#loss) while it goes. - -### **Step 4**: Evaluate your results. -- Load the LoRA under the Models Tab. -- You can go test-drive it on the `Text generation` tab, or you can use the `Perplexity evaluation` sub-tab of the `Training` tab. -- If you used the `Save every n steps` option, you can grab prior copies of the model from sub-folders within the LoRA model's folder and try them instead. - -### **Step 5**: Re-run if you're unhappy. -- Make sure to unload the LoRA before training it. -- You can simply resume a prior run - use `Copy parameters from` to select your LoRA, and edit parameters. Note that you cannot change the `Rank` of an already created LoRA. - - If you want to resume from a checkpoint saved along the way, simply copy the contents of the checkpoint folder into the LoRA's folder. - - (Note: `adapter_model.bin` is the important file that holds the actual LoRA content). - - This will start Learning Rate and Steps back to the start. If you want to resume as if you were midway through, you can adjust your Learning Rate to the last reported LR in logs and reduce your epochs. -- Or, you can start over entirely if you prefer. -- If your model is producing corrupted outputs, you probably need to start over and use a lower Learning Rate. -- If your model isn't learning detailed information but you want it to, you might need to just run more epochs, or you might need a higher Rank. -- If your model is enforcing a format you didn't want, you may need to tweak your dataset, or start over and not train as far. - -## Format Files - -If using JSON formatted datasets, they are presumed to be in the following approximate format: - -```json -[ - { - "somekey": "somevalue", - "key2": "value2" - }, - { - // etc - } -] -``` - -Where the keys (eg `somekey`, `key2` above) are standardized, and relatively consistent across the dataset, and the values (eg `somevalue`, `value2`) contain the content actually intended to be trained. 
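To make the key/value idea concrete, here is a rough sketch of how a format file (like the Alpaca one shown just below) could be applied to a dataset record. The `apply_format` helper is a made-up name for illustration only; the `%key%` placeholder convention it uses is the one this section describes, but this is not the WebUI's actual implementation.

```python
# Illustrative sketch of the %key% substitution described in this section.
# apply_format() is a hypothetical helper, not text-generation-webui code.
def apply_format(record: dict, formats: dict) -> str:
    # Pick the template whose comma-separated key list matches the
    # record's non-empty keys, e.g. "instruction,output" when "input" is blank.
    present = [key for key, value in record.items() if value]
    template = formats[",".join(present)]
    for key in present:
        template = template.replace(f"%{key}%", record[key])
    return template


formats = {
    "instruction,output": "User: %instruction%\nAssistant: %output%",
    "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%",
}
record = {"instruction": "answer my question", "input": "", "output": "Sure."}
print(apply_format(record, formats))
# -> User: answer my question
#    Assistant: Sure.
```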
- -For Alpaca, the keys are `instruction`, `input`, and `output`, wherein `input` is sometimes blank. - -A simple format file for Alpaca to be used as a chat bot is: - -```json -{ - "instruction,output": "User: %instruction%\nAssistant: %output%", - "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%" -} -``` - -Note that the keys (eg `instruction,output`) are a comma-separated list of dataset keys, and the values are a simple string that use those keys with `%%`. - -So for example if a dataset has `"instruction": "answer my question"`, then the format file's `User: %instruction%\n` will be automatically filled in as `User: answer my question\n`. - -If you have different sets of key inputs, you can make your own format file to match it. This format-file is designed to be as simple as possible to enable easy editing to match your needs. - -## Parameters - -The basic purpose and function of each parameter is documented on-page in the WebUI, so read through them in the UI to understand your options. - -That said, here's a guide to the most important parameter choices you should consider: - -### VRAM - -- First, you must consider your VRAM availability. - - Generally, under default settings, VRAM usage for training with default parameters is very close to when generating text (with 1000+ tokens of context) (ie, if you can generate text, you can train LoRAs). - - Note: worse by default in the 4-bit monkeypatch currently. Reduce `Micro Batch Size` to `1` to restore this to expectations. - - If you have VRAM to spare, setting higher batch sizes will use more VRAM and get you better quality training in exchange. - - If you have large data, setting a higher cutoff length may be beneficial, but will cost significant VRAM. If you can spare some, set your batch size to `1` and see how high you can push your cutoff length. - - If you're low on VRAM, reducing batch size or cutoff length will of course improve that. - - Don't be afraid to just try it and see what happens. If it's too much, it will just error out, and you can lower settings and try again. - -### Rank - -- Second, you want to consider the amount of learning you want. - - For example, you may wish to just learn a dialogue format (as in the case of Alpaca) in which case setting a low `Rank` value (32 or lower) works great. - - Or, you might be training on project documentation you want the bot to understand and be able to understand questions about, in which case the higher the rank, the better. - - Generally, higher Rank = more precise learning = more total content learned = more VRAM usage while training. - -### Learning Rate and Epochs - -- Third, how carefully you want it to be learned. - - In other words, how okay or not you are with the model losing unrelated understandings. - - You can control this with 3 key settings: the Learning Rate, its scheduler, and your total epochs. - - The learning rate controls how much change is made to the model by each token it sees. - - It's in scientific notation normally, so for example `3e-4` means `3 * 10^-4` which is `0.0003`. The number after `e-` controls how many `0`s are in the number. - - Higher values let training run faster, but also are more likely to corrupt prior data in the model. - - You essentially have two variables to balance: the LR, and Epochs. - - If you make LR higher, you can set Epochs equally lower to match. High LR + low epochs = very fast, low quality training. - - If you make LR low, set epochs high. 
Low LR + high epochs = slow but high-quality training. - - The scheduler controls change-over-time as you train - it starts high, and then goes low. This helps balance getting data in, and having decent quality, at the same time. - - You can see graphs of the different scheduler options [in the HuggingFace docs here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_1/en/main_classes/optimizer_schedules#transformers.SchedulerType) - -## Loss - -When you're running training, the WebUI's console window will log reports that include, among other things, a numeric value named `Loss`. It will start as a high number, and gradually get lower and lower as it goes. - -"Loss" in the world of AI training theoretically means "how close is the model to perfect", with `0` meaning "absolutely perfect". This is calculated by measuring the difference between the model outputting exactly the text you're training it to output, and what it actually outputs. - -In practice, a good LLM should have a very complex variable range of ideas running in its artificial head, so a loss of `0` would indicate that the model has broken and forgotten to how think about anything other than what you trained it. - -So, in effect, Loss is a balancing game: you want to get it low enough that it understands your data, but high enough that it isn't forgetting everything else. Generally, if it goes below `1.0`, it's going to start forgetting its prior memories, and you should stop training. In some cases you may prefer to take it as low as `0.5` (if you want it to be very very predictable). Different goals have different needs, so don't be afraid to experiment and see what works best for you. - -Note: if you see Loss start at or suddenly jump to exactly `0`, it is likely something has gone wrong in your training process (eg model corruption). - -## Note: 4-Bit Monkeypatch - -The [4-bit LoRA monkeypatch](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) works for training, but has side effects: -- VRAM usage is higher currently. You can reduce the `Micro Batch Size` to `1` to compensate. -- Models do funky things. LoRAs apply themselves, or refuse to apply, or spontaneously error out, or etc. It can be helpful to reload base model or restart the WebUI between training/usage to minimize chances of anything going haywire. -- Loading or working with multiple LoRAs at the same time doesn't currently work. -- Generally, recognize and treat the monkeypatch as the dirty temporary hack it is - it works, but isn't very stable. It will get better in time when everything is merged upstream for full official support. - -## Legacy notes - -LoRA training was contributed by [mcmonkey4eva](https://github.com/mcmonkey4eva) in PR [#570](https://github.com/oobabooga/text-generation-webui/pull/570). - -### Using the original alpaca-lora code - -Kept here for reference. The Training tab has much more features than this method. - -``` -conda activate textgen -git clone https://github.com/tloen/alpaca-lora -``` - -Edit those two lines in `alpaca-lora/finetune.py` to use your existing model folder instead of downloading everything from decapoda: - -``` -model = LlamaForCausalLM.from_pretrained( - "models/llama-7b", - load_in_8bit=True, - device_map="auto", -) -tokenizer = LlamaTokenizer.from_pretrained( - "models/llama-7b", add_eos_token=True -) -``` - -Run the script with: - -``` -python finetune.py -``` - -It just works. It runs at 22.32s/it, with 1170 iterations in total, so about 7 hours and a half for training a LoRA. 
RTX 3090, 18153MiB VRAM used, drawing maximum power (350W, room heater mode). diff --git a/spaces/arsalagrey/object-detection-vue/index.html b/spaces/arsalagrey/object-detection-vue/index.html deleted file mode 100644 index 0359d77c41d3ffc3ac9e00c214b7aef1a364f723..0000000000000000000000000000000000000000 --- a/spaces/arsalagrey/object-detection-vue/index.html +++ /dev/null @@ -1,44 +0,0 @@ - - - - - - Image Object Detection Vue - HuggingFace.js Live Examples - - - - - -
Detect Objects In Random Unsplash Images
{{statusMessage}}
      - - - diff --git a/spaces/arslan-ahmed/talk-to-your-docs/README.md b/spaces/arslan-ahmed/talk-to-your-docs/README.md deleted file mode 100644 index 13983c7e0c6c640ca74a31ba54b6887fb55a3d4d..0000000000000000000000000000000000000000 --- a/spaces/arslan-ahmed/talk-to-your-docs/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Talk To Your Docs -emoji: 🌍 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: mit ---- - - - -# Talk To Your Docs - -## For users -You can use this app via Hugging Face where it is already deployed:
      -https://huggingface.co/spaces/arslan-ahmed/talk-to-your-docs - - -## For developers -Source code:
      -https://huggingface.co/spaces/arslan-ahmed/talk-to-your-docs/tree/main - -## Create your personal bot (let people talk to your docs) -You can develop and deploy your own personal chatbot (similar to https://huggingface.co/spaces/arslan-ahmed/talk-to-arslan), with the following three commands: - - -docker pull arslan2k12/ttyd_base (https://hub.docker.com/r/arslan2k12/ttyd_base)
      -docker pull arslan2k12/arslanbot (https://hub.docker.com/r/arslan2k12/arslanbot)
      -docker run --rm -d -p 7860:7860 --env-file ./.env arslan2k12/ttyd_arslanbot - - -Contents of `.env` file: -``` -TTYD_MODE=personalBot_John -#replace John with your name - use only alphabets, no special characters - -GDRIVE_FOLDER_URL=https://drive.google.com/drive/folders/1ce1n1kleS1FOotdcu5joXeSRu_xnHjDt -# replace with your Google Drive folder URL that has all your knowledge base files (.pdf, .docs, .txt) - make sure this folder is publicly accessible (everyone with the link) - -OPENAI_API_KEY=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 -# your OpenAI API Key taken from https://platform.openai.com/account/api-keys -``` \ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/phonemizers/espeak_wrapper.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/phonemizers/espeak_wrapper.py deleted file mode 100644 index 8982a893779495cc4d8187040546706fdcc9d11b..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/text/phonemizers/espeak_wrapper.py +++ /dev/null @@ -1,268 +0,0 @@ -import logging -import re -import subprocess -from typing import Dict, List - -from packaging.version import Version - -from TTS.tts.utils.text.phonemizers.base import BasePhonemizer -from TTS.tts.utils.text.punctuation import Punctuation - - -def is_tool(name): - from shutil import which - - return which(name) is not None - - -# Use a regex pattern to match the espeak version, because it may be -# symlinked to espeak-ng, which moves the version bits to another spot. -espeak_version_pattern = re.compile(r"text-to-speech:\s(?P\d+\.\d+(\.\d+)?)") - - -def get_espeak_version(): - output = subprocess.getoutput("espeak --version") - match = espeak_version_pattern.search(output) - - return match.group("version") - - -def get_espeakng_version(): - output = subprocess.getoutput("espeak-ng --version") - return output.split()[3] - - -# priority: espeakng > espeak -if is_tool("espeak-ng"): - _DEF_ESPEAK_LIB = "espeak-ng" - _DEF_ESPEAK_VER = get_espeakng_version() -elif is_tool("espeak"): - _DEF_ESPEAK_LIB = "espeak" - _DEF_ESPEAK_VER = get_espeak_version() -else: - _DEF_ESPEAK_LIB = None - _DEF_ESPEAK_VER = None - - -def _espeak_exe(espeak_lib: str, args: List, sync=False) -> List[str]: - """Run espeak with the given arguments.""" - cmd = [ - espeak_lib, - "-q", - "-b", - "1", # UTF8 text encoding - ] - cmd.extend(args) - logging.debug("espeakng: executing %s", repr(cmd)) - - with subprocess.Popen( - cmd, - stdout=subprocess.PIPE, - stderr=subprocess.STDOUT, - ) as p: - res = iter(p.stdout.readline, b"") - if not sync: - p.stdout.close() - if p.stderr: - p.stderr.close() - if p.stdin: - p.stdin.close() - return res - res2 = [] - for line in res: - res2.append(line) - p.stdout.close() - if p.stderr: - p.stderr.close() - if p.stdin: - p.stdin.close() - p.wait() - return res2 - - -class ESpeak(BasePhonemizer): - """ESpeak wrapper calling `espeak` or `espeak-ng` from the command-line the perform G2P - - Args: - language (str): - Valid language code for the used backend. - - backend (str): - Name of the backend library to use. `espeak` or `espeak-ng`. If None, set automatically - prefering `espeak-ng` over `espeak`. Defaults to None. - - punctuations (str): - Characters to be treated as punctuation. Defaults to Punctuation.default_puncs(). - - keep_puncs (bool): - If True, keep the punctuations after phonemization. Defaults to True. 
- - Example: - - >>> from TTS.tts.utils.text.phonemizers import ESpeak - >>> phonemizer = ESpeak("tr") - >>> phonemizer.phonemize("Bu Türkçe, bir örnektir.", separator="|") - 'b|ʊ t|ˈø|r|k|tʃ|ɛ, b|ɪ|r œ|r|n|ˈɛ|c|t|ɪ|r.' - - """ - - _ESPEAK_LIB = _DEF_ESPEAK_LIB - _ESPEAK_VER = _DEF_ESPEAK_VER - - def __init__(self, language: str, backend=None, punctuations=Punctuation.default_puncs(), keep_puncs=True): - if self._ESPEAK_LIB is None: - raise Exception(" [!] No espeak backend found. Install espeak-ng or espeak to your system.") - self.backend = self._ESPEAK_LIB - - # band-aid for backwards compatibility - if language == "en": - language = "en-us" - if language == "zh-cn": - language = "cmn" - - super().__init__(language, punctuations=punctuations, keep_puncs=keep_puncs) - if backend is not None: - self.backend = backend - - @property - def backend(self): - return self._ESPEAK_LIB - - @property - def backend_version(self): - return self._ESPEAK_VER - - @backend.setter - def backend(self, backend): - if backend not in ["espeak", "espeak-ng"]: - raise Exception("Unknown backend: %s" % backend) - self._ESPEAK_LIB = backend - self._ESPEAK_VER = get_espeakng_version() if backend == "espeak-ng" else get_espeak_version() - - def auto_set_espeak_lib(self) -> None: - if is_tool("espeak-ng"): - self._ESPEAK_LIB = "espeak-ng" - self._ESPEAK_VER = get_espeakng_version() - elif is_tool("espeak"): - self._ESPEAK_LIB = "espeak" - self._ESPEAK_VER = get_espeak_version() - else: - raise Exception("Cannot set backend automatically. espeak-ng or espeak not found") - - @staticmethod - def name(): - return "espeak" - - def phonemize_espeak(self, text: str, separator: str = "|", tie=False) -> str: - """Convert input text to phonemes. - - Args: - text (str): - Text to be converted to phonemes. - - tie (bool, optional) : When True use a '͡' character between - consecutive characters of a single phoneme. Else separate phoneme - with '_'. This option requires espeak>=1.49. Default to False. - """ - # set arguments - args = ["-v", f"{self._language}"] - # espeak and espeak-ng parses `ipa` differently - if tie: - # use '͡' between phonemes - if self.backend == "espeak": - args.append("--ipa=1") - else: - args.append("--ipa=3") - else: - # split with '_' - if self.backend == "espeak": - if Version(self.backend_version) >= Version("1.48.15"): - args.append("--ipa=1") - else: - args.append("--ipa=3") - else: - args.append("--ipa=1") - if tie: - args.append("--tie=%s" % tie) - - args.append('"' + text + '"') - # compute phonemes - phonemes = "" - for line in _espeak_exe(self._ESPEAK_LIB, args, sync=True): - logging.debug("line: %s", repr(line)) - ph_decoded = line.decode("utf8").strip() - # espeak need to skip first two characters of the retuned text: - # version 1.48.03: "_ p_ɹ_ˈaɪ_ɚ t_ə n_oʊ_v_ˈɛ_m_b_ɚ t_w_ˈɛ_n_t_i t_ˈuː\n" - # version 1.48.15: " p_ɹ_ˈaɪ_ɚ t_ə n_oʊ_v_ˈɛ_m_b_ɚ t_w_ˈɛ_n_t_i t_ˈuː\n" - # espeak-ng need to skip the first character of the retuned text: - # "_p_ɹ_ˈaɪ_ɚ t_ə n_oʊ_v_ˈɛ_m_b_ɚ t_w_ˈɛ_n_t_i t_ˈuː\n" - - # dealing with the conditions descrived above - ph_decoded = ph_decoded[:1].replace("_", "") + ph_decoded[1:] - - # espeak-ng backend can add language flags that need to be removed: - # "sɛʁtˈɛ̃ mˈo kɔm (en)fˈʊtbɔːl(fr) ʒenˈɛʁ de- flˈaɡ də- lˈɑ̃ɡ." - # phonemize needs to remove the language flags of the returned text: - # "sɛʁtˈɛ̃ mˈo kɔm fˈʊtbɔːl ʒenˈɛʁ de- flˈaɡ də- lˈɑ̃ɡ." 
- ph_decoded = re.sub(r"\(.+?\)", "", ph_decoded) - - phonemes += ph_decoded.strip() - return phonemes.replace("_", separator) - - def _phonemize(self, text, separator=None): - return self.phonemize_espeak(text, separator, tie=False) - - @staticmethod - def supported_languages() -> Dict: - """Get a dictionary of supported languages. - - Returns: - Dict: Dictionary of language codes. - """ - if _DEF_ESPEAK_LIB is None: - return {} - args = ["--voices"] - langs = {} - count = 0 - for line in _espeak_exe(_DEF_ESPEAK_LIB, args, sync=True): - line = line.decode("utf8").strip() - if count > 0: - cols = line.split() - lang_code = cols[1] - lang_name = cols[3] - langs[lang_code] = lang_name - logging.debug("line: %s", repr(line)) - count += 1 - return langs - - def version(self) -> str: - """Get the version of the used backend. - - Returns: - str: Version of the used backend. - """ - args = ["--version"] - for line in _espeak_exe(self.backend, args, sync=True): - version = line.decode("utf8").strip().split()[2] - logging.debug("line: %s", repr(line)) - return version - - @classmethod - def is_available(cls): - """Return true if ESpeak is available else false""" - return is_tool("espeak") or is_tool("espeak-ng") - - -if __name__ == "__main__": - e = ESpeak(language="en-us") - print(e.supported_languages()) - print(e.version()) - print(e.language) - print(e.name()) - print(e.is_available()) - - e = ESpeak(language="en-us", keep_puncs=False) - print("`" + e.phonemize("hello how are you today?") + "`") - - e = ESpeak(language="en-us", keep_puncs=True) - print("`" + e.phonemize("hello how are you today?") + "`") diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/aux_tests/test_embedding_manager.py b/spaces/artificialguybr/video-dubbing/TTS/tests/aux_tests/test_embedding_manager.py deleted file mode 100644 index 7392150163dd92bfa65e43774db9b6684d807237..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/aux_tests/test_embedding_manager.py +++ /dev/null @@ -1,92 +0,0 @@ -import os -import unittest - -import numpy as np -import torch - -from tests import get_tests_input_path -from TTS.config import load_config -from TTS.encoder.utils.generic_utils import setup_encoder_model -from TTS.encoder.utils.io import save_checkpoint -from TTS.tts.utils.managers import EmbeddingManager -from TTS.utils.audio import AudioProcessor - -encoder_config_path = os.path.join(get_tests_input_path(), "test_speaker_encoder_config.json") -encoder_model_path = os.path.join(get_tests_input_path(), "checkpoint_0.pth") -sample_wav_path = os.path.join(get_tests_input_path(), "../data/ljspeech/wavs/LJ001-0001.wav") -sample_wav_path2 = os.path.join(get_tests_input_path(), "../data/ljspeech/wavs/LJ001-0002.wav") -embedding_file_path = os.path.join(get_tests_input_path(), "../data/dummy_speakers.json") -embeddings_file_path2 = os.path.join(get_tests_input_path(), "../data/dummy_speakers2.json") -embeddings_file_pth_path = os.path.join(get_tests_input_path(), "../data/dummy_speakers.pth") - - -class EmbeddingManagerTest(unittest.TestCase): - """Test emEeddingManager for loading embedding files and computing embeddings from waveforms""" - - @staticmethod - def test_speaker_embedding(): - # load config - config = load_config(encoder_config_path) - config.audio.resample = True - - # create a dummy speaker encoder - model = setup_encoder_model(config) - save_checkpoint(model, None, None, get_tests_input_path(), 0) - - # load audio processor and speaker encoder - manager = 
EmbeddingManager(encoder_model_path=encoder_model_path, encoder_config_path=encoder_config_path) - - # load a sample audio and compute embedding - ap = AudioProcessor(**config.audio) - waveform = ap.load_wav(sample_wav_path) - mel = ap.melspectrogram(waveform) - embedding = manager.compute_embeddings(mel) - assert embedding.shape[1] == 256 - - # compute embedding directly from an input file - embedding = manager.compute_embedding_from_clip(sample_wav_path) - embedding2 = manager.compute_embedding_from_clip(sample_wav_path) - embedding = torch.FloatTensor(embedding) - embedding2 = torch.FloatTensor(embedding2) - assert embedding.shape[0] == 256 - assert (embedding - embedding2).sum() == 0.0 - - # compute embedding from a list of wav files. - embedding3 = manager.compute_embedding_from_clip([sample_wav_path, sample_wav_path2]) - embedding3 = torch.FloatTensor(embedding3) - assert embedding3.shape[0] == 256 - assert (embedding - embedding3).sum() != 0.0 - - # remove dummy model - os.remove(encoder_model_path) - - def test_embedding_file_processing(self): # pylint: disable=no-self-use - manager = EmbeddingManager(embedding_file_path=embeddings_file_pth_path) - # test embedding querying - embedding = manager.get_embedding_by_clip(manager.clip_ids[0]) - assert len(embedding) == 256 - embeddings = manager.get_embeddings_by_name(manager.embedding_names[0]) - assert len(embeddings[0]) == 256 - embedding1 = manager.get_mean_embedding(manager.embedding_names[0], num_samples=2, randomize=True) - assert len(embedding1) == 256 - embedding2 = manager.get_mean_embedding(manager.embedding_names[0], num_samples=2, randomize=False) - assert len(embedding2) == 256 - assert np.sum(np.array(embedding1) - np.array(embedding2)) != 0 - - def test_embedding_file_loading(self): - # test loading a json file - manager = EmbeddingManager(embedding_file_path=embedding_file_path) - self.assertEqual(manager.num_embeddings, 384) - self.assertEqual(manager.embedding_dim, 256) - # test loading a pth file - manager = EmbeddingManager(embedding_file_path=embeddings_file_pth_path) - self.assertEqual(manager.num_embeddings, 384) - self.assertEqual(manager.embedding_dim, 256) - # test loading a pth files with duplicate embedding keys - with self.assertRaises(Exception) as context: - manager = EmbeddingManager(embedding_file_path=[embeddings_file_pth_path, embeddings_file_pth_path]) - self.assertTrue("Duplicate embedding names" in str(context.exception)) - # test loading embedding files with different embedding keys - manager = EmbeddingManager(embedding_file_path=[embeddings_file_pth_path, embeddings_file_path2]) - self.assertEqual(manager.embedding_dim, 256) - self.assertEqual(manager.num_embeddings, 384 * 2) diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests2/test_glow_tts.py b/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests2/test_glow_tts.py deleted file mode 100644 index 2a723f105f56e25fee096831719f78155180ee89..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests2/test_glow_tts.py +++ /dev/null @@ -1,378 +0,0 @@ -import copy -import os -import unittest - -import torch -from torch import optim -from trainer.logging.tensorboard_logger import TensorboardLogger - -from tests import get_tests_data_path, get_tests_input_path, get_tests_output_path -from TTS.tts.configs.glow_tts_config import GlowTTSConfig -from TTS.tts.layers.losses import GlowTTSLoss -from TTS.tts.models.glow_tts import GlowTTS -from TTS.tts.utils.speakers import SpeakerManager -from 
TTS.utils.audio import AudioProcessor - -# pylint: disable=unused-variable - -torch.manual_seed(1) -use_cuda = torch.cuda.is_available() -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - -c = GlowTTSConfig() - -ap = AudioProcessor(**c.audio) -WAV_FILE = os.path.join(get_tests_input_path(), "example_1.wav") -BATCH_SIZE = 3 - - -def count_parameters(model): - r"""Count number of trainable parameters in a network""" - return sum(p.numel() for p in model.parameters() if p.requires_grad) - - -class TestGlowTTS(unittest.TestCase): - @staticmethod - def _create_inputs(batch_size=8): - input_dummy = torch.randint(0, 24, (batch_size, 128)).long().to(device) - input_lengths = torch.randint(100, 129, (batch_size,)).long().to(device) - input_lengths[-1] = 128 - mel_spec = torch.rand(batch_size, 30, c.audio["num_mels"]).to(device) - mel_lengths = torch.randint(20, 30, (batch_size,)).long().to(device) - speaker_ids = torch.randint(0, 5, (batch_size,)).long().to(device) - return input_dummy, input_lengths, mel_spec, mel_lengths, speaker_ids - - @staticmethod - def _check_parameter_changes(model, model_ref): - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param != param_ref).any(), "param {} with shape {} not updated!! \n{}\n{}".format( - count, param.shape, param, param_ref - ) - count += 1 - - def test_init_multispeaker(self): - config = GlowTTSConfig(num_chars=32) - model = GlowTTS(config) - # speaker embedding with default speaker_embedding_dim - config.use_speaker_embedding = True - config.num_speakers = 5 - config.d_vector_dim = None - model.init_multispeaker(config) - self.assertEqual(model.c_in_channels, model.hidden_channels_enc) - # use external speaker embeddings with speaker_embedding_dim = 301 - config = GlowTTSConfig(num_chars=32) - config.use_d_vector_file = True - config.d_vector_dim = 301 - model = GlowTTS(config) - model.init_multispeaker(config) - self.assertEqual(model.c_in_channels, 301) - # use speaker embedddings by the provided speaker_manager - config = GlowTTSConfig(num_chars=32) - config.use_speaker_embedding = True - config.speakers_file = os.path.join(get_tests_data_path(), "ljspeech", "speakers.json") - speaker_manager = SpeakerManager.init_from_config(config) - model = GlowTTS(config) - model.speaker_manager = speaker_manager - model.init_multispeaker(config) - self.assertEqual(model.c_in_channels, model.hidden_channels_enc) - self.assertEqual(model.num_speakers, speaker_manager.num_speakers) - # use external speaker embeddings by the provided speaker_manager - config = GlowTTSConfig(num_chars=32) - config.use_d_vector_file = True - config.d_vector_dim = 256 - config.d_vector_file = os.path.join(get_tests_data_path(), "dummy_speakers.json") - speaker_manager = SpeakerManager.init_from_config(config) - model = GlowTTS(config) - model.speaker_manager = speaker_manager - model.init_multispeaker(config) - self.assertEqual(model.c_in_channels, speaker_manager.embedding_dim) - self.assertEqual(model.num_speakers, speaker_manager.num_speakers) - - def test_unlock_act_norm_layers(self): - config = GlowTTSConfig(num_chars=32) - model = GlowTTS(config).to(device) - model.unlock_act_norm_layers() - for f in model.decoder.flows: - if getattr(f, "set_ddi", False): - self.assertFalse(f.initialized) - - def test_lock_act_norm_layers(self): - config = GlowTTSConfig(num_chars=32) - model = GlowTTS(config).to(device) - model.lock_act_norm_layers() - for f in model.decoder.flows: - if getattr(f, "set_ddi", False): - 
self.assertTrue(f.initialized) - - def _test_forward(self, batch_size): - input_dummy, input_lengths, mel_spec, mel_lengths, speaker_ids = self._create_inputs(batch_size) - # create model - config = GlowTTSConfig(num_chars=32) - model = GlowTTS(config).to(device) - model.train() - print(" > Num parameters for GlowTTS model:%s" % (count_parameters(model))) - # inference encoder and decoder with MAS - y = model.forward(input_dummy, input_lengths, mel_spec, mel_lengths) - self.assertEqual(y["z"].shape, mel_spec.shape) - self.assertEqual(y["logdet"].shape, torch.Size([batch_size])) - self.assertEqual(y["y_mean"].shape, mel_spec.shape) - self.assertEqual(y["y_log_scale"].shape, mel_spec.shape) - self.assertEqual(y["alignments"].shape, mel_spec.shape[:2] + (input_dummy.shape[1],)) - self.assertEqual(y["durations_log"].shape, input_dummy.shape + (1,)) - self.assertEqual(y["total_durations_log"].shape, input_dummy.shape + (1,)) - - def test_forward(self): - self._test_forward(1) - self._test_forward(3) - - def _test_forward_with_d_vector(self, batch_size): - input_dummy, input_lengths, mel_spec, mel_lengths, speaker_ids = self._create_inputs(batch_size) - d_vector = torch.rand(batch_size, 256).to(device) - # create model - config = GlowTTSConfig( - num_chars=32, - use_d_vector_file=True, - d_vector_dim=256, - d_vector_file=os.path.join(get_tests_data_path(), "dummy_speakers.json"), - ) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - model.train() - print(" > Num parameters for GlowTTS model:%s" % (count_parameters(model))) - # inference encoder and decoder with MAS - y = model.forward(input_dummy, input_lengths, mel_spec, mel_lengths, {"d_vectors": d_vector}) - self.assertEqual(y["z"].shape, mel_spec.shape) - self.assertEqual(y["logdet"].shape, torch.Size([batch_size])) - self.assertEqual(y["y_mean"].shape, mel_spec.shape) - self.assertEqual(y["y_log_scale"].shape, mel_spec.shape) - self.assertEqual(y["alignments"].shape, mel_spec.shape[:2] + (input_dummy.shape[1],)) - self.assertEqual(y["durations_log"].shape, input_dummy.shape + (1,)) - self.assertEqual(y["total_durations_log"].shape, input_dummy.shape + (1,)) - - def test_forward_with_d_vector(self): - self._test_forward_with_d_vector(1) - self._test_forward_with_d_vector(3) - - def _test_forward_with_speaker_id(self, batch_size): - input_dummy, input_lengths, mel_spec, mel_lengths, speaker_ids = self._create_inputs(batch_size) - speaker_ids = torch.randint(0, 24, (batch_size,)).long().to(device) - # create model - config = GlowTTSConfig( - num_chars=32, - use_speaker_embedding=True, - num_speakers=24, - ) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - model.train() - print(" > Num parameters for GlowTTS model:%s" % (count_parameters(model))) - # inference encoder and decoder with MAS - y = model.forward(input_dummy, input_lengths, mel_spec, mel_lengths, {"speaker_ids": speaker_ids}) - self.assertEqual(y["z"].shape, mel_spec.shape) - self.assertEqual(y["logdet"].shape, torch.Size([batch_size])) - self.assertEqual(y["y_mean"].shape, mel_spec.shape) - self.assertEqual(y["y_log_scale"].shape, mel_spec.shape) - self.assertEqual(y["alignments"].shape, mel_spec.shape[:2] + (input_dummy.shape[1],)) - self.assertEqual(y["durations_log"].shape, input_dummy.shape + (1,)) - self.assertEqual(y["total_durations_log"].shape, input_dummy.shape + (1,)) - - def test_forward_with_speaker_id(self): - self._test_forward_with_speaker_id(1) - self._test_forward_with_speaker_id(3) - - def _assert_inference_outputs(self, 
outputs, input_dummy, mel_spec): - output_shape = outputs["model_outputs"].shape - self.assertEqual(outputs["model_outputs"].shape[::2], mel_spec.shape[::2]) - self.assertEqual(outputs["logdet"], None) - self.assertEqual(outputs["y_mean"].shape, output_shape) - self.assertEqual(outputs["y_log_scale"].shape, output_shape) - self.assertEqual(outputs["alignments"].shape, output_shape[:2] + (input_dummy.shape[1],)) - self.assertEqual(outputs["durations_log"].shape, input_dummy.shape + (1,)) - self.assertEqual(outputs["total_durations_log"].shape, input_dummy.shape + (1,)) - - def _test_inference(self, batch_size): - input_dummy, input_lengths, mel_spec, mel_lengths, speaker_ids = self._create_inputs(batch_size) - config = GlowTTSConfig(num_chars=32) - model = GlowTTS(config).to(device) - model.eval() - outputs = model.inference(input_dummy, {"x_lengths": input_lengths}) - self._assert_inference_outputs(outputs, input_dummy, mel_spec) - - def test_inference(self): - self._test_inference(1) - self._test_inference(3) - - def _test_inference_with_d_vector(self, batch_size): - input_dummy, input_lengths, mel_spec, mel_lengths, speaker_ids = self._create_inputs(batch_size) - d_vector = torch.rand(batch_size, 256).to(device) - config = GlowTTSConfig( - num_chars=32, - use_d_vector_file=True, - d_vector_dim=256, - d_vector_file=os.path.join(get_tests_data_path(), "dummy_speakers.json"), - ) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - model.eval() - outputs = model.inference(input_dummy, {"x_lengths": input_lengths, "d_vectors": d_vector}) - self._assert_inference_outputs(outputs, input_dummy, mel_spec) - - def test_inference_with_d_vector(self): - self._test_inference_with_d_vector(1) - self._test_inference_with_d_vector(3) - - def _test_inference_with_speaker_ids(self, batch_size): - input_dummy, input_lengths, mel_spec, mel_lengths, speaker_ids = self._create_inputs(batch_size) - speaker_ids = torch.randint(0, 24, (batch_size,)).long().to(device) - # create model - config = GlowTTSConfig( - num_chars=32, - use_speaker_embedding=True, - num_speakers=24, - ) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - outputs = model.inference(input_dummy, {"x_lengths": input_lengths, "speaker_ids": speaker_ids}) - self._assert_inference_outputs(outputs, input_dummy, mel_spec) - - def test_inference_with_speaker_ids(self): - self._test_inference_with_speaker_ids(1) - self._test_inference_with_speaker_ids(3) - - def _test_inference_with_MAS(self, batch_size): - input_dummy, input_lengths, mel_spec, mel_lengths, speaker_ids = self._create_inputs(batch_size) - # create model - config = GlowTTSConfig(num_chars=32) - model = GlowTTS(config).to(device) - model.eval() - # inference encoder and decoder with MAS - y = model.inference_with_MAS(input_dummy, input_lengths, mel_spec, mel_lengths) - y2 = model.decoder_inference(mel_spec, mel_lengths) - assert ( - y2["model_outputs"].shape == y["model_outputs"].shape - ), "Difference between the shapes of the glowTTS inference with MAS ({}) and the inference using only the decoder ({}) !!".format( - y["model_outputs"].shape, y2["model_outputs"].shape - ) - - def test_inference_with_MAS(self): - self._test_inference_with_MAS(1) - self._test_inference_with_MAS(3) - - def test_train_step(self): - batch_size = BATCH_SIZE - input_dummy, input_lengths, mel_spec, mel_lengths, speaker_ids = self._create_inputs(batch_size) - criterion = GlowTTSLoss() - # model to train - config = GlowTTSConfig(num_chars=32) - model = 
GlowTTS(config).to(device) - # reference model to compare model weights - model_ref = GlowTTS(config).to(device) - model.train() - print(" > Num parameters for GlowTTS model:%s" % (count_parameters(model))) - # pass the state to ref model - model_ref.load_state_dict(copy.deepcopy(model.state_dict())) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count += 1 - optimizer = optim.Adam(model.parameters(), lr=0.001) - for _ in range(5): - optimizer.zero_grad() - outputs = model.forward(input_dummy, input_lengths, mel_spec, mel_lengths, None) - loss_dict = criterion( - outputs["z"], - outputs["y_mean"], - outputs["y_log_scale"], - outputs["logdet"], - mel_lengths, - outputs["durations_log"], - outputs["total_durations_log"], - input_lengths, - ) - loss = loss_dict["loss"] - loss.backward() - optimizer.step() - # check parameter changes - self._check_parameter_changes(model, model_ref) - - def test_train_eval_log(self): - batch_size = BATCH_SIZE - input_dummy, input_lengths, mel_spec, mel_lengths, _ = self._create_inputs(batch_size) - batch = {} - batch["text_input"] = input_dummy - batch["text_lengths"] = input_lengths - batch["mel_lengths"] = mel_lengths - batch["mel_input"] = mel_spec - batch["d_vectors"] = None - batch["speaker_ids"] = None - config = GlowTTSConfig(num_chars=32) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - model.run_data_dep_init = False - model.train() - logger = TensorboardLogger( - log_dir=os.path.join(get_tests_output_path(), "dummy_glow_tts_logs"), model_name="glow_tts_test_train_log" - ) - criterion = model.get_criterion() - outputs, _ = model.train_step(batch, criterion) - model.train_log(batch, outputs, logger, None, 1) - model.eval_log(batch, outputs, logger, None, 1) - logger.finish() - - def test_test_run(self): - config = GlowTTSConfig(num_chars=32) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - model.run_data_dep_init = False - model.eval() - test_figures, test_audios = model.test_run(None) - self.assertTrue(test_figures is not None) - self.assertTrue(test_audios is not None) - - def test_load_checkpoint(self): - chkp_path = os.path.join(get_tests_output_path(), "dummy_glow_tts_checkpoint.pth") - config = GlowTTSConfig(num_chars=32) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - chkp = {} - chkp["model"] = model.state_dict() - torch.save(chkp, chkp_path) - model.load_checkpoint(config, chkp_path) - self.assertTrue(model.training) - model.load_checkpoint(config, chkp_path, eval=True) - self.assertFalse(model.training) - - def test_get_criterion(self): - config = GlowTTSConfig(num_chars=32) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - criterion = model.get_criterion() - self.assertTrue(criterion is not None) - - def test_init_from_config(self): - config = GlowTTSConfig(num_chars=32) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - - config = GlowTTSConfig(num_chars=32, num_speakers=2) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - self.assertTrue(model.num_speakers == 2) - self.assertTrue(not hasattr(model, "emb_g")) - - config = GlowTTSConfig(num_chars=32, num_speakers=2, use_speaker_embedding=True) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - self.assertTrue(model.num_speakers == 2) - self.assertTrue(hasattr(model, "emb_g")) - - config = GlowTTSConfig( - num_chars=32, - num_speakers=2, - 
use_speaker_embedding=True, - speakers_file=os.path.join(get_tests_data_path(), "ljspeech", "speakers.json"), - ) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - self.assertTrue(model.num_speakers == 10) - self.assertTrue(hasattr(model, "emb_g")) - - config = GlowTTSConfig( - num_chars=32, - use_d_vector_file=True, - d_vector_dim=256, - d_vector_file=os.path.join(get_tests_data_path(), "dummy_speakers.json"), - ) - model = GlowTTS.init_from_config(config, verbose=False).to(device) - self.assertTrue(model.num_speakers == 1) - self.assertTrue(not hasattr(model, "emb_g")) - self.assertTrue(model.c_in_channels == config.d_vector_dim) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/ImagePalette.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/ImagePalette.py deleted file mode 100644 index fe76c86f40ee08ad4c9f6e2ddcb189bbc7409b4f..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/ImagePalette.py +++ /dev/null @@ -1,268 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# image palette object -# -# History: -# 1996-03-11 fl Rewritten. -# 1997-01-03 fl Up and running. -# 1997-08-23 fl Added load hack -# 2001-04-16 fl Fixed randint shadow bug in random() -# -# Copyright (c) 1997-2001 by Secret Labs AB -# Copyright (c) 1996-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import array - -from . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile -from ._deprecate import deprecate - - -class ImagePalette: - """ - Color palette for palette mapped images - - :param mode: The mode to use for the palette. See: - :ref:`concept-modes`. Defaults to "RGB" - :param palette: An optional palette. If given, it must be a bytearray, - an array or a list of ints between 0-255. The list must consist of - all channels for one color followed by the next color (e.g. RGBRGBRGB). - Defaults to an empty palette. - """ - - def __init__(self, mode="RGB", palette=None, size=0): - self.mode = mode - self.rawmode = None # if set, palette contains raw data - self.palette = palette or bytearray() - self.dirty = None - if size != 0: - deprecate("The size parameter", 10, None) - if size != len(self.palette): - raise ValueError("wrong palette size") - - @property - def palette(self): - return self._palette - - @palette.setter - def palette(self, palette): - self._colors = None - self._palette = palette - - @property - def colors(self): - if self._colors is None: - mode_len = len(self.mode) - self._colors = {} - for i in range(0, len(self.palette), mode_len): - color = tuple(self.palette[i : i + mode_len]) - if color in self._colors: - continue - self._colors[color] = i // mode_len - return self._colors - - @colors.setter - def colors(self, colors): - self._colors = colors - - def copy(self): - new = ImagePalette() - - new.mode = self.mode - new.rawmode = self.rawmode - if self.palette is not None: - new.palette = self.palette[:] - new.dirty = self.dirty - - return new - - def getdata(self): - """ - Get palette contents in format suitable for the low-level - ``im.putpalette`` primitive. - - .. warning:: This method is experimental. - """ - if self.rawmode: - return self.rawmode, self.palette - return self.mode, self.tobytes() - - def tobytes(self): - """Convert palette to bytes. - - .. warning:: This method is experimental. 
- """ - if self.rawmode: - raise ValueError("palette contains raw palette data") - if isinstance(self.palette, bytes): - return self.palette - arr = array.array("B", self.palette) - return arr.tobytes() - - # Declare tostring as an alias for tobytes - tostring = tobytes - - def getcolor(self, color, image=None): - """Given an rgb tuple, allocate palette entry. - - .. warning:: This method is experimental. - """ - if self.rawmode: - raise ValueError("palette contains raw palette data") - if isinstance(color, tuple): - if self.mode == "RGB": - if len(color) == 4: - if color[3] != 255: - raise ValueError( - "cannot add non-opaque RGBA color to RGB palette" - ) - color = color[:3] - elif self.mode == "RGBA": - if len(color) == 3: - color += (255,) - try: - return self.colors[color] - except KeyError as e: - # allocate new color slot - if not isinstance(self.palette, bytearray): - self._palette = bytearray(self.palette) - index = len(self.palette) // 3 - special_colors = () - if image: - special_colors = ( - image.info.get("background"), - image.info.get("transparency"), - ) - while index in special_colors: - index += 1 - if index >= 256: - if image: - # Search for an unused index - for i, count in reversed(list(enumerate(image.histogram()))): - if count == 0 and i not in special_colors: - index = i - break - if index >= 256: - raise ValueError("cannot allocate more than 256 colors") from e - self.colors[color] = index - if index * 3 < len(self.palette): - self._palette = ( - self.palette[: index * 3] - + bytes(color) - + self.palette[index * 3 + 3 :] - ) - else: - self._palette += bytes(color) - self.dirty = 1 - return index - else: - raise ValueError(f"unknown color specifier: {repr(color)}") - - def save(self, fp): - """Save palette to text file. - - .. warning:: This method is experimental. 
- """ - if self.rawmode: - raise ValueError("palette contains raw palette data") - if isinstance(fp, str): - fp = open(fp, "w") - fp.write("# Palette\n") - fp.write(f"# Mode: {self.mode}\n") - for i in range(256): - fp.write(f"{i}") - for j in range(i * len(self.mode), (i + 1) * len(self.mode)): - try: - fp.write(f" {self.palette[j]}") - except IndexError: - fp.write(" 0") - fp.write("\n") - fp.close() - - -# -------------------------------------------------------------------- -# Internal - - -def raw(rawmode, data): - palette = ImagePalette() - palette.rawmode = rawmode - palette.palette = data - palette.dirty = 1 - return palette - - -# -------------------------------------------------------------------- -# Factories - - -def make_linear_lut(black, white): - lut = [] - if black == 0: - for i in range(256): - lut.append(white * i // 255) - else: - raise NotImplementedError # FIXME - return lut - - -def make_gamma_lut(exp): - lut = [] - for i in range(256): - lut.append(int(((i / 255.0) ** exp) * 255.0 + 0.5)) - return lut - - -def negative(mode="RGB"): - palette = list(range(256 * len(mode))) - palette.reverse() - return ImagePalette(mode, [i // len(mode) for i in palette]) - - -def random(mode="RGB"): - from random import randint - - palette = [] - for i in range(256 * len(mode)): - palette.append(randint(0, 255)) - return ImagePalette(mode, palette) - - -def sepia(white="#fff0c0"): - bands = [make_linear_lut(0, band) for band in ImageColor.getrgb(white)] - return ImagePalette("RGB", [bands[i % 3][i // 3] for i in range(256 * 3)]) - - -def wedge(mode="RGB"): - palette = list(range(256 * len(mode))) - return ImagePalette(mode, [i // len(mode) for i in palette]) - - -def load(filename): - - # FIXME: supports GIMP gradients only - - with open(filename, "rb") as fp: - - for paletteHandler in [ - GimpPaletteFile.GimpPaletteFile, - GimpGradientFile.GimpGradientFile, - PaletteFile.PaletteFile, - ]: - try: - fp.seek(0) - lut = paletteHandler(fp).getpalette() - if lut: - break - except (SyntaxError, ValueError): - # import traceback - # traceback.print_exc() - pass - else: - raise OSError("cannot load palette") - - return lut # data, rawmode diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/__init__.py deleted file mode 100644 index 7cbe00a10520331709441e5e77991bd2edca8c06..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/encoders/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import importlib -import os - -from fairseq import registry - - -build_tokenizer, register_tokenizer, TOKENIZER_REGISTRY, _ = registry.setup_registry( - "--tokenizer", - default=None, -) - - -build_bpe, register_bpe, BPE_REGISTRY, _ = registry.setup_registry( - "--bpe", - default=None, -) - - -# automatically import any Python files in the encoders/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - module = file[: file.find(".py")] - importlib.import_module("fairseq.data.encoders." 
+ module) diff --git a/spaces/ashishraics/NLP/README.md b/spaces/ashishraics/NLP/README.md deleted file mode 100644 index d7ee1516d55a58df389e0dad4c374920767c5894..0000000000000000000000000000000000000000 --- a/spaces/ashishraics/NLP/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Classification using NLI & MLM -emoji: 📈 -colorFrom: indigo -colorTo: green -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Harrison Mamin.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Harrison Mamin.html deleted file mode 100644 index 472443f4584f8a39bc8164d1c501dd3ba7dab653..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Harrison Mamin.html +++ /dev/null @@ -1,132 +0,0 @@ - - - - Harrison Mamin - - - - -
      -

      Harrison Mamin

      1. How did you hear about SM? Why Mentorship with SM?
- Spoke to Russell in 2022; now has the bandwidth to take up mentorship. Had a difficult time breaking into DS and now wants to help others do the same. Wants to give back to the DS community.

      2. Take me through your DS Career. 
      - Started working as a BA at a Green-tech startup. Got introduced to ML there and started self-study. Pursued a Master's in DS after. 
- Worked at an ed-tech startup after that. Built an ML model for safe web browsing for K-12 kids; also developed mental health tools for students.
- Currently working at Cash App under Block, building ML-powered tools. Worked on NLP for chatbots with the Support ML team; also responsible for cross-functional model hosting & building.

      3. Previous Mentorship Experience. 
- Helped a junior from an academia background with technical & programming skills and knowledge during a previous role. The person ended up becoming a Lead Data Scientist in their next job.

      4. Common Mistakes beginners make? What do they need most help with? How can you help with this?
      - Most beginners focus on algorithms that are worth learning but have fewer business applications. The focus should be on framing business problems around ML models. 
      Can share perspective and personal experience on how an actual job differs from school and personal projects, and how to sell yourself in interviews. 

      5. Questions about SM?
      - What do mentees need the most help with, and is this the same for most mentees?
      - Do mentees hop from mentor to mentor to use the free 2-week trial period?
      - Do mentor and mentee need to connect outside of video calls, for example over text or chat?
      - What is the mentee placement rate?
      - Next steps in the process?
      -
      - -
      - - - \ No newline at end of file diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/grounding_net_example.py b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/grounding_net_example.py deleted file mode 100644 index 7a09caf5e48bb11f789236a4c34bdbd9ee6cabee..0000000000000000000000000000000000000000 --- a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/diffusionmodules/grounding_net_example.py +++ /dev/null @@ -1,22 +0,0 @@ -""" -This is a high-level pseudo code for grounding net. - -This class needs to tokenize grounding input into gronding tokens which -will be used in GatedAttenion layers. - - -class PositionNet(nn.Module): - def __init__(self, **kwargs): - super().__init__() - - kwargs should be defined by model.grounding_tokenizer in config yaml file. - - def forward(self, **kwargs): - - kwargs should be the output of grounding_tokenizer_input network - - return grounding_tokens # with shape: Batch * Num_Of_Token* Token_Channel_Dimension - - - -""" \ No newline at end of file diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/ui/config/on_sd_start.bat b/spaces/awaawawawa/iurf7irfuyytruyyugb/ui/config/on_sd_start.bat deleted file mode 100644 index 8c50ee45732383f7214e64a82f7cd922b9d94dd6..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/ui/config/on_sd_start.bat +++ /dev/null @@ -1,335 +0,0 @@ -@echo off - -@copy sd-ui-files\scripts\on_env_start.bat scripts\ /Y - -@REM Caution, this file will make your eyes and brain bleed. It's such an unholy mess. -@REM Note to self: Please rewrite this in Python. For the sake of your own sanity. - -@copy "sd-ui-files\scripts\Developer Console.cmd" . /Y -if exist "Open Developer Console.cmd" del "Open Developer Console.cmd" - -@call python -c "import os; import shutil; frm = 'sd-ui-files\\ui\\hotfix\\9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'; dst = os.path.join(os.path.expanduser('~'), '.cache', 'huggingface', 'transformers', '9c24e6cd9f499d02c4f21a033736dabd365962dc80fe3aeb57a8f85ea45a20a3.26fead7ea4f0f843f6eb4055dfd25693f1a71f3c6871b184042d4b126244e142'); shutil.copyfile(frm, dst) if os.path.exists(dst) else print(''); print('Hotfixed broken JSON file from OpenAI');" - -@>nul grep -c "sd_git_cloned" scripts\install_status.txt -@if "%ERRORLEVEL%" EQU "0" ( - @echo "Stable Diffusion's git repository was already installed. Updating.." - - @cd stable-diffusion - - @call git reset --hard - @call git pull - @call git checkout f6cfebffa752ee11a7b07497b8529d5971de916c - - @call git apply ..\ui\sd_internal\ddim_callback.patch - @call git apply ..\ui\sd_internal\env_yaml.patch - - @cd .. -) else ( - @echo. & echo "Downloading Stable Diffusion.." & echo. - - @call git clone https://github.com/basujindal/stable-diffusion.git && ( - @echo sd_git_cloned >> scripts\install_status.txt - ) || ( - @echo "Error downloading Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. 
If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" - pause - @exit /b - ) - - @cd stable-diffusion - @call git checkout f6cfebffa752ee11a7b07497b8529d5971de916c - - @call git apply ..\ui\sd_internal\ddim_callback.patch - @call git apply ..\ui\sd_internal\env_yaml.patch - - @cd .. -) - -@cd stable-diffusion - -@>nul grep -c "conda_sd_env_created" ..\scripts\install_status.txt -@if "%ERRORLEVEL%" EQU "0" ( - @echo "Packages necessary for Stable Diffusion were already installed" - - @call conda activate .\env -) else ( - @echo. & echo "Downloading packages necessary for Stable Diffusion.." & echo. & echo "***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** .." & echo. - - @rmdir /s /q .\env - - @REM prevent conda from using packages from the user's home directory, to avoid conflicts - @set PYTHONNOUSERSITE=1 - - set TMP=%cd%\tmp - set TEMP=%cd%\tmp - - @call conda env create --prefix env -f environment.yaml || ( - @echo. & echo "Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - - @call conda activate .\env - - @call conda install -c conda-forge -y --prefix env antlr4-python3-runtime=4.8 || ( - @echo. & echo "Error installing antlr4-python3-runtime for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - - for /f "tokens=*" %%a in ('python -c "import torch; import ldm; import transformers; import numpy; import antlr4; print(42)"') do if "%%a" NEQ "42" ( - @echo. & echo "Dependency test failed! Error installing the packages necessary for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. 
- pause - exit /b - ) - - @echo conda_sd_env_created >> ..\scripts\install_status.txt -) - -set PATH=C:\Windows\System32;%PATH% - -@>nul grep -c "conda_sd_gfpgan_deps_installed" ..\scripts\install_status.txt -@if "%ERRORLEVEL%" EQU "0" ( - @echo "Packages necessary for GFPGAN (Face Correction) were already installed" -) else ( - @echo. & echo "Downloading packages necessary for GFPGAN (Face Correction).." & echo. - - @set PYTHONNOUSERSITE=1 - - set TMP=%cd%\tmp - set TEMP=%cd%\tmp - - @call pip install -e git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN || ( - @echo. & echo "Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - - @call pip install basicsr==1.4.2 || ( - @echo. & echo "Error installing the basicsr package necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - - for /f "tokens=*" %%a in ('python -c "from gfpgan import GFPGANer; print(42)"') do if "%%a" NEQ "42" ( - @echo. & echo "Dependency test failed! Error installing the packages necessary for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - - @echo conda_sd_gfpgan_deps_installed >> ..\scripts\install_status.txt -) - -@>nul grep -c "conda_sd_esrgan_deps_installed" ..\scripts\install_status.txt -@if "%ERRORLEVEL%" EQU "0" ( - @echo "Packages necessary for ESRGAN (Resolution Upscaling) were already installed" -) else ( - @echo. & echo "Downloading packages necessary for ESRGAN (Resolution Upscaling).." & echo. - - @set PYTHONNOUSERSITE=1 - - set TMP=%cd%\tmp - set TEMP=%cd%\tmp - - @call pip install -e git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan || ( - @echo. & echo "Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. 
If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - - for /f "tokens=*" %%a in ('python -c "from basicsr.archs.rrdbnet_arch import RRDBNet; from realesrgan import RealESRGANer; print(42)"') do if "%%a" NEQ "42" ( - @echo. & echo "Dependency test failed! Error installing the packages necessary for ESRGAN (Resolution Upscaling). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - - @echo conda_sd_esrgan_deps_installed >> ..\scripts\install_status.txt -) - -@>nul grep -c "conda_sd_ui_deps_installed" ..\scripts\install_status.txt -@if "%ERRORLEVEL%" EQU "0" ( - echo "Packages necessary for Stable Diffusion UI were already installed" -) else ( - @echo. & echo "Downloading packages necessary for Stable Diffusion UI.." & echo. - - @set PYTHONNOUSERSITE=1 - - set TMP=%cd%\tmp - set TEMP=%cd%\tmp - - @call conda install -c conda-forge -y --prefix env uvicorn fastapi || ( - echo "Error installing the packages necessary for Stable Diffusion UI. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" - pause - exit /b - ) -) - -call WHERE uvicorn > .tmp -@>nul grep -c "uvicorn" .tmp -@if "%ERRORLEVEL%" NEQ "0" ( - @echo. & echo "UI packages not found! Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b -) - -@>nul grep -c "conda_sd_ui_deps_installed" ..\scripts\install_status.txt -@if "%ERRORLEVEL%" NEQ "0" ( - @echo conda_sd_ui_deps_installed >> ..\scripts\install_status.txt -) - - - -if not exist "..\models\stable-diffusion" mkdir "..\models\stable-diffusion" -echo. > "..\models\stable-diffusion\Put your custom ckpt files here.txt" - -@if exist "sd-v1-4.ckpt" ( - for %%I in ("sd-v1-4.ckpt") do if "%%~zI" EQU "4265380512" ( - echo "Data files (weights) necessary for Stable Diffusion were already downloaded. Using the HuggingFace 4 GB Model." 
- ) else ( - for %%J in ("sd-v1-4.ckpt") do if "%%~zJ" EQU "7703807346" ( - echo "Data files (weights) necessary for Stable Diffusion were already downloaded. Using the HuggingFace 7 GB Model." - ) else ( - for %%K in ("sd-v1-4.ckpt") do if "%%~zK" EQU "7703810927" ( - echo "Data files (weights) necessary for Stable Diffusion were already downloaded. Using the Waifu Model." - ) else ( - echo. & echo "The model file present at %cd%\sd-v1-4.ckpt is invalid. It is only %%~zK bytes in size. Re-downloading.." & echo. - del "sd-v1-4.ckpt" - ) - ) - ) -) - -@if not exist "sd-v1-4.ckpt" ( - @echo. & echo "Downloading data files (weights) for Stable Diffusion.." & echo. - - @call curl -L -k https://me.cmdr2.org/stable-diffusion-ui/sd-v1-4.ckpt > sd-v1-4.ckpt - - @if exist "sd-v1-4.ckpt" ( - for %%I in ("sd-v1-4.ckpt") do if "%%~zI" NEQ "4265380512" ( - echo. & echo "Error: The downloaded model file was invalid! Bytes downloaded: %%~zI" & echo. - echo. & echo "Error downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - ) else ( - @echo. & echo "Error downloading the data files (weights) for Stable Diffusion. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) -) - - - -@if exist "GFPGANv1.3.pth" ( - for %%I in ("GFPGANv1.3.pth") do if "%%~zI" EQU "348632874" ( - echo "Data files (weights) necessary for GFPGAN (Face Correction) were already downloaded" - ) else ( - echo. & echo "The GFPGAN model file present at %cd%\GFPGANv1.3.pth is invalid. It is only %%~zI bytes in size. Re-downloading.." & echo. - del "GFPGANv1.3.pth" - ) -) - -@if not exist "GFPGANv1.3.pth" ( - @echo. & echo "Downloading data files (weights) for GFPGAN (Face Correction).." & echo. - - @call curl -L -k https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth > GFPGANv1.3.pth - - @if exist "GFPGANv1.3.pth" ( - for %%I in ("GFPGANv1.3.pth") do if "%%~zI" NEQ "348632874" ( - echo. & echo "Error: The downloaded GFPGAN model file was invalid! Bytes downloaded: %%~zI" & echo. - echo. & echo "Error downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. 
If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - ) else ( - @echo. & echo "Error downloading the data files (weights) for GFPGAN (Face Correction). Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) -) - - - -@if exist "RealESRGAN_x4plus.pth" ( - for %%I in ("RealESRGAN_x4plus.pth") do if "%%~zI" EQU "67040989" ( - echo "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus were already downloaded" - ) else ( - echo. & echo "The GFPGAN model file present at %cd%\RealESRGAN_x4plus.pth is invalid. It is only %%~zI bytes in size. Re-downloading.." & echo. - del "RealESRGAN_x4plus.pth" - ) -) - -@if not exist "RealESRGAN_x4plus.pth" ( - @echo. & echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus.." & echo. - - @call curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth > RealESRGAN_x4plus.pth - - @if exist "RealESRGAN_x4plus.pth" ( - for %%I in ("RealESRGAN_x4plus.pth") do if "%%~zI" NEQ "67040989" ( - echo. & echo "Error: The downloaded ESRGAN x4plus model file was invalid! Bytes downloaded: %%~zI" & echo. - echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - ) else ( - @echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) -) - - - -@if exist "RealESRGAN_x4plus_anime_6B.pth" ( - for %%I in ("RealESRGAN_x4plus_anime_6B.pth") do if "%%~zI" EQU "17938799" ( - echo "Data files (weights) necessary for ESRGAN (Resolution Upscaling) x4plus_anime were already downloaded" - ) else ( - echo. & echo "The GFPGAN model file present at %cd%\RealESRGAN_x4plus_anime_6B.pth is invalid. It is only %%~zI bytes in size. Re-downloading.." & echo. 
- del "RealESRGAN_x4plus_anime_6B.pth" - ) -) - -@if not exist "RealESRGAN_x4plus_anime_6B.pth" ( - @echo. & echo "Downloading data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime.." & echo. - - @call curl -L -k https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth > RealESRGAN_x4plus_anime_6B.pth - - @if exist "RealESRGAN_x4plus_anime_6B.pth" ( - for %%I in ("RealESRGAN_x4plus_anime_6B.pth") do if "%%~zI" NEQ "17938799" ( - echo. & echo "Error: The downloaded ESRGAN x4plus_anime model file was invalid! Bytes downloaded: %%~zI" & echo. - echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) - ) else ( - @echo. & echo "Error downloading the data files (weights) for ESRGAN (Resolution Upscaling) x4plus_anime. Sorry about that, please try to:" & echo " 1. Run this installer again." & echo " 2. If that doesn't fix it, please try the common troubleshooting steps at https://github.com/cmdr2/stable-diffusion-ui/blob/main/Troubleshooting.md" & echo " 3. If those steps don't help, please copy *all* the error messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB" & echo " 4. If that doesn't solve the problem, please file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues" & echo "Thanks!" & echo. - pause - exit /b - ) -) - - - -@>nul grep -c "sd_install_complete" ..\scripts\install_status.txt -@if "%ERRORLEVEL%" NEQ "0" ( - @echo sd_weights_downloaded >> ..\scripts\install_status.txt - @echo sd_install_complete >> ..\scripts\install_status.txt -) - -@echo. & echo "Stable Diffusion is ready!" & echo. - -@set SD_DIR=%cd% - -@cd env\lib\site-packages -@set PYTHONPATH=%SD_DIR%;%cd% -@cd ..\..\.. -@echo PYTHONPATH=%PYTHONPATH% - -@cd .. -@set SD_UI_PATH=%cd%\ui -@cd stable-diffusion - -@call python --version - -@uvicorn server:app --app-dir "%SD_UI_PATH%" --port 9000 --host 0.0.0.0 - -@pause diff --git a/spaces/awacke1/AI-Atari-Live-Streamlit/README.md b/spaces/awacke1/AI-Atari-Live-Streamlit/README.md deleted file mode 100644 index 6793a5e8113e24ab9b35718a32814c9ebd62fbe6..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AI-Atari-Live-Streamlit/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🕹️AI Atari 2600 RL Agent -emoji: 🕹️ -colorFrom: blue -colorTo: indigo -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- -This demonstration shows a Pytorch trained PPO Agent that can play Atari 2600 games and shows Reinforcement Learning. An AI Agent is a program that can make predictions, take actions in the environment and performs observation to change weights. 
Full list of all Atari 2600 ROMs is here: https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/benchmark.md -Config ref: https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Assessment-By-Organs/app.py b/spaces/awacke1/Assessment-By-Organs/app.py deleted file mode 100644 index 1893c48b9ee430f4fa28d0c7b3a4a3c65e9df6b1..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Assessment-By-Organs/app.py +++ /dev/null @@ -1,351 +0,0 @@ -import streamlit as st -import random - -# Define the dictionary lists for each organ -HEART_LIST = [ - {"name": "Heartbeat", "description": "The rhythmic contraction and expansion of the heart", "emoji": "❤️"}, - {"name": "Blood vessels", "description": "The network of tubes that carry blood throughout the body", "emoji": "🧭"}, - {"name": "Pulse", "description": "The rhythmic expansion and contraction of an artery as blood flows through it", "emoji": "🔊"} -] - -LUNG_LIST = [ - {"name": "Breathing", "description": "The act of inhaling and exhaling air", "emoji": "🌬️"}, - {"name": "Respiration", "description": "The process by which oxygen is taken in and carbon dioxide is expelled", "emoji": "💨"}, - {"name": "Coughing", "description": "A reflex action that helps to clear the airways", "emoji": "🤧"} -] - -BRAIN_LIST = [ - {"name": "Memory", "description": "The ability to store and retrieve information", "emoji": "🧠"}, - {"name": "Concentration", "description": "The ability to focus on a task or idea", "emoji": "👀"}, - {"name": "Imagination", "description": "The ability to create mental images or ideas", "emoji": "🌈"} -] - -# Define a function to roll a 100 sided dice 10 times and return the rolls and the total score -def roll_dice(): - rolls = [random.randint(1, 100) for _ in range(10)] - total_score = sum(rolls) - return rolls, total_score - -# Define the Streamlit app -def app(): - st.sidebar.title("Choose an organ list") - organ_list = st.sidebar.selectbox("Select an organ", ["Heart", "Lung", "Brain"]) - - st.write(f"## {organ_list} List") - - if organ_list == "Heart": - selected_list = HEART_LIST - elif organ_list == "Lung": - selected_list = LUNG_LIST - else: - selected_list = BRAIN_LIST - - st.write("### Choose an item from the list") - selected_item = st.selectbox("Select an item", [item["name"] for item in selected_list]) - selected_item_dict = [item for item in selected_list if item["name"] == selected_item][0] - - st.write(f"### {selected_item_dict['name']} {selected_item_dict['emoji']}") - st.write(selected_item_dict['description']) - - st.write("### Roll the dice") - rolls, total_score = roll_dice() - st.write("Rolls:", rolls) - st.write("Total score:", total_score) - - st.write("### Aggregated score") - scores = [roll_dice()[1] for _ in range(10)] - st.write("Scores:", scores) - st.write("Total score:", sum(scores)) - -if __name__ == '__main__': - app() - - -st.markdown(""" - -``` -Create a streamlit python program that renders a plotly graph object treemap for random data. Use the program here as a template. 
import streamlit as st -import random - -# Define the dictionary lists for each organ -HEART_LIST = [ - {"name": "Heartbeat", "description": "The rhythmic contraction and expansion of the heart", "emoji": "❤️"}, - {"name": "Blood vessels", "description": "The network of tubes that carry blood throughout the body", "emoji": "🧭"}, - {"name": "Pulse", "description": "The rhythmic expansion and contraction of an artery as blood flows through it", "emoji": "🔊"} -] - -LUNG_LIST = [ - {"name": "Breathing", "description": "The act of inhaling and exhaling air", "emoji": "🌬️"}, - {"name": "Respiration", "description": "The process by which oxygen is taken in and carbon dioxide is expelled", "emoji": "💨"}, - {"name": "Coughing", "description": "A reflex action that helps to clear the airways", "emoji": "🤧"} -] - -BRAIN_LIST = [ - {"name": "Memory", "description": "The ability to store and retrieve information", "emoji": "🧠"}, - {"name": "Concentration", "description": "The ability to focus on a task or idea", "emoji": "👀"}, - {"name": "Imagination", "description": "The ability to create mental images or ideas", "emoji": "🌈"} -] - -# Define a function to roll a 100 sided dice 10 times and return the rolls and the total score -def roll_dice(): - rolls = [random.randint(1, 100) for _ in range(10)] - total_score = sum(rolls) - return rolls, total_score - -# Define the Streamlit app -def app(): - st.sidebar.title("Choose an organ list") - organ_list = st.sidebar.selectbox("Select an organ", ["Heart", "Lung", "Brain"]) - - st.write(f"## {organ_list} List") - - if organ_list == "Heart": - selected_list = HEART_LIST - elif organ_list == "Lung": - selected_list = LUNG_LIST - else: - selected_list = BRAIN_LIST - - st.write("### Choose an item from the list") - selected_item = st.selectbox("Select an item", [item["name"] for item in selected_list]) - selected_item_dict = [item for item in selected_list if item["name"] == selected_item][0] - - st.write(f"### {selected_item_dict['name']} {selected_item_dict['emoji']}") - st.write(selected_item_dict['description']) - - st.write("### Roll the dice") - rolls, total_score = roll_dice() - st.write("Rolls:", rolls) - st.write("Total score:", total_score) - - st.write("### Aggregated score") - scores = [roll_dice()[1] for _ in range(10)] - st.write("Scores:", scores) - st.write("Total score:", sum(scores)) - -if __name__ == '__main__': - app() -Sure, here's a Streamlit app that generates a Plotly Treemap chart with random data. You can replace the data with your own, or modify the code to generate different types of charts. - -python -Copy code -import streamlit as st -import plotly.express as px -import random - -# Define a function to generate random data -def generate_data(): - labels = [f"Category {i}" for i in range(1, 11)] - values = [random.randint(1, 100) for _ in range(10)] - return {"labels": labels, "values": values} - -# Define the Streamlit app -def app(): - st.title("Random Treemap Chart") - st.write("This app generates a random Treemap chart using Plotly.") - - data = generate_data() - fig = px.treemap(data, path=["labels"], values="values") - st.plotly_chart(fig) - -if __name__ == '__main__': - app() -The generate_data() function creates a dictionary with a list of category labels and a list of corresponding values. The px.treemap() function from Plotly is used to create a Treemap chart using the data. - -When the app is run, it displays a title and a brief description, and then renders the chart using st.plotly_chart(). 
You can run the app with streamlit run where is the name of the file containing the code. - - - - -Aaron Wacker -Add a dataset using pandas dataframe to be able to add new lists of information to the chart to include a game mechanic name as well as a short description. Action Queue -Action Retrieval -Campaign / Battle Card Driven -Card Play Conflict Resolution -Communication Limits -Cooperative Game -Critical Hits and Failures -Deck Construction -Grid Movement -Hand Management -Hexagon Grid -Legacy Game -Line of Sight -Modular Board -Once-Per-Game Abilities -Role Playing -Scenario / Mission / Campaign Game -Simultaneous Action Selection -Solo / Solitaire Game -Storytelling -Variable Player Powers -Sure, here's an updated version of the previous code that includes a Pandas DataFrame to store the data for the Treemap chart. The DataFrame includes columns for the category labels, corresponding values, game mechanics, and short descriptions. - -python -Copy code -import streamlit as st -import plotly.express as px -import pandas as pd - -# Define the data for the Treemap chart -data = pd.DataFrame({ - "category": ["Category 1", "Category 2", "Category 3", "Category 4", "Category 5", - "Category 6", "Category 7", "Category 8", "Category 9", "Category 10"], - "value": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100], - "mechanic": ["Action Queue", "Action Retrieval", "Campaign / Battle Card Driven", - "Card Play Conflict Resolution", "Communication Limits", - "Cooperative Game", "Critical Hits and Failures", "Deck Construction", - "Grid Movement", "Hand Management"], - "description": ["Description 1", "Description 2", "Description 3", "Description 4", - "Description 5", "Description 6", "Description 7", "Description 8", - "Description 9", "Description 10"] -}) - -# Define the Streamlit app -def app(): - st.title("Game Mechanics Treemap Chart") - st.write("This app displays a Treemap chart of game mechanics.") - - fig = px.treemap(data, path=["category", "mechanic"], values="value", - color="value", hover_data=["description"]) - st.plotly_chart(fig) - -if __name__ == '__main__': - app() -The DataFrame is created using a dictionary with the column names as keys and the corresponding data as values. The Treemap chart is generated using px.treemap() with the DataFrame as the input, and the path, values, color, and hover_data parameters set to the appropriate columns. - -You can modify the DataFrame to add or remove categories, change the values, or update the game mechanics and descriptions. - - - - -Aaron Wacker -For each one of the list items, create two random numeric values named "HealthPoints" and Coins. Add it to the datasets. Provide a download link of the dataset after the chart is drawn. -Sure, here's an updated version of the code that generates two random numeric values for each game mechanic in the list, and adds them to the DataFrame. The values are stored in the "HealthPoints" and "Coins" columns. - -After the chart is drawn, a download link for the updated DataFrame is displayed. The link downloads a CSV file that can be opened in a spreadsheet program like Microsoft Excel or Google Sheets. 
- -python -Copy code -import streamlit as st -import plotly.express as px -import pandas as pd -import numpy as np - -# Define the list of game mechanics -game_mechanics = ["Action Queue", "Action Retrieval", "Campaign / Battle Card Driven", - "Card Play Conflict Resolution", "Communication Limits", - "Cooperative Game", "Critical Hits and Failures", "Deck Construction", - "Grid Movement", "Hand Management", "Hexagon Grid", "Legacy Game", - "Line of Sight", "Modular Board", "Once-Per-Game Abilities", "Role Playing", - "Scenario / Mission / Campaign Game", "Simultaneous Action Selection", - "Solo / Solitaire Game", "Storytelling", "Variable Player Powers"] - -# Define a function to generate random values for each game mechanic -def generate_values(): - health_points = np.random.randint(50, 100, size=len(game_mechanics)) - coins = np.random.randint(10, 50, size=len(game_mechanics)) - return {"HealthPoints": health_points, "Coins": coins} - -# Define the Streamlit app -def app(): - st.title("Game Mechanics Treemap Chart") - st.write("This app displays a Treemap chart of game mechanics.") - - # Generate the data for the chart - values = generate_values() - data = pd.DataFrame({ - "category": ["Category 1"] * len(game_mechanics), - "mechanic": game_mechanics, - "value": list(values.values()), - "HealthPoints": values["HealthPoints"], - "Coins": values["Coins"], - "description": ["Description"] * len(game_mechanics) - }) - data["value"] = data["value"].apply(lambda x: sum(x)) - - # Draw the chart - fig = px.treemap(data, path=["category", "mechanic"], values="value", - color="HealthPoints", hover_data=["Coins"]) - st.plotly_chart(fig) - - # Display a download link for the data - csv = data.to_csv(index=False) - b64 = base64.b64encode(csv.encode()).decode() - href = f'Download CSV' - st.markdown(href, unsafe_allow_html=True) - -if __name__ == '__main__': - app() -The generate_values() function uses NumPy to generate random values for "HealthPoints" and "Coins" for each game mechanic. The values are then added to the DataFrame in separate columns. - -The data["value"] column is updated to reflect the sum of "HealthPoints" and "Coins" for each game mechanic. - -A download link for the updated DataFrame is displayed using the to_csv() method to convert the DataFrame to a CSV file, and the base64 module to encode the file for download. - - - - -Aaron Wacker -In line 28 the pandas dataset has an error. Change this with refactoring to make sure all arrays are the same length: ValueError: All arrays must be of the same length -You're right, the values in the "HealthPoints" and "Coins" columns must be of the same length as the number of game mechanics in the list. To fix this error, we can modify the generate_values() function to accept the number of game mechanics as a parameter, and return random values with that length. 
- -Here's the updated code: - -python -Copy code -import streamlit as st -import plotly.express as px -import pandas as pd -import numpy as np - -# Define the list of game mechanics -game_mechanics = ["Action Queue", "Action Retrieval", "Campaign / Battle Card Driven", - "Card Play Conflict Resolution", "Communication Limits", - "Cooperative Game", "Critical Hits and Failures", "Deck Construction", - "Grid Movement", "Hand Management", "Hexagon Grid", "Legacy Game", - "Line of Sight", "Modular Board", "Once-Per-Game Abilities", "Role Playing", - "Scenario / Mission / Campaign Game", "Simultaneous Action Selection", - "Solo / Solitaire Game", "Storytelling", "Variable Player Powers"] - -# Define a function to generate random values for each game mechanic -def generate_values(n): - health_points = np.random.randint(50, 100, size=n) - coins = np.random.randint(10, 50, size=n) - return {"HealthPoints": health_points, "Coins": coins} - -# Define the Streamlit app -def app(): - st.title("Game Mechanics Treemap Chart") - st.write("This app displays a Treemap chart of game mechanics.") - - # Generate the data for the chart - n = len(game_mechanics) - values = generate_values(n) - data = pd.DataFrame({ - "category": ["Category 1"] * n, - "mechanic": game_mechanics, - "value": list(values.values()), - "HealthPoints": values["HealthPoints"], - "Coins": values["Coins"], - "description": ["Description"] * n - }) - data["value"] = data["value"].apply(lambda x: sum(x)) - - # Draw the chart - fig = px.treemap(data, path=["category", "mechanic"], values="value", - color="HealthPoints", hover_data=["Coins"]) - st.plotly_chart(fig) - - # Display a download link for the data - csv = data.to_csv(index=False) - b64 = base64.b64encode(csv.encode()).decode() - href = f'Download CSV' - st.markdown(href, unsafe_allow_html=True) - -if __name__ == '__main__': - app() -The generate_values() function now takes a parameter n, which is the length of the game mechanics list. The function generates random values for "HealthPoints" and "Coins" with length n. 
-``` - -""") \ No newline at end of file diff --git a/spaces/awacke1/ChemistryMoleculeModeler/README.md b/spaces/awacke1/ChemistryMoleculeModeler/README.md deleted file mode 100644 index 0b0d8095ed770c373425aebdccff5d746c5d8c6a..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ChemistryMoleculeModeler/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🧬3D Molecule Visualization Modeler⚛️ -emoji: 3DVis🧬 -colorFrom: green -colorTo: purple -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/awacke1/GeographyandPopulationDensityUnitedStates/README.md b/spaces/awacke1/GeographyandPopulationDensityUnitedStates/README.md deleted file mode 100644 index 10345568bd5fd4e534f1610c5df398491ef14391..0000000000000000000000000000000000000000 --- a/spaces/awacke1/GeographyandPopulationDensityUnitedStates/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: GeographyandPopulationDensityUnitedStates -emoji: 📊 -colorFrom: red -colorTo: purple -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/utils/SceneUtils.d.ts b/spaces/banana-projects/web3d/node_modules/three/examples/jsm/utils/SceneUtils.d.ts deleted file mode 100644 index b80cd2d841c91cae7dcaf0dd70ab859ec857e972..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/utils/SceneUtils.d.ts +++ /dev/null @@ -1,7 +0,0 @@ -import { Geometry, Material, Object3D, Scene } from '../../../src/Three'; - -export namespace SceneUtils { - export function createMultiMaterialObject(geometry: Geometry, materials: Material[]): Object3D; - export function detach(child: Object3D, parent: Object3D, scene: Scene): void; - export function attach(child: Object3D, scene: Scene, parent: Object3D): void; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/objects/ImmediateRenderObject.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/extras/objects/ImmediateRenderObject.d.ts deleted file mode 100644 index f1c6cd8c1c77d0d976c443cbce84d13c66adae39..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/extras/objects/ImmediateRenderObject.d.ts +++ /dev/null @@ -1,18 +0,0 @@ -import { Object3D } from './../../core/Object3D'; -import { Material } from './../../materials/Material'; - -/** - * @deprecated Use {@link WireframeGeometry THREE.WireframeGeometry} instead. 
- */ -// export class WireframeHelper extends LineSegments { -// constructor(object: Object3D, hex?: number); -// } - -// Extras / Objects ///////////////////////////////////////////////////////////////////// - -export class ImmediateRenderObject extends Object3D { - constructor(material: Material); - - material: Material; - render(renderCallback: Function): void; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/fog_pars_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/fog_pars_fragment.glsl.js deleted file mode 100644 index 808c7c55cc222a34f73af4271e114baa08bd4283..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/fog_pars_fragment.glsl.js +++ /dev/null @@ -1,19 +0,0 @@ -export default /* glsl */` -#ifdef USE_FOG - - uniform vec3 fogColor; - varying float fogDepth; - - #ifdef FOG_EXP2 - - uniform float fogDensity; - - #else - - uniform float fogNear; - uniform float fogFar; - - #endif - -#endif -`; diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/moco_loss.py b/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/moco_loss.py deleted file mode 100644 index 8fb13fbd426202cff9014c876c85b0d5c4ec6a9d..0000000000000000000000000000000000000000 --- a/spaces/bankholdup/stylegan_petbreeder/e4e/criteria/moco_loss.py +++ /dev/null @@ -1,71 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from configs.paths_config import model_paths - - -class MocoLoss(nn.Module): - - def __init__(self, opts): - super(MocoLoss, self).__init__() - print("Loading MOCO model from path: {}".format(model_paths["moco"])) - self.model = self.__load_model() - self.model.eval() - for param in self.model.parameters(): - param.requires_grad = False - - @staticmethod - def __load_model(): - import torchvision.models as models - model = models.__dict__["resnet50"]() - # freeze all layers but the last fc - for name, param in model.named_parameters(): - if name not in ['fc.weight', 'fc.bias']: - param.requires_grad = False - checkpoint = torch.load(model_paths['moco'], map_location="cpu") - state_dict = checkpoint['state_dict'] - # rename moco pre-trained keys - for k in list(state_dict.keys()): - # retain only encoder_q up to before the embedding layer - if k.startswith('module.encoder_q') and not k.startswith('module.encoder_q.fc'): - # remove prefix - state_dict[k[len("module.encoder_q."):]] = state_dict[k] - # delete renamed or unused k - del state_dict[k] - msg = model.load_state_dict(state_dict, strict=False) - assert set(msg.missing_keys) == {"fc.weight", "fc.bias"} - # remove output layer - model = nn.Sequential(*list(model.children())[:-1]).cuda() - return model - - def extract_feats(self, x): - x = F.interpolate(x, size=224) - x_feats = self.model(x) - x_feats = nn.functional.normalize(x_feats, dim=1) - x_feats = x_feats.squeeze() - return x_feats - - def forward(self, y_hat, y, x): - n_samples = x.shape[0] - x_feats = self.extract_feats(x) - y_feats = self.extract_feats(y) - y_hat_feats = self.extract_feats(y_hat) - y_feats = y_feats.detach() - loss = 0 - sim_improvement = 0 - sim_logs = [] - count = 0 - for i in range(n_samples): - diff_target = y_hat_feats[i].dot(y_feats[i]) - diff_input = y_hat_feats[i].dot(x_feats[i]) - diff_views = y_feats[i].dot(x_feats[i]) - sim_logs.append({'diff_target': float(diff_target), - 'diff_input': float(diff_input), - 'diff_views': float(diff_views)}) - loss += 1 - 
diff_target - sim_diff = float(diff_target) - float(diff_views) - sim_improvement += sim_diff - count += 1 - - return loss / count, sim_improvement / count, sim_logs diff --git a/spaces/bebetterfeng/CarperAI-stable-vicuna-13b-delta/app.py b/spaces/bebetterfeng/CarperAI-stable-vicuna-13b-delta/app.py deleted file mode 100644 index 6ad7dfa9562e17b33892102e308343b0fc9845c3..0000000000000000000000000000000000000000 --- a/spaces/bebetterfeng/CarperAI-stable-vicuna-13b-delta/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/CarperAI/stable-vicuna-13b-delta").launch() \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/utils/matlab_functions.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/utils/matlab_functions.py deleted file mode 100644 index f9f1a83bc8beee468dd7c9ca734966e926fd9fde..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/utils/matlab_functions.py +++ /dev/null @@ -1,359 +0,0 @@ -import math -import numpy as np -import torch - - -def cubic(x): - """cubic function used for calculate_weights_indices.""" - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5 * absx3 - 2.5 * absx2 + 1) * ( - (absx <= 1).type_as(absx)) + (-0.5 * absx3 + 2.5 * absx2 - 4 * absx + 2) * (((absx > 1) * - (absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - """Calculate weights and indices, used for imresize function. - - Args: - in_length (int): Input length. - out_length (int): Output length. - scale (float): Scale factor. - kernel_width (int): Kernel width. - antialisaing (bool): Whether to apply anti-aliasing when downsampling. - """ - - if (scale < 1) and antialiasing: - # Use a modified kernel (larger kernel width) to simultaneously - # interpolate and antialias - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5 + scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - p = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, p) + torch.linspace(0, p - 1, p).view(1, p).expand( - out_length, p) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, p) - indices - - # apply cubic kernel - if (scale < 1) and antialiasing: - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, p) - - # If a column in weights is all zero, get rid of it. only consider the - # first and last column. 
- weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, p - 2) - weights = weights.narrow(1, 1, p - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, p - 2) - weights = weights.narrow(1, 0, p - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -@torch.no_grad() -def imresize(img, scale, antialiasing=True): - """imresize function same as MATLAB. - - It now only supports bicubic. - The same scale applies for both height and width. - - Args: - img (Tensor | Numpy array): - Tensor: Input image with shape (c, h, w), [0, 1] range. - Numpy: Input image with shape (h, w, c), [0, 1] range. - scale (float): Scale factor. The same scale applies for both height - and width. - antialisaing (bool): Whether to apply anti-aliasing when downsampling. - Default: True. - - Returns: - Tensor: Output image with shape (c, h, w), [0, 1] range, w/o round. - """ - squeeze_flag = False - if type(img).__module__ == np.__name__: # numpy type - numpy_type = True - if img.ndim == 2: - img = img[:, :, None] - squeeze_flag = True - img = torch.from_numpy(img.transpose(2, 0, 1)).float() - else: - numpy_type = False - if img.ndim == 2: - img = img.unsqueeze(0) - squeeze_flag = True - - in_c, in_h, in_w = img.size() - out_h, out_w = math.ceil(in_h * scale), math.ceil(in_w * scale) - kernel_width = 4 - kernel = 'cubic' - - # get weights and indices - weights_h, indices_h, sym_len_hs, sym_len_he = calculate_weights_indices(in_h, out_h, scale, kernel, kernel_width, - antialiasing) - weights_w, indices_w, sym_len_ws, sym_len_we = calculate_weights_indices(in_w, out_w, scale, kernel, kernel_width, - antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_c, in_h + sym_len_hs + sym_len_he, in_w) - img_aug.narrow(1, sym_len_hs, in_h).copy_(img) - - sym_patch = img[:, :sym_len_hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_he:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_hs + in_h, sym_len_he).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_c, out_h, in_w) - kernel_width = weights_h.size(1) - for i in range(out_h): - idx = int(indices_h[i][0]) - for j in range(in_c): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_h[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_c, out_h, in_w + sym_len_ws + sym_len_we) - out_1_aug.narrow(2, sym_len_ws, in_w).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_we:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_ws + in_w, sym_len_we).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_c, out_h, out_w) - kernel_width = weights_w.size(1) - for i in range(out_w): - idx = int(indices_w[i][0]) - 
for j in range(in_c): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_w[i]) - - if squeeze_flag: - out_2 = out_2.squeeze(0) - if numpy_type: - out_2 = out_2.numpy() - if not squeeze_flag: - out_2 = out_2.transpose(1, 2, 0) - - return out_2 - - -def rgb2ycbcr(img, y_only=False): - """Convert a RGB image to YCbCr image. - - This function produces the same results as Matlab's `rgb2ycbcr` function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0 - else: - out_img = np.matmul( - img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], [24.966, 112.0, -18.214]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def bgr2ycbcr(img, y_only=False): - """Convert a BGR image to YCbCr image. - - The bgr version of rgb2ycbcr. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0 - else: - out_img = np.matmul( - img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], [65.481, -37.797, 112.0]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2rgb(img): - """Convert a YCbCr image to RGB image. - - This function produces the same results as Matlab's ycbcr2rgb function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted RGB image. The output image has the same type - and range as input image. 
- """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] # noqa: E126 - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2bgr(img): - """Convert a YCbCr image to BGR image. - - The bgr version of ycbcr2rgb. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted BGR image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0.00791071, -0.00153632, 0], - [0, -0.00318811, 0.00625893]]) * 255.0 + [-276.836, 135.576, -222.921] # noqa: E126 - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def _convert_input_type_range(img): - """Convert the type and range of the input image. - - It converts the input image to np.float32 type and range of [0, 1]. - It is mainly used for pre-processing the input image in colorspace - conversion functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - (ndarray): The converted image with type of np.float32 and range of - [0, 1]. - """ - img_type = img.dtype - img = img.astype(np.float32) - if img_type == np.float32: - pass - elif img_type == np.uint8: - img /= 255. - else: - raise TypeError(f'The img type should be np.float32 or np.uint8, but got {img_type}') - return img - - -def _convert_output_type_range(img, dst_type): - """Convert the type and range of the image according to dst_type. - - It converts the image to desired type and range. If `dst_type` is np.uint8, - images will be converted to np.uint8 type with range [0, 255]. If - `dst_type` is np.float32, it converts the image to np.float32 type with - range [0, 1]. - It is mainly used for post-processing images in colorspace conversion - functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The image to be converted with np.float32 type and - range [0, 255]. - dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it - converts the image to np.uint8 type with range [0, 255]. If - dst_type is np.float32, it converts the image to np.float32 type - with range [0, 1]. - - Returns: - (ndarray): The converted image with desired type and range. - """ - if dst_type not in (np.uint8, np.float32): - raise TypeError(f'The dst_type should be np.float32 or np.uint8, but got {dst_type}') - if dst_type == np.uint8: - img = img.round() - else: - img /= 255. 
- return img.astype(dst_type) diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/img2img.py b/spaces/bigjoker/stable-diffusion-webui/modules/img2img.py deleted file mode 100644 index 8ddf224fa2b13a32cb51603a55482e0f0783ec72..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/img2img.py +++ /dev/null @@ -1,184 +0,0 @@ -import math -import os -import sys -import traceback - -import numpy as np -from PIL import Image, ImageOps, ImageFilter, ImageEnhance, ImageChops - -from modules import devices, sd_samplers -from modules.generation_parameters_copypaste import create_override_settings_dict -from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images -from modules.shared import opts, state -import modules.shared as shared -import modules.processing as processing -from modules.ui import plaintext_to_html -import modules.images as images -import modules.scripts - - -def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args): - processing.fix_seed(p) - - images = shared.listfiles(input_dir) - - is_inpaint_batch = False - if inpaint_mask_dir: - inpaint_masks = shared.listfiles(inpaint_mask_dir) - is_inpaint_batch = len(inpaint_masks) > 0 - if is_inpaint_batch: - print(f"\nInpaint batch is enabled. {len(inpaint_masks)} masks found.") - - print(f"Will process {len(images)} images, creating {p.n_iter * p.batch_size} new images for each.") - - save_normally = output_dir == '' - - p.do_not_save_grid = True - p.do_not_save_samples = not save_normally - - state.job_count = len(images) * p.n_iter - - for i, image in enumerate(images): - state.job = f"{i+1} out of {len(images)}" - if state.skipped: - state.skipped = False - - if state.interrupted: - break - - img = Image.open(image) - # Use the EXIF orientation of photos taken by smartphones. 
- img = ImageOps.exif_transpose(img) - p.init_images = [img] * p.batch_size - - if is_inpaint_batch: - # try to find corresponding mask for an image using simple filename matching - mask_image_path = os.path.join(inpaint_mask_dir, os.path.basename(image)) - # if not found use first one ("same mask for all images" use-case) - if not mask_image_path in inpaint_masks: - mask_image_path = inpaint_masks[0] - mask_image = Image.open(mask_image_path) - p.image_mask = mask_image - - proc = modules.scripts.scripts_img2img.run(p, *args) - if proc is None: - proc = process_images(p) - - for n, processed_image in enumerate(proc.images): - filename = os.path.basename(image) - - if n > 0: - left, right = os.path.splitext(filename) - filename = f"{left}-{n}{right}" - - if not save_normally: - os.makedirs(output_dir, exist_ok=True) - if processed_image.mode == 'RGBA': - processed_image = processed_image.convert("RGB") - processed_image.save(os.path.join(output_dir, filename)) - - -def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_index: int, mask_blur: int, mask_alpha: float, inpainting_fill: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, height: int, width: int, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, *args): - override_settings = create_override_settings_dict(override_settings_texts) - - is_batch = mode == 5 - - if mode == 0: # img2img - image = init_img.convert("RGB") - mask = None - elif mode == 1: # img2img sketch - image = sketch.convert("RGB") - mask = None - elif mode == 2: # inpaint - image, mask = init_img_with_mask["image"], init_img_with_mask["mask"] - alpha_mask = ImageOps.invert(image.split()[-1]).convert('L').point(lambda x: 255 if x > 0 else 0, mode='1') - mask = ImageChops.lighter(alpha_mask, mask.convert('L')).convert('L') - image = image.convert("RGB") - elif mode == 3: # inpaint sketch - image = inpaint_color_sketch - orig = inpaint_color_sketch_orig or inpaint_color_sketch - pred = np.any(np.array(image) != np.array(orig), axis=-1) - mask = Image.fromarray(pred.astype(np.uint8) * 255, "L") - mask = ImageEnhance.Brightness(mask).enhance(1 - mask_alpha / 100) - blur = ImageFilter.GaussianBlur(mask_blur) - image = Image.composite(image.filter(blur), orig, mask.filter(blur)) - image = image.convert("RGB") - elif mode == 4: # inpaint upload mask - image = init_img_inpaint - mask = init_mask_inpaint - else: - image = None - mask = None - - # Use the EXIF orientation of photos taken by smartphones. - if image is not None: - image = ImageOps.exif_transpose(image) - - assert 0. 
<= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]' - - p = StableDiffusionProcessingImg2Img( - sd_model=shared.sd_model, - outpath_samples=opts.outdir_samples or opts.outdir_img2img_samples, - outpath_grids=opts.outdir_grids or opts.outdir_img2img_grids, - prompt=prompt, - negative_prompt=negative_prompt, - styles=prompt_styles, - seed=seed, - subseed=subseed, - subseed_strength=subseed_strength, - seed_resize_from_h=seed_resize_from_h, - seed_resize_from_w=seed_resize_from_w, - seed_enable_extras=seed_enable_extras, - sampler_name=sd_samplers.samplers_for_img2img[sampler_index].name, - batch_size=batch_size, - n_iter=n_iter, - steps=steps, - cfg_scale=cfg_scale, - width=width, - height=height, - restore_faces=restore_faces, - tiling=tiling, - init_images=[image], - mask=mask, - mask_blur=mask_blur, - inpainting_fill=inpainting_fill, - resize_mode=resize_mode, - denoising_strength=denoising_strength, - image_cfg_scale=image_cfg_scale, - inpaint_full_res=inpaint_full_res, - inpaint_full_res_padding=inpaint_full_res_padding, - inpainting_mask_invert=inpainting_mask_invert, - override_settings=override_settings, - ) - - p.scripts = modules.scripts.scripts_txt2img - p.script_args = args - - if shared.cmd_opts.enable_console_prompts: - print(f"\nimg2img: {prompt}", file=shared.progress_print_out) - - p.extra_generation_params["Mask blur"] = mask_blur - - if is_batch: - assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled" - - process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args) - - processed = Processed(p, [], p.seed, "") - else: - processed = modules.scripts.scripts_img2img.run(p, *args) - if processed is None: - processed = process_images(p) - - p.close() - - shared.total_tqdm.clear() - - generation_info_js = processed.js() - if opts.samples_log_stdout: - print(generation_info_js) - - if opts.do_not_show_images: - processed.images = [] - - return processed.images, generation_info_js, plaintext_to_html(processed.info), plaintext_to_html(processed.comments) diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/postprocessing.py b/spaces/bigjoker/stable-diffusion-webui/modules/postprocessing.py deleted file mode 100644 index 21e32af9866abfc02288f9f04a5195f1700de1b0..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/postprocessing.py +++ /dev/null @@ -1,103 +0,0 @@ -import os - -from PIL import Image - -from modules import shared, images, devices, scripts, scripts_postprocessing, ui_common, generation_parameters_copypaste -from modules.shared import opts - - -def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir, show_extras_results, *args, save_output: bool = True): - devices.torch_gc() - - shared.state.begin() - shared.state.job = 'extras' - - image_data = [] - image_names = [] - outputs = [] - - if extras_mode == 1: - for img in image_folder: - image = Image.open(img) - image_data.append(image) - image_names.append(os.path.splitext(img.orig_name)[0]) - elif extras_mode == 2: - assert not shared.cmd_opts.hide_ui_dir_config, '--hide-ui-dir-config option must be disabled' - assert input_dir, 'input directory not selected' - - image_list = shared.listfiles(input_dir) - for filename in image_list: - try: - image = Image.open(filename) - except Exception: - continue - image_data.append(image) - image_names.append(filename) - else: - assert image, 'image not selected' - - image_data.append(image) - 
image_names.append(None) - - if extras_mode == 2 and output_dir != '': - outpath = output_dir - else: - outpath = opts.outdir_samples or opts.outdir_extras_samples - - infotext = '' - - for image, name in zip(image_data, image_names): - shared.state.textinfo = name - - existing_pnginfo = image.info or {} - - pp = scripts_postprocessing.PostprocessedImage(image.convert("RGB")) - - scripts.scripts_postproc.run(pp, args) - - if opts.use_original_name_batch and name is not None: - basename = os.path.splitext(os.path.basename(name))[0] - else: - basename = '' - - infotext = ", ".join([k if k == v else f'{k}: {generation_parameters_copypaste.quote(v)}' for k, v in pp.info.items() if v is not None]) - - if opts.enable_pnginfo: - pp.image.info = existing_pnginfo - pp.image.info["postprocessing"] = infotext - - if save_output: - images.save_image(pp.image, path=outpath, basename=basename, seed=None, prompt=None, extension=opts.samples_format, info=infotext, short_filename=True, no_prompt=True, grid=False, pnginfo_section_name="extras", existing_info=existing_pnginfo, forced_filename=None) - - if extras_mode != 2 or show_extras_results: - outputs.append(pp.image) - - devices.torch_gc() - - return outputs, ui_common.plaintext_to_html(infotext), '' - - -def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_dir, show_extras_results, gfpgan_visibility, codeformer_visibility, codeformer_weight, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, upscale_first: bool, save_output: bool = True): - """old handler for API""" - - args = scripts.scripts_postproc.create_args_for_run({ - "Upscale": { - "upscale_mode": resize_mode, - "upscale_by": upscaling_resize, - "upscale_to_width": upscaling_resize_w, - "upscale_to_height": upscaling_resize_h, - "upscale_crop": upscaling_crop, - "upscaler_1_name": extras_upscaler_1, - "upscaler_2_name": extras_upscaler_2, - "upscaler_2_visibility": extras_upscaler_2_visibility, - }, - "GFPGAN": { - "gfpgan_visibility": gfpgan_visibility, - }, - "CodeFormer": { - "codeformer_visibility": codeformer_visibility, - "codeformer_weight": codeformer_weight, - }, - }) - - return run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir, show_extras_results, *args, save_output=save_output) diff --git a/spaces/bioriAsaeru/text-to-voice/CRACK WinX HD Video Converter Deluxe 7 12 0 286 Build 2930 Extra Quality.md b/spaces/bioriAsaeru/text-to-voice/CRACK WinX HD Video Converter Deluxe 7 12 0 286 Build 2930 Extra Quality.md deleted file mode 100644 index 23ee0f5359a8a95470f14a5ebc142a1dc48bc222..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/CRACK WinX HD Video Converter Deluxe 7 12 0 286 Build 2930 Extra Quality.md +++ /dev/null @@ -1,6 +0,0 @@ -
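The color-space helpers removed above (rgb2ycbcr, bgr2ycbcr, ycbcr2rgb, ycbcr2bgr and the _convert_*_type_range utilities) implement the Matlab-compatible ITU-R BT.601 conversion, which the docstrings distinguish from OpenCV's JPEG-style YCrCb conversion. A minimal usage sketch, assuming those functions are importable exactly as defined in the deleted module; the test image below is arbitrary data, not anything taken from the repository:

import numpy as np

# a flat mid-gray uint8 RGB image; any HxWx3 array in [0, 255] would do
rgb = np.full((8, 8, 3), 128, dtype=np.uint8)

ycbcr = rgb2ycbcr(rgb)            # full Y/Cb/Cr image; output keeps the input's dtype/range convention
y = rgb2ycbcr(rgb, y_only=True)   # luma only, as typically used when computing PSNR/SSIM on the Y channel
back = ycbcr2rgb(ycbcr)           # approximate inverse; expect small rounding differences

assert back.dtype == rgb.dtype    # per the docstrings, output type follows the input type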

      CRACK WinX HD Video Converter Deluxe 7 12 0 286 Build 2930


      DOWNLOAD ––– https://urloso.com/2uyRJs



      -
      -File Name, Size. Readme.url, 1.36 KB. Torrent downloaded from Kickass.to.txt, 84 B. WinX HD Video Converter Deluxe 7 12 0 286 Build 2930.exe, 9.28 MB ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/bioriAsaeru/text-to-voice/Communications Electronics By Frenzel Pdf Free 2021 Download.md b/spaces/bioriAsaeru/text-to-voice/Communications Electronics By Frenzel Pdf Free 2021 Download.md deleted file mode 100644 index 9091345dc605d3ba2a766e389058e7da86c8d8a1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Communications Electronics By Frenzel Pdf Free 2021 Download.md +++ /dev/null @@ -1,7 +0,0 @@ - -

      Don't miss any details because it is a free book. More than ten years of the university thesis in science and technology in a sub-class, the German literary discipline book reviews. The right to store the approach is not a specific kind of content of the state. However, it must be other that only starts, for example, by a precondition. The chapter is the case of the new economic problems and then a quantum physicist uses a subtle method to determine his psychology. Fortunately, the essay thesis has a little chapter on literature and mathematics. But for the reader it is more convenient to write a literal conversion. In this case, you will be able to visit our blog, also here to read the chapter.

      -

      Communications Electronics By Frenzel Pdf Free Download


      Download Ziphttps://urloso.com/2uyP7u



      -

The second edition is credited to Philipp Lawrenz, a professor of electronics and communication engineering at the University of Kaiserslautern. The second edition of "Principles of Electronic Communication Systems" includes over 100 new equations, more examples and problems, and new chapters and illustrations, and is the result of almost five years of research and writing. Before it appeared, Professor Lawrenz received numerous requests to offer a second edition; this edition emphasizes the relationship between students and practicing engineers. For updates and further details, visit the publisher's website: www. All Content copyright 2011 by Prof. Philipp Lawrenz.

      -

Communications Electronics is one of those titles that covers everything you would want to know about the field. It is a valuable textbook for readers at every level, from students to experienced professionals, and it even addresses some of the more specialized technologies. It is a text you could feel good recommending to a friend or colleague.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Jus Accardo Tremble Epub Download.md b/spaces/bioriAsaeru/text-to-voice/Jus Accardo Tremble Epub Download.md deleted file mode 100644 index b45ad60bec883c4c868d9674b46c60cac8f9bb7a..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Jus Accardo Tremble Epub Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      jus accardo tremble epub download


      DOWNLOADhttps://urloso.com/2uyPXz



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/bradarrML/stablediffusion-infinity/css/w2ui.min.css b/spaces/bradarrML/stablediffusion-infinity/css/w2ui.min.css deleted file mode 100644 index 1e9a927d8c4521c622ab3de6cd6747daf306d378..0000000000000000000000000000000000000000 --- a/spaces/bradarrML/stablediffusion-infinity/css/w2ui.min.css +++ /dev/null @@ -1,2 +0,0 @@ -/* w2ui 2.0.x (nightly) (10/10/2022, 1:43:34 PM) (c) http://w2ui.com, vitmalina@gmail.com */ -@font-face{font-family:w2ui-font;src:url("data:application/x-font-woff;charset=utf-8;base64,d09GRgABAAAAAAnsAAsAAAAADpwAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAABHU1VCAAABCAAAADsAAABUIIslek9TLzIAAAFEAAAAQQAAAFZdKW6PY21hcAAAAYgAAACdAAACJimbHahnbHlmAAACKAAABYQAAAd0bnTEjmhlYWQAAAesAAAAMQAAADYiTbc3aGhlYQAAB+AAAAAYAAAAJA3eCBFobXR4AAAH+AAAABAAAAA8cA4AAGxvY2EAAAgIAAAAIAAAACALUg1CbWF4cAAACCgAAAAfAAAAIAEfAGBuYW1lAAAISAAAAS4AAAIibo8QqHBvc3QAAAl4AAAAdAAAAJs0xq68eJxjYGRgYOBiMGCwY2BycfMJYeDLSSzJY5BiYGGAAJA8MpsxJzM9kYEDxgPKsYBpDiBmg4gCACY7BUgAeJxjYGSvYJzAwMrAwCrCIsXAwHAJQjNdYPBkbAbSDKzMDFhBQJprCoMDgyODH+sdILedbRWDGZBmBMkBAGbXCH0AAAB4nL3RWQ6DMAwE0AkJISzhJD1Cf+i+cg6+er7ejo6juUGrWnqWcBYRG0ANwNOGAuBecLBYWHWl7tGVesC27AmlPq8r81Sys5PMFfcG3hjRIKHluR4DMkYuRnwfrvKhjk1qu37I4w8u/HMMJb/1ZQ+YxDq6k4r2YpM5iPX4KDa1k1hnz9LQRRJdpaWbcJq4S08Psb97SqZZxg9PnRB8AAAAeJyFVFtMFFcYPv/MzmypZmHYnb0IO8vMylJghdmdvbBZLmLBSkE0QXAbRV9AwMSUWKHBJpJYo9HSiBaaErEP0gdtYxOV2ESgadMHqW2qMX3woQhp0trYWBPS1JbdPfafWSpVm/Rkzjnf/Ldz/tshDMHBf8sdIDnERgj4vLLCi4LNrsnBSFgIsbIg8xMvxZKHYlu3xkyHY1v3JmtMX3INJS3x5PX4tm1xUyye7DG9jmZAt2Vu4V4jHCFZjDkL+PHUzfSjK8w0+wuXSH0Ii5+m600eXYwzZL1cA8kiVuIm60hEP53PBt7uyAKr2QIOsBaVARfxlSPdzEJhEW7I9oA9EoWIDyxQBtUgAXy/pk7sEfk1zBH69cyo5LTADuFlSw2zDsqVCSUAVmiDM6ek2pz0H9mihCCbeTFHTN/JWV8penCyi6h8yFVnT99nPoH+Ost6Kc0zv73ncXpQm4VyOTXFnGl9NdsmpXjUbkQr7F9SbcouGfqe9YQ14vgu14TITFahL5rgfTL75+eZvIUFJn9+nmta+oqrysx/Ysbv5V4hXvyx2zBodhsGQSnC6CnoeSiahUsNRIIYFlz4hCDQOFzJzW0R1FyaoAncWgQBLtNmQWjJVQU4D9eQZvoxF/k7aacQQHIujNEuXScgwBicy9DoNrgkCJlc8A+5ueW7e0kl3kUWZYc+NdGLcS7yFWBKzHyOQwKHvSBaDdFIDihFOidk/AUNjs2QYt5gC1IL7zCW9OKpBw/YwtjNY0P01tCxm7F4XMegDh29FYuncwcP9F5UAwH1Yu8BugLZ8+/sGuzcxYw+UXhaOfXoWY0MRDdMhDy+YVb41SSbuEiQNKInoUgcgnY32PhSUHxhkMOYk7AsaKzyvE+GR2zGsRyDVGCwcwxR9qzfE/T4jQX8tLejA/wdsxOjJ+9W1tRU3j05ChtHhjN4eGTiyEDfZEDTApN9A/DCCjapyxb8nnS7boD2mpqqq1CbXhsZnquqqamaGx6BjWi1qnqp7Tl9+qeBdV8xbyP8x8RB1qKvFXrWQmWgWEB8uot9hUEJbBZQyiBUDYCuixleFWhZEHWAg++kieD2PduDNGHNz7eaZnFNH3f5XS6/VupylTJbku9P59uY07a8mfQ+ervtDoy0cXPRXVH8JHHplihJIqeK0p78soryvLzyirL8pb2mG2N2SbKPJe/AW+2fwdvtJPPefMSdwttb8b5RKAIWY22NghfYQgiDmU1cYLa0pi/f9mzecPgCLKUpn2abYInyXUx9K9Pcmr5E/Rs2e6C7SyemJtO4Ue78ci+NcX3Eh5ZtZt7Me2VhOeuCrPiqsBQ0WcjUgyjIuPAf8Pxqp0IrVKXNq9KYoqoKXFe9bYoKs4rKfG61iquctMKgz6rKdqRf96qql8ZU5cmbZ+YGSSH+KPqZZm8WyFg41RCUGJuF8YrBf5WWqf/cyRM72u2L0ACb6EPLpljD9Hczp5ubT890HT3YdzWoacGrfQe5Q40Ng4fP0uPw5uYTFQ06W5caQmb/AKwe6EdB/WiM5+Mk/zv3Aykm1YQUYquaI9FIOGQclqlvm1HAWjAaieLjig9mNePD1vXpnRvRG9eu9y3enP+Cns2rcNUqa/2l453dP/V0jJf6EXb07O/avbMRnE4oW1NXr1nsqe7ORGsgFAq0Jr5BgHXZmliAMqeTadq5u2t/d2dGEW0YcK1S66rIo+P5dotWXwf7tGW9FQNG6pbfoUGs5zyikSjZQlowquGQD1PIu/UnCesWk4d1K4e5sCbCf7Pk/+OxE8XuX93F9B64DMA2Jc9MMadK3PfdxalJtqkYQQm46D368zQtfVY2NUnvTXGyw11c7HZMO9wlJbjhoDQDlxkrlCmnTnFO4SDkby6j4OB4nGNgZGBgAOJJO18HxPPbfGXgZr0DFGG4O23bVwT9/xQHI9sqIJeDgQkkCgCOwA3cAAAAeJxjYGRgYL3DAAQcjFASTCMBfgAdRwEFeJxjYGBg4GAkDwMADvIAfwAAAAAAJgA8AKQAwAECAWgBaAHeAjgCZgKgAuADRgO6eJxjYGRgYOBnCGFgYwABJiDmAkIGhv9gPgMAEp0BgAB4nG2PQU7CQBSG/0LBCIkhmpi4m7hwYyjQBQsOAHsW7AtMoaR0mukA4QKewDN4Bk/g0jN4FP+Wly6UaTr53vf+N+0A6OELHsrl4bbay9XADasLN0k9YZ/8JNxCF8/Cbfq+cAevGAt38YCIJ3h+edo9nHADd3gTbtK/C/vkD+EWHvEp3Kb/Fu5ggR/hLl688Sk8
JP3YZG6uN4c0snVdw0LbIjGZGgXD2s10pm3k9Fotz6o4bkLnYhVbs1dTdnWaGpVbs9MrF2ydyyeDQSw+WJk9TghxQMJbxzDIeLM5NDZ0KW9sr/T/mwUnLAq6slYYIcDwSm7GXFZlI1Yaa2aXOHMvcOQ3Q1rHtOJrObMnTWVW839SskJe9XY0K/oA22oqxwQDPvGffMAUT/oFXxtfYgAAeJxtxcsOwiAQBVBuC7Q+6S/idLSNlCEMTfTvNXHr2RzTmZ9g/gvo0MPCwWPAiAOOOOGMC64ImEx/k5ejhenpSZJUHb7tW1ZHVVTtXKU43kp72zXfxZWojX3hTGuyJe3qKyeJs1eOlZZRubU1P9SYD+7xIE8=") format("woff");font-weight:400;font-style:normal}[class*=" w2ui-icon-"]:before,[class^=w2ui-icon-]:before{font-family:w2ui-font;display:block;vertical-align:middle;line-height:1;font-weight:400;font-style:normal;speak:none;text-decoration:inherit;text-transform:none;text-rendering:optimizeLegibility;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.w2ui-icon-box:before{content:"A"}.w2ui-icon-check:before{content:"B"}.w2ui-icon-colors:before{content:"C"}.w2ui-icon-columns:before{content:"D"}.w2ui-icon-cross:before{content:"E"}.w2ui-icon-drop:before{content:"F"}.w2ui-icon-empty:before{content:"G"}.w2ui-icon-info:before{content:"H"}.w2ui-icon-paste:before{content:"I"}.w2ui-icon-pencil:before{content:"J"}.w2ui-icon-plus:before{content:"K"}.w2ui-icon-reload:before{content:"L"}.w2ui-icon-search:before{content:"M"}.w2ui-icon-settings:before{content:"N"}@font-face{font-family:OpenSans;src:url("data:application/x-font-ttf;charset=utf-8;base64,AAEAAAARAQAABAAQR0RFRgt8DNQAAXd0AAAALkdQT1MAGQAMAAF3pAAAABBHU1VC47MpuAABd7QAAALuT1MvMqE2nskAAUdAAAAAYGNtYXCuu/X7AAFHoAAAA4hjdnQgD00YpAABU+gAAACiZnBnbX5hthEAAUsoAAAHtGdhc3AAFQAjAAF3ZAAAABBnbHlmdDiZSwAAARwAAS+0aGVhZAK6Y3AAAThIAAAANmhoZWENzAlzAAFHHAAAACRobXR46DU83QABOIAAAA6abG9jYSkU3PEAATDwAAAHVm1heHAFQwIKAAEw0AAAACBuYW1lW5KAHwABVIwAAAPScG9zdH+4CW8AAVhgAAAfA3ByZXBDt5akAAFS3AAAAQkAAgDBAAAECgW2AAMABwAVtwQDBQIEAwcAAC8yLzMBLzMvMzEwEyERITchESHBA0n8t2gCef2HBbb6SmgE5gACAJj/4wGJBbYAAwAOACtAFAMJCQIEBA8QAQEMAgwGT1kMFgIDAD8/KxESADkYLxESATkRMzMRMzEwASMDMwM0MzIWFRQGIyImAUZpM8/heDo/QDk0RAGTBCP6tIhGQkBHPwAAAgCFA6YCsAW2AAMABwAfQA0AAwcEAwQICQYCBwMDAD8zzTIREgE5OREzETMxMAEDIwMhAyMDAT8oaSkCKyloKQW2/fACEP3wAhAAAAIAMwAABPYFtgAbAB8AmUBVCB8cFQQUCREMDAkSDw4LBAoTExQWHR4HBAYXBAEAGQQYBQUGFAYKIQMaFwMYChggIQgEDA0MTlkcAQ0fABAREE5ZGRURTw0BTxEBDRENEQUXEwMKBQAvMz8zEjk5Ly9dXREzMysRADMzETMzKxEAMzMREgE5OREXMxESOTkRMxESFzkREhc5ETMREhc5MjIRMxESFzkxMAEDIRUhAyMTIQMjEyE1IRMhNSETMwMhEzMDIRUBIRMhA9VCARv+zVSJVP7RUohQ/voBH0T+6wErUotSATFUhlQBCPzlAS9C/tEDg/6sgf5SAa7+UgGugQFUfwG0/kwBtP5Mf/6sAVQAAwCD/4kEDAYSACAAJgAtAGZANScRJR0XBAQqFA0FIQAAGQURCQUuLyUNBg1NWQMGJA4qDkxZHSorHBQcTVkXKhQGFAYUBRYFAC8vEjk5Ly8SOTIrEQAzETMrEQAzETMrEQAzERIBFzkRMxEzMzMzETMzMxEzMTABFAYHFSM1IiYnNRYWMxEmJjU0Njc1MxUWFwcmJxEeAgc0JicRNgEUFhcRBgYEDMy3gXDSQ1PZWc2ly6eBuKs0lZqdnEqqWYDZ/d1ab2NmAcGIsRfo3yMfnCUvAbhBrIiDqBK2tAVFgzsL/k4yX3tlSFks/nseAwdMXCkBgxBdAAAFAGj/7AYtBcsACQAVACEALQAxAEVAJAAQBQoWKBwiIi4oCjAQBjIzAw0fKw0rDSswMQYwGBklGQcTBwA/Mz8zPz8SOTkvLxEzETMREgEXOREzETMRMxEzMTATFBYzMhEQIyIGBRQGIyImNTQ2MzIWARQWMzI2NTQmIyIGBRQGIyImNTQ2MzIWAQEjAfJKU6SkU0oBypmUjJuVkpGcAaZKVFRQUFRUSgHLmZSOmZWSjp/+/vzVkwMrBAKqqgFUAVKoquTp7t/j5u7826upp62rpaWr4+nu3uPm6wMg+koFtgAAAwBx/+wF0wXNAAsAFQA1AFFAMBMWAB0GIyorListIw4mGR0WCTY3MwxJWTMTDyctDjAFLwMZJgMqKiAvEiAJSlkgBAA/KwAYPxI5Lxc5Ehc5PysREgEXOREzETMRMxEzMTABFBYXNjY1NCYjIgYTMjcBDgIVFBYlNDY3LgI1NDYzMhYVFAYHATY2NzMCBwEjJwYGIyImAZ5IV4FlZ1ZZb5vxn/5Lb1wsm/65i7RVPSTEr6K6iJ0BlzhDF6hEiQEr5bl29JbX7QSTRX1YS39TTWFg+52aAahEWWZBdYn6gshmX2JqOZaop5VrtV3+eT6nY/7ilP7dsmpc1AAAAQCFA6YBPwW2AAMAFLcAAwMEBQIDAwA/zRESATkRMzEwAQMjAwE/KGkpBbb98AIQAAABAFL+vAIhBbYADQAcQAwHAAoEAAQODwsnAwMAPz8REgE5OREzETMxMBMQEjczBgIVFBIXIyYCUpuSopCRlIugk5oCMQEJAc6uwf4y9PD+Nr2qAcYAAAEAPf68
AgwFtgANABxADAQKBwAKAA4PCgMEJwA/PxESATk5ETMRMzEwARACByM2EjU0AiczFhICDJuSoIuUkZCik5oCMf75/jqovAHL8PQBzsGv/jEAAQBWAn8EDgYUAA4AMEAbAwUEAQcNCgkLCQ8QBAoBDQIMDA0KBwQGCA4AAD/EMhc5ETMRMxEzERIBFzkxMAEDJRcFEwcDAycTJTcFAwKRKwGOGv6D+KywoLDy/ocdAYcrBhT+dW+2H/66XgFq/pZeAUYftm8BiwAAAQBoAOMEKQTDAAsAKEATAAQECQUFDA0DBwgHUFkADwgBCAAvXTMrEQAzERIBOREzMxEzMTABIRUhESMRITUhETMCjQGc/mSL/mYBmosDF4r+VgGqigGsAAEAP/74AW0A7gAIABG1BQAJCgUAAC/NERIBOTkxMCUXBgIHIzYSNwFeDxpiNX0bQQ3uF2T+93JoATJcAAEAVAHZAj8CcQADABG1AgAFBAABAC8zERIBOTkxMBM1IRVUAesB2ZiYAAEAmP/jAYkA8gALABhACwYAAAwNCQNPWQkWAD8rERIBOREzMTA3NDYzMhYVFAYjIiaYPTk6QUI5M0NqQ0VFQ0FGPwAAAQAUAAAC2wW2AAMAE7cCAAQFAwMCEgA/PxESATk5MTABASMBAtv936YCIQW2+koFtgACAGb/7AQtBc0ACwAXAChAFBIADAYABhkYCRVLWQkHAw9LWQMZAD8rABg/KxESATk5ETMRMzEwARACIyICERASMzISARASMzISERACIyICBC3v9uz27vTu9/zhlqSmlZWmpJYC3f6F/ooBfwFyAX4Bcv5+/pL+wf7dAScBOwE7ASX+3wABALwAAALLBbYACgAkQBAJAAEIAQsMBAkHBwEJBgEYAD8/EjkvEjkREgE5OREzMzEwISMRNDcGBgcnATMCy6IIFTTUWAGDjAQSgnQVLqxyASsAAQBkAAAEJQXLABkAK0AXGAEHEwATDgEEGhsQCktZEAcBGExZARgAPysAGD8rERIBFzkRMxEzMTAhITUBPgI1NCYjIgYHJzYzMhYVFAIHARUhBCX8PwGBsHA4jn5bo2RYyu7O6pzW/sAC8I8Bg7KYkFN1iTxPcajTsov+8ND+xwgAAAEAXv/sBBsFywAnAENAJBsAEwcHAAMWIg0GKCkDFxYXFktZFxcKJSUeS1klBwoRS1kKGQA/KwAYPysREgA5GC8rERIAORESARc5ETMRMzEwARQGBxUWFhUUBCEiJic1FhYzIBEQISM1MzI2NTQmIyIGByc2NjMyFgPunZCwqv7e/vV0wVtf12ABe/5ekJKryJN+YKptVFrrgtXsBF6Msh4IFrSS0eEjLJ4vMQEpAQqPl4ZrejRGcEdRwwAAAgArAAAEagW+AAoAEgA8QB4SBQkCAgsHAwADBQMTFAEFEgVMWQkPBxISAwcGAxgAPz8SOS8SOTMrEQAzERIBFzkRMzMzETMRMzEwASMRIxEhNQEzETMhETQ3IwYHAQRq2Z/9OQK2sNn+iAoIMCr+NwFQ/rABUJED3fwpAeaPtGA//XYAAQCF/+wEHQW2ABoAOkAfDwMZFAgUFwMEHBsAEUtZAAAGFRUYTFkVBgYMS1kGGQA/KwAYPysREgA5GC8rERIBFzkRMxEzMTABMgQVFAAjIic1FhYzMjY1ECEiBycTIRUhAzYCLecBCf7f/veCRtBlsMP+iV+fVjcC1/23JXMDfeXH4/7+T6AtM6adATIdNwKsmf5JFwAAAgB1/+wELwXLABYAJABEQCMaEQshIQAABhEDJiUMCw4dTVkLDg4UAxQXS1kUGQMITVkDBwA/KwAYPysREgA5GC85KxEAMxESARc5ETMRMxEzMTATEAAhMhcVJiMiAgMzNjMyFhUUAiMiAAUyNjU0JiMiBgYVFBYWdQFPAUhxQU1j6/gMDG7uxeP51OP+9gHrjp2SkVqWWVCTAnEBrwGrE48Z/tv+xqzuzOT++wFVyLOpkaZKgkZnsmgAAQBeAAAEKwW2AAYAH0AQAQUFAAIDBwgDAkxZAwYAGAA/PysREgEXOREzMTAhASE1IRUBAR0CXvzjA839qgUdmYX6zwADAGj/7AQpBcsAFgAiAC4ATUApFw8mFCwDHQkJAwYRFA8GLzAGESkgKSBLWSkpDAAMGk1ZDBkAI01ZAAcAPysAGD8rERIAORgvKxESADk5ERIBFzkRMxEzETMRMzEwATIWFRQGBxYWFRQGIyImNTQlJiY1NDYDFBYzMjY1NCYnBgYBIgYVFBYXNjY1NCYCSMjqhpOylv7d6vwBMop463enl5WmnMKVhgE6fY52n493kQXLuqRssklVu3u22c28+4xOtXCfvfumeIaMemGXR0CbA2d4ZFyEQjyKXGV3AAACAGr/7AQlBcsAFwAlAEFAIhsRIgoKAAAEEQMmJw4eTVkLFA4OAhQUGEtZFAcCB01ZAhkAPysAGD8rERIAORgvEjkrERIBFzkRMxEzETMxMAEQISInNRYzMhITIwYGIyImNTQSMzIWEgEiBhUUFjMyNjY1NCYmBCX9aHREUGbw9QsMN7ZywuT/0JXfeP4Uj5yQk1uZWFKTA0b8phSPGgEpATNTV+jQ5AEImf7bATC4pJClSoBGabJmAAACAJj/4wGJBGQACwAVAChAFBAGBgwAABYXDhNPWQ4QCQNPWQkWAD8rABg/KxESATkRMzMRMzEwNzQ2MzIWFRQGIyImETQzMhUUBiMiJpg9OTpBQjkzQ3Z7QjkzQ2pDRUVDQUY/A7uHh0FGPwACAD/++AGFBGQACAASACJAEAENDQUJCRQTCxBPWQsQBQAAL80/KxESATkRMzMRMzEwJRcGAgcjNhI3AzQzMhUUBiMiJgFeDxpiNX0bQQ0Vd3tCOTo97hdk/vdyaAEyXALvh4dBRkYAAAEAaADyBCkE2QAGABVACQQABQEEBwgDAAAvLxESARc5MTAlATUBFQEBBCn8PwPB/PIDDvIBpmIB35X+jf64AAACAHcBwQQZA+MAAwAHACpAFQcCBAACAAkIBAVQWQQBAFBZDwEBAQAvXSsAGC8rERIBOTkRMxEzMTATNSEVATUhFXcDovxeA6IDWomJ/meJiQAAAQBoAPIEKQTZAAYAFUAJBQECAAQHCAYDAC8vERIBFzkxMBMBATUBFQFoAw/88QPB/D8BiQFGAXWV/iFi/loAAAIAG//jAzkFywAbACYAOUAdIRwbAAcTEwAcDgQnKAAAJBAkHk9ZJBYQCklZEAQAPysAGD8rERIAORgvERIBFzkRMxEzETMxMAE1NDY3NjY1NCYjIgYHJzYzMhYVFAYGBwYGFRUDNDMyFhUUBiMiJgEhSGKIR4N7T5ZhO73Ov9QnTH5lQbJ4Oj9AOTREAZM2dZdUc3RSZm8lMYdjvKtJb2NuVnJfIf7XiEZCQEc/AAIAef9GBrgFtAA1AD8ARUAiIy42DjsHFBsAACkUDi4FQEEYODgEPQgRCxELESsfMgMmKwAvMz8zEjk5Ly8SOTIzMxEzERIBFzkRMxEzMxEzETMxMAE
UBgYjIiYnIwYGIyImNTQ2MzIWFwMVFDMyNjU0AiQjIgQCFRAAITI3FQYjIAAREBIkITIEEgEUMzITEyYjIgYGuFigaFZ2CwgolWaWqezARKxFGYVbcpT+77Hf/rauAUIBL9LiwPT+lf5v1gGMAQDXAU+3+/bDzxIOSFWCkwLZjuyCaFFXYs2wzP8ZFv4qFrLXrLUBEJO5/qnh/s/+uFaFVAGPAWYBBAGW37X+s/6k/gE5AQUUtAACAAAAAAUQBbwABwAOADlAHgIOCwgBBQADAAcDBAcEEA8OAklZCwUODgQFAwAEEgA/Mz8SOS8SOSsREgE5OREzETMREhc5MTAhAyEDIwEzAQEDJicGBwMEYLb9trSsAkKPAj/+ZaohIxYprAHR/i8FvPpEAmoBxVZ9YHP+OwADAMkAAAS+BbYADgAXACAASUAmEwQdCg8ZGQ4KBAcOBCEiCA8YDxhKWQ8PDgAOGUpZDhIAF0pZAAMAPysAGD8rERIAORgvKxESADkREgEXOREzETMRMxEzMTATISAEFRQGBxUEERQEIyETITI2NTQmIyMRESEyNjU0JiPJAZ0BIwEEkYsBTf737v4CqgEYtJ6wwPoBMbGzt7sFtq68gqkZCjn+28TcA0Rxhntt/ZH93YmSiIAAAAEAff/sBM8FywAWACZAFAMOFAkOAxcYEgBJWRIECwZJWQsTAD8rABg/KxESARc5ETMxMAEiABEQADMyNxUGIyAAETQSJDMyFwcmAzvx/ukBDfmZxJjf/r3+oakBP9jmrEimBTP+v/7p/uH+xzeVOQGIAWniAVS4VJJOAAACAMkAAAVYBbYACAARAChAFA4ECQAEABITBQ1KWQUDBA5KWQQSAD8rABg/KxESATk5ETMRMzEwARAAISERISAAAxAAISMRMyAABVj+d/6P/msBwAFVAXq0/uH+5ffPATABMgLp/pb+gQW2/ob+pwEeASL7cAErAAABAMkAAAP4BbYACwA6QB8GCgoBBAAIAQQMDQYJSVkGBgECAgVJWQIDAQpJWQESAD8rABg/KxESADkYLysREgEXOREzETMxMCEhESEVIREhFSERIQP4/NEDL/17Al79ogKFBbaX/imW/eYAAQDJAAAD+AW2AAkAMkAaBgAAAQMIAQMKCwYJSVkGBgECAgVJWQIDARIAPz8rERIAORgvKxESARc5ETMRMzEwISMRIRUhESEVIQFzqgMv/XsCXv2iBbaX/emXAAABAH3/7AU9BcsAGwA6QB8UCBkCAg4bCAQcHQAbSVkAAAUMDBFJWQwEBRdJWQUTAD8rABg/KxESADkYLysREgEXOREzETMxMAEhEQYGIyAAETQSJDMyFwcmIyAAERAAITI3ESEDTAHxdPCe/rT+jrcBWOfqykLGt/71/tQBIQEYmJH+uQL+/TklJgGLAWTkAVe1VpZU/sL+5v7Y/s4jAcIAAQDJAAAFHwW2AAsAM0AZCQEBAAgEBAUABQ0MCANJWQgIBQoGAwEFEgA/Mz8zEjkvKxESATk5ETMRMxEzETMxMCEjESERIxEzESERMwUfqvz+qqoDAqoCsP1QBbb9kgJuAAABAFQAAAJWBbYACwA3QBwFAQoDCAAAAwEDDA0JBAYESlkGAwoDAQNKWQESAD8rEQAzGD8rEQAzERIBFzkRMxEzETMxMCEhNTcRJzUhFQcRFwJW/f6srAICrKxiIwSqJWJiJftWIwAB/2D+fwFoBbYADQAdQA0LCAgODwkDAAVJWQAiAD8rABg/ERIBOREzMTADIic1FjMyNjURMxEUBgxeNkdNY2eqwP5/G5EUeHEFtvpYvtEAAAEAyQAABOkFtgALACpAFQgEBAUFAgsKAAUNDAIIBQkGAwEFEgA/Mz8zEjk5ERIBFzkRMxEzMTAhIwEHESMRMxEBMwEE6cj965mqqgKXyf20AsWI/cMFtv0rAtX9hQABAMkAAAP4BbYABQAfQA4DAAAEBgcBAwADSVkAEgA/KwAYPxESATk5ETMxMDMRMxEhFcmqAoUFtvrkmgABAMkAAAZxBbYAEwAyQBgIBQUGCw4ODQYNFBUBChEDBgsHAw4ABhIAPzMzPzMSFzkREgE5OREzETMRMxEzMTAhASMWFREjESEBMwEzESMRNDcjAQNQ/hAIDp0BAAHPCAHT/qoOCP4MBRCa1PxeBbb7SgS2+koDrqK++vIAAQDJAAAFPwW2ABAALkAVCQYGBwEPDwAHABESCwMHDwgDAQcSAD8zPzMSOTkREgE5OREzETMRMxEzMTAhIwEjFhURIxEzATMmAjcRMwU/wvzhCBCdwAMdCAIOAp8Ey9i0/MEFtvs6GwElPwNHAAACAH3/7AW+Bc0ACwAXAChAFBIADAYABhkYCRVJWQkEAw9JWQMTAD8rABg/KxESATk5ETMRMzEwARAAISAAERAAISAAARASMzISERACIyICBb7+nf7E/r3+oQFgAUQBOwFi+3P98fP49/Lz/QLd/qH+bgGLAWgBZQGJ/nD+oP7X/s0BMgEqAScBMf7NAAIAyQAABGgFtgAJABIANEAaCgUFBg4ABgATFAoESlkKCgYHBxJKWQcDBhIAPz8rERIAORgvKxESATk5ETMRMxEzMTABFAQhIxEjESEgATMyNjU0JiMjBGj+0f7mrKoBewIk/QuZ4sq+yb4EDN7v/cEFtv0bkqGRjgAAAgB9/qQFvgXNAA8AGwA0QBsQChYAAAQDCgQcHQMNBw0ZSVkNBAcTSVkFBxMAP8YrABg/KxESADkREgEXOREzETMxMAEQAgcBIwEHIAAREAAhIAABEBIzMhIREAIjIgIFvuLOAVz3/uM3/r3+oQFgAUQBOwFi+3P98fP49/Lz/QLd/uf+jEL+lgFKAgGLAWgBZQGJ/nD+oP7X/s0BMgEqAScBMf7NAAIAyQAABM8FtgAMABUASEAlDQEBAgwJEQcLCgoHCQIEFhcJDQANAEpZDQ0CAwMVSVkDAwsCEgA/Mz8rERIAORgvKxESADkREgEXOREzETMRMxEzETMxMAERIxEhIAQVEAUBIwElMzI2NTQmIyMBc6oBkQENAQH+2gGNyf6e/s/ptKirvd0CYP2gBbbOz/7eZv1vAmCSj4+RgAABAGr/7AQCBcsAJAA0QBseEwwAABgTBQQlJgweAxYWG0lZFgQDCUlZAxMAPysAGD8rERIAOTkREgEXOREzETMxMAEUBCMgJzUWFjMyNjU0JiYnJiY1NDYzMhcHJiMiBhUUFhYXFhYEAv7o8P78jFrUaKqsPY+SzK/+0dq3NbWrh5g4hYnmrQGFwdhDpCYsgXNMYVI0ScihqchQlEx0Z0xhUTFSvAAAAQASAAAEWgW2AAcAJEASAAEFAQMDCAkHAwQDSVkEAwESAD8/KxEAMxESARc5ETMxMCEjESE1IRUhAouq/jEESP4xBR+XlwAAAQC6/+wFGQW2ABEAJUAREAEKBwEHExIRCAMEDUlZBBMAPysAGD8zERIBOTkRMxEzMTABERQAISAANREzERQWMzI2NREFGf7S/vj++P7fqsjCucgFtv
xO+v7iASD8A678RrfExbgDuAABAAAAAATDBbYACgAaQAsBBAwLCAMABAMDEgA/PzMSORESATk5MTABMwEjATMBFhc2NwQMt/3xqP30tAFQOiIkOgW2+koFtvxOo5qioQABABsAAAdMBbYAGQAkQBAZChsaFQ4OBQkYEQoDAQkSAD8zPzMzEjk5ETMREgE5OTEwISMBJiYnBgcBIwEzExYXNjcBMwEWFzY3EzMFxaj+2RU0ARYw/uKo/nu05zAWGzUBBrQBEzAhEzXmtAPTQcYUhJ38MwW2/Hm+mrevA3n8f5vDjswDhQAAAQAIAAAElgW2AAsAI0ASBAYFCwoABg0MAggECQYDAQQSAD8zPzMSOTkREgEXOTEwISMBASMBATMBATMBBJbB/nf+cLQB5v47vAFrAW61/jsCg/19AvwCuv29AkP9TAAAAQAAAAAEewW2AAgAIEAPBAUCBQcDCQoABQEHAwUSAD8/MxI5ERIBFzkRMzEwAQEzAREjEQEzAj0Bhrj+GKz+GboC2wLb/IH9yQIvA4cAAQBSAAAEPwW2AAkAK0AXCAEDBwAHBAEECgsFBElZBQMBCElZARIAPysAGD8rERIBFzkRMxEzMTAhITUBITUhFQEhBD/8EwMI/RADv/z4Ax6FBJiZhftpAAEApv68Am8FtgAHACBADgYBBAABAAgJBQIDBgEnAD8zPzMREgE5OREzETMxMAEhESEVIREhAm/+NwHJ/t8BIf68BvqN+iEAAAEAFwAAAt0FtgADABO3AwEEBQMDAhIAPz8REgE5OTEwEwEjAboCI6b94AW2+koFtgAAAQAz/rwB/AW2AAcAIEAOAwABBgAGCAkABycDBAMAPzM/MxESATk5ETMRMzEwFyERITUhESEzASH+3wHJ/je2Bd+N+QYAAAEAMQInBCMFwQAGABhACQADBwgFAgAEAgAvLzMSORESATk5MTATATMBIwEBMQGyYwHdmP6M/rICJwOa/GYC6f0XAAH//P7FA5r/SAADABG1AAUBBAECAC8zEQEzETMxMAEhNSEDmvxiA57+xYMAAQGJBNkDEgYhAAkAE7YABAsKBoABAC8azRESATk5MTABIyYmJzUzFhYXAxJuQbIoyyByLATZNMA/FUW1NQACAF7/7APNBFoAGQAkAEdAJSIICx4eGRkSCAMlJgECCx5HWQILCwAVFQ9GWRUQBRpGWQUWABUAPz8rABg/KxESADkYLzkrEQAzERIBFzkRMxEzETMxMCEnIwYGIyImNRAlNzU0JiMiByc2NjMyFhURJTI2NTUHBgYVFBYDUiEIUqN6o7kCE7pveomtM1HBYcS9/g6bsabGr22cZ0momwFMEAZEgXtUfywyrsD9FHWqmWMHB21zWl4AAgCw/+wEdQYUABMAHwBEQCIKFxcPDwwdAwwDICENAAwVEhEKEQYABhpGWQYWABRGWQAQAD8rABg/KxESADk5ETMYPz8REgE5OREzETMRMxEzMTABMhIREAIjIiYnIwcjETMRFAczNhciBhUUFjMyNjU0JgKu2O/x1muxPAwjd6YICHTMqpaaqpmWlgRa/tn+8v7y/tVPUo0GFP6Gf2Wki8Pn58ff0dbSAAABAHP/7AOLBFwAFgAmQBQPAwMVCQMYFwYNRlkGEAASRlkAFgA/KwAYPysREgEXOREzMTAFIgAREAAzMhYXByYmIyARFBYzMjcVBgJm7v77AQn1T54tMzeCMv6yo6CJkG4UASUBDAETASwiF40WHf5Wytg7kzkAAgBz/+wENwYUABIAHwBCQCEdBhcADg4RBhEgIRIVDwAAAQEMAwkJGkZZCRADE0ZZAxYAPysAGD8rERIAOTkRMxg/PxESATk5ETMRMzMRMzEwJSMGIyICERASMzIXMycnETMRIyUyNjU1NCYjIgYVFBYDmglz5dfv8Nbfdw0HBKaH/p6qmZuqkpuak6cBJgEPAQ8BLKJPTQG++ex3uc4j6cfjz9LWAAIAc//sBBIEXAATABoAO0AfGAoXCwMDEQoDHBsXC0ZZFxcABgYURlkGEAAORlkAFgA/KwAYPysREgA5GC8rERIBFzkRMzMRMzEwBSIAERAAMzISFRUhFhYzMjcVBgYDIgYHITQmAn/z/ucBBdzO8P0NBbmosa1YnZyEnQ4CPYwUASgBBwEJATj+8d5pwchKlCYhA+WsmJ2nAAABAB0AAAMOBh8AFAA5QB0UDAwTAgIHAwUDFRYKD0ZZCgABBQcFRlkTBw8DFQA/PzMrEQAzGD8rERIBOTkRMzMRMzMSOTEwASERIxEjNTc1ECEyFwcmIyIGFRUhAp7+6abExAFhV3UrYEReWgEXA8f8OQPHSzw9AZQjhR99ikcAAAMAJ/4UBDEEXAAqADcAQQBuQD4rGTglDB89BTETARMFAioiHB8lGQpCQxwPNQ81RlkIO0dZCiIIKg8IDwgWKioCR1kqDyg/R1koEBYuR1kWGwA/KwAYPysAGD8rERIAOTkYLy8REjk5KysREgA5ERIBFzkRMxEzETMRMxEzMTABFQcWFhUUBiMiJwYVFBYzMzIWFRQEISImNTQ2NyYmNTQ2NyYmNTQ2MzIXARQWMzI2NTQmIyMiBhMUFjMyNTQjIgYEMcscLNzAMStqSlrCsr/+3P7o1+mAdCo5QEVVa9jGVkX+EZaM0clumMdxflqCdPP2dX4ESGkYI3FHocAIOFUtK5aPtr+gkmSSGhNQNTxaKiOobLTDFPsAWVx9a1lFbAM8c3bs934AAQCwAAAERAYUABYAM0AZDgwICAkAFgkWFxgOCRISBEZZEhAKAAAJFQA/Mz8/KxESADkREgE5OREzETMRMzMxMCERNCYjIgYVESMRMxEUBzM2NjMyFhURA556gq2fpqYICjG1dMnJAsWGhLzW/cMGFP4pVThPW7/Q/TUAAAIAogAAAWYF3wADAA8AI0ARCgAABAEBEBENB0hZDQIPARUAPz/OKxESATkRMzMRMzEwISMRMwM0NjMyFhUUBiMiJgFWpqa0OCooOjooKjgESAEpOTU2ODg3NwAAAv+R/hQBZgXfAAwAGAAsQBYTCwsNCAgZGhYQSFkWQAkPAAVGWQAbAD8rABg/Gs4rERIBOREzMxEzMTATIic1FjMyNjURMxEQAzQ2MzIWFRQGIyImK187RUNOSaa0OCooOjooKjj+FBmHFFVXBPz7EP68B105NTY4ODc3AAEAsAAABB0GFAAQADZAGxAOCgoLCwgGBAUIBBESDAAAEBAICAMHCxUDDwA/PzMSOS85ETM/ERIBFzkROREzETMzMTABNjcBMwEBIwEHESMRMxEUBwFUK1gBYsX+RAHbyf59faSkCAIxPWMBd/4t/YsCBmz+ZgYU/Mc3cwABALAAAAFWBhQAAwAWQAkAAQEEBQIAARUAPz8REgE5ETMxMCEjETMBVqamBhQAAQCwAAAGywRcACMARkAjFREREggJACMJEiMDJCUcFhUVEhkEDRkNRlkfGRATDwkAEhUAPzMzPz8zKxEAMxESORgvMzMREgEXOREzETMRMxEzM
TAhETQmIyIGFREjETQmIyIGFREjETMXMzY2MyAXMzY2MzIWFREGJXB2m5SmcHeckaaHGwgvq2oBAU8IMbp3urkCyYODsrn9nALJg4O71f3BBEiWUFq6VmS/0v01AAABALAAAAREBFwAFAAxQBgAFAwICAkUCRYVDAkQEARGWRAQCg8ACRUAPzM/PysREgA5ERIBOTkRMxEzETMxMCERNCYjIgYVESMRMxczNjYzMhYVEQOeeoKsoKaHGwgzuHHGyALFhoS61v3BBEiWUVm/0v01AAIAc//sBGIEXAAMABgAKEAUEwANBwAHGhkKFkZZChADEEZZAxYAPysAGD8rERIBOTkRMxEzMTABEAAjIiYCNRAAMzIAARQWMzI2NTQmIyIGBGL+8u6T5HwBDO7mAQ/8vaijo6mppaOmAiX+9P7TigECrQEMASv+zv770tzb09HZ1gACALD+FAR1BFwAFAAhAD9AIBkLBAcHCB8SCBIiIwQLAA8PFUZZDxAJDwgbABxGWQAWAD8rABg/Pz8rERIAOTkREgE5OREzETMRMzMzMTAFIiYnIxYVESMRMxczNjYzMhIREAIDIgYHFRQWMzI2NTQmAq5rsTwMDKaHFwhAqm7a7fHuqJYCmqqOoaEUT1JgVv49BjSWWlD+1v7z/vL+1QPjussl58fmys3bAAIAc/4UBDcEXAAMAB8AREAiChAdFgMaGhkQGSAhGhsXDx0eHhYNExMHRlkTEA0ARlkNFgA/KwAYPysREgA5OREzGD8/ERIBOTkRMxEzMzMRMzEwJTI2NzU0JiMiBhUUFhciAhEQEjMyFzM3MxEjETQ3IwYCTqaYBZypkpuZfdTu8NbheQkYg6YLDXN3stMl5srjz8/ZiwEqAQsBDQEuqpb5zAHVZEanAAEAsAAAAycEXAAQACpAFA0JCQoKAhESCw8NAAoVAAVGWQAQAD8rABg/Ejk/ERIBOTkRMxEzMTABMhcHJiMiBhURIxEzFzM2NgKkSToXRDSFvaaJEwg9rARcDJoP2KH9tARIy2t0AAEAav/sA3MEXAAkADZAHB4TDAAAGAUTBCUmDB4DFhYbRlkWEAYDCUZZAxYAPysAGC8/KxESADk5ERIBFzkRMxEzMTABFAYjIic1FhYzMjY1NCYnLgI1NDYzMhcHJiMiBhUUFhYXFhYDc+TO2npPtVSCjG+hmYE/2r6xqTulhnZ4LWSOw4kBK5mmRZooLlNVQFs+OVVsS4abSIdESkEsPjg1R5AAAQAf/+wCqAVGABYANEAbEBQUCQsJEgMEGBcKExATR1kOQBAPBwBGWQcWAD8rABg/Gs0rEQAzERIBFzkRMxEzMTAlMjY3FQYGIyARESM1NzczFSEVIREUFgISLFIYG2kq/sKdnUZgAT7+wl51DQd/DREBTwKMUEXq/oH9e2NqAAABAKT/7AQ5BEgAFAA0QBkBEwcMDAoTChUWDA0NEAgUDxAERlkQFgsVAD8/KwAYPzMSOREzERIBOTkRMxEzETMxMAERFBYzMjY1ETMRIycjBgYjIiY1EQFMeoKsn6aJGAkztXTIxwRI/TmGhLzVAkD7uJNRVr7RAs0AAAEAAAAABAIESAALABhACgEKDA0FCQEPABUAPz8zORESATk5MTAhATMTFhczNhITMwEBoP5gsuxQDggLdcyy/mAESP125EQ1AU0CMPu4AAEAFwAABiMESAAcACxAFAkbHR4XFg4NAwQNBAgaEgkPAAgVAD8zPzMzEjk5ETMRMzMzERIBOTkxMCEDJicjBgcDIwEzEhIXMzY2NxMzExYXMzY2EzMBBC/JEzQIKB7PwP7VrmpvCAgLMRLJtMQ4FAgEI7+s/tECgzvRr1/9fwRI/mP+UEs5tTUCdf2LrHUklgLc+7gAAAEAJwAABAgESAALACJAEQcFBgABBQwNCQMBCAsVBAEPAD8zPzMSOTkREgEXOTEwAQEzAQEzAQEjAQEjAbj+g70BIQEgu/6DAZG8/s3+yrwCMQIX/lwBpP3p/c8BvP5EAAEAAv4UBAYESAAVACRAEgkPAAMWFwQNAA0SRlkNGwgADwA/Mj8rERIAORESARc5MTATMxMWFzM2NhMzAQYGIyInNRYzMjc3ArLwTxMIDVPmsv4pRruITEo3RKtJPQRI/Y/WXzP3Anz7ILmbEYUMwJwAAAEAUgAAA20ESAAJACtAFwgBAwcABwQBBAoLBQRHWQUPAQhHWQEVAD8rABg/KxESARc5ETMRMzEwISE1ASE1IRUBIQNt/OUCVv3PAuf9sgJdcQNWgYH8ugABAD3+vALBBbYAHAAsQBUZGhoLFwAADwcUAwMHCwMdHhMDBCcAPz8REgEXOREzETMzETMRMxEzMTAlFBYXFSYmNRE0JiM1NjY1ETQ2MxUGFREUBxUWFQHbdXG+0H54gnTYtubf3wxmXAKMAqqaAS9oWY0CXGABMpusiwbB/tnXJwwn1wABAe7+EAJ7BhQAAwAWQAkCAwMEBQMbAAAAPz8REgE5ETMxMAEzESMB7o2NBhT3/AABAEj+vALLBbYAHQAsQBUVBQoSEgIZAB0dDg4ZBQMeHxUnBgMAPz8REgEXOREzETMRMzMRMxEzMTABJjURNCc1MhYVERQWFxUiBhURFAYHNTY2NRE0NjcCCt/juNN2gnp+zb5vdG5xAj8n1wEnwQaLrpn+zmFbAo1ZaP7RmasCjAJcZgEpcngUAAABAGgCUAQpA1QAFwAkQBEDDxgZEgxQWQMSDwYGAFBZBgAvKwAQGMQvxCsREgE5OTEwASIGBzU2MzIWFxYWMzI2NxUGIyImJyYmAVI1fzZkkERxWUJiLzaANmaOSH5IS1oCyUM2l20cJhwbQDmWbiEgIBgAAAIAmP6LAYkEXgADAA4AK0AUAgQEAwkJDxAAAAMMDAZPWQwQAyIAPz8rERIAORgvERIBOREzMxEzMTATMxMjExQjIiY1NDYzMhbbaTPP4Xk8PD85M0YCrPvfBUyHR0A/SEAAAQC+/+wD2wXLABsAPkAeFggNAwMKBAAQEAQIAxwdGQUCEwoNAg0CDQQLBwQZAD8/Ejk5Ly8RMzMRMzMREgEXOREzETMzETMRMzEwJQYHFSM1JgI1ECU1MxUWFhcHJiMiBhUUFjMyNwPLaZOFy8EBjIdLjjExhW2sop+njY7wNgbIziABEfoB/D6spAMhF4wz09nUyzsAAQA/AAAERAXJAB0ASEAmGBMJDQ0aFhECCxYTBR4fDBgZGE5ZCRkZEwATEExZExgABUtZAAcAPysAGD8rERIAORgvMysRADMREgEXOREzMxEzETMxMAEyFwcmIyIGFREhFSEVFAYHIRUhNTY1NSM1MxE0NgKqvqo9mo97fQGm/lpBSgMb+/vNxsbgBclUhU18jP7Zf91kiCyajS/0338BPLLNAAACAHsBBgQXBKAAGwAnACBADRwAIg4ADigpHxUVJQcALzMzLzMREgE5OREzETMxMBM0Nyc3FzYzMhc3FwcWFRQHFwcnBiMiJwcnNyY3FBYzMjY1NCYjIga4Sodeh2iCf2aJX4ZK
SoNciWZ/hmSHXIVKgZ10dJ6gcnSdAtN6a4xchUlJhVyKcXaDZ4dchUdJhVyIa3xwoJ9xcqKkAAABAB8AAARxBbYAFgBWQC4SDgcLCxAMBQkCCQMMFA4VBxcYCg4OBw8GEhIDABMVDxMfEwIPEw8TDAEVBgwYAD8/MxI5OS8vXRESOTIyETMRMzMRMxESARc5ETMRMzMRMxEzMTABATMBIRUhFSEVIREjESE1ITUhNSEBMwJIAXuu/mABBv7DAT3+w6T+xAE8/sQBAP5lsgLfAtf8/n+qf/70AQx/qn8DAgACAe7+EAJ7BhQAAwAHACRAEAIGBgMHBwgJBAMEAwcbAAAAPz85OS8vERIBOREzMxEzMTABMxEjETMRIwHujY2NjQYU/Pj+Dfz3AAIAe//4A5YGHQAxAD0AQ0AmMgATBioeOBkZHgwGACMGPj8VAzs2HC0GIQkhJ0dZIRUJEEdZCQAAPysAGD8rERIAFzkREgEXOREzETMRMxEzMTATNDY3JiY1NDYzMhYXByYmIyIGFRQWFxYWFRQGBxYVFAYjIic1FhYzMjY1NCYmJy4CNxQWFxc2NTQmJwYGi1ZOSlTPxV6fYTVih0x0dHuaupZSSpnq1NqATsJSho0wbHOOhkKShKcxiZO5RFUDKVaJJShvVXmLHSeDJxs7QDxUN0SXa1qNKVGSjJlBlCUtTEcuOjorNFpyYk1pPRNQb1NwORNkAAIBNQUOA2gF0wALABcAHkAMBgAMEgASGBkPAxUJAC8zzTIREgE5OREzETMxMAE0NjMyFhUUBiMiJiU0NjMyFhUUBiMiJgE1NSUmNzcmJTUBfTUlJTc3JSU1BXE0Li40MjExMjQuLjQyMTEAAAMAZP/sBkQFywAWACYANgBGQCcnFwMPLx8fFAkPFwU3OAYMABIPDB8MAgASEBICDBIMEhsrIxMzGwQAPzM/MxI5OS8vXV0RMxEzERIBFzkRMxEzETMxMAEiBhUUFjMyNxUGBiMiJjU0NjMyFwcmATQSJDMyBBIVFAIEIyIkAjcUEgQzMiQSNTQCJCMiBAIDfX2Hf4NWfTBlRsLQ3b+Adjps/JfIAV7KyAFeysL+otDP/qLDaa4BLayuASqvrv7XsK7+1q8EI66aqKItfBQc8djR9jx2M/64yAFeysj+osrF/qbQzwFaxq3+062uASmwrgEqr67+1wAAAgBGAxQCcQXHABYAHwA3QBwXBhsKAQEWFhAGAyAhHAoKEhkWAAMQAwIDDRIfAD8z1F3EMxI5LzMREgEXOREzETMzETMxMAEnBiMiJjU0Njc3NTQjIgcnNjMyFhURJRQzMjU1BwYGAhQYXIxfb5qldZRkaCtyhYKJ/lBwyWJwZwMhVGFjZmZpBgQnhTNgOGl5/jy8ZLQxBAQ5AAIAUgB1A6oDvgAGAA0AKUATAwYKDQIECwkJBA0GBA4PDAUIAQAvMy8zERIBFzkRMxEzETMRMzEwEwEXAQEHASUBFwEBBwFSAVZ3/t8BIXf+qgGLAVh1/uEBH3X+qAInAZdF/qL+oUcBlxsBl0X+ov6hRwGXAAABAGgBCAQpAxcABQAbQAwCAQQBBgcFBFBZBQIALy8rERIBOTkRMzEwAREjESE1BCmJ/MgDF/3xAYWKAP//AFQB2QI/AnECBgAQAAAABABk/+wGRAXLAAgAFgAmADYAXUAzJxcAERESBAkvHx8NCQwSFwY3OAwQEAAADhMOEggTDxIfEgIAExATAhITEhMbKyMTMxsEAD8zPzMSOTkvL11dETMRMxESOS8zETMREgEXOREzETMRMxEzETMxMAEzMjY1NCYjIwUUBgcTIwMjESMRITIWATQSJDMyBBIVFAIEIyIkAjcUEgQzMiQSNTQCJCMiBAIC02xQYVZdagGyVU3uqM+HlAEFppv738gBXsrIAV7Kwv6i0M/+osNprgEtrK4BKq+u/tewrv7WrwL6U0BLQYhQex7+dQFi/p4De4L+xcgBXsrI/qLKxf6m0M8BWsat/tOtrgEpsK4BKq+u/tcAAf/6BhQEBgaTAAMAEbUABQEEAQIALzMRATMRMzEwASE1IQQG+/QEDAYUfwACAH8DXALuBcsADAAYACFADg0AEwYABhkaEArAFgMEAD8zGswyERIBOTkRMxEzMTATNDYzMhYVFAYGIyImNxQWMzI2NTQmIyIGf7WCgrZSklSCtXN1UVBzcVJTcwSTgra1g1SPVLSDUnJxU1RxcgD//wBoAAEEKQTDAiYADgAAAAcCKwAA/XQAAQAxAkoCjQXJABgAI0ARBxMXAQEOEwAEGhkKEB8XASAAPzM/MxESARc5ETMRMzEwASE1Nz4CNTQmIyIGByc2MzIWFRQGBwchAo39pOxZUiFQPzRiRUKDmISTWZOuAbgCSmjmVmFMNkRFJjJYb4JwUJeKpQABACECOQKNBckAIwA5QCIPBQUAAxIeCgYkJRJdE20TAkwTAQsTGxMCExMIGiEfDQghAD8zPzMSOS9dXV0zERIBFzkRMzEwARQGBxYVFAYjIic1FjMyNTQjIzUzMjY1NCYjIgYHJzY2MzIWAnNSRLC4qJh0k3vT53V3Z2NQQ0JwOEU/jF6InQTnUGcXL6KAjzh7RKKRa09EPUQrI1otNncAAQGJBNkDEgYhAAkAE7YJBAoLBIAJAC8azRESATk5MTABNjY3MxUGBgcjAYkwbyDKLK5AbwTyPrBBFUG+NAABALD+FAREBEgAFgA1QBoFCgoIEAATExQIFBgXBhUPFBsNAkZZDRYJFQA/PysAGD8/MxESATk5ETMRMzMRMxEzMTABEDMyNjURMxEjJyMGIyInIxYVESMRMwFW/qufpogaCm/lllgKCqamAX3++r3UAkD7uJOnXFSg/sAGNAABAHH+/ARgBhQADwAnQBIEBQEAAAULAxARCAgFAw8FAQUALzM/MxI5LxESARc5ETMRMzEwASMRIxEjEQYjIiY1EDYzIQRgctVzPlTYy9roAi3+/Aaw+VADMxL6+wEE/gABAJgCTAGJA1oACwAXQAoGAAANDAMJT1kDAC8rERIBOREzMTATNDYzMhYVFAYjIiaYPjg6QUI5M0MC00JFRUJBRj8AAAEAJf4UAbQAAAASACRAEBEOCwAADgUDExQOEREIAxAAL8wyOS8zERIBFzkRMxEzMTABFAYjIic1FjMyNjU0Jic3MwcWAbSZljMtLTtPUU9tWG43tP7fYWoJaggoNis1EbJzJwABAEwCSgHhBbYACgAgQA4CAAMDCgwLCQkDIAYAHgA/Mj85LxESATk5ETMzMTABMxEjETQ3BgYHJwFSj4UGFjaHQwW2/JQCQ1taFi1fYAACAEIDFAK+BccACwAXACVAEgwGEgAGABgZDwADEAMCAxUJHwA/M8RdMhESATk5ETMRMzEwARQGIyImNTQ2MzIWBRQWMzI2NTQmIyIGAr6rlpKpqJeYpf3+W2hpXFxpZ1wEb6S3uqGjtbaienp6ent2dgACAFAAdQOoA74ABgANACNAEQsJBAIAAwcCCgkGDg8MBQgBAC8zLzMREgE
XOREzETMxMAEBJwEBNwEFAScBATcBA6j+qHUBH/7hdQFY/nX+qHUBH/7hdQFYAgz+aUcBXwFeRf5pG/5pRwFfAV5F/mn//wBLAAAF0QW2ACcCFwKDAAAAJgB7/wABBwI8Ax39twAJswMCEhgAPzU1AP//AC4AAAXbBbYAJwIXAj8AAAAmAHviAAEHAHQDTv23AAeyAhAYAD81AP//ABoAAAYhBckAJgB1+QAAJwIXAt8AAAEHAjwDbf23AAmzAwIrGAA/NTUAAAIAM/53A1QEXgAdACgAQUAiCBQeIwEcDxwjFAQpKgAdAQwDHR0RJiYgT1kmEBELSVkRIwA/KwAYPysREgA5GC9fXl0REgEXOREzETMRMzEwARUUBgcOAhUUFjMyNjcXBiMiJjU0PgI3NjY1NRMUIyImNTQ2MzIWAk5LYXk9GYR6UJZiO8XGvtgjQFk2ZUG0eTs+QjczRgKsM3qUVGpLTThkcSYwh2C6qkZpWVIvWHRdHwErh0VCQEdA//8AAAAABRAHcwImACQAAAEHAEP/wgFSAAizAhAFJgArNf//AAAAAAUQB3MCJgAkAAABBwB2AIUBUgAIswIYBSYAKzX//wAAAAAFEAdzAiYAJAAAAQcBSwAjAVIACLMCHQUmACs1//8AAAAABRAHLwImACQAAAEHAVIABAFSAAizAhgFJgArNf//AAAAAAUQByUCJgAkAAABBwBqADcBUgAKtAMCJAUmACs1Nf//AAAAAAUQBwYCJgAkAAAABwFQADkAgQAC//4AAAaBBbYADwATAE5ALAoODhEBAAgMARAFBRUFFAkTBhNJWRADSVkKDUlZEAoQCgEGAwUSAQ5JWQESAD8rABg/PxI5OS8vKysrEQAzEQEzERIXOREzMxEzMTAhIREhAyMBIRUhESEVIREhASERIwaB/RL9/uOwAroDyf28Ah394wJE+1QBvnYB0f4vBbaX/imW/eYB0gK1AP//AH3+FATPBcsCJgAmAAAABwB6AgIAAP//AMkAAAP4B3MCJgAoAAABBwBD/7cBUgAIswENBSYAKzX//wDJAAAD+AdzAiYAKAAAAQcAdgA/AVIACLMBFQUmACs1//8AyQAAA/gHcwImACgAAAEHAUv/+wFSAAizARoFJgArNf//AMkAAAP4ByUCJgAoAAABBwBqABIBUgAKtAIBIQUmACs1Nf//ADwAAAJWB3MCJgAsAAABBwBD/rMBUgAIswENBSYAKzX//wBUAAACcwdzAiYALAAAAQcAdv9hAVIACLMBFQUmACs1/////wAAAqEHcwImACwAAAEHAUv+8wFSAAizARoFJgArNf//ADwAAAJvByUCJgAsAAABBwBq/wcBUgAKtAIBIQUmACs1NQACAC8AAAVIBbYADAAXAFdAMhEVFQgEDQAAEwQGBBgZFAYHBklZEQ8HPwevB88H3wcFCwMHBwQJCRBKWQkDBBVKWQQSAD8rABg/KxESADkYL19eXTMrEQAzERIBFzkRMxEzMxEzMTABEAAhIREjNTMRISAAAxAhIxEhFSERMyAFSP53/o/+e5qaAbIBUQF8tf3H5wF7/oW+AmIC6f6W/oECiZYCl/6J/qQCQP38lv4K//8AyQAABT8HLwImADEAAAEHAVIAkwFSAAizARoFJgArNf//AH3/7AW+B3MCJgAyAAABBwBDAHkBUgAIswIZBSYAKzX//wB9/+wFvgdzAiYAMgAAAQcAdgEKAVIACLMCIQUmACs1//8Aff/sBb4HcwImADIAAAEHAUsAtAFSAAizAiYFJgArNf//AH3/7AW+By8CJgAyAAABBwFSAJoBUgAIswIhBSYAKzX//wB9/+wFvgclAiYAMgAAAQcAagDVAVIACrQDAi0FJgArNTUAAQCFARAEDASYAAsAGUAJBwkDAQkBDA0IABkvERIBOTkRMxEzMTABFwEBBwEBJwEBNwEDrGD+oAFeYP6e/qRlAV7+oGQBYQSYY/6e/qBjAV/+oWMBYAFgZf6dAAADAH3/wwW+BfYAEwAbACMATkAsFh8XHgQcFBwKFAAAEg8FCAoGJCUWHiEZDSFJWQ8SCAUEAxANBAMZSVkGAxMAP8YrABg/xhIXOSsREgA5ORESARc5ETMRMxESFzkxMAEQACEiJwcnNyYREAAhMhc3FwcWAxAnARYzMhIBEBcBJiMiAgW+/p3+xOuUZXhssgFgAUTRnWF4asC0bv1gc7Dz+PwnZQKdaqjz/QLd/qH+bmSNT5rGAW0BZQGJXodQlMr+lQEQmvxMUgEyASr++poDr0n+zQD//wC6/+wFGQdzAiYAOAAAAQcAQwBGAVIACLMBEwUmACs1//8Auv/sBRkHcwImADgAAAEHAHYAzwFSAAizARsFJgArNf//ALr/7AUZB3MCJgA4AAABBwFLAH0BUgAIswEgBSYAKzX//wC6/+wFGQclAiYAOAAAAQcAagCYAVIACrQCAScFJgArNTX//wAAAAAEewdzAiYAPAAAAQcAdgAxAVIACLMBEgUmACs1AAIAyQAABHkFtgAMABUANkAcDQkFBQYRAAYAFhcNBEpZCRVKWQ0JDQkGBwMGEgA/PxI5OS8vKysREgE5OREzETMRMzMxMAEUBCEjESMRMxEzIAQBMzI2NTQmIyMEef7R/uG4qqrXARkBFvz6qOLKvsrMAxDj7v7BBbb/AM/96o+klYoAAAEAsP/sBJwGHwAwAEFAIikqBR0jABcMDAAdESoFMTISEiouLiZGWS4AKhUPFUZZDxYAPysAGD8/KxESADkYLxESARc5ETMRMxEzETMxMAEUBwYGFRQWFhcWFhUUBiMiJzUWFjMyNTQmJyYmNTQ2NzY2NTQmIyAVESMRNDYzMhYEGY9YOBtHToxmwrO8az+cSNdTbn9gRUdLQIh//uym3N7O4QTyh3NGQyEgKjkzX51loKtFmicvtktrRlJ7VD9qNTlaNVBV3/tMBLKyu53//wBe/+wDzQYhAiYARAAAAQYAQ44AAAizAiYRJgArNf//AF7/7APNBiECJgBEAAABBgB2KwAACLMCLhEmACs1//8AXv/sA80GIQImAEQAAAEGAUvYAAAIswIzESYAKzX//wBe/+wDzQXdAiYARAAAAQYBUr0AAAizAi4RJgArNf//AF7/7APNBdMCJgBEAAABBgBq4gAACrQDAjoRJgArNTX//wBe/+wDzQaFAiYARAAAAQYBUPcAAAq0AwIoESYAKzU1AAMAXv/sBnMEXAApADQAOwBhQDMqACQRMDgZGQQwORgYHzALAAU8PRstJy1GWRkxBDFHWTgkJxEEBA4iJxY1CA4IRlkUDhAAPzMrEQAzGD8zEjkvORI5MysRADMrEQAzERIBFzkRMxEzMxEzEjk5ETMxMBM0Njc3NTQmIyIHJzY2MzIWFzY2MzISFRUhEiEyNjcVBgYjICcGBiMiJjcUFjMyNjU1BwYGASIGByE0Jl74/rh0d5CjNErHYoKlKTWrbsDo/UMIATpbnVRWlWX+331RxYajua5rWJGonrqkA715iwsCB4ABL6GzCA
ZEgXtUfyk1V19YYP713mv+dSMnlCYh6X9qqpdfWamaYwcIbQIypp6cqAD//wBz/hQDiwRcAiYARgAAAAcAegFGAAD//wBz/+wEEgYhAiYASAAAAQYAQ7UAAAizAhwRJgArNf//AHP/7AQSBiECJgBIAAABBgB2TgAACLMCJBEmACs1//8Ac//sBBIGIQImAEgAAAEGAUv3AAAIswIpESYAKzX//wBz/+wEEgXTAiYASAAAAQYAagoAAAq0AwIwESYAKzU1////2gAAAWMGIQImAPMAAAEHAEP+UQAAAAizAQURJgArNf//AKkAAAIyBiECJgDzAAABBwB2/yAAAAAIswENESYAKzX///+zAAACVQYhAiYA8wAAAQcBS/6nAAAACLMBEhEmACs1////7AAAAh8F0wImAPMAAAEHAGr+twAAAAq0AgEZESYAKzU1AAIAcf/sBGIGIQAbACYASkArIQYMHBwAABgZFg4RExAGCScoCR9GWQsDFhEZDg8FFAkJAxcUAQMkRlkDFgA/KwAYPzMSOS8SFzkSOSsREgEXOREzETMRMzEwARAAIyIANTQAMzIXNyYnBSc3Jic3Fhc3FwcWEgM0JiMgERQWMzI2BGL++/fe/ukBB9ziZAg5zf7xSelcXkWcZu5Mz5ilqLSc/q+voq+hAjP+5/7SAQ3i5gEGeQTWv5tshT4xdUlLimt3j/5y/uiTqv6Yp7fJAP//ALAAAAREBd0CJgBRAAABBgFSDgAACLMBHhEmACs1//8Ac//sBGIGIQImAFIAAAEGAEPUAAAIswIaESYAKzX//wBz/+wEYgYhAiYAUgAAAQYAdlYAAAizAiIRJgArNf//AHP/7ARiBiECJgBSAAABBgFLDgAACLMCJxEmACs1//8Ac//sBGIF3QImAFIAAAEGAVLxAAAIswIiESYAKzX//wBz/+wEYgXTAiYAUgAAAQYAahsAAAq0AwIuESYAKzU1AAMAaAD8BCkEqAADAA8AGwAzQBgWCgoQBAIEAQMcHRkTEwEHDQ0BAQBQWQEALysRADMYLzMRMy8zERIBFzkRMzMRMzEwEzUhFQE0NjMyFhUUBiMiJhE0NjMyFhUUBiMiJmgDwf2uOzY0OjszND07NjQ6OzM0PQKNior+6Dw9Pzo5QD8C9Dw9Pzo5QD8AAwBz/7wEYgSHABMAGwAjAEtAKRcfHBQUChwAABIPBQgKBiQlFh4hGQ0ZRlkPEggFBAMQDRADIUZZBgMWAD/GKwAYP8YSFzkrERIAOTkREgEXOREzETMREjk5MTABEAAjIicHJzcmERAAMzIXNxcHFgUUFwEmIyIGBTQnARYzMjYEYv7y7ppwVHJegQEM7pp0VHVhf/y9NQHRS3KjpgKXM/4vR3GjqQIl/vT+00V1ToOYAQABDAErTHdMhZj5q2YChjXW1KRk/X0z2wD//wCk/+wEOQYhAiYAWAAAAQYAQ8QAAAizARYRJgArNf//AKT/7AQ5BiECJgBYAAABBgB2cQAACLMBHhEmACs1//8ApP/sBDkGIQImAFgAAAEGAUsSAAAIswEjESYAKzX//wCk/+wEOQXTAiYAWAAAAQYAaiEAAAq0AgEqESYAKzU1//8AAv4UBAYGIQImAFwAAAEGAHYSAAAIswEfESYAKzUAAgCw/hQEdQYUABYAIgA+QB8gBhsUEBARBhEkIxIAERsMFgkDCR5GWQkWAxdGWQMQAD8rABg/KxESADk5GD8/ERIBOTkRMxEzMxEzMTABNjYzMhIREAIjIicjFxYVESMRMxEUByUiBgcVFBYzIBE0JgFYQqpq1/Dx1t56DAQIpqYGAUiomAKaqgEvlAO0WU/+1P71/vT+06EiTT/+NQgA/i40Whu4ySnnxwGw19H//wAC/hQEBgXTAiYAXAAAAQYAarUAAAq0AgErESYAKzU1//8AAAAABRAGtAImACQAAAEHAU0APwFSAAizAhIFJgArNf//AF7/7APNBWICJgBEAAABBgFN9QAACLMCKBEmACs1//8AAAAABRAHNwImACQAAAEHAU4AKwFSAAizAg8FJgArNf//AF7/7APNBeUCJgBEAAABBgFO5AAACLMCJREmACs1//8AAP5CBREFvAImACQAAAAHAVEDoAAA//8AXv5CBAAEWgImAEQAAAAHAVECjwAA//8Aff/sBM8HcwImACYAAAEHAHYBCAFSAAizASAFJgArNf//AHP/7AOLBiECJgBGAAABBgB2RAAACLMBIBEmACs1//8Aff/sBM8HcwImACYAAAEHAUsArAFSAAizASUFJgArNf//AHP/7AOLBiECJgBGAAABBgFL1AAACLMBJREmACs1//8Aff/sBM8HMQImACYAAAEHAU8CGwFSAAizASAFJgArNf//AHP/7AOLBd8CJgBGAAABBwFPAVAAAAAIswEgESYAKzX//wB9/+wEzwdzAiYAJgAAAQcBTADBAVIACLMBIgUmACs1//8Ac//sA6EGIQImAEYAAAEGAUzzAAAIswEiESYAKzX//wDJAAAFWAdzAiYAJwAAAQcBTABYAVIACLMCHQUmACs1//8Ac//sBYEGFAImAEcAAAEHAjgDDAAAAAeyAiMAAD81AP//AC8AAAVIBbYCBgCSAAAAAgBz/+wE0wYUABoAJwBkQDclBhIOAB4eFRkWGRAGBCgpGhUYEBEQR1kVDxEfES8RAwkDEREJEwABDAMJCSJGWQkQAxtGWQMWAD8rABg/KxESADk5GD8SOS9fXl0zKxEAMxg/ERIBFzkRMzMRMzMzETMxMCUjBiMiAhEQEjMyFzMmNTUhNSE1MxUzFSMRIyUyNjU1NCYjIgYVFBYDmglz5dfv8Nbfdw0L/kABwKacnIf+nqqZm6qSm5qTpwEmAQ8BDwEsolNJhYG4uIH7JXe5ziPpx+PP0tb//wDJAAAD+Aa0AiYAKAAAAQcBTQASAVIACLMBDwUmACs1//8Ac//sBBIFYgImAEgAAAEGAU0KAAAIswIeESYAKzX//wDJAAAD+Ac3AiYAKAAAAQcBTgAQAVIACLMBDAUmACs1//8Ac//sBBIF5QImAEgAAAEGAU77AAAIswIbESYAKzX//wDJAAAD+AcUAiYAKAAAAQcBTwFvATUACLMBFQUmACs1//8Ac//sBBIF3wImAEgAAAEHAU8BVAAAAAizAiQRJgArNf//AMn+QgP4BbYCJgAoAAAABwFRAnMAAP//AHP+YQQSBFwCJgBIAAAABwFRAmYAH///AMkAAAP4B3MCJgAoAAABBwFMABABUgAIswEXBSYAKzX//wBz/+wEEgYhAiYASAAAAQYBTPsAAAizAiYRJgArNf//AH3/7AU9B3MCJgAqAAABBwFLAOkBUgAIswEqBSYAKzX//wAn/hQEMQYhAiYASgAAAQYBS8oAAAizA1ARJgArNf//AH3/7AU9BzcCJgAqAAABBwFOAQABUgAIswEcBSYAKzX//wAn/hQEMQXlAiYASgAAAQYBTs4AAAizA0IRJgArNf//AH3/7AU9BzECJgAqAAABBwFPAmQBU
gAIswElBSYAKzX//wAn/hQEMQXfAiYASgAAAQcBTwEfAAAACLMDSxEmACs1//8Aff47BT0FywImACoAAAAHAjkBJwAA//8AJ/4UBDEGIQImAEoAAAEGAjpEAAAIswNGESYAKzX//wDJAAAFHwdzAiYAKwAAAQcBSwCWAVIACLMBGgUmACs1//8AsAAABEQHqgImAEsAAAEHAUsAHwGJAAizASUCJgArNQACAAAAAAXnBbYAEwAXAFRALBcDDw8AEBQEDAwHCwgLEBIEGBkXDklZFgoSExJKWQcDExcTFxMBDBASBQEDAD8zPzMSOTkvLxEzMysRADMzKxESARc5ETMzETMzETMzETMzMTATNTMVITUzFTMVIxEjESERIxEjNQE1IRXJqgMCqsjIqvz+qskEdfz+BL74+Pj4jfvPArD9UAQxjf6K6ekAAQAUAAAERAYUAB4AWUAyFhQQCAgNCQAeHhIJCwQfIBcWGgRGWRMLDAtHWRAMDwwfDC8MAxYaDAwaFgMJDgAACRUAPzM/Ehc5Ly8vXREzKxEAMysRADMREgEXOREzETMzETMzMzEwIRE0JiMiBhURIxEjNTM1MxUhFSEVFAczNjYzMhYVEQOeeoKunqacnKYBwf4/CAoxtXTJyQKehoS61f3nBNt/urp/xFQ4T1u/0v1c////4gAAAsoHLwImACwAAAEHAVL+2gFSAAizARUFJgArNf///5AAAAJ4Bd0CJgDzAAABBwFS/ogAAAAIswENESYAKzX//wAqAAACgga0AiYALAAAAQcBTf79AVIACLMBDwUmACs1////2gAAAjIFYgImAPMAAAEHAU3+rQAAAAizAQcRJgArNf//AB4AAAKKBzcCJgAsAAABBwFO/vkBUgAIswEMBSYAKzX////MAAACOAXlAiYA8wAAAQcBTv6nAAAACLMBBBEmACs1//8AVP5CAlYFtgImACwAAAAGAVFoAP//ADX+QgGBBd8CJgBMAAAABgFREAD//wBUAAACVgcxAiYALAAAAQcBTwBQAVIACLMBFQUmACs1AAEAsAAAAVYESAADABZACQABAQUEAg8BFQA/PxESATkRMzEwISMRMwFWpqYESP//AFT+fwQQBbYAJgAsAAAABwAtAqgAAP//AKL+FANsBd8AJgBMAAAABwBNAgYAAP///2D+fwJlB3MCJgAtAAABBwFL/rcBUgAIswEcBSYAKzX///+R/hQCTwYhAiYCNwAAAQcBS/6hAAAACLMBGxEmACs1//8Ayf47BOkFtgImAC4AAAAHAjkAiQAA//8AsP47BB0GFAImAE4AAAAGAjkrAAABALAAAAQbBEYADQAvQBkNCwcHCAMBAgUIBQ4PAg0FBgQIAAkPBAgVAD8zPzMSFzkREgEXOREzETMzMTABMwEBIwEHESMRMxEUBwMvz/5iAbvJ/peHsrIMBEb+Hv2cAfhx/nkERv7lpnH//wDJAAAD+AdzAiYALwAAAQcAdv9jAVIACLMBDwUmACs1//8AowAAAiwHrAImAE8AAAEHAHb/GgGLAAizAQ0CJgArNf//AMn+OwP4BbYCJgAvAAAABgI5MQD//wBZ/jsBVwYUAiYATwAAAAcCOf7oAAD//wDJAAAD+AW3AiYALwAAAQcCOAEd/6MAB7IBCQMAPzUA//8AsAAAAqAGFAImAE8AAAEGAjgrAAAHsgEHAAA/NQD//wDJAAAD+AW2AiYALwAAAAcBTwIE/Wf//wCwAAACqAYUACYATwAAAAcBTwFC/TgAAQAdAAAD+AW2AA0APUAhBwsLBAAMCQADBA8OCQcECgMBBggCCAIIAAUDAAtJWQASAD8rABg/Ejk5Ly8SFzkREgEXOREzMxEzMTAzEQcnNxEzESUXBREhFclpQ6yqASlD/pQChQH8O3JlAx79Rq550/48mgAB//wAAAInBhQACwA3QBwABAQJBQUMAg0IDAACCQMIBgYBBwEHAQUKAAUVAD8/Ejk5Ly8SFzkRATMRMxI5ETMzETMxMAE3FwcRIxEHJzcRMwFWiUjRpm5GtKYDYF5wjf0/AlRIcXcDIAD//wDJAAAFPwdzAiYAMQAAAQcAdgECAVIACLMBGgUmACs1//8AsAAABEQGIQImAFEAAAEGAHZ5AAAIswEeESYAKzX//wDJ/jsFPwW2AiYAMQAAAAcCOQDNAAD//wCw/jsERARcAiYAUQAAAAYCOVYA//8AyQAABT8HcwImADEAAAEHAUwApgFSAAizARwFJgArNf//ALAAAAREBiECJgBRAAABBgFMHwAACLMBIBEmACs1//8AAQAABMsFtgAnAFEAhwAAAQYCB+gAAAeyARwDAD81AAABAMn+fwU/BbYAGQA4QBwQDQ0OCBQUFxcCDgMaGxIKDhUPAw4SAAVJWQAiAD8rABg/PzMSOTkREgEXOREzETMRMxEzMTABIic1FjMyNjUBIxIVESMRMwEzJjURMxEUBgPJYjZHU2lq/MAIEJ3AAx0IDp/B/n8bkRR6bwTL/vie/NsFtvtOleADPfpYw8wAAQCw/hQERARcAB0AOEAeEw8PEAcbGwIQAx4fFwtGWRcQExARDxAVAAVGWQAbAD8rABg/PxI5PysREgEXOREzETMRMzEwASInNRYzMjURNCYjIgYVESMRMxczNjYzMhYVERQGAyVWNzw+jHqCrKCmhxsKNLRuy8eM/hQZhxSsA3mGhLrW/cEESJZSWL/S/I2aqv//AH3/7AW+BrQCJgAyAAABBwFNAMcBUgAIswIbBSYAKzX//wBz/+wEYgViAiYAUgAAAQYBTRIAAAizAhwRJgArNf//AH3/7AW+BzcCJgAyAAABBwFOAMEBUgAIswIYBSYAKzX//wBz/+wEYgXlAiYAUgAAAQYBTg4AAAizAhkRJgArNf//AH3/7AW+B3MCJgAyAAABBwFTARQBUgAKtAMCKwUmACs1Nf//AHP/7ARiBiECJgBSAAABBgFTWgAACrQDAiwRJgArNTUAAgB9/+wG5wXNABQAHwBTQC4YBg8TEx0ADREdBgUgIQ8SSVkPDwALCw5JWQsDCRVJWQkEAxtJWQMSABNJWQASAD8rABg/KwAYPysAGD8rERIAORgvKxESARc5ETMRMxEzMTAhIQYjIAAREAAhMhchFSERIRUhESEBIgAREAAzMjcRJgbn/QBmXP65/p8BXAFAZloDDv2zAif92QJN/ET5/v8BAfdwV1cUAYkBagFoAYYXl/4plv3mBJ3+z/7Z/tf+zSEEdR4AAwBx/+wHHwRaAB4AKgAxAFVALR8IDgIWFiUvFRUcJQgEMjMrKAsoRlkuFkZZAgUOCy4uBRELEBgiBSJGWQAFFgA/MysRADMYPzMSOS8SORI5KysRADMREgEXOREzETMSOTkRMzEwBSAnBgYjIgAREAAzMhYXNjYzMhIVFSESITI2NxUGBgEUFjMyNjU0JiMiBiUiBgchNCYFlv7bfT7Rid/+9AEG64PNPjrAfsnu/ScIAUpeoVdYmPshmKejmZulppUER3+RDAIghBTrdHcBMQEIAQkB
LHdycHn+9+Jp/ncjJ5QnIAI509vV0d3V2Niknp6k//8AyQAABM8HcwImADUAAAEHAHYAeQFSAAizAh8FJgArNf//ALAAAAMnBiECJgBVAAABBgB23AAACLMBGhEmACs1//8Ayf47BM8FtgImADUAAAAGAjl9AP//AGD+OwMnBFwCJgBVAAAABwI5/u8AAP//AMkAAATPB3MCJgA1AAABBwFMABsBUgAIswIhBSYAKzX//wCCAAADJwYhAiYAVQAAAQcBTP92AAAACLMBHBEmACs1//8Aav/sBAIHcwImADYAAAEHAHYAUAFSAAizAS4FJgArNf//AGr/7ANzBiECJgBWAAABBgB26gAACLMBLhEmACs1//8Aav/sBAIHcwImADYAAAEHAUv/6gFSAAizATMFJgArNf//AGr/7ANzBiECJgBWAAABBgFLlwAACLMBMxEmACs1//8Aav4UBAIFywImADYAAAAHAHoBJwAA//8Aav4UA3MEXAImAFYAAAAHAHoA1QAA//8Aav/sBAIHcwImADYAAAEHAUz/5AFSAAizATAFJgArNf//AGr/7ANzBiECJgBWAAABBgFMmQAACLMBMBEmACs1//8AEv47BFoFtgImADcAAAAGAjkZAP//AB/+OwKoBUYCJgBXAAAABgI5ggD//wASAAAEWgdzAiYANwAAAQcBTP/cAVIACLMBEwUmACs1//8AH//sAtcGFAImAFcAAAEGAjhiAAAHsgEaAAA/NQAAAQASAAAEWgW2AA8AP0AhBwsLAAwECQwOAgUQEQoODw5KWQcPDwMMEgYCAwJJWQMDAD8rEQAzGD8SOS8zKxEAMxESARc5ETMzETMxMAERITUhFSERIRUhESMRITUB4f4xBEj+MQE2/sqq/scDLwHwl5f+EI39XgKijQABAB//7AKoBUYAHABMQCkXExsbDAgCFRkICg4GHR4OFhMWR1kaCgsKR1kXCwsGEUATDwYARlkGFgA/KwAYPxrNEjkvMysRADMrEQAzERIBFzkRMzMRMzMxMCUyNxUGBiMgETUjNTMRIzU3NzMVIRUhESEVIRUUAhdVPCBqKv7IjY2dnUZgAT7+wgEt/tN1FH8OEAFc/oEBAFBF6v6B/wCB9N0A//8Auv/sBRkHLwImADgAAAEHAVIAbwFSAAizARsFJgArNf//AKT/7AQ5Bd0CJgBYAAABBgFS9wAACLMBHhEmACs1//8Auv/sBRkGtAImADgAAAEHAU0AkQFSAAizARUFJgArNf//AKT/7AQ5BWICJgBYAAABBgFNGQAACLMBGBEmACs1//8Auv/sBRkHNwImADgAAAEHAU4AiwFSAAizARIFJgArNf//AKT/7AQ5BeUCJgBYAAABBgFOEgAACLMBFREmACs1//8Auv/sBRkH1wImADgAAAEHAVAAnAFSAAq0AgEVBSYAKzU1//8ApP/sBDkGhQImAFgAAAEGAVAjAAAKtAIBGBEmACs1Nf//ALr/7AUZB3MCJgA4AAABBwFTAOEBUgAKtAIBJQUmACs1Nf//AKT/7AQ5BiECJgBYAAABBgFTaAAACrQCASgRJgArNTX//wC6/kIFGQW2AiYAOAAAAAcBUQIhAAD//wCk/kIEZQRIAiYAWAAAAAcBUQL0AAD//wAbAAAHTAdzAiYAOgAAAQcBSwFUAVIACLMBKAUmACs1//8AFwAABiMGIQImAFoAAAEHAUsAwQAAAAizASsRJgArNf//AAAAAAR7B3MCJgA8AAABBwFL/+ABUgAIswEXBSYAKzX//wAC/hQEBgYhAiYAXAAAAQYBS60AAAizASQRJgArNf//AAAAAAR7ByUCJgA8AAABBwBq//EBUgAKtAIBHgUmACs1Nf//AFIAAAQ/B3MCJgA9AAABBwB2AEIBUgAIswETBSYAKzX//wBSAAADbQYhAiYAXQAAAQYAdugAAAizARMRJgArNf//AFIAAAQ/BzECJgA9AAABBwFPAUQBUgAIswETBSYAKzX//wBSAAADbQXfAiYAXQAAAQcBTwDfAAAACLMBExEmACs1//8AUgAABD8HcwImAD0AAAEHAUz/7QFSAAizARUFJgArNf//AFIAAANtBiECJgBdAAABBgFMhgAACLMBFREmACs1AAEAsAAAAtsGHwAMAB1ADgABAQ0GDgQJRlkEAAEVAD8/KxEBMxI5ETMxMCEjERAhMhcHJiMiBhUBVqYBZ2BkK1dJYVkEnAGDJYUee3oAAAEAw/4UBBcFywAgAERAJBoeHgwIEhwICgIFISIdCgwKRlkaDAwQABAWRlkQBAAFRlkAGwA/KwAYPysREgA5GC8zKxEAMxESARc5ETMzETMxMAEiJzUWMzI2NREjNTc1NDYzMhcHByYjIgYVFSEVIREUBgFIRUBGPV9N3t6itlV4FhVmPGJQARr+6p7+FBOLEmZxA81LPIvDsitAQSBpfJWB/De4rwAEAAAAAAUUB6oAEAAYACIALgBhQDQRBQQYBhQHBAMHCCMAKQsICwkiFAIAHQMJMC8mDiwCCRgGSVkJFA4YIg4YGA4iAwgcBAgSAD8zLxIXOS8vLxESOTkrEQAzMxEzERIBFzkRMxEzETMRMxESOTkROTkxMAEUBwEjAyEDIwEmNTQ2MzIWEwMmJwYGBwMTNjY3MxUGBgcjEzQmIyIGFRQWMzI2A2hoAhSusP2epq4CFGp6Y2R9G7IZLw4wCbGYMWYXyyCoQm/TQjMzQjw5NUAFloU4+ycBkf5vBNc0iGVydfw2AbA6kTCHGP5UBIU7lSoQLqEt/vU5PDw5Nz09AAUAXv/sA80HqgAJACQALwA7AEcAZ0A3LRJCNjwwKRUVCyQkBjAANh0SB0hJCQkEPzlFMxELDBUpR1kMFRUPICAZRlkgEA8lRlkPFgoVBAAvPz8rABg/KxESADkYLzkrEQAzGD8zxDIROS8REgEXOREzMxEzETMRMxEzMTABNTY2NyEVBgYHAScjBgYjIiY1ECU3NTQmIyIGByc2NjMyFhURJTI2NTUHBgYVFBYBFAYjIiY1NDYzMhYHNCYjIgYVFBYzMjYB1y5qFgEEFaSAAQIhCFKjeqO5Ahm0d4Vgp0c3VNBl0cn+DpuxpsavbQGqe2ZleXllZXxtQTMzQjw5NEAG2RAqeB8MGGlE+SecZ0momwFMEAZEgno0IH8rM67A/RR1qpljBwdtc1peBT1id3RjYnN3Xjg9PTg4PT0A/////gAABoEHcwImAIgAAAEHAHYCTAFSAAizAh0FJgArNf//AF7/7AZzBiECJgCoAAABBwB2AYUAAAAIswNFESYAKzX//wB9/8MFvgdzAiYAmgAAAQcAdgEZAVIACLMDLQUmACs1//8Ac/+8BGIGIQImALoAAAEGAHZWAAAIswMtESYAKzX//wBq/jsEAgXLAiYANgAAAAYCOQYA//8Aav47A3MEXAImAFYAAAAGAjm5AAABAQwE2QOuBiEADgAYQAkHABAPCwSADgkALzMazTIREgE5OTEwATY2NzMWFhcVIyYnBgcjAQx/ZhemFm19d1i
AD//wC6/+wFGQfhAiYAOAAAAQcCZgVUAVIACLMBFgUmACs1//8ApP/sBDkGjwImAFgAAAEHAmYE1QAAAAizARkRJgArNf//ALr/7AZ7B3MCJgJhAAABBwB2AO4BUgAIswElBSYAKzX//wCk/+wFlgYhAiYCYgAAAQYAdnkAAAizASYRJgArNf//ALr/7AZ7B3MCJgJhAAABBwBDAFoBUgAIswEdBSYAKzX//wCk/+wFlgYhAiYCYgAAAQYAQ7sAAAizAR8RJgArNf//ALr/7AZ7B+ECJgJhAAABBwJmBWABUgAIswEgBSYAKzX//wCk/+wFlgaPAiYCYgAAAQcCZgTbAAAACLMBIhEmACs1//8Auv/sBnsHLwImAmEAAAEHAVIAfwFSAAizASUFJgArNf//AKT/7AWWBd0CJgJiAAABBgFS/wAACLMBHhEmACs1//8Auv6gBnsGFAImAmEAAAAHAmcFTAAA//8ApP6gBZYE8gImAmIAAAAHAmcEsgAA//8AAP6gBHsFtgImADwAAAAHAmcEnAAA//8AAv4UBAYESAImAFwAAAAHAmcFnv/9//8AAAAABHsH4QImADwAAAEHAmYEqgFSAAizAQ0FJgArNf//AAL+FAQGBo8CJgBcAAABBwJmBGoAAAAIswEaESYAKzX//wAAAAAEewcvAiYAPAAAAQcBUv/CAVIACLMBEgUmACs1//8AAv4UBAYF3QImAFwAAAEGAVKKAAAIswEfESYAKzX//wBz/sUE0wYUAiYA0wAAAAcAQgC0AAAAAvvlBNn+tAYhAAkAEwAeQAwECg4OAAAVDwaACwEALzMazTIRATMRMxI5OTEwASMmJic1MxYWFwUjJiYnNTMWFhf+tGA0sSW6HGMx/pxgOK4luxxjMQTZKso/FT2uRBksyD8VPa5EAAAC/HEE2f+uBn8ADQAVAChAERUABhERFwMGChUKFQoRwAYBAC8zGsw5OS8vERI5EQEzETM5OTEwASMmJwYHIzU3NjczFhcnNjczFQYHI/7TXnBjcmFeNXA0sEKXUEk2rFN4YATZS1tlQRk8e01epsJbcBVuYAAAAvuaBNn+1wZ/AA0AFQAqQBIGDhERAAAXAwYKDwoPChPABgEALzMazDk5Ly8REjkRATMRMxI5OTEwASMmJwYHIzU3NjczFhclIyYnNTMWF/7XXmFyamleNXA0sEKX/e5feFSsNEsE2UFlYEYXPHtNXqasXnAVbGEAAvxxBNn/ewb4AA0AHwA0QBgQEwATGwMGBhYODiEDCgYSChIKGR7ABgEALzMazDI5OS8vERI5EQEzETMzEhc5ETMxMAEjJicGByM1NzY3MxYXExQHByMnNjY1NCYjIgc1NjMy/tNecGNyYV41cDSwQpeofwZQCjk/OSsuGhk3wwTZS1tlQRk8e01epgF7Zx1RgwkgJiUZBlAGAAL8aATZ/ucHEAAXACUAOkAbGB4JCRUVJxseIh4ZEQkABQwiAAwMACIDFcAZAC8azBc5Ly8vETMQxDMRMxESOREBMxEzEjk5MTABIi4CIyIGByM2NjMyHgIzMjY3MwYGEyMmJwYHIzU3NjczFhf+LSVHQz8cKCoOWw1lSyVJQz4bKCoMWgtjXl5hcmppXjVwNLBClwY1HiUeMTJqcR4kHjExaHP+pEFlYEYXPHtNXqYAAvx5BNn+xwbBAAcAFAAkQA8HBAoKEhIWA0AHEQqADggALzMa3TLUGs0RATMRMxI5OTEwATY3MxUGByMTIAMzFhYzMjY3MwYG/V5QMaxWd2A+/uwPZglMamJWCGkLlQX0aGUVcl3+/AEESDlBQHiMAAL8eQTZ/scGwQAHABQAJEAPBwQKChISFgRAAREKgA4IAC8zGt0y1BrNEQEzETMSOTkxMAEjJic1MxYXAyADMxYWMzI2NzMGBv3RXndWrDRLNf7sD2YJTGpiVghpC5UF3V1yFWxh/uUBBEg5QUB4jAAC/HkE2f7HBwYAEQAeAC5AFQgAAAUNAxQUHBwgCxAEBBgYGxSAEgAvGs0yMxE5L8QyEQEzETMSFzkRMzEwARQHByMnNjY1NCYjIgc1NjMyAyADMxYWMzI2NzMGBv4xfwZSCjlCOSwlJBY+wJX+7A9mCUxqYlYIaQuVBnlkHSlaCSAlJRoGTgj90wEESDlBQHiMAAL8aATZ/ucHDAAXACQAMEAVGiIJCRUmBQwMHh4YFUARCQAhGoAYAC8a3TLWxDMazREzETkvMxEBMzIROTkxMAEiLgIjIgYHIzY2MzIeAjMyNjczBgYDIAMzFhYzMjY3MwYG/i0lR0M/HCgqDlsNZEwlSUM+GygqDFoLY93+7A9mCUxqYlYIaQuVBjMeJB4wMmhxHiQeMTFncv6mAQRIOUFAeIwAAQAx/kIBbQAAAA8AGkALAAUFAgoDEBENCAMAL8wyERIBFzkRMzEwFzQnMxYVFAYjIic1FjMyNt+Le55mY0EyIDYlM+5nh3iEW2cQbAowAAABABn+dQFxAJoACwAYQAkKAAYADA0IAwAAL8wyERIBOTkRMzEwJREQIyInNRYzMjURAXHkODwpPV6a/t/+/BiME2QBMAAAAQAZ/nUBcQCPAAsAGEAJCgAGAAwNCAMAAC/MMhESATk5ETMxMCURECMiJzUWMzI1EQFx5Dg8KT1ej/7q/vwYjBNkASUA//8ANAAAAkMFtgAHABT/eAAAAAIAc//sBBcEcwALABcAKEAUDAYSAAYAGBkJFUtZCSYDD01ZAxkAPysAGD8rERIBOTkRMxEzMTABEAIjIgIREBIzMhIBFBYzMjY1NCYjIgYEF/fe2fb52tj5/QSbjo2eno+NmgIv/vX+yAE1AQ4BDwE1/sv+8dDo6s7M7OkAAAEALQAAAjcEXgAKACZAEQkBAQAIAAsMBwQHBAEJEAEYAD8/Ejk5Ly8REgE5OREzETMxMCEjETQ3BgcHJwEzAjehCEM+lloBf4sCMe+MQzBwcgEjAAEAKQAAA9cEcwAZACxAGAcTABMXDgEFGhsQCktZECYYFwEXTFkBGAA/KxEAMxg/KxESARc5ETMxMCEhNQE+AjU0JiMiBgcnNjMyFhUUBgcFFyED1/xSAZGdcSyLd1icXFrA8sbagrr+uQICvoUBL3doU0FXZz1KbaiolnO7gOcGAAABAF7+lQQbBHQAJwBHQCYDBBsAEwcHAAQWIg0GKCkEFxYXFktZFxcKJSUeS1klJgoRS1kKJQA/KwAYPysREgA5GC8rERIAORESARc5ETMRMxEzMTABFAYHFRYWFRQEISImJzUWFjMgERAhIzUzMjY1NCYjIgYHJzY2MzIWA+6dkLCq/t7+9XTBW1/XYAF7/l6QkqvIk35gqm1UWuuC1ewDB4yyHggWtJLR4SMsni8xASkBCo+Xhmt6NEZwR1HDAAACABf+qARmBF4ACgASAEJAIRIFCQICCwcDAAMFAxMUAQUSBU1ZCRIODw8HEhIDBxADJAA/PxI5LxI5ETMRMysRADMREgEXOREzMzMRMxEzMTAlIxEjESE1ATMRMyERNDcjBgcB
BGbZqP0yAr642f6GDAopRP45G/6NAXN9A8b8RAFc2t5WXP2eAAABAIX+lQQdBF8AGgA6QB8PAxkUCBQXAwQcGwARS1kAAAYVFRhMWRUQBgxLWQYlAD8rABg/KxESADkYLysREgEXOREzETMxMAEyBBUUACMiJzUWFjMyNjUQISIHJxMhFSEDNgIt5wEJ/t/+94JG0GWww/6JXqBWNwLX/bclcwIm5cfj/v5PoC0zpp0BMh03AqyZ/kkXAP//AHX/7AQvBcsCBgAZAAAAAQBe/qkEKwRfAAYAH0AQAQUFAAIDBwgDAkxZAxAAJAA/PysREgEXOREzMTABASE1IRUBAR0CXvzjA839qv6pBR2ZhfrP//8AaP/sBCkFywIGABsAAAACAGr+lQQlBHQAFwAlAEFAIhsRIgoKAAAEEQMmJw4eTVkKFA4OAhQUGEtZFCYCB01ZAiUAPysAGD8rERIAORgvEjkrERIBFzkRMxEzETMxMAEQISInNRYzMhITIwYGIyImNTQSMzIWEgEiBhUUFjMyNjY1NCYmBCX9aHREUGbw9QsMN7ZywuT/0JXfeP4Uj5yQk1uZWFKTAe/8phSPGgEpATNTV+jQ5AEImf7bATC4pJClSoBGabJmAP//AB0AAAXEBh8AJwBJArYAAAAGAEkAAAACAFwC3QWqBcEAIgAzAFpALiwwMC4qJiYoCgAcEQURFgAoLgY1NCsxJAMtLy0pLyMjKBwKFAgDAygpGRQUKQMAPzMvMxDNMi8zEjk5ETMRMxEzERIXORESARc5ETMRMxEzETMRMxEzMTABFAYjIic1FjMyNTQmJicmJjU0NjMyFwcmIyIGFRQWFhcWFgEDIxcRIxEzExMzESMRNyMDAkiVfJFKaneUFzZVeFGObn1cImRTPEsSK1+BUAGmyQgGd7zDy7R/BgjTA6xibSFsKGQhKCEfLFtMVmknYyUuKB0kHCQyWv7sAi+B/lIC0f3RAi/9LwGkif3T//8AEv4UBFoFtgImADcAAAAHAHoBPwAA//8AH/4UAqgFRgImAFcAAAAHAHoAxQAAAAIAcf4UBDcEXAAMACoAR0AmChUaAyoqHh4kFQMrLCEnRlkkIRscDxoPGBIYB0ZZGBASAEZZEhYAPysAGD8rERIAOTkYPz8zKxESARc5ETMRMzMRMzEwJTI2NzU0JiMiBhUUFgU0NyMGIyICERASMzIXMzczERQGIyInNRYWMzI2NQJMqpcEnquQmZcB2wkLcObZ7/PT33sLGIPs+fKVS9J2jqV3t8or4szg0NHZayRjpwEtAQoBCAExppL7pOzsRp4qLqmS//8Acf4UBDcGIQImA5EAAAEGAUsGAAAIswI5ESYAKzX//wBx/hQENwXlAiYDkQAAAQYBTgwAAAizAisRJgArNf//AHH+FAQ3Bd8CJgORAAABBwFPAVYAAAAIswI0ESYAKzX//wBx/hQENwYhAiYDkQAAAQYCOncAAAizAi8RJgArNQABAMkAAAFzBbYAAwARtgAEBQEDABIAPz8REgE5MTAzETMRyaoFtvpKAP//AAUAAAGOB3MCJgOWAAABBwBD/nwBUgAIswEFBSYAKzX//wCzAAACPAdzAiYDlgAAAQcAdv8qAVIACLMBDQUmACs1////xwAAAmkHcwImA5YAAAEHAUv+uwFSAAizARIFJgArNf//AAUAAAI4ByUCJgOWAAABBwBq/tABUgAKtAIBGQUmACs1Nf///6sAAAKTBy8CJgOWAAABBwFS/qMBUgAIswENBSYAKzX////zAAACSwa0AiYDlgAAAQcBTf7GAVIACLMBBwUmACs1////5wAAAlMHNwImA5YAAAEHAU7+wgFSAAizAQQFJgArNf//AFb+QgGiBbYCJgOWAAAABgFRMQD//wC7AAABfwcxAiYDlgAAAQcBTwAZAVIACLMBDQUmACs1//8Ayf5/A6MFtgAmA5YAAAAHAC0COwAA////5AAAAh0GCgAnA5YAqgAAAQcBVP3o/5cAB7IBCAAAPzUA//8AyQAAAXMFtgIGA5YAAP//AAUAAAI4ByUCJgOWAAABBwBq/tABUgAKtAIBGQUmACs1Nf//AMkAAAFzBbYCBgOWAAD//wAFAAACOAclAiYDlgAAAQcAav7QAVIACrQCARkFJgArNTX//wDJAAABcwW2AgYDlgAA//8AyQAAAXMFtgIGA5YAAP//AJkAAAIEB+ECJgOWAAABBwJmA5EBUgAIswEIBSYAKzX//wC4/qABfwW2AiYDlgAAAAcCZwN9AAAAAQAAA6oAigAWAFYABQACABAALwBcAAABDgD4AAMAAQAAAB8AHwAfAB8AUQB3AP8BewHsAmoCgwKuAtkDFQNBA18DdAOWA68D8QQaBFsEuQT7BUYFowXFBjQGkQbHBvsHGwdEB2QHuwhBCIAI2wkZCVUJigm4CggKOQpsCpQKwwrhCx8LVgucC9kMLAx5DMwM8A0kDUsNjw2/DeYOEg42Dk8Ocg6TDqkOyA8kD3kPtBAHEFQQlBEoEWYRlBHSEhASJxJ/ErkS+hNPE6MT1hQoFGgUpRTMFRcVRxWAFawV7hYGFksWhRaFFrYXARdTF6EX9RgaGJUYyxlHGZQZzxntGfUafxqVGs0a2RsTG2MbghvBG/EcExxFHGwcpRzdHPMdCB0eHXsdjB2dHa4dvx3RHd0eKx43HkgeWR5qHnwejR6eHq8ewR8ZHyofOx9MH10fbh+AH64gGSAqIDsgTCBeIG8gsSEYISghOCFIIVghaSF6IgUiESIhIjEiQSJSImMidCKFIpci/yMPIx8jLyM/I08jYCOmJAwkHCQsJDwkTSRdJLQkxSTWJOYk9yUHJRMlHyUwJUAlUSVhJXIlgyWUJaQltSXGJc4mOiZLJlsmbCZ8Jo0mniaqJrYmxybXJugm+CcJJxknKic7J0cnVydoJ3knySgiKDMoRChVKGYodyiIKJMoniivKMYo0ijeKO8pACkMKRcpTCldKW4peSmFKZYppimyKb4p+CotKj4qTipaKmUqdiqGKpcq3isnKzgrSCtZK2kreyuMK+8saSx6LIoslSyhLLIswyzULOQs9S0FLREtHS0uLT4tSS1ULWUtdS2yLgQuFS4lLjYuRi5XLmcueS6KLpwurS65LsUu1i7nLvgvCC8aLysvOy9ML10vbi9+L6Uv+DB3MRYxJzE4MUkxWTFkMW8xmDHBMdcx/zIfMlQyezK0MuYzBTNOM18zZzN4M4oznDOtM78z0DPjM+sz8zQSNBo0IjQqNDI0izSTNJs0wTTJNNE1BjUONTI1OjVxNXk1gTXoNfA2PDaQNqI2tDbENtQ25Db1Nwc3azfQOAY4ZzjFORI5TDmmOdI52josOjQ6XzrKOtI7EDtcO6g77TwlPF08uj0QPV89uT3LPdw97D38Pg0+Hz5vPoA+yj7SPto+7D70P1M/pj/lP/ZAB0A3QD9AhkCOQJZA30DnQSxBiUHBQdJCAUI8QkRCTEJUQlxCZEJsQnRCs0K7QsNC9EMrQ1tDlUPbRCNEYUSvRQ9FVkV
eRbpGFUY0RnxGhEbKRyNHW0drR5tH0UgUSElIUUh1SH1IhUiqSLJJE0kbSUxJg0m0Se9KNEp9SrhLCEtlS6lLukwlTDVMg0yLTJNMpUytTQZNWE1gTXBNgE2xTdZN/U4OTh5OL05ATlJOZE51ToZOm06wTrhO2k73TxVPHU86T2lPmk+0T/JQWlB6UIpRJFEsUTRRV1F7UYdRoFHTUhhShlL4U25T1FQsVKBU9FT8VUtVYlV5VZBVp1YKVj5WY1aXVq5W0lcyV2JX41gsWD5YUFh9WIlYlVi8WONZAlkhWUBZdVm3WfxaTVpuWtNbJ1snWydbJ1snWydbJ1snWydbJ1snWydbJ1snXHFczFzdXOVdbF2nXgteHF4tXjleRV5XXoxew17TXuNfQF+XX+BgMWA6YENgTGB6YJlgqmC7YMtg22FOYZlh7WI7Ypti/mM/Y4Bj1mQsZI9k9GVpZeBmjGcwZzhnQGedZ/ZoL2hnaHloi2kBaQ1pgGnzap1rO2vRbDpsfWy/bQNtM21gbYZtrG6QbxtvgW/fcDFwgnDXcUNxe3G0cgZyVXKocvtzB3MTc1BzjHPNdBB0WHSsdOZ1HnVddaJ13XYddnN2xndCd7l3xXfReAJ4NHg8eG94rXjxeTB5cXmueex6MHpzer97C3tDe3p76HxLfMF9LX01fUZ9V32sffx+RH6Hfsx/FX9Vf5Z/2oAegG+AvYDFgNaA5oD4gQmBEYEZgSqBOoGLgdqB7IH9gg+CIYIzgkSCkILaguuC+4MNgx6DMINBg0mDUYNjg3SDhoOXg6iDuIPKg9uD7YP+hBCEIYRMhHeEiYSbhKeEsoS+hMqFEIVWhZSFnIX2hmSGyYcnh4GH1IgriHmIxIkTiWaJsInvii2KioqSip6Kqoq2isKK04rkivaLCIsaiyyLPotQi2KLdIuJi52Lr4vBi9OL5Yv3jAmMG4wtjEKMVoxijG6Mf4yQjKGMsYzDjNWM54z5jQuNHY0vjUGNVo1qjXuNjI2YjaSNsI28jc2N3o3wjgKOFI4mjjiOSo5cjm6Og46XjqiOuI7JjtmO6o77jwyPHI8ojzSPQI9Mj12Pbo9/j4+PoI+wj8GP0o/jj/OP/5ALkBeQI5A0kEWQVpBmkHKQppDhkR2RapHCkfqSMpJ7ks2S9ZMYkzuTRJODk62T7pROlJOU3pTmlQmVEZVulXqV9pYClg6WcZaBlpGWopaylseW2JbplvqXDJcdly6XP5dKl1uXZ5d5l4GXk5ebl62XtZe9l86X2gAAAAEAAAABGdsfPbV9Xw889QAJCAAAAAAAyTUxiwAAAADVK8zV+5r91QmiCGIAAAAJAAIAAAAAAAAEzQDBAAAAAAQUAAACFAAAAiMAmAM1AIUFKwAzBJMAgwaWAGgF1wBxAcUAhQJeAFICXgA9BGoAVgSTAGgB9gA/ApMAVAIhAJgC8AAUBJMAZgSTALwEkwBkBJMAXgSTACsEkwCFBJMAdQSTAF4EkwBoBJMAagIhAJgCIQA/BJMAaASTAHcEkwBoA28AGwcxAHkFEAAABS8AyQUMAH0F1QDJBHMAyQQhAMkF0wB9BecAyQKqAFQCI/9gBOkAyQQnAMkHOQDJBggAyQY7AH0E0QDJBjsAfQTyAMkEZABqBG0AEgXTALoEwwAAB2gAGwSeAAgEewAABJEAUgKiAKYC8AAXAqIAMwRWADEDlv/8BJ4BiQRzAF4E5wCwA88AcwTnAHMEfQBzArYAHQRiACcE6QCwAgYAogIG/5EEMwCwAgYAsAdxALAE6QCwBNUAcwTnALAE5wBzA0QAsAPRAGoC0wAfBOkApAQCAAAGOQAXBDEAJwQIAAIDvgBSAwgAPQRoAe4DCABIBJMAaAIUAAACIwCYBJMAvgSTAD8EkwB7BJMAHwRoAe4EIQB7BJ4BNQaoAGQC1QBGA/oAUgSTAGgCkwBUBqgAZAQA//oDbQB/BJMAaALHADECxwAhBJ4BiQT0ALAFPQBxAiEAmAHRACUCxwBMAwAAQgP6AFAGPQBLBj0ALgY9ABoDbwAzBRAAAAUQAAAFEAAABRAAAAUQAAAFEAAABvz//gUMAH0EcwDJBHMAyQRzAMkEcwDJAqoAPAKqAFQCqv//AqoAPAXHAC8GCADJBjsAfQY7AH0GOwB9BjsAfQY7AH0EkwCFBjsAfQXTALoF0wC6BdMAugXTALoEewAABOMAyQT6ALAEcwBeBHMAXgRzAF4EcwBeBHMAXgRzAF4G3QBeA88AcwR9AHMEfQBzBH0AcwR9AHMCBv/aAgYAqQIG/7MCBv/sBMUAcQTpALAE1QBzBNUAcwTVAHME1QBzBNUAcwSTAGgE1QBzBOkApATpAKQE6QCkBOkApAQIAAIE5wCwBAgAAgUQAAAEcwBeBRAAAARzAF4FEAAABHMAXgUMAH0DzwBzBQwAfQPPAHMFDAB9A88AcwUMAH0DzwBzBdUAyQTnAHMFxwAvBOcAcwRzAMkEfQBzBHMAyQR9AHMEcwDJBH0AcwRzAMkEfQBzBHMAyQR9AHMF0wB9BGIAJwXTAH0EYgAnBdMAfQRiACcF0wB9BGIAJwXnAMkE6QCwBecAAATpABQCqv/iAgb/kAKqACoCBv/aAqoAHgIG/8wCqgBUAgYANQKqAFQCBgCwBM0AVAQMAKICI/9gAgb/kQTpAMkEMwCwBCUAsAQnAMkCBgCjBCcAyQIGAFkEJwDJAgYAsAQnAMkCgwCwBC8AHQIX//wGCADJBOkAsAYIAMkE6QCwBggAyQTpALAFcwABBggAyQTpALAGOwB9BNUAcwY7AH0E1QBzBjsAfQTVAHMHYgB9B4kAcQTyAMkDRACwBPIAyQNEAGAE8gDJA0QAggRkAGoD0QBqBGQAagPRAGoEZABqA9EAagRkAGoD0QBqBG0AEgLTAB8EbQASAtMAHwRtABIC0wAfBdMAugTpAKQF0wC6BOkApAXTALoE6QCkBdMAugTpAKQF0wC6BOkApAXTALoE6QCkB2gAGwY5ABcEewAABAgAAgR7AAAEkQBSA74AUgSRAFIDvgBSBJEAUgO+AFICjwCwBJ4AwwUUAAAEcwBeBvz//gbdAF4GOwB9BNUAcwRkAGoD0QBqBLwBDAS8AQwEsgEtBLwBJQIGAKIEngFvAZMAJQS8AQgEngDnBJ4B/ASeARsFEAAAAiEAmATy/9QGff/UA5j/5AaB/+QFhf/UBoH/5AK2/+kFEAAABS8AyQQpAMkEkwAnBHMAyQSRAFIF5wDJBjsAfQKqAFQE6QDJBNMAAAc5AMkGCADJBG0ASAY7AH0F1QDJBNEAyQSJAEoEbQASBHsAAAZiAGoEngAIBl4AbQZCAFACqgA8BHsAAATjAHMDzQBaBOkAsAK2AKgE3wCkBOMAcwUGALAEGQAKBKQAcQPNAFoD3QBzBOkAsAS8AHMCtgCoBCUAsARG//IE9ACwBFYAAAPNAHEE1QBzBTMAGQTVAKYD2wBzBOcAcwPJABIE3wCkBb4AcwRe/+wGBgCkBi8AcwK2AAkE3wCkBNUAcwTfAKQGLwBzBH
MAyQXfABIEKQDJBR0AfQRkAGoCqgBUAqoAPAIj/2AHbwAAB6AAyQXfABIE5QDJBPgAGwXVAMkFEAAABOcAyQUvAMkEKQDJBXcADgRzAMkGwQACBKYASgYZAMsGGQDLBOUAyQWiAAAHOQDJBecAyQY7AH0F1QDJBNEAyQUMAH0EbQASBPgAGwZiAGoEngAIBeUAyQWPAKoIQgDJCEQAyQWBABIG0wDJBSUAyQUKAD0IZgDJBRcAMwRzAF4ExQB3BI0AsANtALAEkwApBH0AcwXjAAQD3QBEBRIAsAUSALAEJwCwBJEAEAXhALAFEgCwBNUAcwT4ALAE5wCwA88AcwO8ACkECAACBbgAcQQxACcFAgCwBN0AnAcfALAHLQCwBY8AKQYpALAEvACwA/AAOQamALAEcQAlBH0AcwTpABQDbQCwA/AAcwPRAGoCBgCiAgb/7AIG/5EGsgAQBxcAsATpABQEJwCwBAgAAgT4ALAENwDJA20AsAdoABsGOQAXB2gAGwY5ABcHaAAbBjkAFwR7AAAECAACBAAAUggAAFIIAABSA0r//AFcABkBXAAZAfYAPwFcABkCzQAZAs0AGQM9ABkEBAB7BBQAewMCAKQGRgCYCZ4AZAHFAIUDJQCFAm8AUgJvAFAD4wCYAQr+eQMnAG0EkwBiBJMARAYbAJoEuAA/BpgAjQQpAHcIJwDJBjUAJQZCAFAE9ABmBj0ARwY9ACAGPQBHBj0AagSmAGYEkwAnBekAyQUMAEwEkwBoBGQAJQWkAHcDEgAMBJMAYgSTAGgEkwBoBJMAaASqAG8EvAAdBLwAHQSeANsCBv+RBAABiQQAAXEEAAGBAscAJwLHABQCxwA7AscAKQLHADkCxwAzAscAIwQAAAAIAAAABAAAAAgAAAACqgAAAgAAAAFWAAAEeQAAAiEAAAGaAAAAzQAAAAAAAAAAAAAIAABUCAAAVAIG/5EBXAAZBPoACgSFAAAGuAASBzkAyQdxALAFEAAABHMAXgZS/t8CqgB1AzMAmAd1AB0HdQAdBj0AfQTfAHMGJQC6BVIApAAA/FMAAP0NAAD8GQAA/QgAAP07BHMAyQYZAMsEfQBzBRIAsAgXAIUGjQAABWYAFwUOABcHWgDJBeMAsAVtAAAEgwAKB14AyQYhALAFxQAUBSMADAfLAMkGxQCwBKgAPwPdABkGXgBtBgYApAY9AH0E1QBzBQIAAAQMAAAFAgAABAwAAAmsAH0IfQBzBo0AfQVCAHMH/gB9BncAcwffAF4GjQAABR0AfQPnAHME3wBqBHUAywSeAPgEngHfBJ4B4QfpACkHpgApBikAyQUlALAE5wAvBLwAFATjAMkE5wCwBDcALwNtABIFIwDJBDMAsAcfAAIGPQAEBKYASgPdAEQFSgDJBFwAsATpAMkERACwBOkALwQjABQFgwAQBOwAKQX4AMkFLwCwBoEAyQXjALAIiQDJBuwAsAY7AH0FHwBzBQwAfQPPAHMEbQAQA7wAKQR7AAAEAgAABHsAAAQCAAAE9AAIBFYAJwbXABAFvAApBYkAqgTfAJwFjwCqBM0AnAWPAMkErgCwBrQAPQVGADMGtAA9BUYAMwKqAFQGwQACBeMABAWDAMkEZACwBaYAAASTABAF0QDJBO4AsAX2AMkFOQCwBY8AqgTdAJwHOwDJBeMAsAKqAFQFEAAABHMAXgUQAAAEcwBeBvz//gbdAF4EcwDJBH0AcwXXAHUEeQBmBdcAdQR5AGYGwQACBeMABASmAEoD3QBEBKoASgPpABsGGQDLBRIAsAYZAMsFEgCwBjsAfQTVAHMGPQB9BNUAcwY9AH0E1QBzBQoAPQPwADkE+AAbBAgAAgT4ABsECAACBPgAGwQIAAIFjwCqBN0AnAQ3AMkDbQCwBtMAyQYpALAENwAvA20AEgT4AAgEUgAnBJ4ABgQxACcE5wCDBOcAcwcxAIMHKwBzBzsATgZqAFAFAABOBC8AUAfZAAAGzwAQCBkAyQdOALAGDAB9BR8AcwWuABAFLQApBKoAbwPNAFoFmgAABJEAEAUQAAAEcwBeBRAAAARzAF4FEAAABHMAXgUQAAAEcwAtBRAAAARzAF4FEAAABHMAXgUQAAAEcwBeBRAAAARzAF4FEAAABHMAXgUQAAAEcwBeBRAAAARzAF4FEAAABHMAXgRzAMkEfQBzBHMAyQR9AHMEcwDJBH0AcwRzAMkEfQBzBHMAXQR9AEoEcwDJBH0AcwRzAMkEfQBzBHMAyQR9AHMCqgBUAgYAewKqAFQCBgCdBjsAfQTVAHMGOwB9BNUAcwY7AH0E1QBzBjsAfQTVAGEGOwB9BNUAcwY7AH0E1QBzBjsAfQTVAHMGPQB9BN8AcwY9AH0E3wBzBj0AfQTfAHMGPQB9BN8AcwY9AH0E3wBzBdMAugTpAKQF0wC6BOkApAYlALoFUgCkBiUAugVSAKQGJQC6BVIApAYlALoFUgCkBiUAugVSAKQEewAABAgAAgR7AAAECAACBHsAAAQIAAIE5wBzAAD75QAA/HEAAPuaAAD8cQAA/GgAAPx5AAD8eQAA/HkAAPxoAaQAMQGkABkBpAAZAy0ANASJAHMC9AAtBBQAKQSTAF4EjwAXBJMAhQSTAHUEkwBeBJMAaASTAGoFbQAdBloAXARtABIC0wAfBOcAcQTnAHEE5wBxBOcAcQTnAHECOwDJAjsABQI7ALMCO//HAjsABQI7/6sCO//zAjv/5wI7AFYCOwC7BF4AyQLl/+QCOwDJAAUAyQAFAMkAyQCZALgAAAABAAAIjf2oAAAJrPua/nsJogABAAAAAAAAAAAAAAAAAAADowADBLYBkAAFAAAFmgUzAAABHwWaBTMAAAPRAGYB8QgCAgsGBgMFBAICBOAAAu9AACBbAAAAKAAAAAAxQVNDAEAAIP/9Bh/+FACECI0CWCAAAZ8AAAAABEgFtgAAACAAAwAAAAEAAwABAAAADAAEA3wAAADGAIAABgBGAEgASQB+AMsAzwEnATIBYQFjAX8BkgGhAbAB8AH/AhsCNwK8AscCyQLdAvMDAQMDAwkDDwMjA4kDigOMA5gDmQOhA6kDqgPOA9ID1gQNBE8EUARcBF8EhgSPBJEEvwTABM4EzwUTHgEePx6FHsceyh7xHvMe+R9NIAsgFSAeICIgJiAwIDMgOiA8IEQgcCB5IH8gpCCnIKwhBSETIRYhICEiISYhLiFeIgIiBiIPIhIiGiIeIisiSCJgImUlyvsE/v///f//AAAAIABJAEoAoADMANABKAEzAWIBZAGSAaABrwHwAfoCGAI3ArwCxgLJAtgC8wMAAwMDCQMPAyMDhAOKA4wDjgOZA5oDowOqA6sD0QPWBAAEDgRQBFEEXQRgBIgEkASSBMAEwQTPBNAeAB4+HoAeoB7IHsse8h70H00gACATIBcgICAmIDAgMiA5IDwgRCBwIHQgfyCjIKcgqyEFIRMhFiEgISIhJiEuIVsiAiIGIg8iESIaIh4iKyJIImAiZCXK+wD+///8////4wNN/+P/wgLL/8IAAP/CAi3/w
v+wAL8AsgBh/0kAAAAA/5b+hf6E/nb/aP9j/2L/XQBn/0T90AAX/c/9zgAJ/c79zf/5/c3+gv5/AAD9mv4a/ZkAAP4M/gv9aP4J/ub+Cf7Y/gnkWOQY43rkfQAA5H3jDuR74w3iQuHv4e7h7eHq4eHh4OHb4drh0+HL4cjhmeF24XQAAOEY4QvhCeJu4P7g++D04MjgJeAi4BrgGeAS4A/gA9/n39DfzdxpAAADTwJTAAEAAAAAAAAAAAAAAAAAugAAAAAAAAAAAAAAAAAAAAAAvgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJgAAAAAAAAArAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAdgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFIAAAAAAAADmwDrA5wA7QOdAO8DngDxA58A8wOgAUkBSgEkASUCaAGcAZ0BngGfAaADpAOlAaMBpAGlAaYBpwJpAmsB9gH3A6gDRgOpA3UCHAONAjQCNQJdAl5AR1taWVhVVFNSUVBPTk1MS0pJSEdGRURDQkFAPz49PDs6OTg3NjUxMC8uLSwoJyYlJCMiIR8YFBEQDw4NCwoJCAcGBQQDAgEALCCwAWBFsAMlIBFGYSNFI2FILSwgRRhoRC0sRSNGYLAgYSCwRmCwBCYjSEgtLEUjRiNhsCBgILAmYbAgYbAEJiNISC0sRSNGYLBAYSCwZmCwBCYjSEgtLEUjRiNhsEBgILAmYbBAYbAEJiNISC0sARAgPAA8LSwgRSMgsM1EIyC4AVpRWCMgsI1EI1kgsO1RWCMgsE1EI1kgsAQmUVgjILANRCNZISEtLCAgRRhoRCCwAWAgRbBGdmiKRWBELSwBsQsKQyNDZQotLACxCgtDI0MLLSwAsCgjcLEBKD4BsCgjcLECKEU6sQIACA0tLCBFsAMlRWFksFBRWEVEGyEhWS0sSbAOI0QtLCBFsABDYEQtLAGwBkOwB0NlCi0sIGmwQGGwAIsgsSzAioy4EABiYCsMZCNkYVxYsANhWS0sigNFioqHsBErsCkjRLApeuQYLSxFZbAsI0RFsCsjRC0sS1JYRUQbISFZLSxLUVhFRBshIVktLAGwBSUQIyCK9QCwAWAj7ewtLAGwBSUQIyCK9QCwAWEj7ewtLAGwBiUQ9QDt7C0ssAJDsAFSWCEhISEhG0YjRmCKikYjIEaKYIphuP+AYiMgECOKsQwMinBFYCCwAFBYsAFhuP+6ixuwRoxZsBBgaAE6WS0sIEWwAyVGUkuwE1FbWLACJUYgaGGwAyWwAyU/IyE4GyERWS0sIEWwAyVGUFiwAiVGIGhhsAMlsAMlPyMhOBshEVktLACwB0OwBkMLLSwhIQxkI2SLuEAAYi0sIbCAUVgMZCNki7ggAGIbsgBALytZsAJgLSwhsMBRWAxkI2SLuBVVYhuyAIAvK1mwAmAtLAxkI2SLuEAAYmAjIS0sS1NYirAEJUlkI0VpsECLYbCAYrAgYWqwDiNEIxCwDvYbISOKEhEgOS9ZLSxLU1ggsAMlSWRpILAFJrAGJUlkI2GwgGKwIGFqsA4jRLAEJhCwDvaKELAOI0SwDvawDiNEsA7tG4qwBCYREiA5IyA5Ly9ZLSxFI0VgI0VgI0VgI3ZoGLCAYiAtLLBIKy0sIEWwAFRYsEBEIEWwQGFEGyEhWS0sRbEwL0UjRWFgsAFgaUQtLEtRWLAvI3CwFCNCGyEhWS0sS1FYILADJUVpU1hEGyEhWRshIVktLEWwFEOwAGBjsAFgaUQtLLAvRUQtLEUjIEWKYEQtLEUjRWBELSxLI1FYuQAz/+CxNCAbszMANABZREQtLLAWQ1iwAyZFilhkZrAfYBtksCBgZiBYGyGwQFmwAWFZI1hlWbApI0QjELAp4BshISEhIVktLLACQ1RYS1MjS1FaWDgbISFZGyEhISFZLSywFkNYsAQlRWSwIGBmIFgbIbBAWbABYSNYG2VZsCkjRLAFJbAIJQggWAIbA1mwBCUQsAUlIEawBCUjQjywBCWwByUIsAclELAGJSBGsAQlsAFgI0I8IFgBGwBZsAQlELAFJbAp4LApIEVlRLAHJRCwBiWwKeCwBSWwCCUIIFgCGwNZsAUlsAMlQ0iwBCWwByUIsAYlsAMlsAFgQ0gbIVkhISEhISEhLSwCsAQlICBGsAQlI0KwBSUIsAMlRUghISEhLSwCsAMlILAEJQiwAiVDSCEhIS0sRSMgRRggsABQIFgjZSNZI2ggsEBQWCGwQFkjWGVZimBELSxLUyNLUVpYIEWKYEQbISFZLSxLVFggRYpgRBshIVktLEtTI0tRWlg4GyEhWS0ssAAhS1RYOBshIVktLLACQ1RYsEYrGyEhISFZLSywAkNUWLBHKxshISFZLSywAkNUWLBIKxshISEhWS0ssAJDVFiwSSsbISEhWS0sIIoII0tTiktRWlgjOBshIVktLACwAiVJsABTWCCwQDgRGyFZLSwBRiNGYCNGYSMgECBGimG4/4BiirFAQIpwRWBoOi0sIIojSWSKI1NYPBshWS0sS1JYfRt6WS0ssBIASwFLVEItLLECAEKxIwGIUbFAAYhTWli5EAAAIIhUWLICAQJDYEJZsSQBiFFYuSAAAECIVFiyAgICQ2BCsSQBiFRYsgIgAkNgQgBLAUtSWLICCAJDYEJZG7lAAACAiFRYsgIEAkNgQlm5QAAAgGO4AQCIVFiyAggCQ2BCWblAAAEAY7gCAIhUWLICEAJDYEJZsSYBiFFYuUAAAgBjuAQAiFRYsgJAAkNgQlm5QAAEAGO4CACIVFiyAoACQ2BCWVlZWVlZsQACQ1RYQAoFQAhACUAMAg0CG7EBAkNUWLIFQAi6AQAACQEAswwBDQEbsYACQ1JYsgVACLgBgLEJQBuyBUAIugGAAAkBQFm5QAAAgIhVuUAAAgBjuAQAiFVaWLMMAA0BG7MMAA0BWVlZQkJCQkItLEUYaCNLUVgjIEUgZLBAUFh8WWiKYFlELSywABawAiWwAiUBsAEjPgCwAiM+sQECBgywCiNlQrALI0IBsAEjPwCwAiM/sQECBgywBiNlQrAHI0KwARYBLSywgLACQ1CwAbACQ1RbWCEjELAgGskbihDtWS0ssFkrLSyKEOUtQJkJIUggVSABHlUfSANVHx4BDx4/Hq8eA01LJh9MSzMfS0YlHyY0EFUlMyRVGRP/HwcE/x8GA/8fSkkzH0lGJR8TMxJVBQEDVQQzA1UfAwEPAz8DrwMDR0YZH+tGASMzIlUcMxtVFjMVVREBD1UQMw9VDw9PDwIfD88PAg8P/w8CBgIBAFUBMwBVbwB/AK8A7wAEEAABgBYBBQG4AZCxVFMrK0u4B/9SS7AJUFuwAYiwJVOwAYiwQFFasAaIsABVWltYsQEBjlmFjY0AQh1LsDJTWLAgHVlLsGRTWLAQHbEWAEJZc3MrK15z
dHUrKysrK3Qrc3QrKysrKysrKysrKysrc3QrKysYXgAAAAYUABcATgW2ABcAdQW2Bc0AAAAAAAAAAAAAAAAAAARIABQAkQAA/+wAAAAA/+wAAAAA/+wAAP4U/+wAAAW2ABP8lP/t/oX/6v6p/+wAGP68AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAAAAIsAgQDdAJgAjwCOAJkAiACBAQ8AigAAAAAADQCiAAMAAQQJAAAAcgAAAAMAAQQJAAEAEgByAAMAAQQJAAIADgCEAAMAAQQJAAMANACSAAMAAQQJAAQAIgDGAAMAAQQJAAUAGADoAAMAAQQJAAYAIAEAAAMAAQQJAAcApAEgAAMAAQQJAAgAKAHEAAMAAQQJAAsAOAHsAAMAAQQJAAwAXAIkAAMAAQQJAA0AXAKAAAMAAQQJAA4AVALcAEQAaQBnAGkAdABpAHoAZQBkACAAZABhAHQAYQAgAGMAbwBwAHkAcgBpAGcAaAB0ACAAqQAgADIAMAAxADAALQAyADAAMQAxACwAIABHAG8AbwBnAGwAZQAgAEMAbwByAHAAbwByAGEAdABpAG8AbgAuAE8AcABlAG4AIABTAGEAbgBzAFIAZQBnAHUAbABhAHIAMQAuADEAMAA7ADEAQQBTAEMAOwBPAHAAZQBuAFMAYQBuAHMALQBSAGUAZwB1AGwAYQByAE8AcABlAG4AIABTAGEAbgBzACAAUgBlAGcAdQBsAGEAcgBWAGUAcgBzAGkAbwBuACAAMQAuADEAMABPAHAAZQBuAFMAYQBuAHMALQBSAGUAZwB1AGwAYQByAE8AcABlAG4AIABTAGEAbgBzACAAaQBzACAAYQAgAHQAcgBhAGQAZQBtAGEAcgBrACAAbwBmACAARwBvAG8AZwBsAGUAIABhAG4AZAAgAG0AYQB5ACAAYgBlACAAcgBlAGcAaQBzAHQAZQByAGUAZAAgAGkAbgAgAGMAZQByAHQAYQBpAG4AIABqAHUAcgBpAHMAZABpAGMAdABpAG8AbgBzAC4AQQBzAGMAZQBuAGQAZQByACAAQwBvAHIAcABvAHIAYQB0AGkAbwBuAGgAdAB0AHAAOgAvAC8AdwB3AHcALgBhAHMAYwBlAG4AZABlAHIAYwBvAHIAcAAuAGMAbwBtAC8AaAB0AHQAcAA6AC8ALwB3AHcAdwAuAGEAcwBjAGUAbgBkAGUAcgBjAG8AcgBwAC4AYwBvAG0ALwB0AHkAcABlAGQAZQBzAGkAZwBuAGUAcgBzAC4AaAB0AG0AbABMAGkAYwBlAG4AcwBlAGQAIAB1AG4AZABlAHIAIAB0AGgAZQAgAEEAcABhAGMAaABlACAATABpAGMAZQBuAHMAZQAsACAAVgBlAHIAcwBpAG8AbgAgADIALgAwAGgAdAB0AHAAOgAvAC8AdwB3AHcALgBhAHAAYQBjAGgAZQAuAG8AcgBnAC8AbABpAGMAZQBuAHMAZQBzAC8ATABJAEMARQBOAFMARQAtADIALgAwAAAAAgAAAAAAAP9mAGYAAAAAAAAAAAAAAAAAAAAAAAAAAAOqAAABAgACAAMABAAFAAYABwAIAAkACgALAAwADQAOAA8AEAARABIAEwAUABUAFgAXABgAGQAaABsAHAAdAB4AHwAgACEAIgAjACQAJQAmACcAKAApACoAKwEDAC0ALgAvADAAMQAyADMANAA1ADYANwA4ADkAOgA7ADwAPQA+AD8AQABBAEIAQwBEAEUARgBHAEgASQBKAEsATABNAE4ATwBQAFEAUgBTAFQAVQBWAFcAWABZAFoAWwBcAF0AXgBfAGAAYQCsAKMAhACFAL0AlgDoAIYAjgCLAJ0AqQCkAQQAigEFAIMAkwDyAPMAjQCXAIgAwwDeAPEAngCqAPUA9AD2AKIArQDJAMcArgBiAGMAkABkAMsAZQDIAMoBBgEHAQgBCQDpAGYA0wDQANEArwBnAPAAkQDWANQA1QBoAOsA7QCJAGoAaQBrAG0AbABuAKAAbwBxAHAAcgBzAHUAdAB2AHcA6gB4AHoAeQB7AH0AfAC4AKEAfwB+AIAAgQDsAO4AugEKAQsBDAENAQ4BDwD9AP4BEAERARIBEwD/AQABFAEVARYBAQEXARgBGQEaARsBHAEdAR4BHwEgASEBIgD4APkBIwEkASUBJgEnASgBKQEqASsBLAEtAS4BLwEwATEBMgEzANcBNAE1ATYBNwE4ATkBOgE7ATwBPQE+AT8BQAFBAUIA4gDjAUMBRAFFAUYBRwFIAUkBSgFLAUwBTQFOAU8BUAFRALAAsQFSAVMBVAFVAVYBVwFYAVkBWgFbAPsA/ADkAOUBXAFdAV4BXwFgAWEBYgFjAWQBZQFmAWcBaAFpAWoBawFsAW0BbgFvAXABcQC7AXIBcwF0AXUA5gDnAXYApgF3AXgBeQF6AXsBfAF9AX4A2ADhANoA2wDcAN0A4ADZAN8BfwGAAYEBggGDAYQBhQGGAYcBiAGJAYoBiwGMAY0BjgGPAZABkQGSAZMBlAGVAZYBlwGYAZkBmgGbAZwBnQGeAZ8BoAGhAaIBowGkAaUBpgGnAagBqQGqAasBrAGtAa4BrwGwAbEBsgGzAbQBtQG2AbcAmwG4AbkBugG7AbwBvQG+Ab8BwAHBAcIBwwHEAcUBxgHHAcgByQHKAcsBzAHNAc4BzwHQAdEB0gHTAdQB1QHWAdcB2AHZAdoB2wHcAd0B3gHfAeAB4QHiAeMB5AHlAeYB5wHoAekB6gHrAewB7QHuAe8B8AHxAfIB8wH0AfUB9gH3AfgB+QH6AfsB/AH9Af4B/wIAAgECAgIDAgQCBQIGAgcCCAIJAgoCCwIMAg0CDgIPAhACEQISAhMCFAIVAhYCFwIYAhkCGgIbAhwCHQIeAh8CIAIhAiICIwIkAiUCJgInAigCKQIqAisAsgCzAiwCLQC2ALcAxAIuALQAtQDFAIIAwgCHAKsAxgIvAjAAvgC/AjEAvAIyAPcCMwI0AjUCNgI3AjgAjACfAjkCOgI7AjwCPQCYAKgAmgCZAO8ApQCSAJwApwCPAJQAlQC5Aj4CPwJAAkECQgJDAkQCRQJGAkcCSAJJAkoCSwJMAk0CTgJPAlACUQJSAlMCVAJVAlYCVwJYAlkCWgJbAlwCXQJeAl8CYAJhAmICYwJkAmUCZgJnAmgCaQJqAmsCbAJtAm4CbwJwAnECcgJzAnQCdQJ2AncCeAJ5AnoCewJ8An0CfgJ/AoACgQKCAoMChAKFAoYChwKIAokCigKLAowCjQKOAo8CkAKRApICkwKUApUClgKXApgCmQKaApsCnAKdAp4CnwKgAqECogKjAqQCpQKmAqcCqAKpAqoCqwKsAq0CrgKvArACsQKyArMCtAK1ArYCtwK4ArkCugK7ArwCvQK+Ar8CwALBAsICwwLEAsUCxgLHAsgCyQLKAssCzALNAs4CzwLQAtEC0gLTAtQC1QLWAtcC2ALZAtoC2wLcAt0C3gLfAuA
C4QLiAuMC5ALlAuYC5wLoAukC6gLrAuwC7QLuAu8C8ALxAvIC8wL0AvUC9gL3AvgC+QL6AvsC/AL9Av4C/wMAAwEDAgMDAwQDBQMGAwcDCAMJAwoDCwMMAw0DDgMPAxADEQMSAxMDFAMVAxYDFwMYAxkDGgMbAxwDHQMeAx8DIAMhAyIDIwMkAyUDJgMnAygDKQMqAysDLAMtAy4DLwMwAzEDMgMzAzQDNQM2AzcDOAM5AzoDOwM8Az0DPgM/A0ADQQNCA0MDRANFA0YDRwNIA0kDSgNLA0wDTQNOA08DUANRA1IDUwNUA1UDVgNXA1gDWQNaA1sDXANdA14DXwNgA2EDYgNjA2QDZQNmA2cDaANpA2oDawNsA20DbgNvA3ADcQNyA3MDdAN1A3YDdwN4A3kDegN7A3wDfQN+A38DgAOBA4IDgwOEA4UDhgOHA4gDiQOKA4sDjAONA44DjwOQA5EDkgOTA5QDlQOWA5cDmAOZA5oDmwOcA50DngOfACwAzwDMAM0AzgOgA6EDogOjAPoDpAOlA6YDpwOoA6kDqgOrA6wDrQRudWxsBUkuYWx0B3VuaTAwQUQJb3ZlcnNjb3JlCklncmF2ZS5hbHQKSWFjdXRlLmFsdA9JY2lyY3VtZmxleC5hbHQNSWRpZXJlc2lzLmFsdAdBbWFjcm9uB2FtYWNyb24GQWJyZXZlBmFicmV2ZQdBb2dvbmVrB2FvZ29uZWsLQ2NpcmN1bWZsZXgLY2NpcmN1bWZsZXgEQ2RvdARjZG90BkRjYXJvbgZkY2Fyb24GRGNyb2F0B0VtYWNyb24HZW1hY3JvbgZFYnJldmUGZWJyZXZlCkVkb3RhY2NlbnQKZWRvdGFjY2VudAdFb2dvbmVrB2VvZ29uZWsGRWNhcm9uBmVjYXJvbgtHY2lyY3VtZmxleAtnY2lyY3VtZmxleARHZG90BGdkb3QMR2NvbW1hYWNjZW50DGdjb21tYWFjY2VudAtIY2lyY3VtZmxleAtoY2lyY3VtZmxleARIYmFyBGhiYXIKSXRpbGRlLmFsdAZpdGlsZGULSW1hY3Jvbi5hbHQHaW1hY3JvbgpJYnJldmUuYWx0BmlicmV2ZQtJb2dvbmVrLmFsdAdpb2dvbmVrDklkb3RhY2NlbnQuYWx0BklKLmFsdAJpagtKY2lyY3VtZmxleAtqY2lyY3VtZmxleAxLY29tbWFhY2NlbnQMa2NvbW1hYWNjZW50DGtncmVlbmxhbmRpYwZMYWN1dGUGbGFjdXRlDExjb21tYWFjY2VudAxsY29tbWFhY2NlbnQGTGNhcm9uBmxjYXJvbgRMZG90BGxkb3QGTmFjdXRlBm5hY3V0ZQxOY29tbWFhY2NlbnQMbmNvbW1hYWNjZW50Bk5jYXJvbgZuY2Fyb24LbmFwb3N0cm9waGUDRW5nA2VuZwdPbWFjcm9uB29tYWNyb24GT2JyZXZlBm9icmV2ZQ1PaHVuZ2FydW1sYXV0DW9odW5nYXJ1bWxhdXQGUmFjdXRlBnJhY3V0ZQxSY29tbWFhY2NlbnQMcmNvbW1hYWNjZW50BlJjYXJvbgZyY2Fyb24GU2FjdXRlBnNhY3V0ZQtTY2lyY3VtZmxleAtzY2lyY3VtZmxleAxUY29tbWFhY2NlbnQMdGNvbW1hYWNjZW50BlRjYXJvbgZ0Y2Fyb24EVGJhcgR0YmFyBlV0aWxkZQZ1dGlsZGUHVW1hY3Jvbgd1bWFjcm9uBlVicmV2ZQZ1YnJldmUFVXJpbmcFdXJpbmcNVWh1bmdhcnVtbGF1dA11aHVuZ2FydW1sYXV0B1VvZ29uZWsHdW9nb25lawtXY2lyY3VtZmxleAt3Y2lyY3VtZmxleAtZY2lyY3VtZmxleAt5Y2lyY3VtZmxleAZaYWN1dGUGemFjdXRlClpkb3RhY2NlbnQKemRvdGFjY2VudAVsb25ncwpBcmluZ2FjdXRlCmFyaW5nYWN1dGUHQUVhY3V0ZQdhZWFjdXRlC09zbGFzaGFjdXRlC29zbGFzaGFjdXRlDFNjb21tYWFjY2VudAxzY29tbWFhY2NlbnQFdG9ub3MNZGllcmVzaXN0b25vcwpBbHBoYXRvbm9zCWFub3RlbGVpYQxFcHNpbG9udG9ub3MIRXRhdG9ub3MNSW90YXRvbm9zLmFsdAxPbWljcm9udG9ub3MMVXBzaWxvbnRvbm9zCk9tZWdhdG9ub3MRaW90YWRpZXJlc2lzdG9ub3MFQWxwaGEEQmV0YQVHYW1tYQd1bmkwMzk0B0Vwc2lsb24EWmV0YQNFdGEFVGhldGEISW90YS5hbHQFS2FwcGEGTGFtYmRhAk11Ak51AlhpB09taWNyb24CUGkDUmhvBVNpZ21hA1RhdQdVcHNpbG9uA1BoaQNDaGkDUHNpB3VuaTAzQTkQSW90YWRpZXJlc2lzLmFsdA9VcHNpbG9uZGllcmVzaXMKYWxwaGF0b25vcwxlcHNpbG9udG9ub3MIZXRhdG9ub3MJaW90YXRvbm9zFHVwc2lsb25kaWVyZXNpc3Rvbm9zBWFscGhhBGJldGEFZ2FtbWEFZGVsdGEHZXBzaWxvbgR6ZXRhA2V0YQV0aGV0YQRpb3RhBWthcHBhBmxhbWJkYQd1bmkwM0JDAm51AnhpB29taWNyb24DcmhvBnNpZ21hMQVzaWdtYQN0YXUHdXBzaWxvbgNwaGkDY2hpA3BzaQVvbWVnYQxpb3RhZGllcmVzaXMPdXBzaWxvbmRpZXJlc2lzDG9taWNyb250b25vcwx1cHNpbG9udG9ub3MKb21lZ2F0b25vcwlhZmlpMTAwMjMJYWZpaTEwMDUxCWFmaWkxMDA1MglhZmlpMTAwNTMJYWZpaTEwMDU0DWFmaWkxMDA1NS5hbHQNYWZpaTEwMDU2LmFsdAlhZmlpMTAwNTcJYWZpaTEwMDU4CWFmaWkxMDA1OQlhZmlpMTAwNjAJYWZpaTEwMDYxCWFmaWkxMDA2MglhZmlpMTAxNDUJYWZpaTEwMDE3CWFmaWkxMDAxOAlhZmlpMTAwMTkJYWZpaTEwMDIwCWFmaWkxMDAyMQlhZmlpMTAwMjIJYWZpaTEwMDI0CWFmaWkxMDAyNQlhZmlpMTAwMjYJYWZpaTEwMDI3CWFmaWkxMDAyOAlhZmlpMTAwMjkJYWZpaTEwMDMwCWFmaWkxMDAzMQlhZmlpMTAwMzIJYWZpaTEwMDMzCWFmaWkxMDAzNAlhZmlpMTAwMzUJYWZpaTEwMDM2CWFmaWkxMDAzNwlhZmlpMTAwMzgJYWZpaTEwMDM5CWFmaWkxMDA0MAlhZmlpMTAwNDEJYWZpaTEwMDQyCWFmaWkxMDA0MwlhZmlpMTAwNDQJYWZpaTEwMDQ1CWFmaWkxMDA0NglhZmlpMTAwNDcJYWZpaTEwMDQ4CWFmaWkxMDA0OQlhZmlpMTAwNjUJYWZpaTEwMDY2CWFmaWkxMDA2NwlhZmlpMTAwNjgJYWZpaTEwMDY5CWFmaWkxMDA3MAlhZmlpMTAwNzIJYWZpaTEwMD
czCWFmaWkxMDA3NAlhZmlpMTAwNzUJYWZpaTEwMDc2CWFmaWkxMDA3NwlhZmlpMTAwNzgJYWZpaTEwMDc5CWFmaWkxMDA4MAlhZmlpMTAwODEJYWZpaTEwMDgyCWFmaWkxMDA4MwlhZmlpMTAwODQJYWZpaTEwMDg1CWFmaWkxMDA4NglhZmlpMTAwODcJYWZpaTEwMDg4CWFmaWkxMDA4OQlhZmlpMTAwOTAJYWZpaTEwMDkxCWFmaWkxMDA5MglhZmlpMTAwOTMJYWZpaTEwMDk0CWFmaWkxMDA5NQlhZmlpMTAwOTYJYWZpaTEwMDk3CWFmaWkxMDA3MQlhZmlpMTAwOTkJYWZpaTEwMTAwCWFmaWkxMDEwMQlhZmlpMTAxMDIJYWZpaTEwMTAzCWFmaWkxMDEwNAlhZmlpMTAxMDUJYWZpaTEwMTA2CWFmaWkxMDEwNwlhZmlpMTAxMDgJYWZpaTEwMTA5CWFmaWkxMDExMAlhZmlpMTAxOTMJYWZpaTEwMDUwCWFmaWkxMDA5OAZXZ3JhdmUGd2dyYXZlBldhY3V0ZQZ3YWN1dGUJV2RpZXJlc2lzCXdkaWVyZXNpcwZZZ3JhdmUGeWdyYXZlCWFmaWkwMDIwOA11bmRlcnNjb3JlZGJsDXF1b3RlcmV2ZXJzZWQGbWludXRlBnNlY29uZAlleGNsYW1kYmwJbnN1cGVyaW9yCWFmaWkwODk0MQZwZXNldGEERXVybwlhZmlpNjEyNDgJYWZpaTYxMjg5CWFmaWk2MTM1Mgllc3RpbWF0ZWQJb25lZWlnaHRoDHRocmVlZWlnaHRocwtmaXZlZWlnaHRocwxzZXZlbmVpZ2h0aHMHdW5pRkIwMQd1bmlGQjAyDWN5cmlsbGljYnJldmUIZG90bGVzc2oQY2Fyb25jb21tYWFjY2VudAtjb21tYWFjY2VudBFjb21tYWFjY2VudHJvdGF0ZQx6ZXJvc3VwZXJpb3IMZm91cnN1cGVyaW9yDGZpdmVzdXBlcmlvcgtzaXhzdXBlcmlvcg1zZXZlbnN1cGVyaW9yDWVpZ2h0c3VwZXJpb3IMbmluZXN1cGVyaW9yB3VuaTIwMDAHdW5pMjAwMQd1bmkyMDAyB3VuaTIwMDMHdW5pMjAwNAd1bmkyMDA1B3VuaTIwMDYHdW5pMjAwNwd1bmkyMDA4B3VuaTIwMDkHdW5pMjAwQQd1bmkyMDBCB3VuaUZFRkYHdW5pRkZGQwd1bmlGRkZEB3VuaTAxRjAHdW5pMDJCQwd1bmkwM0QxB3VuaTAzRDIHdW5pMDNENgd1bmkxRTNFB3VuaTFFM0YHdW5pMUUwMAd1bmkxRTAxB3VuaTFGNEQHdW5pMDJGMwlkYXNpYW94aWEHdW5pRkIwMwd1bmlGQjA0BU9ob3JuBW9ob3JuBVVob3JuBXVob3JuB3VuaTAzMDAHdW5pMDMwMQd1bmkwMzAzBGhvb2sIZG90YmVsb3cHdW5pMDQwMAd1bmkwNDBEB3VuaTA0NTAHdW5pMDQ1RAd1bmkwNDYwB3VuaTA0NjEHdW5pMDQ2Mgd1bmkwNDYzB3VuaTA0NjQHdW5pMDQ2NQd1bmkwNDY2B3VuaTA0NjcHdW5pMDQ2OAd1bmkwNDY5B3VuaTA0NkEHdW5pMDQ2Qgd1bmkwNDZDB3VuaTA0NkQHdW5pMDQ2RQd1bmkwNDZGB3VuaTA0NzAHdW5pMDQ3MQd1bmkwNDcyB3VuaTA0NzMHdW5pMDQ3NAd1bmkwNDc1B3VuaTA0NzYHdW5pMDQ3Nwd1bmkwNDc4B3VuaTA0NzkHdW5pMDQ3QQd1bmkwNDdCB3VuaTA0N0MHdW5pMDQ3RAd1bmkwNDdFB3VuaTA0N0YHdW5pMDQ4MAd1bmkwNDgxB3VuaTA0ODIHdW5pMDQ4Mwd1bmkwNDg0B3VuaTA0ODUHdW5pMDQ4Ngd1bmkwNDg4B3VuaTA0ODkHdW5pMDQ4QQd1bmkwNDhCB3VuaTA0OEMHdW5pMDQ4RAd1bmkwNDhFB3VuaTA0OEYHdW5pMDQ5Mgd1bmkwNDkzB3VuaTA0OTQHdW5pMDQ5NQd1bmkwNDk2B3VuaTA0OTcHdW5pMDQ5OAd1bmkwNDk5B3VuaTA0OUEHdW5pMDQ5Qgd1bmkwNDlDB3VuaTA0OUQHdW5pMDQ5RQd1bmkwNDlGB3VuaTA0QTAHdW5pMDRBMQd1bmkwNEEyB3VuaTA0QTMHdW5pMDRBNAd1bmkwNEE1B3VuaTA0QTYHdW5pMDRBNwd1bmkwNEE4B3VuaTA0QTkHdW5pMDRBQQd1bmkwNEFCB3VuaTA0QUMHdW5pMDRBRAd1bmkwNEFFB3VuaTA0QUYHdW5pMDRCMAd1bmkwNEIxB3VuaTA0QjIHdW5pMDRCMwd1bmkwNEI0B3VuaTA0QjUHdW5pMDRCNgd1bmkwNEI3B3VuaTA0QjgHdW5pMDRCOQd1bmkwNEJBB3VuaTA0QkIHdW5pMDRCQwd1bmkwNEJEB3VuaTA0QkUHdW5pMDRCRgt1bmkwNEMwLmFsdAd1bmkwNEMxB3VuaTA0QzIHdW5pMDRDMwd1bmkwNEM0B3VuaTA0QzUHdW5pMDRDNgd1bmkwNEM3B3VuaTA0QzgHdW5pMDRDOQd1bmkwNENBB3VuaTA0Q0IHdW5pMDRDQwd1bmkwNENEB3VuaTA0Q0ULdW5pMDRDRi5hbHQHdW5pMDREMAd1bmkwNEQxB3VuaTA0RDIHdW5pMDREMwd1bmkwNEQ0B3VuaTA0RDUHdW5pMDRENgd1bmkwNEQ3B3VuaTA0RDgHdW5pMDREOQd1bmkwNERBB3VuaTA0REIHdW5pMDREQwd1bmkwNEREB3VuaTA0REUHdW5pMDRERgd1bmkwNEUwB3VuaTA0RTEHdW5pMDRFMgd1bmkwNEUzB3VuaTA0RTQHdW5pMDRFNQd1bmkwNEU2B3VuaTA0RTcHdW5pMDRFOAd1bmkwNEU5B3VuaTA0RUEHdW5pMDRFQgd1bmkwNEVDB3VuaTA0RUQHdW5pMDRFRQd1bmkwNEVGB3VuaTA0RjAHdW5pMDRGMQd1bmkwNEYyB3VuaTA0RjMHdW5pMDRGNAd1bmkwNEY1B3VuaTA0RjYHdW5pMDRGNwd1bmkwNEY4B3VuaTA0RjkHdW5pMDRGQQd1bmkwNEZCB3VuaTA0RkMHdW5pMDRGRAd1bmkwNEZFB3VuaTA0RkYHdW5pMDUwMAd1bmkwNTAxB3VuaTA1MDIHdW5pMDUwMwd1bmkwNTA0B3VuaTA1MDUHdW5pMDUwNgd1bmkwNTA3B3VuaTA1MDgHdW5pMDUwOQd1bmkwNTBBB3VuaTA1MEIHdW5pMDUwQwd1bmkwNTBEB3VuaTA1MEUHdW5pMDUwRgd1bmkwNTEwB3VuaTA1MTEHdW5pMDUxMgd1bmkwNTEzB3VuaTFFQTAHdW5pMUVBMQd1bmkxRUEyB3VuaTFFQTMHdW5pMUVBNAd1bmkxRUE1B3VuaTFFQTYHdW5pMUVBNwd1bmkxRUE4B3VuaTFFQTkHdW5pMUVBQQd1b
mkxRUFCB3VuaTFFQUMHdW5pMUVBRAd1bmkxRUFFB3VuaTFFQUYHdW5pMUVCMAd1bmkxRUIxB3VuaTFFQjIHdW5pMUVCMwd1bmkxRUI0B3VuaTFFQjUHdW5pMUVCNgd1bmkxRUI3B3VuaTFFQjgHdW5pMUVCOQd1bmkxRUJBB3VuaTFFQkIHdW5pMUVCQwd1bmkxRUJEB3VuaTFFQkUHdW5pMUVCRgd1bmkxRUMwB3VuaTFFQzEHdW5pMUVDMgd1bmkxRUMzB3VuaTFFQzQHdW5pMUVDNQd1bmkxRUM2B3VuaTFFQzcLdW5pMUVDOC5hbHQHdW5pMUVDOQt1bmkxRUNBLmFsdAd1bmkxRUNCB3VuaTFFQ0MHdW5pMUVDRAd1bmkxRUNFB3VuaTFFQ0YHdW5pMUVEMAd1bmkxRUQxB3VuaTFFRDIHdW5pMUVEMwd1bmkxRUQ0B3VuaTFFRDUHdW5pMUVENgd1bmkxRUQ3B3VuaTFFRDgHdW5pMUVEOQd1bmkxRURBB3VuaTFFREIHdW5pMUVEQwd1bmkxRUREB3VuaTFFREUHdW5pMUVERgd1bmkxRUUwB3VuaTFFRTEHdW5pMUVFMgd1bmkxRUUzB3VuaTFFRTQHdW5pMUVFNQd1bmkxRUU2B3VuaTFFRTcHdW5pMUVFOAd1bmkxRUU5B3VuaTFFRUEHdW5pMUVFQgd1bmkxRUVDB3VuaTFFRUQHdW5pMUVFRQd1bmkxRUVGB3VuaTFFRjAHdW5pMUVGMQd1bmkxRUY0B3VuaTFFRjUHdW5pMUVGNgd1bmkxRUY3B3VuaTFFRjgHdW5pMUVGOQd1bmkyMEFCB3VuaTAzMEYTY2lyY3VtZmxleGFjdXRlY29tYhNjaXJjdW1mbGV4Z3JhdmVjb21iEmNpcmN1bWZsZXhob29rY29tYhNjaXJjdW1mbGV4dGlsZGVjb21iDmJyZXZlYWN1dGVjb21iDmJyZXZlZ3JhdmVjb21iDWJyZXZlaG9va2NvbWIOYnJldmV0aWxkZWNvbWIQY3lyaWxsaWNob29rbGVmdBFjeXJpbGxpY2JpZ2hvb2tVQxFjeXJpbGxpY2JpZ2hvb2tMQwhvbmUucG51bQd6ZXJvLm9zBm9uZS5vcwZ0d28ub3MIdGhyZWUub3MHZm91ci5vcwdmaXZlLm9zBnNpeC5vcwhzZXZlbi5vcwhlaWdodC5vcwduaW5lLm9zAmZmB3VuaTIxMjAIVGNlZGlsbGEIdGNlZGlsbGEFZy5hbHQPZ2NpcmN1bWZsZXguYWx0CmdicmV2ZS5hbHQIZ2RvdC5hbHQQZ2NvbW1hYWNjZW50LmFsdAZJdGlsZGUHSW1hY3JvbgZJYnJldmUHSW9nb25lawJJSglJb3RhdG9ub3MESW90YQxJb3RhZGllcmVzaXMJYWZpaTEwMDU1CWFmaWkxMDA1Ngd1bmkwNEMwB3VuaTA0Q0YHdW5pMUVDOAd1bmkxRUNBAAABAAMACAAKAA0AB///AA8AAQAAAAwAAAAAAAAAAgAFAAACNQABAjcCNwABAjsCWwABAl0DdgABA4IDqQABAAAAAQAAAAoADAAOAAAAAAAAAAEAAAAKAG4BWgABbGF0bgAIABAAAk1PTCAAKFJPTSAAQgAA//8ACQADAAgACwAAAA4AEQAUABcAGgAA//8ACgAEAAYACQAMAAEADwASABUAGAAbAAD//wAKAAUABwAKAA0AAgAQABMAFgAZABwAHWxpZ2EAsGxpZ2EAsGxpZ2EAsGxudW0AtmxudW0AtmxudW0AtmxvY2wAvGxvY2wAvG9udW0Awm9udW0Awm9udW0AwnBudW0AynBudW0AynBudW0AynNhbHQA0HNhbHQA0HNhbHQA0HNzMDEA0HNzMDEA0HNzMDEA0HNzMDIA2HNzMDIA2HNzMDIA2HNzMDMA3nNzMDMA3nNzMDMA3nRudW0A5HRudW0A5HRudW0A5AAAAAEACQAAAAEABwAAAAEACAAAAAIAAgADAAAAAQAEAAAAAgAAAAEAAAABAAAAAAABAAEAAAACAAUABgAKABYAPAB8AJQAzADgAO4BAgEuAVAAAQAAAAEACAACABAABQORA5IDkwOUA5UAAQAFAEoA3wDhAOMA5QABAAAAAQAIAAIALgAUACwAjgCPAJAAkQDqAOwA7gDwAPIA9AFaAWcBdwGhAaICyQLYA0UDRwACAAEDlgOpAAAAAQAAAAEACAABAAYDcAACAAEAEwAcAAAAAQAAAAEACAACABoACgODA4UDhgOHA4gDiQOKA4sDjAOEAAIAAwATABMAAAAVABwAAQOCA4IACQABAAAAAQAIAAEABgNuAAEAAQAUAAEAAAABAAgAAQA8/JAAAQAAAAEACAABAAb8kgABAAEDggABAAAAAQAIAAIAGgAKABMDggAVABYAFwAYABkAGgAbABwAAgABA4MDjAAAAAEAAAABAAgAAgAOAAQDjwOQASABIQABAAQBJAElAUkBSgAEAAAAAQAIAAEANgABAAgABQAMABQAHAAiACgCXgADAEkATwJdAAMASQBMA40AAgBJAjUAAgBPAjQAAgBMAAEAAQBJAAA=") format("truetype");font-weight:400;font-style:normal}.w2ui-reset{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;-o-box-sizing:border-box;box-sizing:border-box;font-family:OpenSans;font-size:12px}.w2ui-reset *{color:default;line-height:100%;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;-o-box-sizing:border-box;box-sizing:border-box;margin:0;padding:0}.w2ui-reset table{max-width:none;background-color:transparent;border-collapse:separate;border-spacing:0;border:none}.w2ui-reset table tr td,.w2ui-reset table tr th{font-family:OpenSans;font-size:12px}.w2ui-reset input:not([type=button]):not([type=submit]):not([type=checkbox]):not([type=radio]),.w2ui-reset select,.w2ui-reset textarea{display:inline-block;width:auto;height:auto;vertical-align:baseline;padding:6px;margin:0;font-size:12px;background-color:#f8fafa;border:1px solid #e0e0e0}.w2ui-reset 
input:not([type=button]):not([type=submit]):not([type=checkbox]):not([type=radio]):focus,.w2ui-reset select:focus,.w2ui-reset textarea:focus{background-color:#fff}.w2ui-reset select{padding:5px;height:26px;font-size:12px}.w2ui-centered{position:absolute;left:0;right:0;top:0;bottom:0;display:flex;flex-wrap:wrap;align-items:center;justify-content:center;text-align:center;padding:10px}.w2ui-disabled,.w2ui-readonly{background-color:#f1f1f1;color:#777}div[contenteditable].w2ui-focus,input.w2ui-focus:not(button),select.w2ui-focus,textarea.w2ui-focus{outline-style:auto;outline-color:#72b2ff}div.w2ui-input:focus,select.w2ui-input:focus{outline-color:#72b2ff}input:not([type=button]):not([type=submit]).w2ui-input,textarea.w2ui-input{padding:6px;border:1px solid #e0e0e0;border-radius:3px;color:#000;background-color:#f8fafa;line-height:normal}input:not([type=button]):not([type=submit]).w2ui-input:focus,textarea.w2ui-input:focus{outline-color:#72b2ff;background-color:#fff}input:not([type=button]):not([type=submit]).w2ui-input:disabled,input:not([type=button]):not([type=submit]).w2ui-input[readonly],textarea.w2ui-input:disabled,textarea.w2ui-input[readonly]{background-color:#f1f1f1;color:#777}select.w2ui-input{color:#000;padding:0 20px 0 7px;line-height:1.8;border-radius:3px;border:1px solid #e0e0e0;-webkit-appearance:none;background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACgAAAALCAQAAACnzwd+AAAAcklEQVR4AcXMsQFBQQDG4P9tAgC0gJYRQJZgKQMwCqCku6vVAAAA+NJHP4KHOk0aV2pRw61n4BBmyOxKQ8I4ehZeuhd3HTx6DQEGZ7sBfr2OOOOj3Yi43kMKs9sZknofOexqZ8npMygwWZTX51CipP+YA1OiZJbYYg9lAAAAAElFTkSuQmCC),linear-gradient(to bottom,#f8f8f8 20%,#f8f8f8 50%,#f8f8f8 52%,#f8f8f8 100%);background-size:17px 6px,100% 100%;background-position:right center,left top;background-repeat:no-repeat,no-repeat}.w2ui-icon-expand:before{position:relative;top:1px;left:1px;content:' ';width:5px;height:5px;border:2px solid rgba(150,150,150,.8);border-bottom:0;border-left:0;transform:rotateZ(45deg)}.w2ui-icon-collapse:before{position:relative;top:-1px;left:3px;content:' ';width:5px;height:5px;border:2px solid rgba(150,150,150,.8);border-bottom:0;border-left:0;transform:rotateZ(135deg)}input[type=checkbox].w2ui-toggle{position:absolute;opacity:0;width:46px;height:22px;padding:0;margin:0;margin-left:2px}input[type=checkbox].w2ui-toggle:focus{box-shadow:0 0 1px 2px #a8cfff}input[type=checkbox].w2ui-toggle+div{display:inline-block;width:46px;height:22px;border:1px solid #bbb;border-radius:30px;background-color:#eee;transition-duration:.3s;transition-property:background-color,box-shadow;box-shadow:inset 0 0 0 0 rgba(0,0,0,.4);margin-left:2px}input[type=checkbox].w2ui-toggle.w2ui-small+div{width:30px;height:16px}input[type=checkbox].w2ui-toggle:focus+div{box-shadow:0 0 3px 2px #91baed}input[type=checkbox].w2ui-toggle:disabled+div{opacity:.3}input[type=checkbox].w2ui-toggle+div>div{float:left;width:22px;height:22px;border-radius:inherit;background:#f5f5f5;transition-duration:.3s;transition-property:transform,background-color,box-shadow;box-shadow:0 0 1px #323232,0 0 0 1px rgba(200,200,200,.6);pointer-events:none;margin-top:-1px;margin-left:-1px}input[type=checkbox].w2ui-toggle.w2ui-small+div>div{width:16px;height:16px}input[type=checkbox].w2ui-toggle:checked+div>div{transform:translate3d(24px,0,0);background-color:#fff}input[type=checkbox].w2ui-toggle.w2ui-small:checked+div>div{transform:translate3d(14px,0,0)}input[type=checkbox].w2ui-toggle:focus{outline:0}input[type=checkbox].w2ui-toggle:checked+div{border:1px solid #206fad;box-shadow:inset 0 0 0 
12px #35a6eb}input[type=checkbox].w2ui-toggle:checked:focus+div{box-shadow:0 0 3px 2px #91baed,inset 0 0 0 12px #35a6eb}input[type=checkbox].w2ui-toggle:checked+div>div{box-shadow:0 2px 5px rgba(0,0,0,.3),0 0 0 1px #206fad}input[type=checkbox].w2ui-toggle.green:checked+div{border:1px solid #00a23f;box-shadow:inset 0 0 0 12px #54b350}input[type=checkbox].w2ui-toggle.green:checked:focus+div{box-shadow:0 0 3px 2px #91baed,inset 0 0 0 12px #54b350}input[type=checkbox].w2ui-toggle.green:checked+div>div{box-shadow:0 2px 5px rgba(0,0,0,.3),0 0 0 1px #00a23f}.w2ui-marker{background-color:rgba(214,161,252,.5)}.w2ui-spinner{display:inline-block;background-size:100%;background-repeat:no-repeat;background-image:url(data:image/gif;base64,R0lGODlhgACAAKIAAP///93d3bu7u5mZmQAA/wAAAAAAAAAAACH/C05FVFNDQVBFMi4wAwEAAAAh+QQFBQAEACwCAAIAfAB8AAAD/0i63P4wygYqmDjrzbtflvWNZGliYXiubKuloivPLlzReD7al+7/Eh5wSFQIi8hHYBkwHUmD6CD5YTJLz49USuVYraRsZ7vtar7XnQ1Kjpoz6LRHvGlz35O4nEPP2O94EnpNc2sef1OBGIOFMId/inB6jSmPdpGScR19EoiYmZobnBCIiZ95k6KGGp6ni4wvqxilrqBfqo6skLW2YBmjDa28r6Eosp27w8Rov8ekycqoqUHODrTRvXsQwArC2NLF29UM19/LtxO5yJd4Au4CK7DUNxPebG4e7+8n8iv2WmQ66BtoYpo/dvfacBjIkITBE9DGlMvAsOIIZjIUAixliv9ixYZVtLUos5GjwI8gzc3iCGghypQqrbFsme8lwZgLZtIcYfNmTJ34WPTUZw5oRxdD9w0z6iOpO15MgTh1BTTJUKos39jE+o/KS64IFVmsFfYT0aU7capdy7at27dw48qdS7eu3bt480I02vUbX2F/JxYNDImw4GiGE/P9qbhxVpWOI/eFKtlNZbWXuzlmG1mv58+gQ4seTbq06dOoU6vGQZJy0FNlMcV+czhQ7SQmYd8eMhPs5BxVdfcGEtV3buDBXQ+fURxx8oM6MT9P+Fh6dOrH2zavc13u9JXVJb520Vp8dvC76wXMuN5Sepm/1WtkEZHDefnzR9Qvsd9+/wi8+en3X0ntYVcSdAE+UN4zs7ln24CaLagghIxBaGF8kFGoIYV+Ybghh841GIyI5ICIFoklJsigihmimJOLEbLYIYwxSgigiZ+8l2KB+Ml4oo/w8dijjcrouCORKwIpnJIjMnkkksalNeR4fuBIm5UEYImhIlsGCeWNNJphpJdSTlkml1jWeOY6TnaRpppUctcmFW9mGSaZceYopH9zkjnjUe59iR5pdapWaGqHopboaYua1qije67GJ6CuJAAAIfkEBQUABAAsCgACAFcAMAAAA/9Iutz+ML5Ag7w46z0r5WAoSp43nihXVmnrdusrv+s332dt4Tyo9yOBUJD6oQBIQGs4RBlHySSKyczVTtHoidocPUNZaZAr9F5FYbGI3PWdQWn1mi36buLKFJvojsHjLnshdhl4L4IqbxqGh4gahBJ4eY1kiX6LgDN7fBmQEJI4jhieD4yhdJ2KkZk8oiSqEaatqBekDLKztBG2CqBACq4wJRi4PZu1sA2+v8C6EJexrBAD1AOBzsLE0g/V1UvYR9sN3eR6lTLi4+TlY1wz6Qzr8u1t6FkY8vNzZTxaGfn6mAkEGFDgL4LrDDJDyE4hEIbdHB6ESE1iD4oVLfLAqPETIsOODwmCDJlv5MSGJklaS6khAQAh+QQFBQAEACwfAAIAVwAwAAAD/0i63P5LSAGrvTjrNuf+YKh1nWieIumhbFupkivPBEzR+GnnfLj3ooFwwPqdAshAazhEGUXJJIrJ1MGOUamJ2jQ9QVltkCv0XqFh5IncBX01afGYnDqD40u2z76JK/N0bnxweC5sRB9vF34zh4gjg4uMjXobihWTlJUZlw9+fzSHlpGYhTminKSepqebF50NmTyor6qxrLO0L7YLn0ALuhCwCrJAjrUqkrjGrsIkGMW/BMEPJcphLgDaABjUKNEh29vdgTLLIOLpF80s5xrp8ORVONgi8PcZ8zlRJvf40tL8/QPYQ+BAgjgMxkPIQ6E6hgkdjoNIQ+JEijMsasNY0RQix4gKP+YIKXKkwJIFF6JMudFEAgAh+QQFBQAEACw8AAIAQgBCAAAD/kg0PPowykmrna3dzXvNmSeOFqiRaGoyaTuujitv8Gx/661HtSv8gt2jlwIChYtc0XjcEUnMpu4pikpv1I71astytkGh9wJGJk3QrXlcKa+VWjeSPZHP4Rtw+I2OW81DeBZ2fCB+UYCBfWRqiQp0CnqOj4J1jZOQkpOUIYx/m4oxg5cuAaYBO4Qop6c6pKusrDevIrG2rkwptrupXB67vKAbwMHCFcTFxhLIt8oUzLHOE9Cy0hHUrdbX2KjaENzey9Dh08jkz8Tnx83q66bt8PHy8/T19vf4+fr6AP3+/wADAjQmsKDBf6AOKjS4aaHDgZMeSgTQcKLDhBYPEswoA1BBAgAh+QQFBQAEACxOAAoAMABXAAAD7Ei6vPOjyUkrhdDqfXHm4OZ9YSmNpKmiqVqykbuysgvX5o2HcLxzup8oKLQQix0UcqhcVo5ORi+aHFEn02sDeuWqBGCBkbYLh5/NmnldxajX7LbPBK+PH7K6narfO/t+SIBwfINmUYaHf4lghYyOhlqJWgqDlAuAlwyBmpVnnaChoqOkpaanqKmqKgGtrq+wsbA1srW2ry63urasu764Jr/CAb3Du7nGt7TJsqvOz9DR0tPU1TIA2ACl2dyi3N/aneDf4uPklObj6OngWuzt7u/d8fLY9PXr9eFX+vv8+PnYlUsXiqC3c6PmUUgAACH5BAUFAAQALE4AHwAwAFcAAAPpSLrc/m7IAau9bU7MO9GgJ0ZgOI5leoqpumKt+1axPJO1dtO5vuM9yi8TlAyBvSMxqES2mo8cFFKb8kzWqzDL7Xq/4LB4TC6bz1yBes1uu9uzt3zOXtHv8xN+Dx/x/wJ6gHt2g3Rxhm9oi4yNjo+QkZKTCgGWAWaXmmOanZhgnp2goaJdpKGmp55cqqusrZuvsJays6mzn1m4uRAAvgAvuBW/v8GwvcTFxqfIycA3zA/OytCl0tPPO7HD2GLY
vt7dYd/ZX99j5+Pi6tPh6+bvXuTuzujxXens9fr7YPn+7egRI9PPHrgpCQAAIfkEBQUABAAsPAA8AEIAQgAAA/lIutz+UI1Jq7026h2x/xUncmD5jehjrlnqSmz8vrE8u7V5z/m5/8CgcEgsGo/IpHLJbDqf0Kh0ShBYBdTXdZsdbb/Yrgb8FUfIYLMDTVYz2G13FV6Wz+lX+x0fdvPzdn9WeoJGAYcBN39EiIiKeEONjTt0kZKHQGyWl4mZdREAoQAcnJhBXBqioqSlT6qqG6WmTK+rsa1NtaGsuEu6o7yXubojsrTEIsa+yMm9SL8osp3PzM2cStDRykfZ2tfUtS/bRd3ewtzV5pLo4eLjQuUp70Hx8t9E9eqO5Oku5/ztdkxi90qPg3x2EMpR6IahGocPCxp8AGtigwQAIfkEBQUABAAsHwBOAFcAMAAAA/9Iutz+MMo36pg4682J/V0ojs1nXmSqSqe5vrDXunEdzq2ta3i+/5DeCUh0CGnF5BGULC4tTeUTFQVONYAs4CfoCkZPjFar83rBx8l4XDObSUL1Ott2d1U4yZwcs5/xSBB7dBMBhgEYfncrTBGDW4WHhomKUY+QEZKSE4qLRY8YmoeUfkmXoaKInJ2fgxmpqqulQKCvqRqsP7WooriVO7u8mhu5NacasMTFMMHCm8qzzM2RvdDRK9PUwxzLKdnaz9y/Kt8SyR3dIuXmtyHpHMcd5+jvWK4i8/TXHff47SLjQvQLkU+fG29rUhQ06IkEG4X/Rryp4mwUxSgLL/7IqFETB8eONT6ChCFy5ItqJomES6kgAQAh+QQFBQAEACwKAE4AVwAwAAAD/0i63A4QuEmrvTi3yLX/4MeNUmieITmibEuppCu3sDrfYG3jPKbHveDktxIaF8TOcZmMLI9NyBPanFKJp4A2IBx4B5lkdqvtfb8+HYpMxp3Pl1qLvXW/vWkli16/3dFxTi58ZRcChwIYf3hWBIRchoiHiotWj5AVkpIXi4xLjxiaiJR/T5ehoomcnZ+EGamqq6VGoK+pGqxCtaiiuJVBu7yaHrk4pxqwxMUzwcKbyrPMzZG90NGDrh/JH8t72dq3IN1jfCHb3L/e5ebh4ukmxyDn6O8g08jt7tf26ybz+m/W9GNXzUQ9fm1Q/APoSWAhhfkMAmpEbRhFKwsvCsmosRIHx444PoKcIXKkjIImjTzjkQAAIfkEBQUABAAsAgA8AEIAQgAAA/VIBNz+8KlJq72Yxs1d/uDVjVxogmQqnaylvkArT7A63/V47/m2/8CgcEgsGo/IpHLJbDqf0Kh0Sj0FroGqDMvVmrjgrDcTBo8v5fCZki6vCW33Oq4+0832O/at3+f7fICBdzsChgJGeoWHhkV0P4yMRG1BkYeOeECWl5hXQ5uNIAOjA1KgiKKko1CnqBmqqk+nIbCkTq20taVNs7m1vKAnurtLvb6wTMbHsUq4wrrFwSzDzcrLtknW16tI2tvERt6pv0fi48jh5h/U6Zs77EXSN/BE8jP09ZFA+PmhP/xvJgAMSGBgQINvEK5ReIZhQ3QEMTBLAAAh+QQFBQAEACwCAB8AMABXAAAD50i6DA4syklre87qTbHn4OaNYSmNqKmiqVqyrcvBsazRpH3jmC7yD98OCBF2iEXjBKmsAJsWHDQKmw571l8my+16v+CweEwum8+hgHrNbrvbtrd8znbR73MVfg838f8BeoB7doN0cYZvaIuMjY6PkJGSk2gClgJml5pjmp2YYJ6dX6GeXaShWaeoVqqlU62ir7CXqbOWrLafsrNctjIDwAMWvC7BwRWtNsbGFKc+y8fNsTrQ0dK3QtXAYtrCYd3eYN3c49/a5NVj5eLn5u3s6e7x8NDo9fbL+Mzy9/T5+tvUzdN3Zp+GBAAh+QQJBQAEACwCAAIAfAB8AAAD/0i63P4wykmrvTjrzbv/YCiOZGmeaKqubOu+cCzPdArcQK2TOL7/nl4PSMwIfcUk5YhUOh3M5nNKiOaoWCuWqt1Ou16l9RpOgsvEMdocXbOZ7nQ7DjzTaeq7zq6P5fszfIASAYUBIYKDDoaGIImKC4ySH3OQEJKYHZWWi5iZG0ecEZ6eHEOio6SfqCaqpaytrpOwJLKztCO2jLi1uoW8Ir6/wCHCxMG2x7muysukzb230M6H09bX2Nna29zd3t/g4cAC5OXm5+jn3Ons7eba7vHt2fL16tj2+QL0+vXw/e7WAUwnrqDBgwgTKlzIsKHDh2gGSBwAccHEixAvaqTYcFCjRoYeNyoM6REhyZIHT4o0qPIjy5YTTcKUmHImx5cwE85cmJPnSYckK66sSAAj0aNIkypdyrSp06dQo0qdSrWq1atYs2rdyrWr169gwxZJAAA7)}.w2ui-icon{background-repeat:no-repeat;height:16px;width:16px;overflow:hidden;margin:2px 2px;display:inline-block}.w2ui-icon.icon-folder{background:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAABGdBTUEAAK/INwWK6QAAABl0RVh0U29mdHdhcmUAQWRvYmUgSW1hZ2VSZWFkeXHJZTwAAAGrSURBVDjLxZO7ihRBFIa/6u0ZW7GHBUV0UQQTZzd3QdhMQxOfwMRXEANBMNQX0MzAzFAwEzHwARbNFDdwEd31Mj3X7a6uOr9BtzNjYjKBJ6nicP7v3KqcJFaxhBVtZUAK8OHlld2st7Xl3DJPVONP+zEUV4HqL5UDYHr5xvuQAjgl/Qs7TzvOOVAjxjlC+ePSwe6DfbVegLVuT4r14eTr6zvA8xSAoBLzx6pvj4l+DZIezuVkG9fY2H7YRQIMZIBwycmzH1/s3F8AapfIPNF3kQk7+kw9PWBy+IZOdg5Ug3mkAATy/t0usovzGeCUWTjCz0B+Sj0ekfdvkZ3abBv+U4GaCtJ1iEm6ANQJ6fEzrG/engcKw/wXQvEKxSEKQxRGKE7Izt+DSiwBJMUSm71rguMYhQKrBygOIRStf4TiFFRBvbRGKiQLWP29yRSHKBTtfdBmHs0BUpgvtgF4yRFR+NUKi0XZcYjCeCG2smkzLAHkbRBmP0/Uk26O5YnUActBp1GsAI+S5nRJJJal5K1aAMrq0d6Tm9uI6zjyf75dAe6tx/SsWeD//o2/Ab6IH3/h25pOAAAAAElFTkSuQmCC) no-repeat 
center}.w2ui-icon.icon-page{background:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAABGdBTUEAAK/INwWK6QAAABl0RVh0U29mdHdhcmUAQWRvYmUgSW1hZ2VSZWFkeXHJZTwAAAINSURBVBgZBcG/r55zGAfg6/4+z3va01NHlYgzEfE7MdCIGISFgS4Gk8ViYyM2Mdlsko4GSf8Do0FLRCIkghhYJA3aVBtEz3nP89wf11VJvPDepdd390+8Nso5nESBQoq0pfvXm9fzWf19453LF85vASqJlz748vInb517dIw6EyYBIIG49u+xi9/c9MdvR//99MPPZ7+4cP4IZhhTPbwzT2d+vGoaVRRp1rRliVvHq+cfvM3TD82+7mun0o/ceO7NT+/4/KOXjwZU1ekk0840bAZzMQ2mooqh0A72d5x/6sB9D5zYnff3PoYBoWBgFKPKqDKqjCpjKr//dcu9p489dra88cydps30KswACfNEKanSaxhlntjJ8Mv12Paie+vZ+0+oeSwwQ0Iw1xAR1CiFNJkGO4wu3ZMY1AAzBI0qSgmCNJsJUEOtJSMaCTBDLyQ0CknAGOgyTyFFiLI2awMzdEcSQgSAAKVUmAeNkxvWJWCGtVlDmgYQ0GFtgg4pNtOwbBcwQy/Rife/2yrRRVI0qYCEBly8Z+P4qMEMy7JaVw72N568e+iwhrXoECQkfH91kY7jwwXMsBx1L93ZruqrK6uuiAIdSnTIKKPLPFcvay8ww/Hh+ufeznTXu49v95IMoQG3784gYXdTqvRmqn/Wpa/ADFX58MW3L71SVU9ETgEIQQQIOOzub+fhIvwPRDgeVjWDahIAAAAASUVORK5CYII=) no-repeat center}.w2ui-lock{display:none;position:absolute;z-index:1400;top:0;left:0;width:100%;height:100%;opacity:.15;background-color:#333}.w2ui-lock-msg{display:none;position:absolute;z-index:1400;top:50%;left:50%;transform:translateX(-50%) translateY(-50%);min-width:100px;max-width:95%;padding:30px;white-space:nowrap;text-overflow:ellipsis;overflow:hidden;font-size:13px;font-family:OpenSans;opacity:.8;background-color:#555;color:#fff;text-align:center;border-radius:5px;border:2px solid #444}.w2ui-lock-msg .w2ui-spinner{display:inline-block;width:24px;height:24px;margin:-3px 8px -7px -10px}.w2ui-scroll-wrapper{overflow:hidden}.w2ui-scroll-left,.w2ui-scroll-right{top:0;width:18px;height:100%;cursor:default;z-index:10;display:none;position:absolute}.w2ui-scroll-left:hover,.w2ui-scroll-right:hover{background-color:#ddd}.w2ui-scroll-left{left:0;box-shadow:0 0 7px #5f5f5f;background:#f7f7f7 url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAQAAADZc7J/AAAAzklEQVR4Ae2THRDEMBCFzy1ucatb3eJ2uhi3uNUtbnGrW9zi1rOdNzdvdl7nDpvYt/9/r7+/51myZZf/zXkD2iMHHRSb0x3oskwMieK05PwEXqP4ExSL0wp0ROao2OOuMPOMdUL6XU1/oGLcFWb+NqyTd2W/P/2qTr9h+nFXhOkHXRHiNyjrgp/U/V+WaQcaNY13zZI0A1JvcVqAnrGDTdtDtZUHjHIJhxxVLN0iqXgCP1l/7h8U9kc6abyJ4/eNWPpGdBv+XdUK0K8cnvcBly2rDr7C1HQAAAAASUVORK5CYII=) center center no-repeat;background-size:15px 12px}.w2ui-scroll-right{right:0;box-shadow:0 0 7px #5f5f5f;background:#f7f7f7 url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAQAAADZc7J/AAAAz0lEQVR4Ae2UG7TGMBCEr1vd4la3uMUtuli3utWtbnGLW9zi9l/bDMzJG7u12cfJfLunf1+UEC9Bv0vVQwJ8hjRCaZafflb1C9RQf4OD0gSDE+i+PiJAabFhQc1y1AYYsJGLY3lgxM17uWPO56yPiFDqVPWgRtpIHSd1zPnwkBsdI58OlNwx4fP2X0TgfMTOoHSdKOXkpyNvEyQh7ul+4swxJSTQuwNDxz68l/ukVNbu0Neen5Z+KvzWxBAqHds349uPFJ/jVOrPjxUq++OLf+20q5+noXo0AAAAAElFTkSuQmCC) center center no-repeat;background-size:15px 13px}#w2ui-notify{position:absolute;display:flex;flex-direction:column;align-items:center;left:0;right:0;bottom:15px;z-index:10000;overflow:hidden}#w2ui-notify>div{position:relative;background-color:#292828ba;color:#fff;padding:8px 44px 8px 16px;border-radius:4px;box-shadow:3px 3px 10px #9c9c9c;max-height:76px;min-width:100px;max-width:800px;font-size:16px;text-shadow:1px 0 0 #000}#w2ui-notify>div a{color:#6cd0e8;text-decoration:none;cursor:pointer}#w2ui-notify>div a:hover{color:#a2f0ff}#w2ui-notify>div span.w2ui-notify-close{padding:6px 6px;border-radius:3px;font-size:13px;color:#c3c3c3;position:absolute;right:5px;top:5px}#w2ui-notify>div 
span.w2ui-notify-close:hover{background-color:#807e7e;color:#fff}#w2ui-notify>div.w2ui-notify-error{text-shadow:none;background-color:rgba(255,0,0,.8)}#w2ui-notify>div.w2ui-notify-error .w2ui-notify-close{color:#fff}#w2ui-notify>div.w2ui-notify-error .w2ui-notify-close:hover{background-color:#fcadad;color:rgba(255,0,0,.8)}button.w2ui-btn,input[type=button].w2ui-btn{position:relative;display:inline-block;border-radius:14px;margin:0 3px;padding:6px 12px;color:#666;font-size:12px;border:1px solid transparent;background-image:linear-gradient(#e8e8ee 0,#e8e8ee 100%);outline:0;box-shadow:0 1px 0 #fff;cursor:default;min-width:75px;line-height:110%;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;-o-user-select:none;user-select:none;-webkit-tap-highlight-color:transparent}button.w2ui-btn:hover,input[type=button].w2ui-btn:hover{text-decoration:none;background-image:linear-gradient(#ddd 0,#ddd 100%);color:#333}button.w2ui-btn.clicked,button.w2ui-btn:active,input[type=button].w2ui-btn.clicked,input[type=button].w2ui-btn:active{background-image:linear-gradient(#ccc 0,#ccc 100%);text-shadow:1px 1px 1px #eee}button.w2ui-btn:focus:before,input[type=button].w2ui-btn:focus:before{content:"";border:1px dashed #aaa;border-radius:15px;position:absolute;top:2px;bottom:2px;left:2px;right:2px;pointer-events:none}button.w2ui-btn-blue,input[type=button].w2ui-btn-blue{color:#fff;background-image:linear-gradient(#269df0 0,#269df0 100%);border:1px solid #269df0;text-shadow:0 0 1px #111}button.w2ui-btn-blue:hover,input[type=button].w2ui-btn-blue:hover{color:#fff;background-image:linear-gradient(#2391dd 0,#2391dd 100%);border:1px solid #2391dd;text-shadow:0 0 1px #111}button.w2ui-btn-blue.clicked,button.w2ui-btn-blue:active,input[type=button].w2ui-btn-blue.clicked,input[type=button].w2ui-btn-blue:active{color:#fff;background-image:linear-gradient(#1e83c9 0,#1e83c9 100%);border:1px solid #1268a6;text-shadow:0 0 1px #111}button.w2ui-btn-blue:focus:before,input[type=button].w2ui-btn-blue:focus:before{border:1px dashed #e8e8e8}button.w2ui-btn-green,input[type=button].w2ui-btn-green{color:#fff;background-image:linear-gradient(#52a452 0,#52a452 100%);border:1px solid #52a452;text-shadow:0 0 1px #111}button.w2ui-btn-green:hover,input[type=button].w2ui-btn-green:hover{color:#fff;background-image:linear-gradient(#3f8f3d 0,#3f8f3d 100%);border:1px solid #3f8f3d;text-shadow:0 0 1px #111}button.w2ui-btn-green.clicked,button.w2ui-btn-green:active,input[type=button].w2ui-btn-green.clicked,input[type=button].w2ui-btn-green:active{color:#fff;background-image:linear-gradient(#377d36 0,#377d36 100%);border:1px solid #555;text-shadow:0 0 1px #111}button.w2ui-btn-green:focus:before,input[type=button].w2ui-btn-green:focus:before{border:1px dashed #e8e8e8}button.w2ui-btn-orange,input[type=button].w2ui-btn-orange{color:#fff;background-image:linear-gradient(#fb8822 0,#fb8822 100%);border:1px solid #fb8822;text-shadow:0 0 1px #111}button.w2ui-btn-orange:hover,input[type=button].w2ui-btn-orange:hover{color:#fff;background-image:linear-gradient(#f1731f 0,#f1731f 100%);border:1px solid #f1731f;text-shadow:0 0 1px #111}button.w2ui-btn-orange.clicked,button.w2ui-btn-orange:active,input[type=button].w2ui-btn-orange.clicked,input[type=button].w2ui-btn-orange:active{color:#fff;border:1px solid #666;background-image:linear-gradient(#b98747 0,#b98747 100%);text-shadow:0 0 1px #111}button.w2ui-btn-orange:focus:before,input[type=button].w2ui-btn-orange:focus:before{border:1px dashed 
#f9f9f9}button.w2ui-btn-red,input[type=button].w2ui-btn-red{color:#fff;background-image:linear-gradient(#f9585a 0,#f9585a 100%);border:1px solid #f9585a;text-shadow:0 0 1px #111}button.w2ui-btn-red:hover,input[type=button].w2ui-btn-red:hover{color:#fff;background-image:linear-gradient(#de4446 0,#de4446 100%);border:1px solid #de4446;text-shadow:0 0 1px #111}button.w2ui-btn-red.clicked,button.w2ui-btn-red:active,input[type=button].w2ui-btn-red.clicked,input[type=button].w2ui-btn-red:active{color:#fff;border:1px solid #861c1e;background-image:linear-gradient(#9c2123 0,#9c2123 100%);text-shadow:0 0 1px #111}button.w2ui-btn-red:focus:before,input[type=button].w2ui-btn-red:focus:before{border:1px dashed #ddd}button.w2ui-btn-small,input[type=button].w2ui-btn-small{padding:5px;border-radius:4px;margin:0;min-width:0}button.w2ui-btn-small:focus:before,input[type=button].w2ui-btn-small:focus:before{border-radius:2px;top:2px;bottom:2px;left:2px;right:2px}button.w2ui-btn:disabled,input[type=button].w2ui-btn:disabled{border:1px solid #e6e6e6;background:#f7f7f7;color:#bdbcbc;text-shadow:none}.w2ui-overlay{--tip-size:8px;position:fixed;z-index:1700;opacity:0;transition:opacity .1s;border-radius:4px}.w2ui-overlay *{box-sizing:border-box}.w2ui-overlay .w2ui-overlay-body{display:inline-block;border:1px solid #474747;border-radius:4px;padding:4px 8px;margin:0;font-size:12px;font-family:OpenSans;color:#fff;text-shadow:0 1px 1px #4a4a4a;background-color:#777;line-height:1.4;letter-spacing:.1px;overflow:auto}.w2ui-overlay .w2ui-overlay-body.w2ui-light{color:#3c3c3c;text-shadow:none;background-color:#fffde9;border:1px solid #d2d2d2;box-shadow:0 1px 1px 1px #fff}.w2ui-overlay .w2ui-overlay-body.w2ui-white{color:#3c3c3c;text-shadow:none;background-color:#fafafa;border:1px solid #cccace;box-shadow:0 0 1px 1px #fff}.w2ui-overlay .w2ui-overlay-body.w2ui-arrow-right:before{content:"";position:absolute;left:calc(var(--tip-size,8px) * -.5 - 1px);top:calc(50% - 1px);transform:rotate(-45deg) translateY(-50%);transform-origin:top center;margin:0;border:inherit;border-color:inherit;background-color:inherit;width:var(--tip-size,8px);height:var(--tip-size,8px);border-bottom-right-radius:200px;border-bottom-width:0;border-right-width:0}.w2ui-overlay .w2ui-overlay-body.w2ui-arrow-left:after{content:"";position:absolute;right:calc(var(--tip-size,8px) * -.5 - 1px);top:calc(50% - 1px);transform:rotate(135deg) translateY(-50%);transform-origin:top center;margin:0;border:inherit;border-color:inherit;background-color:inherit;width:var(--tip-size,8px);height:var(--tip-size,8px);border-bottom-right-radius:200px;border-bottom-width:0;border-right-width:0}.w2ui-overlay .w2ui-overlay-body.w2ui-arrow-top:before{content:"";position:absolute;bottom:calc(var(--tip-size,8px) * -.5 + 3px);left:50%;transform-origin:center left;transform:rotate(-135deg) translateX(-50%);margin:0;border:inherit;border-color:inherit;background-color:inherit;width:var(--tip-size,8px);height:var(--tip-size,8px);border-bottom-right-radius:200px;border-bottom-width:0;border-right-width:0}.w2ui-overlay .w2ui-overlay-body.w2ui-arrow-bottom:after{content:"";position:absolute;top:calc(var(--tip-size,8px) * -.5);left:50%;transform:rotate(45deg) translateX(-50%);transform-origin:center 
left;margin:0;border:inherit;border-color:inherit;background-color:inherit;width:var(--tip-size,8px);height:var(--tip-size,8px);border-bottom-right-radius:200px;border-bottom-width:0;border-right-width:0}.w2ui-colors{padding:8px;padding-bottom:0;background-color:#fff;border-radius:3px;overflow:hidden;width:270px;height:240px}.w2ui-colors *{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;-o-box-sizing:border-box;box-sizing:border-box}.w2ui-colors .w2ui-color-tabs{display:flex;background-color:#f7f7f7;height:34px;margin:14px -8px 0 -8px;border-top:1px solid #d6d6d6}.w2ui-colors .w2ui-color-tabs .w2ui-color-tab{display:inline-block;width:65px;height:32px;border:0;border-top:2px solid transparent;border-radius:1px;margin:-1.5px 4px;text-align:center;font-size:15px;padding-top:4px;color:#7b7b7b}.w2ui-colors .w2ui-color-tabs .w2ui-color-tab:hover{background-color:#e1e1e1}.w2ui-colors .w2ui-color-tabs .w2ui-color-tab.w2ui-selected{border-top-color:#0175ff}.w2ui-colors .w2ui-color-tabs .w2ui-color-tab .w2ui-icon{padding-top:1px;width:30px}.w2ui-colors .w2ui-tab-content.tab-1 .w2ui-color-row{display:flex}.w2ui-colors .w2ui-tab-content.tab-1 .w2ui-color-row .w2ui-color{cursor:default;text-align:center;display:inline-block;width:18px;height:18px;padding:6px;margin:1.5px;border:1px solid transparent}.w2ui-colors .w2ui-tab-content.tab-1 .w2ui-color-row .w2ui-color:hover{outline:1px solid #666;border:1px solid #fff}.w2ui-colors .w2ui-tab-content.tab-1 .w2ui-color-row .w2ui-color.w2ui-no-color{border:1px solid #efefef;background:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAMAAAAoLQ9TAAAABlBMVEX/////TgCFoIUYAAAAGUlEQVR42uXHIQEAAACDsNO/NJ4Kn9uC8wsJkAARUrXAjwAAAABJRU5ErkJggg==) 15px 15px}.w2ui-colors .w2ui-tab-content.tab-1 .w2ui-color-row .w2ui-color.w2ui-selected:before{content:'\2022';position:relative;left:-1px;top:-8px;color:#fff;font-size:14px;text-shadow:0 0 2px #222}.w2ui-colors .w2ui-tab-content.tab-2{height:184px;padding:1px 2px}.w2ui-colors .w2ui-tab-content.tab-2 .palette{position:relative;width:150px;height:125px;outline:1px solid #d2d2d2}.w2ui-colors .w2ui-tab-content.tab-2 .palette .palette-bg{height:100%;background-image:linear-gradient(0deg,#000,rgba(204,154,129,0));pointer-events:none}.w2ui-colors .w2ui-tab-content.tab-2 .rainbow{position:relative;width:150px;height:12px;margin:10px 0 0 0;background:linear-gradient(90deg,red 0,#ff0 17%,#0f0 33%,#0ff 50%,#00f 67%,#f0f 83%,red 100%)}.w2ui-colors .w2ui-tab-content.tab-2 .alpha{position:relative;width:150px;height:12px;margin:20px 0 0 0;background-color:#fff;background-image:linear-gradient(45deg,#bbb 25%,transparent 25%,transparent 75%,#bbb 75%,#bbb),linear-gradient(45deg,#bbb 25%,transparent 25%,transparent 75%,#bbb 75%,#bbb);background-size:12px 12px;background-position:0 0,6px 6px}.w2ui-colors .w2ui-tab-content.tab-2 .alpha .alpha-bg{height:100%;background-image:linear-gradient(90deg,rgba(80,80,80,0) 0,#505050 100%);pointer-events:none}.w2ui-colors .w2ui-tab-content.tab-2 .value1{pointer-events:none;position:absolute;top:0;display:inline-block;width:8px;height:8px;border-radius:10px;border:1px solid #999;outline:1px solid #bbb;background-color:transparent;box-shadow:0 0 1px #fff;transform:translateX(-3px) translateY(-3px)}.w2ui-colors .w2ui-tab-content.tab-2 .value2{pointer-events:none;position:absolute;top:-2px;display:inline-block;width:8px;height:16px;border-radius:2px;border:1px solid #696969;background-color:#fff;box-shadow:0 0 1px #fff;transform:translateX(-1px)}.w2ui-colors 
.w2ui-tab-content.tab-2 .color-info{float:right;margin-right:-5px}.w2ui-colors .w2ui-tab-content.tab-2 .color-info .color-preview-bg{box-shadow:0 0 1px #c3c3c3;height:40px;background-color:#fff;background-image:linear-gradient(45deg,#bbb 25%,transparent 25%,transparent 75%,#bbb 75%,#bbb),linear-gradient(45deg,#bbb 25%,transparent 25%,transparent 75%,#bbb 75%,#bbb);background-size:16px 16px;background-position:0 0,8px 8px;margin-bottom:10px}.w2ui-colors .w2ui-tab-content.tab-2 .color-info .color-original,.w2ui-colors .w2ui-tab-content.tab-2 .color-info .color-preview{height:40px;width:50px;float:left}.w2ui-colors .w2ui-tab-content.tab-2 .color-info .color-part{padding-top:7px}.w2ui-colors .w2ui-tab-content.tab-2 .color-info .color-part span{display:inline-block;width:8px;margin:2px 1px 2px 5px;color:#666}.w2ui-colors .w2ui-tab-content.tab-2 .color-info .color-part input{font-size:12px;border-radius:2px;border:1px solid #ccc;width:30px;text-align:right;padding:4px;color:#333}.w2ui-colors .w2ui-tab-content.tab-2 .color-info .color-part.opacity{margin:11px 0 0 8px}.w2ui-colors .w2ui-tab-content.tab-2 .color-info .color-part.opacity span{width:42px}.w2ui-colors .w2ui-tab-content.tab-2 .color-info .color-part.opacity input{width:38px;text-align:center}.w2ui-menu-search,.w2ui-menu-top{position:sticky;top:0;background-color:#fff;border-bottom:1px dotted silver}.w2ui-menu-search{padding:6px 4px}.w2ui-menu-search .w2ui-icon{position:absolute;top:11px;left:6px;color:#90819c;font-size:14px}.w2ui-menu-search #menu-search{width:100%;padding:5px 5px 5px 25px}.w2ui-menu{display:block;color:#000;padding:5px 0;border-radius:5px;overflow-x:hidden;cursor:default}.w2ui-menu .w2ui-menu-item{display:flex;align-content:stretch;padding:8px 5px;user-select:none}.w2ui-menu .w2ui-menu-item.w2ui-even{color:inherit;background-color:#fff}.w2ui-menu .w2ui-menu-item.w2ui-odd{color:inherit;background-color:#fbfbfb}.w2ui-menu .w2ui-menu-item:hover{background-color:#f0f3ff}.w2ui-menu .w2ui-menu-item.w2ui-selected{background-color:#e1e7ff}.w2ui-menu .w2ui-menu-item.w2ui-disabled{opacity:.4;color:inherit;background-color:transparent}.w2ui-menu .w2ui-menu-item .menu-icon{flex:none;width:26px;height:16px;padding:0;margin:0}.w2ui-menu .w2ui-menu-item .menu-icon span{width:18px;font-size:14px;color:#8d99a7;display:inline-block;padding-top:1px}.w2ui-menu .w2ui-menu-item .menu-text{flex-grow:1;white-space:nowrap}.w2ui-menu .w2ui-menu-item .menu-extra{flex:none;min-width:10px}.w2ui-menu .w2ui-menu-item .menu-extra span{border:1px solid #f6fcf4;border-radius:20px;width:auto;height:18px;padding:2px 7px;margin:0 0 0 10px;background-color:#f2f8f0;color:#666;box-shadow:0 0 2px #474545;text-shadow:1px 1px 0 #fff}.w2ui-menu .w2ui-menu-item .menu-extra span.hotkey{border:none;border-radius:0;background-color:transparent;color:#888;box-shadow:none;text-shadow:none}.w2ui-menu .w2ui-menu-item .menu-extra span.remove{background-color:transparent;border-color:transparent;box-shadow:none;padding:0 5px;border-radius:3px;position:relative;margin-top:-3px;display:block;height:20px;width:20px;text-align:center;user-select:none}.w2ui-menu .w2ui-menu-item .menu-extra span.remove:hover{background-color:#f9e7e7;color:red}.w2ui-menu .w2ui-menu-item .menu-extra span.remove:active{background-color:#ffd1d1}.w2ui-menu .w2ui-menu-divider{padding:5px}.w2ui-menu .w2ui-menu-divider .line{border-top:1px dotted silver}.w2ui-menu .w2ui-menu-divider.has-text{height:26px;background-color:#fafafa;border-top:1px solid #f2f2f2;border-bottom:1px solid 
#f2f2f2;text-align:center}.w2ui-menu .w2ui-menu-divider.has-text .line{display:block;margin-top:7px}.w2ui-menu .w2ui-menu-divider.has-text .text{display:inline-block;position:relative;top:-10px;background-color:#fafafa;padding:0 7px;color:#a9a9a9}.w2ui-menu .w2ui-no-items{padding:5px 15px;text-align:center;color:gray}.w2ui-menu .w2ui-no-items .w2ui-spinner{position:relative;left:-2px;margin-bottom:-5px;width:18px;height:18px}.w2ui-menu .w2ui-sub-menu-box{background-color:#fafafd;border-top:1px solid #d6e2e6;border-bottom:1px solid #d6e2e6;padding:0 3px}.w2ui-menu .collapsed .menu-extra span,.w2ui-menu .expanded .menu-extra span{position:relative;border-color:transparent;background-color:transparent;box-shadow:none;padding:0 6px;border-radius:0;margin-left:5px}.w2ui-menu .collapsed .menu-extra span:after,.w2ui-menu .expanded .menu-extra span:after{content:"";position:absolute;border-left:5px solid grey;border-top:5px solid transparent;border-bottom:5px solid transparent;transform:rotateZ(-90deg);pointer-events:none;margin-left:-2px;margin-top:3px}.w2ui-menu .collapsed .menu-extra span:hover,.w2ui-menu .expanded .menu-extra span:hover{border-color:transparent;background-color:transparent}.w2ui-menu .collapsed .menu-extra span:after{transform:rotateZ(90deg)}.w2ui-calendar{margin:0;line-height:1.1;user-select:none}.w2ui-calendar.w2ui-overlay-body{border:1px solid #cccace;color:#3c3c3c;text-shadow:none;background-color:#fff;box-shadow:0 1px 6px 1px #ebeaec}.w2ui-calendar .w2ui-cal-title,.w2ui-calendar .w2ui-time-title{margin:0;padding:7px 2px;background-color:#fafafa;border-top:1px solid #fefefe;border-bottom:1px solid #ddd;color:#555;text-align:center;text-shadow:1px 1px 1px #eee;font-size:16px;cursor:pointer}.w2ui-calendar .w2ui-cal-title .arrow-down,.w2ui-calendar .w2ui-time-title .arrow-down{position:relative;top:-3px;left:5px;opacity:.6}.w2ui-calendar .w2ui-cal-next,.w2ui-calendar .w2ui-cal-previous{width:30px;height:30px;color:#666;border:1px solid transparent;border-radius:3px;padding:7px 5px;margin:-4px 1px 0 1px;cursor:default}.w2ui-calendar .w2ui-cal-next:hover,.w2ui-calendar .w2ui-cal-previous:hover{color:#000;border:1px solid #f5f5f5;background-color:#f9f7f7}.w2ui-calendar .w2ui-cal-next:active,.w2ui-calendar .w2ui-cal-previous:active{color:#000;background-color:#f2f1f4;border:1px solid #e6dbfb}.w2ui-calendar .w2ui-cal-next>div,.w2ui-calendar .w2ui-cal-previous>div{position:absolute;border-left:4px solid #888;border-top:4px solid #888;border-right:4px solid transparent;border-bottom:4px solid transparent;width:0;height:0;padding:0;margin:3px 0 0 0}.w2ui-calendar .w2ui-cal-previous{float:left}.w2ui-calendar .w2ui-cal-previous>div{-webkit-transform:rotate(-45deg);-moz-transform:rotate(-45deg);-ms-transform:rotate(-45deg);-o-transform:rotate(-45deg);transform:rotate(-45deg);margin-left:6px}.w2ui-calendar .w2ui-cal-next{float:right}.w2ui-calendar .w2ui-cal-next>div{-webkit-transform:rotate(135deg);-moz-transform:rotate(135deg);-ms-transform:rotate(135deg);-o-transform:rotate(135deg);transform:rotate(135deg);margin-left:2px;margin-right:2px}.w2ui-calendar .w2ui-cal-jump{display:flex;background-color:#fdfdfd}.w2ui-calendar .w2ui-cal-jump .w2ui-jump-month,.w2ui-calendar .w2ui-cal-jump .w2ui-jump-year{cursor:default;text-align:center;border:1px solid transparent;border-radius:3px;font-size:14px}.w2ui-calendar .w2ui-cal-jump #w2ui-jump-month{width:186px;padding:10px 5px 4px 3px;border-right:1px solid 
#efefef;display:grid;grid-template-columns:repeat(3,1fr);grid-template-rows:repeat(4,52px);grid-gap:4px}.w2ui-calendar .w2ui-cal-jump #w2ui-jump-month .w2ui-jump-month{padding:15px 0 0 0}.w2ui-calendar .w2ui-cal-jump #w2ui-jump-year{width:90px;height:240px;overflow-x:hidden;overflow-y:auto;margin:0 2px;display:flex;flex-wrap:wrap}.w2ui-calendar .w2ui-cal-jump #w2ui-jump-year .w2ui-jump-year{width:95%;height:30px;padding:5px 0;margin:1px 0}.w2ui-calendar .w2ui-cal-jump .w2ui-jump-month:hover,.w2ui-calendar .w2ui-cal-jump .w2ui-jump-year:hover{color:#000;border:1px solid #f5f5f5;background-color:#f9f7f7}.w2ui-calendar .w2ui-cal-jump .w2ui-jump-month.w2ui-selected,.w2ui-calendar .w2ui-cal-jump .w2ui-jump-year.w2ui-selected{color:#000;background-color:#f2f1f4;border:1px solid #e6dbfb}.w2ui-calendar .w2ui-cal-now{cursor:default;padding:3px;text-align:center;background-color:#f4f4f4;margin:5px;border:1px solid #e5e5e5;border-radius:4px}.w2ui-calendar .w2ui-cal-now:hover{color:#28759e;border:1px solid #c3d6df}.w2ui-calendar .w2ui-cal-days{width:280px;height:240px;padding:2px;display:grid;grid-template-columns:repeat(7,1fr)}.w2ui-calendar .w2ui-cal-days .w2ui-day{border:1px solid #fff;border-radius:3px;color:#000;background-color:#f7f7f7;padding:8px 0 0 0;cursor:default;text-align:center}.w2ui-calendar .w2ui-cal-days .w2ui-day.w2ui-saturday,.w2ui-calendar .w2ui-cal-days .w2ui-day.w2ui-sunday{border:1px solid #fff;color:#c8493b;background-color:#f7f7f7}.w2ui-calendar .w2ui-cal-days .w2ui-day.w2ui-today{background-color:#e2f7cd}.w2ui-calendar .w2ui-cal-days .w2ui-day:hover{background-color:#f2f1f4;border:1px solid #e6dbfb}.w2ui-calendar .w2ui-cal-days .w2ui-day:active{background-color:#eeebf3;border:1px solid #cec2e5}.w2ui-calendar .w2ui-cal-days .w2ui-day.w2ui-selected{border:1px solid #8cb067}.w2ui-calendar .w2ui-cal-days .w2ui-day.w2ui-weekday{text-align:center;background-color:#fff;color:#a99cc2}.w2ui-calendar .w2ui-cal-days .w2ui-day.w2ui-weekday:hover{border:1px solid #fff;background-color:#fff}.w2ui-calendar .w2ui-cal-days .w2ui-day.outside{color:#b5b5b5;background-color:#fff}.w2ui-calendar .w2ui-cal-days .w2ui-day.w2ui-blocked{color:#555;background-color:#fff;border:1px solid #fff}.w2ui-calendar .w2ui-cal-days .w2ui-day.w2ui-blocked:after{content:" ";position:absolute;color:#b3b3b378;font-size:27px;padding:0;font-family:verdana;transform:translate(-15px,15px) rotate(-36deg);border-top:1px solid #c9c2c2;width:24px;transform-origin:top left}.w2ui-cal-time{display:grid;grid-template-columns:repeat(3,1fr);background-color:#fff;cursor:default}.w2ui-cal-time .w2ui-cal-column{width:90px;display:flex;flex-wrap:wrap;padding:4px}.w2ui-cal-time .w2ui-cal-column:nth-child(even){background-color:#fafafa}.w2ui-cal-time .w2ui-cal-column span{width:100%;padding:8px;margin:1px;text-align:center;border:1px solid transparent;border-radius:2px;white-space:nowrap}.w2ui-cal-time .w2ui-cal-column span:hover{background-color:#f2f1f4;border:1px solid #e6dbfb}.w2ui-cal-time .w2ui-cal-column span:active{background-color:#eeebf3;border:1px solid #cec2e5}.w2ui-cal-time .w2ui-cal-column span.w2ui-blocked{pointer-events:none;text-decoration:line-through;color:silver}.w2ui-form{position:relative;color:#000;background-color:#fcfcfb;border:1px solid #e1e1e1;border-radius:3px;padding:0;overflow:hidden}.w2ui-form>div{position:absolute;overflow:hidden}.w2ui-form 
.w2ui-form-header{position:absolute;top:0;left:0;right:0;height:36px;padding:10px;overflow:hidden;font-size:16px;color:#444;background-color:#fff;border-top-left-radius:2px;border-top-right-radius:2px;border-bottom:1px solid #f1f1f1}.w2ui-form .w2ui-form-toolbar{position:absolute;left:0;right:0;margin:0;padding:2px;border-top-left-radius:3px;border-top-right-radius:3px;border-bottom:1px solid #f1f1f1}.w2ui-form .w2ui-form-tabs{position:absolute;left:0;right:0;margin:0;padding:0;height:32px;border-top-left-radius:3px;border-top-right-radius:3px;padding-top:4px;background-color:#fff}.w2ui-form .w2ui-form-tabs .w2ui-tab.active{background-color:#fcfcfb}.w2ui-form .w2ui-page{position:absolute;left:0;right:0;overflow:auto;padding:10px 5px 0 5px;border-left:1px solid inherit;border-right:1px solid inherit;background-color:inherit;border-radius:3px}.w2ui-form .w2ui-column-container{display:flex;padding:0}.w2ui-form .w2ui-column-container .w2ui-column{width:100%}.w2ui-form .w2ui-column-container .w2ui-column.col-0,.w2ui-form .w2ui-column-container .w2ui-column.col-1,.w2ui-form .w2ui-column-container .w2ui-column.col-10,.w2ui-form .w2ui-column-container .w2ui-column.col-2,.w2ui-form .w2ui-column-container .w2ui-column.col-3,.w2ui-form .w2ui-column-container .w2ui-column.col-4,.w2ui-form .w2ui-column-container .w2ui-column.col-5,.w2ui-form .w2ui-column-container .w2ui-column.col-6,.w2ui-form .w2ui-column-container .w2ui-column.col-7,.w2ui-form .w2ui-column-container .w2ui-column.col-8,.w2ui-form .w2ui-column-container .w2ui-column.col-9{padding:0;padding-left:10px}.w2ui-form .w2ui-column-container .w2ui-column.col-0{padding-left:0}.w2ui-form .w2ui-buttons{position:absolute;left:0;right:0;bottom:0;text-align:center;border-top:1px solid #f1f1f1;border-bottom:0 solid #f1f1f1;background-color:#fff;padding:15px 0;border-bottom-left-radius:3px;border-bottom-right-radius:3px}.w2ui-form .w2ui-buttons button,.w2ui-form .w2ui-buttons input[type=button]{min-width:80px;margin-right:5px}.w2ui-form input[type=checkbox]:not(.w2ui-toggle),.w2ui-form input[type=radio]{margin-top:4px;margin-bottom:4px;width:14px;height:14px}.w2ui-group-title{padding:5px 2px 0 5px;color:#656164cc;text-shadow:1px 1px 2px #fdfdfd;font-size:120%}.w2ui-group-fields{background-color:#fff;margin:5px 0 14px 0;padding:10px 5px;border-top:1px dotted #e1e1e1;border-bottom:1px dotted #e1e1e1}.w2ui-field>label{display:block;float:left;margin-top:10px;margin-bottom:0;width:120px;padding:0;white-space:nowrap;overflow:hidden;text-overflow:ellipsis;text-align:right;min-height:20px;color:#666}.w2ui-field>div{margin-bottom:3px;margin-left:128px;padding:4px;min-height:28px;float:none}.w2ui-field.w2ui-required>div{position:relative}.w2ui-field.w2ui-required:not(.w2ui-field-inline)>div::before{content:'*';position:absolute;margin-top:7px;margin-left:-8px;color:red}.w2ui-field.w2ui-required.w2ui-field-inline>div::before{content:''!important}.w2ui-field.w2ui-disabled{opacity:.45;background-color:transparent!important}.w2ui-field.w2ui-disabled input:not([type=button]):not([type=submit]):not([type=checkbox]):not([type=radio]),.w2ui-field.w2ui-disabled select,.w2ui-field.w2ui-disabled textarea{border:1px solid #bdc0c3!important;background-color:#f5f5f5!important}.w2ui-field.w2ui-span-none>label{margin:0;padding:5px 12px 0 
4px;display:block;width:98%;text-align:left}.w2ui-field.w2ui-span-none>div{margin-left:0}.w2ui-field.w2ui-span0>label{display:none}.w2ui-field.w2ui-span0>div{margin-left:0}.w2ui-field.w2ui-span1>label{width:20px}.w2ui-field.w2ui-span1>div{margin-left:28px}.w2ui-field.w2ui-span2>label{width:40px}.w2ui-field.w2ui-span2>div{margin-left:48px}.w2ui-field.w2ui-span3>label{width:60px}.w2ui-field.w2ui-span3>div{margin-left:68px}.w2ui-field.w2ui-span4>label{width:80px}.w2ui-field.w2ui-span4>div{margin-left:88px}.w2ui-field.w2ui-span5>label{width:100px}.w2ui-field.w2ui-span5>div{margin-left:108px}.w2ui-field.w2ui-span6>label{width:120px}.w2ui-field.w2ui-span6>div{margin-left:128px}.w2ui-field.w2ui-span7>label{width:140px}.w2ui-field.w2ui-span7>div{margin-left:148px}.w2ui-field.w2ui-span8>label{width:160px}.w2ui-field.w2ui-span8>div{margin-left:168px}.w2ui-field.w2ui-span9>label{width:180px}.w2ui-field.w2ui-span9>div{margin-left:188px}.w2ui-field.w2ui-span10>label{width:200px}.w2ui-field.w2ui-span10>div{margin-left:208px}.w2ui-field.w2ui-field-inline{display:inline}.w2ui-field.w2ui-field-inline>div{display:inline;margin:0;padding:0}.w2ui-field .w2ui-box-label{user-select:none;vertical-align:middle}.w2ui-field .w2ui-box-label input,.w2ui-field .w2ui-box-label span{display:inline-block;vertical-align:middle}.w2ui-field .w2ui-box-label span{padding-left:3px}.w2ui-field .w2ui-box-label input{margin:4px 0 3px 0}input:not([type=button]):not([type=submit]):not([type=checkbox]):not([type=radio]).w2ui-error,textarea.w2ui-error{border:1px solid #ffa8a8;background-color:#fff4eb}.w2field{padding:3px;border-radius:3px;border:1px solid silver}.w2ui-field-helper{position:absolute;display:inline-block;line-height:100%;pointer-events:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;-o-user-select:none;user-select:none}.w2ui-field-helper .w2ui-field-up{position:absolute;top:0;padding:2px 3px;cursor:pointer;pointer-events:all}.w2ui-field-helper .w2ui-field-down{position:absolute;bottom:0;padding:2px 3px;cursor:pointer;pointer-events:all}.w2ui-field-helper .arrow-up:hover{border-bottom-color:#81c6ff}.w2ui-field-helper .arrow-down:hover{border-top-color:#81c6ff}.w2ui-field-helper .w2ui-icon-search{position:absolute;margin:8px 0 0 -2px;display:none;color:#777;width:21px!important;font-size:13px}.w2ui-field-helper .w2ui-icon-search.show-search{display:block}.w2ui-field-helper.w2ui-list{color:inherit;position:absolute;padding:0;margin:0;min-height:28px;overflow:auto;border:1px solid #e0e0e0;border-radius:3px;font-size:6px;line-height:100%;box-sizing:border-box;pointer-events:all;background-color:#f7fafa}.w2ui-field-helper.w2ui-list.has-focus,.w2ui-field-helper.w2ui-list:focus-within{outline:auto #72b2ff;background-color:#fff}.w2ui-field-helper.w2ui-list input[type=text]{-webkit-box-shadow:none;-moz-box-shadow:none;-ms-box-shadow:none;-o-box-shadow:none;box-shadow:none}.w2ui-field-helper.w2ui-list .w2ui-multi-items{position:absolute;display:inline-block;margin:0;padding:0;pointer-events:none}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-item{pointer-events:all;float:left;margin:3px 0 0 5px;border-radius:15px;width:auto;padding:3px 24px 1px 12px;border:1px solid #b4d0de;background-color:#eff3f5;white-space:nowrap;cursor:default;font-family:OpenSans;font-size:11px;line-height:100%;height:20px;overflow:hidden;text-overflow:ellipsis;box-sizing:border-box}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-item:hover{background-color:#d0dbe1}.w2ui-field-helper.w2ui-list .w2ui-multi-items 
.li-item:last-child{border-radius:0;border:1px solid transparent;background-color:transparent}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-item:last-child input{padding:1px;padding-top:0;margin:0;border:0;outline:0;height:auto;line-height:100%;font-size:inherit;font-family:inherit;background-color:transparent}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-item .w2ui-icon{float:left;color:#828aa7;margin:1px 2px 0 -6px}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-item .w2ui-list-remove{float:right;width:16px;height:16px;margin:-2px -20px 0 0;border-radius:2px;font-size:12px;border:1px solid transparent}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-item .w2ui-list-remove:hover{background-color:#f6e5e5;border:1px solid #fac2c2;color:red;opacity:1}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-item .w2ui-list-remove:before{position:relative;display:inline-block;left:4px;opacity:.7;content:'x';line-height:1}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-item>span.file-size{pointer-events:none;color:#777}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-search{float:left;margin:0;padding:0}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-search input[type=text]{pointer-events:all;width:0;height:20px;padding:3px 0 3px 0;margin:3px 0 0 5px;border:0;background-color:transparent}.w2ui-field-helper.w2ui-list .w2ui-multi-items .li-search input[type=text]:focus{outline:0;border:0}.w2ui-field-helper.w2ui-list .w2ui-multi-file{position:absolute;left:0;right:0;top:0;bottom:0}.w2ui-field-helper.w2ui-list.w2ui-readonly .w2ui-multi-items>.li-item:hover{background-color:#eff3f5}.w2ui-field-helper.w2ui-list.w2ui-file-dragover{background-color:#e4ffda;border:1px solid #93e07d}.w2ui-field-helper.w2ui-list .w2ui-enum-placeholder{display:inline;position:absolute;pointer-events:none;color:#999;box-sizing:border-box}.w2ui-overlay .w2ui-file-preview{padding:1px;background-color:#fff}.w2ui-overlay .w2ui-file-info{display:grid;grid-template-columns:1fr 2fr;color:#fff;padding:6px 0}.w2ui-overlay .w2ui-file-info .file-caption{text-align:right;color:silver;padding-right:10px}.w2ui-overlay .w2ui-file-info .file-value{color:#fff}.w2ui-overlay .w2ui-file-info .file-type{max-width:200px;display:block-inline;overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.arrow-up{background:0 0;width:0;height:0;border-left:4px solid transparent;border-right:4px solid transparent;border-bottom:5px solid #777;font-size:0;line-height:0}.arrow-down{background:0 0;width:0;height:0;border-left:4px solid transparent;border-right:4px solid transparent;border-top:5px solid #777;font-size:0;line-height:0}.arrow-left{background:0 0;width:0;height:0;border-bottom:4px solid transparent;border-top:4px solid transparent;border-right:5px solid #777;font-size:0;line-height:0}.arrow-right{background:0 0;width:0;height:0;border-bottom:4px solid transparent;border-top:4px solid transparent;border-left:5px solid #777;font-size:0;line-height:0}.w2ui-select{cursor:default;color:#000!important;background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACgAAAALCAQAAACnzwd+AAAAcklEQVR4AcXMsQFBQQDG4P9tAgC0gJYRQJZgKQMwCqCku6vVAAAA+NJHP4KHOk0aV2pRw61n4BBmyOxKQ8I4ehZeuhd3HTx6DQEGZ7sBfr2OOOOj3Yi43kMKs9sZknofOexqZ8npMygwWZTX51CipP+YA1OiZJbYYg9lAAAAAElFTkSuQmCC);background-size:17px 6px;background-position:right center;background-repeat:no-repeat}.w2ui-select.has-focus{outline:auto 
#72b2ff;background-color:#fff!important}.w2ui-select[disabled],.w2ui-select[readonly]{background-image:none;background-color:#f1f1f1!important;color:#777!important}.w2ui-layout{overflow:hidden;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;-o-box-sizing:border-box;box-sizing:border-box}.w2ui-layout *{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;-o-box-sizing:border-box;box-sizing:border-box}.w2ui-layout>div{position:absolute;overflow:hidden;border:0;margin:0;padding:0;outline:0;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;-o-box-sizing:border-box;box-sizing:border-box}.w2ui-layout>div .w2ui-panel{display:none;position:absolute;z-index:120}.w2ui-layout>div .w2ui-panel .w2ui-panel-title{position:absolute;left:0;top:0;right:0;padding:5px;background-color:#fff;color:#656164cc;border:1px solid #efefef;border-bottom:1px solid #f5f5f5}.w2ui-layout>div .w2ui-panel .w2ui-panel-tabs{position:absolute;left:0;top:0;right:0;z-index:2;display:none;overflow:hidden;background-color:#fff;padding:0}.w2ui-layout>div .w2ui-panel .w2ui-panel-tabs>.w2ui-tab.active{background-color:#fcfcfc}.w2ui-layout>div .w2ui-panel .w2ui-panel-toolbar{position:absolute;left:0;top:0;right:0;z-index:2;display:none;overflow:hidden;background-color:#fafafa;border-bottom:1px solid #efefef;padding:2px}.w2ui-layout>div .w2ui-panel .w2ui-panel-content{position:absolute;left:0;top:0;right:0;bottom:0;z-index:1;color:inherit;background-color:#fcfcfc}.w2ui-layout>div .w2ui-resizer{display:none;position:absolute;z-index:121;background-color:transparent}.w2ui-layout>div .w2ui-resizer.active,.w2ui-layout>div .w2ui-resizer:hover{background-color:#c8cad1}.w2ui-grid{position:relative;border:1px solid #e1e1e1;border-radius:2px;overflow:hidden!important}.w2ui-grid>div{position:absolute;overflow:hidden}.w2ui-grid .w2ui-grid-header{position:absolute;top:0;left:0;right:0;height:36px;padding:10px;overflow:hidden;font-size:16px;color:#444;background-color:#fff;border-top-left-radius:2px;border-top-right-radius:2px;border-bottom:1px solid #e1e1e1!important}.w2ui-grid .w2ui-grid-toolbar{position:absolute;border-bottom:1px solid #efefef;background-color:#fafafa;height:52px;padding:9px 3px 0 3px;margin:0;box-shadow:0 1px 2px #f5f5f5}.w2ui-grid .w2ui-grid-toolbar .w2ui-tb-button .w2ui-tb-icon{margin:3px 0 0 0!important}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input{position:relative;width:300px;left:0;top:-4px}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-search-down{position:absolute;top:7px;left:4px;color:#8c99a7;font-size:13px}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-grid-search-name{position:absolute;margin:5px 0 0 3px;padding:4px 27px 4px 10px;background-color:#fbfbfb;border:1px solid #b9b9b9;border-radius:15px;pointer-events:none}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-grid-search-name .name-icon{position:absolute;margin-left:-6px;color:#8c99a7}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-grid-search-name .name-text{padding-left:14px}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-grid-search-name .name-cross{position:absolute;margin-top:-4px;margin-left:7px;padding:4px 5px;pointer-events:all}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-grid-search-name .name-cross:hover{color:red}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input 
.w2ui-search-all{outline:0!important;border-radius:4px!important;line-height:normal!important;height:30px!important;width:300px!important;border:1px solid #e1e1e1!important;color:#000!important;background-color:#f1f1f1!important;padding:1px 28px 0 28px!important;margin:0!important;margin-top:1px!important;font-size:13px!important}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-search-all:focus{border:1px solid #007cff!important;background-color:#fff!important}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-search-drop{position:absolute;right:2px;top:3px;height:26px;width:26px;font-size:16px;color:#a4adb1;cursor:pointer;padding:7px 2px 7px 2px;border-radius:4px;background-color:transparent}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-search-drop span.w2ui-icon-drop{position:relative;top:-2px}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-search-drop.checked,.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-search-input .w2ui-search-drop:hover{color:#fff;background-color:#56a1e2}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches{display:flex;flex-direction:row;flex-wrap:nowrap;border-top:1px solid #ececec;border-bottom:1px solid #ececec;background-color:#fcfdff;margin:7px -20px 0 -20px;padding:6px 50px 6px 20px;height:36px}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches>div{white-space:nowrap}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches>span{white-space:nowrap;text-overflow:ellipsis;overflow:hidden;border:1px solid #88c3f7;border-radius:15px;padding:4px 12px;margin:0 4px;color:#4c9ad6;font-size:12px;font-weight:700;background-color:#f5f9fe}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches>span>span{font-size:9px;position:relative;top:-1px;left:2px}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches .grid-search-line{border-left:1px solid #ececec;width:11px;height:22px;margin-left:7px;margin-top:1px}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches .w2ui-grid-search-logic{border:1px solid #c8c9ca!important;color:#676767!important}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches button.grid-search-btn{margin:0 3px;padding:0;height:24px;font-size:11px}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches button.grid-search-btn.btn-remove{min-width:26px;position:absolute;left:calc(100% - 35px)}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches .grid-search-count{background-color:#4cb1fd;border-radius:10px;color:#fff;padding:0 6px 1px 6px;font-size:11px!important;position:relative!important;top:0!important}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches .grid-search-list li{padding:5px}.w2ui-grid .w2ui-grid-toolbar .w2ui-grid-searches .grid-search-list input{position:relative;top:2px;left:-3px}.w2ui-grid .w2ui-grid-save-search{padding-top:30px;text-align:center}.w2ui-grid .w2ui-grid-save-search span{width:280px;display:inline-block;text-align:left;padding-bottom:4px}.w2ui-grid .w2ui-grid-save-search .search-name{width:280px!important}.w2ui-grid .w2ui-grid-body{position:absolute;overflow:hidden;padding:0;background-color:#fff;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;-o-user-select:none;user-select:none}.w2ui-grid .w2ui-grid-body input,.w2ui-grid .w2ui-grid-body select,.w2ui-grid .w2ui-grid-body textarea{-webkit-user-select:text;-moz-user-select:text;-ms-user-select:text;-o-user-select:text;user-select:text}.w2ui-grid .w2ui-grid-body .w2ui-grid-columns,.w2ui-grid .w2ui-grid-body .w2ui-grid-fcolumns{overflow:hidden;position:absolute;left:0;top:0;right:0;box-shadow:0 1px 4px #efefef;height:auto}.w2ui-grid .w2ui-grid-body 
.w2ui-grid-columns table,.w2ui-grid .w2ui-grid-body .w2ui-grid-fcolumns table{height:auto}.w2ui-grid .w2ui-grid-body .w2ui-grid-columns .w2ui-resizer,.w2ui-grid .w2ui-grid-body .w2ui-grid-fcolumns .w2ui-resizer{position:absolute;z-index:1000;display:block;background-image:none;background-color:rgba(0,0,0,0);padding:0;margin:0;width:6px;height:12px;cursor:ew-resize}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords,.w2ui-grid .w2ui-grid-body .w2ui-grid-records{position:absolute;left:0;right:0;top:0;bottom:0}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-odd,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-odd{color:inherit;background-color:#fff;border-bottom:1px solid #f5f5f5}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-odd.w2ui-record-hover,.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-odd:hover,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-odd.w2ui-record-hover,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-odd:hover{color:inherit;background-color:#f3f3f3}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-odd.w2ui-empty-record:hover,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-odd.w2ui-empty-record:hover{background-color:#fff}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-even,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-even{color:inherit;background-color:#fbfbfb;border-bottom:1px dotted #f5f5f5}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-even.w2ui-record-hover,.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-even:hover,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-even.w2ui-record-hover,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-even:hover{color:inherit;background-color:#f3f3f3}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-even.w2ui-empty-record:hover,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-even.w2ui-empty-record:hover{background-color:#fbfbfb}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr td.w2ui-selected,.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-selected,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr td.w2ui-selected,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-selected{color:#000!important;background-color:#d9eaff!important;border-bottom:1px solid transparent}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr td.w2ui-inactive,.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-inactive,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr td.w2ui-inactive,.w2ui-grid .w2ui-grid-body .w2ui-grid-records table tr.w2ui-inactive{background-color:#e8edf5!important}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords .w2ui-expanded1,.w2ui-grid .w2ui-grid-body .w2ui-grid-records .w2ui-expanded1{height:0;border-bottom:1px solid #b2bac0}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords .w2ui-expanded1>div,.w2ui-grid .w2ui-grid-body .w2ui-grid-records .w2ui-expanded1>div{height:0;border:0;transition:height .3s,opacity .3s}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords .w2ui-expanded2,.w2ui-grid .w2ui-grid-body .w2ui-grid-records .w2ui-expanded2{height:0;border-radius:0;border-bottom:1px solid #b2bac0}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords .w2ui-expanded2>div,.w2ui-grid .w2ui-grid-body .w2ui-grid-records .w2ui-expanded2>div{height:0;border:0;transition:height .3s,opacity .3s}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords .w2ui-load-more,.w2ui-grid .w2ui-grid-body .w2ui-grid-records 
.w2ui-load-more{cursor:pointer;background-color:rgba(233,237,243,.5);border-right:1px solid #f1f1f1;height:43px}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords .w2ui-load-more>div,.w2ui-grid .w2ui-grid-body .w2ui-grid-records .w2ui-load-more>div{text-align:center;color:#777;background-color:rgba(233,237,243,.5);padding:10px 0 15px 0;height:43px;border-top:1px dashed #d6d5d7;border-bottom:1px dashed #d6d5d7;font-size:12px}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords .w2ui-load-more>div:hover,.w2ui-grid .w2ui-grid-body .w2ui-grid-records .w2ui-load-more>div:hover{color:#438ba2;background-color:#f3f3f3}.w2ui-grid .w2ui-grid-body .w2ui-grid-frecords .w2ui-reoder-empty,.w2ui-grid .w2ui-grid-body .w2ui-grid-records .w2ui-reoder-empty{background-color:#eee;border-bottom:1px dashed #aaa;border-top:1px dashed #aaa}.w2ui-grid .w2ui-grid-body table{border-spacing:0;border-collapse:collapse;table-layout:fixed;width:1px}.w2ui-grid .w2ui-grid-body table .w2ui-head{margin:0;padding:0;border-right:1px solid #dcdcdc;border-bottom:1px solid #dcdcdc;color:#656164;background-image:linear-gradient(#fff,#f9f9f9)}.w2ui-grid .w2ui-grid-body table .w2ui-head>div{padding:7px 6px;white-space:nowrap;text-overflow:ellipsis;overflow:hidden;position:relative}.w2ui-grid .w2ui-grid-body table .w2ui-head.w2ui-reorder-cols-head:hover{cursor:move}.w2ui-grid .w2ui-grid-body table td{border-right:1px solid #f1f1f1;border-bottom:0 solid #d6d5d7;cursor:default;overflow:hidden}.w2ui-grid .w2ui-grid-body table td.w2ui-soft-hidden,.w2ui-grid .w2ui-grid-body table td.w2ui-soft-span{border-right-color:transparent}.w2ui-grid .w2ui-grid-body table td.w2ui-grid-data{margin:0;padding:0}.w2ui-grid .w2ui-grid-body table td.w2ui-grid-data .w2ui-info{position:relative;top:2px;left:-1px;font-size:13px;color:#8d99a7;cursor:pointer;width:18px;display:inline-block;margin-right:3px;text-align:center}.w2ui-grid .w2ui-grid-body table td.w2ui-grid-data .w2ui-clipboard-copy{float:right;margin-top:-15px;width:20px;height:16px;padding:0;text-align:center;cursor:pointer;font-size:13px;color:#8d98a7}.w2ui-grid .w2ui-grid-body table td.w2ui-grid-data .w2ui-clipboard-copy:hover{color:#545961}.w2ui-grid .w2ui-grid-body table td.w2ui-grid-data>div{padding:5px;overflow:hidden;white-space:nowrap;text-overflow:ellipsis}.w2ui-grid .w2ui-grid-body table td.w2ui-grid-data>div.flexible-record{height:auto;overflow:visible;white-space:normal}.w2ui-grid .w2ui-grid-body table td.w2ui-grid-data .w2ui-show-children{width:16px;height:10px;display:inline-block;position:relative;top:-1px;cursor:pointer}.w2ui-grid .w2ui-grid-body table td:last-child{border-right:0}.w2ui-grid .w2ui-grid-body table td:last-child div{text-overflow:clip}.w2ui-grid .w2ui-grid-body table .w2ui-col-number{width:34px;color:#777;background-color:rgba(233,237,243,.5)}.w2ui-grid .w2ui-grid-body table .w2ui-col-number div{padding:0 7px 0 3px;text-align:right}.w2ui-grid .w2ui-grid-body table .w2ui-col-number.w2ui-head{cursor:pointer}.w2ui-grid .w2ui-grid-body table .w2ui-col-select{width:26px}.w2ui-grid .w2ui-grid-body table .w2ui-col-select div{padding:0 0;text-align:center;overflow:hidden}.w2ui-grid .w2ui-grid-body table .w2ui-col-select div input[type=checkbox]{margin-top:0;margin-bottom:0;position:relative}.w2ui-grid .w2ui-grid-body table .w2ui-col-expand{width:26px}.w2ui-grid .w2ui-grid-body table .w2ui-col-expand div{padding:0 0;text-align:center;font-weight:700}.w2ui-grid .w2ui-grid-body table .w2ui-col-order{width:26px}.w2ui-grid .w2ui-grid-body table .w2ui-col-order.w2ui-grid-data 
div{cursor:move;height:18px;background-image:url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPgogIDxyZWN0IHN0eWxlPSJmaWxsOiAjYWFhOyIgeD0iMCIgeT0iNCIgaGVpZ2h0PSIzIiB3aWR0aD0iMTYiPjwvcmVjdD4KICA8cmVjdCBzdHlsZT0iZmlsbDogI2FhYTsiIHg9IjAiIHk9IjkiIGhlaWdodD0iMyIgd2lkdGg9IjE2Ij48L3JlY3Q+Cjwvc3ZnPg==);background-position:5px 2px;background-size:14px 12px;background-repeat:no-repeat}.w2ui-grid .w2ui-grid-body table .w2ui-col-selected{background-color:#d1d1d1!important}.w2ui-grid .w2ui-grid-body table .w2ui-row-selected{background-color:#e2e2e2!important}.w2ui-grid .w2ui-grid-body .w2ui-intersection-marker{position:absolute;top:0;left:0;margin-left:-5px;height:26px;width:10px}.w2ui-grid .w2ui-grid-body .w2ui-intersection-marker.left{left:0;margin-left:-5px}.w2ui-grid .w2ui-grid-body .w2ui-intersection-marker.right{right:0;margin-right:5px}.w2ui-grid .w2ui-grid-body .w2ui-intersection-marker .top-marker{position:absolute;top:0;height:0;width:0;border-top:5px solid #72b2ff;border-left:5px solid transparent;border-right:5px solid transparent}.w2ui-grid .w2ui-grid-body .w2ui-intersection-marker .bottom-marker{position:absolute;bottom:0;height:0;width:0;border-bottom:5px solid #72b2ff;border-left:5px solid transparent;border-right:5px solid transparent}.w2ui-grid .w2ui-grid-body div.w2ui-col-header{height:auto!important;width:100%;overflow:hidden;padding-right:10px!important}.w2ui-grid .w2ui-grid-body div.w2ui-col-header>div.w2ui-sort-up{border:4px solid transparent;border-bottom:5px solid #8d99a7;margin-top:-2px;margin-right:-7px;float:right}.w2ui-grid .w2ui-grid-body div.w2ui-col-header>div.w2ui-sort-down{border:4px solid transparent;border-top:5px solid #8d99a7;margin-top:2px;margin-right:-7px;float:right}.w2ui-grid .w2ui-grid-body .w2ui-col-group{text-align:center}.w2ui-grid .w2ui-grid-body .w2ui-grid-scroll1{position:absolute;left:0;bottom:0;border-top:1px solid #ddd;border-right:1px solid #ddd;background-color:#fafafa}.w2ui-grid .w2ui-grid-empty-msg{position:absolute;top:27px;left:0;right:0;bottom:0;background-color:rgba(255,255,255,.65)}.w2ui-grid .w2ui-grid-empty-msg>div{position:absolute;left:0;right:0;top:45%;transform:translateY(-45%);text-align:center;font-size:13px;color:#666}.w2ui-grid .w2ui-changed{background:url(data:image/gif;base64,R0lGODlhCgAKAJEAALAABf///wAAAAAAACH5BAEAAAIALAAAAAAKAAoAAAIPlI8Hy8mbxIsSUnup3rQAADs=) no-repeat top right}.w2ui-grid .w2ui-edit-box{position:absolute;z-index:1001;border:1.5px solid #6299da;pointer-events:auto;padding:2px!important;margin:0!important;background-color:#fff}.w2ui-grid .w2ui-edit-box .w2ui-editable div.w2ui-input{outline:0;padding:.5px 1.5px!important}.w2ui-grid .w2ui-edit-box .w2ui-editable input{top:-2px!important;padding:1.5px!important}.w2ui-grid .w2ui-editable{overflow:hidden;height:100%!important;margin:0!important;padding:3.5px 2px 2px 2px!important}.w2ui-grid .w2ui-editable input{position:relative;top:-1px;border:0!important;border-radius:0!important;border-color:transparent!important;padding:3px!important;display:inline-block;width:100%!important;height:100%!important;pointer-events:auto!important}.w2ui-grid .w2ui-editable div.w2ui-input{position:relative;top:-.5px;border:0 transparent;border-radius:0!important;margin:0!important;padding:5px 
3px!important;display:inline-block;width:100%!important;height:100%!important;pointer-events:auto!important;background-color:#fff;white-space:pre;overflow:hidden;-webkit-user-select:text;-moz-user-select:text;-ms-user-select:text;-o-user-select:text;user-select:text}.w2ui-grid .w2ui-editable input.w2ui-select{outline:0!important;background:#fff}.w2ui-grid .w2ui-grid-summary{position:absolute;border-top:1px solid #dcdcdc;box-shadow:0 -1px 4px #f0eeee}.w2ui-grid .w2ui-grid-summary table{color:inherit}.w2ui-grid .w2ui-grid-summary table .w2ui-odd{background-color:#fff}.w2ui-grid .w2ui-grid-summary table .w2ui-even{background-color:#fbfbfb}.w2ui-grid .w2ui-grid-footer{position:absolute;bottom:0;left:0;right:0;margin:0;padding:0;text-align:center;font-size:11px;height:24px;overflow:hidden;-webkit-user-select:text;-moz-user-select:text;-ms-user-select:text;-o-user-select:text;user-select:text;box-shadow:0 -1px 4px #f5f5f5;color:#444;background-color:#f8f8f8;border-top:1px solid #e4e4e4;border-bottom-left-radius:2px;border-bottom-right-radius:2px}.w2ui-grid .w2ui-grid-footer .w2ui-footer-left{float:left;padding-top:5px;padding-left:5px}.w2ui-grid .w2ui-grid-footer .w2ui-footer-right{float:right;padding-top:5px;padding-right:5px}.w2ui-grid .w2ui-grid-footer .w2ui-footer-center{padding:2px;text-align:center}.w2ui-grid .w2ui-grid-footer .w2ui-footer-center .w2ui-footer-nav{width:110px;margin:0 auto;padding:0;text-align:center}.w2ui-grid .w2ui-grid-footer .w2ui-footer-center .w2ui-footer-nav input[type=text]{padding:1px 2px 2px 2px;border-radius:3px;width:40px;text-align:center}.w2ui-grid .w2ui-grid-footer .w2ui-footer-center .w2ui-footer-nav a.w2ui-footer-btn{display:inline-block;border-radius:3px;cursor:pointer;font-size:11px;line-height:16px;padding:1px 5px;width:30px;height:18px;margin-top:-1px;color:#000;background-color:transparent}.w2ui-grid .w2ui-grid-footer .w2ui-footer-center .w2ui-footer-nav a.w2ui-footer-btn:hover{color:#000;background-color:#aec8ff}.w2ui-grid .w2ui-grid-focus-input{position:absolute;top:0;right:0;z-index:-1;opacity:0;overflow:hidden;padding:0;margin:0;width:1px;height:1px;resize:none;border:0}.w2ui-ss .w2ui-grid-body .w2ui-grid-records table tr td.w2ui-selected{background-color:#eef4fe!important}.w2ui-ss .w2ui-grid-body .w2ui-grid-records table tr td.w2ui-inactive{background-color:#f4f6f9!important}.w2ui-ss .w2ui-grid-body .w2ui-grid-records table td{border-right-width:1px;border-bottom:1px solid #efefef}.w2ui-ss .w2ui-grid-body .w2ui-grid-records table tr.w2ui-even,.w2ui-ss .w2ui-grid-body .w2ui-grid-records table tr.w2ui-even:hover,.w2ui-ss .w2ui-grid-body .w2ui-grid-records table tr.w2ui-odd,.w2ui-ss .w2ui-grid-body .w2ui-grid-records table tr.w2ui-odd:hover{background-color:inherit}.w2ui-ss .w2ui-grid-body .w2ui-grid-records table tr:first-child td{border-top:0;border-bottom:0}.w2ui-ss .w2ui-grid-body .w2ui-grid-frecords table tr td.w2ui-selected{background-color:#eef4fe!important}.w2ui-ss .w2ui-grid-body .w2ui-grid-frecords table tr td.w2ui-inactive{background-color:#f4f6f9!important}.w2ui-ss .w2ui-grid-body .w2ui-grid-frecords table td{border-right-width:1px;border-bottom:1px solid #efefef}.w2ui-ss .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-even,.w2ui-ss .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-even:hover,.w2ui-ss .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-odd,.w2ui-ss .w2ui-grid-body .w2ui-grid-frecords table tr.w2ui-odd:hover{background-color:inherit}.w2ui-ss .w2ui-grid-body .w2ui-grid-frecords table tr:first-child 
td{border-bottom:0}.w2ui-ss .w2ui-grid-body .w2ui-selection{position:absolute;z-index:1000;border:1.5px solid #6299da;pointer-events:none}.w2ui-ss .w2ui-grid-body .w2ui-selection .w2ui-selection-resizer{cursor:crosshair;position:absolute;bottom:0;right:0;width:6px;height:6px;margin-right:-3px;margin-bottom:-3px;background-color:#457fc2;border:.5px solid #fff;outline:1px solid #fff;pointer-events:auto}.w2ui-ss .w2ui-grid-body .w2ui-selection.w2ui-inactive{border:1.5px solid #c0c2c5}.w2ui-ss .w2ui-grid-body .w2ui-selection.w2ui-inactive .w2ui-selection-resizer{background-color:#b0b0b0}.w2ui-ss .w2ui-grid-body .w2ui-soft-range{position:absolute;pointer-events:none;white-space:nowrap;overflow:hidden;text-overflow:ellipsis}.w2ui-ss .w2ui-grid-body .w2ui-changed{background:inherit}.w2ui-ss .w2ui-grid-body .w2ui-editable input{outline:0!important}.w2ui-info-bubble table{font-family:OpenSans;font-size:12px;color:#fff;text-shadow:1px 1px solid #999}.w2ui-info-bubble table tr td:first-child{white-space:nowrap;padding:2px;padding-right:10px;color:#ddd;vertical-align:top}.w2ui-info-bubble table tr td:last-child{white-space:pre;padding:2px}.w2ui-overlay .w2ui-grid-search-suggest{border-top-left-radius:5px;border-top-right-radius:5px;padding:10px;background-color:#fff;border-bottom:1px solid #e6e6e6;color:#444}.w2ui-overlay .w2ui-grid-search-single{font-size:12px;padding-top:10px}.w2ui-overlay .w2ui-grid-search-single .field{white-space:nowrap;text-overflow:ellipsis;overflow:hidden;border:1px solid #a9b6c2;border-radius:4px;padding:4px 12px;margin:0 2px;color:#4295d4;background-color:#f5f9fe}.w2ui-overlay .w2ui-grid-search-single .operator{display:inline-block;color:#000;background-color:#e6e6e6;border-radius:4px;margin:0 4px;padding:6px 10px}.w2ui-overlay .w2ui-grid-search-single .value{white-space:nowrap;text-overflow:ellipsis;overflow:hidden;border:1px solid #a9b6c2;border-radius:4px;margin:0 2px;padding:4px 12px}.w2ui-overlay .w2ui-grid-search-single .buttons{text-align:left;padding:15px 10px 10px 0}.w2ui-overlay .w2ui-grid-search-advanced{text-align:left;padding:0;background-color:#fff;text-shadow:none;border:1px solid #cdcdd8;box-shadow:0 3px 14px 1px #e8e8e8}.w2ui-overlay .w2ui-grid-search-advanced .search-title{padding:20px 0 9px 20px;font-size:17px;font-weight:700;color:#555}.w2ui-overlay .w2ui-grid-search-advanced .search-title .search-logic{float:right;padding-right:10px}.w2ui-overlay .w2ui-grid-search-advanced table{color:#5f5f5f;font-size:13px;padding:12px 4px 0 4px}.w2ui-overlay .w2ui-grid-search-advanced table td{padding:4px;min-height:40px}.w2ui-overlay .w2ui-grid-search-advanced table td.caption{text-align:right;padding-right:5px;padding-left:20px}.w2ui-overlay .w2ui-grid-search-advanced table td.operator{text-align:left;padding:5px}.w2ui-overlay .w2ui-grid-search-advanced table td.operator select{width:100%;color:#000}.w2ui-overlay .w2ui-grid-search-advanced table td.value{padding-right:5px;padding-left:5px}.w2ui-overlay .w2ui-grid-search-advanced table td.value input[type=text]{border-radius:3px;padding:5px;margin-right:3px;height:28px}.w2ui-overlay .w2ui-grid-search-advanced table td.value select{padding:0 20px 5px 5px;margin-right:3px;height:28px}.w2ui-overlay .w2ui-grid-search-advanced table td.actions:nth-child(1){padding:25px 10px 10px 10px;text-align:left}.w2ui-overlay .w2ui-grid-search-advanced table td.actions:nth-child(2){padding:25px 10px 10px 10px;text-align:right;background-color:#fff}.w2ui-grid-skip{width:50px;margin:-6px 
3px;padding:3px!important}.w2ui-popup{position:fixed;z-index:1600;overflow:hidden;font-family:OpenSans;border-radius:6px;padding:0;margin:0;border:1px solid #777;background-color:#fafafa;box-shadow:0 0 25px #555}.w2ui-popup,.w2ui-popup *{box-sizing:border-box}.w2ui-popup.w2ui-anim-open{opacity:0;transform:scale(.8)}.w2ui-popup.w2ui-anim-close{opacity:0;transform:scale(.9)}.w2ui-popup .w2ui-popup-title{padding:10px;border-radius:6px 6px 0 0;background-color:#fff;border-bottom:1px solid #eee;position:absolute;overflow:hidden;height:42px;left:0;right:0;top:0;text-overflow:ellipsis;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;-o-user-select:none;user-select:none;cursor:move;font-size:17px;color:#555;z-index:300}.w2ui-popup .w2ui-popup-button{float:right;width:25px;height:23px;cursor:pointer;color:#888;margin:0 0 0 5px}.w2ui-popup .w2ui-popup-button span.w2ui-icon{width:24px;height:23px}.w2ui-popup .w2ui-popup-button.w2ui-popup-close:hover{color:#222}.w2ui-popup .w2ui-popup-button.w2ui-popup-max:hover{color:#222}.w2ui-popup .w2ui-box,.w2ui-popup .w2ui-box-temp{position:absolute;left:0;right:0;top:42px;bottom:58px;z-index:100}.w2ui-popup .w2ui-popup-body{font-size:12px;line-height:130%;padding:0 7px 7px 7px;color:#000;background-color:#fafafa;position:absolute;overflow:auto;width:100%;height:100%}.w2ui-popup .w2ui-popup-buttons{font-size:11px;padding:14px;border-radius:0 0 6px 6px;border-top:1px solid #eee;background-color:#fff;text-align:center;position:absolute;overflow:hidden;height:56px;left:0;right:0;bottom:0;z-index:200}.w2ui-popup .w2ui-popup-no-title{border-top-left-radius:6px;border-top-right-radius:6px;top:0}.w2ui-popup .w2ui-popup-no-buttons{border-bottom-left-radius:6px;border-bottom-right-radius:6px;bottom:0}.w2ui-popup .w2ui-msg-text{font-size:14px;line-height:1.5}.w2ui-popup .w2ui-prompt{font-size:12px;padding:0 10px}.w2ui-popup .w2ui-prompt.textarea{margin-top:20px}.w2ui-popup .w2ui-prompt>div{margin-bottom:5px}.w2ui-popup .w2ui-prompt>label{margin-right:5px}.w2ui-popup .w2ui-prompt input{width:230px}.w2ui-popup .w2ui-prompt textarea{width:100%;height:50px;resize:none}.w2ui-message{font-size:12px;position:absolute;z-index:250;background-color:#fcfcfc;border:1px solid #999;box-shadow:0 0 15px #aaa;box-sizing:border-box;border-top:0;border-radius:0 0 6px 6px;overflow:auto}.w2ui-message .w2ui-msg-text{font-size:14px;line-height:1.5}.w2ui-message .w2ui-message-body{position:absolute;top:0;bottom:45px;left:0;right:0;overflow:auto}.w2ui-message .w2ui-message-body .w2ui-centered{line-height:1.5}.w2ui-message .w2ui-message-buttons{position:absolute;height:45px;bottom:0;left:0;right:0;border-top:1px solid #efefef;background-color:#fff;text-align:center;padding:8px}.w2ui-sidebar{position:relative;cursor:default;overflow:hidden;background-color:#fbfbfb;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;-o-box-sizing:border-box;box-sizing:border-box}.w2ui-sidebar *{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;-ms-box-sizing:border-box;-o-box-sizing:border-box;box-sizing:border-box}.w2ui-sidebar>div{position:absolute;overflow:hidden}.w2ui-sidebar .w2ui-sidebar-top{position:absolute;z-index:2;top:0;left:0;right:0}.w2ui-sidebar .w2ui-sidebar-top .w2ui-flat-left,.w2ui-sidebar .w2ui-sidebar-top .w2ui-flat-right{position:absolute;right:2px;top:2px;height:24px;padding:5px;border-radius:2px;background-size:16px 12px;background-position:center center;background-repeat:no-repeat;background-color:#fbfbfb}.w2ui-sidebar .w2ui-sidebar-top 
.w2ui-flat-left:hover,.w2ui-sidebar .w2ui-sidebar-top .w2ui-flat-right:hover{background-color:#f1f1f1}.w2ui-sidebar .w2ui-sidebar-top .w2ui-flat-left{left:auto;width:25px;background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAQAAADZc7J/AAAAzklEQVR4Ae2THRDEMBCFzy1ucatb3eJ2uhi3uNUtbnGrW9zi1rOdNzdvdl7nDpvYt/9/r7+/51myZZf/zXkD2iMHHRSb0x3oskwMieK05PwEXqP4ExSL0wp0ROao2OOuMPOMdUL6XU1/oGLcFWb+NqyTd2W/P/2qTr9h+nFXhOkHXRHiNyjrgp/U/V+WaQcaNY13zZI0A1JvcVqAnrGDTdtDtZUHjHIJhxxVLN0iqXgCP1l/7h8U9kc6abyJ4/eNWPpGdBv+XdUK0K8cnvcBly2rDr7C1HQAAAAASUVORK5CYII=)}.w2ui-sidebar .w2ui-sidebar-top .w2ui-flat-right{left:2px;width:auto;background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAQAAADZc7J/AAAAz0lEQVR4Ae2UG7TGMBCEr1vd4la3uMUtuli3utWtbnGLW9zi9l/bDMzJG7u12cfJfLunf1+UEC9Bv0vVQwJ8hjRCaZafflb1C9RQf4OD0gSDE+i+PiJAabFhQc1y1AYYsJGLY3lgxM17uWPO56yPiFDqVPWgRtpIHSd1zPnwkBsdI58OlNwx4fP2X0TgfMTOoHSdKOXkpyNvEyQh7ul+4swxJSTQuwNDxz68l/ukVNbu0Neen5Z+KvzWxBAqHds349uPFJ/jVOrPjxUq++OLf+20q5+noXo0AAAAAElFTkSuQmCC)}.w2ui-sidebar .w2ui-sidebar-bottom{position:absolute;z-index:2;bottom:0;left:0;right:0}.w2ui-sidebar .w2ui-sidebar-body{position:absolute;z-index:1;overflow:auto;top:0;bottom:0;left:0;right:0;padding:2px 0;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;-o-user-select:none;user-select:none}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node{position:relative;border-radius:4px;margin:0 3px;padding:1px 0;border:1px solid transparent}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node .w2ui-node-text{color:#000;text-shadow:0 0 0 #fff;pointer-events:none}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node .w2ui-node-text:hover{color:inherit}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node .w2ui-node-image>span{color:#737485}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node .w2ui-node-handle{display:inline-block;padding:0;margin:0;height:100%;position:absolute}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node:hover{background-color:#f1f1f1}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node .w2ui-node-image{width:22px;text-align:center;pointer-events:none}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node .w2ui-node-image>span{color:#888}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node.w2ui-disabled,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node.w2ui-disabled:hover{background:0 0}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node.w2ui-disabled .w2ui-node-image,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node.w2ui-disabled .w2ui-node-image>span,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node.w2ui-disabled .w2ui-node-text,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node.w2ui-disabled:hover .w2ui-node-image,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node.w2ui-disabled:hover .w2ui-node-image>span,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node.w2ui-disabled:hover .w2ui-node-text{opacity:.4;color:#000;text-shadow:0 0 0 #fff}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node button,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node input{pointer-events:auto}.w2ui-sidebar .w2ui-sidebar-body .w2ui-selected,.w2ui-sidebar .w2ui-sidebar-body .w2ui-selected:hover{background-color:#f3f5ff;position:relative;border:1px solid #dee1ff}.w2ui-sidebar .w2ui-sidebar-body .w2ui-selected .w2ui-node-image,.w2ui-sidebar .w2ui-sidebar-body .w2ui-selected .w2ui-node-image>span,.w2ui-sidebar .w2ui-sidebar-body .w2ui-selected .w2ui-node-text,.w2ui-sidebar .w2ui-sidebar-body .w2ui-selected:hover .w2ui-node-image,.w2ui-sidebar .w2ui-sidebar-body .w2ui-selected:hover .w2ui-node-image>span,.w2ui-sidebar .w2ui-sidebar-body .w2ui-selected:hover .w2ui-node-text{color:inherit;text-shadow:0 0 0 
#fff}.w2ui-sidebar .w2ui-sidebar-body .w2ui-selected:before{content:"";border:1px dashed transparent;border-radius:4px;position:absolute;top:-1px;bottom:-1px;left:-1px;right:-1px;pointer-events:none}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-text{white-space:nowrap;padding:5px 0 5px 3px;margin:1px 0 1px 22px;position:relative;z-index:1;font-size:12px}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-group{white-space:nowrap;overflow:hidden;padding:10px 0 10px 10px;margin:0;cursor:default;color:#6a5e88;background-color:transparent}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-group :nth-child(1){margin-right:10px;float:right;color:transparent}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-group :nth-child(2){font-weight:400;text-transform:uppercase}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-sub{overflow:hidden}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data{padding:2px}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-node-image{padding:3px 0 0 0;float:left}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-node-image>span{font-size:16px;color:#737485;text-shadow:0 0 0 #fff}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-node-image.w2ui-icon{margin-top:3px}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-node-count{float:right;border:1px solid #f6fcf4;border-radius:20px;width:auto;padding:2px 7px;margin:3px 4px -2px 0;background-color:#f2f8f0;color:#666;box-shadow:0 0 2px #474545;text-shadow:1px 1px 0 #fff;position:relative;z-index:2}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-collapsed,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-expanded{float:right;width:auto;height:18px;position:relative;z-index:2}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-collapsed span,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-expanded span{border-color:transparent;background-color:transparent;box-shadow:none;padding:2px 5px;border-radius:0}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-collapsed span:after,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-expanded span:after{content:"";position:absolute;border-left:5px solid grey;border-top:5px solid transparent;border-bottom:5px solid transparent;transform:rotateZ(-90deg);pointer-events:none;margin-left:-4px;margin-top:7px}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-collapsed span:hover,.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-expanded span:hover{border-color:transparent;background-color:transparent}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-data .w2ui-collapsed span:after{transform:rotateZ(90deg)}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-flat{display:block;padding:2px 0;text-align:center}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-flat .w2ui-node-image{float:none;text-align:center;width:auto;padding:1px 0}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-flat .w2ui-node-image>span{font-size:16px;color:#737485;text-shadow:0 0 0 #fff}.w2ui-sidebar .w2ui-sidebar-body .w2ui-node-flat .w2ui-node-image.w2ui-icon{width:21px}.w2ui-tabs{cursor:default;overflow:hidden;position:relative;background-color:#fff;min-height:28px;padding:0;margin:0}.w2ui-tabs .w2ui-tabs-line{position:absolute;left:0;right:0;bottom:0;z-index:1;border:0;height:1px;background-color:#e2e2e2}.w2ui-tabs .w2ui-scroll-left,.w2ui-tabs .w2ui-scroll-right{z-index:30;display:flex}.w2ui-tabs .w2ui-scroll-wrapper{display:flex;flex-direction:row;flex-wrap:nowrap;justify-content:flex-start;align-content:flex-start;padding:0 2px}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tab{height:28px;position:relative;z-index:20;padding:7px 
20px 4px 20px;text-align:center;color:#000;background-color:transparent;border:2px solid transparent;white-space:nowrap;margin:0 1px;border-radius:0;cursor:default;user-select:none}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tab.active{color:#0175ff;background-color:transparent;border:2px solid transparent;border-bottom:2px solid #0175ff;margin-bottom:0}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tab:hover{background-color:#dfe1e630}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tab.moving{color:inherit;background-color:#eee;border:2px solid transparent;border-radius:0;margin-bottom:0}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tab.closable{padding:6px 28px 6px 20px}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tab .w2ui-tab-close{position:absolute;right:3px;top:5px;color:#555;float:right;margin-top:-3px;padding:2px 4px;width:20px;height:20px;opacity:.6;border:0;border-top:3px solid transparent;border-radius:3px}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tab .w2ui-tab-close:hover{background-color:#f9e7e7;color:red;opacity:1;font-weight:700}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tab .w2ui-tab-close:active{background-color:#ffd1d1}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tab .w2ui-tab-close:before{position:relative;top:-2px;left:0;color:inherit;text-shadow:inherit;content:'x'}.w2ui-tabs .w2ui-scroll-wrapper .w2ui-tabs-right{padding:8px 2px;width:100%;text-align:right;white-space:nowrap}.w2ui-tabs.w2ui-tabs-up .w2ui-tabs-line{top:0;bottom:auto}.w2ui-tabs.w2ui-tabs-up .w2ui-scroll-wrapper .w2ui-tab{border:2px solid transparent;border-top:2px solid transparent;border-radius:0 0 4px 4px}.w2ui-tabs.w2ui-tabs-up .w2ui-scroll-wrapper .w2ui-tab.active{border:2px solid transparent;border-top:2px solid #0175ff;margin-top:0}.w2ui-toolbar{background-color:#f5f5f5;user-select:none;padding:2px}.w2ui-toolbar .w2ui-tb-line{overflow:hidden;position:relative;min-height:28px;padding:2px;margin:0}.w2ui-toolbar .disabled{opacity:.3}.w2ui-toolbar .w2ui-scroll-left,.w2ui-toolbar .w2ui-scroll-right{z-index:30;display:flex}.w2ui-toolbar .w2ui-tb-line:nth-child(2),.w2ui-toolbar .w2ui-tb-line:nth-child(3),.w2ui-toolbar .w2ui-tb-line:nth-child(4){border-top:1px solid #e7e7e7;padding-top:4px;margin:0}.w2ui-toolbar .w2ui-scroll-wrapper{display:flex;flex-direction:row;flex-wrap:nowrap;justify-content:flex-start;align-content:flex-start;padding:0}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button{position:relative;z-index:20;height:30px;min-width:30px;padding:2px;border:1px solid transparent;border-radius:4px;background-color:transparent;white-space:nowrap;margin:0 1px;cursor:default;user-select:none;flex-shrink:0}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button .w2ui-tb-icon{float:left;width:22px;margin:4px 0 0 1px;text-align:center}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button .w2ui-tb-icon>span{font-size:15px;color:#8d99a7}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button .w2ui-tb-text{margin-left:20px;color:#000;padding:5px}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button .w2ui-tb-text .w2ui-tb-color-box{display:inline-block;height:13px;width:13px;margin:0 -1px -2px 0;border-radius:1px;border:1px solid #fff;box-shadow:0 0 1px #555}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button .w2ui-tb-text .w2ui-tb-count{padding:0 0 0 4px}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button .w2ui-tb-text .w2ui-tb-count>span{border:1px solid #f6fcf4;border-radius:11px;width:auto;height:18px;padding:0 6px 1px 6px;background-color:#f2f8f0;color:#666;box-shadow:0 0 2px #474545;text-shadow:1px 1px 0 #fff}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button .w2ui-tb-text 
.w2ui-tb-down{display:inline-block;width:10px}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button .w2ui-tb-text .w2ui-tb-down>span{display:inline-block;position:relative;top:3px;left:3px;border:4px solid transparent;border-top:5px solid #8d99a7}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button.over{border:1px solid transparent;background-color:#eaeaed}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button.over .w2ui-tb-text{color:#000}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button.checked{border:1px solid #d2d2d2;background-color:#fff}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button.checked .w2ui-tb-text{color:#000}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button.down{border:1px solid #ccc;background-color:#eaeaed}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button.down .w2ui-tb-text{color:#666}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-button.no-icon .w2ui-tb-text{margin-left:0}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-right{width:100%;text-align:right;white-space:nowrap}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-break{background-image:linear-gradient(to bottom,rgba(153,153,153,.1) 0,#999 40%,#999 60%,rgba(153,153,153,.1) 100%);width:1px;height:24px;padding:0;margin:3px 6px}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-html{white-space:nowrap}.w2ui-toolbar .w2ui-scroll-wrapper .w2ui-tb-spacer{width:100%} \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py b/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py deleted file mode 100644 index 8f369a2afedb6c6e69fd52ff9a9a6b1cdf965937..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 4 # 100ep -> 400ep - -lr_multiplier.scheduler.milestones = [ - milestone * 4 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py deleted file mode 100644 index b3b4d1c5663fb49b2fc40752d6b7a42eddd58e75..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/tracking/iou_weighted_hungarian_bbox_iou_tracker.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python3 -# Copyright 2004-present Facebook. All Rights Reserved. 
- -import numpy as np -from typing import List - -from detectron2.config import CfgNode as CfgNode_ -from detectron2.config import configurable - -from .base_tracker import TRACKER_HEADS_REGISTRY -from .vanilla_hungarian_bbox_iou_tracker import VanillaHungarianBBoxIOUTracker - - -@TRACKER_HEADS_REGISTRY.register() -class IOUWeightedHungarianBBoxIOUTracker(VanillaHungarianBBoxIOUTracker): - """ - A tracker using IoU as weight in Hungarian algorithm, also known - as Munkres or Kuhn-Munkres algorithm - """ - - @configurable - def __init__( - self, - *, - video_height: int, - video_width: int, - max_num_instances: int = 200, - max_lost_frame_count: int = 0, - min_box_rel_dim: float = 0.02, - min_instance_period: int = 1, - track_iou_threshold: float = 0.5, - **kwargs, - ): - """ - Args: - video_height: height the video frame - video_width: width of the video frame - max_num_instances: maximum number of id allowed to be tracked - max_lost_frame_count: maximum number of frame an id can lost tracking - exceed this number, an id is considered as lost - forever - min_box_rel_dim: a percentage, smaller than this dimension, a bbox is - removed from tracking - min_instance_period: an instance will be shown after this number of period - since its first showing up in the video - track_iou_threshold: iou threshold, below this number a bbox pair is removed - from tracking - """ - super().__init__( - video_height=video_height, - video_width=video_width, - max_num_instances=max_num_instances, - max_lost_frame_count=max_lost_frame_count, - min_box_rel_dim=min_box_rel_dim, - min_instance_period=min_instance_period, - track_iou_threshold=track_iou_threshold, - ) - - @classmethod - def from_config(cls, cfg: CfgNode_): - """ - Old style initialization using CfgNode - - Args: - cfg: D2 CfgNode, config file - Return: - dictionary storing arguments for __init__ method - """ - assert "VIDEO_HEIGHT" in cfg.TRACKER_HEADS - assert "VIDEO_WIDTH" in cfg.TRACKER_HEADS - video_height = cfg.TRACKER_HEADS.get("VIDEO_HEIGHT") - video_width = cfg.TRACKER_HEADS.get("VIDEO_WIDTH") - max_num_instances = cfg.TRACKER_HEADS.get("MAX_NUM_INSTANCES", 200) - max_lost_frame_count = cfg.TRACKER_HEADS.get("MAX_LOST_FRAME_COUNT", 0) - min_box_rel_dim = cfg.TRACKER_HEADS.get("MIN_BOX_REL_DIM", 0.02) - min_instance_period = cfg.TRACKER_HEADS.get("MIN_INSTANCE_PERIOD", 1) - track_iou_threshold = cfg.TRACKER_HEADS.get("TRACK_IOU_THRESHOLD", 0.5) - return { - "_target_": "detectron2.tracking.iou_weighted_hungarian_bbox_iou_tracker.IOUWeightedHungarianBBoxIOUTracker", # noqa - "video_height": video_height, - "video_width": video_width, - "max_num_instances": max_num_instances, - "max_lost_frame_count": max_lost_frame_count, - "min_box_rel_dim": min_box_rel_dim, - "min_instance_period": min_instance_period, - "track_iou_threshold": track_iou_threshold, - } - - def assign_cost_matrix_values(self, cost_matrix: np.ndarray, bbox_pairs: List) -> np.ndarray: - """ - Based on IoU for each pair of bbox, assign the associated value in cost matrix - - Args: - cost_matrix: np.ndarray, initialized 2D array with target dimensions - bbox_pairs: list of bbox pair, in each pair, iou value is stored - Return: - np.ndarray, cost_matrix with assigned values - """ - for pair in bbox_pairs: - # assign (-1 * IoU) for above threshold pairs, algorithms will minimize cost - cost_matrix[pair["idx"]][pair["prev_idx"]] = -1 * pair["IoU"] - return cost_matrix diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/write-models.md 
b/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/write-models.md deleted file mode 100644 index 967d126503c71b419bca94615cb1090e1a79cb49..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/write-models.md +++ /dev/null @@ -1,90 +0,0 @@ -# Write Models - -If you are trying to do something completely new, you may wish to implement -a model entirely from scratch. However, in many situations you may -be interested in modifying or extending some components of an existing model. -Therefore, we also provide mechanisms that let users override the -behavior of certain internal components of standard models. - - -## Register New Components - -For common concepts that users often want to customize, such as "backbone feature extractor", "box head", -we provide a registration mechanism for users to inject custom implementation that -will be immediately available to use in config files. - -For example, to add a new backbone, import this code in your code: -```python -from detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec - -@BACKBONE_REGISTRY.register() -class ToyBackbone(Backbone): - def __init__(self, cfg, input_shape): - super().__init__() - # create your own backbone - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=16, padding=3) - - def forward(self, image): - return {"conv1": self.conv1(image)} - - def output_shape(self): - return {"conv1": ShapeSpec(channels=64, stride=16)} -``` - -In this code, we implement a new backbone following the interface of the -[Backbone](../modules/modeling.html#detectron2.modeling.Backbone) class, -and register it into the [BACKBONE_REGISTRY](../modules/modeling.html#detectron2.modeling.BACKBONE_REGISTRY) -which requires subclasses of `Backbone`. -After importing this code, detectron2 can link the name of the class to its implementation. Therefore you can write the following code: - -```python -cfg = ... # read a config -cfg.MODEL.BACKBONE.NAME = 'ToyBackbone' # or set it in the config file -model = build_model(cfg) # it will find `ToyBackbone` defined above -``` - -As another example, to add new abilities to the ROI heads in the Generalized R-CNN meta-architecture, -you can implement a new -[ROIHeads](../modules/modeling.html#detectron2.modeling.ROIHeads) subclass and put it in the `ROI_HEADS_REGISTRY`. -[DensePose](../../projects/DensePose) -and [MeshRCNN](https://github.com/facebookresearch/meshrcnn) -are two examples that implement new ROIHeads to perform new tasks. -And [projects/](../../projects/) -contains more examples that implement different architectures. - -A complete list of registries can be found in [API documentation](../modules/modeling.html#model-registries). -You can register components in these registries to customize different parts of a model, or the -entire model. - -## Construct Models with Explicit Arguments - -Registry is a bridge to connect names in config files to the actual code. -They are meant to cover a few main components that users frequently need to replace. -However, the capability of a text-based config file is sometimes limited and -some deeper customization may be available only through writing code. - -Most model components in detectron2 have a clear `__init__` interface that documents -what input arguments it needs. Calling them with custom arguments will give you a custom variant -of the model. - -As an example, to use __custom loss function__ in the box head of a Faster R-CNN, we can do the following: - -1. 
Losses are currently computed in [FastRCNNOutputLayers](../modules/modeling.html#detectron2.modeling.FastRCNNOutputLayers). - We need to implement a variant or a subclass of it, with custom loss functions, named `MyRCNNOutput`. -2. Call `StandardROIHeads` with `box_predictor=MyRCNNOutput()` argument instead of the builtin `FastRCNNOutputLayers`. - If all other arguments should stay unchanged, this can be easily achieved by using the [configurable `__init__`](../modules/config.html#detectron2.config.configurable) mechanism: - - ```python - roi_heads = StandardROIHeads( - cfg, backbone.output_shape(), - box_predictor=MyRCNNOutput(...) - ) - ``` -3. (optional) If we want to enable this new model from a config file, registration is needed: - ```python - @ROI_HEADS_REGISTRY.register() - class MyStandardROIHeads(StandardROIHeads): - def __init__(self, cfg, input_shape): - super().__init__(cfg, input_shape, - box_predictor=MyRCNNOutput(...)) - ``` diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/filter.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/filter.py deleted file mode 100644 index 18a856789e390e0a54484db97488e2e869c27ac8..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/filter.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from typing import List -import torch - -from detectron2.config import CfgNode -from detectron2.structures import Instances -from detectron2.structures.boxes import matched_pairwise_iou - - -class DensePoseDataFilter(object): - def __init__(self, cfg: CfgNode): - self.iou_threshold = cfg.MODEL.ROI_DENSEPOSE_HEAD.FG_IOU_THRESHOLD - self.keep_masks = cfg.MODEL.ROI_DENSEPOSE_HEAD.COARSE_SEGM_TRAINED_BY_MASKS - - @torch.no_grad() - def __call__(self, features: List[torch.Tensor], proposals_with_targets: List[Instances]): - """ - Filters proposals with targets to keep only the ones relevant for - DensePose training - - Args: - features (list[Tensor]): input data as a list of features, - each feature is a tensor. Axis 0 represents the number of - images `N` in the input data; axes 1-3 are channels, - height, and width, which may vary between features - (e.g., if a feature pyramid is used). - proposals_with_targets (list[Instances]): length `N` list of - `Instances`. The i-th `Instances` contains instances - (proposals, GT) for the i-th input image, - Returns: - list[Tensor]: filtered features - list[Instances]: filtered proposals - """ - proposals_filtered = [] - # TODO: the commented out code was supposed to correctly deal with situations - # where no valid DensePose GT is available for certain images. The corresponding - # image features were sliced and proposals were filtered. This led to performance - # deterioration, both in terms of runtime and in terms of evaluation results. 
- # - # feature_mask = torch.ones( - # len(proposals_with_targets), - # dtype=torch.bool, - # device=features[0].device if len(features) > 0 else torch.device("cpu"), - # ) - for i, proposals_per_image in enumerate(proposals_with_targets): - if not proposals_per_image.has("gt_densepose") and ( - not proposals_per_image.has("gt_masks") or not self.keep_masks - ): - # feature_mask[i] = 0 - continue - gt_boxes = proposals_per_image.gt_boxes - est_boxes = proposals_per_image.proposal_boxes - # apply match threshold for densepose head - iou = matched_pairwise_iou(gt_boxes, est_boxes) - iou_select = iou > self.iou_threshold - proposals_per_image = proposals_per_image[iou_select] # pyre-ignore[6] - - N_gt_boxes = len(proposals_per_image.gt_boxes) - assert N_gt_boxes == len(proposals_per_image.proposal_boxes), ( - f"The number of GT boxes {N_gt_boxes} is different from the " - f"number of proposal boxes {len(proposals_per_image.proposal_boxes)}" - ) - # filter out any target without suitable annotation - if self.keep_masks: - gt_masks = ( - proposals_per_image.gt_masks - if hasattr(proposals_per_image, "gt_masks") - else [None] * N_gt_boxes - ) - else: - gt_masks = [None] * N_gt_boxes - gt_densepose = ( - proposals_per_image.gt_densepose - if hasattr(proposals_per_image, "gt_densepose") - else [None] * N_gt_boxes - ) - assert len(gt_masks) == N_gt_boxes - assert len(gt_densepose) == N_gt_boxes - selected_indices = [ - i - for i, (dp_target, mask_target) in enumerate(zip(gt_densepose, gt_masks)) - if (dp_target is not None) or (mask_target is not None) - ] - # if not len(selected_indices): - # feature_mask[i] = 0 - # continue - if len(selected_indices) != N_gt_boxes: - proposals_per_image = proposals_per_image[selected_indices] # pyre-ignore[6] - assert len(proposals_per_image.gt_boxes) == len(proposals_per_image.proposal_boxes) - proposals_filtered.append(proposals_per_image) - # features_filtered = [feature[feature_mask] for feature in features] - # return features_filtered, proposals_filtered - return features, proposals_filtered diff --git a/spaces/bugbugbug/vits-uma-genshin-honkai/transforms.py b/spaces/bugbugbug/vits-uma-genshin-honkai/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/bugbugbug/vits-uma-genshin-honkai/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - 
inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = 
derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/feature_fusion.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/feature_fusion.py deleted file mode 100644 index dbe4e170e05894c12ebdc36ba1dc1de65e441b89..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/open_clip/feature_fusion.py +++ /dev/null @@ -1,192 +0,0 @@ -""" -Feature Fusion for Varible-Length Data Processing -AFF/iAFF is referred and modified from https://github.com/YimianDai/open-aff/blob/master/aff_pytorch/aff_net/fusion.py -According to the paper: Yimian Dai et al, Attentional Feature Fusion, IEEE Winter Conference on Applications of Computer Vision, WACV 2021 -""" - -import torch -import torch.nn as nn - - -class DAF(nn.Module): - """ - 直接相加 DirectAddFuse - """ - - def __init__(self): - super(DAF, self).__init__() - - def forward(self, x, residual): - return x + residual - - -class iAFF(nn.Module): - """ - 多特征融合 iAFF - """ - - def __init__(self, channels=64, r=4, type="2D"): - super(iAFF, self).__init__() - inter_channels = int(channels // r) - - if type == "1D": - # 本地注意力 - self.local_att = nn.Sequential( - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - - # 全局注意力 - self.global_att = nn.Sequential( - nn.AdaptiveAvgPool1d(1), - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - 
nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - - # 第二次本地注意力 - self.local_att2 = nn.Sequential( - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - # 第二次全局注意力 - self.global_att2 = nn.Sequential( - nn.AdaptiveAvgPool1d(1), - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - elif type == "2D": - # 本地注意力 - self.local_att = nn.Sequential( - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - - # 全局注意力 - self.global_att = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - - # 第二次本地注意力 - self.local_att2 = nn.Sequential( - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - # 第二次全局注意力 - self.global_att2 = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - else: - raise f"the type is not supported" - - self.sigmoid = nn.Sigmoid() - - def forward(self, x, residual): - flag = False - xa = x + residual - if xa.size(0) == 1: - xa = torch.cat([xa, xa], dim=0) - flag = True - xl = self.local_att(xa) - xg = self.global_att(xa) - xlg = xl + xg - wei = self.sigmoid(xlg) - xi = x * wei + residual * (1 - wei) - - xl2 = self.local_att2(xi) - xg2 = self.global_att(xi) - xlg2 = xl2 + xg2 - wei2 = self.sigmoid(xlg2) - xo = x * wei2 + residual * (1 - wei2) - if flag: - xo = xo[0].unsqueeze(0) - return xo - - -class AFF(nn.Module): - """ - 多特征融合 AFF - """ - - def __init__(self, channels=64, r=4, type="2D"): - super(AFF, self).__init__() - inter_channels = int(channels // r) - - if type == "1D": - self.local_att = nn.Sequential( - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - self.global_att = nn.Sequential( - nn.AdaptiveAvgPool1d(1), - nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm1d(channels), - ) - elif type == "2D": - self.local_att = nn.Sequential( - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) 
- self.global_att = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(inter_channels), - nn.ReLU(inplace=True), - nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0), - nn.BatchNorm2d(channels), - ) - else: - raise f"the type is not supported." - - self.sigmoid = nn.Sigmoid() - - def forward(self, x, residual): - flag = False - xa = x + residual - if xa.size(0) == 1: - xa = torch.cat([xa, xa], dim=0) - flag = True - xl = self.local_att(xa) - xg = self.global_att(xa) - xlg = xl + xg - wei = self.sigmoid(xlg) - xo = 2 * x * wei + 2 * residual * (1 - wei) - if flag: - xo = xo[0].unsqueeze(0) - return xo diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/torchscript_mask_rcnn.cpp b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/torchscript_mask_rcnn.cpp deleted file mode 100644 index fd6e1e9f82652a1d4d221447cd140ab675f312b2..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/deploy/torchscript_mask_rcnn.cpp +++ /dev/null @@ -1,188 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. -// @lint-ignore-every CLANGTIDY -// This is an example code that demonstrates how to run inference -// with a torchscript format Mask R-CNN model exported by ./export_model.py -// using export method=tracing, caffe2_tracing & scripting. - -#include -#include -#include - -#include -#include -#include -#include - -// only needed for export_method=tracing -#include // @oss-only -// @fb-only: #include - -using namespace std; - -c10::IValue get_caffe2_tracing_inputs(cv::Mat& img, c10::Device device) { - const int height = img.rows; - const int width = img.cols; - // FPN models require divisibility of 32. - // Tracing mode does padding inside the graph, but caffe2_tracing does not. - assert(height % 32 == 0 && width % 32 == 0); - const int channels = 3; - - auto input = - torch::from_blob(img.data, {1, height, width, channels}, torch::kUInt8); - // NHWC to NCHW - input = input.to(device, torch::kFloat).permute({0, 3, 1, 2}).contiguous(); - - std::array im_info_data{height * 1.0f, width * 1.0f, 1.0f}; - auto im_info = - torch::from_blob(im_info_data.data(), {1, 3}).clone().to(device); - return std::make_tuple(input, im_info); -} - -c10::IValue get_tracing_inputs(cv::Mat& img, c10::Device device) { - const int height = img.rows; - const int width = img.cols; - const int channels = 3; - - auto input = - torch::from_blob(img.data, {height, width, channels}, torch::kUInt8); - // HWC to CHW - input = input.to(device, torch::kFloat).permute({2, 0, 1}).contiguous(); - return input; -} - -// create a Tuple[Dict[str, Tensor]] which is the input type of scripted model -c10::IValue get_scripting_inputs(cv::Mat& img, c10::Device device) { - const int height = img.rows; - const int width = img.cols; - const int channels = 3; - - auto img_tensor = - torch::from_blob(img.data, {height, width, channels}, torch::kUInt8); - // HWC to CHW - img_tensor = - img_tensor.to(device, torch::kFloat).permute({2, 0, 1}).contiguous(); - auto dic = c10::Dict(); - dic.insert("image", img_tensor); - return std::make_tuple(dic); -} - -c10::IValue -get_inputs(std::string export_method, cv::Mat& img, c10::Device device) { - // Given an image, create inputs in the format required by the model. 
- if (export_method == "tracing") - return get_tracing_inputs(img, device); - if (export_method == "caffe2_tracing") - return get_caffe2_tracing_inputs(img, device); - if (export_method == "scripting") - return get_scripting_inputs(img, device); - abort(); -} - -struct MaskRCNNOutputs { - at::Tensor pred_boxes, pred_classes, pred_masks, scores; - int num_instances() const { - return pred_boxes.sizes()[0]; - } -}; - -MaskRCNNOutputs get_outputs(std::string export_method, c10::IValue outputs) { - // Given outputs of the model, extract tensors from it to turn into a - // common MaskRCNNOutputs format. - if (export_method == "tracing") { - auto out_tuple = outputs.toTuple()->elements(); - // They are ordered alphabetically by their field name in Instances - return MaskRCNNOutputs{ - out_tuple[0].toTensor(), - out_tuple[1].toTensor(), - out_tuple[2].toTensor(), - out_tuple[3].toTensor()}; - } - if (export_method == "caffe2_tracing") { - auto out_tuple = outputs.toTuple()->elements(); - // A legacy order used by caffe2 models - return MaskRCNNOutputs{ - out_tuple[0].toTensor(), - out_tuple[2].toTensor(), - out_tuple[3].toTensor(), - out_tuple[1].toTensor()}; - } - if (export_method == "scripting") { - // With the ScriptableAdapter defined in export_model.py, the output is - // List[Dict[str, Any]]. - auto out_dict = outputs.toList().get(0).toGenericDict(); - return MaskRCNNOutputs{ - out_dict.at("pred_boxes").toTensor(), - out_dict.at("pred_classes").toTensor(), - out_dict.at("pred_masks").toTensor(), - out_dict.at("scores").toTensor()}; - } - abort(); -} - -int main(int argc, const char* argv[]) { - if (argc != 4) { - cerr << R"xx( -Usage: - ./torchscript_mask_rcnn model.ts input.jpg EXPORT_METHOD - - EXPORT_METHOD can be "tracing", "caffe2_tracing" or "scripting". -)xx"; - return 1; - } - std::string image_file = argv[2]; - std::string export_method = argv[3]; - assert( - export_method == "caffe2_tracing" || export_method == "tracing" || - export_method == "scripting"); - - torch::jit::FusionStrategy strat = {{torch::jit::FusionBehavior::DYNAMIC, 1}}; - torch::jit::setFusionStrategy(strat); - torch::autograd::AutoGradMode guard(false); - auto module = torch::jit::load(argv[1]); - - assert(module.buffers().size() > 0); - // Assume that the entire model is on the same device. - // We just put input to this device. 
- auto device = (*begin(module.buffers())).device(); - - cv::Mat input_img = cv::imread(image_file, cv::IMREAD_COLOR); - auto inputs = get_inputs(export_method, input_img, device); - - // Run the network - auto output = module.forward({inputs}); - if (device.is_cuda()) - c10::cuda::getCurrentCUDAStream().synchronize(); - - // run 3 more times to benchmark - int N_benchmark = 3, N_warmup = 1; - auto start_time = chrono::high_resolution_clock::now(); - for (int i = 0; i < N_benchmark + N_warmup; ++i) { - if (i == N_warmup) - start_time = chrono::high_resolution_clock::now(); - output = module.forward({inputs}); - if (device.is_cuda()) - c10::cuda::getCurrentCUDAStream().synchronize(); - } - auto end_time = chrono::high_resolution_clock::now(); - auto ms = chrono::duration_cast(end_time - start_time) - .count(); - cout << "Latency (should vary with different inputs): " - << ms * 1.0 / 1e6 / N_benchmark << " seconds" << endl; - - // Parse Mask R-CNN outputs - auto rcnn_outputs = get_outputs(export_method, output); - cout << "Number of detected objects: " << rcnn_outputs.num_instances() - << endl; - - cout << "pred_boxes: " << rcnn_outputs.pred_boxes.toString() << " " - << rcnn_outputs.pred_boxes.sizes() << endl; - cout << "scores: " << rcnn_outputs.scores.toString() << " " - << rcnn_outputs.scores.sizes() << endl; - cout << "pred_classes: " << rcnn_outputs.pred_classes.toString() << " " - << rcnn_outputs.pred_classes.sizes() << endl; - cout << "pred_masks: " << rcnn_outputs.pred_masks.toString() << " " - << rcnn_outputs.pred_masks.sizes() << endl; - - cout << rcnn_outputs.pred_boxes << endl; - return 0; -} diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/data_utils.py b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/data_utils.py deleted file mode 100644 index c6c8dee9d157161f2082484b89bdb282364e2a0e..0000000000000000000000000000000000000000 --- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/data_utils.py +++ /dev/null @@ -1,267 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import torchaudio - -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_sid_text, hparams, symbols): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - self.symbols = symbols - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - # audiopath = "./user_voice/" + audiopath - - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - # audio, sampling_rate = load_wav_to_torch(filename) - # if sampling_rate != self.sampling_rate: - # raise ValueError("{} {} SR doesn't match target {} SR".format( - # sampling_rate, self.sampling_rate)) - # audio_norm = audio / self.max_wav_value if audio.max() > 10 else audio - # audio_norm = audio_norm.unsqueeze(0) - audio_norm, sampling_rate = torchaudio.load(filename, frame_offset=0, num_frames=-1, normalize=True, channels_first=True) - # spec_filename = filename.replace(".wav", ".spec.pt") - # if os.path.exists(spec_filename): - # spec = torch.load(spec_filename) - # else: - # try: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = spec.squeeze(0) - # except NotImplementedError: - # print("?") - # spec = torch.squeeze(spec, 0) - # torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text, self.symbols) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: 
[text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
- """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size \ No newline at end of file diff --git a/spaces/ccds/vits_onnx/export/vits/train.py b/spaces/ccds/vits_onnx/export/vits/train.py deleted file mode 100644 index 80c8d9eb593249250d223e8527c38f4a118a69d6..0000000000000000000000000000000000000000 --- a/spaces/ccds/vits_onnx/export/vits/train.py +++ /dev/null @@ -1,328 +0,0 @@ -import os - -import torch -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import commons -import utils -from data_utils import (TextAudioSpeakerLoader, TextAudioSpeakerCollate, - 
DistributedBucketSampler) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import (generator_loss, discriminator_loss, feature_loss, kl_loss) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=( - n_gpus, - hps, - )) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter( - log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend='nccl', - init_method='env://', - world_size=n_gpus, - rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, - num_workers=8, - shuffle=False, - pin_memory=True, - collate_fn=collate_fn, - batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, - hps.data) - eval_loader = DataLoader(eval_dataset, - num_workers=8, - shuffle=False, - batch_size=hps.train.batch_size, - pin_memory=True, - drop_last=False, - collate_fn=collate_fn) - - net_g = SynthesizerTrn(hps.data.num_phones, - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW(net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW(net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - - try: - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g) - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], - [optim_g, optim_d], [scheduler_g, scheduler_d], - scaler, [train_loader, eval_loader], logger, - [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], - [optim_g, optim_d], [scheduler_g, scheduler_d], - scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, - loaders, logger, 
writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, - speakers) in enumerate(train_loader): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda( - rank, non_blocking=True) - spec, spec_lengths = spec.cuda( - rank, non_blocking=True), spec_lengths.cuda(rank, - non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda( - rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, ( - z, z_p, m_p, logs_p, m_q, - logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers) - - mel = spec_to_mel_torch(spec, hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments( - mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), hps.data.filter_length, - hps.data.n_mel_channels, hps.data.sampling_rate, - hps.data.hop_length, hps.data.win_length, hps.data.mel_fmin, - hps.data.mel_fmax) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, - hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss( - y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, - z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [ - loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl - ] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, 100. 
* batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = { - "loss/g/total": loss_gen_all, - "loss/d/total": loss_disc_all, - "learning_rate": lr, - "grad_norm_d": grad_norm_d, - "grad_norm_g": grad_norm_g - } - scalar_dict.update({ - "loss/g/fm": loss_fm, - "loss/g/mel": loss_mel, - "loss/g/dur": loss_dur, - "loss/g/kl": loss_kl - }) - - scalar_dict.update({ - "loss/g/{}".format(i): v - for i, v in enumerate(losses_gen) - }) - scalar_dict.update({ - "loss/d_r/{}".format(i): v - for i, v in enumerate(losses_disc_r) - }) - scalar_dict.update({ - "loss/d_g/{}".format(i): v - for i, v in enumerate(losses_disc_g) - }) - image_dict = { - "slice/mel_org": - utils.plot_spectrogram_to_numpy( - y_mel[0].data.cpu().numpy()), - "slice/mel_gen": - utils.plot_spectrogram_to_numpy( - y_hat_mel[0].data.cpu().numpy()), - "all/mel": - utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": - utils.plot_alignment_to_numpy(attn[0, - 0].data.cpu().numpy()) - } - utils.summarize(writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint( - net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, - "G_{}.pth".format(global_step))) - utils.save_checkpoint( - net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, - "D_{}.pth".format(global_step))) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, - speakers) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - speakers = speakers.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] - spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - speakers = speakers[:1] - break - y_hat, attn, mask, *_ = generator.module.infer(x, - x_lengths, - speakers, - max_len=1000) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch(spec, hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), hps.data.filter_length, - hps.data.n_mel_channels, hps.data.sampling_rate, - hps.data.hop_length, hps.data.win_length, hps.data.mel_fmin, - hps.data.mel_fmax) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = {"gen/audio": y_hat[0, :, :y_hat_lengths[0]]} - if global_step == 0: - image_dict.update( - {"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({"gt/audio": y[0, :, :y_lengths[0]]}) - - utils.summarize(writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/tests/test_model_classes.py b/spaces/cffl/Exploring_Intelligent_Writing_Assistance/tests/test_model_classes.py deleted file mode 100644 index 762e42deecf0cc5a13382ee7b7a7df054c87ee71..0000000000000000000000000000000000000000 --- 
a/spaces/cffl/Exploring_Intelligent_Writing_Assistance/tests/test_model_classes.py +++ /dev/null @@ -1,164 +0,0 @@ -# ########################################################################### -# -# CLOUDERA APPLIED MACHINE LEARNING PROTOTYPE (AMP) -# (C) Cloudera, Inc. 2022 -# All rights reserved. -# -# Applicable Open Source License: Apache 2.0 -# -# NOTE: Cloudera open source products are modular software products -# made up of hundreds of individual components, each of which was -# individually copyrighted. Each Cloudera open source product is a -# collective work under U.S. Copyright Law. Your license to use the -# collective work is as provided in your written agreement with -# Cloudera. Used apart from the collective work, this file is -# licensed for your use pursuant to the open source license -# identified above. -# -# This code is provided to you pursuant a written agreement with -# (i) Cloudera, Inc. or (ii) a third-party authorized to distribute -# this code. If you do not have a written agreement with Cloudera nor -# with an authorized and properly licensed third party, you do not -# have any rights to access nor to use this code. -# -# Absent a written agreement with Cloudera, Inc. (“Cloudera”) to the -# contrary, A) CLOUDERA PROVIDES THIS CODE TO YOU WITHOUT WARRANTIES OF ANY -# KIND; (B) CLOUDERA DISCLAIMS ANY AND ALL EXPRESS AND IMPLIED -# WARRANTIES WITH RESPECT TO THIS CODE, INCLUDING BUT NOT LIMITED TO -# IMPLIED WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY AND -# FITNESS FOR A PARTICULAR PURPOSE; (C) CLOUDERA IS NOT LIABLE TO YOU, -# AND WILL NOT DEFEND, INDEMNIFY, NOR HOLD YOU HARMLESS FOR ANY CLAIMS -# ARISING FROM OR RELATED TO THE CODE; AND (D)WITH RESPECT TO YOUR EXERCISE -# OF ANY RIGHTS GRANTED TO YOU FOR THE CODE, CLOUDERA IS NOT LIABLE FOR ANY -# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, PUNITIVE OR -# CONSEQUENTIAL DAMAGES INCLUDING, BUT NOT LIMITED TO, DAMAGES -# RELATED TO LOST REVENUE, LOST PROFITS, LOSS OF INCOME, LOSS OF -# BUSINESS ADVANTAGE OR UNAVAILABILITY, OR LOSS OR CORRUPTION OF -# DATA. 
-# -# ########################################################################### - -import pytest -import transformers - -from src.style_transfer import StyleTransfer -from src.style_classification import StyleIntensityClassifier -from src.content_preservation import ContentPreservationScorer -from src.transformer_interpretability import InterpretTransformer - - -@pytest.fixture -def subjectivity_example_data(): - examples = [ - """there is an iconic roadhouse, named "spud's roadhouse", which sells fuel and general shop items , has great meals and has accommodation.""", - "chemical abstracts service (cas), a prominent division of the american chemical society, is the world's leading source of chemical information.", - "the most serious scandal was the iran-contra affair.", - "another strikingly elegant four-door saloon for the s3 continental came from james young.", - "other ambassadors also sent their messages of condolence following her passing.", - ] - - ground_truth = [ - 'there is a roadhouse, named "spud\'s roadhouse", which sells fuel and general shop items and has accommodation.', - "chemical abstracts service (cas), a division of the american chemical society, is a source of chemical information.", - "one controversy was the iran-contra affair.", - "another four-door saloon for the s3 continental came from james young.", - "other ambassadors also sent their messages of condolence following her death.", - ] - - return {"examples": examples, "ground_truth": ground_truth} - - -@pytest.fixture -def subjectivity_styletransfer(): - MODEL_PATH = "cffl/bart-base-styletransfer-subjective-to-neutral" - return StyleTransfer(model_identifier=MODEL_PATH, max_gen_length=200) - - -@pytest.fixture -def subjectivity_styleintensityclassifier(): - CLS_MODEL_PATH = "cffl/bert-base-styleclassification-subjective-neutral" - return StyleIntensityClassifier(model_identifier=CLS_MODEL_PATH) - - -@pytest.fixture -def subjectivity_contentpreservationscorer(): - CLS_MODEL_PATH = "cffl/bert-base-styleclassification-subjective-neutral" - SBERT_MODEL_PATH = "sentence-transformers/all-MiniLM-L6-v2" - return ContentPreservationScorer( - cls_model_identifier=CLS_MODEL_PATH, sbert_model_identifier=SBERT_MODEL_PATH - ) - - -@pytest.fixture -def subjectivity_interprettransformer(): - CLS_MODEL_PATH = "cffl/bert-base-styleclassification-subjective-neutral" - return InterpretTransformer(cls_model_identifier=CLS_MODEL_PATH) - - -# test class initialization -def test_StyleTransfer_init(subjectivity_styletransfer): - assert isinstance( - subjectivity_styletransfer.pipeline, - transformers.pipelines.text2text_generation.Text2TextGenerationPipeline, - ) - - -def test_StyleIntensityClassifier_init(subjectivity_styleintensityclassifier): - assert isinstance( - subjectivity_styleintensityclassifier.pipeline, - transformers.pipelines.text_classification.TextClassificationPipeline, - ) - - -def test_ContentPreservationScorer_init(subjectivity_contentpreservationscorer): - assert isinstance( - subjectivity_contentpreservationscorer.cls_model, - transformers.models.bert.modeling_bert.BertForSequenceClassification, - ) - assert isinstance( - subjectivity_contentpreservationscorer.sbert_model, - transformers.models.bert.modeling_bert.BertModel, - ) - - -def test_InterpretTransformer_init(subjectivity_interprettransformer): - assert isinstance( - subjectivity_interprettransformer.cls_model, - transformers.models.bert.modeling_bert.BertForSequenceClassification, - ) - - -# test class functionality -def 
test_StyleTransfer_transfer(subjectivity_styletransfer, subjectivity_example_data): - assert subjectivity_example_data[ - "ground_truth" - ] == subjectivity_styletransfer.transfer(subjectivity_example_data["examples"]) - - -def test_StyleIntensityClassifier_calculate_transfer_intensity_fraction( - subjectivity_styleintensityclassifier, subjectivity_example_data -): - sti_frac = ( - subjectivity_styleintensityclassifier.calculate_transfer_intensity_fraction( - input_text=subjectivity_example_data["examples"], - output_text=subjectivity_example_data["ground_truth"], - ) - ) - assert sti_frac == [ - 0.9891820847234861, - 0.9808499743983614, - 0.8070009460737938, - 0.9913705583756346, - 0.9611679711017459, - ] - - -def test_ContentPreservationScorer_calculate_content_preservation_score( - subjectivity_contentpreservationscorer, subjectivity_example_data -): - cps = subjectivity_contentpreservationscorer.calculate_content_preservation_score( - input_text=subjectivity_example_data["examples"], - output_text=subjectivity_example_data["ground_truth"], - mask_type="none", - ) - assert cps == [0.9369, 0.9856, 0.7328, 0.9718, 0.9709] diff --git "a/spaces/cfwef/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" "b/spaces/cfwef/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" deleted file mode 100644 index 7dba9b51c4f12fd6af86fffadc4e2181e8d2d716..0000000000000000000000000000000000000000 --- "a/spaces/cfwef/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" +++ /dev/null @@ -1,151 +0,0 @@ -from predict import predict_no_ui -from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down - -fast_debug = False - -def readPdf(pdfPath): - """ - 读取pdf文件,返回文本内容 - """ - import pdfminer - from pdfminer.pdfparser import PDFParser - from pdfminer.pdfdocument import PDFDocument - from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed - from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter - from pdfminer.pdfdevice import PDFDevice - from pdfminer.layout import LAParams - from pdfminer.converter import PDFPageAggregator - - fp = open(pdfPath, 'rb') - - # Create a PDF parser object associated with the file object - parser = PDFParser(fp) - - # Create a PDF document object that stores the document structure. - # Password for initialization as 2nd parameter - document = PDFDocument(parser) - # Check if the document allows text extraction. If not, abort. - if not document.is_extractable: - raise PDFTextExtractionNotAllowed - - # Create a PDF resource manager object that stores shared resources. - rsrcmgr = PDFResourceManager() - - # Create a PDF device object. - # device = PDFDevice(rsrcmgr) - - # BEGIN LAYOUT ANALYSIS. - # Set parameters for analysis. - laparams = LAParams( - char_margin=10.0, - line_margin=0.2, - boxes_flow=0.2, - all_texts=False, - ) - # Create a PDF page aggregator object. - device = PDFPageAggregator(rsrcmgr, laparams=laparams) - # Create a PDF interpreter object. 
- interpreter = PDFPageInterpreter(rsrcmgr, device) - - # loop over all pages in the document - outTextList = [] - for page in PDFPage.create_pages(document): - # read the page into a layout object - interpreter.process_page(page) - layout = device.get_result() - for obj in layout._objs: - if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal): - # print(obj.get_text()) - outTextList.append(obj.get_text()) - - return outTextList - - -def 解析Paper(file_manifest, project_folder, top_p, api_key, temperature, chatbot, history, systemPromptTxt): - import time, glob, os - from bs4 import BeautifulSoup - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - if ".tex" in fp: - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - if ".pdf" in fp.lower(): - file_content = readPdf(fp) - file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk') - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - print('[1] yield chatbot, history') - yield chatbot, history, '正常' - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, api_key, temperature, history=[]) # 带超时倒计时 - - print('[2] end gpt req') - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - print('[3] yield chatbot, history') - yield chatbot, history, msg - print('[4] next') - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, api_key, temperature, history=history) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield chatbot, history, msg - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield chatbot, history, msg - - - -@CatchException -def 批量总结PDF文档pdfminer(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"]) - yield chatbot, history, '正常' - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield chatbot, history, '正常' - return - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield chatbot, history, '正常' - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in 
glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}") - yield chatbot, history, '正常' - return - yield from 解析Paper(file_manifest, project_folder, top_p, api_key, temperature, chatbot, history, systemPromptTxt) - diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/save_randomly_initialized_model.py b/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/save_randomly_initialized_model.py deleted file mode 100644 index 1b7b17fde8d6b0e7f2eed7420c0570012558b1ed..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/save_randomly_initialized_model.py +++ /dev/null @@ -1,39 +0,0 @@ -#!/usr/bin/env python -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import fire - -from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer - - -def save_randomly_initialized_version(config_name: str, save_dir: str, **config_kwargs): - """Save a randomly initialized version of a model using a pretrained config. - Args: - config_name: which config to use - save_dir: where to save the resulting model and tokenizer - config_kwargs: Passed to AutoConfig - - Usage:: - save_randomly_initialized_version("facebook/bart-large-cnn", "distilbart_random_cnn_6_3", encoder_layers=6, decoder_layers=3, num_beams=3) - """ - cfg = AutoConfig.from_pretrained(config_name, **config_kwargs) - model = AutoModelForSeq2SeqLM.from_config(cfg) - model.save_pretrained(save_dir) - AutoTokenizer.from_pretrained(config_name).save_pretrained(save_dir) - return model - - -if __name__ == "__main__": - fire.Fire(save_randomly_initialized_version) diff --git a/spaces/chronopt-research/ViTExCo/src/models/vit/embed.py b/spaces/chronopt-research/ViTExCo/src/models/vit/embed.py deleted file mode 100644 index d04b4dce8c8406dc9e575ce88a431b5c6863ee4f..0000000000000000000000000000000000000000 --- a/spaces/chronopt-research/ViTExCo/src/models/vit/embed.py +++ /dev/null @@ -1,72 +0,0 @@ -from torch import nn -from typing import List -from src.models.vit.factory import create_vit -from src.models.vit.vit import FeatureTransform -from ...utils import print_num_params -from timm import create_model -from einops import rearrange - - -class EmbedModel(nn.Module): - def __init__(self, config, head_out_idx: List[int], n_dim_output=3, device="cuda") -> None: - super().__init__() - self.head_out_idx = head_out_idx - self.n_dim_output = n_dim_output - self.device = device - self.vit = create_vit(config).to(self.device) - self.vit.eval() - for params in self.vit.parameters(): - params.requires_grad = False - print_num_params(self.vit) - print_num_params(self.vit, is_trainable=True) - - if self.n_dim_output == 3: - self.feature_transformer = FeatureTransform(config["image_size"], config["d_model"]).to(self.device) - print_num_params(self.feature_transformer) - 
print_num_params(self.feature_transformer, is_trainable=True) - - def forward(self, x): - vit_outputs = self.vit(x, self.head_out_idx, n_dim_output=self.n_dim_output, return_features=True) - feat0, feat1, feat2, feat3 = vit_outputs[0], vit_outputs[1], vit_outputs[2], vit_outputs[3] - if self.n_dim_output == 3: - feat0, feat1, feat2, feat3 = self.feature_transformer(vit_outputs) - return feat0, feat1, feat2, feat3 - - -class GeneralEmbedModel(nn.Module): - def __init__(self, pretrained_model="swin-tiny", device="cuda") -> None: - """ - vit_tiny_patch16_224.augreg_in21k_ft_in1k - swinv2_cr_tiny_ns_224.sw_in1k - """ - super().__init__() - self.device = device - self.pretrained_model = pretrained_model - if pretrained_model == "swin-tiny": - self.pretrained = create_model( - "swinv2_cr_tiny_ns_224.sw_in1k", - pretrained=True, - features_only=True, - out_indices=[-4, -3, -2, -1], - ).to(device) - elif pretrained_model == "swin-small": - self.pretrained = create_model( - "swinv2_cr_small_ns_224.sw_in1k", - pretrained=True, - features_only=True, - out_indices=[-4, -3, -2, -1], - ).to(device) - else: - raise NotImplementedError - - self.pretrained.eval() - self.upsample = nn.Upsample(scale_factor=2) - - for params in self.pretrained.parameters(): - params.requires_grad = False - - def forward(self, x): - outputs = self.pretrained(x) - outputs = [self.upsample(feat) for feat in outputs] - - return outputs diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/PixarImagePlugin.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/PixarImagePlugin.py deleted file mode 100644 index 7eb82228a9928bac325f641d45346364c61e8092..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/PixarImagePlugin.py +++ /dev/null @@ -1,69 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PIXAR raster support for PIL -# -# history: -# 97-01-29 fl Created -# -# notes: -# This is incomplete; it is based on a few samples created with -# Photoshop 2.5 and 3.0, and a summary description provided by -# Greg Coats . Hopefully, "L" and -# "RGBA" support will be added in future versions. -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# - -from . import Image, ImageFile -from ._binary import i16le as i16 - -# -# helpers - - -def _accept(prefix): - return prefix[:4] == b"\200\350\000\000" - - -## -# Image plugin for PIXAR raster images. - - -class PixarImageFile(ImageFile.ImageFile): - format = "PIXAR" - format_description = "PIXAR raster image" - - def _open(self): - # assuming a 4-byte magic label - s = self.fp.read(4) - if not _accept(s): - msg = "not a PIXAR file" - raise SyntaxError(msg) - - # read rest of header - s = s + self.fp.read(508) - - self._size = i16(s, 418), i16(s, 416) - - # get channel/depth descriptions - mode = i16(s, 424), i16(s, 426) - - if mode == (14, 2): - self.mode = "RGB" - # FIXME: to be continued... 
- - # create tile descriptor (assuming "dumped") - self.tile = [("raw", (0, 0) + self.size, 1024, (self.mode, 0, 1))] - - -# -# -------------------------------------------------------------------- - -Image.register_open(PixarImageFile.format, PixarImageFile, _accept) - -Image.register_extension(PixarImageFile.format, ".pxr") diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/http_websocket.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/http_websocket.py deleted file mode 100644 index 2cfc51930902e76c87f075f2cc445e878e737fd5..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/http_websocket.py +++ /dev/null @@ -1,701 +0,0 @@ -"""WebSocket protocol versions 13 and 8.""" - -import asyncio -import collections -import json -import random -import re -import sys -import zlib -from enum import IntEnum -from struct import Struct -from typing import Any, Callable, List, Optional, Pattern, Set, Tuple, Union, cast - -from .base_protocol import BaseProtocol -from .helpers import NO_EXTENSIONS -from .streams import DataQueue -from .typedefs import Final - -__all__ = ( - "WS_CLOSED_MESSAGE", - "WS_CLOSING_MESSAGE", - "WS_KEY", - "WebSocketReader", - "WebSocketWriter", - "WSMessage", - "WebSocketError", - "WSMsgType", - "WSCloseCode", -) - - -class WSCloseCode(IntEnum): - OK = 1000 - GOING_AWAY = 1001 - PROTOCOL_ERROR = 1002 - UNSUPPORTED_DATA = 1003 - ABNORMAL_CLOSURE = 1006 - INVALID_TEXT = 1007 - POLICY_VIOLATION = 1008 - MESSAGE_TOO_BIG = 1009 - MANDATORY_EXTENSION = 1010 - INTERNAL_ERROR = 1011 - SERVICE_RESTART = 1012 - TRY_AGAIN_LATER = 1013 - BAD_GATEWAY = 1014 - - -ALLOWED_CLOSE_CODES: Final[Set[int]] = {int(i) for i in WSCloseCode} - - -class WSMsgType(IntEnum): - # websocket spec types - CONTINUATION = 0x0 - TEXT = 0x1 - BINARY = 0x2 - PING = 0x9 - PONG = 0xA - CLOSE = 0x8 - - # aiohttp specific types - CLOSING = 0x100 - CLOSED = 0x101 - ERROR = 0x102 - - text = TEXT - binary = BINARY - ping = PING - pong = PONG - close = CLOSE - closing = CLOSING - closed = CLOSED - error = ERROR - - -WS_KEY: Final[bytes] = b"258EAFA5-E914-47DA-95CA-C5AB0DC85B11" - - -UNPACK_LEN2 = Struct("!H").unpack_from -UNPACK_LEN3 = Struct("!Q").unpack_from -UNPACK_CLOSE_CODE = Struct("!H").unpack -PACK_LEN1 = Struct("!BB").pack -PACK_LEN2 = Struct("!BBH").pack -PACK_LEN3 = Struct("!BBQ").pack -PACK_CLOSE_CODE = Struct("!H").pack -MSG_SIZE: Final[int] = 2**14 -DEFAULT_LIMIT: Final[int] = 2**16 - - -_WSMessageBase = collections.namedtuple("_WSMessageBase", ["type", "data", "extra"]) - - -class WSMessage(_WSMessageBase): - def json(self, *, loads: Callable[[Any], Any] = json.loads) -> Any: - """Return parsed JSON data. - - .. 
versionadded:: 0.22 - """ - return loads(self.data) - - -WS_CLOSED_MESSAGE = WSMessage(WSMsgType.CLOSED, None, None) -WS_CLOSING_MESSAGE = WSMessage(WSMsgType.CLOSING, None, None) - - -class WebSocketError(Exception): - """WebSocket protocol parser error.""" - - def __init__(self, code: int, message: str) -> None: - self.code = code - super().__init__(code, message) - - def __str__(self) -> str: - return cast(str, self.args[1]) - - -class WSHandshakeError(Exception): - """WebSocket protocol handshake error.""" - - -native_byteorder: Final[str] = sys.byteorder - - -# Used by _websocket_mask_python -_XOR_TABLE: Final[List[bytes]] = [bytes(a ^ b for a in range(256)) for b in range(256)] - - -def _websocket_mask_python(mask: bytes, data: bytearray) -> None: - """Websocket masking function. - - `mask` is a `bytes` object of length 4; `data` is a `bytearray` - object of any length. The contents of `data` are masked with `mask`, - as specified in section 5.3 of RFC 6455. - - Note that this function mutates the `data` argument. - - This pure-python implementation may be replaced by an optimized - version when available. - - """ - assert isinstance(data, bytearray), data - assert len(mask) == 4, mask - - if data: - a, b, c, d = (_XOR_TABLE[n] for n in mask) - data[::4] = data[::4].translate(a) - data[1::4] = data[1::4].translate(b) - data[2::4] = data[2::4].translate(c) - data[3::4] = data[3::4].translate(d) - - -if NO_EXTENSIONS: # pragma: no cover - _websocket_mask = _websocket_mask_python -else: - try: - from ._websocket import _websocket_mask_cython # type: ignore[import] - - _websocket_mask = _websocket_mask_cython - except ImportError: # pragma: no cover - _websocket_mask = _websocket_mask_python - -_WS_DEFLATE_TRAILING: Final[bytes] = bytes([0x00, 0x00, 0xFF, 0xFF]) - - -_WS_EXT_RE: Final[Pattern[str]] = re.compile( - r"^(?:;\s*(?:" - r"(server_no_context_takeover)|" - r"(client_no_context_takeover)|" - r"(server_max_window_bits(?:=(\d+))?)|" - r"(client_max_window_bits(?:=(\d+))?)))*$" -) - -_WS_EXT_RE_SPLIT: Final[Pattern[str]] = re.compile(r"permessage-deflate([^,]+)?") - - -def ws_ext_parse(extstr: Optional[str], isserver: bool = False) -> Tuple[int, bool]: - if not extstr: - return 0, False - - compress = 0 - notakeover = False - for ext in _WS_EXT_RE_SPLIT.finditer(extstr): - defext = ext.group(1) - # Return compress = 15 when get `permessage-deflate` - if not defext: - compress = 15 - break - match = _WS_EXT_RE.match(defext) - if match: - compress = 15 - if isserver: - # Server never fail to detect compress handshake. 
- # Server does not need to send max wbit to client - if match.group(4): - compress = int(match.group(4)) - # Group3 must match if group4 matches - # Compress wbit 8 does not support in zlib - # If compress level not support, - # CONTINUE to next extension - if compress > 15 or compress < 9: - compress = 0 - continue - if match.group(1): - notakeover = True - # Ignore regex group 5 & 6 for client_max_window_bits - break - else: - if match.group(6): - compress = int(match.group(6)) - # Group5 must match if group6 matches - # Compress wbit 8 does not support in zlib - # If compress level not support, - # FAIL the parse progress - if compress > 15 or compress < 9: - raise WSHandshakeError("Invalid window size") - if match.group(2): - notakeover = True - # Ignore regex group 5 & 6 for client_max_window_bits - break - # Return Fail if client side and not match - elif not isserver: - raise WSHandshakeError("Extension for deflate not supported" + ext.group(1)) - - return compress, notakeover - - -def ws_ext_gen( - compress: int = 15, isserver: bool = False, server_notakeover: bool = False -) -> str: - # client_notakeover=False not used for server - # compress wbit 8 does not support in zlib - if compress < 9 or compress > 15: - raise ValueError( - "Compress wbits must between 9 and 15, " "zlib does not support wbits=8" - ) - enabledext = ["permessage-deflate"] - if not isserver: - enabledext.append("client_max_window_bits") - - if compress < 15: - enabledext.append("server_max_window_bits=" + str(compress)) - if server_notakeover: - enabledext.append("server_no_context_takeover") - # if client_notakeover: - # enabledext.append('client_no_context_takeover') - return "; ".join(enabledext) - - -class WSParserState(IntEnum): - READ_HEADER = 1 - READ_PAYLOAD_LENGTH = 2 - READ_PAYLOAD_MASK = 3 - READ_PAYLOAD = 4 - - -class WebSocketReader: - def __init__( - self, queue: DataQueue[WSMessage], max_msg_size: int, compress: bool = True - ) -> None: - self.queue = queue - self._max_msg_size = max_msg_size - - self._exc: Optional[BaseException] = None - self._partial = bytearray() - self._state = WSParserState.READ_HEADER - - self._opcode: Optional[int] = None - self._frame_fin = False - self._frame_opcode: Optional[int] = None - self._frame_payload = bytearray() - - self._tail = b"" - self._has_mask = False - self._frame_mask: Optional[bytes] = None - self._payload_length = 0 - self._payload_length_flag = 0 - self._compressed: Optional[bool] = None - self._decompressobj: Any = None # zlib.decompressobj actually - self._compress = compress - - def feed_eof(self) -> None: - self.queue.feed_eof() - - def feed_data(self, data: bytes) -> Tuple[bool, bytes]: - if self._exc: - return True, data - - try: - return self._feed_data(data) - except Exception as exc: - self._exc = exc - self.queue.set_exception(exc) - return True, b"" - - def _feed_data(self, data: bytes) -> Tuple[bool, bytes]: - for fin, opcode, payload, compressed in self.parse_frame(data): - if compressed and not self._decompressobj: - self._decompressobj = zlib.decompressobj(wbits=-zlib.MAX_WBITS) - if opcode == WSMsgType.CLOSE: - if len(payload) >= 2: - close_code = UNPACK_CLOSE_CODE(payload[:2])[0] - if close_code < 3000 and close_code not in ALLOWED_CLOSE_CODES: - raise WebSocketError( - WSCloseCode.PROTOCOL_ERROR, - f"Invalid close code: {close_code}", - ) - try: - close_message = payload[2:].decode("utf-8") - except UnicodeDecodeError as exc: - raise WebSocketError( - WSCloseCode.INVALID_TEXT, "Invalid UTF-8 text message" - ) from exc - msg = 
WSMessage(WSMsgType.CLOSE, close_code, close_message) - elif payload: - raise WebSocketError( - WSCloseCode.PROTOCOL_ERROR, - f"Invalid close frame: {fin} {opcode} {payload!r}", - ) - else: - msg = WSMessage(WSMsgType.CLOSE, 0, "") - - self.queue.feed_data(msg, 0) - - elif opcode == WSMsgType.PING: - self.queue.feed_data( - WSMessage(WSMsgType.PING, payload, ""), len(payload) - ) - - elif opcode == WSMsgType.PONG: - self.queue.feed_data( - WSMessage(WSMsgType.PONG, payload, ""), len(payload) - ) - - elif ( - opcode not in (WSMsgType.TEXT, WSMsgType.BINARY) - and self._opcode is None - ): - raise WebSocketError( - WSCloseCode.PROTOCOL_ERROR, f"Unexpected opcode={opcode!r}" - ) - else: - # load text/binary - if not fin: - # got partial frame payload - if opcode != WSMsgType.CONTINUATION: - self._opcode = opcode - self._partial.extend(payload) - if self._max_msg_size and len(self._partial) >= self._max_msg_size: - raise WebSocketError( - WSCloseCode.MESSAGE_TOO_BIG, - "Message size {} exceeds limit {}".format( - len(self._partial), self._max_msg_size - ), - ) - else: - # previous frame was non finished - # we should get continuation opcode - if self._partial: - if opcode != WSMsgType.CONTINUATION: - raise WebSocketError( - WSCloseCode.PROTOCOL_ERROR, - "The opcode in non-fin frame is expected " - "to be zero, got {!r}".format(opcode), - ) - - if opcode == WSMsgType.CONTINUATION: - assert self._opcode is not None - opcode = self._opcode - self._opcode = None - - self._partial.extend(payload) - if self._max_msg_size and len(self._partial) >= self._max_msg_size: - raise WebSocketError( - WSCloseCode.MESSAGE_TOO_BIG, - "Message size {} exceeds limit {}".format( - len(self._partial), self._max_msg_size - ), - ) - - # Decompress process must to be done after all packets - # received. 
- if compressed: - self._partial.extend(_WS_DEFLATE_TRAILING) - payload_merged = self._decompressobj.decompress( - self._partial, self._max_msg_size - ) - if self._decompressobj.unconsumed_tail: - left = len(self._decompressobj.unconsumed_tail) - raise WebSocketError( - WSCloseCode.MESSAGE_TOO_BIG, - "Decompressed message size {} exceeds limit {}".format( - self._max_msg_size + left, self._max_msg_size - ), - ) - else: - payload_merged = bytes(self._partial) - - self._partial.clear() - - if opcode == WSMsgType.TEXT: - try: - text = payload_merged.decode("utf-8") - self.queue.feed_data( - WSMessage(WSMsgType.TEXT, text, ""), len(text) - ) - except UnicodeDecodeError as exc: - raise WebSocketError( - WSCloseCode.INVALID_TEXT, "Invalid UTF-8 text message" - ) from exc - else: - self.queue.feed_data( - WSMessage(WSMsgType.BINARY, payload_merged, ""), - len(payload_merged), - ) - - return False, b"" - - def parse_frame( - self, buf: bytes - ) -> List[Tuple[bool, Optional[int], bytearray, Optional[bool]]]: - """Return the next frame from the socket.""" - frames = [] - if self._tail: - buf, self._tail = self._tail + buf, b"" - - start_pos = 0 - buf_length = len(buf) - - while True: - # read header - if self._state == WSParserState.READ_HEADER: - if buf_length - start_pos >= 2: - data = buf[start_pos : start_pos + 2] - start_pos += 2 - first_byte, second_byte = data - - fin = (first_byte >> 7) & 1 - rsv1 = (first_byte >> 6) & 1 - rsv2 = (first_byte >> 5) & 1 - rsv3 = (first_byte >> 4) & 1 - opcode = first_byte & 0xF - - # frame-fin = %x0 ; more frames of this message follow - # / %x1 ; final frame of this message - # frame-rsv1 = %x0 ; - # 1 bit, MUST be 0 unless negotiated otherwise - # frame-rsv2 = %x0 ; - # 1 bit, MUST be 0 unless negotiated otherwise - # frame-rsv3 = %x0 ; - # 1 bit, MUST be 0 unless negotiated otherwise - # - # Remove rsv1 from this test for deflate development - if rsv2 or rsv3 or (rsv1 and not self._compress): - raise WebSocketError( - WSCloseCode.PROTOCOL_ERROR, - "Received frame with non-zero reserved bits", - ) - - if opcode > 0x7 and fin == 0: - raise WebSocketError( - WSCloseCode.PROTOCOL_ERROR, - "Received fragmented control frame", - ) - - has_mask = (second_byte >> 7) & 1 - length = second_byte & 0x7F - - # Control frames MUST have a payload - # length of 125 bytes or less - if opcode > 0x7 and length > 125: - raise WebSocketError( - WSCloseCode.PROTOCOL_ERROR, - "Control frame payload cannot be " "larger than 125 bytes", - ) - - # Set compress status if last package is FIN - # OR set compress status if this is first fragment - # Raise error if not first fragment with rsv1 = 0x1 - if self._frame_fin or self._compressed is None: - self._compressed = True if rsv1 else False - elif rsv1: - raise WebSocketError( - WSCloseCode.PROTOCOL_ERROR, - "Received frame with non-zero reserved bits", - ) - - self._frame_fin = bool(fin) - self._frame_opcode = opcode - self._has_mask = bool(has_mask) - self._payload_length_flag = length - self._state = WSParserState.READ_PAYLOAD_LENGTH - else: - break - - # read payload length - if self._state == WSParserState.READ_PAYLOAD_LENGTH: - length = self._payload_length_flag - if length == 126: - if buf_length - start_pos >= 2: - data = buf[start_pos : start_pos + 2] - start_pos += 2 - length = UNPACK_LEN2(data)[0] - self._payload_length = length - self._state = ( - WSParserState.READ_PAYLOAD_MASK - if self._has_mask - else WSParserState.READ_PAYLOAD - ) - else: - break - elif length > 126: - if buf_length - start_pos >= 8: - data = 
buf[start_pos : start_pos + 8] - start_pos += 8 - length = UNPACK_LEN3(data)[0] - self._payload_length = length - self._state = ( - WSParserState.READ_PAYLOAD_MASK - if self._has_mask - else WSParserState.READ_PAYLOAD - ) - else: - break - else: - self._payload_length = length - self._state = ( - WSParserState.READ_PAYLOAD_MASK - if self._has_mask - else WSParserState.READ_PAYLOAD - ) - - # read payload mask - if self._state == WSParserState.READ_PAYLOAD_MASK: - if buf_length - start_pos >= 4: - self._frame_mask = buf[start_pos : start_pos + 4] - start_pos += 4 - self._state = WSParserState.READ_PAYLOAD - else: - break - - if self._state == WSParserState.READ_PAYLOAD: - length = self._payload_length - payload = self._frame_payload - - chunk_len = buf_length - start_pos - if length >= chunk_len: - self._payload_length = length - chunk_len - payload.extend(buf[start_pos:]) - start_pos = buf_length - else: - self._payload_length = 0 - payload.extend(buf[start_pos : start_pos + length]) - start_pos = start_pos + length - - if self._payload_length == 0: - if self._has_mask: - assert self._frame_mask is not None - _websocket_mask(self._frame_mask, payload) - - frames.append( - (self._frame_fin, self._frame_opcode, payload, self._compressed) - ) - - self._frame_payload = bytearray() - self._state = WSParserState.READ_HEADER - else: - break - - self._tail = buf[start_pos:] - - return frames - - -class WebSocketWriter: - def __init__( - self, - protocol: BaseProtocol, - transport: asyncio.Transport, - *, - use_mask: bool = False, - limit: int = DEFAULT_LIMIT, - random: Any = random.Random(), - compress: int = 0, - notakeover: bool = False, - ) -> None: - self.protocol = protocol - self.transport = transport - self.use_mask = use_mask - self.randrange = random.randrange - self.compress = compress - self.notakeover = notakeover - self._closing = False - self._limit = limit - self._output_size = 0 - self._compressobj: Any = None # actually compressobj - - async def _send_frame( - self, message: bytes, opcode: int, compress: Optional[int] = None - ) -> None: - """Send a frame over the websocket with message as its payload.""" - if self._closing and not (opcode & WSMsgType.CLOSE): - raise ConnectionResetError("Cannot write to closing transport") - - rsv = 0 - - # Only compress larger packets (disabled) - # Does small packet needs to be compressed? 
- # if self.compress and opcode < 8 and len(message) > 124: - if (compress or self.compress) and opcode < 8: - if compress: - # Do not set self._compress if compressing is for this frame - compressobj = zlib.compressobj(level=zlib.Z_BEST_SPEED, wbits=-compress) - else: # self.compress - if not self._compressobj: - self._compressobj = zlib.compressobj( - level=zlib.Z_BEST_SPEED, wbits=-self.compress - ) - compressobj = self._compressobj - - message = compressobj.compress(message) - message = message + compressobj.flush( - zlib.Z_FULL_FLUSH if self.notakeover else zlib.Z_SYNC_FLUSH - ) - if message.endswith(_WS_DEFLATE_TRAILING): - message = message[:-4] - rsv = rsv | 0x40 - - msg_length = len(message) - - use_mask = self.use_mask - if use_mask: - mask_bit = 0x80 - else: - mask_bit = 0 - - if msg_length < 126: - header = PACK_LEN1(0x80 | rsv | opcode, msg_length | mask_bit) - elif msg_length < (1 << 16): - header = PACK_LEN2(0x80 | rsv | opcode, 126 | mask_bit, msg_length) - else: - header = PACK_LEN3(0x80 | rsv | opcode, 127 | mask_bit, msg_length) - if use_mask: - mask = self.randrange(0, 0xFFFFFFFF) - mask = mask.to_bytes(4, "big") - message = bytearray(message) - _websocket_mask(mask, message) - self._write(header + mask + message) - self._output_size += len(header) + len(mask) + len(message) - else: - if len(message) > MSG_SIZE: - self._write(header) - self._write(message) - else: - self._write(header + message) - - self._output_size += len(header) + len(message) - - if self._output_size > self._limit: - self._output_size = 0 - await self.protocol._drain_helper() - - def _write(self, data: bytes) -> None: - if self.transport is None or self.transport.is_closing(): - raise ConnectionResetError("Cannot write to closing transport") - self.transport.write(data) - - async def pong(self, message: bytes = b"") -> None: - """Send pong message.""" - if isinstance(message, str): - message = message.encode("utf-8") - await self._send_frame(message, WSMsgType.PONG) - - async def ping(self, message: bytes = b"") -> None: - """Send ping message.""" - if isinstance(message, str): - message = message.encode("utf-8") - await self._send_frame(message, WSMsgType.PING) - - async def send( - self, - message: Union[str, bytes], - binary: bool = False, - compress: Optional[int] = None, - ) -> None: - """Send a frame over the websocket with message as its payload.""" - if isinstance(message, str): - message = message.encode("utf-8") - if binary: - await self._send_frame(message, WSMsgType.BINARY, compress) - else: - await self._send_frame(message, WSMsgType.TEXT, compress) - - async def close(self, code: int = 1000, message: bytes = b"") -> None: - """Close the websocket, sending the specified code and message.""" - if isinstance(message, str): - message = message.encode("utf-8") - try: - await self._send_frame( - PACK_CLOSE_CODE(code) + message, opcode=WSMsgType.CLOSE - ) - finally: - self._closing = True diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/datatypes/numeric.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/datatypes/numeric.py deleted file mode 100644 index 77960987851d1c7d7f7ee45c94e59fcf84e6f353..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/datatypes/numeric.py +++ /dev/null @@ -1,357 +0,0 @@ -import decimal -from typing import Union, Type, Sequence, MutableSequence - -from math import nan - -from 
clickhouse_connect.datatypes.base import TypeDef, ArrayType, ClickHouseType -from clickhouse_connect.driver.common import array_type, write_array, decimal_size, decimal_prec -from clickhouse_connect.driver.ctypes import numpy_conv, data_conv -from clickhouse_connect.driver.insert import InsertContext -from clickhouse_connect.driver.options import pd, np -from clickhouse_connect.driver.query import QueryContext -from clickhouse_connect.driver.types import ByteSource - - -class Int8(ArrayType): - _array_type = 'b' - np_type = 'b' - - -class UInt8(ArrayType): - _array_type = 'B' - np_type = 'B' - - -class Int16(ArrayType): - _array_type = 'h' - np_type = ' Sequence: - return data_conv.read_nullable_array(source, 'q' if self.read_format(ctx) == 'signed' else 'Q', - num_rows, self._active_null(ctx)) - - def _finalize_column(self, column: Sequence, ctx: QueryContext) -> Sequence: - fmt = self.read_format(ctx) - if fmt == 'string': - return [str(x) for x in column] - if ctx.use_extended_dtypes and self.nullable: - return pd.array(column, dtype='Int64' if fmt == 'signed' else 'UInt64') - if ctx.use_numpy and self.nullable and (not ctx.use_none): - return np.array(column, dtype=' Sequence: - if self.read_format(ctx) == 'string': - return [str(x) for x in column] - if ctx.use_numpy and self.nullable and (not ctx.use_none): - return np.array(column, dtype=self.np_type) - return column - - def _active_null(self, ctx: QueryContext): - if ctx.use_extended_dtypes: - return nan - if ctx.use_none: - return None - if ctx.use_numpy: - return nan - return 0.0 - - -class Float32(Float): - np_type = ' Sequence: - if ctx.use_numpy: - return np.array(column) - return column - - def _write_column_binary(self, column, dest, _ctx): - write_array('B', [1 if x else 0 for x in column], dest) - - -class Boolean(Bool): - pass - - -class Enum(ClickHouseType): - __slots__ = '_name_map', '_int_map' - _array_type = 'b' - valid_formats = 'native', 'int' - python_type = str - - def __init__(self, type_def: TypeDef): - super().__init__(type_def) - escaped_keys = [key.replace("'", "\\'") for key in type_def.keys] - self._name_map = dict(zip(type_def.keys, type_def.values)) - self._int_map = dict(zip(type_def.values, type_def.keys)) - val_str = ', '.join(f"'{key}' = {value}" for key, value in zip(escaped_keys, type_def.values)) - self._name_suffix = f'({val_str})' - - def _read_column_binary(self, source: ByteSource, num_rows: int, ctx: QueryContext): - column = source.read_array(self._array_type, num_rows) - if self.read_format(ctx) == 'int': - return column - lookup = self._int_map.get - return [lookup(x, None) for x in column] - - def _write_column_binary(self, column: Union[Sequence, MutableSequence], dest: bytearray, _ctx): - first = self._first_value(column) - if first is None or not isinstance(first, str): - if self.nullable: - column = [0 if not x else x for x in column] - write_array(self._array_type, column, dest) - else: - lookup = self._name_map.get - write_array(self._array_type, [lookup(x, 0) for x in column], dest) - - -class Enum8(Enum): - _array_type = 'b' - byte_size = 1 - - -class Enum16(Enum): - _array_type = 'h' - byte_size = 2 - - -class Decimal(ClickHouseType): - __slots__ = 'prec', 'scale', '_mult', '_zeros', 'byte_size', '_array_type' - python_type = decimal.Decimal - dec_size = 0 - - @classmethod - def build(cls: Type['Decimal'], type_def: TypeDef): - size = cls.dec_size - if size == 0: - prec = type_def.values[0] - scale = type_def.values[1] - size = decimal_size(prec) - else: - prec = 
decimal_prec[size] - scale = type_def.values[0] - type_cls = BigDecimal if size > 64 else Decimal - return type_cls(type_def, prec, size, scale) - - def __init__(self, type_def: TypeDef, prec, size, scale): - super().__init__(type_def) - self.prec = prec - self.scale = scale - self._mult = 10 ** scale - self.byte_size = size // 8 - self._zeros = bytes([0] * self.byte_size) - self._name_suffix = f'({prec}, {scale})' - self._array_type = array_type(self.byte_size, True) - - def _read_column_binary(self, source: ByteSource, num_rows: int, _ctx: QueryContext): - column = source.read_array(self._array_type, num_rows) - dec = decimal.Decimal - scale = self.scale - prec = self.prec - if scale == 0: - return [dec(str(x)) for x in column] - new_col = [] - app = new_col.append - for x in column: - if x >= 0: - digits = str(x).rjust(prec, '0') - app(dec(f'{digits[:-scale]}.{digits[-scale:]}')) - else: - digits = str(-x).rjust(prec, '0') - app(dec(f'-{digits[:-scale]}.{digits[-scale:]}')) - return new_col - - def _write_column_binary(self, column: Union[Sequence, MutableSequence], dest: bytearray, _ctx): - with decimal.localcontext() as ctx: - ctx.prec = self.prec - dec = decimal.Decimal - mult = self._mult - if self.nullable: - write_array(self._array_type, [int(dec(x) * mult) if x else 0 for x in column], dest) - else: - write_array(self._array_type, [int(dec(x) * mult) for x in column], dest) - - def _active_null(self, ctx: QueryContext): - if ctx.use_none: - return None - digits = str('0').rjust(self.prec, '0') - scale = self.scale - return decimal.Decimal(f'{digits[:-scale]}.{digits[-scale:]}') - - -class BigDecimal(Decimal, registered=False): - def _read_column_binary(self, source: ByteSource, num_rows: int, _ctx): - dec = decimal.Decimal - scale = self.scale - prec = self.prec - column = [] - app = column.append - sz = self.byte_size - ifb = int.from_bytes - if scale == 0: - for _ in range(num_rows): - app(dec(str(ifb(source.read_bytes(sz), 'little', signed=True)))) - return column - for _ in range(num_rows): - x = ifb(source.read_bytes(sz), 'little', signed=True) - if x >= 0: - digits = str(x).rjust(prec, '0') - app(dec(f'{digits[:-scale]}.{digits[-scale:]}')) - else: - digits = str(-x).rjust(prec, '0') - app(dec(f'-{digits[:-scale]}.{digits[-scale:]}')) - return column - - def _write_column_binary(self, column: Union[Sequence, MutableSequence], dest: bytearray, _ctx): - with decimal.localcontext() as ctx: - ctx.prec = self.prec - mult = decimal.Decimal(f"{self._mult}.{'0' * self.scale}") - sz = self.byte_size - itb = int.to_bytes - if self.nullable: - v = self._zeros - for x in column: - dest += v if not x else itb(int(decimal.Decimal(x) * mult), sz, 'little', signed=True) - else: - for x in column: - dest += itb(int(decimal.Decimal(x) * mult), sz, 'little', signed=True) - - -class Decimal32(Decimal): - dec_size = 32 - - -class Decimal64(Decimal): - dec_size = 64 - - -class Decimal128(BigDecimal): - dec_size = 128 - - -class Decimal256(BigDecimal): - dec_size = 256 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/parts/image.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/parts/image.py deleted file mode 100644 index 6ece20d80b4466fef7857b656720c003e41f5779..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/parts/image.py +++ /dev/null @@ -1,89 +0,0 @@ -# encoding: utf-8 - -""" -The proxy class for an image part, and related objects. 
-""" - -from __future__ import ( - absolute_import, division, print_function, unicode_literals -) - -import hashlib - -from docx.image.image import Image -from docx.opc.part import Part -from docx.shared import Emu, Inches - - -class ImagePart(Part): - """ - An image part. Corresponds to the target part of a relationship with type - RELATIONSHIP_TYPE.IMAGE. - """ - def __init__(self, partname, content_type, blob, image=None): - super(ImagePart, self).__init__(partname, content_type, blob) - self._image = image - - @property - def default_cx(self): - """ - Native width of this image, calculated from its width in pixels and - horizontal dots per inch (dpi). - """ - px_width = self.image.px_width - horz_dpi = self.image.horz_dpi - width_in_inches = px_width / horz_dpi - return Inches(width_in_inches) - - @property - def default_cy(self): - """ - Native height of this image, calculated from its height in pixels and - vertical dots per inch (dpi). - """ - px_height = self.image.px_height - horz_dpi = self.image.horz_dpi - height_in_emu = 914400 * px_height / horz_dpi - return Emu(height_in_emu) - - @property - def filename(self): - """ - Filename from which this image part was originally created. A generic - name, e.g. 'image.png', is substituted if no name is available, for - example when the image was loaded from an unnamed stream. In that - case a default extension is applied based on the detected MIME type - of the image. - """ - if self._image is not None: - return self._image.filename - return 'image.%s' % self.partname.ext - - @classmethod - def from_image(cls, image, partname): - """ - Return an |ImagePart| instance newly created from *image* and - assigned *partname*. - """ - return ImagePart(partname, image.content_type, image.blob, image) - - @property - def image(self): - if self._image is None: - self._image = Image.from_blob(self.blob) - return self._image - - @classmethod - def load(cls, partname, content_type, blob, package): - """ - Called by ``docx.opc.package.PartFactory`` to load an image part from - a package being opened by ``Document(...)`` call. - """ - return cls(partname, content_type, blob) - - @property - def sha1(self): - """ - SHA1 hash digest of the blob of this image part. 
- """ - return hashlib.sha1(self._blob).hexdigest() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/feaLib/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/feaLib/__init__.py deleted file mode 100644 index ae532cd31b6eb54bdd5778c13989c1475b643db3..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/feaLib/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -"""fontTools.feaLib -- a package for dealing with OpenType feature files.""" - -# The structure of OpenType feature files is defined here: -# http://www.adobe.com/devnet/opentype/afdko/topic_feature_file_syntax.html diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/D_S_I_G_.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/D_S_I_G_.py deleted file mode 100644 index d902a29080aff5a275f530c7658d3c9eb4498034..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/D_S_I_G_.py +++ /dev/null @@ -1,151 +0,0 @@ -from fontTools.misc.textTools import bytesjoin, strjoin, tobytes, tostr, safeEval -from fontTools.misc import sstruct -from . import DefaultTable -import base64 - -DSIG_HeaderFormat = """ - > # big endian - ulVersion: L - usNumSigs: H - usFlag: H -""" -# followed by an array of usNumSigs DSIG_Signature records -DSIG_SignatureFormat = """ - > # big endian - ulFormat: L - ulLength: L # length includes DSIG_SignatureBlock header - ulOffset: L -""" -# followed by an array of usNumSigs DSIG_SignatureBlock records, -# each followed immediately by the pkcs7 bytes -DSIG_SignatureBlockFormat = """ - > # big endian - usReserved1: H - usReserved2: H - cbSignature: l # length of following raw pkcs7 data -""" - -# -# NOTE -# the DSIG table format allows for SignatureBlocks residing -# anywhere in the table and possibly in a different order as -# listed in the array after the first table header -# -# this implementation does not keep track of any gaps and/or data -# before or after the actual signature blocks while decompiling, -# and puts them in the same physical order as listed in the header -# on compilation with no padding whatsoever. 
-# - - -class table_D_S_I_G_(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - dummy, newData = sstruct.unpack2(DSIG_HeaderFormat, data, self) - assert self.ulVersion == 1, "DSIG ulVersion must be 1" - assert self.usFlag & ~1 == 0, "DSIG usFlag must be 0x1 or 0x0" - self.signatureRecords = sigrecs = [] - for n in range(self.usNumSigs): - sigrec, newData = sstruct.unpack2( - DSIG_SignatureFormat, newData, SignatureRecord() - ) - assert sigrec.ulFormat == 1, ( - "DSIG signature record #%d ulFormat must be 1" % n - ) - sigrecs.append(sigrec) - for sigrec in sigrecs: - dummy, newData = sstruct.unpack2( - DSIG_SignatureBlockFormat, data[sigrec.ulOffset :], sigrec - ) - assert sigrec.usReserved1 == 0, ( - "DSIG signature record #%d usReserverd1 must be 0" % n - ) - assert sigrec.usReserved2 == 0, ( - "DSIG signature record #%d usReserverd2 must be 0" % n - ) - sigrec.pkcs7 = newData[: sigrec.cbSignature] - - def compile(self, ttFont): - packed = sstruct.pack(DSIG_HeaderFormat, self) - headers = [packed] - offset = len(packed) + self.usNumSigs * sstruct.calcsize(DSIG_SignatureFormat) - data = [] - for sigrec in self.signatureRecords: - # first pack signature block - sigrec.cbSignature = len(sigrec.pkcs7) - packed = sstruct.pack(DSIG_SignatureBlockFormat, sigrec) + sigrec.pkcs7 - data.append(packed) - # update redundant length field - sigrec.ulLength = len(packed) - # update running table offset - sigrec.ulOffset = offset - headers.append(sstruct.pack(DSIG_SignatureFormat, sigrec)) - offset += sigrec.ulLength - if offset % 2: - # Pad to even bytes - data.append(b"\0") - return bytesjoin(headers + data) - - def toXML(self, xmlWriter, ttFont): - xmlWriter.comment( - "note that the Digital Signature will be invalid after recompilation!" - ) - xmlWriter.newline() - xmlWriter.simpletag( - "tableHeader", - version=self.ulVersion, - numSigs=self.usNumSigs, - flag="0x%X" % self.usFlag, - ) - for sigrec in self.signatureRecords: - xmlWriter.newline() - sigrec.toXML(xmlWriter, ttFont) - xmlWriter.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "tableHeader": - self.signatureRecords = [] - self.ulVersion = safeEval(attrs["version"]) - self.usNumSigs = safeEval(attrs["numSigs"]) - self.usFlag = safeEval(attrs["flag"]) - return - if name == "SignatureRecord": - sigrec = SignatureRecord() - sigrec.fromXML(name, attrs, content, ttFont) - self.signatureRecords.append(sigrec) - - -pem_spam = lambda l, spam={ - "-----BEGIN PKCS7-----": True, - "-----END PKCS7-----": True, - "": True, -}: not spam.get(l.strip()) - - -def b64encode(b): - s = base64.b64encode(b) - # Line-break at 76 chars. 
- items = [] - while s: - items.append(tostr(s[:76])) - items.append("\n") - s = s[76:] - return strjoin(items) - - -class SignatureRecord(object): - def __repr__(self): - return "<%s: %s>" % (self.__class__.__name__, self.__dict__) - - def toXML(self, writer, ttFont): - writer.begintag(self.__class__.__name__, format=self.ulFormat) - writer.newline() - writer.write_noindent("-----BEGIN PKCS7-----\n") - writer.write_noindent(b64encode(self.pkcs7)) - writer.write_noindent("-----END PKCS7-----\n") - writer.endtag(self.__class__.__name__) - - def fromXML(self, name, attrs, content, ttFont): - self.ulFormat = safeEval(attrs["format"]) - self.usReserved1 = safeEval(attrs.get("reserved1", "0")) - self.usReserved2 = safeEval(attrs.get("reserved2", "0")) - self.pkcs7 = base64.b64decode(tobytes(strjoin(filter(pem_spam, content)))) diff --git a/spaces/cihyFjudo/fairness-paper-search/Sulejman Bulgari Vratimo Se Gospodaru Kako se Pribliiti Allahu.md b/spaces/cihyFjudo/fairness-paper-search/Sulejman Bulgari Vratimo Se Gospodaru Kako se Pribliiti Allahu.md deleted file mode 100644 index 5d871d9aa642c73041be0f9c55a5b3fdf45fa41e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Sulejman Bulgari Vratimo Se Gospodaru Kako se Pribliiti Allahu.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Sulejman Bulgari Vratimo Se Gosp


      Download Zip →→→ https://tinurli.com/2uwiIa



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/cleanmaster/so-vits-svc-akagi/add_speaker.py b/spaces/cleanmaster/so-vits-svc-akagi/add_speaker.py deleted file mode 100644 index e224f07c892a5fe1837e3cbf1745e0d8992ea283..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/so-vits-svc-akagi/add_speaker.py +++ /dev/null @@ -1,62 +0,0 @@ -import os -import argparse -from tqdm import tqdm -from random import shuffle -import json - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list") - parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list") - parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list") - parser.add_argument("--source_dir", type=str, default="./dataset/32k", help="path to source dir") - args = parser.parse_args() - - previous_config = json.load(open("configs/config.json", "rb")) - - train = [] - val = [] - test = [] - idx = 0 - spk_dict = previous_config["spk"] - spk_id = max([i for i in spk_dict.values()]) + 1 - for speaker in tqdm(os.listdir(args.source_dir)): - if speaker not in spk_dict.keys(): - spk_dict[speaker] = spk_id - spk_id += 1 - wavs = [os.path.join(args.source_dir, speaker, i)for i in os.listdir(os.path.join(args.source_dir, speaker))] - wavs = [i for i in wavs if i.endswith("wav")] - shuffle(wavs) - train += wavs[2:-10] - val += wavs[:2] - test += wavs[-10:] - - assert previous_config["model"]["n_speakers"] > len(spk_dict.keys()) - shuffle(train) - shuffle(val) - shuffle(test) - - print("Writing", args.train_list) - with open(args.train_list, "w") as f: - for fname in tqdm(train): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.val_list) - with open(args.val_list, "w") as f: - for fname in tqdm(val): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.test_list) - with open(args.test_list, "w") as f: - for fname in tqdm(test): - wavpath = fname - f.write(wavpath + "\n") - - previous_config["spk"] = spk_dict - - print("Writing configs/config.json") - with open("configs/config.json", "w") as f: - json.dump(previous_config, f, indent=2) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/utils/core.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/utils/core.py deleted file mode 100644 index 8ecaa896b5051811798ae9db01bbf85673af3dbc..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/utils/core.py +++ /dev/null @@ -1,733 +0,0 @@ -""" -Utility routines -""" -from collections.abc import Mapping -from copy import deepcopy -import json -import itertools -import re -import sys -import traceback -import warnings -from typing import Callable, TypeVar, Any - -import jsonschema -import pandas as pd -import numpy as np - -from altair.utils.schemapi import SchemaBase - -if sys.version_info >= (3, 10): - from typing import ParamSpec -else: - from typing_extensions import ParamSpec - -try: - from pandas.api.types import infer_dtype as _infer_dtype -except ImportError: - # Import for pandas < 0.20.0 - from pandas.lib import infer_dtype as _infer_dtype # type: ignore[no-redef] - -_V = TypeVar("_V") -_P = ParamSpec("_P") - - -def infer_dtype(value): - """Infer the dtype of the value. - - This is a compatibility function for pandas infer_dtype, - with skipna=False regardless of the pandas version. 
- """ - if not hasattr(infer_dtype, "_supports_skipna"): - try: - _infer_dtype([1], skipna=False) - except TypeError: - # pandas < 0.21.0 don't support skipna keyword - infer_dtype._supports_skipna = False - else: - infer_dtype._supports_skipna = True - if infer_dtype._supports_skipna: - return _infer_dtype(value, skipna=False) - else: - return _infer_dtype(value) - - -TYPECODE_MAP = { - "ordinal": "O", - "nominal": "N", - "quantitative": "Q", - "temporal": "T", - "geojson": "G", -} - -INV_TYPECODE_MAP = {v: k for k, v in TYPECODE_MAP.items()} - - -# aggregates from vega-lite version 4.6.0 -AGGREGATES = [ - "argmax", - "argmin", - "average", - "count", - "distinct", - "max", - "mean", - "median", - "min", - "missing", - "product", - "q1", - "q3", - "ci0", - "ci1", - "stderr", - "stdev", - "stdevp", - "sum", - "valid", - "values", - "variance", - "variancep", -] - -# window aggregates from vega-lite version 4.6.0 -WINDOW_AGGREGATES = [ - "row_number", - "rank", - "dense_rank", - "percent_rank", - "cume_dist", - "ntile", - "lag", - "lead", - "first_value", - "last_value", - "nth_value", -] - -# timeUnits from vega-lite version 4.17.0 -TIMEUNITS = [ - "year", - "quarter", - "month", - "week", - "day", - "dayofyear", - "date", - "hours", - "minutes", - "seconds", - "milliseconds", - "yearquarter", - "yearquartermonth", - "yearmonth", - "yearmonthdate", - "yearmonthdatehours", - "yearmonthdatehoursminutes", - "yearmonthdatehoursminutesseconds", - "yearweek", - "yearweekday", - "yearweekdayhours", - "yearweekdayhoursminutes", - "yearweekdayhoursminutesseconds", - "yeardayofyear", - "quartermonth", - "monthdate", - "monthdatehours", - "monthdatehoursminutes", - "monthdatehoursminutesseconds", - "weekday", - "weeksdayhours", - "weekdayhoursminutes", - "weekdayhoursminutesseconds", - "dayhours", - "dayhoursminutes", - "dayhoursminutesseconds", - "hoursminutes", - "hoursminutesseconds", - "minutesseconds", - "secondsmilliseconds", - "utcyear", - "utcquarter", - "utcmonth", - "utcweek", - "utcday", - "utcdayofyear", - "utcdate", - "utchours", - "utcminutes", - "utcseconds", - "utcmilliseconds", - "utcyearquarter", - "utcyearquartermonth", - "utcyearmonth", - "utcyearmonthdate", - "utcyearmonthdatehours", - "utcyearmonthdatehoursminutes", - "utcyearmonthdatehoursminutesseconds", - "utcyearweek", - "utcyearweekday", - "utcyearweekdayhours", - "utcyearweekdayhoursminutes", - "utcyearweekdayhoursminutesseconds", - "utcyeardayofyear", - "utcquartermonth", - "utcmonthdate", - "utcmonthdatehours", - "utcmonthdatehoursminutes", - "utcmonthdatehoursminutesseconds", - "utcweekday", - "utcweeksdayhours", - "utcweekdayhoursminutes", - "utcweekdayhoursminutesseconds", - "utcdayhours", - "utcdayhoursminutes", - "utcdayhoursminutesseconds", - "utchoursminutes", - "utchoursminutesseconds", - "utcminutesseconds", - "utcsecondsmilliseconds", -] - - -def infer_vegalite_type(data): - """ - From an array-like input, infer the correct vega typecode - ('ordinal', 'nominal', 'quantitative', or 'temporal') - - Parameters - ---------- - data: Numpy array or Pandas Series - """ - # Otherwise, infer based on the dtype of the input - typ = infer_dtype(data) - - if typ in [ - "floating", - "mixed-integer-float", - "integer", - "mixed-integer", - "complex", - ]: - return "quantitative" - elif typ == "categorical" and data.cat.ordered: - return ("ordinal", data.cat.categories.tolist()) - elif typ in ["string", "bytes", "categorical", "boolean", "mixed", "unicode"]: - return "nominal" - elif typ in [ - "datetime", - "datetime64", - 
"timedelta", - "timedelta64", - "date", - "time", - "period", - ]: - return "temporal" - else: - warnings.warn( - "I don't know how to infer vegalite type from '{}'. " - "Defaulting to nominal.".format(typ), - stacklevel=1, - ) - return "nominal" - - -def merge_props_geom(feat): - """ - Merge properties with geometry - * Overwrites 'type' and 'geometry' entries if existing - """ - - geom = {k: feat[k] for k in ("type", "geometry")} - try: - feat["properties"].update(geom) - props_geom = feat["properties"] - except (AttributeError, KeyError): - # AttributeError when 'properties' equals None - # KeyError when 'properties' is non-existing - props_geom = geom - - return props_geom - - -def sanitize_geo_interface(geo): - """Santize a geo_interface to prepare it for serialization. - - * Make a copy - * Convert type array or _Array to list - * Convert tuples to lists (using json.loads/dumps) - * Merge properties with geometry - """ - - geo = deepcopy(geo) - - # convert type _Array or array to list - for key in geo.keys(): - if str(type(geo[key]).__name__).startswith(("_Array", "array")): - geo[key] = geo[key].tolist() - - # convert (nested) tuples to lists - geo = json.loads(json.dumps(geo)) - - # sanitize features - if geo["type"] == "FeatureCollection": - geo = geo["features"] - if len(geo) > 0: - for idx, feat in enumerate(geo): - geo[idx] = merge_props_geom(feat) - elif geo["type"] == "Feature": - geo = merge_props_geom(geo) - else: - geo = {"type": "Feature", "geometry": geo} - - return geo - - -def sanitize_dataframe(df): # noqa: C901 - """Sanitize a DataFrame to prepare it for serialization. - - * Make a copy - * Convert RangeIndex columns to strings - * Raise ValueError if column names are not strings - * Raise ValueError if it has a hierarchical index. - * Convert categoricals to strings. - * Convert np.bool_ dtypes to Python bool objects - * Convert np.int dtypes to Python int objects - * Convert floats to objects and replace NaNs/infs with None. - * Convert DateTime dtypes into appropriate string representations - * Convert Nullable integers to objects and replace NaN with None - * Convert Nullable boolean to objects and replace NaN with None - * convert dedicated string column to objects and replace NaN with None - * Raise a ValueError for TimeDelta dtypes - """ - df = df.copy() - - if isinstance(df.columns, pd.RangeIndex): - df.columns = df.columns.astype(str) - - for col in df.columns: - if not isinstance(col, str): - raise ValueError( - "Dataframe contains invalid column name: {0!r}. 
" - "Column names must be strings".format(col) - ) - - if isinstance(df.index, pd.MultiIndex): - raise ValueError("Hierarchical indices not supported") - if isinstance(df.columns, pd.MultiIndex): - raise ValueError("Hierarchical indices not supported") - - def to_list_if_array(val): - if isinstance(val, np.ndarray): - return val.tolist() - else: - return val - - for col_name, dtype in df.dtypes.items(): - if str(dtype) == "category": - # Work around bug in to_json for categorical types in older versions of pandas - # https://github.com/pydata/pandas/issues/10778 - # https://github.com/altair-viz/altair/pull/2170 - col = df[col_name].astype(object) - df[col_name] = col.where(col.notnull(), None) - elif str(dtype) == "string": - # dedicated string datatype (since 1.0) - # https://pandas.pydata.org/pandas-docs/version/1.0.0/whatsnew/v1.0.0.html#dedicated-string-data-type - col = df[col_name].astype(object) - df[col_name] = col.where(col.notnull(), None) - elif str(dtype) == "bool": - # convert numpy bools to objects; np.bool is not JSON serializable - df[col_name] = df[col_name].astype(object) - elif str(dtype) == "boolean": - # dedicated boolean datatype (since 1.0) - # https://pandas.io/docs/user_guide/boolean.html - col = df[col_name].astype(object) - df[col_name] = col.where(col.notnull(), None) - elif str(dtype).startswith("datetime"): - # Convert datetimes to strings. This needs to be a full ISO string - # with time, which is why we cannot use ``col.astype(str)``. - # This is because Javascript parses date-only times in UTC, but - # parses full ISO-8601 dates as local time, and dates in Vega and - # Vega-Lite are displayed in local time by default. - # (see https://github.com/altair-viz/altair/issues/1027) - df[col_name] = ( - df[col_name].apply(lambda x: x.isoformat()).replace("NaT", "") - ) - elif str(dtype).startswith("timedelta"): - raise ValueError( - 'Field "{col_name}" has type "{dtype}" which is ' - "not supported by Altair. Please convert to " - "either a timestamp or a numerical value." - "".format(col_name=col_name, dtype=dtype) - ) - elif str(dtype).startswith("geometry"): - # geopandas >=0.6.1 uses the dtype geometry. 
Continue here - # otherwise it will give an error on np.issubdtype(dtype, np.integer) - continue - elif str(dtype) in { - "Int8", - "Int16", - "Int32", - "Int64", - "UInt8", - "UInt16", - "UInt32", - "UInt64", - "Float32", - "Float64", - }: # nullable integer datatypes (since 24.0) and nullable float datatypes (since 1.2.0) - # https://pandas.pydata.org/pandas-docs/version/0.25/whatsnew/v0.24.0.html#optional-integer-na-support - col = df[col_name].astype(object) - df[col_name] = col.where(col.notnull(), None) - elif np.issubdtype(dtype, np.integer): - # convert integers to objects; np.int is not JSON serializable - df[col_name] = df[col_name].astype(object) - elif np.issubdtype(dtype, np.floating): - # For floats, convert to Python float: np.float is not JSON serializable - # Also convert NaN/inf values to null, as they are not JSON serializable - col = df[col_name] - bad_values = col.isnull() | np.isinf(col) - df[col_name] = col.astype(object).where(~bad_values, None) - elif dtype == object: - # Convert numpy arrays saved as objects to lists - # Arrays are not JSON serializable - col = df[col_name].apply(to_list_if_array, convert_dtype=False) - df[col_name] = col.where(col.notnull(), None) - return df - - -def parse_shorthand( - shorthand, - data=None, - parse_aggregates=True, - parse_window_ops=False, - parse_timeunits=True, - parse_types=True, -): - """General tool to parse shorthand values - - These are of the form: - - - "col_name" - - "col_name:O" - - "average(col_name)" - - "average(col_name):O" - - Optionally, a dataframe may be supplied, from which the type - will be inferred if not specified in the shorthand. - - Parameters - ---------- - shorthand : dict or string - The shorthand representation to be parsed - data : DataFrame, optional - If specified and of type DataFrame, then use these values to infer the - column type if not provided by the shorthand. - parse_aggregates : boolean - If True (default), then parse aggregate functions within the shorthand. - parse_window_ops : boolean - If True then parse window operations within the shorthand (default:False) - parse_timeunits : boolean - If True (default), then parse timeUnits from within the shorthand - parse_types : boolean - If True (default), then parse typecodes within the shorthand - - Returns - ------- - attrs : dict - a dictionary of attributes extracted from the shorthand - - Examples - -------- - >>> data = pd.DataFrame({'foo': ['A', 'B', 'A', 'B'], - ... 
'bar': [1, 2, 3, 4]}) - - >>> parse_shorthand('name') == {'field': 'name'} - True - - >>> parse_shorthand('name:Q') == {'field': 'name', 'type': 'quantitative'} - True - - >>> parse_shorthand('average(col)') == {'aggregate': 'average', 'field': 'col'} - True - - >>> parse_shorthand('foo:O') == {'field': 'foo', 'type': 'ordinal'} - True - - >>> parse_shorthand('min(foo):Q') == {'aggregate': 'min', 'field': 'foo', 'type': 'quantitative'} - True - - >>> parse_shorthand('month(col)') == {'field': 'col', 'timeUnit': 'month', 'type': 'temporal'} - True - - >>> parse_shorthand('year(col):O') == {'field': 'col', 'timeUnit': 'year', 'type': 'ordinal'} - True - - >>> parse_shorthand('foo', data) == {'field': 'foo', 'type': 'nominal'} - True - - >>> parse_shorthand('bar', data) == {'field': 'bar', 'type': 'quantitative'} - True - - >>> parse_shorthand('bar:O', data) == {'field': 'bar', 'type': 'ordinal'} - True - - >>> parse_shorthand('sum(bar)', data) == {'aggregate': 'sum', 'field': 'bar', 'type': 'quantitative'} - True - - >>> parse_shorthand('count()', data) == {'aggregate': 'count', 'type': 'quantitative'} - True - """ - if not shorthand: - return {} - - valid_typecodes = list(TYPECODE_MAP) + list(INV_TYPECODE_MAP) - - units = { - "field": "(?P.*)", - "type": "(?P{})".format("|".join(valid_typecodes)), - "agg_count": "(?Pcount)", - "op_count": "(?Pcount)", - "aggregate": "(?P{})".format("|".join(AGGREGATES)), - "window_op": "(?P{})".format("|".join(AGGREGATES + WINDOW_AGGREGATES)), - "timeUnit": "(?P{})".format("|".join(TIMEUNITS)), - } - - patterns = [] - - if parse_aggregates: - patterns.extend([r"{agg_count}\(\)"]) - patterns.extend([r"{aggregate}\({field}\)"]) - if parse_window_ops: - patterns.extend([r"{op_count}\(\)"]) - patterns.extend([r"{window_op}\({field}\)"]) - if parse_timeunits: - patterns.extend([r"{timeUnit}\({field}\)"]) - - patterns.extend([r"{field}"]) - - if parse_types: - patterns = list(itertools.chain(*((p + ":{type}", p) for p in patterns))) - - regexps = ( - re.compile(r"\A" + p.format(**units) + r"\Z", re.DOTALL) for p in patterns - ) - - # find matches depending on valid fields passed - if isinstance(shorthand, dict): - attrs = shorthand - else: - attrs = next( - exp.match(shorthand).groupdict() for exp in regexps if exp.match(shorthand) - ) - - # Handle short form of the type expression - if "type" in attrs: - attrs["type"] = INV_TYPECODE_MAP.get(attrs["type"], attrs["type"]) - - # counts are quantitative by default - if attrs == {"aggregate": "count"}: - attrs["type"] = "quantitative" - - # times are temporal by default - if "timeUnit" in attrs and "type" not in attrs: - attrs["type"] = "temporal" - - # if data is specified and type is not, infer type from data - if isinstance(data, pd.DataFrame) and "type" not in attrs: - # Remove escape sequences so that types can be inferred for columns with special characters - if "field" in attrs and attrs["field"].replace("\\", "") in data.columns: - attrs["type"] = infer_vegalite_type(data[attrs["field"].replace("\\", "")]) - # ordered categorical dataframe columns return the type and sort order as a tuple - if isinstance(attrs["type"], tuple): - attrs["sort"] = attrs["type"][1] - attrs["type"] = attrs["type"][0] - - # If an unescaped colon is still present, it's often due to an incorrect data type specification - # but could also be due to using a column name with ":" in it. 
- if ( - "field" in attrs - and ":" in attrs["field"] - and attrs["field"][attrs["field"].rfind(":") - 1] != "\\" - ): - raise ValueError( - '"{}" '.format(attrs["field"].split(":")[-1]) - + "is not one of the valid encoding data types: {}.".format( - ", ".join(TYPECODE_MAP.values()) - ) - + "\nFor more details, see https://altair-viz.github.io/user_guide/encodings/index.html#encoding-data-types. " - + "If you are trying to use a column name that contains a colon, " - + 'prefix it with a backslash; for example "column\\:name" instead of "column:name".' - ) - return attrs - - -def use_signature(Obj: Callable[_P, Any]): - """Apply call signature and documentation of Obj to the decorated method""" - - def decorate(f: Callable[..., _V]) -> Callable[_P, _V]: - # call-signature of f is exposed via __wrapped__. - # we want it to mimic Obj.__init__ - f.__wrapped__ = Obj.__init__ # type: ignore - f._uses_signature = Obj # type: ignore - - # Supplement the docstring of f with information from Obj - if Obj.__doc__: - # Patch in a reference to the class this docstring is copied from, - # to generate a hyperlink. - doclines = Obj.__doc__.splitlines() - doclines[0] = f"Refer to :class:`{Obj.__name__}`" - - if f.__doc__: - doc = f.__doc__ + "\n".join(doclines[1:]) - else: - doc = "\n".join(doclines) - try: - f.__doc__ = doc - except AttributeError: - # __doc__ is not modifiable for classes in Python < 3.3 - pass - - return f - - return decorate - - -def update_nested(original, update, copy=False): - """Update nested dictionaries - - Parameters - ---------- - original : dict - the original (nested) dictionary, which will be updated in-place - update : dict - the nested dictionary of updates - copy : bool, default False - if True, then copy the original dictionary rather than modifying it - - Returns - ------- - original : dict - a reference to the (modified) original dict - - Examples - -------- - >>> original = {'x': {'b': 2, 'c': 4}} - >>> update = {'x': {'b': 5, 'd': 6}, 'y': 40} - >>> update_nested(original, update) # doctest: +SKIP - {'x': {'b': 5, 'c': 4, 'd': 6}, 'y': 40} - >>> original # doctest: +SKIP - {'x': {'b': 5, 'c': 4, 'd': 6}, 'y': 40} - """ - if copy: - original = deepcopy(original) - for key, val in update.items(): - if isinstance(val, Mapping): - orig_val = original.get(key, {}) - if isinstance(orig_val, Mapping): - original[key] = update_nested(orig_val, val) - else: - original[key] = val - else: - original[key] = val - return original - - -def display_traceback(in_ipython=True): - exc_info = sys.exc_info() - - if in_ipython: - from IPython.core.getipython import get_ipython - - ip = get_ipython() - else: - ip = None - - if ip is not None: - ip.showtraceback(exc_info) - else: - traceback.print_exception(*exc_info) - - -def infer_encoding_types(args, kwargs, channels): - """Infer typed keyword arguments for args and kwargs - - Parameters - ---------- - args : tuple - List of function args - kwargs : dict - Dict of function kwargs - channels : module - The module containing all altair encoding channel classes. - - Returns - ------- - kwargs : dict - All args and kwargs in a single dict, with keys and types - based on the channels mapping. - """ - # Construct a dictionary of channel type to encoding name - # TODO: cache this somehow? 
- channel_objs = (getattr(channels, name) for name in dir(channels)) - channel_objs = ( - c for c in channel_objs if isinstance(c, type) and issubclass(c, SchemaBase) - ) - channel_to_name = {c: c._encoding_name for c in channel_objs} - name_to_channel = {} - for chan, name in channel_to_name.items(): - chans = name_to_channel.setdefault(name, {}) - if chan.__name__.endswith("Datum"): - key = "datum" - elif chan.__name__.endswith("Value"): - key = "value" - else: - key = "field" - chans[key] = chan - - # First use the mapping to convert args to kwargs based on their types. - for arg in args: - if isinstance(arg, (list, tuple)) and len(arg) > 0: - type_ = type(arg[0]) - else: - type_ = type(arg) - - encoding = channel_to_name.get(type_, None) - if encoding is None: - raise NotImplementedError("positional of type {}" "".format(type_)) - if encoding in kwargs: - raise ValueError("encoding {} specified twice.".format(encoding)) - kwargs[encoding] = arg - - def _wrap_in_channel_class(obj, encoding): - if isinstance(obj, SchemaBase): - return obj - - if isinstance(obj, str): - obj = {"shorthand": obj} - - if isinstance(obj, (list, tuple)): - return [_wrap_in_channel_class(subobj, encoding) for subobj in obj] - - if encoding not in name_to_channel: - warnings.warn( - "Unrecognized encoding channel '{}'".format(encoding), stacklevel=1 - ) - return obj - - classes = name_to_channel[encoding] - cls = classes["value"] if "value" in obj else classes["field"] - - try: - # Don't force validation here; some objects won't be valid until - # they're created in the context of a chart. - return cls.from_dict(obj, validate=False) - except jsonschema.ValidationError: - # our attempts at finding the correct class have failed - return obj - - return { - encoding: _wrap_in_channel_class(obj, encoding) - for encoding, obj in kwargs.items() - } diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attrs/setters.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attrs/setters.py deleted file mode 100644 index 9b50770804e4187f0c935ef17bddf2d9a61120ff..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attrs/setters.py +++ /dev/null @@ -1,3 +0,0 @@ -# SPDX-License-Identifier: MIT - -from attr.setters import * # noqa diff --git a/spaces/codedog-ai/edu-assistant/webui/qa.py b/spaces/codedog-ai/edu-assistant/webui/qa.py deleted file mode 100644 index 27989f0add6d99a3aac24430e12d6fc083fef382..0000000000000000000000000000000000000000 --- a/spaces/codedog-ai/edu-assistant/webui/qa.py +++ /dev/null @@ -1,85 +0,0 @@ -import gradio as gr -from fastapi.encoders import jsonable_encoder -from langchain.callbacks import get_openai_callback - -from edu_assistant.learning_tasks.qa import DEFAULT_INSTRUCTION, QaTask -from edu_assistant.utils.langchain_utils import load_vectorstore, shrink_docs - - -class QaUI: - def __init__( - self, *, instruction: str = DEFAULT_INSTRUCTION, enable_gpt4: bool = False, knowledge_name: str = "example" - ): - self._init_task(instruction, knowledge_name, enable_gpt4) - self._init_ui() - - def ui_render(self): - self.ui.render() - - def ui_reload( - self, - *, - instruction: str = DEFAULT_INSTRUCTION, - knowledge_name: str = "example", - enable_gpt4: bool = False, - refresh: bool = True, - ): - self._init_task(instruction, knowledge_name, enable_gpt4) - - if refresh: - self.ui_render() - - def get_instruction(self): - return self.instruction - - def _init_task(self, instruction, 
knowledge_name, enable_gpt4): - self.instruction = instruction - self.knowledge = knowledge_name - self.enable_gpt4 = enable_gpt4 - self.task = QaTask( - instruction=instruction, - knowledge=load_vectorstore(knowledge_name).as_retriever(), - enable_gpt4=enable_gpt4, - ) - - def _init_ui(self): - with gr.Blocks() as ui: - with gr.Row(): - with gr.Column(scale=6): - with gr.Row(): - chatbot = gr.Chatbot(height=500, label="聊天记录") - with gr.Row(): - msg = gr.Textbox(show_label=False) - with gr.Column(scale=1): - with gr.Row(): - clear_button = gr.Button(value="清空") - with gr.Row(): - session_id = gr.Textbox(label="Session", interactive=False, value="") - with gr.Row(): - status = gr.JSON(value="""{"tokens":0}""", label="Status") - with gr.Row(): - docs = gr.JSON(value="""["docs"]""", label="Docs") - - clear_button.click(self._clear, [], [msg, chatbot, session_id, status, docs]) - msg.submit(self._respond, [msg, chatbot, session_id], [msg, chatbot, session_id, status, docs]) - - self.ui = ui - - def _respond(self, message, chat_history, session_id): - with get_openai_callback() as cb: - if session_id: - result = self.task.ask(message, session_id=session_id) - else: - result = self.task.ask(message) - - session_id = result["session_id"] - docs = jsonable_encoder(shrink_docs(result.get("source_documents", []))) - - bot_message = result["answer"] - chat_history.append((message, bot_message)) - - status = {"tokens": cb.total_tokens, "cost": f"${cb.total_cost:.4f}"} - return "", chat_history, session_id, status, docs - - def _clear(self): - return "", [], "", {"tokens": 0}, ["docs"] diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_h264.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_h264.c deleted file mode 100644 index 6300b1418dfb01c2bd058f0a23e9312cbed68b4a..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_h264.c +++ /dev/null @@ -1,567 +0,0 @@ -/* - * DXVA2 H.264 HW acceleration. - * - * copyright (c) 2009 Laurent Aimar - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config_components.h" - -#include "libavutil/avassert.h" - -#include "dxva2_internal.h" -#include "h264dec.h" -#include "h264data.h" -#include "h264_ps.h" -#include "mpegutils.h" - -struct dxva2_picture_context { - DXVA_PicParams_H264 pp; - DXVA_Qmatrix_H264 qm; - unsigned slice_count; - DXVA_Slice_H264_Short slice_short[MAX_SLICES]; - DXVA_Slice_H264_Long slice_long[MAX_SLICES]; - const uint8_t *bitstream; - unsigned bitstream_size; -}; - -static void fill_picture_entry(DXVA_PicEntry_H264 *pic, - unsigned index, unsigned flag) -{ - assert((index&0x7f) == index && (flag&0x01) == flag); - pic->bPicEntry = index | (flag << 7); -} - -static void fill_picture_parameters(const AVCodecContext *avctx, AVDXVAContext *ctx, const H264Context *h, - DXVA_PicParams_H264 *pp) -{ - const H264Picture *current_picture = h->cur_pic_ptr; - const SPS *sps = h->ps.sps; - const PPS *pps = h->ps.pps; - int i, j; - - memset(pp, 0, sizeof(*pp)); - /* Configure current picture */ - fill_picture_entry(&pp->CurrPic, - ff_dxva2_get_surface_index(avctx, ctx, current_picture->f), - h->picture_structure == PICT_BOTTOM_FIELD); - /* Configure the set of references */ - pp->UsedForReferenceFlags = 0; - pp->NonExistingFrameFlags = 0; - for (i = 0, j = 0; i < FF_ARRAY_ELEMS(pp->RefFrameList); i++) { - const H264Picture *r; - if (j < h->short_ref_count) { - r = h->short_ref[j++]; - } else { - r = NULL; - while (!r && j < h->short_ref_count + 16) - r = h->long_ref[j++ - h->short_ref_count]; - } - if (r) { - fill_picture_entry(&pp->RefFrameList[i], - ff_dxva2_get_surface_index(avctx, ctx, r->f), - r->long_ref != 0); - - if ((r->reference & PICT_TOP_FIELD) && r->field_poc[0] != INT_MAX) - pp->FieldOrderCntList[i][0] = r->field_poc[0]; - if ((r->reference & PICT_BOTTOM_FIELD) && r->field_poc[1] != INT_MAX) - pp->FieldOrderCntList[i][1] = r->field_poc[1]; - - pp->FrameNumList[i] = r->long_ref ? 
r->pic_id : r->frame_num; - if (r->reference & PICT_TOP_FIELD) - pp->UsedForReferenceFlags |= 1 << (2*i + 0); - if (r->reference & PICT_BOTTOM_FIELD) - pp->UsedForReferenceFlags |= 1 << (2*i + 1); - } else { - pp->RefFrameList[i].bPicEntry = 0xff; - pp->FieldOrderCntList[i][0] = 0; - pp->FieldOrderCntList[i][1] = 0; - pp->FrameNumList[i] = 0; - } - } - - pp->wFrameWidthInMbsMinus1 = h->mb_width - 1; - pp->wFrameHeightInMbsMinus1 = h->mb_height - 1; - pp->num_ref_frames = sps->ref_frame_count; - - pp->wBitFields = ((h->picture_structure != PICT_FRAME) << 0) | - ((sps->mb_aff && - (h->picture_structure == PICT_FRAME)) << 1) | - (sps->residual_color_transform_flag << 2) | - /* sp_for_switch_flag (not implemented by FFmpeg) */ - (0 << 3) | - (sps->chroma_format_idc << 4) | - ((h->nal_ref_idc != 0) << 6) | - (pps->constrained_intra_pred << 7) | - (pps->weighted_pred << 8) | - (pps->weighted_bipred_idc << 9) | - /* MbsConsecutiveFlag */ - (1 << 11) | - (sps->frame_mbs_only_flag << 12) | - (pps->transform_8x8_mode << 13) | - ((sps->level_idc >= 31) << 14) | - /* IntraPicFlag (Modified if we detect a non - * intra slice in dxva2_h264_decode_slice) */ - (1 << 15); - - pp->bit_depth_luma_minus8 = sps->bit_depth_luma - 8; - pp->bit_depth_chroma_minus8 = sps->bit_depth_chroma - 8; - if (DXVA_CONTEXT_WORKAROUND(avctx, ctx) & FF_DXVA2_WORKAROUND_SCALING_LIST_ZIGZAG) - pp->Reserved16Bits = 0; - else if (DXVA_CONTEXT_WORKAROUND(avctx, ctx) & FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO) - pp->Reserved16Bits = 0x34c; - else - pp->Reserved16Bits = 3; /* FIXME is there a way to detect the right mode ? */ - pp->StatusReportFeedbackNumber = 1 + DXVA_CONTEXT_REPORT_ID(avctx, ctx)++; - pp->CurrFieldOrderCnt[0] = 0; - if ((h->picture_structure & PICT_TOP_FIELD) && - current_picture->field_poc[0] != INT_MAX) - pp->CurrFieldOrderCnt[0] = current_picture->field_poc[0]; - pp->CurrFieldOrderCnt[1] = 0; - if ((h->picture_structure & PICT_BOTTOM_FIELD) && - current_picture->field_poc[1] != INT_MAX) - pp->CurrFieldOrderCnt[1] = current_picture->field_poc[1]; - pp->pic_init_qs_minus26 = pps->init_qs - 26; - pp->chroma_qp_index_offset = pps->chroma_qp_index_offset[0]; - pp->second_chroma_qp_index_offset = pps->chroma_qp_index_offset[1]; - pp->ContinuationFlag = 1; - pp->pic_init_qp_minus26 = pps->init_qp - 26; - pp->num_ref_idx_l0_active_minus1 = pps->ref_count[0] - 1; - pp->num_ref_idx_l1_active_minus1 = pps->ref_count[1] - 1; - pp->Reserved8BitsA = 0; - pp->frame_num = h->poc.frame_num; - pp->log2_max_frame_num_minus4 = sps->log2_max_frame_num - 4; - pp->pic_order_cnt_type = sps->poc_type; - if (sps->poc_type == 0) - pp->log2_max_pic_order_cnt_lsb_minus4 = sps->log2_max_poc_lsb - 4; - else if (sps->poc_type == 1) - pp->delta_pic_order_always_zero_flag = sps->delta_pic_order_always_zero_flag; - pp->direct_8x8_inference_flag = sps->direct_8x8_inference_flag; - pp->entropy_coding_mode_flag = pps->cabac; - pp->pic_order_present_flag = pps->pic_order_present; - pp->num_slice_groups_minus1 = pps->slice_group_count - 1; - pp->slice_group_map_type = pps->mb_slice_group_map_type; - pp->deblocking_filter_control_present_flag = pps->deblocking_filter_parameters_present; - pp->redundant_pic_cnt_present_flag= pps->redundant_pic_cnt_present; - pp->Reserved8BitsB = 0; - pp->slice_group_change_rate_minus1= 0; /* XXX not implemented by FFmpeg */ - //pp->SliceGroupMap[810]; /* XXX not implemented by FFmpeg */ -} - -static void fill_scaling_lists(const AVCodecContext *avctx, AVDXVAContext *ctx, const H264Context *h, DXVA_Qmatrix_H264 *qm) -{ - 
const PPS *pps = h->ps.pps; - unsigned i, j; - memset(qm, 0, sizeof(*qm)); - if (DXVA_CONTEXT_WORKAROUND(avctx, ctx) & FF_DXVA2_WORKAROUND_SCALING_LIST_ZIGZAG) { - for (i = 0; i < 6; i++) - for (j = 0; j < 16; j++) - qm->bScalingLists4x4[i][j] = pps->scaling_matrix4[i][j]; - - for (i = 0; i < 64; i++) { - qm->bScalingLists8x8[0][i] = pps->scaling_matrix8[0][i]; - qm->bScalingLists8x8[1][i] = pps->scaling_matrix8[3][i]; - } - } else { - for (i = 0; i < 6; i++) - for (j = 0; j < 16; j++) - qm->bScalingLists4x4[i][j] = pps->scaling_matrix4[i][ff_zigzag_scan[j]]; - - for (i = 0; i < 64; i++) { - qm->bScalingLists8x8[0][i] = pps->scaling_matrix8[0][ff_zigzag_direct[i]]; - qm->bScalingLists8x8[1][i] = pps->scaling_matrix8[3][ff_zigzag_direct[i]]; - } - } -} - -static int is_slice_short(const AVCodecContext *avctx, AVDXVAContext *ctx) -{ - assert(DXVA_CONTEXT_CFG_BITSTREAM(avctx, ctx) == 1 || - DXVA_CONTEXT_CFG_BITSTREAM(avctx, ctx) == 2); - return DXVA_CONTEXT_CFG_BITSTREAM(avctx, ctx) == 2; -} - -static void fill_slice_short(DXVA_Slice_H264_Short *slice, - unsigned position, unsigned size) -{ - memset(slice, 0, sizeof(*slice)); - slice->BSNALunitDataLocation = position; - slice->SliceBytesInBuffer = size; - slice->wBadSliceChopping = 0; -} - -static int get_refpic_index(const DXVA_PicParams_H264 *pp, int surface_index) -{ - int i; - for (i = 0; i < FF_ARRAY_ELEMS(pp->RefFrameList); i++) { - if ((pp->RefFrameList[i].bPicEntry & 0x7f) == surface_index) - return i; - } - return 0x7f; -} - -static void fill_slice_long(AVCodecContext *avctx, DXVA_Slice_H264_Long *slice, - const DXVA_PicParams_H264 *pp, unsigned position, unsigned size) -{ - const H264Context *h = avctx->priv_data; - H264SliceContext *sl = &h->slice_ctx[0]; - AVDXVAContext *ctx = DXVA_CONTEXT(avctx); - unsigned list; - - memset(slice, 0, sizeof(*slice)); - slice->BSNALunitDataLocation = position; - slice->SliceBytesInBuffer = size; - slice->wBadSliceChopping = 0; - - slice->first_mb_in_slice = (sl->mb_y >> FIELD_OR_MBAFF_PICTURE(h)) * h->mb_width + sl->mb_x; - slice->NumMbsForSlice = 0; /* XXX it is set once we have all slices */ - slice->BitOffsetToSliceData = get_bits_count(&sl->gb) - 8; - slice->slice_type = ff_h264_get_slice_type(sl); - if (sl->slice_type_fixed) - slice->slice_type += 5; - slice->luma_log2_weight_denom = sl->pwt.luma_log2_weight_denom; - slice->chroma_log2_weight_denom = sl->pwt.chroma_log2_weight_denom; - if (sl->list_count > 0) - slice->num_ref_idx_l0_active_minus1 = sl->ref_count[0] - 1; - if (sl->list_count > 1) - slice->num_ref_idx_l1_active_minus1 = sl->ref_count[1] - 1; - slice->slice_alpha_c0_offset_div2 = sl->slice_alpha_c0_offset / 2; - slice->slice_beta_offset_div2 = sl->slice_beta_offset / 2; - slice->Reserved8Bits = 0; - - for (list = 0; list < 2; list++) { - unsigned i; - for (i = 0; i < FF_ARRAY_ELEMS(slice->RefPicList[list]); i++) { - if (list < sl->list_count && i < sl->ref_count[list]) { - const H264Picture *r = sl->ref_list[list][i].parent; - unsigned plane; - unsigned index; - if (DXVA_CONTEXT_WORKAROUND(avctx, ctx) & FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO) - index = ff_dxva2_get_surface_index(avctx, ctx, r->f); - else - index = get_refpic_index(pp, ff_dxva2_get_surface_index(avctx, ctx, r->f)); - fill_picture_entry(&slice->RefPicList[list][i], index, - sl->ref_list[list][i].reference == PICT_BOTTOM_FIELD); - for (plane = 0; plane < 3; plane++) { - int w, o; - if (plane == 0 && sl->pwt.luma_weight_flag[list]) { - w = sl->pwt.luma_weight[i][list][0]; - o = sl->pwt.luma_weight[i][list][1]; - } 
else if (plane >= 1 && sl->pwt.chroma_weight_flag[list]) { - w = sl->pwt.chroma_weight[i][list][plane-1][0]; - o = sl->pwt.chroma_weight[i][list][plane-1][1]; - } else { - w = 1 << (plane == 0 ? sl->pwt.luma_log2_weight_denom : - sl->pwt.chroma_log2_weight_denom); - o = 0; - } - slice->Weights[list][i][plane][0] = w; - slice->Weights[list][i][plane][1] = o; - } - } else { - unsigned plane; - slice->RefPicList[list][i].bPicEntry = 0xff; - for (plane = 0; plane < 3; plane++) { - slice->Weights[list][i][plane][0] = 0; - slice->Weights[list][i][plane][1] = 0; - } - } - } - } - slice->slice_qs_delta = 0; /* XXX not implemented by FFmpeg */ - slice->slice_qp_delta = sl->qscale - h->ps.pps->init_qp; - slice->redundant_pic_cnt = sl->redundant_pic_count; - if (sl->slice_type == AV_PICTURE_TYPE_B) - slice->direct_spatial_mv_pred_flag = sl->direct_spatial_mv_pred; - slice->cabac_init_idc = h->ps.pps->cabac ? sl->cabac_init_idc : 0; - if (sl->deblocking_filter < 2) - slice->disable_deblocking_filter_idc = 1 - sl->deblocking_filter; - else - slice->disable_deblocking_filter_idc = sl->deblocking_filter; - slice->slice_id = h->current_slice - 1; -} - -static int commit_bitstream_and_slice_buffer(AVCodecContext *avctx, - DECODER_BUFFER_DESC *bs, - DECODER_BUFFER_DESC *sc) -{ - const H264Context *h = avctx->priv_data; - const unsigned mb_count = h->mb_width * h->mb_height; - AVDXVAContext *ctx = DXVA_CONTEXT(avctx); - const H264Picture *current_picture = h->cur_pic_ptr; - struct dxva2_picture_context *ctx_pic = current_picture->hwaccel_picture_private; - DXVA_Slice_H264_Short *slice = NULL; - void *dxva_data_ptr = NULL; - uint8_t *dxva_data, *current, *end; - unsigned dxva_size = 0; - void *slice_data; - unsigned slice_size; - unsigned padding; - unsigned i; - unsigned type; - - /* Create an annex B bitstream buffer with only slice NAL and finalize slice */ -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - type = D3D11_VIDEO_DECODER_BUFFER_BITSTREAM; - if (FAILED(ID3D11VideoContext_GetDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context, - D3D11VA_CONTEXT(ctx)->decoder, - type, - &dxva_size, &dxva_data_ptr))) - return -1; - } -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - type = DXVA2_BitStreamDateBufferType; - if (FAILED(IDirectXVideoDecoder_GetBuffer(DXVA2_CONTEXT(ctx)->decoder, - type, - &dxva_data_ptr, &dxva_size))) - return -1; - } -#endif - - dxva_data = dxva_data_ptr; - current = dxva_data; - end = dxva_data + dxva_size; - - for (i = 0; i < ctx_pic->slice_count; i++) { - static const uint8_t start_code[] = { 0, 0, 1 }; - static const unsigned start_code_size = sizeof(start_code); - unsigned position, size; - - assert(offsetof(DXVA_Slice_H264_Short, BSNALunitDataLocation) == - offsetof(DXVA_Slice_H264_Long, BSNALunitDataLocation)); - assert(offsetof(DXVA_Slice_H264_Short, SliceBytesInBuffer) == - offsetof(DXVA_Slice_H264_Long, SliceBytesInBuffer)); - - if (is_slice_short(avctx, ctx)) - slice = &ctx_pic->slice_short[i]; - else - slice = (DXVA_Slice_H264_Short*)&ctx_pic->slice_long[i]; - - position = slice->BSNALunitDataLocation; - size = slice->SliceBytesInBuffer; - if (start_code_size + size > end - current) { - av_log(avctx, AV_LOG_ERROR, "Failed to build bitstream"); - break; - } - - slice->BSNALunitDataLocation = current - dxva_data; - slice->SliceBytesInBuffer = start_code_size + size; - - if (!is_slice_short(avctx, ctx)) { - DXVA_Slice_H264_Long *slice_long = (DXVA_Slice_H264_Long*)slice; - if (i < ctx_pic->slice_count - 1) - slice_long->NumMbsForSlice = - 
slice_long[1].first_mb_in_slice - slice_long[0].first_mb_in_slice; - else - slice_long->NumMbsForSlice = mb_count - slice_long->first_mb_in_slice; - } - - memcpy(current, start_code, start_code_size); - current += start_code_size; - - memcpy(current, &ctx_pic->bitstream[position], size); - current += size; - } - padding = FFMIN(128 - ((current - dxva_data) & 127), end - current); - if (slice && padding > 0) { - memset(current, 0, padding); - current += padding; - - slice->SliceBytesInBuffer += padding; - } -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) - if (FAILED(ID3D11VideoContext_ReleaseDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context, D3D11VA_CONTEXT(ctx)->decoder, type))) - return -1; -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) - if (FAILED(IDirectXVideoDecoder_ReleaseBuffer(DXVA2_CONTEXT(ctx)->decoder, type))) - return -1; -#endif - if (i < ctx_pic->slice_count) - return -1; - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - D3D11_VIDEO_DECODER_BUFFER_DESC *dsc11 = bs; - memset(dsc11, 0, sizeof(*dsc11)); - dsc11->BufferType = type; - dsc11->DataSize = current - dxva_data; - dsc11->NumMBsInBuffer = mb_count; - - type = D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL; - - av_assert0((dsc11->DataSize & 127) == 0); - } -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - DXVA2_DecodeBufferDesc *dsc2 = bs; - memset(dsc2, 0, sizeof(*dsc2)); - dsc2->CompressedBufferType = type; - dsc2->DataSize = current - dxva_data; - dsc2->NumMBsInBuffer = mb_count; - - type = DXVA2_SliceControlBufferType; - - av_assert0((dsc2->DataSize & 127) == 0); - } -#endif - - if (is_slice_short(avctx, ctx)) { - slice_data = ctx_pic->slice_short; - slice_size = ctx_pic->slice_count * sizeof(*ctx_pic->slice_short); - } else { - slice_data = ctx_pic->slice_long; - slice_size = ctx_pic->slice_count * sizeof(*ctx_pic->slice_long); - } - return ff_dxva2_commit_buffer(avctx, ctx, sc, - type, - slice_data, slice_size, mb_count); -} - - -static int dxva2_h264_start_frame(AVCodecContext *avctx, - av_unused const uint8_t *buffer, - av_unused uint32_t size) -{ - const H264Context *h = avctx->priv_data; - AVDXVAContext *ctx = DXVA_CONTEXT(avctx); - struct dxva2_picture_context *ctx_pic = h->cur_pic_ptr->hwaccel_picture_private; - - if (!DXVA_CONTEXT_VALID(avctx, ctx)) - return -1; - assert(ctx_pic); - - /* Fill up DXVA_PicParams_H264 */ - fill_picture_parameters(avctx, ctx, h, &ctx_pic->pp); - - /* Fill up DXVA_Qmatrix_H264 */ - fill_scaling_lists(avctx, ctx, h, &ctx_pic->qm); - - ctx_pic->slice_count = 0; - ctx_pic->bitstream_size = 0; - ctx_pic->bitstream = NULL; - return 0; -} - -static int dxva2_h264_decode_slice(AVCodecContext *avctx, - const uint8_t *buffer, - uint32_t size) -{ - const H264Context *h = avctx->priv_data; - const H264SliceContext *sl = &h->slice_ctx[0]; - AVDXVAContext *ctx = DXVA_CONTEXT(avctx); - const H264Picture *current_picture = h->cur_pic_ptr; - struct dxva2_picture_context *ctx_pic = current_picture->hwaccel_picture_private; - unsigned position; - - if (ctx_pic->slice_count >= MAX_SLICES) - return -1; - - if (!ctx_pic->bitstream) - ctx_pic->bitstream = buffer; - ctx_pic->bitstream_size += size; - - position = buffer - ctx_pic->bitstream; - if (is_slice_short(avctx, ctx)) - fill_slice_short(&ctx_pic->slice_short[ctx_pic->slice_count], - position, size); - else - fill_slice_long(avctx, &ctx_pic->slice_long[ctx_pic->slice_count], - &ctx_pic->pp, position, size); - ctx_pic->slice_count++; - - if (sl->slice_type != AV_PICTURE_TYPE_I && 
sl->slice_type != AV_PICTURE_TYPE_SI) - ctx_pic->pp.wBitFields &= ~(1 << 15); /* Set IntraPicFlag to 0 */ - return 0; -} - -static int dxva2_h264_end_frame(AVCodecContext *avctx) -{ - H264Context *h = avctx->priv_data; - H264SliceContext *sl = &h->slice_ctx[0]; - struct dxva2_picture_context *ctx_pic = - h->cur_pic_ptr->hwaccel_picture_private; - int ret; - - if (ctx_pic->slice_count <= 0 || ctx_pic->bitstream_size <= 0) - return -1; - ret = ff_dxva2_common_end_frame(avctx, h->cur_pic_ptr->f, - &ctx_pic->pp, sizeof(ctx_pic->pp), - &ctx_pic->qm, sizeof(ctx_pic->qm), - commit_bitstream_and_slice_buffer); - if (!ret) - ff_h264_draw_horiz_band(h, sl, 0, h->avctx->height); - return ret; -} - -#if CONFIG_H264_DXVA2_HWACCEL -const AVHWAccel ff_h264_dxva2_hwaccel = { - .name = "h264_dxva2", - .type = AVMEDIA_TYPE_VIDEO, - .id = AV_CODEC_ID_H264, - .pix_fmt = AV_PIX_FMT_DXVA2_VLD, - .init = ff_dxva2_decode_init, - .uninit = ff_dxva2_decode_uninit, - .start_frame = dxva2_h264_start_frame, - .decode_slice = dxva2_h264_decode_slice, - .end_frame = dxva2_h264_end_frame, - .frame_params = ff_dxva2_common_frame_params, - .frame_priv_data_size = sizeof(struct dxva2_picture_context), - .priv_data_size = sizeof(FFDXVASharedContext), -}; -#endif - -#if CONFIG_H264_D3D11VA_HWACCEL -const AVHWAccel ff_h264_d3d11va_hwaccel = { - .name = "h264_d3d11va", - .type = AVMEDIA_TYPE_VIDEO, - .id = AV_CODEC_ID_H264, - .pix_fmt = AV_PIX_FMT_D3D11VA_VLD, - .init = ff_dxva2_decode_init, - .uninit = ff_dxva2_decode_uninit, - .start_frame = dxva2_h264_start_frame, - .decode_slice = dxva2_h264_decode_slice, - .end_frame = dxva2_h264_end_frame, - .frame_params = ff_dxva2_common_frame_params, - .frame_priv_data_size = sizeof(struct dxva2_picture_context), - .priv_data_size = sizeof(FFDXVASharedContext), -}; -#endif - -#if CONFIG_H264_D3D11VA2_HWACCEL -const AVHWAccel ff_h264_d3d11va2_hwaccel = { - .name = "h264_d3d11va2", - .type = AVMEDIA_TYPE_VIDEO, - .id = AV_CODEC_ID_H264, - .pix_fmt = AV_PIX_FMT_D3D11, - .init = ff_dxva2_decode_init, - .uninit = ff_dxva2_decode_uninit, - .start_frame = dxva2_h264_start_frame, - .decode_slice = dxva2_h264_decode_slice, - .end_frame = dxva2_h264_end_frame, - .frame_params = ff_dxva2_common_frame_params, - .frame_priv_data_size = sizeof(struct dxva2_picture_context), - .priv_data_size = sizeof(FFDXVASharedContext), -}; -#endif diff --git a/spaces/congsaPfin/Manga-OCR/logs/Alchemy of Souls Season 1 Episode 11 Subtitles Korean Drama with English Subs.md b/spaces/congsaPfin/Manga-OCR/logs/Alchemy of Souls Season 1 Episode 11 Subtitles Korean Drama with English Subs.md deleted file mode 100644 index e02c39f68ae9fc4d9019f7f38c74cadfdbe7b4f2..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Alchemy of Souls Season 1 Episode 11 Subtitles Korean Drama with English Subs.md +++ /dev/null @@ -1,52 +0,0 @@ -
      -

      Alchemy of Souls Episode 11: A Near Flawless Episode

      -

      If you are a fan of fantasy K-dramas, you might have heard of Alchemy of Souls, a mystical tale by the writing duo Hong Jung-eun and Hong Mi-ran, who also created Hotel Del Luna. The show stars Lee Jae-wook, Go Youn-jung, and Hwang Min-hyun as young magicians who deal with heaven and earth. The series has been praised for its captivating story, stunning visuals, and stellar performances.

      -

      In episode 11, we see Jang Uk (Lee Jae-wook) losing his will to train after his breakup with Mu-deok (Go Youn-jung), who makes a bet with the Prince (Hwang Min-hyun) to become his servant. Meanwhile, at Cheonbugwan, a line of blind girls await judgment, and one of them is Naksu (Kim Ji-soo), the powerful sorceress who possesses Mu-deok's body. The episode is full of twists, turns, and emotions, making it one of the best episodes of the series so far.

      -

      alchemy of souls episode 11 download netnaija


      DOWNLOADhttps://urlca.com/2uO7Q0



      -

      What is Alchemy of Souls?

      -

Alchemy of Souls is a fantasy K-drama that premiered on Netflix in June 2022. It is based on a webtoon by Kim Young-hoon, which was inspired by Korean folklore and history. The show follows Jang Uk, a man from a prestigious family who wants to change his destiny with the help of magic. He meets Mu-deok, a blind woman who is actually Naksu, a powerful sorceress who escaped from heaven. Together, they face various challenges and enemies, as well as their own feelings for each other.

      -

      The show is divided into two parts, each consisting of 16 episodes. Part 1 focuses on Jang Uk and Mu-deok's training and relationship, while Part 2 explores the secrets and conflicts surrounding Cheonbugwan, the palace where Naksu used to live. The show combines romance, comedy, action, and mystery in a captivating way that keeps the viewers hooked.

      -

      Where to Watch Alchemy of Souls?

      -

      The best and legal way to watch Alchemy of Souls is on Netflix, where you can stream or download all the episodes with subtitles in various languages. Netflix is a popular streaming platform that offers a wide range of content, including movies, TV shows, documentaries, and originals. You can sign up for a free trial or choose from different plans that suit your budget and preferences.

      -

      Other legal streaming platforms that offer Alchemy of Souls are Bilibili, iQiyi, Viki, and WeTV. These platforms are mostly available in Asian countries and regions, such as China, Japan, Korea, Taiwan, Hong Kong, Singapore, Malaysia, Thailand, Indonesia, Philippines, Vietnam, Cambodia, Myanmar, Laos, Brunei, India, Nepal, Sri Lanka, Bangladesh, Pakistan, Afghanistan etc. You can watch Alchemy of Souls with subtitles in various languages on these platforms as well.

      -


      -

      How to Download Alchemy of Souls Episode 11 from Netnaija?

      -

      If you are looking for a way to download Alchemy of Souls episode 11 for free without using any legal streaming platforms, you might have come across Netnaija, a movie piracy website that claims to offer the latest and hottest movies and TV shows for free download. Netnaija is one of the many illegal websites that operate on the internet, violating the intellectual property rights of the creators and distributors of the content. However, before you decide to use Netnaija to download Alchemy of Souls episode 11, you should be aware of the risks and consequences of using such websites. Here is a step-by-step guide on how to download Alchemy of Souls episode 11 from Netnaija, as well as why you should avoid it and other piracy websites.

      - - Step 1: Go to Netnaija's website (https://www.thenetnaija.com/). You might need to use a VPN or proxy service to access the website, as it might be blocked by your internet service provider or government. - Step 2: Search for Alchemy of Souls in the search bar or browse through the categories and genres. You might find the show under Korean Drama, Fantasy, or Romance. - Step 3: Click on the link for Alchemy of Souls episode 11. You will be redirected to another page with a brief description of the episode and a download button. - Step 4: Click on the download button. You will be asked to complete a captcha or a survey to prove that you are not a robot. You might also encounter pop-up ads or redirects to other websites that might contain malware or viruses. - Step 5: After completing the captcha or survey, you will be given a link to download the episode. The link might be from a third-party file hosting service, such as Mega, Mediafire, Zippyshare, etc. You will need to click on the link and follow the instructions to download the file. - Step 6: After downloading the file, you will need to unzip it using a software like WinRAR or 7-Zip. You will then be able to watch the episode on your device using a media player like VLC or MX Player.

      Why You Should Avoid Netnaija and Other Piracy Websites?

      -

      While downloading Alchemy of Souls episode 11 from Netnaija might seem like an easy and convenient way to watch the show for free, you should know that it is illegal and unethical to do so. By using Netnaija and other piracy websites, you are not only breaking the law, but also harming the entertainment industry and yourself. Here are some of the reasons why you should avoid Netnaija and other piracy websites:

• Malware and viruses: Piracy websites often contain malicious software that can infect your device and compromise your data and security. You might also expose yourself to phishing, identity theft, ransomware, spyware, and other cyberattacks by clicking on pop-up ads or redirects.
• Legal issues: Piracy websites violate the intellectual property rights of the content creators and distributors, who can take legal action against them and their users. Depending on your country's laws, you might face fines, lawsuits, or even jail time for downloading or streaming pirated content.
• Ethical concerns: Piracy websites deprive the content creators and distributors of their rightful revenue and recognition, which can affect their ability to produce more quality content in the future. By using piracy websites, you are not supporting the hard work and creativity of the actors, writers, directors, producers, and crew members who make the show possible.
• Quality issues: Piracy websites often offer low-quality files that have poor audio and video quality, missing subtitles, distorted images, or corrupted data. You might also experience buffering, lagging, or crashing issues while watching the files on your device.

      Conclusion

      -

Alchemy of Souls episode 11 is a near flawless episode that delivers an exciting and emotional story with amazing visuals and performances. The show is one of the best fantasy K-dramas that you can watch on Netflix or other legal streaming platforms. However, if you are tempted to use Netnaija or other piracy websites to download the episode for free, you should think twice before doing so. Not only are you breaking the law and risking your device's security, but you are also hurting the entertainment industry and yourself. Therefore, we recommend that you watch Alchemy of Souls legally and ethically, and enjoy the show without any guilt or hassle.

      -

      FAQs

      -

      Here are some frequently asked questions and answers about Alchemy of Souls and Netnaija:

• Q: When will Alchemy of Souls Part 2 air? A: Alchemy of Souls Part 2 is expected to air in late 2023 or early 2024 on Netflix.
• Q: Who sings the OSTs for Alchemy of Souls? A: The OSTs for Alchemy of Souls are sung by various artists, such as IU, Ailee, Baekhyun, Taeyeon, and Chen. You can listen to the OSTs on Spotify, YouTube, or other music platforms.
• Q: Is Netnaija safe and legal to use? A: No, Netnaija is neither safe nor legal to use. It is a movie piracy website that offers illegal and unauthorized downloads of movies and TV shows. It can expose you to malware, viruses, legal issues, and ethical concerns.
• Q: How can I support the content creators and distributors of Alchemy of Souls? A: You can support them by watching the show on legal streaming platforms, such as Netflix, Bilibili, iQiyi, Viki, or WeTV. You can also buy the official merchandise, such as DVDs, books, posters, or accessories.
• Q: What are some other fantasy K-dramas that I can watch? A: Some other fantasy K-dramas that you can watch are Goblin, Hotel Del Luna, Tale of the Nine-Tailed, The King: Eternal Monarch, and Mystic Pop-up Bar.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bloody Roar 2 APK - The Best Fighting Game for Android - Download Now in 21 MB.md b/spaces/congsaPfin/Manga-OCR/logs/Bloody Roar 2 APK - The Best Fighting Game for Android - Download Now in 21 MB.md deleted file mode 100644 index c5a4f60f43d0d88c271c1bf3900f2bfb8087bae4..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Bloody Roar 2 APK - The Best Fighting Game for Android - Download Now in 21 MB.md +++ /dev/null @@ -1,197 +0,0 @@ -
      -

      Bloody Roar 2 APK Download 21 MB: A Guide for Fighting Game Fans

      -

      If you are a fan of fighting games, you might have heard of Bloody Roar, a series of games that features fighters who can transform into powerful beasts. One of the most popular entries in the series is Bloody Roar 2, which was released in 1999 for the PlayStation console. But did you know that you can also play this game on your Android device? Yes, you read that right. You can download a small APK file that will let you enjoy this classic game on your smartphone or tablet. In this article, we will tell you everything you need to know about Bloody Roar 2 APK download, including what the game is about, how to download and install it, how to play and enjoy it, and some cheats and tricks to spice up your gameplay. So, without further ado, let's get started.

      -

      bloody roar 2 apk download 21 mb


      Download Filehttps://urlca.com/2uO8Ir



      -

      What is Bloody Roar 2?

      -

      Bloody Roar 2 is a fighting arcade game developed by Eighting and Raizing in 1999. It is the sequel to the first Bloody Roar and the second installment of the Bloody Roar series. The game features a total of eleven playable characters, each with their own unique beast form that can be activated during the fight. The game also introduces "Beast Drives", super attacks that initiate a cutscene and inflict substantial damage towards the opponent.

      -

      A brief introduction to the game and its features

      -

      The game has several modes and options that you can choose from, such as arcade mode, story mode, training mode, survival mode, time attack mode, etc. You can also customize your game settings, such as difficulty level, time limit, rounds per match, etc. The game has a fast-paced and fluid combat system that allows you to perform combos, counters, throws, dodges, blocks, scratches, etc. The game also has a stunning graphics and sound quality that enhances the atmosphere and excitement of the fights.

      -

      The story and characters of Bloody Roar 2

      -

      The game takes place five years after the events of the first game, where the zoanthropes (people who can transform into beasts) have resumed their normal lives after defeating the evil Tyron Corporation. However, a new threat emerges when a group called the Zoanthrope Liberation Front (ZLF) starts to cause chaos and violence in the name of zoanthrope supremacy. Meanwhile, Alan Gado, a lion zoanthrope who is known for promoting peace between humans and zoanthropes, becomes a fugitive for an unknown reason. Several rebel zoanthropes join forces to stop the ZLF and Gado from plunging the world into war.

      -

      The game features eleven playable characters, seven of which are new additions to the series. They are:

      -
        -
• Yugo Ogami: A wolf zoanthrope who is the leader of the rebels and seeks to find out why Gado betrayed them.
• Alice Tsukagami: A rabbit zoanthrope who is a nurse and a friend of Yugo. She tries to help the wounded and stop the violence.
• Bakuryu: A mole zoanthrope who is a former assassin of the Tyron Corporation. He joins the rebels to atone for his past sins.
• Shenlong: A tiger zoanthrope who is a clone of Long, a character from the first game. He is a ruthless fighter who works for the ZLF.
• Busuzima: A chameleon zoanthrope who is a mad scientist and a former member of the Tyron Corporation. He experiments on himself and other zoanthropes to create new beast forms.
• Stun: A hornet zoanthrope who is a former researcher of the Tyron Corporation. He was turned into a monster by Busuzima and seeks revenge on him.
• Jenny Burtory: A bat zoanthrope who is a famous model and actress. She uses her beauty and charm to manipulate others.
• Gado: A lion zoanthrope who is a former mercenary and a hero of the first game. He becomes a wanted man for unknown reasons and leads the ZLF.
• Long: A tiger zoanthrope who is a martial arts master and a friend of Gado. He appears as a hidden character in the game.
• Shina: A leopard zoanthrope who is Gado's daughter and a soldier. She appears as a hidden character in the game.
• Uriko: A half-beast zoanthrope who is Alice's younger sister and a former experiment of the Tyron Corporation. She appears as a hidden character in the game.
• Kohryu: An iron mole zoanthrope who is a cyborg created by Busuzima. He appears as a hidden character in the game.
      -

      The gameplay and mechanics of Bloody Roar 2

      -

      The gameplay of Bloody Roar 2 is similar to other fighting games, where you have to defeat your opponent in one-on-one matches by depleting their health bar. You can choose from different characters, each with their own fighting style, strengths, and weaknesses. You can also switch between your human and beast forms during the fight, which gives you access to different moves, abilities, and advantages. For example, in beast form, you can perform stronger attacks, regenerate health, and use Beast Drives. However, you also consume your Beast Gauge, which limits the duration of your transformation. You can fill up your Beast Gauge by attacking, blocking, or taking damage from your opponent.
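
To make the Beast Gauge behaviour described above easier to picture, here is a rough, purely illustrative Python sketch of that loop. The class name, the 0-100 scale, and the gain and drain rates are all invented for this example; they are not taken from the game itself.

from dataclasses import dataclass

@dataclass
class BeastGauge:
    value: float = 0.0         # current charge on an invented 0-100 scale
    transformed: bool = False  # True while in beast form

    def gain(self, amount: float) -> None:
        # Attacking, blocking, or taking damage charges the gauge.
        self.value = min(100.0, self.value + amount)

    def transform(self) -> bool:
        # Switching to beast form only works while some charge remains.
        if self.value > 0:
            self.transformed = True
        return self.transformed

    def tick(self, drain_per_second: float = 5.0) -> None:
        # Beast form steadily consumes the gauge; at zero you revert to human form.
        if self.transformed:
            self.value = max(0.0, self.value - drain_per_second)
            if self.value == 0.0:
                self.transformed = False

gauge = BeastGauge()
for _ in range(10):       # trading blows in human form builds charge
    gauge.gain(8.0)
gauge.transform()         # spend it on stronger attacks and Beast Drives
while gauge.transformed:  # the transformation ends once the gauge is empty
    gauge.tick()

The real game naturally tracks this per fighter with its own numbers, but the rhythm the paragraph above describes, fill the gauge by fighting in human form and then spend it on a timed transformation, is the part worth keeping in mind when you plan your transformations.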

      -


      -

      The game has several features that make it unique and fun to play, such as:

      -
        -
• The ability to cancel your attacks into other attacks, creating combos and chains that can deal massive damage.
• The ability to counter your opponent's attacks by timing your own attacks or blocks correctly, creating opportunities for counterattacks.
• The ability to throw your opponent by pressing the throw button near them, breaking their guard and stunning them for a brief moment.
• The ability to dodge your opponent's attacks by pressing the dodge button along with a direction, avoiding damage and repositioning yourself.
• The ability to scratch your opponent by pressing the scratch button near them, inflicting minor damage and building up your Beast Gauge.
      -

      How to download and install Bloody Roar 2 APK?

      -

If you want to play Bloody Roar 2 on your Android device, you will need to download an APK file that contains the game data. An APK file is an application package file that can be installed on Android devices without using the Google Play Store. However, you will also need an emulator that can run PlayStation games on your device. An emulator is software that mimics the functions of another system, allowing you to play games that are not compatible with your device. In this case, you will need a PlayStation emulator for Android, such as ePSXe or FPse.

      -

      The requirements and steps for downloading the APK file

      -

      Before you download the APK file, you will need to make sure that your device meets the following requirements:

      -
        -
• You have enough storage space on your device or SD card (at least 50 MB).
• You have enabled the installation of apps from unknown sources on your device settings (usually under security or privacy options).
• You have an internet connection to download the file.
      -

      Once you have met these requirements, you can follow these steps to download the APK file:

      -
        -
      1. Go to this link: (https://www.mediafire.com/file/0g7x BIOS file, you can follow these steps to run and play the game:
          -
1. Launch the emulator app on your device and grant it the necessary permissions.
2. Go to the settings or preferences menu of the emulator and select the BIOS file that you have downloaded and placed on your device.
3. Go to the game or load menu of the emulator and select the Bloody Roar 2 APK file that you have installed on your device.
4. Wait for the game to load and start playing.
        -

        Here are some tips and suggestions for installing and running the game:

        • Make sure that your device has enough battery power or is plugged into a charger before playing the game, as it can drain your battery quickly.
        • Adjust the emulator settings to optimize the performance and graphics of the game, such as resolution, frame rate, sound quality, etc.
        • Use a controller or a keyboard if you have one, as they can provide a better gaming experience than using the touch screen.
        • Save your progress frequently by using the save state feature of the emulator, as the game does not have an auto-save function.

        The advantages and disadvantages of playing Bloody Roar 2 on Android devices

        -

        Playing Bloody Roar 2 on your Android device can have some advantages and disadvantages, depending on your preferences and expectations. Here are some of them:

        -

        The advantages are:

        • You can play the game anytime and anywhere, as long as you have your device with you.
        • You can enjoy the game on a larger screen and with better sound quality than on the original PlayStation console.
        • You can customize the game settings and controls to suit your liking and comfort.
        • You can access cheats and tricks that are not available on the original version of the game.

        The disadvantages are:

        • You may encounter some compatibility issues or bugs that can affect the gameplay or cause crashes.
        • You may experience some lag or slowdowns during some scenes or fights, especially if your device is not powerful enough to run the game smoothly.
        • You may find it hard to control the game using the touch screen, especially if you are used to using a controller or a keyboard.
        • You may lose your progress if you delete the APK file or the emulator app from your device, unless you back up your save data.

        How to play and enjoy Bloody Roar 2?

        -

        Now that you have downloaded and installed Bloody Roar 2 on your Android device, you might be wondering how to play and enjoy it. Well, don't worry, we have got you covered. In this section, we will give you some basic information and tips on how to play and enjoy Bloody Roar 2, such as the controls, the modes, the cheats, and more. So, let's dive in.

        -

        The basic controls and moves of Bloody Roar 2

        -

        The basic controls of Bloody Roar 2 are simple and easy to learn. You can use either the touch screen or a controller or a keyboard to play the game. Here are the default buttons for each action:

        | Action | Touch Screen | Controller | Keyboard |
        | --- | --- | --- | --- |
        | Punch | A | Square | Z |
        | Kick | B | X | X |
        | Beast Form/Beast Drive | C | O | C |
        | Dodge/Scratch/Throw | D | Triangle | A |
        | Start/Pause/Select Mode | Select/Start | Select/Start | Enter/Space |
        | Move Left/Right/Up/Down | D-Pad/Analog Stick | D-Pad/Analog Stick | Left/Right/Up/Down Arrows |
        | Block (Hold) | L1/R1 | L1/R1 | S |

        The basic moves of Bloody Roar 2 are also easy to execute. You can perform different types of attacks, such as punches, kicks, throws, scratches, etc. by pressing the corresponding buttons. You can also combine different buttons and directions to perform more advanced moves, such as combos, counters, dodges, etc. You can also activate your beast form by pressing the beast button, which will give you more power and abilities. You can also use your beast drive by pressing the beast button again when your beast gauge is full, which will trigger a special attack that can deal a lot of damage.
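        To make the fill-and-spend flow of the beast gauge easier to picture, here is a tiny illustrative model in Python. It is not taken from the game's code: the gauge capacity and the amount each scratch adds are made-up numbers, used only to show the idea that scratches build the gauge, a full gauge enables the Beast Drive, and using the Beast Drive empties the gauge again.

        ```python
        class BeastGauge:
            """Toy model of the fill-and-spend Beast Gauge mechanic."""

            def __init__(self, capacity: int = 100) -> None:
                self.capacity = capacity
                self.value = 0

            def scratch(self, amount: int = 10) -> None:
                # Scratches inflict minor damage but build the gauge.
                self.value = min(self.capacity, self.value + amount)

            def can_beast_drive(self) -> bool:
                return self.value >= self.capacity

            def beast_drive(self) -> bool:
                # Unleash the super attack only when the gauge is full.
                if not self.can_beast_drive():
                    return False
                self.value = 0
                return True

        gauge = BeastGauge()
        for _ in range(10):
            gauge.scratch()
        print(gauge.can_beast_drive())  # True after enough scratches
        print(gauge.beast_drive())      # True: the gauge is spent on the super attack
        ```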

        -

        Here are some examples of basic and advanced moves that you can try:

        • Punch + Kick: A simple combo that can be followed by other attacks.
        • Punch + Punch + Kick: A three-hit combo that can knock down your opponent.
        • Kick + Kick + Punch: A three-hit combo that can launch your opponent into the air.
        • Punch + Dodge + Punch: A counter move that can avoid your opponent's attack and hit them from behind.
        • Kick + Dodge + Kick: A counter move that can avoid your opponent's attack and hit them from the side.
        • Dodge + Scratch: A move that can scratch your opponent and fill up your beast gauge.
        • Dodge + Throw: A move that can throw your opponent and stun them for a moment.
        • Beast Form: A move that can transform you into your beast form and give you more power and abilities.
        • Beast Drive: A move that can unleash a super attack that can deal a lot of damage.

        Of course, these are just some of the moves that you can perform in the game. You can experiment with different combinations and find out what works best for you and your character. You can also check the move list of each character in the game menu or online to learn more about their moves and abilities.

        -

        The game modes and options of Bloody Roar 2

        -

        Bloody Roar 2 has several game modes and options that you can choose from, depending on your mood and preference. Here are some of them:

        • Arcade Mode: This is the main mode of the game, where you have to fight against a series of opponents until you reach the final boss. You can choose from different difficulty levels and time limits. You can also unlock new characters and endings by completing this mode with different characters.
        • Story Mode: This is a mode where you can follow the story of each character and learn more about their background and motivation. You have to fight against specific opponents and watch cutscenes that reveal the plot. You can also unlock new characters and endings by completing this mode with different characters.
        • Training Mode: This is a mode where you can practice your skills and moves without worrying about time or health. You can choose your opponent, stage, and settings. You can also view your inputs and damage output on the screen.
        • Survival Mode: This is a mode where you have to fight against an endless stream of opponents until you lose. You have to survive as long as possible and earn points based on your performance. You can also compare your score with other players on the online leaderboard.
        • Time Attack Mode: This is a mode where you have to defeat a series of opponents as fast as possible. You have to beat the clock and earn points based on your speed. You can also compare your score with other players on the online leaderboard.
        • Options Mode: This is a mode where you can customize your game settings, such as sound, display, controller, etc. You can also view your game records, such as wins, losses, time, etc.

        The cheats and tricks of Bloody Roar 2

        -

        Bloody Roar 2 has some cheats and tricks that you can use to enhance your gameplay or have some fun. Here are some of them:

        -

        A table of cheat codes and their effects

        | Cheat Code | Effect |
        | --- | --- |
        | Hold L1 + L2 + R1 + R2 at the title screen | Unlock all characters |
        | Hold L2 + R2 at the character selection screen | Select alternate costume |
        | Hold L1 + R1 at the character selection screen | Select kid mode (smaller characters) |
        | Hold L1 + L2 at the character selection screen | Select big head mode (larger heads) |
        | Hold L1 + L2 + R1 + R2 at the character selection screen | Select hyper mode (faster and stronger attacks) |
        | Hold L1 + L2 + R1 + R2 at the stage selection screen | Select any stage |
        | Hold L1 + L2 + R1 + R2 at the pause menu | Restart the match |
        | Hold L1 + L2 + R1 + R2 at the game over screen | Continue the game with full health |

        A list of tips and strategies for winning fights

        • Learn the moves and abilities of each character and their beast form, and use them wisely and effectively.
        • Use your beast form and beast drive when you have the opportunity, as they can give you an edge over your opponent.
        • Use combos and chains to deal more damage and prevent your opponent from recovering.
        • Use counters and dodges to avoid your opponent's attacks and create openings for your own attacks.
        • Use throws and scratches to break your opponent's guard and fill up your beast gauge.
        • Use blocks to reduce the damage you take from your opponent's attacks.
        • Watch your health and beast gauge, and don't let them run out.
        • Watch your opponent's movements and patterns, and anticipate their next move.
        • Adapt your strategy according to your opponent's character, style, and behavior.
        • Have fun and enjoy the game.

        A summary of the best characters and their beast forms

        -

        Bloody Roar 2 has eleven playable characters, each with their own unique beast form that can be activated during the fight. However, some characters are better than others in terms of power, speed, defense, range, etc. Here is a summary of the best characters and their beast forms in the game:

        • Yugo: A wolf zoanthrope who is well-balanced in all aspects. He has fast and powerful attacks, good combos, and a decent range. His beast form is a wolf that can perform a spinning attack that can hit multiple times.
        • Gado: A lion zoanthrope who is strong and durable. He has heavy and devastating attacks, good defense, and a long range. His beast form is a lion that can perform a roaring attack that can stun the opponent.
        • Bakuryu: A mole zoanthrope who is fast and agile. He has quick and sneaky attacks, good dodges, and a short range. His beast form is a mole that can perform a digging attack that can surprise the opponent.
        • Jenny: A bat zoanthrope who is graceful and elegant. She has smooth and stylish attacks, good counters, and a medium range. Her beast form is a bat that can perform a flying attack that can hit from above.
        • Shenlong: A tiger zoanthrope who is fierce and ruthless. He has brutal and savage attacks, good chains, and a medium range. His beast form is a tiger that can perform a slashing attack that can hit multiple times.

        Conclusion

        -

        Bloody Roar 2 is a classic fighting game that features fighters who can transform into powerful beasts. It has a captivating story, a diverse cast of characters, a fluid combat system, and a stunning graphics and sound quality. It is one of the best games in the Bloody Roar series and one of the best games in the genre. If you want to play this game on your Android device, you can download a small APK file that will let you enjoy this game on your smartphone or tablet. You will also need an emulator that can run PlayStation games on your device, such as ePSXe or FPse. You will also need a BIOS file that will allow the emulator to function properly. Once you have downloaded and installed everything, you can run and play the game on your device. You can also use some cheats and tricks to enhance your gameplay or have some fun. You can also learn some tips and strategies for winning fights and mastering the game. Bloody Roar 2 is a game that you will surely love and enjoy, whether you are a fan of fighting games or not. So, what are you waiting for? Download the APK file now and unleash the beast within you.

        -

        A list of FAQs about Bloody Roar 2 APK download

        -

        Here are some of the frequently asked questions about Bloody Roar 2 APK download, along with their answers:

        1. Q: Is Bloody Roar 2 APK download safe and legal?
           A: Yes, Bloody Roar 2 APK download is safe and legal, as long as you download it from a trusted and reputable source, such as the link we provided in this article. However, you should always be careful and cautious when downloading any file from the internet, as there may be some malicious or fraudulent sites that may harm your device or steal your data. You should also respect the intellectual property rights of the game developers and publishers, and not distribute or sell the APK file without their permission.
        2. Q: Is Bloody Roar 2 APK download compatible with all Android devices?
           A: No, Bloody Roar 2 APK download is not compatible with all Android devices, as it may require some specific features or specifications that your device may not have. For example, your device may not have enough storage space, memory, processor speed, graphics quality, etc. to run the game smoothly and properly. You should always check the requirements and compatibility of the APK file before downloading and installing it on your device.
        3. Q: Is Bloody Roar 2 APK download free or paid?
           A: Bloody Roar 2 APK download is free, as you do not have to pay any money to download or install it on your device. However, you may have to pay some money to download or install the emulator or the BIOS file that are needed to run the game on your device. You may also have to pay some money to access some features or options of the emulator or the game, such as premium settings, extra modes, etc.
        4. Q: Is Bloody Roar 2 APK download online or offline?
           A: Bloody Roar 2 APK download is offline, as you do not need an internet connection to play the game on your device. However, you will need an internet connection to download or install the APK file, the emulator, and the BIOS file on your device. You may also need an internet connection to access some features or options of the emulator or the game, such as online leaderboards, updates, etc.
        5. Q: Is Bloody Roar 2 APK download original or modified?
           A: Bloody Roar 2 APK download is original, as it is not modified or altered in any way from the original version of the game that was released for the PlayStation console in 1999. However, you may find some modified or hacked versions of the APK file online that may claim to offer some benefits or advantages over the original version, such as unlimited health, money, characters, etc. We do not recommend downloading or using these modified versions, as they may contain viruses, malware, spyware, etc. that may harm your device or steal your data. They may also cause some errors or glitches in the game that may ruin your gameplay experience.

        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Defense Zone 3 HD APK and Challenge Yourself with the Merciless Battles.md b/spaces/congsaPfin/Manga-OCR/logs/Download Defense Zone 3 HD APK and Challenge Yourself with the Merciless Battles.md deleted file mode 100644 index 892658dca1b576cc261e302801ed7994a5eb7a1d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Defense Zone 3 HD APK and Challenge Yourself with the Merciless Battles.md +++ /dev/null @@ -1,124 +0,0 @@ - -

        Defense Zone 3 APK Download: A Strategy Game with Stunning Graphics and Challenging Levels

        -

        If you are a fan of tower defense games, you might have heard of Defense Zone, a popular action/strategy game series that has been downloaded by millions of players worldwide. Defense Zone 3 is the latest installment in the series, and it offers new weapons, landscapes, and options galore. In this article, we will show you how to download and install Defense Zone 3 APK on your Android devices, as well as how to play it on your PC and Mac using an emulator.

        -

        What is Defense Zone 3?

        -

        The sequel to the popular action/strategy game series

        -

        Defense Zone 3 is the long-awaited sequel to Defense Zone 2, which was released in 2014. The game is developed by Artem Kotov, an independent developer who has created several other tower defense games, such as Defense Zone Original, Defense Zone HD, and Defense Zone Ultra HD. Defense Zone 3 is available for Android, iOS, Windows, Mac, and Linux platforms.

        -

        defense zone 3 apk download


        Download 🗹 https://urlca.com/2uOa9P



        -

        The core gameplay and features

        -

        The core of the game is still the same as the previous versions: you have to defend your base from waves of enemies who try to destroy it by any means necessary. You have access to various types of turrets that vary in terms of their attack range, firing speed, and damage type. You have to strategically place them on the map and upgrade them as you progress. You also have special abilities that can help you turn the tide of the battle, such as air strikes, nuclear bombs, freeze rays, and more.

        -

        The game has four difficulty levels that will challenge every player, from beginners to experts. You can also choose from eight different kinds of turrets, each with their own strengths and weaknesses. The game has 21 levels in total, each with varied seasons and types of landscapes that affect the gameplay. The game also supports over 60 languages, so you can enjoy it in your preferred language.

        -

        The new weapons, landscapes, and options

        -

        Defense Zone 3 introduces some new features that make the game even more dynamic and amazing. For example, you can now use flamethrowers that can set enemies on fire, or tesla coils that can zap multiple enemies at once. You can also encounter new types of enemies, such as drones, tanks, helicopters, and more. The game also has new landscapes that are stunningly detailed and realistic, such as deserts, forests, mountains, cities, and more. The game also has new options that allow you to customize your gameplay experience, such as changing the speed of the game, enabling or disabling blood effects, adjusting the sound volume, and more.

        -

        How to Download and Install Defense Zone 3 APK on Android Devices?

        -

        The requirements and permissions

        -

        To download and install Defense Zone 3 APK on your Android devices, you need to have an Android version of 5.1 or higher. You also need to have at least 243 MB of free storage space on your device. The game may ask for some permissions when you install it, such as access to your photos, media, files, network connections, Wi-Fi connections, device ID, call information, etc. These permissions are necessary for the game to function properly and to provide you with updates and support.
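          If you sideload from a computer with ADB (Android Debug Bridge) installed, you can check both requirements from the command line before copying the file over. The Python sketch below is an illustrative helper, not an official tool: it shells out to adb, reads the Android version with getprop, and prints the df output for the data partition so you can eyeball the free space. It assumes adb is on your PATH, USB debugging is enabled, and one device is connected; the df output format can vary between devices.

          ```python
          import subprocess

          def adb(*args: str) -> str:
              """Run an adb command and return its trimmed stdout."""
              result = subprocess.run(
                  ["adb", *args], capture_output=True, text=True, check=True
              )
              return result.stdout.strip()

          if __name__ == "__main__":
              version = adb("shell", "getprop", "ro.build.version.release")
              print(f"Android version: {version}")
              parts = version.split(".")
              major = int(parts[0])
              minor = int(parts[1]) if len(parts) > 1 else 0
              if (major, minor) >= (5, 1):
                  print("Meets the Android 5.1+ requirement.")
              else:
                  print("Below Android 5.1: the game may not install.")

              # Free space on the data partition (output format varies by device).
              print(adb("shell", "df", "/data"))
          ```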

        -

        defense zone 3 hd apk free download
        -defense zone 3 ultra hd apk download
        -defense zone 3 mod apk unlimited money download
        -defense zone 3 latest version apk download
        -defense zone 3 offline apk download
        -defense zone 3 hack apk download
        -defense zone 3 full apk download
        -defense zone 3 android game apk download
        -defense zone 3 strategy game apk download
        -defense zone 3 premium apk download
        -defense zone 3 apk download for pc
        -defense zone 3 apk download uptodown
        -defense zone 3 apk download apkpure
        -defense zone 3 apk download rexdl
        -defense zone 3 apk download revdl
        -defense zone 3 apk download android 1
        -defense zone 3 apk download mob.org
        -defense zone 3 apk download moddroid
        -defense zone 3 apk download happymod
        -defense zone 3 apk download an1.com
        -defense zone 3 hd game free download for android apk
        -defense zone 3 ultra hd game free download for android apk
        -defense zone 3 mod game free download for android apk
        -defense zone 3 latest game free download for android apk
        -defense zone 3 offline game free download for android apk
        -defense zone 3 hack game free download for android apk
        -defense zone 3 full game free download for android apk
        -defense zone 3 android game free download full version apk
        -defense zone 3 strategy game free download full version apk
        -defense zone 3 premium game free download full version apk
        -how to download and install defense zone 3 hd apk on android device
        -how to download and install defense zone 3 ultra hd apk on android device
        -how to download and install defense zone 3 mod apk on android device
        -how to update defense zone 3 hd apk on android device
        -how to update defense zone 3 ultra hd apk on android device
        -how to update defense zone 3 mod apk on android device
        -how to play defense zone 3 hd on android device without internet connection
        -how to play defense zone 3 ultra hd on android device without internet connection
        -how to play defense zone 3 mod on android device without internet connection
        -how to unlock all weapons and levels in defense zone 3 hd apk on android device
        -how to unlock all weapons and levels in defense zone 3 ultra hd apk on android device
        -how to unlock all weapons and levels in defense zone 3 mod apk on android device
        -how to fix defense zone 3 hd not working or crashing on android device
        -how to fix defense zone 3 ultra hd not working or crashing on android device
        -how to fix defense zone 3 mod not working or crashing on android device
        -what are the best tips and tricks for playing defense zone 3 hd on android device
        -what are the best tips and tricks for playing defense zone 3 ultra hd on android device
        -what are the best tips and tricks for playing defense zone 3 mod on android device

        -

        The steps to download and install

        -

        To download and install Defense Zone 3 APK on your Android devices, you can follow these simple steps:

      1. Go to …

      How to Download and Play Defense Zone 3 on PC and Mac?

          The benefits of playing on PC and Mac

          -

          While Defense Zone 3 is a great game to play on your Android devices, you might want to try it on your PC and Mac as well. There are several benefits of playing Defense Zone 3 on PC and Mac, such as:

          -
          • You can enjoy the game on a bigger screen, which enhances the visual quality and the immersion.
          • You can use your mouse and keyboard to control the game, which gives you more accuracy and convenience.
          • You can save your battery life and storage space on your Android devices.
          • You can play the game without any interruptions from calls, messages, or notifications.

          The best emulator to use

          -

          To play Defense Zone 3 on PC and Mac, you need to use an emulator, which is a software that allows you to run Android apps and games on your computer. There are many emulators available online, but not all of them are compatible with Defense Zone 3. Based on our research, we recommend using one of these emulators:

          | Emulator | Features | Download Link |
          | --- | --- | --- |
          | BlueStacks | The most popular and trusted emulator for Android games. Supports high-resolution graphics and smooth gameplay. Has a MOBA mode that optimizes the controls for Defense Zone 3. Allows you to customize the key mapping, speed, sound, and other settings. | BlueStacks |
          | NoxPlayer | A fast and powerful emulator that runs smoothly on low-end PCs. Supports multiple instances that let you play different games or accounts at the same time. Has a script recording feature that lets you automate your actions in the game. Allows you to adjust the resolution, performance, graphics, and other settings. | NoxPlayer |
          | LDPlayer | A lightweight and stable emulator that consumes less CPU and RAM. Supports high frame rate and high graphics quality. Has a game booster feature that enhances the performance of Defense Zone 3. Allows you to change the language, theme, keyboard layout, and other settings. | LDPlayer |

          The steps to download and play

          -

          To download and play Defense Zone 3 on PC and Mac using an emulator, you can follow these general steps:

          -
          1. Download and install the emulator of your choice on your PC or Mac.
          2. Launch the emulator and sign in with your Google account.
          3. Search for Defense Zone 3 in the emulator's app store or Google Play Store.
          4. Click to install Defense Zone 3 from the search results.
          5. Click the Defense Zone 3 icon on the emulator's home screen to start playing.

          Conclusion

          -

          Defense Zone 3 is a strategy game that will test your skills and tactics in defending your base from enemy attacks. The game has stunning graphics, challenging levels, and various weapons and options to choose from. You can download and install Defense Zone 3 APK on your Android devices easily by following our guide. You can also play Defense Zone 3 on your PC and Mac using an emulator that suits your preferences. We hope you enjoy playing Defense Zone 3 and have fun!

          -

          FAQs

          -

          Here are some frequently asked questions about Defense Zone 3:

          -

          Is Defense Zone 3 free to play?

          -

          Yes, Defense Zone 3 is free to download and play. However, the game contains ads and in-app purchases that can enhance your gameplay experience.

          -

          How can I remove ads from Defense Zone 3?

          -

          You can remove ads from Defense Zone 3 by purchasing the ad-free version of the game for $2.99. Alternatively, you can turn off your internet connection while playing the game.

          -

          How can I save my progress in Defense Zone 3?

          -

          You can save your progress in Defense Zone 3 by connecting your game account to Google Play Games or Facebook. This way, you can sync your data across different devices and platforms.

          -

          How can I get more coins in Defense Zone 3?

          -

          You can get more coins in Defense Zone 3 by playing the game and completing the levels. You can also watch video ads or complete offers to earn extra coins. Additionally, you can buy coins with real money through the in-app store.

          -

          How can I contact the developer of Defense Zone 3?

          -

          You can contact the developer of Defense Zone 3 by sending an email to support@defensezone.net. You can also visit the official website of the game at defensezone.net or follow the developer on Facebook at facebook.com/DefenseZoneGames.

          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Mad Skills Motocross 3 MOD APK Terbaru and Become a Pro Rider.md b/spaces/congsaPfin/Manga-OCR/logs/Download Mad Skills Motocross 3 MOD APK Terbaru and Become a Pro Rider.md deleted file mode 100644 index 407962398b078acfd490db0a00e9c9a10aba3b43..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Mad Skills Motocross 3 MOD APK Terbaru and Become a Pro Rider.md +++ /dev/null @@ -1,74 +0,0 @@ -
          -

          Download Mad Skills Motocross 3 Mod APK Terbaru

          -

          If you are a fan of motocross games, you must have heard of Mad Skills Motocross 3, the latest installment in the popular series of motocross simulators. This game offers you a thrilling and realistic experience of riding a dirt bike on challenging tracks, performing amazing stunts, and competing with other players online. In this article, we will tell you everything you need to know about Mad Skills Motocross 3, and how to download and install its mod apk version for free.

          -

          download mad skills motocross 3 mod apk terbaru


          Download Zip ->>->>->> https://urlca.com/2uOapf



          -

          What is Mad Skills Motocross 3?

          -

          Mad Skills Motocross 3 is a motocross game developed by Turborilla, the same studio that created the previous two games in the series. It was released in May 2021 for Android and iOS devices, and has received positive reviews from critics and players alike. Mad Skills Motocross 3 is a game that combines realistic physics, simple controls, great graphics, and horizontal scrolling. You can choose from different bikes and riders, customize their appearance and performance, and race on various tracks with different terrains and obstacles. You can also challenge other players online, join weekly competitions and events, and climb the leaderboards.

          -

          Features of Mad Skills Motocross 3

          -

          Mad Skills Motocross 3 has many features that make it one of the best motocross games on the market. Here are some of them:

          -

          Realistic physics

          -

          The game uses a realistic physics engine that simulates the behavior of the bike and the rider on different surfaces and situations. You can feel the weight, speed, momentum, and balance of your bike as you ride it. You can also perform various stunts, such as wheelies, flips, whips, scrubs, and more. The game also has a dynamic weather system that affects the track conditions and your performance.

          -

          Customizable bikes and riders

          -

          The game allows you to choose from different types of bikes, such as MX, Enduro, Trail, Supermoto, Electric, and more. Each bike has its own characteristics and advantages. You can also customize your bike's color, parts, decals, and numbers. You can also choose your rider's gender, skin tone, hair style, helmet, goggles, gloves, boots, jersey, pants, and accessories. You can create your own unique style and show it off to other players.

          -

          Online multiplayer and leaderboards

          -

          The game has an online multiplayer mode where you can race against other players from around the world in real time. You can join or create rooms with different settings, such as track selection, number of laps, difficulty level, weather condition, etc. You can also chat with other players before and after the race. The game also has a global leaderboard where you can see your rank and stats compared to other players. You can also view your friends' ranks and challenge them to a race.

          -

          Weekly competitions and events

          -

          The game has a weekly competition mode where you can compete with other players on a new track every week. The track changes every Monday at midnight GMT. You can race as many times as you want during the week and try to improve your time. The top 1000 players at the end of the week will receive rewards based on their rank. The game also has special events where you can win exclusive prizes by completing certain tasks or challenges.

          Why download Mad Skills Motocross 3 mod apk?

          -

          Mad Skills Motocross 3 is a free-to-play game, but it also has some in-app purchases and ads that may limit your enjoyment. For example, you need to spend money to buy new bikes, skins, and upgrades, or to unlock premium tracks and events. You also have to watch ads to get extra rewards or to continue playing after losing a race. If you want to enjoy the game without these limitations, you may want to download Mad Skills Motocross 3 mod apk. This is a modified version of the game that gives you some advantages and benefits, such as:

          -

          Unlimited money

          -

          With Mad Skills Motocross 3 mod apk, you will have unlimited money in your account. You can use this money to buy anything you want in the game, such as new bikes, skins, upgrades, tracks, events, etc. You don't have to worry about running out of money or saving up for something expensive. You can also skip the ads and enjoy the game without interruptions.

          -

          No ads

          -

          Another benefit of Mad Skills Motocross 3 mod apk is that it removes all the ads from the game. You don't have to watch any ads to get extra rewards or to continue playing after losing a race. You can also avoid the annoying pop-ups and banners that may distract you from the game. You can enjoy the game without any ads and have a smoother and faster gaming experience.

          -

          How to download mad skills motocross 3 mod apk latest version
          -Mad skills motocross 3 mod apk unlimited money and rockets
          -Download mad skills motocross 3 hacked apk for android
          -Mad skills motocross 3 mod apk free download full version
          -Mad skills motocross 3 mod apk offline gameplay
          -Download mad skills motocross 3 mod apk with all bikes unlocked
          -Mad skills motocross 3 mod apk realistic physics and graphics
          -Download mad skills motocross 3 mod apk no ads and no root
          -Mad skills motocross 3 mod apk best racing game for mobile
          -Download mad skills motocross 3 mod apk from APKMB.Com[^1^]
          -Mad skills motocross 3 mod apk easy controls and customization
          -Download mad skills motocross 3 mod apk with multiplayer mode
          -Mad skills motocross 3 mod apk new tracks and challenges
          -Download mad skills motocross 3 mod apk with cheat codes
          -Mad skills motocross 3 mod apk fun and addictive gameplay
          -Download mad skills motocross 3 premium apk for free
          -Mad skills motocross 3 mod apk high quality sound and music
          -Download mad skills motocross 3 cracked apk with unlimited coins
          -Mad skills motocross 3 mod apk latest update and features
          -Download mad skills motocross 3 modded apk with pro tips
          -Mad skills motocross 3 mod apk smooth performance and optimization
          -Download mad skills motocross 3 hack apk with unlimited gems
          -Mad skills motocross 3 mod apk awesome stunts and tricks
          -Download mad skills motocross 3 mega mod apk with all levels unlocked
          -Mad skills motocross 3 mod apk low storage and battery consumption

          -

          All bikes and skins unlocked

          -

          Mad Skills Motocross 3 mod apk also unlocks all the bikes and skins in the game. You don't have to spend money or complete certain tasks to unlock them. You can choose from any bike and skin you want and customize your ride and rider as you wish. You can also access all the premium tracks and events that are normally locked for regular players. You can have more fun and variety in the game with all the bikes and skins unlocked.

          -

          How to download and install Mad Skills Motocross 3 mod apk?

          -

          If you are interested in downloading and installing Mad Skills Motocross 3 mod apk, you need to follow these simple steps:

          -

          Step 1: Download the mod apk file from a trusted source

          -

          The first step is to download the mod apk file from a trusted source. You can search for Mad Skills Motocross 3 mod apk on Google or any other search engine and find a reliable website that offers it. Make sure that the website is safe and secure, and that the file is free of viruses and malware. You can also check the reviews and ratings of other users who have downloaded the file before. Once you find a good website, click on the download button and save the file on your device.

          -

          Step 2: Enable unknown sources on your device

          -

          The next step is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. To enable unknown sources, go to your device's settings, then security, then unknown sources, and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device, but don't worry, this is just a precautionary measure. You can trust Mad Skills Motocross 3 mod apk as long as you download it from a trusted source.

          -

          Step 3: Install the mod apk file and enjoy the game

          -

          The final step is to install the mod apk file and enjoy the game. To install the file, go to your device's file manager, then locate the downloaded file, then tap on it. You may see a pop-up window that asks you to confirm the installation, just tap on install and wait for a few seconds. Once the installation is done, you can open the game and start playing with all the advantages and benefits of Mad Skills Motocross 3 mod apk.
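          As an alternative to tapping the file on the phone itself, if you have ADB set up on a computer you can install the APK from there. The snippet below is only an illustrative sketch with a placeholder file name; it assumes adb is installed, USB debugging is enabled, and exactly one device is connected. The -r flag asks adb to reinstall over an existing copy of the app while keeping its data.

          ```python
          import subprocess
          import sys

          APK_PATH = "mad_skills_motocross_3_mod.apk"  # placeholder file name

          def install(apk_path: str) -> None:
              """Sideload an APK onto the connected device via adb."""
              result = subprocess.run(
                  ["adb", "install", "-r", apk_path],
                  capture_output=True, text=True,
              )
              print(result.stdout)
              if result.returncode != 0:
                  print(result.stderr, file=sys.stderr)
                  raise SystemExit("adb install failed")

          if __name__ == "__main__":
              install(APK_PATH)
          ```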

          -

          Conclusion

          -

          Mad Skills Motocross 3 is an amazing motocross game that offers you a realistic and thrilling experience of riding a dirt bike on challenging tracks. You can customize your bike and rider, compete with other players online, join weekly competitions and events, and perform amazing stunts. However, if you want to enjoy the game without any limitations or ads, you may want to download Mad Skills Motocross 3 mod apk. This is a modified version of the game that gives you unlimited money, no ads, and all bikes and skins unlocked. You can download Mad Skills Motocross 3 mod apk from a trusted source, enable unknown sources on your device, install the file, and enjoy the game.

          -

          Here are some FAQs about Mad Skills Motocross 3 mod apk:

          -

          FAQs

          -
          • Q: Is Mad Skills Motocross 3 mod apk safe to use?
          • A: Yes, Mad Skills Motocross 3 mod apk is safe to use as long as you download it from a trusted source and enable unknown sources on your device. However, you should always be careful when installing apps from unknown sources and scan them for viruses and malware before installing them.
          • Q: Does Mad Skills Motocross 3 mod apk require root access?
          • A: No, Mad Skills Motocross 3 mod apk does not require root access to work. You can install and play the game without rooting your device.
          • Q: Will Mad Skills Motocross 3 mod apk affect my progress in the original game?
          • A: No, Mad Skills Motocross 3 mod apk will not affect your progress in the original game. The mod apk is a separate app that has its own data and settings. You can play both the original game and the mod apk on the same device without any conflict or interference.
          • Q: Can I play online with other players using Mad Skills Motocross 3 mod apk?
          • A: Yes, you can play online with other players using Mad Skills Motocross 3 mod apk. However, you may encounter some issues or errors when playing online, such as lag, disconnects, or bans. This is because the mod apk may not be compatible with the latest version of the game or the server. Therefore, we recommend playing online at your own risk and using a VPN if possible.
          • Q: How can I update Mad Skills Motocross 3 mod apk?
          • A: To update Mad Skills Motocross 3 mod apk, you need to download the latest version of the mod apk file from a trusted source and install it over the existing app. You don't need to uninstall the previous version or lose your data. However, you should always back up your data before updating any app, just in case something goes wrong.
          -
          -
          \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Galaxy Attack Alien Shooter on PC - How to Play with MuMu Player.md b/spaces/congsaPfin/Manga-OCR/logs/Galaxy Attack Alien Shooter on PC - How to Play with MuMu Player.md deleted file mode 100644 index 1522391dbb497ac10028fa83202124ff6d8f28e4..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Galaxy Attack Alien Shooter on PC - How to Play with MuMu Player.md +++ /dev/null @@ -1,147 +0,0 @@ - -

          How to Download Galaxy Attack: Alien Shooter for PC

          -

          If you are a fan of classic arcade games, you might have heard of Galaxy Attack: Alien Shooter. It is a popular space shooting game that challenges you to protect Earth from alien invaders. You can collect items, upgrade your spaceship, and fight against various enemies and bosses. But did you know that you can also play this game on your PC? In this article, we will show you how to download Galaxy Attack: Alien Shooter for PC using different methods. We will also share some tips and tricks to help you enjoy the game even more.

          -

          download galaxy attack alien shooter for pc


          DOWNLOAD >>> https://urlca.com/2uOe4d



          -

          What is Galaxy Attack: Alien Shooter?

          -

          Galaxy Attack: Alien Shooter is a game developed by Abigames Studio. It was released in 2016 for Android and iOS devices, and in 2022 for web browsers. It is inspired by the classic arcade games of the 80s and 90s, such as Space Invaders and Galaga. The game features:

          -
          • More than 160 levels on various difficulties
          • Multiple modes, including single-player, multiplayer, endless, boss fight, and PvP
          • More than 60 spaceships with different designs and abilities
          • More than 10 power-ups and weapons
          • High-quality graphics and sound effects
          • Leaderboards and achievements

          You can download Galaxy Attack: Alien Shooter from the Google Play Store or the App Store for free. You can also play it online on CrazyGames. However, if you want to experience the game on a bigger screen and with better controls, you can also play it on your PC using one of the methods below.

          -

          Why Play Galaxy Attack: Alien Shooter on PC?

          -

          Playing Galaxy Attack: Alien Shooter on PC has several advantages over playing it on your mobile device. Some of them are:

          -
          • You can enjoy the game on a larger and clearer display
          • You can use your mouse and keyboard or a controller to control your spaceship more easily
          • You can avoid battery drain, overheating, and interruptions from calls or notifications
          • You can record or stream your gameplay more conveniently
          • You can sync your progress and game library across devices with your Google account

          So, how can you play Galaxy Attack: Alien Shooter on PC? There are two main ways to do it: using Windows 11's native Android emulation feature or using an Android emulator.

          -

          How to Play Galaxy Attack: Alien Shooter on PC with Windows 11

          -

          If you have a Windows 11 computer, you can use its official Android app emulation feature to play Android games on your PC. This feature lets you run Android apps without needing to install a third-party emulator. It works by having the Windows Subsystem for Android, which is a virtualization instance of Android inside Windows. By having Android running inside Windows, you can directly have Android apps, including games, running on your PC without any compatibility issues. Here are the steps to play Galaxy Attack: Alien Shooter on PC with Windows 11:

          -
          1. Make sure your PC meets the minimum requirements for Windows 11 and has the latest updates installed
          2. Go to the Microsoft Store and search for Galaxy Attack: Alien Shooter. Alternatively, you can use this link to go directly to the game's page
          3. Click on the Install button and wait for the download to finish
          4. Once the installation is complete, you can launch the game from the Start menu or the Microsoft Store app
          5. Enjoy playing Galaxy Attack: Alien Shooter on your PC with Windows 11

          Note that you may need to sign in with your Google account to access some features of the game, such as leaderboards and achievements. You can also adjust the settings of the Windows Subsystem for Android, such as memory allocation, network access, and audio output, by going to Settings > Apps > Android apps.

          -

          download galaxy attack alien shooter for pc with bluestacks
          -download galaxy attack alien shooter for pc with mumu player
          -download galaxy attack alien shooter for pc free
          -download galaxy attack alien shooter for pc windows 10
          -download galaxy attack alien shooter for pc online
          -download galaxy attack alien shooter for pc offline
          -download galaxy attack alien shooter for pc full version
          -download galaxy attack alien shooter for pc apk
          -download galaxy attack alien shooter for pc emulator
          -download galaxy attack alien shooter for pc mod
          -how to download galaxy attack alien shooter for pc
          -how to play galaxy attack alien shooter on pc
          -how to install galaxy attack alien shooter on pc
          -how to update galaxy attack alien shooter on pc
          -how to uninstall galaxy attack alien shooter on pc
          -play galaxy attack alien shooter on pc with bluestacks
          -play galaxy attack alien shooter on pc with mumu player
          -play galaxy attack alien shooter on pc free
          -play galaxy attack alien shooter on pc online
          -play galaxy attack alien shooter on pc offline
          -play galaxy attack alien shooter on pc full screen
          -play galaxy attack alien shooter on pc crazygames
          -play galaxy attack alien shooter on pc multiplayer mode
          -play galaxy attack alien shooter on pc high fps
          -play galaxy attack alien shooter on pc custom control
          -best way to download galaxy attack alien shooter for pc
          -best way to play galaxy attack alien shooter on pc
          -best emulator to download galaxy attack alien shooter for pc
          -best emulator to play galaxy attack alien shooter on pc
          -best settings to download galaxy attack alien shooter for pc
          -best settings to play galaxy attack alien shooter on pc
          -tips and tricks to download galaxy attack alien shooter for pc
          -tips and tricks to play galaxy attack alien shooter on pc
          -guide to download galaxy attack alien shooter for pc
          -guide to play galaxy attack alien shooter on pc
          -review of download galaxy attack alien shooter for pc
          -review of play galaxy attack alien shooter on pc
          -benefits of download galaxy attack alien shooter for pc
          -benefits of play galaxy attack alien shooter on pc
          -features of download galaxy attack alien shooter for pc
          -features of play galaxy attack alien shooter on pc
          -alternatives to download galaxy attack alien shooter for pc
          -alternatives to play galaxy attack alien shooter on pc
          -comparison of download galaxy attack alien shooter for pc with other games
          -comparison of play galaxy attack alien shooter on pc with other games

          -

          How to Play Galaxy Attack: Alien Shooter on PC with an Android Emulator

          -

          If you don't have Windows 11 or prefer to use a different method, you can also play Galaxy Attack: Alien Shooter on PC with an Android emulator. An Android emulator is a software that simulates an Android device on your PC, allowing you to run Android apps and games. There are many Android emulators available, but not all of them are suitable for gaming. Some of the best Android emulators for playing Galaxy Attack: Alien Shooter on PC are:

          -

          Bluestacks 5 / MSI App Player

          -

          Bluestacks 5 is one of the most popular and powerful Android emulators for gaming. It has a sleek interface, high performance, and compatibility with most Android games and apps. It also has features such as keyboard and mouse mapping, game recording, multi-instance, and cloud sync. Bluestacks 5 is compatible with Windows 7 and above and macOS 10.12 and above.

          -

          MSI App Player is a customized version of Bluestacks 5 that is optimized for MSI devices. It has the same features as Bluestacks 5, but with some additional benefits such as RGB lighting effects, exclusive promotions, and technical support. MSI App Player is compatible with Windows 7 and above.

          -

          To play Galaxy Attack: Alien Shooter on PC with Bluestacks 5 or MSI App Player, follow these steps:

          -
          1. Download and install Bluestacks 5 or MSI App Player from their official websites
          2. Launch the emulator and sign in with your Google account
          3. Go to the Google Play Store and search for Galaxy Attack: Alien Shooter. Alternatively, you can use this link to go directly to the game's page
          4. Click on the Install button and wait for the download to finish
          5. Once the installation is complete, you can launch the game from the home screen or the app drawer
          6. Enjoy playing Galaxy Attack: Alien Shooter on PC with Bluestacks 5 or MSI App Player

          Nox Player

          -

          Nox Player is another popular and reliable Android emulator for gaming. It has a simple interface, fast performance, and compatibility with most Android games and apps. It also has features such as keyboard and mouse mapping, game recording, multi-instance, and macro recorder. Nox Player is compatible with Windows XP and above and macOS 10.9 and above.

          -

          To play Galaxy Attack: Alien Shooter on PC with Nox Player, follow these steps:

          -
          1. Download and install Nox Player from its official website
          2. Launch the emulator and sign in with your Google account
          3. Go to the Google Play Store and search for Galaxy Attack: Alien Shooter. Alternatively, you can use this link to go directly to the game's page
          4. Click on the Install button and wait for the download to finish
          5. Once the installation is complete, you can launch the game from the home screen or the app drawer
          6. Enjoy playing Galaxy Attack: Alien Shooter on PC with Nox Player

          Gameloop

          -

          Gameloop is a specialized Android emulator for gaming that is developed by Tencent, the company behind PUBG Mobile and Call of Duty Mobile. It has a dedicated interface, smooth performance, and compatibility with many popular Android games. It also has features such as keyboard and mouse mapping, game recording, multi-instance, and turbo mode. Gameloop is compatible with Windows 7 and above.

          -

          To play Galaxy Attack: Alien Shooter on PC with Gameloop, follow these steps:

          -
          1. Download and install Gameloop from its official website
          2. Launch the emulator and sign in with your Google account
          3. Go to the Game Center and search for Galaxy Attack: Alien Shooter. Alternatively, you can use this link to go directly to the game's page
          4. Click on the Install button and wait for the download to finish
          5. Once the installation is complete, you can launch the game from the My Games tab
          6. Enjoy playing Galaxy Attack: Alien Shooter on PC with Gameloop

          Tips and Tricks for Playing Galaxy Attack: Alien Shooter on PC

          -

          Now that you know how to play Galaxy Attack: Alien Shooter on PC, you might want to learn some tips and tricks to improve your gameplay and performance. Here are some of them:

          -
          • Use the mouse to move your spaceship and click to fire. You can also use the arrow keys or WASD keys to move and the spacebar to fire. You can customize your controls in the settings menu of the emulator or the game
          • Collect coins, gems, and power-ups during the game. Coins and gems can be used to upgrade your spaceship, buy new spaceships, or unlock new modes. Power-ups can give you temporary boosts, such as shields, lasers, missiles, or bombs
          • Avoid getting hit by enemy bullets or asteroids. If you lose all your lives, you will have to restart the level or use a revive item. You can also watch an ad to get an extra life
          • Use different spaceships for different situations. Each spaceship has its own design, stats, and special ability. Some spaceships are faster, more durable, or more powerful than others. Some spaceships can also summon drones, fire rockets, or unleash a super blast
          • Play with friends or other players online. You can join or create a room in the multiplayer mode and cooperate or compete with other players. You can also challenge other players in the PvP mode and rank up in the leaderboard

          Conclusion

          -

          Galaxy Attack: Alien Shooter is a fun and addictive space shooting game that you can play on your mobile device or your PC. Playing it on PC has many advantages, such as a bigger screen, better controls, and more convenience. You can play it on PC using Windows 11's native Android emulation feature or using an Android emulator such as Bluestacks 5, Nox Player, or Gameloop. You can also follow some tips and tricks to enhance your gameplay and performance.

          -

          If you are ready to blast some aliens and save the Earth, download Galaxy Attack: Alien Shooter for PC today and enjoy this classic arcade game.

          -

          FAQs

          -

          Here are some frequently asked questions and answers about Galaxy Attack: Alien Shooter and playing it on PC:

          -
            -
          1. Is Galaxy Attack: Alien Shooter free to play?

            Yes, Galaxy Attack: Alien Shooter is free to play on Android, iOS, and web browsers. However, it contains ads and in-app purchases that can enhance your gameplay or remove ads.

            -
          2. Can I play Galaxy Attack: Alien Shooter offline?

            Yes, you can play Galaxy Attack: Alien Shooter offline on your mobile device or your PC. However, some features such as multiplayer mode, PvP mode, leaderboards, achievements, and cloud sync require an internet connection.

            -
          3. How do I update Galaxy Attack: Alien Shooter on PC?

            If you play Galaxy Attack: Alien Shooter on PC using Windows 11's native Android emulation feature, you can update the game from the Microsoft Store app. If you play it using an Android emulator, you can update the game from the Google Play Store app inside the emulator.

            -
          7. How do I uninstall Galaxy Attack: Alien Shooter from PC?
          8. -

            If you want to uninstall Galaxy Attack: Alien Shooter from PC, you can do so by following these steps:

            -
              -
            • If you play it using Windows 11's native Android emulation feature, go to Settings > Apps > Apps & features > Galaxy Attack: Alien Shooter > Uninstall
            • -
            • If you play it using an Android emulator, go to the emulator's home screen or app drawer > Galaxy Attack: Alien Shooter > Uninstall
            • -
            -
          5. Is Galaxy Attack: Alien Shooter safe to play on PC?

            Yes, Galaxy Attack: Alien Shooter is safe to play on PC as long as you download it from a trusted source such as the Microsoft Store app, the Google Play Store app, or the official websites of the Android emulators. You should also scan your PC regularly with antivirus software and avoid clicking on suspicious links or ads while playing the game.
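          For readers comfortable with a terminal, most Android emulators (Bluestacks, Nox, and Gameloop included) can usually be reached over adb, which gives another way to remove the game. The snippet below is only a rough sketch, not an official procedure: it assumes adb is installed and on your PATH, and the package id shown is a placeholder, not the game's confirmed identifier — run the listing helper first to find the real package name inside your emulator.

```python
import subprocess
from typing import Optional

# Placeholder package id for illustration only -- find the real one with list_packages().
GAME_PACKAGE = "com.example.galaxy.attack"


def _adb(args, serial: Optional[str] = None) -> str:
    """Run an adb command (optionally against a specific emulator serial) and return its output."""
    cmd = ["adb"] + (["-s", serial] if serial else []) + args
    return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout


def list_packages(keyword: str = "alien", serial: Optional[str] = None) -> list:
    """List installed package ids whose name contains the keyword."""
    out = _adb(["shell", "pm", "list", "packages"], serial)
    return [line.replace("package:", "").strip()
            for line in out.splitlines() if keyword in line]


def uninstall(package: str, serial: Optional[str] = None) -> None:
    """Uninstall the given package from the connected emulator or device."""
    print(_adb(["uninstall", package], serial).strip())  # prints "Success" when it works


if __name__ == "__main__":
    print(list_packages())        # confirm the real package id first
    # uninstall(GAME_PACKAGE)     # then uninstall it (left commented on purpose)
```

          Emulators that expose adb over TCP may first need `adb connect 127.0.0.1:<port>`; the port differs per emulator, so check its documentation.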
            \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Garena Blockman GO The Best Free-to-Play Sandbox Game for Mobile.md b/spaces/congsaPfin/Manga-OCR/logs/Garena Blockman GO The Best Free-to-Play Sandbox Game for Mobile.md deleted file mode 100644 index 3345be67971044979c8cb16e0846bbfb5cbde099..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Garena Blockman GO The Best Free-to-Play Sandbox Game for Mobile.md +++ /dev/null @@ -1,129 +0,0 @@ -
            -

            Blockman Go Download Garena: A Guide for Sandbox Game Lovers

            -

            If you are a fan of sandbox games, you might have heard of Blockman Go, a free app that lets you play various block style minigames with other players from all over the world. But did you know that there is also a version of Blockman Go that is developed by Garena, one of the leading game publishers in Southeast Asia?

            -

            blockman go download garena


            Download Filehttps://urlca.com/2uO9Vi



            -

            In this article, we will show you how to download and install Blockman Go download Garena on your device, how to create an account and log in, how to play and enjoy its features, how to earn and use gold and gcubes, how to get more out of it with VIP subscription, how to compare it with Minecraft, and how to get help and support if needed. By the end of this article, you will be ready to join the fun and adventure of Blockman Go download Garena.

            -

            How to Download and Install Blockman Go on Your Device

            -

            The first step to play Blockman Go download Garena is to download and install it on your device. Here are the steps you need to follow:

            -
            1. Go to the official website of Blockman Go download Garena at (^1^) or go to the app store of your choice (Google Play Store or Apple App Store).
            2. Choose your preferred platform (Android or iOS) and language (English, Indonesian, Thai, Vietnamese, etc.).
            3. Follow the instructions on the screen and wait for the installation to complete. The app size is about 200 MB, so make sure you have enough space on your device (see the sketch below for one way to check this from a computer).
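            Since the download is roughly 200 MB, it can be worth confirming free space first. On the phone itself, Settings > Storage is enough; the snippet below is merely an optional sketch for doing the same check from a computer, assuming USB debugging is enabled and adb is available (the exact `df` column layout can vary slightly between Android versions).

```python
import subprocess

REQUIRED_MB = 200  # approximate size of the Blockman Go download mentioned above


def free_data_mb() -> int:
    """Return the free space, in MB, on the device's /data partition via adb."""
    out = subprocess.run(["adb", "shell", "df", "/data"],
                         capture_output=True, text=True, check=True).stdout
    # Typical toybox df output: Filesystem 1K-blocks Used Available Use% Mounted on
    last_row = out.strip().splitlines()[-1].split()
    return int(last_row[3]) // 1024  # "Available" column, reported in 1K blocks


if __name__ == "__main__":
    free = free_data_mb()
    verdict = "enough" if free >= REQUIRED_MB else "not enough"
    print(f"{free} MB free on /data -> {verdict} for the download")
```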


            How to Create an Account and Log in to Blockman Go

            -

            The next step to play Blockman Go download Garena is to create an account and log in to the app. Here are the steps you need to follow:

            -
            1. Open the app and tap on the register button at the bottom of the screen.
            2. Enter your username, password, and email address. You can also use your Facebook or Google account to sign up.
            3. Verify your email by clicking on the link sent to your inbox. If you don't receive the email, check your spam folder or resend it.
            4. Log in with your credentials or use the quick login option. You can also change your password or reset it if you forget it.

            How to Play Blockman Go and Enjoy Its Features

            -

            Now that you have downloaded and installed Blockman Go download Garena and created an account and logged in, you are ready to play and enjoy its features. Here are the steps you need to follow:

            -


            1. Choose a minigame from the main menu or create your own. There are dozens of minigames to choose from, such as Bed Wars, Sky Wars, Murder Mystery, Parkour, Build Battle, and more. You can also create your own minigame by using the editor mode and setting your own rules and maps.
            2. Join a room or invite your friends to play with you. You can join a random room or search for a specific one by using the filters. You can also invite your friends to play with you by sending them a code or adding them as friends in the app.
            3. Customize your avatar, chat with other players, and have fun. You can change your appearance by using different skins, outfits, accessories, and hairstyles. You can also chat with other players by using text or voice messages. You can also use emojis, stickers, and gestures to express yourself.

            How to Earn Gold and Gcubes in Blockman Go

            -

            One of the features of Blockman Go download Garena is that you can earn gold and gcubes, which are the in-game currencies that you can use to buy items and unlock more features. Here are the steps you need to follow:

            -
            1. Play minigames and complete daily tasks. You can earn gold by playing minigames and winning them. You can also earn gold by completing daily tasks such as logging in, playing for a certain time, inviting friends, etc.
            2. Watch ads or participate in events. You can earn gcubes by watching ads or participating in events such as lucky draws, giveaways, surveys, etc.
            3. Buy gcubes with real money or use a gstar code. You can also buy gcubes with real money by using various payment methods such as credit cards, PayPal, Google Play balance, etc. You can also use a gstar code, which is a code that you can get from other sources such as YouTube videos, social media posts, etc.

            How to Use Gold and Gcubes in Blockman Go

            -

            Now that you have earned gold and gcubes in Blockman Go download Garena, you might be wondering how to use them. Here are the steps you need to follow:

            -
            1. Go to the shop and browse the items. You can find various items in the shop such as skins, outfits, accessories, hairstyles, pets, mounts, weapons, etc.
            2. Select the item you want and tap on the buy button. You can see the price of the item in gold or gcubes. Some items are only available for gcubes.
            3. Enjoy your new item and show it off to others. You can equip your item by going to the wardrobe and selecting it. You can also see how it looks on your avatar by using the preview option. You can also show it off to others by playing minigames or chatting with them.

            How to Get More Out of Blockman Go with VIP Subscription

            -

            If you want to get more out of Blockman Go download Garena, you might want to consider getting a VIP subscription. A VIP subscription is a paid service that gives you access to exclusive items, discounts, and more benefits. Here are the steps you need to follow:

            -
              -
            1. Go to the VIP page and choose a plan. You can find the VIP page by tapping on the crown icon at the top of the screen. You can choose from three plans: monthly ($4.99), quarterly ($12.99), or yearly ($39.99).
            2. Pay with your preferred method and confirm your purchase. You can pay with various payment methods such as credit cards, PayPal, or Google Play balance.
            3. Enjoy your VIP perks, such as exclusive items, discounts, and other benefits.

            How to Compare Blockman Go with Minecraft

            If you are trying to decide between Blockman Go download Garena and Minecraft, here are the steps you need to follow:

            1. Compare the gameplay of each game. Blockman Go download Garena is more casual: you don't need to craft or gather resources, you just need to join or create a minigame and play. Minecraft requires more skills and knowledge to play, especially in survival mode.
            2. Consider the pros and cons of each game. Both games have their own advantages and disadvantages, depending on your preferences and needs. For example:
              • Blockman Go download Garena is more suitable for casual players who want to have fun and socialize with other players. It is also more affordable and accessible, as it is free to download and play, and it supports multiple languages and platforms.
              • Minecraft is more suitable for hardcore players who want to challenge themselves and express their creativity. It is also more immersive and diverse, as it has infinite worlds and possibilities, and it supports mods and servers.
            3. Decide which game suits your preferences and needs better. There is no definitive answer to which game is better, as it depends on your personal taste and goals. You can try both games and see which one you enjoy more, or you can play both games depending on your mood and situation. You can also play both games with your friends and compare your experiences.

            How to Get Help and Support for Blockman Go

            -

            If you encounter any problems or issues while playing Blockman Go download Garena, you might need some help and support. Here are the steps you need to follow:

            -
            1. Go to the settings and tap on the help button. You can find the settings by tapping on the gear icon at the top of the screen. You can find the help button by scrolling down the settings menu.
            2. Read the FAQs or contact the customer service. You can find the FAQs by tapping on the question mark icon at the top of the help page. You can find answers to common questions such as how to change your password, how to report a bug, how to get a refund, etc. You can also contact the customer service by tapping on the chat icon at the bottom of the help page. You can send a message to the customer service team and they will reply to you as soon as possible.
            3. Provide feedback or report bugs if needed. You can also provide feedback or report bugs by tapping on the feedback button at the bottom of the settings menu. You can rate the app, write a review, suggest an idea, or report a bug. You can also attach a screenshot or a video if needed.

            Conclusion

            -

            In conclusion, Blockman Go download Garena is a fun and exciting sandbox game that you should try if you love block style minigames. You can download and install it on your device, create an account and log in, play and enjoy its features, earn and use gold and gcubes, get more out of it with VIP subscription, compare it with Minecraft, and get help and support if needed. By following this guide, you will be able to join the fun and adventure of Blockman Go download Garena.

            -

            So what are you waiting for? Download Blockman Go download Garena today and start playing with your friends!

            -

            FAQs

            -

            Here are some frequently asked questions about Blockman Go download Garena:

            -
            • Q: Is Blockman Go download Garena safe to play?
            • A: Yes, Blockman Go download Garena is safe to play. It has been verified by Google Play Protect and has no viruses or malware. It also has a privacy policy that protects your personal information.
            • Q: Is Blockman Go download Garena compatible with my device?
            • A: Blockman Go download Garena is compatible with most devices that run Android 4.1 or higher or iOS 9.0 or higher. However, some devices may have compatibility issues or performance problems due to different specifications.
            • Q: How can I update Blockman Go download Garena?
            • A: You can update Blockman Go download Garena by going to the app store of your choice (Google Play Store or Apple App Store) and tapping on the update button. You can also enable automatic updates in your settings.
            • Q: How can I delete Blockman Go download Garena?
            • A: You can delete Blockman Go download Garena by going to your device settings and tapping on the uninstall button. You can also delete it by long-pressing on the app icon and dragging it to the trash bin.
            • Q: How can I contact Blockman Go download Garena?
            • A: You can contact Blockman Go download Garena by using the following methods:
              - Email: support@blockmango.net
              - Phone: +65 3158 0888
              - Facebook: https://www.facebook.com/BlockmanGoGarena
              - Instagram: https://www.instagram.com/blockmango_garena
              - YouTube: https://www.youtube.com/channel/UCj4Y6Zz0XJw1Q9w3lYyf7gA

              I hope you found this article helpful and informative. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy gaming!

              \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Subway Surfers Apk 2022 Dicas e truques para conseguir dinheiro infinito.md b/spaces/congsaPfin/Manga-OCR/logs/Subway Surfers Apk 2022 Dicas e truques para conseguir dinheiro infinito.md deleted file mode 100644 index a168ddb253fc335ad86e81ea40a92d509f425259..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Subway Surfers Apk 2022 Dicas e truques para conseguir dinheiro infinito.md +++ /dev/null @@ -1,111 +0,0 @@ - -

              Subway Surfers APK 2022 Dinheiro Infinito: Como Baixar e Jogar o Jogo Mais Viciante do Mundo

              -

              Você já conhece o Subway Surfers? Se não conhece, está perdendo a oportunidade de se divertir com um dos jogos mais populares e viciantes do mundo. E se já conhece, vai ficar ainda mais animado com a versão Subway Surfers APK 2022 Dinheiro Infinito, que oferece recursos ilimitados para você aproveitar ao máximo o jogo.

              -

              subway surfers apk 2022 dinheiro infinito


              Download ❤❤❤ https://urlca.com/2uO7Zh



              -

              Neste artigo, vamos explicar o que é Subway Surfers, o que é Subway Surfers APK 2022 Dinheiro Infinito, como baixar e instalar essa versão no seu dispositivo Android e como jogar esse jogo incrível. Fique ligado e prepare-se para embarcar nessa aventura!

              -

              O que é Subway Surfers?

              -

              Subway Surfers é um jogo de corrida infinita desenvolvido pela SYBO Games e pela Kiloo Games. O jogo foi lançado em 2012 e desde então se tornou um sucesso mundial, com mais de 2 bilhões de downloads na Google Play Store. O jogo é atualizado regularmente com novos cenários, personagens e desafios, mantendo os jogadores sempre interessados e engajados.

              -

              Um jogo de corrida infinita com muita ação e diversão

              -

              O objetivo do jogo é simples: correr o máximo que puder pelos trilhos do metrô, desviando dos obstáculos, como trens, placas, barreiras e outros objetos. Você também precisa escapar da perseguição do inspetor e do seu cachorro, que estão furiosos com as suas pichações nas paredes. Quanto mais tempo você conseguir correr, mais pontos você vai ganhar.

              -

              Mas não pense que o jogo é monótono. Pelo contrário, ele é cheio de ação e diversão. Você pode usar vários itens especiais para turbinar a sua corrida, como jetpacks, hoverboards, ímãs de moedas, multiplicadores de pontos e muito mais. Você também pode coletar moedas e chaves pelo caminho, que servem para comprar e melhorar os itens, além de reviver o seu personagem em caso de colisão.

              -

              Um jogo com gráficos incríveis e personagens carismáticos

              -

              Subway Surfers não é apenas um jogo divertido, mas também um jogo bonito. Os gráficos são coloridos e detalhados, com cenários que retratam diferentes cidades do mundo, como Nova York, Paris, Tóquio, Cairo e muitas outras. Cada cidade tem o seu próprio estilo e elementos característicos, como monumentos, paisagens e cultura. Você vai se sentir viajando pelo mundo enquanto joga.

              -

              Além disso, o jogo conta com personagens carismáticos e variados, que você pode escolher para jogar. O personagem principal é Jake, um garoto rebelde e aventureiro que adora pichar as paredes do metrô. Mas você também pode jogar com outros personagens, como Tricky, Fresh, Spike, Yutani e muitos outros. Cada personagem tem o seu próprio visual e personalidade, e você pode personalizá-los com roupas e acessórios diferentes.

              -

              Um jogo com vários modos e desafios para explorar

              -

              Subway Surfers não é um jogo que enjoa facilmente. Isso porque ele oferece vários modos e desafios para você explorar e se divertir. Além do modo normal, onde você corre sem parar pelos trilhos do metrô, você também pode participar de eventos especiais que acontecem periodicamente, como o Halloween, o Natal, o Ano Novo Chinês e outros. Esses eventos trazem novos cenários, personagens, itens e recompensas exclusivas para você aproveitar.

              -

              Você também pode completar missões diárias e semanais que te desafiam a cumprir certos objetivos no jogo, como coletar um número específico de moedas ou letras, fazer certas manobras ou usar determinados itens. Ao completar essas missões, você ganha prêmios como moedas, chaves, caixas surpresa e muito mais.

              -


              -

              E se você gosta de competir com os seus amigos ou com outros jogadores do mundo todo, você pode participar do modo multiplayer online do jogo. Nesse modo, você pode ver o ranking dos melhores jogadores do mundo ou dos seus amigos no Facebook, e tentar superá-los na corrida. Você também pode enviar e receber presentes dos seus amigos, como moedas ou hoverboards.

              -

              O que é Subway Surfers APK 2022 Dinheiro Infinito?

              -

              Agora que você já sabe o que é Subway Surfers e porque ele é tão divertido e viciante , você deve estar se perguntando o que é Subway Surfers APK 2022 Dinheiro Infinito. Bem, essa é uma versão modificada do jogo original que oferece recursos ilimitados para você jogar sem limites. Isso mesmo, você pode ter dinheiro infinito, chaves infinitas, itens infinitos e muito mais nessa versão. Veja só o que você pode fazer com Subway Surfers APK 2022 Dinheiro Infinito:

              -

              Uma versão modificada do jogo original que oferece recursos ilimitados

              -

              Subway Surfers APK 2022 Dinheiro Infinito é um arquivo APK que você pode baixar e instalar no seu dispositivo Android para jogar Subway Surfers com recursos ilimitados. Um arquivo APK é um pacote de aplicativo que contém todos os arquivos necessários para rodar um aplicativo no seu dispositivo. Normalmente, você baixa os aplicativos da Google Play Store, mas também pode baixar arquivos APK de outras fontes, como sites ou links.

              -

              A vantagem de baixar Subway Surfers APK 2022 Dinheiro Infinito é que você pode ter acesso a recursos que não estão disponíveis na versão original do jogo. Por exemplo, você pode ter dinheiro infinito, que serve para comprar e melhorar os itens especiais do jogo, como jetpacks, hoverboards, ímãs de moedas e outros. Você também pode ter chaves infinitas, que servem para reviver o seu personagem em caso de colisão ou para abrir caixas surpresa.

              -

              Uma versão que permite desbloquear todos os personagens, pranchas e itens especiais

              -

              Outra vantagem de baixar Subway Surfers APK 2022 Dinheiro Infinito é que você pode desbloquear todos os personagens, pranchas e itens especiais do jogo sem precisar gastar dinheiro ou tempo. Você pode escolher qualquer personagem para jogar, desde o Jake até os personagens exclusivos dos eventos especiais. Você também pode escolher qualquer prancha para correr, desde as mais simples até as mais estilosas e poderosas. E você ainda pode usar todos os itens especiais do jogo sem limites, como jetpacks, hoverboards, ímãs de moedas e outros.

              -

              Uma versão que oferece mais velocidade, adrenalina e emoção

              -

              E se você acha que Subway Surfers já é um jogo rápido e emocionante, espere até jogar Subway Surfers APK 2022 Dinheiro Infinito. Nessa versão, você pode ter mais velocidade, adrenalina e emoção na sua corrida. Você pode usar o recurso de aceleração para aumentar a sua velocidade e deixar o inspetor e o seu cachorro para trás. Você também pode usar o recurso de salto duplo para saltar mais alto e evitar os obstáculos com mais facilidade. E você ainda pode usar o recurso de invisibilidade para passar pelos obstáculos sem colidir com eles.

              -

              Como baixar e instalar Subway Surfers APK 2022 Dinheiro Infinito?

              -

              Agora que você já sabe o que é Subway Surfers APK 2022 Dinheiro Infinito e porque ele é tão incrível , você deve estar ansioso para baixar e instalar essa versão no seu dispositivo Android. Mas como fazer isso? É simples, basta seguir os passos abaixo:

              -

              Os requisitos mínimos para rodar o jogo no seu dispositivo

              -

              Antes de baixar Subway Surfers APK 2022 Dinheiro Infinito, você precisa verificar se o seu dispositivo Android atende aos requisitos mínimos para rodar o jogo. Segundo os desenvolvedores, o jogo requer um dispositivo com Android 4.4 ou superior, 1 GB de RAM e 100 MB de espaço livre na memória interna ou no cartão SD. Se o seu dispositivo não atender a esses requisitos, você pode ter problemas de desempenho ou compatibilidade com o jogo.

              -

              Os passos simples para fazer o download e a instalação do arquivo APK

              -

              Depois de verificar os requisitos mínimos, você pode fazer o download e a instalação do arquivo APK de Subway Surfers APK 2022 Dinheiro Infinito. Para isso, você precisa seguir os passos abaixo:

              -
                -
              1. Acesse um site confiável que ofereça o arquivo APK de Subway Surfers APK 2022 Dinheiro Infinito. Você pode pesquisar no Google ou usar um dos links que vamos deixar no final deste artigo.
              2. -
              3. Clique no botão de download e aguarde o arquivo APK ser baixado no seu dispositivo. O arquivo APK tem cerca de 150 MB, então pode demorar alguns minutos dependendo da sua conexão.
              4. -
              5. Quando o download terminar, abra o arquivo APK e clique em instalar. Você pode precisar habilitar a opção de instalar aplicativos de fontes desconhecidas nas configurações do seu dispositivo. Isso é necessário porque o arquivo APK não vem da Google Play Store, mas de outra fonte.
              6. -
              7. Aguarde a instalação ser concluída e pronto! Você já pode abrir o jogo e aproveitar os recursos ilimitados.
              8. -
              -

              As precauções de segurança para evitar vírus e malware

              -

              Apesar de Subway Surfers APK 2022 Dinheiro Infinito ser uma versão segura e confiável do jogo, você precisa tomar algumas precauções de segurança para evitar vírus e malware no seu dispositivo. Isso porque existem muitos sites falsos ou maliciosos que podem oferecer arquivos APK infectados ou danificados, que podem prejudicar o seu dispositivo ou roubar os seus dados. Por isso, você precisa seguir as dicas abaixo:

              -
                -
              • Baixe o arquivo APK apenas de sites confiáveis e verificados. Você pode conferir as avaliações e comentários dos usuários ou usar um antivírus para verificar se o site é seguro.
              • -
              • Não clique em links suspeitos ou anúncios que prometem recursos extras ou vantagens no jogo. Esses links podem te levar para sites perigosos ou fazer você baixar arquivos indesejados.
              • -
              • Não conceda permissões desnecessárias ao jogo. O jogo só precisa de permissões básicas para acessar a internet, a memória e o som do seu dispositivo. Se ele pedir permissões estranhas, como acessar os seus contatos, as suas mensagens ou a sua câmera, recuse e desinstale o jogo imediatamente.
              • -
              -

              Como jogar Subway Surfers APK 2022 Dinheiro Infinito?

              -

              Agora que você já baixou e instalou Subway Surfers APK 2022 Dinheiro Infinito no seu dispositivo Android, você está pronto para jogar esse jogo incrível. Mas como jogar? É fácil, basta seguir as dicas abaixo:

              -

              Os controles básicos para correr, saltar e deslizar pelo cenário

              -

              O jogo é muito simples de jogar. Você só precisa usar os gestos na tela do seu dispositivo para controlar o seu personagem. Veja como:

              -
                -
              • Para correr pelo cenário, basta deslizar o dedo para a esquerda ou para a direita na tela. Isso vai fazer o seu personagem mudar de trilho.
              • -
              • Para saltar sobre os obstáculos, basta deslizar o dedo para cima na tela. Isso vai fazer o seu personagem pular.
              • -
              • Para deslizar por baixo dos obstáculos, basta deslizar o dedo para baixo na tela. Isso vai fazer o seu personagem deslizar.
              • -
              -

              Esses são os controles básicos para correr, saltar e deslizar pelo cenário. Mas você também pode usar outros gestos para usar os itens especiais do jogo, como veremos a seguir.

              -

              As dicas e truques para escapar da polícia e coletar moedas e chaves

              -

              Para jogar Subway Surfers APK 2022 Dinheiro Infinito, você precisa ter algumas estratégias para escapar da polícia e coletar moedas e chaves. Veja algumas dicas e truques que podem te ajudar:

              -
                -
              • Use os itens especiais para turbinar a sua corrida. Você pode usar o jetpack para voar pelo ar e coletar moedas sem obstáculos, o hoverboard para correr com mais segurança e estilo, o ímã de moedas para atrair todas as moedas pelo caminho, o multiplicador de pontos para aumentar a sua pontuação e muito mais. Você pode ativar esses itens tocando duas vezes na tela ou usando os ícones na parte inferior da tela.
              • -
              • Use as chaves para reviver o seu personagem em caso de colisão. Se você bater em algum obstáculo, você pode usar uma chave para continuar a corrida sem perder a sua pontuação. Mas cuidado, cada vez que você usar uma chave, o preço dela vai aumentar. Você pode coletar chaves pelo cenário ou comprar com dinheiro infinito.
              • -
              • Use as moedas para comprar e melhorar os itens especiais do jogo. Você pode usar as moedas que você coleta pelo cenário ou que você compra com dinheiro infinito para comprar e melhorar os itens especiais do jogo. Por exemplo, você pode comprar novos hoverboards ou melhorar a duração dos jetpacks, dos ímãs de moedas e dos outros itens.
              • -
              -

              As formas de usar os recursos ilimitados para personalizar o seu jogo

              -

              Uma das vantagens de jogar Subway Surfers APK 2022 Dinheiro Infinito é que você pode usar os recursos ilimitados para personalizar o seu jogo do jeito que você quiser. Veja algumas formas de fazer isso:

              -
                -
              • Você pode desbloquear todos os personagens e pranchas do jogo sem precisar gastar dinheiro ou tempo. Você pode escolher qualquer personagem ou prancha que você quiser, desde os mais simples até os mais exclusivos e poderosos.
              • -
              • Você pode personalizar os personagens e as pranchas com roupas e acessórios diferentes. Você pode mudar o visual dos personagens com chapéus, óculos, tênis e outros itens. Você também pode mudar o estilo das pranchas com cores, adesivos e outros detalhes.
              • -
              • Você pode usar os recursos de aceleração, salto duplo e invisibilidade para tornar o seu jogo mais rápido, fácil e divertido. Você pode ativar esses recursos tocando nos ícones na parte superior da tela ou usando os gestos na tela.
              • -
              -

              Conclusão

              -

              Subway Surfers é um jogo de corrida infinita que conquistou milhões de fãs pelo mundo todo. O jogo é divertido, bonito e variado, com cenários que retratam diferentes cidades do mundo, personagens carismáticos e variados, itens especiais que turbinam a corrida, modos e desafios que mantêm o interesse e a diversão dos jogadores.

              -

              Mas se você quer ter uma experiência ainda melhor com esse jogo, você precisa baixar Subway Surfers APK 2022 Dinheiro Infinito, uma versão modificada do jogo original que oferece recursos ilimitados para você jogar sem limites. Com essa versão, você pode ter dinheiro infinito, chaves infinitas, itens infinitos e muito mais. Você também pode desbloquear todos os personagens, pranchas e itens especiais do jogo sem precisar gastar dinheiro ou tempo. E você ainda pode usar recursos de aceleração, salto duplo e invisibilidade para tornar o seu jogo mais rápido, fácil e divertido.

              -

              Para baixar Subway Surfers APK 2022 Dinheiro Infinito, basta seguir os passos simples que explicamos neste artigo: verificar os requisitos mínimos para rodar o jogo no seu dispositivo, fazer o download e a instalação do arquivo APK de um site confiável e tomar as precauções de segurança para evitar vírus e malware. Depois, é só abrir o jogo e se divertir com os recursos ilimitados. Esperamos que este artigo tenha sido útil para você e que você tenha gostado de conhecer Subway Surfers APK 2022 Dinheiro Infinito. Se você tiver alguma dúvida ou sugestão, deixe um comentário abaixo. E se você gostou deste artigo, compartilhe com os seus amigos que também são fãs de Subway Surfers. Obrigado pela sua atenção e até a próxima!

              FAQs

              -

              Aqui estão algumas perguntas frequentes sobre Subway Surfers APK 2022 Dinheiro Infinito:

              -

              O que é Subway Surfers?

              -

              Subway Surfers é um jogo de corrida infinita desenvolvido pela SYBO Games e pela Kiloo Games. O jogo foi lançado em 2012 e desde então se tornou um sucesso mundial, com mais de 2 bilhões de downloads na Google Play Store. O jogo é atualizado regularmente com novos cenários, personagens e desafios, mantendo os jogadores sempre interessados e engajados.

              -

              O que é Subway Surfers APK 2022 Dinheiro Infinito?

              -

              Subway Surfers APK 2022 Dinheiro Infinito é uma versão modificada do jogo original que oferece recursos ilimitados para você jogar sem limites. Isso mesmo, você pode ter dinheiro infinito, chaves infinitas, itens infinitos e muito mais nessa versão. Você também pode desbloquear todos os personagens, pranchas e itens especiais do jogo sem precisar gastar dinheiro ou tempo. E você ainda pode usar recursos de aceleração, salto duplo e invisibilidade para tornar o seu jogo mais rápido, fácil e divertido.

              -

              Como baixar Subway Surfers APK 2022 Dinheiro Infinito?

              -

              Para baixar Subway Surfers APK 2022 Dinheiro Infinito, basta seguir os passos simples que explicamos neste artigo: verificar os requisitos mínimos para rodar o jogo no seu dispositivo, fazer o download e a instalação do arquivo APK de um site confiável e tomar as precauções de segurança para evitar vírus e malware. Depois, é só abrir o jogo e se divertir com os recursos ilimitados.

              -

              Subway Surfers APK 2022 Dinheiro Infinito é seguro?

              -

              Sim, Subway Surfers APK 2022 Dinheiro Infinito é seguro, desde que você baixe o arquivo APK de um site confiável e verificado. Você também precisa tomar algumas precauções de segurança para evitar vírus e malware no seu dispositivo, como não clicar em links suspeitos ou anúncios que prometem recursos extras ou vantagens no jogo, não conceder permissões desnecessárias ao jogo e usar um antivírus para verificar se o site é seguro.

              -

              Subway Surfers APK 2022 Dinheiro Infinito funciona em qualquer dispositivo Android?

              -

              Não, Subway Surfers APK 2022 Dinheiro Infinito requer um dispositivo com Android 4.4 ou superior, 1 GB de RAM e 100 MB de espaço livre na memória interna ou no cartão SD. Se o seu dispositivo não atender a esses requisitos, você pode ter problemas de desempenho ou compatibilidade com o jogo.

              \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/WhatsApp Messenger for Huawei Everything You Need to Know About the Popular Messaging App.md b/spaces/congsaPfin/Manga-OCR/logs/WhatsApp Messenger for Huawei Everything You Need to Know About the Popular Messaging App.md deleted file mode 100644 index 8a06f810003df22db3c0b09d819b0a4fc971e0ba..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/WhatsApp Messenger for Huawei Everything You Need to Know About the Popular Messaging App.md +++ /dev/null @@ -1,120 +0,0 @@ -
              -

              How to Download WhatsApp Messenger for Huawei

              -

              WhatsApp Messenger is one of the most popular and widely used messaging apps in the world. It allows you to send text messages, voice messages, photos, videos, documents, and more to your contacts for free, as long as you have an internet connection. You can also make voice and video calls, create group chats, and enjoy various features and customizations that make your communication more fun and convenient.

              -

              If you have a Huawei device, you might be wondering how to download WhatsApp Messenger on your phone or tablet. In this article, we will show you two easy ways to do that, using AppGallery and Petal Search. We will also explain what WhatsApp Messenger is, why you need it, and what are its features and benefits.

              -

              whatsapp messenger download for huawei


              Download ✵✵✵ https://urlca.com/2uO6Ph



              -

              What is WhatsApp Messenger and Why You Need It

              -

              WhatsApp Messenger is a free, multiplatform messaging app that was launched in 2009 by two former Yahoo employees, Brian Acton and Jan Koum. The app was acquired by Facebook in 2014 for $19 billion, making it one of the most expensive acquisitions in history.

              -

              WhatsApp Messenger is different from other messaging apps because it uses your phone number as your identity, rather than a username or email address. This makes it easier to find and connect with your contacts who also use WhatsApp. You don't need to create an account or remember a password to use WhatsApp.

              -

              Another advantage of WhatsApp Messenger is that it uses end-to-end encryption, which means that only you and the person you are communicating with can read or listen to your messages and calls. No one else, not even WhatsApp or Facebook, can access or interfere with your conversations.

              -


              -

              WhatsApp Features and Benefits

              -

              WhatsApp Messenger has many features and benefits that make it a great choice for personal and professional communication. Here are some of them:

              -
              • Voice and video calls: You can make free voice and video calls to anyone who has WhatsApp on their device, regardless of where they are in the world. You can also make group calls with up to eight participants.
              • Voice messaging: You can record and send voice messages to individual chats or group chats. This is useful when you want to say something quickly or when typing is not convenient.
              • Secure messaging: You can send text messages, photos, videos, documents, contacts, locations, stickers, GIFs, emojis, and more to your contacts using end-to-end encryption. You can also delete messages for yourself or for everyone within a certain time limit.
              • Group chats: You can create group chats with up to 256 people and share messages, media, and documents with them. You can also mute notifications, assign group admins, change group name and icon, and more.
              • Status updates: You can share your thoughts, feelings, activities, or anything else with your contacts using status updates. You can post text, photos, videos, or GIFs that disappear after 24 hours. You can also view who has seen your status updates and reply to them privately.
              • WhatsApp Web and Desktop: You can use WhatsApp on your computer by scanning a QR code from your phone. This way, you can access your chats and calls from any device without missing anything.
              • WhatsApp Business: You can use WhatsApp Business if you have a small business or a personal brand. This app allows you to create a business profile, showcase your products or services, communicate with your customers, and manage your orders and payments.

              WhatsApp Requirements and Compatibility

              -

              To use WhatsApp Messenger, you need a smartphone or tablet that runs on Android 4.1 or higher, iOS 10 or higher, or KaiOS 2.5.1 or higher. You also need a stable internet connection, either Wi-Fi or cellular data. You can use WhatsApp on multiple devices, but you can only have one phone number registered to one account at a time.
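              You can see a device's Android version under Settings > About phone. If you prefer to check it from a computer, the snippet below is one hedged way to do it with adb (assuming USB debugging is enabled and adb is installed); Android 4.1 corresponds to API level 16, the minimum mentioned above.

```python
import subprocess


def getprop(name: str) -> str:
    """Read a single Android system property from the connected device via adb."""
    out = subprocess.run(["adb", "shell", "getprop", name],
                         capture_output=True, text=True, check=True).stdout
    return out.strip()


if __name__ == "__main__":
    release = getprop("ro.build.version.release")   # e.g. "12"
    api = int(getprop("ro.build.version.sdk"))      # e.g. 31
    ok = "meets" if api >= 16 else "does not meet"  # Android 4.1 == API 16
    print(f"Android {release} (API {api}) {ok} WhatsApp's minimum requirement")
```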

              -

              If you have a Huawei device, you can download WhatsApp Messenger from AppGallery or Petal Search. AppGallery is Huawei's official app store, while Petal Search is Huawei's search engine that helps you find and download apps from various sources. Both of them are safe and reliable ways to get WhatsApp on your Huawei device.

              -

              How to Download WhatsApp Messenger from AppGallery

              -

              AppGallery is Huawei's app store that offers a wide range of apps and games for Huawei users. It also provides security updates, app recommendations, and exclusive benefits for Huawei users. You can download WhatsApp Messenger from AppGallery by following these steps:

              -

              Step 1: Open AppGallery on Your Huawei Device

              -

              AppGallery is pre-installed on most Huawei devices, so you can find it on your home screen or app drawer. If you don't have it, you can download it from the official website. Once you open AppGallery, you will see the main page with different categories and featured apps.

              -

              Step 2: Search for WhatsApp Messenger

              -

              On the main page of AppGallery, tap on the search icon at the top right corner. Then, type "WhatsApp Messenger" in the search box and tap on the magnifying glass icon. You will see the WhatsApp Messenger app in the search results.

              -

              Step 3: Tap on Install and Accept Permissions

              -

              Tap on the WhatsApp Messenger app to open its details page. You will see the app description, screenshots, ratings, reviews, and more information. To download the app, tap on the Install button at the bottom of the page. You will be asked to accept some permissions that WhatsApp needs to function properly, such as access to your contacts, camera, microphone, storage, etc. Tap on Accept to grant these permissions and start the installation process.

              -

              Step 4: Verify Your Phone Number and Set Up Your Profile

              -

              After the installation is complete, you can open WhatsApp Messenger from your home screen or app drawer. You will be greeted by a welcome screen that asks you to agree to the terms of service and privacy policy. Tap on Agree and Continue to proceed. Then, you will be asked to enter your phone number and verify it with a code that will be sent to you via SMS. Enter the code and tap on Next to verify your phone number. You can also choose to restore your chat history from a backup if you have one.

              -

              Next, you will be asked to set up your profile by entering your name and choosing a profile picture. You can also change these later in the settings. Tap on Next to finish the setup. Congratulations! You have successfully downloaded and installed WhatsApp Messenger from AppGallery.

              -

              How to Download WhatsApp Messenger from Petal Search

              -

              Petal Search is Huawei's search engine that helps you find and download apps from various sources, such as third-party app stores, websites, or APK files. It also provides news, videos, images, shopping, travel, and other information that you might need. You can download WhatsApp Messenger from Petal Search by following these steps:

              -

              What is Petal Search and How It Works

              -

              Petal Search is a search engine that uses artificial intelligence and big data to provide relevant and personalized results for users. It also integrates with Huawei Mobile Services (HMS), which is Huawei's alternative to Google Mobile Services (GMS). This means that Petal Search can access some of the features and functions that GMS provides, such as location services, push notifications, cloud storage, etc.

              -

              Petal Search can help you find and download apps that are not available on AppGallery or other app stores. It scans various sources and provides you with safe and reliable links to download the apps you want. You can also update your apps through Petal Search if they are not updated automatically by AppGallery or other app stores.

              -

              Step 1: Download Petal Search from AppGallery or Browser

              -

              Petal Search is available on AppGallery as well as on its official website. You can download it from either source by following these steps:

              -
              • If you want to download Petal Search from AppGallery, open AppGallery on your Huawei device and search for Petal Search. Tap on the Petal Search app and then tap on Install. Accept the permissions and wait for the installation to finish.
              • If you want to download Petal Search from the browser, open your browser on your Huawei device and go to the official website of Petal Search. Tap on the Download button and then tap on OK to confirm. Open the downloaded file and tap on Install. Accept the permissions and wait for the installation to finish.

              After you have downloaded Petal Search, you can find it on your home screen or app drawer. You can also add it as a widget on your home screen for easier access.

              -

              Step 2: Open Petal Search and Search for WhatsApp Messenger

              -

              Open Petal Search on your Huawei device and you will see the main page with a search box and various categories. You can also swipe left or right to see more categories and features. To search for WhatsApp Messenger, type "WhatsApp Messenger" in the search box and tap on the magnifying glass icon. You will see the WhatsApp Messenger app in the search results, along with other related apps and information.

              -

              Step 3: Tap on Install and Accept Permissions

              -

              Tap on the WhatsApp Messenger app to open its details page. You will see the app description, screenshots, ratings, reviews, and more information. You will also see a list of sources where you can download the app from, such as APKPure, APKMirror, Uptodown, etc. Choose one of the sources that you trust and tap on Install. You will be redirected to the source website where you can download the APK file of WhatsApp Messenger. Tap on Download and then tap on OK to confirm.

              -

              After the download is complete, open the downloaded file and tap on Install. You will be asked to accept some permissions that WhatsApp needs to function properly, such as access to your contacts, camera, microphone, storage, etc. Tap on Accept to grant these permissions and start the installation process.
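              If you downloaded the APK on a computer rather than on the phone, you can also sideload it over USB instead of tapping Install on the device. This is just an illustrative sketch, not the official install route: it assumes adb is installed, USB debugging is enabled, and the file sits in your Downloads folder under the (assumed) name shown.

```python
import subprocess
from pathlib import Path

# Assumed filename/location for illustration; adjust to wherever your browser saved the APK.
APK_PATH = Path.home() / "Downloads" / "whatsapp-messenger.apk"


def sideload(apk: Path) -> None:
    """Install (or update, thanks to -r) a locally downloaded APK on the connected phone."""
    result = subprocess.run(["adb", "install", "-r", str(apk)],
                            capture_output=True, text=True, check=False)
    # adb prints "Success" on a completed install; errors land on stderr.
    print(result.stdout.strip() or result.stderr.strip())


if __name__ == "__main__":
    if APK_PATH.exists():
        sideload(APK_PATH)
    else:
        print(f"No APK found at {APK_PATH}")
```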

              -

              Step 4: Verify Your Phone Number and Set Up Your Profile

              -

              After the installation is complete, you can open WhatsApp Messenger from your home screen or app drawer. You will be greeted by a welcome screen that asks you to agree to the terms of service and privacy policy. Tap on Agree and Continue to proceed. Then, you will be asked to enter your phone number and verify it with a code that will be sent to you via SMS. Enter the code and tap on Next to verify your phone number. You can also choose to restore your chat history from a backup if you have one.

              -

              Next, you will be asked to set up your profile by entering your name and choosing a profile picture. You can also change these later in the settings. Tap on Next to finish the setup. Congratulations! You have successfully downloaded and installed WhatsApp Messenger from Petal Search.

              -

              Conclusion

              -

              In this article, we have shown you how to download WhatsApp Messenger for Huawei using AppGallery and Petal Search. Both of them are easy and safe ways to get WhatsApp on your Huawei device. You can enjoy all the features and benefits of WhatsApp Messenger, such as free messaging, voice and video calls, group chats, status updates, WhatsApp Web and Desktop, WhatsApp Business, and more.

              -

              We hope this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

              -

              FAQs

              • Is WhatsApp Messenger free?
                Yes, WhatsApp Messenger is free to download and use. However, you may incur data charges from your network provider if you use WhatsApp over cellular data instead of Wi-Fi.
              • Is WhatsApp Messenger safe?
                Yes, WhatsApp Messenger is safe to use as it uses end-to-end encryption to protect your messages and calls from anyone else except you and the person you are communicating with.
              • Can I use WhatsApp Messenger on multiple devices?
                You can use WhatsApp Messenger on multiple devices, but you can only have one phone number registered to one account at a time. You can use WhatsApp Web or Desktop to access your chats and calls from any device without missing anything.
              • What is the difference between WhatsApp Messenger and WhatsApp Business?
                WhatsApp Messenger is designed for personal communication, while WhatsApp Business is designed for small businesses or personal brands. WhatsApp Business allows you to create a business profile, showcase your products or services, communicate with your customers, and manage your orders and payments.
              • How can I update WhatsApp Messenger?
                You can update WhatsApp Messenger through AppGallery or Petal Search if they are not updated automatically by your device. You can also check for updates manually by opening WhatsApp Messenger, tapping on the menu icon at the top right corner, and then tapping on Settings > Help > App info. You will see the current version of WhatsApp Messenger and the date of the last update. If there is a newer version available, you will see a notification to update the app. (The sketch below shows one way to check the currently installed version from a computer.)
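              As a companion to the update question above, the following rough sketch reads the version currently installed on a connected phone with adb and dumpsys; `com.whatsapp` is WhatsApp's package id, and the usual caveats apply (adb installed, USB debugging enabled).

```python
import re
import subprocess


def installed_version(package: str = "com.whatsapp") -> str:
    """Return the versionName of an installed package, or a short notice if it is absent."""
    out = subprocess.run(["adb", "shell", "dumpsys", "package", package],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"versionName=(\S+)", out)
    return match.group(1) if match else f"{package} is not installed"


if __name__ == "__main__":
    print(f"Installed WhatsApp version: {installed_version()}")
```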

              \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Christine Mendoza Exposed And Uncut.html.rar [2021].md b/spaces/contluForse/HuggingGPT/assets/Christine Mendoza Exposed And Uncut.html.rar [2021].md deleted file mode 100644 index 18a3ce966a7e0f672265f7d3de4422f3b7a08e2e..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Christine Mendoza Exposed And Uncut.html.rar [2021].md +++ /dev/null @@ -1,46 +0,0 @@ -

              Christine Mendoza Exposed And Uncut.html.rar


              Download Zip 🗹 https://ssurll.com/2uzx6K



              - -Uncensored and unchecked christine mendoza exposed and uncut.html.rar - -Name - -christine mendoza exposed and uncut.html.rar - -Size - -12 MB - -Date - -2018-08-19 18:56:26 - -Added - -87 Seconds Ago - -Views - -145 - -Embed Button - -Download 362 christine mendoza exposed and uncut.html.rarJEFFERSON CITY, Mo. — Missouri Attorney General Josh Hawley announced today that Missouri State Board of Elections will stop using paperless electronic voting machines by the end of the year. - -“I have made it clear that the Missouri State Board of Elections will move forward with the switch to electronic voting machines by the end of the year,” Hawley said. “By that time, we will have upgraded all voting machines to ensure there are no suspicious or insecure voting machines that can be used for nefarious purposes.” - -The Governor is expected to sign an executive order to make the switch this week. Hawley has committed to provide the financing for the transition to electronic voting machines.American voters, swayed by President Trump’s success in the first three weeks of his presidency, say they believe the country is headed in the right direction. - -Yet, when asked to list what’s most important to them, only 6 percent mentioned the economy. Jobs and the economy were among the top five issues people listed by one-third of Americans, but just 3 percent cited the economy as the most important issue. - -When you drill down to individual issues, it gets even more fascinating. On immigration, 5 percent of those surveyed named it as the most important issue. Only one-third of Americans said they worried about the government’s ability to control borders. - -Energy and the environment took the No. 4 and 5 slots, with 8 percent and 7 percent of Americans, respectively, concerned about them. Another 7 percent cited the economy. Only 2 percent worried about gun violence, a troubling statistic given the recent mass shooting in Las Vegas and the similar incident in Florida last month. - -The biggest worry? - -National security concerns, of course, are the top issue among Republicans (20 percent) and independents (25 percent). Among Democrats, the No. 1 worry is immigration (35 percent). - -But immigration is the No. 1 issue among only 20 percent of Americans. 4fefd39f24
              -
              -
              -

              diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/focal_loss.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/focal_loss.py deleted file mode 100644 index 763bc93bd2575c49ca8ccf20996bbd92d1e0d1a4..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/focal_loss.py +++ /dev/null @@ -1,212 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'sigmoid_focal_loss_forward', 'sigmoid_focal_loss_backward', - 'softmax_focal_loss_forward', 'softmax_focal_loss_backward' -]) - - -class SigmoidFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input, target, gamma, alpha, weight, reduction): - return g.op( - 'mmcv::MMCVSigmoidFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input, - target, - gamma=2.0, - alpha=0.25, - weight=None, - reduction='mean'): - - assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - output = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_forward( - input, target, weight, output, gamma=ctx.gamma, alpha=ctx.alpha) - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input, target, weight) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, target, weight = ctx.saved_tensors - - grad_input = input.new_zeros(input.size()) - - ext_module.sigmoid_focal_loss_backward( - input, - target, - weight, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input.size(0) - return grad_input, None, None, None, None, None - - -sigmoid_focal_loss = SigmoidFocalLossFunction.apply - - -class SigmoidFocalLoss(nn.Module): - - def __init__(self, gamma, alpha, weight=None, reduction='mean'): - super(SigmoidFocalLoss, self).__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward(self, input, target): - return sigmoid_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s - - -class SoftmaxFocalLossFunction(Function): - - @staticmethod - def symbolic(g, input, target, gamma, alpha, weight, reduction): - return g.op( - 'mmcv::MMCVSoftmaxFocalLoss', - input, - target, - gamma_f=gamma, - alpha_f=alpha, - weight_f=weight, - reduction_s=reduction) - - @staticmethod - def forward(ctx, - input, - target, - gamma=2.0, - 
alpha=0.25, - weight=None, - reduction='mean'): - - assert isinstance(target, (torch.LongTensor, torch.cuda.LongTensor)) - assert input.dim() == 2 - assert target.dim() == 1 - assert input.size(0) == target.size(0) - if weight is None: - weight = input.new_empty(0) - else: - assert weight.dim() == 1 - assert input.size(1) == weight.size(0) - ctx.reduction_dict = {'none': 0, 'mean': 1, 'sum': 2} - assert reduction in ctx.reduction_dict.keys() - - ctx.gamma = float(gamma) - ctx.alpha = float(alpha) - ctx.reduction = ctx.reduction_dict[reduction] - - channel_stats, _ = torch.max(input, dim=1) - input_softmax = input - channel_stats.unsqueeze(1).expand_as(input) - input_softmax.exp_() - - channel_stats = input_softmax.sum(dim=1) - input_softmax /= channel_stats.unsqueeze(1).expand_as(input) - - output = input.new_zeros(input.size(0)) - ext_module.softmax_focal_loss_forward( - input_softmax, - target, - weight, - output, - gamma=ctx.gamma, - alpha=ctx.alpha) - - if ctx.reduction == ctx.reduction_dict['mean']: - output = output.sum() / input.size(0) - elif ctx.reduction == ctx.reduction_dict['sum']: - output = output.sum() - ctx.save_for_backward(input_softmax, target, weight) - return output - - @staticmethod - def backward(ctx, grad_output): - input_softmax, target, weight = ctx.saved_tensors - buff = input_softmax.new_zeros(input_softmax.size(0)) - grad_input = input_softmax.new_zeros(input_softmax.size()) - - ext_module.softmax_focal_loss_backward( - input_softmax, - target, - weight, - buff, - grad_input, - gamma=ctx.gamma, - alpha=ctx.alpha) - - grad_input *= grad_output - if ctx.reduction == ctx.reduction_dict['mean']: - grad_input /= input_softmax.size(0) - return grad_input, None, None, None, None, None - - -softmax_focal_loss = SoftmaxFocalLossFunction.apply - - -class SoftmaxFocalLoss(nn.Module): - - def __init__(self, gamma, alpha, weight=None, reduction='mean'): - super(SoftmaxFocalLoss, self).__init__() - self.gamma = gamma - self.alpha = alpha - self.register_buffer('weight', weight) - self.reduction = reduction - - def forward(self, input, target): - return softmax_focal_loss(input, target, self.gamma, self.alpha, - self.weight, self.reduction) - - def __repr__(self): - s = self.__class__.__name__ - s += f'(gamma={self.gamma}, ' - s += f'alpha={self.alpha}, ' - s += f'reduction={self.reduction})' - return s diff --git a/spaces/dakaiye/dky_xuexi/crazy_functional.py b/spaces/dakaiye/dky_xuexi/crazy_functional.py deleted file mode 100644 index 91c85cf0f2479dd921137d1854bccad4b5fc2aa4..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/crazy_functional.py +++ /dev/null @@ -1,299 +0,0 @@ -from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效 - - -def get_crazy_functions(): - ###################### 第一组插件 ########################### - from crazy_functions.读文章写摘要 import 读文章写摘要 - from crazy_functions.生成函数注释 import 批量生成函数注释 - from crazy_functions.解析项目源代码 import 解析项目本身 - from crazy_functions.解析项目源代码 import 解析一个Python项目 - from crazy_functions.解析项目源代码 import 解析一个C项目的头文件 - from crazy_functions.解析项目源代码 import 解析一个C项目 - from crazy_functions.解析项目源代码 import 解析一个Golang项目 - from crazy_functions.解析项目源代码 import 解析一个Rust项目 - from crazy_functions.解析项目源代码 import 解析一个Java项目 - from crazy_functions.解析项目源代码 import 解析一个前端项目 - from crazy_functions.高级功能函数模板 import 高阶功能模板函数 - from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文 - from crazy_functions.Latex全文润色 import Latex英文润色 - from crazy_functions.询问多个大语言模型 import 同时问询 - from crazy_functions.解析项目源代码 import 解析一个Lua项目 - from 
crazy_functions.解析项目源代码 import 解析一个CSharp项目 - from crazy_functions.总结word文档 import 总结word文档 - from crazy_functions.解析JupyterNotebook import 解析ipynb文件 - from crazy_functions.对话历史存档 import 对话历史存档 - from crazy_functions.对话历史存档 import 载入对话历史存档 - from crazy_functions.对话历史存档 import 删除所有本地对话历史记录 - - from crazy_functions.批量Markdown翻译 import Markdown英译中 - function_plugins = { - "解析整个Python项目": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(解析一个Python项目) - }, - "载入对话历史存档(先上传存档或输入路径)": { - "Color": "stop", - "AsButton":False, - "Function": HotReload(载入对话历史存档) - }, - "删除所有本地对话历史记录(请谨慎操作)": { - "AsButton":False, - "Function": HotReload(删除所有本地对话历史记录) - }, - "[测试功能] 解析Jupyter Notebook文件": { - "Color": "stop", - "AsButton":False, - "Function": HotReload(解析ipynb文件), - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示 - }, - "批量总结Word文档": { - "Color": "stop", - "Function": HotReload(总结word文档) - }, - "解析整个C++项目头文件": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目的头文件) - }, - "解析整个C++项目(.cpp/.hpp/.c/.h)": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目) - }, - "解析整个Go项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Golang项目) - }, - "解析整个Rust项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Rust项目) - }, - "解析整个Java项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Java项目) - }, - "解析整个前端项目(js,ts,css等)": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个前端项目) - }, - "解析整个Lua项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Lua项目) - }, - "解析整个CSharp项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个CSharp项目) - }, - "读Tex论文写摘要": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(读文章写摘要) - }, - "Markdown/Readme英译中": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "Function": HotReload(Markdown英译中) - }, - "批量生成函数注释": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量生成函数注释) - }, - "保存当前的对话": { - "Function": HotReload(对话历史存档) - }, - "[多线程Demo] 解析此项目本身(源码自译解)": { - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析项目本身) - }, - "[老旧的Demo] 把本项目源代码切换成全英文": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(全项目切换英文) - }, - "[插件demo] 历史上的今天": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(高阶功能模板函数) - }, - - } - ###################### 第二组插件 ########################### - # [第二组插件]: 经过充分测试 - from crazy_functions.批量总结PDF文档 import 批量总结PDF文档 - from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer - from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档 - from crazy_functions.谷歌检索小助手 import 谷歌检索小助手 - from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入 - from crazy_functions.Latex全文润色 import Latex中文润色 - from crazy_functions.Latex全文润色 import Latex英文纠错 - from crazy_functions.Latex全文翻译 import Latex中译英 - from crazy_functions.Latex全文翻译 import Latex英译中 - from crazy_functions.批量Markdown翻译 import Markdown中译英 - - function_plugins.update({ - "批量翻译PDF文档(多线程)": { - "Color": "stop", - "AsButton": True, # 加入下拉菜单中 - "Function": HotReload(批量翻译PDF文档) - }, - "询问多个GPT模型": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(同时问询) - }, - "[测试功能] 批量总结PDF文档": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - # 
HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(批量总结PDF文档) - }, - "[测试功能] 批量总结PDF文档pdfminer": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量总结PDF文档pdfminer) - }, - "谷歌学术检索助手(输入谷歌学术搜索页url)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(谷歌检索小助手) - }, - - "理解PDF文档内容 (模仿ChatPDF)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(理解PDF文档内容标准文件输入) - }, - "英文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英文润色) - }, - "英文Latex项目全文纠错(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英文纠错) - }, - "[测试功能] 中文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中文润色) - }, - "Latex项目全文中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中译英) - }, - "Latex项目全文英译中(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英译中) - }, - "批量Markdown中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Markdown中译英) - }, - - - }) - - ###################### 第三组插件 ########################### - # [第三组插件]: 尚未充分测试的函数插件,放在这里 - from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要 - function_plugins.update({ - "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(下载arxiv论文并翻译摘要) - } - }) - - from crazy_functions.联网的ChatGPT import 连接网络回答问题 - function_plugins.update({ - "连接网络回答问题(先输入问题,再点击按钮,需要访问谷歌)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(连接网络回答问题) - } - }) - - from crazy_functions.解析项目源代码 import 解析任意code项目 - function_plugins.update({ - "解析项目源代码(手动指定和筛选源代码文件类型)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示 - "Function": HotReload(解析任意code项目) - }, - }) - from crazy_functions.询问多个大语言模型 import 同时问询_指定模型 - function_plugins.update({ - "询问多个GPT模型(手动指定询问哪些模型)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示 - "Function": HotReload(同时问询_指定模型) - }, - }) - from crazy_functions.图片生成 import 图片生成 - function_plugins.update({ - "图片生成(先切换模型到openai或api2d)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "在这里输入分辨率, 如256x256(默认)", # 高级参数输入区的显示提示 - "Function": HotReload(图片生成) - }, - }) - from crazy_functions.总结音视频 import 总结音视频 - function_plugins.update({ - "批量总结音视频(输入路径或上传压缩包)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, - "ArgsReminder": "调用openai api 使用whisper-1模型, 目前支持的格式:mp4, m4a, wav, mpga, mpeg, mp3。此处可以输入解析提示,例如:解析为简体中文(默认)。", - "Function": HotReload(总结音视频) - } - }) - try: - from crazy_functions.数学动画生成manim import 动画生成 - function_plugins.update({ - "数学动画生成(Manim)": { - "Color": "stop", - "AsButton": False, - "Function": HotReload(动画生成) - } - }) 
- except: - print('Load function plugin failed') - - try: - from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言 - function_plugins.update({ - "Markdown翻译(手动指定语言)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, - "ArgsReminder": "请输入要翻译成哪种语言,默认为Chinese。", - "Function": HotReload(Markdown翻译指定语言) - } - }) - except: - print('Load function plugin failed') - - ###################### 第n组插件 ########################### - return function_plugins diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/Image.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/Image.py deleted file mode 100644 index a519a28af3689d46f1c26d00ffc7204958da7a7e..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/Image.py +++ /dev/null @@ -1,3910 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# the Image class wrapper -# -# partial release history: -# 1995-09-09 fl Created -# 1996-03-11 fl PIL release 0.0 (proof of concept) -# 1996-04-30 fl PIL release 0.1b1 -# 1999-07-28 fl PIL release 1.0 final -# 2000-06-07 fl PIL release 1.1 -# 2000-10-20 fl PIL release 1.1.1 -# 2001-05-07 fl PIL release 1.1.2 -# 2002-03-15 fl PIL release 1.1.3 -# 2003-05-10 fl PIL release 1.1.4 -# 2005-03-28 fl PIL release 1.1.5 -# 2006-12-02 fl PIL release 1.1.6 -# 2009-11-15 fl PIL release 1.1.7 -# -# Copyright (c) 1997-2009 by Secret Labs AB. All rights reserved. -# Copyright (c) 1995-2009 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -import atexit -import builtins -import io -import logging -import math -import os -import re -import struct -import sys -import tempfile -import warnings -from collections.abc import Callable, MutableMapping -from enum import IntEnum -from pathlib import Path - -try: - import defusedxml.ElementTree as ElementTree -except ImportError: - ElementTree = None - -# VERSION was removed in Pillow 6.0.0. -# PILLOW_VERSION was removed in Pillow 9.0.0. -# Use __version__ instead. -from . import ( - ExifTags, - ImageMode, - TiffTags, - UnidentifiedImageError, - __version__, - _plugins, -) -from ._binary import i32le, o32be, o32le -from ._util import DeferredError, is_path - -logger = logging.getLogger(__name__) - - -class DecompressionBombWarning(RuntimeWarning): - pass - - -class DecompressionBombError(Exception): - pass - - -# Limit to around a quarter gigabyte for a 24-bit (3 bpp) image -MAX_IMAGE_PIXELS = int(1024 * 1024 * 1024 // 4 // 3) - - -try: - # If the _imaging C module is not present, Pillow will not load. - # Note that other modules should not refer to _imaging directly; - # import Image and use the Image.core variable instead. - # Also note that Image.core is not a publicly documented interface, - # and should be considered private and subject to change. - from . import _imaging as core - - if __version__ != getattr(core, "PILLOW_VERSION", None): - msg = ( - "The _imaging extension was built for another version of Pillow or PIL:\n" - f"Core version: {getattr(core, 'PILLOW_VERSION', None)}\n" - f"Pillow version: {__version__}" - ) - raise ImportError(msg) - -except ImportError as v: - core = DeferredError(ImportError("The _imaging C module is not installed.")) - # Explanations for ways that we know we might have an import error - if str(v).startswith("Module use of python"): - # The _imaging C module is present, but not compiled for - # the right version (windows only). 
Print a warning, if - # possible. - warnings.warn( - "The _imaging extension was built for another version of Python.", - RuntimeWarning, - ) - elif str(v).startswith("The _imaging extension"): - warnings.warn(str(v), RuntimeWarning) - # Fail here anyway. Don't let people run with a mostly broken Pillow. - # see docs/porting.rst - raise - - -USE_CFFI_ACCESS = False -try: - import cffi -except ImportError: - cffi = None - - -def isImageType(t): - """ - Checks if an object is an image object. - - .. warning:: - - This function is for internal use only. - - :param t: object to check if it's an image - :returns: True if the object is an image - """ - return hasattr(t, "im") - - -# -# Constants - - -# transpose -class Transpose(IntEnum): - FLIP_LEFT_RIGHT = 0 - FLIP_TOP_BOTTOM = 1 - ROTATE_90 = 2 - ROTATE_180 = 3 - ROTATE_270 = 4 - TRANSPOSE = 5 - TRANSVERSE = 6 - - -# transforms (also defined in Imaging.h) -class Transform(IntEnum): - AFFINE = 0 - EXTENT = 1 - PERSPECTIVE = 2 - QUAD = 3 - MESH = 4 - - -# resampling filters (also defined in Imaging.h) -class Resampling(IntEnum): - NEAREST = 0 - BOX = 4 - BILINEAR = 2 - HAMMING = 5 - BICUBIC = 3 - LANCZOS = 1 - - -_filters_support = { - Resampling.BOX: 0.5, - Resampling.BILINEAR: 1.0, - Resampling.HAMMING: 1.0, - Resampling.BICUBIC: 2.0, - Resampling.LANCZOS: 3.0, -} - - -# dithers -class Dither(IntEnum): - NONE = 0 - ORDERED = 1 # Not yet implemented - RASTERIZE = 2 # Not yet implemented - FLOYDSTEINBERG = 3 # default - - -# palettes/quantizers -class Palette(IntEnum): - WEB = 0 - ADAPTIVE = 1 - - -class Quantize(IntEnum): - MEDIANCUT = 0 - MAXCOVERAGE = 1 - FASTOCTREE = 2 - LIBIMAGEQUANT = 3 - - -module = sys.modules[__name__] -for enum in (Transpose, Transform, Resampling, Dither, Palette, Quantize): - for item in enum: - setattr(module, item.name, item.value) - - -if hasattr(core, "DEFAULT_STRATEGY"): - DEFAULT_STRATEGY = core.DEFAULT_STRATEGY - FILTERED = core.FILTERED - HUFFMAN_ONLY = core.HUFFMAN_ONLY - RLE = core.RLE - FIXED = core.FIXED - - -# -------------------------------------------------------------------- -# Registries - -ID = [] -OPEN = {} -MIME = {} -SAVE = {} -SAVE_ALL = {} -EXTENSION = {} -DECODERS = {} -ENCODERS = {} - -# -------------------------------------------------------------------- -# Modes - -_ENDIAN = "<" if sys.byteorder == "little" else ">" - - -def _conv_type_shape(im): - m = ImageMode.getmode(im.mode) - shape = (im.height, im.width) - extra = len(m.bands) - if extra != 1: - shape += (extra,) - return shape, m.typestr - - -MODES = ["1", "CMYK", "F", "HSV", "I", "L", "LAB", "P", "RGB", "RGBA", "RGBX", "YCbCr"] - -# raw modes that may be memory mapped. NOTE: if you change this, you -# may have to modify the stride calculation in map.c too! -_MAPMODES = ("L", "P", "RGBX", "RGBA", "CMYK", "I;16", "I;16L", "I;16B") - - -def getmodebase(mode): - """ - Gets the "base" mode for given mode. This function returns "L" for - images that contain grayscale data, and "RGB" for images that - contain color data. - - :param mode: Input mode. - :returns: "L" or "RGB". - :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).basemode - - -def getmodetype(mode): - """ - Gets the storage type mode. Given a mode, this function returns a - single-layer mode suitable for storing individual bands. - - :param mode: Input mode. - :returns: "L", "I", or "F". - :exception KeyError: If the input mode was not a standard mode. 
- """ - return ImageMode.getmode(mode).basetype - - -def getmodebandnames(mode): - """ - Gets a list of individual band names. Given a mode, this function returns - a tuple containing the names of individual bands (use - :py:method:`~PIL.Image.getmodetype` to get the mode used to store each - individual band. - - :param mode: Input mode. - :returns: A tuple containing band names. The length of the tuple - gives the number of bands in an image of the given mode. - :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).bands - - -def getmodebands(mode): - """ - Gets the number of individual bands for this mode. - - :param mode: Input mode. - :returns: The number of bands in this mode. - :exception KeyError: If the input mode was not a standard mode. - """ - return len(ImageMode.getmode(mode).bands) - - -# -------------------------------------------------------------------- -# Helpers - -_initialized = 0 - - -def preinit(): - """Explicitly load standard file format drivers.""" - - global _initialized - if _initialized >= 1: - return - - try: - from . import BmpImagePlugin - - assert BmpImagePlugin - except ImportError: - pass - try: - from . import GifImagePlugin - - assert GifImagePlugin - except ImportError: - pass - try: - from . import JpegImagePlugin - - assert JpegImagePlugin - except ImportError: - pass - try: - from . import PpmImagePlugin - - assert PpmImagePlugin - except ImportError: - pass - try: - from . import PngImagePlugin - - assert PngImagePlugin - except ImportError: - pass - # try: - # import TiffImagePlugin - # assert TiffImagePlugin - # except ImportError: - # pass - - _initialized = 1 - - -def init(): - """ - Explicitly initializes the Python Imaging Library. This function - loads all available file format drivers. 
- """ - - global _initialized - if _initialized >= 2: - return 0 - - for plugin in _plugins: - try: - logger.debug("Importing %s", plugin) - __import__(f"PIL.{plugin}", globals(), locals(), []) - except ImportError as e: - logger.debug("Image: failed to import %s: %s", plugin, e) - - if OPEN or SAVE: - _initialized = 2 - return 1 - - -# -------------------------------------------------------------------- -# Codec factories (used by tobytes/frombytes and ImageFile.load) - - -def _getdecoder(mode, decoder_name, args, extra=()): - # tweak arguments - if args is None: - args = () - elif not isinstance(args, tuple): - args = (args,) - - try: - decoder = DECODERS[decoder_name] - except KeyError: - pass - else: - return decoder(mode, *args + extra) - - try: - # get decoder - decoder = getattr(core, decoder_name + "_decoder") - except AttributeError as e: - msg = f"decoder {decoder_name} not available" - raise OSError(msg) from e - return decoder(mode, *args + extra) - - -def _getencoder(mode, encoder_name, args, extra=()): - # tweak arguments - if args is None: - args = () - elif not isinstance(args, tuple): - args = (args,) - - try: - encoder = ENCODERS[encoder_name] - except KeyError: - pass - else: - return encoder(mode, *args + extra) - - try: - # get encoder - encoder = getattr(core, encoder_name + "_encoder") - except AttributeError as e: - msg = f"encoder {encoder_name} not available" - raise OSError(msg) from e - return encoder(mode, *args + extra) - - -# -------------------------------------------------------------------- -# Simple expression analyzer - - -class _E: - def __init__(self, scale, offset): - self.scale = scale - self.offset = offset - - def __neg__(self): - return _E(-self.scale, -self.offset) - - def __add__(self, other): - if isinstance(other, _E): - return _E(self.scale + other.scale, self.offset + other.offset) - return _E(self.scale, self.offset + other) - - __radd__ = __add__ - - def __sub__(self, other): - return self + -other - - def __rsub__(self, other): - return other + -self - - def __mul__(self, other): - if isinstance(other, _E): - return NotImplemented - return _E(self.scale * other, self.offset * other) - - __rmul__ = __mul__ - - def __truediv__(self, other): - if isinstance(other, _E): - return NotImplemented - return _E(self.scale / other, self.offset / other) - - -def _getscaleoffset(expr): - a = expr(_E(1, 0)) - return (a.scale, a.offset) if isinstance(a, _E) else (0, a) - - -# -------------------------------------------------------------------- -# Implementation wrapper - - -class Image: - """ - This class represents an image object. To create - :py:class:`~PIL.Image.Image` objects, use the appropriate factory - functions. There's hardly ever any reason to call the Image constructor - directly. - - * :py:func:`~PIL.Image.open` - * :py:func:`~PIL.Image.new` - * :py:func:`~PIL.Image.frombytes` - """ - - format = None - format_description = None - _close_exclusive_fp_after_loading = True - - def __init__(self): - # FIXME: take "new" parameters / other image? - # FIXME: turn mode and size into delegating properties? 
- self.im = None - self.mode = "" - self._size = (0, 0) - self.palette = None - self.info = {} - self.readonly = 0 - self.pyaccess = None - self._exif = None - - @property - def width(self): - return self.size[0] - - @property - def height(self): - return self.size[1] - - @property - def size(self): - return self._size - - def _new(self, im): - new = Image() - new.im = im - new.mode = im.mode - new._size = im.size - if im.mode in ("P", "PA"): - if self.palette: - new.palette = self.palette.copy() - else: - from . import ImagePalette - - new.palette = ImagePalette.ImagePalette() - new.info = self.info.copy() - return new - - # Context manager support - def __enter__(self): - return self - - def __exit__(self, *args): - if hasattr(self, "fp") and getattr(self, "_exclusive_fp", False): - if getattr(self, "_fp", False): - if self._fp != self.fp: - self._fp.close() - self._fp = DeferredError(ValueError("Operation on closed image")) - if self.fp: - self.fp.close() - self.fp = None - - def close(self): - """ - Closes the file pointer, if possible. - - This operation will destroy the image core and release its memory. - The image data will be unusable afterward. - - This function is required to close images that have multiple frames or - have not had their file read and closed by the - :py:meth:`~PIL.Image.Image.load` method. See :ref:`file-handling` for - more information. - """ - try: - if getattr(self, "_fp", False): - if self._fp != self.fp: - self._fp.close() - self._fp = DeferredError(ValueError("Operation on closed image")) - if self.fp: - self.fp.close() - self.fp = None - except Exception as msg: - logger.debug("Error closing: %s", msg) - - if getattr(self, "map", None): - self.map = None - - # Instead of simply setting to None, we're setting up a - # deferred error that will better explain that the core image - # object is gone. - self.im = DeferredError(ValueError("Operation on closed image")) - - def _copy(self): - self.load() - self.im = self.im.copy() - self.pyaccess = None - self.readonly = 0 - - def _ensure_mutable(self): - if self.readonly: - self._copy() - else: - self.load() - - def _dump(self, file=None, format=None, **options): - suffix = "" - if format: - suffix = "." + format - - if not file: - f, filename = tempfile.mkstemp(suffix) - os.close(f) - else: - filename = file - if not filename.endswith(suffix): - filename = filename + suffix - - self.load() - - if not format or format == "PPM": - self.im.save_ppm(filename) - else: - self.save(filename, format, **options) - - return filename - - def __eq__(self, other): - return ( - self.__class__ is other.__class__ - and self.mode == other.mode - and self.size == other.size - and self.info == other.info - and self.getpalette() == other.getpalette() - and self.tobytes() == other.tobytes() - ) - - def __repr__(self): - return "<%s.%s image mode=%s size=%dx%d at 0x%X>" % ( - self.__class__.__module__, - self.__class__.__name__, - self.mode, - self.size[0], - self.size[1], - id(self), - ) - - def _repr_pretty_(self, p, cycle): - """IPython plain text display support""" - - # Same as __repr__ but without unpredictable id(self), - # to keep Jupyter notebook `text/plain` output stable. - p.text( - "<%s.%s image mode=%s size=%dx%d>" - % ( - self.__class__.__module__, - self.__class__.__name__, - self.mode, - self.size[0], - self.size[1], - ) - ) - - def _repr_image(self, image_format, **kwargs): - """Helper function for iPython display hook. - - :param image_format: Image format. - :returns: image as bytes, saved into the given format. 
- """ - b = io.BytesIO() - try: - self.save(b, image_format, **kwargs) - except Exception as e: - msg = f"Could not save to {image_format} for display" - raise ValueError(msg) from e - return b.getvalue() - - def _repr_png_(self): - """iPython display hook support for PNG format. - - :returns: PNG version of the image as bytes - """ - return self._repr_image("PNG", compress_level=1) - - def _repr_jpeg_(self): - """iPython display hook support for JPEG format. - - :returns: JPEG version of the image as bytes - """ - return self._repr_image("JPEG") - - @property - def __array_interface__(self): - # numpy array interface support - new = {"version": 3} - try: - if self.mode == "1": - # Binary images need to be extended from bits to bytes - # See: https://github.com/python-pillow/Pillow/issues/350 - new["data"] = self.tobytes("raw", "L") - else: - new["data"] = self.tobytes() - except Exception as e: - if not isinstance(e, (MemoryError, RecursionError)): - try: - import numpy - from packaging.version import parse as parse_version - except ImportError: - pass - else: - if parse_version(numpy.__version__) < parse_version("1.23"): - warnings.warn(e) - raise - new["shape"], new["typestr"] = _conv_type_shape(self) - return new - - def __getstate__(self): - im_data = self.tobytes() # load image first - return [self.info, self.mode, self.size, self.getpalette(), im_data] - - def __setstate__(self, state): - Image.__init__(self) - info, mode, size, palette, data = state - self.info = info - self.mode = mode - self._size = size - self.im = core.new(mode, size) - if mode in ("L", "LA", "P", "PA") and palette: - self.putpalette(palette) - self.frombytes(data) - - def tobytes(self, encoder_name="raw", *args): - """ - Return image as a bytes object. - - .. warning:: - - This method returns the raw image data from the internal - storage. For compressed image data (e.g. PNG, JPEG) use - :meth:`~.save`, with a BytesIO parameter for in-memory - data. - - :param encoder_name: What encoder to use. The default is to - use the standard "raw" encoder. - - A list of C encoders can be seen under - codecs section of the function array in - :file:`_imaging.c`. Python encoders are - registered within the relevant plugins. - :param args: Extra arguments to the encoder. - :returns: A :py:class:`bytes` object. - """ - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if encoder_name == "raw" and args == (): - args = self.mode - - self.load() - - if self.width == 0 or self.height == 0: - return b"" - - # unpack data - e = _getencoder(self.mode, encoder_name, args) - e.setimage(self.im) - - bufsize = max(65536, self.size[0] * 4) # see RawEncode.c - - output = [] - while True: - bytes_consumed, errcode, data = e.encode(bufsize) - output.append(data) - if errcode: - break - if errcode < 0: - msg = f"encoder error {errcode} in tobytes" - raise RuntimeError(msg) - - return b"".join(output) - - def tobitmap(self, name="image"): - """ - Returns the image converted to an X11 bitmap. - - .. note:: This method only works for mode "1" images. - - :param name: The name prefix to use for the bitmap variables. - :returns: A string containing an X11 bitmap. 
- :raises ValueError: If the mode is not "1" - """ - - self.load() - if self.mode != "1": - msg = "not a bitmap" - raise ValueError(msg) - data = self.tobytes("xbm") - return b"".join( - [ - f"#define {name}_width {self.size[0]}\n".encode("ascii"), - f"#define {name}_height {self.size[1]}\n".encode("ascii"), - f"static char {name}_bits[] = {{\n".encode("ascii"), - data, - b"};", - ] - ) - - def frombytes(self, data, decoder_name="raw", *args): - """ - Loads this image with pixel data from a bytes object. - - This method is similar to the :py:func:`~PIL.Image.frombytes` function, - but loads data into this image instead of creating a new image object. - """ - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - # default format - if decoder_name == "raw" and args == (): - args = self.mode - - # unpack data - d = _getdecoder(self.mode, decoder_name, args) - d.setimage(self.im) - s = d.decode(data) - - if s[0] >= 0: - msg = "not enough image data" - raise ValueError(msg) - if s[1] != 0: - msg = "cannot decode image data" - raise ValueError(msg) - - def load(self): - """ - Allocates storage for the image and loads the pixel data. In - normal cases, you don't need to call this method, since the - Image class automatically loads an opened image when it is - accessed for the first time. - - If the file associated with the image was opened by Pillow, then this - method will close it. The exception to this is if the image has - multiple frames, in which case the file will be left open for seek - operations. See :ref:`file-handling` for more information. - - :returns: An image access object. - :rtype: :ref:`PixelAccess` or :py:class:`PIL.PyAccess` - """ - if self.im is not None and self.palette and self.palette.dirty: - # realize palette - mode, arr = self.palette.getdata() - self.im.putpalette(mode, arr) - self.palette.dirty = 0 - self.palette.rawmode = None - if "transparency" in self.info and mode in ("LA", "PA"): - if isinstance(self.info["transparency"], int): - self.im.putpalettealpha(self.info["transparency"], 0) - else: - self.im.putpalettealphas(self.info["transparency"]) - self.palette.mode = "RGBA" - else: - palette_mode = "RGBA" if mode.startswith("RGBA") else "RGB" - self.palette.mode = palette_mode - self.palette.palette = self.im.getpalette(palette_mode, palette_mode) - - if self.im is not None: - if cffi and USE_CFFI_ACCESS: - if self.pyaccess: - return self.pyaccess - from . import PyAccess - - self.pyaccess = PyAccess.new(self, self.readonly) - if self.pyaccess: - return self.pyaccess - return self.im.pixel_access(self.readonly) - - def verify(self): - """ - Verifies the contents of a file. For data read from a file, this - method attempts to determine if the file is broken, without - actually decoding the image data. If this method finds any - problems, it raises suitable exceptions. If you need to load - the image after using this method, you must reopen the image - file. - """ - pass - - def convert( - self, mode=None, matrix=None, dither=None, palette=Palette.WEB, colors=256 - ): - """ - Returns a converted copy of this image. For the "P" mode, this - method translates pixels through the palette. If mode is - omitted, a mode is chosen so that all information in the image - and the palette can be represented without a palette. - - The current version supports all possible conversions between - "L", "RGB" and "CMYK". The ``matrix`` argument only supports "L" - and "RGB". 
- - When translating a color image to greyscale (mode "L"), - the library uses the ITU-R 601-2 luma transform:: - - L = R * 299/1000 + G * 587/1000 + B * 114/1000 - - The default method of converting a greyscale ("L") or "RGB" - image into a bilevel (mode "1") image uses Floyd-Steinberg - dither to approximate the original image luminosity levels. If - dither is ``None``, all values larger than 127 are set to 255 (white), - all other values to 0 (black). To use other thresholds, use the - :py:meth:`~PIL.Image.Image.point` method. - - When converting from "RGBA" to "P" without a ``matrix`` argument, - this passes the operation to :py:meth:`~PIL.Image.Image.quantize`, - and ``dither`` and ``palette`` are ignored. - - When converting from "PA", if an "RGBA" palette is present, the alpha - channel from the image will be used instead of the values from the palette. - - :param mode: The requested mode. See: :ref:`concept-modes`. - :param matrix: An optional conversion matrix. If given, this - should be 4- or 12-tuple containing floating point values. - :param dither: Dithering method, used when converting from - mode "RGB" to "P" or from "RGB" or "L" to "1". - Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG` - (default). Note that this is not used when ``matrix`` is supplied. - :param palette: Palette to use when converting from mode "RGB" - to "P". Available palettes are :data:`Palette.WEB` or - :data:`Palette.ADAPTIVE`. - :param colors: Number of colors to use for the :data:`Palette.ADAPTIVE` - palette. Defaults to 256. - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - self.load() - - has_transparency = self.info.get("transparency") is not None - if not mode and self.mode == "P": - # determine default mode - if self.palette: - mode = self.palette.mode - else: - mode = "RGB" - if mode == "RGB" and has_transparency: - mode = "RGBA" - if not mode or (mode == self.mode and not matrix): - return self.copy() - - if matrix: - # matrix conversion - if mode not in ("L", "RGB"): - msg = "illegal conversion" - raise ValueError(msg) - im = self.im.convert_matrix(mode, matrix) - new = self._new(im) - if has_transparency and self.im.bands == 3: - transparency = new.info["transparency"] - - def convert_transparency(m, v): - v = m[0] * v[0] + m[1] * v[1] + m[2] * v[2] + m[3] * 0.5 - return max(0, min(255, int(v))) - - if mode == "L": - transparency = convert_transparency(matrix, transparency) - elif len(mode) == 3: - transparency = tuple( - convert_transparency(matrix[i * 4 : i * 4 + 4], transparency) - for i in range(0, len(transparency)) - ) - new.info["transparency"] = transparency - return new - - if mode == "P" and self.mode == "RGBA": - return self.quantize(colors) - - trns = None - delete_trns = False - # transparency handling - if has_transparency: - if (self.mode in ("1", "L", "I") and mode in ("LA", "RGBA")) or ( - self.mode == "RGB" and mode == "RGBA" - ): - # Use transparent conversion to promote from transparent - # color to an alpha channel. - new_im = self._new( - self.im.convert_transparent(mode, self.info["transparency"]) - ) - del new_im.info["transparency"] - return new_im - elif self.mode in ("L", "RGB", "P") and mode in ("L", "RGB", "P"): - t = self.info["transparency"] - if isinstance(t, bytes): - # Dragons. 
This can't be represented by a single color - warnings.warn( - "Palette images with Transparency expressed in bytes should be " - "converted to RGBA images" - ) - delete_trns = True - else: - # get the new transparency color. - # use existing conversions - trns_im = Image()._new(core.new(self.mode, (1, 1))) - if self.mode == "P": - trns_im.putpalette(self.palette) - if isinstance(t, tuple): - err = "Couldn't allocate a palette color for transparency" - try: - t = trns_im.palette.getcolor(t, self) - except ValueError as e: - if str(e) == "cannot allocate more than 256 colors": - # If all 256 colors are in use, - # then there is no need for transparency - t = None - else: - raise ValueError(err) from e - if t is None: - trns = None - else: - trns_im.putpixel((0, 0), t) - - if mode in ("L", "RGB"): - trns_im = trns_im.convert(mode) - else: - # can't just retrieve the palette number, got to do it - # after quantization. - trns_im = trns_im.convert("RGB") - trns = trns_im.getpixel((0, 0)) - - elif self.mode == "P" and mode in ("LA", "PA", "RGBA"): - t = self.info["transparency"] - delete_trns = True - - if isinstance(t, bytes): - self.im.putpalettealphas(t) - elif isinstance(t, int): - self.im.putpalettealpha(t, 0) - else: - msg = "Transparency for P mode should be bytes or int" - raise ValueError(msg) - - if mode == "P" and palette == Palette.ADAPTIVE: - im = self.im.quantize(colors) - new = self._new(im) - from . import ImagePalette - - new.palette = ImagePalette.ImagePalette("RGB", new.im.getpalette("RGB")) - if delete_trns: - # This could possibly happen if we requantize to fewer colors. - # The transparency would be totally off in that case. - del new.info["transparency"] - if trns is not None: - try: - new.info["transparency"] = new.palette.getcolor(trns, new) - except Exception: - # if we can't make a transparent color, don't leave the old - # transparency hanging around to mess us up. - del new.info["transparency"] - warnings.warn("Couldn't allocate palette entry for transparency") - return new - - if "LAB" in (self.mode, mode): - other_mode = mode if self.mode == "LAB" else self.mode - if other_mode in ("RGB", "RGBA", "RGBX"): - from . import ImageCms - - srgb = ImageCms.createProfile("sRGB") - lab = ImageCms.createProfile("LAB") - profiles = [lab, srgb] if self.mode == "LAB" else [srgb, lab] - transform = ImageCms.buildTransform( - profiles[0], profiles[1], self.mode, mode - ) - return transform.apply(self) - - # colorspace conversion - if dither is None: - dither = Dither.FLOYDSTEINBERG - - try: - im = self.im.convert(mode, dither) - except ValueError: - try: - # normalize source image and try again - modebase = getmodebase(self.mode) - if modebase == self.mode: - raise - im = self.im.convert(modebase) - im = im.convert(mode, dither) - except KeyError as e: - msg = "illegal conversion" - raise ValueError(msg) from e - - new_im = self._new(im) - if mode == "P" and palette != Palette.ADAPTIVE: - from . import ImagePalette - - new_im.palette = ImagePalette.ImagePalette("RGB", list(range(256)) * 3) - if delete_trns: - # crash fail if we leave a bytes transparency in an rgb/l mode. 
- del new_im.info["transparency"] - if trns is not None: - if new_im.mode == "P": - try: - new_im.info["transparency"] = new_im.palette.getcolor(trns, new_im) - except ValueError as e: - del new_im.info["transparency"] - if str(e) != "cannot allocate more than 256 colors": - # If all 256 colors are in use, - # then there is no need for transparency - warnings.warn( - "Couldn't allocate palette entry for transparency" - ) - else: - new_im.info["transparency"] = trns - return new_im - - def quantize( - self, - colors=256, - method=None, - kmeans=0, - palette=None, - dither=Dither.FLOYDSTEINBERG, - ): - """ - Convert the image to 'P' mode with the specified number - of colors. - - :param colors: The desired number of colors, <= 256 - :param method: :data:`Quantize.MEDIANCUT` (median cut), - :data:`Quantize.MAXCOVERAGE` (maximum coverage), - :data:`Quantize.FASTOCTREE` (fast octree), - :data:`Quantize.LIBIMAGEQUANT` (libimagequant; check support - using :py:func:`PIL.features.check_feature` with - ``feature="libimagequant"``). - - By default, :data:`Quantize.MEDIANCUT` will be used. - - The exception to this is RGBA images. :data:`Quantize.MEDIANCUT` - and :data:`Quantize.MAXCOVERAGE` do not support RGBA images, so - :data:`Quantize.FASTOCTREE` is used by default instead. - :param kmeans: Integer - :param palette: Quantize to the palette of given - :py:class:`PIL.Image.Image`. - :param dither: Dithering method, used when converting from - mode "RGB" to "P" or from "RGB" or "L" to "1". - Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG` - (default). - :returns: A new image - """ - - self.load() - - if method is None: - # defaults: - method = Quantize.MEDIANCUT - if self.mode == "RGBA": - method = Quantize.FASTOCTREE - - if self.mode == "RGBA" and method not in ( - Quantize.FASTOCTREE, - Quantize.LIBIMAGEQUANT, - ): - # Caller specified an invalid mode. - msg = ( - "Fast Octree (method == 2) and libimagequant (method == 3) " - "are the only valid methods for quantizing RGBA images" - ) - raise ValueError(msg) - - if palette: - # use palette from reference image - palette.load() - if palette.mode != "P": - msg = "bad mode for palette image" - raise ValueError(msg) - if self.mode != "RGB" and self.mode != "L": - msg = "only RGB or L mode images can be quantized to a palette" - raise ValueError(msg) - im = self.im.convert("P", dither, palette.im) - new_im = self._new(im) - new_im.palette = palette.palette.copy() - return new_im - - im = self._new(self.im.quantize(colors, method, kmeans)) - - from . import ImagePalette - - mode = im.im.getpalettemode() - palette = im.im.getpalette(mode, mode)[: colors * len(mode)] - im.palette = ImagePalette.ImagePalette(mode, palette) - - return im - - def copy(self): - """ - Copies this image. Use this method if you wish to paste things - into an image, but still retain the original. - - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. - """ - self.load() - return self._new(self.im.copy()) - - __copy__ = copy - - def crop(self, box=None): - """ - Returns a rectangular region from this image. The box is a - 4-tuple defining the left, upper, right, and lower pixel - coordinate. See :ref:`coordinate-system`. - - Note: Prior to Pillow 3.4.0, this was a lazy operation. - - :param box: The crop rectangle, as a (left, upper, right, lower)-tuple. - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. 
- """ - - if box is None: - return self.copy() - - if box[2] < box[0]: - msg = "Coordinate 'right' is less than 'left'" - raise ValueError(msg) - elif box[3] < box[1]: - msg = "Coordinate 'lower' is less than 'upper'" - raise ValueError(msg) - - self.load() - return self._new(self._crop(self.im, box)) - - def _crop(self, im, box): - """ - Returns a rectangular region from the core image object im. - - This is equivalent to calling im.crop((x0, y0, x1, y1)), but - includes additional sanity checks. - - :param im: a core image object - :param box: The crop rectangle, as a (left, upper, right, lower)-tuple. - :returns: A core image object. - """ - - x0, y0, x1, y1 = map(int, map(round, box)) - - absolute_values = (abs(x1 - x0), abs(y1 - y0)) - - _decompression_bomb_check(absolute_values) - - return im.crop((x0, y0, x1, y1)) - - def draft(self, mode, size): - """ - Configures the image file loader so it returns a version of the - image that as closely as possible matches the given mode and - size. For example, you can use this method to convert a color - JPEG to greyscale while loading it. - - If any changes are made, returns a tuple with the chosen ``mode`` and - ``box`` with coordinates of the original image within the altered one. - - Note that this method modifies the :py:class:`~PIL.Image.Image` object - in place. If the image has already been loaded, this method has no - effect. - - Note: This method is not implemented for most images. It is - currently implemented only for JPEG and MPO images. - - :param mode: The requested mode. - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - """ - pass - - def _expand(self, xmargin, ymargin=None): - if ymargin is None: - ymargin = xmargin - self.load() - return self._new(self.im.expand(xmargin, ymargin)) - - def filter(self, filter): - """ - Filters this image using the given filter. For a list of - available filters, see the :py:mod:`~PIL.ImageFilter` module. - - :param filter: Filter kernel. - :returns: An :py:class:`~PIL.Image.Image` object.""" - - from . import ImageFilter - - self.load() - - if isinstance(filter, Callable): - filter = filter() - if not hasattr(filter, "filter"): - msg = "filter argument should be ImageFilter.Filter instance or class" - raise TypeError(msg) - - multiband = isinstance(filter, ImageFilter.MultibandFilter) - if self.im.bands == 1 or multiband: - return self._new(filter.filter(self.im)) - - ims = [] - for c in range(self.im.bands): - ims.append(self._new(filter.filter(self.im.getband(c)))) - return merge(self.mode, ims) - - def getbands(self): - """ - Returns a tuple containing the name of each band in this image. - For example, ``getbands`` on an RGB image returns ("R", "G", "B"). - - :returns: A tuple containing band names. - :rtype: tuple - """ - return ImageMode.getmode(self.mode).bands - - def getbbox(self, *, alpha_only=True): - """ - Calculates the bounding box of the non-zero regions in the - image. - - :param alpha_only: Optional flag, defaulting to ``True``. - If ``True`` and the image has an alpha channel, trim transparent pixels. - Otherwise, trim pixels when all channels are zero. - Keyword-only argument. - :returns: The bounding box is returned as a 4-tuple defining the - left, upper, right, and lower pixel coordinate. See - :ref:`coordinate-system`. If the image is completely empty, this - method returns None. - - """ - - self.load() - return self.im.getbbox(alpha_only) - - def getcolors(self, maxcolors=256): - """ - Returns a list of colors used in this image. 
- - The colors will be in the image's mode. For example, an RGB image will - return a tuple of (red, green, blue) color values, and a P image will - return the index of the color in the palette. - - :param maxcolors: Maximum number of colors. If this number is - exceeded, this method returns None. The default limit is - 256 colors. - :returns: An unsorted list of (count, pixel) values. - """ - - self.load() - if self.mode in ("1", "L", "P"): - h = self.im.histogram() - out = [] - for i in range(256): - if h[i]: - out.append((h[i], i)) - if len(out) > maxcolors: - return None - return out - return self.im.getcolors(maxcolors) - - def getdata(self, band=None): - """ - Returns the contents of this image as a sequence object - containing pixel values. The sequence object is flattened, so - that values for line one follow directly after the values of - line zero, and so on. - - Note that the sequence object returned by this method is an - internal PIL data type, which only supports certain sequence - operations. To convert it to an ordinary sequence (e.g. for - printing), use ``list(im.getdata())``. - - :param band: What band to return. The default is to return - all bands. To return a single band, pass in the index - value (e.g. 0 to get the "R" band from an "RGB" image). - :returns: A sequence-like object. - """ - - self.load() - if band is not None: - return self.im.getband(band) - return self.im # could be abused - - def getextrema(self): - """ - Gets the minimum and maximum pixel values for each band in - the image. - - :returns: For a single-band image, a 2-tuple containing the - minimum and maximum pixel value. For a multi-band image, - a tuple containing one 2-tuple for each band. - """ - - self.load() - if self.im.bands > 1: - extrema = [] - for i in range(self.im.bands): - extrema.append(self.im.getband(i).getextrema()) - return tuple(extrema) - return self.im.getextrema() - - def _getxmp(self, xmp_tags): - def get_name(tag): - return tag.split("}")[1] - - def get_value(element): - value = {get_name(k): v for k, v in element.attrib.items()} - children = list(element) - if children: - for child in children: - name = get_name(child.tag) - child_value = get_value(child) - if name in value: - if not isinstance(value[name], list): - value[name] = [value[name]] - value[name].append(child_value) - else: - value[name] = child_value - elif value: - if element.text: - value["text"] = element.text - else: - return element.text - return value - - if ElementTree is None: - warnings.warn("XMP data cannot be read without defusedxml dependency") - return {} - else: - root = ElementTree.fromstring(xmp_tags) - return {get_name(root.tag): get_value(root)} - - def getexif(self): - """ - Gets EXIF data from the image. - - :returns: an :py:class:`~PIL.Image.Exif` object. 
- """ - if self._exif is None: - self._exif = Exif() - self._exif._loaded = False - elif self._exif._loaded: - return self._exif - self._exif._loaded = True - - exif_info = self.info.get("exif") - if exif_info is None: - if "Raw profile type exif" in self.info: - exif_info = bytes.fromhex( - "".join(self.info["Raw profile type exif"].split("\n")[3:]) - ) - elif hasattr(self, "tag_v2"): - self._exif.bigtiff = self.tag_v2._bigtiff - self._exif.endian = self.tag_v2._endian - self._exif.load_from_fp(self.fp, self.tag_v2._offset) - if exif_info is not None: - self._exif.load(exif_info) - - # XMP tags - if ExifTags.Base.Orientation not in self._exif: - xmp_tags = self.info.get("XML:com.adobe.xmp") - if xmp_tags: - match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags) - if match: - self._exif[ExifTags.Base.Orientation] = int(match[2]) - - return self._exif - - def _reload_exif(self): - if self._exif is None or not self._exif._loaded: - return - self._exif._loaded = False - self.getexif() - - def get_child_images(self): - child_images = [] - exif = self.getexif() - ifds = [] - if ExifTags.Base.SubIFDs in exif: - subifd_offsets = exif[ExifTags.Base.SubIFDs] - if subifd_offsets: - if not isinstance(subifd_offsets, tuple): - subifd_offsets = (subifd_offsets,) - for subifd_offset in subifd_offsets: - ifds.append((exif._get_ifd_dict(subifd_offset), subifd_offset)) - ifd1 = exif.get_ifd(ExifTags.IFD.IFD1) - if ifd1 and ifd1.get(513): - ifds.append((ifd1, exif._info.next)) - - offset = None - for ifd, ifd_offset in ifds: - current_offset = self.fp.tell() - if offset is None: - offset = current_offset - - fp = self.fp - thumbnail_offset = ifd.get(513) - if thumbnail_offset is not None: - try: - thumbnail_offset += self._exif_offset - except AttributeError: - pass - self.fp.seek(thumbnail_offset) - data = self.fp.read(ifd.get(514)) - fp = io.BytesIO(data) - - with open(fp) as im: - if thumbnail_offset is None: - im._frame_pos = [ifd_offset] - im._seek(0) - im.load() - child_images.append(im) - - if offset is not None: - self.fp.seek(offset) - return child_images - - def getim(self): - """ - Returns a capsule that points to the internal image memory. - - :returns: A capsule object. - """ - - self.load() - return self.im.ptr - - def getpalette(self, rawmode="RGB"): - """ - Returns the image palette as a list. - - :param rawmode: The mode in which to return the palette. ``None`` will - return the palette in its current mode. - - .. versionadded:: 9.1.0 - - :returns: A list of color values [r, g, b, ...], or None if the - image has no palette. - """ - - self.load() - try: - mode = self.im.getpalettemode() - except ValueError: - return None # no palette - if rawmode is None: - rawmode = mode - return list(self.im.getpalette(mode, rawmode)) - - def apply_transparency(self): - """ - If a P mode image has a "transparency" key in the info dictionary, - remove the key and instead apply the transparency to the palette. - Otherwise, the image is unchanged. - """ - if self.mode != "P" or "transparency" not in self.info: - return - - from . import ImagePalette - - palette = self.getpalette("RGBA") - transparency = self.info["transparency"] - if isinstance(transparency, bytes): - for i, alpha in enumerate(transparency): - palette[i * 4 + 3] = alpha - else: - palette[transparency * 4 + 3] = 0 - self.palette = ImagePalette.ImagePalette("RGBA", bytes(palette)) - self.palette.dirty = 1 - - del self.info["transparency"] - - def getpixel(self, xy): - """ - Returns the pixel value at a given position. 
- - :param xy: The coordinate, given as (x, y). See - :ref:`coordinate-system`. - :returns: The pixel value. If the image is a multi-layer image, - this method returns a tuple. - """ - - self.load() - if self.pyaccess: - return self.pyaccess.getpixel(xy) - return self.im.getpixel(xy) - - def getprojection(self): - """ - Get projection to x and y axes - - :returns: Two sequences, indicating where there are non-zero - pixels along the X-axis and the Y-axis, respectively. - """ - - self.load() - x, y = self.im.getprojection() - return list(x), list(y) - - def histogram(self, mask=None, extrema=None): - """ - Returns a histogram for the image. The histogram is returned as a - list of pixel counts, one for each pixel value in the source - image. Counts are grouped into 256 bins for each band, even if - the image has more than 8 bits per band. If the image has more - than one band, the histograms for all bands are concatenated (for - example, the histogram for an "RGB" image contains 768 values). - - A bilevel image (mode "1") is treated as a greyscale ("L") image - by this method. - - If a mask is provided, the method returns a histogram for those - parts of the image where the mask image is non-zero. The mask - image must have the same size as the image, and be either a - bi-level image (mode "1") or a greyscale image ("L"). - - :param mask: An optional mask. - :param extrema: An optional tuple of manually-specified extrema. - :returns: A list containing pixel counts. - """ - self.load() - if mask: - mask.load() - return self.im.histogram((0, 0), mask.im) - if self.mode in ("I", "F"): - if extrema is None: - extrema = self.getextrema() - return self.im.histogram(extrema) - return self.im.histogram() - - def entropy(self, mask=None, extrema=None): - """ - Calculates and returns the entropy for the image. - - A bilevel image (mode "1") is treated as a greyscale ("L") - image by this method. - - If a mask is provided, the method employs the histogram for - those parts of the image where the mask image is non-zero. - The mask image must have the same size as the image, and be - either a bi-level image (mode "1") or a greyscale image ("L"). - - :param mask: An optional mask. - :param extrema: An optional tuple of manually-specified extrema. - :returns: A float value representing the image entropy - """ - self.load() - if mask: - mask.load() - return self.im.entropy((0, 0), mask.im) - if self.mode in ("I", "F"): - if extrema is None: - extrema = self.getextrema() - return self.im.entropy(extrema) - return self.im.entropy() - - def paste(self, im, box=None, mask=None): - """ - Pastes another image into this image. The box argument is either - a 2-tuple giving the upper left corner, a 4-tuple defining the - left, upper, right, and lower pixel coordinate, or None (same as - (0, 0)). See :ref:`coordinate-system`. If a 4-tuple is given, the size - of the pasted image must match the size of the region. - - If the modes don't match, the pasted image is converted to the mode of - this image (see the :py:meth:`~PIL.Image.Image.convert` method for - details). - - Instead of an image, the source can be a integer or tuple - containing pixel values. The method then fills the region - with the given color. When creating RGB images, you can - also use color strings as supported by the ImageColor module. - - If a mask is given, this method updates only the regions - indicated by the mask. You can use either "1", "L", "LA", "RGBA" - or "RGBa" images (if present, the alpha band is used as mask). 
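# A short sketch of getpixel(), histogram() and entropy() from the methods above;
# the filename is an assumption.
from PIL import Image

im = Image.open("hopper.png").convert("L")

print("pixel at (0, 0):", im.getpixel((0, 0)))
hist = im.histogram()                    # 256 counts for a single-band "L" image
print("pixels with value 255:", hist[255])
print("entropy:", im.entropy())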
- Where the mask is 255, the given image is copied as is. Where - the mask is 0, the current value is preserved. Intermediate - values will mix the two images together, including their alpha - channels if they have them. - - See :py:meth:`~PIL.Image.Image.alpha_composite` if you want to - combine images with respect to their alpha channels. - - :param im: Source image or pixel value (integer or tuple). - :param box: An optional 4-tuple giving the region to paste into. - If a 2-tuple is used instead, it's treated as the upper left - corner. If omitted or None, the source is pasted into the - upper left corner. - - If an image is given as the second argument and there is no - third, the box defaults to (0, 0), and the second argument - is interpreted as a mask image. - :param mask: An optional mask image. - """ - - if isImageType(box) and mask is None: - # abbreviated paste(im, mask) syntax - mask = box - box = None - - if box is None: - box = (0, 0) - - if len(box) == 2: - # upper left corner given; get size from image or mask - if isImageType(im): - size = im.size - elif isImageType(mask): - size = mask.size - else: - # FIXME: use self.size here? - msg = "cannot determine region size; use 4-item box" - raise ValueError(msg) - box += (box[0] + size[0], box[1] + size[1]) - - if isinstance(im, str): - from . import ImageColor - - im = ImageColor.getcolor(im, self.mode) - - elif isImageType(im): - im.load() - if self.mode != im.mode: - if self.mode != "RGB" or im.mode not in ("LA", "RGBA", "RGBa"): - # should use an adapter for this! - im = im.convert(self.mode) - im = im.im - - self._ensure_mutable() - - if mask: - mask.load() - self.im.paste(im, box, mask.im) - else: - self.im.paste(im, box) - - def alpha_composite(self, im, dest=(0, 0), source=(0, 0)): - """'In-place' analog of Image.alpha_composite. Composites an image - onto this image. - - :param im: image to composite over this one - :param dest: Optional 2 tuple (left, top) specifying the upper - left corner in this (destination) image. - :param source: Optional 2 (left, top) tuple for the upper left - corner in the overlay source image, or 4 tuple (left, top, right, - bottom) for the bounds of the source rectangle - - Performance Note: Not currently implemented in-place in the core layer. - """ - - if not isinstance(source, (list, tuple)): - msg = "Source must be a tuple" - raise ValueError(msg) - if not isinstance(dest, (list, tuple)): - msg = "Destination must be a tuple" - raise ValueError(msg) - if len(source) not in (2, 4): - msg = "Source must be a 2 or 4-tuple" - raise ValueError(msg) - if not len(dest) == 2: - msg = "Destination must be a 2-tuple" - raise ValueError(msg) - if min(source) < 0: - msg = "Source must be non-negative" - raise ValueError(msg) - - if len(source) == 2: - source = source + im.size - - # over image, crop if it's not the whole thing. - if source == (0, 0) + im.size: - overlay = im - else: - overlay = im.crop(source) - - # target for the paste - box = dest + (dest[0] + overlay.width, dest[1] + overlay.height) - - # destination image. don't copy if we're using the whole image. - if box == (0, 0) + self.size: - background = self - else: - background = self.crop(box) - - result = alpha_composite(background, overlay) - self.paste(result, box) - - def point(self, lut, mode=None): - """ - Maps this image through a lookup table or function. - - :param lut: A lookup table, containing 256 (or 65536 if - self.mode=="I" and mode == "L") values per band in the - image. 
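# A sketch of paste() with a mask and of the in-place alpha_composite() described
# above, using synthetic images so it runs stand-alone.
from PIL import Image

base = Image.new("RGBA", (128, 128), (255, 0, 0, 255))
patch = Image.new("RGBA", (64, 64), (0, 0, 255, 128))

base.paste(patch, (0, 0), patch)           # the patch's own alpha band acts as the mask
base.alpha_composite(patch, dest=(64, 64)) # composite into the lower-right quadrant
base.save("composited.png")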
A function can be used instead, it should take a - single argument. The function is called once for each - possible pixel value, and the resulting table is applied to - all bands of the image. - - It may also be an :py:class:`~PIL.Image.ImagePointHandler` - object:: - - class Example(Image.ImagePointHandler): - def point(self, data): - # Return result - :param mode: Output mode (default is same as input). In the - current version, this can only be used if the source image - has mode "L" or "P", and the output has mode "1" or the - source image mode is "I" and the output mode is "L". - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - self.load() - - if isinstance(lut, ImagePointHandler): - return lut.point(self) - - if callable(lut): - # if it isn't a list, it should be a function - if self.mode in ("I", "I;16", "F"): - # check if the function can be used with point_transform - # UNDONE wiredfool -- I think this prevents us from ever doing - # a gamma function point transform on > 8bit images. - scale, offset = _getscaleoffset(lut) - return self._new(self.im.point_transform(scale, offset)) - # for other modes, convert the function to a table - lut = [lut(i) for i in range(256)] * self.im.bands - - if self.mode == "F": - # FIXME: _imaging returns a confusing error message for this case - msg = "point operation not supported for this mode" - raise ValueError(msg) - - if mode != "F": - lut = [round(i) for i in lut] - return self._new(self.im.point(lut, mode)) - - def putalpha(self, alpha): - """ - Adds or replaces the alpha layer in this image. If the image - does not have an alpha layer, it's converted to "LA" or "RGBA". - The new layer must be either "L" or "1". - - :param alpha: The new alpha layer. This can either be an "L" or "1" - image having the same size as this image, or an integer or - other color value. - """ - - self._ensure_mutable() - - if self.mode not in ("LA", "PA", "RGBA"): - # attempt to promote self to a matching alpha mode - try: - mode = getmodebase(self.mode) + "A" - try: - self.im.setmode(mode) - except (AttributeError, ValueError) as e: - # do things the hard way - im = self.im.convert(mode) - if im.mode not in ("LA", "PA", "RGBA"): - raise ValueError from e # sanity check - self.im = im - self.pyaccess = None - self.mode = self.im.mode - except KeyError as e: - msg = "illegal image mode" - raise ValueError(msg) from e - - if self.mode in ("LA", "PA"): - band = 1 - else: - band = 3 - - if isImageType(alpha): - # alpha layer - if alpha.mode not in ("1", "L"): - msg = "illegal image mode" - raise ValueError(msg) - alpha.load() - if alpha.mode == "1": - alpha = alpha.convert("L") - else: - # constant alpha - try: - self.im.fillband(band, alpha) - except (AttributeError, ValueError): - # do things the hard way - alpha = new("L", self.size, alpha) - else: - return - - self.im.putband(alpha.im, band) - - def putdata(self, data, scale=1.0, offset=0.0): - """ - Copies pixel data from a flattened sequence object into the image. The - values should start at the upper left corner (0, 0), continue to the - end of the line, followed directly by the first value of the second - line, and so on. Data will be read until either the image or the - sequence ends. The scale and offset values are used to adjust the - sequence values: **pixel = value*scale + offset**. - - :param data: A flattened sequence object. - :param scale: An optional scale value. The default is 1.0. - :param offset: An optional offset value. The default is 0.0. 
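# A sketch of point() and putalpha() as documented above.
from PIL import Image

im = Image.new("RGB", (64, 64), (10, 200, 30))

inverted = im.point(lambda v: 255 - v)     # the function is evaluated once per possible value
inverted.putalpha(128)                     # promotes "RGB" to "RGBA" with a constant alpha
print(inverted.mode, inverted.getpixel((0, 0)))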
- """ - - self._ensure_mutable() - - self.im.putdata(data, scale, offset) - - def putpalette(self, data, rawmode="RGB"): - """ - Attaches a palette to this image. The image must be a "P", "PA", "L" - or "LA" image. - - The palette sequence must contain at most 256 colors, made up of one - integer value for each channel in the raw mode. - For example, if the raw mode is "RGB", then it can contain at most 768 - values, made up of red, green and blue values for the corresponding pixel - index in the 256 colors. - If the raw mode is "RGBA", then it can contain at most 1024 values, - containing red, green, blue and alpha values. - - Alternatively, an 8-bit string may be used instead of an integer sequence. - - :param data: A palette sequence (either a list or a string). - :param rawmode: The raw mode of the palette. Either "RGB", "RGBA", or a mode - that can be transformed to "RGB" or "RGBA" (e.g. "R", "BGR;15", "RGBA;L"). - """ - from . import ImagePalette - - if self.mode not in ("L", "LA", "P", "PA"): - msg = "illegal image mode" - raise ValueError(msg) - if isinstance(data, ImagePalette.ImagePalette): - palette = ImagePalette.raw(data.rawmode, data.palette) - else: - if not isinstance(data, bytes): - data = bytes(data) - palette = ImagePalette.raw(rawmode, data) - self.mode = "PA" if "A" in self.mode else "P" - self.palette = palette - self.palette.mode = "RGB" - self.load() # install new palette - - def putpixel(self, xy, value): - """ - Modifies the pixel at the given position. The color is given as - a single numerical value for single-band images, and a tuple for - multi-band images. In addition to this, RGB and RGBA tuples are - accepted for P and PA images. - - Note that this method is relatively slow. For more extensive changes, - use :py:meth:`~PIL.Image.Image.paste` or the :py:mod:`~PIL.ImageDraw` - module instead. - - See: - - * :py:meth:`~PIL.Image.Image.paste` - * :py:meth:`~PIL.Image.Image.putdata` - * :py:mod:`~PIL.ImageDraw` - - :param xy: The pixel coordinate, given as (x, y). See - :ref:`coordinate-system`. - :param value: The pixel value. - """ - - if self.readonly: - self._copy() - self.load() - - if self.pyaccess: - return self.pyaccess.putpixel(xy, value) - - if ( - self.mode in ("P", "PA") - and isinstance(value, (list, tuple)) - and len(value) in [3, 4] - ): - # RGB or RGBA value for a P or PA image - if self.mode == "PA": - alpha = value[3] if len(value) == 4 else 255 - value = value[:3] - value = self.palette.getcolor(value, self) - if self.mode == "PA": - value = (value, alpha) - return self.im.putpixel(xy, value) - - def remap_palette(self, dest_map, source_palette=None): - """ - Rewrites the image to reorder the palette. - - :param dest_map: A list of indexes into the original palette. - e.g. ``[1,0]`` would swap a two item palette, and ``list(range(256))`` - is the identity transform. - :param source_palette: Bytes or None. - :returns: An :py:class:`~PIL.Image.Image` object. - - """ - from . 
import ImagePalette - - if self.mode not in ("L", "P"): - msg = "illegal image mode" - raise ValueError(msg) - - bands = 3 - palette_mode = "RGB" - if source_palette is None: - if self.mode == "P": - self.load() - palette_mode = self.im.getpalettemode() - if palette_mode == "RGBA": - bands = 4 - source_palette = self.im.getpalette(palette_mode, palette_mode) - else: # L-mode - source_palette = bytearray(i // 3 for i in range(768)) - - palette_bytes = b"" - new_positions = [0] * 256 - - # pick only the used colors from the palette - for i, oldPosition in enumerate(dest_map): - palette_bytes += source_palette[ - oldPosition * bands : oldPosition * bands + bands - ] - new_positions[oldPosition] = i - - # replace the palette color id of all pixel with the new id - - # Palette images are [0..255], mapped through a 1 or 3 - # byte/color map. We need to remap the whole image - # from palette 1 to palette 2. New_positions is - # an array of indexes into palette 1. Palette 2 is - # palette 1 with any holes removed. - - # We're going to leverage the convert mechanism to use the - # C code to remap the image from palette 1 to palette 2, - # by forcing the source image into 'L' mode and adding a - # mapping 'L' mode palette, then converting back to 'L' - # sans palette thus converting the image bytes, then - # assigning the optimized RGB palette. - - # perf reference, 9500x4000 gif, w/~135 colors - # 14 sec prepatch, 1 sec postpatch with optimization forced. - - mapping_palette = bytearray(new_positions) - - m_im = self.copy() - m_im.mode = "P" - - m_im.palette = ImagePalette.ImagePalette( - palette_mode, palette=mapping_palette * bands - ) - # possibly set palette dirty, then - # m_im.putpalette(mapping_palette, 'L') # converts to 'P' - # or just force it. - # UNDONE -- this is part of the general issue with palettes - m_im.im.putpalette(palette_mode + ";L", m_im.palette.tobytes()) - - m_im = m_im.convert("L") - - m_im.putpalette(palette_bytes, palette_mode) - m_im.palette = ImagePalette.ImagePalette(palette_mode, palette=palette_bytes) - - if "transparency" in self.info: - try: - m_im.info["transparency"] = dest_map.index(self.info["transparency"]) - except ValueError: - if "transparency" in m_im.info: - del m_im.info["transparency"] - - return m_im - - def _get_safe_box(self, size, resample, box): - """Expands the box so it includes adjacent pixels - that may be used by resampling with the given resampling filter. - """ - filter_support = _filters_support[resample] - 0.5 - scale_x = (box[2] - box[0]) / size[0] - scale_y = (box[3] - box[1]) / size[1] - support_x = filter_support * scale_x - support_y = filter_support * scale_y - - return ( - max(0, int(box[0] - support_x)), - max(0, int(box[1] - support_y)), - min(self.size[0], math.ceil(box[2] + support_x)), - min(self.size[1], math.ceil(box[3] + support_y)), - ) - - def resize(self, size, resample=None, box=None, reducing_gap=None): - """ - Returns a resized copy of this image. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param resample: An optional resampling filter. This can be - one of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`, - :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`, - :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`. - If the image has mode "1" or "P", it is always set to - :py:data:`Resampling.NEAREST`. If the image mode specifies a number - of bits, such as "I;16", then the default filter is - :py:data:`Resampling.NEAREST`. 
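# A sketch of putpalette() and putpixel() on a "P" image, per the palette methods
# above; the 3-color palette here is arbitrary illustration data.
from PIL import Image

im = Image.new("P", (32, 32), 0)
im.putpalette([0, 0, 0,  255, 0, 0,  0, 0, 255])   # indexes 0..2: black, red, blue
im.putpixel((0, 0), 1)                             # write a palette index directly
im.putpixel((1, 0), (0, 0, 255))                   # RGB tuples are also accepted for "P"
print(im.convert("RGB").getpixel((1, 0)))          # (0, 0, 255)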
Otherwise, the default filter is - :py:data:`Resampling.BICUBIC`. See: :ref:`concept-filters`. - :param box: An optional 4-tuple of floats providing - the source image region to be scaled. - The values must be within (0, 0, width, height) rectangle. - If omitted or None, the entire source is used. - :param reducing_gap: Apply optimization by resizing the image - in two steps. First, reducing the image by integer times - using :py:meth:`~PIL.Image.Image.reduce`. - Second, resizing using regular resampling. The last step - changes size no less than by ``reducing_gap`` times. - ``reducing_gap`` may be None (no first step is performed) - or should be greater than 1.0. The bigger ``reducing_gap``, - the closer the result to the fair resampling. - The smaller ``reducing_gap``, the faster resizing. - With ``reducing_gap`` greater or equal to 3.0, the result is - indistinguishable from fair resampling in most cases. - The default value is None (no optimization). - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if resample is None: - type_special = ";" in self.mode - resample = Resampling.NEAREST if type_special else Resampling.BICUBIC - elif resample not in ( - Resampling.NEAREST, - Resampling.BILINEAR, - Resampling.BICUBIC, - Resampling.LANCZOS, - Resampling.BOX, - Resampling.HAMMING, - ): - msg = f"Unknown resampling filter ({resample})." - - filters = [ - f"{filter[1]} ({filter[0]})" - for filter in ( - (Resampling.NEAREST, "Image.Resampling.NEAREST"), - (Resampling.LANCZOS, "Image.Resampling.LANCZOS"), - (Resampling.BILINEAR, "Image.Resampling.BILINEAR"), - (Resampling.BICUBIC, "Image.Resampling.BICUBIC"), - (Resampling.BOX, "Image.Resampling.BOX"), - (Resampling.HAMMING, "Image.Resampling.HAMMING"), - ) - ] - msg += " Use " + ", ".join(filters[:-1]) + " or " + filters[-1] - raise ValueError(msg) - - if reducing_gap is not None and reducing_gap < 1.0: - msg = "reducing_gap must be 1.0 or greater" - raise ValueError(msg) - - size = tuple(size) - - self.load() - if box is None: - box = (0, 0) + self.size - else: - box = tuple(box) - - if self.size == size and box == (0, 0) + self.size: - return self.copy() - - if self.mode in ("1", "P"): - resample = Resampling.NEAREST - - if self.mode in ["LA", "RGBA"] and resample != Resampling.NEAREST: - im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - im = im.resize(size, resample, box) - return im.convert(self.mode) - - self.load() - - if reducing_gap is not None and resample != Resampling.NEAREST: - factor_x = int((box[2] - box[0]) / size[0] / reducing_gap) or 1 - factor_y = int((box[3] - box[1]) / size[1] / reducing_gap) or 1 - if factor_x > 1 or factor_y > 1: - reduce_box = self._get_safe_box(size, resample, box) - factor = (factor_x, factor_y) - if callable(self.reduce): - self = self.reduce(factor, box=reduce_box) - else: - self = Image.reduce(self, factor, box=reduce_box) - box = ( - (box[0] - reduce_box[0]) / factor_x, - (box[1] - reduce_box[1]) / factor_y, - (box[2] - reduce_box[0]) / factor_x, - (box[3] - reduce_box[1]) / factor_y, - ) - - return self._new(self.im.resize(size, resample, box)) - - def reduce(self, factor, box=None): - """ - Returns a copy of the image reduced ``factor`` times. - If the size of the image is not dividable by ``factor``, - the resulting size will be rounded up. - - :param factor: A greater than 0 integer or tuple of two integers - for width and height separately. - :param box: An optional 4-tuple of ints providing - the source image region to be reduced. 
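# A sketch of resize() with an explicit filter, box and reducing_gap, matching the
# parameters documented above; the filename is an assumption.
from PIL import Image

im = Image.open("hopper.png")
small = im.resize(
    (128, 128),
    resample=Image.Resampling.LANCZOS,
    box=(0, 0, im.width, im.height),     # use the whole source region
    reducing_gap=3.0,                    # two-step resize, close to "fair" resampling
)
small.save("resized.png")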
- The values must be within ``(0, 0, width, height)`` rectangle. - If omitted or ``None``, the entire source is used. - """ - if not isinstance(factor, (list, tuple)): - factor = (factor, factor) - - if box is None: - box = (0, 0) + self.size - else: - box = tuple(box) - - if factor == (1, 1) and box == (0, 0) + self.size: - return self.copy() - - if self.mode in ["LA", "RGBA"]: - im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - im = im.reduce(factor, box) - return im.convert(self.mode) - - self.load() - - return self._new(self.im.reduce(factor, box)) - - def rotate( - self, - angle, - resample=Resampling.NEAREST, - expand=0, - center=None, - translate=None, - fillcolor=None, - ): - """ - Returns a rotated copy of this image. This method returns a - copy of this image, rotated the given number of degrees counter - clockwise around its centre. - - :param angle: In degrees counter clockwise. - :param resample: An optional resampling filter. This can be - one of :py:data:`Resampling.NEAREST` (use nearest neighbour), - :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2 - environment), or :py:data:`Resampling.BICUBIC` (cubic spline - interpolation in a 4x4 environment). If omitted, or if the image has - mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`. - See :ref:`concept-filters`. - :param expand: Optional expansion flag. If true, expands the output - image to make it large enough to hold the entire rotated image. - If false or omitted, make the output image the same size as the - input image. Note that the expand flag assumes rotation around - the center and no translation. - :param center: Optional center of rotation (a 2-tuple). Origin is - the upper left corner. Default is the center of the image. - :param translate: An optional post-rotate translation (a 2-tuple). - :param fillcolor: An optional color for area outside the rotated image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - angle = angle % 360.0 - - # Fast paths regardless of filter, as long as we're not - # translating or changing the center. - if not (center or translate): - if angle == 0: - return self.copy() - if angle == 180: - return self.transpose(Transpose.ROTATE_180) - if angle in (90, 270) and (expand or self.width == self.height): - return self.transpose( - Transpose.ROTATE_90 if angle == 90 else Transpose.ROTATE_270 - ) - - # Calculate the affine matrix. Note that this is the reverse - # transformation (from destination image to source) because we - # want to interpolate the (discrete) destination pixel from - # the local area around the (floating) source pixel. - - # The matrix we actually want (note that it operates from the right): - # (1, 0, tx) (1, 0, cx) ( cos a, sin a, 0) (1, 0, -cx) - # (0, 1, ty) * (0, 1, cy) * (-sin a, cos a, 0) * (0, 1, -cy) - # (0, 0, 1) (0, 0, 1) ( 0, 0, 1) (0, 0, 1) - - # The reverse matrix is thus: - # (1, 0, cx) ( cos -a, sin -a, 0) (1, 0, -cx) (1, 0, -tx) - # (0, 1, cy) * (-sin -a, cos -a, 0) * (0, 1, -cy) * (0, 1, -ty) - # (0, 0, 1) ( 0, 0, 1) (0, 0, 1) (0, 0, 1) - - # In any case, the final translation may be updated at the end to - # compensate for the expand flag. - - w, h = self.size - - if translate is None: - post_trans = (0, 0) - else: - post_trans = translate - if center is None: - # FIXME These should be rounded to ints? 
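# A sketch of reduce(), which shrinks by an integer factor as described above;
# sizes that do not divide evenly are rounded up.
from PIL import Image

im = Image.new("RGB", (103, 52), (128, 128, 128))
quarter = im.reduce(4)                   # same as reduce((4, 4))
print(quarter.size)                      # (26, 13)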
- rotn_center = (w / 2.0, h / 2.0) - else: - rotn_center = center - - angle = -math.radians(angle) - matrix = [ - round(math.cos(angle), 15), - round(math.sin(angle), 15), - 0.0, - round(-math.sin(angle), 15), - round(math.cos(angle), 15), - 0.0, - ] - - def transform(x, y, matrix): - (a, b, c, d, e, f) = matrix - return a * x + b * y + c, d * x + e * y + f - - matrix[2], matrix[5] = transform( - -rotn_center[0] - post_trans[0], -rotn_center[1] - post_trans[1], matrix - ) - matrix[2] += rotn_center[0] - matrix[5] += rotn_center[1] - - if expand: - # calculate output size - xx = [] - yy = [] - for x, y in ((0, 0), (w, 0), (w, h), (0, h)): - x, y = transform(x, y, matrix) - xx.append(x) - yy.append(y) - nw = math.ceil(max(xx)) - math.floor(min(xx)) - nh = math.ceil(max(yy)) - math.floor(min(yy)) - - # We multiply a translation matrix from the right. Because of its - # special form, this is the same as taking the image of the - # translation vector as new translation vector. - matrix[2], matrix[5] = transform(-(nw - w) / 2.0, -(nh - h) / 2.0, matrix) - w, h = nw, nh - - return self.transform( - (w, h), Transform.AFFINE, matrix, resample, fillcolor=fillcolor - ) - - def save(self, fp, format=None, **params): - """ - Saves this image under the given filename. If no format is - specified, the format to use is determined from the filename - extension, if possible. - - Keyword options can be used to provide additional instructions - to the writer. If a writer doesn't recognise an option, it is - silently ignored. The available options are described in the - :doc:`image format documentation - <../handbook/image-file-formats>` for each writer. - - You can use a file object instead of a filename. In this case, - you must always specify the format. The file object must - implement the ``seek``, ``tell``, and ``write`` - methods, and be opened in binary mode. - - :param fp: A filename (string), pathlib.Path object or file object. - :param format: Optional format override. If omitted, the - format to use is determined from the filename extension. - If a file object was used instead of a filename, this - parameter should always be used. - :param params: Extra parameters to the image writer. - :returns: None - :exception ValueError: If the output format could not be determined - from the file name. Use the format option to solve this. - :exception OSError: If the file could not be written. The file - may have been created, and may contain partial data. - """ - - filename = "" - open_fp = False - if isinstance(fp, Path): - filename = str(fp) - open_fp = True - elif is_path(fp): - filename = fp - open_fp = True - elif fp == sys.stdout: - try: - fp = sys.stdout.buffer - except AttributeError: - pass - if not filename and hasattr(fp, "name") and is_path(fp.name): - # only set the name for metadata purposes - filename = fp.name - - # may mutate self! 
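# A sketch of rotate() with expand and a fill color, per the docstring above;
# the filename is an assumption.
from PIL import Image

im = Image.open("hopper.png")
rotated = im.rotate(
    30,                                   # degrees counter-clockwise
    resample=Image.Resampling.BICUBIC,
    expand=True,                          # grow the canvas to hold the whole result
    fillcolor=(255, 255, 255),            # area outside the rotated image
)
rotated.save("rotated.png")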
- self._ensure_mutable() - - save_all = params.pop("save_all", False) - self.encoderinfo = params - self.encoderconfig = () - - preinit() - - ext = os.path.splitext(filename)[1].lower() - - if not format: - if ext not in EXTENSION: - init() - try: - format = EXTENSION[ext] - except KeyError as e: - msg = f"unknown file extension: {ext}" - raise ValueError(msg) from e - - if format.upper() not in SAVE: - init() - if save_all: - save_handler = SAVE_ALL[format.upper()] - else: - save_handler = SAVE[format.upper()] - - created = False - if open_fp: - created = not os.path.exists(filename) - if params.get("append", False): - # Open also for reading ("+"), because TIFF save_all - # writer needs to go back and edit the written data. - fp = builtins.open(filename, "r+b") - else: - fp = builtins.open(filename, "w+b") - - try: - save_handler(self, fp, filename) - except Exception: - if open_fp: - fp.close() - if created: - try: - os.remove(filename) - except PermissionError: - pass - raise - if open_fp: - fp.close() - - def seek(self, frame): - """ - Seeks to the given frame in this sequence file. If you seek - beyond the end of the sequence, the method raises an - ``EOFError`` exception. When a sequence file is opened, the - library automatically seeks to frame 0. - - See :py:meth:`~PIL.Image.Image.tell`. - - If defined, :attr:`~PIL.Image.Image.n_frames` refers to the - number of available frames. - - :param frame: Frame number, starting at 0. - :exception EOFError: If the call attempts to seek beyond the end - of the sequence. - """ - - # overridden by file handlers - if frame != 0: - raise EOFError - - def show(self, title=None): - """ - Displays this image. This method is mainly intended for debugging purposes. - - This method calls :py:func:`PIL.ImageShow.show` internally. You can use - :py:func:`PIL.ImageShow.register` to override its default behaviour. - - The image is first saved to a temporary file. By default, it will be in - PNG format. - - On Unix, the image is then opened using the **xdg-open**, **display**, - **gm**, **eog** or **xv** utility, depending on which one can be found. - - On macOS, the image is opened with the native Preview application. - - On Windows, the image is opened with the standard PNG display utility. - - :param title: Optional title to use for the image window, where possible. - """ - - _show(self, title=title) - - def split(self): - """ - Split this image into individual bands. This method returns a - tuple of individual image bands from an image. For example, - splitting an "RGB" image creates three new images each - containing a copy of one of the original bands (red, green, - blue). - - If you need only one band, :py:meth:`~PIL.Image.Image.getchannel` - method can be more convenient and faster. - - :returns: A tuple containing bands. - """ - - self.load() - if self.im.bands == 1: - ims = [self.copy()] - else: - ims = map(self._new, self.im.split()) - return tuple(ims) - - def getchannel(self, channel): - """ - Returns an image containing a single channel of the source image. - - :param channel: What channel to return. Could be index - (0 for "R" channel of "RGB") or channel name - ("A" for alpha channel of "RGBA"). - :returns: An image in "L" mode. - - .. 
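# A sketch of save() to a file object (where the format must be given explicitly)
# and of split(), as documented above.
import io
from PIL import Image

im = Image.new("RGB", (16, 16), (200, 10, 10))

buf = io.BytesIO()
im.save(buf, format="PNG")               # format is required when saving to a file object
print("encoded bytes:", buf.tell())

r, g, b = im.split()                     # one "L" image per band
print(r.getextrema())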
versionadded:: 4.3.0 - """ - self.load() - - if isinstance(channel, str): - try: - channel = self.getbands().index(channel) - except ValueError as e: - msg = f'The image has no channel "{channel}"' - raise ValueError(msg) from e - - return self._new(self.im.getband(channel)) - - def tell(self): - """ - Returns the current frame number. See :py:meth:`~PIL.Image.Image.seek`. - - If defined, :attr:`~PIL.Image.Image.n_frames` refers to the - number of available frames. - - :returns: Frame number, starting with 0. - """ - return 0 - - def thumbnail(self, size, resample=Resampling.BICUBIC, reducing_gap=2.0): - """ - Make this image into a thumbnail. This method modifies the - image to contain a thumbnail version of itself, no larger than - the given size. This method calculates an appropriate thumbnail - size to preserve the aspect of the image, calls the - :py:meth:`~PIL.Image.Image.draft` method to configure the file reader - (where applicable), and finally resizes the image. - - Note that this function modifies the :py:class:`~PIL.Image.Image` - object in place. If you need to use the full resolution image as well, - apply this method to a :py:meth:`~PIL.Image.Image.copy` of the original - image. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param resample: Optional resampling filter. This can be one - of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`, - :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`, - :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`. - If omitted, it defaults to :py:data:`Resampling.BICUBIC`. - (was :py:data:`Resampling.NEAREST` prior to version 2.5.0). - See: :ref:`concept-filters`. - :param reducing_gap: Apply optimization by resizing the image - in two steps. First, reducing the image by integer times - using :py:meth:`~PIL.Image.Image.reduce` or - :py:meth:`~PIL.Image.Image.draft` for JPEG images. - Second, resizing using regular resampling. The last step - changes size no less than by ``reducing_gap`` times. - ``reducing_gap`` may be None (no first step is performed) - or should be greater than 1.0. The bigger ``reducing_gap``, - the closer the result to the fair resampling. - The smaller ``reducing_gap``, the faster resizing. - With ``reducing_gap`` greater or equal to 3.0, the result is - indistinguishable from fair resampling in most cases. - The default value is 2.0 (very close to fair resampling - while still being faster in many cases). 
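# A sketch of getchannel(), the faster alternative to split() when only one band
# is needed, as noted above, plus tell() for the current frame.
from PIL import Image

im = Image.new("RGBA", (8, 8), (1, 2, 3, 4))
alpha = im.getchannel("A")               # by name, or getchannel(3) by index
print(alpha.mode, alpha.getpixel((0, 0)))    # L 4
print("current frame:", im.tell())           # 0 for single-frame images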
- :returns: None - """ - - provided_size = tuple(map(math.floor, size)) - - def preserve_aspect_ratio(): - def round_aspect(number, key): - return max(min(math.floor(number), math.ceil(number), key=key), 1) - - x, y = provided_size - if x >= self.width and y >= self.height: - return - - aspect = self.width / self.height - if x / y >= aspect: - x = round_aspect(y * aspect, key=lambda n: abs(aspect - n / y)) - else: - y = round_aspect( - x / aspect, key=lambda n: 0 if n == 0 else abs(aspect - x / n) - ) - return x, y - - box = None - if reducing_gap is not None: - size = preserve_aspect_ratio() - if size is None: - return - - res = self.draft(None, (size[0] * reducing_gap, size[1] * reducing_gap)) - if res is not None: - box = res[1] - if box is None: - self.load() - - # load() may have changed the size of the image - size = preserve_aspect_ratio() - if size is None: - return - - if self.size != size: - im = self.resize(size, resample, box=box, reducing_gap=reducing_gap) - - self.im = im.im - self._size = size - self.mode = self.im.mode - - self.readonly = 0 - self.pyaccess = None - - # FIXME: the different transform methods need further explanation - # instead of bloating the method docs, add a separate chapter. - def transform( - self, - size, - method, - data=None, - resample=Resampling.NEAREST, - fill=1, - fillcolor=None, - ): - """ - Transforms this image. This method creates a new image with the - given size, and the same mode as the original, and copies data - to the new image using the given transform. - - :param size: The output size in pixels, as a 2-tuple: - (width, height). - :param method: The transformation method. This is one of - :py:data:`Transform.EXTENT` (cut out a rectangular subregion), - :py:data:`Transform.AFFINE` (affine transform), - :py:data:`Transform.PERSPECTIVE` (perspective transform), - :py:data:`Transform.QUAD` (map a quadrilateral to a rectangle), or - :py:data:`Transform.MESH` (map a number of source quadrilaterals - in one operation). - - It may also be an :py:class:`~PIL.Image.ImageTransformHandler` - object:: - - class Example(Image.ImageTransformHandler): - def transform(self, size, data, resample, fill=1): - # Return result - - It may also be an object with a ``method.getdata`` method - that returns a tuple supplying new ``method`` and ``data`` values:: - - class Example: - def getdata(self): - method = Image.Transform.EXTENT - data = (0, 0, 100, 100) - return method, data - :param data: Extra data to the transformation method. - :param resample: Optional resampling filter. It can be one of - :py:data:`Resampling.NEAREST` (use nearest neighbour), - :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2 - environment), or :py:data:`Resampling.BICUBIC` (cubic spline - interpolation in a 4x4 environment). If omitted, or if the image - has mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`. - See: :ref:`concept-filters`. - :param fill: If ``method`` is an - :py:class:`~PIL.Image.ImageTransformHandler` object, this is one of - the arguments passed to it. Otherwise, it is unused. - :param fillcolor: Optional fill color for the area outside the - transform in the output image. - :returns: An :py:class:`~PIL.Image.Image` object. 
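# A sketch of thumbnail(), which resizes in place and preserves the aspect ratio,
# per the docstring above; the filename is an assumption.
from PIL import Image

im = Image.open("hopper.png")
im.thumbnail((128, 128))                 # modifies im; never enlarges the image
print(im.size)                           # no larger than 128 in either dimension
im.save("thumb.png")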
- """ - - if self.mode in ("LA", "RGBA") and resample != Resampling.NEAREST: - return ( - self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - .transform(size, method, data, resample, fill, fillcolor) - .convert(self.mode) - ) - - if isinstance(method, ImageTransformHandler): - return method.transform(size, self, resample=resample, fill=fill) - - if hasattr(method, "getdata"): - # compatibility w. old-style transform objects - method, data = method.getdata() - - if data is None: - msg = "missing method data" - raise ValueError(msg) - - im = new(self.mode, size, fillcolor) - if self.mode == "P" and self.palette: - im.palette = self.palette.copy() - im.info = self.info.copy() - if method == Transform.MESH: - # list of quads - for box, quad in data: - im.__transformer( - box, self, Transform.QUAD, quad, resample, fillcolor is None - ) - else: - im.__transformer( - (0, 0) + size, self, method, data, resample, fillcolor is None - ) - - return im - - def __transformer( - self, box, image, method, data, resample=Resampling.NEAREST, fill=1 - ): - w = box[2] - box[0] - h = box[3] - box[1] - - if method == Transform.AFFINE: - data = data[:6] - - elif method == Transform.EXTENT: - # convert extent to an affine transform - x0, y0, x1, y1 = data - xs = (x1 - x0) / w - ys = (y1 - y0) / h - method = Transform.AFFINE - data = (xs, 0, x0, 0, ys, y0) - - elif method == Transform.PERSPECTIVE: - data = data[:8] - - elif method == Transform.QUAD: - # quadrilateral warp. data specifies the four corners - # given as NW, SW, SE, and NE. - nw = data[:2] - sw = data[2:4] - se = data[4:6] - ne = data[6:8] - x0, y0 = nw - As = 1.0 / w - At = 1.0 / h - data = ( - x0, - (ne[0] - x0) * As, - (sw[0] - x0) * At, - (se[0] - sw[0] - ne[0] + x0) * As * At, - y0, - (ne[1] - y0) * As, - (sw[1] - y0) * At, - (se[1] - sw[1] - ne[1] + y0) * As * At, - ) - - else: - msg = "unknown transformation method" - raise ValueError(msg) - - if resample not in ( - Resampling.NEAREST, - Resampling.BILINEAR, - Resampling.BICUBIC, - ): - if resample in (Resampling.BOX, Resampling.HAMMING, Resampling.LANCZOS): - msg = { - Resampling.BOX: "Image.Resampling.BOX", - Resampling.HAMMING: "Image.Resampling.HAMMING", - Resampling.LANCZOS: "Image.Resampling.LANCZOS", - }[resample] + f" ({resample}) cannot be used." - else: - msg = f"Unknown resampling filter ({resample})." - - filters = [ - f"{filter[1]} ({filter[0]})" - for filter in ( - (Resampling.NEAREST, "Image.Resampling.NEAREST"), - (Resampling.BILINEAR, "Image.Resampling.BILINEAR"), - (Resampling.BICUBIC, "Image.Resampling.BICUBIC"), - ) - ] - msg += " Use " + ", ".join(filters[:-1]) + " or " + filters[-1] - raise ValueError(msg) - - image.load() - - self.load() - - if image.mode in ("1", "P"): - resample = Resampling.NEAREST - - self.im.transform2(box, image.im, method, data, resample, fill) - - def transpose(self, method): - """ - Transpose image (flip or rotate in 90 degree steps) - - :param method: One of :py:data:`Transpose.FLIP_LEFT_RIGHT`, - :py:data:`Transpose.FLIP_TOP_BOTTOM`, :py:data:`Transpose.ROTATE_90`, - :py:data:`Transpose.ROTATE_180`, :py:data:`Transpose.ROTATE_270`, - :py:data:`Transpose.TRANSPOSE` or :py:data:`Transpose.TRANSVERSE`. - :returns: Returns a flipped or rotated copy of this image. - """ - - self.load() - return self._new(self.im.transpose(method)) - - def effect_spread(self, distance): - """ - Randomly spread pixels in an image. - - :param distance: Distance to spread pixels. 
- """ - self.load() - return self._new(self.im.effect_spread(distance)) - - def toqimage(self): - """Returns a QImage copy of this image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.toqimage(self) - - def toqpixmap(self): - """Returns a QPixmap copy of this image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.toqpixmap(self) - - -# -------------------------------------------------------------------- -# Abstract handlers. - - -class ImagePointHandler: - """ - Used as a mixin by point transforms - (for use with :py:meth:`~PIL.Image.Image.point`) - """ - - pass - - -class ImageTransformHandler: - """ - Used as a mixin by geometry transforms - (for use with :py:meth:`~PIL.Image.Image.transform`) - """ - - pass - - -# -------------------------------------------------------------------- -# Factories - -# -# Debugging - - -def _wedge(): - """Create greyscale wedge (for debugging only)""" - - return Image()._new(core.wedge("L")) - - -def _check_size(size): - """ - Common check to enforce type and sanity check on size tuples - - :param size: Should be a 2 tuple of (width, height) - :returns: True, or raises a ValueError - """ - - if not isinstance(size, (list, tuple)): - msg = "Size must be a tuple" - raise ValueError(msg) - if len(size) != 2: - msg = "Size must be a tuple of length 2" - raise ValueError(msg) - if size[0] < 0 or size[1] < 0: - msg = "Width and height must be >= 0" - raise ValueError(msg) - - return True - - -def new(mode, size, color=0): - """ - Creates a new image with the given mode and size. - - :param mode: The mode to use for the new image. See: - :ref:`concept-modes`. - :param size: A 2-tuple, containing (width, height) in pixels. - :param color: What color to use for the image. Default is black. - If given, this should be a single integer or floating point value - for single-band modes, and a tuple for multi-band modes (one value - per band). When creating RGB or HSV images, you can also use color - strings as supported by the ImageColor module. If the color is - None, the image is not initialised. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - _check_size(size) - - if color is None: - # don't initialize - return Image()._new(core.new(mode, size)) - - if isinstance(color, str): - # css3-style specifier - - from . import ImageColor - - color = ImageColor.getcolor(color, mode) - - im = Image() - if mode == "P" and isinstance(color, (list, tuple)) and len(color) in [3, 4]: - # RGB or RGBA value for a P image - from . import ImagePalette - - im.palette = ImagePalette.ImagePalette() - color = im.palette.getcolor(color) - return im._new(core.fill(mode, size, color)) - - -def frombytes(mode, size, data, decoder_name="raw", *args): - """ - Creates a copy of an image memory from pixel data in a buffer. - - In its simplest form, this function takes three arguments - (mode, size, and unpacked pixel data). - - You can also use any pixel decoder supported by PIL. For more - information on available decoders, see the section - :ref:`Writing Your Own File Codec `. - - Note that this function decodes pixel data only, not entire images. - If you have an entire image in a string, wrap it in a - :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load - it. - - :param mode: The image mode. See: :ref:`concept-modes`. - :param size: The image size. 
- :param data: A byte buffer containing raw data for the given mode. - :param decoder_name: What decoder to use. - :param args: Additional parameters for the given decoder. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - _check_size(size) - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if decoder_name == "raw" and args == (): - args = mode - - im = new(mode, size) - im.frombytes(data, decoder_name, args) - return im - - -def frombuffer(mode, size, data, decoder_name="raw", *args): - """ - Creates an image memory referencing pixel data in a byte buffer. - - This function is similar to :py:func:`~PIL.Image.frombytes`, but uses data - in the byte buffer, where possible. This means that changes to the - original buffer object are reflected in this image). Not all modes can - share memory; supported modes include "L", "RGBX", "RGBA", and "CMYK". - - Note that this function decodes pixel data only, not entire images. - If you have an entire image file in a string, wrap it in a - :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load it. - - In the current version, the default parameters used for the "raw" decoder - differs from that used for :py:func:`~PIL.Image.frombytes`. This is a - bug, and will probably be fixed in a future release. The current release - issues a warning if you do this; to disable the warning, you should provide - the full set of parameters. See below for details. - - :param mode: The image mode. See: :ref:`concept-modes`. - :param size: The image size. - :param data: A bytes or other buffer object containing raw - data for the given mode. - :param decoder_name: What decoder to use. - :param args: Additional parameters for the given decoder. For the - default encoder ("raw"), it's recommended that you provide the - full set of parameters:: - - frombuffer(mode, size, data, "raw", mode, 0, 1) - - :returns: An :py:class:`~PIL.Image.Image` object. - - .. versionadded:: 1.1.4 - """ - - _check_size(size) - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if decoder_name == "raw": - if args == (): - args = mode, 0, 1 - if args[0] in _MAPMODES: - im = new(mode, (1, 1)) - im = im._new(core.map_buffer(data, size, decoder_name, 0, args)) - if mode == "P": - from . import ImagePalette - - im.palette = ImagePalette.ImagePalette("RGB", im.im.getpalette("RGB")) - im.readonly = 1 - return im - - return frombytes(mode, size, data, decoder_name, args) - - -def fromarray(obj, mode=None): - """ - Creates an image memory from an object exporting the array interface - (using the buffer protocol):: - - from PIL import Image - import numpy as np - a = np.zeros((5, 5)) - im = Image.fromarray(a) - - If ``obj`` is not contiguous, then the ``tobytes`` method is called - and :py:func:`~PIL.Image.frombuffer` is used. - - In the case of NumPy, be aware that Pillow modes do not always correspond - to NumPy dtypes. Pillow modes only offer 1-bit pixels, 8-bit pixels, - 32-bit signed integer pixels, and 32-bit floating point pixels. - - Pillow images can also be converted to arrays:: - - from PIL import Image - import numpy as np - im = Image.open("hopper.jpg") - a = np.asarray(im) - - When converting Pillow images to arrays however, only pixel values are - transferred. This means that P and PA mode images will lose their palette. - - :param obj: Object with array interface - :param mode: Optional mode to use when reading ``obj``. 
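# A sketch of frombytes() and frombuffer() as documented above; frombuffer() is
# given the full "raw" argument set, as the docstring recommends.
from PIL import Image

data = bytes(range(256)) * 256                         # 256x256 bytes of greyscale data
im1 = Image.frombytes("L", (256, 256), data)           # copies the pixel data
im2 = Image.frombuffer("L", (256, 256), data, "raw", "L", 0, 1)   # may share memory
print(im1.size, im2.readonly)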
Will be determined from - type if ``None``. - - This will not be used to convert the data after reading, but will be used to - change how the data is read:: - - from PIL import Image - import numpy as np - a = np.full((1, 1), 300) - im = Image.fromarray(a, mode="L") - im.getpixel((0, 0)) # 44 - im = Image.fromarray(a, mode="RGB") - im.getpixel((0, 0)) # (44, 1, 0) - - See: :ref:`concept-modes` for general information about modes. - :returns: An image object. - - .. versionadded:: 1.1.6 - """ - arr = obj.__array_interface__ - shape = arr["shape"] - ndim = len(shape) - strides = arr.get("strides", None) - if mode is None: - try: - typekey = (1, 1) + shape[2:], arr["typestr"] - except KeyError as e: - msg = "Cannot handle this data type" - raise TypeError(msg) from e - try: - mode, rawmode = _fromarray_typemap[typekey] - except KeyError as e: - msg = "Cannot handle this data type: %s, %s" % typekey - raise TypeError(msg) from e - else: - rawmode = mode - if mode in ["1", "L", "I", "P", "F"]: - ndmax = 2 - elif mode == "RGB": - ndmax = 3 - else: - ndmax = 4 - if ndim > ndmax: - msg = f"Too many dimensions: {ndim} > {ndmax}." - raise ValueError(msg) - - size = 1 if ndim == 1 else shape[1], shape[0] - if strides is not None: - if hasattr(obj, "tobytes"): - obj = obj.tobytes() - else: - obj = obj.tostring() - - return frombuffer(mode, size, obj, "raw", rawmode, 0, 1) - - -def fromqimage(im): - """Creates an image instance from a QImage image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.fromqimage(im) - - -def fromqpixmap(im): - """Creates an image instance from a QPixmap image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.fromqpixmap(im) - - -_fromarray_typemap = { - # (shape, typestr) => mode, rawmode - # first two members of shape are set to one - ((1, 1), "|b1"): ("1", "1;8"), - ((1, 1), "|u1"): ("L", "L"), - ((1, 1), "|i1"): ("I", "I;8"), - ((1, 1), "u2"): ("I", "I;16B"), - ((1, 1), "i2"): ("I", "I;16BS"), - ((1, 1), "u4"): ("I", "I;32B"), - ((1, 1), "i4"): ("I", "I;32BS"), - ((1, 1), "f4"): ("F", "F;32BF"), - ((1, 1), "f8"): ("F", "F;64BF"), - ((1, 1, 2), "|u1"): ("LA", "LA"), - ((1, 1, 3), "|u1"): ("RGB", "RGB"), - ((1, 1, 4), "|u1"): ("RGBA", "RGBA"), - # shortcuts: - ((1, 1), _ENDIAN + "i4"): ("I", "I"), - ((1, 1), _ENDIAN + "f4"): ("F", "F"), -} - - -def _decompression_bomb_check(size): - if MAX_IMAGE_PIXELS is None: - return - - pixels = max(1, size[0]) * max(1, size[1]) - - if pixels > 2 * MAX_IMAGE_PIXELS: - msg = ( - f"Image size ({pixels} pixels) exceeds limit of {2 * MAX_IMAGE_PIXELS} " - "pixels, could be decompression bomb DOS attack." - ) - raise DecompressionBombError(msg) - - if pixels > MAX_IMAGE_PIXELS: - warnings.warn( - f"Image size ({pixels} pixels) exceeds limit of {MAX_IMAGE_PIXELS} pixels, " - "could be decompression bomb DOS attack.", - DecompressionBombWarning, - ) - - -def open(fp, mode="r", formats=None): - """ - Opens and identifies the given image file. - - This is a lazy operation; this function identifies the file, but - the file remains open and the actual image data is not read from - the file until you try to process the data (or call the - :py:meth:`~PIL.Image.Image.load` method). See - :py:func:`~PIL.Image.new`. See :ref:`file-handling`. - - :param fp: A filename (string), pathlib.Path object or a file object. 
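# A sketch of fromarray() round-tripping with NumPy, as in the docstring above;
# numpy itself is an assumed dependency of this aside.
from PIL import Image
import numpy as np

a = np.zeros((32, 32, 3), dtype=np.uint8)
a[:, :, 0] = 255                                       # a solid red image
im = Image.fromarray(a)                                # mode inferred as "RGB"
back = np.asarray(im)
print(im.mode, back.shape)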
- The file object must implement ``file.read``, - ``file.seek``, and ``file.tell`` methods, - and be opened in binary mode. The file object will also seek to zero - before reading. - :param mode: The mode. If given, this argument must be "r". - :param formats: A list or tuple of formats to attempt to load the file in. - This can be used to restrict the set of formats checked. - Pass ``None`` to try all supported formats. You can print the set of - available formats by running ``python3 -m PIL`` or using - the :py:func:`PIL.features.pilinfo` function. - :returns: An :py:class:`~PIL.Image.Image` object. - :exception FileNotFoundError: If the file cannot be found. - :exception PIL.UnidentifiedImageError: If the image cannot be opened and - identified. - :exception ValueError: If the ``mode`` is not "r", or if a ``StringIO`` - instance is used for ``fp``. - :exception TypeError: If ``formats`` is not ``None``, a list or a tuple. - """ - - if mode != "r": - msg = f"bad mode {repr(mode)}" - raise ValueError(msg) - elif isinstance(fp, io.StringIO): - msg = ( - "StringIO cannot be used to open an image. " - "Binary data must be used instead." - ) - raise ValueError(msg) - - if formats is None: - formats = ID - elif not isinstance(formats, (list, tuple)): - msg = "formats must be a list or tuple" - raise TypeError(msg) - - exclusive_fp = False - filename = "" - if isinstance(fp, Path): - filename = str(fp.resolve()) - elif is_path(fp): - filename = fp - - if filename: - fp = builtins.open(filename, "rb") - exclusive_fp = True - - try: - fp.seek(0) - except (AttributeError, io.UnsupportedOperation): - fp = io.BytesIO(fp.read()) - exclusive_fp = True - - prefix = fp.read(16) - - preinit() - - accept_warnings = [] - - def _open_core(fp, filename, prefix, formats): - for i in formats: - i = i.upper() - if i not in OPEN: - init() - try: - factory, accept = OPEN[i] - result = not accept or accept(prefix) - if type(result) in [str, bytes]: - accept_warnings.append(result) - elif result: - fp.seek(0) - im = factory(fp, filename) - _decompression_bomb_check(im.size) - return im - except (SyntaxError, IndexError, TypeError, struct.error): - # Leave disabled by default, spams the logs with image - # opening failures that are entirely expected. - # logger.debug("", exc_info=True) - continue - except BaseException: - if exclusive_fp: - fp.close() - raise - return None - - im = _open_core(fp, filename, prefix, formats) - - if im is None and formats is ID: - checked_formats = formats.copy() - if init(): - im = _open_core( - fp, - filename, - prefix, - tuple(format for format in formats if format not in checked_formats), - ) - - if im: - im._exclusive_fp = exclusive_fp - return im - - if exclusive_fp: - fp.close() - for message in accept_warnings: - warnings.warn(message) - msg = "cannot identify image file %r" % (filename if filename else fp) - raise UnidentifiedImageError(msg) - - -# -# Image processing. - - -def alpha_composite(im1, im2): - """ - Alpha composite im2 over im1. - - :param im1: The first image. Must have mode RGBA. - :param im2: The second image. Must have mode RGBA, and the same size as - the first image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - im1.load() - im2.load() - return im1._new(core.alpha_composite(im1.im, im2.im)) - - -def blend(im1, im2, alpha): - """ - Creates a new image by interpolating between two input images, using - a constant alpha:: - - out = image1 * (1.0 - alpha) + image2 * alpha - - :param im1: The first image. - :param im2: The second image. 
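# A sketch of open() with the formats argument described above, restricting
# identification to specific plugins; the filename is an assumption.
from PIL import Image, UnidentifiedImageError

try:
    with Image.open("hopper.png", formats=["PNG", "JPEG"]) as im:
        im.load()                        # open() is lazy; load() reads the pixel data
        print(im.format, im.size, im.mode)
except UnidentifiedImageError:
    print("not a PNG or JPEG file")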
Must have the same mode and size as - the first image. - :param alpha: The interpolation alpha factor. If alpha is 0.0, a - copy of the first image is returned. If alpha is 1.0, a copy of - the second image is returned. There are no restrictions on the - alpha value. If necessary, the result is clipped to fit into - the allowed output range. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - im1.load() - im2.load() - return im1._new(core.blend(im1.im, im2.im, alpha)) - - -def composite(image1, image2, mask): - """ - Create composite image by blending images using a transparency mask. - - :param image1: The first image. - :param image2: The second image. Must have the same mode and - size as the first image. - :param mask: A mask image. This image can have mode - "1", "L", or "RGBA", and must have the same size as the - other two images. - """ - - image = image2.copy() - image.paste(image1, None, mask) - return image - - -def eval(image, *args): - """ - Applies the function (which should take one argument) to each pixel - in the given image. If the image has more than one band, the same - function is applied to each band. Note that the function is - evaluated once for each possible pixel value, so you cannot use - random components or other generators. - - :param image: The input image. - :param function: A function object, taking one integer argument. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - return image.point(args[0]) - - -def merge(mode, bands): - """ - Merge a set of single band images into a new multiband image. - - :param mode: The mode to use for the output image. See: - :ref:`concept-modes`. - :param bands: A sequence containing one single-band image for - each band in the output image. All bands must have the - same size. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if getmodebands(mode) != len(bands) or "*" in mode: - msg = "wrong number of bands" - raise ValueError(msg) - for band in bands[1:]: - if band.mode != getmodetype(mode): - msg = "mode mismatch" - raise ValueError(msg) - if band.size != bands[0].size: - msg = "size mismatch" - raise ValueError(msg) - for band in bands: - band.load() - return bands[0]._new(core.merge(mode, *[b.im for b in bands])) - - -# -------------------------------------------------------------------- -# Plugin registry - - -def register_open(id, factory, accept=None): - """ - Register an image file plugin. This function should not be used - in application code. - - :param id: An image format identifier. - :param factory: An image file factory method. - :param accept: An optional function that can be used to quickly - reject images having another format. - """ - id = id.upper() - if id not in ID: - ID.append(id) - OPEN[id] = factory, accept - - -def register_mime(id, mimetype): - """ - Registers an image MIME type. This function should not be used - in application code. - - :param id: An image format identifier. - :param mimetype: The image MIME type for this format. - """ - MIME[id.upper()] = mimetype - - -def register_save(id, driver): - """ - Registers an image save function. This function should not be - used in application code. - - :param id: An image format identifier. - :param driver: A function to save images in this format. - """ - SAVE[id.upper()] = driver - - -def register_save_all(id, driver): - """ - Registers an image function to save all the frames - of a multiframe format. This function should not be - used in application code. - - :param id: An image format identifier. 
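# A sketch of the module-level blend(), composite() and merge() helpers documented
# above, using synthetic images.
from PIL import Image

a = Image.new("RGB", (64, 64), (255, 0, 0))
b = Image.new("RGB", (64, 64), (0, 0, 255))
mask = Image.new("L", (64, 64), 128)

mixed = Image.blend(a, b, alpha=0.5)             # out = a*(1-alpha) + b*alpha
masked = Image.composite(a, b, mask)             # a where mask is 255, b where it is 0
rebuilt = Image.merge("RGB", mixed.split())      # recombine bands into one image
print(mixed.getpixel((0, 0)), masked.getpixel((0, 0)), rebuilt.size)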
- :param driver: A function to save images in this format. - """ - SAVE_ALL[id.upper()] = driver - - -def register_extension(id, extension): - """ - Registers an image extension. This function should not be - used in application code. - - :param id: An image format identifier. - :param extension: An extension used for this format. - """ - EXTENSION[extension.lower()] = id.upper() - - -def register_extensions(id, extensions): - """ - Registers image extensions. This function should not be - used in application code. - - :param id: An image format identifier. - :param extensions: A list of extensions used for this format. - """ - for extension in extensions: - register_extension(id, extension) - - -def registered_extensions(): - """ - Returns a dictionary containing all file extensions belonging - to registered plugins - """ - init() - return EXTENSION - - -def register_decoder(name, decoder): - """ - Registers an image decoder. This function should not be - used in application code. - - :param name: The name of the decoder - :param decoder: A callable(mode, args) that returns an - ImageFile.PyDecoder object - - .. versionadded:: 4.1.0 - """ - DECODERS[name] = decoder - - -def register_encoder(name, encoder): - """ - Registers an image encoder. This function should not be - used in application code. - - :param name: The name of the encoder - :param encoder: A callable(mode, args) that returns an - ImageFile.PyEncoder object - - .. versionadded:: 4.1.0 - """ - ENCODERS[name] = encoder - - -# -------------------------------------------------------------------- -# Simple display support. - - -def _show(image, **options): - from . import ImageShow - - ImageShow.show(image, **options) - - -# -------------------------------------------------------------------- -# Effects - - -def effect_mandelbrot(size, extent, quality): - """ - Generate a Mandelbrot set covering the given extent. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param extent: The extent to cover, as a 4-tuple: - (x0, y0, x1, y1). - :param quality: Quality. - """ - return Image()._new(core.effect_mandelbrot(size, extent, quality)) - - -def effect_noise(size, sigma): - """ - Generate Gaussian noise centered around 128. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param sigma: Standard deviation of noise. - """ - return Image()._new(core.effect_noise(size, sigma)) - - -def linear_gradient(mode): - """ - Generate 256x256 linear gradient from black to white, top to bottom. - - :param mode: Input mode. - """ - return Image()._new(core.linear_gradient(mode)) - - -def radial_gradient(mode): - """ - Generate 256x256 radial gradient from black to white, centre to edge. - - :param mode: Input mode. 
- """ - return Image()._new(core.radial_gradient(mode)) - - -# -------------------------------------------------------------------- -# Resources - - -def _apply_env_variables(env=None): - if env is None: - env = os.environ - - for var_name, setter in [ - ("PILLOW_ALIGNMENT", core.set_alignment), - ("PILLOW_BLOCK_SIZE", core.set_block_size), - ("PILLOW_BLOCKS_MAX", core.set_blocks_max), - ]: - if var_name not in env: - continue - - var = env[var_name].lower() - - units = 1 - for postfix, mul in [("k", 1024), ("m", 1024 * 1024)]: - if var.endswith(postfix): - units = mul - var = var[: -len(postfix)] - - try: - var = int(var) * units - except ValueError: - warnings.warn(f"{var_name} is not int") - continue - - try: - setter(var) - except ValueError as e: - warnings.warn(f"{var_name}: {e}") - - -_apply_env_variables() -atexit.register(core.clear_cache) - - -class Exif(MutableMapping): - """ - This class provides read and write access to EXIF image data:: - - from PIL import Image - im = Image.open("exif.png") - exif = im.getexif() # Returns an instance of this class - - Information can be read and written, iterated over or deleted:: - - print(exif[274]) # 1 - exif[274] = 2 - for k, v in exif.items(): - print("Tag", k, "Value", v) # Tag 274 Value 2 - del exif[274] - - To access information beyond IFD0, :py:meth:`~PIL.Image.Exif.get_ifd` - returns a dictionary:: - - from PIL import ExifTags - im = Image.open("exif_gps.jpg") - exif = im.getexif() - gps_ifd = exif.get_ifd(ExifTags.IFD.GPSInfo) - print(gps_ifd) - - Other IFDs include ``ExifTags.IFD.Exif``, ``ExifTags.IFD.Makernote``, - ``ExifTags.IFD.Interop`` and ``ExifTags.IFD.IFD1``. - - :py:mod:`~PIL.ExifTags` also has enum classes to provide names for data:: - - print(exif[ExifTags.Base.Software]) # PIL - print(gps_ifd[ExifTags.GPS.GPSDateStamp]) # 1999:99:99 99:99:99 - """ - - endian = None - bigtiff = False - - def __init__(self): - self._data = {} - self._hidden_data = {} - self._ifds = {} - self._info = None - self._loaded_exif = None - - def _fixup(self, value): - try: - if len(value) == 1 and isinstance(value, tuple): - return value[0] - except Exception: - pass - return value - - def _fixup_dict(self, src_dict): - # Helper function - # returns a dict with any single item tuples/lists as individual values - return {k: self._fixup(v) for k, v in src_dict.items()} - - def _get_ifd_dict(self, offset): - try: - # an offset pointer to the location of the nested embedded IFD. - # It should be a long, but may be corrupted. - self.fp.seek(offset) - except (KeyError, TypeError): - pass - else: - from . import TiffImagePlugin - - info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - info.load(self.fp) - return self._fixup_dict(info) - - def _get_head(self): - version = b"\x2B" if self.bigtiff else b"\x2A" - if self.endian == "<": - head = b"II" + version + b"\x00" + o32le(8) - else: - head = b"MM\x00" + version + o32be(8) - if self.bigtiff: - head += o32le(8) if self.endian == "<" else o32be(8) - head += b"\x00\x00\x00\x00" - return head - - def load(self, data): - # Extract EXIF information. This is highly experimental, - # and is likely to be replaced with something better in a future - # version. - - # The EXIF record consists of a TIFF file embedded in a JPEG - # application marker (!). 
- if data == self._loaded_exif: - return - self._loaded_exif = data - self._data.clear() - self._hidden_data.clear() - self._ifds.clear() - if data and data.startswith(b"Exif\x00\x00"): - data = data[6:] - if not data: - self._info = None - return - - self.fp = io.BytesIO(data) - self.head = self.fp.read(8) - # process dictionary - from . import TiffImagePlugin - - self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - self.endian = self._info._endian - self.fp.seek(self._info.next) - self._info.load(self.fp) - - def load_from_fp(self, fp, offset=None): - self._loaded_exif = None - self._data.clear() - self._hidden_data.clear() - self._ifds.clear() - - # process dictionary - from . import TiffImagePlugin - - self.fp = fp - if offset is not None: - self.head = self._get_head() - else: - self.head = self.fp.read(8) - self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - if self.endian is None: - self.endian = self._info._endian - if offset is None: - offset = self._info.next - self.fp.seek(offset) - self._info.load(self.fp) - - def _get_merged_dict(self): - merged_dict = dict(self) - - # get EXIF extension - if ExifTags.IFD.Exif in self: - ifd = self._get_ifd_dict(self[ExifTags.IFD.Exif]) - if ifd: - merged_dict.update(ifd) - - # GPS - if ExifTags.IFD.GPSInfo in self: - merged_dict[ExifTags.IFD.GPSInfo] = self._get_ifd_dict( - self[ExifTags.IFD.GPSInfo] - ) - - return merged_dict - - def tobytes(self, offset=8): - from . import TiffImagePlugin - - head = self._get_head() - ifd = TiffImagePlugin.ImageFileDirectory_v2(ifh=head) - for tag, value in self.items(): - if tag in [ - ExifTags.IFD.Exif, - ExifTags.IFD.GPSInfo, - ] and not isinstance(value, dict): - value = self.get_ifd(tag) - if ( - tag == ExifTags.IFD.Exif - and ExifTags.IFD.Interop in value - and not isinstance(value[ExifTags.IFD.Interop], dict) - ): - value = value.copy() - value[ExifTags.IFD.Interop] = self.get_ifd(ExifTags.IFD.Interop) - ifd[tag] = value - return b"Exif\x00\x00" + head + ifd.tobytes(offset) - - def get_ifd(self, tag): - if tag not in self._ifds: - if tag == ExifTags.IFD.IFD1: - if self._info is not None and self._info.next != 0: - self._ifds[tag] = self._get_ifd_dict(self._info.next) - elif tag in [ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo]: - offset = self._hidden_data.get(tag, self.get(tag)) - if offset is not None: - self._ifds[tag] = self._get_ifd_dict(offset) - elif tag in [ExifTags.IFD.Interop, ExifTags.IFD.Makernote]: - if ExifTags.IFD.Exif not in self._ifds: - self.get_ifd(ExifTags.IFD.Exif) - tag_data = self._ifds[ExifTags.IFD.Exif][tag] - if tag == ExifTags.IFD.Makernote: - from .TiffImagePlugin import ImageFileDirectory_v2 - - if tag_data[:8] == b"FUJIFILM": - ifd_offset = i32le(tag_data, 8) - ifd_data = tag_data[ifd_offset:] - - makernote = {} - for i in range(0, struct.unpack(" 4: - (offset,) = struct.unpack("H", tag_data[:2])[0]): - ifd_tag, typ, count, data = struct.unpack( - ">HHL4s", tag_data[i * 12 + 2 : (i + 1) * 12 + 2] - ) - if ifd_tag == 0x1101: - # CameraInfo - (offset,) = struct.unpack(">L", data) - self.fp.seek(offset) - - camerainfo = {"ModelID": self.fp.read(4)} - - self.fp.read(4) - # Seconds since 2000 - camerainfo["TimeStamp"] = i32le(self.fp.read(12)) - - self.fp.read(4) - camerainfo["InternalSerialNumber"] = self.fp.read(4) - - self.fp.read(12) - parallax = self.fp.read(4) - handler = ImageFileDirectory_v2._load_dispatch[ - TiffTags.FLOAT - ][1] - camerainfo["Parallax"] = handler( - ImageFileDirectory_v2(), parallax, False - ) - - self.fp.read(4) - 
camerainfo["Category"] = self.fp.read(2) - - makernote = {0x1101: dict(self._fixup_dict(camerainfo))} - self._ifds[tag] = makernote - else: - # Interop - self._ifds[tag] = self._get_ifd_dict(tag_data) - ifd = self._ifds.get(tag, {}) - if tag == ExifTags.IFD.Exif and self._hidden_data: - ifd = { - k: v - for (k, v) in ifd.items() - if k not in (ExifTags.IFD.Interop, ExifTags.IFD.Makernote) - } - return ifd - - def hide_offsets(self): - for tag in (ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo): - if tag in self: - self._hidden_data[tag] = self[tag] - del self[tag] - - def __str__(self): - if self._info is not None: - # Load all keys into self._data - for tag in self._info: - self[tag] - - return str(self._data) - - def __len__(self): - keys = set(self._data) - if self._info is not None: - keys.update(self._info) - return len(keys) - - def __getitem__(self, tag): - if self._info is not None and tag not in self._data and tag in self._info: - self._data[tag] = self._fixup(self._info[tag]) - del self._info[tag] - return self._data[tag] - - def __contains__(self, tag): - return tag in self._data or (self._info is not None and tag in self._info) - - def __setitem__(self, tag, value): - if self._info is not None and tag in self._info: - del self._info[tag] - self._data[tag] = value - - def __delitem__(self, tag): - if self._info is not None and tag in self._info: - del self._info[tag] - else: - del self._data[tag] - - def __iter__(self): - keys = set(self._data) - if self._info is not None: - keys.update(self._info) - return iter(keys) diff --git a/spaces/deepkyu/multilingual-font-style-transfer/lightning.py b/spaces/deepkyu/multilingual-font-style-transfer/lightning.py deleted file mode 100644 index b7efa01ae7786972a3125d8dd97d9e8d902d1018..0000000000000000000000000000000000000000 --- a/spaces/deepkyu/multilingual-font-style-transfer/lightning.py +++ /dev/null @@ -1,313 +0,0 @@ -import numpy as np -import torch -from torch import nn -import torch.nn.functional as F -from torch.utils.data import DataLoader -import pytorch_lightning as pl -import importlib -import PIL.Image as Image - -import models -import datasets -from evaluator.ssim import SSIM, MSSSIM -import lpips -from models.loss import GANHingeLoss -from utils import set_logger, magic_image_handler - -NUM_TEST_SAVE_IMAGE = 10 - - -class FontLightningModule(pl.LightningModule): - def __init__(self, args): - super().__init__() - self.args = args - - self.losses = {} - self.metrics = {} - self.networks = nn.ModuleDict(self.build_models()) - self.module_keys = list(self.networks.keys()) - - self.losses = self.build_losses() - self.metrics = self.build_metrics() - - self.opt_tag = {key: None for key in self.networks.keys()} - self.sched_tag = {key: None for key in self.networks.keys()} - self.sched_use = False - # self.automatic_optimization = False - - self.train_d_content = True - self.train_d_style = True - - def build_models(self): - networks = {} - for key, hp_model in self.args.models.items(): - key_ = key.lower() - if 'g' == key_[0]: - model_ = models.Generator(hp_model) - elif 'd' == key_[0]: - model_ = models.PatchGANDiscriminator(hp_model) # TODO: add option for selecting discriminator - else: - raise ValueError(f"No key such as {key}") - - networks[key.lower()] = model_ - return networks - - def build_losses(self): - losses_dict = {} - losses_dict['L1'] = torch.nn.L1Loss() - - if 'd_content' in self.module_keys: - losses_dict['GANLoss_content'] = GANHingeLoss() - if 'd_style' in self.module_keys: - losses_dict['GANLoss_style'] = 
GANHingeLoss() - - return losses_dict - - def build_metrics(self): - metrics_dict = nn.ModuleDict() - metrics_dict['ssim'] = SSIM(val_range=1) # img value is in [0, 1] - metrics_dict['msssim'] = MSSSIM(weights=[0.45, 0.3, 0.25], val_range=1) # since imsize=64, len(weight)<=3 - metrics_dict['lpips'] = lpips.LPIPS(net='vgg') - return metrics_dict - - def configure_optimizers(self): - optims = {} - for key, args_model in self.args.models.items(): - key = key.lower() - if args_model['optim'] is not None: - args_optim = args_model['optim'] - module, cls = args_optim['class'].rsplit(".", 1) - O = getattr(importlib.import_module(module, package=None), cls) - o = O([p for p in self.networks[key].parameters() if p.requires_grad], - lr=args_optim.lr, betas=args_optim.betas) - - optims[key] = o - - optim_module_keys = optims.keys() - - count = 0 - optim_list = [] - - for _key in self.module_keys: - if _key in optim_module_keys: - optim_list.append(optims[_key]) - self.opt_tag[_key] = count - count += 1 - - return optim_list - - def forward(self, content_images, style_images): - return self.networks['g']((content_images, style_images)) - - def common_forward(self, batch, batch_idx): - loss = {} - logs = {} - - content_images = batch['content_images'] - style_images = batch['style_images'] - gt_images = batch['gt_images'] - image_paths = batch['image_paths'] - char_idx = batch['char_idx'] - - generated_images = self(content_images, style_images) - - # l1 loss - loss['g_L1'] = self.losses['L1'](generated_images, gt_images) - loss['g_backward'] = loss['g_L1'] * self.args.logging.lambda_L1 - - # loss for training generator - if 'd_content' in self.module_keys: - loss = self.d_content_loss_for_G(content_images, generated_images, loss) - - if 'd_style' in self.networks.keys(): - loss = self.d_style_loss_for_G(style_images, generated_images, loss) - - # loss for training discriminator - generated_images = generated_images.detach() - - if 'd_content' in self.module_keys: - if self.train_d_content: - loss = self.d_content_loss_for_D(content_images, generated_images, gt_images, loss) - - if 'd_style' in self.module_keys: - if self.train_d_style: - loss = self.d_style_loss_for_D(style_images, generated_images, gt_images, loss) - - logs['content_images'] = content_images - logs['style_images'] = style_images - logs['gt_images'] = gt_images - logs['generated_images'] = generated_images - - return loss, logs - - @property - def automatic_optimization(self): - return False - - def training_step(self, batch, batch_idx): - metrics = {} - # forward - loss, logs = self.common_forward(batch, batch_idx) - - if self.global_step % self.args.logging.freq['train'] == 0: - with torch.no_grad(): - metrics.update(self.calc_metrics(logs['gt_images'], logs['generated_images'])) - - # backward - opts = self.optimizers() - - opts[self.opt_tag['g']].zero_grad() - self.manual_backward(loss['g_backward']) - - if 'd_content' in self.module_keys: - if self.train_d_content: - opts[self.opt_tag['d_content']].zero_grad() - self.manual_backward(loss['dcontent_backward']) - - if 'd_style' in self.module_keys: - if self.train_d_style: - opts[self.opt_tag['d_style']].zero_grad() - self.manual_backward(loss['dstyle_backward']) - - opts[self.opt_tag['g']].step() - - if 'd_content' in self.module_keys: - if self.train_d_content: - opts[self.opt_tag['d_content']].step() - - if 'd_style' in self.module_keys: - if self.train_d_style: - opts[self.opt_tag['d_style']].step() - - if self.global_step % self.args.logging.freq['train'] == 0: - 
self.custom_log(loss, metrics, logs, mode='train') - - def validation_step(self, batch, batch_idx): - metrics = {} - loss, logs = self.common_forward(batch, batch_idx) - self.custom_log(loss, metrics, logs, mode='eval') - - def test_step(self, batch, batch_idx): - metrics = {} - loss, logs = self.common_forward(batch, batch_idx) - metrics.update(self.calc_metrics(logs['gt_images'], logs['generated_images'])) - - if batch_idx < NUM_TEST_SAVE_IMAGE: - for key, value in logs.items(): - if 'image' in key: - sample_images = (magic_image_handler(value) * 255)[..., 0].astype(np.uint8) - Image.fromarray(sample_images).save(f"{batch_idx:02d}_{key}.png") - - return loss, logs, metrics - - def test_epoch_end(self, test_step_outputs): - # do something with the outputs of all test batches - # all_test_preds = test_step_outputs.metrics - ssim_list = [] - msssim_list = [] - - for _, test_output in enumerate(test_step_outputs): - - ssim_list.append(test_output[2]['SSIM'].cpu().numpy()) - msssim_list.append(test_output[2]['MSSSIM'].cpu().numpy()) - - print(f"SSIM: {np.mean(ssim_list)}") - print(f"MSSSIM: {np.mean(msssim_list)}") - - def common_dataloader(self, mode='train', batch_size=None): - dataset_cls = getattr(datasets, self.args.datasets.type) - dataset_config = getattr(self.args.datasets, mode) - dataset = dataset_cls(dataset_config, mode=mode) - _batch_size = batch_size if batch_size is not None else dataset_config.batch_size - dataloader = DataLoader(dataset, - shuffle=dataset_config.shuffle, - batch_size=_batch_size, - num_workers=dataset_config.num_workers, - drop_last=True) - - return dataloader - - def train_dataloader(self): - return self.common_dataloader(mode='train') - - def val_dataloader(self): - return self.common_dataloader(mode='eval') - - def test_dataloader(self): - return self.common_dataloader(mode='eval') - - def calc_metrics(self, gt_images, generated_images): - """ - - :param gt_images: - :param generated_images: - :return: - """ - metrics = {} - _gt = torch.clamp(gt_images.clone(), 0, 1) - _gen = torch.clamp(generated_images.clone(), 0, 1) - metrics['SSIM'] = self.metrics['ssim'](_gt, _gen) - msssim_value = self.metrics['msssim'](_gt, _gen) - metrics['MSSSIM'] = msssim_value if not torch.isnan(msssim_value) else torch.tensor(0.).type_as(_gt) - metrics['LPIPS'] = self.metrics['lpips'](_gt * 2 - 1, _gen * 2 - 1).squeeze().mean() - return metrics - - # region step - def d_content_loss_for_G(self, content_images, generated_images, loss): - pred_generated = self.networks['d_content'](torch.cat([content_images, generated_images], dim=1)) - loss['g_gan_content'] = self.losses['GANLoss_content'](pred_generated, True, for_discriminator=False) - - loss['g_backward'] += loss['g_gan_content'] - return loss - - def d_content_loss_for_D(self, content_images, generated_images, gt_images, loss): - # D - if 'd_content' in self.module_keys: - if self.train_d_content: - pred_gt_images = self.networks['d_content'](torch.cat([content_images, gt_images], dim=1)) - pred_generated_images = self.networks['d_content'](torch.cat([content_images, generated_images], dim=1)) - - loss['dcontent_gt'] = self.losses['GANLoss_content'](pred_gt_images, True, for_discriminator=True) - loss['dcontent_gen'] = self.losses['GANLoss_content'](pred_generated_images, False, for_discriminator=True) - loss['dcontent_backward'] = (loss['dcontent_gt'] + loss['dcontent_gen']) - - return loss - - def d_style_loss_for_G(self, style_images, generated_images, loss): - pred_generated = 
self.networks['d_style'](torch.cat([style_images, generated_images], dim=1)) - loss['g_gan_style'] = self.losses['GANLoss_style'](pred_generated, True, for_discriminator=False) - - assert self.train_d_style - loss['g_backward'] += loss['g_gan_style'] - return loss - - def d_style_loss_for_D(self, style_images, generated_images, gt_images, loss): - pred_gt_images = self.networks['d_style'](torch.cat([style_images, gt_images], dim=1)) - pred_generated_images = self.networks['d_style'](torch.cat([style_images, generated_images], dim=1)) - - loss['dstyle_gt'] = self.losses['GANLoss_style'](pred_gt_images, True, for_discriminator=True) - loss['dstyle_gen'] = self.losses['GANLoss_style'](pred_generated_images, False, for_discriminator=True) - loss['dstyle_backward'] = (loss['dstyle_gt'] + loss['dstyle_gen']) - - return loss - - def custom_log(self, loss, metrics, logs, mode): - # logging values with tensorboard - for loss_full_key, value in loss.items(): - model_type, loss_type = loss_full_key.split('_')[0], "_".join(loss_full_key.split('_')[1:]) - self.log(f'{model_type}/{mode}_{loss_type}', value) - - for metric_full_key, value in metrics.items(): - model_type, metric_type = metric_full_key.split('_')[0], "_".join(metric_full_key.split('_')[1:]) - self.log(f'{model_type}/{mode}_{metric_type}', value) - - # logging images, params, etc. - tensorboard = self.logger.experiment - for key, value in logs.items(): - if 'image' in key: - sample_images = magic_image_handler(value) - tensorboard.add_image(f"{mode}/" + key, sample_images, self.global_step, dataformats='HWC') - elif 'param' in key: - tensorboard.add_histogram(f"{mode}" + key, value, self.global_step) - else: - raise RuntimeError(f"Only logging with one of keywords: image, param | current input: {key}") diff --git a/spaces/denisp1/Streamlit-GraphViz-Demo/README.md b/spaces/denisp1/Streamlit-GraphViz-Demo/README.md deleted file mode 100644 index a7588c9821aed0d603e9c415459f24dcb55cca11..0000000000000000000000000000000000000000 --- a/spaces/denisp1/Streamlit-GraphViz-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Streamlit GraphViz Demo -emoji: 💻 -colorFrom: purple -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/derful/Chatgpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" "b/spaces/derful/Chatgpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" deleted file mode 100644 index 16651d0cc4c2ea8075e5ebc12cf4f5c7fb708560..0000000000000000000000000000000000000000 --- "a/spaces/derful/Chatgpt-academic/crazy_functions/\350\247\243\346\236\220\351\241\271\347\233\256\346\272\220\344\273\243\347\240\201.py" +++ /dev/null @@ -1,149 +0,0 @@ -from predict import predict_no_ui -from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down -fast_debug = False - -def 解析源代码(api, file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt): - import time, glob, os - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - - prefix = "接下来请你逐文件分析下面的工程" if index==0 else "" - i_say = prefix + f'请对下面的程序文件做一个概述文件名是{os.path.relpath(fp, project_folder)},文件代码是 ```{file_content}```' 
- i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - msg = '正常' - - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(api, i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield chatbot, history, msg - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(api, i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield chatbot, history, msg - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield chatbot, history, msg - - - - -@CatchException -def 解析项目本身(api, txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - import time, glob, os - file_manifest = [f for f in glob.glob('*.py')] - for index, fp in enumerate(file_manifest): - # if 'test_project' in fp: continue - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - - prefix = "接下来请你分析自己的程序构成,别紧张," if index==0 else "" - i_say = prefix + f'请对下面的程序文件做一个概述文件名是{fp},文件代码是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - # ** gpt request ** - # gpt_say = predict_no_ui(inputs=i_say, top_p=top_p, temperature=temperature) - gpt_say = yield from predict_no_ui_but_counting_down(api, i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield chatbot, history, '正常' - time.sleep(2) - - i_say = f'根据以上你自己的分析,对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能(包括{file_manifest})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - # ** gpt request ** - # gpt_say = predict_no_ui(inputs=i_say, top_p=top_p, temperature=temperature, history=history) - gpt_say = yield from predict_no_ui_but_counting_down(api, i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield chatbot, history, '正常' - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield chatbot, history, '正常' - -@CatchException -def 解析一个Python项目(api, txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield chatbot, history, '正常' - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)] - if len(file_manifest) == 0: - 
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}") - yield chatbot, history, '正常' - return - yield from 解析源代码(api, file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt) - - -@CatchException -def 解析一个C项目的头文件(api, txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield chatbot, history, '正常' - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}") - yield chatbot, history, '正常' - return - yield from 解析源代码(api, file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt) - -@CatchException -def 解析一个C项目(api, txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield chatbot, history, '正常' - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}") - yield chatbot, history, '正常' - return - yield from 解析源代码(api, file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt) - diff --git a/spaces/diacanFperku/AutoGPT/Dekada70fullmoviedownload [NEW].md b/spaces/diacanFperku/AutoGPT/Dekada70fullmoviedownload [NEW].md deleted file mode 100644 index 6d7b75f7b6d6f47736dcabc89f0e3ed4f201e07d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Dekada70fullmoviedownload [NEW].md +++ /dev/null @@ -1,9 +0,0 @@ -

              dekada70fullmoviedownload


              Download Zip >>> https://gohhs.com/2uFUGA



              -
-But up to this point, Julian Bartolome (Christopher de Leon) and Amanda Bartolome (Vilma Santos) are confident that they will be able to give all five of them full freedom ... Unfortunately, even in the most beautiful houses, children should not have to grow up without their parents. -Even in the happiest of times, families face grief and loss, and when that happens, life will never be the same again. -In this film, we will see how Amanda and Christopher try to keep everything in line with their plans. -Children who were supposed to be brought up in harmony with each other run into plenty of problems because they have no parents to step in and help them deal with them.
              -
              -
              -

              diff --git a/spaces/diacanFperku/AutoGPT/Guitar Pro 6 Keygen Embrace 2011 WORK.md b/spaces/diacanFperku/AutoGPT/Guitar Pro 6 Keygen Embrace 2011 WORK.md deleted file mode 100644 index 831726bfde3e5bb380665f09dea7aa2d113eddb5..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Guitar Pro 6 Keygen Embrace 2011 WORK.md +++ /dev/null @@ -1,7 +0,0 @@ -

              guitar pro 6 keygen embrace 2011


              Download File 🌟 https://gohhs.com/2uFUCd



- -May 23, 2560 BE — Embrace keymaker for Guitar Pro 6. Download Guitar Pro 6.1.5 r11553 ... Multimedia Suite 10 Platinum HD 2011 PC, Norton 360 v2.0 Keygen, ... -GigaTorrent.Net :: Download torrent :: Download Guitar Pro 6.1.5 r11553, ...
              -
              -
              -

              diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (hindi Hd Mrs. Serial Killer Movies 1) PORTABLE.md b/spaces/diacanFperku/AutoGPT/HD Online Player (hindi Hd Mrs. Serial Killer Movies 1) PORTABLE.md deleted file mode 100644 index 0b58cbd1840ec17bb1262a65e4199e856d2d8187..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/HD Online Player (hindi Hd Mrs. Serial Killer Movies 1) PORTABLE.md +++ /dev/null @@ -1,6 +0,0 @@ -

              HD Online Player (hindi hd Mrs. Serial Killer movies 1)


              DOWNLOAD ✑ ✑ ✑ https://gohhs.com/2uFV5m



              -
              -Kidnap (2017) Hindi Dubbed Full Movie Watch Online HD Print Scam 1992 the Harshad Mehta Story (2020) Hind. ... Sleuth is an open-ended, detective role playing game (RPG) where players ... Sample short story #1 for Imaginative Writing 241. ... A woman who survived being kidnapped by a serial killer when she was 15 ... 4d29de3e1b
              -
              -
              -

              diff --git a/spaces/diacanFperku/AutoGPT/Miegakure PC Game Free BEST Download.md b/spaces/diacanFperku/AutoGPT/Miegakure PC Game Free BEST Download.md deleted file mode 100644 index 97632f8a9d6e47317377ecfa6b3253994dfd3752..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Miegakure PC Game Free BEST Download.md +++ /dev/null @@ -1,111 +0,0 @@ - -

              Miegakure PC Game Free Download: How to Play a 4D Puzzle-Platforming Adventure

              - -

If you are looking for a game that will challenge your mind and imagination, you should try Miegakure, a puzzle-platforming game that lets you explore and interact with a 4D world. In this game, the fourth dimension is not time! It is an actual fourth dimension of space that works just like the first three dimensions we are familiar with. If you count time, this game is 5D.

              - -

              Miegakure is not yet available on Steam, but you can add it to your wishlist and get notified when it becomes available. You can also download Miegakure for free and play it on your PC with GameLoop, a platform that allows you to enjoy the best Steam games on your computer. In this article, we will tell you more about Miegakure and how to play it on your PC.

              -

              Miegakure PC Game Free Download


              Download File === https://gohhs.com/2uFVv8



              - -

              What is Miegakure and why you should play it

              - -

              Miegakure is a game developed by mtb design works, inc., an independent studio led by Marc ten Bosch, a designer and programmer who has been working on this project for over a decade. Miegakure is the first game that lets you explore and interact with a 4D world, where you can perform miraculous feats and solve puzzles.

              - -

              Miegakure is inspired by the novella Flatland, which tells the story of a 2D square that can only see a 2D cross section of a 3D world. For the square, the third dimension is invisible and mysterious; the square has no concept of it because it is stuck seeing a 2D world. If a 3D object visits the 2D plane it appears to be deforming, in a way that looks like an M.R.I. scan (of a brain, for example).

              - -

              In Miegakure, you play as a character who can switch between different 3D cross sections of a 4D world. By doing so, you can walk through walls, disappear and reappear, create impossible shapes, and manipulate the physical reality around you. You will also encounter 4D objects and creatures that will challenge your perception and understanding of space.
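To make the slicing idea concrete: Miegakure's source code is not public, so the snippet below is only a small illustrative Python sketch (the function name and numbers are made up for this example). It shows the one geometric fact the mechanic rests on: a 3D slice of a 4D ball is an ordinary sphere whose radius depends on where along the hidden fourth axis (w) the slice is taken, which is why an object can seem to shrink, vanish, or reappear as you change slices.

```python
import math

def glome_slice_radius(r4, w):
    """Radius of the 3D sphere obtained by slicing a 4D ball ("glome")
    of radius r4 with the hyperplane at fourth coordinate w.
    Returns None when the slice misses the object entirely."""
    if abs(w) > r4:
        return None  # the object is simply not present in this 3D slice
    return math.sqrt(r4 * r4 - w * w)

# Sweep the slice along the hidden fourth axis: the object appears,
# grows, shrinks, and disappears again -- the effect the game builds on.
for w in (-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5):
    r = glome_slice_radius(1.0, w)
    print(f"w = {w:+.1f}:", "not visible" if r is None else f"sphere of radius {r:.2f}")
```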

              - -

              Miegakure is a game that will make you think in new ways and expand your horizons. It is also a game that will amaze you with its beautiful graphics, music, and story. Miegakure is a game that you should play if you love puzzles, adventure, and innovation.

              -

              - -

              How to download Miegakure for free and play it on PC

              - -

              Miegakure is not yet available on Steam, but you can add it to your wishlist and get notified when it becomes available. However, if you want to play Miegakure for free on your PC right now, you can do so with GameLoop.

              - -

              GameLoop is a platform that allows you to enjoy the best Steam games on your computer. With GameLoop, you can download Miegakure and other popular Steam games for free and play them on your PC with high performance and smooth graphics. You can also use GameLoop to chat with other players, record your gameplay, stream your games online, and access the latest deals and offers on Steam games.

              - -

              To download Miegakure for free and play it on PC with GameLoop, you need to follow these simple steps:

              - -
                -
              • Download and install GameLoop on your PC from https://www.gameloop.com/mx/.
              • -
              • Launch GameLoop and search for Miegakure in the search bar.
              • -
• Click the 'Get' button to see the latest deals for the game on GameDeal.
              • -
              • Follow the instructions to download and install Miegakure on your PC.
              • -
              • Enjoy playing Miegakure on your PC with GameLoop.
              • -
              - -


              -

              How to play Miegakure on your PC

              - -

              Once you have downloaded Miegakure for free and installed it on your PC with GameLoop, you can start playing it and enjoy its 4D gameplay. Here are some tips on how to play Miegakure on your PC:

              - -
                -
              • To move your character in the 3D world, you can use the arrow keys or the WASD keys on your keyboard. You can also use the mouse to look around and interact with objects.
              • -
              • To switch between different 3D cross sections of the 4D world, you can use the spacebar or the left mouse button. You will see a colored slice that represents the 4D direction you are moving along. You can also use the Q and E keys to rotate the slice.
              • -
              • To solve puzzles, you will need to use your 4D movement to manipulate objects and environments in ways that are impossible in 3D. For example, you can walk through walls, make objects appear and disappear, create impossible shapes, and more.
              • -
              • To progress through the game, you will need to collect crystals that are hidden in each level. Some crystals are easy to find, while others require more exploration and puzzle-solving. You will also encounter portals that will take you to different worlds and dimensions.
              • -
              • To learn more about the game and its story, you can talk to other characters that you will meet along the way. They will give you hints, clues, and insights about the 4D world and its mysteries.
              • -
              - -

              What are the features and benefits of Miegakure

              - -

              Miegakure is a game that has many features and benefits that make it a unique and enjoyable experience. Some of them are:

              - -
                -
              • It is the first game that lets you explore and interact with a 4D world, where you can perform miraculous feats and solve puzzles.
              • -
              • It has stunning graphics that show the beauty and complexity of the 4D world, with colorful environments, realistic lighting, and smooth animations.
              • -
              • It has an original soundtrack that creates a relaxing and immersive atmosphere, with soothing melodies and ambient sounds.
              • -
              • It has an engaging story that reveals the secrets and history of the 4D world, with interesting characters, dialogues, and lore.
              • -
              • It has a high replay value, as you can discover new things and perspectives every time you play, with different paths, secrets, and challenges.
              • -
              - -


              -

              What are the tips and tricks for Miegakure

              - -

              Miegakure is a game that will test your spatial reasoning and logic skills. It is a game that will require you to think in 4D and use your 4D movement to solve puzzles and explore the world. Here are some tips and tricks for Miegakure:

              - -
                -
• Pay attention to the color of the slice. The color of the slice indicates the 4D direction you are moving along; for example, if the slice is red, you are moving along the red axis. You can also use the Q and E keys to rotate the slice and change the 4D direction (a rough sketch of what such a rotation means geometrically follows this list).
              • -
              • Use the spacebar or the left mouse button to switch between different 3D cross sections of the 4D world. You can also hold the spacebar or the left mouse button to see a preview of the next cross section before switching.
              • -
              • Experiment with different 4D movements and see how they affect the objects and environments around you. For example, you can walk through walls, make objects appear and disappear, create impossible shapes, and more.
              • -
              • Look for clues and hints in the level design. Sometimes, you can find patterns, symbols, or markings that will help you solve puzzles or find secrets.
              • -
              • Don't be afraid to explore and try new things. Miegakure is a game that encourages curiosity and discovery. You might find hidden paths, secrets, or surprises that will enrich your experience.
              • -
              - -
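As with the slicing sketch above, the following is only a hypothetical Python illustration (the game's actual implementation is not public) of what "rotating the slice into a different 4D direction" means geometrically: a rotation in the x–w plane gradually trades the visible x axis for the hidden w axis, which is why re-orienting the slice changes which parts of the world you can see.

```python
import math

def rotate_xw(point, angle):
    """Rotate a 4D point (x, y, z, w) by `angle` radians in the x-w plane.
    y and z are untouched; x and w mix, so what was hidden along w
    moves into the visible x direction and vice versa."""
    x, y, z, w = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * w, y, z, s * x + c * w)

# A point lying one unit along the hidden axis becomes fully visible
# (all of its displacement ends up on the x axis) after a quarter turn.
p = (0.0, 0.0, 0.0, 1.0)
for step in range(5):
    angle = step * math.pi / 8          # 0, 22.5, 45, 67.5, 90 degrees
    x, y, z, w = rotate_xw(p, angle)
    print(f"{math.degrees(angle):5.1f} deg -> x = {x:+.2f}, w = {w:+.2f}")
```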

              What are the FAQs for Miegakure

              - -

Miegakure is a game that raises many questions and mysteries. Here are some of the most frequently asked questions about it:

              - -
                -
              • Q: When will Miegakure be released on Steam?
                A: Miegakure is still in development and does not have a release date yet. However, you can add it to your wishlist on Steam and get notified when it becomes available.
              • -
              • Q: How can I download Miegakure for free and play it on PC?
                A: You can download Miegakure for free and play it on PC with GameLoop, a platform that allows you to enjoy the best Steam games on your computer. You can download and install GameLoop from https://www.gameloop.com/mx/.
              • -
              • Q: What is the fourth dimension in Miegakure?
A: The fourth dimension in Miegakure is not time! It is an actual fourth dimension of space that works just like the first three dimensions we are familiar with. If you count time, this game is 5D.
              • -
              • Q: How can I see and interact with a 4D world?
                A: You can see and interact with a 4D world by switching between different 3D cross sections of it. You can use the spacebar or the left mouse button to switch between cross sections, and use the Q and E keys to rotate them.
              • -
              • Q: What are some examples of 4D objects and creatures?
A: Some examples of 4D objects and creatures are spherinders, which are like cylinders but with spheres at both ends; duocylinders, which are like two cylinders glued together at right angles; tesseracts, which are like cubes but with eight cubes as cells; glomes, which are like spheres but extended into four dimensions; and snails, which are like ordinary snails but extended into the fourth dimension.
              • -
              - -

              Conclusion

              - -

              Miegakure is a game that will blow your mind and make you see the world in a different way. It is a game that lets you explore and interact with a 4D world, where you can perform miraculous feats and solve puzzles. It is also a game that will impress you with its stunning graphics, music, and story.

              - -

              Miegakure is not yet available on Steam, but you can add it to your wishlist and get notified when it becomes available. You can also download Miegakure for free and play it on your PC with GameLoop, a platform that allows you to enjoy the best Steam games on your computer.

              - -

              If you are looking for a game that will challenge your mind and imagination, you should try Miegakure. Download it now and start your 4D adventure.

              -


              -
              -
              \ No newline at end of file diff --git a/spaces/digitalxingtong/Lixiang-Bert-Vits2/app.py b/spaces/digitalxingtong/Lixiang-Bert-Vits2/app.py deleted file mode 100644 index 8cbe276aab4b758ddf2fc0d4c0bd3051c59e5639..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Lixiang-Bert-Vits2/app.py +++ /dev/null @@ -1,183 +0,0 @@ -import sys, os - -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s") - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -import gradio as gr -import webbrowser - - -net_g = None - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language -import soundfile as sf -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - global net_g - bert, phones, tones, lang_ids = get_text(text, "ZH", hps) - with torch.no_grad(): - x_tst=phones.to(device).unsqueeze(0) - tones=tones.to(device).unsqueeze(0) - lang_ids=lang_ids.to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device) - del phones - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers - sf.write("tmp.wav", audio, 44100) - return audio -def convert_wav_to_ogg(wav_file): - os.makedirs('out', exist_ok=True) - filename = os.path.splitext(os.path.basename(wav_file.name))[0] - output_path_ogg = os.path.join('out', f"out.ogg") - - renamed_input_path = os.path.join('in', f"in.wav") - os.makedirs('in', exist_ok=True) - os.rename(wav_file.name, renamed_input_path) - command = ["ffmpeg", "-i", renamed_input_path, "-acodec", "libopus", "-y", output_path_ogg] - os.system(" ".join(command)) - return output_path_ogg -def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale): - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker) - with open('tmp.wav', 'rb') as wav_file: - newogg = convert_wav_to_ogg(wav_file) - return "Success", (hps.data.sampling_rate, audio),newogg - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - 
parser.add_argument("--model_dir", default="./logs/lixiang/lixiang.pth", help="path of your model") - parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file") - parser.add_argument("--share", default=False, help="make link public") - parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log") - - args = parser.parse_args() - if args.debug: - logger.info("Enable DEBUG-LEVEL log") - logging.basicConfig(level=logging.DEBUG) - hps = utils.get_hparams_from_file(args.config_dir) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - ''' - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - ''' - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - - - gr.Markdown(value=""" - 理想_ideal Bert-Vits2在线语音生成\n - 1、模型作者:数字星瞳企划 https://t.me/xingtong25680 \n - \n - 2、原项目地址:https://github.com/Stardust-minus/Bert-VITS2\n - 3、使用此模型进行二创请注明AI生成,以及该项目地址。\n - 4、如果想生成超长txt文本的音频请使用colab。 https://colab.research.google.com/drive/13ek8_j1aknr-pbjj3NXxSM4vBIsracU3?usp=drive_link\n - - """) - text = gr.TextArea(label="Text", placeholder="Input Text Here", - value="这里是数字星瞳企画,请在电报搜索星瞳全拼加二五六八零,获取最新更新进展。") - speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker') - sdp_ratio = gr.Slider(minimum=0, maximum=1, value=0.2, step=0.01, label='语调变化') - noise_scale = gr.Slider(minimum=0.1, maximum=1.5, value=0.6, step=0.01, label='感情变化') - noise_scale_w = gr.Slider(minimum=0.1, maximum=1.4, value=0.8, step=0.01, label='音节发音长度变化') - length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='语速') - btn = gr.Button("开启AI语音之旅吧!", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - ogg_output = gr.File(label="Converted OGG file") - gr.Markdown(value=""" - 模型汇总:\n - 星瞳 https://huggingface.co/spaces/digitalxingtong/Xingtong-Bert-Vits2 \n - 星瞳 朗读专用 https://huggingface.co/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2 \n - 星瞳 长文本专用 https://huggingface.co/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2 \n - 甜甜叫花鸡 https://huggingface.co/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2 \n - 七海 https://huggingface.co/spaces/digitalxingtong/Nanami-Bert-Vits2 \n - 东雪莲 https://huggingface.co/spaces/digitalxingtong/Azuma-Bert-Vits2 \n - 嘉然 https://huggingface.co/spaces/digitalxingtong/Jiaran-Bert-Vits2 \n - 乃琳 https://huggingface.co/spaces/digitalxingtong/Eileen-Bert-Vits2 \n - 恬豆 https://huggingface.co/spaces/digitalxingtong/Dou-Bert-Vits2 \n - 奶绿 杂谈 https://huggingface.co/spaces/digitalxingtong/Nailv-Bert-Vits2 \n - 奶绿 朗读 https://huggingface.co/spaces/digitalxingtong/Nailv-read-Bert-Vits2 \n - 露早 https://huggingface.co/spaces/digitalxingtong/Luzao-Bert-Vits2 \n - 柚恩 https://huggingface.co/spaces/digitalxingtong/Un-Bert-Vits2 \n - 米诺 https://huggingface.co/spaces/digitalxingtong/Minuo-Bert-Vits2 \n - 扇宝 https://huggingface.co/spaces/digitalxingtong/Shanbao-Bert-Vits2 \n - 牧牧白 https://huggingface.co/spaces/digitalxingtong/Miiu-Bert-Vits2 \n - 吉诺儿kino 
https://huggingface.co/spaces/digitalxingtong/Kino-Bert-Vits2 \n - 九夏 https://huggingface.co/spaces/digitalxingtong/Jiuxia-Bert-Vits2 \n - 卡缇娅 https://huggingface.co/spaces/digitalxingtong/Yaya-Bert-Vits2 \n - 理想_ideal https://huggingface.co/spaces/digitalxingtong/Lixiang-Bert-Vits2 \n - 阿梓 https://huggingface.co/spaces/digitalxingtong/Azusa-Bert-Vits2 \n - 鹿鸣 https://huggingface.co/spaces/digitalxingtong/Luming-Bert-Vits2 \n - 永雏塔菲 https://huggingface.co/spaces/digitalxingtong/Taffy-Bert-VITS2 \n - """) - btn.click(tts_fn, - inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale], - outputs=[text_output, audio_output,ogg_output]) - - - app.launch(show_error=True) diff --git a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/attentions.py b/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/attentions.py deleted file mode 100644 index ecbdbc8be941a962046fc11fd6739b093112123e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/attentions.py +++ /dev/null @@ -1,343 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from torch.nn.utils import weight_norm, remove_weight_norm -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - if isflow: - cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1) - self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1) - self.cond_layer = weight_norm(cond_layer, name='weight') - self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - if 'gin_channels' in kwargs: - self.gin_channels = kwargs['gin_channels'] - if self.gin_channels != 0: - self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 3rd block, so idx is 2 by default - self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2 - print(self.gin_channels, self.cond_layer_idx) - assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers' - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - 
self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - 
if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/dineshreddy/WALT/cwalt/kmedoid.py b/spaces/dineshreddy/WALT/cwalt/kmedoid.py deleted file mode 100644 index 6a04839cf1fd9e8d1bf56872f1c67d0bd7005cb9..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/cwalt/kmedoid.py +++ /dev/null @@ -1,55 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Fri May 20 15:18:56 2022 - -@author: dinesh -""" - -import numpy as np -import math - -def kMedoids(D, k, tmax=100): - # determine dimensions of distance matrix D - m, n = D.shape - - np.fill_diagonal(D, math.inf) - - if k > n: - raise Exception('too many medoids') - # randomly initialize an array of k medoid indices - M = np.arange(n) - np.random.shuffle(M) - M = np.sort(M[:k]) - - # create a copy of the array of medoid indices - Mnew = np.copy(M) - - # initialize a dictionary to represent clusters - C = {} - for t in range(tmax): - # determine clusters, i. e. 
arrays of data indices - J = np.argmin(D[:,M], axis=1) - - for kappa in range(k): - C[kappa] = np.where(J==kappa)[0] - # update cluster medoids - for kappa in range(k): - J = np.mean(D[np.ix_(C[kappa],C[kappa])],axis=1) - j = np.argmin(J) - Mnew[kappa] = C[kappa][j] - np.sort(Mnew) - # check for convergence - if np.array_equal(M, Mnew): - break - M = np.copy(Mnew) - else: - # final update of cluster memberships - J = np.argmin(D[:,M], axis=1) - for kappa in range(k): - C[kappa] = np.where(J==kappa)[0] - - np.fill_diagonal(D, 0) - - # return results - return M, C \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/mmdet/models/utils/gaussian_target.py b/spaces/dineshreddy/WALT/mmdet/models/utils/gaussian_target.py deleted file mode 100644 index 7bb7160cb4bf2f47876f6e8373142aa5846920a9..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/utils/gaussian_target.py +++ /dev/null @@ -1,185 +0,0 @@ -from math import sqrt - -import torch - - -def gaussian2D(radius, sigma=1, dtype=torch.float32, device='cpu'): - """Generate 2D gaussian kernel. - - Args: - radius (int): Radius of gaussian kernel. - sigma (int): Sigma of gaussian function. Default: 1. - dtype (torch.dtype): Dtype of gaussian tensor. Default: torch.float32. - device (str): Device of gaussian tensor. Default: 'cpu'. - - Returns: - h (Tensor): Gaussian kernel with a - ``(2 * radius + 1) * (2 * radius + 1)`` shape. - """ - x = torch.arange( - -radius, radius + 1, dtype=dtype, device=device).view(1, -1) - y = torch.arange( - -radius, radius + 1, dtype=dtype, device=device).view(-1, 1) - - h = (-(x * x + y * y) / (2 * sigma * sigma)).exp() - - h[h < torch.finfo(h.dtype).eps * h.max()] = 0 - return h - - -def gen_gaussian_target(heatmap, center, radius, k=1): - """Generate 2D gaussian heatmap. - - Args: - heatmap (Tensor): Input heatmap, the gaussian kernel will cover on - it and maintain the max value. - center (list[int]): Coord of gaussian kernel's center. - radius (int): Radius of gaussian kernel. - k (int): Coefficient of gaussian kernel. Default: 1. - - Returns: - out_heatmap (Tensor): Updated heatmap covered by gaussian kernel. - """ - diameter = 2 * radius + 1 - gaussian_kernel = gaussian2D( - radius, sigma=diameter / 6, dtype=heatmap.dtype, device=heatmap.device) - - x, y = center - - height, width = heatmap.shape[:2] - - left, right = min(x, radius), min(width - x, radius + 1) - top, bottom = min(y, radius), min(height - y, radius + 1) - - masked_heatmap = heatmap[y - top:y + bottom, x - left:x + right] - masked_gaussian = gaussian_kernel[radius - top:radius + bottom, - radius - left:radius + right] - out_heatmap = heatmap - torch.max( - masked_heatmap, - masked_gaussian * k, - out=out_heatmap[y - top:y + bottom, x - left:x + right]) - - return out_heatmap - - -def gaussian_radius(det_size, min_overlap): - r"""Generate 2D gaussian radius. - - This function is modified from the `official github repo - `_. - - Given ``min_overlap``, radius could computed by a quadratic equation - according to Vieta's formulas. - - There are 3 cases for computing gaussian radius, details are following: - - - Explanation of figure: ``lt`` and ``br`` indicates the left-top and - bottom-right corner of ground truth box. ``x`` indicates the - generated corner at the limited position when ``radius=r``. - - - Case1: one corner is inside the gt box and the other is outside. - - .. 
code:: text - - |< width >| - - lt-+----------+ - - | | | ^ - +--x----------+--+ - | | | | - | | | | height - | | overlap | | - | | | | - | | | | v - +--+---------br--+ - - | | | - +----------+--x - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{(w-r)*(h-r)}{w*h+(w+h)r-r^2} \ge {iou} \quad\Rightarrow\quad - {r^2-(w+h)r+\cfrac{1-iou}{1+iou}*w*h} \ge 0 \\ - {a} = 1,\quad{b} = {-(w+h)},\quad{c} = {\cfrac{1-iou}{1+iou}*w*h} - {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a} - - - Case2: both two corners are inside the gt box. - - .. code:: text - - |< width >| - - lt-+----------+ - - | | | ^ - +--x-------+ | - | | | | - | |overlap| | height - | | | | - | +-------x--+ - | | | v - +----------+-br - - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{(w-2*r)*(h-2*r)}{w*h} \ge {iou} \quad\Rightarrow\quad - {4r^2-2(w+h)r+(1-iou)*w*h} \ge 0 \\ - {a} = 4,\quad {b} = {-2(w+h)},\quad {c} = {(1-iou)*w*h} - {r} \le \cfrac{-b-\sqrt{b^2-4*a*c}}{2*a} - - - Case3: both two corners are outside the gt box. - - .. code:: text - - |< width >| - - x--+----------------+ - | | | - +-lt-------------+ | - - | | | | ^ - | | | | - | | overlap | | height - | | | | - | | | | v - | +------------br--+ - - | | | - +----------------+--x - - To ensure IoU of generated box and gt box is larger than ``min_overlap``: - - .. math:: - \cfrac{w*h}{(w+2*r)*(h+2*r)} \ge {iou} \quad\Rightarrow\quad - {4*iou*r^2+2*iou*(w+h)r+(iou-1)*w*h} \le 0 \\ - {a} = {4*iou},\quad {b} = {2*iou*(w+h)},\quad {c} = {(iou-1)*w*h} \\ - {r} \le \cfrac{-b+\sqrt{b^2-4*a*c}}{2*a} - - Args: - det_size (list[int]): Shape of object. - min_overlap (float): Min IoU with ground truth for boxes generated by - keypoints inside the gaussian kernel. - - Returns: - radius (int): Radius of gaussian kernel. 
- """ - height, width = det_size - - a1 = 1 - b1 = (height + width) - c1 = width * height * (1 - min_overlap) / (1 + min_overlap) - sq1 = sqrt(b1**2 - 4 * a1 * c1) - r1 = (b1 - sq1) / (2 * a1) - - a2 = 4 - b2 = 2 * (height + width) - c2 = (1 - min_overlap) * width * height - sq2 = sqrt(b2**2 - 4 * a2 * c2) - r2 = (b2 - sq2) / (2 * a2) - - a3 = 4 * min_overlap - b3 = -2 * min_overlap * (height + width) - c3 = (min_overlap - 1) * width * height - sq3 = sqrt(b3**2 - 4 * a3 * c3) - r3 = (b3 + sq3) / (2 * a3) - return min(r1, r2, r3) diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_models/crnn.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_models/crnn.py deleted file mode 100644 index b316c6a8a7f4f79c0cff3062583391b746f3cad8..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/recog_models/crnn.py +++ /dev/null @@ -1,12 +0,0 @@ -label_convertor = dict( - type='CTCConvertor', dict_type='DICT36', with_unknown=False, lower=True) - -model = dict( - type='CRNNNet', - preprocessor=None, - backbone=dict(type='VeryDeepVgg', leaky_relu=False, input_channels=1), - encoder=None, - decoder=dict(type='CRNNDecoder', in_channels=512, rnn_flag=True), - loss=dict(type='CTCLoss'), - label_convertor=label_convertor, - pretrained=None) diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py deleted file mode 100644 index e22571e74511bab4303138f0e4816687fadac69e..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py', - '../../_base_/schedules/schedule_sgd_160e.py', - '../../_base_/det_datasets/icdar2017.py', - '../../_base_/det_pipelines/maskrcnn_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}} - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/doevent/cartoonizer-demo-onnx/app.py b/spaces/doevent/cartoonizer-demo-onnx/app.py deleted file mode 100644 index 0f93ae8c75bdbb52e851d3929d0d99145976dd8e..0000000000000000000000000000000000000000 --- a/spaces/doevent/cartoonizer-demo-onnx/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import os - -import cv2 -import gradio as gr -import numpy as np -import onnxruntime as ort -from PIL import Image - -_sess_options = ort.SessionOptions() -_sess_options.intra_op_num_threads = os.cpu_count() -MODEL_SESS = ort.InferenceSession( - "cartoonizer.onnx", _sess_options, providers=["CPUExecutionProvider"] -) - - -def preprocess_image(image: Image) -> np.ndarray: - image = np.array(image) - image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) - - h, w, c = np.shape(image) - if min(h, w) > 720: - if h > w: - h, w = int(720 * h / w), 720 - else: - h, w = 720, 
int(720 * w / h) - image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA) - h, w = (h // 8) * 8, (w // 8) * 8 - image = image[:h, :w, :] - image = image.astype(np.float32) / 127.5 - 1 - return np.expand_dims(image, axis=0) - - -def inference(image: np.ndarray) -> Image: - image = preprocess_image(image) - results = MODEL_SESS.run(None, {"input_photo:0": image}) - output = (np.squeeze(results[0]) + 1.0) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB) - return Image.fromarray(output) - - -title = "Generate cartoonized images" -article = "Demo of CartoonGAN model (https://systemerrorwang.github.io/White-box-Cartoonization/). \nDemo image is from https://unsplash.com/photos/f0SgAs27BYI." - -iface = gr.Interface( - inference, - inputs=gr.inputs.Image(type="pil", label="Input Image"), - outputs="image", - title=title, - article=article, - allow_flagging="never", - examples=[["mountain.jpeg"]], -) -iface.launch(enable_queue=True, debug=True) diff --git a/spaces/dorkai/singpt/modules/shared.py b/spaces/dorkai/singpt/modules/shared.py deleted file mode 100644 index ea2eb50b7f586e5c562bf2e7c75429c91f21ec6c..0000000000000000000000000000000000000000 --- a/spaces/dorkai/singpt/modules/shared.py +++ /dev/null @@ -1,103 +0,0 @@ -import argparse - -model = None -tokenizer = None -model_name = "" -soft_prompt_tensor = None -soft_prompt = False -is_RWKV = False - -# Chat variables -history = {'internal': [], 'visible': []} -character = 'None' -stop_everything = False -processing_message = '*Is typing...*' - -# UI elements (buttons, sliders, HTML, etc) -gradio = {} - -# Generation input parameters -input_params = [] - -settings = { - 'max_new_tokens': 200, - 'max_new_tokens_min': 1, - 'max_new_tokens_max': 2000, - 'name1': 'Person 1', - 'name2': 'Person 2', - 'context': 'This is a conversation between two people.', - 'stop_at_newline': True, - 'chat_prompt_size': 2048, - 'chat_prompt_size_min': 0, - 'chat_prompt_size_max': 2048, - 'chat_generation_attempts': 1, - 'chat_generation_attempts_min': 1, - 'chat_generation_attempts_max': 5, - 'name1_pygmalion': 'You', - 'name2_pygmalion': 'Kawaii', - 'context_pygmalion': "Kawaii's persona: Kawaii is a cheerful person who loves to make others smile. 
She is an optimist who loves to spread happiness and positivity wherever she goes.\n", - 'stop_at_newline_pygmalion': False, - 'default_extensions': [], - 'chat_default_extensions': ["gallery"], - 'presets': { - 'default': 'NovelAI-Sphinx Moth', - 'pygmalion-*': 'Pygmalion', - 'RWKV-*': 'Naive', - }, - 'prompts': { - 'default': 'Common sense questions and answers\n\nQuestion: \nFactual answer:', - '^(gpt4chan|gpt-4chan|4chan)': '-----\n--- 865467536\nInput text\n--- 865467537\n', - '(rosey|chip|joi)_.*_instruct.*': 'User: \n', - 'oasst-*': '<|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>' - } -} - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ('yes', 'true', 't', 'y', '1'): - return True - elif v.lower() in ('no', 'false', 'f', 'n', '0'): - return False - else: - raise argparse.ArgumentTypeError('Boolean value expected.') - -parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog,max_help_position=54)) -parser.add_argument('--model', type=str, help='Name of the model to load by default.') -parser.add_argument('--notebook', action='store_true', help='Launch the web UI in notebook mode, where the output is written to the same text box as the input.') -parser.add_argument('--chat', action='store_true', help='Launch the web UI in chat mode.') -parser.add_argument('--cai-chat', action='store_true', help='Launch the web UI in chat mode with a style similar to Character.AI\'s. If the file img_bot.png or img_bot.jpg exists in the same folder as server.py, this image will be used as the bot\'s profile picture. Similarly, img_me.png or img_me.jpg will be used as your profile picture.') -parser.add_argument('--cpu', action='store_true', help='Use the CPU to generate text.') -parser.add_argument('--load-in-8bit', action='store_true', help='Load the model with 8-bit precision.') -parser.add_argument('--load-in-4bit', action='store_true', help='DEPRECATED: use --gptq-bits 4 instead.') -parser.add_argument('--gptq-bits', type=int, default=0, help='Load a pre-quantized model with specified precision. 2, 3, 4 and 8bit are supported. Currently only works with LLaMA and OPT.') -parser.add_argument('--gptq-model-type', type=str, help='Model type of pre-quantized model. Currently only LLaMa and OPT are supported.') -parser.add_argument('--bf16', action='store_true', help='Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU.') -parser.add_argument('--auto-devices', action='store_true', help='Automatically split the model across the available GPU(s) and CPU.') -parser.add_argument('--disk', action='store_true', help='If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk.') -parser.add_argument('--disk-cache-dir', type=str, default="cache", help='Directory to save the disk cache to. Defaults to "cache".') -parser.add_argument('--gpu-memory', type=int, nargs="+", help='Maxmimum GPU memory in GiB to be allocated per GPU. Example: --gpu-memory 10 for a single GPU, --gpu-memory 10 5 for two GPUs.') -parser.add_argument('--cpu-memory', type=int, help='Maximum CPU memory in GiB to allocate for offloaded weights. Must be an integer number. Defaults to 99.') -parser.add_argument('--flexgen', action='store_true', help='Enable the use of FlexGen offloading.') -parser.add_argument('--percent', type=int, nargs="+", default=[0, 100, 100, 0, 100, 0], help='FlexGen: allocation percentages. 
Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0).') -parser.add_argument("--compress-weight", action="store_true", help="FlexGen: activate weight compression.") -parser.add_argument("--pin-weight", type=str2bool, nargs="?", const=True, default=True, help="FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%%).") -parser.add_argument('--deepspeed', action='store_true', help='Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration.') -parser.add_argument('--nvme-offload-dir', type=str, help='DeepSpeed: Directory to use for ZeRO-3 NVME offloading.') -parser.add_argument('--local_rank', type=int, default=0, help='DeepSpeed: Optional argument for distributed setups.') -parser.add_argument('--rwkv-strategy', type=str, default=None, help='RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8".') -parser.add_argument('--rwkv-cuda-on', action='store_true', help='RWKV: Compile the CUDA kernel for better performance.') -parser.add_argument('--no-stream', action='store_true', help='Don\'t stream the text output in real time.') -parser.add_argument('--settings', type=str, help='Load the default interface settings from this json file. See settings-template.json for an example. If you create a file called settings.json, this file will be loaded by default without the need to use the --settings flag.') -parser.add_argument('--extensions', type=str, nargs="+", help='The list of extensions to load. If you want to load more than one extension, write the names separated by spaces.') -parser.add_argument('--listen', action='store_true', help='Make the web UI reachable from your local network.') -parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.') -parser.add_argument('--share', action='store_true', help='Create a public URL. This is useful for running the web UI on Google Colab or similar.') -parser.add_argument('--auto-launch', action='store_true', default=False, help='Open the web UI in the default browser upon launch.') -parser.add_argument('--verbose', action='store_true', help='Print the prompts to the terminal.') -args = parser.parse_args() - -# Provisional, this will be deleted later -if args.load_in_4bit: - print("Warning: --load-in-4bit is deprecated and will be removed. 
Use --gptq-bits 4 instead.\n") - args.gptq_bits = 4 diff --git a/spaces/dorkai/text-generation-webui-main/css/chat_style-cai-chat.css b/spaces/dorkai/text-generation-webui-main/css/chat_style-cai-chat.css deleted file mode 100644 index f601de3248b7ee94d6da58026354f8b9afeb9297..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/css/chat_style-cai-chat.css +++ /dev/null @@ -1,91 +0,0 @@ -.chat { - margin-left: auto; - margin-right: auto; - max-width: 800px; - height: calc(100vh - 306px); - overflow-y: auto; - padding-right: 20px; - display: flex; - flex-direction: column-reverse; - word-break: break-word; - overflow-wrap: anywhere; -} - -.message { - display: grid; - grid-template-columns: 60px minmax(0, 1fr); - padding-bottom: 25px; - font-size: 15px; - font-family: Helvetica, Arial, sans-serif; - line-height: 1.428571429; -} - -.circle-you { - width: 50px; - height: 50px; - background-color: rgb(238, 78, 59); - border-radius: 50%; -} - -.circle-bot { - width: 50px; - height: 50px; - background-color: rgb(59, 78, 244); - border-radius: 50%; -} - -.circle-bot img, -.circle-you img { - border-radius: 50%; - width: 100%; - height: 100%; - object-fit: cover; -} - -.text {} - -.text p { - margin-top: 5px; -} - -.username { - font-weight: bold; -} - -.message-body {} - -.message-body img { - max-width: 300px; - max-height: 300px; - border-radius: 20px; -} - -.message-body p { - margin-bottom: 0 !important; - font-size: 15px !important; - line-height: 1.428571429 !important; -} - -.message-body li { - margin-top: 0.5em !important; - margin-bottom: 0.5em !important; -} - -.message-body li > p { - display: inline !important; -} - -.message-body code { - overflow-x: auto; -} -.message-body :not(pre) > code { - white-space: normal !important; -} - -.dark .message-body p em { - color: rgb(138, 138, 138) !important; -} - -.message-body p em { - color: rgb(110, 110, 110) !important; -} \ No newline at end of file diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/modules/chat.py b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/modules/chat.py deleted file mode 100644 index 98e171b0f35041ec12d01657297a1fc8b9fa91dd..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/modules/chat.py +++ /dev/null @@ -1,562 +0,0 @@ -import ast -import base64 -import copy -import io -import json -import logging -import re -from datetime import datetime -from pathlib import Path - -import yaml -from PIL import Image - -import modules.shared as shared -from modules.extensions import apply_extensions -from modules.html_generator import chat_html_wrapper, make_thumbnail -from modules.text_generation import (encode, generate_reply, - get_max_prompt_length) - - -# Replace multiple string pairs in a string -def replace_all(text, dic): - for i, j in dic.items(): - text = text.replace(i, j) - - return text - - -def generate_chat_prompt(user_input, state, **kwargs): - impersonate = kwargs['impersonate'] if 'impersonate' in kwargs else False - _continue = kwargs['_continue'] if '_continue' in kwargs else False - also_return_rows = kwargs['also_return_rows'] if 'also_return_rows' in kwargs else False - is_instruct = state['mode'] == 'instruct' - rows = [state['context'] if is_instruct else f"{state['context'].strip()}\n"] - min_rows = 3 - - # Finding the maximum prompt size - chat_prompt_size = state['chat_prompt_size'] - if shared.soft_prompt: - chat_prompt_size -= 
shared.soft_prompt_tensor.shape[1] - - max_length = min(get_max_prompt_length(state), chat_prompt_size) - - # Building the turn templates - if 'turn_template' not in state or state['turn_template'] == '': - if is_instruct: - template = '<|user|>\n<|user-message|>\n<|bot|>\n<|bot-message|>\n' - else: - template = '<|user|>: <|user-message|>\n<|bot|>: <|bot-message|>\n' - else: - template = state['turn_template'].replace(r'\n', '\n') - - replacements = { - '<|user|>': state['name1'].strip(), - '<|bot|>': state['name2'].strip(), - } - - user_turn = replace_all(template.split('<|bot|>')[0], replacements) - bot_turn = replace_all('<|bot|>' + template.split('<|bot|>')[1], replacements) - user_turn_stripped = replace_all(user_turn.split('<|user-message|>')[0], replacements) - bot_turn_stripped = replace_all(bot_turn.split('<|bot-message|>')[0], replacements) - - # Building the prompt - i = len(shared.history['internal']) - 1 - while i >= 0 and len(encode(''.join(rows))[0]) < max_length: - if _continue and i == len(shared.history['internal']) - 1: - rows.insert(1, bot_turn_stripped + shared.history['internal'][i][1].strip()) - else: - rows.insert(1, bot_turn.replace('<|bot-message|>', shared.history['internal'][i][1].strip())) - - string = shared.history['internal'][i][0] - if string not in ['', '<|BEGIN-VISIBLE-CHAT|>']: - rows.insert(1, replace_all(user_turn, {'<|user-message|>': string.strip(), '<|round|>': str(i)})) - - i -= 1 - - if impersonate: - min_rows = 2 - rows.append(user_turn_stripped.rstrip(' ')) - elif not _continue: - # Adding the user message - if len(user_input) > 0: - rows.append(replace_all(user_turn, {'<|user-message|>': user_input.strip(), '<|round|>': str(len(shared.history["internal"]))})) - - # Adding the Character prefix - rows.append(apply_extensions("bot_prefix", bot_turn_stripped.rstrip(' '))) - - while len(rows) > min_rows and len(encode(''.join(rows))[0]) >= max_length: - rows.pop(1) - - prompt = ''.join(rows) - if also_return_rows: - return prompt, rows - else: - return prompt - - -def get_stopping_strings(state): - if state['mode'] == 'instruct': - stopping_strings = [f"\n{state['name1']}", f"\n{state['name2']}"] - else: - stopping_strings = [f"\n{state['name1']}:", f"\n{state['name2']}:"] - - stopping_strings += ast.literal_eval(f"[{state['custom_stopping_strings']}]") - return stopping_strings - - -def extract_message_from_reply(reply, state): - next_character_found = False - stopping_strings = get_stopping_strings(state) - - if state['stop_at_newline']: - lines = reply.split('\n') - reply = lines[0].strip() - if len(lines) > 1: - next_character_found = True - else: - for string in stopping_strings: - idx = reply.find(string) - if idx != -1: - reply = reply[:idx] - next_character_found = True - - # If something like "\nYo" is generated just before "\nYou:" - # is completed, trim it - if not next_character_found: - for string in stopping_strings: - for j in range(len(string) - 1, 0, -1): - if reply[-j:] == string[:j]: - reply = reply[:-j] - break - else: - continue - - break - - return reply, next_character_found - - -def chatbot_wrapper(text, state, regenerate=False, _continue=False): - if shared.model_name == 'None' or shared.model is None: - logging.error("No model is loaded! 
Select one in the Model tab.") - yield shared.history['visible'] - return - - # Defining some variables - cumulative_reply = '' - just_started = True - visible_text = None - eos_token = '\n' if state['stop_at_newline'] else None - stopping_strings = get_stopping_strings(state) - - # Preparing the input - if not any((regenerate, _continue)): - text, visible_text = apply_extensions('input_hijack', text, visible_text) - if visible_text is None: - visible_text = text - - text = apply_extensions('input', text) - # *Is typing...* - yield shared.history['visible'] + [[visible_text, shared.processing_message]] - else: - text, visible_text = shared.history['internal'][-1][0], shared.history['visible'][-1][0] - if regenerate: - shared.history['visible'].pop() - shared.history['internal'].pop() - # *Is typing...* - yield shared.history['visible'] + [[visible_text, shared.processing_message]] - elif _continue: - last_reply = [shared.history['internal'][-1][1], shared.history['visible'][-1][1]] - yield shared.history['visible'][:-1] + [[visible_text, last_reply[1] + '...']] - - # Generating the prompt - kwargs = {'_continue': _continue} - prompt = apply_extensions('custom_generate_chat_prompt', text, state, **kwargs) - if prompt is None: - prompt = generate_chat_prompt(text, state, **kwargs) - - # Generate - for i in range(state['chat_generation_attempts']): - reply = None - for reply in generate_reply(f"{prompt}{' ' if len(cumulative_reply) > 0 else ''}{cumulative_reply}", state, eos_token=eos_token, stopping_strings=stopping_strings): - reply = cumulative_reply + reply - - # Extracting the reply - reply, next_character_found = extract_message_from_reply(reply, state) - visible_reply = re.sub("(||{{user}})", state['name1'], reply) - visible_reply = apply_extensions("output", visible_reply) - if _continue: - sep = ' ' if last_reply[0][-1] not in [' ', '\n'] else '' - reply = last_reply[0] + sep + reply - sep = ' ' if last_reply[1][-1] not in [' ', '\n'] else '' - visible_reply = last_reply[1] + sep + visible_reply - - # We need this global variable to handle the Stop event, - # otherwise gradio gets confused - if shared.stop_everything: - return shared.history['visible'] - - if just_started: - just_started = False - if not _continue: - shared.history['internal'].append(['', '']) - shared.history['visible'].append(['', '']) - - shared.history['internal'][-1] = [text, reply] - shared.history['visible'][-1] = [visible_text, visible_reply] - yield shared.history['visible'] - if next_character_found: - break - - if reply is not None: - cumulative_reply = reply - - yield shared.history['visible'] - - -def impersonate_wrapper(text, state): - if shared.model_name == 'None' or shared.model is None: - logging.error("No model is loaded! 
Select one in the Model tab.") - yield '' - return - - # Defining some variables - cumulative_reply = '' - eos_token = '\n' if state['stop_at_newline'] else None - prompt = generate_chat_prompt(text, state, impersonate=True) - stopping_strings = get_stopping_strings(state) - - # Yield *Is typing...* - yield shared.processing_message - for i in range(state['chat_generation_attempts']): - reply = None - for reply in generate_reply(f"{prompt}{' ' if len(cumulative_reply) > 0 else ''}{cumulative_reply}", state, eos_token=eos_token, stopping_strings=stopping_strings): - reply = cumulative_reply + reply - reply, next_character_found = extract_message_from_reply(reply, state) - yield reply - if next_character_found: - break - - if reply is not None: - cumulative_reply = reply - - yield reply - - -def cai_chatbot_wrapper(text, state): - for history in chatbot_wrapper(text, state): - yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode']) - - -def regenerate_wrapper(text, state): - if (len(shared.history['visible']) == 1 and not shared.history['visible'][0][0]) or len(shared.history['internal']) == 0: - yield chat_html_wrapper(shared.history['visible'], state['name1'], state['name2'], state['mode']) - else: - for history in chatbot_wrapper('', state, regenerate=True): - yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode']) - - -def continue_wrapper(text, state): - if (len(shared.history['visible']) == 1 and not shared.history['visible'][0][0]) or len(shared.history['internal']) == 0: - yield chat_html_wrapper(shared.history['visible'], state['name1'], state['name2'], state['mode']) - else: - for history in chatbot_wrapper('', state, _continue=True): - yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode']) - - -def remove_last_message(name1, name2, mode): - if len(shared.history['visible']) > 0 and shared.history['internal'][-1][0] != '<|BEGIN-VISIBLE-CHAT|>': - last = shared.history['visible'].pop() - shared.history['internal'].pop() - else: - last = ['', ''] - - return chat_html_wrapper(shared.history['visible'], name1, name2, mode), last[0] - - -def send_last_reply_to_input(): - if len(shared.history['internal']) > 0: - return shared.history['internal'][-1][1] - else: - return '' - - -def replace_last_reply(text, name1, name2, mode): - if len(shared.history['visible']) > 0: - shared.history['visible'][-1][1] = text - shared.history['internal'][-1][1] = apply_extensions("input", text) - - return chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def send_dummy_message(text, name1, name2, mode): - shared.history['visible'].append([text, '']) - shared.history['internal'].append([apply_extensions("input", text), '']) - return chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def send_dummy_reply(text, name1, name2, mode): - if len(shared.history['visible']) > 0 and not shared.history['visible'][-1][1] == '': - shared.history['visible'].append(['', '']) - shared.history['internal'].append(['', '']) - - shared.history['visible'][-1][1] = text - shared.history['internal'][-1][1] = apply_extensions("input", text) - return chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def clear_html(): - return chat_html_wrapper([], "", "") - - -def clear_chat_log(name1, name2, greeting, mode): - shared.history['visible'] = [] - shared.history['internal'] = [] - - if greeting != '': - shared.history['internal'] += [['<|BEGIN-VISIBLE-CHAT|>', greeting]] - shared.history['visible'] += 
[['', apply_extensions("output", greeting)]] - - # Save cleared logs - save_history(mode) - return chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def redraw_html(name1, name2, mode): - return chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def tokenize_dialogue(dialogue, name1, name2, mode): - history = [] - messages = [] - dialogue = re.sub('', '', dialogue) - dialogue = re.sub('', '', dialogue) - dialogue = re.sub('(\n|^)[Aa]non:', '\\1You:', dialogue) - dialogue = re.sub('(\n|^)\[CHARACTER\]:', f'\\g<1>{name2}:', dialogue) - idx = [m.start() for m in re.finditer(f"(^|\n)({re.escape(name1)}|{re.escape(name2)}):", dialogue)] - if len(idx) == 0: - return history - - for i in range(len(idx) - 1): - messages.append(dialogue[idx[i]:idx[i + 1]].strip()) - - messages.append(dialogue[idx[-1]:].strip()) - entry = ['', ''] - for i in messages: - if i.startswith(f'{name1}:'): - entry[0] = i[len(f'{name1}:'):].strip() - elif i.startswith(f'{name2}:'): - entry[1] = i[len(f'{name2}:'):].strip() - if not (len(entry[0]) == 0 and len(entry[1]) == 0): - history.append(entry) - - entry = ['', ''] - - print("\033[1;32;1m\nDialogue tokenized to:\033[0;37;0m\n", end='') - for row in history: - for column in row: - print("\n") - for line in column.strip().split('\n'): - print("| " + line + "\n") - - print("|\n") - print("------------------------------") - - return history - - -def save_history(mode, timestamp=False): - # Instruct mode histories should not be saved as if - # Alpaca or Vicuna were characters - if mode == 'instruct': - if not timestamp: - return - - fname = f"Instruct_{datetime.now().strftime('%Y%m%d-%H%M%S')}.json" - else: - if timestamp: - fname = f"{shared.character}_{datetime.now().strftime('%Y%m%d-%H%M%S')}.json" - else: - fname = f"{shared.character}_persistent.json" - - if not Path('logs').exists(): - Path('logs').mkdir() - - with open(Path(f'logs/{fname}'), 'w', encoding='utf-8') as f: - f.write(json.dumps({'data': shared.history['internal'], 'data_visible': shared.history['visible']}, indent=2)) - - return Path(f'logs/{fname}') - - -def load_history(file, name1, name2): - file = file.decode('utf-8') - try: - j = json.loads(file) - if 'data' in j: - shared.history['internal'] = j['data'] - if 'data_visible' in j: - shared.history['visible'] = j['data_visible'] - else: - shared.history['visible'] = copy.deepcopy(shared.history['internal']) - except: - shared.history['internal'] = tokenize_dialogue(file, name1, name2) - shared.history['visible'] = copy.deepcopy(shared.history['internal']) - - -def replace_character_names(text, name1, name2): - text = text.replace('{{user}}', name1).replace('{{char}}', name2) - return text.replace('', name1).replace('', name2) - - -def build_pygmalion_style_context(data): - context = "" - if 'char_persona' in data and data['char_persona'] != '': - context += f"{data['char_name']}'s Persona: {data['char_persona']}\n" - - if 'world_scenario' in data and data['world_scenario'] != '': - context += f"Scenario: {data['world_scenario']}\n" - - context = f"{context.strip()}\n\n" - return context - - -def generate_pfp_cache(character): - cache_folder = Path("cache") - if not cache_folder.exists(): - cache_folder.mkdir() - - for path in [Path(f"characters/{character}.{extension}") for extension in ['png', 'jpg', 'jpeg']]: - if path.exists(): - img = make_thumbnail(Image.open(path)) - img.save(Path('cache/pfp_character.png'), format='PNG') - return img - - return None - - -def load_character(character, name1, name2, mode): - 
shared.character = character - context = greeting = turn_template = "" - greeting_field = 'greeting' - picture = None - - # Deleting the profile picture cache, if any - if Path("cache/pfp_character.png").exists(): - Path("cache/pfp_character.png").unlink() - - if character != 'None': - folder = 'characters' if not mode == 'instruct' else 'characters/instruction-following' - picture = generate_pfp_cache(character) - for extension in ["yml", "yaml", "json"]: - filepath = Path(f'{folder}/{character}.{extension}') - if filepath.exists(): - break - - file_contents = open(filepath, 'r', encoding='utf-8').read() - data = json.loads(file_contents) if extension == "json" else yaml.safe_load(file_contents) - - # Finding the bot's name - for k in ['name', 'bot', '<|bot|>', 'char_name']: - if k in data and data[k] != '': - name2 = data[k] - break - - # Find the user name (if any) - for k in ['your_name', 'user', '<|user|>']: - if k in data and data[k] != '': - name1 = data[k] - break - else: - name1 = shared.settings['name1'] - - for field in ['context', 'greeting', 'example_dialogue', 'char_persona', 'char_greeting', 'world_scenario']: - if field in data: - data[field] = replace_character_names(data[field], name1, name2) - - if 'context' in data: - context = data['context'] - if mode != 'instruct': - context = context.strip() + '\n\n' - elif "char_persona" in data: - context = build_pygmalion_style_context(data) - greeting_field = 'char_greeting' - - if 'example_dialogue' in data: - context += f"{data['example_dialogue'].strip()}\n" - - if greeting_field in data: - greeting = data[greeting_field] - - if 'turn_template' in data: - turn_template = data['turn_template'] - - else: - context = shared.settings['context'] - name2 = shared.settings['name2'] - greeting = shared.settings['greeting'] - turn_template = shared.settings['turn_template'] - - if mode != 'instruct': - shared.history['internal'] = [] - shared.history['visible'] = [] - if Path(f'logs/{shared.character}_persistent.json').exists(): - load_history(open(Path(f'logs/{shared.character}_persistent.json'), 'rb').read(), name1, name2) - else: - # Insert greeting if it exists - if greeting != "": - shared.history['internal'] += [['<|BEGIN-VISIBLE-CHAT|>', greeting]] - shared.history['visible'] += [['', apply_extensions("output", greeting)]] - - # Create .json log files since they don't already exist - save_history(mode) - - return name1, name2, picture, greeting, context, repr(turn_template)[1:-1], chat_html_wrapper(shared.history['visible'], name1, name2, mode) - - -def upload_character(json_file, img, tavern=False): - json_file = json_file if type(json_file) == str else json_file.decode('utf-8') - data = json.loads(json_file) - outfile_name = data["char_name"] - i = 1 - while Path(f'characters/{outfile_name}.json').exists(): - outfile_name = f'{data["char_name"]}_{i:03d}' - i += 1 - - if tavern: - outfile_name = f'TavernAI-{outfile_name}' - - with open(Path(f'characters/{outfile_name}.json'), 'w', encoding='utf-8') as f: - f.write(json_file) - - if img is not None: - img = Image.open(io.BytesIO(img)) - img.save(Path(f'characters/{outfile_name}.png')) - - logging.info(f'New character saved to "characters/{outfile_name}.json".') - return outfile_name - - -def upload_tavern_character(img, name1, name2): - _img = Image.open(io.BytesIO(img)) - _img.getexif() - decoded_string = base64.b64decode(_img.info['chara']) - _json = json.loads(decoded_string) - _json = {"char_name": _json['name'], "char_persona": _json['description'], "char_greeting": 
_json["first_mes"], "example_dialogue": _json['mes_example'], "world_scenario": _json['scenario']} - return upload_character(json.dumps(_json), img, tavern=True) - - -def upload_your_profile_picture(img, name1, name2, mode): - cache_folder = Path("cache") - if not cache_folder.exists(): - cache_folder.mkdir() - - if img is None: - if Path("cache/pfp_me.png").exists(): - Path("cache/pfp_me.png").unlink() - else: - img = make_thumbnail(img) - img.save(Path('cache/pfp_me.png')) - logging.info('Profile picture saved to "cache/pfp_me.png"') - - return chat_html_wrapper(shared.history['visible'], name1, name2, mode, reset_cache=True) diff --git a/spaces/dpc/vien/README.md b/spaces/dpc/vien/README.md deleted file mode 100644 index 3affa3b4199819ed946526fba932b0843fc12cca..0000000000000000000000000000000000000000 --- a/spaces/dpc/vien/README.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: Text Translation -emoji: 🐠 -colorFrom: yellow -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -## Info - - -Using facebook/m2m100-12B-avg-5-ckpt pre-trained model - -facebook/m2m100-12B-avg-5-ckpt supports 100 languages. - -Here, this app uses/tests these languages only. - -``` -Chinese(zh) -English(en) -Hindi(hi) -Japanese(ja) -Sinhalese(si) -Thai(th) -Vietnamese(vi) - -``` - - -## Read more: - -https://huggingface.co/facebook/m2m100-12B-avg-5-ckpt \ No newline at end of file diff --git a/spaces/dteam/chatgpt-dteam/bin_public/app/llama_func.py b/spaces/dteam/chatgpt-dteam/bin_public/app/llama_func.py deleted file mode 100644 index ef10a4f140c77e55cf18bc4c4b6fad1bced4325d..0000000000000000000000000000000000000000 --- a/spaces/dteam/chatgpt-dteam/bin_public/app/llama_func.py +++ /dev/null @@ -1,213 +0,0 @@ -import os -import logging - -from llama_index import GPTSimpleVectorIndex, ServiceContext -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -from langchain.llms import OpenAI -from langchain.chat_models import ChatOpenAI -import colorama -import PyPDF2 -from tqdm import tqdm -import hashlib - -from bin_public.config.presets import * -from bin_public.utils.utils import * - -def get_index_name(file_src): - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() - -def block_split(text): - blocks = [] - while len(text) > 0: - blocks.append(Document(text[:1000])) - text = text[1000:] - return blocks - -def get_documents(file_src): - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - logging.info(f"loading file: {file.name}") - if os.path.splitext(file.name)[1] == ".pdf": - logging.debug("Loading PDF...") - pdftext = "" - with open(file.name, 'rb') as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - text_raw = pdftext - elif os.path.splitext(file.name)[1] == ".docx": - logging.debug("Loading DOCX...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - text_raw = loader.load_data(file=file.name)[0].text - elif os.path.splitext(file.name)[1] == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - text_raw = loader.load_data(file=file.name)[0].text - 
else: - logging.debug("Loading text file...") - with open(file.name, "r", encoding="utf-8") as f: - text_raw = f.read() - text = add_space(text_raw) - # text = block_split(text) - # documents += text - documents += [Document(text)] - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" " -): - os.environ["OPENAI_API_KEY"] = api_key - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - llm_predictor = LLMPredictor( - llm=ChatOpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key) - ) - prompt_helper = PromptHelper(max_input_size = max_input_size, num_output = num_outputs, max_chunk_overlap = max_chunk_overlap, embedding_limit=embedding_limit, chunk_size_limit=600, separator=separator) - index_name = get_index_name(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - documents = get_documents(file_src) - logging.info("构建索引中……") - service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, chunk_size_limit=chunk_size_limit) - index = GPTSimpleVectorIndex.from_documents( - documents, service_context=service_context - ) - logging.debug("索引构建完成!") - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - logging.debug("索引已保存至本地!") - return index - - except Exception as e: - logging.error("索引构建失败!", e) - print(e) - return None - - -def chat_ai( - api_key, - index, - question, - context, - chatbot, -): - os.environ["OPENAI_API_KEY"] = api_key - - logging.info(f"Question: {question}") - - response, chatbot_display, status_text = ask_ai( - api_key, - index, - question, - replace_today(PROMPT_TEMPLATE), - REFINE_TEMPLATE, - SIM_K, - 1.0, - context, - ) - if response is None: - status_text = "查询失败,请换个问法试试" - return context, chatbot - response = response - - context.append({"role": "user", "content": question}) - context.append({"role": "assistant", "content": response}) - chatbot.append((question, chatbot_display)) - - os.environ["OPENAI_API_KEY"] = "" - return context, chatbot, status_text - - -def ask_ai( - api_key, - index, - question, - prompt_tmpl, - refine_tmpl, - sim_k=5, - temprature=0, - prefix_messages=[], -): - os.environ["OPENAI_API_KEY"] = api_key - - logging.debug("Index file found") - logging.debug("Querying index...") - llm_predictor = LLMPredictor( - llm=ChatOpenAI( - temperature=temprature, - model_name="gpt-3.5-turbo-0301", - prefix_messages=prefix_messages, - ) - ) - - response = None # Initialize response variable to avoid UnboundLocalError - #qa_prompt = QuestionAnswerPrompt(prompt_tmpl.replace("{reply_language}", reply_language)) - #rf_prompt = RefinePrompt(refine_tmpl.replace("{reply_language}", reply_language)) - response = index.query( - question, - similarity_top_k=sim_k, - #text_qa_template=qa_prompt, - #refine_template=rf_prompt, - response_mode="compact", - ) - - if response is not None: - logging.info(f"Response: {response}") - ret_text = response.response - nodes = [] - for index, node in enumerate(response.source_nodes): - brief = node.source_text[:25].replace("\n", "") - nodes.append( - f"
              <details><summary>[{index + 1}]\t{brief}...</summary>
              <p>{node.source_text}</p>
              </details>
              " - ) - new_response = ret_text + "\n----------\n" + "\n\n".join(nodes) - logging.info( - f"Response: {colorama.Fore.BLUE}{ret_text}{colorama.Style.RESET_ALL}" - ) - os.environ["OPENAI_API_KEY"] = "" - return ret_text, new_response, f"查询消耗了{llm_predictor.last_token_usage} tokens" - else: - logging.warning("No response found, returning None") - os.environ["OPENAI_API_KEY"] = "" - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/dwolfe66/text-generation-webui-space/modules/shared.py b/spaces/dwolfe66/text-generation-webui-space/modules/shared.py deleted file mode 100644 index ea2eb50b7f586e5c562bf2e7c75429c91f21ec6c..0000000000000000000000000000000000000000 --- a/spaces/dwolfe66/text-generation-webui-space/modules/shared.py +++ /dev/null @@ -1,103 +0,0 @@ -import argparse - -model = None -tokenizer = None -model_name = "" -soft_prompt_tensor = None -soft_prompt = False -is_RWKV = False - -# Chat variables -history = {'internal': [], 'visible': []} -character = 'None' -stop_everything = False -processing_message = '*Is typing...*' - -# UI elements (buttons, sliders, HTML, etc) -gradio = {} - -# Generation input parameters -input_params = [] - -settings = { - 'max_new_tokens': 200, - 'max_new_tokens_min': 1, - 'max_new_tokens_max': 2000, - 'name1': 'Person 1', - 'name2': 'Person 2', - 'context': 'This is a conversation between two people.', - 'stop_at_newline': True, - 'chat_prompt_size': 2048, - 'chat_prompt_size_min': 0, - 'chat_prompt_size_max': 2048, - 'chat_generation_attempts': 1, - 'chat_generation_attempts_min': 1, - 'chat_generation_attempts_max': 5, - 'name1_pygmalion': 'You', - 'name2_pygmalion': 'Kawaii', - 'context_pygmalion': "Kawaii's persona: Kawaii is a cheerful person who loves to make others smile. She is an optimist who loves to spread happiness and positivity wherever she goes.\n", - 'stop_at_newline_pygmalion': False, - 'default_extensions': [], - 'chat_default_extensions': ["gallery"], - 'presets': { - 'default': 'NovelAI-Sphinx Moth', - 'pygmalion-*': 'Pygmalion', - 'RWKV-*': 'Naive', - }, - 'prompts': { - 'default': 'Common sense questions and answers\n\nQuestion: \nFactual answer:', - '^(gpt4chan|gpt-4chan|4chan)': '-----\n--- 865467536\nInput text\n--- 865467537\n', - '(rosey|chip|joi)_.*_instruct.*': 'User: \n', - 'oasst-*': '<|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>' - } -} - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ('yes', 'true', 't', 'y', '1'): - return True - elif v.lower() in ('no', 'false', 'f', 'n', '0'): - return False - else: - raise argparse.ArgumentTypeError('Boolean value expected.') - -parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog,max_help_position=54)) -parser.add_argument('--model', type=str, help='Name of the model to load by default.') -parser.add_argument('--notebook', action='store_true', help='Launch the web UI in notebook mode, where the output is written to the same text box as the input.') -parser.add_argument('--chat', action='store_true', help='Launch the web UI in chat mode.') -parser.add_argument('--cai-chat', action='store_true', help='Launch the web UI in chat mode with a style similar to Character.AI\'s. 
If the file img_bot.png or img_bot.jpg exists in the same folder as server.py, this image will be used as the bot\'s profile picture. Similarly, img_me.png or img_me.jpg will be used as your profile picture.') -parser.add_argument('--cpu', action='store_true', help='Use the CPU to generate text.') -parser.add_argument('--load-in-8bit', action='store_true', help='Load the model with 8-bit precision.') -parser.add_argument('--load-in-4bit', action='store_true', help='DEPRECATED: use --gptq-bits 4 instead.') -parser.add_argument('--gptq-bits', type=int, default=0, help='Load a pre-quantized model with specified precision. 2, 3, 4 and 8bit are supported. Currently only works with LLaMA and OPT.') -parser.add_argument('--gptq-model-type', type=str, help='Model type of pre-quantized model. Currently only LLaMa and OPT are supported.') -parser.add_argument('--bf16', action='store_true', help='Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU.') -parser.add_argument('--auto-devices', action='store_true', help='Automatically split the model across the available GPU(s) and CPU.') -parser.add_argument('--disk', action='store_true', help='If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk.') -parser.add_argument('--disk-cache-dir', type=str, default="cache", help='Directory to save the disk cache to. Defaults to "cache".') -parser.add_argument('--gpu-memory', type=int, nargs="+", help='Maxmimum GPU memory in GiB to be allocated per GPU. Example: --gpu-memory 10 for a single GPU, --gpu-memory 10 5 for two GPUs.') -parser.add_argument('--cpu-memory', type=int, help='Maximum CPU memory in GiB to allocate for offloaded weights. Must be an integer number. Defaults to 99.') -parser.add_argument('--flexgen', action='store_true', help='Enable the use of FlexGen offloading.') -parser.add_argument('--percent', type=int, nargs="+", default=[0, 100, 100, 0, 100, 0], help='FlexGen: allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0).') -parser.add_argument("--compress-weight", action="store_true", help="FlexGen: activate weight compression.") -parser.add_argument("--pin-weight", type=str2bool, nargs="?", const=True, default=True, help="FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%%).") -parser.add_argument('--deepspeed', action='store_true', help='Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration.') -parser.add_argument('--nvme-offload-dir', type=str, help='DeepSpeed: Directory to use for ZeRO-3 NVME offloading.') -parser.add_argument('--local_rank', type=int, default=0, help='DeepSpeed: Optional argument for distributed setups.') -parser.add_argument('--rwkv-strategy', type=str, default=None, help='RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8".') -parser.add_argument('--rwkv-cuda-on', action='store_true', help='RWKV: Compile the CUDA kernel for better performance.') -parser.add_argument('--no-stream', action='store_true', help='Don\'t stream the text output in real time.') -parser.add_argument('--settings', type=str, help='Load the default interface settings from this json file. See settings-template.json for an example. If you create a file called settings.json, this file will be loaded by default without the need to use the --settings flag.') -parser.add_argument('--extensions', type=str, nargs="+", help='The list of extensions to load. 
If you want to load more than one extension, write the names separated by spaces.') -parser.add_argument('--listen', action='store_true', help='Make the web UI reachable from your local network.') -parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.') -parser.add_argument('--share', action='store_true', help='Create a public URL. This is useful for running the web UI on Google Colab or similar.') -parser.add_argument('--auto-launch', action='store_true', default=False, help='Open the web UI in the default browser upon launch.') -parser.add_argument('--verbose', action='store_true', help='Print the prompts to the terminal.') -args = parser.parse_args() - -# Provisional, this will be deleted later -if args.load_in_4bit: - print("Warning: --load-in-4bit is deprecated and will be removed. Use --gptq-bits 4 instead.\n") - args.gptq_bits = 4 diff --git a/spaces/eaedk/agri-tech-fastapi/main.py b/spaces/eaedk/agri-tech-fastapi/main.py deleted file mode 100644 index 6d7ba7f662f997085d9e505a055fbb108bf2c3ee..0000000000000000000000000000000000000000 --- a/spaces/eaedk/agri-tech-fastapi/main.py +++ /dev/null @@ -1,170 +0,0 @@ -from fastapi import FastAPI -import uvicorn -from typing import List, Literal, Optional -from pydantic import BaseModel -import pandas as pd -import pickle -import os -import json -import logging - -# logger -logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG) - -# Util Functions & Classes -def loading(fp): - with open(fp, "rb") as f: - data = pickle.load(f) - - print(f"INFO: Loaded data : {data}") - return data - - -def predict(df, endpoint="simple"): - """Take a dataframe as input and use it to make predictions""" - - print( - f"[Info] 'predict' function has been called through the endpoint '{endpoint}'.\n" - ) - - logging.info(f" \n{df.to_markdown()}") - - # scaling - scaled_df = scaler.transform(df) - logging.info(f" Scaler output is of type {type(scaled_df)}") - - # prediction - prediction = model.predict_proba(scaled_df) - print(f"INFO: Prediction output: {prediction}") - - # Formatting of the prediction - ## extract highest proba - highest_proba = prediction.max(axis=1) - print(f"INFO: Highest probabilities : {highest_proba}") - - ## extract indexes of the highest proba - highest_proba_idx = prediction.argmax(axis=1) - print(f"INFO: Highest probability indexes : {highest_proba_idx}") - - ## Maching prediction with classes - predicted_classes = [labels[i] for i in highest_proba_idx] - print(f"INFO: Predicted classes : {predicted_classes}") - # prediction[:, highest_proba_idx] - - # save in df - df["predicted proba"] = highest_proba - df["predicted label"] = predicted_classes - - print(f"INFO: dataframe filled with prediction\n{df.to_markdown()}\n") - - # parsing prediction - # parsed = json.loads(df.to_json(orient="index")) # or - parsed = df.to_dict("records") - - return parsed - - -## INPUT MODELING -class Land(BaseModel): - """Modeling of one input data in a type-restricted dictionary-like format - - column_name : variable type # strictly respect the name in the dataframe header. 
- - eg.: - ========= - customer_age : int - gender : Literal['male', 'female', 'other'] - """ - - N: float - P: float - K: float - temperature: float - humidity: float - ph: float - rainfall: float - - -class Lands(BaseModel): - inputs: List[Land] - - def return_list_of_dict( - cls, - ): - # return [land.dict() for land in cls.inputs] - return [i.dict() for i in cls.inputs] - - -# API Config -app = FastAPI(title="Agri-Tech API", - description="This is a ML API for classification of crop to plant on a land regarding some features") - -# ML Config -ml_objects = loading(fp=os.path.join("assets", "ml", "crop_recommandation2.pkl")) -## Extract the ml components -model = ml_objects["model"] -scaler = ml_objects["scaler"].set_output(transform="pandas") -labels = ml_objects["labels"] - - -# Endpoints -@app.get("/") -def root(): - return {"Description": " This is a ML API for classification of crop to plant on a land regarding some features.", - "Documentation": "Go to the docs: https://eaedk-agri-tech-fastapi.hf.space/docs"} - - -@app.get("/checkup") -def test(a: Optional[int], b: int): - return {"a": a, "b": b} - - -## ML endpoint -@app.post("/predict") -def make_prediction( - N: float, - P: float, - K: float, - temperature: float, - humidity: float, - ph: float, - rainfall: float, -): - """Make prediction with the passed data""" - - df = pd.DataFrame( - { - "N": [N], - "P": [P], - "K": [K], - "temperature": [temperature], - "humidity": [humidity], - "ph": [ph], - "rainfall": [rainfall], - } - ) - - parsed = predict(df=df) # df.to_dict('records') - - return { - "output": parsed, - } - - -@app.post("/predict_multi") -def make_multi_prediction(multi_lands: Lands): - """Make prediction with the passed data""" - print(f"Mutiple inputs passed: {multi_lands}\n") - df = pd.DataFrame(multi_lands.return_list_of_dict()) - - parsed = predict(df=df, endpoint="multi inputs") # df.to_dict('records') - - return { - "output": parsed, - "author": "Stella Archar", - "api_version": ";)", - } - - -if __name__ == "__main__": - uvicorn.run("main:app", reload=True) diff --git a/spaces/enzostvs/stable-diffusion-tpu/README.md b/spaces/enzostvs/stable-diffusion-tpu/README.md deleted file mode 100644 index 16091394fc4cae4bfb4080f13f91a0a7c506f8d1..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Stable Diffusion TPU -emoji: ⚡ -colorFrom: pink -colorTo: blue -sdk: docker -pinned: true -app_port: 3002 -license: mit - -hf_oauth: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/eson/tokenizer-arena/vocab/chinese_llama2/__init__.py b/spaces/eson/tokenizer-arena/vocab/chinese_llama2/__init__.py deleted file mode 100644 index 746423135290cba3856857ec0071cc58522f3e0c..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/chinese_llama2/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -""" -## 词典扩容 -32000 -32001 但 - -""" - -from transformers import LlamaTokenizer - -tokenizer = LlamaTokenizer.from_pretrained("ziqingyang/chinese-llama-2-7b") - -tokenizer.comments = "重新设计了新词表(大小:55296),进一步提升了中文字词的覆盖程度" diff --git a/spaces/eugenesiow/super-image/app.py b/spaces/eugenesiow/super-image/app.py deleted file mode 100644 index cec2e61bc8d716f588d9c2e852b15ade69bdd9db..0000000000000000000000000000000000000000 --- a/spaces/eugenesiow/super-image/app.py +++ /dev/null @@ -1,93 +0,0 @@ -import cv2 -import torch -import numpy as np -import gradio as gr 
-from PIL import Image -from super_image import ImageLoader, EdsrModel, MsrnModel, MdsrModel, AwsrnModel, A2nModel, CarnModel, PanModel, \ - HanModel, DrlnModel, RcanModel - -title = "super-image" -description = "State of the Art Image Super-Resolution Models." -article = "

              Github Repo" \ - "| Documentation " \ - "| Models

              " - - -def get_model(model_name, scale): - if model_name == 'EDSR': - model = EdsrModel.from_pretrained('eugenesiow/edsr', scale=scale) - elif model_name == 'MSRN': - model = MsrnModel.from_pretrained('eugenesiow/msrn', scale=scale) - elif model_name == 'MDSR': - model = MdsrModel.from_pretrained('eugenesiow/mdsr', scale=scale) - elif model_name == 'AWSRN-BAM': - model = AwsrnModel.from_pretrained('eugenesiow/awsrn-bam', scale=scale) - elif model_name == 'A2N': - model = A2nModel.from_pretrained('eugenesiow/a2n', scale=scale) - elif model_name == 'CARN': - model = CarnModel.from_pretrained('eugenesiow/carn', scale=scale) - elif model_name == 'PAN': - model = PanModel.from_pretrained('eugenesiow/pan', scale=scale) - elif model_name == 'HAN': - model = HanModel.from_pretrained('eugenesiow/han', scale=scale) - elif model_name == 'DRLN': - model = DrlnModel.from_pretrained('eugenesiow/drln', scale=scale) - elif model_name == 'RCAN': - model = RcanModel.from_pretrained('eugenesiow/rcan', scale=scale) - else: - model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=scale) - return model - - -def inference(img, scale_str, model_name): - max_res = 1024 - scale = int(scale_str.replace('x', '')) - width, height = img.size - print(width, height) - if width > max_res or height > max_res: - img = img.thumbnail((max_res, max_res), Image.ANTIALIAS) - model = get_model(model_name, scale) - try: - inputs = ImageLoader.load_image(img) - preds = model(inputs) - preds = preds.data.cpu().numpy() - pred = preds[0].transpose((1, 2, 0)) * 255.0 - return Image.fromarray(pred.astype('uint8'), 'RGB') - except Exception as e: - print(e) - return None - - -torch.hub.download_url_to_file('http://people.rennes.inria.fr/Aline.Roumy/results/images_SR_BMVC12/input_groundtruth/baby_mini_d3_gaussian.bmp', - 'baby.bmp') -torch.hub.download_url_to_file('http://people.rennes.inria.fr/Aline.Roumy/results/images_SR_BMVC12/input_groundtruth/woman_mini_d3_gaussian.bmp', - 'woman.bmp') -torch.hub.download_url_to_file('http://people.rennes.inria.fr/Aline.Roumy/results/images_SR_BMVC12/input_groundtruth/bird_mini_d4_gaussian.bmp', - 'bird.bmp') - -# models = ['EDSR-base', 'DRLN', 'EDSR', 'MDSR', 'A2N', 'PAN', 'AWSRN-BAM', 'MSRN'] -models = ['EDSR-base', 'A2N', 'PAN', 'AWSRN-BAM', 'MSRN'] -scales = [2, 3, 4] -for model_name in models: - for scale in scales: - get_model(model_name, scale) - -gr.Interface( - inference, - [ - gr.inputs.Image(type="pil", label="Input"), - gr.inputs.Radio(["x2", "x3", "x4"], label='scale'), - gr.inputs.Dropdown(choices=models, - label='Model') - ], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['baby.bmp', 'x2', 'EDSR-base'], - ['woman.bmp', 'x3', 'MSRN'], - ['bird.bmp', 'x4', 'PAN'] - ], - allow_flagging='never', - ).launch(debug=False) diff --git a/spaces/facebook/MusicGen/setup.py b/spaces/facebook/MusicGen/setup.py deleted file mode 100644 index 64e7d6fcb1092748f8151f6d3ed1767d3be1b34b..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/setup.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio generation research library for PyTorch' - -URL = 'https://github.com/facebookresearch/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com, jadecopet@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/falterWliame/Face_Mask_Detection/HACK EZDrummer 2 All Expansions With UPDATES !!TOP!!.md b/spaces/falterWliame/Face_Mask_Detection/HACK EZDrummer 2 All Expansions With UPDATES !!TOP!!.md deleted file mode 100644 index 7646c2e71c33ff7b7a3594a87028400dba9f9bd4..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/HACK EZDrummer 2 All Expansions With UPDATES !!TOP!!.md +++ /dev/null @@ -1,8 +0,0 @@ -

              HACK EZDrummer 2 All Expansions With UPDATES


              DOWNLOADhttps://urlca.com/2uDdJQ



              -
              -Classic Rock/Metal EZX. Classic Hard Rock EZX. Classic and Modern Country EZX. - -Optimized for midi + USB (Included) Our original EZdrummer drum and loop pack is extremely flexible. With all of our loops from. EZdrummerV1 & V2. Total Drum Kit. EZX drum pack. The drums are unpatched. For now. You can use. With the drum content that comes with. EZdrummer. One way to get started is to download our free. Drum Pack. Then load your own drum content. Simply load. A generic drum clip in midi. Mode of a generic drum. Now, using either the Pitch Bend. Or the Velocity. Controls. For each of the. Snare. Hi-hat. Kick. Add. and Compose. On our Drum Kit. Track. you will. Not be limited by. The limitations of midi drum content. As these are. Unpitched. So any drum. Can be used. By our MIDI Drum Pack. Users can get. A drum. Kit. And. Compose. Using any drum. They want. In any of. The. Sequence. Types. And be. Able to. Compose. Along. Side. Of. The midi. Drum content. They. Load. Or. Export. The. Tracks. Including. The. MIDI. Drum. Content. That. Comes. With. EZdrummer. Without. The. MIDI. Drum. Content. From. EZdrummer. By. Using. The. Undocked. Mixer. Panel. On. Our. Drum. Kit. Track. Click. Each. Drum. To. Open. The. Mixer. Panel. With. The. Drum. In. It. Brought. To. You. Click. Again. To. Export. It. Click. A. Third. Time. To. Export. The. MIDI. Drum. Pack. Click. Again. To. Start. Adding. Songs. Click. A. Fourth. Time. To. Export. A. MIDI..Drum. Kit. Time. To. Export. The. MIDI..Drum. Kit. Time. To. Export. 5. The. Drum. Kit. Click. To. Export. Drum. Kits. 6. The. Drum. Kit. Click. To. Export. Drum. Kits. 7. The. Drum. 4fefd39f24
              -
              -
              -

              diff --git a/spaces/falterWliame/Face_Mask_Detection/Philips MCM2000 12 Service Manual TOP Download.md b/spaces/falterWliame/Face_Mask_Detection/Philips MCM2000 12 Service Manual TOP Download.md deleted file mode 100644 index b37fdfefb68d570990d3ba00aa908fbd3c411190..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Philips MCM2000 12 Service Manual TOP Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

              Philips MCM2000 12 Service Manual Download


              Download Filehttps://urlca.com/2uDdQf



- -View and download the Philips MCM2000/12 Service Manual online. MICRO AUDIO SYSTEM. Download the MCM2000/12 mini audio system manual in PDF format. [Board layout diagram: top case and bottom case, PCB1-PCB5, switches S101-S109, DVD module.] The device can be disconnected from the power supply by pressing a button or key on the front panel. When doing this, make sure that there is a power switch on the rear panel of the machine (Fig.). Fig.: Installing and removing the MCM2000/12 front panel. On the front panel of the MCM2000/12, in addition to the button or key, there are the following elements. 8a78ff9644
              -
              -
              -

              diff --git a/spaces/farukozderim/space-building-space-25/README.md b/spaces/farukozderim/space-building-space-25/README.md deleted file mode 100644 index 8e452d5881e65ae7e7457790ec47c5649053575a..0000000000000000000000000000000000000000 --- a/spaces/farukozderim/space-building-space-25/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Space Building Space 25 -emoji: 🌖 -colorFrom: blue -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/fatiXbelha/sd/8 Ball Pool by Miniclip The Ultimate Pool Game Experience.md b/spaces/fatiXbelha/sd/8 Ball Pool by Miniclip The Ultimate Pool Game Experience.md deleted file mode 100644 index b8568ec859892fbdc0f06ca02abaeaf9aa624277..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/8 Ball Pool by Miniclip The Ultimate Pool Game Experience.md +++ /dev/null @@ -1,101 +0,0 @@ -
              -

              How to Download and Play 8 Ball Pool on Windows 7

              -

              Do you love playing pool games but don't have access to a real pool table? Do you want to enjoy the thrill of competing with players from around the world in a realistic and immersive pool game? If yes, then you should try 8 Ball Pool, the world's #1 pool game for Android devices. And guess what? You can also play it on your Windows 7 PC with the help of an Android emulator.

              -

              download 8 ball pool windows 7


              Download Zip ✒ ✒ ✒ https://urllie.com/2uNDCh



              -

              In this article, we will show you how to download and play 8 Ball Pool on Windows 7, as well as some of the benefits, tips, and tricks that will help you become a master of the pool. So, let's get started!

              -

              Benefits of Playing 8 Ball Pool on Windows 7

              -

              Playing 8 Ball Pool on Windows 7 has several benefits, such as:

              -
                -
              • Improved focus and concentration: Playing pool requires a great amount of focus and concentration. You have to aim carefully, calculate angles, adjust power, and anticipate your opponent's moves. Playing pool regularly can help you improve your mental abilities and attention span.
              • -
              • Better hand-eye coordination: Playing pool also requires good hand-eye coordination. You have to control your cue stick, hit the cue ball accurately, and watch the balls move on the table. Playing pool can help you enhance your motor skills and coordination.
              • -
              • Development of strategic planning skills: Playing pool is not just about hitting balls randomly. You have to plan your shots ahead, think of the best way to clear the table, and avoid leaving easy shots for your opponent. Playing pool can help you develop your strategic thinking and problem-solving skills.
              • -
              • Relaxation and stress relief: Playing pool can be a great way to relax and unwind after a long day. You can enjoy the game at your own pace, listen to some music, chat with your friends online, or just have fun. Playing pool can help you reduce stress and anxiety.
              • -
              • Heightened cognition: Playing pool can also stimulate your brain and keep it active. Pool involves performing mental mathematical estimates and calculations, such as basic geometry and physics. These skills are necessary to calculate precise angles and trajectories and to determine how much force to apply during a strike so as not to under or overshoot a target. Playing pool can help you sharpen your mind and memory.
              • -
              -

              Steps to Download and Play 8 Ball Pool on Windows 7

              -

To download and play 8 Ball Pool on Windows 7, you will need Android emulator software that lets you run Android applications on your PC. There are many Android emulators available online, but we recommend using BlueStacks, which is one of the most popular and reliable ones. Here are the steps to download and play 8 Ball Pool on Windows 7 using BlueStacks:

              -
                -
              1. Download and install BlueStacks on your PC: Go to the official website of BlueStacks and download the latest version of the software for Windows 7. Follow the instructions on the screen to install BlueStacks on your PC. It may take some time depending on your internet speed and PC performance.
              2. -
              3. Download the 8 Ball Pool APK file for PC: Go to a trusted website that provides APK files for Android applications, such as APKPure or APKMirror, and search for 8 Ball Pool. Download the latest version of the 8 Ball Pool APK file for PC and save it on your desktop or any other folder you prefer.
              4. -
              5. Drag and drop the APK file into the emulator window to install: Launch BlueStacks on your PC and sign in with your Google account. You will see a home screen with various icons and options. Locate the 8 Ball Pool APK file on your PC and drag and drop it into the emulator window. BlueStacks will automatically detect and install the game on your PC.
              6. -
              7. Launch the game and enjoy playing: Once the installation is complete, you will see the 8 Ball Pool icon on the home screen of BlueStacks. Click on it to launch the game and start playing. You can use your mouse and keyboard to control the game, or you can also customize the settings to use a gamepad or a touch screen if you have one.
              8. -
              -

              Tips and Tricks for 8 Ball Pool on Windows 7

              -

              Now that you know how to download and play 8 Ball Pool on Windows 7, here are some tips and tricks that will help you improve your game and win more matches:

              -
                -
              • Choose your tables wisely: When you start playing 8 Ball Pool, you will have access to different tables with different entry fees and rewards. You should choose a table that matches your skill level and budget. Don't play on a table that is too expensive or too difficult for you, as you may lose more coins than you earn. Also, don't play on a table that is too cheap or too easy for you, as you may not get enough challenge or satisfaction from winning.
              • -
              • Open the app every day: One of the easiest ways to earn more coins and cash in 8 Ball Pool is to open the app every day. You will get a free spin on the Spin and Win wheel, which can give you various prizes, such as coins, cash, cues, boxes, or even rare items. You will also get a free scratch card every day, which can also give you coins or cash. Additionally, you will get a daily bonus of coins based on your level and VIP status.
              • -
              • Buy a better cue: Another way to improve your game is to buy a better cue. Cues have different attributes, such as power, aim, spin, and time. A better cue can help you hit harder, aim more accurately, apply more spin, or take more time to adjust your shot. You can buy cues with coins or cash, or you can also win them from boxes or events. You can also upgrade your cues with coins to increase their attributes.
              • -
              • Use a little English: English is a term used in pool games to describe the spin applied to the cue ball when hitting it. Using English can help you control the direction and speed of the cue ball after hitting another ball. You can use English to make trick shots, avoid scratches, or set up your next shot. To use English in 8 Ball Pool, you can adjust the position of the cue ball icon on the bottom right corner of the screen before hitting it.
              • -
              • Shoot faster: One of the most important skills in 8 Ball Pool is to shoot faster than your opponent. Shooting faster can give you an advantage over your opponent, as you can clear more balls before they get a chance to shoot. Shooting faster can also prevent you from running out of time, which can result in a foul or a loss. To shoot faster in 8 Ball Pool, you should practice your aiming and timing skills, as well as plan your shots ahead.
              • -
              -

              Conclusion and FAQs

              -

              In conclusion, 8 Ball Pool is a fun and addictive pool game that you can play on your Windows 7 PC with an Android emulator. Playing 8 Ball Pool on Windows 7 can offer you many benefits, such as improved focus, concentration, hand-eye coordination, strategic planning skills, relaxation, and cognition. To download and play 8 Ball Pool on Windows 7, you just need to follow these simple steps: download and install an Android emulator on your PC, download the 8 Ball Pool APK file for PC, drag and drop the APK file into the emulator window to install, and launch the game and enjoy playing. You can also use some tips and tricks to improve your game and win more matches, such as choosing your tables wisely, opening the app every day, buying a better cue, using a little English, and shooting faster. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to contact us or leave a comment below. Happy playing!

              -

              Here are some frequently asked questions (FAQs) about 8 Ball Pool on Windows 7:

              -

              How to download 8 ball pool on windows 7 laptop
              -Download 8 ball pool for windows 7 32 bit
              -Download 8 ball pool for windows 7 offline
              -Download 8 ball pool for windows 7 free full version
              -Download 8 ball pool for windows 7 from miniclip.com
              -Download 8 ball pool for windows 7 using bluestacks emulator
              -Download 8 ball pool for windows 7 pc without internet
              -Download 8 ball pool for windows 7 ultimate
              -Download 8 ball pool for windows 7 professional
              -Download 8 ball pool for windows 7 home premium
              -Download 8 ball pool for windows 7 starter
              -Download 8 ball pool for windows 7 with crack
              -Download 8 ball pool for windows 7 latest version
              -Download 8 ball pool for windows 7 apk
              -Download 8 ball pool for windows 7 softonic
              -Download 8 ball pool for windows 7 game loop
              -Download 8 ball pool for windows 7 nox player
              -Download 8 ball pool for windows 7 memu play
              -Download 8 ball pool for windows 7 ld player
              -Download 8 ball pool for windows 7 uptodown
              -Download and install 8 ball pool on windows 7
              -Download and play 8 ball pool on windows 7
              -Download and update 8 ball pool on windows 7
              -Download and run 8 ball pool on windows 7
              -Download and enjoy 8 ball pool on windows 7
              -Best way to download 8 ball pool on windows 7
              -Easiest way to download 8 ball pool on windows 7
              -Fastest way to download 8 ball pool on windows 7
              -Safest way to download 8 ball pool on windows 7
              -Cheapest way to download 8 ball pool on windows 7
              -Can I download 8 ball pool on windows 7?
              -Why can't I download 8 ball pool on windows 7?
              -How do I download and install the latest version of the game?

              -

              FAQ #1: How can I play with my friends online?

              -

              One of the best features of 8 Ball Pool is that you can play with your friends online. You can either challenge your friends directly from the game, or you can join a club and play with other club members. To challenge your friends, you need to connect your game account to your Facebook account. Then, you can see your Facebook friends who are also playing 8 Ball Pool on the Friends tab. You can tap on their name and send them a challenge request. To join a club, you need to go to the Club tab and either create your own club or join an existing one. You can chat with other club members, play friendly matches, or compete in club tournaments.

              -

              FAQ #2: How can I earn more coins and cash in the game?

              -

              Coins and cash are the main currencies in 8 Ball Pool. You need coins to enter matches, buy cues, upgrade cues, or join clubs. You need cash to buy premium items, such as boxes, cues, or chat packs. There are several ways to earn more coins and cash in the game, such as:

              -
                -
              • Winning matches: The most obvious way to earn coins is to win matches. You will get the entry fee of the table plus a bonus amount of coins for winning. The higher the table, the more coins you will earn.
              • -
              • Opening boxes: Boxes are rewards that you can get from various sources, such as winning matches, completing missions, ranking up, or participating in events. Boxes contain various items, such as coins, cash, cues, or rare items.
              • -
              • Completing missions: Missions are tasks that you can complete by playing the game. They can be daily missions or seasonal missions. Completing missions will give you coins, cash, boxes, or other rewards.
              • -
              • Participating in events: Events are special modes that you can play for a limited time. They can be tournaments, mini-games, or challenges. Participating in events will give you coins, cash, boxes, or other rewards.
              • -
              • Watching videos: You can watch short videos from the game to earn some free coins or cash. You can find the video icon on the top right corner of the home screen.
              • -
              -

              FAQ #3: How can I customize my cue and pool table?

              -

              You can customize your cue and pool table in 8 Ball Pool to make them look more stylish and unique. To customize your cue, you can go to the Shop tab and browse through the different cues available. You can buy cues with coins or cash, or you can also win them from boxes or events. You can also upgrade your cues with coins to increase their attributes, such as power, aim, spin, and time. To customize your pool table, you can go to the Settings tab and tap on the Table option. You can choose from different colors and patterns for your pool table cloth. You can also change the color of the balls and the pockets.

              -

              FAQ #4: How can I improve my skills and ranking in the game?

              -

              If you want to become a better player and climb up the ranks in 8 Ball Pool, you need to practice your skills and learn from your mistakes. Here are some tips that can help you improve your skills and ranking in the game:

              -
                -
              • Practice offline: If you are new to the game or want to hone your skills without risking your coins, you can practice offline in the Practice mode. You can play against the computer or yourself, and you can choose any table you want. Practicing offline can help you get familiar with the game mechanics, the cues, the tables, and the shots.
              • -
              • Watch tutorials and replays: If you want to learn from the experts or see how other players play, you can watch tutorials and replays in the game. You can find tutorials on the Tutorial tab, where you can learn the basics of the game, as well as some advanced techniques and strategies. You can also watch replays of your own matches or other players' matches on the Replay tab. Watching tutorials and replays can help you gain insights and tips on how to play better.
              • -
              • Play in different modes: If you want to challenge yourself and test your skills, you can play in different modes in the game. You can play in 1v1 mode, where you can play against another player online. You can also play in Tournament mode, where you can compete with up to 8 players in a knockout format. You can also play in 9 Ball mode, where you have to pot the balls in numerical order. Playing in different modes can help you improve your versatility and adaptability.
              • -
              • Get feedback and advice: If you want to get feedback and advice from other players, you can join a club or a community in the game. You can chat with other club members, ask for tips, share your experiences, or request friendly matches. You can also join a community on social media platforms, such as Facebook, Twitter, Instagram, or YouTube, where you can interact with other fans of the game, watch live streams, join contests, or get updates.
              • -
              -

              FAQ #5: How can I contact the developer for feedback or support?

              -

              If you have any feedback or suggestions for the game, or if you encounter any issues or problems while playing, you can contact the developer of 8 Ball Pool for assistance. You can contact them through various channels, such as:

              -
                -
              • Email: You can send an email to support@miniclip.com with your query or issue. You should include your user ID, device model, operating system version, and a screenshot or video of the problem if possible.
              • -
              • Website: You can visit the official website of 8 Ball Pool at https://www.miniclip.com/games/8-ball-pool-multiplayer/en/ and click on the Support button at the bottom of the page. You will be redirected to a page where you can find answers to frequently asked questions, submit a request form, or chat with an agent.
              • -
              • In-game: You can also contact the developer from within the game. You can go to the Settings tab and tap on the Help & Support option. You will be redirected to a page where you can find answers to frequently asked questions, submit a request form, or chat with an agent.
              • -

              401be4b1e0
              -
              -
              \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Rosa Linns Snap - The Snapping 1 2 Where Are You Song That Took Over the Internet.md b/spaces/fatiXbelha/sd/Download Rosa Linns Snap - The Snapping 1 2 Where Are You Song That Took Over the Internet.md deleted file mode 100644 index f6717da115b063e7f3bf400fed13554891615a7b..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Rosa Linns Snap - The Snapping 1 2 Where Are You Song That Took Over the Internet.md +++ /dev/null @@ -1,161 +0,0 @@ -
              -

              Download 1 2 Where Are You: The Meaning and Origin of the Viral TikTok Song

              -

              If you are a TikTok user, you have probably heard the catchy song that goes "snapping 1 2 where are you, you're still in my heart". But do you know what the song is about, who sings it, and how it became viral on the app? In this article, we will answer all these questions and more. We will also show you how to download the song and use it on your own TikTok videos. So, keep reading and get ready to snap along!

              -

              What is the song about?

              -

              The song is called "SNAP" and it is sung by Rosa Linn, a Norwegian singer-songwriter who is based in Los Angeles. The song was released in September 2022 and it is part of her debut EP "Rosa Linn".

              -

              download 1 2 where are you


              Download Zip 🆗 https://urllie.com/2uNx28



              -

              The lyrics and the message

              -

              The lyrics of the song are about trying to get over a breakup by snapping your fingers. However, the snapping does not work and the singer still misses her ex. She expresses her frustration and sadness with lines like "I'm writing a song, said this is the last one, how many last songs are left?" and "And if one more person says you should get over it, oh I might stop talking to people before I snap". The song is relatable for anyone who has gone through a heartbreak and knows how hard it is to move on.

              -

              The artist and the inspiration

              -

              Rosa Linn is a 22-year-old artist who started making music when she was 15. She grew up listening to pop, rock, and country music, and she cites Taylor Swift, Ed Sheeran, and Avril Lavigne as some of her influences. She writes her own songs based on her personal experiences and emotions. She said that "SNAP" was inspired by a real breakup that she had in June 2022. She said that she wrote the song as a way of coping with her feelings and hoping that other people could relate to it.

              -

              How did the song become viral on TikTok?

              -

The song went viral on TikTok thanks to a trend called the #snapchallenge. The challenge involves making a video where you snap your fingers along with the chorus of the song. The challenge was started by a user named @jessicawangofficial, who posted a video of herself snapping in different outfits on October 5th, 2022. Since then, the challenge has been done by millions of users, including celebrities like Addison Rae, Charli D'Amelio, James Charles, and Noah Beck.

              -

              The #snapchallenge trend

              -

              The #snapchallenge trend is simple and fun to do. All you need is your phone, the TikTok app, and some creativity. Here are the basic steps to join the trend:

              -
                -
              1. Open the TikTok app and tap on the "+" icon to create a new video.
              2. -
              3. Tap on "Sounds" and search for "SNAP" by Rosa Linn. Select the sound and tap on "Use this sound".
              4. -
              5. Record yourself snapping your fingers along with the chorus of the song. You can snap in different ways, such as with one hand, two hands, or with props like gloves or rings.
              6. -
              7. Edit your video by adding filters, stickers, text, or effects.
              8. -
9. Post your video with the hashtag #snapchallenge and tag @rosalinnmusic and @jessicawangofficial to show your appreciation and support.

                The popular videos and creators

                -

                Some of the most popular videos and creators that have used the song on TikTok are:

                - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Video | Creator | Views | Likes
@addisonre snapping with her mom | @addisonre | 32.7M | 6.5M
@charlidamelio snapping with her sister and friends | @charlidamelio | 28.4M | 5.2M
@jamescharles snapping with different makeup looks | @jamescharles | 25.1M | 4.8M
@noahbeck snapping with his girlfriend and dog | @noahbeck | 23.6M | 4.5M
@rosalinnmusic snapping with her guitar and fans | @rosalinnmusic | 21.9M | 4.1M
                -

                These videos show how the song can be used to showcase different aspects of one's personality, style, and relationships. They also show how the song can connect with millions of people who enjoy the catchy tune and the relatable lyrics.

                -

                download rosa linn snap lyrics
                -download snapping 1 2 where are you tiktok song
                -download snap eurovision 2022
                -download rosa linn snap mp3
                -download snap armenia eurovision song
                -download rosa linn snap official music video
                -download snap high and fast rosa linn
                -download snapping one two where are you song
                -download rosa linn snap instrumental
                -download snap by rosa linn spotify
                -download snapping 1 2 where are you ringtone
                -download snap lyrics video rosa linn
                -download snap armenia eurovision 2022
                -download rosa linn snap acoustic version
                -download snapping one two where are you remix
                -download rosa linn snap karaoke
                -download snap by rosa linn youtube
                -download snapping 1 2 where are you sub español
                -download snap eurovision song contest 2022
                -download rosa linn snap piano cover
                -download snapping one two where are you dance challenge
                -download rosa linn snap live performance
                -download snap by rosa linn soundcloud
                -download snapping 1 2 where are you reaction video
                -download snap eurovision armenia lyrics
                -download rosa linn snap guitar chords
                -download snapping one two where are you meme
                -download rosa linn snap nightcore
                -download snap by rosa linn apple music
                -download snapping 1 2 where are you slowed down
                -download snap eurovision armenia music video
                -download rosa linn snap sheet music
                -download snapping one two where are you edit
                -download rosa linn snap bass boosted
                -download snap by rosa linn amazon music
                -download snapping 1 2 where are you roblox id
                -download snap eurovision armenia mp3
                -download rosa linn snap flute notes
                -download snapping one two where are you mashup
                -download rosa linn snap 8d audio
                -download snap by rosa linn deezer
                -download snapping 1 2 where are you clean version
                -download snap eurovision armenia live
                -download rosa linn snap drum cover
                -download snapping one two where are you compilation
                -download rosa linn snap reversed
                -download snap by rosa linn shazam
                -download snapping 1 2 where are you behind the scenes
                -download snap eurovision armenia reaction

                -

                The reactions and feedback

                -

                The song has received a lot of positive reactions and feedback from TikTok users and fans. Some of the common comments are:

                -
                  -
                • "This song is so catchy, I can't stop snapping!"
                • -
                • "This song is so relatable, I feel like Rosa Linn is singing about my life!"
                • -
                • "This song is so beautiful, I love Rosa Linn's voice and style!"
                • -
                • "This song is so inspiring, I want to learn how to play guitar and sing like Rosa Linn!"
                • -
                • "This song is so amazing, I want to see Rosa Linn live in concert!"
                • -
                -

                The song has also received praise from other musicians and celebrities, such as Taylor Swift, Ed Sheeran, Avril Lavigne, Shawn Mendes, Camila Cabello, and Selena Gomez. They have expressed their admiration for Rosa Linn's talent and creativity, and have invited her to collaborate with them on future projects.

                -

                How to download the song and use it on TikTok?

                -

                If you want to download the song and use it on your own TikTok videos, you have several options. Here are some of them:

                -

                The streaming platforms and links

                -

                The song is available on various streaming platforms, such as Spotify, Apple Music, YouTube Music, Amazon Music, Deezer, Tidal, Pandora, and SoundCloud. You can find the links to these platforms on Rosa Linn's official website: https://www.rosalinn.com/. You can also find the links on her TikTok profile: https://www.tiktok.com/@rosalinnmusic/.

                -

                You can listen to the song for free on some of these platforms, or you can pay a subscription fee to access more features and benefits. You can also buy or download the song as a digital file (MP3, WAV, FLAC, etc.) from some of these platforms.

                -

                The steps to create a TikTok video with the song

                -

                To create a TikTok video with the song, you can follow these steps:

                -
                  -
                1. Open the TikTok app and tap on the "+" icon to create a new video.
                2. -
                3. Tap on "Sounds" and search for "SNAP" by Rosa Linn. Select the sound and tap on "Use this sound". Alternatively, you can go to Rosa Linn's profile and tap on the sound icon next to her name.
                4. -
                5. Choose a segment of the song that you want to use for your video. You can adjust the length and position of the segment by dragging the slider or tapping on the scissors icon.
                6. -
                7. Record yourself doing whatever you want with the song. You can snap your fingers, dance, lip-sync, sing, act, or do anything else that matches the mood of the song.
                8. -
                9. Edit your video by adding filters, stickers, text, or effects. You can also trim, crop, rotate, or adjust the speed of your video.
                10. -
                11. Post your video with a catchy caption and relevant hashtags. You can also tag @rosalinnmusic and @jessicawangofficial to show your appreciation and support.
                12. -
                -

                The tips and tricks to make your video stand out

                -

                To make your video stand out from the millions of other videos that use the same song, you can try some of these tips and tricks:

                -
                  -
                • Use different snapping styles and techniques. You can snap with one hand, two hands, or with props like gloves or rings. You can also snap in different rhythms, patterns, or directions.
                • -
                • Use different outfits and accessories. You can change your clothes, shoes, hats, glasses, jewelry, or makeup to match the song or to show your personality.
                • -
                • Use different locations and backgrounds. You can record your video indoors or outdoors, in your bedroom or in your kitchen, in a park or in a mall, or anywhere else that suits the song or your mood.
                • -
                • Use different angles and perspectives. You can record your video from above, below, behind, in front, or sideways. You can also use a tripod, a selfie stick, a mirror, or a friend to help you record your video.
                • -
                • Use different transitions and effects. You can use the built-in features of TikTok to add transitions and effects to your video. You can also use external apps or software to edit your video and make it more professional and creative.
                • -
                -

                Conclusion

                -

                In conclusion, "SNAP" by Rosa Linn is a catchy and relatable song that has become viral on TikTok thanks to the #snapchallenge trend. The song is about trying to get over a breakup by snapping your fingers, but failing to do so. The song was inspired by a real breakup that Rosa Linn had in June 2022. The song is available on various streaming platforms and you can download it and use it on your own TikTok videos. You can also follow some tips and tricks to make your video stand out from the crowd. We hope you enjoyed this article and learned something new about the song and the artist. If you did, please share it with your friends and family who might be interested in it. And don't forget to snap along!

                -

                FAQs

                -

                Here are some frequently asked questions about the song and the trend:

                -
                  -
                1. What is the name of the song that goes "snapping 1 2 where are you"?
                2. -

                  The name of the song is "SNAP" by Rosa Linn.

                  -
                3. Who is Rosa Linn?
                4. -

Rosa Linn is an Armenian singer-songwriter who is based in Los Angeles. She released her debut EP "Rosa Linn" in September 2022.

                  -
                5. What is the #snapchallenge on TikTok?
                6. -

                  The #snapchallenge is a trend on TikTok where users snap their fingers along with the chorus of the song "SNAP" by Rosa Linn.

                  -
                7. How do I download the song and use it on TikTok?
                8. -

                  You can find the links to download or stream the song on Rosa Linn's website or TikTok profile. You can also use the sound directly from TikTok by searching for "SNAP" by Rosa Linn or going to her profile.

                  -
                9. How do I make my video stand out from other videos that use the same song?
                10. -

                  You can try some tips and tricks such as using different snapping styles, outfits, locations, angles, transitions, and effects.

                  -

                401be4b1e0
                -
                -
                \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Sniper 3D Assassin 3.36 1 Mod Apk with Unlimited Money and Diamonds for Free.md b/spaces/fatiXbelha/sd/Download Sniper 3D Assassin 3.36 1 Mod Apk with Unlimited Money and Diamonds for Free.md deleted file mode 100644 index 68e8c7f668c8bb674ba038195c53d7e094f7b5af..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Sniper 3D Assassin 3.36 1 Mod Apk with Unlimited Money and Diamonds for Free.md +++ /dev/null @@ -1,77 +0,0 @@ - -

                Sniper 3D Assassin 3.36 1 Mod Apk: A Guide for Shooting Game Fans

                -

If you are a fan of shooting games, you might have heard of Sniper 3D Assassin, a popular free online multiplayer FPS game that lets you fight in a global war against crime and become the ultimate sniper. But did you know that there is a mod apk version of Sniper 3D Assassin that gives you unlimited money, a mod menu, premium features, and more? In this article, we will tell you everything you need to know about Sniper 3D Assassin 3.36 1 mod apk, including its features, benefits, and how to download and install it on your device.

                -

                sniper 3d assassin 3.36 1 mod apk


                DOWNLOADhttps://urllie.com/2uNDda



                -

                What is Sniper 3D Assassin and what is the mod apk version?

                -

                Sniper 3D Assassin is a fun and addictive game that challenges you to complete hundreds of thrilling missions in different worlds and scenarios. You can develop your sniper skills by shooting under rain, wind, fog, and other realistic conditions. You can also customize your weapons with over 150 snipers and rifles to choose from. You can play online or offline, solo or with friends, in PvP or squad wars modes.

                -

The mod apk version of Sniper 3D Assassin is a modified version of the original game that gives you access to unlimited money, a mod menu, premium features, and more. With the mod apk version, you can buy any weapon or upgrade you want without worrying about the cost. You can also unlock all the levels and missions without having to complete them one by one. You can also enjoy some extra features such as aimbot, wallhack, god mode, and more.

                -

                What are the main features of Sniper 3D Assassin and how does the mod apk enhance them?

                -

                Some of the main features of Sniper 3D Assassin are:

                -

                sniper 3d assassin mod apk unlimited money and diamonds
                -sniper 3d assassin mod apk free download latest version
                -sniper 3d assassin mod apk offline and online
                -sniper 3d assassin mod apk real-time multiplayer pvp arena
                -sniper 3d assassin mod apk squad wars and leagues
                -sniper 3d assassin mod apk performance improvements and bug fixes
                -sniper 3d assassin mod apk fun free online multiplayer fps game
                -sniper 3d assassin mod apk fight in a multiplayer war and become the best sniper
                -sniper 3d assassin mod apk play through all the 21 cities in single player mode
                -sniper 3d assassin mod apk realistic and well-optimized 3d graphics
                -sniper 3d assassin mod apk intriguing stationary sniper gameplay
                -sniper 3d assassin mod apk assassinate various high-value targets
                -sniper 3d assassin mod apk unlock and upgrade your precious rifles
                -sniper 3d assassin mod apk unique challenges and events for generous prizes
                -sniper 3d assassin mod apk compete with friends and worldwide players
                -sniper 3d assassin mod apk action-packed multiplayer fps game for android
                -sniper 3d assassin mod apk best sniper shooting game for mobile devices
                -sniper 3d assassin mod apk enjoy the thrill of killing enemies with one shot
                -sniper 3d assassin mod apk use powerful weapons and advanced equipment to operate in different terrains
                -sniper 3d assassin mod apk explore the work and process of becoming a professional sniper
                -sniper 3d assassin mod apk experience the realistic physics and weather effects on your shots
                -sniper 3d assassin mod apk utilize environmental factors to hide corpses and create stylish death patterns
                -sniper 3d assassin mod apk download from google play store or apkmb.com
                -sniper 3d assassin mod apk updated on april 9, 2023 with version 4.16.1
                -sniper 3d assassin mod apk requires android 4.4 and up to run smoothly on any device

                -
                  -
                • Ultra realistic 3D graphics and cool animations
                • -
                • Hundreds of thrilling missions in different worlds
                • -
                • Tons of lethal guns and mortal weapons
                • -
                • Addictive FPS gameplay with easy and intuitive controls
                • -
                • Free game that can be played online or offline
                • -
                -

                The mod apk version of Sniper 3D Assassin enhances these features by giving you:

                -
                  -
                • Unlimited money to buy any weapon or upgrade you want
                • -
                • A mod menu to access all the levels and missions without completing them
                • -
                • Premium features such as no ads, no reload time, unlimited energy, etc.
                • -
                • Extra features such as aimbot, wallhack, god mode, etc.
                • -
                • Better performance and stability
                • -
                -

                What are the benefits of using the mod apk version of Sniper 3D Assassin?

                -

                Some of the benefits of using the mod apk version of Sniper 3D Assassin are:

                -
                  -
                • You can have more fun and excitement by playing with unlimited money, a mod menu, premium features and more
                • -
                • You can save time and effort by unlocking all the levels and missions without having to complete them
                • -
                • You can have an edge over your enemies by using extra features such as aimbot, wallhack, god mode, etc.
                • -
                • You can enjoy a smoother and faster gaming experience with better performance and stability
                • -
                • You can play without any interruptions or distractions from ads or other limitations
                • -
                -

                How to download and install the mod apk version of Sniper 3D Assassin?

                -

                To download and install the mod apk version of Sniper 3D Assassin, you need to follow these steps:

                -
                  -
                1. Uninstall the original game from your device if you have it installed
                2. Download the Sniper 3D Assassin 3.36 1 mod apk file from a trusted source
                3. Enable installation from unknown sources in your device settings and install the mod apk file
                4. Launch the game and enjoy unlimited money, a mod menu, and premium features

                Conclusion

                -

                  Sniper 3D Assassin is a great game for shooting game fans who want to have fun and challenge themselves in a global war against crime. The mod apk version of Sniper 3D Assassin makes the game even more enjoyable by giving you unlimited money, a mod menu, premium features and more. You can download and install the mod apk version of Sniper 3D Assassin easily by following the steps we have provided in this article. So what are you waiting for? Download Sniper 3D Assassin 3.36 1 mod apk now and become the ultimate sniper!

                  -

                  FAQs

                  -

                  Here are some frequently asked questions about Sniper 3D Assassin and the mod apk version:

                  -

                  Is Sniper 3D Assassin free to play?

                  -

                  Yes, Sniper 3D Assassin is free to play and can be downloaded from the Google Play Store or the App Store. However, some items and features in the game may require real money to purchase or unlock.

                  -

                  Is Sniper 3D Assassin mod apk safe to use?

                  -

                  Yes, Sniper 3D Assassin mod apk is safe to use as long as you download it from a trusted source such as [this one]. However, you should always be careful when downloading and installing any mod apk files from unknown sources as they may contain viruses or malware that can harm your device or compromise your privacy.

                  -

                  Does Sniper 3D Assassin mod apk work online?

                  -

                  Yes, Sniper 3D Assassin mod apk works online and offline. You can play online with other players in PvP or squad wars modes, or offline in solo mode. However, you may need an internet connection to access some features such as updates, events, or rewards.

                  -

                  Will I get banned for using Sniper 3D Assassin mod apk?

                  -

                  There is no automatic ban just for installing Sniper 3D Assassin mod apk, but there is also no guarantee that your account is safe. Using any mod apk files may violate the terms and conditions of the game and may result in your account being suspended or terminated by the developers. Therefore, you should use Sniper 3D Assassin mod apk at your own risk and discretion.

                  -

                  Can I update Sniper 3D Assassin mod apk?

                  -

                  Yes, you can update Sniper 3D Assassin mod apk whenever there is a new version available. However, you may need to uninstall the previous version and install the new version manually. You may also need to back up your game data before updating to avoid losing your progress.

                  197e85843d
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Extreme Car Driving Simulator MOD APK How to Get All Cars and Tracks.md b/spaces/fatiXbelha/sd/Extreme Car Driving Simulator MOD APK How to Get All Cars and Tracks.md deleted file mode 100644 index ef7f86cb63cc2d0c587e2f0eec6bfb3acf6f1149..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Extreme Car Driving Simulator MOD APK How to Get All Cars and Tracks.md +++ /dev/null @@ -1,90 +0,0 @@ - -

                  Extreme Car Driving Simulator Hack Apkpure: How to Get Unlimited Coins and Unlock All Cars

                  -

                  If you are a fan of racing games and want to experience an open world car simulator, you might have heard of Extreme Car Driving Simulator. This game allows you to drive, drift, and feel a racing sports car in a detailed environment. You can also customize your car, take on challenges, and play in various game modes. However, if you want to enjoy the game without any limitations or restrictions, you might want to try Extreme Car Driving Simulator Hack Apkpure. This is a modified version of the game that gives you unlimited coins and unlocks all cars in the game. In this article, we will tell you what Extreme Car Driving Simulator Hack Apkpure is, how to download and install it, and some tips and tricks for playing it.

                  -

                  What is Extreme Car Driving Simulator?

                  -

                  Extreme Car Driving Simulator is a racing game that offers an open world experience. You can drive anywhere you want, explore different zones, and enjoy the realistic 3D graphics and physics. You can also collect and customize a variety of cars, from sports cars to off-road vehicles. The game has several game modes, such as free mode, checkpoint mode, traffic mode, and ghost mode. You can also take on challenges, such as drifting, jumping, or crashing, and earn coins and rewards.

                  -

                  extreme car driving simulator hack apkpure


                  DOWNLOAD ····· https://urllie.com/2uNFpE



                  -

                  A racing game that offers an open world experience

                  -

                  One of the main features of Extreme Car Driving Simulator is that it gives you the freedom to drive anywhere you want in a large and detailed map. You can explore the city, the airport, the desert, or the industrial zone. You can also find ramps, loops, bridges, and tunnels to perform stunts and have fun. The game has no rules or limits, so you can drive as fast as you want, crash into other cars or objects, and cause chaos.

                  -

                  Features realistic 3D graphics, physics, and a variety of cars to collect and customize

                  -

                  The game also boasts of realistic 3D graphics and physics that make you feel like you are driving a real car. You can see the damage effects on your car, the smoke from the tires, the reflections on the windows, and the shadows on the ground. You can also hear the engine sound, the horn, the brakes, and the screeching of the tires. The game has a variety of cars to choose from, such as sports cars, muscle cars, SUVs, trucks, and more. You can also customize your car with different colors, wheels, spoilers, vinyls, and stickers.

                  -

                  Allows players to explore different zones, take on challenges, and play in various game modes

                  -

                  The game has different zones to explore, each with its own features and attractions. For example, the city zone has skyscrapers, traffic lights, pedestrians, and police cars. The airport zone has planes, helicopters, runways, and hangars. The desert zone has sand dunes, cacti, and camels. The industrial zone has factories, warehouses, and cranes. The game also has challenges that you can complete to earn coins and rewards. For example, you can drift for a certain distance, jump over a ramp, or crash into a wall. The game also has various game modes that you can play according to your preference. For example, free mode lets you drive freely without any objectives or restrictions. Checkpoint mode lets you race against time and reach checkpoints before the timer runs out. Traffic mode lets you drive in a busy road with other cars and avoid collisions. Ghost mode lets you race against your own ghost car and beat your best time.

                  -

                  What is Apkpure?

                  -

                  Apkpure is a website that provides free and safe APK files for Android devices. APK files are application packages that contain all the files needed to install an app or a game on your device. Apkpure allows you to download and install apps and games that are not available on Google Play Store or are restricted in your region. Apkpure also offers fast downloads, updates, and reviews for thousands of apps and games. Apkpure also has a user-friendly interface and a search function that helps you find the app or game you want.

                  -

                  A website that provides free and safe APK files for Android devices

                  -

                  Apkpure is a website that lets you download APK files for free and without any registration or subscription. APK files are application packages that contain all the files needed to install an app or a game on your device. You can use Apkpure to download apps and games that are not available on Google Play Store or are restricted in your region. For example, you can download Extreme Car Driving Simulator Hack Apkpure from Apkpure website, even if it is not on Google Play Store.
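                  To make the idea of an "application package" more concrete, the short sketch below opens an APK with Python's standard zipfile module and lists what is inside: an APK is simply a ZIP archive holding the app's manifest, compiled code, and resources. The file name used here is a hypothetical local path, not an official download.

                    import zipfile

                    # Hypothetical local path to an APK you have already downloaded.
                    apk_path = "extreme_car_driving_simulator.apk"

                    # An APK is a ZIP archive, so the standard library can inspect it directly.
                    with zipfile.ZipFile(apk_path) as apk:
                        for name in apk.namelist()[:20]:
                            print(name)  # e.g. AndroidManifest.xml, classes.dex, resources.arsc, res/...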

                  -

                  Allows users to download and install apps and games that are not available on Google Play Store

                  -

                  One of the advantages of using Apkpure is that it allows you to access apps and games that are not on Google Play Store or are blocked in your region. For example, some apps and games may be banned or removed from Google Play Store due to legal issues, content violations, or geo-restrictions. Apkpure lets you bypass these limitations and download the apps and games you want. For example, you can download Extreme Car Driving Simulator Hack Apkpure from Apkpure website, even if it is banned or removed from Google Play Store.

                  -

                  Offers fast downloads, updates, and reviews for thousands of apps and games

                  -

                  Another advantage of using Apkpure is that it offers fast downloads, updates, and reviews for thousands of apps and games. Apkpure has a large database of apps and games that are updated regularly. You can also see the ratings, comments, and screenshots of the apps and games before downloading them. Apkpure also has a fast download speed and a resume function that lets you pause and resume your downloads. You can also update your apps and games easily with Apkpure.

                  -

                  What is Extreme Car Driving Simulator Hack Apkpure?

                  -

                  Extreme Car Driving Simulator Hack Apkpure is a modified version of Extreme Car Driving Simulator that gives you unlimited coins and unlocks all cars in the game. This means that you can enjoy the game without any limitations or restrictions. You can buy any car you want, customize it as you wish, and drive it anywhere you want. You can also complete all the challenges and play all the game modes without any difficulty.

                  -

                  extreme car driving simulator mod apk unlimited money
                  -extreme car driving simulator cheats android download
                  -extreme car driving simulator hack version free
                  -extreme car driving simulator unlimited coins and gems
                  -extreme car driving simulator mod menu apk
                  -extreme car driving simulator online hack tool
                  -extreme car driving simulator apk pure latest version
                  -extreme car driving simulator hack ios no jailbreak
                  -extreme car driving simulator cheat codes 2023
                  -extreme car driving simulator mod apk all cars unlocked
                  -extreme car driving simulator hack apk download for android
                  -extreme car driving simulator unlimited nitro mod
                  -extreme car driving simulator hack without human verification
                  -extreme car driving simulator mod apk revdl
                  -extreme car driving simulator hack apk 2023
                  -extreme car driving simulator cheat engine pc
                  -extreme car driving simulator mod apk rexdl
                  -extreme car driving simulator hack game guardian
                  -extreme car driving simulator mod apk happymod
                  -extreme car driving simulator hack apk no root
                  -extreme car driving simulator cheat codes android
                  -extreme car driving simulator mod apk offline
                  -extreme car driving simulator hack lucky patcher
                  -extreme car driving simulator mod apk an1
                  -extreme car driving simulator hack online generator
                  -extreme car driving simulator cheat codes ios
                  -extreme car driving simulator mod apk obb
                  -extreme car driving simulator hack appvn
                  -extreme car driving simulator mod apk android 1
                  -extreme car driving simulator hack no survey no password

                  -

                  A modified version of Extreme Car Driving Simulator that gives users unlimited coins and unlocks all cars

                  -

                  The main difference between Extreme Car Driving Simulator Hack Apkpure and the original game is that the hack version gives you unlimited coins and unlocks all cars in the game. Coins are the currency of the game that you can use to buy new cars or upgrade your existing ones. Cars are the vehicles that you can drive in the game, each with its own features and performance. The original game requires you to earn coins by completing challenges or watching ads, and to unlock cars by reaching certain levels or paying real money. The hack version gives you unlimited coins from the start, and unlocks all cars for free.

                  -

                  Enables users to enjoy the game without any limitations or restrictions

                  -

                  The benefit of using Extreme Car Driving Simulator Hack Apkpure is that it enables you to enjoy the game without any limitations or restrictions. You can buy any car you want, customize it as you wish, and drive it anywhere you want. You can also complete all the challenges and play all the game modes without any difficulty. You don't have to worry about running out of coins, waiting for ads, or spending real money.

                  -

                  Requires users to download and install the APK file from Apkpure website

                  -

                  The only requirement for using Extreme Car Driving Simulator Hack Apkpure is that you have to download and install the APK file from Apkpure website. APK files are application packages that contain all the files needed to install an app or a game on your device. You cannot find Extreme Car Driving Simulator Hack Apkpure on Google Play Store or any other app store, because it is a modified version of the game. You have to download it from Apkpure website, which provides free and safe APK files for Android devices.

                  How to Download and Install Extreme Car Driving Simulator Hack Apkpure?

                  -

                  If you want to use Extreme Car Driving Simulator Hack Apkpure, you have to download and install the APK file from Apkpure website. APK files are application packages that contain all the files needed to install an app or a game on your device. The process is simple and easy, and you can follow these steps:

                  -

                  Step 1: Go to [Apkpure website] and search for Extreme Car Driving Simulator

                  -

                  The first step is to go to Apkpure website, which is a website that provides free and safe APK files for Android devices. You can use any browser on your device to access the website. Once you are on the website, you can use the search function to look for Extreme Car Driving Simulator. You will see a list of results, and you have to choose the one that says Extreme Car Driving Simulator Hack Apkpure.

                  -

                  Step 2: Click on the download button and wait for the APK file to be downloaded

                  -

                  The next step is to click on the download button, which is usually a green or blue button with a downward arrow. You will see a pop-up window that asks you to confirm your download. You have to click on OK or Yes to proceed. You will then see a progress bar that shows you how much of the APK file has been downloaded. You have to wait until the download is complete, which may take a few minutes depending on your internet speed.

                  -

                  Step 3: Enable unknown sources on your device settings and install the APK file

                  -

                  The third step is to enable unknown sources on your device settings, which is a security feature that prevents you from installing apps or games from sources other than Google Play Store. You have to do this because Extreme Car Driving Simulator Hack Apkpure is not available on Google Play Store, and you have to install it from Apkpure website. To enable unknown sources, you have to go to your device settings, then security or privacy, then unknown sources or install unknown apps. You have to toggle the switch or check the box that allows you to install apps or games from unknown sources. You may see a warning message that tells you about the risks of installing apps or games from unknown sources. You have to click on OK or Yes to continue. After enabling unknown sources, you have to go to your device file manager or downloads folder, and find the APK file that you downloaded from Apkpure website. You have to tap on the APK file, and then click on install or open. You may see another pop-up window that asks you to confirm your installation. You have to click on OK or Yes to proceed.
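                  If you prefer to install from a computer instead of searching through the file manager, a rough alternative is to sideload the APK with adb. The sketch below is only an illustration: it assumes adb is installed on the computer, USB debugging is enabled on the phone, and the APK path is a hypothetical placeholder for wherever you saved the file.

                    import subprocess

                    # Hypothetical path to the downloaded APK; change it to your own download location.
                    apk_path = "extreme_car_driving_simulator.apk"

                    # "adb install -r" pushes the package to the connected device and (re)installs it.
                    result = subprocess.run(["adb", "install", "-r", apk_path], capture_output=True, text=True)
                    print(result.stdout or result.stderr)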

                  -

                  Step 4: Launch the game and enjoy unlimited coins and unlocked cars

                  -

                  The final step is to launch the game and enjoy unlimited coins and unlocked cars. You can find the game icon on your device home screen or app drawer, and tap on it to open it. You will see the game loading screen, and then the main menu. You can choose any game mode, car, or challenge that you want, and start playing. You will notice that you have unlimited coins and unlocked cars in the game, which means that you can buy any car you want, customize it as you wish, and drive it anywhere you want.

                  Tips and Tricks for Playing Extreme Car Driving Simulator Hack Apkpure

                  -

                  If you want to have more fun and excitement while playing Extreme Car Driving Simulator Hack Apkpure, you can try some of these tips and tricks that will help you improve your skills and performance in the game.

                  -

                  Use drift mode to increase your speed and perform illegal stunt actions

                  -

                  One of the tips for playing Extreme Car Driving Simulator Hack Apkpure is to use drift mode, which is a feature that lets you slide your car sideways and control it with the steering wheel. You can activate drift mode by tapping on the drift button on the right side of the screen. Drift mode can help you increase your speed and perform illegal stunt actions, such as drifting around corners, spinning, or doing donuts. Drift mode can also help you avoid obstacles, escape from police, or create smoke effects.

                  -

                  Fly with a plane or use plane tyres to explore the map and find collectibles

                  -

                  Another tip for playing Extreme Car Driving Simulator Hack Apkpure is to fly with a plane or use plane tyres to explore the map and find collectibles. You can find a plane in the airport zone, and you can use it to fly over the map and see the different zones and attractions. You can also use plane tyres, which are special tyres that let you glide in the air and land safely. You can find plane tyres in the desert zone, and you can use them to jump over ramps, bridges, or buildings. Flying with a plane or using plane tyres can help you explore the map and find collectibles, such as coins, stars, or trophies.

                  -

                  Go around the loop or the hole to gain momentum and boost your speed

                  -

                  A third tip for playing Extreme Car Driving Simulator Hack Apkpure is to go around the loop or the hole to gain momentum and boost your speed. You can find a loop in the city zone, and a hole in the industrial zone. These are structures that let you go around them in a circular motion and accelerate your car. Going around the loop or the hole can help you gain momentum and boost your speed, which can help you complete challenges, reach checkpoints, or outrun other cars.

                  -

                  Use speed mod or auto clicker to run full speed without braking

                  -

                  A fourth tip for playing Extreme Car Driving Simulator Hack Apkpure is to use speed mod or auto clicker to run full speed without braking. Speed mod is a feature that lets you increase your car's maximum speed and acceleration. You can activate speed mod by tapping on the speed mod button on the left side of the screen. Auto clicker is a tool that lets you automate your taps on the screen. You can use auto clicker to tap on the gas pedal continuously without lifting your finger. Using speed mod or auto clicker can help you run full speed without braking, which can help you finish races faster, avoid collisions, or create shockwaves.
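                  To give a rough idea of what an auto clicker does under the hood, the sketch below repeatedly sends taps to the phone through adb. The gas pedal coordinates are hypothetical and will differ by device and screen resolution, so treat this as an illustration of the idea rather than a ready-made tool.

                    import subprocess
                    import time

                    # Hypothetical on-screen coordinates of the gas pedal; they vary per device and resolution.
                    GAS_X, GAS_Y = 980, 620

                    # Send repeated taps over adb, which is essentially what an auto clicker automates on-device.
                    for _ in range(50):
                        subprocess.run(["adb", "shell", "input", "tap", str(GAS_X), str(GAS_Y)])
                        time.sleep(0.1)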

                  Conclusion

                  -

                  In conclusion, Extreme Car Driving Simulator Hack Apkpure is a fun and exciting way to enjoy the game without spending money or time. You can download and install the APK file from Apkpure website easily and safely, and get unlimited coins and unlock all cars in the game. You can also customize your car, explore different zones, take on challenges, and play in various game modes. You can also use some tips and tricks to improve your skills and performance in the game. However, you should also be aware that using Extreme Car Driving Simulator Hack Apkpure is not legal and may cause some issues with the original game developer or your device. You should use it at your own risk and discretion.

                  -

                  FAQs

                  -

                  Is Extreme Car Driving Simulator Hack Apkpure safe to use?

                  -

                  Yes, it is safe to use as long as you download it from a trusted source like Apkpure website. Apkpure provides free and safe APK files for Android devices, and scans them for viruses and malware. However, you should also be careful about the permissions that the app or game asks for, and only grant them if you trust the app or game.

                  -

                  Is Extreme Car Driving Simulator Hack Apkpure legal to use?

                  -

                  No, it is not legal to use as it violates the terms and conditions of the original game developer. Using Extreme Car Driving Simulator Hack Apkpure is considered as cheating or hacking, and may result in legal actions or penalties from the original game developer. You may also lose your progress or account in the original game if you use Extreme Car Driving Simulator Hack Apkpure.

                  -

                  Will Extreme Car Driving Simulator Hack Apkpure work on any device?

                  -

                  Yes, it will work on any Android device that supports the original game. However, you should also check the compatibility and requirements of the app or game before downloading and installing it. Some apps or games may not work properly on some devices due to different specifications or features.

                  -

                  Can I play Extreme Car Driving Simulator Hack Apkpure online with other players?

                  -

                  No, you cannot play it online with other players as it is a modified version of the game. Playing online with other players requires a connection to the original game server, which will not recognize or accept Extreme Car Driving Simulator Hack Apkpure. You can only play it offline or with your own ghost car.

                  -

                  Can I update Extreme Car Driving Simulator Hack Apkpure when there is a new version of the game?

                  -

                  No, you cannot update it when there is a new version of the game. Updating the app or game requires a connection to the original game server, which will not recognize or accept Extreme Car Driving Simulator Hack Apkpure. You will have to download and install the new APK file from Apkpure website when there is a new version of the game.

                  401be4b1e0
                  -
                  -
                  \ No newline at end of file diff --git a/spaces/fb700/chat3/crazy_functions/test_project/latex/attention/background.tex b/spaces/fb700/chat3/crazy_functions/test_project/latex/attention/background.tex deleted file mode 100644 index 785069dc0f9143bad24e640056dd1072d5c6e5b5..0000000000000000000000000000000000000000 --- a/spaces/fb700/chat3/crazy_functions/test_project/latex/attention/background.tex +++ /dev/null @@ -1,58 +0,0 @@ -The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}. - -Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}. - -End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}. - -To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. -In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}. - - -%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs. - -%For example,! in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation. - -%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. 
In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture. - -%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent endoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost. - -%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length. - -%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs. - -%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. 
In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture. - - - -%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmnic number of layers (does bytenet have SOTA results)? - -%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence. - -%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model. - -%\begin{table}[h!] -%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.} -%\label{tab:op_complexities} -%\begin{center} -%\vspace{-5pt} -%\scalebox{0.75}{ - -%\begin{tabular}{l|c|c|c} -%\hline \hline -%Layer Type & Receptive & Complexity & Sequential \\ -% & Field & & Operations \\ -%\hline -%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\ -%\hline -%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\ -%\hline -%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\ -%\hline -%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n %\cdot d^2)$ & $O(1)$ \\ -%\hline -%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\ -%\hline \hline -%\end{tabular} -%} -%\end{center} -%\end{table} \ No newline at end of file diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/losses/contextual_loss/modules/contextual.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/losses/contextual_loss/modules/contextual.py deleted file mode 100644 index c288dc7c6cdd3f2fab6cd7c52f0ce88df3b7461d..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/losses/contextual_loss/modules/contextual.py +++ /dev/null @@ -1,121 +0,0 @@ -import random -from typing import ( - Iterable, - List, - Optional, -) - -import numpy as np -import torch -import torch.nn as nn - -from .vgg import VGG19 -from .. import functional as F -from ..config import LOSS_TYPES - - -class ContextualLoss(nn.Module): - """ - Creates a criterion that measures the contextual loss. - - Parameters - --- - band_width : int, optional - a band_width parameter described as :math:`h` in the paper. - use_vgg : bool, optional - if you want to use VGG feature, set this `True`. - vgg_layer : str, optional - intermidiate layer name for VGG feature. - Now we support layer names: - `['relu1_2', 'relu2_2', 'relu3_4', 'relu4_4', 'relu5_4']` - """ - - def __init__( - self, - band_width: float = 0.5, - loss_type: str = 'cosine', - use_vgg: bool = False, - vgg_layers: List[str] = ['relu3_4'], - feature_1d_size: int = 64, - ): - - super().__init__() - - assert band_width > 0, 'band_width parameter must be positive.' - assert loss_type in LOSS_TYPES,\ - f'select a loss type from {LOSS_TYPES}.' 
- - self.loss_type = loss_type - self.band_width = band_width - self.feature_1d_size = feature_1d_size - - if use_vgg: - self.vgg_model = VGG19() - self.vgg_layers = vgg_layers - self.register_buffer( - name='vgg_mean', - tensor=torch.tensor( - [[[0.485]], [[0.456]], [[0.406]]], requires_grad=False) - ) - self.register_buffer( - name='vgg_std', - tensor=torch.tensor( - [[[0.229]], [[0.224]], [[0.225]]], requires_grad=False) - ) - - def forward(self, x: torch.Tensor, y: torch.Tensor, all_dist: bool = False): - if not hasattr(self, 'vgg_model'): - return self.contextual_loss(x, y, self.feature_1d_size, self.band_width, all_dist=all_dist) - - - x = self.forward_vgg(x) - y = self.forward_vgg(y) - - loss = 0 - for layer in self.vgg_layers: - # picking up vgg feature maps - fx = getattr(x, layer) - fy = getattr(y, layer) - loss = loss + self.contextual_loss( - fx, fy, self.feature_1d_size, self.band_width, all_dist=all_dist, loss_type=self.loss_type - ) - return loss - - def forward_vgg(self, x: torch.Tensor): - assert x.shape[1] == 3, 'VGG model takes 3 chennel images.' - # [-1, 1] -> [0, 1] - x = (x + 1) * 0.5 - - # normalization - x = x.sub(self.vgg_mean.detach()).div(self.vgg_std) - return self.vgg_model(x) - - @classmethod - def contextual_loss( - cls, - x: torch.Tensor, y: torch.Tensor, - feature_1d_size: int, - band_width: int, - all_dist: bool = False, - loss_type: str = 'cosine', - ) -> torch.Tensor: - feature_size = feature_1d_size ** 2 - if np.prod(x.shape[2:]) > feature_size or np.prod(y.shape[2:]) > feature_size: - x, indices = cls.random_sampling(x, feature_1d_size=feature_1d_size) - y, _ = cls.random_sampling(y, feature_1d_size=feature_1d_size, indices=indices) - - return F.contextual_loss(x, y, band_width, all_dist=all_dist, loss_type=loss_type) - - @staticmethod - def random_sampling( - tensor_NCHW: torch.Tensor, feature_1d_size: int, indices: Optional[List] = None - ): - N, C, H, W = tensor_NCHW.shape - S = H * W - tensor_NCS = tensor_NCHW.reshape([N, C, S]) - if indices is None: - all_indices = list(range(S)) - random.shuffle(all_indices) - indices = all_indices[:feature_1d_size**2] - res = tensor_NCS[:, :, indices].reshape(N, -1, feature_1d_size, feature_1d_size) - return res, indices diff --git a/spaces/fengmuxi/ChatGpt-Web/app/api/user/set/route.ts b/spaces/fengmuxi/ChatGpt-Web/app/api/user/set/route.ts deleted file mode 100644 index 2b8a91396222d822548dd636a6ea4b7baef4f784..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/app/api/user/set/route.ts +++ /dev/null @@ -1,41 +0,0 @@ -import { NextRequest, NextResponse } from "next/server"; -import { auth, getIP } from "../../auth"; - -export async function POST(req: NextRequest) { - try { - const authResult = auth(req); - if (authResult.error) { - return NextResponse.json(authResult, { - status: 401, - }); - } - const token=req.headers.get("auth") ?? "" - const name=req.nextUrl.searchParams.get("name") - let body={ - nickName:name - } - let res=await fetch("https://eladmin.dwzynj.top/api/users/myCenter", { - method: "PUT", - headers:{ - "Content-Type":'application/json;charset=utf-8', - "Authorization":token, - "UserIp": String(getIP(req)) - }, - body:JSON.stringify(body) - }) - if(res.status==401){ - let msg={ - flag:false, - msg:"未登录!" 
- } - // console.log(res.status) - return new Response(JSON.stringify(msg)) - } - let msg=await res.json() - // console.log(msg) - return new Response(JSON.stringify(msg)) - } catch (e) { - console.error("[eladmin] ", e); - return new Response(JSON.stringify(e)); - } -} diff --git a/spaces/fffiloni/DA-CLIP/README.md b/spaces/fffiloni/DA-CLIP/README.md deleted file mode 100644 index 8caa25c0062d73e73afc068c3d1844fe5812218b..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/DA-CLIP/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: DA CLIP -emoji: 🏃 -colorFrom: blue -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fffiloni/Image-to-MusicGen/tests/modules/test_rope.py b/spaces/fffiloni/Image-to-MusicGen/tests/modules/test_rope.py deleted file mode 100644 index b9a54aec8b38a257ba28053afccf305a60691bfc..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/tests/modules/test_rope.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.modules.rope import RotaryEmbedding -from audiocraft.modules.transformer import StreamingTransformer - - -def test_rope(): - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_rope_io_dtypes(): - B, T, H, C = 8, 75, 16, 128 - - rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32) - rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64) - - # Test bfloat16 inputs w/ both 32 and 64 precision rope. - xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - - # Test float32 inputs w/ both 32 and 64 precision rope. 
- xq_32 = torch.rand((B, T, H, C)).to(torch.float32) - xk_32 = torch.rand((B, T, H, C)).to(torch.float32) - xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - - -def test_transformer_with_rope(): - torch.manual_seed(1234) - for pos in ['rope', 'sin_rope']: - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding=pos) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - out = tr(x) - assert list(out.shape) == list(x.shape) - - -@torch.no_grad() -def test_rope_streaming(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, causal=True, dropout=0., - custom=True, positional_embedding='rope') - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -@torch.no_grad() -def test_rope_streaming_past_context(): - torch.manual_seed(1234) - - for context in [None, 10]: - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=True, - dropout=0., positional_embedding='rope') - tr.eval() - - steps = 20 - x = torch.randn(3, steps, 16) - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_rope_memory_efficient(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - # Check at float precision b/c this is the rope default. - assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm() - - -def test_rope_with_xpos(): - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_positional_scale(): - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert torch.allclose(xq, xq_out) - assert torch.allclose(xk, xk_out) diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/models/lm.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/models/lm.py deleted file mode 100644 index 43f82b42340dd9e721a3a76fa58e27f70fe2b4e5..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/models/lm.py +++ /dev/null @@ -1,526 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from functools import partial -import logging -import math -import typing as tp - -import torch -from torch import nn - -from ..utils import utils -from ..modules.streaming import StreamingModule, State -from ..modules.transformer import StreamingTransformer, create_norm_fn -from ..modules.conditioners import ( - ConditionFuser, - ClassifierFreeGuidanceDropout, - AttributeDropout, - ConditioningProvider, - ConditioningAttributes, - ConditionType, -) -from ..modules.codebooks_patterns import CodebooksPatternProvider -from ..modules.activations import get_activation_fn - - -logger = logging.getLogger(__name__) -ConditionTensors = tp.Dict[str, ConditionType] -CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]] - - -def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None): - """LM layer initialization. - Inspired from xlformers: https://github.com/fairinternal/xlformers - - Args: - method (str): Method name for init function. Valid options are: - 'gaussian', 'uniform'. - input_dim (int): Input dimension of the initialized module. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - """ - # Compute std - std = 1 / math.sqrt(input_dim) - # Rescale with depth - if init_depth is not None: - std = std / math.sqrt(2 * init_depth) - - if method == 'gaussian': - return partial( - torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std - ) - elif method == 'uniform': - bound = math.sqrt(3) * std # ensure the standard deviation is `std` - return partial(torch.nn.init.uniform_, a=-bound, b=bound) - else: - raise ValueError("Unsupported layer initialization method") - - -def init_layer(m: nn.Module, - method: str, - init_depth: tp.Optional[int] = None, - zero_bias_init: bool = False): - """Wrapper around ``get_init_fn`` for proper initialization of LM modules. - - Args: - m (nn.Module): Module to initialize. - method (str): Method name for the init function. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - zero_bias_init (bool): Whether to initialize the bias to 0 or not. - """ - if isinstance(m, nn.Linear): - init_fn = get_init_fn(method, m.in_features, init_depth=init_depth) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - if zero_bias_init and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Embedding): - init_fn = get_init_fn(method, m.embedding_dim, init_depth=None) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - - -class ScaledEmbedding(nn.Embedding): - """Boost learning rate for embeddings (with `scale`). - """ - def __init__(self, *args, lr=None, **kwargs): - super().__init__(*args, **kwargs) - self.lr = lr - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - return group - - -@dataclass -class LMOutput: - # The logits are already re-aligned with the input codes - # hence no extra shift is required, e.g. 
when computing CE - logits: torch.Tensor # [B, K, T, card] - mask: torch.Tensor # [B, K, T] - - -class LMModel(StreamingModule): - """Transformer-based language model on multiple streams of codes. - - Args: - pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving. - condition_provider (MusicConditioningProvider): Conditioning provider from metadata. - fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input. - n_q (int): Number of parallel streams to model. - card (int): Cardinality, vocabulary size. - dim (int): Dimension of the transformer encoder. - num_heads (int): Number of heads for the transformer encoder. - hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder. - norm (str): Normalization method. - norm_first (bool): Use pre-norm instead of post-norm. - emb_lr (Optional[float]): Embedding-specific learning rate. - bias_proj (bool): Use bias for output projections. - weight_init (Optional[str]): Method for weight initialization. - depthwise_init (Optional[str]): Method for depthwise weight initialization. - zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros. - cfg_dropout (float): Classifier-free guidance dropout. - cfg_coef (float): Classifier-free guidance coefficient. - attribute_dropout (dict): Attribute dropout probabilities. - two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps. - **kwargs: Additional parameters for the transformer encoder. - """ - def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider, - fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8, - hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False, - emb_lr: tp.Optional[float] = None, bias_proj: bool = True, - weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None, - zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0, - attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False, - **kwargs): - super().__init__() - self.cfg_coef = cfg_coef - self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout) - self.att_dropout = AttributeDropout(p=attribute_dropout) - self.condition_provider = condition_provider - self.fuser = fuser - self.card = card - embed_dim = self.card + 1 - self.n_q = n_q - self.dim = dim - self.pattern_provider = pattern_provider - self.two_step_cfg = two_step_cfg - self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)]) - if 'activation' in kwargs: - kwargs['activation'] = get_activation_fn(kwargs['activation']) - self.transformer = StreamingTransformer( - d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim), - norm=norm, norm_first=norm_first, **kwargs) - self.out_norm: tp.Optional[nn.Module] = None - if norm_first: - self.out_norm = create_norm_fn(norm, dim) - self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)]) - self._init_weights(weight_init, depthwise_init, zero_bias_init) - self._fsdp: tp.Optional[nn.Module] - self.__dict__['_fsdp'] = None - - def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool): - """Initialization of the transformer module weights. - - Args: - weight_init (Optional[str]): Weight initialization strategy. See ``get_init_fn`` for valid options. 
- depthwise_init (Optional[str]): Depwthwise initialization strategy. The following options are valid: - 'current' where the depth corresponds to the current layer index or 'global' where the total number - of layer is used as depth. If not set, no depthwise initialization strategy is used. - zero_bias_init (bool): Whether to initalize bias to zero or not. - """ - assert depthwise_init is None or depthwise_init in ['current', 'global'] - assert depthwise_init is None or weight_init is not None, \ - "If 'depthwise_init' is defined, a 'weight_init' method should be provided." - assert not zero_bias_init or weight_init is not None, \ - "If 'zero_bias_init', a 'weight_init' method should be provided" - - if weight_init is None: - return - - for emb_layer in self.emb: - init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - for layer_idx, tr_layer in enumerate(self.transformer.layers): - depth = None - if depthwise_init == 'current': - depth = layer_idx + 1 - elif depthwise_init == 'global': - depth = len(self.transformer.layers) - init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init) - tr_layer.apply(init_fn) - - for linear in self.linears: - init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - @property - def special_token_id(self) -> int: - return self.card - - @property - def num_codebooks(self) -> int: - return self.n_q - - def forward(self, sequence: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor: - """Apply language model on sequence and conditions. - Given a tensor of sequence of shape [B, K, S] with K the number of codebooks and - S the sequence steps, return the logits with shape [B, card, K, S]. - - Args: - indices (torch.Tensor): indices of the codes to model. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - torch.Tensor: Logits. - """ - B, K, S = sequence.shape - assert K == self.num_codebooks, 'Sequence shape must match the specified number of codebooks' - input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)]) - if condition_tensors is None: - assert not self._is_streaming, "Conditions tensors should be precomputed when streaming." - # apply dropout modules - conditions = self.cfg_dropout(conditions) - conditions = self.att_dropout(conditions) - tokenized = self.condition_provider.tokenize(conditions) - # encode conditions and fuse, both have a streaming cache to not recompute when generating. - condition_tensors = self.condition_provider(tokenized) - else: - assert not conditions, "Shouldn't pass both conditions and condition_tensors." 
- - input_, cross_attention_input = self.fuser(input_, condition_tensors) - - out = self.transformer(input_, cross_attention_src=cross_attention_input) - if self.out_norm: - out = self.out_norm(out) - logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card] - - # remove the prefix from the model outputs - if len(self.fuser.fuse2cond['prepend']) > 0: - logits = logits[:, :, -S:] - - return logits # [B, K, S, card] - - def compute_predictions( - self, codes: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput: - """Given an input tensor of codes [B, K, T] and list of conditions, runs the model - forward using the specified codes interleaving pattern. - - Args: - codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size, - K the number of codebooks and T the number of timesteps. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - LMOutput: Language model outputs - logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes, - i.e. the first item corresponds to logits to predict the first code, meaning that - no additional shifting of codes and logits is required. - mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions. - Given the specified interleaving strategies, parts of the logits and codes should - not be considered as valid predictions because of invalid context. - """ - B, K, T = codes.shape - codes = codes.contiguous() - # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens - pattern = self.pattern_provider.get_pattern(T) - sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence( - codes, self.special_token_id, keep_only_valid_steps=True - ) - # apply model on pattern sequence - model = self if self._fsdp is None else self._fsdp - logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card] - # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card] - # and provide the corresponding mask over invalid positions of tokens - logits = logits.permute(0, 3, 1, 2) # [B, card, K, S] - # note: we use nans as special token to make it obvious if we feed unexpected logits - logits, logits_indexes, logits_mask = pattern.revert_pattern_logits( - logits, float('nan'), keep_only_valid_steps=True - ) - logits = logits.permute(0, 2, 3, 1) # [B, K, T, card] - logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T] - return LMOutput(logits, logits_mask) - - def _sample_next_token(self, - sequence: torch.Tensor, - cfg_conditions: CFGConditions, - unconditional_state: State, - use_sampling: bool = False, - temp: float = 1.0, - top_k: int = 0, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None) -> torch.Tensor: - """Sample next token from the model given a sequence and a set of conditions. The model supports - multiple sampling strategies (greedy sampling, softmax, top-k, top-p...). - - Args: - sequence (torch.Tensor): Current sequence of shape [B, K, S] - with K corresponding to the number of codebooks and S the number of sequence steps. 
- S = 1 in streaming mode, except for the first step that contains a bigger prompt. - condition_tensors (Dict[str, ConditionType): Set of conditions. If CFG is used, - should be twice the batch size, being the concatenation of the conditions + null conditions. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coef (float): classifier free guidance coefficient - Returns: - next_token (torch.Tensor): Next token tensor of shape [B, K, 1]. - """ - B = sequence.shape[0] - cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef - model = self if self._fsdp is None else self._fsdp - if self.two_step_cfg and cfg_conditions != {}: - assert isinstance(cfg_conditions, tuple) - condition_tensors, null_condition_tensors = cfg_conditions - cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors) - state = self.get_streaming_state() - self.set_streaming_state(unconditional_state) - uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors) - unconditional_state.update(self.get_streaming_state()) - self.set_streaming_state(state) - logits = uncond_logits + (cond_logits - uncond_logits) * self.cfg_coef - else: - assert isinstance(cfg_conditions, dict) - condition_tensors = cfg_conditions - if condition_tensors: - # Preparing for CFG, predicting both conditional and unconditional logits. - sequence = torch.cat([sequence, sequence], dim=0) - all_logits = model( - sequence, - conditions=[], condition_tensors=condition_tensors) - if condition_tensors: - cond_logits, uncond_logits = all_logits.split(B, dim=0) # [B, K, T, card] - logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef - else: - logits = all_logits - - logits = logits.permute(0, 1, 3, 2) # [B, K, card, T] - logits = logits[..., -1] # [B x K x card] - - if use_sampling: - probs = torch.softmax(logits / temp, dim=-1) - if top_p > 0.0: - next_token = utils.sample_top_p(probs, p=top_p) - elif top_k > 0: - next_token = utils.sample_top_k(probs, k=top_k) - else: - next_token = utils.multinomial(probs, num_samples=1) - else: - next_token = torch.argmax(logits, dim=-1, keepdim=True) - - return next_token - - @torch.no_grad() - def generate(self, - prompt: tp.Optional[torch.Tensor] = None, - conditions: tp.List[ConditioningAttributes] = [], - num_samples: tp.Optional[int] = None, - max_gen_len: int = 256, - use_sampling: bool = True, - temp: float = 1.0, - top_k: int = 250, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None, - two_step_cfg: bool = False, - remove_prompts: bool = False, - check: bool = False, - callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor: - """Generate tokens sampling from the model given a prompt or unconditionally. Generation can - be perform in a greedy fashion or using sampling with top K and top P strategies. - - Args: - prompt (Optional[torch.Tensor]): Prompt tokens of shape [B, K, T]. - conditions_tensors (Dict[str, torch.Tensor]): Set of conditions or None. - num_samples (int or None): Number of samples to generate when no prompt and no conditions are given. - max_gen_len (int): Maximum generation length. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - remove_prompts (bool): Whether to remove prompts from generation or not. 
- Returns: - torch.Tensor: Generated tokens. - """ - assert not self.training, "generation shouldn't be used in training mode." - first_param = next(iter(self.parameters())) - device = first_param.device - - # Checking all input shapes are consistents. - possible_num_samples = [] - if num_samples is not None: - possible_num_samples.append(num_samples) - elif prompt is not None: - possible_num_samples.append(prompt.shape[0]) - elif conditions: - possible_num_samples.append(len(conditions)) - else: - possible_num_samples.append(1) - assert [x == possible_num_samples[0] for x in possible_num_samples], "Inconsitent inputs shapes" - num_samples = possible_num_samples[0] - - # below we create set of conditions: one conditional and one unconditional - # to do that we merge the regular condition together with the null condition - # we then do 1 forward pass instead of 2. - # the reason for that is two-fold: - # 1. it is about x2 faster than doing 2 forward passes - # 2. avoid the streaming API treating the 2 passes as part of different time steps - # We also support doing two different passes, in particular to ensure that - # the padding structure is exactly the same between train anf test. - # With a batch size of 1, this can be slower though. - cfg_conditions: CFGConditions - two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg - if conditions: - null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions) - if two_step_cfg: - cfg_conditions = ( - self.condition_provider(self.condition_provider.tokenize(conditions)), - self.condition_provider(self.condition_provider.tokenize(null_conditions)), - ) - else: - conditions = conditions + null_conditions - tokenized = self.condition_provider.tokenize(conditions) - cfg_conditions = self.condition_provider(tokenized) - else: - cfg_conditions = {} - - if prompt is None: - assert num_samples > 0 - prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device) - - B, K, T = prompt.shape - start_offset = T - assert start_offset < max_gen_len - - pattern = self.pattern_provider.get_pattern(max_gen_len) - # this token is used as default value for codes that are not generated yet - unknown_token = -1 - - # we generate codes up to the max_gen_len that will be mapped to the pattern sequence - gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device) - # filling the gen_codes with the prompt if needed - gen_codes[..., :start_offset] = prompt - # create the gen_sequence with proper interleaving from the pattern: [B, K, S] - gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id) - # retrieve the start_offset in the sequence: - # it is the first sequence step that contains the `start_offset` timestep - start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset) - assert start_offset_sequence is not None - - with self.streaming(): - unconditional_state = self.get_streaming_state() - prev_offset = 0 - gen_sequence_len = gen_sequence.shape[-1] # gen_sequence shape is [B, K, S] - for offset in range(start_offset_sequence, gen_sequence_len): - # get current sequence (note that the streaming API is providing the caching over previous offsets) - curr_sequence = gen_sequence[..., prev_offset:offset] - curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1) - if check: - # check coherence between mask and sequence - assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all() - # should 
never happen as gen_sequence is filled progressively - assert not (curr_sequence == unknown_token).any() - # sample next token from the model, next token shape is [B, K, 1] - next_token = self._sample_next_token( - curr_sequence, cfg_conditions, unconditional_state, use_sampling, temp, top_k, top_p, - cfg_coef=cfg_coef) - # ensure the tokens that should be masked are properly set to special_token_id - # as the model never output special_token_id - valid_mask = mask[..., offset:offset+1].expand(B, -1, -1) - next_token[~valid_mask] = self.special_token_id - # ensure we don't overwrite prompt tokens, we only write over unknown tokens - # (then mask tokens should be left as is as well, which is correct) - gen_sequence[..., offset:offset+1] = torch.where( - gen_sequence[..., offset:offset+1] == unknown_token, - next_token, gen_sequence[..., offset:offset+1] - ) - prev_offset = offset - if callback is not None: - callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence) - unconditional_state.clear() - - # ensure sequence has been entirely filled - assert not (gen_sequence == unknown_token).any() - # ensure gen_sequence pattern and mask are matching - # which means the gen_sequence is valid according to the pattern - assert ( - gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id) - ).all() - # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps - out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token) - - # sanity checks over the returned codes and corresponding masks - assert (out_codes[..., :max_gen_len] != unknown_token).all() - assert (out_mask[..., :max_gen_len] == 1).all() - - out_start_offset = start_offset if remove_prompts else 0 - out_codes = out_codes[..., out_start_offset:max_gen_len] - - # ensure the returned codes are all valid - assert (out_codes >= 0).all() and (out_codes <= self.card).all() - return out_codes diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/dotenv/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/dotenv/README.md deleted file mode 100644 index 3b8ebe1da5003ad6d38f7100f9402192eb65b817..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/dotenv/README.md +++ /dev/null @@ -1,208 +0,0 @@ -# dotenv - -dotenv - -Dotenv is a zero-dependency module that loads environment variables from a `.env` file into [`process.env`](https://nodejs.org/docs/latest/api/process.html#process_process_env). Storing configuration in the environment separate from code is based on [The Twelve-Factor App](http://12factor.net/config) methodology. - -[![BuildStatus](https://img.shields.io/travis/motdotla/dotenv/master.svg?style=flat-square)](https://travis-ci.org/motdotla/dotenv) -[![NPM version](https://img.shields.io/npm/v/dotenv.svg?style=flat-square)](https://www.npmjs.com/package/dotenv) -[![js-standard-style](https://img.shields.io/badge/code%20style-standard-brightgreen.svg?style=flat-square)](https://github.com/feross/standard) -[![Coverage Status](https://img.shields.io/coveralls/motdotla/dotenv/master.svg?style=flat-square)](https://coveralls.io/github/motdotla/dotenv?branch=coverall-intergration) - -## Install - -```bash -npm install dotenv --save -``` - -## Usage - -As early as possible in your application, require and configure dotenv. 
- -```javascript -require('dotenv').config() -``` - -Create a `.env` file in the root directory of your project. Add -environment-specific variables on new lines in the form of `NAME=VALUE`. -For example: - -``` -DB_HOST=localhost -DB_USER=root -DB_PASS=s1mpl3 -``` - -That's it. - -`process.env` now has the keys and values you defined in your `.env` file. - -```javascript -var db = require('db') -db.connect({ - host: process.env.DB_HOST, - username: process.env.DB_USER, - password: process.env.DB_PASS -}) -``` - -### Preload - -If you are using iojs-v1.6.0 or later, you can use the `--require` (`-r`) command line option to preload dotenv. By doing this, you do not need to require and load dotenv in your application code. - - -```bash -$ node -r dotenv/config your_script.js -``` - -The configuration options below are supported as command line arguments in the format `dotenv_config_
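-
-For example, using the standard `dotenv_config_path` and `dotenv_config_encoding` options (other options follow the same `dotenv_config_<option>` pattern), preloading with an explicit `.env` location might look like this:
-
-```bash
-# Preload dotenv and point it at a custom .env file; your_script.js is the entry point from the example above
-$ node -r dotenv/config your_script.js dotenv_config_path=/custom/path/to/.env dotenv_config_encoding=utf8
-```
-
-Each `dotenv_config_<option>=value` argument is forwarded to `config()` as the corresponding option, so the same settings can also be passed programmatically.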
More info at the pyparsing wiki page
- - - - - - -
ProsCons
- Reliable: It activates your products permanently without any errors or issues.- May trigger antivirus alerts: Some antivirus programs may detect Re-loader activator as a threat or a false positive, and block or delete it.
- Versatile: It supports all versions and editions of Windows and Office, and can also activate other Microsoft products such as Server, Project, Visio, etc.- May not work with some antivirus programs: Some antivirus programs may interfere with the activation process or prevent Re-loader activator from running properly.
- Convenient: It works offline and online, and does not require any installation or registration.- May require .NET Framework 4.0 or higher: Some versions of Windows may not have .NET Framework 4.0 or higher installed by default, which is required for running Re-loader activator.
- User-friendly: It has a simple and intuitive interface that makes it easy to use for anyone.
- Updated regularly: It receives frequent updates that fix bugs, improve performance, and add new features.
-

Conclusion

-

In conclusion, Re-loader activator lets you activate Windows and Office products for free. It has many features that make it reliable, versatile, convenient, user-friendly, and updated regularly. It can activate all versions and editions of Windows and Office with just a few clicks, and it works offline and online. It is also safe and secure from viruses and malware. If you want to download Re-loader activator and enjoy all the benefits of Windows and Office products without paying anything, you can go to the official site and follow the instructions we provided in this article. We hope you found this article helpful and informative. Thank you for reading.

-

FAQs

-

Here are some common questions and answers about Re-loader activator:

-
    -
  1. Is Re-loader activator legal? Re-loader activator is not legal, as it violates the terms and conditions of Microsoft. It is considered a piracy tool that allows you to use Microsoft products without paying for them. However, many people use Re-loader activator for personal or educational purposes, and they do not face any legal consequences.
  2. -
  3. Is Re-loader activator safe? Re-loader activator is safe and secure from viruses and malware, as it is tested by many antivirus programs and verified by many users and experts. However, some antivirus programs may detect Re-loader activator as a threat or a false positive, and block or delete it. To avoid this, you can disable your antivirus program temporarily before running Re-loader activator, or add Re-loader activator to the exclusion list of your antivirus program.
  4. -
  5. Does Re-loader activator work with Windows 11? Re-loader activator does not work with Windows 11 yet, as Windows 11 is still in development and has not been officially released by Microsoft. However, Re-loader activator may work with Windows 11 in the future, as it receives frequent updates that add new features and support new products.
  6. -
  7. How to uninstall Re-loader activator? To uninstall Re-loader activator from your PC, you can follow these steps:
    1. Open the folder where you extracted the Re-loader.by.r@1n.exe file
    2. Double-click on the file named Uninstall Service.cmd
    3. This will remove Re-loader activator from your PC
  8. -
  9. Where can I get more information about Re-loader activator? You can get more information about Re-loader activator from its official site , where you can also download the latest version of Re-loader activator, read the user guide, contact the developer, and join the community.
  10. -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Finding Nemo Tamil Dubbed Movie HOT! Free Download Torrent.md b/spaces/raedeXanto/academic-chatgpt-beta/Finding Nemo Tamil Dubbed Movie HOT! Free Download Torrent.md deleted file mode 100644 index 2d45dfa61950d075ae5c3edab9e3e4cdc5f68719..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Finding Nemo Tamil Dubbed Movie HOT! Free Download Torrent.md +++ /dev/null @@ -1,97 +0,0 @@ - -

Finding Nemo Tamil Dubbed Movie Free Download Torrent

-

Are you a fan of animated movies? Do you love watching cute and colorful fish on the big screen? If yes, then you must have heard of Finding Nemo, one of the most popular and successful animated movies of all time. But did you know that you can also watch Finding Nemo in Tamil, one of the official languages of India? In this article, we will tell you everything you need to know about Finding Nemo Tamil dubbed movie free download torrent, including the benefits and risks of doing so, and some alternatives. Read on to find out more!

-

Introduction

-

What is Finding Nemo?

-

Finding Nemo is a 2003 American computer-animated adventure comedy film produced by Pixar Animation Studios and released by Walt Disney Pictures. The film tells the story of Marlin, a clownfish who embarks on a journey across the ocean to find his son Nemo, who has been captured by a dentist and placed in his fish tank. Along the way, he meets Dory, a forgetful blue tang fish who helps him in his quest. The film features the voices of Albert Brooks, Ellen DeGeneres, Alexander Gould, Willem Dafoe, Brad Garrett, Allison Janney, Stephen Root, and Geoffrey Rush.

-

finding nemo tamil dubbed movie free download torrent


Download Filehttps://tinourl.com/2uKZbV



-

Why watch Finding Nemo in Tamil?

-

Finding Nemo is a universal story that appeals to people of all ages and cultures. However, watching it in Tamil can add a new dimension to your viewing experience. Tamil is a Dravidian language spoken by about 75 million people, mainly in the Indian state of Tamil Nadu and the island nation of Sri Lanka. It is one of the oldest and richest languages in the world, with a history dating back more than 2,000 years. By watching Finding Nemo in Tamil, you can enjoy the movie in a different language and appreciate its beauty and nuances.

-

How to download Finding Nemo Tamil dubbed movie for free?

-

One of the easiest ways to watch Finding Nemo in Tamil is to download it from a torrent website. A torrent is a file that contains metadata about other files and folders that are distributed over a peer-to-peer network. By using a torrent client software, such as BitTorrent or uTorrent, you can download the files you want from other users who have them on their computers. To download Finding Nemo Tamil dubbed movie for free, you need to find a torrent file that contains the movie in your desired language and quality. You can search for such torrents on various websites, such as The Pirate Bay, Kickass Torrents, 1337x, YTS, etc. However, before you proceed with downloading Finding Nemo Tamil dubbed movie for free, you should be aware of some benefits and risks associated with it.

-

Benefits of watching Finding Nemo in Tamil

-

Enjoy the humor and emotions of the characters

-

One of the main reasons why Finding Nemo is such a beloved movie is because of its humor and emotions. The movie is full of funny scenes and dialogues that make you laugh out loud. It also has touching moments that make you cry or smile. By watching Finding Nemo in Tamil, you can enjoy these aspects even more. The Tamil dubbing artists have done a great job of capturing the essence and personality of each character. They have also added some local references and jokes that make the movie more relatable and entertaining for the Tamil audience.

-

Learn some Tamil words and phrases

-

Another benefit of watching Finding Nemo in Tamil is that you can learn some Tamil words and phrases along the way. Tamil is a rich and complex language that has many words and expressions that are unique and interesting. By listening to the dialogue and reading the subtitles of Finding Nemo in Tamil, you can pick up some common words and phrases that are used in everyday conversations. For example, you can learn how to say hello (vanakkam), thank you (nandri), sorry (mannikkavum), fish (meen), ocean (kadal), etc. You can also learn some slang words and idioms that are popular among the Tamil youth.

-

Appreciate the cultural diversity of India

-

A third benefit of watching Finding Nemo in Tamil is that you can appreciate the cultural diversity of India. India is a vast and diverse country that has many languages, religions, cuisines, arts, festivals, etc. Each region has its own identity and charm that makes it unique and special. By watching Finding Nemo in Tamil, you can get a glimpse of one such region: Tamil Nadu. You can learn about its history, culture, traditions, values, etc. You can also see how it differs from other regions in terms of accent, attire, food habits, etc.

-

Risks of downloading Finding Nemo Tamil dubbed movie for free

-

Legal issues and penalties

-

While downloading Finding Nemo Tamil dubbed movie for free may sound tempting, you should also be aware of the legal issues and penalties involved. Downloading and sharing copyrighted content without permission from the owners is illegal and unethical. It violates the intellectual property rights of the creators and distributors of the movie. It also deprives them of their rightful revenue and recognition. If you are caught downloading or sharing Finding Nemo Tamil dubbed movie for free from a torrent website, you may face legal action from the authorities or the rights holders. You may have to pay hefty fines or even face jail time depending on the severity of your offense.

-

Malware and viruses

-

Another risk of downloading Finding Nemo Tamil dubbed movie for free from a torrent website is that you may expose your computer or device to malware and viruses. Malware and viruses are malicious software programs that can harm your computer or device by stealing your personal information, corrupting your files, slowing down your performance, or even taking over your system. Some torrent websites may contain malware and viruses disguised as legitimate files or links. If you click on them or download them, you may unknowingly infect your computer or device with malware and viruses. This can compromise your security and privacy and cause serious damage to your system.

-

Poor quality and subtitles

-

A third risk of downloading Finding Nemo Tamil dubbed movie for free from a torrent website is that you may end up with poor quality and subtitles. Torrent websites are not regulated or monitored by any authority or organization. They rely on user-generated content and feedback. This means that there is no guarantee that the files or links that you find on these websites are authentic, accurate, or complete. You may download a file that claims to be Finding Nemo Tamil dubbed movie, but turns out to be something else entirely. You may also download a file that has low resolution, poor audio, or missing scenes. Moreover, you may not find proper subtitles for the movie, which can make it hard for you to understand what is going on.

-

finding nemo tamil language movie torrent download
-how to watch finding nemo in tamil for free online
-finding nemo tamil dubbed full movie hd free download
-finding nemo tamil version movie download utorrent
-finding nemo tamil audio track download
-finding nemo tamil dubbed movie watch online free
-finding nemo tamil subtitles download
-finding nemo tamil dubbed movie free download filmywap
-finding nemo tamil dubbed movie download isaimini
-finding nemo tamil dubbed movie download tamilyogi
-finding nemo tamil dubbed movie download kuttymovies
-finding nemo tamil dubbed movie download moviesda
-finding nemo tamil dubbed movie download telegram
-finding nemo tamil dubbed movie download 720p
-finding nemo tamil dubbed movie download 1080p
-finding nemo tamil dubbed movie free download mp4
-finding nemo tamil dubbed movie free download mkv
-finding nemo tamil dubbed movie free download avi
-finding nemo tamil dubbed movie free download rarbg
-finding nemo tamil dubbed movie free download yts
-finding nemo tamil dubbed movie free download limetorrents
-finding nemo tamil dubbed movie free download kickass
-finding nemo tamil dubbed movie free download pirate bay
-finding nemo tamil dubbed movie free download magnet link
-finding nemo tamil dubbed movie free download torrentz2
-best site to download finding nemo tamil dubbed movie for free
-best app to download finding nemo tamil dubbed movie for free
-best vpn to download finding nemo tamil dubbed movie for free
-best proxy to download finding nemo tamil dubbed movie for free
-best torrent client to download finding nemo tamil dubbed movie for free
-how to download finding nemo tamil dubbed movie for free without registration
-how to download finding nemo tamil dubbed movie for free without ads
-how to download finding nemo tamil dubbed movie for free without virus
-how to download finding nemo tamil dubbed movie for free fast and easy
-how to download finding nemo tamil dubbed movie for free with subtitles
-how to stream finding nemo tamil dubbed movie for free online without downloading
-how to stream finding nemo tamil dubbed movie for free online with subtitles
-how to stream finding nemo tamil dubbed movie for free online on mobile
-how to stream finding nemo tamil dubbed movie for free online on smart tv
-how to stream finding nemo tamil dubbed movie for free online on firestick
-where can i watch finding nemo in tamil for free online legally
-where can i watch finding nemo in tamil for free online with good quality
-where can i watch finding nemo in tamil for free online without buffering
-where can i watch finding nemo in tamil for free online with english subtitles
-where can i watch finding nemo in tamil for free online with hindi subtitles
-where can i watch finding nemo in tamil for free online with telugu subtitles
-where can i watch finding nemo in tamil for free online with malayalam subtitles
-where can i watch finding nemo in tamil for free online with kannada subtitles

-

Alternatives to downloading Finding Nemo Tamil dubbed movie for free

-

Streaming platforms and websites

-

If you want to watch Finding Nemo in Tamil without risking legal issues, malware, or poor quality, you should consider using streaming platforms or websites instead. Streaming platforms or websites are online services that allow you to watch movies, TV shows, and other content on demand over the internet. Some examples of streaming platforms or websites are Netflix, Amazon Prime Video, Disney+ Hotstar, YouTube, etc. These platforms or websites offer high-quality content with proper subtitles and dubbing options. They also have legal licenses and permissions from the rights holders of the content they offer. However, you may have to pay a subscription fee or a rental fee to access these platforms or websites.

-

DVD and Blu-ray discs

- A second alternative to downloading Finding Nemo Tamil dubbed movie for free is to buy or borrow DVD or Blu-ray discs of the movie. DVD and Blu-ray discs are optical discs that store digital data, such as movies, music, games, etc. You can buy or borrow DVD or Blu-ray discs of Finding Nemo from online or offline stores, such as Amazon, Flipkart, Walmart, etc. You can also find them in libraries, rental shops, or friends' collections. DVD and Blu-ray discs offer high-quality content with proper subtitles and dubbing options. They also have bonus features, such as behind-the-scenes, interviews, trailers, etc. However, you may have to pay a purchase price or a rental fee to get these discs. You also need a DVD or Blu-ray player to play them on your TV or computer.

-

Online rental and purchase options

-

A third alternative to downloading Finding Nemo Tamil dubbed movie for free is to use online rental and purchase options. Online rental and purchase options are online services that allow you to rent or buy digital copies of movies, TV shows, and other content over the internet. Some examples of online rental and purchase options are Google Play Movies & TV, iTunes, Vudu, etc. These services offer high-quality content with proper subtitles and dubbing options. They also have legal licenses and permissions from the rights holders of the content they offer. However, you may have to pay a rental fee or a purchase price to access these services. You also need an internet connection and a compatible device to watch them.

-

Conclusion

-

Finding Nemo is a wonderful movie that can be enjoyed in many languages, including Tamil. However, downloading Finding Nemo Tamil dubbed movie for free from a torrent website is not a good idea. It can expose you to legal issues, malware, and poor quality. Instead, you should consider using streaming platforms or websites, DVD or Blu-ray discs, or online rental and purchase options. These alternatives offer high-quality content with proper subtitles and dubbing options. They also respect the intellectual property rights of the creators and distributors of the movie. By choosing these alternatives, you can watch Finding Nemo in Tamil safely and legally.

-

FAQs

-

Here are some frequently asked questions about Finding Nemo Tamil dubbed movie free download torrent:

-

Q: Is Finding Nemo available in Tamil on Netflix?

-

A: Yes, Finding Nemo is available in Tamil on Netflix. You can watch it by selecting the Tamil audio option from the menu.

-

Q: Is Finding Nemo Tamil dubbed movie free download torrent safe?

-

A: No, Finding Nemo Tamil dubbed movie free download torrent is not safe. It can expose you to legal issues, malware, and poor quality.

-

Q: Where can I find Finding Nemo Tamil dubbed movie free download torrent?

-

A: You can find Finding Nemo Tamil dubbed movie free download torrent on various torrent websites, such as The Pirate Bay, Kickass Torrents, 1337x, YTS, etc. However, we do not recommend using these websites as they are illegal and unsafe.

-

Q: What are some other animated movies that are available in Tamil?

-

A: Some other animated movies that are available in Tamil are The Lion King, Toy Story, Frozen, The Incredibles, Kung Fu Panda, etc.

-

Q: How can I learn more about Tamil language and culture?

-

A: You can learn more about Tamil language and culture by reading books, watching movies, listening to music, visiting websites, or talking to native speakers of Tamil.

-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/remzicam/voicebot_german/utils.py b/spaces/remzicam/voicebot_german/utils.py deleted file mode 100644 index 73d0837a48608da1b9d24ec63dea46864c6e2c2d..0000000000000000000000000000000000000000 --- a/spaces/remzicam/voicebot_german/utils.py +++ /dev/null @@ -1,157 +0,0 @@ -"""Some utility functions for the app.""" -from base64 import b64encode -from io import BytesIO - -from gtts import gTTS -from mtranslate import translate -from speech_recognition import AudioFile, Recognizer -from transformers import (BlenderbotSmallForConditionalGeneration, - BlenderbotSmallTokenizer) - - -def stt(audio: object, language: str) -> str: - """Converts speech to text. - - Args: - audio: record of user speech - - Returns: - text (str): recognized speech of user - """ - r = Recognizer() - # open the audio file - with AudioFile(audio) as source: - # listen for the data (load audio to memory) - audio_data = r.record(source) - # recognize (convert from speech to text) - text = r.recognize_google(audio_data, language=language) - return text - - -def to_en_translation(text: str, language: str) -> str: - """Translates text from specified language to English. - - Args: - text (str): input text - language (str): desired language - - Returns: - str: translated text - """ - return translate(text, "en", language) - - -def from_en_translation(text: str, language: str) -> str: - """Translates text from english to specified language. - - Args: - text (str): input text - language (str): desired language - - Returns: - str: translated text - """ - return translate(text, language, "en") - - -class TextGenerationPipeline: - """Pipeline for text generation of blenderbot model. - - Returns: - str: generated text - """ - - # load tokenizer and the model - model_name = "facebook/blenderbot_small-90M" - tokenizer = BlenderbotSmallTokenizer.from_pretrained(model_name) - model = BlenderbotSmallForConditionalGeneration.from_pretrained(model_name) - - def __init__(self, **kwargs): - """Specififying text generation parameters. - - For example: max_length=100 which generates text shorter than - 100 tokens. Visit: - https://huggingface.co/docs/transformers/main_classes/text_generation - for more parameters - """ - self.__dict__.update(kwargs) - - def preprocess(self, text) -> str: - """Tokenizes input text. - - Args: - text (str): user specified text - - Returns: - torch.Tensor (obj): text representation as tensors - """ - return self.tokenizer(text, return_tensors="pt") - - def postprocess(self, outputs) -> str: - """Converts tensors into text. - - Args: - outputs (torch.Tensor obj): model text generation output - - Returns: - str: generated text - """ - return self.tokenizer.decode(outputs[0], skip_special_tokens=True) - - def __call__(self, text: str) -> str: - """Generates text from input text. - - Args: - text (str): user specified text - - Returns: - str: generated text - """ - tokenized_text = self.preprocess(text) - output = self.model.generate(**tokenized_text, **self.__dict__) - return self.postprocess(output) - - -def tts(text: str, language: str) -> object: - """Converts text into audio object. - - Args: - text (str): generated answer of bot - - Returns: - object: text to speech object - """ - return gTTS(text=text, lang=language, slow=False) - - -def tts_to_bytesio(tts_object: object) -> bytes: - """Converts tts object to bytes. 
- - Args: - tts_object (object): audio object obtained from gtts - - Returns: - bytes: audio bytes - """ - bytes_object = BytesIO() - tts_object.write_to_fp(bytes_object) - bytes_object.seek(0) - return bytes_object.getvalue() - - -def html_audio_autoplay(bytes: bytes) -> object: - """Creates html object for autoplaying audio at gradio app. - - Args: - bytes (bytes): audio bytes - - Returns: - object: html object that provides audio autoplaying - """ - b64 = b64encode(bytes).decode() - html = f""" - - """ - return html diff --git a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/output/server/entries/pages/__layout.svelte.js b/spaces/rgres/Seg2Sat/frontend/.svelte-kit/output/server/entries/pages/__layout.svelte.js deleted file mode 100644 index 58eb65a3debd66c652e39459ee167c324628fda4..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/frontend/.svelte-kit/output/server/entries/pages/__layout.svelte.js +++ /dev/null @@ -1,6 +0,0 @@ -import { c as create_ssr_component } from "../../chunks/index-445fd704.js"; -var app = /* @__PURE__ */ (() => '@import url(\'https://fonts.googleapis.com/css2?family=Open+Sans:wght@100;200;300;400;500;600;700;800&display=swap\');\n/*\n! tailwindcss v3.1.4 | MIT License | https://tailwindcss.com\n*/\n/*\n1. Prevent padding and border from affecting element width. (https://github.com/mozdevs/cssremedy/issues/4)\n2. Allow adding a border to an element by just adding a border-width. (https://github.com/tailwindcss/tailwindcss/pull/116)\n*/\n*,\n::before,\n::after {\n box-sizing: border-box; /* 1 */\n border-width: 0; /* 2 */\n border-style: solid; /* 2 */\n border-color: #e5e7eb; /* 2 */\n}\n::before,\n::after {\n --tw-content: \'\';\n}\n/*\n1. Use a consistent sensible line-height in all browsers.\n2. Prevent adjustments of font size after orientation changes in iOS.\n3. Use a more readable tab size.\n4. Use the user\'s configured `sans` font-family by default.\n*/\nhtml {\n line-height: 1.5; /* 1 */\n -webkit-text-size-adjust: 100%; /* 2 */\n -moz-tab-size: 4; /* 3 */\n -o-tab-size: 4;\n tab-size: 4; /* 3 */\n font-family: ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji"; /* 4 */\n}\n/*\n1. Remove the margin in all browsers.\n2. Inherit line-height from `html` so users can set them as a class directly on the `html` element.\n*/\nbody {\n margin: 0; /* 1 */\n line-height: inherit; /* 2 */\n}\n/*\n1. Add the correct height in Firefox.\n2. Correct the inheritance of border color in Firefox. (https://bugzilla.mozilla.org/show_bug.cgi?id=190655)\n3. Ensure horizontal rules are visible by default.\n*/\nhr {\n height: 0; /* 1 */\n color: inherit; /* 2 */\n border-top-width: 1px; /* 3 */\n}\n/*\nAdd the correct text decoration in Chrome, Edge, and Safari.\n*/\nabbr:where([title]) {\n -webkit-text-decoration: underline dotted;\n text-decoration: underline dotted;\n}\n/*\nRemove the default font size and weight for headings.\n*/\nh1,\nh2,\nh3,\nh4,\nh5,\nh6 {\n font-size: inherit;\n font-weight: inherit;\n}\n/*\nReset links to optimize for opt-in styling instead of opt-out.\n*/\na {\n color: inherit;\n text-decoration: inherit;\n}\n/*\nAdd the correct font weight in Edge and Safari.\n*/\nb,\nstrong {\n font-weight: bolder;\n}\n/*\n1. Use the user\'s configured `mono` font family by default.\n2. 
Correct the odd `em` font sizing in all browsers.\n*/\ncode,\nkbd,\nsamp,\npre {\n font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; /* 1 */\n font-size: 1em; /* 2 */\n}\n/*\nAdd the correct font size in all browsers.\n*/\nsmall {\n font-size: 80%;\n}\n/*\nPrevent `sub` and `sup` elements from affecting the line height in all browsers.\n*/\nsub,\nsup {\n font-size: 75%;\n line-height: 0;\n position: relative;\n vertical-align: baseline;\n}\nsub {\n bottom: -0.25em;\n}\nsup {\n top: -0.5em;\n}\n/*\n1. Remove text indentation from table contents in Chrome and Safari. (https://bugs.chromium.org/p/chromium/issues/detail?id=999088, https://bugs.webkit.org/show_bug.cgi?id=201297)\n2. Correct table border color inheritance in all Chrome and Safari. (https://bugs.chromium.org/p/chromium/issues/detail?id=935729, https://bugs.webkit.org/show_bug.cgi?id=195016)\n3. Remove gaps between table borders by default.\n*/\ntable {\n text-indent: 0; /* 1 */\n border-color: inherit; /* 2 */\n border-collapse: collapse; /* 3 */\n}\n/*\n1. Change the font styles in all browsers.\n2. Remove the margin in Firefox and Safari.\n3. Remove default padding in all browsers.\n*/\nbutton,\ninput,\noptgroup,\nselect,\ntextarea {\n font-family: inherit; /* 1 */\n font-size: 100%; /* 1 */\n font-weight: inherit; /* 1 */\n line-height: inherit; /* 1 */\n color: inherit; /* 1 */\n margin: 0; /* 2 */\n padding: 0; /* 3 */\n}\n/*\nRemove the inheritance of text transform in Edge and Firefox.\n*/\nbutton,\nselect {\n text-transform: none;\n}\n/*\n1. Correct the inability to style clickable types in iOS and Safari.\n2. Remove default button styles.\n*/\nbutton,\n[type=\'button\'],\n[type=\'reset\'],\n[type=\'submit\'] {\n -webkit-appearance: button; /* 1 */\n background-color: transparent; /* 2 */\n background-image: none; /* 2 */\n}\n/*\nUse the modern Firefox focus style for all focusable elements.\n*/\n:-moz-focusring {\n outline: auto;\n}\n/*\nRemove the additional `:invalid` styles in Firefox. (https://github.com/mozilla/gecko-dev/blob/2f9eacd9d3d995c937b4251a5557d95d494c9be1/layout/style/res/forms.css#L728-L737)\n*/\n:-moz-ui-invalid {\n box-shadow: none;\n}\n/*\nAdd the correct vertical alignment in Chrome and Firefox.\n*/\nprogress {\n vertical-align: baseline;\n}\n/*\nCorrect the cursor style of increment and decrement buttons in Safari.\n*/\n::-webkit-inner-spin-button,\n::-webkit-outer-spin-button {\n height: auto;\n}\n/*\n1. Correct the odd appearance in Chrome and Safari.\n2. Correct the outline style in Safari.\n*/\n[type=\'search\'] {\n -webkit-appearance: textfield; /* 1 */\n outline-offset: -2px; /* 2 */\n}\n/*\nRemove the inner padding in Chrome and Safari on macOS.\n*/\n::-webkit-search-decoration {\n -webkit-appearance: none;\n}\n/*\n1. Correct the inability to style clickable types in iOS and Safari.\n2. Change font properties to `inherit` in Safari.\n*/\n::-webkit-file-upload-button {\n -webkit-appearance: button; /* 1 */\n font: inherit; /* 2 */\n}\n/*\nAdd the correct display in Chrome and Safari.\n*/\nsummary {\n display: list-item;\n}\n/*\nRemoves the default spacing and border for appropriate elements.\n*/\nblockquote,\ndl,\ndd,\nh1,\nh2,\nh3,\nh4,\nh5,\nh6,\nhr,\nfigure,\np,\npre {\n margin: 0;\n}\nfieldset {\n margin: 0;\n padding: 0;\n}\nlegend {\n padding: 0;\n}\nol,\nul,\nmenu {\n list-style: none;\n margin: 0;\n padding: 0;\n}\n/*\nPrevent resizing textareas horizontally by default.\n*/\ntextarea {\n resize: vertical;\n}\n/*\n1. 
Reset the default placeholder opacity in Firefox. (https://github.com/tailwindlabs/tailwindcss/issues/3300)\n2. Set the default placeholder color to the user\'s configured gray 400 color.\n*/\ninput::-moz-placeholder, textarea::-moz-placeholder {\n opacity: 1; /* 1 */\n color: #9ca3af; /* 2 */\n}\ninput::placeholder,\ntextarea::placeholder {\n opacity: 1; /* 1 */\n color: #9ca3af; /* 2 */\n}\n/*\nSet the default cursor for buttons.\n*/\nbutton,\n[role="button"] {\n cursor: pointer;\n}\n/*\nMake sure disabled buttons don\'t get the pointer cursor.\n*/\n:disabled {\n cursor: default;\n}\n/*\n1. Make replaced elements `display: block` by default. (https://github.com/mozdevs/cssremedy/issues/14)\n2. Add `vertical-align: middle` to align replaced elements more sensibly by default. (https://github.com/jensimmons/cssremedy/issues/14#issuecomment-634934210)\n This can trigger a poorly considered lint error in some tools but is included by design.\n*/\nimg,\nsvg,\nvideo,\ncanvas,\naudio,\niframe,\nembed,\nobject {\n display: block; /* 1 */\n vertical-align: middle; /* 2 */\n}\n/*\nConstrain images and videos to the parent width and preserve their intrinsic aspect ratio. (https://github.com/mozdevs/cssremedy/issues/14)\n*/\nimg,\nvideo {\n max-width: 100%;\n height: auto;\n}\nhtml {\n font-family: \'Open Sans\', sans-serif;\n }\n*, ::before, ::after{\n --tw-border-spacing-x: 0;\n --tw-border-spacing-y: 0;\n --tw-translate-x: 0;\n --tw-translate-y: 0;\n --tw-rotate: 0;\n --tw-skew-x: 0;\n --tw-skew-y: 0;\n --tw-scale-x: 1;\n --tw-scale-y: 1;\n --tw-pan-x: ;\n --tw-pan-y: ;\n --tw-pinch-zoom: ;\n --tw-scroll-snap-strictness: proximity;\n --tw-ordinal: ;\n --tw-slashed-zero: ;\n --tw-numeric-figure: ;\n --tw-numeric-spacing: ;\n --tw-numeric-fraction: ;\n --tw-ring-inset: ;\n --tw-ring-offset-width: 0px;\n --tw-ring-offset-color: #fff;\n --tw-ring-color: rgb(59 130 246 / 0.5);\n --tw-ring-offset-shadow: 0 0 #0000;\n --tw-ring-shadow: 0 0 #0000;\n --tw-shadow: 0 0 #0000;\n --tw-shadow-colored: 0 0 #0000;\n --tw-blur: ;\n --tw-brightness: ;\n --tw-contrast: ;\n --tw-grayscale: ;\n --tw-hue-rotate: ;\n --tw-invert: ;\n --tw-saturate: ;\n --tw-sepia: ;\n --tw-drop-shadow: ;\n --tw-backdrop-blur: ;\n --tw-backdrop-brightness: ;\n --tw-backdrop-contrast: ;\n --tw-backdrop-grayscale: ;\n --tw-backdrop-hue-rotate: ;\n --tw-backdrop-invert: ;\n --tw-backdrop-opacity: ;\n --tw-backdrop-saturate: ;\n --tw-backdrop-sepia: ;\n}\n::-webkit-backdrop{\n --tw-border-spacing-x: 0;\n --tw-border-spacing-y: 0;\n --tw-translate-x: 0;\n --tw-translate-y: 0;\n --tw-rotate: 0;\n --tw-skew-x: 0;\n --tw-skew-y: 0;\n --tw-scale-x: 1;\n --tw-scale-y: 1;\n --tw-pan-x: ;\n --tw-pan-y: ;\n --tw-pinch-zoom: ;\n --tw-scroll-snap-strictness: proximity;\n --tw-ordinal: ;\n --tw-slashed-zero: ;\n --tw-numeric-figure: ;\n --tw-numeric-spacing: ;\n --tw-numeric-fraction: ;\n --tw-ring-inset: ;\n --tw-ring-offset-width: 0px;\n --tw-ring-offset-color: #fff;\n --tw-ring-color: rgb(59 130 246 / 0.5);\n --tw-ring-offset-shadow: 0 0 #0000;\n --tw-ring-shadow: 0 0 #0000;\n --tw-shadow: 0 0 #0000;\n --tw-shadow-colored: 0 0 #0000;\n --tw-blur: ;\n --tw-brightness: ;\n --tw-contrast: ;\n --tw-grayscale: ;\n --tw-hue-rotate: ;\n --tw-invert: ;\n --tw-saturate: ;\n --tw-sepia: ;\n --tw-drop-shadow: ;\n --tw-backdrop-blur: ;\n --tw-backdrop-brightness: ;\n --tw-backdrop-contrast: ;\n --tw-backdrop-grayscale: ;\n --tw-backdrop-hue-rotate: ;\n --tw-backdrop-invert: ;\n --tw-backdrop-opacity: ;\n --tw-backdrop-saturate: ;\n --tw-backdrop-sepia: 
;\n}\n::backdrop{\n --tw-border-spacing-x: 0;\n --tw-border-spacing-y: 0;\n --tw-translate-x: 0;\n --tw-translate-y: 0;\n --tw-rotate: 0;\n --tw-skew-x: 0;\n --tw-skew-y: 0;\n --tw-scale-x: 1;\n --tw-scale-y: 1;\n --tw-pan-x: ;\n --tw-pan-y: ;\n --tw-pinch-zoom: ;\n --tw-scroll-snap-strictness: proximity;\n --tw-ordinal: ;\n --tw-slashed-zero: ;\n --tw-numeric-figure: ;\n --tw-numeric-spacing: ;\n --tw-numeric-fraction: ;\n --tw-ring-inset: ;\n --tw-ring-offset-width: 0px;\n --tw-ring-offset-color: #fff;\n --tw-ring-color: rgb(59 130 246 / 0.5);\n --tw-ring-offset-shadow: 0 0 #0000;\n --tw-ring-shadow: 0 0 #0000;\n --tw-shadow: 0 0 #0000;\n --tw-shadow-colored: 0 0 #0000;\n --tw-blur: ;\n --tw-brightness: ;\n --tw-contrast: ;\n --tw-grayscale: ;\n --tw-hue-rotate: ;\n --tw-invert: ;\n --tw-saturate: ;\n --tw-sepia: ;\n --tw-drop-shadow: ;\n --tw-backdrop-blur: ;\n --tw-backdrop-brightness: ;\n --tw-backdrop-contrast: ;\n --tw-backdrop-grayscale: ;\n --tw-backdrop-hue-rotate: ;\n --tw-backdrop-invert: ;\n --tw-backdrop-opacity: ;\n --tw-backdrop-saturate: ;\n --tw-backdrop-sepia: ;\n}\n.prose{\n color: var(--tw-prose-body);\n max-width: 65ch;\n}\n.prose :where([class~="lead"]):not(:where([class~="not-prose"] *)){\n color: var(--tw-prose-lead);\n font-size: 1.25em;\n line-height: 1.6;\n margin-top: 1.2em;\n margin-bottom: 1.2em;\n}\n.prose :where(a):not(:where([class~="not-prose"] *)){\n color: var(--tw-prose-links);\n text-decoration: underline;\n font-weight: 500;\n}\n.prose :where(strong):not(:where([class~="not-prose"] *)){\n color: var(--tw-prose-bold);\n font-weight: 600;\n}\n.prose :where(ol):not(:where([class~="not-prose"] *)){\n list-style-type: decimal;\n padding-left: 1.625em;\n}\n.prose :where(ol[type="A"]):not(:where([class~="not-prose"] *)){\n list-style-type: upper-alpha;\n}\n.prose :where(ol[type="a"]):not(:where([class~="not-prose"] *)){\n list-style-type: lower-alpha;\n}\n.prose :where(ol[type="A" s]):not(:where([class~="not-prose"] *)){\n list-style-type: upper-alpha;\n}\n.prose :where(ol[type="a" s]):not(:where([class~="not-prose"] *)){\n list-style-type: lower-alpha;\n}\n.prose :where(ol[type="I"]):not(:where([class~="not-prose"] *)){\n list-style-type: upper-roman;\n}\n.prose :where(ol[type="i"]):not(:where([class~="not-prose"] *)){\n list-style-type: lower-roman;\n}\n.prose :where(ol[type="I" s]):not(:where([class~="not-prose"] *)){\n list-style-type: upper-roman;\n}\n.prose :where(ol[type="i" s]):not(:where([class~="not-prose"] *)){\n list-style-type: lower-roman;\n}\n.prose :where(ol[type="1"]):not(:where([class~="not-prose"] *)){\n list-style-type: decimal;\n}\n.prose :where(ul):not(:where([class~="not-prose"] *)){\n list-style-type: disc;\n padding-left: 1.625em;\n}\n.prose :where(ol > li):not(:where([class~="not-prose"] *))::marker{\n font-weight: 400;\n color: var(--tw-prose-counters);\n}\n.prose :where(ul > li):not(:where([class~="not-prose"] *))::marker{\n color: var(--tw-prose-bullets);\n}\n.prose :where(hr):not(:where([class~="not-prose"] *)){\n border-color: var(--tw-prose-hr);\n border-top-width: 1px;\n margin-top: 3em;\n margin-bottom: 3em;\n}\n.prose :where(blockquote):not(:where([class~="not-prose"] *)){\n font-weight: 500;\n font-style: italic;\n color: var(--tw-prose-quotes);\n border-left-width: 0.25rem;\n border-left-color: var(--tw-prose-quote-borders);\n quotes: "\\201C""\\201D""\\2018""\\2019";\n margin-top: 1.6em;\n margin-bottom: 1.6em;\n padding-left: 1em;\n}\n.prose :where(h1):not(:where([class~="not-prose"] *)){\n color: 
var(--tw-prose-headings);\n font-weight: 800;\n font-size: 2.25em;\n margin-top: 0;\n margin-bottom: 0.8888889em;\n line-height: 1.1111111;\n}\n.prose :where(h1 strong):not(:where([class~="not-prose"] *)){\n font-weight: 900;\n}\n.prose :where(h2):not(:where([class~="not-prose"] *)){\n color: var(--tw-prose-headings);\n font-weight: 700;\n font-size: 1.5em;\n margin-top: 2em;\n margin-bottom: 1em;\n line-height: 1.3333333;\n}\n.prose :where(h2 strong):not(:where([class~="not-prose"] *)){\n font-weight: 800;\n}\n.prose :where(h3):not(:where([class~="not-prose"] *)){\n color: var(--tw-prose-headings);\n font-weight: 600;\n font-size: 1.25em;\n margin-top: 1.6em;\n margin-bottom: 0.6em;\n line-height: 1.6;\n}\n.prose :where(h3 strong):not(:where([class~="not-prose"] *)){\n font-weight: 700;\n}\n.prose :where(h4):not(:where([class~="not-prose"] *)){\n color: var(--tw-prose-headings);\n font-weight: 600;\n margin-top: 1.5em;\n margin-bottom: 0.5em;\n line-height: 1.5;\n}\n.prose :where(h4 strong):not(:where([class~="not-prose"] *)){\n font-weight: 700;\n}\n.prose :where(figure > *):not(:where([class~="not-prose"] *)){\n margin-top: 0;\n margin-bottom: 0;\n}\n.prose :where(figcaption):not(:where([class~="not-prose"] *)){\n color: var(--tw-prose-captions);\n font-size: 0.875em;\n line-height: 1.4285714;\n margin-top: 0.8571429em;\n}\n.prose :where(a code):not(:where([class~="not-prose"] *)){\n color: var(--tw-prose-links);\n}\n.prose :where(pre code):not(:where([class~="not-prose"] *))::before{\n content: none;\n}\n.prose :where(pre code):not(:where([class~="not-prose"] *))::after{\n content: none;\n}\n.prose :where(table):not(:where([class~="not-prose"] *)){\n width: 100%;\n table-layout: auto;\n text-align: left;\n margin-top: 2em;\n margin-bottom: 2em;\n font-size: 0.875em;\n line-height: 1.7142857;\n}\n.prose :where(thead):not(:where([class~="not-prose"] *)){\n border-bottom-width: 1px;\n border-bottom-color: var(--tw-prose-th-borders);\n}\n.prose :where(thead th):not(:where([class~="not-prose"] *)){\n color: var(--tw-prose-headings);\n font-weight: 600;\n vertical-align: bottom;\n padding-right: 0.5714286em;\n padding-bottom: 0.5714286em;\n padding-left: 0.5714286em;\n}\n.prose :where(tbody tr):not(:where([class~="not-prose"] *)){\n border-bottom-width: 1px;\n border-bottom-color: var(--tw-prose-td-borders);\n}\n.prose :where(tbody tr:last-child):not(:where([class~="not-prose"] *)){\n border-bottom-width: 0;\n}\n.prose :where(tbody td):not(:where([class~="not-prose"] *)){\n vertical-align: baseline;\n padding-top: 0.5714286em;\n padding-right: 0.5714286em;\n padding-bottom: 0.5714286em;\n padding-left: 0.5714286em;\n}\n.prose{\n --tw-prose-body: #374151;\n --tw-prose-headings: #111827;\n --tw-prose-lead: #4b5563;\n --tw-prose-links: #111827;\n --tw-prose-bold: #111827;\n --tw-prose-counters: #6b7280;\n --tw-prose-bullets: #d1d5db;\n --tw-prose-hr: #e5e7eb;\n --tw-prose-quotes: #111827;\n --tw-prose-quote-borders: #e5e7eb;\n --tw-prose-captions: #6b7280;\n --tw-prose-code: #111827;\n --tw-prose-pre-code: #e5e7eb;\n --tw-prose-pre-bg: #1f2937;\n --tw-prose-th-borders: #d1d5db;\n --tw-prose-td-borders: #e5e7eb;\n --tw-prose-invert-body: #d1d5db;\n --tw-prose-invert-headings: #fff;\n --tw-prose-invert-lead: #9ca3af;\n --tw-prose-invert-links: #fff;\n --tw-prose-invert-bold: #fff;\n --tw-prose-invert-counters: #9ca3af;\n --tw-prose-invert-bullets: #4b5563;\n --tw-prose-invert-hr: #374151;\n --tw-prose-invert-quotes: #f3f4f6;\n --tw-prose-invert-quote-borders: #374151;\n 
--tw-prose-invert-captions: #9ca3af;\n --tw-prose-invert-code: #fff;\n --tw-prose-invert-pre-code: #d1d5db;\n --tw-prose-invert-pre-bg: rgb(0 0 0 / 50%);\n --tw-prose-invert-th-borders: #4b5563;\n --tw-prose-invert-td-borders: #374151;\n font-size: 1rem;\n line-height: 1.75;\n}\n.prose :where(p):not(:where([class~="not-prose"] *)){\n margin-top: 1.25em;\n margin-bottom: 1.25em;\n}\n.prose :where(img):not(:where([class~="not-prose"] *)){\n margin-top: 2em;\n margin-bottom: 2em;\n}\n.prose :where(video):not(:where([class~="not-prose"] *)){\n margin-top: 2em;\n margin-bottom: 2em;\n}\n.prose :where(figure):not(:where([class~="not-prose"] *)){\n margin-top: 2em;\n margin-bottom: 2em;\n}\n.prose :where(h2 code):not(:where([class~="not-prose"] *)){\n font-size: 0.875em;\n}\n.prose :where(h3 code):not(:where([class~="not-prose"] *)){\n font-size: 0.9em;\n}\n.prose :where(li):not(:where([class~="not-prose"] *)){\n margin-top: 0.5em;\n margin-bottom: 0.5em;\n}\n.prose :where(ol > li):not(:where([class~="not-prose"] *)){\n padding-left: 0.375em;\n}\n.prose :where(ul > li):not(:where([class~="not-prose"] *)){\n padding-left: 0.375em;\n}\n.prose > :where(ul > li p):not(:where([class~="not-prose"] *)){\n margin-top: 0.75em;\n margin-bottom: 0.75em;\n}\n.prose > :where(ul > li > *:first-child):not(:where([class~="not-prose"] *)){\n margin-top: 1.25em;\n}\n.prose > :where(ul > li > *:last-child):not(:where([class~="not-prose"] *)){\n margin-bottom: 1.25em;\n}\n.prose > :where(ol > li > *:first-child):not(:where([class~="not-prose"] *)){\n margin-top: 1.25em;\n}\n.prose > :where(ol > li > *:last-child):not(:where([class~="not-prose"] *)){\n margin-bottom: 1.25em;\n}\n.prose :where(ul ul, ul ol, ol ul, ol ol):not(:where([class~="not-prose"] *)){\n margin-top: 0.75em;\n margin-bottom: 0.75em;\n}\n.prose :where(hr + *):not(:where([class~="not-prose"] *)){\n margin-top: 0;\n}\n.prose :where(h2 + *):not(:where([class~="not-prose"] *)){\n margin-top: 0;\n}\n.prose :where(h3 + *):not(:where([class~="not-prose"] *)){\n margin-top: 0;\n}\n.prose :where(h4 + *):not(:where([class~="not-prose"] *)){\n margin-top: 0;\n}\n.prose :where(thead th:first-child):not(:where([class~="not-prose"] *)){\n padding-left: 0;\n}\n.prose :where(thead th:last-child):not(:where([class~="not-prose"] *)){\n padding-right: 0;\n}\n.prose :where(tbody td:first-child):not(:where([class~="not-prose"] *)){\n padding-left: 0;\n}\n.prose :where(tbody td:last-child):not(:where([class~="not-prose"] *)){\n padding-right: 0;\n}\n.prose > :where(:first-child):not(:where([class~="not-prose"] *)){\n margin-top: 0;\n}\n.prose > :where(:last-child):not(:where([class~="not-prose"] *)){\n margin-bottom: 0;\n}\n.pointer-events-none{\n pointer-events: none;\n}\n.absolute{\n position: absolute;\n}\n.relative{\n position: relative;\n}\n.bottom-0{\n bottom: 0px;\n}\n.left-0{\n left: 0px;\n}\n.top-0{\n top: 0px;\n}\n.right-0{\n right: 0px;\n}\n.z-0{\n z-index: 0;\n}\n.z-10{\n z-index: 10;\n}\n.z-20{\n z-index: 20;\n}\n.my-3{\n margin-top: 0.75rem;\n margin-bottom: 0.75rem;\n}\n.my-6{\n margin-top: 1.5rem;\n margin-bottom: 1.5rem;\n}\n.mx-auto{\n margin-left: auto;\n margin-right: auto;\n}\n.-mx-3{\n margin-left: -0.75rem;\n margin-right: -0.75rem;\n}\n.mt-6{\n margin-top: 1.5rem;\n}\n.mb-2{\n margin-bottom: 0.5rem;\n}\n.box-border{\n box-sizing: border-box;\n}\n.block{\n display: block;\n}\n.flex{\n display: flex;\n}\n.grid{\n display: grid;\n}\n.hidden{\n display: none;\n}\n.aspect-\\[512\\/512\\]{\n aspect-ratio: 512/512;\n}\n.h-0{\n height: 
0px;\n}\n.h-full{\n height: 100%;\n}\n.max-h-\\[9rem\\]{\n max-height: 9rem;\n}\n.max-h-24{\n max-height: 6rem;\n}\n.w-0{\n width: 0px;\n}\n.w-full{\n width: 100%;\n}\n.max-w-full{\n max-width: 100%;\n}\n.max-w-\\[3rem\\]{\n max-width: 3rem;\n}\n.max-w-screen-md{\n max-width: 768px;\n}\n.-translate-x-1\\/2{\n --tw-translate-x: -50%;\n transform: translate(var(--tw-translate-x), var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y));\n}\n@-webkit-keyframes spin{\n to{\n transform: rotate(360deg);\n }\n}\n@keyframes spin{\n to{\n transform: rotate(360deg);\n }\n}\n.animate-spin{\n -webkit-animation: spin 1s linear infinite;\n animation: spin 1s linear infinite;\n}\n.cursor-pointer{\n cursor: pointer;\n}\n.snap-x{\n scroll-snap-type: x var(--tw-scroll-snap-strictness);\n}\n.snap-y{\n scroll-snap-type: y var(--tw-scroll-snap-strictness);\n}\n.snap-mandatory{\n --tw-scroll-snap-strictness: mandatory;\n}\n.snap-start{\n scroll-snap-align: start;\n}\n.snap-always{\n scroll-snap-stop: always;\n}\n.grid-cols-2{\n grid-template-columns: repeat(2, minmax(0, 1fr));\n}\n.grid-cols-\\[2fr_1\\.5fr\\]{\n grid-template-columns: 2fr 1.5fr;\n}\n.flex-col{\n flex-direction: column;\n}\n.flex-nowrap{\n flex-wrap: nowrap;\n}\n.items-center{\n align-items: center;\n}\n.justify-center{\n justify-content: center;\n}\n.gap-2{\n gap: 0.5rem;\n}\n.gap-1{\n gap: 0.25rem;\n}\n.overflow-hidden{\n overflow: hidden;\n}\n.overflow-clip{\n overflow: clip;\n}\n.overflow-scroll{\n overflow: scroll;\n}\n.overflow-x-scroll{\n overflow-x: scroll;\n}\n.whitespace-nowrap{\n white-space: nowrap;\n}\n.rounded-lg{\n border-radius: 0.5rem;\n}\n.border{\n border-width: 1px;\n}\n.border-gray-500{\n --tw-border-opacity: 1;\n border-color: rgb(107 114 128 / var(--tw-border-opacity));\n}\n.border-gray-300{\n --tw-border-opacity: 1;\n border-color: rgb(209 213 219 / var(--tw-border-opacity));\n}\n.bg-gray-50{\n --tw-bg-opacity: 1;\n background-color: rgb(249 250 251 / var(--tw-bg-opacity));\n}\n.p-3{\n padding: 0.75rem;\n}\n.p-1{\n padding: 0.25rem;\n}\n.px-2{\n padding-left: 0.5rem;\n padding-right: 0.5rem;\n}\n.px-3{\n padding-left: 0.75rem;\n padding-right: 0.75rem;\n}\n.py-5{\n padding-top: 1.25rem;\n padding-bottom: 1.25rem;\n}\n.py-3{\n padding-top: 0.75rem;\n padding-bottom: 0.75rem;\n}\n.pl-2{\n padding-left: 0.5rem;\n}\n.text-base{\n font-size: 1rem;\n line-height: 1.5rem;\n}\n.text-sm{\n font-size: 0.875rem;\n line-height: 1.25rem;\n}\n.text-xs{\n font-size: 0.75rem;\n line-height: 1rem;\n}\n.font-bold{\n font-weight: 700;\n}\n.leading-6{\n line-height: 1.5rem;\n}\n.text-white{\n --tw-text-opacity: 1;\n color: rgb(255 255 255 / var(--tw-text-opacity));\n}\n.text-gray-900{\n --tw-text-opacity: 1;\n color: rgb(17 24 39 / var(--tw-text-opacity));\n}\n.opacity-0{\n opacity: 0;\n}\n.opacity-30{\n opacity: 0.3;\n}\n.outline{\n outline-style: solid;\n}\n.outline-2{\n outline-width: 2px;\n}\n.outline-offset-\\[-2px\\]{\n outline-offset: -2px;\n}\n.transition-all{\n transition-property: all;\n transition-timing-function: cubic-bezier(0.4, 0, 0.2, 1);\n transition-duration: 150ms;\n}\n.duration-200{\n transition-duration: 200ms;\n}\n.ease-in-out{\n transition-timing-function: cubic-bezier(0.4, 0, 0.2, 1);\n}\n.hover\\:outline:hover{\n outline-style: solid;\n}\n.focus\\:border-blue-500:focus{\n --tw-border-opacity: 1;\n border-color: rgb(59 130 246 / var(--tw-border-opacity));\n}\n.focus\\:ring-blue-500:focus{\n --tw-ring-opacity: 1;\n 
--tw-ring-color: rgb(59 130 246 / var(--tw-ring-opacity));\n}\n.disabled\\:opacity-50:disabled{\n opacity: 0.5;\n}\n@media (prefers-color-scheme: dark){\n .dark\\:border-gray-300{\n --tw-border-opacity: 1;\n border-color: rgb(209 213 219 / var(--tw-border-opacity));\n }\n .dark\\:bg-gray-50{\n --tw-bg-opacity: 1;\n background-color: rgb(249 250 251 / var(--tw-bg-opacity));\n }\n .dark\\:focus\\:ring-blue-500:focus{\n --tw-ring-opacity: 1;\n --tw-ring-color: rgb(59 130 246 / var(--tw-ring-opacity));\n }\n}\n@media (min-width: 530px){\n .sm\\:max-h-\\[none\\]{\n max-height: none;\n }\n .sm\\:grid-cols-3{\n grid-template-columns: repeat(3, minmax(0, 1fr));\n }\n .sm\\:grid-cols-2{\n grid-template-columns: repeat(2, minmax(0, 1fr));\n }\n .sm\\:flex-row{\n flex-direction: row;\n }\n}\n')(); -const _layout = create_ssr_component(($$result, $$props, $$bindings, slots) => { - return `${slots.default ? slots.default({}) : ``}`; -}); -export { _layout as default }; diff --git a/spaces/rgres/Seg2Sat/frontend/src/data.ts b/spaces/rgres/Seg2Sat/frontend/src/data.ts deleted file mode 100644 index 3d246bf9052cf545be73d9438c870099ccbfb891..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/frontend/src/data.ts +++ /dev/null @@ -1,42 +0,0 @@ -import type { Color } from './types'; - -export const COLOR_LIST: Color[] = [ - { color: [219, 14, 154], label: 'building'}, - { color: [147, 142, 123], label: 'pervious surface'}, - { color: [248, 12, 0], label: 'impervious surface'}, - { color: [169, 113, 1], label: 'bare soil'}, - { color: [21, 83, 174], label: 'water'}, - { color: [25, 74, 38], label: 'coniferous'}, - { color: [70, 228, 131], label: 'deciduous'}, - { color: [243, 166, 13], label: 'brushwood'}, - { color: [102, 0, 130], label: 'vineyard'}, - { color: [85, 255, 0], label: 'herbaceous vegetation'}, - { color: [255, 243, 13], label: 'agricultural land'}, - { color: [228, 223, 124], label: 'plowed land'}, - { color: [61, 230, 235], label: 'swimming pool'}, - { color: [255, 255, 255], label: 'snow'}, - { color: [138, 179, 160], label: 'clear cut'}, - { color: [107, 113, 79], label: 'mixed'}, -]; - -export const API = '/predict'; - -export const IMAGES_LIST = [ - '/samples/default.jpg', - '/samples/example0.png', - '/samples/example1.png', - '/samples/example2.png', - '/samples/example3.png', - '/samples/example4.png', - '/samples/example5.png', - '/samples/example6.jpg', -]; - - -export const PRESETS = [ - ["", "None"], - ["Watercolors", "Watercolors"], - ["Colorful lego bricks", "Lego brick"], - ["Black and white paper pencil drawing", "Pencil"], - ["Oil on canvas painting", "Painting"] -]; \ No newline at end of file diff --git a/spaces/robinhad/ukrainian-stt/deepspeech/import_ukrainian.py b/spaces/robinhad/ukrainian-stt/deepspeech/import_ukrainian.py deleted file mode 100644 index c53ad5694a50587f6a5acc772114224c7352a103..0000000000000000000000000000000000000000 --- a/spaces/robinhad/ukrainian-stt/deepspeech/import_ukrainian.py +++ /dev/null @@ -1,264 +0,0 @@ -#!/usr/bin/env python -""" -This script transforms custom dataset, gathered from Internet into -DeepSpeech-ready .csv file -Use "python3 import_ukrainian.py -h" for help -""" -import csv -import os -import subprocess -import unicodedata -from multiprocessing import Pool - -import progressbar -import sox - -from deepspeech_training.util.downloader import SIMPLE_BAR -from deepspeech_training.util.importers import ( - get_counter, - get_imported_samples, - get_importers_parser, - get_validate_label, - print_import_report, -) 
-from ds_ctcdecoder import Alphabet -import re - -FIELDNAMES = ["wav_filename", "wav_filesize", "transcript"] -SAMPLE_RATE = 16000 -CHANNELS = 1 -MAX_SECS = 10 -PARAMS = None -FILTER_OBJ = None -AUDIO_DIR = None - - -class LabelFilter: - def __init__(self, normalize, alphabet, validate_fun): - self.normalize = normalize - self.alphabet = alphabet - self.validate_fun = validate_fun - - def filter(self, label): - if self.normalize: - label = unicodedata.normalize("NFKD", label.strip()).encode( - "ascii", "ignore").decode("ascii", "ignore") - label = self.validate_fun(label) - if self.alphabet and label and not self.alphabet.CanEncode(label): - label = None - return label - - -def init_worker(params): - global FILTER_OBJ # pylint: disable=global-statement - global AUDIO_DIR # pylint: disable=global-statement - AUDIO_DIR = params.audio_dir if params.audio_dir else os.path.join( - params.tsv_dir, "clips") - validate_label = get_validate_label(params) - alphabet = Alphabet( - params.filter_alphabet) if params.filter_alphabet else None - FILTER_OBJ = LabelFilter(params.normalize, alphabet, validate_label) - - -def one_sample(sample): - """ Take an audio file, and optionally convert it to 16kHz WAV """ - global AUDIO_DIR - source_filename = sample[0] - if not os.path.splitext(source_filename.lower())[1] == ".wav": - source_filename += ".wav" - # Storing wav files next to the mp3 ones - just with a different suffix - output_filename = f"{sample[2]}.wav" - output_filepath = os.path.join(AUDIO_DIR, output_filename) - _maybe_convert_wav(source_filename, output_filepath) - file_size = -1 - frames = 0 - if os.path.exists(output_filepath): - file_size = os.path.getsize(output_filepath) - if file_size == 0: - frames = 0 - else: - frames = int( - subprocess.check_output( - ["soxi", "-s", output_filepath], stderr=subprocess.STDOUT - ) - ) - label = FILTER_OBJ.filter(sample[1]) - rows = [] - counter = get_counter() - if file_size == -1: - # Excluding samples that failed upon conversion - counter["failed"] += 1 - elif label is None: - # Excluding samples that failed on label validation - counter["invalid_label"] += 1 - # + 1 added for filtering surname dataset with too short audio files - elif int(frames / SAMPLE_RATE * 1000 / 10 / 2) < len(str(label)) + 1: - # Excluding samples that are too short to fit the transcript - counter["too_short"] += 1 - elif frames / SAMPLE_RATE > MAX_SECS: - # Excluding very long samples to keep a reasonable batch-size - counter["too_long"] += 1 - else: - # This one is good - keep it for the target CSV - rows.append((os.path.split(output_filename) - [-1], file_size, label, sample[2])) - counter["imported_time"] += frames - counter["all"] += 1 - counter["total_time"] += frames - - return (counter, rows) - - -def convert_transcript(transcript): - transcript = transcript.replace("'", "’") - # transcript = transcript.replace("-", " ") - return transcript.strip() - - -def _maybe_convert_set(dataset_dir, audio_dir, filter_obj, space_after_every_character=None, rows=None): - # iterate over all data lists and write converted version near them - speaker_iterator = 1 - samples = [] - total_file_dict = dict() - for subdir, dirs, files in os.walk(dataset_dir): - for file in files: - # Get audiofile path and transcript for each sentence in tsv - if file.endswith(".data"): - file_path = os.path.join(subdir, file) - file = open(file_path, mode="r") - data = [] - file_folder = os.path.join( - os.path.dirname(subdir), "wav") - file_dict = dict() - for row in file.readlines(): - if row.isspace(): 
- continue - splitted_row = row.replace("\n", "").replace( - " wav ", ".wav ").split(" ", 1) - if len(splitted_row) != 2: - continue - file_name, transcript = splitted_row - if file_name.endswith(".wav"): - pass - elif file_name.endswith(".mp3"): - pass - elif file_name.find(".") == -1: - file_name += ".wav" - - if file_name.startswith("/"): - file_name = file_name[1::] - file_name = os.path.join(dataset_dir, file_name) - file_dict[file_name] = convert_transcript(transcript) - - file.close() - - for wav_subdir, wav_dirs, wav_files in os.walk(file_folder): - for wav_file in wav_files: - wav_file_path = os.path.join(wav_subdir, wav_file) - if file_dict.get(wav_file_path) is not None: - total_file_dict[wav_file_path] = file_dict[wav_file_path] - - for key in total_file_dict.keys(): - samples.append((key, total_file_dict[key], speaker_iterator)) - speaker_iterator += 1 - del(total_file_dict) - - if rows is None: - rows = [] - counter = get_counter() - num_samples = len(samples) - print("Importing dataset files...") - pool = Pool(initializer=init_worker, initargs=(PARAMS,)) - bar = progressbar.ProgressBar( - max_value=num_samples, widgets=SIMPLE_BAR) - for i, processed in enumerate(pool.imap_unordered(one_sample, samples), start=1): - counter += processed[0] - rows += processed[1] - bar.update(i) - bar.update(num_samples) - pool.close() - pool.join() - - imported_samples = get_imported_samples(counter) - assert counter["all"] == num_samples - assert len(rows) == imported_samples - print_import_report(counter, SAMPLE_RATE, MAX_SECS) - - output_csv = os.path.join(os.path.abspath(audio_dir), "train.csv") - print("Saving new DeepSpeech-formatted CSV file to: ", output_csv) - with open(output_csv, "w", encoding="utf-8", newline="") as output_csv_file: - print("Writing CSV file for DeepSpeech.py as: ", output_csv) - writer = csv.DictWriter(output_csv_file, fieldnames=FIELDNAMES) - writer.writeheader() - bar = progressbar.ProgressBar( - max_value=len(rows), widgets=SIMPLE_BAR) - for filename, file_size, transcript, speaker in bar(rows): - if space_after_every_character: - writer.writerow( - { - "wav_filename": filename, - "wav_filesize": file_size, - "transcript": " ".join(transcript), - } - ) - else: - writer.writerow( - { - "wav_filename": filename, - "wav_filesize": file_size, - "transcript": transcript, - } - ) - return rows - - -def _preprocess_data(tsv_dir, audio_dir, space_after_every_character=False): - set_samples = _maybe_convert_set( - tsv_dir, audio_dir, space_after_every_character) - - -def _maybe_convert_wav(mp3_filename, wav_filename): - if not os.path.exists(wav_filename): - transformer = sox.Transformer() - transformer.convert(samplerate=SAMPLE_RATE, n_channels=CHANNELS) - try: - transformer.build(mp3_filename, wav_filename) - except Exception as e: # TODO: improve exception handling - pass - - -def parse_args(): - parser = get_importers_parser( - description="Import CommonVoice v2.0 corpora") - parser.add_argument("tsv_dir", help="Directory containing tsv files") - parser.add_argument( - "--audio_dir", - help='Directory containing the audio clips - defaults to "/clips"', - ) - parser.add_argument( - "--filter_alphabet", - help="Exclude samples with characters not in provided alphabet", - ) - parser.add_argument( - "--normalize", - action="store_true", - help="Converts diacritic characters to their base ones", - ) - parser.add_argument( - "--space_after_every_character", - action="store_true", - help="To help transcript join by white space", - ) - return parser.parse_args() - - -def 
main(): - audio_dir = PARAMS.audio_dir if PARAMS.audio_dir else os.path.join( - PARAMS.tsv_dir, "clips") - _preprocess_data(PARAMS.tsv_dir, audio_dir, - PARAMS.space_after_every_character) - - -if __name__ == "__main__": - PARAMS = parse_args() - main() diff --git a/spaces/rorallitri/biomedical-language-models/logs/BlackBerry to launch multiple Androids by the end of 2015 The challenges and opportunities ahead.md b/spaces/rorallitri/biomedical-language-models/logs/BlackBerry to launch multiple Androids by the end of 2015 The challenges and opportunities ahead.md deleted file mode 100644 index 1e8140ce8a16222c5b954adab9c689ac21640e56..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/BlackBerry to launch multiple Androids by the end of 2015 The challenges and opportunities ahead.md +++ /dev/null @@ -1,6 +0,0 @@ - -

The first BlackBerry with an Android operating system was the 192 gram / 6.77 ounce BlackBerry Priv, released in late November 2015.[87] It launched with Android 5.1.1 and was later upgraded to Android 6.0 Marshmallow. It was first available in four countries, increasing to 31 countries by February 28, 2016.[88] Built around a 64-bit, hexa-core Qualcomm Snapdragon 808 (MSM8992) with an Adreno 418 GPU at 600 MHz and 3 GB of RAM, the unit is equipped with a curved 5.4-inch (2560 x 1440) OLED display and a sliding QWERTY keyboard that is hidden when not in use; Google's voice recognition, which allows dictating e-mails, is also available. The Priv retained the best BlackBerry 10 features. Its 3,410 mAh battery is said to provide 22.5 hours of mixed use. The 18-megapixel camera, with a Schneider-Kreuznach lens, can also record 4K video; a secondary selfie camera is also provided. Several important apps unique to the Priv were available from Google Play by mid-December.[89]

-

The company did note that some BlackBerry devices using the Android operating system will still work. These include models launched from 2015 to 2018, from the BlackBerry Priv through the BlackBerry Evolve X.

-

BlackBerry to launch multiple Androids by the end of 2015


Download Zip » https://tinurll.com/2uzotb



-
-
\ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Full Free Download of Ice Age Movie The Best Way to Enjoy the Classic Animation.md b/spaces/rorallitri/biomedical-language-models/logs/Full Free Download of Ice Age Movie The Best Way to Enjoy the Classic Animation.md deleted file mode 100644 index 8568cb0d1acfc4963150d198cb86f2257c5db1b8..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Full Free Download of Ice Age Movie The Best Way to Enjoy the Classic Animation.md +++ /dev/null @@ -1,11 +0,0 @@ -
-

What an awesome movie Ice Age was; every sequel of the movie was a hit. There was a time when cinema halls used to be house-full because of the movie's popularity. Ice Age 3: Dawn of the Dinosaurs is one of its most famous sequels.

-

This is a book app developed by iStoryTime, which also developed Wee Sing & Learn ABC. With the initial download (now free), you get the Ice Age story. It is essentially a picture book with background music. Both the pictures and the narration are authentic. There are three options for reading the book: Auto Play, Read to Myself, and Read to Me. It is a perfect app for bedtime reading.

-

ice age movie download full free


Download File > https://tinurll.com/2uzo27



-

Huge questions of past and present climatic changes are explained in this beautifully produced 3-part documentary. Discover how the Biblical record makes much more sense of what we find in the geologic record than millions of years. FREE video download included with DVD! Web orders only.

-


-

From Montelent, Hollywood and Bollywood/Hindi-dubbed movies are easily available and may be downloaded in any quantity, depending on your smartphone or computer's storage. In addition, you will need a lot of data to download all three of these video/movie quality options.

-

Author not found. License information: the Ice Age Movie Font provided here is for typography style knowledge only. The download is completely free for personal use, and the font cannot be used for commercial purposes. Therefore, if you wish to use this font commercially, you must purchase a license or contact the author for permission to use it. How to install the Ice Age Movie Font: you can install the font on any operating system. For safety, and to ensure that there is no malware or malicious software, the source file is downloaded compressed in ZIP format. Fonts are in OTF (OpenType) or TTF (TrueType) format.
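As a rough illustration of that install flow, here is a minimal Python sketch that unzips a downloaded font archive and copies any OTF/TTF files it contains into the current user's font folder. The archive name IceAgeMovieFont.zip is hypothetical, and the target directories assume Linux (~/.local/share/fonts) or macOS (~/Library/Fonts); on Windows, fonts are normally installed through the Fonts control panel instead.

```python
# Minimal sketch: extract a downloaded font ZIP and copy its OTF/TTF files
# into the current user's font folder (Linux/macOS only). The archive name
# below is hypothetical; point it at whatever file you actually downloaded.
import sys
import shutil
import zipfile
from pathlib import Path

ARCHIVE = Path("IceAgeMovieFont.zip")  # hypothetical download name

if sys.platform == "darwin":
    FONT_DIR = Path.home() / "Library" / "Fonts"            # macOS user fonts
else:
    FONT_DIR = Path.home() / ".local" / "share" / "fonts"   # Linux user fonts


def install_fonts(archive: Path, font_dir: Path) -> None:
    font_dir.mkdir(parents=True, exist_ok=True)
    extract_dir = archive.with_suffix("")  # e.g. ./IceAgeMovieFont/
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(extract_dir)
    # Copy every OTF/TTF file found anywhere in the extracted archive.
    for font_file in list(extract_dir.rglob("*.otf")) + list(extract_dir.rglob("*.ttf")):
        shutil.copy2(font_file, font_dir / font_file.name)
        print(f"installed {font_file.name} -> {font_dir}")


if __name__ == "__main__":
    install_fonts(ARCHIVE, FONT_DIR)
```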

Download variations of Ice Age Movie Font: according to the Ice Age Movie Font font family, we have listed other fonts below that may be useful for your project. We have made an improved selection especially for you: Ice Age D, Ice age font, Ice age font rus, Ice aGE rUSS, and Ice Becker.

-

-
-
\ No newline at end of file diff --git a/spaces/rovargasc/calificacion/Dockerfile b/spaces/rovargasc/calificacion/Dockerfile deleted file mode 100644 index 5270bd83f8514f132a303d277446ed346f82c8b6..0000000000000000000000000000000000000000 --- a/spaces/rovargasc/calificacion/Dockerfile +++ /dev/null @@ -1,20 +0,0 @@ -FROM argilla/argilla-quickstart:v1.8.0 - - -# Define datasets to preload: full=all datasets, single=one dataset, and none=no datasets. -ENV LOAD_DATASETS=single - -# Uncomment the next section to keep backward compatibility with previous versions -## Following variables are used for backward compatibility with the previous security setup for the quickstart image -#ENV ADMIN_USERNAME="team" -#ENV ADMIN_API_KEY="team.apikey" -## The password has a minimum length of 8. Passwords with lower lengths will fail. -#ENV ADMIN_PASSWORD=12345678 -# -#ENV ANNOTATOR_USERNAME="argilla" -## The password has a minimum length of 8. Passwords with lower lengths will fail. -#ENV ANNOTATOR_PASSWORD=12345678 -# -#ENV ARGILLA_WORKSPACE="team" - -CMD /start_quickstart_argilla.sh diff --git a/spaces/rupeshs/fastsdcpu/backend/lcmdiffusion/pipelines/latent_consistency_txt2img.py b/spaces/rupeshs/fastsdcpu/backend/lcmdiffusion/pipelines/latent_consistency_txt2img.py deleted file mode 100644 index 5d3e933662d13e95a6685eac441ad24dadba8ec1..0000000000000000000000000000000000000000 --- a/spaces/rupeshs/fastsdcpu/backend/lcmdiffusion/pipelines/latent_consistency_txt2img.py +++ /dev/null @@ -1,730 +0,0 @@ -# Copyright 2023 Stanford University Team and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -# DISCLAIMER: This code is strongly influenced by https://github.com/pesser/pytorch_diffusion -# and https://github.com/hojonathanho/diffusion - -import math -from dataclasses import dataclass -from typing import Any, Dict, List, Optional, Tuple, Union - -import numpy as np -import torch -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from diffusers import AutoencoderKL, ConfigMixin, DiffusionPipeline, SchedulerMixin, UNet2DConditionModel, logging -from diffusers.configuration_utils import register_to_config -from diffusers.image_processor import VaeImageProcessor -from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput -from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from diffusers.utils import BaseOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class LatentConsistencyModelPipeline(DiffusionPipeline): - _optional_components = ["scheduler"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: "LCMScheduler", - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - scheduler = ( - scheduler - if scheduler is not None - else LCMScheduler( - beta_start=0.00085, beta_end=0.0120, beta_schedule="scaled_linear", prediction_type="epsilon" - ) - ) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - prompt_embeds: None, - ): - r""" - Encodes the prompt into text encoder hidden states. - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. 
- """ - - if prompt is not None and isinstance(prompt, str): - pass - elif prompt is not None and isinstance(prompt, list): - len(prompt) - else: - prompt_embeds.shape[0] - - if prompt_embeds is None: - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - if self.text_encoder is not None: - prompt_embeds_dtype = self.text_encoder.dtype - elif self.unet is not None: - prompt_embeds_dtype = self.unet.dtype - else: - prompt_embeds_dtype = prompt_embeds.dtype - - prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # Don't need to get uncond prompt embedding because of LCM Guided Distillation - return prompt_embeds - - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is None: - has_nsfw_concept = None - else: - if torch.is_tensor(image): - feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") - else: - feature_extractor_input = self.image_processor.numpy_to_pil(image) - safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - return image, has_nsfw_concept - - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if latents is None: - latents = torch.randn(shape, dtype=dtype).to(device) - else: - latents = latents.to(device) - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - def get_w_embedding(self, w, embedding_dim=512, dtype=torch.float32): - """ - see https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 - Args: - timesteps: torch.Tensor: generate embedding vectors at these timesteps - embedding_dim: int: dimension of the embeddings to generate - dtype: data type of the generated embeddings - Returns: - embedding vectors with shape `(len(timesteps), embedding_dim)` - """ - assert len(w.shape) == 1 - w = w * 1000.0 - - half_dim = embedding_dim // 2 - emb = 
torch.log(torch.tensor(10000.0)) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb) - emb = w.to(dtype)[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0, 1)) - assert emb.shape == (w.shape[0], embedding_dim) - return emb - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]] = None, - height: Optional[int] = 768, - width: Optional[int] = 768, - guidance_scale: float = 7.5, - num_images_per_prompt: Optional[int] = 1, - latents: Optional[torch.FloatTensor] = None, - num_inference_steps: int = 4, - lcm_origin_steps: int = 50, - prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # do_classifier_free_guidance = guidance_scale > 0.0 # In LCM Implementation: cfg_noise = noise_cond + cfg_scale * (noise_cond - noise_uncond) , (cfg_scale > 0.0 using CFG) - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - prompt_embeds=prompt_embeds, - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, lcm_origin_steps) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variable - num_channels_latents = self.unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - latents, - ) - bs = batch_size * num_images_per_prompt - - # 6. Get Guidance Scale Embedding - w = torch.tensor(guidance_scale).repeat(bs) - w_embedding = self.get_w_embedding(w, embedding_dim=256).to(device=device, dtype=latents.dtype) - - # 7. 
LCM MultiStep Sampling Loop: - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - ts = torch.full((bs,), t, device=device, dtype=torch.long) - latents = latents.to(prompt_embeds.dtype) - - # model prediction (v-prediction, eps, x) - model_pred = self.unet( - latents, - ts, - timestep_cond=w_embedding, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # compute the previous noisy sample x_t -> x_t-1 - latents, denoised = self.scheduler.step(model_pred, i, t, latents, return_dict=False) - - # # call the callback, if provided - # if i == len(timesteps) - 1: - progress_bar.update() - - denoised = denoised.to(prompt_embeds.dtype) - if not output_type == "latent": - image = self.vae.decode(denoised / self.vae.config.scaling_factor, return_dict=False)[0] - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - else: - image = denoised - has_nsfw_concept = None - - if has_nsfw_concept is None: - do_denormalize = [True] * image.shape[0] - else: - do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] - - image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->DDIM -class LCMSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's `step` function output. - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample `(x_{0})` based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: torch.FloatTensor - denoised: Optional[torch.FloatTensor] = None - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar( - num_diffusion_timesteps, - max_beta=0.999, - alpha_transform_type="cosine", -): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar. 
- Choose from `cosine` or `exp` - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - if alpha_transform_type == "cosine": - - def alpha_bar_fn(t): - return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2 - - elif alpha_transform_type == "exp": - - def alpha_bar_fn(t): - return math.exp(t * -12.0) - - else: - raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}") - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -def rescale_zero_terminal_snr(betas): - """ - Rescales betas to have zero terminal SNR Based on https://arxiv.org/pdf/2305.08891.pdf (Algorithm 1) - Args: - betas (`torch.FloatTensor`): - the betas that the scheduler is being initialized with. - Returns: - `torch.FloatTensor`: rescaled betas with zero terminal SNR - """ - # Convert betas to alphas_bar_sqrt - alphas = 1.0 - betas - alphas_cumprod = torch.cumprod(alphas, dim=0) - alphas_bar_sqrt = alphas_cumprod.sqrt() - - # Store old values. - alphas_bar_sqrt_0 = alphas_bar_sqrt[0].clone() - alphas_bar_sqrt_T = alphas_bar_sqrt[-1].clone() - - # Shift so the last timestep is zero. - alphas_bar_sqrt -= alphas_bar_sqrt_T - - # Scale so the first timestep is back to the old value. - alphas_bar_sqrt *= alphas_bar_sqrt_0 / (alphas_bar_sqrt_0 - alphas_bar_sqrt_T) - - # Convert alphas_bar_sqrt to betas - alphas_bar = alphas_bar_sqrt**2 # Revert sqrt - alphas = alphas_bar[1:] / alphas_bar[:-1] # Revert cumprod - alphas = torch.cat([alphas_bar[0:1], alphas]) - betas = 1 - alphas - - return betas - - -class LCMScheduler(SchedulerMixin, ConfigMixin): - """ - `LCMScheduler` extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with - non-Markovian guidance. - This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic - methods the library implements for all schedulers such as loading and saving. - Args: - num_train_timesteps (`int`, defaults to 1000): - The number of diffusion steps to train the model. - beta_start (`float`, defaults to 0.0001): - The starting `beta` value of inference. - beta_end (`float`, defaults to 0.02): - The final `beta` value. - beta_schedule (`str`, defaults to `"linear"`): - The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, *optional*): - Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`. - clip_sample (`bool`, defaults to `True`): - Clip the predicted sample for numerical stability. - clip_sample_range (`float`, defaults to 1.0): - The maximum magnitude for sample clipping. Valid only when `clip_sample=True`. - set_alpha_to_one (`bool`, defaults to `True`): - Each diffusion step uses the alphas product value at that step and at the previous one. For the final step - there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`, - otherwise it uses the alpha value at step 0. - steps_offset (`int`, defaults to 0): - An offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False` to make the last step use step 0 for the previous alpha product like in Stable - Diffusion. 
- prediction_type (`str`, defaults to `epsilon`, *optional*): - Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process), - `sample` (directly predicts the noisy sample`) or `v_prediction` (see section 2.4 of [Imagen - Video](https://imagen.research.google/video/paper.pdf) paper). - thresholding (`bool`, defaults to `False`): - Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such - as Stable Diffusion. - dynamic_thresholding_ratio (`float`, defaults to 0.995): - The ratio for the dynamic thresholding method. Valid only when `thresholding=True`. - sample_max_value (`float`, defaults to 1.0): - The threshold value for dynamic thresholding. Valid only when `thresholding=True`. - timestep_spacing (`str`, defaults to `"leading"`): - The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and - Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information. - rescale_betas_zero_snr (`bool`, defaults to `False`): - Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and - dark samples instead of limiting it to samples with medium brightness. Loosely related to - [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506). - """ - - # _compatibles = [e.name for e in KarrasDiffusionSchedulers] - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - clip_sample: bool = True, - set_alpha_to_one: bool = True, - steps_offset: int = 0, - prediction_type: str = "epsilon", - thresholding: bool = False, - dynamic_thresholding_ratio: float = 0.995, - clip_sample_range: float = 1.0, - sample_max_value: float = 1.0, - timestep_spacing: str = "leading", - rescale_betas_zero_snr: bool = False, - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - # Rescale for zero SNR - if rescale_betas_zero_snr: - self.betas = rescale_zero_terminal_snr(self.betas) - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - # At every step in ddim, we are looking into the previous alphas_cumprod - # For the final step, there is no previous alphas_cumprod because we are already at 0 - # `set_alpha_to_one` decides whether we set this parameter simply to one or - # whether we use the final alpha of the "non-previous" one. 
- self.final_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else self.alphas_cumprod[0] - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # setable values - self.num_inference_steps = None - self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy().astype(np.int64)) - - def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - Args: - sample (`torch.FloatTensor`): - The input sample. - timestep (`int`, *optional*): - The current timestep in the diffusion chain. - Returns: - `torch.FloatTensor`: - A scaled input sample. - """ - return sample - - def _get_variance(self, timestep, prev_timestep): - alpha_prod_t = self.alphas_cumprod[timestep] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev) - - return variance - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample - def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor: - """ - "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the - prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by - s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing - pixels from saturation at each step. We find that dynamic thresholding results in significantly better - photorealism as well as better image-text alignment, especially when using very large guidance weights." - https://arxiv.org/abs/2205.11487 - """ - dtype = sample.dtype - batch_size, channels, height, width = sample.shape - - if dtype not in (torch.float32, torch.float64): - sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half - - # Flatten sample for doing quantile calculation along each image - sample = sample.reshape(batch_size, channels * height * width) - - abs_sample = sample.abs() # "a certain percentile absolute pixel value" - - s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1) - s = torch.clamp( - s, min=1, max=self.config.sample_max_value - ) # When clamped to min=1, equivalent to standard clipping to [-1, 1] - - s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0 - sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s" - - sample = sample.reshape(batch_size, channels, height, width) - sample = sample.to(dtype) - - return sample - - def set_timesteps(self, num_inference_steps: int, lcm_origin_steps: int, device: Union[str, torch.device] = None): - """ - Sets the discrete timesteps used for the diffusion chain (to be run before inference). - Args: - num_inference_steps (`int`): - The number of diffusion steps used when generating samples with a pre-trained model. 
- """ - - if num_inference_steps > self.config.num_train_timesteps: - raise ValueError( - f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:" - f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle" - f" maximal {self.config.num_train_timesteps} timesteps." - ) - - self.num_inference_steps = num_inference_steps - - # LCM Timesteps Setting: # Linear Spacing - c = self.config.num_train_timesteps // lcm_origin_steps - lcm_origin_timesteps = np.asarray(list(range(1, lcm_origin_steps + 1))) * c - 1 # LCM Training Steps Schedule - skipping_step = len(lcm_origin_timesteps) // num_inference_steps - timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps] # LCM Inference Steps Schedule - - self.timesteps = torch.from_numpy(timesteps.copy()).to(device) - - def get_scalings_for_boundary_condition_discrete(self, t): - self.sigma_data = 0.5 # Default: 0.5 - - # By dividing 0.1: This is almost a delta function at t=0. - c_skip = self.sigma_data**2 / ((t / 0.1) ** 2 + self.sigma_data**2) - c_out = (t / 0.1) / ((t / 0.1) ** 2 + self.sigma_data**2) ** 0.5 - return c_skip, c_out - - def step( - self, - model_output: torch.FloatTensor, - timeindex: int, - timestep: int, - sample: torch.FloatTensor, - eta: float = 0.0, - use_clipped_model_output: bool = False, - generator=None, - variance_noise: Optional[torch.FloatTensor] = None, - return_dict: bool = True, - ) -> Union[LCMSchedulerOutput, Tuple]: - """ - Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion - process from the learned model outputs (most often the predicted noise). - Args: - model_output (`torch.FloatTensor`): - The direct output from learned diffusion model. - timestep (`float`): - The current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - A current instance of a sample created by the diffusion process. - eta (`float`): - The weight of noise for added noise in diffusion step. - use_clipped_model_output (`bool`, defaults to `False`): - If `True`, computes "corrected" `model_output` from the clipped predicted original sample. Necessary - because predicted original sample is clipped to [-1, 1] when `self.config.clip_sample` is `True`. If no - clipping has happened, "corrected" `model_output` would coincide with the one provided as input and - `use_clipped_model_output` has no effect. - generator (`torch.Generator`, *optional*): - A random number generator. - variance_noise (`torch.FloatTensor`): - Alternative to generating noise with `generator` by directly providing the noise for the variance - itself. Useful for methods such as [`CycleDiffusion`]. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~schedulers.scheduling_lcm.LCMSchedulerOutput`] or `tuple`. - Returns: - [`~schedulers.scheduling_utils.LCMSchedulerOutput`] or `tuple`: - If return_dict is `True`, [`~schedulers.scheduling_lcm.LCMSchedulerOutput`] is returned, otherwise a - tuple is returned where the first element is the sample tensor. - """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - # 1. get previous step value - prev_timeindex = timeindex + 1 - if prev_timeindex < len(self.timesteps): - prev_timestep = self.timesteps[prev_timeindex] - else: - prev_timestep = timestep - - # 2. 
compute alphas, betas - alpha_prod_t = self.alphas_cumprod[timestep] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod - - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - # 3. Get scalings for boundary conditions - c_skip, c_out = self.get_scalings_for_boundary_condition_discrete(timestep) - - # 4. Different Parameterization: - parameterization = self.config.prediction_type - - if parameterization == "epsilon": # noise-prediction - pred_x0 = (sample - beta_prod_t.sqrt() * model_output) / alpha_prod_t.sqrt() - - elif parameterization == "sample": # x-prediction - pred_x0 = model_output - - elif parameterization == "v_prediction": # v-prediction - pred_x0 = alpha_prod_t.sqrt() * sample - beta_prod_t.sqrt() * model_output - - # 4. Denoise model output using boundary conditions - denoised = c_out * pred_x0 + c_skip * sample - - # 5. Sample z ~ N(0, I), For MultiStep Inference - # Noise is not used for one-step sampling. - if len(self.timesteps) > 1: - noise = torch.randn(model_output.shape).to(model_output.device) - prev_sample = alpha_prod_t_prev.sqrt() * denoised + beta_prod_t_prev.sqrt() * noise - else: - prev_sample = denoised - - if not return_dict: - return (prev_sample, denoised) - - return LCMSchedulerOutput(prev_sample=prev_sample, denoised=denoised) - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.IntTensor, - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as original_samples - alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype) - timesteps = timesteps.to(original_samples.device) - - sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(original_samples.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.get_velocity - def get_velocity( - self, sample: torch.FloatTensor, noise: torch.FloatTensor, timesteps: torch.IntTensor - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as sample - alphas_cumprod = self.alphas_cumprod.to(device=sample.device, dtype=sample.dtype) - timesteps = timesteps.to(sample.device) - - sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(sample.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(sample.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - velocity = sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample - return velocity - - def __len__(self): - return self.config.num_train_timesteps diff --git 
a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_scribble.py b/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_scribble.py deleted file mode 100644 index 8f3375a0bfe3c7cf7f69c3f81217d77c704327ef..0000000000000000000000000000000000000000 --- a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_scribble.py +++ /dev/null @@ -1,188 +0,0 @@ -import gradio as gr -import torch -from controlnet_aux import HEDdetector -from diffusers import ( - ControlNetModel, - StableDiffusionControlNetPipeline, - UniPCMultistepScheduler, -) -from PIL import Image - -from diffusion_webui.utils.model_list import ( - controlnet_scribble_model_list, - stable_model_list, -) -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_LIST, - get_scheduler_list, -) - - -class StableDiffusionControlNetScribbleGenerator: - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path, controlnet_model_path, scheduler): - if self.pipe is None: - controlnet = ControlNetModel.from_pretrained( - controlnet_model_path, torch_dtype=torch.float16 - ) - - self.pipe = StableDiffusionControlNetPipeline.from_pretrained( - pretrained_model_name_or_path=stable_model_path, - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16, - ) - - self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - def controlnet_scribble(self, image_path: str): - hed = HEDdetector.from_pretrained("lllyasviel/ControlNet") - - image = Image.open(image_path) - image = hed(image, scribble=True) - - return image - - def generate_image( - self, - image_path: str, - stable_model_path: str, - controlnet_hed_model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - scheduler: str, - seed_generator: int, - ): - - image = self.controlnet_scribble(image_path=image_path) - - pipe = self.load_model( - stable_model_path=stable_model_path, - controlnet_model_path=controlnet_hed_model_path, - scheduler=scheduler, - ) - if seed_generator == 0: - random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - output = pipe( - prompt=prompt, - image=image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - generator=generator, - ).images - - return output - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - controlnet_scribble_image_file = gr.Image( - type="filepath", label="Image" - ) - controlnet_scribble_prompt = gr.Textbox( - lines=1, - show_label=False, - placeholder="Prompt", - ) - - controlnet_scribble_negative_prompt = gr.Textbox( - lines=1, - show_label=False, - placeholder="Negative Prompt", - ) - - with gr.Row(): - with gr.Column(): - controlnet_scribble_stable_model_id = gr.Dropdown( - choices=stable_model_list, - value=stable_model_list[0], - label="Stable Model Id", - ) - controlnet_scribble_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - - controlnet_scribble_num_inference_step = gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - - 
controlnet_scribble_num_images_per_prompt = ( - gr.Slider( - minimum=1, - maximum=10, - step=1, - value=1, - label="Number Of Images", - ) - ) - with gr.Row(): - with gr.Column(): - controlnet_scribble_model_id = gr.Dropdown( - choices=controlnet_scribble_model_list, - value=controlnet_scribble_model_list[0], - label="ControlNet Model Id", - ) - - controlnet_scribble_scheduler = gr.Dropdown( - choices=SCHEDULER_LIST, - value=SCHEDULER_LIST[0], - label="Scheduler", - ) - - controlnet_scribble_seed_generator = gr.Number( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed Generator", - ) - - controlnet_scribble_predict = gr.Button(value="Generator") - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - controlnet_scribble_predict.click( - fn=StableDiffusionControlNetScribbleGenerator().generate_image, - inputs=[ - controlnet_scribble_image_file, - controlnet_scribble_stable_model_id, - controlnet_scribble_model_id, - controlnet_scribble_prompt, - controlnet_scribble_negative_prompt, - controlnet_scribble_num_images_per_prompt, - controlnet_scribble_guidance_scale, - controlnet_scribble_num_inference_step, - controlnet_scribble_scheduler, - controlnet_scribble_seed_generator, - ], - outputs=output_image, - ) diff --git a/spaces/scedlatioru/img-to-music/example/Cocoon El Retorno [DVDRIP][Spanish][www.mewpct.com] 45.md b/spaces/scedlatioru/img-to-music/example/Cocoon El Retorno [DVDRIP][Spanish][www.mewpct.com] 45.md deleted file mode 100644 index 3b94ef93bbc8437b5b504d44fd6c3938b41602a2..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Cocoon El Retorno [DVDRIP][Spanish][www.mewpct.com] 45.md +++ /dev/null @@ -1,6 +0,0 @@ -

Cocoon El Retorno [DVDRIP][Spanish][www.mewpct.com] 45


Download File 🗸 https://gohhs.com/2uEAHC



- -
-
-
-

diff --git a/spaces/scedlatioru/img-to-music/example/Filmeprivatedepierrewoomantensaoanalemcontinenteafricano UPDATED.md b/spaces/scedlatioru/img-to-music/example/Filmeprivatedepierrewoomantensaoanalemcontinenteafricano UPDATED.md deleted file mode 100644 index ba3dfad1a806b17b020c0dc7be6ea8b8120a6b6b..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Filmeprivatedepierrewoomantensaoanalemcontinenteafricano UPDATED.md +++ /dev/null @@ -1,6 +0,0 @@ -

filmeprivatedepierrewoomantensaoanalemcontinenteafricano


Download Zip ►►►►► https://gohhs.com/2uEzyh



-
- 3cee63e6c2
-
-
-

diff --git a/spaces/scedlatioru/img-to-music/example/HD Online Player (dostana Movie 1980 Download 11) _HOT_.md b/spaces/scedlatioru/img-to-music/example/HD Online Player (dostana Movie 1980 Download 11) _HOT_.md deleted file mode 100644 index 58c05466d2476e6d433276ce9c8475cb66fca8f4..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/HD Online Player (dostana Movie 1980 Download 11) _HOT_.md +++ /dev/null @@ -1,6 +0,0 @@ -
-

11/11/2013 mw3 gun fix. mac in this way. test this call. mw3 ai auto aim problems.. "the media is no longer being supported, so these downloads might not work when you run modern warfare 3. you can also download these files to patch your modern warfare 3. m. when you first run the game, you will receive the message: system error code: 0xa22 the game has recently been released and has received many problems. modern warfare 3: scripts mki mb format take control of your game. call of duty modern warfare 3 with keygen free download. 3 free multiplayer map packs for mw3. download modern warfare 3 patch aug 13 update game free and install update game's files on your system. " download games rar files to your device from our website. modern warfare 3 mw3 server free download.why use
11/11/2013. home;. modern warfare 3. download. hgl1client-hgl1-gl-gle7. 9. 0 script patch free download here:
. the first is that to play multiplayer, you will need to have a copy of mw2.5, be entitled to mw3, and be on a server with a different copy of the game. you are eligible to download a copy of mw3 for free.

-

its
hey guys, i believe that the most common question that people ask is the method of how i was able to use my sps card account without the activation code. keep reading, i will try to give you the answer in a short period of time. the members of my
nellierose's notes: sd version (small, pointy) for gun girl and gun girls phantasia for the first time, there will be a contest between top-ranked girls. a problem is happening in the room you are in.
a problem is happening in the room you are in. there are many enemies who are.
my mom is starting to do important things in her life now, so we have to become closer. we have a good home where she can get better health care than she can afford to get in new york.
free ps3 online download games, multiplayer, and demos. play free downloads online for pc and games. start your gaming.

-

HD Online Player (dostana movie 1980 download 11)


Download Zip ✦✦✦ https://gohhs.com/2uEzZ1



899543212b
-
-
\ No newline at end of file diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/setup.py b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/setup.py deleted file mode 100644 index 382a2aa1006e581eaf31dbb3155d4b0ba3b31140..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/setup.py +++ /dev/null @@ -1,165 +0,0 @@ -#!/usr/bin/env python - -from setuptools import find_packages, setup - -import os -import subprocess -import sys -import time -import torch -from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension - -version_file = './basicsr/version.py' - - -def readme(): - with open('README.md', encoding='utf-8') as f: - content = f.read() - return content - - -def get_git_hash(): - - def _minimal_ext_cmd(cmd): - # construct minimal environment - env = {} - for k in ['SYSTEMROOT', 'PATH', 'HOME']: - v = os.environ.get(k) - if v is not None: - env[k] = v - # LANGUAGE is used on win32 - env['LANGUAGE'] = 'C' - env['LANG'] = 'C' - env['LC_ALL'] = 'C' - out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0] - return out - - try: - out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD']) - sha = out.strip().decode('ascii') - except OSError: - sha = 'unknown' - - return sha - - -def get_hash(): - if os.path.exists('.git'): - sha = get_git_hash()[:7] - elif os.path.exists(version_file): - try: - from version import __version__ - sha = __version__.split('+')[-1] - except ImportError: - raise ImportError('Unable to get git version') - else: - sha = 'unknown' - - return sha - - -def write_version_py(): - content = """# GENERATED VERSION FILE -# TIME: {} -__version__ = '{}' -__gitsha__ = '{}' -version_info = ({}) -""" - sha = get_hash() - with open('./basicsr/VERSION', 'r') as f: - SHORT_VERSION = f.read().strip() - VERSION_INFO = ', '.join([x if x.isdigit() else f'"{x}"' for x in SHORT_VERSION.split('.')]) - - version_file_str = content.format(time.asctime(), SHORT_VERSION, sha, VERSION_INFO) - with open(version_file, 'w') as f: - f.write(version_file_str) - - -def get_version(): - with open(version_file, 'r') as f: - exec(compile(f.read(), version_file, 'exec')) - return locals()['__version__'] - - -def make_cuda_ext(name, module, sources, sources_cuda=None): - if sources_cuda is None: - sources_cuda = [] - define_macros = [] - extra_compile_args = {'cxx': []} - - if torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1': - define_macros += [('WITH_CUDA', None)] - extension = CUDAExtension - extra_compile_args['nvcc'] = [ - '-D__CUDA_NO_HALF_OPERATORS__', - '-D__CUDA_NO_HALF_CONVERSIONS__', - '-D__CUDA_NO_HALF2_OPERATORS__', - ] - sources += sources_cuda - else: - print(f'Compiling {name} without CUDA') - extension = CppExtension - - return extension( - name=f'{module}.{name}', - sources=[os.path.join(*module.split('.'), p) for p in sources], - define_macros=define_macros, - extra_compile_args=extra_compile_args) - - -def get_requirements(filename='requirements.txt'): - with open(os.path.join('.', filename), 'r') as f: - requires = [line.replace('\n', '') for line in f.readlines()] - return requires - - -if __name__ == '__main__': - if '--cuda_ext' in sys.argv: - ext_modules = [ - make_cuda_ext( - name='deform_conv_ext', - module='ops.dcn', - sources=['src/deform_conv_ext.cpp'], - sources_cuda=['src/deform_conv_cuda.cpp', 'src/deform_conv_cuda_kernel.cu']), - make_cuda_ext( - name='fused_act_ext', - module='ops.fused_act', - sources=['src/fused_bias_act.cpp'], - sources_cuda=['src/fused_bias_act_kernel.cu']), - 
make_cuda_ext( - name='upfirdn2d_ext', - module='ops.upfirdn2d', - sources=['src/upfirdn2d.cpp'], - sources_cuda=['src/upfirdn2d_kernel.cu']), - ] - sys.argv.remove('--cuda_ext') - else: - ext_modules = [] - - write_version_py() - setup( - name='basicsr', - version=get_version(), - description='Open Source Image and Video Super-Resolution Toolbox', - long_description=readme(), - long_description_content_type='text/markdown', - author='Xintao Wang', - author_email='xintao.wang@outlook.com', - keywords='computer vision, restoration, super resolution', - url='https://github.com/xinntao/BasicSR', - include_package_data=True, - packages=find_packages(exclude=('options', 'datasets', 'experiments', 'results', 'tb_logger', 'wandb')), - classifiers=[ - 'Development Status :: 4 - Beta', - 'License :: OSI Approved :: Apache Software License', - 'Operating System :: OS Independent', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - ], - license='Apache License 2.0', - setup_requires=['cython', 'numpy'], - install_requires=get_requirements(), - ext_modules=ext_modules, - cmdclass={'build_ext': BuildExtension}, - zip_safe=False) diff --git a/spaces/segments-tobias/conex/espnet2/torch_utils/pytorch_version.py b/spaces/segments-tobias/conex/espnet2/torch_utils/pytorch_version.py deleted file mode 100644 index 01f17cc748e3af444d551a14600728205cd6f61d..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/torch_utils/pytorch_version.py +++ /dev/null @@ -1,16 +0,0 @@ -import torch - - -def pytorch_cudnn_version() -> str: - message = ( - f"pytorch.version={torch.__version__}, " - f"cuda.available={torch.cuda.is_available()}, " - ) - - if torch.backends.cudnn.enabled: - message += ( - f"cudnn.version={torch.backends.cudnn.version()}, " - f"cudnn.benchmark={torch.backends.cudnn.benchmark}, " - f"cudnn.deterministic={torch.backends.cudnn.deterministic}" - ) - return message diff --git a/spaces/shaheer/mysent/app.py b/spaces/shaheer/mysent/app.py deleted file mode 100644 index 15ade2abbd8f1f24475be24d164bde58d3efe1b0..0000000000000000000000000000000000000000 --- a/spaces/shaheer/mysent/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import streamlit as st -from transformers import pipeline - -pipe = pipeline("sentiment-analysis") -text = st.text_area("enter your text") - -if text: - out = pipe(text) - st.json(out) diff --git a/spaces/shgao/EditAnything/cldm/cldm.py b/spaces/shgao/EditAnything/cldm/cldm.py deleted file mode 100644 index 0b3ac7a575cf4933fc14dfc15dd3cca41cb3f3e8..0000000000000000000000000000000000000000 --- a/spaces/shgao/EditAnything/cldm/cldm.py +++ /dev/null @@ -1,435 +0,0 @@ -import einops -import torch -import torch as th -import torch.nn as nn - -from ldm.modules.diffusionmodules.util import ( - conv_nd, - linear, - zero_module, - timestep_embedding, -) - -from einops import rearrange, repeat -from torchvision.utils import make_grid -from ldm.modules.attention import SpatialTransformer -from ldm.modules.diffusionmodules.openaimodel import UNetModel, TimestepEmbedSequential, ResBlock, Downsample, AttentionBlock -from ldm.models.diffusion.ddpm import LatentDiffusion -from ldm.util import log_txt_as_img, exists, instantiate_from_config -from ldm.models.diffusion.ddim import DDIMSampler - - -class ControlledUnetModel(UNetModel): - def forward(self, x, timesteps=None, context=None, control=None, only_mid_control=False, **kwargs): - hs = [] - with torch.no_grad(): - t_emb = 
timestep_embedding(timesteps, self.model_channels, repeat_only=False) - emb = self.time_embed(t_emb) - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb, context) - hs.append(h) - h = self.middle_block(h, emb, context) - - if control is not None: - h += control.pop() - - for i, module in enumerate(self.output_blocks): - if only_mid_control or control is None: - h = torch.cat([h, hs.pop()], dim=1) - else: - h = torch.cat([h, hs.pop() + control.pop()], dim=1) - h = module(h, emb, context) - - h = h.type(x.dtype) - return self.out(h) - - -class ControlNet(nn.Module): - def __init__( - self, - image_size, - in_channels, - model_channels, - hint_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - use_checkpoint=False, - use_fp16=False, - num_heads=-1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - use_spatial_transformer=False, # custom transformer support - transformer_depth=1, # custom transformer support - context_dim=None, # custom transformer support - n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model - legacy=True, - disable_self_attentions=None, - num_attention_blocks=None, - disable_middle_self_attn=False, - use_linear_in_transformer=False, - ): - super().__init__() - if use_spatial_transformer: - assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...' - - if context_dim is not None: - assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...' - from omegaconf.listconfig import ListConfig - if type(context_dim) == ListConfig: - context_dim = list(context_dim) - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - if num_heads == -1: - assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set' - - if num_head_channels == -1: - assert num_heads != -1, 'Either num_heads or num_head_channels has to be set' - - self.dims = dims - self.image_size = image_size - self.in_channels = in_channels - self.model_channels = model_channels - if isinstance(num_res_blocks, int): - self.num_res_blocks = len(channel_mult) * [num_res_blocks] - else: - if len(num_res_blocks) != len(channel_mult): - raise ValueError("provide num_res_blocks either as an int (globally constant) or " - "as a list/tuple (per-level) with the same length as channel_mult") - self.num_res_blocks = num_res_blocks - if disable_self_attentions is not None: - # should be a list of booleans, indicating whether to disable self-attention in TransformerBlocks or not - assert len(disable_self_attentions) == len(channel_mult) - if num_attention_blocks is not None: - assert len(num_attention_blocks) == len(self.num_res_blocks) - assert all(map(lambda i: self.num_res_blocks[i] >= num_attention_blocks[i], range(len(num_attention_blocks)))) - print(f"Constructor of UNetModel received num_attention_blocks={num_attention_blocks}. 
" - f"This option has LESS priority than attention_resolutions {attention_resolutions}, " - f"i.e., in cases where num_attention_blocks[i] > 0 but 2**i not in attention_resolutions, " - f"attention will still not be set.") - - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - self.predict_codebook_ids = n_embed is not None - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self.zero_convs = nn.ModuleList([self.make_zero_conv(model_channels)]) - - self.input_hint_block = TimestepEmbedSequential( - conv_nd(dims, hint_channels, 16, 3, padding=1), - nn.SiLU(), - conv_nd(dims, 16, 16, 3, padding=1), - nn.SiLU(), - conv_nd(dims, 16, 32, 3, padding=1, stride=2), - nn.SiLU(), - conv_nd(dims, 32, 32, 3, padding=1), - nn.SiLU(), - conv_nd(dims, 32, 96, 3, padding=1, stride=2), - nn.SiLU(), - conv_nd(dims, 96, 96, 3, padding=1), - nn.SiLU(), - conv_nd(dims, 96, 256, 3, padding=1, stride=2), - nn.SiLU(), - zero_module(conv_nd(dims, 256, model_channels, 3, padding=1)) - ) - - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for nr in range(self.num_res_blocks[level]): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - # num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - if exists(disable_self_attentions): - disabled_sa = disable_self_attentions[level] - else: - disabled_sa = False - - if not exists(num_attention_blocks) or nr < num_attention_blocks[level]: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim, - disable_self_attn=disabled_sa, use_linear=use_linear_in_transformer, - use_checkpoint=use_checkpoint - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self.zero_convs.append(self.make_zero_conv(ch)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - self.zero_convs.append(self.make_zero_conv(ch)) - ds *= 2 - 
self._feature_size += ch - - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - # num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( # always uses a self-attn - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim, - disable_self_attn=disable_middle_self_attn, use_linear=use_linear_in_transformer, - use_checkpoint=use_checkpoint - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self.middle_block_out = self.make_zero_conv(ch) - self._feature_size += ch - - def make_zero_conv(self, channels): - return TimestepEmbedSequential(zero_module(conv_nd(self.dims, channels, channels, 1, padding=0))) - - def forward(self, x, hint, timesteps, context, **kwargs): - t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False) - emb = self.time_embed(t_emb) - - guided_hint = self.input_hint_block(hint, emb, context) - - outs = [] - - h = x.type(self.dtype) - for module, zero_conv in zip(self.input_blocks, self.zero_convs): - if guided_hint is not None: - h = module(h, emb, context) - h += guided_hint - guided_hint = None - else: - h = module(h, emb, context) - outs.append(zero_conv(h, emb, context)) - - h = self.middle_block(h, emb, context) - outs.append(self.middle_block_out(h, emb, context)) - - return outs - - -class ControlLDM(LatentDiffusion): - - def __init__(self, control_stage_config, control_key, only_mid_control, *args, **kwargs): - super().__init__(*args, **kwargs) - self.control_model = instantiate_from_config(control_stage_config) - self.control_key = control_key - self.only_mid_control = only_mid_control - self.control_scales = [1.0] * 13 - - @torch.no_grad() - def get_input(self, batch, k, bs=None, *args, **kwargs): - x, c = super().get_input(batch, self.first_stage_key, *args, **kwargs) - control = batch[self.control_key] - if bs is not None: - control = control[:bs] - control = control.to(self.device) - control = einops.rearrange(control, 'b h w c -> b c h w') - control = control.to(memory_format=torch.contiguous_format).float() - return x, dict(c_crossattn=[c], c_concat=[control]) - - def apply_model(self, x_noisy, t, cond, *args, **kwargs): - assert isinstance(cond, dict) - diffusion_model = self.model.diffusion_model - - cond_txt = torch.cat(cond['c_crossattn'], 1) - - if cond['c_concat'] is None: - eps = diffusion_model(x=x_noisy, timesteps=t, context=cond_txt, control=None, only_mid_control=self.only_mid_control) - else: - control = self.control_model(x=x_noisy, hint=torch.cat(cond['c_concat'], 1), timesteps=t, context=cond_txt) - control = [c * scale for c, scale in zip(control, self.control_scales)] - eps = diffusion_model(x=x_noisy, timesteps=t, context=cond_txt, control=control, only_mid_control=self.only_mid_control) - - return eps - - @torch.no_grad() - def get_unconditional_conditioning(self, N): - return self.get_learned_conditioning([""] * N) - - @torch.no_grad() - def log_images(self, batch, 
N=4, n_row=2, sample=False, ddim_steps=50, ddim_eta=0.0, return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=False, unconditional_guidance_scale=9.0, unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - use_ddim = ddim_steps is not None - - log = dict() - z, c = self.get_input(batch, self.first_stage_key, bs=N) - c_cat, c = c["c_concat"][0][:N], c["c_crossattn"][0][:N] - N = min(z.shape[0], N) - n_row = min(z.shape[0], n_row) - log["reconstruction"] = self.decode_first_stage(z) - log["control"] = c_cat * 2.0 - 1.0 - log["conditioning"] = log_txt_as_img((512, 512), batch[self.cond_stage_key], size=16) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_cross = self.get_unconditional_conditioning(N) - uc_cat = c_cat # torch.zeros_like(c_cat) - uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]} - samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc_full, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - return log - - @torch.no_grad() - def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs): - ddim_sampler = DDIMSampler(self) - b, c, h, w = cond["c_concat"][0].shape - shape = (self.channels, h // 8, w // 8) - samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size, shape, cond, verbose=False, **kwargs) - return samples, intermediates - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.control_model.parameters()) - if not self.sd_locked: - params += list(self.model.diffusion_model.output_blocks.parameters()) - params += list(self.model.diffusion_model.out.parameters()) - opt = torch.optim.AdamW(params, lr=lr) - return opt - - def low_vram_shift(self, is_diffusing): - if is_diffusing: - self.model = self.model.cuda() - self.control_model = self.control_model.cuda() - self.first_stage_model = self.first_stage_model.cpu() - self.cond_stage_model = self.cond_stage_model.cpu() - else: - self.model = self.model.cpu() - self.control_model = self.control_model.cpu() - self.first_stage_model = self.first_stage_model.cuda() - 
self.cond_stage_model = self.cond_stage_model.cuda() diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/utils/download_util.py b/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/utils/download_util.py deleted file mode 100644 index 2a267915743ee3f3232bc8fe992466b52468979a..0000000000000000000000000000000000000000 --- a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/utils/download_util.py +++ /dev/null @@ -1,95 +0,0 @@ -import math -import os -import requests -from torch.hub import download_url_to_file, get_dir -from tqdm import tqdm -from urllib.parse import urlparse - -from .misc import sizeof_fmt - - -def download_file_from_google_drive(file_id, save_path): - """Download files from google drive. - Ref: - https://stackoverflow.com/questions/25010369/wget-curl-large-file-from-google-drive # noqa E501 - Args: - file_id (str): File id. - save_path (str): Save path. - """ - - session = requests.Session() - URL = 'https://docs.google.com/uc?export=download' - params = {'id': file_id} - - response = session.get(URL, params=params, stream=True) - token = get_confirm_token(response) - if token: - params['confirm'] = token - response = session.get(URL, params=params, stream=True) - - # get file size - response_file_size = session.get(URL, params=params, stream=True, headers={'Range': 'bytes=0-2'}) - print(response_file_size) - if 'Content-Range' in response_file_size.headers: - file_size = int(response_file_size.headers['Content-Range'].split('/')[1]) - else: - file_size = None - - save_response_content(response, save_path, file_size) - - -def get_confirm_token(response): - for key, value in response.cookies.items(): - if key.startswith('download_warning'): - return value - return None - - -def save_response_content(response, destination, file_size=None, chunk_size=32768): - if file_size is not None: - pbar = tqdm(total=math.ceil(file_size / chunk_size), unit='chunk') - - readable_file_size = sizeof_fmt(file_size) - else: - pbar = None - - with open(destination, 'wb') as f: - downloaded_size = 0 - for chunk in response.iter_content(chunk_size): - downloaded_size += chunk_size - if pbar is not None: - pbar.update(1) - pbar.set_description(f'Download {sizeof_fmt(downloaded_size)} / {readable_file_size}') - if chunk: # filter out keep-alive new chunks - f.write(chunk) - if pbar is not None: - pbar.close() - - -def load_file_from_url(url, model_dir=None, progress=True, file_name=None): - """Load file form http url, will download models if necessary. - Ref:https://github.com/1adrianb/face-alignment/blob/master/face_alignment/utils.py - Args: - url (str): URL to be downloaded. - model_dir (str): The path to save the downloaded model. Should be a full path. If None, use pytorch hub_dir. - Default: None. - progress (bool): Whether to show the download progress. Default: True. - file_name (str): The downloaded file name. If None, use the file name in the url. Default: None. - Returns: - str: The path to the downloaded file. 
- """ - if model_dir is None: # use the pytorch hub_dir - hub_dir = get_dir() - model_dir = os.path.join(hub_dir, 'checkpoints') - - os.makedirs(model_dir, exist_ok=True) - - parts = urlparse(url) - filename = os.path.basename(parts.path) - if file_name is not None: - filename = file_name - cached_file = os.path.abspath(os.path.join(model_dir, filename)) - if not os.path.exists(cached_file): - print(f'Downloading: "{url}" to {cached_file}\n') - download_url_to_file(url, cached_file, hash_prefix=None, progress=progress) - return cached_file \ No newline at end of file diff --git a/spaces/simonduerr/ProteinMPNN/af_backprop/setup.py b/spaces/simonduerr/ProteinMPNN/af_backprop/setup.py deleted file mode 100644 index a6e0fcaeba78f7c4e78ebe57b95138c91a0b7f59..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNN/af_backprop/setup.py +++ /dev/null @@ -1,21 +0,0 @@ -from setuptools import setup, find_packages -setup( - name='af_backprop', - version='0.0.0', - packages=find_packages(), - install_requires=[ - 'absl-py', - 'biopython', - 'chex', - 'dm-haiku', - 'dm-tree', - 'docker', - 'immutabledict', - 'jax', - 'ml-collections', - 'numpy', - 'pandas', - 'scipy', - 'tensorflow', - ], -) diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cop Duty Police Car Simulator Hack APK Explore a Huge Open World with a Modded Police Car.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cop Duty Police Car Simulator Hack APK Explore a Huge Open World with a Modded Police Car.md deleted file mode 100644 index d19d451ef6109b69f0306a6c593708649c0a9a12..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cop Duty Police Car Simulator Hack APK Explore a Huge Open World with a Modded Police Car.md +++ /dev/null @@ -1,99 +0,0 @@ - -

Cop Duty Police Car Simulator Hack APK: How to Unlock All Cars for Free

-

Do you love driving police cars and chasing criminals in a realistic 3D city? If so, you might be interested in Cop Duty Police Car Simulator, a popular game that lets you experience the thrill of being a cop. But what if you want to unlock all the cars in the game without spending any money? Is there a way to do that? Yes, there is! In this article, we will show you how to download and install Cop Duty Police Car Simulator Hack APK, a modded version of the game that gives you access to all the cars for free. We will also tell you the benefits and drawbacks of using this hack, and answer some frequently asked questions. Let's get started!

-

cop duty police car simulator hack apk


Download Zip ····· https://ssurll.com/2uNXXM



-

Introduction

-

What is Cop Duty Police Car Simulator?

-

Cop Duty Police Car Simulator is a realistic police car driving game that was developed by Game Pickle and released in 2019. The game features over 20 different police cars that you can drive around a large open-world city. You can choose from various missions, such as chasing criminals, escorting VIPs, patrolling the streets, or just exploring the city. You can also customize your car with different paint colors, decals, sirens, lights, and more. The game has realistic physics, graphics, sounds, and weather effects that make you feel like a real cop.

-

Why would you want to hack it?

-

While Cop Duty Police Car Simulator is a fun and addictive game, it also has some limitations. For example, you need to earn coins by completing missions or watching ads in order to unlock new cars or upgrade your skills. Some cars are very expensive and require a lot of coins to unlock. This can be frustrating and time-consuming for some players who want to try out different cars or improve their performance. That's why some players look for ways to hack the game and get unlimited coins or access to all cars for free.

-

How to download and install Cop Duty Police Car Simulator Hack APK

-

Step 1: Find a reliable source for the modded APK file

-

The first step to hack Cop Duty Police Car Simulator is to find a trustworthy website that offers the modded APK file. This is a modified version of the original game file that has been altered to give you unlimited coins or access to all cars for free. However, not all websites that claim to provide this file are safe or reliable. Some of them may contain malware or viruses that can harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading any file from the internet.

-

One website that we recommend is WENDGAMES, which is a popular site for Android game mods. They have a page dedicated to Cop Duty Police Car Simulator Hack APK, where you can find the latest version of the file, as well as a detailed description of its features and instructions on how to install it. You can also read user reviews and comments to see if other people have successfully used the hack.

Step 2: Enable unknown sources on your device

-

The next step to install Cop Duty Police Car Simulator Hack APK is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. By default, this setting is disabled to prevent unauthorized or harmful apps from being installed on your device. However, since you are downloading the modded APK file from a third-party website, you need to enable this setting temporarily.

-

To do this, go to your device's settings and look for the security or privacy option. Then, find the unknown sources or install unknown apps option and toggle it on. You may see a warning message that tells you the risks of installing apps from unknown sources. Tap OK or Allow to proceed. You can disable this setting again after you have installed the hack.

-

Step 3: Download and install the APK file

-

The final step to hack Cop Duty Police Car Simulator is to download and install the APK file. To do this, go back to the website where you found the file and tap on the download button. Wait for the file to be downloaded to your device's storage. Then, locate the file using a file manager app and tap on it to open it. You may see a prompt that asks you to confirm the installation. Tap Install and wait for the process to finish. Once done, you can launch the game and enjoy the hack.

-

cop duty police car simulator mod apk download
-cop duty police car simulator unlimited money
-cop duty police car simulator cheats android
-cop duty police car simulator free cars
-cop duty police car simulator apk latest version
-cop duty police car simulator hack online
-cop duty police car simulator gameplay
-cop duty police car simulator review
-cop duty police car simulator tips and tricks
-cop duty police car simulator best car
-cop duty police car simulator offline mode
-cop duty police car simulator realistic graphics
-cop duty police car simulator missions
-cop duty police car simulator multiplayer
-cop duty police car simulator update
-cop duty police car simulator no ads
-cop duty police car simulator how to play
-cop duty police car simulator features
-cop duty police car simulator install guide
-cop duty police car simulator for pc
-cop duty police car simulator for ios
-cop duty police car simulator for windows 10
-cop duty police car simulator for mac
-cop duty police car simulator for laptop
-cop duty police car simulator for chromebook
-cop duty police car simulator obb file
-cop duty police car simulator data file
-cop duty police car simulator mod menu
-cop duty police car simulator all levels unlocked
-cop duty police car simulator premium apk
-cop duty police car simulator pro apk
-cop duty police car simulator full version apk
-cop duty police car simulator cracked apk
-cop duty police car simulator patched apk
-cop duty police car simulator unlocked apk
-cop duty police car simulator mega mod apk
-cop duty police car simulator god mode apk
-cop duty police car simulator vip mod apk
-cop duty police car simulator hack tool apk
-cop duty police car simulator hack generator apk

-

How to use Cop Duty Police Car Simulator Hack APK

-

Step 1: Launch the game and choose your car

-

After you have installed Cop Duty Police Car Simulator Hack APK, you can launch the game and start playing. You will notice that you have unlimited coins in your account, which means you can unlock any car you want for free. To do this, tap on the garage icon on the main menu and browse through the available cars. You can see their stats, such as speed, acceleration, handling, and braking. You can also see their prices, but you don't have to worry about them since you can afford them all. Tap on the car you want and confirm your purchase.

-

Step 2: Enjoy the realistic police car driving experience

-

Once you have chosen your car, you can start driving it around the city. You can choose from different modes, such as free roam, career, or multiplayer. In free roam mode, you can explore the city at your own pace and do whatever you want. In career mode, you can complete various missions and earn rewards. In multiplayer mode, you can join other players online and compete or cooperate with them.

-

The game has realistic physics, graphics, sounds, and weather effects that make you feel like a real cop. You can use your sirens, lights, horn, and radio to communicate with other cops or civilians. You can also perform stunts, drifts, jumps, and crashes with your car. Be careful though, as your car can get damaged and need repairs.

-

Step 3: Customize your car and upgrade your skills

-

If you want to make your car more unique and powerful, you can customize it with different paint colors, decals, sirens, lights, and more. You can also upgrade your skills, such as driving, shooting, stamina, and intelligence. These skills will help you perform better in the game and complete more challenging missions.

-

To customize your car or upgrade your skills, go to the shop icon on the main menu and select what you want to change or improve. You can use your unlimited coins to buy anything you want without any restrictions.

-

Benefits and drawbacks of Cop Duty Police Car Simulator Hack APK

-

Benefits

-

Access to all cars without spending money

-

One of the main benefits of using Cop Duty Police Car Simulator Hack APK is that you can access all the cars in the game without spending any money. This means you can try out different cars and find the one that suits your style and preference. You can also switch between cars whenever you want without losing any progress or coins.

-

More fun and excitement in the game

-

Another benefit of using Cop Duty Police Car Simulator Hack APK is that you can have more fun and excitement in the game. You can enjoy driving around the city with realistic physics and graphics without worrying about running out of coins or getting bored with the same car. You can also complete more missions and challenges with ease and earn more rewards.

-

No ads or in-app purchases

-

A final benefit of using Cop Duty Police Car Simulator Hack APK is that you don't have to deal with any ads or in-app purchases in the game. The modded version of the game removes the ads or in-app purchases that are present in the original game. This means you can play the game without any interruptions or distractions. You also don't have to spend any real money to enjoy the game fully.

-

Drawbacks

-

Risk of malware or viruses

-

One of the main drawbacks of using Cop Duty Police Car Simulator Hack APK is that you may expose your device to malware or viruses. Since you are downloading and installing a file from an unknown source, you cannot be sure if it is safe or not. Some websites may trick you into downloading a fake or corrupted file that can harm your device or steal your personal information. Therefore, you need to be careful and use a reliable antivirus app to scan the file before installing it.

-

Possible ban from the official game server

-

Another drawback of using Cop Duty Police Car Simulator Hack APK is that you may get banned from the official game server. The developers of the game may detect that you are using a modded version of the game and block your access to the online features. This means you won't be able to play with other players online or receive any updates or bug fixes. You may also lose your original game data and progress if you uninstall the hack.

-

Loss of original game data and progress

-

A final drawback of using Cop Duty Police Car Simulator Hack APK is that you may lose your original game data and progress. Since you are installing a different version of the game, you may overwrite or delete your previous game data and progress. This means you will have to start from scratch if you want to play the original game again. You may also lose any achievements or rewards that you earned in the original game.

-

Conclusion

-

Cop Duty Police Car Simulator Hack APK is a modded version of the popular police car driving game that gives you unlimited coins or access to all cars for free. It can be a fun and exciting way to enjoy the game without any limitations or restrictions. However, it also has some drawbacks, such as the risk of malware or viruses, a possible ban from the official game server, and the loss of your original game data and progress. Therefore, you need to weigh the pros and cons before deciding to use this hack. You also need to be careful and use a reliable source and an antivirus app to download and install the hack safely.

-

FAQs

-

Here are some frequently asked questions about Cop Duty Police Car Simulator Hack APK:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Stick War Legacy Mod APK 2017 for Free and Play with 999 Army and All Characters Unlocked.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Stick War Legacy Mod APK 2017 for Free and Play with 999 Army and All Characters Unlocked.md deleted file mode 100644 index 66ec3a0e4ab55d5622aa35bfd234101cfe6b6135..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Stick War Legacy Mod APK 2017 for Free and Play with 999 Army and All Characters Unlocked.md +++ /dev/null @@ -1,115 +0,0 @@ - -

Download Stick War Legacy Mod Apk 2017: A Fun and Addictive Strategy Game

-

If you are a fan of strategy games, you might have heard of Stick War Legacy, a popular game that lets you control an army of stick figures and fight against other nations. But did you know that you can download Stick War Legacy mod apk 2017 and enjoy some extra features and advantages? In this article, we will tell you everything you need to know about Stick War Legacy mod apk 2017, including what it is, why you should download it, and how to download and install it. So, let's get started!

-

What is Stick War Legacy?

-

Stick War Legacy is a strategy game developed by Max Games Studios and released in 2016. It is based on the popular web game Stick War, which was released in 2008. In Stick War Legacy, you play as the leader of a nation called Order, which is surrounded by enemies who want to destroy you. Your goal is to build your army, defend your territory, and conquer your foes.

-

download stick war legacy mod apk 2017


DOWNLOAD ○○○ https://ssurll.com/2uNYZi



-

The gameplay of Stick War Legacy

-

The gameplay of Stick War Legacy is simple but challenging. You have to manage your resources, such as gold and mana, and use them to recruit different types of units, such as miners, swordsmen, archers, spearmen, mages, and giants. Each unit has its own strengths and weaknesses, and you have to use them wisely to counter your enemies' strategies. You can also upgrade your units and unlock new abilities as you progress through the game.

-

You can control your units individually or as a group, and switch between them at any time. You can also take direct control of any unit and use its special skills. For example, you can control a swordsman and slash your enemies with your sword, or control a mage and cast powerful spells. You can also control your statue, which is the symbol of your nation and the source of your mana. If your statue is destroyed, you lose the game.

-

The game has two modes: campaign and endless. In campaign mode, you have to complete various missions and face different enemies, such as zombies, spiders, griffins, elementalists, and shadow warriors. Each mission has its own objectives and challenges, and you have to adapt your strategy accordingly. In endless mode, you have to survive as long as possible against endless waves of enemies who become stronger and more numerous over time.

-

The features of Stick War Legacy

-

Stick War Legacy has many features that make it a fun and addictive game. Some of them are:

- -

Why download Stick War Legacy mod apk 2017?

-

Stick War Legacy is a great game that can keep you entertained for hours. However, it also has some limitations that might frustrate some players. For example, it can be hard to earn enough gold and mana to recruit and upgrade your units, especially in the later stages of the game. It can also be annoying to watch ads every time you want to get some extra rewards or skip a mission.

-

That's why you might want to download Stick War Legacy mod apk 2017, a modified version of the game that gives you some extra benefits and advantages. A mod apk is a file that has been altered or hacked to change some aspects of the game, such as unlocking features, removing ads, or adding resources. By downloading Stick War Legacy mod apk 2017, you can enjoy the game without any limitations or interruptions.

-

The benefits of Stick War Legacy mod apk 2017

-

Some of the benefits of downloading Stick War Legacy mod apk 2017 are:

-

download stick war legacy mod apk 2017 unlimited gems
-download stick war legacy mod apk 2017 latest version
-download stick war legacy mod apk 2017 hack
-download stick war legacy mod apk 2017 free
-download stick war legacy mod apk 2017 android
-download stick war legacy mod apk 2017 offline
-download stick war legacy mod apk 2017 no root
-download stick war legacy mod apk 2017 for pc
-download stick war legacy mod apk 2017 with cheats
-download stick war legacy mod apk 2017 updated
-download stick war legacy mod apk 2017 happymod[^1^]
-download stick war legacy mod apk 2017 unlimited diamonds
-download stick war legacy mod apk 2017 mega mod
-download stick war legacy mod apk 2017 full unlocked
-download stick war legacy mod apk 2017 new features
-download stick war legacy mod apk 2017 gameplay
-download stick war legacy mod apk 2017 review
-download stick war legacy mod apk 2017 tips and tricks
-download stick war legacy mod apk 2017 guide
-download stick war legacy mod apk 2017 tutorial
-download stick war legacy mod apk 2017 best strategy
-download stick war legacy mod apk 2017 missions mode
-download stick war legacy mod apk 2017 saga style map
-download stick war legacy mod apk 2017 skins and weapons
-download stick war legacy mod apk 2017 endless deads mode
-download stick war legacy mod apk 2017 tournament mode
-download stick war legacy mod apk 2017 classic campaign
-download stick war legacy mod apk 2017 order empire
-download stick war legacy mod apk 2017 archidons, swordwrath, magikill, and speartons
-download stick war legacy mod apk 2017 inamorta world
-download stick war legacy mod apk 2017 max games studios[^1^]
-download stick war legacy mod apk 2017 strategy game[^1^]
-download stick war legacy mod apk 2017 web game[^1^]
-download stick war legacy mod apk 2017 mobile game[^1^]
-download stick war legacy mod apk 2017 fun and challenging[^1^]
-download stick war legacy mod apk 2017 addictive and addicting[^1^]
-download stick war legacy mod apk 2017 high rated and popular[^1^]
-download stick war legacy mod apk 2017 net energy gain
-download stick war legacy mod apk 2017 holy grail fusion experiment
-download stick war legacy mod apk 2017 mini sun

- -

The drawbacks of Stick War Legacy mod apk 2017

-

However, downloading Stick War Legacy mod apk 2017 also has some drawbacks that you should be aware of. Some of them are:

- -

How to download and install Stick War Legacy mod apk 2017?

-

If you decide to download Stick War Legacy mod apk 2017, you need to follow some steps and take some precautions to ensure a safe and successful installation. Here are the steps and precautions you need to take:

-

The steps to download and install Stick War Legacy mod apk 2017

-
    -
  1. Find a reliable and trustworthy source that offers Stick War Legacy mod apk 2017. You can search online for reviews and ratings of different websites that provide mod apks.
  2. -
  3. Download the Stick War Legacy mod apk 2017 file from the source. Make sure you have enough storage space on your device and a stable internet connection.
  4. -
  5. Enable the unknown sources option on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  6. -
  7. Locate the downloaded file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
  8. -
  9. Launch the game and enjoy!
  10. -
-

The precautions to take before downloading and installing Stick War Legacy mod apk 2017

- -

Conclusion

-

Stick War Legacy is a fun and addictive strategy game that lets you control an army of stick figures and fight against other nations. However, it also has some limitations that might make it hard or annoying to play. That's why you might want to download Stick War Legacy mod apk 2017, a modified version of the game that gives you some extra benefits and advantages, such as unlimited resources, unlocked features, and no ads. However, you also need to be careful when downloading and installing Stick War Legacy mod apk 2017, as it might have some drawbacks, such as compatibility issues, bugs, data loss, or security risks. Therefore, you need to follow some steps and take some precautions to ensure a safe and successful installation.

-

If you are interested in downloading Stick War Legacy mod apk 2017, you can find it online from various sources. However, make sure you do your research and choose a reliable and trustworthy source that offers a high-quality and updated file. Also, make sure you back up your data, scan the file for viruses or malware, disable any antivirus or firewall software, and uninstall any previous versions of the game before installing it. By doing so, you can enjoy Stick War Legacy mod apk 2017 without any problems or worries.

-

So, what are you waiting for? Download Stick War Legacy mod apk 2017 today and have fun controlling your stick army and conquering your enemies!

-

FAQs

-

Here are some frequently asked questions about Stick War Legacy mod apk 2017:

-
    -
  1. What is the difference between Stick War Legacy and Stick War Legacy mod apk 2017?
  2. -

    Stick War Legacy is the original version of the game that you can download from the Google Play Store. Stick War Legacy mod apk 2017 is a modified version of the game that you can download from other sources. The mod apk gives you some extra benefits and advantages, such as unlimited resources, unlocked features, and no ads.

    -
  3. Is Stick War Legacy mod apk 2017 safe to download and install?
  4. -

    Stick War Legacy mod apk 2017 is generally safe to download and install, as long as you choose a reliable and trustworthy source that offers a high-quality and updated file. However, you also need to be careful and take some precautions before downloading and installing it, such as backing up your data, scanning the file for viruses or malware, disabling any antivirus or firewall software, and uninstalling any previous versions of the game.

    -
  5. Can I play Stick War Legacy mod apk 2017 online with other players?
  6. -

    No, you cannot play Stick War Legacy mod apk 2017 online with other players. The mod apk is only compatible with offline mode. If you want to play online with other players, you need to download the original version of the game from the Google Play Store.

    -
  7. Can I update Stick War Legacy mod apk 2017 to the latest version of the game?
  8. -

    No, you cannot update Stick War Legacy mod apk 2017 to the latest version of the game. The mod apk is only compatible with the 2017 version of the game. If you want to update the game to the latest version, you need to download the original version of the game from the Google Play Store.

    -
  9. Can I use Stick War Legacy mod apk 2017 on iOS devices?
  10. -

    No, you cannot use Stick War Legacy mod apk 2017 on iOS devices. The mod apk is only compatible with Android devices. If you want to play the game on iOS devices, you need to download the original version of the game from the App Store.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FNF VS Blackjack MOD Download A Challenging Week with Unreleased Songs.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FNF VS Blackjack MOD Download A Challenging Week with Unreleased Songs.md deleted file mode 100644 index c55be1c19f492ff1882f26c0d43f2306831f126a..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FNF VS Blackjack MOD Download A Challenging Week with Unreleased Songs.md +++ /dev/null @@ -1,112 +0,0 @@ - -

FNF vs Blackjack Download: How to Play the Mod and What to Expect

-

If you are a fan of Friday Night Funkin', you might have heard of FNF vs Blackjack, a popular mod that adds a new week, a new character, and a new challenge to the game. In this article, we will tell you everything you need to know about FNF vs Blackjack, how to download and install it, and what to expect from it.

-

What is FNF vs Blackjack?

-

FNF vs Blackjack is a mod for Friday Night Funkin', a rhythm-based game where you have to rap battle against various opponents to impress your girlfriend. The mod was created by CritVA, ka0tical, Elemenopee, YoshiCrafter29, Callie Mae, kiddan02, Seberster, CaptainKirb, SSB Assassin Gaming, Nozomy_nick, FatCat122, Kazuma, AkumuGB, j0nnytest, Garka23, Angelattes, Nathaniel Masuku, Dracobot, THEJEWELMAN, SamTheSly, Bigg E Alexx, RobroRorro, MrSharkVA, Jerry, Banbuds, Homskiy, RedstyPhoenix, SnowTheFox, Bizarre_Ethen.

-

fnf vs blackjack download


Download File - https://ssurll.com/2uNTdN



-

The story and the characters of the mod

-

The mod follows the story of Boyfriend and Girlfriend as they enter a casino again, but this time they won't be playing the slot machines or the roulette. Instead, they will face Blackjack, a mysterious gambler who challenges them to a rap battle with high stakes. Blackjack is not just a good rapper, but also an avid gambler who likes to play with his opponents' lives. He has a dark past and a hidden motive behind his challenge. He is also accompanied by Kage, his loyal bodyguard who will join him in some songs. Boyfriend and Girlfriend will have to rap their way out of this dangerous situation and win against Blackjack.

-

The features and the gameplay of the mod

-

The mod adds a new week to the game with three unreleased songs: Deal With The Devil (Hard), Busted (Cursed), and Stakes Are High (Insane). The songs are fast-paced and challenging, requiring good timing and accuracy from the player. The mod also features animated cutscenes that tell its story, dynamic animated backgrounds that change according to the song, and a new character, Blackjack, with his own sprites and voice acting. In addition, it includes a skin for Boyfriend that makes him look more like a gambler, plus a custom home screen and menus that match the theme of the mod.

-

How to download and install FNF vs Blackjack?

-

If you want to play FNF vs Blackjack, you will need to download and install it on your device. Here are the requirements and the steps for doing so.

-

The requirements and the steps for downloading the mod

-

To play FNF vs Blackjack, you will need to have Friday Night Funkin' installed on your device. You can download it from the game's official page. You will also need to have Yoshi Engine installed on your device. You can download it from the engine's release page. Yoshi Engine is a custom engine for Friday Night Funkin' that allows you to play mods more easily. After you have downloaded both Friday Night Funkin' and Yoshi Engine, you can download FNF vs Blackjack from the mod's GameJolt page. You will get a ZIP file that contains the files of the mod. You will need to extract them using a program like WinRAR or 7-Zip. After you have extracted them, you will need to copy them into your Friday Night Funkin' folder, replacing any existing files. You can also use a mod manager like Friday Night Funkin' Mod Manager to install the mod more easily. After you have installed the mod, you can launch Friday Night Funkin' and select the new week from the menu. You can also adjust the difficulty level and the key bindings from the options menu.
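If you prefer to script the extract-and-copy step instead of using WinRAR or 7-Zip, a minimal Python sketch is shown below. The archive name and the Friday Night Funkin' folder path are placeholders for this example, not the mod's real file names, so adjust them to match your own download and install location.

```python
# Minimal sketch: extract the mod ZIP and copy its files into the game folder.
# The paths below are placeholder examples -- change them to your own locations.
import shutil
import zipfile
from pathlib import Path

mod_zip = Path("fnf-vs-blackjack.zip")        # the downloaded mod archive (example name)
fnf_dir = Path("C:/Games/FridayNightFunkin")  # your Friday Night Funkin' folder (example path)
tmp_dir = Path("vs-blackjack-extracted")      # temporary folder for the extracted files

# 1. Extract the ZIP into a temporary folder.
with zipfile.ZipFile(mod_zip) as archive:
    archive.extractall(tmp_dir)

# 2. Copy everything into the game folder, overwriting files that already exist
#    (requires Python 3.8+ for dirs_exist_ok).
shutil.copytree(tmp_dir, fnf_dir, dirs_exist_ok=True)
```

This does the same thing as extracting the archive manually and dragging the files into the game folder; back up your original Friday Night Funkin' files first if you want to be able to undo the overwrite.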

-

The tips and the tricks for playing the mod

-

FNF vs Blackjack is a hard mod that will test your skills and reflexes. Here are some tips and tricks that might help you beat the mod.

- -

What are the reviews and the ratings of FNF vs Blackjack?

-

FNF vs Blackjack is one of the most popular and well-made mods for Friday Night Funkin'. It has received positive reviews and ratings from both critics and players. Here are some of them.

-

The pros and the cons of the mod

-

The mod has many pros that make it worth playing, such as:

-


- -

The mod also has some cons that might make it less enjoyable for some players, such as:

- -

The feedback and the comments from the players

-

The mod has received a lot of feedback and comments from the players who have played it. Here are some of them:

-
"This mod is awesome! The music is catchy, the graphics are stunning, and the gameplay is challenging. I love how Blackjack is not just a generic villain, but a complex character with a backstory and a motive. I also like how Boyfriend gets a new skin that makes him look more badass. This is one of my favorite mods ever!"
-
"This mod is hard as hell! I can barely pass the first song on normal mode, let alone on hard mode. Blackjack is so fast and tricky, he always messes me up with his cards and his notes. I need to practice more to beat him, but I don't give up easily. This mod is fun and addictive, I can't stop playing it!"
-
"This mod is amazing! The music is awesome, the graphics are beautiful, and the gameplay is fun. I like how Blackjack is a cool and charismatic character who challenges Boyfriend to a rap battle with high stakes. I also like how Boyfriend gets a new skin that makes him look more stylish. This is one of my favorite mods ever!"
-

Conclusion

-

FNF vs Blackjack adds a new week, a new character, and a new challenge to Friday Night Funkin', and it is one of the most popular and well-made mods for the game, with an original and engaging story, high-quality graphics and animations, great music and sound effects, and unique, demanding gameplay. It is also very hard, which may frustrate players who are new to rhythm games or prefer easier mods; it is not compatible with some devices or platforms, such as mobile phones or browsers; and it may contain bugs or glitches that affect performance. Even so, it offers enough content and replay value to keep you coming back. To play it, download and install it following the requirements and steps explained above, making sure Friday Night Funkin' and Yoshi Engine are installed on your device first. The tips and tricks shared in this article can help you beat it, and the reviews and ratings from critics and players give a good idea of what to expect. If you are a fan of Friday Night Funkin', this is a mod you don't want to miss.

-

FAQs

-

Here are some frequently asked questions about FNF vs Blackjack:

-
    -
  1. Who is Blackjack?
     Blackjack is a mysterious gambler who challenges Boyfriend and Girlfriend to a rap battle with high stakes. He is the main antagonist of the mod.

  2. How many songs are in FNF vs Blackjack?
     There are three songs: Deal With The Devil (Hard), Busted (Cursed), and Stakes Are High (Insane).

  3. How do I activate the special effect in FNF vs Blackjack?
     Hit a certain number of notes in a row without missing any; the special effect varies depending on the song.

  4. What is Yoshi Engine?
     Yoshi Engine is a custom engine for Friday Night Funkin' that allows you to play mods more easily, with features that improve the performance and quality of the game.

  5. Where can I download FNF vs Blackjack?
     You can download FNF vs Blackjack from [here]. You will need to extract the files and copy them into your Friday Night Funkin' folder.

    -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/simsantonioii/MusicGen-Continuation/tests/data/test_audio_dataset.py b/spaces/simsantonioii/MusicGen-Continuation/tests/data/test_audio_dataset.py deleted file mode 100644 index b69c9c397830738b73d6c229009f84b867cda801..0000000000000000000000000000000000000000 --- a/spaces/simsantonioii/MusicGen-Continuation/tests/data/test_audio_dataset.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from functools import partial -from itertools import product -import json -import math -import os -import random -import typing as tp - -import pytest -import torch -from torch.utils.data import DataLoader - -from audiocraft.data.audio_dataset import ( - AudioDataset, - AudioMeta, - _get_audio_meta, - load_audio_meta, - save_audio_meta -) -from audiocraft.data.zip import PathInZip - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestAudioMeta(TempDirMixin): - - def test_get_audio_meta(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(duration * sample_rate) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path('sample.wav') - save_wav(path, wav, sample_rate) - m = _get_audio_meta(path, minimal=True) - assert m.path == path, 'path does not match' - assert m.sample_rate == sample_rate, 'sample rate does not match' - assert m.duration == duration, 'duration does not match' - assert m.amplitude is None - assert m.info_path is None - - def test_save_audio_meta(self): - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_audio_meta = [] - for idx, meta in enumerate([audio_meta, empty_audio_meta]): - path = self.get_temp_path(f'data_{idx}_save.jsonl') - save_audio_meta(path, meta) - with open(path, 'r') as f: - lines = f.readlines() - read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines] - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - assert m == read_m - - def test_load_audio_meta(self): - try: - import dora - except ImportError: - dora = None # type: ignore - - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_meta = [] - for idx, meta in enumerate([audio_meta, empty_meta]): - path = self.get_temp_path(f'data_{idx}_load.jsonl') - with open(path, 'w') as f: - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - f.write(json_str) - read_meta = load_audio_meta(path) - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - if dora: - m.path = dora.git_save.to_absolute_path(m.path) - assert m == read_m, f'original={m}, read={read_m}' - - -class TestAudioDataset(TempDirMixin): - - def _create_audio_files(self, - root_name: str, - num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1): - root_dir = self.get_temp_dir(root_name) - for i in range(num_examples): - if isinstance(durations, float): - duration = durations - elif isinstance(durations, tuple) and 
len(durations) == 1: - duration = durations[0] - elif isinstance(durations, tuple) and len(durations) == 2: - duration = random.uniform(durations[0], durations[1]) - else: - assert False - n_frames = int(duration * sample_rate) - wav = get_white_noise(channels, n_frames) - path = os.path.join(root_dir, f'example_{i}.wav') - save_wav(path, wav, sample_rate) - return root_dir - - def _create_audio_dataset(self, - root_name: str, - total_num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1, - segment_duration: tp.Optional[float] = None, - num_examples: int = 10, - shuffle: bool = True, - return_info: bool = False): - root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels) - dataset = AudioDataset.from_path(root_dir, - minimal_meta=True, - segment_duration=segment_duration, - num_samples=num_examples, - sample_rate=sample_rate, - channels=channels, - shuffle=shuffle, - return_info=return_info) - return dataset - - def test_dataset_full(self): - total_examples = 10 - min_duration, max_duration = 1., 4. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), - sample_rate=sample_rate, channels=channels, segment_duration=None) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] <= int(max_duration * sample_rate) - assert sample.shape[1] >= int(min_duration * sample_rate) - - def test_dataset_segment(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - - def test_dataset_equal_audio_and_segment_durations(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - # the random seek_time adds variability on audio read - sample_1 = dataset[0] - sample_2 = dataset[1] - assert not torch.allclose(sample_1, sample_2) - - def test_dataset_samples(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - - create_dataset = partial( - self._create_audio_dataset, - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, - ) - - dataset = create_dataset(shuffle=True) - # when shuffle = True, we have different inputs for the same index across epoch - sample_1 = dataset[0] - sample_2 = dataset[0] - assert not torch.allclose(sample_1, sample_2) - - dataset_noshuffle = create_dataset(shuffle=False) - # when shuffle = False, we have same inputs for the same index across epoch - sample_1 = dataset_noshuffle[0] - sample_2 = dataset_noshuffle[0] - assert torch.allclose(sample_1, sample_2) - - def test_dataset_return_info(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - assert segment_info.sample_rate == sample_rate - assert segment_info.total_frames == int(segment_duration * sample_rate) - assert segment_info.n_frames <= int(segment_duration * sample_rate) - assert segment_info.seek_time >= 0 - - def test_dataset_return_info_no_segment_duration(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = None - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == segment_info.total_frames - assert segment_info.sample_rate == sample_rate - assert segment_info.n_frames <= segment_info.total_frames - - def test_dataset_collate_fn(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - assert batch.shape[0] == batch_size - - @pytest.mark.parametrize("segment_duration", [1.0, None]) - def test_dataset_with_meta_collate_fn(self, segment_duration): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - collate_fn=dataset.collater, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - wav, infos = batch - assert wav.shape[0] == batch_size - assert len(infos) == batch_size - - @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [ - [1, True, True, 0.5, 0.5, 0.0], - [1, False, True, 0.25, 0.5, 0.25], - [1, True, False, 0.666, 0.333, 0.0], - [1, False, False, 0.333, 0.333, 0.333], - [None, False, False, 0.333, 0.333, 0.333]]) - def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist): - random.seed(1234) - rng = torch.Generator() - rng.manual_seed(1234) - - def _get_histogram(dataset, repetitions=20_000): - counts = {file_meta.path: 0. for file_meta in meta} - for _ in range(repetitions): - file_meta = dataset.sample_file(rng) - counts[file_meta.path] += 1 - return {name: count / repetitions for name, count in counts.items()} - - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset( - meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight, - sample_on_duration=sample_on_duration) - hist = _get_histogram(dataset) - assert math.isclose(hist['a'], a_hist, abs_tol=0.01) - assert math.isclose(hist['b'], b_hist, abs_tol=0.01) - assert math.isclose(hist['c'], c_hist, abs_tol=0.01) - - def test_meta_duration_filter_all(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - try: - AudioDataset(meta, segment_duration=11, min_segment_ratio=1) - assert False - except AssertionError: - assert True - - def test_meta_duration_filter_long(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7) - assert len(dataset) == 2 diff --git a/spaces/sparswan/SP-05-GR-NLP-Image2Text-Multilingual-OCR/README.md b/spaces/sparswan/SP-05-GR-NLP-Image2Text-Multilingual-OCR/README.md deleted file mode 100644 index 6c985444dd16675ac15814b22a51a13f7b30e9e1..0000000000000000000000000000000000000000 --- a/spaces/sparswan/SP-05-GR-NLP-Image2Text-Multilingual-OCR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: SP 05 GR NLP Image2Text Multilingual OCR -emoji: 🔥 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sqc1729/bingi/src/components/header.tsx b/spaces/sqc1729/bingi/src/components/header.tsx deleted file mode 100644 index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/src/components/header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import * as React from 
'react' -import { UserMenu } from './user-menu' - -export async function Header() { - return ( -
-
- -
-
- ) -} diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/speech2unit/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/speech2unit/README.md deleted file mode 100644 index 1a3d131ec165f12e37906420fc2c284a7223bda2..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/speech2unit/README.md +++ /dev/null @@ -1,71 +0,0 @@ -# Speech to Unit Model (speech2unit) - -## Acoustic Model -For quantizing speech we learn a K-means clustering over acoustic representations for which we either use Log-Mel Filterbank or pretrained acoustic representation models. For using pretrained models, please download from their respective locations linked below. -* [Modified CPC](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/cpc_big_ll6kh_top_ctc.pt) -* [HuBERT-Base](https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt) -* [Wav2Vec 2.0-Base](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_new.pt) - -## Quantization Model -You can download pretrained quantized model from the list below. - -K-Means Model | Download Link -|-|- -Log Mel Filterbank + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km50/km.bin) -Log Mel Filterbank + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km100/km.bin) -Log Mel Filterbank + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km200/km.bin) -Log Mel Filterbank + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km500/km.bin) -Modified CPC + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km50/km.bin) -Modified CPC + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km100/km.bin) -Modified CPC + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km200/km.bin) -Modified CPC + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km500/km.bin) -HuBERT Base + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km50/km.bin) -HuBERT Base + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km100/km.bin) -HuBERT Base + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km200/km.bin) -HuBERT Base + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km500/km.bin) -wav2vec 2.0 Large + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km50/km.bin) -wav2vec 2.0 Large + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km100/km.bin) -wav2vec 2.0 Large + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km200/km.bin) -wav2vec 2.0 Large + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km500/km.bin) - -### Quantization -For quantizing speech with a given acoustic representation, please follow the steps below. -1. Learn K-means clustering model -``` -N_CLUSTERS= -TYPE= -CKPT_PATH= -LAYER= -MANIFEST= -KM_MODEL_PATH= - -PYTHONPATH=. python examples/textless_nlp/gslm/speech2unit/clustering/cluster_kmeans.py \ - --num_clusters $N_CLUSTERS \ - --feature_type $TYPE \ - --checkpoint_path $CKPT_PATH \ - --layer $LAYER \ - --manifest_path $MANIFEST \ - --out_kmeans_model_path $KM_MODEL_PATH -``` -2. 
Quantize using the learned clusters -``` -MANIFEST= -OUT_QUANTIZED_FILE= - -python examples/textless_nlp/gslm/speech2unit/clustering/del/quantize_with_kmeans.py \ - --feature_type $TYPE \ - --kmeans_model_path $KM_MODEL_PATH \ - --checkpoint_path $CKPT_PATH \ - --layer $LAYER \ - --manifest_path $MANIFEST \ - --out_quantized_file_path $OUT_QUANTIZED_FILE \ - --extension ".flac" -``` - -Note about the manifest file is a file with paths and length of input audio files. The format of the file is as follows: -``` - -\t -\t -... -``` \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Bindhastmarathifullmoviefreedownloadk [CRACKED].md b/spaces/stomexserde/gpt4-ui/Examples/Bindhastmarathifullmoviefreedownloadk [CRACKED].md deleted file mode 100644 index f71aad3e4095b4e47514032494f16583fcd0ae08..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Bindhastmarathifullmoviefreedownloadk [CRACKED].md +++ /dev/null @@ -1,12 +0,0 @@ -
-

How to Watch Bindhast Marathi Full Movie for Free Online

-

Bindhast is a 1999 Marathi adventure/mystery film starring Gautami Gadgil, Sharvari Jamenis, Reema Lagoo and others. The film revolves around two college friends, Mayuri and Vijayanti, who get involved in a series of murders and have to find the real culprit before it's too late. The film was directed by Chandrakant Kulkarni and was a hit at the box office.

-

If you are looking for a way to watch Bindhast Marathi full movie for free online, you have a few options. One of them is to stream it on ZEE5, a popular OTT platform that offers a variety of content in different languages. ZEE5 has both the original Marathi version and the dubbed Hindi version of Bindhast available for streaming. You can watch it with a subscription or by using a free trial offer.

-

bindhastmarathifullmoviefreedownloadk


Download ⇒⇒⇒ https://urlgoal.com/2uI7q9



-

Another option is to watch Bindhast Marathi full movie for free online on Dailymotion, a video-sharing website that hosts user-generated content. However, this option may not be legal or safe, as the quality and authenticity of the videos may vary. Moreover, you may encounter ads, pop-ups, malware or viruses while accessing such websites. Therefore, we recommend using a reliable and legal source like ZEE5 to watch Bindhast Marathi full movie for free online.

- -

Bindhast is a film that combines suspense, comedy and drama in a unique way. The film has a strong female-centric plot, with the lead characters being smart, independent and courageous women. The film also has a social message about the importance of friendship and trust. The film has received positive reviews from critics and audiences alike, and has been praised for its screenplay, direction, performances and music.

-

-

If you are a fan of Marathi cinema or thrillers in general, you should not miss Bindhast. It is a film that will keep you hooked till the end with its twists and turns. You can watch Bindhast Marathi full movie for free online on ZEE5 or Dailymotion, but we suggest choosing ZEE5 for a better and safer viewing experience. So, what are you waiting for? Grab your popcorn and enjoy this thrilling ride!

cec2833e83
-
-
\ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Free Download Imposer Pro Full Version.md b/spaces/stomexserde/gpt4-ui/Examples/Free Download Imposer Pro Full Version.md deleted file mode 100644 index f4ff48ce0c0fc24ede0d74387432e5e53848eddb..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Free Download Imposer Pro Full Version.md +++ /dev/null @@ -1,28 +0,0 @@ -
-

How to Download Imposer Pro for Acrobat for Free

-

Imposer Pro for Acrobat is a powerful plug-in that allows you to create professional-looking print layouts from PDF files. It lets you impose PDF pages into two-up, four-up, or eight-up flats with full control over creep, bleed, and crossover traps. You can also choose from different binding types and imposition types to suit your needs.
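To make the idea of imposition concrete, here is a small, hypothetical Python sketch (unrelated to Imposer Pro's actual code) showing the classic two-up saddle-stitch ordering that this kind of plug-in automates: the outermost sheet carries the last and first pages, the next sheet the next pair inward, and so on.

```python
# Hypothetical illustration of two-up saddle-stitch imposition: given a page
# count, return the order in which pages land on the front and back of each
# sheet. Page numbers beyond num_pages stand in for blank filler pages.
def saddle_stitch_order(num_pages: int):
    padded = (num_pages + 3) // 4 * 4      # round up to a multiple of 4
    pages = list(range(1, padded + 1))
    sheets = []
    while pages:
        last, first = pages.pop(), pages.pop(0)
        second, second_last = pages.pop(0), pages.pop()
        sheets.append({
            "front": (last, first),         # e.g. (8, 1) for an 8-page booklet
            "back": (second, second_last),  # e.g. (2, 7)
        })
    return sheets

# An 8-page booklet imposes as: front (8, 1) / back (2, 7), front (6, 3) / back (4, 5)
print(saddle_stitch_order(8))
```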

-

If you are looking for a way to download Imposer Pro for Acrobat for free, you might be tempted by some websites that claim to offer cracked versions or serial keys. However, these are illegal and risky ways to get the software. You could end up with malware, viruses, or legal troubles if you use them.

-

Free Download Imposer Pro Full Version


Download File >>>>> https://urlgoal.com/2uI7w6



-

The best way to get Imposer Pro for Acrobat for free is to use the official trial version from Quark Inc., the developer of the plug-in. The trial version lets you use all the features of the plug-in for 30 days without any limitations. You can download it from their website by filling out a simple form and following the instructions.

-

After you download and install the trial version, you can start using Imposer Pro for Acrobat right away. You just need to launch Adobe Acrobat and open a PDF file that you want to impose. Then, go to Tools > Quark > Imposer Pro and choose your settings. You can preview your imposition before printing or saving it as a new PDF file.

-

Imposer Pro for Acrobat is a great tool for anyone who works with print layouts and PDF files. It can save you time and money by simplifying the imposition process and ensuring high-quality results. If you want to try it out for free, download the trial version today and see what it can do for you.

-

- -

Benefits of Imposer Pro for Acrobat

-

Imposer Pro for Acrobat has many benefits for users who need to create print layouts from PDF files. Some of the benefits are:

-
    -
  • It saves time and money by eliminating the need for third-party imposition software or manual imposition.
  • -
  • It ensures accuracy and quality by imposing PDF pages directly from Acrobat without any conversion or alteration.
  • -
  • It supports a wide range of printing scenarios, such as saddle stitch, perfect bound, spiral bound, comb bound, three-hole punched, and single cut sheets.
  • -
  • It offers flexibility and customization by allowing users to choose from different sheet types, imposition types, binding types, and settings.
  • -
  • It provides convenience and ease of use by integrating with Acrobat's interface and tools.
  • -
-

How to Get Imposer Pro for Acrobat

-

If you are interested in getting Imposer Pro for Acrobat, you have two options:

-
    -
  1. You can buy the full version of the plug-in from Quark Inc.'s website or from authorized resellers. The full version costs $399.99 (USD) and comes with a license key that allows you to activate and use the plug-in on one computer.
  2. You can download the trial version of the plug-in from Quark Inc.'s website or from other software download sites. The trial version is free and lets you use the plug-in for 30 days without any limitations. However, after the trial period expires, you will need to buy a license key to continue using the plug-in.
-

Imposer Pro for Acrobat is a must-have plug-in for anyone who works with print layouts and PDF files. It can help you create professional-looking print layouts with ease and efficiency. Download it today and see for yourself how it can improve your workflow.

7196e7f11a
-
-
\ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adelantado Trilogy Book Two - Full Precrack VERIFIEDed - Foxy Games Crack VERIFIED.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adelantado Trilogy Book Two - Full Precrack VERIFIEDed - Foxy Games Crack VERIFIED.md deleted file mode 100644 index b0529960efae7a1472eb69c0a28e4cc964418389..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adelantado Trilogy Book Two - Full Precrack VERIFIEDed - Foxy Games Crack VERIFIED.md +++ /dev/null @@ -1,6 +0,0 @@ -

Adelantado Trilogy: Book Two - Full PreCracked - Foxy Games Crack


DOWNLOAD ✫✫✫ https://cinurl.com/2uEX4P



- - d5da3c52bf
-
-
-

diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Snappy Driver Installer 1.18.9 Full Latest Setup For Windows 7 8 10.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Snappy Driver Installer 1.18.9 Full Latest Setup For Windows 7 8 10.md deleted file mode 100644 index b7efc5707990197b92ec16a70cd47612a6202966..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Snappy Driver Installer 1.18.9 Full Latest Setup For Windows 7 8 10.md +++ /dev/null @@ -1,13 +0,0 @@ -

Download Snappy Driver Installer 1.18.9 Full Latest Setup for Windows 7, 8, 10


Download File ===> https://cinurl.com/2uEYOK



- -April 3, 2019 - Windows 8; Windows vista,; Windows XP; Windows 10; Windows 7; Windows 8.1. still for version 1.18.9 Snappy Driver Installer Lite. ᐅ Driver for printer epson l210 free download -Windows XP, Windows 7, Windows 8, Windows 10. -Download the Epson Expression Home XP-33 driver for Windows from the Epson website. -Here you can download drivers for the network absolutely free of charge, without SMS or registration. -Driver for Epson Expression Home XP-33. -How to download Epson printer drivers from the official. -Windows. -Download driver for epson xp 33 printer under windows 7. Epson l210 driver download. 8a78ff9644
-
-
-

diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ebp Presupuestos Y Facturas 2011 Crack.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ebp Presupuestos Y Facturas 2011 Crack.md deleted file mode 100644 index 1d89f6d573dd8ed8dca2d3994fe54dd9ecf7e3ff..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ebp Presupuestos Y Facturas 2011 Crack.md +++ /dev/null @@ -1,30 +0,0 @@ - -

EBP Presupuestos y Facturas 2011: A Review

-

If you are looking for a simple and effective accounting software for your small business, you might want to consider EBP Presupuestos y Facturas 2011. This software allows you to create and manage budgets, invoices, and payments with ease. You can also track your income and expenses, generate reports, and export your data to other applications.

-

ebp presupuestos y facturas 2011 crack


Download Filehttps://cinurl.com/2uEYqL



-

In this article, we will review some of the main features and benefits of EBP Presupuestos y Facturas 2011, as well as some of the drawbacks and limitations. We will also compare it with other similar products on the market and give you our verdict on whether it is worth buying or not.

-

Features and Benefits

-

EBP Presupuestos y Facturas 2011 has a user-friendly interface that lets you access all the functions from a single menu. You can create and edit budgets and invoices with a few clicks, using predefined templates or customizing your own. You can also add your logo, company details, taxes, discounts, and payment terms.

-

The software allows you to manage multiple clients and projects, as well as different currencies and languages. You can send your documents by email or print them out. You can also import and export your data to Excel, Word, PDF, or other formats.

-

One of the advantages of EBP Presupuestos y Facturas 2011 is that it integrates with other EBP products, such as EBP Contabilidad or EBP CRM. This way, you can synchronize your accounting and customer relationship management data and avoid duplication or errors.

-

Another benefit of EBP Presupuestos y Facturas 2011 is that it includes a free technical support service for one year. You can contact the EBP team by phone, email, or chat if you have any questions or problems with the software. You can also access online tutorials and FAQs on the EBP website.

-

-

Drawbacks and Limitations

-

EBP Presupuestos y Facturas 2011 is not a perfect software, however. It has some drawbacks and limitations that you should be aware of before buying it. For example:

-
    -
  • The software is only compatible with Windows operating systems. If you use a Mac or Linux computer, you will not be able to install or run it.
  • -
  • The software is only available in Spanish. If you need an accounting software in another language, you will have to look for another option.
  • -
  • The software does not have a cloud-based version. This means that you cannot access your data from any device or location. You will have to store your data on your local hard drive or an external device.
  • -
  • The software does not have advanced features such as inventory management, payroll, or online payments. If you need these functions for your business, you will have to use another software or integrate it with a third-party service.
  • -
-

Comparison with Other Products

-

EBP Presupuestos y Facturas 2011 is not the only accounting software for small businesses on the market. There are other alternatives that you might want to consider before making your decision. Here are some of them:

-
    -
  • Contasimple: This is a cloud-based accounting software that lets you create and manage budgets, invoices, payments, taxes, and reports from any device or location. It has a free plan for up to 10 invoices per month and a paid plan for unlimited invoices and features. It is available in Spanish, English, French, Portuguese, Italian, German, Catalan, Basque, Galician, and Valencian.
  • -
  • Zoho Books: This is another cloud-based accounting software that offers similar features to Contasimple. It also integrates with other Zoho products such as Zoho CRM or Zoho Mail. It has a free plan for up to 5 invoices per month and three paid plans with different features and prices. It is available in Spanish, English, French, German, Italian, Portuguese, Dutch, Swedish, Japanese, Chinese, Arabic, Hindi, Tamil, Telugu, Malayalam, and Bengali.
  • GnuCash: This is a free and open-source accounting software that runs on Windows, Mac OS X and Linux operating systems.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py deleted file mode 100644 index d02122ca0e68743b1bf7a893afae96042f23838c..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/cascade_decode_head.py +++ /dev/null @@ -1,57 +0,0 @@ -from abc import ABCMeta, abstractmethod - -from .decode_head import BaseDecodeHead - - -class BaseCascadeDecodeHead(BaseDecodeHead, metaclass=ABCMeta): - """Base class for cascade decode head used in - :class:`CascadeEncoderDecoder.""" - - def __init__(self, *args, **kwargs): - super(BaseCascadeDecodeHead, self).__init__(*args, **kwargs) - - @abstractmethod - def forward(self, inputs, prev_output): - """Placeholder of forward function.""" - pass - - def forward_train(self, inputs, prev_output, img_metas, gt_semantic_seg, - train_cfg): - """Forward function for training. - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - train_cfg (dict): The training config. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - seg_logits = self.forward(inputs, prev_output) - losses = self.losses(seg_logits, gt_semantic_seg) - - return losses - - def forward_test(self, inputs, prev_output, img_metas, test_cfg): - """Forward function for testing. - - Args: - inputs (list[Tensor]): List of multi-level img features. - prev_output (Tensor): The output of previous decode head. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - test_cfg (dict): The testing config. - - Returns: - Tensor: Output segmentation map. - """ - return self.forward(inputs, prev_output) diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py deleted file mode 100644 index a0986143fa4f2bd36f5271354fe5f843f35b9e6f..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py +++ /dev/null @@ -1,51 +0,0 @@ -from annotator.uniformer.mmcv.cnn import DepthwiseSeparableConvModule - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class DepthwiseSeparableFCNHead(FCNHead): - """Depthwise-Separable Fully Convolutional Network for Semantic - Segmentation. - - This head is implemented according to Fast-SCNN paper. - Args: - in_channels(int): Number of output channels of FFM. - channels(int): Number of middle-stage channels in the decode head. 
- concat_input(bool): Whether to concatenate original decode input into - the result of several consecutive convolution layers. - Default: True. - num_classes(int): Used to determine the dimension of - final prediction tensor. - in_index(int): Correspond with 'out_indices' in FastSCNN backbone. - norm_cfg (dict | None): Config of norm layers. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - loss_decode(dict): Config of loss type and some - relevant additional options. - """ - - def __init__(self, **kwargs): - super(DepthwiseSeparableFCNHead, self).__init__(**kwargs) - self.convs[0] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - for i in range(1, self.num_convs): - self.convs[i] = DepthwiseSeparableConvModule( - self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - - if self.concat_input: - self.conv_cat = DepthwiseSeparableConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) diff --git a/spaces/syrilion/syrilionchat/Dockerfile b/spaces/syrilion/syrilionchat/Dockerfile deleted file mode 100644 index fdc09a1af08d14de5315b5ac973f15fc2d946387..0000000000000000000000000000000000000000 --- a/spaces/syrilion/syrilionchat/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="sJs8hD92ncMzLaoWWYtX5rG6bE3fZ4iO" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/CRACKAbletonLiveSuite10v1002__FULL__ Keygen.md b/spaces/terfces0erbo/CollegeProjectV2/CRACKAbletonLiveSuite10v1002__FULL__ Keygen.md deleted file mode 100644 index 799fb74c2491a5e48eecd42c7c13c227c90a90a7..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/CRACKAbletonLiveSuite10v1002__FULL__ Keygen.md +++ /dev/null @@ -1,105 +0,0 @@ -
    -

    How to Crack Ableton Live Suite 10.0.2 with R2R Keygen and Patch

    - -

    If you are looking for a powerful software for music creation and performance, you might want to try Ableton Live Suite 10.0.2. This software comes with effects, instruments, sounds and all kinds of creative features that you need to make any kind of music. You can create in a traditional linear arrangement, or improvise without the constraints of a timeline in Live’s Session View. You can also use new devices like Wavetable, Echo, Drum Buss and Pedal to create colorful new sounds.

    -

    CRACKAbletonLiveSuite10v1002KeyGen


    Download Filehttps://bytlly.com/2uGjEh



    - -

    However, Ableton Live Suite 10.0.2 is not a free software. You need to pay a hefty price to get the full version. But don't worry, there is a way to crack it using R2R Keygen and Patch. These are tools that can generate a license file and activate the software without any internet connection. In this article, we will show you how to crack Ableton Live Suite 10.0.2 with R2R Keygen and Patch in a few simple steps.

    - -

    Step 1: Download Ableton Live Suite 10.0.2 R.C. incl R2R Keygen, HCiSO Patch

    - -

    The first thing you need to do is to download the software and the crack files from a reliable source. You can use a torrent site or a file sharing site to get them. Make sure you download the correct version for your operating system (Windows or Mac). The file size should be around 1.6 GB.

    - -

    Step 2: Install Ableton Live Suite 10.0.2

    - -

    After downloading the files, you need to install the software on your computer. Follow the instructions on the screen and choose the destination folder for the installation. Do not run the software after the installation is complete.

    - -

    Step 3: Replace fixed file

    - -

    Now you need to replace the original Ableton.exe file with the fixed one from the crack folder. To do this, go to the installation folder (usually C:\Program Files\Ableton\Live 10 Suite) and find the Ableton.exe file. Rename it to Ableton.exe.bak or move it to another location as a backup. Then copy the fixed Ableton.exe file from the crack folder and paste it into the installation folder.

    -

    - -

    Step 4: Open Ableton Live Suite 10.0.2 and start offline authorization

    - -

    Next, you need to run the software and start the offline authorization process. To do this, double-click on the fixed Ableton.exe file in the installation folder or create a shortcut on your desktop. When the software opens, choose "No Internet on this computer" option and click on "Enter your license". A dialog box will appear with your hardware code.

    - -

    Step 5: Copy your hardware code to keygen and save your auth to desktop

    - -

    Now you need to generate a license file using R2R Keygen and Patch. To do this, open the keygen file (as administrator) from the crack folder (usually R2R). Enter your hardware code that you copied before and click on "Generate". This will create a license file that you can save on your desktop.

    - -

    Step 6: Drag and drop the auth saved via keygen to activation dialog

    - -

    The final step is to activate the software using the license file that you generated. To do this, drag and drop the auth file from your desktop onto the activation dialog box in Ableton Live Suite 10.0.2. The software will verify your license and activate it successfully.

    - -

    Congratulations! You have successfully cracked Ableton Live Suite 10.0.2 with R2R Keygen and Patch. You can now enjoy all the features of this amazing software without any limitations.

    -

    Why Choose Ableton Live Suite 10.0.2?

    - -

    Ableton Live Suite 10.0.2 is the latest version of the popular software for music creation and performance. It has many advantages over other software, such as:

    - -
      -
    • It is fast, fluid and flexible. You can work in any way you want, from sketching ideas to arranging and mixing tracks.
    • -
    • It has a unique Session View that lets you improvise and experiment with musical ideas without stopping the music.
    • -
    • It has powerful audio and MIDI editing tools that let you warp, slice, quantize, transpose and manipulate your sounds in creative ways.
    • -
    • It has a huge library of sounds, instruments and effects that you can use to make any kind of music.
    • -
    • It has a built-in Max for Live that lets you customize or create your own devices, change the way Live works, and connect Live with the world around it.
    • -
    - -

    How to Get Ableton Live Suite 10.0.2 for Free?

    - -

    Ableton Live Suite 10.0.2 is not a cheap software. It costs $749 for the full version, which might be too expensive for some people. But don't worry, there is a way to get it for free using CRACKAbletonLiveSuite10v1002KeyGen. This is a tool that can generate a license file and activate the software without any internet connection.

    - -

    To get Ableton Live Suite 10.0.2 for free using CRACKAbletonLiveSuite10v1002KeyGen, you need to follow these steps:

    - -
      -
  1. Download the software and the crack files from a reliable source.
  2. Install the software on your computer.
  3. Replace the original Ableton.exe file with the fixed one from the crack folder.
  4. Open the software and start the offline authorization process.
  5. Copy your hardware code to keygen and save your auth to desktop.
  6. Drag and drop the auth saved via keygen to activation dialog.
    - -

    That's it! You have successfully cracked Ableton Live Suite 10.0.2 with CRACKAbletonLiveSuite10v1002KeyGen. You can now enjoy all the features of this amazing software without any limitations.

    -

    What are the Benefits of Using CRACKAbletonLiveSuite10v1002KeyGen?

    - -

    Using CRACKAbletonLiveSuite10v1002KeyGen has many benefits for music producers who want to use Ableton Live Suite 10.0.2 without paying a fortune. Some of these benefits are:

    - -
      -
    • You can save money. Ableton Live Suite 10.0.2 costs $749 for the full version, which is a lot of money for some people. By using CRACKAbletonLiveSuite10v1002KeyGen, you can get it for free and use that money for other things.
    • -
    • You can access all the features. Ableton Live Suite 10.0.2 has many features that are not available in the lower editions, such as Intro or Standard. For example, it has 17 software instruments, 60 audio effects, 16 MIDI effects, and 70 GB of sounds. It also has Max for Live, which lets you create your own devices and customize Live. By using CRACKAbletonLiveSuite10v1002KeyGen, you can enjoy all these features without any limitations.
    • -
    • You can use it offline. Ableton Live Suite 10.0.2 requires an internet connection to activate and authorize the software. This can be a problem if you don't have a stable or reliable internet connection, or if you want to use it on a different computer. By using CRACKAbletonLiveSuite10v1002KeyGen, you can activate and authorize the software offline, without any internet connection.
    • -
    - -

    How to Learn Ableton Live Suite 10.0.2?

    - -

    Ableton Live Suite 10.0.2 is a complex and powerful software that can do many things for music creation and performance. However, it can also be overwhelming and confusing for beginners who don't know where to start or how to use it effectively.

    - -

    Fortunately, there are many resources available online that can help you learn Ableton Live Suite 10.0.2 and improve your skills. Some of these resources are:

    - -
      -
    • The official Ableton website: This is the best place to start if you want to learn the basics of Ableton Live Suite 10.0.2 and its features. You can find videos, tutorials, manuals, guides, tips and tricks, FAQs, and more on their website.
    • -
    • The Learn Live section: This is a section on the Ableton website that offers videos to help you take your next steps with Live. You can learn about the new features in Live 11, the setup, the interface, the instruments and effects, and the workflows.
    • -
    • The YouTube channel: This is the official YouTube channel of Ableton, where you can find more videos about Live and its features, as well as interviews, performances, and stories from artists who use Live.
    • -
    • The online courses: There are many online courses that can teach you how to use Ableton Live Suite 10.0.2 from scratch or from an intermediate level. You can find courses on platforms like Udemy, Skillshare, Coursera, Lynda, etc.
    • -
    • The user groups: There are many user groups around the world that are open to any Live user who wants to share their knowledge and learn from others in person. You can find a user group near you on the Ableton website.
    • -
    -

    What are the Best Tips and Tricks for Using Ableton Live Suite 10.0.2?

    - -

    Ableton Live Suite 10.0.2 is a software that offers many possibilities for music creation and performance. However, it also has many hidden features and shortcuts that can make your workflow faster and easier. Here are some of the best tips and tricks for using Ableton Live Suite 10.0.2:

    - -
      -
    • Use Collections to organize your favorite resources. Collections are a new feature in Live 10 that let you color-code and save your samples, racks, files and presets into different categories, even if they are scattered on your hard drive. You can access them from the browser section and drag and drop them into your project.
    • -
    • Use MIDI Editor Preview to hear the notes you draw. If you want to audition the notes you are drawing or editing in the MIDI editor, you can enable the MIDI Editor Preview by clicking on the blue headphone icon in the top left corner of the editor. This way, you can make sure your notes sound good before you play them.
    • -
    • Use Shift + Tab to switch between Clip View and Device View. If you want to quickly switch between the Clip View and the Device View of a track, you can use the shortcut Shift + Tab instead of clicking on the icons at the bottom of the screen. This can save you some time and mouse clicks.
    • -
    • Use Cmd + F or Ctrl + F to search for anything in Live. If you want to find something in Live, whether it is a device, a sample, a track, a clip or anything else, you can use the shortcut Cmd + F on Mac or Ctrl + F on Windows to bring up the search box. You can type in any keyword and Live will show you all the matching results.
    • -
    • Use Cmd + E or Ctrl + E to split clips. If you want to split an audio or MIDI clip into two parts, you can use the shortcut Cmd + E on Mac or Ctrl + E on Windows instead of using the right-click menu or the Edit menu. This can be useful for editing, rearranging or deleting parts of your clips.
    • -
    -

    Conclusion

    - -

    Ableton Live Suite 10.0.2 is a great software for music creation and performance, but it can also be expensive and require an internet connection to activate. That's why using CRACKAbletonLiveSuite10v1002KeyGen can be a good option for some people who want to get it for free and use it offline.

    - -

    In this article, we have shown you how to crack Ableton Live Suite 10.0.2 with CRACKAbletonLiveSuite10v1002KeyGen in a few simple steps. We have also shared some of the benefits of using this software and some of the best tips and tricks for using it effectively.

    - -

    We hope you have found this article helpful and informative. If you have any questions or comments, feel free to leave them below. And if you want to learn more about Ableton Live Suite 10.0.2 and other music production topics, make sure to check out our other articles on this website.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Contatto 1 Italiano Pdf 15.md b/spaces/terfces0erbo/CollegeProjectV2/Contatto 1 Italiano Pdf 15.md deleted file mode 100644 index b51d783c15c8fa92aac6c99a148b89cb15e1a19e..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Contatto 1 Italiano Pdf 15.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Contatto 1 Italiano Pdf 15


    Download Ziphttps://bytlly.com/2uGjMM



- -THE NEW MEASURES in force until 15 JANUARY (latest ... NATIONAL RESTRICTIVE MEASURES valid across the whole of Italy ('yellow zone' ... as well as from 10 p.m. on 31 December 2020 to 7 a.m. on 1 January ... set out in the "Regional guidelines for children's play areas ( pdf ... CONTACT SPORTS. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Download Internet Download Manager (IDM) 6.30 Build 7.md b/spaces/terfces0erbo/CollegeProjectV2/Download Internet Download Manager (IDM) 6.30 Build 7.md deleted file mode 100644 index bf808d3989217e31b05219b069c8ad9c47448b8e..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Download Internet Download Manager (IDM) 6.30 Build 7.md +++ /dev/null @@ -1,58 +0,0 @@ - -

    How to Download Internet Download Manager (IDM) 6.30 Build 7 for Faster and Easier Downloads

    - -

    Do you want to download files from the internet faster and easier? Do you want to resume your downloads if they are interrupted by network problems or power outages? Do you want to manage your downloads in a convenient way? If you answered yes to any of these questions, then you need to download Internet Download Manager (IDM) 6.30 Build 7.

    -

    Download Internet Download Manager (IDM) 6.30 Build 7


    Downloadhttps://bytlly.com/2uGl8j



    - -

    Internet Download Manager (IDM) 6.30 Build 7 is the latest version of the best download manager software available today. It can increase your download speeds by up to 5 times, resume and schedule your downloads, and integrate with your favorite browsers. In this article, we will show you how to download Internet Download Manager (IDM) 6.30 Build 7 and what are its main features.
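For context on how a download manager can resume an interrupted transfer (and split a file across several connections), the usual mechanism is the HTTP Range header. The sketch below is a generic illustration of that mechanism, not IDM's code; the URL is a placeholder and it relies on the third-party requests library.

```python
# Sketch of HTTP Range-based resuming, the mechanism download managers
# typically rely on for resume and segmented downloads.
import os
import requests

def resume_download(url: str, dest: str, chunk_size: int = 1 << 16) -> None:
    # Start from however many bytes we already have on disk.
    start = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={start}-"} if start else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as r:
        r.raise_for_status()
        # 206 means the server honored the Range header; 200 means it
        # ignored it and is sending the whole file again.
        mode = "ab" if r.status_code == 206 else "wb"
        with open(dest, mode) as f:
            for chunk in r.iter_content(chunk_size):
                f.write(chunk)

# resume_download("https://example.com/big.zip", "big.zip")  # placeholder URL
```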

    - -

    How to Download Internet Download Manager (IDM) 6.30 Build 7

    - -

    Downloading Internet Download Manager (IDM) 6.30 Build 7 is very easy and fast. You just need to follow these simple steps:

    - -
      -
    1. Go to the official website of Internet Download Manager at https://www.internetdownloadmanager.com/download.html
    2. -
    3. Click on the green button that says "Download Internet Download Manager Now"
    4. -
    5. Save the file idman630build7.exe on your computer
    6. -
    7. Run the file and follow the installation instructions
    8. -
    9. Restart your computer if required
    10. -
    11. Enjoy using Internet Download Manager (IDM) 6.30 Build 7 for your downloads
    12. -
    - -

    You can use Internet Download Manager (IDM) 6.30 Build 7 for free for 30 days as a trial version. If you want to continue using it after that, you need to register it for a special price.
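If you prefer to fetch the installer from a script (for example, when preparing several computers at once), the manual download described in the steps above can be reproduced with a few lines of Python. This is only a sketch: the installer URL below is an assumption based on the file name idman630build7.exe mentioned earlier, so confirm the real link on the official download page before using it.

```python
# Sketch: download the IDM installer with Python's standard library only.
# The URL is an assumed placeholder; verify it on
# https://www.internetdownloadmanager.com/download.html before running.
import urllib.request

INSTALLER_URL = "https://www.internetdownloadmanager.com/idman630build7.exe"  # assumed
TARGET_FILE = "idman630build7.exe"

def download_installer(url: str, target: str) -> None:
    # Stream the response to disk in chunks so large files do not fill memory.
    with urllib.request.urlopen(url) as response, open(target, "wb") as out:
        while True:
            chunk = response.read(64 * 1024)
            if not chunk:
                break
            out.write(chunk)
    print(f"Saved {target}")

if __name__ == "__main__":
    download_installer(INSTALLER_URL, TARGET_FILE)
```

After the file is saved, you still run it and follow the installer exactly as in step 7 above.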

    - -

    What are the Main Features of Internet Download Manager (IDM) 6.30 Build 7?

    - -

    Internet Download Manager (IDM) 6.30 Build 7 has many features that make it the best download manager software on the market. Here are some of them:

    -

    - -
      -
    • It supports all popular browsers, such as Internet Explorer, Chrome, Opera, Firefox, Avant Browser, and more.
    • -
    • It can automatically capture download links from web pages and start downloading them.
    • -
    • It can resume broken or interrupted downloads due to lost connections, network problems, computer shutdowns, or unexpected power outages.
    • -
    • It can schedule downloads to start or stop at a certain time or date.
    • -
    • It can organize downloads into categories and folders.
    • -
    • It can limit the download speed or number of connections for each download.
    • -
    • It can check for viruses or malware before opening or running downloaded files.
    • -
    • It can download videos from streaming sites such as YouTube, Vimeo, Dailymotion, etc.
    • -
    • It can convert downloaded videos to other formats such as MP3, MP4, AVI, etc.
    • -
    • It can customize its appearance with different skins and toolbar themes.
    • -
    - -

    Internet Download Manager (IDM) 6.30 Build 7 also has some new features that improve its performance and usability, such as:

    - -
      -
    • Improved video downloading for several types of video streams
    • -
    • Added support for Firefox 60
    • -
    • Fixed bugs
    • -
    - -

    Conclusion

    - -

If you want to download files from the internet faster and more easily, you should download Internet Download Manager (IDM) 6.30 Build 7. It is a powerful piece of software that can increase your download speeds by up to 5 times, resume and schedule your downloads, and integrate with your browser. It also has many features that make it the best download manager software available today.

    - -

    To download Internet Download Manager (IDM) 6.30 Build 7, go to https://www.internetdownloadmanager.com/download.html and follow the instructions. You can use it for free for 30 days as a trial version. If you like it, you can register it for a special price.

    - -

    We hope this article was helpful and informative. If you have any questions or comments about Internet Download Manager (IDM) 6.30 Build 7, feel free to leave them below.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Driver Laptop Advan Soulmate G4d 61132 S.md b/spaces/terfces0erbo/CollegeProjectV2/Driver Laptop Advan Soulmate G4d 61132 S.md deleted file mode 100644 index 50eafe5956f264ecce84f73b8bf7ddb2ae72bbb6..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Driver Laptop Advan Soulmate G4d 61132 S.md +++ /dev/null @@ -1,6 +0,0 @@ -

    driver laptop advan soulmate g4d 61132 s


Download: https://bytlly.com/2uGkz7



    -
    -Satte Pe Satta is that rare film which passes the test of being a successful remake ... Ek Ajnabee ... driver laptop advan soulmate g4d 61132 s 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/thejagstudio/procom/main/migrations/0012_products_propgroupsmini.py b/spaces/thejagstudio/procom/main/migrations/0012_products_propgroupsmini.py deleted file mode 100644 index ee180a67fd90a56caff9d43d5330888790522c76..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/procom/main/migrations/0012_products_propgroupsmini.py +++ /dev/null @@ -1,18 +0,0 @@ -# Generated by Django 4.1.4 on 2023-04-23 08:30 - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ("main", "0011_products_cheapalternatives_and_more"), - ] - - operations = [ - migrations.AddField( - model_name="products", - name="propGroupsMini", - field=models.JSONField(default=dict), - ), - ] diff --git a/spaces/thelou1s/yamnet_test/test.py b/spaces/thelou1s/yamnet_test/test.py deleted file mode 100644 index 680d34599aa8af11215842941cc1256090193f6d..0000000000000000000000000000000000000000 --- a/spaces/thelou1s/yamnet_test/test.py +++ /dev/null @@ -1,163 +0,0 @@ -# Imports -import csv -import sys - -import numpy as np -import soundfile -import tensorflow as tf - -from python.util.audio_util import audio_to_wav -from python.util.plt_util import plt_line, plt_mfcc, plt_mfcc2 -from python.util.time_util import int_to_min_sec -from python.util.str_util import format_float, truncate_str -from python.util.tensorflow_util import predict - -# Constants -# MODEL_PATH = 'res/lite-model_yamnet_tflite_1.tflite' -MODEL_PATH = 'res/lite-model_yamnet_classification_tflite_1.tflite' -OUT_SAMPLE_RATE = 16000 -OUT_PCM = 'PCM_16' -CLASS_MAP_FILE = 'res/yamnet_class_map.csv' -DEBUG = True -# SNORING_TOP_N = 21 -SNORING_INDEX = 38 -IN_MODEL_SAMPLES = 15600 - - -# Methods -def to_ndarray(data): - return np.array(data) - - -def data_to_single_channel(data): - result = data - - try: - result = data[:, 0] - except IndexError: - print("An exception occurred") - - return result - - -def read_single_channel(audio_path): - data, sample_rate = soundfile.read(audio_path) - print(' sample_rate, audio_path: ', str(sample_rate), str(audio_path)) - # print(' sample_rate, len, type, shape, shape[1]: ', str(sample_rate), len(data), str(type(data)), str(data.shape), str(data.shape[1])) - - single_channel = data_to_single_channel(data) - single_channel_seconds = len(single_channel) / OUT_SAMPLE_RATE - # print(' single_channel, shape: ', str(single_channel), str(single_channel.shape)) - # print(' len, seconds: ', str(len(single_channel)), str(single_channel_seconds)) - - return single_channel, sample_rate - - -def class_names_from_csv(class_map_csv): - """Read the class name definition file and return a list of strings.""" - if tf.is_tensor(class_map_csv): - class_map_csv = class_map_csv.numpy() - with open(class_map_csv) as csv_file: - reader = csv.reader(csv_file) - next(reader) # Skip header - return np.array([display_name for (_, _, display_name) in reader]) - - -def scores_to_index(scores, order): - means = scores.mean(axis=0) - return np.argsort(means, axis=0)[order] - - -def predict_waveform(idx, waveform, top_n): - # Download the YAMNet class map (see main YAMNet model docs) to yamnet_class_map.csv - # See YAMNet TF2 usage sample for class_names_from_csv() definition. 
- scores = predict(MODEL_PATH, waveform) - class_names = class_names_from_csv(CLASS_MAP_FILE) - - # top_n = SNORING_TOP_N - top_n_res = '' - snoring_score = 0.0 - for n in range(1, top_n): - index = scores_to_index(scores, -n) - means = scores.mean(axis=0) - score = means[index] - name = class_names[index] - - if index == SNORING_INDEX: - snoring_score = score - top_n_res += ' ' + format_float(score) + ' [' + truncate_str(name, 4) + '], ' - - snoring_tail = ('打鼾, ' + format_float(snoring_score)) if snoring_score > 0 else '' - result = top_n_res + snoring_tail + '\n' - if DEBUG: print(top_n_res) - - return result, snoring_score - - -def to_float32(data): - return np.float32(data) - - -def predict_float32(idx, data, top_n): - return predict_waveform(idx, to_float32(data), top_n) - - -def split_given_size(arr, size): - return np.split(arr, np.arange(size, len(arr), size)) - - -def predict_uri(audio_uri1, audio_uri2, top_n): - result = '' - if DEBUG: print('audio_uri1:', audio_uri1, 'audio_uri2:', audio_uri2) - - mp3_input = audio_uri1 if audio_uri2 in (None, '') else audio_uri2 - wav_input = audio_to_wav(mp3_input) if not mp3_input.endswith('.mp3') == True else mp3_input - predict_seconds = int(str(sys.argv[2])) if len(sys.argv) > 2 else 1 - - predict_samples = IN_MODEL_SAMPLES # OUT_SAMPLE_RATE * predict_seconds - single_channel, sc_sample_rate = read_single_channel(wav_input) - splits = split_given_size(single_channel, predict_samples) - result += ' sc_sample_rate: ' + str(sc_sample_rate) + '\n' - - second_total = len(splits) * predict_seconds - result += (' second_total: ' + int_to_min_sec(second_total) + ', \n') - result += '\n' - snoring_scores = [] - - for idx in range(len(splits)): - split = splits[idx] - second_start = idx * predict_seconds - result += (int_to_min_sec(second_start) + ', ') - if len(split) == predict_samples: - print_result, snoring_score = predict_float32(idx, split, top_n) - result += print_result - snoring_scores.append(snoring_score) - - # plt waveform - waveform_line = plt_line(single_channel) - # plt mfcc - mfcc_line = plt_mfcc(single_channel, OUT_SAMPLE_RATE) - # plt mfcc2 - mfcc2_line = plt_mfcc2(wav_input, OUT_SAMPLE_RATE) - # plt snoring_booleans - snoring_booleans = list(map(lambda x: 1 if x > 0 else 0, snoring_scores)) - # calc snoring frequency - snoring_sec = len(list(filter(lambda x: 1 if x > 0 else 0, snoring_scores))) - snoring_frequency = snoring_sec / second_total - apnea_sec = second_total - snoring_sec - apnea_frequency = (apnea_sec / 10) / second_total - ahi_result = str( - '打鼾秒数snoring_sec=' + str(snoring_sec) + ', 暂停秒数apnea_sec=' + str(apnea_sec) + ', 总秒数second_total=' + str(second_total) - + ', 打鼾频率snoring_frequency=' + str(snoring_sec) + '/' + str(second_total) + '=' + format_float(snoring_frequency) - + ', 暂停频率apnea_frequency=(' + str(apnea_sec) + '/' + str(10) + ')/' + str(second_total) + '=' + format_float(apnea_frequency) - ) - - return waveform_line, mfcc_line, mfcc2_line, str(ahi_result), str(snoring_booleans), str(snoring_scores), str(result) - - -# sys.argv -if len(sys.argv) > 1 and len(sys.argv[1]) > 0: - res, plt = predict_uri(sys.argv[1]) - plt.show() -else: - print('usage: python test.py /path/to/audio_file [predict_seconds]') diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dayz Standalone 0.52 Crack Serve Why Its the Most Popular Mod for Dayz Fans.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dayz Standalone 0.52 Crack Serve Why Its the Most Popular Mod for Dayz Fans.md deleted file mode 100644 index 
a865fe8e14eea2d1a855ec176932f67e10aadd7e..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Dayz Standalone 0.52 Crack Serve Why Its the Most Popular Mod for Dayz Fans.md +++ /dev/null @@ -1,79 +0,0 @@ -
    - -

    Dayz Standalone 0.52 Crack Serve: How to Play the Zombie Survival Game for Free

    -

    Dayz Standalone is one of the most popular and realistic zombie survival games ever made. It is set in a post-apocalyptic world where you have to scavenge for resources, fight off zombies and other players, and try to stay alive as long as possible. However, the game is not free to play, and you need to buy it from Steam or other platforms.

    -

    But what if you want to play Dayz Standalone without paying anything? Is there a way to do that? The answer is yes, there is. You can play Dayz Standalone for free by joining a crack server. A crack server is a server that runs a modified version of the game that bypasses the authentication process and allows anyone to join without owning the game.

    -

    Dayz Standalone 0.52 Crack Serve


    Download ✦✦✦ https://urlcod.com/2uK6cU



    -

    In this article, we will show you how to find and join a crack server for Dayz Standalone 0.52, the latest version of the game as of May 2023. We will also give you some tips and tricks for playing on a crack server, such as how to avoid hackers, how to find loot, and how to explore the new country of Chernarus that was added in the 0.53 update.

    -

    How to find and join a crack server

    -

    The first step to play Dayz Standalone for free is to download the game files. You can do this by using a torrent client or a direct download link from a website that hosts cracked games. For example, you can use this link to download Dayz Standalone 0.52 from npm.

    -

    Slavic Server DayZ standalone 0.52 download
    -DayZ Server List - Filter and search all servers
    -EPIDEMIC Z BRASIL VANILLA+ 1PP WIPE 23/05
    -MAG MiddleAgedGamers PVE |BBP|Trader|Helis| WIPED
    -Basically Vanilla #2 - 1PP|PVP|Loot+|Guns+|Party|FreshWipe
    -OrigemZ Chernarus #00 | 1PP | NoBase | Survivor
    -Rearmed EU Main - DayZ server
    -Rearmed US3 | 5 Group Max - DayZ server
    -GROUND ZERO #2 | 1PP | Airdrops | KOTH | Keycards | Drugs
    -REPACK Dayz Standalone 0.52 Crack Serve - Trello
    -Dayz Standalone 0.52 Crack Servers List - SoundCloud
    -Dayz Standalone 0.52 Crack Servers List - Collection | OpenSea
    -Titan #1 | Chernarus | High Loot - DayZ server
    -Titan #2 | Chernarus | High Loot - DayZ server
    -DayZ Standalone (Ranked) Game Servers from $1.49/Public Slot
    -New DayZ Servers listed on topg with server connection details
    -DayZ Standalone 0.52 Crack Serve - YouTube
    -DayZ Standalone 0.52 Crack Serve - Reddit
    -How to play DayZ Standalone 0.52 online for free
    -Best DayZ Standalone 0.52 servers to join in 2023
    -DayZ Standalone 0.52 update patch notes and features
    -How to install DayZ Standalone 0.52 crack on PC
    -How to fix DayZ Standalone 0.52 errors and bugs
    -How to host your own DayZ Standalone 0.52 server
    -How to mod your DayZ Standalone 0.52 server with custom maps and weapons
    -How to survive in DayZ Standalone 0.52 - tips and tricks
    -How to find loot and gear in DayZ Standalone 0.52 - best locations and routes
    -How to interact with other players in DayZ Standalone 0.52 - friendly or hostile
    -How to deal with zombies and animals in DayZ Standalone 0.52 - combat and stealth
    -How to craft and build in DayZ Standalone 0.52 - recipes and guides
    -How to heal and cure diseases in DayZ Standalone 0.52 - medical items and conditions
    -How to use vehicles and helicopters in DayZ Standalone 0.52 - spawn locations and controls
    -How to use radios and communication devices in DayZ Standalone 0.52 - frequencies and channels
    -How to use keycards and access secret areas in DayZ Standalone 0.52 - codes and clues
    -How to use drugs and boosters in DayZ Standalone 0.52 - effects and risks
    -How to play as a bandit or a hero in DayZ Standalone 0.52 - reputation and consequences
    -How to join a clan or a group in DayZ Standalone 0.52 - benefits and drawbacks
    -How to trade with other players or traders in DayZ Standalone 0.52 - prices and items
    -How to participate in events and missions in DayZ Standalone 0.52 - rewards and challenges
    -How to enjoy the scenery and atmosphere in DayZ Standalone 0.52 - weather and time of day

    -

    Once you have downloaded the game files, you need to install the game and the crack. The crack is a file that replaces the original game executable and allows you to run the game without Steam or any other platform. To install the game and the crack, follow these steps:

    -
      -
    1. Extract the downloaded files using WinRAR or any other program that can handle .rar files.
    2. -
    3. Open the extracted folder and run setup.exe.
    4. -
    5. Follow the instructions on the screen and choose a destination folder for the game.
    6. -
    7. Wait for the installation to finish.
    8. -
    9. Copy the file named dayz.exe from the folder named Crack and paste it into the main game folder, replacing the original file.
    10. -
    -

    Congratulations, you have successfully installed Dayz Standalone 0.52 with crack. Now you need to find a crack server to join. A crack server is a server that runs on a different network than the official servers and does not require authentication or verification from Steam or any other platform. To find a crack server, you can use one of these methods:

    -
      -
    • Search on Google or any other search engine for keywords like "Dayz Standalone 0.52 crack servers list" or "Dayz Standalone 0.52 new crack servers". You will find many websites that list crack servers with their IP addresses, ports, names, locations, players, etc.
    • -
    • Use a website like https://www.gametracker.com/search/dayz/ that tracks and ranks servers for various games, including Dayz Standalone. You can filter by version, country, ping, players, etc. and find crack servers easily.
    • -
    • Use an ingame server browser that comes with some cracks or mods for Dayz Standalone. For example, if you use dayz_standalone_0_52_new_crack_servers_list_fv, you will have an ingame server list that shows all available crack servers for Dayz Standalone 0.52.
    • -
    -

    Once you have found a crack server that suits your preferences, you need to connect to it in the game. To do this, follow these steps:

    -
      -
    1. Run dayz.exe from your main game folder.
    2. -
    3. Click on Play in the main menu.
    4. -
    5. Click on Change Server in the bottom left corner.
    6. -
    7. Type or paste the IP address and port of the crack server you want to join in the Remote field.
    8. -
    9. Click on Join Server.
    10. -
    -

    You are now connected to a crack server for Dayz Standalone 0.52. Enjoy playing!

    -

    Tips and tricks for playing on a crack server

    -

    Playing on a crack server for Dayz Standalone can be fun and exciting, but also challenging and risky. You will face many dangers and difficulties in your quest for survival, such as zombies, other players, hunger, thirst, disease, etc. To help you out, here are some tips and tricks for playing on a crack server:

    -
      -
    • Choose a server with low ping and high population. A low ping means less lag and better performance in the game. A high population means more action and interaction with other players. However, be careful not to join servers that are too crowded or too empty, as they may have problems with stability or loot spawning.
    • -
• Avoid hackers and cheaters. Hackers and cheaters are players who use illegal software or methods to gain unfair advantages in the game. Crack servers usually have little or no anti-cheat protection, so be careful which servers you join and leave any server where you suspect cheating.

      0a6ba089eb
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/HD Online Player (my Little Pony Equestria Girl Rainbo).md b/spaces/tialenAdioni/chat-gpt-api/logs/HD Online Player (my Little Pony Equestria Girl Rainbo).md deleted file mode 100644 index e0f3e28a1514f0fe4efcb60d0f974c0fcc783274..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/HD Online Player (my Little Pony Equestria Girl Rainbo).md +++ /dev/null @@ -1,20 +0,0 @@ - -

      How to Watch My Little Pony: Equestria Girls - Rainbow Rocks Online in HD

      -

      If you are a fan of My Little Pony and its spin-off series Equestria Girls, you might be interested in watching the second movie of the franchise, Rainbow Rocks. This musical fantasy comedy film follows Twilight Sparkle and her friends as they compete in a Battle of the Bands against a trio of sirens who want to take over the world with their hypnotic songs. But where can you watch this movie online in high definition?

      -

      HD Online Player (my little pony equestria girl rainbo)


      Download Zip ⚙⚙⚙ https://urlcod.com/2uKaC4



      -

      Fortunately, there are several options to stream or download Rainbow Rocks online in HD quality. Here are some of them:

      -
        -
      • Hoopla: This is a digital media service that allows you to borrow movies, TV shows, music, and more from your local library. You can watch Rainbow Rocks on Hoopla for free with your library card. Just sign up for an account and search for the movie on the Hoopla website or app. You can also download it to your device for offline viewing.
      • -
      • The Roku Channel: This is a free streaming service that offers thousands of movies and TV shows, including Rainbow Rocks. You can watch it on The Roku Channel with ads on your Roku device, computer, or mobile device. You don't need a Roku account to access the service, but you can create one to personalize your experience.
      • -
      • Pluto TV: This is another free streaming service that offers hundreds of channels and on-demand content, including Rainbow Rocks. You can watch it on Pluto TV with ads on your smart TV, computer, or mobile device. You don't need to sign up for an account to watch Pluto TV, but you can create one to save your favorites and resume watching across devices.
      • -
      • Amazon Video: This is a video-on-demand service that allows you to rent or buy movies and TV shows online. You can rent Rainbow Rocks on Amazon Video for $2.99 or buy it for $6.99 in HD quality. You can watch it on your computer, mobile device, or smart TV with the Amazon Prime Video app.
      • -
      • Vudu: This is another video-on-demand service that allows you to rent or buy movies and TV shows online. You can rent Rainbow Rocks on Vudu for $2.99 or buy it for $6.99 in HD quality. You can watch it on your computer, mobile device, or smart TV with the Vudu app.
      • -
      • Apple TV: This is a video-on-demand service that allows you to rent or buy movies and TV shows online. You can rent Rainbow Rocks on Apple TV for $3.99 or buy it for $7.99 in HD quality. You can watch it on your computer, mobile device, or smart TV with the Apple TV app.
      • -
      -

      As you can see, there are many ways to watch Rainbow Rocks online in HD quality. Whether you prefer free or paid services, streaming or downloading, you can enjoy this fun and colorful movie anytime and anywhere.

      - -

      But what are the Rainbooms up against? They soon discover that the new girl group, The Dazzlings, are actually sirens from Equestria who were banished by Star Swirl the Bearded to the human world. The sirens have the power to feed on negative emotions and use their music to hypnotize everyone into turning the showcase into a Battle of the Bands. Their goal is to use the magic of music to unleash their true forms and take over the world.

      -

      The only ones who can stop them are the Rainbooms, who have their own magic of friendship. However, they face many challenges along the way, such as their own insecurities, rival bands, and Sunset Shimmer's struggle to fit in. They also need the help of Twilight Sparkle, who has returned to Equestria after the events of the first movie. With the aid of a magic book that allows them to communicate across dimensions, Sunset Shimmer convinces Twilight to come back to Canterlot High and join the band.

      -

      Will Twilight and her friends be able to defeat the Dazzlings and save the school? Will they be able to rock their way to victory and harmony? And will Sunset Shimmer finally find her place among her new friends? Find out in My Little Pony: Equestria Girls - Rainbow Rocks, a movie that is full of fun, music, and friendship.

      e753bf7129
      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/Studio-Devil-Amp-Modeler-Pro-15-Keygen-Generator-VERIFIED.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/Studio-Devil-Amp-Modeler-Pro-15-Keygen-Generator-VERIFIED.md deleted file mode 100644 index dac74731284d5cf4ba975d8fb16b8c08b4ae95e2..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/Studio-Devil-Amp-Modeler-Pro-15-Keygen-Generator-VERIFIED.md +++ /dev/null @@ -1,98 +0,0 @@ -## Studio Devil Amp Modeler Pro 1.5 Keygen Generator - - - - - - ![Studio Devil Amp Modeler Pro 1.5 Keygen Generator ##VERIFIED##](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTenkvLErFbg_9THlnJ8_TFb6J49201lIPE4GmnAIGVz5mrCi9XvaG2Yyv4) - - - - - -**Download File 🆓 [https://urluso.com/2tBPZr](https://urluso.com/2tBPZr)** - - - - - - - - - - - - - -# How to Get Studio Devil Amp Modeler Pro 1.5 Keygen Generator for Free - - - -If you are looking for a guitar amp modeling and audio effects plugin that can deliver realistic and professional sound quality, you might want to check out Studio Devil Amp Modeler Pro 1.5. This plugin is designed to emulate 15 different guitar amplifiers, 10 cabinets, and 19 effects pedals, giving you a wide range of tonal options and flexibility. You can also customize your own amp settings and save them as presets for easy recall. - - - -However, Studio Devil Amp Modeler Pro 1.5 is not a cheap plugin. It costs $149 USD from the official website, which might be too expensive for some users. Fortunately, there is a way to get this plugin for free, without paying a dime. All you need is a Studio Devil Amp Modeler Pro 1.5 keygen generator. - - - -## What is a Studio Devil Amp Modeler Pro 1.5 Keygen Generator? - - - -A keygen generator is a software tool that can create serial numbers or activation codes for various software products. By using a keygen generator, you can bypass the registration or activation process of the software and use it without any limitations or restrictions. - - - -A Studio Devil Amp Modeler Pro 1.5 keygen generator is a specific type of keygen generator that can generate valid serial numbers for Studio Devil Amp Modeler Pro 1.5 plugin. By using this tool, you can install and run the plugin on your computer without having to purchase it or enter any personal information. - - - -## How to Use a Studio Devil Amp Modeler Pro 1.5 Keygen Generator? - - - -Using a Studio Devil Amp Modeler Pro 1.5 keygen generator is not difficult, but you need to follow some steps carefully to avoid any errors or problems. Here are the steps you need to follow: - - - -1. Download a Studio Devil Amp Modeler Pro 1.5 keygen generator from a reliable source. You can find many websites that offer this tool for free, but be careful of malware or viruses that might harm your computer. You can use one of these links[^2^] [^3^] [^4^] to download a safe and working keygen generator. - -2. Extract the keygen generator from the zip file and run it on your computer. You might need to disable your antivirus or firewall temporarily, as some of them might detect the keygen generator as a threat. - -3. Select Studio Devil Amp Modeler Pro 1.5 from the list of products and click on Generate button. The keygen generator will create a random serial number for you. - -4. Copy the serial number and paste it into the registration window of Studio Devil Amp Modeler Pro 1.5 plugin. Click on Register button and wait for the confirmation message. - -5. 
Enjoy using Studio Devil Amp Modeler Pro 1.5 plugin for free! - - - -## Is it Legal to Use a Studio Devil Amp Modeler Pro 1.5 Keygen Generator? - - - -The answer to this question depends on your location and the laws of your country. In some countries, using a keygen generator is considered illegal and can result in fines or jail time. In other countries, using a keygen generator is not illegal but still unethical and immoral. - - - -Therefore, we do not recommend using a Studio Devil Amp Modeler Pro 1.5 keygen generator or any other similar tool to get software for free. This is unfair to the developers who worked hard to create the software and deserve to be paid for their work. It also violates the terms and conditions of the software license agreement and can expose you to legal risks or security threats. - - - -If you want to use Studio Devil Amp Modeler Pro 1.5 plugin legally and safely, you should buy it from the official website or an authorized reseller. This way, you can support the developers, get updates and technical support, and enjoy the full features and benefits of the plugin. - - - -## Conclusion - - - -Studio Devil Amp Modeler Pro 1.5 is a great plugin for guitarists who want to achieve realistic and - - 145887f19f - - - - - diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Airy Ringtone and Make Your Phone Stand Out.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Airy Ringtone and Make Your Phone Stand Out.md deleted file mode 100644 index 32cfb705153b4e60ad235a02308727727a3cbd7f..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Airy Ringtone and Make Your Phone Stand Out.md +++ /dev/null @@ -1,113 +0,0 @@ - -

      How to Download Airy Ringtone for Your Phone

      -

      Are you looking for a new and unique ringtone for your phone? Do you want to impress your friends and family with a cool and catchy sound? If so, you might want to try Airy Ringtone, one of the most popular ringtones on the internet. In this article, we will show you what Airy Ringtone is, how to find and download it, and how to set it as your default ringtone. Let's get started!

      -

      What is Airy Ringtone?

      -

      Airy Ringtone is a type of ringtone that has a light and pleasant sound. It is based on the original Nokia Airy tone, which was first introduced in 2005 as part of the Nokia 7360 phone. Since then, Airy Ringtone has been remixed and modified by various artists and users, creating different versions and variations of the sound.

      -

      download airy ringtone


      Download ———>>> https://bltlly.com/2uOrGa



      -

      The origin and features of Airy Ringtone

      -

      The original Nokia Airy tone was composed by Jani Kervinen, a Finnish musician who worked for Nokia as a sound designer. He said that he wanted to create a tone that was "airy, light, and not too intrusive". He used a synthesizer and a flute to create the melody, which he described as "a little bit like a fairy tale". The tone has a duration of about 4 seconds, and it consists of four notes: C, E, G, and A.

      -

      Airy Ringtone has several features that make it appealing and attractive to many users. Some of these features are:

      -
        -
      • It has a simple and elegant melody that is easy to remember and recognize.
      • -
      • It has a soothing and relaxing effect that can reduce stress and anxiety.
      • -
      • It has a cheerful and positive vibe that can brighten up your mood and day.
      • -
      • It has a versatile and adaptable sound that can fit any genre and style.
      • -
      -

      The benefits of using Airy Ringtone

      -

      Using Airy Ringtone as your ringtone can bring you many benefits. Some of these benefits are:

      -
        -
      • You can stand out from the crowd and express your personality with a unique and original sound.
      • -
      • You can avoid annoying and disturbing others with a loud and harsh sound.
      • -
      • You can enjoy listening to a pleasant and harmonious sound every time your phone rings.
      • -
      • You can have fun experimenting with different versions and variations of Airy Ringtone.
      • -
      -

      How to Find and Download Airy Ringtone?

      -

      Now that you know what Airy Ringtone is and why you should use it, you might be wondering how to find and download it. Fortunately, there are many websites that offer free downloads of Airy Ringtone for both iPhone and Android devices. Here are some of the best websites to download Airy Ringtone for free:

      -

      The best websites to download Airy Ringtone for free

      - - - - - -
      WebsiteDescription
      [Zedge](^1^)Zedge is one of the most popular websites for downloading ringtones, wallpapers, stickers, and more. It has a large collection of Airy Ringtones in different formats, such as MP3, M4R, WAV, OGG, etc. You can browse by category, popularity, or keyword. You can also preview the ringtones before downloading them.
[MeloBoom](^2^)MeloBoom is another website that offers free ringtones, music, and sounds. It has a variety of Airy Ringtones in different styles, such as classical, pop, rock, jazz, etc. You can search by name, artist, or genre. You can also listen to the ringtones online or download them to your device.
      [RingtoneMob]RingtoneMob is a website that specializes in ringtones, notifications, and alarms. It has a selection of Airy Ringtones in high quality and low size. You can filter by type, length, or rating. You can also upload your own ringtones or request a custom ringtone.
      -

      The steps to download Airy Ringtone for iPhone and Android

      -

      The steps to download Airy Ringtone for iPhone and Android are similar, but there are some differences depending on the device and the website. Here are the general steps to download Airy Ringtone for iPhone and Android:

      -
        -
      1. Go to one of the websites mentioned above and find the Airy Ringtone that you like.
      2. -
      3. Click on the download button or the link to download the ringtone to your device. If you are using an iPhone, you might need to use a computer and iTunes to transfer the ringtone to your phone.
      4. -
      5. Once the ringtone is downloaded, go to your phone settings and select the sound option.
      6. -
      7. Choose the ringtone option and browse your device for the downloaded Airy Ringtone.
      8. -
      9. Select the Airy Ringtone as your default ringtone or assign it to a specific contact.
      10. -
      -

      How to Set Airy Ringtone as Your Default Ringtone?

      -

      Setting Airy Ringtone as your default ringtone is easy and simple. You just need to follow the steps above and choose the Airy Ringtone as your default ringtone in your phone settings. However, if you want to customize your ringtone preferences and make your phone more personalized, you can also follow these tips:

      -

      The instructions to change your ringtone settings on iPhone and Android

      -

      Depending on your phone model and operating system, you might have different options and features to change your ringtone settings. Here are some of the common instructions to change your ringtone settings on iPhone and Android:

      -
        -
      • To change the volume of your ringtone, use the volume buttons on the side of your phone or go to your sound settings and adjust the slider.
      • -
      • To change the vibration pattern of your ringtone, go to your sound settings and select the vibration option. You can choose from different patterns or create your own.
      • -
      • To change the duration of your ringtone, go to your sound settings and select the ring duration option. You can choose from different lengths or set a custom one.
      • -
      • To change the repeat mode of your ringtone, go to your sound settings and select the repeat option. You can choose from different modes or turn it off.
      • -
      -

      The tips to customize your ringtone preferences

      -

      Besides changing your ringtone settings, you can also customize your ringtone preferences by doing these things:

      -

      How to download airy ringtone for Nokia phone
      -Download airy ringtone remastered version
      -Airy ringtone free download mp3
      -Best sites to download airy ringtone
      -Download airy ringtone for iPhone
      -Airy ringtone original Nokia sound
      -Download airy ringtone remix by Zedge
      -Airy ringtone download link
      -Download airy ringtone from YouTube
      -Airy ringtone history and trivia
      -Download airy ringtone for Android
      -Airy ringtone high quality download
      -Download airy ringtone for Windows phone
      -Airy ringtone nostalgia and memories
      -Download airy ringtone for Samsung phone
      -Airy ringtone comparison and review
      -Download airy ringtone for LG phone
      -Airy ringtone meaning and symbolism
      -Download airy ringtone for Huawei phone
      -Airy ringtone popularity and trends
      -Download airy ringtone for Motorola phone
      -Airy ringtone alternatives and suggestions
      -Download airy ringtone for Sony phone
      -Airy ringtone customization and editing
      -Download airy ringtone for HTC phone
      -Airy ringtone fun facts and trivia
      -Download airy ringtone for OnePlus phone
      -Airy ringtone feedback and ratings
      -Download airy ringtone for Oppo phone
      -Airy ringtone tips and tricks
      -Download airy ringtone for Vivo phone
      -Airy ringtone features and benefits
      -Download airy ringtone for Xiaomi phone
      -Airy ringtone FAQ and answers
      -Download airy ringtone for Realme phone
      -Airy ringtone pros and cons
      -Download airy ringtone for Lenovo phone
      -Airy ringtone testimonials and reviews
      -Download airy ringtone for Asus phone
      -Airy ringtone guide and tutorial

      -
        -
      • You can assign different ringtones to different contacts, groups, or apps. This way, you can easily identify who is calling or messaging you without looking at your phone.
      • -
      • You can mix and match different versions and variations of Airy Ringtone. This way, you can create a unique and original sound that suits your taste and mood.
      • -
• You can edit and trim your Airy Ringtone using a ringtone maker app or software. This way, you can adjust the start and end points of the sound and make it fit your preference. A small scripted example of this is sketched right after this list.
      • -
      -
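If you would rather script the trimming tip above than use a ringtone maker app, here is a minimal sketch using the pydub library. This is an assumption-based example: it presumes pydub and ffmpeg are installed, and the file name airy.mp3 is just a placeholder for whichever ringtone you downloaded.

```python
# Sketch: trim a downloaded ringtone to its first four seconds with pydub.
# Assumes `pip install pydub` plus an ffmpeg install; "airy.mp3" is a placeholder.
from pydub import AudioSegment

ringtone = AudioSegment.from_file("airy.mp3")   # load the downloaded tone
clip = ringtone[:4000]                          # keep the first 4000 ms (about 4 seconds)
clip = clip.fade_out(300)                       # soften the ending with a short fade
clip.export("airy_trimmed.mp3", format="mp3")   # save the trimmed ringtone
```

You can then transfer airy_trimmed.mp3 to your phone and select it in your sound settings as described earlier.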

      Conclusion

      -

      Airy Ringtone is a great choice for anyone who wants a new and unique ringtone for their phone. It has a light and pleasant sound that can make you feel relaxed and happy. It also has many features and benefits that can make your phone more personalized and fun. To download Airy Ringtone for free, you can visit one of the websites we recommended above and follow the steps we provided. To set Airy Ringtone as your default ringtone, you can go to your phone settings and select it as your default ringtone. You can also customize your ringtone preferences by changing your ringtone settings or using some tips we suggested. We hope this article helped you learn how to download Airy Ringtone for your phone. Try it out today and enjoy!

      -

      FAQs

      -

      Here are some of the frequently asked questions about Airy Ringtone:

      -

Q: What is Airy Ringtone?

-

A: Airy Ringtone is a type of ringtone that has a light and pleasant sound. It is based on the original Nokia Airy tone, which was first introduced in 2005 as part of the Nokia 7360 phone. Since then, Airy Ringtone has been remixed and modified by various artists and users, creating different versions and variations of the sound.

      -

      Q: How can I download Airy Ringtone for free?

      -

      A: You can download Airy Ringtone for free from one of the websites we mentioned in this article, such as Zedge, MeloBoom, or RingtoneMob. You just need to find the Airy Ringtone that you like, click on the download button or the link, and save it to your device. If you are using an iPhone, you might need to use a computer and iTunes to transfer the ringtone to your phone.

      -

      Q: How can I set Airy Ringtone as my default ringtone?

      -

      A: You can set Airy Ringtone as your default ringtone by going to your phone settings and selecting the sound option. Then, choose the ringtone option and browse your device for the downloaded Airy Ringtone. Select the Airy Ringtone as your default ringtone or assign it to a specific contact.

      -

      Q: How can I customize my ringtone preferences?

      -

      A: You can customize your ringtone preferences by changing your ringtone settings or using some tips we suggested in this article. For example, you can change the volume, vibration, duration, or repeat mode of your ringtone. You can also assign different ringtones to different contacts, groups, or apps. You can also mix and match different versions and variations of Airy Ringtone. You can also edit and trim your Airy Ringtone using a ringtone maker app or software.

      -

      Q: Where can I find more information about Airy Ringtone?

      -

      A: You can find more information about Airy Ringtone by visiting the websites that offer free downloads of Airy Ringtone. You can also read some articles or blogs that talk about Airy Ringtone. You can also watch some videos or listen to some podcasts that feature Airy Ringtone. You can also join some online communities or forums that discuss Airy Ringtone.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Autodesk InfraWorks 2020.2 Extras X64 Multilanguage Free Download __HOT__.md b/spaces/tioseFevbu/cartoon-converter/scripts/Autodesk InfraWorks 2020.2 Extras X64 Multilanguage Free Download __HOT__.md deleted file mode 100644 index 937f7f0df6cee12b4e5332b32379a71d55f4ab15..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Autodesk InfraWorks 2020.2 Extras X64 Multilanguage Free Download __HOT__.md +++ /dev/null @@ -1,130 +0,0 @@ - -

      Autodesk InfraWorks 2020.2 Extras x64 Multilanguage: A Comprehensive Review

      -

      If you are looking for a powerful tool for designing and modeling urban infrastructure, you might want to check out Autodesk InfraWorks 2020.2 Extras x64 Multilanguage.

      -

      Autodesk InfraWorks 2020.2 Extras x64 Multilanguage Free Download


Download: https://urlcod.com/2uHyFZ



      -

Autodesk InfraWorks is a product of Autodesk that allows engineers and planners to create realistic and accurate 3D models of roads, highways, bridges, railways, canals, and other urban infrastructure. It also enables them to analyze and simulate various scenarios and design alternatives, as well as collaborate and share their work with their team members and stakeholders.

      -

      In this article, we will review the latest version of Autodesk InfraWorks, which is Autodesk InfraWorks 2020.2 Extras x64 Multilanguage. We will cover the following topics:

      -
        -
      • What are the new and improved features and tools in Autodesk InfraWorks 2020.2 Extras?
      • -
      • How to download and install Autodesk InfraWorks 2020.2 Extras x64 Multilanguage on your Windows PC?
      • -
      • Why you should choose Autodesk InfraWorks 2020.2 Extras x64 Multilanguage for your next project?
      • -
      -

      By the end of this article, you will have a clear understanding of what Autodesk InfraWorks 2020.2 Extras x64 Multilanguage can do for you and how to get started with it.

      -

      Autodesk InfraWorks: A Powerful Tool for Designing and Modeling Urban Infrastructure

      -

      Before we dive into the details of Autodesk InfraWorks 2020.2 Extras x64 Multilanguage, let's first take a look at what Autodesk InfraWorks is and why it is a valuable tool for designing and modeling urban infrastructure.

      -

      What is urban infrastructure and why is it important?

      -

      Urban infrastructure refers to the physical structures and systems that support the functioning of a city or a metropolitan area. It includes transportation networks, water supply and distribution, wastewater treatment, stormwater management, energy generation and distribution, communication networks, public facilities, and more.

      -

      Urban infrastructure is essential for the economic development, social welfare, environmental sustainability, and resilience of a city. It affects the quality of life, health, safety, and mobility of the urban population. It also influences the attractiveness, competitiveness, and innovation potential of a city.

      -

      However, urban infrastructure also faces many challenges in the 21st century. These include rapid urbanization, climate change, aging infrastructure, limited resources, increasing demand, changing user expectations, complex regulations, and emerging technologies.

      -

      -

      To address these challenges, urban infrastructure needs to be planned, designed, built, operated, maintained, and upgraded in a smart, efficient, and integrated way. This requires a holistic approach that considers the interdependencies, trade-offs, synergies, and impacts of different infrastructure systems and components.

      -

      How does Autodesk InfraWorks help engineers and planners create realistic and accurate 3D models of urban infrastructure?

      -

      Autodesk InfraWorks is a software application that helps engineers and planners create realistic and accurate 3D models of urban infrastructure. It allows them to:

      -
        -
        • Import and integrate data from various sources such as GIS databases, CAD files, point clouds, aerial images, satellite images, etc.
        • -
        • Create 3D models of existing conditions using Model Builder or manual editing tools.
        • -
        • Add new features such as roads, bridges, buildings, landscaping, water features, etc. using design tools or component libraries.
        • -
        • Modify and refine the features using editing tools such as move, rotate, scale, align, snap, etc.
        • -
        • Analyze and simulate different scenarios and design alternatives using analysis tools such as traffic simulation, drainage analysis, viewshed analysis, etc.
        • -
        • Visualize and communicate the models using visualization tools such as rendering, animation, annotation, measurement, etc.
        • -
        • Collaborate and share the models with team members and stakeholders using cloud collaboration tools such as BIM 360 Docs, BIM 360 Design, etc.
        • -
        -

        By using Autodesk InfraWorks, engineers and planners can create 3D models of urban infrastructure that are realistic, accurate, and data-rich. They can also explore various design options and evaluate their impacts on the environment, the community, and the project goals. They can also present their models in a compelling and interactive way to gain feedback and approval from their clients and decision-makers.

        -

        What are the advantages of using Autodesk InfraWorks over other CAD software?

        -

Autodesk InfraWorks is not just another CAD program. It is a specialized tool for designing and modeling urban infrastructure. It has several advantages over general-purpose CAD software, such as:

        -
          -
        • It is easy to use and learn. It has a user-friendly interface and intuitive workflows that guide the user through the modeling process. It also has a comprehensive help system and online resources that provide tips and tutorials for the user.
        • -
        • It is fast and efficient. It can handle large and complex models with ease and speed. It also has a smart caching system that optimizes the performance and memory usage of the software.
        • -
        • It is flexible and customizable. It can import and export data from various formats and sources. It also has a rich set of tools and components that can be modified and customized to suit the user's needs and preferences.
        • -
        • It is integrated and compatible. It can work seamlessly with other Autodesk products such as AutoCAD Civil 3D, Revit, Navisworks, etc. It also supports industry standards such as BIM (Building Information Modeling), GIS (Geographic Information System), IFC (Industry Foundation Classes), etc.
        • -
        -

        By using Autodesk InfraWorks, engineers and planners can benefit from a powerful tool that can help them design and model urban infrastructure in a smart, efficient, and integrated way.

        -

        Autodesk InfraWorks 2020.2 Extras: What's New and Improved?

        -

        Now that we have seen what Autodesk InfraWorks is and what it can do for us, let's take a look at what's new and improved in Autodesk InfraWorks 2020.2 Extras x64 Multilanguage.

        -

        Autodesk InfraWorks 2020.2 Extras x64 Multilanguage is the latest version of Autodesk InfraWorks that was released in December 2022. It includes several enhancements and additions that improve the user experience and the quality of the models. Some of these features are:

        -
          -
        • Model Builder: A smart and easy way to create your model
        • -
        • Cloud Collaboration: A convenient and secure way to share your work
        • -
        • Geotechnical Modeling: A detailed and accurate way to analyze your site conditions
        • -
        • Bridge Design: A comprehensive and flexible way to design your bridges
        • -
        -

        In the following sections, we will explore each of these features in more detail and see how they can be used in real-world projects.

        -

        Model Builder: A Smart and Easy Way to Create Your Model

        -

        One of the most useful features of Autodesk InfraWorks is Model Builder. Model Builder is a tool that allows you to create your model automatically by using data from various sources such as vector data (e.g., shapefiles), raster data (e.g., DEMs), or other data (e.g., OpenStreetMap).

        -

        To use Model Builder, you just need to specify the area of interest (AOI) for your model by drawing a polygon on a map or entering an address or coordinates. Then you can choose the data sources you want to use for your model from a list of available options. You can also adjust the settings for your model such as the theme (e.g., urban or rural), the style (e.g., realistic or conceptual), the resolution (e.g., high or low), etc.
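To make the idea of an AOI given as coordinates a little more concrete, the sketch below shows a rectangular area of interest written as a GeoJSON-style polygon in Python. The coordinate values are invented placeholders; in InfraWorks itself you would normally just draw the polygon on the map.

```python
# Sketch: a rectangular area of interest (AOI) expressed as longitude/latitude
# pairs in a GeoJSON-style polygon. The numbers are placeholders, not a real site.
import json

aoi = {
    "type": "Polygon",
    "coordinates": [[
        [-122.42, 37.77],  # south-west corner
        [-122.40, 37.77],  # south-east corner
        [-122.40, 37.79],  # north-east corner
        [-122.42, 37.79],  # north-west corner
        [-122.42, 37.77],  # repeat the first point to close the ring
    ]],
}

print(json.dumps(aoi, indent=2))
```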

        -

Once you have configured your model settings, you can click on the Create Model button and wait for a few minutes while Model Builder creates your model. You will receive an email notification when your model is ready. You can then open your model in Autodesk InfraWorks and start editing and designing it. Model Builder is a smart and easy way to create your model because it saves you time and effort by using existing data sources. It also gives you a realistic and accurate representation of your environment that you can use as a base for your design. You can also update your model with new data sources as they become available. Here is an example of how Model Builder can be used to create a model of a city: A 3D model of a city created by Model Builder -

        Cloud Collaboration: A Convenient and Secure Way to Share Your Work

        - Another useful feature of Autodesk InfraWorks is Cloud Collaboration. Cloud Collaboration is a tool that allows you to share your model with your team members and stakeholders using the cloud. You can also access your model from any device and location using the cloud. To use Cloud Collaboration, you need to have a subscription to BIM 360 Docs or BIM 360 Design, which are cloud-based services that enable collaboration and data management for Autodesk products. You also need to have an Autodesk account and sign in to Autodesk InfraWorks. Once you have set up your cloud collaboration account, you can upload your model to the cloud by clicking on the Publish button in Autodesk InfraWorks. You can then invite other users to view or edit your model by sending them an email invitation or a link. You can also control the permissions and access levels of each user. By using Cloud Collaboration, you can share your model with your team members and stakeholders in a convenient and secure way. You can also collaborate with them in real time and see their changes and comments on your model. You can also keep track of the history and versions of your model and restore them if needed. Here is an example of how Cloud Collaboration can be used to share a model of a bridge: A 3D model of a bridge shared by Cloud Collaboration -

        Geotechnical Modeling: A Detailed and Accurate Way to Analyze Your Site Conditions

        - A new feature that was added in Autodesk InfraWorks 2020.2 Extras x64 Multilanguage is Geotechnical Modeling. Geotechnical Modeling is a tool that allows you to import and visualize geospatial data such as soil types, groundwater levels, slope stability, etc. You can also use Geotechnical Modeling to perform analysis and simulation of your site conditions and their impact on your design. To use Geotechnical Modeling, you need to have geospatial data in the form of boreholes, cross-sections, or surfaces. You can import these data from various sources such as CSV files, Excel files, AGS files, etc. You can also create these data manually using the Geotechnical Module in Autodesk InfraWorks. Once you have imported or created your geospatial data, you can visualize them in 3D using different colors, symbols, labels, etc. You can also edit and refine them using tools such as move, delete, split, merge, etc. You can also create profiles and sections to view the data in 2D. By using Geotechnical Modeling, you can analyze and simulate your site conditions in detail and accuracy. You can also evaluate how your site conditions affect your design parameters such as foundation depth, bearing capacity, settlement, etc. You can also identify potential risks and hazards such as landslides, liquefaction, erosion, etc. Here is an example of how Geotechnical Modeling can be used to analyze a site condition for a road project: A 3D model of a road project with geotechnical data -
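To give a feel for what geospatial borehole data looks like before it is brought into a tool like this, here is a small, generic Python sketch that writes a borehole table to CSV. The column names are purely illustrative assumptions and do not reflect InfraWorks' actual import schema.

```python
# Sketch: a generic borehole table exported to CSV. The columns are illustrative
# placeholders, not the schema any particular geotechnical importer expects.
import csv

rows = [
    {"borehole_id": "BH-01", "depth_m": 2.5, "soil_type": "Clay", "groundwater_m": 1.8},
    {"borehole_id": "BH-02", "depth_m": 4.0, "soil_type": "Sand", "groundwater_m": 2.6},
]

with open("boreholes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```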

        Bridge Design: A Comprehensive and Flexible Way to Design Your Bridges

        - Another feature that was improved in Autodesk InfraWorks 2020.2 Extras x64 Multilanguage is Bridge Design. Bridge Design is a tool that allows you to create different types of bridges such as girder, arch, cable-stayed, Another feature that was improved in Autodesk InfraWorks 2020.2 Extras x64 Multilanguage is Bridge Design. Bridge Design is a tool that allows you to create different types of bridges such as girder, arch, cable-stayed, etc. You can also customize your bridge components such as piers, abutments, decks, railings, etc. You can also analyze your bridge performance such as load capacity, stress distribution, vibration frequency, etc. To use Bridge Design, you need to have a road or a railway in your model that you want to add a bridge to. You can then select the bridge type from a list of available options or create your own custom bridge type. You can also adjust the bridge parameters such as span length, height, width, curvature, etc. Once you have created your bridge, you can modify and refine it using tools such as move, rotate, scale, align, snap, etc. You can also edit the individual components of your bridge using tools such as add, delete, split, merge, etc. You can also apply different materials and styles to your bridge using tools such as color, texture, pattern, etc. By using Bridge Design, you can create comprehensive and flexible bridges that suit your design requirements and specifications. You can also evaluate how your bridges perform under different loading and environmental conditions and optimize them accordingly. You can also visualize and communicate your bridges in a realistic and interactive way. Here is an example of how Bridge Design can be used to create a cable-stayed bridge: A 3D model of a cable-stayed bridge created by Bridge Design -

        How to Download and Install Autodesk InfraWorks 2020.2 Extras x64 Multilanguage: A Step-by-Step Guide

        -

        Now that we have seen the features and benefits of Autodesk InfraWorks 2020.2 Extras x64 Multilanguage, let's see how we can download and install it on our Windows PC.

        -

        What are the system requirements for Autodesk InfraWorks 2020.2 Extras x64 Multilanguage?

        -

        Before we download and install Autodesk InfraWorks 2020.2 Extras x64 Multilanguage, we need to make sure that our PC meets the minimum system requirements for running the software. These are:

        -
          -
        • Operating System: Windows 10 (64-bit) or Windows 8.1 (64-bit)
        • -
        • Processor: Intel Core i5 or higher or AMD equivalent
        • -
        • Memory: 8 GB RAM or higher
        • -
        • Graphics: DirectX 11 compatible graphics card with 1 GB VRAM or higher
        • -
        • Display: 1920 x 1080 resolution or higher
        • -
        • Storage: 10 GB free disk space or higher
        • -
        • Internet: Broadband connection for cloud services
        • -
        -

        If our PC meets these requirements, we can proceed to download and install Autodesk InfraWorks 2020.2 Extras x64 Multilanguage.

        -

        Where can you download Autodesk InfraWorks 2020.2 Extras x64 Multilanguage for free?

        -

        There are several websites that offer Autodesk InfraWorks 2020.2 Extras x64 Multilanguage for free download. However, not all of them are reliable and safe. Some of them may contain viruses, malware, or spyware that can harm our PC or compromise our data.

        -

        Therefore, we recommend that you download Autodesk InfraWorks 2020.2 Extras x64 Multilanguage from the official Autodesk website or from a trusted third-party website that has positive reviews and ratings from other users.

        -

        One of the websites that we recommend is Get Into PC, which is a popular and reputable website that provides free software downloads for Windows users. It has a large collection of software from various categories such as design, engineering, multimedia, security, etc.

        -

        To download Autodesk InfraWorks 2020.2 Extras x64 Multilanguage from Get Into PC, you can follow these steps:

        -
          -
        1. Go to Get Into PC website and search for Autodesk InfraWorks 2020.2 Extras x64 Multilanguage in the search box.
        2. -
        3. Select the result that matches your query and click on it.
        4. -
        5. On the product page, read the description and features of Autodesk InfraWorks 2020.2 Extras x64 Multilanguage and scroll down to the bottom.
        6. -
        7. Click on the Download button and wait for a few seconds until the download link appears.
        8. -
        9. Click on the download link and save the file to your PC.
        10. -
        11. Extract the file using WinRAR or any other file compression software.
        12. -
        -

        You have now downloaded Autodesk InfraWorks 2020.2 Extras x64 Multilanguage from Get Into PC. You can also download it from other websites such as Ocean of Software, Softonic, FileHippo, etc. However, make sure that you scan the file for viruses and malware before installing it.

        -

        How to install Autodesk InfraWorks 2020.2 Extras x64 Multilanguage on your Windows PC?

        -

        After you have downloaded Autodesk InfraWorks 2020.2 Extras x64 Multilanguage, you can install it on your Windows PC by following these steps:

        -
          -
        1. Open the extracted folder and run the setup.exe file as administrator.
        2. -
        3. Follow the instructions on the screen and accept the terms and conditions.
        4. -
        5. Select the components and features that you want to install and choose the destination folder for the installation.
        6. -
        7. Click on the Install button and wait for the installation to complete.
        8. -
        9. Click on the Finish button and restart your PC.
        10. -
        -

        You have now installed Autodesk InfraWorks 2020.2 Extras x64 Multilanguage on your Windows PC. You can launch it from the Start menu or the desktop shortcut. You can also activate it using a serial number or a product key that you can obtain from Autodesk or from a third-party provider.

        -

        Conclusion: Why You Should Choose Autodesk InfraWorks 2020.2 Extras x64 Multilanguage for Your Next Project

        -

        In this article, we have reviewed Autodesk InfraWorks 2020.2 Extras x64 Multilanguage, a powerful tool for designing and modeling urban infrastructure. We have seen what it is, what it can do, what's new and improved in it, and how to download and install it.

        -

        We have learned that Autodesk InfraWorks 2020.2 Extras x64 Multilanguage can help us create realistic and accurate 3D models of roads, highways, bridges, railways, canals, and other urban infrastructure. It can also help us analyze and simulate various scenarios and design alternatives, as well as collaborate and share our work with our team members and stakeholders.

        -

        We have also learned that Autodesk InfraWorks 2020.2 Extras x64 Multilanguage has several advantages over other CAD software, such as ease of use, speed, efficiency, flexibility, customization, integration, and compatibility. It also has several new and improved features, such as Model Builder, Cloud Collaboration, Geotechnical Modeling, Bridge Design, etc.

        -

        Therefore, we conclude that Autodesk InfraWorks 2020.2 Extras x64 Multilanguage is a valuable tool for designing and modeling urban infrastructure. It can help us create better designs, faster workflows, higher quality models, and more satisfied clients.

        -

        If you are interested in trying Autodesk InfraWorks 2020.2 Extras x64 Multilanguage for yourself, you can download it for free from the official Autodesk website or from a trusted third-party website. You can also get a free trial or a subscription from Autodesk or from a third-party provider.

        -

        We hope that this article has been informative and helpful for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

        -

        Frequently Asked Questions

        -

        Here are some of the frequently asked questions about Autodesk InfraWorks 2020.2 Extras x64 Multilanguage:

        -
          -
        1. What is the difference between Autodesk InfraWorks 2020.2 and Autodesk InfraWorks 2020.2 Extras?
        2. -

          Autodesk InfraWorks 2020.2 is the base version of Autodesk InfraWorks that includes the core features and tools for designing and modeling urban infrastructure. Autodesk InfraWorks 2020.2 Extras is an add-on package that includes additional features and tools such as Geotechnical Modeling, Bridge Design, etc.

          -
        3. Can I use Autodesk InfraWorks 2020.2 Extras without Autodesk InfraWorks 2020.2?
        4. -

          No, you cannot use Autodesk InfraWorks 2020.2 Extras without Autodesk InfraWorks 2020.2. You need to have Autodesk InfraWorks 2020.2 installed on your PC before you can install Autodesk InfraWorks 2020.2 Extras. You also need to have a valid license or subscription for both products.

        5. How much does Autodesk InfraWorks 2020.2 Extras x64 Multilanguage cost?
        6. -

          Autodesk InfraWorks 2020.2 Extras x64 Multilanguage is not sold separately. It is included in the Autodesk InfraWorks subscription, which costs $1,890 per year or $210 per month. You can also get a free trial for 30 days from the Autodesk website.

          -
        7. What languages are supported by Autodesk InfraWorks 2020.2 Extras x64 Multilanguage?
        8. -

          Autodesk InfraWorks 2020.2 Extras x64 Multilanguage supports the following languages: English, French, German, Italian, Japanese, Portuguese (Brazilian), Russian, Simplified Chinese, and Spanish.

          -
        9. Where can I find more information and support for Autodesk InfraWorks 2020.2 Extras x64 Multilanguage?
        10. -

          You can find more information and support for Autodesk InfraWorks 2020.2 Extras x64 Multilanguage from the following sources:

          -

          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Excursions In World Music Seventh Edition Downloa Englisch Autogramm S.md b/spaces/tioseFevbu/cartoon-converter/scripts/Excursions In World Music Seventh Edition Downloa Englisch Autogramm S.md deleted file mode 100644 index 1ec1ed67ab668f0b94db92fe609e4d8a91976831..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Excursions In World Music Seventh Edition Downloa Englisch Autogramm S.md +++ /dev/null @@ -1,20 +0,0 @@ -
          -

          How to Download Excursions in World Music, Seventh Edition in English

          - -

          If you are looking for a comprehensive introductory textbook on world music, you might want to check out Excursions in World Music, Seventh Edition by Bruno Nettl and Timothy Rommen. This book offers a panoramic experience for students by engaging the many cultures around the globe and highlighting the sheer diversity to be experienced in the world of music. It also illustrates the often profound ways through which a deeper exploration of these many different communities can reveal overlaps, shared horizons, and common concerns in spite of, and because of, this very diversity.

          - -

          The new seventh edition introduces five brand new chapters, including chapters by three new contributors on the Middle East, South Asia, and Korea, as well as a new chapter on Latin America along with a new introduction written by Timothy Rommen. General updates have been made to other chapters, replacing visuals and updating charts/statistics. The book covers topics such as music of South Asia, music of the Middle East and North Africa, musics of East Asia, music of Indonesia, music of Sub-Saharan Africa, the musical culture of Europe, music in Latin America, music in the Caribbean, Native American music, and music of ethnic North America.

          -

          Excursions In World Music, Seventh Edition Downloa englisch autogramm s


          Download File https://urlcod.com/2uHwJ8



          - -

          So how can you download Excursions in World Music, Seventh Edition in English? There are several options available for you. You can purchase the paperback or hardback version from Taylor & Francis or Routledge websites. You can also buy the eBook and mp3 file from VitalSource or Google Books. Alternatively, you can get the print book and CD set from Amazon or other online retailers. The audio CD contains streamed audio tracks for most of the listening guides in the book. You can also access a companion website that offers numerous student resources such as interactive quizzes, flashcards, and an interactive map with pinpoints of interest and activities.

          - -

          Excursions in World Music, Seventh Edition is a great resource for anyone who wants to learn more about the diverse musical cultures of the world. It is suitable for undergraduate courses in world music or ethnomusicology as well as for general readers who are interested in exploring the richness and variety of world music. Download it today and enjoy your musical journey!

          - -

          What are the benefits of reading Excursions in World Music, Seventh Edition? There are many reasons why you should read this book if you are interested in world music. First of all, you will gain a broad and deep understanding of the musical traditions and practices of various regions and cultures around the world. You will learn about the historical, social, religious, and political contexts that shape and influence the music of different communities. You will also discover the similarities and differences among musical styles, genres, instruments, forms, and functions across the globe.

          - -

          Secondly, you will develop your critical listening and analytical skills by following the listening guides in the book. These guides will help you identify and appreciate the musical elements and features of each example. You will also be able to compare and contrast the musical examples within and across chapters. The book provides a wide range of musical examples from different genres and regions, such as classical, folk, popular, sacred, secular, vocal, instrumental, solo, ensemble, and so on.

          - -

          Thirdly, you will enhance your cultural awareness and sensitivity by reading about the diverse perspectives and experiences of the musicians and their audiences. You will learn about the values, beliefs, attitudes, and emotions that are expressed and communicated through music. You will also understand how music can reflect and affect the identity, culture, and society of different groups of people. The book also addresses some of the contemporary issues and challenges that face world music today, such as globalization, hybridization, appropriation, preservation, and innovation.

          -

          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Fundamentals Of Athletic Training-3rd Edition Download Pdf 3 LINK.md b/spaces/tioseFevbu/cartoon-converter/scripts/Fundamentals Of Athletic Training-3rd Edition Download Pdf 3 LINK.md deleted file mode 100644 index 141b5c5ebaf4df277e08a45e1800f92173f24248..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Fundamentals Of Athletic Training-3rd Edition Download Pdf 3 LINK.md +++ /dev/null @@ -1,12 +0,0 @@ -
          -

          Fundamentals Of Athletic Training-3rd Edition: A Comprehensive Textbook for Sports Medicine Students

          -

          Fundamentals Of Athletic Training-3rd Edition is a textbook that explains the foundational concepts in athletic training and presents injuries and illnesses commonly encountered by certified athletic trainers. Written by Lorin Cartwright and William A. Pitney, this book is designed for high school students who are interested in pursuing careers as sports medicine professionals or who are assisting athletic trainers on the field and in the training room. The book covers topics such as the professional and administrative functions of athletic trainers, anatomy and physiology, athletic injuries to the axial region, upper and lower extremities injuries, rehabilitation and reconditioning, emergency care and nutrition, drugs and general health issues. The book also includes new chapters dealing with training athletes with disabilities, cultural diversity and modern facilities design for new standards and exercise regimes.

          -

          The third edition of this book was published by Human Kinetics in 2011 and has 395 pages. It features a full-color layout, numerous illustrations and sidebars, chapter summaries and review questions, and online access to additional resources for instructors such as a test bank and visual aids. The book is based on the latest developments in athletic training with regard to treatment, care, administration, and certification. It also provides real-world examples and scenarios that athletic trainers currently working in the field encounter. The book aims to develop the knowledge and skills of students in a level that they can understand and apply.

          -

          Fundamentals Of Athletic Training-3rd Edition Download Pdf 3


          Download File · https://urlcod.com/2uHyps



          -

          Fundamentals Of Athletic Training-3rd Edition is available for download in PDF format from various online sources. However, it is recommended that students purchase the original hardcover or paperback version of the book from reputable sellers, or borrow it from a library, to support the authors and publishers. The book can also be used as a reference or a supplement for other courses or programs related to sports medicine, exercise physiology, kinesiology, physical education, or health sciences.

          - -

          Students who want to pursue a career as athletic trainers need to complete a bachelor's degree program in athletic training or a related field. Some employers may prefer candidates who have a master's degree or higher. According to the CollegeGrad website[^1^], high school students interested in postsecondary athletic training programs should take courses in anatomy, physiology, and physics. Additionally, students need to gain clinical experience under the supervision of a certified athletic trainer. They also need to pass a national exam administered by the Board of Certification for the Athletic Trainer (BOC) and obtain a state license or certification, which may vary depending on the state requirements.

          -

          -

          Athletic trainers can work in various settings such as schools, colleges, universities, professional sports teams, clinics, hospitals, or military bases. They can also specialize in certain sports or populations such as youth, elderly, or disabled athletes. The median annual wage for athletic trainers was $48,440 in May 2019, according to the U.S. Bureau of Labor Statistics. The job outlook for athletic trainers is projected to grow 16 percent from 2019 to 2029, much faster than the average for all occupations. This is due to the increasing demand for preventive care and injury treatment for athletes and other physically active people.

          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Livro Chapeuzinhos Coloridos.pdf.md b/spaces/tioseFevbu/cartoon-converter/scripts/Livro Chapeuzinhos Coloridos.pdf.md deleted file mode 100644 index ba232b1757c05f2abf46d952cd7c9c42945f07d8..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Livro Chapeuzinhos Coloridos.pdf.md +++ /dev/null @@ -1,21 +0,0 @@ - -

          Livro Chapeuzinhos Coloridos.pdf: A Creative and Educational Book for Children

          -

          If you are looking for a book that can stimulate your child's imagination and teach them about diversity, you might want to check out Livro Chapeuzinhos Coloridos.pdf. This book, written by José Roberto Torero and Marcus Aurelius Pimenta, is a collection of stories that reimagines the classic tale of Little Red Riding Hood with different colors of hoods and different endings.

          -

          Livro Chapeuzinhos Coloridos.pdf is not only a fun and entertaining book, but also a valuable resource for parents and educators who want to introduce their children to topics such as cultural differences, gender roles, social justice, and environmental awareness. Each story has a different message and a different lesson that can spark meaningful conversations and reflections.

          -

          Livro Chapeuzinhos Coloridos.pdf


          DOWNLOADhttps://urlcod.com/2uHya4



          -

          Some of the stories in Livro Chapeuzinhos Coloridos.pdf are:

          -
            -
          • Chapeuzinho Amarelo (Yellow Hood), who is afraid of everything and learns to overcome her fears with the help of a friendly wolf.
          • -
          • Chapeuzinho Azul (Blue Hood), who is a boy who likes to dress up as a girl and finds acceptance and friendship in the forest.
          • -
          • Chapeuzinho Verde (Green Hood), who is an activist who fights against the deforestation of the woods and the exploitation of the animals.
          • -
          • Chapeuzinho Branco (White Hood), who is a spoiled and selfish girl who learns to share and care for others after meeting a poor grandmother.
          • -
          • Chapeuzinho Preto (Black Hood), who is a girl who faces racism and discrimination in her village and discovers her roots and her identity in the forest.
          • -
          -

          Livro Chapeuzinhos Coloridos.pdf is a book that celebrates diversity and creativity, while also offering a fresh and modern perspective on a classic fairy tale. You can download the PDF version of the book for free from this link: https://www.acervodigital.com.br/livros/livro-chapeuzinhos-coloridos-pdf. You can also find more information about the authors and their other works on their website: http://www.toreroepimenta.com.br/.

          - -

          One of the reasons why Livro Chapeuzinhos Coloridos.pdf is such a popular and acclaimed book is because of its beautiful and colorful illustrations. The book features the artwork of Ziraldo, a famous Brazilian cartoonist and writer, who has created memorable characters and scenes that capture the essence and the emotion of each story. Ziraldo's style is playful and expressive, using bright colors and simple shapes to convey the mood and the message of the book.

          -

          Livro Chapeuzinhos Coloridos.pdf is not only a book for children, but also for adults who want to revisit their childhood memories and enjoy a new and original version of Little Red Riding Hood. The book is suitable for readers of all ages and backgrounds, as it offers a universal and timeless appeal. Whether you are looking for a book to read with your kids, to use in your classroom, or to enjoy by yourself, Livro Chapeuzinhos Coloridos.pdf is a great choice that will make you laugh, cry, think, and dream.

          -

          -

          Don't miss this opportunity to download Livro Chapeuzinhos Coloridos.pdf for free and discover why this book has won several awards and has been translated into many languages. You will be amazed by the creativity and the diversity of this book, and you will never look at Little Red Riding Hood the same way again.

          -
          -
          \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pep517/envbuild.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pep517/envbuild.py deleted file mode 100644 index fe8873c64a90d2ae3e44510453191e1ab4b5c84e..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pep517/envbuild.py +++ /dev/null @@ -1,171 +0,0 @@ -"""Build wheels/sdists by installing build deps to a temporary environment. -""" - -import io -import os -import logging -import shutil -from subprocess import check_call -import sys -from sysconfig import get_paths -from tempfile import mkdtemp - -from .compat import toml_load -from .wrappers import Pep517HookCaller, LoggerWrapper - -log = logging.getLogger(__name__) - - -def _load_pyproject(source_dir): - with io.open( - os.path.join(source_dir, 'pyproject.toml'), - 'rb', - ) as f: - pyproject_data = toml_load(f) - buildsys = pyproject_data['build-system'] - return ( - buildsys['requires'], - buildsys['build-backend'], - buildsys.get('backend-path'), - ) - - -class BuildEnvironment(object): - """Context manager to install build deps in a simple temporary environment - - Based on code I wrote for pip, which is MIT licensed. - """ - # Copyright (c) 2008-2016 The pip developers (see AUTHORS.txt file) - # - # Permission is hereby granted, free of charge, to any person obtaining - # a copy of this software and associated documentation files (the - # "Software"), to deal in the Software without restriction, including - # without limitation the rights to use, copy, modify, merge, publish, - # distribute, sublicense, and/or sell copies of the Software, and to - # permit persons to whom the Software is furnished to do so, subject to - # the following conditions: - # - # The above copyright notice and this permission notice shall be - # included in all copies or substantial portions of the Software. - # - # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE - # LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION - # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - # WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- - path = None - - def __init__(self, cleanup=True): - self._cleanup = cleanup - - def __enter__(self): - self.path = mkdtemp(prefix='pep517-build-env-') - log.info('Temporary build environment: %s', self.path) - - self.save_path = os.environ.get('PATH', None) - self.save_pythonpath = os.environ.get('PYTHONPATH', None) - - install_scheme = 'nt' if (os.name == 'nt') else 'posix_prefix' - install_dirs = get_paths(install_scheme, vars={ - 'base': self.path, - 'platbase': self.path, - }) - - scripts = install_dirs['scripts'] - if self.save_path: - os.environ['PATH'] = scripts + os.pathsep + self.save_path - else: - os.environ['PATH'] = scripts + os.pathsep + os.defpath - - if install_dirs['purelib'] == install_dirs['platlib']: - lib_dirs = install_dirs['purelib'] - else: - lib_dirs = install_dirs['purelib'] + os.pathsep + \ - install_dirs['platlib'] - if self.save_pythonpath: - os.environ['PYTHONPATH'] = lib_dirs + os.pathsep + \ - self.save_pythonpath - else: - os.environ['PYTHONPATH'] = lib_dirs - - return self - - def pip_install(self, reqs): - """Install dependencies into this env by calling pip in a subprocess""" - if not reqs: - return - log.info('Calling pip to install %s', reqs) - cmd = [ - sys.executable, '-m', 'pip', 'install', '--ignore-installed', - '--prefix', self.path] + list(reqs) - check_call( - cmd, - stdout=LoggerWrapper(log, logging.INFO), - stderr=LoggerWrapper(log, logging.ERROR), - ) - - def __exit__(self, exc_type, exc_val, exc_tb): - needs_cleanup = ( - self._cleanup and - self.path is not None and - os.path.isdir(self.path) - ) - if needs_cleanup: - shutil.rmtree(self.path) - - if self.save_path is None: - os.environ.pop('PATH', None) - else: - os.environ['PATH'] = self.save_path - - if self.save_pythonpath is None: - os.environ.pop('PYTHONPATH', None) - else: - os.environ['PYTHONPATH'] = self.save_pythonpath - - -def build_wheel(source_dir, wheel_dir, config_settings=None): - """Build a wheel from a source directory using PEP 517 hooks. - - :param str source_dir: Source directory containing pyproject.toml - :param str wheel_dir: Target directory to create wheel in - :param dict config_settings: Options to pass to build backend - - This is a blocking function which will run pip in a subprocess to install - build requirements. - """ - if config_settings is None: - config_settings = {} - requires, backend, backend_path = _load_pyproject(source_dir) - hooks = Pep517HookCaller(source_dir, backend, backend_path) - - with BuildEnvironment() as env: - env.pip_install(requires) - reqs = hooks.get_requires_for_build_wheel(config_settings) - env.pip_install(reqs) - return hooks.build_wheel(wheel_dir, config_settings) - - -def build_sdist(source_dir, sdist_dir, config_settings=None): - """Build an sdist from a source directory using PEP 517 hooks. - - :param str source_dir: Source directory containing pyproject.toml - :param str sdist_dir: Target directory to place sdist in - :param dict config_settings: Options to pass to build backend - - This is a blocking function which will run pip in a subprocess to install - build requirements. 
- """ - if config_settings is None: - config_settings = {} - requires, backend, backend_path = _load_pyproject(source_dir) - hooks = Pep517HookCaller(source_dir, backend, backend_path) - - with BuildEnvironment() as env: - env.pip_install(requires) - reqs = hooks.get_requires_for_build_sdist(config_settings) - env.pip_install(reqs) - return hooks.build_sdist(sdist_dir, config_settings) diff --git a/spaces/tnt2011/dog_cat_classifier/README.md b/spaces/tnt2011/dog_cat_classifier/README.md deleted file mode 100644 index 15b0038ae036fee8789f3d14a13e3e13038dca69..0000000000000000000000000000000000000000 --- a/spaces/tnt2011/dog_cat_classifier/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dog Cat Classifier -emoji: 📚 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/toiram/goofyai-3d_render_style_xl/app.py b/spaces/toiram/goofyai-3d_render_style_xl/app.py deleted file mode 100644 index 4f2d3011c603b276c7800e5d1e9de8bf628eeda2..0000000000000000000000000000000000000000 --- a/spaces/toiram/goofyai-3d_render_style_xl/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/goofyai/3d_render_style_xl").launch() \ No newline at end of file diff --git a/spaces/tomofi/MMOCR/configs/kie/sdmgr/sdmgr_unet16_60e_wildreceipt.py b/spaces/tomofi/MMOCR/configs/kie/sdmgr/sdmgr_unet16_60e_wildreceipt.py deleted file mode 100644 index f073064affebe05d3830e18d76453c1cceb0f1a1..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/configs/kie/sdmgr/sdmgr_unet16_60e_wildreceipt.py +++ /dev/null @@ -1,105 +0,0 @@ -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -max_scale, min_scale = 1024, 512 - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(max_scale, min_scale), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='KIEFormatBundle'), - dict( - type='Collect', - keys=['img', 'relations', 'texts', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(max_scale, min_scale), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='KIEFormatBundle'), - dict( - type='Collect', - keys=['img', 'relations', 'texts', 'gt_bboxes'], - meta_keys=[ - 'img_norm_cfg', 'img_shape', 'ori_filename', 'filename', - 'ori_texts' - ]) -] - -dataset_type = 'KIEDataset' -data_root = 'data/wildreceipt' - -loader = dict( - type='HardDiskLoader', - repeat=1, - parser=dict( - type='LineJsonParser', - keys=['file_name', 'height', 'width', 'annotations'])) - -train = dict( - type=dataset_type, - ann_file=f'{data_root}/train.txt', - pipeline=train_pipeline, - img_prefix=data_root, - loader=loader, - dict_file=f'{data_root}/dict.txt', - test_mode=False) -test = dict( - type=dataset_type, - ann_file=f'{data_root}/test.txt', - pipeline=test_pipeline, - img_prefix=data_root, - loader=loader, - dict_file=f'{data_root}/dict.txt', - test_mode=True) - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=train, - val=test, - test=test) - 
-evaluation = dict( - interval=1, - metric='macro_f1', - metric_options=dict( - macro_f1=dict( - ignores=[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 25]))) - -model = dict( - type='SDMGR', - backbone=dict(type='UNet', base_channels=16), - bbox_head=dict( - type='SDMGRHead', visual_dim=16, num_chars=92, num_classes=26), - visual_modality=True, - train_cfg=None, - test_cfg=None, - class_list=f'{data_root}/class_list.txt') - -optimizer = dict(type='Adam', weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=1, - warmup_ratio=1, - step=[40, 50]) -total_epochs = 60 - -checkpoint_config = dict(interval=1) -log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')]) -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] - -find_unused_parameters = True diff --git a/spaces/tomofi/MMOCR/tests/test_dataset/test_test_time_aug.py b/spaces/tomofi/MMOCR/tests/test_dataset/test_test_time_aug.py deleted file mode 100644 index 5d68ac42ee3f5fd17fc05cef3632173b9396681c..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/tests/test_dataset/test_test_time_aug.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import pytest - -from mmocr.datasets.pipelines.test_time_aug import MultiRotateAugOCR - - -def test_resize_ocr(): - input_img1 = np.ones((64, 256, 3), dtype=np.uint8) - input_img2 = np.ones((64, 32, 3), dtype=np.uint8) - - rci = MultiRotateAugOCR(transforms=[], rotate_degrees=[0, 90, 270]) - - # test invalid arguments - with pytest.raises(AssertionError): - MultiRotateAugOCR(transforms=[], rotate_degrees=[45]) - with pytest.raises(AssertionError): - MultiRotateAugOCR(transforms=[], rotate_degrees=[20.5]) - - # test call with input_img1 - results = {'img_shape': input_img1.shape, 'img': input_img1} - results = rci(results) - assert np.allclose([64, 256, 3], results['img_shape']) - assert len(results['img']) == 1 - assert len(results['img_shape']) == 1 - assert np.allclose([64, 256, 3], results['img_shape'][0]) - - # test call with input_img2 - results = {'img_shape': input_img2.shape, 'img': input_img2} - results = rci(results) - assert np.allclose([64, 32, 3], results['img_shape']) - assert len(results['img']) == 3 - assert len(results['img_shape']) == 3 - assert np.allclose([64, 32, 3], results['img_shape'][0]) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_rpn_r50_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_rpn_r50_fpn_1x_coco.py deleted file mode 100644 index 27ab3e733bda1fb1c7c50cbd0f26597650b4c2e7..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_rpn_r50_fpn_1x_coco.py +++ /dev/null @@ -1,58 +0,0 @@ -_base_ = '../rpn/rpn_r50_fpn_1x_coco.py' -model = dict( - rpn_head=dict( - _delete_=True, - type='GARPNHead', - in_channels=256, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=8, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[8], - strides=[4, 8, 16, 32, 64]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.07, 0.07, 0.14, 0.14]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, 
.0, .0], - target_stds=[0.07, 0.07, 0.11, 0.11]), - loc_filter_thr=0.01, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict( - rpn=dict( - ga_assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - ignore_iof_thr=-1), - ga_sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - center_ratio=0.2, - ignore_ratio=0.5)), - test_cfg=dict(rpn=dict(nms_post=1000))) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py deleted file mode 100644 index a44c01831b508da0a5e1ca3720bb437bcea086d1..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_caffe_c4.py', - '../_base_/datasets/coco_instance.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/models/diffusion/ddim.py b/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/models/diffusion/ddim.py deleted file mode 100644 index fb31215db5c3f3f703f15987d7eee6a179c9f7ec..0000000000000000000000000000000000000000 --- a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,241 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \ - extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def 
register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - return x_dec \ No newline at end of file diff --git a/spaces/tovaru/vits-for-ba/text/symbols.py b/spaces/tovaru/vits-for-ba/text/symbols.py deleted file mode 100644 index ce7d043ce7c06e63fc60950127b978ad06abbe5d..0000000000000000000000000000000000000000 --- a/spaces/tovaru/vits-for-ba/text/symbols.py +++ /dev/null @@ -1,15 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ ' - - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/ttt246/brain/Extension/src/pages/Options/Options.tsx b/spaces/ttt246/brain/Extension/src/pages/Options/Options.tsx deleted file mode 100644 index f42e02dbd0a1cbf17a5972d66252800bbc9b1859..0000000000000000000000000000000000000000 --- a/spaces/ttt246/brain/Extension/src/pages/Options/Options.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import React from 'react'; -import './Options.css'; - -interface Props { - title: string; -} - -const Options: React.FC = ({ title }: Props) => { - return
          {title} Page
          ; -}; - -export default Options; diff --git a/spaces/ucalyptus/PTI/models/e4e/stylegan2/op/upfirdn2d.cpp b/spaces/ucalyptus/PTI/models/e4e/stylegan2/op/upfirdn2d.cpp deleted file mode 100644 index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/PTI/models/e4e/stylegan2/op/upfirdn2d.cpp +++ /dev/null @@ -1,23 +0,0 @@ -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff --git a/spaces/udion/BayesCap/src/losses.py b/spaces/udion/BayesCap/src/losses.py deleted file mode 100644 index 990af85be1163124a385b06ac5ffc63a47b0cfdd..0000000000000000000000000000000000000000 --- a/spaces/udion/BayesCap/src/losses.py +++ /dev/null @@ -1,131 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models as models -from torch import Tensor - -class ContentLoss(nn.Module): - """Constructs a content loss function based on the VGG19 network. - Using high-level feature mapping layers from the latter layers will focus more on the texture content of the image. - - Paper reference list: - -`Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network ` paper. - -`ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks ` paper. - -`Perceptual Extreme Super Resolution Network with Receptive Field Block ` paper. - - """ - - def __init__(self) -> None: - super(ContentLoss, self).__init__() - # Load the VGG19 model trained on the ImageNet dataset. - vgg19 = models.vgg19(pretrained=True).eval() - # Extract the thirty-sixth layer output in the VGG19 model as the content loss. - self.feature_extractor = nn.Sequential(*list(vgg19.features.children())[:36]) - # Freeze model parameters. - for parameters in self.feature_extractor.parameters(): - parameters.requires_grad = False - - # The preprocessing method of the input data. This is the VGG model preprocessing method of the ImageNet dataset. 
- self.register_buffer("mean", torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)) - self.register_buffer("std", torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)) - - def forward(self, sr: Tensor, hr: Tensor) -> Tensor: - # Standardized operations - sr = sr.sub(self.mean).div(self.std) - hr = hr.sub(self.mean).div(self.std) - - # Find the feature map difference between the two images - loss = F.l1_loss(self.feature_extractor(sr), self.feature_extractor(hr)) - - return loss - - -class GenGaussLoss(nn.Module): - def __init__( - self, reduction='mean', - alpha_eps = 1e-4, beta_eps=1e-4, - resi_min = 1e-4, resi_max=1e3 - ) -> None: - super(GenGaussLoss, self).__init__() - self.reduction = reduction - self.alpha_eps = alpha_eps - self.beta_eps = beta_eps - self.resi_min = resi_min - self.resi_max = resi_max - - def forward( - self, - mean: Tensor, one_over_alpha: Tensor, beta: Tensor, target: Tensor - ): - one_over_alpha1 = one_over_alpha + self.alpha_eps - beta1 = beta + self.beta_eps - - resi = torch.abs(mean - target) - # resi = torch.pow(resi*one_over_alpha1, beta1).clamp(min=self.resi_min, max=self.resi_max) - resi = (resi*one_over_alpha1*beta1).clamp(min=self.resi_min, max=self.resi_max) - ## check if resi has nans - if torch.sum(resi != resi) > 0: - print('resi has nans!!') - return None - - log_one_over_alpha = torch.log(one_over_alpha1) - log_beta = torch.log(beta1) - lgamma_beta = torch.lgamma(torch.pow(beta1, -1)) - - if torch.sum(log_one_over_alpha != log_one_over_alpha) > 0: - print('log_one_over_alpha has nan') - if torch.sum(lgamma_beta != lgamma_beta) > 0: - print('lgamma_beta has nan') - if torch.sum(log_beta != log_beta) > 0: - print('log_beta has nan') - - l = resi - log_one_over_alpha + lgamma_beta - log_beta - - if self.reduction == 'mean': - return l.mean() - elif self.reduction == 'sum': - return l.sum() - else: - print('Reduction not supported') - return None - -class TempCombLoss(nn.Module): - def __init__( - self, reduction='mean', - alpha_eps = 1e-4, beta_eps=1e-4, - resi_min = 1e-4, resi_max=1e3 - ) -> None: - super(TempCombLoss, self).__init__() - self.reduction = reduction - self.alpha_eps = alpha_eps - self.beta_eps = beta_eps - self.resi_min = resi_min - self.resi_max = resi_max - - self.L_GenGauss = GenGaussLoss( - reduction=self.reduction, - alpha_eps=self.alpha_eps, beta_eps=self.beta_eps, - resi_min=self.resi_min, resi_max=self.resi_max - ) - self.L_l1 = nn.L1Loss(reduction=self.reduction) - - def forward( - self, - mean: Tensor, one_over_alpha: Tensor, beta: Tensor, target: Tensor, - T1: float, T2: float - ): - l1 = self.L_l1(mean, target) - l2 = self.L_GenGauss(mean, one_over_alpha, beta, target) - l = T1*l1 + T2*l2 - - return l - - -# x1 = torch.randn(4,3,32,32) -# x2 = torch.rand(4,3,32,32) -# x3 = torch.rand(4,3,32,32) -# x4 = torch.randn(4,3,32,32) - -# L = GenGaussLoss(alpha_eps=1e-4, beta_eps=1e-4, resi_min=1e-4, resi_max=1e3) -# L2 = TempCombLoss(alpha_eps=1e-4, beta_eps=1e-4, resi_min=1e-4, resi_max=1e3) -# print(L(x1, x2, x3, x4), L2(x1, x2, x3, x4, 1e0, 1e-2)) \ No newline at end of file diff --git a/spaces/ulysses115/diffsvc_test/modules/parallel_wavegan/optimizers/__init__.py b/spaces/ulysses115/diffsvc_test/modules/parallel_wavegan/optimizers/__init__.py deleted file mode 100644 index a0e0c5932838281e912079e5784d84d43444a61a..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/diffsvc_test/modules/parallel_wavegan/optimizers/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from torch.optim import * # NOQA -from .radam import * # 
NOQA diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Arsenio Lupin online free The best sources to read and listen to the French master of suspense.md b/spaces/usbethFlerru/sovits-modelsV2/example/Arsenio Lupin online free The best sources to read and listen to the French master of suspense.md deleted file mode 100644 index 1dee18a56625725f1b186cc9e9ba7a8d4b4acab9..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Arsenio Lupin online free The best sources to read and listen to the French master of suspense.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Arsenio Lupin online free


          Download Zip ::: https://urlcod.com/2uyVKb



          -
          - aaccfb2cb3
          -
          -
          -

          diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Bs 6399 Part 3 1988 Pdf Download LINK.md b/spaces/usbethFlerru/sovits-modelsV2/example/Bs 6399 Part 3 1988 Pdf Download LINK.md deleted file mode 100644 index 6ceb8a1d01944ca1529207a3e63ed5282c0d5470..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Bs 6399 Part 3 1988 Pdf Download LINK.md +++ /dev/null @@ -1,32 +0,0 @@ -

          Bs 6399 Part 3 1988 Pdf Download


          Download File ===> https://urlcod.com/2uyUei



          -
          -2 Definitions. In this Part of BS 6399 the following terms are used . They are listed in alphabetical order and the definitions of the terms are shown immediately below the . - -3 Roof loads. In this Part of BS 6399 the following terms are used . They are listed in alphabetical order and the definitions of the terms are shown immediately below the . - -3.1 ‘Roof’. The term roof is used as an abbreviation for roof, outer covering or outer structure. - -3.1.1 Roof includes the whole of the building structure and any associated works, features, fittings or other materials which are attached to or incorporated in the building structure but are not part of the building structure itself. In this definition, the terms roofing and cover are used as synonyms for each other. - -3.1.2 Roofing is a roof structure. - -3.1.3 Covering is an outer covering of the building structure or any associated works, features, fittings or other materials which are attached to or incorporated in the building structure but are not part of the building structure itself. - -3.1.4 ‘Flat roof’ means a roof which is a plane. - -3.1.5 ‘Sheet roof’ means a flat roof supported on edge beams or rafters which are not penetrated by roof joists or trusses or which are supported by attachment to the edge beams or rafters. - -3.1.6 ‘Standing seam roof’ means a roof structure supported by sheet roof cladding or a standing seam metal roofing sheet. - -3.1.7 ‘Plywood roof’ means a roof structure supported by plywood sheets of a planar cross-sectional shape. - -3.2 Building. In this Part of BS 6399 the term building is used as an abbreviation for building, building structure, structure, building or building component. - -3.2.1 Building includes a building which is partly or wholly underground and includes a building which is partly or wholly enclosed by a wall or roof. A building which is partly or wholly enclosed by a wall or roof is not included in a building’s construction if the building is intended to serve another function. - -3.2.2 Building includes a building which is partly or wholly underground. - -3.2.3 Building includes a building which is partly or 4fefd39f24
          -
          -
          -

          diff --git a/spaces/user238921933/stable-diffusion-webui/.github/pull_request_template.md b/spaces/user238921933/stable-diffusion-webui/.github/pull_request_template.md deleted file mode 100644 index 69056331b8f56cb7a0f8bcff3010eb2a67c141c7..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/.github/pull_request_template.md +++ /dev/null @@ -1,28 +0,0 @@ -# Please read the [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing) before submitting a pull request! - -If you have a large change, pay special attention to this paragraph: - -> Before making changes, if you think that your feature will result in more than 100 lines changing, find me and talk to me about the feature you are proposing. It pains me to reject the hard work someone else did, but I won't add everything to the repo, and it's better if the rejection happens before you have to waste time working on the feature. - -Otherwise, after making sure you're following the rules described in wiki page, remove this section and continue on. - -**Describe what this pull request is trying to achieve.** - -A clear and concise description of what you're trying to accomplish with this, so your intent doesn't have to be extracted from your code. - -**Additional notes and description of your changes** - -More technical discussion about your changes go here, plus anything that a maintainer might have to specifically take a look at, or be wary of. - -**Environment this was tested in** - -List the environment you have developed / tested this on. As per the contributing page, changes should be able to work on Windows out of the box. - - OS: [e.g. Windows, Linux] - - Browser: [e.g. chrome, safari] - - Graphics card: [e.g. NVIDIA RTX 2080 8GB, AMD RX 6600 8GB] - -**Screenshots or videos of your changes** - -If applicable, screenshots or a video showing off your changes. If it edits an existing UI, it should ideally contain a comparison of what used to be there, before your changes were made. - -This is **required** for anything that touches the user interface. 
\ No newline at end of file diff --git a/spaces/user238921933/stable-diffusion-webui/webui.sh b/spaces/user238921933/stable-diffusion-webui/webui.sh deleted file mode 100644 index 8cdad22d310fed20f229b09d7a3160aeb1731a85..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/webui.sh +++ /dev/null @@ -1,186 +0,0 @@ -#!/usr/bin/env bash -################################################# -# Please do not make any changes to this file, # -# change the variables in webui-user.sh instead # -################################################# - -# If run from macOS, load defaults from webui-macos-env.sh -if [[ "$OSTYPE" == "darwin"* ]]; then - if [[ -f webui-macos-env.sh ]] - then - source ./webui-macos-env.sh - fi -fi - -# Read variables from webui-user.sh -# shellcheck source=/dev/null -if [[ -f webui-user.sh ]] -then - source ./webui-user.sh -fi - -# Set defaults -# Install directory without trailing slash -if [[ -z "${install_dir}" ]] -then - install_dir="/home/$(whoami)" -fi - -# Name of the subdirectory (defaults to stable-diffusion-webui) -if [[ -z "${clone_dir}" ]] -then - clone_dir="stable-diffusion-webui" -fi - -# python3 executable -if [[ -z "${python_cmd}" ]] -then - python_cmd="python3" -fi - -# git executable -if [[ -z "${GIT}" ]] -then - export GIT="git" -fi - -# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv) -if [[ -z "${venv_dir}" ]] -then - venv_dir="venv" -fi - -if [[ -z "${LAUNCH_SCRIPT}" ]] -then - LAUNCH_SCRIPT="launch.py" -fi - -# this script cannot be run as root by default -can_run_as_root=0 - -# read any command line flags to the webui.sh script -while getopts "f" flag > /dev/null 2>&1 -do - case ${flag} in - f) can_run_as_root=1;; - *) break;; - esac -done - -# Disable sentry logging -export ERROR_REPORTING=FALSE - -# Do not reinstall existing pip packages on Debian/Ubuntu -export PIP_IGNORE_INSTALLED=0 - -# Pretty print -delimiter="################################################################" - -printf "\n%s\n" "${delimiter}" -printf "\e[1m\e[32mInstall script for stable-diffusion + Web UI\n" -printf "\e[1m\e[34mTested on Debian 11 (Bullseye)\e[0m" -printf "\n%s\n" "${delimiter}" - -# Do not run as root -if [[ $(id -u) -eq 0 && can_run_as_root -eq 0 ]] -then - printf "\n%s\n" "${delimiter}" - printf "\e[1m\e[31mERROR: This script must not be launched as root, aborting...\e[0m" - printf "\n%s\n" "${delimiter}" - exit 1 -else - printf "\n%s\n" "${delimiter}" - printf "Running on \e[1m\e[32m%s\e[0m user" "$(whoami)" - printf "\n%s\n" "${delimiter}" -fi - -if [[ -d .git ]] -then - printf "\n%s\n" "${delimiter}" - printf "Repo already cloned, using it as install directory" - printf "\n%s\n" "${delimiter}" - install_dir="${PWD}/../" - clone_dir="${PWD##*/}" -fi - -# Check prerequisites -gpu_info=$(lspci 2>/dev/null | grep VGA) -case "$gpu_info" in - *"Navi 1"*|*"Navi 2"*) export HSA_OVERRIDE_GFX_VERSION=10.3.0 - ;; - *"Renoir"*) export HSA_OVERRIDE_GFX_VERSION=9.0.0 - printf "\n%s\n" "${delimiter}" - printf "Experimental support for Renoir: make sure to have at least 4GB of VRAM and 10GB of RAM or enable cpu mode: --use-cpu all --no-half" - printf "\n%s\n" "${delimiter}" - ;; - *) - ;; -esac -if echo "$gpu_info" | grep -q "AMD" && [[ -z "${TORCH_COMMAND}" ]] -then - export TORCH_COMMAND="pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2" -fi - -for preq in "${GIT}" "${python_cmd}" -do - if ! 
hash "${preq}" &>/dev/null - then - printf "\n%s\n" "${delimiter}" - printf "\e[1m\e[31mERROR: %s is not installed, aborting...\e[0m" "${preq}" - printf "\n%s\n" "${delimiter}" - exit 1 - fi -done - -if ! "${python_cmd}" -c "import venv" &>/dev/null -then - printf "\n%s\n" "${delimiter}" - printf "\e[1m\e[31mERROR: python3-venv is not installed, aborting...\e[0m" - printf "\n%s\n" "${delimiter}" - exit 1 -fi - -cd "${install_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/, aborting...\e[0m" "${install_dir}"; exit 1; } -if [[ -d "${clone_dir}" ]] -then - cd "${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; } -else - printf "\n%s\n" "${delimiter}" - printf "Clone stable-diffusion-webui" - printf "\n%s\n" "${delimiter}" - "${GIT}" clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git "${clone_dir}" - cd "${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; } -fi - -printf "\n%s\n" "${delimiter}" -printf "Create and activate python venv" -printf "\n%s\n" "${delimiter}" -cd "${install_dir}"/"${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; } -if [[ ! -d "${venv_dir}" ]] -then - "${python_cmd}" -m venv "${venv_dir}" - first_launch=1 -fi -# shellcheck source=/dev/null -if [[ -f "${venv_dir}"/bin/activate ]] -then - source "${venv_dir}"/bin/activate -else - printf "\n%s\n" "${delimiter}" - printf "\e[1m\e[31mERROR: Cannot activate python venv, aborting...\e[0m" - printf "\n%s\n" "${delimiter}" - exit 1 -fi - -if [[ ! -z "${ACCELERATE}" ]] && [ ${ACCELERATE}="True" ] && [ -x "$(command -v accelerate)" ] -then - printf "\n%s\n" "${delimiter}" - printf "Accelerating launch.py..." - printf "\n%s\n" "${delimiter}" - exec accelerate launch --num_cpu_threads_per_process=6 "${LAUNCH_SCRIPT}" "$@" -else - printf "\n%s\n" "${delimiter}" - printf "Launching launch.py..." - printf "\n%s\n" "${delimiter}" - exec "${python_cmd}" "${LAUNCH_SCRIPT}" "$@" -fi diff --git a/spaces/valhalla/minDALLE/examples/sampling_ex.py b/spaces/valhalla/minDALLE/examples/sampling_ex.py deleted file mode 100644 index 0b9e7de1564a1fae0aab7900e8b0e12fcb1f9d05..0000000000000000000000000000000000000000 --- a/spaces/valhalla/minDALLE/examples/sampling_ex.py +++ /dev/null @@ -1,63 +0,0 @@ -# ------------------------------------------------------------------------------------ -# Minimal DALL-E -# Copyright (c) 2021 KakaoBrain. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------ - -import os -import sys -import argparse -import clip -import numpy as np -from PIL import Image - -sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) - -from dalle.models import Dalle -from dalle.utils.utils import set_seed, clip_score - - -parser = argparse.ArgumentParser() -parser.add_argument('-n', '--num_candidates', type=int, default=96) -parser.add_argument('--prompt', type=str, default='A painting of a tree on the ocean') -parser.add_argument('--softmax-temperature', type=float, default=1.0) -parser.add_argument('--top-k', type=int, default=256) -parser.add_argument('--top-p', type=float, default=None, help='0.0 <= top-p <= 1.0') -parser.add_argument('--seed', type=int, default=0) - -args = parser.parse_args() - -# Setup -assert args.top_k <= 256, "It is recommended that top_k is set lower than 256." - -set_seed(args.seed) -device = 'cuda:0' -model = Dalle.from_pretrained('minDALL-E/1.3B') # This will automatically download the pretrained model. -model.to(device=device) - -# Sampling -images = model.sampling(prompt=args.prompt, - top_k=args.top_k, - top_p=args.top_p, - softmax_temperature=args.softmax_temperature, - num_candidates=args.num_candidates, - device=device).cpu().numpy() -images = np.transpose(images, (0, 2, 3, 1)) - -# CLIP Re-ranking -model_clip, preprocess_clip = clip.load("ViT-B/32", device=device) -model_clip.to(device=device) -rank = clip_score(prompt=args.prompt, - images=images, - model_clip=model_clip, - preprocess_clip=preprocess_clip, - device=device) - -# Save images -images = images[rank] -print(rank, images.shape) -if not os.path.exists('./figures'): - os.makedirs('./figures') -for i in range(min(16, args.num_candidates)): - im = Image.fromarray((images[i]*255).astype(np.uint8)) - im.save(f'./figures/{args.prompt}_{i}.png') diff --git a/spaces/veb-101/driver-drowsiness-detection/app.py b/spaces/veb-101/driver-drowsiness-detection/app.py deleted file mode 100644 index a77b4ce11d0b1d176e74ca7e27c15c65e403c502..0000000000000000000000000000000000000000 --- a/spaces/veb-101/driver-drowsiness-detection/app.py +++ /dev/null @@ -1,87 +0,0 @@ -import os -import av -import threading -import streamlit as st -import streamlit_nested_layout -from streamlit_webrtc import VideoHTMLAttributes, webrtc_streamer - -from audio_handling import AudioFrameHandler -from drowsy_detection import VideoFrameHandler -from ads import css_string - - -# Define the audio file to use. -alarm_file_path = os.path.join("audio", "wake_up.wav") - -# Streamlit Components -st.set_page_config( - page_title="Drowsiness Detection | LearnOpenCV", - page_icon="https://learnopencv.com/wp-content/uploads/2017/12/favicon.png", - layout="wide", # centered, wide - initial_sidebar_state="expanded", - menu_items={ - "About": "### Visit www.learnopencv.com for more exciting tutorials!!!", - }, -) - - -col1, col2 = st.columns(spec=[6, 2], gap="medium") - -with col1: - st.title("Drowsiness Detection!!!🥱😪😴") - with st.container(): - c1, c2 = st.columns(spec=[1, 1]) - with c1: - # The amount of time (in seconds) to wait before sounding the alarm. - WAIT_TIME = st.slider("Seconds to wait before sounding alarm:", 0.0, 5.0, 1.0, 0.25) - - with c2: - # Lowest valid value of Eye Aspect Ratio. Ideal values [0.15, 0.2]. 
- EAR_THRESH = st.slider("Eye Aspect Ratio threshold:", 0.0, 0.4, 0.18, 0.01) - -thresholds = { - "EAR_THRESH": EAR_THRESH, - "WAIT_TIME": WAIT_TIME, -} - -# For streamlit-webrtc -video_handler = VideoFrameHandler() -audio_handler = AudioFrameHandler(sound_file_path=alarm_file_path) - -lock = threading.Lock() # For thread-safe access & to prevent race-condition. -shared_state = {"play_alarm": False} - - -def video_frame_callback(frame: av.VideoFrame): - frame = frame.to_ndarray(format="bgr24") # Decode and convert frame to RGB - - frame, play_alarm = video_handler.process(frame, thresholds) # Process frame - with lock: - shared_state["play_alarm"] = play_alarm # Update shared state - - return av.VideoFrame.from_ndarray(frame, format="bgr24") # Encode and return BGR frame - - -def audio_frame_callback(frame: av.AudioFrame): - with lock: # access the current “play_alarm” state - play_alarm = shared_state["play_alarm"] - - new_frame: av.AudioFrame = audio_handler.process(frame, play_sound=play_alarm) - return new_frame - - -# https://github.com/whitphx/streamlit-webrtc/blob/main/streamlit_webrtc/config.py - -with col1: - ctx = webrtc_streamer( - key="drowsiness-detection", - video_frame_callback=video_frame_callback, - audio_frame_callback=audio_frame_callback, - rtc_configuration={"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]}, # Add this to config for cloud deployment. - media_stream_constraints={"video": {"height": {"ideal": 480}}, "audio": True}, - video_html_attrs=VideoHTMLAttributes(autoPlay=True, controls=False, muted=False), - ) - -with col2: - # Banner for newsletter subscription, jobs, and consulting. - st.markdown(css_string, unsafe_allow_html=True) diff --git a/spaces/vobecant/DaS/segmenter_model/utils.py b/spaces/vobecant/DaS/segmenter_model/utils.py deleted file mode 100644 index 792d40ffaa3472b7243d652b322f39b86dd20bf1..0000000000000000000000000000000000000000 --- a/spaces/vobecant/DaS/segmenter_model/utils.py +++ /dev/null @@ -1,526 +0,0 @@ -import math -# import segm.utils.torch as ptu -# from segm.engine import seg2rgb -from collections import namedtuple - -import numpy as np -import torch.nn as nn -import torch.nn.functional as F -from timm.models.layers import trunc_normal_ - -import torch - -CityscapesClass = namedtuple('CityscapesClass', ['name', 'id', 'train_id', 'category', 'category_id', - 'has_instances', 'ignore_in_eval', 'color']) - -classes = [ - CityscapesClass('unlabeled', 0, 255, 'void', 0, False, True, (0, 0, 0)), - CityscapesClass('ego vehicle', 1, 255, 'void', 0, False, True, (0, 0, 0)), - CityscapesClass('rectification border', 2, 255, 'void', 0, False, True, (0, 0, 0)), - CityscapesClass('out of roi', 3, 255, 'void', 0, False, True, (0, 0, 0)), - CityscapesClass('static', 4, 255, 'void', 0, False, True, (0, 0, 0)), - CityscapesClass('dynamic', 5, 255, 'void', 0, False, True, (111, 74, 0)), - CityscapesClass('ground', 6, 255, 'void', 0, False, True, (81, 0, 81)), - CityscapesClass('road', 7, 0, 'flat', 1, False, False, (128, 64, 128)), - CityscapesClass('sidewalk', 8, 1, 'flat', 1, False, False, (244, 35, 232)), - CityscapesClass('parking', 9, 255, 'flat', 1, False, True, (250, 170, 160)), - CityscapesClass('rail track', 10, 255, 'flat', 1, False, True, (230, 150, 140)), - CityscapesClass('building', 11, 2, 'construction', 2, False, False, (70, 70, 70)), - CityscapesClass('wall', 12, 3, 'construction', 2, False, False, (102, 102, 156)), - CityscapesClass('fence', 13, 4, 'construction', 2, False, False, (190, 153, 153)), - CityscapesClass('guard 
rail', 14, 255, 'construction', 2, False, True, (180, 165, 180)), - CityscapesClass('bridge', 15, 255, 'construction', 2, False, True, (150, 100, 100)), - CityscapesClass('tunnel', 16, 255, 'construction', 2, False, True, (150, 120, 90)), - CityscapesClass('pole', 17, 5, 'object', 3, False, False, (153, 153, 153)), - CityscapesClass('polegroup', 18, 255, 'object', 3, False, True, (153, 153, 153)), - CityscapesClass('traffic light', 19, 6, 'object', 3, False, False, (250, 170, 30)), - CityscapesClass('traffic sign', 20, 7, 'object', 3, False, False, (220, 220, 0)), - CityscapesClass('vegetation', 21, 8, 'nature', 4, False, False, (107, 142, 35)), - CityscapesClass('terrain', 22, 9, 'nature', 4, False, False, (152, 251, 152)), - CityscapesClass('sky', 23, 10, 'sky', 5, False, False, (70, 130, 180)), - CityscapesClass('person', 24, 11, 'human', 6, True, False, (220, 20, 60)), - CityscapesClass('rider', 25, 12, 'human', 6, True, False, (255, 0, 0)), - CityscapesClass('car', 26, 13, 'vehicle', 7, True, False, (0, 0, 142)), - CityscapesClass('truck', 27, 14, 'vehicle', 7, True, False, (0, 0, 70)), - CityscapesClass('bus', 28, 15, 'vehicle', 7, True, False, (0, 60, 100)), - CityscapesClass('caravan', 29, 255, 'vehicle', 7, True, True, (0, 0, 90)), - CityscapesClass('trailer', 30, 255, 'vehicle', 7, True, True, (0, 0, 110)), - CityscapesClass('train', 31, 16, 'vehicle', 7, True, False, (0, 80, 100)), - CityscapesClass('motorcycle', 32, 17, 'vehicle', 7, True, False, (0, 0, 230)), - CityscapesClass('bicycle', 33, 18, 'vehicle', 7, True, False, (119, 11, 32)), - CityscapesClass('license plate', -1, -1, 'vehicle', 7, False, True, (0, 0, 142)), -] - -cityscapes_id_to_trainID = {cls.id: cls.train_id for cls in classes} -cityscapes_trainID_to_testID = {cls.train_id: cls.id for cls in classes} -cityscapes_trainID_to_color = {cls.train_id: cls.color for cls in classes} -cityscapes_trainID_to_name = {cls.train_id: cls.name for cls in classes} -cityscapes_trainID_to_color[255] = (0, 0, 0) -cityscapes_trainID_to_name = {cls.train_id: cls.name for cls in classes} -cityscapes_trainID_to_name[255] = 'ignore' -cityscapes_trainID_to_name[19] = 'ignore' - - -def map2cs(seg): - while len(seg.shape) > 2: - seg = seg[0] - colors = cityscapes_trainID_to_color - # assert False, 'set ignore_idx color to black, make sure that it is not in colors' - rgb = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for l in np.unique(seg): - rgb[seg == l, :] = colors[l] - return rgb - - -def get_colors(num_colors): - from PIL import ImageColor - import matplotlib - hex_colors = [ - # "#000000", # keep the black reserved - "#FFFF00", "#1CE6FF", "#FF34FF", "#FF4A46", "#008941", "#006FA6", "#A30059", - "#FFDBE5", "#7A4900", "#0000A6", "#63FFAC", "#B79762", "#004D43", "#8FB0FF", "#997D87", - "#5A0007", "#809693", "#FEFFE6", "#1B4400", "#4FC601", "#3B5DFF", "#4A3B53", "#FF2F80", - "#61615A", "#BA0900", "#6B7900", "#00C2A0", "#FFAA92", "#FF90C9", "#B903AA", "#D16100", - "#DDEFFF", "#000035", "#7B4F4B", "#A1C299", "#300018", "#0AA6D8", "#013349", "#00846F", - "#372101", "#FFB500", "#C2FFED", "#A079BF", "#CC0744", "#C0B9B2", "#C2FF99", "#001E09", - "#00489C", "#6F0062", "#0CBD66", "#EEC3FF", "#456D75", "#B77B68", "#7A87A1", "#788D66", - "#885578", "#FAD09F", "#FF8A9A", "#D157A0", "#BEC459", "#456648", "#0086ED", "#886F4C", - "#34362D", "#B4A8BD", "#00A6AA", "#452C2C", "#636375", "#A3C8C9", "#FF913F", "#938A81", - "#575329", "#00FECF", "#B05B6F", "#8CD0FF", "#3B9700", "#04F757", "#C8A1A1", "#1E6E00", - "#7900D7", "#A77500", 
"#6367A9", "#A05837", "#6B002C", "#772600", "#D790FF", "#9B9700", - "#549E79", "#FFF69F", "#201625", "#72418F", "#BC23FF", "#99ADC0", "#3A2465", "#922329", - "#5B4534", "#FDE8DC", "#404E55", "#0089A3", "#CB7E98", "#A4E804", "#324E72", "#6A3A4C", - "#83AB58", "#001C1E", "#D1F7CE", "#004B28", "#C8D0F6", "#A3A489", "#806C66", "#222800", - "#BF5650", "#E83000", "#66796D", "#DA007C", "#FF1A59", "#8ADBB4", "#1E0200", "#5B4E51", - "#C895C5", "#320033", "#FF6832", "#66E1D3", "#CFCDAC", "#D0AC94", "#7ED379", "#012C58", - ] - hex_colors_mlib = list(matplotlib.colors.cnames.values()) - for hcm in hex_colors_mlib: - if hcm not in hex_colors: - hex_colors.append(hcm) - colors = [ImageColor.getrgb(hex) for hex in hex_colors] - return colors[:num_colors] - - -def colorize_one(seg, ignore=255, colors=None, ncolors=32): - unq = np.unique(seg) - if ncolors is not None: - ncolors = max(ncolors, max(unq)) - else: - ncolors = max(unq) - colors = get_colors(ncolors) if colors is None else colors - h, w = seg.shape - c = 3 - rgb = np.zeros((h, w, c), dtype=np.uint8) - for l in unq: - if ignore is not None and l == ignore: - continue - try: - rgb[seg == l, :] = colors[l] - except: - raise Exception(l) - return rgb - - -def init_weights(m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=0.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - -def resize_pos_embed(posemb, grid_old_shape, grid_new_shape, num_extra_tokens): - # Rescale the grid of position embeddings when loading from state_dict. Adapted from - # https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224 - posemb_tok, posemb_grid = ( - posemb[:, :num_extra_tokens], - posemb[0, num_extra_tokens:], - ) - if grid_old_shape is None: - gs_old_h = int(math.sqrt(len(posemb_grid))) - gs_old_w = gs_old_h - else: - gs_old_h, gs_old_w = grid_old_shape - - gs_h, gs_w = grid_new_shape - posemb_grid = posemb_grid.reshape(1, gs_old_h, gs_old_w, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - return posemb - - -def checkpoint_filter_fn(state_dict, model): - """ convert patch embedding weight from manual patchify + linear proj to conv""" - out_dict = {} - if "model" in state_dict: - # For deit models - state_dict = state_dict["model"] - num_extra_tokens = 1 + ("dist_token" in state_dict.keys()) - patch_size = model.patch_size - image_size = model.patch_embed.image_size - for k, v in state_dict.items(): - if k == "pos_embed" and v.shape != model.pos_embed.shape: - # To resize pos embedding when using model at different size from pretrained weights - v = resize_pos_embed( - v, - None, - (image_size[0] // patch_size, image_size[1] // patch_size), - num_extra_tokens, - ) - out_dict[k] = v - return out_dict - - -def padding(im, patch_size, fill_value=0): - # make the image sizes divisible by patch_size - H, W = im.size(2), im.size(3) - pad_h, pad_w = 0, 0 - if H % patch_size > 0: - pad_h = patch_size - (H % patch_size) - if W % patch_size > 0: - pad_w = patch_size - (W % patch_size) - im_padded = im - if pad_h > 0 or pad_w > 0: - im_padded = F.pad(im, (0, pad_w, 0, pad_h), value=fill_value) - return im_padded - - -def unpadding(y, target_size): - H, W = target_size - 
H_pad, W_pad = y.size(2), y.size(3) - # crop predictions on extra pixels coming from padding - extra_h = H_pad - H - extra_w = W_pad - W - if extra_h > 0: - y = y[:, :, :-extra_h] - if extra_w > 0: - y = y[:, :, :, :-extra_w] - return y - - -def resize(im, smaller_size): - h, w = im.shape[2:] - if h < w: - ratio = w / h - h_res, w_res = smaller_size, ratio * smaller_size - else: - ratio = h / w - h_res, w_res = ratio * smaller_size, smaller_size - if min(h, w) < smaller_size: - im_res = F.interpolate(im, (int(h_res), int(w_res)), mode="bilinear") - else: - im_res = im - return im_res - - -def sliding_window(im, flip, window_size, window_stride, channels_first=True): - if channels_first: - B, C, H, W = im.shape - else: - B, H, W, C = im.shape - ws = window_size - - windows = {"crop": [], "anchors": []} - h_anchors = torch.arange(0, H, window_stride) - w_anchors = torch.arange(0, W, window_stride) - h_anchors = [h.item() for h in h_anchors if h < H - ws] + [H - ws] - w_anchors = [w.item() for w in w_anchors if w < W - ws] + [W - ws] - for ha in h_anchors: - for wa in w_anchors: - if channels_first: - window = im[:, :, ha: ha + ws, wa: wa + ws] - else: - window = im[:, ha: ha + ws, wa: wa + ws] - windows["crop"].append(window) - windows["anchors"].append((ha, wa)) - windows["flip"] = flip - windows["shape"] = (H, W) - return windows - - -def merge_windows(windows, window_size, ori_shape, no_softmax=False, no_upsample=False, patch_size=None): - ws = window_size - im_windows = windows["seg_maps"] - anchors = windows["anchors"] - C = im_windows[0].shape[0] - H, W = windows["shape"] - flip = windows["flip"] - - if no_upsample: - H, W = H // patch_size, W // patch_size - - logit = torch.zeros((C, H, W), device=im_windows.device) - count = torch.zeros((1, H, W), device=im_windows.device) - for window, (ha, wa) in zip(im_windows, anchors): - if no_upsample: - ha = ha // patch_size - wa = wa // patch_size - logit[:, ha: ha + ws, wa: wa + ws] += window - count[:, ha: ha + ws, wa: wa + ws] += 1 - logit /= count - # print('Interpolate {} -> {}'.format(logit.shape, ori_shape)) - if not no_upsample: - logit = F.interpolate( - logit.unsqueeze(0), - ori_shape, - mode="bilinear", - )[0] - if flip: - logit = torch.flip(logit, (2,)) - if not no_softmax: - # print('Softmax in merge_windows') - result = F.softmax(logit, 0) - else: - # print('No softmax in merge_windows') - result = logit - return result - - -def debug_windows(windows, debug_file): - pass - - -def inference_picie( - model, - classifier, - metric_test, - ims, - ori_shape, - window_size, - window_stride, - batch_size, - decoder_features=False, - no_upsample=False, - debug_file=None, - im_rgb=None, - channel_first=False -): - try: - C = model.n_cls - except: - C = classifier.module.bias.shape[0] - - # seg_maps = [] - - # for im, im_metas in zip(ims, ims_metas): - for im in ims: - im = im.to('cuda') - if len(im.shape) == 3: - im = im.unsqueeze(0) - flip = False # im_metas["flip"] - windows = sliding_window(im, flip, window_size, window_stride) - crops = torch.stack(windows.pop("crop"))[:, 0] - num_crops = len(crops) - - WB = batch_size if batch_size > 0 else num_crops - if no_upsample: - window_size = window_size // model.patch_size - seg_maps = torch.zeros((num_crops, C, window_size, window_size), device=im.device) - with torch.no_grad(): - for i in range(0, num_crops, WB): - # try: - feats = model.forward(crops[i: i + WB]) - if metric_test == 'cosine': - feats = F.normalize(feats, dim=1, p=2) - probs = classifier(feats) - probs = 
F.interpolate(probs, crops[i: i + WB].shape[-2:], mode='bilinear', align_corners=False) - seg_maps[i: i + WB] = probs - windows["seg_maps"] = seg_maps - - im_seg_map = merge_windows(windows, window_size, ori_shape, no_softmax=decoder_features, - no_upsample=no_upsample, patch_size=None) - - seg_map = im_seg_map - if no_upsample and not decoder_features: - pass - else: - seg_map = F.interpolate( - seg_map.unsqueeze(0), - ori_shape, - mode="bilinear", - ) - - return seg_map - - -def inference( - model, - ims, - ori_shape, - window_size, - window_stride, - batch_size, - decoder_features=False, - encoder_features=False, - no_upsample=False, -): - C = model.n_cls - patch_size = model.patch_size - - # seg_maps = [] - - # for im, im_metas in zip(ims, ims_metas): - for im in ims: - # im = im.to('cuda') - if len(im.shape) == 3: - im = im.unsqueeze(0) - # im = resize(im, window_size) - flip = False # im_metas["flip"] - # print(im) - windows = sliding_window(im, flip, window_size, window_stride) - # print(windows) - crops = torch.stack(windows.pop("crop"))[:, 0] - num_crops = len(crops) - - WB = batch_size if batch_size > 0 else num_crops - if no_upsample: - window_size = window_size // model.patch_size - # print('Change variable window_size to {}'.format(window_size)) - seg_maps = torch.zeros((num_crops, C, window_size, window_size), device=im.device) - # print('Allocated segm_maps: {}, device: {}'.format(seg_maps.shape, seg_maps.device)) - with torch.no_grad(): - for i in range(0, num_crops, WB): - # try: - # print('Forward crop {}'.format(crops[i: i + WB].shape)) - seg_maps[i: i + WB] = model.forward(crops[i: i + WB], decoder_features=decoder_features, - encoder_features=encoder_features, - no_upsample=no_upsample) - windows["seg_maps"] = seg_maps - - im_seg_map = merge_windows(windows, window_size, ori_shape, no_softmax=decoder_features, - no_upsample=no_upsample, patch_size=model.patch_size) - - seg_map = im_seg_map - if no_upsample and not decoder_features: - pass - else: - seg_map = F.interpolate( - seg_map.unsqueeze(0), - ori_shape, - mode="bilinear", - ) - # seg_maps.append(seg_map) - - # print('Done one inference.') - # seg_maps = torch.cat(seg_maps, dim=0) - return seg_map - - -def inference_features( - model, - ims, - ori_shape, - window_size, - window_stride, - batch_size, - decoder_features=False, - encoder_features=False, - save2cpu=False, - no_upsample=True, - encoder_only=False -): - C = model.n_cls if decoder_features else model.encoder.d_model - patch_size = model.patch_size - - # seg_maps = [] - - # for im, im_metas in zip(ims, ims_metas): - for im in ims: - im = im.to('cuda') - if len(im.shape) == 3: - im = im.unsqueeze(0) - # im = resize(im, window_size) - flip = False # im_metas["flip"] - # print(im) - windows = sliding_window(im, flip, window_size, window_stride) - # print(windows) - crops = torch.stack(windows.pop("crop"))[:, 0] - num_crops = len(crops) - - WB = batch_size if batch_size > 0 else num_crops - if no_upsample: - window_size = window_size // model.patch_size - # print('Change variable window_size to {}'.format(window_size)) - enc_maps = torch.zeros((num_crops, C, window_size, window_size), device=im.device) - if decoder_features: - dec_maps = torch.zeros((num_crops, C, window_size, window_size), device=im.device) - # print('Allocated segm_maps: {}, device: {}'.format(seg_maps.shape, seg_maps.device)) - with torch.no_grad(): - for i in range(0, num_crops, WB): - enc_fts = model.forward(crops[i: i + WB], decoder_features=decoder_features, - encoder_features=True, - 
no_upsample=no_upsample, encoder_only=encoder_only) - if decoder_features: - enc_fts, dec_fts = enc_fts - dec_maps[i: i + WB] = dec_fts - elif isinstance(enc_fts, tuple): - enc_fts = enc_fts[0] - enc_maps[i: i + WB] = enc_fts - - windows["seg_maps"] = enc_maps - im_enc_map = merge_windows(windows, window_size, ori_shape, no_softmax=decoder_features, - no_upsample=no_upsample, patch_size=model.patch_size) - - if decoder_features: - windows["seg_maps"] = dec_maps - im_dec_map = merge_windows(windows, window_size, ori_shape, no_softmax=decoder_features, - no_upsample=no_upsample, patch_size=model.patch_size) - - if no_upsample: - pass - else: - im_enc_map = F.interpolate( - im_enc_map.unsqueeze(0), - ori_shape, - mode="bilinear", - ) - if decoder_features: - im_dec_map = F.interpolate( - im_dec_map.unsqueeze(0), - ori_shape, - mode="bilinear", - ) - - im_enc_map = im_enc_map.cpu().numpy() - if decoder_features: - im_dec_map = im_dec_map.cpu().numpy() - return im_enc_map, im_dec_map - - return im_enc_map - - -def inference_conv( - model, - ims, - ims_metas, - ori_shape -): - assert len(ims) == 1 - for im, im_metas in zip(ims, ims_metas): - im = im.to(ptu.device) - if len(im.shape) < 4: - im = im.unsqueeze(0) - logits = model(im) - if ori_shape[:2] != logits.shape[-2:]: - # resize - logits = F.interpolate( - logits, - ori_shape[-2:], - mode="bilinear", - ) - # 3) applies softmax - result = F.softmax(logits.squeeze(), 0) - # print(result.shape) - return result - - -def num_params(model): - model_parameters = filter(lambda p: p.requires_grad, model.parameters()) - n_params = sum([torch.prod(torch.tensor(p.size())) for p in model_parameters]) - if not type(n_params) == int: - n_params = n_params.item() - return n_params diff --git a/spaces/weibinke/vits-simple-api/static/js/jquery.slim.min.js b/spaces/weibinke/vits-simple-api/static/js/jquery.slim.min.js deleted file mode 100644 index 36b4e1a137828dc488ed9a2e704b74cb35815759..0000000000000000000000000000000000000000 --- a/spaces/weibinke/vits-simple-api/static/js/jquery.slim.min.js +++ /dev/null @@ -1,2 +0,0 @@ -/*! 
jQuery v3.5.1 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/Tween,-effects/animatedSelector | (c) JS Foundation and other contributors | jquery.org/license */ -!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(g,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,v=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,y=n.hasOwnProperty,a=y.toString,l=a.call(Object),m={},b=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType},x=function(e){return null!=e&&e===e.window},w=g.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function C(e,t,n){var r,i,o=(n=n||w).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function T(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.5.1 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/Tween,-effects/animatedSelector",E=function(e,t){return new E.fn.init(e,t)};function d(e){var t=!!e&&"length"in e&&e.length,n=T(e);return!b(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+R+")"+R+"*"),U=new RegExp(R+"|>"),V=new RegExp(W),X=new RegExp("^"+B+"$"),Q={ID:new RegExp("^#("+B+")"),CLASS:new RegExp("^\\.("+B+")"),TAG:new RegExp("^("+B+"|[*])"),ATTR:new RegExp("^"+M),PSEUDO:new RegExp("^"+W),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+R+"*(even|odd|(([+-]|)(\\d*)n|)"+R+"*(?:([+-]|)"+R+"*(\\d+)|))"+R+"*\\)|)","i"),bool:new RegExp("^(?:"+I+")$","i"),needsContext:new RegExp("^"+R+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+R+"*((?:-\\d)?\\d*)"+R+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,G=/^(?:input|select|textarea|button)$/i,K=/^h\d$/i,J=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new RegExp("\\\\[\\da-fA-F]{1,6}"+R+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){C()},ae=xe(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{O.apply(t=P.call(d.childNodes),d.childNodes),t[d.childNodes.length].nodeType}catch(e){O={apply:t.length?function(e,t){q.apply(e,P.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,d=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==d&&9!==d&&11!==d)return n;if(!r&&(C(e),e=e||T,E)){if(11!==d&&(u=Z.exec(t)))if(i=u[1]){if(9===d){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return O.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&p.getElementsByClassName&&e.getElementsByClassName)return 
O.apply(n,e.getElementsByClassName(i)),n}if(p.qsa&&!k[t+" "]&&(!v||!v.test(t))&&(1!==d||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===d&&(U.test(t)||_.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&p.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=A)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+be(l[o]);c=l.join(",")}try{return O.apply(n,f.querySelectorAll(c)),n}catch(e){k(t,!0)}finally{s===A&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>x.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[A]=!0,e}function ce(e){var t=T.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function fe(e,t){var n=e.split("|"),r=n.length;while(r--)x.attrHandle[n[r]]=t}function de(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function pe(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in p=se.support={},i=se.isXML=function(e){var t=e.namespaceURI,n=(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},C=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:d;return r!=T&&9===r.nodeType&&r.documentElement&&(a=(T=r).documentElement,E=!i(T),d!=T&&(n=T.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),p.scope=ce(function(e){return a.appendChild(e).appendChild(T.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),p.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),p.getElementsByTagName=ce(function(e){return e.appendChild(T.createComment("")),!e.getElementsByTagName("*").length}),p.getElementsByClassName=J.test(T.getElementsByClassName),p.getById=ce(function(e){return a.appendChild(e).id=A,!T.getElementsByName||!T.getElementsByName(A).length}),p.getById?(x.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return e.getAttribute("id")===t}},x.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(x.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},x.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),x.find.TAG=p.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):p.qsa?t.querySelectorAll(e):void 0}:function(e,t){var 
n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},x.find.CLASS=p.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(p.qsa=J.test(T.querySelectorAll))&&(ce(function(e){var t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+R+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+R+"*(?:value|"+I+")"),e.querySelectorAll("[id~="+A+"-]").length||v.push("~="),(t=T.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+R+"*name"+R+"*="+R+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+A+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var t=T.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+R+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(p.matchesSelector=J.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){p.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",W)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=J.test(a.compareDocumentPosition),y=t||J.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},D=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!p.sortDetached&&t.compareDocumentPosition(e)===n?e==T||e.ownerDocument==d&&y(d,e)?-1:t==T||t.ownerDocument==d&&y(d,t)?1:u?H(u,e)-H(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==T?-1:t==T?1:i?-1:o?1:u?H(u,e)-H(u,t):0;if(i===o)return de(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?de(a[r],s[r]):a[r]==d?-1:s[r]==d?1:0}),T},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(C(e),p.matchesSelector&&E&&!k[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||p.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){k(t,!0)}return 0":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return 
Q.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&V.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+R+")"+e+"("+R+"|$)"))&&m(e,function(e){return t.test("string"==typeof e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function D(e,n,r){return b(n)?E.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?E.grep(e,function(e){return e===n!==r}):"string"!=typeof n?E.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(E.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||L,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:j.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof E?t[0]:t,E.merge(this,E.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:w,!0)),k.test(r[1])&&E.isPlainObject(t))for(r in t)b(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=w.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return e.nodeType?(this[0]=e,this.length=1,this):b(e)?void 0!==n.ready?n.ready(e):e(E):E.makeArray(e,this)}).prototype=E.fn,L=E(w);var q=/^(?:parents|prev(?:Until|All))/,O={children:!0,contents:!0,next:!0,prev:!0};function P(e,t){while((e=e[t])&&1!==e.nodeType);return e}E.fn.extend({has:function(e){var t=E(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,pe=/^$|^module$|\/(?:java|ecma)script/i;le=w.createDocumentFragment().appendChild(w.createElement("div")),(ce=w.createElement("input")).setAttribute("type","radio"),ce.setAttribute("checked","checked"),ce.setAttribute("name","t"),le.appendChild(ce),m.checkClone=le.cloneNode(!0).cloneNode(!0).lastChild.checked,le.innerHTML="",m.noCloneChecked=!!le.cloneNode(!0).lastChild.defaultValue,le.innerHTML="",m.option=!!le.lastChild;var he={thead:[1,"","
          "],col:[2,"","
          "],tr:[2,"","
          "],td:[3,"","
          "],_default:[0,"",""]};function ge(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&S(e,t)?E.merge([e],n):n}function ve(e,t){for(var n=0,r=e.length;n",""]);var ye=/<|&#?\w+;/;function me(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),d=[],p=0,h=e.length;p\s*$/g;function Le(e,t){return S(e,"table")&&S(11!==t.nodeType?t:t.firstChild,"tr")&&E(e).children("tbody")[0]||e}function je(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function qe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Oe(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n
          ",2===ft.childNodes.length),E.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(m.createHTMLDocument?((r=(t=w.implementation.createHTMLDocument("")).createElement("base")).href=w.location.href,t.head.appendChild(r)):t=w),o=!n&&[],(i=k.exec(e))?[t.createElement(i[1])]:(i=me([e],t,o),o&&o.length&&E(o).remove(),E.merge([],i.childNodes)));var r,i,o},E.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=E.css(e,"position"),c=E(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=E.css(e,"top"),u=E.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),b(t)&&(t=t.call(e,n,E.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):("number"==typeof f.top&&(f.top+="px"),"number"==typeof f.left&&(f.left+="px"),c.css(f))}},E.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){E.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var e,t,n,r=this[0],i={top:0,left:0};if("fixed"===E.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===E.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=E(e).offset()).top+=E.css(e,"borderTopWidth",!0),i.left+=E.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-E.css(r,"marginTop",!0),left:t.left-i.left-E.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===E.css(e,"position"))e=e.offsetParent;return e||re})}}),E.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;E.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),E.each(["top","left"],function(e,n){E.cssHooks[n]=Fe(m.pixelPosition,function(e,t){if(t)return t=We(e,n),Ie.test(t)?E(e).position()[n]+"px":t})}),E.each({Height:"height",Width:"width"},function(a,s){E.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){E.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?E.css(e,t,i):E.style(e,t,n,i)},s,n?e:void 0,n)}})}),E.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return this.mouseenter(e).mouseleave(t||e)}}),E.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){E.fn[n]=function(e,t){return 0 str: - """read the content of target file - """ - with open(file_path, 
'r', encoding='utf-8') as f: - content = f.read() - - return content - -def initialize_and_load_models(): - - checkpoint_path = 'model/cloth_segm.pth' - net = load_seg_model(checkpoint_path, device=device) - - return net - -net = initialize_and_load_models() -palette = get_palette(4) - - -def run(img): - - cloth_seg = generate_mask(img, net=net, palette=palette, device=device) - return cloth_seg - -# Define input and output interfaces -input_image = gr.inputs.Image(label="Input Image", type="pil") - -# Define the Gradio interface -cloth_seg_image = gr.outputs.Image(label="Cloth Segmentation", type="pil") - -title = "Demo for Cloth Segmentation" -description = "An app for Cloth Segmentation" -inputs = [input_image] -outputs = [cloth_seg_image] - -css = ''' -.container {max-width: 1150px;margin: auto;padding-top: 1.5rem} -#image_upload{min-height:400px} -#image_upload [data-testid="image"], #image_upload [data-testid="image"] > div{min-height: 400px} -#mask_radio .gr-form{background:transparent; border: none} -#word_mask{margin-top: .75em !important} -#word_mask textarea:disabled{opacity: 0.3} -.footer {margin-bottom: 45px;margin-top: 35px;text-align: center;border-bottom: 1px solid #e5e5e5} -.footer>p {font-size: .8rem; display: inline-block; padding: 0 10px;transform: translateY(10px);background: white} -.dark .footer {border-color: #303030} -.dark .footer>p {background: #0b0f19} -.acknowledgments h4{margin: 1.25em 0 .25em 0;font-weight: bold;font-size: 115%} -#image_upload .touch-none{display: flex} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -''' -example={} -image_dir='input' - -image_list=[os.path.join(image_dir,file) for file in os.listdir(image_dir)] -image_list.sort() - - -image_blocks = gr.Blocks(css=css) -with image_blocks as demo: - gr.HTML(read_content("header.html")) - with gr.Group(): - with gr.Box(): - with gr.Row(): - with gr.Column(): - image = gr.Image(source='upload', elem_id="image_upload", type="pil", label="Input Image") - - - with gr.Column(): - image_out = gr.Image(label="Output", elem_id="output-img").style(height=400) - - - - - - with gr.Row(): - with gr.Column(): - gr.Examples(image_list, inputs=[image],label="Examples - Input Images",examples_per_page=12) - with gr.Column(): - with gr.Row(elem_id="prompt-container").style(mobile_collapse=False, equal_height=True): - btn = gr.Button("Run!").style( - margin=False, - rounded=(False, True, True, False), - full_width=True, - ) - - - - btn.click(fn=run, inputs=[image], outputs=[image_out]) - - - - - gr.HTML( - """ - -
          -

          ACKNOWLEDGEMENTS

          -

          - The U2net model is from the original u2net repo. Thanks to Xuebin Qin for the amazing repo.

          -

          Code is modified from levindabhi/cloth-segmentation -

          - """ - ) - -image_blocks.launch() \ No newline at end of file diff --git a/spaces/wilson1/bingo/src/lib/isomorphic/browser.ts b/spaces/wilson1/bingo/src/lib/isomorphic/browser.ts deleted file mode 100644 index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingo/src/lib/isomorphic/browser.ts +++ /dev/null @@ -1,11 +0,0 @@ -'use client' - -const debug = console.info.bind(console) - -class WebSocketAlias extends WebSocket { - constructor(address: string | URL, ...args: any) { - super(address) - } -} - -export default { fetch, WebSocket: WebSocketAlias, debug } diff --git a/spaces/wong26/faster-whisper-webui/tests/segments_test.py b/spaces/wong26/faster-whisper-webui/tests/segments_test.py deleted file mode 100644 index d829f1c77f74b3c96513fe4965d532cf2d1dceb4..0000000000000000000000000000000000000000 --- a/spaces/wong26/faster-whisper-webui/tests/segments_test.py +++ /dev/null @@ -1,48 +0,0 @@ -import sys -import unittest - -sys.path.append('../whisper-webui') - -from src.segments import merge_timestamps - -class TestSegments(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestSegments, self).__init__(*args, **kwargs) - - def test_merge_segments(self): - segments = [ - {'start': 10.0, 'end': 20.0}, - {'start': 22.0, 'end': 27.0}, - {'start': 31.0, 'end': 35.0}, - {'start': 45.0, 'end': 60.0}, - {'start': 61.0, 'end': 65.0}, - {'start': 68.0, 'end': 98.0}, - {'start': 100.0, 'end': 102.0}, - {'start': 110.0, 'end': 112.0} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 9.0, 'end': 36.0}, - {'start': 44.0, 'end': 66.0}, - {'start': 67.0, 'end': 99.0}, - {'start': 99.0, 'end': 103.0}, - {'start': 109.0, 'end': 113.0} - ]) - - def test_overlap_next(self): - segments = [ - {'start': 5.0, 'end': 39.182}, - {'start': 39.986, 'end': 40.814} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 4.0, 'end': 39.584}, - {'start': 39.584, 'end': 41.814} - ]) - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/xcchen/xcchenvits-uma-genshin-honkai/commons.py b/spaces/xcchen/xcchenvits-uma-genshin-honkai/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/xcchen/xcchenvits-uma-genshin-honkai/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/xiaoxuezi/spleeter/spleeter/model/__init__.py b/spaces/xiaoxuezi/spleeter/spleeter/model/__init__.py deleted file mode 100644 index f8fa5d0ec79a22a18db05379eea2d986140c1035..0000000000000000000000000000000000000000 --- a/spaces/xiaoxuezi/spleeter/spleeter/model/__init__.py +++ /dev/null @@ -1,573 +0,0 @@ -#!/usr/bin/env python -# coding: utf8 - -""" This package provide an estimator builder as well as model functions. """ - -import importlib - -# pyright: reportMissingImports=false -# pylint: disable=import-error -import tensorflow as tf -from tensorflow.signal import hann_window, inverse_stft, stft - -from ..utils.tensor import pad_and_partition, pad_and_reshape - -# pylint: enable=import-error - - -__email__ = "spleeter@deezer.com" -__author__ = "Deezer Research" -__license__ = "MIT License" - - -placeholder = tf.compat.v1.placeholder - - -def get_model_function(model_type): - """ - Get tensorflow function of the model to be applied to the input tensor. - For instance "unet.softmax_unet" will return the softmax_unet function - in the "unet.py" submodule of the current module (spleeter.model). - - Params: - - model_type: str - the relative module path to the model function. - - Returns: - A tensorflow function to be applied to the input tensor to get the - multitrack output. 
- """ - relative_path_to_module = ".".join(model_type.split(".")[:-1]) - model_name = model_type.split(".")[-1] - main_module = ".".join((__name__, "functions")) - path_to_module = f"{main_module}.{relative_path_to_module}" - module = importlib.import_module(path_to_module) - model_function = getattr(module, model_name) - return model_function - - -class InputProvider(object): - def __init__(self, params): - self.params = params - - def get_input_dict_placeholders(self): - raise NotImplementedError() - - @property - def input_names(self): - raise NotImplementedError() - - def get_feed_dict(self, features, *args): - raise NotImplementedError() - - -class WaveformInputProvider(InputProvider): - @property - def input_names(self): - return ["audio_id", "waveform"] - - def get_input_dict_placeholders(self): - shape = (None, self.params["n_channels"]) - features = { - "waveform": placeholder(tf.float32, shape=shape, name="waveform"), - "audio_id": placeholder(tf.string, name="audio_id"), - } - return features - - def get_feed_dict(self, features, waveform, audio_id): - return {features["audio_id"]: audio_id, features["waveform"]: waveform} - - -class SpectralInputProvider(InputProvider): - def __init__(self, params): - super().__init__(params) - self.stft_input_name = "{}_stft".format(self.params["mix_name"]) - - @property - def input_names(self): - return ["audio_id", self.stft_input_name] - - def get_input_dict_placeholders(self): - features = { - self.stft_input_name: placeholder( - tf.complex64, - shape=( - None, - self.params["frame_length"] // 2 + 1, - self.params["n_channels"], - ), - name=self.stft_input_name, - ), - "audio_id": placeholder(tf.string, name="audio_id"), - } - return features - - def get_feed_dict(self, features, stft, audio_id): - return {features["audio_id"]: audio_id, features[self.stft_input_name]: stft} - - -class InputProviderFactory(object): - @staticmethod - def get(params): - stft_backend = params["stft_backend"] - assert stft_backend in ( - "tensorflow", - "librosa", - ), "Unexpected backend {}".format(stft_backend) - if stft_backend == "tensorflow": - return WaveformInputProvider(params) - else: - return SpectralInputProvider(params) - - -class EstimatorSpecBuilder(object): - """A builder class that allows to builds a multitrack unet model - estimator. The built model estimator has a different behaviour when - used in a train/eval mode and in predict mode. - - * In train/eval mode: it takes as input and outputs magnitude spectrogram - * In predict mode: it takes as input and outputs waveform. The whole - separation process is then done in this function - for performance reason: it makes it possible to run - the whole spearation process (including STFT and - inverse STFT) on GPU. - - :Example: - - >>> from spleeter.model import EstimatorSpecBuilder - >>> builder = EstimatorSpecBuilder() - >>> builder.build_predict_model() - >>> builder.build_evaluation_model() - >>> builder.build_train_model() - - >>> from spleeter.model import model_fn - >>> estimator = tf.estimator.Estimator(model_fn=model_fn, ...) - """ - - # Supported model functions. - DEFAULT_MODEL = "unet.unet" - - # Supported loss functions. - L1_MASK = "L1_mask" - WEIGHTED_L1_MASK = "weighted_L1_mask" - - # Supported optimizers. - ADADELTA = "Adadelta" - SGD = "SGD" - - # Math constants. - WINDOW_COMPENSATION_FACTOR = 2.0 / 3.0 - EPSILON = 1e-10 - - def __init__(self, features, params): - """Default constructor. 
Depending on built model - usage, the provided features should be different: - - * In train/eval mode: features is a dictionary with a - "mix_spectrogram" key, associated to the - mix magnitude spectrogram. - * In predict mode: features is a dictionary with a "waveform" - key, associated to the waveform of the sound - to be separated. - - :param features: The input features for the estimator. - :param params: Some hyperparameters as a dictionary. - """ - - self._features = features - self._params = params - # Get instrument name. - self._mix_name = params["mix_name"] - self._instruments = params["instrument_list"] - # Get STFT/signals parameters - self._n_channels = params["n_channels"] - self._T = params["T"] - self._F = params["F"] - self._frame_length = params["frame_length"] - self._frame_step = params["frame_step"] - - def include_stft_computations(self): - return self._params["stft_backend"] == "tensorflow" - - def _build_model_outputs(self): - """Created a batch_sizexTxFxn_channels input tensor containing - mix magnitude spectrogram, then an output dict from it according - to the selected model in internal parameters. - - :returns: Build output dict. - :raise ValueError: If required model_type is not supported. - """ - - input_tensor = self.spectrogram_feature - model = self._params.get("model", None) - if model is not None: - model_type = model.get("type", self.DEFAULT_MODEL) - else: - model_type = self.DEFAULT_MODEL - try: - apply_model = get_model_function(model_type) - except ModuleNotFoundError: - raise ValueError(f"No model function {model_type} found") - self._model_outputs = apply_model( - input_tensor, self._instruments, self._params["model"]["params"] - ) - - def _build_loss(self, labels): - """Construct tensorflow loss and metrics - - :param output_dict: dictionary of network outputs (key: instrument - name, value: estimated spectrogram of the instrument) - :param labels: dictionary of target outputs (key: instrument - name, value: ground truth spectrogram of the instrument) - :returns: tensorflow (loss, metrics) tuple. - """ - output_dict = self.model_outputs - loss_type = self._params.get("loss_type", self.L1_MASK) - if loss_type == self.L1_MASK: - losses = { - name: tf.reduce_mean(tf.abs(output - labels[name])) - for name, output in output_dict.items() - } - elif loss_type == self.WEIGHTED_L1_MASK: - losses = { - name: tf.reduce_mean( - tf.reduce_mean(labels[name], axis=[1, 2, 3], keep_dims=True) - * tf.abs(output - labels[name]) - ) - for name, output in output_dict.items() - } - else: - raise ValueError(f"Unkwnown loss type: {loss_type}") - loss = tf.reduce_sum(list(losses.values())) - # Add metrics for monitoring each instrument. - metrics = {k: tf.compat.v1.metrics.mean(v) for k, v in losses.items()} - metrics["absolute_difference"] = tf.compat.v1.metrics.mean(loss) - return loss, metrics - - def _build_optimizer(self): - """Builds an optimizer instance from internal parameter values. - - Default to AdamOptimizer if not specified. - - :returns: Optimizer instance from internal configuration. 
- """ - name = self._params.get("optimizer") - if name == self.ADADELTA: - return tf.compat.v1.train.AdadeltaOptimizer() - rate = self._params["learning_rate"] - if name == self.SGD: - return tf.compat.v1.train.GradientDescentOptimizer(rate) - return tf.compat.v1.train.AdamOptimizer(rate) - - @property - def instruments(self): - return self._instruments - - @property - def stft_name(self): - return f"{self._mix_name}_stft" - - @property - def spectrogram_name(self): - return f"{self._mix_name}_spectrogram" - - def _build_stft_feature(self): - """Compute STFT of waveform and slice the STFT in segment - with the right length to feed the network. - """ - - stft_name = self.stft_name - spec_name = self.spectrogram_name - - if stft_name not in self._features: - # pad input with a frame of zeros - waveform = tf.concat( - [ - tf.zeros((self._frame_length, self._n_channels)), - self._features["waveform"], - ], - 0, - ) - stft_feature = tf.transpose( - stft( - tf.transpose(waveform), - self._frame_length, - self._frame_step, - window_fn=lambda frame_length, dtype: ( - hann_window(frame_length, periodic=True, dtype=dtype) - ), - pad_end=True, - ), - perm=[1, 2, 0], - ) - self._features[f"{self._mix_name}_stft"] = stft_feature - if spec_name not in self._features: - self._features[spec_name] = tf.abs( - pad_and_partition(self._features[stft_name], self._T) - )[:, :, : self._F, :] - - @property - def model_outputs(self): - if not hasattr(self, "_model_outputs"): - self._build_model_outputs() - return self._model_outputs - - @property - def outputs(self): - if not hasattr(self, "_outputs"): - self._build_outputs() - return self._outputs - - @property - def stft_feature(self): - if self.stft_name not in self._features: - self._build_stft_feature() - return self._features[self.stft_name] - - @property - def spectrogram_feature(self): - if self.spectrogram_name not in self._features: - self._build_stft_feature() - return self._features[self.spectrogram_name] - - @property - def masks(self): - if not hasattr(self, "_masks"): - self._build_masks() - return self._masks - - @property - def masked_stfts(self): - if not hasattr(self, "_masked_stfts"): - self._build_masked_stfts() - return self._masked_stfts - - def _inverse_stft(self, stft_t, time_crop=None): - """Inverse and reshape the given STFT - - :param stft_t: input STFT - :returns: inverse STFT (waveform) - """ - inversed = ( - inverse_stft( - tf.transpose(stft_t, perm=[2, 0, 1]), - self._frame_length, - self._frame_step, - window_fn=lambda frame_length, dtype: ( - hann_window(frame_length, periodic=True, dtype=dtype) - ), - ) - * self.WINDOW_COMPENSATION_FACTOR - ) - reshaped = tf.transpose(inversed) - if time_crop is None: - time_crop = tf.shape(self._features["waveform"])[0] - return reshaped[self._frame_length : self._frame_length + time_crop, :] - - def _build_mwf_output_waveform(self): - """Perform separation with multichannel Wiener Filtering using Norbert. - Note: multichannel Wiener Filtering is not coded in Tensorflow and thus - may be quite slow. - - :returns: dictionary of separated waveforms (key: instrument name, - value: estimated waveform of the instrument) - """ - import norbert # pylint: disable=import-error - - output_dict = self.model_outputs - x = self.stft_feature - v = tf.stack( - [ - pad_and_reshape( - output_dict[f"{instrument}_spectrogram"], - self._frame_length, - self._F, - )[: tf.shape(x)[0], ...] 
- for instrument in self._instruments - ], - axis=3, - ) - input_args = [v, x] - stft_function = ( - tf.py_function( - lambda v, x: norbert.wiener(v.numpy(), x.numpy()), - input_args, - tf.complex64, - ), - ) - return { - instrument: self._inverse_stft(stft_function[0][:, :, :, k]) - for k, instrument in enumerate(self._instruments) - } - - def _extend_mask(self, mask): - """Extend mask, from reduced number of frequency bin to the number of - frequency bin in the STFT. - - :param mask: restricted mask - :returns: extended mask - :raise ValueError: If invalid mask_extension parameter is set. - """ - extension = self._params["mask_extension"] - # Extend with average - # (dispatch according to energy in the processed band) - if extension == "average": - extension_row = tf.reduce_mean(mask, axis=2, keepdims=True) - # Extend with 0 - # (avoid extension artifacts but not conservative separation) - elif extension == "zeros": - mask_shape = tf.shape(mask) - extension_row = tf.zeros((mask_shape[0], mask_shape[1], 1, mask_shape[-1])) - else: - raise ValueError(f"Invalid mask_extension parameter {extension}") - n_extra_row = self._frame_length // 2 + 1 - self._F - extension = tf.tile(extension_row, [1, 1, n_extra_row, 1]) - return tf.concat([mask, extension], axis=2) - - def _build_masks(self): - """ - Compute masks from the output spectrograms of the model. - :return: - """ - output_dict = self.model_outputs - stft_feature = self.stft_feature - separation_exponent = self._params["separation_exponent"] - output_sum = ( - tf.reduce_sum( - [e ** separation_exponent for e in output_dict.values()], axis=0 - ) - + self.EPSILON - ) - out = {} - for instrument in self._instruments: - output = output_dict[f"{instrument}_spectrogram"] - # Compute mask with the model. - instrument_mask = ( - output ** separation_exponent + (self.EPSILON / len(output_dict)) - ) / output_sum - # Extend mask; - instrument_mask = self._extend_mask(instrument_mask) - # Stack back mask. - old_shape = tf.shape(instrument_mask) - new_shape = tf.concat( - [[old_shape[0] * old_shape[1]], old_shape[2:]], axis=0 - ) - instrument_mask = tf.reshape(instrument_mask, new_shape) - # Remove padded part (for mask having the same size as STFT); - - instrument_mask = instrument_mask[: tf.shape(stft_feature)[0], ...] - out[instrument] = instrument_mask - self._masks = out - - def _build_masked_stfts(self): - input_stft = self.stft_feature - out = {} - for instrument, mask in self.masks.items(): - out[instrument] = tf.cast(mask, dtype=tf.complex64) * input_stft - self._masked_stfts = out - - def _build_manual_output_waveform(self, masked_stft): - """Perform ratio mask separation - - :param output_dict: dictionary of estimated spectrogram (key: instrument - name, value: estimated spectrogram of the instrument) - :returns: dictionary of separated waveforms (key: instrument name, - value: estimated waveform of the instrument) - """ - - output_waveform = {} - for instrument, stft_data in masked_stft.items(): - output_waveform[instrument] = self._inverse_stft(stft_data) - return output_waveform - - def _build_output_waveform(self, masked_stft): - """Build output waveform from given output dict in order to be used in - prediction context. Regarding of the configuration building method will - be using MWF. - - :returns: Built output waveform. 
- """ - - if self._params.get("MWF", False): - output_waveform = self._build_mwf_output_waveform() - else: - output_waveform = self._build_manual_output_waveform(masked_stft) - return output_waveform - - def _build_outputs(self): - if self.include_stft_computations(): - self._outputs = self._build_output_waveform(self.masked_stfts) - else: - self._outputs = self.masked_stfts - - if "audio_id" in self._features: - self._outputs["audio_id"] = self._features["audio_id"] - - def build_predict_model(self): - """Builder interface for creating model instance that aims to perform - prediction / inference over given track. The output of such estimator - will be a dictionary with a "" key per separated instrument - , associated to the estimated separated waveform of the instrument. - - :returns: An estimator for performing prediction. - """ - - return tf.estimator.EstimatorSpec( - tf.estimator.ModeKeys.PREDICT, predictions=self.outputs - ) - - def build_evaluation_model(self, labels): - """Builder interface for creating model instance that aims to perform - model evaluation. The output of such estimator will be a dictionary - with a key "_spectrogram" per separated instrument, - associated to the estimated separated instrument magnitude spectrogram. - - :param labels: Model labels. - :returns: An estimator for performing model evaluation. - """ - loss, metrics = self._build_loss(labels) - return tf.estimator.EstimatorSpec( - tf.estimator.ModeKeys.EVAL, loss=loss, eval_metric_ops=metrics - ) - - def build_train_model(self, labels): - """Builder interface for creating model instance that aims to perform - model training. The output of such estimator will be a dictionary - with a key "_spectrogram" per separated instrument, - associated to the estimated separated instrument magnitude spectrogram. - - :param labels: Model labels. - :returns: An estimator for performing model training. - """ - loss, metrics = self._build_loss(labels) - optimizer = self._build_optimizer() - train_operation = optimizer.minimize( - loss=loss, global_step=tf.compat.v1.train.get_global_step() - ) - return tf.estimator.EstimatorSpec( - mode=tf.estimator.ModeKeys.TRAIN, - loss=loss, - train_op=train_operation, - eval_metric_ops=metrics, - ) - - -def model_fn(features, labels, mode, params, config): - """ - - :param features: - :param labels: - :param mode: Estimator mode. - :param params: - :param config: TF configuration (not used). - :returns: Built EstimatorSpec. - :raise ValueError: If estimator mode is not supported. 
- """ - builder = EstimatorSpecBuilder(features, params) - if mode == tf.estimator.ModeKeys.PREDICT: - return builder.build_predict_model() - elif mode == tf.estimator.ModeKeys.EVAL: - return builder.build_evaluation_model(labels) - elif mode == tf.estimator.ModeKeys.TRAIN: - return builder.build_train_model(labels) - raise ValueError(f"Unknown mode {mode}") diff --git a/spaces/xuxw98/TAPA/tests/conftest.py b/spaces/xuxw98/TAPA/tests/conftest.py deleted file mode 100644 index ab19c77e17e9e5836a114058453ec75dab203864..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/tests/conftest.py +++ /dev/null @@ -1,42 +0,0 @@ -import sys -from pathlib import Path - -import pytest - -wd = Path(__file__).parent.parent.absolute() - - -@pytest.fixture() -def orig_llama(): - sys.path.append(str(wd)) - - from scripts.download import download_original - - download_original(wd) - - import original_model - - return original_model - - -@pytest.fixture() -def orig_llama_adapter(): - sys.path.append(str(wd)) - - from scripts.download import download_original - - download_original(wd) - - import original_adapter - - return original_adapter - - -@pytest.fixture() -def lit_llama(): - # this adds support for running tests without the package installed - sys.path.append(str(wd)) - - import lit_llama - - return lit_llama diff --git a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/utils.py b/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/utils.py deleted file mode 100644 index 97258f1706cc76773011e24a11bf417ea76ae112..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/utils.py +++ /dev/null @@ -1,357 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import queue -import threading -import torch -from basicsr.utils.download_util import load_file_from_url -from torch.nn import functional as F - -ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - - -class RealESRGANer: - """A helper class for upsampling images with RealESRGAN. - - Args: - scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4. - model_path (str): The path to the pretrained model. It can be urls (will first download it automatically). - model (nn.Module): The defined network. Default: None. - tile (int): As too large images result in the out of GPU memory issue, so this tile option will first crop - input images into tiles, and then process each of them. Finally, they will be merged into one image. - 0 denotes for do not use tile. Default: 0. - tile_pad (int): The pad size for each tile, to remove border artifacts. Default: 10. - pre_pad (int): Pad the input images to avoid border artifacts. Default: 10. - half (float): Whether to use half precision during inference. Default: False. 
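    Example (illustrative sketch; the RRDBNet configuration and the weight filename are
    assumptions, not part of this module):

        from basicsr.archs.rrdbnet_arch import RRDBNet

        model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                        num_block=23, num_grow_ch=32, scale=4)
        upsampler = RealESRGANer(scale=4,
                                 model_path="RealESRGAN_x4plus.pth",  # hypothetical local weight file
                                 model=model, tile=0, half=False)
        output, img_mode = upsampler.enhance(img, outscale=4)  # img: BGR numpy array from cv2.imread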
- """ - - def __init__( - self, - scale, - model_path, - dni_weight=None, - model=None, - tile=0, - tile_pad=10, - pre_pad=10, - half=False, - device=None, - gpu_id=None, - ): - self.scale = scale - self.tile_size = tile - self.tile_pad = tile_pad - self.pre_pad = pre_pad - self.mod_scale = None - self.half = half - - # initialize model - if gpu_id: - self.device = ( - torch.device(f"cuda:{gpu_id}" if torch.cuda.is_available() else "cpu") - if device is None - else device - ) - else: - self.device = ( - torch.device("cuda" if torch.cuda.is_available() else "cpu") - if device is None - else device - ) - - if isinstance(model_path, list): - # dni - assert len(model_path) == len( - dni_weight - ), "model_path and dni_weight should have the save length." - loadnet = self.dni(model_path[0], model_path[1], dni_weight) - else: - # if the model_path starts with https, it will first download models to the folder: weights - if model_path.startswith("https://"): - model_path = load_file_from_url( - url=model_path, - model_dir=os.path.join(ROOT_DIR, "weights"), - progress=True, - file_name=None, - ) - loadnet = torch.load(model_path, map_location=torch.device("cpu")) - - # prefer to use params_ema - if "params_ema" in loadnet: - keyname = "params_ema" - else: - keyname = "params" - model.load_state_dict(loadnet[keyname], strict=True) - - model.eval() - self.model = model.to(self.device) - if self.half: - self.model = self.model.half() - - def dni(self, net_a, net_b, dni_weight, key="params", loc="cpu"): - """Deep network interpolation. - - ``Paper: Deep Network Interpolation for Continuous Imagery Effect Transition`` - """ - net_a = torch.load(net_a, map_location=torch.device(loc)) - net_b = torch.load(net_b, map_location=torch.device(loc)) - for k, v_a in net_a[key].items(): - net_a[key][k] = dni_weight[0] * v_a + dni_weight[1] * net_b[key][k] - return net_a - - def pre_process(self, img): - """Pre-process, such as pre-pad and mod pad, so that the images can be divisible""" - img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float() - self.img = img.unsqueeze(0).to(self.device) - if self.half: - self.img = self.img.half() - - # pre_pad - if self.pre_pad != 0: - self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), "reflect") - # mod pad for divisible borders - if self.scale == 2: - self.mod_scale = 2 - elif self.scale == 1: - self.mod_scale = 4 - if self.mod_scale is not None: - self.mod_pad_h, self.mod_pad_w = 0, 0 - _, _, h, w = self.img.size() - if h % self.mod_scale != 0: - self.mod_pad_h = self.mod_scale - h % self.mod_scale - if w % self.mod_scale != 0: - self.mod_pad_w = self.mod_scale - w % self.mod_scale - self.img = F.pad( - self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), "reflect" - ) - - def process(self): - # model inference - self.output = self.model(self.img) - - def tile_process(self): - """It will first crop input images to tiles, and then process each tile. - Finally, all the processed tiles are merged into one images. 
- - Modified from: https://github.com/ata4/esrgan-launcher - """ - batch, channel, height, width = self.img.shape - output_height = height * self.scale - output_width = width * self.scale - output_shape = (batch, channel, output_height, output_width) - - # start with black image - self.output = self.img.new_zeros(output_shape) - tiles_x = math.ceil(width / self.tile_size) - tiles_y = math.ceil(height / self.tile_size) - - # loop over all tiles - for y in range(tiles_y): - for x in range(tiles_x): - # extract tile from input image - ofs_x = x * self.tile_size - ofs_y = y * self.tile_size - # input tile area on total image - input_start_x = ofs_x - input_end_x = min(ofs_x + self.tile_size, width) - input_start_y = ofs_y - input_end_y = min(ofs_y + self.tile_size, height) - - # input tile area on total image with padding - input_start_x_pad = max(input_start_x - self.tile_pad, 0) - input_end_x_pad = min(input_end_x + self.tile_pad, width) - input_start_y_pad = max(input_start_y - self.tile_pad, 0) - input_end_y_pad = min(input_end_y + self.tile_pad, height) - - # input tile dimensions - input_tile_width = input_end_x - input_start_x - input_tile_height = input_end_y - input_start_y - tile_idx = y * tiles_x + x + 1 - input_tile = self.img[ - :, - :, - input_start_y_pad:input_end_y_pad, - input_start_x_pad:input_end_x_pad, - ] - - # upscale tile - try: - with torch.no_grad(): - output_tile = self.model(input_tile) - except RuntimeError as error: - print("Error", error) - print(f"\tTile {tile_idx}/{tiles_x * tiles_y}") - - # output tile area on total image - output_start_x = input_start_x * self.scale - output_end_x = input_end_x * self.scale - output_start_y = input_start_y * self.scale - output_end_y = input_end_y * self.scale - - # output tile area without padding - output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale - output_end_x_tile = output_start_x_tile + input_tile_width * self.scale - output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale - output_end_y_tile = output_start_y_tile + input_tile_height * self.scale - - # put tile into output image - self.output[ - :, :, output_start_y:output_end_y, output_start_x:output_end_x - ] = output_tile[ - :, - :, - output_start_y_tile:output_end_y_tile, - output_start_x_tile:output_end_x_tile, - ] - - def post_process(self): - # remove extra pad - if self.mod_scale is not None: - _, _, h, w = self.output.size() - self.output = self.output[ - :, - :, - 0 : h - self.mod_pad_h * self.scale, - 0 : w - self.mod_pad_w * self.scale, - ] - # remove prepad - if self.pre_pad != 0: - _, _, h, w = self.output.size() - self.output = self.output[ - :, - :, - 0 : h - self.pre_pad * self.scale, - 0 : w - self.pre_pad * self.scale, - ] - return self.output - - @torch.no_grad() - def enhance(self, img, outscale=None, alpha_upsampler="realesrgan"): - h_input, w_input = img.shape[0:2] - # img: numpy - img = img.astype(np.float32) - if np.max(img) > 256: # 16-bit image - max_range = 65535 - print("\tInput is a 16-bit image") - else: - max_range = 255 - img = img / max_range - if len(img.shape) == 2: # gray image - img_mode = "L" - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) - elif img.shape[2] == 4: # RGBA image with alpha channel - img_mode = "RGBA" - alpha = img[:, :, 3] - img = img[:, :, 0:3] - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - if alpha_upsampler == "realesrgan": - alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB) - else: - img_mode = "RGB" - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - - # ------------------- process 
image (without the alpha channel) ------------------- # - self.pre_process(img) - if self.tile_size > 0: - self.tile_process() - else: - self.process() - output_img = self.post_process() - output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy() - output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0)) - if img_mode == "L": - output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY) - - # ------------------- process the alpha channel if necessary ------------------- # - if img_mode == "RGBA": - if alpha_upsampler == "realesrgan": - self.pre_process(alpha) - if self.tile_size > 0: - self.tile_process() - else: - self.process() - output_alpha = self.post_process() - output_alpha = ( - output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy() - ) - output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0)) - output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY) - else: # use the cv2 resize for alpha channel - h, w = alpha.shape[0:2] - output_alpha = cv2.resize( - alpha, - (w * self.scale, h * self.scale), - interpolation=cv2.INTER_LINEAR, - ) - - # merge the alpha channel - output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA) - output_img[:, :, 3] = output_alpha - - # ------------------------------ return ------------------------------ # - if max_range == 65535: # 16-bit image - output = (output_img * 65535.0).round().astype(np.uint16) - else: - output = (output_img * 255.0).round().astype(np.uint8) - - if outscale is not None and outscale != float(self.scale): - output = cv2.resize( - output, - ( - int(w_input * outscale), - int(h_input * outscale), - ), - interpolation=cv2.INTER_LANCZOS4, - ) - - return output, img_mode - - -class PrefetchReader(threading.Thread): - """Prefetch images. - - Args: - img_list (list[str]): A image list of image paths to be read. - num_prefetch_queue (int): Number of prefetch queue. 
- """ - - def __init__(self, img_list, num_prefetch_queue): - super().__init__() - self.que = queue.Queue(num_prefetch_queue) - self.img_list = img_list - - def run(self): - for img_path in self.img_list: - img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED) - self.que.put(img) - - self.que.put(None) - - def __next__(self): - next_item = self.que.get() - if next_item is None: - raise StopIteration - return next_item - - def __iter__(self): - return self - - -class IOConsumer(threading.Thread): - def __init__(self, opt, que, qid): - super().__init__() - self._queue = que - self.qid = qid - self.opt = opt - - def run(self): - while True: - msg = self._queue.get() - if isinstance(msg, str) and msg == "quit": - break - - output = msg["output"] - save_path = msg["save_path"] - cv2.imwrite(save_path, output) - print(f"IO worker {self.qid} is done.") diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/training/lpips/trainer.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/training/lpips/trainer.py deleted file mode 100644 index 52b6112cdc79db7a429ec52e60fcefdb756f776b..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/training/lpips/trainer.py +++ /dev/null @@ -1,280 +0,0 @@ - -from __future__ import absolute_import - -import numpy as np -import torch -from torch import nn -from collections import OrderedDict -from torch.autograd import Variable -from scipy.ndimage import zoom -from tqdm import tqdm -import lpips -import os - - -class Trainer(): - def name(self): - return self.model_name - - def initialize(self, model='lpips', net='alex', colorspace='Lab', pnet_rand=False, pnet_tune=False, model_path=None, - use_gpu=True, printNet=False, spatial=False, - is_train=False, lr=.0001, beta1=0.5, version='0.1', gpu_ids=[0]): - ''' - INPUTS - model - ['lpips'] for linearly calibrated network - ['baseline'] for off-the-shelf network - ['L2'] for L2 distance in Lab colorspace - ['SSIM'] for ssim in RGB colorspace - net - ['squeeze','alex','vgg'] - model_path - if None, will look in weights/[NET_NAME].pth - colorspace - ['Lab','RGB'] colorspace to use for L2 and SSIM - use_gpu - bool - whether or not to use a GPU - printNet - bool - whether or not to print network architecture out - spatial - bool - whether to output an array containing varying distances across spatial dimensions - is_train - bool - [True] for training mode - lr - float - initial learning rate - beta1 - float - initial momentum term for adam - version - 0.1 for latest, 0.0 was original (with a bug) - gpu_ids - int array - [0] by default, gpus to use - ''' - self.use_gpu = use_gpu - self.gpu_ids = gpu_ids - self.model = model - self.net = net - self.is_train = is_train - self.spatial = spatial - self.model_name = '%s [%s]'%(model,net) - - if(self.model == 'lpips'): # pretrained net + linear layer - self.net = lpips.LPIPS(pretrained=not is_train, net=net, version=version, lpips=True, spatial=spatial, - pnet_rand=pnet_rand, pnet_tune=pnet_tune, - use_dropout=True, model_path=model_path, eval_mode=False) - elif(self.model=='baseline'): # pretrained network - self.net = lpips.LPIPS(pnet_rand=pnet_rand, net=net, lpips=False) - elif(self.model in ['L2','l2']): - self.net = lpips.L2(use_gpu=use_gpu,colorspace=colorspace) # not really a network, only for testing - self.model_name = 'L2' - elif(self.model in ['DSSIM','dssim','SSIM','ssim']): - self.net = lpips.DSSIM(use_gpu=use_gpu,colorspace=colorspace) - self.model_name = 'SSIM' - else: - raise ValueError("Model [%s] not 
recognized." % self.model) - - self.parameters = list(self.net.parameters()) - - if self.is_train: # training mode - # extra network on top to go from distances (d0,d1) => predicted human judgment (h*) - self.rankLoss = lpips.BCERankingLoss() - self.parameters += list(self.rankLoss.net.parameters()) - self.lr = lr - self.old_lr = lr - self.optimizer_net = torch.optim.Adam(self.parameters, lr=lr, betas=(beta1, 0.999)) - else: # test mode - self.net.eval() - - if(use_gpu): - self.net.to(gpu_ids[0]) - self.net = torch.nn.DataParallel(self.net, device_ids=gpu_ids) - if(self.is_train): - self.rankLoss = self.rankLoss.to(device=gpu_ids[0]) # just put this on GPU0 - - if(printNet): - print('---------- Networks initialized -------------') - networks.print_network(self.net) - print('-----------------------------------------------') - - def forward(self, in0, in1, retPerLayer=False): - ''' Function computes the distance between image patches in0 and in1 - INPUTS - in0, in1 - torch.Tensor object of shape Nx3xXxY - image patch scaled to [-1,1] - OUTPUT - computed distances between in0 and in1 - ''' - - return self.net.forward(in0, in1, retPerLayer=retPerLayer) - - # ***** TRAINING FUNCTIONS ***** - def optimize_parameters(self): - self.forward_train() - self.optimizer_net.zero_grad() - self.backward_train() - self.optimizer_net.step() - self.clamp_weights() - - def clamp_weights(self): - for module in self.net.modules(): - if(hasattr(module, 'weight') and module.kernel_size==(1,1)): - module.weight.data = torch.clamp(module.weight.data,min=0) - - def set_input(self, data): - self.input_ref = data['ref'] - self.input_p0 = data['p0'] - self.input_p1 = data['p1'] - self.input_judge = data['judge'] - - if(self.use_gpu): - self.input_ref = self.input_ref.to(device=self.gpu_ids[0]) - self.input_p0 = self.input_p0.to(device=self.gpu_ids[0]) - self.input_p1 = self.input_p1.to(device=self.gpu_ids[0]) - self.input_judge = self.input_judge.to(device=self.gpu_ids[0]) - - self.var_ref = Variable(self.input_ref,requires_grad=True) - self.var_p0 = Variable(self.input_p0,requires_grad=True) - self.var_p1 = Variable(self.input_p1,requires_grad=True) - - def forward_train(self): # run forward pass - self.d0 = self.forward(self.var_ref, self.var_p0) - self.d1 = self.forward(self.var_ref, self.var_p1) - self.acc_r = self.compute_accuracy(self.d0,self.d1,self.input_judge) - - self.var_judge = Variable(1.*self.input_judge).view(self.d0.size()) - - self.loss_total = self.rankLoss.forward(self.d0, self.d1, self.var_judge*2.-1.) 
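        # `input_judge` holds the fraction of human raters that preferred patch p1
        # (values in [0, 1]); BCERankingLoss expects a target in [-1, 1], which is what
        # the `self.var_judge * 2. - 1.` rescaling above provides.
        # Minimal training-step sketch (illustrative only; the data variables are assumptions):
        #   trainer = Trainer()
        #   trainer.initialize(model='lpips', net='alex', is_train=True)
        #   trainer.set_input({'ref': ref, 'p0': p0, 'p1': p1, 'judge': judge})
        #   trainer.optimize_parameters()  # forward_train -> backward_train -> optimizer step -> clamp_weights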
- - return self.loss_total - - def backward_train(self): - torch.mean(self.loss_total).backward() - - def compute_accuracy(self,d0,d1,judge): - ''' d0, d1 are Variables, judge is a Tensor ''' - d1_lt_d0 = (d1 %f' % (type,self.old_lr, lr)) - self.old_lr = lr - - - def get_image_paths(self): - return self.image_paths - - def save_done(self, flag=False): - np.save(os.path.join(self.save_dir, 'done_flag'),flag) - np.savetxt(os.path.join(self.save_dir, 'done_flag'),[flag,],fmt='%i') - - -def score_2afc_dataset(data_loader, func, name=''): - ''' Function computes Two Alternative Forced Choice (2AFC) score using - distance function 'func' in dataset 'data_loader' - INPUTS - data_loader - CustomDatasetDataLoader object - contains a TwoAFCDataset inside - func - callable distance function - calling d=func(in0,in1) should take 2 - pytorch tensors with shape Nx3xXxY, and return numpy array of length N - OUTPUTS - [0] - 2AFC score in [0,1], fraction of time func agrees with human evaluators - [1] - dictionary with following elements - d0s,d1s - N arrays containing distances between reference patch to perturbed patches - gts - N array in [0,1], preferred patch selected by human evaluators - (closer to "0" for left patch p0, "1" for right patch p1, - "0.6" means 60pct people preferred right patch, 40pct preferred left) - scores - N array in [0,1], corresponding to what percentage function agreed with humans - CONSTS - N - number of test triplets in data_loader - ''' - - d0s = [] - d1s = [] - gts = [] - - for data in tqdm(data_loader.load_data(), desc=name): - d0s+=func(data['ref'],data['p0']).data.cpu().numpy().flatten().tolist() - d1s+=func(data['ref'],data['p1']).data.cpu().numpy().flatten().tolist() - gts+=data['judge'].cpu().numpy().flatten().tolist() - - d0s = np.array(d0s) - d1s = np.array(d1s) - gts = np.array(gts) - scores = (d0s bool: - raise NotImplementedError("StoppingCriteria needs to be subclassed") - - -class MaxLengthCriteria(StoppingCriteria): - """ - This class can be used to stop generation whenever the full generated number of tokens exceeds `max_length`. Keep - in mind for decoder-only type of transformers, this will include the initial prompted tokens. - - Args: - max_length (`int`): - The maximum length that the output sequence can have in number of tokens. - max_position_embeddings (`int`, *optional*): - The maximum model length, as defined by the model's `config.max_position_embeddings` attribute. - """ - - def __init__(self, max_length: int, max_position_embeddings: Optional[int] = None): - self.max_length = max_length - self.max_position_embeddings = max_position_embeddings - - @add_start_docstrings(STOPPING_CRITERIA_INPUTS_DOCSTRING) - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - cur_len = input_ids.shape[-1] - is_done = cur_len >= self.max_length - if self.max_position_embeddings is not None and not is_done and cur_len >= self.max_position_embeddings: - logger.warning_once( - "This is a friendly reminder - the current text generation call will exceed the model's predefined " - f"maximum length ({self.max_position_embeddings}). Depending on the model, you may observe " - "exceptions, performance degradation, or nothing at all." - ) - return is_done - - -class MaxNewTokensCriteria(StoppingCriteria): - """ - This class can be used to stop generation whenever the generated number of tokens exceeds `max_new_tokens`. Keep in - mind for decoder-only type of transformers, this will **not** include the initial prompted tokens. 
This is very - close to `MaxLengthCriteria` but ignores the number of initial tokens. - - Args: - start_length (`int`): - The number of initial tokens. - max_new_tokens (`int`): - The maximum number of tokens to generate. - """ - - def __init__(self, start_length: int, max_new_tokens: int): - warnings.warn( - "The class `MaxNewTokensCriteria` is deprecated. " - f"Please use `MaxLengthCriteria(max_length={start_length + max_new_tokens})` " - "with `max_length = start_length + max_new_tokens` instead.", - FutureWarning, - ) - self.start_length = start_length - self.max_new_tokens = max_new_tokens - self.max_length = start_length + max_new_tokens - - @add_start_docstrings(STOPPING_CRITERIA_INPUTS_DOCSTRING) - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - return input_ids.shape[-1] >= self.max_length - - -class MaxTimeCriteria(StoppingCriteria): - """ - This class can be used to stop generation whenever the full generation exceeds some amount of time. By default, the - time will start being counted when you initialize this function. You can override this by passing an - `initial_time`. - - Args: - max_time (`float`): - The maximum allowed time in seconds for the generation. - initial_time (`float`, *optional*, defaults to `time.time()`): - The start of the generation allowed time. - """ - - def __init__(self, max_time: float, initial_timestamp: Optional[float] = None): - self.max_time = max_time - self.initial_timestamp = time.time() if initial_timestamp is None else initial_timestamp - - @add_start_docstrings(STOPPING_CRITERIA_INPUTS_DOCSTRING) - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - return time.time() - self.initial_timestamp > self.max_time - - -class StoppingCriteriaList(list): - @add_start_docstrings(STOPPING_CRITERIA_INPUTS_DOCSTRING) - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: - return any(criteria(input_ids, scores) for criteria in self) - - @property - def max_length(self) -> Optional[int]: - for stopping_criterium in self: - if isinstance(stopping_criterium, MaxLengthCriteria): - return stopping_criterium.max_length - elif isinstance(stopping_criterium, MaxNewTokensCriteria): - return stopping_criterium.max_length - return None - - -def validate_stopping_criteria(stopping_criteria: StoppingCriteriaList, max_length: int) -> StoppingCriteriaList: - stopping_max_length = stopping_criteria.max_length - new_stopping_criteria = deepcopy(stopping_criteria) - if stopping_max_length is not None and stopping_max_length != max_length: - warnings.warn("You set different `max_length` for stopping criteria and `max_length` parameter", UserWarning) - elif stopping_max_length is None: - new_stopping_criteria.append(MaxLengthCriteria(max_length=max_length)) - return new_stopping_criteria diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dpr/tokenization_dpr_fast.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dpr/tokenization_dpr_fast.py deleted file mode 100644 index 784ed1344cf6f413691f3c9f25f3e537533f5b93..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dpr/tokenization_dpr_fast.py +++ /dev/null @@ -1,410 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The HuggingFace Inc. team, The Hugging Face Team. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Tokenization classes for DPR.""" - - -import collections -from typing import List, Optional, Union - -from ...tokenization_utils_base import BatchEncoding -from ...utils import TensorType, add_end_docstrings, add_start_docstrings, logging -from ..bert.tokenization_bert_fast import BertTokenizerFast -from .tokenization_dpr import DPRContextEncoderTokenizer, DPRQuestionEncoderTokenizer, DPRReaderTokenizer - - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt", "tokenizer_file": "tokenizer.json"} - -CONTEXT_ENCODER_PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "facebook/dpr-ctx_encoder-single-nq-base": ( - "https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base/resolve/main/vocab.txt" - ), - "facebook/dpr-ctx_encoder-multiset-base": ( - "https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base/resolve/main/vocab.txt" - ), - }, - "tokenizer_file": { - "facebook/dpr-ctx_encoder-single-nq-base": ( - "https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base/resolve/main/tokenizer.json" - ), - "facebook/dpr-ctx_encoder-multiset-base": ( - "https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base/resolve/main/tokenizer.json" - ), - }, -} -QUESTION_ENCODER_PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "facebook/dpr-question_encoder-single-nq-base": ( - "https://huggingface.co/facebook/dpr-question_encoder-single-nq-base/resolve/main/vocab.txt" - ), - "facebook/dpr-question_encoder-multiset-base": ( - "https://huggingface.co/facebook/dpr-question_encoder-multiset-base/resolve/main/vocab.txt" - ), - }, - "tokenizer_file": { - "facebook/dpr-question_encoder-single-nq-base": ( - "https://huggingface.co/facebook/dpr-question_encoder-single-nq-base/resolve/main/tokenizer.json" - ), - "facebook/dpr-question_encoder-multiset-base": ( - "https://huggingface.co/facebook/dpr-question_encoder-multiset-base/resolve/main/tokenizer.json" - ), - }, -} -READER_PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "facebook/dpr-reader-single-nq-base": ( - "https://huggingface.co/facebook/dpr-reader-single-nq-base/resolve/main/vocab.txt" - ), - "facebook/dpr-reader-multiset-base": ( - "https://huggingface.co/facebook/dpr-reader-multiset-base/resolve/main/vocab.txt" - ), - }, - "tokenizer_file": { - "facebook/dpr-reader-single-nq-base": ( - "https://huggingface.co/facebook/dpr-reader-single-nq-base/resolve/main/tokenizer.json" - ), - "facebook/dpr-reader-multiset-base": ( - "https://huggingface.co/facebook/dpr-reader-multiset-base/resolve/main/tokenizer.json" - ), - }, -} - -CONTEXT_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "facebook/dpr-ctx_encoder-single-nq-base": 512, - "facebook/dpr-ctx_encoder-multiset-base": 512, -} -QUESTION_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "facebook/dpr-question_encoder-single-nq-base": 512, - "facebook/dpr-question_encoder-multiset-base": 512, -} -READER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "facebook/dpr-reader-single-nq-base": 512, - 
"facebook/dpr-reader-multiset-base": 512, -} - - -CONTEXT_ENCODER_PRETRAINED_INIT_CONFIGURATION = { - "facebook/dpr-ctx_encoder-single-nq-base": {"do_lower_case": True}, - "facebook/dpr-ctx_encoder-multiset-base": {"do_lower_case": True}, -} -QUESTION_ENCODER_PRETRAINED_INIT_CONFIGURATION = { - "facebook/dpr-question_encoder-single-nq-base": {"do_lower_case": True}, - "facebook/dpr-question_encoder-multiset-base": {"do_lower_case": True}, -} -READER_PRETRAINED_INIT_CONFIGURATION = { - "facebook/dpr-reader-single-nq-base": {"do_lower_case": True}, - "facebook/dpr-reader-multiset-base": {"do_lower_case": True}, -} - - -class DPRContextEncoderTokenizerFast(BertTokenizerFast): - r""" - Construct a "fast" DPRContextEncoder tokenizer (backed by HuggingFace's *tokenizers* library). - - [`DPRContextEncoderTokenizerFast`] is identical to [`BertTokenizerFast`] and runs end-to-end tokenization: - punctuation splitting and wordpiece. - - Refer to superclass [`BertTokenizerFast`] for usage examples and documentation concerning parameters. - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = CONTEXT_ENCODER_PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = CONTEXT_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - pretrained_init_configuration = CONTEXT_ENCODER_PRETRAINED_INIT_CONFIGURATION - slow_tokenizer_class = DPRContextEncoderTokenizer - - -class DPRQuestionEncoderTokenizerFast(BertTokenizerFast): - r""" - Constructs a "fast" DPRQuestionEncoder tokenizer (backed by HuggingFace's *tokenizers* library). - - [`DPRQuestionEncoderTokenizerFast`] is identical to [`BertTokenizerFast`] and runs end-to-end tokenization: - punctuation splitting and wordpiece. - - Refer to superclass [`BertTokenizerFast`] for usage examples and documentation concerning parameters. - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = QUESTION_ENCODER_PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = QUESTION_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - pretrained_init_configuration = QUESTION_ENCODER_PRETRAINED_INIT_CONFIGURATION - slow_tokenizer_class = DPRQuestionEncoderTokenizer - - -DPRSpanPrediction = collections.namedtuple( - "DPRSpanPrediction", ["span_score", "relevance_score", "doc_id", "start_index", "end_index", "text"] -) - -DPRReaderOutput = collections.namedtuple("DPRReaderOutput", ["start_logits", "end_logits", "relevance_logits"]) - - -CUSTOM_DPR_READER_DOCSTRING = r""" - Return a dictionary with the token ids of the input strings and other information to give to `.decode_best_spans`. - It converts the strings of a question and different passages (title and text) in a sequence of IDs (integers), - using the tokenizer and vocabulary. The resulting `input_ids` is a matrix of size `(n_passages, sequence_length)` - with the format: - - [CLS] [SEP] [SEP] - - Args: - questions (`str` or `List[str]`): - The questions to be encoded. You can specify one question for many passages. In this case, the question - will be duplicated like `[questions] * n_passages`. Otherwise you have to specify as many questions as in - `titles` or `texts`. - titles (`str` or `List[str]`): - The passages titles to be encoded. This can be a string or a list of strings if there are several passages. - texts (`str` or `List[str]`): - The passages texts to be encoded. This can be a string or a list of strings if there are several passages. - padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`): - Activates and controls padding. 
Accepts the following values: - - - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence - if provided). - - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum - acceptable input length for the model if that argument is not provided. - - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different - lengths). - truncation (`bool`, `str` or [`~tokenization_utils_base.TruncationStrategy`], *optional*, defaults to `False`): - Activates and controls truncation. Accepts the following values: - - - `True` or `'longest_first'`: Truncate to a maximum length specified with the argument `max_length` or to - the maximum acceptable input length for the model if that argument is not provided. This will truncate - token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch - of pairs) is provided. - - `'only_first'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum - acceptable input length for the model if that argument is not provided. This will only truncate the first - sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - - `'only_second'`: Truncate to a maximum length specified with the argument `max_length` or to the maximum - acceptable input length for the model if that argument is not provided. This will only truncate the - second sequence of a pair if a pair of sequences (or a batch of pairs) is provided. - - `False` or `'do_not_truncate'` (default): No truncation (i.e., can output batch with sequence lengths - greater than the model maximum admissible input size). - max_length (`int`, *optional*): - Controls the maximum length to use by one of the truncation/padding parameters. - - If left unset or set to `None`, this will use the predefined model maximum length if a maximum length - is required by one of the truncation/padding parameters. If the model has no specific maximum input - length (like XLNet) truncation/padding to a maximum length will be deactivated. - return_tensors (`str` or [`~utils.TensorType`], *optional*): - If set, will return tensors instead of list of python integers. Acceptable values are: - - - `'tf'`: Return TensorFlow `tf.constant` objects. - - `'pt'`: Return PyTorch `torch.Tensor` objects. - - `'np'`: Return Numpy `np.ndarray` objects. - return_attention_mask (`bool`, *optional*): - Whether or not to return the attention mask. If not set, will return the attention mask according to the - specific tokenizer's default, defined by the `return_outputs` attribute. - - [What are attention masks?](../glossary#attention-mask) - - Return: - `Dict[str, List[List[int]]]`: A dictionary with the following keys: - - - `input_ids`: List of token ids to be fed to a model. - - `attention_mask`: List of indices specifying which tokens should be attended to by the model. 
- """ - - -@add_start_docstrings(CUSTOM_DPR_READER_DOCSTRING) -class CustomDPRReaderTokenizerMixin: - def __call__( - self, - questions, - titles: Optional[str] = None, - texts: Optional[str] = None, - padding: Union[bool, str] = False, - truncation: Union[bool, str] = False, - max_length: Optional[int] = None, - return_tensors: Optional[Union[str, TensorType]] = None, - return_attention_mask: Optional[bool] = None, - **kwargs, - ) -> BatchEncoding: - if titles is None and texts is None: - return super().__call__( - questions, - padding=padding, - truncation=truncation, - max_length=max_length, - return_tensors=return_tensors, - return_attention_mask=return_attention_mask, - **kwargs, - ) - elif titles is None or texts is None: - text_pair = titles if texts is None else texts - return super().__call__( - questions, - text_pair, - padding=padding, - truncation=truncation, - max_length=max_length, - return_tensors=return_tensors, - return_attention_mask=return_attention_mask, - **kwargs, - ) - titles = titles if not isinstance(titles, str) else [titles] - texts = texts if not isinstance(texts, str) else [texts] - n_passages = len(titles) - questions = questions if not isinstance(questions, str) else [questions] * n_passages - assert len(titles) == len( - texts - ), f"There should be as many titles than texts but got {len(titles)} titles and {len(texts)} texts." - encoded_question_and_titles = super().__call__(questions, titles, padding=False, truncation=False)["input_ids"] - encoded_texts = super().__call__(texts, add_special_tokens=False, padding=False, truncation=False)["input_ids"] - encoded_inputs = { - "input_ids": [ - (encoded_question_and_title + encoded_text)[:max_length] - if max_length is not None and truncation - else encoded_question_and_title + encoded_text - for encoded_question_and_title, encoded_text in zip(encoded_question_and_titles, encoded_texts) - ] - } - if return_attention_mask is not False: - attention_mask = [] - for input_ids in encoded_inputs["input_ids"]: - attention_mask.append([int(input_id != self.pad_token_id) for input_id in input_ids]) - encoded_inputs["attention_mask"] = attention_mask - return self.pad(encoded_inputs, padding=padding, max_length=max_length, return_tensors=return_tensors) - - def decode_best_spans( - self, - reader_input: BatchEncoding, - reader_output: DPRReaderOutput, - num_spans: int = 16, - max_answer_length: int = 64, - num_spans_per_passage: int = 4, - ) -> List[DPRSpanPrediction]: - """ - Get the span predictions for the extractive Q&A model. - - Returns: *List* of *DPRReaderOutput* sorted by descending *(relevance_score, span_score)*. Each - *DPRReaderOutput* is a *Tuple* with: - - - **span_score**: `float` that corresponds to the score given by the reader for this span compared to other - spans in the same passage. It corresponds to the sum of the start and end logits of the span. - - **relevance_score**: `float` that corresponds to the score of the each passage to answer the question, - compared to all the other passages. It corresponds to the output of the QA classifier of the DPRReader. - - **doc_id**: `int` the id of the passage. - ***start_index**: `int` the start index of the span - (inclusive). - **end_index**: `int` the end index of the span (inclusive). 
- - Examples: - - ```python - >>> from transformers import DPRReader, DPRReaderTokenizer - - >>> tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base") - >>> model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base") - >>> encoded_inputs = tokenizer( - ... questions=["What is love ?"], - ... titles=["Haddaway"], - ... texts=["'What Is Love' is a song recorded by the artist Haddaway"], - ... return_tensors="pt", - ... ) - >>> outputs = model(**encoded_inputs) - >>> predicted_spans = tokenizer.decode_best_spans(encoded_inputs, outputs) - >>> print(predicted_spans[0].text) # best span - a song - ```""" - input_ids = reader_input["input_ids"] - start_logits, end_logits, relevance_logits = reader_output[:3] - n_passages = len(relevance_logits) - sorted_docs = sorted(range(n_passages), reverse=True, key=relevance_logits.__getitem__) - nbest_spans_predictions: List[DPRReaderOutput] = [] - for doc_id in sorted_docs: - sequence_ids = list(input_ids[doc_id]) - # assuming question & title information is at the beginning of the sequence - passage_offset = sequence_ids.index(self.sep_token_id, 2) + 1 # second sep id - if sequence_ids[-1] == self.pad_token_id: - sequence_len = sequence_ids.index(self.pad_token_id) - else: - sequence_len = len(sequence_ids) - - best_spans = self._get_best_spans( - start_logits=start_logits[doc_id][passage_offset:sequence_len], - end_logits=end_logits[doc_id][passage_offset:sequence_len], - max_answer_length=max_answer_length, - top_spans=num_spans_per_passage, - ) - for start_index, end_index in best_spans: - start_index += passage_offset - end_index += passage_offset - nbest_spans_predictions.append( - DPRSpanPrediction( - span_score=start_logits[doc_id][start_index] + end_logits[doc_id][end_index], - relevance_score=relevance_logits[doc_id], - doc_id=doc_id, - start_index=start_index, - end_index=end_index, - text=self.decode(sequence_ids[start_index : end_index + 1]), - ) - ) - if len(nbest_spans_predictions) >= num_spans: - break - return nbest_spans_predictions[:num_spans] - - def _get_best_spans( - self, - start_logits: List[int], - end_logits: List[int], - max_answer_length: int, - top_spans: int, - ) -> List[DPRSpanPrediction]: - """ - Finds the best answer span for the extractive Q&A model for one passage. It returns the best span by descending - `span_score` order and keeping max `top_spans` spans. Spans longer that `max_answer_length` are ignored. 
- """ - scores = [] - for start_index, start_score in enumerate(start_logits): - for answer_length, end_score in enumerate(end_logits[start_index : start_index + max_answer_length]): - scores.append(((start_index, start_index + answer_length), start_score + end_score)) - scores = sorted(scores, key=lambda x: x[1], reverse=True) - chosen_span_intervals = [] - for (start_index, end_index), score in scores: - assert start_index <= end_index, f"Wrong span indices: [{start_index}:{end_index}]" - length = end_index - start_index + 1 - assert length <= max_answer_length, f"Span is too long: {length} > {max_answer_length}" - if any( - start_index <= prev_start_index <= prev_end_index <= end_index - or prev_start_index <= start_index <= end_index <= prev_end_index - for (prev_start_index, prev_end_index) in chosen_span_intervals - ): - continue - chosen_span_intervals.append((start_index, end_index)) - - if len(chosen_span_intervals) == top_spans: - break - return chosen_span_intervals - - -@add_end_docstrings(CUSTOM_DPR_READER_DOCSTRING) -class DPRReaderTokenizerFast(CustomDPRReaderTokenizerMixin, BertTokenizerFast): - r""" - Constructs a "fast" DPRReader tokenizer (backed by HuggingFace's *tokenizers* library). - - [`DPRReaderTokenizerFast`] is almost identical to [`BertTokenizerFast`] and runs end-to-end tokenization: - punctuation splitting and wordpiece. The difference is that is has three inputs strings: question, titles and texts - that are combined to be fed to the [`DPRReader`] model. - - Refer to superclass [`BertTokenizerFast`] for usage examples and documentation concerning parameters. - - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = READER_PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = READER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - pretrained_init_configuration = READER_PRETRAINED_INIT_CONFIGURATION - model_input_names = ["input_ids", "attention_mask"] - slow_tokenizer_class = DPRReaderTokenizer diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/ContentVec768L12_Onnx.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/ContentVec768L12_Onnx.py deleted file mode 100644 index 8dde0f173ed60169282128cc51eb1c200c5d82c5..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/ContentVec768L12_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import onnxruntime -import torch - -class ContentVec768L12_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/vec-768-layer-12.onnx",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 768 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) \ No newline at end of file diff --git 
a/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/lib/unit.js b/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/lib/unit.js deleted file mode 100644 index c349661a8895709e6d27e435acd8cff1ba76d661..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/lib/unit.js +++ /dev/null @@ -1,120 +0,0 @@ -var minus = "-".charCodeAt(0); -var plus = "+".charCodeAt(0); -var dot = ".".charCodeAt(0); -var exp = "e".charCodeAt(0); -var EXP = "E".charCodeAt(0); - -// Check if three code points would start a number -// https://www.w3.org/TR/css-syntax-3/#starts-with-a-number -function likeNumber(value) { - var code = value.charCodeAt(0); - var nextCode; - - if (code === plus || code === minus) { - nextCode = value.charCodeAt(1); - - if (nextCode >= 48 && nextCode <= 57) { - return true; - } - - var nextNextCode = value.charCodeAt(2); - - if (nextCode === dot && nextNextCode >= 48 && nextNextCode <= 57) { - return true; - } - - return false; - } - - if (code === dot) { - nextCode = value.charCodeAt(1); - - if (nextCode >= 48 && nextCode <= 57) { - return true; - } - - return false; - } - - if (code >= 48 && code <= 57) { - return true; - } - - return false; -} - -// Consume a number -// https://www.w3.org/TR/css-syntax-3/#consume-number -module.exports = function(value) { - var pos = 0; - var length = value.length; - var code; - var nextCode; - var nextNextCode; - - if (length === 0 || !likeNumber(value)) { - return false; - } - - code = value.charCodeAt(pos); - - if (code === plus || code === minus) { - pos++; - } - - while (pos < length) { - code = value.charCodeAt(pos); - - if (code < 48 || code > 57) { - break; - } - - pos += 1; - } - - code = value.charCodeAt(pos); - nextCode = value.charCodeAt(pos + 1); - - if (code === dot && nextCode >= 48 && nextCode <= 57) { - pos += 2; - - while (pos < length) { - code = value.charCodeAt(pos); - - if (code < 48 || code > 57) { - break; - } - - pos += 1; - } - } - - code = value.charCodeAt(pos); - nextCode = value.charCodeAt(pos + 1); - nextNextCode = value.charCodeAt(pos + 2); - - if ( - (code === exp || code === EXP) && - ((nextCode >= 48 && nextCode <= 57) || - ((nextCode === plus || nextCode === minus) && - nextNextCode >= 48 && - nextNextCode <= 57)) - ) { - pos += nextCode === plus || nextCode === minus ? 
3 : 2; - - while (pos < length) { - code = value.charCodeAt(pos); - - if (code < 48 || code > 57) { - break; - } - - pos += 1; - } - } - - return { - number: value.slice(0, pos), - unit: value.slice(pos) - }; -}; diff --git a/spaces/ysharma/Low-rank-Adaptation/train_lora_dreambooth.py b/spaces/ysharma/Low-rank-Adaptation/train_lora_dreambooth.py deleted file mode 100644 index 43ec4d533d080fd70b655d4d47f6f1a9113117b3..0000000000000000000000000000000000000000 --- a/spaces/ysharma/Low-rank-Adaptation/train_lora_dreambooth.py +++ /dev/null @@ -1,956 +0,0 @@ -# Bootstrapped from: -# https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py - -import argparse -import hashlib -import itertools -import math -import os -import inspect -from pathlib import Path -from typing import Optional - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint - - -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - StableDiffusionPipeline, - UNet2DConditionModel, -) -from diffusers.optimization import get_scheduler -from huggingface_hub import HfFolder, Repository, whoami - -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - -from lora_diffusion import ( - inject_trainable_lora, - save_lora_weight, - extract_lora_ups_down, -) - -from torch.utils.data import Dataset -from PIL import Image -from torchvision import transforms - -from pathlib import Path - -import random -import re - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. 
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - color_jitter=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize( - size, interpolation=transforms.InterpolationMode.BILINEAR - ), - transforms.CenterCrop(size) - if center_crop - else transforms.RandomCrop(size), - transforms.ColorJitter(0.2, 0.1) - if color_jitter - else transforms.Lambda(lambda x: x), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open( - self.instance_images_path[index % self.num_instance_images] - ) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open( - self.class_images_path[index % self.num_class_images] - ) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." 
- - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -logger = get_logger(__name__) - - -def parse_args(input_args=None): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--pretrained_vae_name_or_path", - type=str, - default=None, - help="Path to pretrained vae or vae identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - required=True, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument( - "--prior_loss_weight", - type=float, - default=1.0, - help="The weight of prior preservation loss.", - ) - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If not have enough images, additional images will be" - " sampled with class_prompt." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--seed", type=int, default=None, help="A seed for reproducible training." 
- ) - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - action="store_true", - help="Whether to center crop images before resizing to resolution", - ) - parser.add_argument( - "--color_jitter", - action="store_true", - help="Whether to apply color jitter to images", - ) - parser.add_argument( - "--train_text_encoder", - action="store_true", - help="Whether to train the text encoder", - ) - parser.add_argument( - "--train_batch_size", - type=int, - default=4, - help="Batch size (per device) for the training dataloader.", - ) - parser.add_argument( - "--sample_batch_size", - type=int, - default=4, - help="Batch size (per device) for sampling images.", - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--save_steps", - type=int, - default=500, - help="Save checkpoint every X updates steps.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--lora_rank", - type=int, - default=4, - help="Rank of LoRA approximation.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=None, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--learning_rate_text", - type=float, - default=5e-6, - help="Initial learning rate for text encoder (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", - type=int, - default=500, - help="Number of steps for the warmup in the lr scheduler.", - ) - parser.add_argument( - "--use_8bit_adam", - action="store_true", - help="Whether or not to use 8-bit Adam from bitsandbytes.", - ) - parser.add_argument( - "--adam_beta1", - type=float, - default=0.9, - help="The beta1 parameter for the Adam optimizer.", - ) - parser.add_argument( - "--adam_beta2", - type=float, - default=0.999, - help="The beta2 parameter for the Adam optimizer.", - ) - parser.add_argument( - "--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use." - ) - parser.add_argument( - "--adam_epsilon", - type=float, - default=1e-08, - help="Epsilon value for the Adam optimizer", - ) - parser.add_argument( - "--max_grad_norm", default=1.0, type=float, help="Max gradient norm." 
- ) - parser.add_argument( - "--push_to_hub", - action="store_true", - help="Whether or not to push the model to the Hub.", - ) - parser.add_argument( - "--hub_token", - type=str, - default=None, - help="The token to use to push to the Model Hub.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--local_rank", - type=int, - default=-1, - help="For distributed training: local_rank", - ) - parser.add_argument( - "--resume_unet", - type=str, - default=None, - help=("File path for unet lora to resume training."), - ) - parser.add_argument( - "--resume_text_encoder", - type=str, - default=None, - help=("File path for text encoder lora to resume training."), - ) - - if input_args is not None: - args = parser.parse_args(input_args) - else: - args = parser.parse_args() - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - else: - if args.class_data_dir is not None: - logger.warning( - "You need not use --class_data_dir without --with_prior_preservation." - ) - if args.class_prompt is not None: - logger.warning( - "You need not use --class_prompt without --with_prior_preservation." - ) - - return args - - -def main(args): - logging_dir = Path(args.output_dir, args.logging_dir) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if ( - args.train_text_encoder - and args.gradient_accumulation_steps > 1 - and accelerator.num_processes > 1 - ): - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." 
- ) - - if args.seed is not None: - set_seed(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = ( - torch.float16 if accelerator.device.type == "cuda" else torch.float32 - ) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - torch_dtype=torch_dtype, - safety_checker=None, - revision=args.revision, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader( - sample_dataset, batch_size=args.sample_batch_size - ) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, - desc="Generating class images", - disable=not accelerator.is_local_main_process, - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = ( - class_images_dir - / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - ) - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained( - args.tokenizer_name, - revision=args.revision, - ) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="tokenizer", - revision=args.revision, - ) - - # Load models and create wrapper for stable diffusion - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="text_encoder", - revision=args.revision, - ) - vae = AutoencoderKL.from_pretrained( - args.pretrained_vae_name_or_path or args.pretrained_model_name_or_path, - subfolder=None if args.pretrained_vae_name_or_path else "vae", - revision=None if args.pretrained_vae_name_or_path else args.revision, - ) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="unet", - revision=args.revision, - ) - unet.requires_grad_(False) - unet_lora_params, _ = inject_trainable_lora( - unet, r=args.lora_rank, loras=args.resume_unet - ) - - for _up, _down in extract_lora_ups_down(unet): - print("Before training: Unet First Layer lora up", _up.weight.data) - print("Before training: Unet First Layer lora down", _down.weight.data) - break - - vae.requires_grad_(False) - text_encoder.requires_grad_(False) - - if args.train_text_encoder: - text_encoder_lora_params, _ = inject_trainable_lora( - text_encoder, - target_replace_module=["CLIPAttention"], - r=args.lora_rank, - ) - for _up, _down in extract_lora_ups_down( - text_encoder, target_replace_module=["CLIPAttention"] - ): - print("Before training: text encoder First Layer lora up", _up.weight.data) - print( - "Before training: text encoder First Layer lora down", _down.weight.data - ) - break - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if 
args.train_text_encoder: - text_encoder.gradient_checkpointing_enable() - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate - * args.gradient_accumulation_steps - * args.train_batch_size - * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - text_lr = ( - args.learning_rate - if args.learning_rate_text is None - else args.learning_rate_text - ) - - params_to_optimize = ( - [ - {"params": itertools.chain(*unet_lora_params), "lr": args.learning_rate}, - { - "params": itertools.chain(*text_encoder_lora_params), - "lr": text_lr, - }, - ] - if args.train_text_encoder - else itertools.chain(*unet_lora_params) - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_config( - args.pretrained_model_name_or_path, subfolder="scheduler" - ) - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - color_jitter=args.color_jitter, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad( - {"input_ids": input_ids}, - padding="max_length", - max_length=tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - batch_size=args.train_batch_size, - shuffle=True, - collate_fn=collate_fn, - num_workers=1, - ) - - # Scheduler and math around the number of training steps. 
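- # If --max_train_steps is not given, it is derived below from num_train_epochs and the number of optimizer updates per epoch (len(train_dataloader) / gradient_accumulation_steps, rounded up).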
- overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil( - len(train_dataloader) / args.gradient_accumulation_steps - ) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - if args.train_text_encoder: - ( - unet, - text_encoder, - optimizer, - train_dataloader, - lr_scheduler, - ) = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil( - len(train_dataloader) / args.gradient_accumulation_steps - ) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - # Train! - total_batch_size = ( - args.train_batch_size - * accelerator.num_processes - * args.gradient_accumulation_steps - ) - - print("***** Running training *****") - print(f" Num examples = {len(train_dataset)}") - print(f" Num batches each epoch = {len(train_dataloader)}") - print(f" Num Epochs = {args.num_train_epochs}") - print(f" Instantaneous batch size per device = {args.train_batch_size}") - print( - f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}" - ) - print(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - print(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. 
- progress_bar = tqdm( - range(args.max_train_steps), disable=not accelerator.is_local_main_process - ) - progress_bar.set_description("Steps") - global_step = 0 - last_save = 0 - - for epoch in range(args.num_train_epochs): - unet.train() - if args.train_text_encoder: - text_encoder.train() - - for step, batch in enumerate(train_dataloader): - # Convert images to latent space - latents = vae.encode( - batch["pixel_values"].to(dtype=weight_dtype) - ).latent_dist.sample() - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, - noise_scheduler.config.num_train_timesteps, - (bsz,), - device=latents.device, - ) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError( - f"Unknown prediction type {noise_scheduler.config.prediction_type}" - ) - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = ( - F.mse_loss(model_pred.float(), target.float(), reduction="none") - .mean([1, 2, 3]) - .mean() - ) - - # Compute prior loss - prior_loss = F.mse_loss( - model_pred_prior.float(), target_prior.float(), reduction="mean" - ) - - # Add the prior loss to the instance loss. - loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - progress_bar.update(1) - optimizer.zero_grad() - - global_step += 1 - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - if args.save_steps and global_step - last_save >= args.save_steps: - if accelerator.is_main_process: - # newer versions of accelerate allow the 'keep_fp32_wrapper' arg. without passing - # it, the models will be unwrapped, and when they are then used for further training, - # we will crash. pass this, but only to newer versions of accelerate. 
fixes - # https://github.com/huggingface/diffusers/issues/1566 - accepts_keep_fp32_wrapper = "keep_fp32_wrapper" in set( - inspect.signature( - accelerator.unwrap_model - ).parameters.keys() - ) - extra_args = ( - {"keep_fp32_wrapper": True} - if accepts_keep_fp32_wrapper - else {} - ) - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet, **extra_args), - text_encoder=accelerator.unwrap_model( - text_encoder, **extra_args - ), - revision=args.revision, - ) - - filename_unet = ( - f"{args.output_dir}/lora_weight_e{epoch}_s{global_step}.pt" - ) - filename_text_encoder = f"{args.output_dir}/lora_weight_e{epoch}_s{global_step}.text_encoder.pt" - print(f"save weights {filename_unet}, {filename_text_encoder}") - save_lora_weight(pipeline.unet, filename_unet) - if args.train_text_encoder: - save_lora_weight( - pipeline.text_encoder, - filename_text_encoder, - target_replace_module=["CLIPAttention"], - ) - - for _up, _down in extract_lora_ups_down(pipeline.unet): - print( - "First Unet Layer's Up Weight is now : ", - _up.weight.data, - ) - print( - "First Unet Layer's Down Weight is now : ", - _down.weight.data, - ) - break - if args.train_text_encoder: - for _up, _down in extract_lora_ups_down( - pipeline.text_encoder, - target_replace_module=["CLIPAttention"], - ): - print( - "First Text Encoder Layer's Up Weight is now : ", - _up.weight.data, - ) - print( - "First Text Encoder Layer's Down Weight is now : ", - _down.weight.data, - ) - break - - last_save = global_step - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. 
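- # Only the main process rebuilds a StableDiffusionPipeline from the unwrapped unet/text encoder so the final LoRA weights can be written to the output directory.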
- if accelerator.is_main_process: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - revision=args.revision, - ) - - print("\n\nLora TRAINING DONE!\n\n") - - save_lora_weight(pipeline.unet, args.output_dir + "/lora_weight.pt") - if args.train_text_encoder: - save_lora_weight( - pipeline.text_encoder, - args.output_dir + "/lora_weight.text_encoder.pt", - target_replace_module=["CLIPAttention"], - ) - - if args.push_to_hub: - repo.push_to_hub( - commit_message="End of training", blocking=False, auto_lfs_prune=True - ) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/ysharma/chatglm2-6b-4bit/README.md b/spaces/ysharma/chatglm2-6b-4bit/README.md deleted file mode 100644 index 6422d2fdf90377726c4532e59c71d037882c1ef6..0000000000000000000000000000000000000000 --- a/spaces/ysharma/chatglm2-6b-4bit/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chatglm2 6b 4bit -emoji: 🌖 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit -duplicated_from: mikeee/chatglm2-6b-4bit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yuan2023/img-to-music/constants.py b/spaces/yuan2023/img-to-music/constants.py deleted file mode 100644 index 86863d1b778d4c66f0d8e1e0b699f1bb937c1d50..0000000000000000000000000000000000000000 --- a/spaces/yuan2023/img-to-music/constants.py +++ /dev/null @@ -1,9 +0,0 @@ -import numpy as np -import os - -MUBERT_LICENSE = os.environ.get('MUBERT_LICENSE') -MUBERT_TOKEN = os.environ.get('MUBERT_TOKEN') - -MUBERT_MODE = "loop" -MUBERT_TAGS_STRING = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk,funk,witch house,tech house,underground,artists,mystical,disco,sensorium,r&b,agender,psychedelic trance / psytrance,peaceful,run 140,piano,run 160,setting,meditation,christmas,ambient,horror,cinematic,electro house,idm,bass,minimal,underscore,drums,glitchy,beautiful,technology,tribal house,country pop,jazz & funk,documentary,space,classical,valentines,chillstep,experimental,trap,new jack swing,drama,post-rock,tense,corporate,neutral,happy,analog,funky,spiritual,sberzvuk special,chill hop,dramatic,catchy,holidays,fitness 90,optimistic,orchestra,acid techno,energizing,romantic,minimal house,breaks,hyper pop,warm up,dreamy,dark,urban,microfunk,dub,nu disco,vogue,keys,hardcore,aggressive,indie,electro funk,beauty,relaxing,trance,pop,hiphop,soft,acoustic,chillrave / ethno-house,deep techno,angry,dance,fun,dubstep,tropical,latin pop,heroic,world music,inspirational,uplifting,atmosphere,art,epic,advertising,chillout,scary,spooky,slow ballad,saxophone,summer,erotic,jazzy,energy 100,kara mar,xmas,atmospheric,indie pop,hip-hop,yoga,reggaeton,lounge,travel,running,folk,chillrave & ethno-house,detective,darkambient,chill,fantasy,minimal techno,special,night,tropical house,downtempo,lullaby,meditative,upbeat,glitch hop,fitness,neurofunk,sexual,indie rock,future pop,jazz,cyberpunk,melancholic,happy hardcore,family / kids,synths,electric guitar,comedy,psychedelic trance & psytrance,edm,psychedelic rock,calm,zen,bells,podcast,melodic house,ethnic percussion,nature,heavy,bassline,indie dance,techno,drumnbass,synth pop,vaporwave,sad,8-bit,chillgressive,deep,orchestral,futuristic,hardtechno,nostalgic,big 
room,sci-fi,tutorial,joyful,pads,minimal 170,drill,ethnic 108,amusing,sleepy ambient,psychill,italo disco,lofi,house,acoustic guitar,bassline house,rock,k-pop,synthwave,deep house,electronica,gabber,nightlife,sport & fitness,road trip,celebration,electro,disco house,electronic' -MUBERT_TAGS = np.array(MUBERT_TAGS_STRING.split(',')) \ No newline at end of file diff --git a/spaces/yuhe6/final_project/app.py b/spaces/yuhe6/final_project/app.py deleted file mode 100644 index c25dcf00a82e198266e728ec4627deedbd477138..0000000000000000000000000000000000000000 --- a/spaces/yuhe6/final_project/app.py +++ /dev/null @@ -1,86 +0,0 @@ -import os -import torch -from PIL import Image -from torchvision import transforms -import gradio as gr -#https://huggingface.co/spaces/yuhe6/final_project/blob/main/Net_Rotate9.pth -#os.system("wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt") - -#model = torch.hub.load('huawei-noah/ghostnet', 'ghostnet_1x', pretrained=True) -#model = torch.jit.load('https://huggingface.co/spaces/yuhe6/final_project/blob/main/Net_Rotate9.pth').eval().to(device) - - -model = torch.jit.load('Net2_Flip_jit.pt', map_location = torch.device('cpu')) -model.eval() - -model_categories = ["cat","dog"] # verify order -n_categories = len(model_categories) - -#torch.hub.download_url_to_file('https://huggingface.co/spaces/yuhe6/final_project/blob/main/Net_Rotate9.pth', '/tmp/temporary_file') -#model = torch.hub.load('/tmp', 'temporary_file', pretrained=True) - -#model.eval() -# Download an example image from the pytorch website -torch.hub.download_url_to_file("https://upload.wikimedia.org/wikipedia/commons/5/5b/Dog_%28Canis_lupus_familiaris%29_%281%29.jpg", "dog1.jpg") -torch.hub.download_url_to_file("https://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Golde33443.jpg/640px-Golde33443.jpg", "dog2.jpg") -torch.hub.download_url_to_file("https://upload.wikimedia.org/wikipedia/commons/c/c7/Tabby_cat_with_blue_eyes-3336579.jpg", "cat1.jpg") -torch.hub.download_url_to_file("https://upload.wikimedia.org/wikipedia/commons/9/9e/Domestic_cat.jpg", "cat2.jpg") - -def inference(input_image): - preprocess = transforms.Compose([ - transforms.Resize(size = (256, 256)), # Fixed resize from transforms.Resize(256) - #transforms.CenterCrop(224), - transforms.ToTensor(), - #transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ]) - - # Used print statements to detect shapes between input tensor & batch - # e.g. input_tensor.shape - input_tensor = preprocess(input_image) - input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model - - # move the input and model to GPU for speed if available - if torch.cuda.is_available(): - input_batch = input_batch.to('cuda') - model.to('cuda') - - with torch.no_grad(): - output = model(input_batch) # model(input_tensor) # needed to have batch dimension - - # The output has unnormalized scores. To get probabilities, you can run a softmax on it. 
- probabilities = torch.nn.functional.softmax(output[0]) - - # Read the categories - #with open("dog_cat.txt", "r") as f: - #categories = [s.strip() for s in f.readlines()] - #with open("dog_cat.txt", "r") as f: - - #categories = [s.strip() for s in f.readlines()] - # Show top categories per image - top1_prob, top1_catid = torch.topk(probabilities, n_categories) - result = {} - for i in range(top1_prob.size(0)): - result[model_categories[top1_catid[i]]] = top1_prob[i].item() - return result - -inputs = gr.inputs.Image(type='pil') -outputs = gr.outputs.Label(type="confidences", num_top_classes = n_categories) - -title = "STAT 430 Final Project App -- Made by Group DHZ" -description = "This is our Cat & Dog Classifier for the final project, and the model we use is generated by our second neural network augmented by the flipping technique, which is would give the best accuracy. To use it, simply upload your image, or click one of the examples to load them. The authors are Xiongjie Dai (xdai12), Yu He (yuhe6), Mengjia Zeng (mengjia6)." -#article = "

          GhostNet: More Features from Cheap Operations | Github Repo
          " - -examples = [ - ['dog1.jpg'], - ['cat1.jpg'], - ['dog2.jpg'], - ['cat2.jpg'] -] - -gr.Interface( - inference, inputs, outputs, - title = title, description = description, - examples = examples, - analytics_enabled = False).launch( - #debug = True # Enabled debug mode to see the stacktrace on Google Colab. - ) \ No newline at end of file diff --git a/spaces/zeno-ml/translation-report/main.py b/spaces/zeno-ml/translation-report/main.py deleted file mode 100644 index cd31b450a4657c6d022530203099946d2d54a04c..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/translation-report/main.py +++ /dev/null @@ -1,99 +0,0 @@ -"""The main entry point for performing comparison on analysis_gpt_mts.""" - -from __future__ import annotations - -import argparse -import os - -import pandas as pd -from zeno_build.experiments import search_space -from zeno_build.experiments.experiment_run import ExperimentRun -from zeno_build.reporting.visualize import visualize - -import config -from modeling import ( - GptMtInstance, - process_data, - process_output, -) - - -def analysis_gpt_mt_main( - input_dir: str, - results_dir: str, -) -> None: - """Run the analysis of GPT-MT experiment.""" - # Get the dataset configuration - lang_pair_preset = config.main_space.dimensions["lang_pairs"] - if not isinstance(lang_pair_preset, search_space.Constant): - raise ValueError( - "All experiments must be run on a single set of language pairs." - ) - lang_pairs = config.lang_pairs[lang_pair_preset.value] - - # Load and exhaustiveize the format of the necessary data. - test_data: list[GptMtInstance] = process_data( - input_dir=input_dir, - lang_pairs=lang_pairs, - ) - - results: list[ExperimentRun] = [] - model_presets = config.main_space.dimensions["model_preset"] - if not isinstance(model_presets, search_space.Categorical): - raise ValueError("The model presets must be a categorical parameter.") - for model_preset in model_presets.choices: - output = process_output( - input_dir=input_dir, - lang_pairs=lang_pairs, - model_preset=model_preset, - ) - results.append( - ExperimentRun(model_preset, {"model_preset": model_preset}, output) - ) - - # Perform the visualization - df = pd.DataFrame( - { - "data": [x.data for x in test_data], - "label": [x.label for x in test_data], - "lang_pair": [x.lang_pair for x in test_data], - "doc_id": [x.doc_id for x in test_data], - } - ) - labels = [x.label for x in test_data] - visualize( - df, - labels, - results, - "text-classification", - "data", - config.zeno_distill_and_metric_functions, - zeno_config={ - "cache_path": os.path.join(results_dir, "zeno_cache"), - "port": 7860, - "host": "0.0.0.0", - "editable": False, - }, - ) - - -if __name__ == "__main__": - # Parse the command line arguments - parser = argparse.ArgumentParser() - parser.add_argument( - "--input-dir", - type=str, - help="The directory of the GPT-MT repo.", - ) - parser.add_argument( - "--results-dir", - type=str, - default="results", - help="The directory to store the results in.", - ) - args = parser.parse_args() - - analysis_gpt_mt_main( - input_dir=args.input_dir, - results_dir=args.results_dir, - ) diff --git a/spaces/zhan66/vits-simple-api/utils/merge.py b/spaces/zhan66/vits-simple-api/utils/merge.py deleted file mode 100644 index 86ee1cf89fd270b7e30766364f69495895f5f2d0..0000000000000000000000000000000000000000 --- a/spaces/zhan66/vits-simple-api/utils/merge.py +++ /dev/null @@ -1,190 +0,0 @@ -import os -import json -import logging -import torch -import config -import numpy as np -from utils.utils 
import check_is_none -from vits import VITS -from voice import TTS - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -lang_dict = { - "english_cleaners": ["en"], - "english_cleaners2": ["en"], - "japanese_cleaners": ["ja"], - "japanese_cleaners2": ["ja"], - "korean_cleaners": ["ko"], - "chinese_cleaners": ["zh"], - "zh_ja_mixture_cleaners": ["zh", "ja"], - "sanskrit_cleaners": ["sa"], - "cjks_cleaners": ["zh", "ja", "ko", "sa"], - "cjke_cleaners": ["zh", "ja", "ko", "en"], - "cjke_cleaners2": ["zh", "ja", "ko", "en"], - "cje_cleaners": ["zh", "ja", "en"], - "cje_cleaners2": ["zh", "ja", "en"], - "thai_cleaners": ["th"], - "shanghainese_cleaners": ["sh"], - "chinese_dialect_cleaners": ["zh", "ja", "sh", "gd", "en", "SZ", "WX", "CZ", "HZ", "SX", "NB", "JJ", "YX", "JD", - "ZR", "PH", "TX", "JS", "HN", "LP", "XS", "FY", "RA", "CX", "SM", "TT", "WZ", "SC", - "YB"], - "bert_chinese_cleaners": ["zh"], -} - - -def analysis(model_config_json): - model_config = json.load(model_config_json) - symbols = model_config.get("symbols", None) - emotion_embedding = model_config.get("data").get("emotion_embedding", False) - if "use_spk_conditioned_encoder" in model_config.get("model"): - model_type = 'bert_vits2' - return model_type - if symbols != None: - if not emotion_embedding: - mode_type = "vits" - else: - mode_type = "w2v2" - else: - mode_type = "hubert" - return mode_type - - -def load_npy(model_): - if isinstance(model_, list): - # check if is .npy - for i in model_: - _model_extention = os.path.splitext(i)[1] - if _model_extention != ".npy": - raise ValueError(f"Unsupported model type: {_model_extention}") - - # merge npy files - emotion_reference = np.empty((0, 1024)) - for i in model_: - tmp = np.load(i).reshape(-1, 1024) - emotion_reference = np.append(emotion_reference, tmp, axis=0) - - elif os.path.isdir(model_): - emotion_reference = np.empty((0, 1024)) - for root, dirs, files in os.walk(model_): - for file_name in files: - # check if is .npy - _model_extention = os.path.splitext(file_name)[1] - if _model_extention != ".npy": - continue - file_path = os.path.join(root, file_name) - - # merge npy files - tmp = np.load(file_path).reshape(-1, 1024) - emotion_reference = np.append(emotion_reference, tmp, axis=0) - - elif os.path.isfile(model_): - # check if is .npy - _model_extention = os.path.splitext(model_)[1] - if _model_extention != ".npy": - raise ValueError(f"Unsupported model type: {_model_extention}") - - emotion_reference = np.load(model_) - logging.info(f"Loaded emotional dimention npy range:{len(emotion_reference)}") - return emotion_reference - - -def merge_model(merging_model): - vits_obj = [] - vits_speakers = [] - hubert_vits_obj = [] - hubert_vits_speakers = [] - w2v2_vits_obj = [] - w2v2_vits_speakers = [] - bert_vits2_obj = [] - bert_vits2_speakers = [] - - # model list - vits_list = [] - hubert_vits_list = [] - w2v2_vits_list = [] - bert_vits2_list = [] - - for l in merging_model: - with open(l[1], 'r', encoding='utf-8') as model_config: - model_type = analysis(model_config) - if model_type == "vits": - vits_list.append(l) - elif model_type == "hubert": - hubert_vits_list.append(l) - elif model_type == "w2v2": - w2v2_vits_list.append(l) - elif model_type == "bert_vits2": - bert_vits2_list.append(l) - - # merge vits - new_id = 0 - for obj_id, i in enumerate(vits_list): - obj = VITS(model=i[0], config=i[1], model_type="vits", device=device) - lang = lang_dict.get(obj.get_cleaner(), ["unknown"]) - for id, name in enumerate(obj.get_speakers()): - 
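- # Each speaker keeps its per-model index in vits_obj, and is also assigned a consecutive global id (new_id) in vits_speakers across all merged VITS models.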
vits_obj.append([int(id), obj, obj_id]) - vits_speakers.append({"id": new_id, "name": name, "lang": lang}) - new_id += 1 - - # merge hubert-vits - if len(hubert_vits_list) != 0: - if getattr(config, "HUBERT_SOFT_MODEL", None) == None or check_is_none(config.HUBERT_SOFT_MODEL): - raise ValueError(f"Please configure HUBERT_SOFT_MODEL path in config.py") - try: - from vits.hubert_model import hubert_soft - hubert = hubert_soft(config.HUBERT_SOFT_MODEL) - except Exception as e: - raise ValueError(f"Load HUBERT_SOFT_MODEL failed {e}") - - new_id = 0 - for obj_id, i in enumerate(hubert_vits_list): - obj = VITS(model=i[0], config=i[1], model_=hubert, model_type="hubert", device=device) - lang = lang_dict.get(obj.get_cleaner(), ["unknown"]) - - for id, name in enumerate(obj.get_speakers()): - hubert_vits_obj.append([int(id), obj, obj_id]) - hubert_vits_speakers.append({"id": new_id, "name": name, "lang": lang}) - new_id += 1 - - # merge w2v2-vits - emotion_reference = None - if len(w2v2_vits_list) != 0: - if getattr(config, "DIMENSIONAL_EMOTION_NPY", None) == None or check_is_none(config.DIMENSIONAL_EMOTION_NPY): - raise ValueError(f"Please configure DIMENSIONAL_EMOTION_NPY path in config.py") - try: - emotion_reference = load_npy(config.DIMENSIONAL_EMOTION_NPY) - except Exception as e: - raise ValueError(f"Load DIMENSIONAL_EMOTION_NPY failed {e}") - - new_id = 0 - for obj_id, i in enumerate(w2v2_vits_list): - obj = VITS(model=i[0], config=i[1], model_=emotion_reference, model_type="w2v2", device=device) - lang = lang_dict.get(obj.get_cleaner(), ["unknown"]) - - for id, name in enumerate(obj.get_speakers()): - w2v2_vits_obj.append([int(id), obj, obj_id]) - w2v2_vits_speakers.append({"id": new_id, "name": name, "lang": lang}) - new_id += 1 - - # merge Bert_VITS2 - new_id = 0 - for obj_id, i in enumerate(bert_vits2_list): - from bert_vits2 import Bert_VITS2 - obj = Bert_VITS2(model=i[0], config=i[1], device=device) - lang = ["ZH"] - for id, name in enumerate(obj.get_speakers()): - bert_vits2_obj.append([int(id), obj, obj_id]) - bert_vits2_speakers.append({"id": new_id, "name": name, "lang": lang}) - new_id += 1 - - - voice_obj = {"VITS": vits_obj, "HUBERT-VITS": hubert_vits_obj, "W2V2-VITS": w2v2_vits_obj, - "BERT-VITS2": bert_vits2_obj} - voice_speakers = {"VITS": vits_speakers, "HUBERT-VITS": hubert_vits_speakers, "W2V2-VITS": w2v2_vits_speakers, - "BERT-VITS2": bert_vits2_speakers} - w2v2_emotion_count = len(emotion_reference) if emotion_reference is not None else 0 - - tts = TTS(voice_obj, voice_speakers, w2v2_emotion_count=w2v2_emotion_count, device=device) - - return tts diff --git a/spaces/zhang-wei-jian/docker/node_modules/is-binary-path/readme.md b/spaces/zhang-wei-jian/docker/node_modules/is-binary-path/readme.md deleted file mode 100644 index b4ab02519b0fdefb0f64748adcc1d35111a2cf75..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/is-binary-path/readme.md +++ /dev/null @@ -1,34 +0,0 @@ -# is-binary-path [![Build Status](https://travis-ci.org/sindresorhus/is-binary-path.svg?branch=master)](https://travis-ci.org/sindresorhus/is-binary-path) - -> Check if a file path is a binary file - - -## Install - -``` -$ npm install is-binary-path -``` - - -## Usage - -```js -const isBinaryPath = require('is-binary-path'); - -isBinaryPath('source/unicorn.png'); -//=> true - -isBinaryPath('source/unicorn.txt'); -//=> false -``` - - -## Related - -- [binary-extensions](https://github.com/sindresorhus/binary-extensions) - List of binary file extensions -- 
[is-text-path](https://github.com/sindresorhus/is-text-path) - Check if a filepath is a text file - - -## License - -MIT © [Sindre Sorhus](https://sindresorhus.com), [Paul Miller](https://paulmillr.com) diff --git a/spaces/zhang-wei-jian/docker/node_modules/picomatch/CHANGELOG.md b/spaces/zhang-wei-jian/docker/node_modules/picomatch/CHANGELOG.md deleted file mode 100644 index 8ccc6c1bab0138d0d4da5e604fcb9608790b1692..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/picomatch/CHANGELOG.md +++ /dev/null @@ -1,136 +0,0 @@ -# Release history - -**All notable changes to this project will be documented in this file.** - -The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) -and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html). - -
          - Guiding Principles - -- Changelogs are for humans, not machines. -- There should be an entry for every single version. -- The same types of changes should be grouped. -- Versions and sections should be linkable. -- The latest version comes first. -- The release date of each version is displayed. -- Mention whether you follow Semantic Versioning. - -
          - Types of changes - -Changelog entries are classified using the following labels _(from [keep-a-changelog](http://keepachangelog.com/)_): - -- `Added` for new features. -- `Changed` for changes in existing functionality. -- `Deprecated` for soon-to-be removed features. -- `Removed` for now removed features. -- `Fixed` for any bug fixes. -- `Security` in case of vulnerabilities. - -
          - -## 2.3.1 (2022-01-02) - -### Fixed - -* Fixes bug when a pattern containing an expression after the closing parenthesis (`/!(*.d).{ts,tsx}`) was incorrectly converted to regexp ([9f241ef](https://github.com/micromatch/picomatch/commit/9f241ef)). - -### Changed - -* Some documentation improvements ([f81d236](https://github.com/micromatch/picomatch/commit/f81d236), [421e0e7](https://github.com/micromatch/picomatch/commit/421e0e7)). - -## 2.3.0 (2021-05-21) - -### Fixed - -* Fixes bug where file names with two dots were not being matched consistently with negation extglobs containing a star ([56083ef](https://github.com/micromatch/picomatch/commit/56083ef)) - -## 2.2.3 (2021-04-10) - -### Fixed - -* Do not skip pattern seperator for square brackets ([fb08a30](https://github.com/micromatch/picomatch/commit/fb08a30)). -* Set negatedExtGlob also if it does not span the whole pattern ([032e3f5](https://github.com/micromatch/picomatch/commit/032e3f5)). - -## 2.2.2 (2020-03-21) - -### Fixed - -* Correctly handle parts of the pattern after parentheses in the `scan` method ([e15b920](https://github.com/micromatch/picomatch/commit/e15b920)). - -## 2.2.1 (2020-01-04) - -* Fixes [#49](https://github.com/micromatch/picomatch/issues/49), so that braces with no sets or ranges are now propertly treated as literals. - -## 2.2.0 (2020-01-04) - -* Disable fastpaths mode for the parse method ([5b8d33f](https://github.com/micromatch/picomatch/commit/5b8d33f)) -* Add `tokens`, `slashes`, and `parts` to the object returned by `picomatch.scan()`. - -## 2.1.0 (2019-10-31) - -* add benchmarks for scan ([4793b92](https://github.com/micromatch/picomatch/commit/4793b92)) -* Add eslint object-curly-spacing rule ([707c650](https://github.com/micromatch/picomatch/commit/707c650)) -* Add prefer-const eslint rule ([5c7501c](https://github.com/micromatch/picomatch/commit/5c7501c)) -* Add support for nonegate in scan API ([275c9b9](https://github.com/micromatch/picomatch/commit/275c9b9)) -* Change lets to consts. Move root import up. ([4840625](https://github.com/micromatch/picomatch/commit/4840625)) -* closes https://github.com/micromatch/picomatch/issues/21 ([766bcb0](https://github.com/micromatch/picomatch/commit/766bcb0)) -* Fix "Extglobs" table in readme ([eb19da8](https://github.com/micromatch/picomatch/commit/eb19da8)) -* fixes https://github.com/micromatch/picomatch/issues/20 ([9caca07](https://github.com/micromatch/picomatch/commit/9caca07)) -* fixes https://github.com/micromatch/picomatch/issues/26 ([fa58f45](https://github.com/micromatch/picomatch/commit/fa58f45)) -* Lint test ([d433a34](https://github.com/micromatch/picomatch/commit/d433a34)) -* lint unit tests ([0159b55](https://github.com/micromatch/picomatch/commit/0159b55)) -* Make scan work with noext ([6c02e03](https://github.com/micromatch/picomatch/commit/6c02e03)) -* minor linting ([c2a2b87](https://github.com/micromatch/picomatch/commit/c2a2b87)) -* minor parser improvements ([197671d](https://github.com/micromatch/picomatch/commit/197671d)) -* remove eslint since it... 
([07876fa](https://github.com/micromatch/picomatch/commit/07876fa)) -* remove funding file ([8ebe96d](https://github.com/micromatch/picomatch/commit/8ebe96d)) -* Remove unused funks ([cbc6d54](https://github.com/micromatch/picomatch/commit/cbc6d54)) -* Run eslint during pretest, fix existing eslint findings ([0682367](https://github.com/micromatch/picomatch/commit/0682367)) -* support `noparen` in scan ([3d37569](https://github.com/micromatch/picomatch/commit/3d37569)) -* update changelog ([7b34e77](https://github.com/micromatch/picomatch/commit/7b34e77)) -* update travis ([777f038](https://github.com/micromatch/picomatch/commit/777f038)) -* Use eslint-disable-next-line instead of eslint-disable ([4e7c1fd](https://github.com/micromatch/picomatch/commit/4e7c1fd)) - -## 2.0.7 (2019-05-14) - -* 2.0.7 ([9eb9a71](https://github.com/micromatch/picomatch/commit/9eb9a71)) -* supports lookbehinds ([1f63f7e](https://github.com/micromatch/picomatch/commit/1f63f7e)) -* update .verb.md file with typo change ([2741279](https://github.com/micromatch/picomatch/commit/2741279)) -* fix: typo in README ([0753e44](https://github.com/micromatch/picomatch/commit/0753e44)) - -## 2.0.4 (2019-04-10) - -### Fixed - -- Readme link [fixed](https://github.com/micromatch/picomatch/pull/13/commits/a96ab3aa2b11b6861c23289964613d85563b05df) by @danez. -- `options.capture` now works as expected when fastpaths are enabled. See https://github.com/micromatch/picomatch/pull/12/commits/26aefd71f1cfaf95c37f1c1fcab68a693b037304. Thanks to @DrPizza. - -## 2.0.0 (2019-04-10) - -### Added - -- Adds support for `options.onIgnore`. See the readme for details -- Adds support for `options.onResult`. See the readme for details - -### Breaking changes - -- The unixify option was renamed to `windows` -- caching and all related options and methods have been removed - -## 1.0.0 (2018-11-05) - -- adds `.onMatch` option -- improvements to `.scan` method -- numerous improvements and optimizations for matching and parsing -- better windows path handling - -## 0.1.0 - 2017-04-13 - -First release. 
- - -[keep-a-changelog]: https://github.com/olivierlacan/keep-a-changelog diff --git a/spaces/zhang-wei-jian/docker/node_modules/setprototypeof/index.d.ts b/spaces/zhang-wei-jian/docker/node_modules/setprototypeof/index.d.ts deleted file mode 100644 index f108ecd0a8ca1ec609529d3a0b76106c48e418a0..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/setprototypeof/index.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -declare function setPrototypeOf(o: any, proto: object | null): any; -export = setPrototypeOf; diff --git a/spaces/zhanpj/ChatGPT/modules/overwrites.py b/spaces/zhanpj/ChatGPT/modules/overwrites.py deleted file mode 100644 index bfcd4d01b7d7bec1184a8d09113933bca860530b..0000000000000000000000000000000000000000 --- a/spaces/zhanpj/ChatGPT/modules/overwrites.py +++ /dev/null @@ -1,56 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html - -from modules.presets import * -from modules.llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. 
- """ - if y is None or y == []: - return [] - user, bot = y[-1] - if not detect_converted_mark(user): - user = convert_asis(user) - if not detect_converted_mark(bot): - bot = convert_mdtext(bot) - y[-1] = (user, bot) - return y - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/zhenwusw/JoJoGAN/e4e/utils/data_utils.py b/spaces/zhenwusw/JoJoGAN/e4e/utils/data_utils.py deleted file mode 100644 index f1ba79f4a2d5cc2b97dce76d87bf6e7cdebbc257..0000000000000000000000000000000000000000 --- a/spaces/zhenwusw/JoJoGAN/e4e/utils/data_utils.py +++ /dev/null @@ -1,25 +0,0 @@ -""" -Code adopted from pix2pixHD: -https://github.com/NVIDIA/pix2pixHD/blob/master/data/image_folder.py -""" -import os - -IMG_EXTENSIONS = [ - '.jpg', '.JPG', '.jpeg', '.JPEG', - '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tiff' -] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def make_dataset(dir): - images = [] - assert os.path.isdir(dir), '%s is not a valid directory' % dir - for root, _, fnames in sorted(os.walk(dir)): - for fname in fnames: - if is_image_file(fname): - path = os.path.join(root, fname) - images.append(path) - return images diff --git a/spaces/zhongkaifu/mt_enu_chs/README.md b/spaces/zhongkaifu/mt_enu_chs/README.md deleted file mode 100644 index b8b79ac118cc028d41656fc3866d9ed5cb75d66f..0000000000000000000000000000000000000000 --- a/spaces/zhongkaifu/mt_enu_chs/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Translation from English to Chinese -emoji: 🌖 -colorFrom: yellow -colorTo: pink -sdk: docker -pinned: false -license: bsd-3-clause ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference