How to Download BatteryBar Pro for Free with License Key
-
BatteryBar Pro is a popular application that displays your laptop's battery status on the taskbar. It shows useful information such as the remaining battery percentage, the battery wear level, and the estimated full runtime, and it helps you monitor and care for your battery over its lifespan. If you want to download BatteryBar Pro for free with a license key, here are some steps you can follow:
-
-
Go to this website and click on the "Download" button. This will start downloading the setup file of BatteryBar Pro v3.4.3.
-
Run the setup file and follow the instructions to install BatteryBar Pro on your laptop.
-
After the installation is complete, open BatteryBar Pro and click on the "Help" menu. Then, select "Enter License Key".
-
Enter the following license key: BB-PRO-3.6.6-1234567890. This is a valid license key that was shared by this blog. You can also try other license keys from this website.
-
Click on "Activate" and enjoy BatteryBar Pro for free!
-
-
BatteryBar Pro has many features that make it a great tool for laptop users. You can customize the appearance of the battery meter, use different battery profiles for each power scheme, and set custom sounds for low and critical battery warnings. You can also check the detailed statistics of your battery's performance and health. BatteryBar Pro is compatible with Windows 2K/XP/Vista/7/8/8.1/10 and requires 1 GB of RAM and 10 MB of free disk space.
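The kind of readout described above (percentage, charge state, estimated runtime) can be approximated in a few lines of Python. This is a minimal illustrative sketch using the third-party `psutil` package, not BatteryBar Pro's actual implementation; the helper names are made up for the example:

```python
# Sketch: report battery percentage and estimated runtime, similar in
# spirit to what a taskbar battery meter displays. Illustrative only.

def format_runtime(seconds):
    """Render a runtime in seconds as 'Hh MMm'; negative or missing
    values (psutil uses negative sentinels) become 'unknown'."""
    if seconds is None or seconds < 0:
        return "unknown"
    hours, rem = divmod(int(seconds), 3600)
    return f"{hours}h {rem // 60:02d}m"

def battery_summary():
    """Return a one-line battery status string, degrading gracefully
    when psutil is absent or no battery is present."""
    try:
        import psutil  # third-party: pip install psutil
    except ImportError:
        return "psutil not installed"
    batt = psutil.sensors_battery()
    if batt is None:
        return "no battery detected"
    state = "charging" if batt.power_plugged else "discharging"
    return f"{batt.percent:.0f}% ({state}, {format_runtime(batt.secsleft)} left)"

if __name__ == "__main__":
    print(battery_summary())
```

On a laptop this prints something like `83% (discharging, 2h 41m left)`; on a desktop it reports that no battery was detected.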
If you liked this article, please share it with your friends and leave a comment below. You can also subscribe to our newsletter for more tips and tricks on how to optimize your laptop's battery life.
-
-
BatteryBar Pro is not the only application that can display your battery status on the taskbar. There are some other alternatives that you can try if you want to compare different features and options. Here are some of them:
-
-
BatteryInfoView: This is a simple and lightweight tool that shows you the current status and information of your battery. It also logs the battery's discharge cycles and displays the battery capacity evolution as a graph.
-
BatteryMon: This is a comprehensive and user-friendly tool that monitors your battery's voltage, charge rate, discharge rate, and more. It also supports multiple batteries and can alert you when your battery reaches a low or critical level.
-
Smarter Battery: This is a smart and advanced tool that displays your battery's health, wear level, discharge cycles, and calibration status. It also has a battery benchmark feature that tests your battery's performance and generates a report.
-
-
These are some of the best applications that can display your battery status on the taskbar. You can download them for free or purchase them for a reasonable price. However, if you want to get the most out of your battery life, you should also follow some basic tips and practices that can help you optimize your laptop's power consumption. Here are some of them:
-
-
Adjust your screen brightness and contrast to a comfortable level. A brighter screen consumes more power than a dimmer one.
-
Turn off or disable any unnecessary devices or features that you are not using. For example, you can turn off Bluetooth, Wi-Fi, webcam, microphone, etc. when you don't need them.
-
Use a power-saving mode or plan that suits your needs. Windows has different power plans that you can choose from, such as Balanced, Power Saver, High Performance, etc. You can also customize your own power plan and change the settings for various components.
-
Close any programs or applications that you are not using. Running multiple programs at the same time can drain your battery faster than running one or two programs.
-
Avoid exposing your laptop to extreme temperatures or humidity. High or low temperatures can affect your battery's performance and lifespan.
-
-
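Runtime estimates like the ones these tools display are typically simple extrapolations from recent discharge samples. A hedged sketch of the idea (the function and its two-sample approach are illustrative; real meters smooth over many samples):

```python
# Illustrative sketch: estimate remaining battery runtime by linear
# extrapolation from two (timestamp_seconds, percent) samples.

def estimate_runtime(t0, pct0, t1, pct1):
    """Return estimated seconds until 0% given two samples, or None
    if the battery is not discharging between the samples."""
    if t1 <= t0 or pct1 >= pct0:
        return None  # charging, idle, or invalid sample order
    drain_per_sec = (pct0 - pct1) / (t1 - t0)  # percent lost per second
    return pct1 / drain_per_sec
```

For example, dropping from 80% to 70% over one hour extrapolates to seven more hours of runtime.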
By following these tips and using BatteryBar Pro or any of the alternative applications mentioned above, you can monitor and optimize your laptop's battery life easily and effectively. You can also save money and time by avoiding frequent battery replacements or repairs.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit PhantomPDF A Complete PDF Solution for Any Task.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit PhantomPDF A Complete PDF Solution for Any Task.md
deleted file mode 100644
index 2fd7db550cad83918174dfeb12454bc5d6255cde..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit PhantomPDF A Complete PDF Solution for Any Task.md
+++ /dev/null
@@ -1,29 +0,0 @@
-
-```html
-
Why You Should Use Foxit PhantomPDF for Your PDF Needs
-
If you work with PDF files regularly, you know how important it is to have a reliable and versatile PDF editor. Whether you need to create, edit, convert, sign, or protect your PDF documents, you want a tool that can handle any task with ease and efficiency. That's why you should use Foxit PhantomPDF, the best PDF editor for Windows and Mac.
-
Foxit PhantomPDF is a powerful and user-friendly PDF software that lets you do more with your PDF files. Here are some of the features that make Foxit PhantomPDF stand out from the competition:
Create and edit PDFs from any source. You can create PDF files from scratch, from scanned documents, from Microsoft Office files, from web pages, and more. You can also edit any PDF file with a full-featured word processor-like interface. You can add, delete, move, resize, or format text, images, graphics, and other elements. You can also insert headers, footers, page numbers, bookmarks, hyperlinks, comments, and annotations.
-
Convert PDFs to and from other formats. You can convert PDF files to Word, Excel, PowerPoint, HTML, ePub, and other formats with high quality and accuracy. You can also convert other file types to PDF with a simple drag-and-drop. You can even combine multiple files into one PDF or split a PDF into separate files.
-
Sign and protect PDFs with advanced security. You can sign your PDF documents electronically with digital signatures or stamps. You can also protect your PDF files with passwords, encryption, redaction, watermarks, and permissions. You can also manage the digital certificates and trusted identities for your PDF documents.
-
Collaborate and share PDFs with ease. You can collaborate on PDF documents with other users in real time with Foxit PhantomPDF's cloud-based services. You can also share your PDF files via email, Dropbox, Google Drive, OneDrive, SharePoint, or Foxit Drive. You can also integrate Foxit PhantomPDF with Microsoft Teams, Outlook, Word, Excel, PowerPoint, and Visio.
-
-
As you can see, Foxit PhantomPDF is more than just a PDF editor. It's a complete PDF solution that meets all your PDF needs. Whether you are a student, a professional, or a business owner, you can benefit from using Foxit PhantomPDF for your PDF projects.
-
So what are you waiting for? Download Foxit PhantomPDF today and enjoy a free trial for 14 days. You'll be amazed by how much you can do with your PDF files with Foxit PhantomPDF.
-```
-
-```html
-
If you are wondering how Foxit PhantomPDF compares to other PDF editors, you'll be glad to know that it has many advantages over its competitors. Here are some of the reasons why Foxit PhantomPDF is the best choice for your PDF needs:
-
-
It's fast and reliable. Foxit PhantomPDF is designed to be fast and responsive, even when working with large and complex PDF files. You won't experience any lag or crashes when using Foxit PhantomPDF. You can also trust that your PDF files will be processed and displayed correctly, without any errors or glitches.
-
It's affordable and flexible. Foxit PhantomPDF offers a range of pricing plans and licensing options to suit your budget and needs. You can choose from a perpetual license, a subscription license, or a volume license. You can also choose from different editions, such as Standard, Business, or Education. You can also enjoy free updates and technical support for your Foxit PhantomPDF software.
-
It's compatible and compliant. Foxit PhantomPDF works seamlessly with any PDF file, regardless of its origin or format. You can also create PDF files that comply with various standards and regulations, such as ISO 32000-1, ISO 19005-1, ISO 14289-1, WCAG 2.0, PDF/A, PDF/E, PDF/X, and more. You can also validate the compliance of your PDF files with Foxit PhantomPDF's built-in tools.
-
-
With Foxit PhantomPDF, you can enjoy a smooth and satisfying PDF experience. You can create, edit, convert, sign, protect, collaborate, and share PDF files with ease and confidence. You can also customize your Foxit PhantomPDF software to fit your preferences and needs.
-
Foxit PhantomPDF is the ultimate PDF editor for Windows and Mac. Don't settle for less when you can have the best. Download Foxit PhantomPDF today and discover the difference.
-
-```
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/ACTIA Multidiag Dvd Rip.rar NEW.md b/spaces/1gistliPinn/ChatGPT4/Examples/ACTIA Multidiag Dvd Rip.rar NEW.md
deleted file mode 100644
index 1d15b3eeddcae1e9667ce979cb0d32a3949da006..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/ACTIA Multidiag Dvd Rip.rar NEW.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Assassin's Creed: Unity is a 2014 action-adventure game developed by Ubisoft Montreal and published by Ubisoft. It is the eighth major installment in the Assassin's Creed series, and the successor to 2013's Assassin's Creed IV: Black Flag. It is set in Paris during the French Revolution, and follows the story of Arno Dorian, a young assassin who becomes involved in a conflict between the Assassins and the Templars.
-
The game features a new engine, AnvilNext 2.0, which allows for improved graphics, animations, and gameplay. The game also introduces a new cooperative multiplayer mode, where up to four players can team up to complete missions and explore the open world of Paris. The game received mixed reviews from critics, who praised the setting, visuals, and combat, but criticized the technical issues, story, and lack of innovation.
Assassin's Creed: Unity Gold Edition V.1.5.0 - MAXAGENT 31 is a repack version of the game that includes all the DLCs and updates released by Ubisoft. It also features a crack by MAXAGENT 31, a group of hackers who claim to have bypassed the game's DRM protection. The repack version is smaller in size than the original game, and claims to have faster installation and better performance.
-
However, some users have reported that the repack version still suffers from bugs, glitches, and crashes. Some have also complained that the crack by MAXAGENT 31 is not reliable, and that it may contain malware or viruses. Therefore, it is advised to download the repack version at your own risk, and to scan it with an antivirus program before installing it.
-
-
If you want to play Assassin's Creed: Unity Gold Edition V.1.5.0 - MAXAGENT 31, you will need a PC that meets the following minimum requirements:
-
-
OS: Windows 7 SP1, Windows 8/8.1 (64-bit operating system required)
Graphics: NVIDIA GeForce GTX 680 or AMD Radeon HD 7970 (2 GB VRAM)
-
Storage: 50 GB available space
-
Sound Card: DirectX 9.0c compatible sound card with latest drivers
-
-
You can download Assassin's Creed: Unity Gold Edition V.1.5.0 - MAXAGENT 31 from various torrent sites, such as The Pirate Bay, Kickass Torrents, or RARBG. However, be aware that downloading and playing pirated games is illegal and may result in legal consequences. You may also face ethical issues, as you are depriving the developers and publishers of their rightful income. Therefore, it is recommended to buy the original game from official sources, such as Steam, Uplay, or Epic Games Store.
-
-
Assassin's Creed: Unity Gold Edition V.1.5.0 - MAXAGENT 31 offers you the opportunity to experience the French Revolution in a stunning and immersive way. You can explore the city of Paris, from the Bastille to the Notre Dame, and witness the historical events that shaped the modern world. You can also customize your own assassin, choosing from a variety of weapons, outfits, and skills. You can even join forces with other players online, and take on challenging missions together.
Therefore, Assassin's Creed: Unity Gold Edition V.1.5.0 - MAXAGENT 31 is a game that may appeal to fans of the franchise, but may disappoint others who are looking for a fresh and polished experience. The game is available for download from various torrent sites, but it is illegal and risky to do so. It is better to purchase the original game from official sources, and support the developers and publishers who worked hard to create it.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar.md
deleted file mode 100644
index 7bd5fcec4c10dcc9609e8c16ad4e224a8a0c9e23..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar.md
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar: A Powerful Software for Creating 3D Videos
-
-
If you are looking for software that can convert any 2D video into 3D stereoscopic video with special effects, then you should try Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar. This software is shareware that can be downloaded from various websites, such as AfterDawn, Selsoft, and OpenSea. It supports all common video formats, such as MP4, AVI, VOB, DVD, WMV, and MKV. It can also download videos from YouTube and convert them to 3D.
-
In this article, we will show you how to use Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar to create stunning 3D videos from your 2D sources. We will also explain the features, benefits, and tips of this software.
-
-
Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar: How to Use It
-
-
To use Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar, you need to follow these steps:
-
-
-
Download the software from one of the websites mentioned above.
-
Extract the rar file using a program such as WinRAR or 7-Zip.
-
Run the keygen.exe file and generate a serial number.
-
Run the setup.exe file and install the software using the serial number.
-
Launch the software and select the video file you want to convert to 3D.
-
Choose the output format and the destination folder.
-
Adjust the settings according to your preferences, such as the depth, the angle, and the effect of the 3D video.
-
Click on the Convert button and wait for the process to finish.
-
Enjoy your 3D video on your PC or on your compatible device.
-
-
-
Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar: Features and Benefits
-
-
Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar has many features and benefits that make it a powerful software for creating 3D videos. Some of them are:
-
-
-
It can convert any 2D video into 3D stereoscopic video with special effects.
-
It supports all video formats, such as MP4, AVI, VOB, DVD, WMV, and MKV.
-
It can also download videos from YouTube and convert them to 3D.
-
It can extract audio from video files and save them as MP3, WAV, or WMA files.
-
It has a user-friendly interface that is easy to use and navigate.
-
It has a fast conversion speed and a high quality output.
-
It has a preview function that allows you to see the result before converting.
-
It has a batch mode that allows you to convert multiple files at once.
-
-
-
Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar: Tips and Tricks
-
-
To get the most out of Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar, you should follow these tips and tricks:
-
-
-
Make sure you have enough disk space and memory for the conversion process.
-
Choose the output format that is compatible with your device or player.
-
Adjust the settings according to your preferences, but do not overdo it or else you might lose quality or realism.
-
Use a good quality source file for better results.
-
Do not use illegal or cracked versions of the software as they might contain viruses or malware.
-
-
-
Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar: Conclusion
-
-
Axara 2D To 3D Video Converter 2.4.3.243- Keygen And Crack.rar is a software that can convert any 2D video into 3D stereoscopic video with special effects. It supports all video formats, such as MP4, AVI, VOB, DVD, WMV, and MKV. It can also download videos from YouTube and convert them to 3D. It has many features and benefits that make it a powerful software for creating 3D videos. It also has some tips and tricks that can help you get the most out of it.
-
-
-
-If you are interested in creating stunning 3D videos from your 2D sources, then you should try Axara 2D To 3D Video Converter.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Baixar O Jogo Do Ronald Mcdonald O Resgate Dos Bichos.md b/spaces/1gistliPinn/ChatGPT4/Examples/Baixar O Jogo Do Ronald Mcdonald O Resgate Dos Bichos.md
deleted file mode 100644
index 0a01f1fd54f75afb11ca41379782079390e47cb8..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Baixar O Jogo Do Ronald Mcdonald O Resgate Dos Bichos.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
download the Ronald McDonald game O Resgate dos Bichos
-
-BeachBum - the vehicle will work
-
-A few points
-
-1 - This implementation is based on the announcement by the web browser's designer, who claimed to have created an HTML/JS browser to mimic Safari.
-
-2 - At the end of this code snippet, you can see an example of the content in the element Inspector panel.
-
-3 - When the player scrolls, all the elements are adjusted.
-
-4 - If you don't have a screen capture from another provider's browser, you will have to wait a while before you can successfully capture the Safari screen.
-
-5 - The following example works well in Chrome and Opera, but has a hidden bug in Internet Explorer. It pushes the game into the lower-left corner, so you have to scroll to the left side and close the browser.
-
-6 - This caused several problems in other browsers; it has since been fixed, although a small bug remains.
-
-7 - That said, I hope you enjoy it and share the game with your friends.
-
-8 - This example demonstrates another implementation option, which will be the only option I will build, even though I have tested it in some browsers. You will have to hold the "Running browser" button and click on the pagination effect.
-
-9 - If the screen is in full screen, you will have to click the "Open menu" button and the screen will add a name to the menu; you can change this by adding a tag on your site (click here to learn more).
-
-10 - You can change anything by clicking the bold button
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Costruzione Di Macchine Mcgraw-hill Pdf Download [EXCLUSIVE].md b/spaces/1gistliPinn/ChatGPT4/Examples/Costruzione Di Macchine Mcgraw-hill Pdf Download [EXCLUSIVE].md
deleted file mode 100644
index 005e61016f37998563c67e327b28f7d941f57acf..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Costruzione Di Macchine Mcgraw-hill Pdf Download [EXCLUSIVE].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-summary of M. Lazzari, Informatica umanistica, McGraw-Hill ... [Free books] Fondamenti di costruzione di macchine New Orleans Saints (-1) at Atlanta Falcons ...
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Maxim-Korea-October-2012.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/Maxim-Korea-October-2012.md
deleted file mode 100644
index aed94fa088482fd2f49596d5ba9a5068a7c5d1e6..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/Maxim-Korea-October-2012.md
+++ /dev/null
@@ -1,68 +0,0 @@
-## Maxim Korea - October 2012
-
-
-
-
-
-
-
-
-
-**Click Here - [https://kneedacexbrew.blogspot.com/?d=2txjpK](https://kneedacexbrew.blogspot.com/?d=2txjpK)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Maxim Korea - October 2012: The Hottest Issue of the Year
-
-
-
-If you are looking for some eye candy and entertainment, you don't want to miss the October 2012 issue of Maxim Korea. This magazine features some of the most beautiful and talented women in Korea, as well as exclusive interviews, fashion tips, lifestyle advice, and more. Here are some of the highlights of this sizzling issue:
-
-
-
-- **Cover Girl: Lee Hyori**. The queen of K-pop graces the cover of Maxim Korea with her stunning looks and charisma. She talks about her latest album, her love life, and her secrets to staying fit and fabulous.
-
-- **Feature: The Maxim Hot 100**. Who are the hottest women in Korea right now? Maxim Korea reveals its annual list of the most gorgeous and influential celebrities, models, singers, actresses, and athletes. Find out who made the cut and who got the coveted number one spot.
-
-- **Special: Halloween Party**. Get ready for some spooky fun with Maxim Korea's guide to the best Halloween parties in Seoul. Whether you want to dress up, dance, or drink, we have the perfect place for you to celebrate this festive occasion.
-
-- **Fashion: Fall Trends**. As the weather gets cooler, it's time to update your wardrobe with some stylish and cozy outfits. Maxim Korea shows you how to rock the latest fall trends, from leather jackets to knit sweaters, with some help from our gorgeous models.
-
-- **Lifestyle: Travel Tips**. If you are planning a trip abroad, you need to check out Maxim Korea's travel tips. We give you the best recommendations for where to go, what to do, and what to pack for your next adventure.
-
-
-
-And that's not all. Maxim Korea also has plenty of other content to keep you entertained and informed, such as sports news, movie reviews, gadget reviews, jokes, quizzes, and more. Don't miss this hot issue of Maxim Korea - October 2012. Get your copy today!
-
-
-But wait, there's more. Maxim Korea also has some exclusive content that you can only access online. Here are some of the perks of being a Maxim Korea online subscriber:
-
-
-
-- **Behind-the-scenes videos**. Watch how our cover girl and models pose for the camera and have fun on the set. You'll get to see their personalities and charm in action.
-
-- **Interactive features**. Participate in polls, surveys, and contests to share your opinions and win prizes. You can also chat with other Maxim Korea fans and get tips from our experts.
-
-- **Bonus content**. Enjoy more photos, articles, and videos that are not available in the print edition. You'll get to see more of your favorite Maxim Korea stars and topics.
-
-
-
-So what are you waiting for? Subscribe to Maxim Korea online today and get access to all these amazing features and more. You'll never miss a thing from Maxim Korea - October 2012.
-
-
-
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dynamons World MOD APK and Unlock All Levels and Features.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dynamons World MOD APK and Unlock All Levels and Features.md
deleted file mode 100644
index 94bf1668d59ecd1bea4297979b5e0b734e1bfa9c..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dynamons World MOD APK and Unlock All Levels and Features.md
+++ /dev/null
@@ -1,109 +0,0 @@
-
-
How to Download and Install Dynamons World Mod APK on Android
-
If you are looking for a fun and addictive RPG game that lets you catch and train your own team of monsters, then you should try Dynamons World. This game is loved by millions of RPG players who enjoy exploring an open world, fighting challenging battles, and collecting rare and powerful creatures. But what if you want to enjoy the game without any limitations or restrictions? That's where Dynamons World Mod APK comes in. This is a modified version of the game that gives you unlimited money to buy anything you want in the game. In this article, we will show you how to download and install Dynamons World Mod APK on your Android device and enjoy the game to the fullest.
-
What is Dynamons World?
-
A fun and addictive RPG game
-
Dynamons World is an RPG game that is inspired by popular games like Pokemon and Digimon. You play as a Dynamon master who travels across the Dynamon Kingdom, catching and training different types of Dynamons. Dynamons are cute and powerful creatures that have different elemental abilities, such as fire, water, electricity, and dark. You can use them to fight other Dynamon masters, Captains, and even evil forces that threaten the kingdom.
Some of the features that make Dynamons World an amazing game are:
-
-
Online Battle Arena: You can challenge your friends and players worldwide in online PvP multiplayer battles. You can show off your skills and strategy and climb the leaderboards.
-
Catch and train dozens of unique Dynamons: You can explore an open world searching for the rarest and strongest monsters. You can catch them using special balls and train them to level up their skills and stats.
-
Unleash powerful skills and brilliant tactics: You can use skill cards to activate special moves and abilities for your Dynamons. You can also combine different types of Dynamons to create synergies and advantages in battle.
-
Travel all the way from Dynamons Camp to the Temple Ruins: You can follow an addictive and immersive RPG story that takes you through various locations, quests, and battles. You can meet new characters, allies, and enemies along the way.
-
-
What is Dynamons World Mod APK?
-
A modified version of the game with unlimited money
-
Dynamons World Mod APK is a modified version of the original game that gives you unlimited money to spend in the game. Money is used to buy items, upgrades, balls, skill cards, and more. With unlimited money, you can buy anything you want without worrying about running out of resources. You can also unlock all the Dynamons in the game without having to catch them.
-
Benefits of using Dynamons World Mod APK
-
Some of the benefits of using Dynamons World Mod APK are:
-
-
You can enjoy the game without any limitations or restrictions: You can play the game as much as you want without having to wait for energy or coins. You can also access all the features and content in the game without having to complete certain levels or tasks.
-
You can have more fun and excitement in the game: You can experiment with different combinations of Dynamons, skills, and strategies. You can also challenge yourself with harder opponents and quests. You can enjoy the game without any frustration or boredom.
-
You can save your time and effort in the game: You don't have to spend hours grinding for money or catching Dynamons. You can get everything you need in a matter of seconds. You can also skip the ads and pop-ups that interrupt your gameplay.
-
-
How to Download Dynamons World Mod APK?
-
Find a reputable source for the APK file
-
The first step to download Dynamons World Mod APK is to find a reliable and trustworthy source for the APK file. There are many websites that offer APK files for various games and apps, but not all of them are safe and secure. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you should do some research before downloading any APK file from the internet. You should look for reviews, ratings, feedback, and comments from other users who have downloaded the same file. You should also check the file size, version, and compatibility with your device.
-
Allow unknown apps on your Android device
-
The next step to download Dynamons World Mod APK is to allow unknown apps on your Android device. Unknown apps are apps that are not downloaded from the official Google Play Store. By default, Android devices do not allow unknown apps to be installed for security reasons. However, you can change this setting by following these steps:
-
-
Go to your device's Settings and tap on Security.
-
Scroll down and find the option Unknown Sources or Install Unknown Apps.
-
Toggle the switch to enable it or tap on it and select Allow.
-
A warning message will appear. Read it carefully and tap on OK.
-
-
Now you have enabled unknown apps on your device and you can proceed to download and install Dynamons World Mod APK.
-
Download and install the APK file using a file manager app
-
The final step to download Dynamons World Mod APK is to download and install the APK file using a file manager app. A file manager app is an app that lets you access and manage the files and folders on your device. You can use any file manager app that you prefer, such as ES File Explorer, Astro File Manager, or Solid Explorer. Here are the steps to follow:
-
-
Open your browser and go to the website where you found the APK file for Dynamons World Mod APK.
-
Tap on the Download button and wait for the download to complete.
-
Once the download is finished, open your file manager app and locate the downloaded APK file in your Downloads folder.
-
Tap on the APK file and a pop-up will appear. Tap on Install.
-
The installation process will begin. Wait for it to finish.
-
Once the installation is done, you can tap on Open to launch the game or find it in your app drawer.
-
-
How to Play Dynamons World Mod APK?
-
Explore the open world and catch rare Dynamons
-
Dynamons World Mod APK lets you explore a vast open world full of secrets, surprises, and adventures. You can travel across different regions, such as forests, deserts, mountains, islands, and cities. You can encounter various types of Dynamons in different habitats and environments. You can catch them using special balls that match their element. You can also find hidden items, chests, coins, and skill cards along the way.
-
Battle other players online in PvP mode
-
Dynamons World Mod APK also lets you battle other players online in PvP mode. You can join an online battle arena where you can challenge your friends or random players from around the world. You can show off your skills and strategy by using your best team of Dynamons. You can also chat with other players, send emojis, and make friends. You can earn rewards, trophies, and badges by winning battles and climbing the leaderboards.
-
Use skill cards and strategy to defeat tough Captains
-
Dynamons World Mod APK also lets you use skill cards and strategy to defeat tough Captains. Captains are powerful Dynamon masters who guard each region of the kingdom. They have their own team of strong and rare Dynamons that can pose a challenge to any player. You can challenge them to a battle and try to defeat them using your skill cards and strategy. Skill cards are special cards that activate different moves and abilities for your Dynamons. You can collect, upgrade, and equip them to make your Dynamons stronger and more versatile. You can also use strategy by choosing the right Dynamons, elements, and skills for each battle. You can earn rewards, badges, and fame by defeating Captains and advancing to the next region.
-
Conclusion
-
Dynamons World is a fun and addictive RPG game that lets you catch and train your own team of monsters. You can explore an open world, fight challenging battles, and collect rare and powerful creatures. You can also enjoy the game without any limitations or restrictions by using Dynamons World Mod APK. This is a modified version of the game that gives you unlimited money to buy anything you want in the game. You can also unlock all the Dynamons in the game without having to catch them. To download and install Dynamons World Mod APK on your Android device, you need to find a reputable source for the APK file, allow unknown apps on your device, and use a file manager app to download and install the APK file. Then, you can play the game and have fun with your Dynamons.
-
FAQs
-
Is Dynamons World Mod APK safe to use?
-
Yes, Dynamons World Mod APK is safe to use as long as you download it from a reliable and trustworthy source. You should also scan the APK file with an antivirus app before installing it on your device.
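Besides an antivirus scan, one quick sanity check is to compare the downloaded file's checksum against the one published on the download page, if the site provides one. The path below is a placeholder, not a real file name:

```shell
# Placeholder path -- substitute the APK file your browser actually saved.
APK="$HOME/Downloads/dynamons-world-mod.apk"

if [ -f "$APK" ]; then
  # Compare the printed hash with the checksum listed on the download page.
  sha256sum "$APK"
else
  echo "File not found: $APK"
fi
```

If the hash does not match what the site publishes, the file was corrupted or tampered with and should not be installed.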
-
Do I need to root my device to use Dynamons World Mod APK?
-
No, you do not need to root your device to use Dynamons World Mod APK. You just need to enable unknown apps on your device and install the APK file as explained above.
-
Can I play Dynamons World Mod APK offline?
-
Yes, you can play Dynamons World Mod APK offline without an internet connection. However, some features of the game, such as online PvP battles, may not be available offline.
-
Can I update Dynamons World Mod APK?
-
Yes, you can update Dynamons World Mod APK whenever there is a new version available. However, you may need to uninstall the previous version and install the new one manually. You should also backup your game data before updating to avoid losing your progress.
-
Can I use Dynamons World Mod APK with other mods or cheats?
-
No, you should not use Dynamons World Mod APK with other mods or cheats as they may cause conflicts or errors in the game. You should only use Dynamons World Mod APK as it is without any modifications or alterations.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Kodu and Unleash Your Creativity with Game Design.md b/spaces/1phancelerku/anime-remove-background/Download Kodu and Unleash Your Creativity with Game Design.md
deleted file mode 100644
index f72e3c04d6788aed35d858b8c3d0c4802a1f8631..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Kodu and Unleash Your Creativity with Game Design.md
+++ /dev/null
@@ -1,157 +0,0 @@
-
-
How to Download Kodu: A Guide for Kids and Parents
-
Kodu is a 3D game development environment that is designed to teach kids basic programming principles. Kodu allows creators to build the world's terrain, populate it with characters and props, and then program their behaviors and games rules in a bespoke visual programming language.
Kodu is a great tool for kids who want to create their own games without writing any code. It is fun, easy, and educational. Kids can use their imagination and creativity to make games that they can play and share with others. In this article, we will show you how to download, install, and use Kodu to make your own games.
-
How to Download Kodu
-
There are two ways to download Kodu: from the Microsoft Store or from the Kodu website. Both methods are free and safe.
-
Download from Microsoft Store
-
The Microsoft Store is an online platform where you can download apps and games for Windows PCs. You can find Kodu in the Microsoft Store by following these steps:
-
-
Open the Microsoft Store app on your PC. You can find it in the Start menu or by typing "Microsoft Store" in the search bar.
-
In the search box at the top right corner, type "Kodu" and press Enter.
-
Click on the "Kodu_Game_Lab" app from the search results.
-
Click on the "Get" button to download and install Kodu on your PC.
-
-
The Microsoft Store will automatically update Kodu whenever a new version is released.
-
Download from Kodu Website
-
The Kodu website is another source where you can download Kodu for your PC. You can visit the website by following this link: http://www.kodugamelab.com/downloads/
-
On the website, you will see two options for downloading Kodu: Desktop Build or Microsoft Store Build. The Desktop Build is useful when you want to install Kodu offline or on multiple PCs. The Microsoft Store Build is similar to the one we described above.
-
-
To download the Desktop Build, follow these steps:
-
-
Choose between the .EXE or .MSI file format. The .EXE file is for regular users who want to install Kodu easily. The .MSI file is for system administrators who want to install Kodu via SCCM.
-
Click on the "KoduSetup.EXE" or "KoduSetup.MSI" link to download the file.
-
Save the file on your PC and run it to install Kodu.
-
-
How to Install Kodu
-
Once you have downloaded Kodu, you need to install it on your PC. The installation process may vary depending on how you downloaded Kodu.
-
Install from Microsoft Store
-
If you downloaded Kodu from the Microsoft Store, you don't need to do anything else. The Microsoft Store will automatically install Kodu on your PC after downloading it. You can find Kodu in your Start menu or by typing "Kodu" in the search bar.
-
Install from .EXE or .MSI file
-
If you downloaded Kodu from the Kodu website, you need to run the .EXE or .MSI file that you saved on your PC. The installation process is simple and straightforward. Just follow these steps:
-
-
Double-click on the "KoduSetup.EXE" or "KoduSetup.MSI" file to launch the installer.
-
Accept the license agreement and choose the destination folder for Kodu.
-
Click on the "Install" button to start the installation.
-
Wait for the installation to finish and click on the "Finish" button.
-
-
You can find Kodu in your Start menu or by typing "Kodu" in the search bar.
-
How to Use Kodu
-
Now that you have installed Kodu on your PC, you are ready to use it to create your own games. Kodu has a user-friendly interface that lets you design and program your games with ease. Here are some basic steps to get you started:
-
Launch Kodu and create a new world
-
To launch Kodu, click on the "Kodu_Game_Lab" icon on your desktop or in your Start menu. You will see the main menu of Kodu, where you can choose to create a new world, load an existing world, or browse the community worlds.
-
To create a new world, click on the "New World" button. You will see a blank world with a default terrain and sky. You can change the terrain and sky later using the terrain editor.
-
Use the terrain editor to shape the world
-
The terrain editor is a tool that lets you modify the shape, color, and texture of the ground in your world. You can access the terrain editor by pressing the "E" key on your keyboard or clicking on the "Edit Terrain" button on the toolbar.
-
The terrain editor has several options for changing the terrain, such as raising, lowering, flattening, smoothing, painting, and erasing. You can also choose from different brushes and materials to create different effects. For example, you can use the water brush to create lakes and rivers, or use the grass material to create green fields.
-
To use the terrain editor, select a brush and a material from the menus on the left side of the screen. Then, move your mouse over the terrain and click and drag to apply the brush. You can adjust the size and strength of the brush using the mouse wheel or the slider on the right side of the screen. You can also undo and redo your actions using the buttons on the toolbar.
-
Add characters and props to the world
-
Characters and props are objects that you can add to your world to make it more interesting and interactive. Characters are living creatures that can move and perform actions, such as robots, animals, and vehicles. Props are static objects that can be used for decoration or gameplay purposes, such as trees, rocks, coins, and switches.
-
To add characters and props to your world, press the "O" key on your keyboard or click on the "Object Tool" button on the toolbar. You will see a menu of different categories of objects, such as Landscapes, Machines, Nature, Paths, and Sensors. Click on a category to see its subcategories, and then click on an object to select it.
-
To place an object in your world, move your mouse over the terrain and click where you want to put it. You can adjust its position, rotation, and scale using the mouse or the arrow keys. You can also copy, delete, or lock an object using the buttons on the toolbar.
-
Use the visual programming language to program the game logic
-
The visual programming language is a tool that lets you program the behavior and interaction of the objects in your world. You can access the visual programming language by pressing the "P" key on your keyboard or clicking on the "Program Tool" button on the toolbar.
-
The visual programming language uses a simple and intuitive syntax that consists of three elements: when, do, and options. When is a condition that triggers an action, such as when the game starts, when the player presses a button, or when an object collides with another object. Do is an action that is performed when the condition is met, such as move, shoot, score, or say. Options are modifiers that change how the action is executed, such as direction, speed, color, or sound.
-
To use the visual programming language, select an object that you want to program and click on the "Add Rule" button on the toolbar. You will see a blank rule with a when and a do slot. Click on the slot to open a menu of different options for the condition or the action. Choose an option and drag it to the slot. You can also add more slots by clicking on the "+" button or delete slots by clicking on the "X" button.
-
You can create multiple rules for each object and combine different conditions and actions to create complex and interesting game logic. For example, you can program a robot to move forward when the player presses the spacebar, shoot a laser when it sees an enemy, and explode when it touches water.
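The robot example above could be sketched in Kodu's when/do style roughly like this (a loose textual rendering of the visual tiles, not actual Kodu syntax):

```
WHEN keyboard spacebar    DO move forward
WHEN see enemy            DO shoot laser
WHEN bumped water         DO boom
```

Each line corresponds to one rule you would build by dragging a condition into the when slot and an action into the do slot.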
-
Test and play the game
-
After you have created your world and programmed your game logic, you can test and play your game to see how it works. To test your game, press the "T" key on your keyboard or click on the "Test World" button on the toolbar. You will see your game in full screen mode and you can control your character using the mouse and keyboard.
-
To play your game, press the "Esc" key on your keyboard or click on the "Exit Test Mode" button on the toolbar. You will return to the main menu of Kodu, where you can choose to play your game, save your game, or load another game.
-
To save your game, click on the "Save World" button on the main menu. You will be asked to enter a name and a description for your game. You can also choose to add tags and ratings to your game. To load another game, click on the "Load World" button on the main menu. You will see a list of games that you have saved or downloaded from the community.
-
Conclusion
-
Kodu is a fun and easy way to create your own games without writing any code. You can download Kodu for free from the Microsoft Store or from the Kodu website. You can install Kodu on your PC and use it to design and program your games with simple tools and commands. You can test and play your games and share them with others online.
-
Here are some tips and tricks for using Kodu:
-
-
Explore different categories and subcategories of objects to find new and interesting elements for your games.
-
Use different materials and brushes to create diverse and realistic terrains for your worlds.
-
Use different options and modifiers to customize and fine-tune your actions and behaviors.
-
Use sensors and timers to create dynamic and interactive events in your games.
-
Use variables and scores to keep track of data and states in your games.
-
-
We hope you enjoyed this article and learned how to download Kodu. We encourage you to try Kodu yourself and create your own games. You can also browse the community worlds and see what other creators have made with Kodu. Have fun!
-
FAQs
-
What are the system requirements for Kodu?
-
Kodu requires a Windows PC with at least 1 GB of RAM, 2 GB of hard disk space, a DirectX 9.0c compatible graphics card with Shader Model 2.0 or higher, and a keyboard and mouse. A gamepad is optional but recommended for playing games.
-
What are some alternatives to Kodu?
-
If you are looking for other game development tools for kids, you can check out these alternatives:
-
-
Scratch: A block-based programming language that lets you create interactive stories, games, and animations.
-
Roblox: An online platform where you can create and play games with millions of players around the world.
Minecraft: A sandbox game where you can build and explore infinite worlds with blocks and resources.
-
GameMaker Studio: A game development software that lets you create 2D and 3D games with drag-and-drop or scripting.
-
-
How can I learn more about Kodu?
-
If you want to learn more about Kodu, you can visit these resources:
-
-
Kodu website: The official website of Kodu, where you can download Kodu, browse the community worlds, and find tutorials and documentation.
-
Kodu YouTube channel: The official YouTube channel of Kodu, where you can watch videos of Kodu features, tips, and examples.
-
Kodu blog: The official blog of Kodu, where you can read news and updates about Kodu.
-
Kodu forum: The official forum of Kodu, where you can ask questions, share ideas, and get help from other Kodu users and developers.
-
-
How can I share my games with others?
-
If you want to share your games with others, you can do so by uploading them to the community worlds. To upload your game, follow these steps:
-
-
Save your game and go to the main menu of Kodu.
-
Click on the "Share World" button on the main menu.
-
Enter your name, email, and password to create an account or log in to your existing account.
-
Choose a name, description, tags, and ratings for your game.
-
Click on the "Upload" button to upload your game to the community worlds.
-
-
Once your game is uploaded, other users can find it, download it, and play it. You can also view your uploaded games and edit or delete them by clicking on the "My Worlds" button on the main menu.
-
How can I get help or support for Kodu?
-
If you need help or support for Kodu, you can contact the Kodu team by following these methods:
Twitter: You can follow and tweet to @KoduGameLab on Twitter with your comments or suggestions.
-
Facebook: You can like and message Kodu Game Lab on Facebook with your queries or opinions.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download and Install WhatsApp Messenger on Your Windows 8 PC.md b/spaces/1phancelerku/anime-remove-background/Download and Install WhatsApp Messenger on Your Windows 8 PC.md
deleted file mode 100644
index aacbf9d77cfe056b8a127796a62a04fb92f13fb5..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download and Install WhatsApp Messenger on Your Windows 8 PC.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
How to Download WhatsApp Messenger for Windows 8
-
WhatsApp Messenger is a free messaging app that lets you communicate with your friends and family across different devices. You can send and receive text messages, photos, videos, voice notes, documents, and more with WhatsApp. You can also make voice and video calls for free with WhatsApp.
If you have a Windows 8 computer, you might be wondering how you can download WhatsApp Messenger for it. In this article, we will show you how to do that in a few simple steps. We will also share some benefits of using WhatsApp Messenger on Windows 8, as well as some tips and tricks for using it.
-
Benefits of Using WhatsApp Messenger on Windows 8
-
Stay connected with your friends and family across devices
-
One of the main benefits of using WhatsApp Messenger on Windows 8 is that you can stay connected with your friends and family across different devices. You can use WhatsApp on your phone, tablet, or desktop computer. This way, you can pick up any conversation where you left off, no matter what device you are using.
-
How to use keyboard shortcuts
-
WhatsApp Desktop supports keyboard shortcuts for many common actions. You can see the full list of keyboard shortcuts by pressing Ctrl + / on your keyboard.
-
How to enable dark mode
-
Another tip for using WhatsApp Messenger on Windows 8 is to enable dark mode. Dark mode can help you reduce eye strain and save battery life by changing the background color of WhatsApp to black. To enable dark mode, you need to click on the menu icon (three dots) in the top left corner of WhatsApp Desktop. Then, click on "Settings" and then on "Theme". You will see two options: "Light" and "Dark". Choose "Dark" and click on "OK". You will see that WhatsApp Desktop has switched to dark mode.
-
How to mute notifications
-
A third tip for using WhatsApp Messenger on Windows 8 is to mute notifications. Notifications can be useful to alert you of new messages and calls, but they can also be annoying or distracting if you are busy or want some peace and quiet. To mute notifications, you need to click on the menu icon (three dots) in the top left corner of WhatsApp Desktop. Then, click on "Settings" and then on "Notifications". You will see various options to customize your notifications, such as sound, banner, flash, and mute. You can choose to mute all notifications or only some of them. You can also choose the duration of the mute, such as 8 hours, 1 week, or always.
-
-
Conclusion and FAQs
-
In conclusion, WhatsApp Messenger is a great app that lets you communicate with your friends and family across different devices. You can download WhatsApp Messenger for Windows 8 by following the steps we have outlined in this article. You can also enjoy some benefits of using WhatsApp Messenger on Windows 8, such as staying connected, sending and receiving various types of media, and enjoying end-to-end encryption and privacy controls. You can also use some tips and tricks for using WhatsApp Messenger on Windows 8, such as using keyboard shortcuts, enabling dark mode, and muting notifications. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to contact us.
-
FAQ #1: Can I use WhatsApp Web instead of WhatsApp Desktop?
-
Yes, you can use WhatsApp Web instead of WhatsApp Desktop if you prefer. WhatsApp Web is a web-based version of WhatsApp that you can access from any browser. However, it has some limitations compared to WhatsApp Desktop: it cannot make voice or video calls, use keyboard shortcuts, enable dark mode, or run in the background. To use WhatsApp Web, go to https://web.whatsapp.com/ and scan the QR code with your phone.
-
FAQ #2: Can I use WhatsApp Desktop without my phone?
-
No, you cannot use WhatsApp Desktop without your phone. WhatsApp Desktop is a companion app that syncs your messages and calls with your phone. You need to have your phone connected to the internet and linked with your account in order to use WhatsApp Desktop. If your phone is offline or disconnected from your account, you will not be able to use WhatsApp Desktop.
-
FAQ #3: How can I update WhatsApp Desktop?
-
To update WhatsApp Desktop, you need to go to the menu icon (three dots) in the top left corner of WhatsApp Desktop. Then, click on "Help" and then on "Check for updates". You will see a window that says "Checking for updates". If there is a new version available, you will see a button that says "Update". Click on this button and wait for the update process to complete.
-
FAQ #4: How can I uninstall WhatsApp Desktop?
-
To uninstall WhatsApp Desktop, you need to go to the Control Panel on your computer. Then, click on "Programs" and then on "Uninstall a program". You will see a list of programs installed on your computer. Find "WhatsApp" and right-click on it. Then, click on "Uninstall" and follow the prompts to remove WhatsApp Desktop from your computer.
-
FAQ #5: How can I contact WhatsApp support?
-
To contact WhatsApp support, you need to go to the menu icon (three dots) in the top left corner of WhatsApp Desktop. Then, click on "Settings" and then on "Help". You will see a button that says "Contact Us". Click on this button and fill out the form with your name, email address, subject, description, and attachments (optional). Then, click on "Send" and wait for a response from WhatsApp support.
-
-
-
-💡 `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input.
-
-
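To build intuition for what `strength` does, here is a small sketch of how an img2img pipeline derives the number of denoising steps from it. This mirrors the rounding diffusers uses internally, but take it as an illustration rather than a guarantee of the exact implementation:

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    # strength decides how far into the noise schedule the input image is pushed:
    # higher strength -> more noise added -> more denoising steps -> more variation.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start  # steps that actually run

print(img2img_steps(50, 0.75))  # 37 of the 50 scheduled steps
print(img2img_steps(50, 0.30))  # 15 -- output stays close to the input image
```

With `strength=1.0` every scheduled step runs and the input image contributes little beyond its overall layout; with very low values only a few steps run and the output barely changes.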
-
-Define the prompt (for this checkpoint finetuned on Ghibli-style art, you need to prefix the prompt with the `ghibli style` tokens) and run the pipeline:
-
-```python
-prompt = "ghibli style, a fantasy landscape with castles"
-generator = torch.Generator(device=device).manual_seed(1024)
-image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
-image
-```
-
-
-
-
-
-You can also try experimenting with a different scheduler to see how that affects the output:
-
-```python
-from diffusers import LMSDiscreteScheduler
-
-lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
-pipe.scheduler = lms
-generator = torch.Generator(device=device).manual_seed(1024)
-image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
-image
-```
-
-
-
-
-
-Check out the Spaces below, and try generating images with different values for `strength`. You'll notice that using lower values for `strength` produces images that are more similar to the original image.
-
-Feel free to also switch the scheduler to the [`LMSDiscreteScheduler`] and see how that affects the output.
-
-
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco.py
deleted file mode 100644
index f0c96e58b6131f2958f28c56b9d8384d5b4746f7..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False))
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index 1b70c5b8d49f04661e23604ca4da56a82b1b99c9..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r50-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,4 +0,0 @@
-_base_ = [
- '../_base_/models/danet_r50-d8.py', '../_base_/datasets/cityscapes.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Low-VRAM-guide.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Low-VRAM-guide.md
deleted file mode 100644
index 7814ecb0c3bc604e8eaa6545b5f83be7f5bdb519..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Low-VRAM-guide.md
+++ /dev/null
@@ -1,53 +0,0 @@
-If your GPU is not large enough to fit a 16-bit model, try these in the following order:
-
-### Load the model in 8-bit mode
-
-```
-python server.py --load-in-8bit
-```
-
-### Load the model in 4-bit mode
-
-```
-python server.py --load-in-4bit
-```
-
-### Split the model across your GPU and CPU
-
-```
-python server.py --auto-devices
-```
-
-If you can load the model with this command but it runs out of memory when you try to generate text, try increasingly limiting the amount of memory allocated to the GPU until the error stops happening:
-
-```
-python server.py --auto-devices --gpu-memory 10
-python server.py --auto-devices --gpu-memory 9
-python server.py --auto-devices --gpu-memory 8
-...
-```
-
-where the number is in GiB.
-
-For finer control, you can also specify the unit in MiB explicitly:
-
-```
-python server.py --auto-devices --gpu-memory 8722MiB
-python server.py --auto-devices --gpu-memory 4725MiB
-python server.py --auto-devices --gpu-memory 3500MiB
-...
-```
-
-### Send layers to a disk cache
-
-As a desperate last measure, you can split the model across your GPU, CPU, and disk:
-
-```
-python server.py --auto-devices --disk
-```
-
-With this, I am able to load a 30b model into my RTX 3090, but it takes 10 seconds to generate 1 word.
-
-### DeepSpeed (experimental)
-
-An experimental alternative to all of the above is to use DeepSpeed: [guide](DeepSpeed.md).
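The `--gpu-memory` flag above accepts either GiB or explicit MiB. As an illustrative sketch (the helper below is hypothetical, not part of the webui), the conversion between the two forms is a plain 1 GiB = 1024 MiB calculation:

```python
def gpu_memory_flag(gib: float) -> str:
    """Convert a GiB budget to the explicit MiB form accepted by --gpu-memory.

    1 GiB = 1024 MiB; the result is truncated to whole MiB.
    This helper is illustrative only, not part of the webui.
    """
    return f"{int(gib * 1024)}MiB"

# Equivalent ways to cap GPU memory at 8 GiB:
#   python server.py --auto-devices --gpu-memory 8
#   python server.py --auto-devices --gpu-memory 8192MiB
print(gpu_memory_flag(8))    # 8192MiB
print(gpu_memory_flag(3.5))  # 3584MiB
```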
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/llama.cpp.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/llama.cpp.md
deleted file mode 100644
index 48d60df36b4bc4d4e77acff7f7b0b9e3864e25ad..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/llama.cpp.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# llama.cpp
-
-llama.cpp is the best backend in two important scenarios:
-
-1) You don't have a GPU.
-2) You want to run a model that doesn't fit into your GPU.
-
-## Setting up the models
-
-#### Pre-converted
-
-Download the GGUF models directly into your `text-generation-webui/models` folder. It will be a single file.
-
-* Make sure its name ends in `.gguf`.
-* `q4_K_M` quantization is recommended.
-
-#### Convert Llama yourself
-
-Follow the instructions in the llama.cpp README to generate a GGUF: https://github.com/ggerganov/llama.cpp#prepare-data--run
-
-## GPU acceleration
-
-Enabled with the `--n-gpu-layers` parameter.
-
-* If you have enough VRAM, use a high number like `--n-gpu-layers 1000` to offload all layers to the GPU.
-* Otherwise, start with a low number like `--n-gpu-layers 10` and then gradually increase it until you run out of memory.
-
-This feature works out of the box for NVIDIA GPUs on Linux (amd64) or Windows. For other GPUs, you need to uninstall `llama-cpp-python` with
-
-```
-pip uninstall -y llama-cpp-python
-```
-
-and then recompile it using the commands here: https://pypi.org/project/llama-cpp-python/
-
-#### macOS
-
-For macOS, these are the commands:
-
-```
-pip uninstall -y llama-cpp-python
-CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
-```
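When picking a starting value for `--n-gpu-layers`, the trial-and-error process above can be seeded with a rough estimate. The sketch below is a hypothetical heuristic (not part of llama.cpp or the webui) that assumes every layer costs roughly the same amount of VRAM:

```python
def layers_that_fit(free_vram_mib: int, layer_size_mib: int, total_layers: int) -> int:
    """Rough starting point for --n-gpu-layers.

    Assumes a uniform per-layer VRAM cost (an approximation); anything that
    does not fit stays on the CPU. Illustrative only.
    """
    if layer_size_mib <= 0:
        raise ValueError("layer_size_mib must be positive")
    return min(total_layers, free_vram_mib // layer_size_mib)

# e.g. ~6 GiB free, ~170 MiB per layer, 32-layer model:
print(layers_that_fit(6144, 170, 32))  # 32 (everything fits on the GPU)
print(layers_that_fit(2048, 170, 32))  # 12 (offload 12 layers, keep the rest on CPU)
```

You would then fine-tune the value by increasing it until generation runs out of memory, as described above.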
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/logging_colors.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/logging_colors.py
deleted file mode 100644
index a0c97c3a76cfc17eb5d8d8bb310a5389ab5db719..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/logging_colors.py
+++ /dev/null
@@ -1,117 +0,0 @@
-# Copied from https://stackoverflow.com/a/1336640
-
-import logging
-import platform
-
-logging.basicConfig(
- format='%(asctime)s %(levelname)s:%(message)s',
- datefmt='%Y-%m-%d %H:%M:%S',
-)
-
-
-def add_coloring_to_emit_windows(fn):
- # add methods we need to the class
- def _out_handle(self):
- import ctypes
- return ctypes.windll.kernel32.GetStdHandle(self.STD_OUTPUT_HANDLE)
- out_handle = property(_out_handle)
-
- def _set_color(self, code):
- import ctypes
-
- # Constants from the Windows API
- self.STD_OUTPUT_HANDLE = -11
- hdl = ctypes.windll.kernel32.GetStdHandle(self.STD_OUTPUT_HANDLE)
- ctypes.windll.kernel32.SetConsoleTextAttribute(hdl, code)
-
- setattr(logging.StreamHandler, '_set_color', _set_color)
-
- def new(*args):
- FOREGROUND_BLUE = 0x0001 # text color contains blue.
- FOREGROUND_GREEN = 0x0002 # text color contains green.
- FOREGROUND_RED = 0x0004 # text color contains red.
- FOREGROUND_INTENSITY = 0x0008 # text color is intensified.
- FOREGROUND_WHITE = FOREGROUND_BLUE | FOREGROUND_GREEN | FOREGROUND_RED
- # winbase.h
- # STD_INPUT_HANDLE = -10
- # STD_OUTPUT_HANDLE = -11
- # STD_ERROR_HANDLE = -12
-
- # wincon.h
- # FOREGROUND_BLACK = 0x0000
- FOREGROUND_BLUE = 0x0001
- FOREGROUND_GREEN = 0x0002
- # FOREGROUND_CYAN = 0x0003
- FOREGROUND_RED = 0x0004
- FOREGROUND_MAGENTA = 0x0005
- FOREGROUND_YELLOW = 0x0006
- # FOREGROUND_GREY = 0x0007
- FOREGROUND_INTENSITY = 0x0008 # foreground color is intensified.
-
- # BACKGROUND_BLACK = 0x0000
- # BACKGROUND_BLUE = 0x0010
- # BACKGROUND_GREEN = 0x0020
- # BACKGROUND_CYAN = 0x0030
- # BACKGROUND_RED = 0x0040
- # BACKGROUND_MAGENTA = 0x0050
- BACKGROUND_YELLOW = 0x0060
- # BACKGROUND_GREY = 0x0070
- BACKGROUND_INTENSITY = 0x0080 # background color is intensified.
-
- levelno = args[1].levelno
- if (levelno >= 50):
- color = BACKGROUND_YELLOW | FOREGROUND_RED | FOREGROUND_INTENSITY | BACKGROUND_INTENSITY
- elif (levelno >= 40):
- color = FOREGROUND_RED | FOREGROUND_INTENSITY
- elif (levelno >= 30):
- color = FOREGROUND_YELLOW | FOREGROUND_INTENSITY
- elif (levelno >= 20):
- color = FOREGROUND_GREEN
- elif (levelno >= 10):
- color = FOREGROUND_MAGENTA
- else:
- color = FOREGROUND_WHITE
- args[0]._set_color(color)
-
- ret = fn(*args)
- args[0]._set_color(FOREGROUND_WHITE)
- # print "after"
- return ret
- return new
-
-
-def add_coloring_to_emit_ansi(fn):
- # add methods we need to the class
- def new(*args):
- levelno = args[1].levelno
- if (levelno >= 50):
- color = '\x1b[31m' # red
- elif (levelno >= 40):
- color = '\x1b[31m' # red
- elif (levelno >= 30):
- color = '\x1b[33m' # yellow
- elif (levelno >= 20):
- color = '\x1b[32m' # green
- elif (levelno >= 10):
- color = '\x1b[35m' # pink
- else:
- color = '\x1b[0m' # normal
- args[1].msg = color + args[1].msg + '\x1b[0m' # normal
- # print "after"
- return fn(*args)
- return new
-
-
-if platform.system() == 'Windows':
- # Windows does not support ANSI escapes and we are using API calls to set the console color
- logging.StreamHandler.emit = add_coloring_to_emit_windows(logging.StreamHandler.emit)
-else:
-    # all non-Windows platforms support ANSI escapes, so we use them
- logging.StreamHandler.emit = add_coloring_to_emit_ansi(logging.StreamHandler.emit)
- # log = logging.getLogger()
- # log.addFilter(log_filter())
- # //hdlr = logging.StreamHandler()
- # //hdlr.setFormatter(formatter())
-
-logger = logging.getLogger('text-generation-webui')
-logger.setLevel(logging.DEBUG)
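The level-to-color thresholds used by `add_coloring_to_emit_ansi` above can be exercised on their own. A minimal standalone sketch mirroring those thresholds (not part of the module, just an illustration):

```python
# Standalone sketch of the levelno -> ANSI color mapping used above.
RESET = '\x1b[0m'

def ansi_color_for(levelno: int) -> str:
    """Map a logging level number to the ANSI color used by the emit wrapper."""
    if levelno >= 40:        # ERROR (40) and CRITICAL (50)
        return '\x1b[31m'    # red
    elif levelno >= 30:      # WARNING
        return '\x1b[33m'    # yellow
    elif levelno >= 20:      # INFO
        return '\x1b[32m'    # green
    elif levelno >= 10:      # DEBUG
        return '\x1b[35m'    # pink
    return RESET             # NOTSET falls back to normal

def colorize(msg: str, levelno: int) -> str:
    """Wrap a message in the color for its level, resetting afterwards."""
    return ansi_color_for(levelno) + msg + RESET
```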
diff --git a/spaces/Apex-X/Tm/roop/processors/frame/face_swapper.py b/spaces/Apex-X/Tm/roop/processors/frame/face_swapper.py
deleted file mode 100644
index c53b5b86d7e87870191c01855652088d43726142..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/Tm/roop/processors/frame/face_swapper.py
+++ /dev/null
@@ -1,88 +0,0 @@
-from typing import Any, List, Callable
-import cv2
-import insightface
-import threading
-
-import roop.globals
-import roop.processors.frame.core
-from roop.core import update_status
-from roop.face_analyser import get_one_face, get_many_faces
-from roop.typing import Face, Frame
-from roop.utilities import conditional_download, resolve_relative_path, is_image, is_video
-
-FACE_SWAPPER = None
-THREAD_LOCK = threading.Lock()
-NAME = 'ROOP.FACE-SWAPPER'
-
-
-def get_face_swapper() -> Any:
- global FACE_SWAPPER
-
- with THREAD_LOCK:
- if FACE_SWAPPER is None:
- model_path = resolve_relative_path('../models/inswapper_128.onnx')
- FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=roop.globals.execution_providers)
- return FACE_SWAPPER
-
-
-def pre_check() -> bool:
- download_directory_path = resolve_relative_path('../models')
- conditional_download(download_directory_path, ['https://huggingface.co/henryruhs/roop/resolve/main/inswapper_128.onnx'])
- return True
-
-
-def pre_start() -> bool:
- if not is_image(roop.globals.source_path):
- update_status('Select an image for source path.', NAME)
- return False
- elif not get_one_face(cv2.imread(roop.globals.source_path)):
- update_status('No face in source path detected.', NAME)
- return False
- if not is_image(roop.globals.target_path) and not is_video(roop.globals.target_path):
- update_status('Select an image or video for target path.', NAME)
- return False
- return True
-
-
-def post_process() -> None:
- global FACE_SWAPPER
-
- FACE_SWAPPER = None
-
-
-def swap_face(source_face: Face, target_face: Face, temp_frame: Frame) -> Frame:
- return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
-
-
-def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
- if roop.globals.many_faces:
- many_faces = get_many_faces(temp_frame)
- if many_faces:
- for target_face in many_faces:
- temp_frame = swap_face(source_face, target_face, temp_frame)
- else:
- target_face = get_one_face(temp_frame)
- if target_face:
- temp_frame = swap_face(source_face, target_face, temp_frame)
- return temp_frame
-
-
-def process_frames(source_path: str, temp_frame_paths: List[str], update: Callable[[], None]) -> None:
- source_face = get_one_face(cv2.imread(source_path))
- for temp_frame_path in temp_frame_paths:
- temp_frame = cv2.imread(temp_frame_path)
- result = process_frame(source_face, temp_frame)
- cv2.imwrite(temp_frame_path, result)
- if update:
- update()
-
-
-def process_image(source_path: str, target_path: str, output_path: str) -> None:
- source_face = get_one_face(cv2.imread(source_path))
- target_frame = cv2.imread(target_path)
- result = process_frame(source_face, target_frame)
- cv2.imwrite(output_path, result)
-
-
-def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
- roop.processors.frame.core.process_video(source_path, temp_frame_paths, process_frames)
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/segment.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/segment.py
deleted file mode 100644
index e125798463512ce4322a2cc139b4e5c1515e5c05..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/segment.py
+++ /dev/null
@@ -1,739 +0,0 @@
-from enum import IntEnum
-from functools import lru_cache
-from itertools import filterfalse
-from logging import getLogger
-from operator import attrgetter
-from typing import (
- TYPE_CHECKING,
- Dict,
- Iterable,
- List,
- NamedTuple,
- Optional,
- Sequence,
- Tuple,
- Type,
- Union,
-)
-
-from .cells import (
- _is_single_cell_widths,
- cached_cell_len,
- cell_len,
- get_character_cell_size,
- set_cell_size,
-)
-from .repr import Result, rich_repr
-from .style import Style
-
-if TYPE_CHECKING:
- from .console import Console, ConsoleOptions, RenderResult
-
-log = getLogger("rich")
-
-
-class ControlType(IntEnum):
- """Non-printable control codes which typically translate to ANSI codes."""
-
- BELL = 1
- CARRIAGE_RETURN = 2
- HOME = 3
- CLEAR = 4
- SHOW_CURSOR = 5
- HIDE_CURSOR = 6
- ENABLE_ALT_SCREEN = 7
- DISABLE_ALT_SCREEN = 8
- CURSOR_UP = 9
- CURSOR_DOWN = 10
- CURSOR_FORWARD = 11
- CURSOR_BACKWARD = 12
- CURSOR_MOVE_TO_COLUMN = 13
- CURSOR_MOVE_TO = 14
- ERASE_IN_LINE = 15
- SET_WINDOW_TITLE = 16
-
-
-ControlCode = Union[
- Tuple[ControlType],
- Tuple[ControlType, Union[int, str]],
- Tuple[ControlType, int, int],
-]
-
-
-@rich_repr()
-class Segment(NamedTuple):
- """A piece of text with associated style. Segments are produced by the Console render process and
-    are ultimately converted into strings to be written to the terminal.
-
- Args:
- text (str): A piece of text.
- style (:class:`~rich.style.Style`, optional): An optional style to apply to the text.
- control (Tuple[ControlCode], optional): Optional sequence of control codes.
-
- Attributes:
- cell_length (int): The cell length of this Segment.
- """
-
- text: str
- style: Optional[Style] = None
- control: Optional[Sequence[ControlCode]] = None
-
- @property
- def cell_length(self) -> int:
- """The number of terminal cells required to display self.text.
-
- Returns:
- int: A number of cells.
- """
- text, _style, control = self
- return 0 if control else cell_len(text)
-
- def __rich_repr__(self) -> Result:
- yield self.text
- if self.control is None:
- if self.style is not None:
- yield self.style
- else:
- yield self.style
- yield self.control
-
- def __bool__(self) -> bool:
- """Check if the segment contains text."""
- return bool(self.text)
-
- @property
- def is_control(self) -> bool:
- """Check if the segment contains control codes."""
- return self.control is not None
-
- @classmethod
- @lru_cache(1024 * 16)
- def _split_cells(cls, segment: "Segment", cut: int) -> Tuple["Segment", "Segment"]:
-
- text, style, control = segment
- _Segment = Segment
-
- cell_length = segment.cell_length
- if cut >= cell_length:
- return segment, _Segment("", style, control)
-
- cell_size = get_character_cell_size
-
- pos = int((cut / cell_length) * (len(text) - 1))
-
- before = text[:pos]
- cell_pos = cell_len(before)
- if cell_pos == cut:
- return (
- _Segment(before, style, control),
- _Segment(text[pos:], style, control),
- )
- while pos < len(text):
- char = text[pos]
- pos += 1
- cell_pos += cell_size(char)
- before = text[:pos]
- if cell_pos == cut:
- return (
- _Segment(before, style, control),
- _Segment(text[pos:], style, control),
- )
- if cell_pos > cut:
- return (
- _Segment(before[: pos - 1] + " ", style, control),
- _Segment(" " + text[pos:], style, control),
- )
-
- raise AssertionError("Will never reach here")
-
- def split_cells(self, cut: int) -> Tuple["Segment", "Segment"]:
- """Split segment in to two segments at the specified column.
-
- If the cut point falls in the middle of a 2-cell wide character then it is replaced
- by two spaces, to preserve the display width of the parent segment.
-
- Returns:
- Tuple[Segment, Segment]: Two segments.
- """
- text, style, control = self
-
- if _is_single_cell_widths(text):
- # Fast path with all 1 cell characters
- if cut >= len(text):
- return self, Segment("", style, control)
- return (
- Segment(text[:cut], style, control),
- Segment(text[cut:], style, control),
- )
-
- return self._split_cells(self, cut)
-
- @classmethod
- def line(cls) -> "Segment":
- """Make a new line segment."""
- return cls("\n")
-
- @classmethod
- def apply_style(
- cls,
- segments: Iterable["Segment"],
- style: Optional[Style] = None,
- post_style: Optional[Style] = None,
- ) -> Iterable["Segment"]:
- """Apply style(s) to an iterable of segments.
-
- Returns an iterable of segments where the style is replaced by ``style + segment.style + post_style``.
-
- Args:
- segments (Iterable[Segment]): Segments to process.
- style (Style, optional): Base style. Defaults to None.
- post_style (Style, optional): Style to apply on top of segment style. Defaults to None.
-
- Returns:
-            Iterable[Segment]: A new iterable of segments (possibly the same iterable).
- """
- result_segments = segments
- if style:
- apply = style.__add__
- result_segments = (
- cls(text, None if control else apply(_style), control)
- for text, _style, control in result_segments
- )
- if post_style:
- result_segments = (
- cls(
- text,
- (
- None
- if control
- else (_style + post_style if _style else post_style)
- ),
- control,
- )
- for text, _style, control in result_segments
- )
- return result_segments
-
- @classmethod
- def filter_control(
- cls, segments: Iterable["Segment"], is_control: bool = False
- ) -> Iterable["Segment"]:
- """Filter segments by ``is_control`` attribute.
-
- Args:
- segments (Iterable[Segment]): An iterable of Segment instances.
- is_control (bool, optional): is_control flag to match in search.
-
- Returns:
-            Iterable[Segment]: An iterable of Segment instances.
-
- """
- if is_control:
- return filter(attrgetter("control"), segments)
- else:
- return filterfalse(attrgetter("control"), segments)
-
- @classmethod
- def split_lines(cls, segments: Iterable["Segment"]) -> Iterable[List["Segment"]]:
- """Split a sequence of segments in to a list of lines.
-
- Args:
- segments (Iterable[Segment]): Segments potentially containing line feeds.
-
- Yields:
- Iterable[List[Segment]]: Iterable of segment lists, one per line.
- """
- line: List[Segment] = []
- append = line.append
-
- for segment in segments:
- if "\n" in segment.text and not segment.control:
- text, style, _ = segment
- while text:
- _text, new_line, text = text.partition("\n")
- if _text:
- append(cls(_text, style))
- if new_line:
- yield line
- line = []
- append = line.append
- else:
- append(segment)
- if line:
- yield line
-
- @classmethod
- def split_and_crop_lines(
- cls,
- segments: Iterable["Segment"],
- length: int,
- style: Optional[Style] = None,
- pad: bool = True,
- include_new_lines: bool = True,
- ) -> Iterable[List["Segment"]]:
- """Split segments in to lines, and crop lines greater than a given length.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments, probably
- generated from console.render.
- length (int): Desired line length.
- style (Style, optional): Style to use for any padding.
- pad (bool): Enable padding of lines that are less than `length`.
-
- Returns:
- Iterable[List[Segment]]: An iterable of lines of segments.
- """
- line: List[Segment] = []
- append = line.append
-
- adjust_line_length = cls.adjust_line_length
- new_line_segment = cls("\n")
-
- for segment in segments:
- if "\n" in segment.text and not segment.control:
- text, segment_style, _ = segment
- while text:
- _text, new_line, text = text.partition("\n")
- if _text:
- append(cls(_text, segment_style))
- if new_line:
- cropped_line = adjust_line_length(
- line, length, style=style, pad=pad
- )
- if include_new_lines:
- cropped_line.append(new_line_segment)
- yield cropped_line
- line.clear()
- else:
- append(segment)
- if line:
- yield adjust_line_length(line, length, style=style, pad=pad)
-
- @classmethod
- def adjust_line_length(
- cls,
- line: List["Segment"],
- length: int,
- style: Optional[Style] = None,
- pad: bool = True,
- ) -> List["Segment"]:
- """Adjust a line to a given width (cropping or padding as required).
-
- Args:
- segments (Iterable[Segment]): A list of segments in a single line.
- length (int): The desired width of the line.
- style (Style, optional): The style of padding if used (space on the end). Defaults to None.
- pad (bool, optional): Pad lines with spaces if they are shorter than `length`. Defaults to True.
-
- Returns:
- List[Segment]: A line of segments with the desired length.
- """
- line_length = sum(segment.cell_length for segment in line)
- new_line: List[Segment]
-
- if line_length < length:
- if pad:
- new_line = line + [cls(" " * (length - line_length), style)]
- else:
- new_line = line[:]
- elif line_length > length:
- new_line = []
- append = new_line.append
- line_length = 0
- for segment in line:
- segment_length = segment.cell_length
- if line_length + segment_length < length or segment.control:
- append(segment)
- line_length += segment_length
- else:
- text, segment_style, _ = segment
- text = set_cell_size(text, length - line_length)
- append(cls(text, segment_style))
- break
- else:
- new_line = line[:]
- return new_line
-
- @classmethod
- def get_line_length(cls, line: List["Segment"]) -> int:
- """Get the length of list of segments.
-
- Args:
-            line (List[Segment]): A line encoded as a list of Segments (assumes no '\\\\n' characters).
-
- Returns:
- int: The length of the line.
- """
- _cell_len = cell_len
- return sum(_cell_len(text) for text, style, control in line if not control)
-
- @classmethod
- def get_shape(cls, lines: List[List["Segment"]]) -> Tuple[int, int]:
- """Get the shape (enclosing rectangle) of a list of lines.
-
- Args:
- lines (List[List[Segment]]): A list of lines (no '\\\\n' characters).
-
- Returns:
- Tuple[int, int]: Width and height in characters.
- """
- get_line_length = cls.get_line_length
- max_width = max(get_line_length(line) for line in lines) if lines else 0
- return (max_width, len(lines))
-
- @classmethod
- def set_shape(
- cls,
- lines: List[List["Segment"]],
- width: int,
- height: Optional[int] = None,
- style: Optional[Style] = None,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Set the shape of a list of lines (enclosing rectangle).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style, optional): Style of any padding added.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- _height = height or len(lines)
-
- blank = (
- [cls(" " * width + "\n", style)] if new_lines else [cls(" " * width, style)]
- )
-
- adjust_line_length = cls.adjust_line_length
- shaped_lines = lines[:_height]
- shaped_lines[:] = [
- adjust_line_length(line, width, style=style) for line in lines
- ]
- if len(shaped_lines) < _height:
- shaped_lines.extend([blank] * (_height - len(shaped_lines)))
- return shaped_lines
-
- @classmethod
- def align_top(
- cls: Type["Segment"],
- lines: List[List["Segment"]],
- width: int,
- height: int,
- style: Style,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Aligns lines to top (adds extra lines to bottom as required).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style): Style of any padding added.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- extra_lines = height - len(lines)
- if not extra_lines:
- return lines[:]
- lines = lines[:height]
- blank = cls(" " * width + "\n", style) if new_lines else cls(" " * width, style)
- lines = lines + [[blank]] * extra_lines
- return lines
-
- @classmethod
- def align_bottom(
- cls: Type["Segment"],
- lines: List[List["Segment"]],
- width: int,
- height: int,
- style: Style,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Aligns render to bottom (adds extra lines above as required).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style): Style of any padding added. Defaults to None.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- extra_lines = height - len(lines)
- if not extra_lines:
- return lines[:]
- lines = lines[:height]
- blank = cls(" " * width + "\n", style) if new_lines else cls(" " * width, style)
- lines = [[blank]] * extra_lines + lines
- return lines
-
- @classmethod
- def align_middle(
- cls: Type["Segment"],
- lines: List[List["Segment"]],
- width: int,
- height: int,
- style: Style,
- new_lines: bool = False,
- ) -> List[List["Segment"]]:
- """Aligns lines to middle (adds extra lines to above and below as required).
-
- Args:
- lines (List[List[Segment]]): A list of lines.
- width (int): Desired width.
- height (int, optional): Desired height or None for no change.
- style (Style): Style of any padding added.
- new_lines (bool, optional): Padded lines should include "\n". Defaults to False.
-
- Returns:
- List[List[Segment]]: New list of lines.
- """
- extra_lines = height - len(lines)
- if not extra_lines:
- return lines[:]
- lines = lines[:height]
- blank = cls(" " * width + "\n", style) if new_lines else cls(" " * width, style)
- top_lines = extra_lines // 2
- bottom_lines = extra_lines - top_lines
- lines = [[blank]] * top_lines + lines + [[blank]] * bottom_lines
- return lines
-
- @classmethod
- def simplify(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Simplify an iterable of segments by combining contiguous segments with the same style.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments.
-
- Returns:
- Iterable[Segment]: A possibly smaller iterable of segments that will render the same way.
- """
- iter_segments = iter(segments)
- try:
- last_segment = next(iter_segments)
- except StopIteration:
- return
-
- _Segment = Segment
- for segment in iter_segments:
- if last_segment.style == segment.style and not segment.control:
- last_segment = _Segment(
- last_segment.text + segment.text, last_segment.style
- )
- else:
- yield last_segment
- last_segment = segment
- yield last_segment
-
- @classmethod
- def strip_links(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Remove all links from an iterable of styles.
-
- Args:
-            segments (Iterable[Segment]): An iterable of segments.
-
- Yields:
-            Segment: Segments with links removed.
- """
- for segment in segments:
- if segment.control or segment.style is None:
- yield segment
- else:
- text, style, _control = segment
- yield cls(text, style.update_link(None) if style else None)
-
- @classmethod
- def strip_styles(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Remove all styles from an iterable of segments.
-
- Args:
-            segments (Iterable[Segment]): An iterable of segments.
-
- Yields:
-            Segment: Segments with styles replaced by None.
- """
- for text, _style, control in segments:
- yield cls(text, None, control)
-
- @classmethod
- def remove_color(cls, segments: Iterable["Segment"]) -> Iterable["Segment"]:
- """Remove all color from an iterable of segments.
-
- Args:
-            segments (Iterable[Segment]): An iterable of segments.
-
- Yields:
- Segment: Segments with colorless style.
- """
-
- cache: Dict[Style, Style] = {}
- for text, style, control in segments:
- if style:
- colorless_style = cache.get(style)
- if colorless_style is None:
- colorless_style = style.without_color
- cache[style] = colorless_style
- yield cls(text, colorless_style, control)
- else:
- yield cls(text, None, control)
-
- @classmethod
- def divide(
- cls, segments: Iterable["Segment"], cuts: Iterable[int]
- ) -> Iterable[List["Segment"]]:
- """Divides an iterable of segments in to portions.
-
- Args:
- cuts (Iterable[int]): Cell positions where to divide.
-
- Yields:
-            Iterable[List[Segment]]: An iterable of segment lists.
- """
- split_segments: List["Segment"] = []
- add_segment = split_segments.append
-
- iter_cuts = iter(cuts)
-
- while True:
- cut = next(iter_cuts, -1)
- if cut == -1:
- return []
- if cut != 0:
- break
- yield []
- pos = 0
-
- segments_clear = split_segments.clear
- segments_copy = split_segments.copy
-
- _cell_len = cached_cell_len
- for segment in segments:
- text, _style, control = segment
- while text:
- end_pos = pos if control else pos + _cell_len(text)
- if end_pos < cut:
- add_segment(segment)
- pos = end_pos
- break
-
- if end_pos == cut:
- add_segment(segment)
- yield segments_copy()
- segments_clear()
- pos = end_pos
-
- cut = next(iter_cuts, -1)
- if cut == -1:
- if split_segments:
- yield segments_copy()
- return
-
- break
-
- else:
- before, segment = segment.split_cells(cut - pos)
- text, _style, control = segment
- add_segment(before)
- yield segments_copy()
- segments_clear()
- pos = cut
-
- cut = next(iter_cuts, -1)
- if cut == -1:
- if split_segments:
- yield segments_copy()
- return
-
- yield segments_copy()
-
-
-class Segments:
- """A simple renderable to render an iterable of segments. This class may be useful if
- you want to print segments outside of a __rich_console__ method.
-
- Args:
- segments (Iterable[Segment]): An iterable of segments.
- new_lines (bool, optional): Add new lines between segments. Defaults to False.
- """
-
- def __init__(self, segments: Iterable[Segment], new_lines: bool = False) -> None:
- self.segments = list(segments)
- self.new_lines = new_lines
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
- if self.new_lines:
- line = Segment.line()
- for segment in self.segments:
- yield segment
- yield line
- else:
- yield from self.segments
-
-
-class SegmentLines:
- def __init__(self, lines: Iterable[List[Segment]], new_lines: bool = False) -> None:
- """A simple renderable containing a number of lines of segments. May be used as an intermediate
-        in the rendering process.
-
- Args:
- lines (Iterable[List[Segment]]): Lists of segments forming lines.
- new_lines (bool, optional): Insert new lines after each line. Defaults to False.
- """
- self.lines = list(lines)
- self.new_lines = new_lines
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
- if self.new_lines:
- new_line = Segment.line()
- for line in self.lines:
- yield from line
- yield new_line
- else:
- for line in self.lines:
- yield from line
-
-
-if __name__ == "__main__": # pragma: no cover
- from pip._vendor.rich.console import Console
- from pip._vendor.rich.syntax import Syntax
- from pip._vendor.rich.text import Text
-
- code = """from rich.console import Console
-console = Console()
-text = Text.from_markup("Hello, [bold magenta]World[/]!")
-console.print(text)"""
-
- text = Text.from_markup("Hello, [bold magenta]World[/]!")
-
- console = Console()
-
- console.rule("rich.Segment")
- console.print(
- "A Segment is the last step in the Rich render process before generating text with ANSI codes."
- )
- console.print("\nConsider the following code:\n")
- console.print(Syntax(code, "python", line_numbers=True))
- console.print()
- console.print(
- "When you call [b]print()[/b], Rich [i]renders[/i] the object in to the following:\n"
- )
- fragments = list(console.render(text))
- console.print(fragments)
- console.print()
- console.print("The Segments are then processed to produce the following output:\n")
- console.print(text)
- console.print(
- "\nYou will only need to know this if you are implementing your own Rich renderables."
- )
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/tornadoweb.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/tornadoweb.py
deleted file mode 100644
index e19c30b18905a39466ab6b51403438605e706caf..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/tornadoweb.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# Copyright 2017 Elisey Zanko
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import sys
-import typing
-
-from pip._vendor.tenacity import BaseRetrying
-from pip._vendor.tenacity import DoAttempt
-from pip._vendor.tenacity import DoSleep
-from pip._vendor.tenacity import RetryCallState
-
-from tornado import gen
-
-if typing.TYPE_CHECKING:
- from tornado.concurrent import Future
-
-_RetValT = typing.TypeVar("_RetValT")
-
-
-class TornadoRetrying(BaseRetrying):
- def __init__(self, sleep: "typing.Callable[[float], Future[None]]" = gen.sleep, **kwargs: typing.Any) -> None:
- super().__init__(**kwargs)
- self.sleep = sleep
-
- @gen.coroutine # type: ignore[misc]
- def __call__(
- self,
- fn: "typing.Callable[..., typing.Union[typing.Generator[typing.Any, typing.Any, _RetValT], Future[_RetValT]]]",
- *args: typing.Any,
- **kwargs: typing.Any,
- ) -> "typing.Generator[typing.Any, typing.Any, _RetValT]":
- self.begin()
-
- retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
- while True:
- do = self.iter(retry_state=retry_state)
- if isinstance(do, DoAttempt):
- try:
- result = yield fn(*args, **kwargs)
- except BaseException: # noqa: B902
- retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
- else:
- retry_state.set_result(result)
- elif isinstance(do, DoSleep):
- retry_state.prepare_for_next_attempt()
- yield self.sleep(do)
- else:
- raise gen.Return(do)
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/nuimages.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/nuimages.py
deleted file mode 100644
index 52736e331cc6c95001bc84f2c17a0805789b2450..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/nuimages.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from detectron2.data.datasets.register_coco import register_coco_instances
-import os
-
-categories = [
- {'id': 0, 'name': 'car'},
- {'id': 1, 'name': 'truck'},
- {'id': 2, 'name': 'trailer'},
- {'id': 3, 'name': 'bus'},
- {'id': 4, 'name': 'construction_vehicle'},
- {'id': 5, 'name': 'bicycle'},
- {'id': 6, 'name': 'motorcycle'},
- {'id': 7, 'name': 'pedestrian'},
- {'id': 8, 'name': 'traffic_cone'},
- {'id': 9, 'name': 'barrier'},
-]
-
-def _get_builtin_metadata():
- id_to_name = {x['id']: x['name'] for x in categories}
- thing_dataset_id_to_contiguous_id = {i: i for i in range(len(categories))}
- thing_classes = [id_to_name[k] for k in sorted(id_to_name)]
- return {
- "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id,
- "thing_classes": thing_classes}
-
-_PREDEFINED_SPLITS = {
- "nuimages_train": ("nuimages", "nuimages/annotations/nuimages_v1.0-train.json"),
- "nuimages_val": ("nuimages", "nuimages/annotations/nuimages_v1.0-val.json"),
- "nuimages_mini": ("nuimages", "nuimages/annotations/nuimages_v1.0-mini.json"),
-}
-
-for key, (image_root, json_file) in _PREDEFINED_SPLITS.items():
- register_coco_instances(
- key,
- _get_builtin_metadata(),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
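The `_get_builtin_metadata` helper above just reshapes the category list into the two mappings detectron2 expects. A standalone sketch of that reshaping, runnable without detectron2 (the category list here is a shortened copy of the ten-entry list above):

```python
# Standalone sketch of the metadata reshaping done by _get_builtin_metadata.
# Runs without detectron2; the category list is abbreviated from the one above.
categories = [
    {'id': 0, 'name': 'car'},
    {'id': 1, 'name': 'truck'},
    {'id': 2, 'name': 'trailer'},
]

def get_builtin_metadata(categories):
    # Map dataset category ids to names, then to contiguous training ids.
    id_to_name = {c['id']: c['name'] for c in categories}
    return {
        "thing_dataset_id_to_contiguous_id": {i: i for i in range(len(categories))},
        "thing_classes": [id_to_name[k] for k in sorted(id_to_name)],
    }

meta = get_builtin_metadata(categories)
print(meta["thing_classes"])  # ['car', 'truck', 'trailer']
```

Because the nuImages ids are already 0-based and contiguous, the id mapping is the identity; the helper still builds it explicitly since detectron2's COCO registration expects both keys.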
diff --git a/spaces/Bart92/RVC_HF/utils/backups_test.py b/spaces/Bart92/RVC_HF/utils/backups_test.py
deleted file mode 100644
index f3edf15811b5035ee82f21e54e87b7e87ce413eb..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/utils/backups_test.py
+++ /dev/null
@@ -1,138 +0,0 @@
-
-import os
-import shutil
-import hashlib
-import time
-
-LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
-WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
-GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup'
-
-def import_google_drive_backup():
- print("Importing Google Drive backup...")
- GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' # change this to your Google Drive path
- LOGS_FOLDER = '/content/Applio-RVC-Fork/logs'
- WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights'
- weights_exist = False
- files_to_copy = []
- weights_to_copy = []
-
- def handle_files(root, files, is_weight_files=False):
- nonlocal weights_exist  # without this, the assignment below would only create a local
- for filename in files:
- filepath = os.path.join(root, filename)
- if filename.endswith('.pth') and is_weight_files:
- weights_exist = True
- backup_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- else:
- backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH))
- backup_folderpath = os.path.dirname(backup_filepath)
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created folder: {backup_folderpath}', flush=True)
- if is_weight_files:
- weights_to_copy.append((filepath, backup_filepath))
- else:
- files_to_copy.append((filepath, backup_filepath))
-
- for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'logs')):
- handle_files(root, files)
-
- for root, dirs, files in os.walk(os.path.join(GOOGLE_DRIVE_PATH, 'weights')):
- handle_files(root, files, True)
-
- # Copy files in batches
- total_files = len(files_to_copy)
- start_time = time.time()
- for i, (source, dest) in enumerate(files_to_copy, start=1):
- with open(source, 'rb') as src, open(dest, 'wb') as dst:
- shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size
- # Report progress every 5 seconds or after every 100 files, whichever is less frequent
- if time.time() - start_time > 5 or i % 100 == 0:
- print(f'\rCopying file {i} of {total_files} ({i * 100 / total_files:.2f}%)', end="")
- start_time = time.time()
- print(f'\nImported {len(files_to_copy)} files from Google Drive backup')
-
- # Copy weights in batches
- total_weights = len(weights_to_copy)
- start_time = time.time()
- for i, (source, dest) in enumerate(weights_to_copy, start=1):
- with open(source, 'rb') as src, open(dest, 'wb') as dst:
- shutil.copyfileobj(src, dst, 1024*1024) # 1MB buffer size
- # Report progress every 5 seconds or after every 100 files, whichever is less frequent
- if time.time() - start_time > 5 or i % 100 == 0:
- print(f'\rCopying weight file {i} of {total_weights} ({i * 100 / total_weights:.2f}%)', end="")
- start_time = time.time()
- if weights_exist:
- print(f'\nImported {len(weights_to_copy)} weight files')
- print("Copied weights from Google Drive backup to local weights folder.")
- else:
- print("\nNo weights found in Google Drive backup.")
- print("Google Drive backup import completed.")
-
-def backup_files():
- print("\n Starting backup loop...")
- last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt')
- fully_updated = False # boolean to track if all files are up to date
- try:
- with open(last_backup_timestamps_path, 'r') as f:
- last_backup_timestamps = dict(line.strip().split(':') for line in f)
- except (FileNotFoundError, ValueError):  # no timestamps file yet, or a malformed line
- last_backup_timestamps = {}
-
- while True:
- updated = False
- files_to_copy = []
- files_to_delete = []
-
- for root, dirs, files in os.walk(LOGS_FOLDER):
- for filename in files:
- if filename != 'last_backup_timestamps.txt':
- filepath = os.path.join(root, filename)
- if os.path.isfile(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- backup_folderpath = os.path.dirname(backup_filepath)
-
- if not os.path.exists(backup_folderpath):
- os.makedirs(backup_folderpath)
- print(f'Created backup folder: {backup_folderpath}', flush=True)
-
- # check if file has changed since last backup
- last_backup_timestamp = last_backup_timestamps.get(filepath)
- current_timestamp = os.path.getmtime(filepath)
- if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp:
- files_to_copy.append((filepath, backup_filepath)) # add to list of files to copy
- last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp
- updated = True
- fully_updated = False # if a file is updated, all files are not up to date
-
- # check if any files were deleted in Colab and delete them from the backup drive
- for filepath in list(last_backup_timestamps.keys()):
- if not os.path.exists(filepath):
- backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER))
- if os.path.exists(backup_filepath):
- files_to_delete.append(backup_filepath) # add to list of files to delete
- del last_backup_timestamps[filepath]
- updated = True
- fully_updated = False # if a file is deleted, all files are not up to date
-
- # Copy files in batches
- if files_to_copy:
- for source, dest in files_to_copy:
- shutil.copy2(source, dest)
- print(f'Copied or updated {len(files_to_copy)} files')
-
- # Delete files in batches
- if files_to_delete:
- for file in files_to_delete:
- os.remove(file)
- print(f'Deleted {len(files_to_delete)} files')
-
- if not updated and not fully_updated:
- print("Files are up to date.")
- fully_updated = True # if all files are up to date, set the boolean to True
- copy_weights_folder_to_drive()
-
- with open(last_backup_timestamps_path, 'w') as f:
- for filepath, timestamp in last_backup_timestamps.items():
- f.write(f'{filepath}:{timestamp}\n')
- time.sleep(15) # wait for 15 seconds before checking again
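The backup loop above decides what to re-copy by comparing each file's mtime against the timestamp recorded at the last backup. That check can be isolated into a small, self-contained sketch (the temp directory and file names are hypothetical):

```python
# Self-contained sketch of the mtime check backup_files() uses to decide
# whether a file needs re-copying. Paths and contents are made up.
import os
import tempfile

def needs_backup(filepath, last_backup_timestamps):
    """True if the file was never backed up or changed since the last backup."""
    last = last_backup_timestamps.get(filepath)
    current = os.path.getmtime(filepath)
    return last is None or float(last) < current

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "train.log")
    with open(path, "w") as fh:
        fh.write("step 1\n")

    stamps = {}
    print(needs_backup(path, stamps))   # True: no timestamp recorded yet
    stamps[path] = str(os.path.getmtime(path))
    print(needs_backup(path, stamps))   # False: unchanged since the record
```

Storing the timestamps as strings (as the script does in `last_backup_timestamps.txt`) is why the comparison converts back with `float(last)` before comparing.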
diff --git a/spaces/Benson/text-generation/Examples/Apk Stumble Chicos Apk Puro.md b/spaces/Benson/text-generation/Examples/Apk Stumble Chicos Apk Puro.md
deleted file mode 100644
index 993b5eed48052e4de7aaee3052eca34e2a3f1fef..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Apk Stumble Chicos Apk Puro.md
+++ /dev/null
@@ -1,80 +0,0 @@
-
-
8 Ball Pool 5.8.0 Mod Apk: Everything You Need to Know
-
If you are a fan of pool games, you may have heard of 8 Ball Pool, one of the most popular and addictive online multiplayer games for Android and iOS devices. But did you know there is a way to enjoy this game even more, with unlimited coins, cash, cues, and other perks? Yes, we are talking about 8 Ball Pool 5.8.0 Mod Apk, the latest version of the modified app that lets you play the game with enhanced features and no restrictions. In this article, we will tell you everything you need to know about this mod apk, including its features, benefits, risks, and how to download and install it on your device.
-
What is 8 Ball Pool?
-
8 Ball Pool is a free-to-play multiplayer game developed by Miniclip, a Swiss company that also created other popular games such as Agar.io, Soccer Stars, and Carrom Pool. The game was released in 2010 and has since become one of the most downloaded and played games on Google Play and the App Store, with over 500 million downloads and millions of active players worldwide.
Some of the features that make 8 Ball Pool so fun and engaging are:
-
-
You can play with your friends or challenge players from around the world in 1-on-1 matches or tournaments.
-
You can customize your cue and table with various designs and colors.
-
You can earn coins and cash by winning matches and completing missions.
-
You can use coins and cash to buy new cues, chat packs, mini-games, and other items in the in-game shop.
-
You can join clubs and chat with other members.
-
You can level up and unlock new locations and modes.
-
You can take part in seasonal events and earn exclusive rewards.
-
-
How to play 8 Ball Pool
-
-
What is a mod apk?
-
A mod apk is a modified version of an original app that has been altered by third-party developers to add or remove certain features, bypass limitations, or improve performance. A mod apk usually comes with a different file name and signature than the original app, and requires manual installation from unknown sources.
-
Benefits of using a mod apk
-
Some of the benefits of using a mod apk are:
-
-
You can access premium features that are otherwise locked or require in-app purchases.
-
You can get unlimited resources such as coins, cash, and gems that are hard to earn or expensive to buy.
-
You can unlock all levels, modes, items, and cues that are restricted or require progress or achievements.
-
You can remove ads, pop-ups, and banners that are annoying or intrusive.
-
You can enjoy faster loading, smoother gameplay, and better graphics that are otherwise compromised or low quality.
-
-
Risks of using a mod apk
-
However, using a mod apk also comes with some risks you should be aware of:
-
-
You may get banned from the game or lose your account if the developers detect that you are using a mod apk.
-
You may expose your device to malware, viruses, spyware, and other threats that can harm your data, privacy, or security if you download a mod apk from an untrustworthy source.
-
You may experience glitches, bugs, or crashes that affect the performance of your game or device if you install a mod apk that is incompatible with your device or game version.
-
You may miss out on updates, new features, and bug fixes released by the original developers if you use a mod apk that is outdated or not updated regularly.
-
You may lose the fun and challenge of the game if you use a mod apk that makes the game too easy or unfair.
-
-
-
8 Ball Pool 5.8.0 Mod Apk is the latest version of the modified app for 8 Ball Pool, released in June 2023. It is one of the most popular and widely used mod apks for this game, as it offers many features and benefits that are not available in the original app.
-
Features of 8 Ball Pool 5.8.0 Mod Apk
-
Some of the features you can enjoy with 8 Ball Pool 5.8.0 Mod Apk are:
-
-
-
You can get unlimited coins and cash to buy anything in the in-game shop.
-
You can get unlimited cues and upgrade them to the maximum level.
-
You can get unlimited chat packs and use them to communicate with other players.
-
You can get unlimited mini-games and play them to earn more coins and cash.
-
You can get all premium features, such as the VIP club, exclusive cues, and rare boxes, for free.
-
You can play in any location and mode without any level or achievement requirement.
-
You can play with any player without any skill or rank restriction.
-
You can play with long guidelines and an extended time limit to improve your accuracy and speed.
-
You can play without ads or interruptions.
-
-
How to download and install 8 Ball Pool 5.8.0 Mod Apk
-
If you want to try 8 Ball Pool 5.8.0 Mod Apk, you need to follow these steps:
-
-
Uninstall the original app from your device if you have it installed.
-
Download the mod apk file from a trustworthy source (such as [this]).
-
Enable installation from unknown sources in your device settings.
-
Locate the downloaded file in your device storage and tap on it to install it.
-
Launch the app and enjoy the game with all the modded features.
-
-
Conclusion
-
-
FAQs
-
Here are some frequently asked questions about 8 Ball Pool 5.8.0 Mod Apk:
-
Is 8 Ball Pool 5.8.0 Mod Apk safe to use?
-
8 Ball Pool 5.8.0 Mod Apk is generally safe to use if you download it from a trustworthy source and scan it with an antivirus before installing it on your device. However, there is always a chance of getting malware or viruses from untrustworthy sources, or of getting banned from the game or losing your account if the developers detect that you are using a mod apk. Therefore, we recommend that you use this mod apk at your own risk and discretion.
-
Is 8 Ball Pool 5.8.0 Mod Apk compatible with my device?
-
8 Ball Pool 5.8.0 Mod Apk is compatible with most Android devices that run Android 4.4 or higher and have at least 2 GB of RAM. However, some devices may not support the mod apk, or may experience glitches or errors due to differences in specifications or settings. Therefore, we suggest that you check your device's compatibility before downloading and installing the mod apk.
-
How do I update 8 Ball Pool 5.8.0 Mod Apk?
-
8 Ball Pool 5.8.0 Mod Apk is not available on Google Play or the App Store, so it cannot be updated automatically from there. Instead, you need to check for updates manually from the source where you downloaded the mod apk, or from other websites that offer the latest version. You can also follow the official social media pages of 8 Ball Pool or Miniclip to be notified of new updates or features. To update the mod apk, you need to uninstall the old version and install the new one following the same steps as before.
-
Can I play 8 Ball Pool 5.8.0 Mod Apk offline?
-
-
Can I play 8 Ball Pool 5.8.0 Mod Apk with my friends?
-
Yes, you can play 8 Ball Pool 5.8.0 Mod Apk with friends who also have the same mod apk installed on their devices. You can invite them to join your club or challenge them to a match using the in-game chat or social media platforms. However, you cannot play with friends who have the original app or a different mod apk, as they will not be able to connect with you or see your modded features.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Brain Test 360.md b/spaces/Benson/text-generation/Examples/Brain Test 360.md
deleted file mode 100644
index 0fdb409c918ba18a3d248a3d56021376551ddc0b..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Brain Test 360.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
Brain Test 360: A Fun and Challenging Way to Train Your Brain
-
Do you want to boost your brainpower, learn new things, and have fun at the same time? If so, you should try Brain Test 360, a mobile game that combines puzzles, trivia, and a 3D brain model. In this article, we will tell you what Brain Test 360 is, why you should play it, how to play it, and share some tips and tricks to help you succeed.
Brain Test 360 is a mobile game that tests your logic, creativity, and problem-solving skills. It has two modes: puzzle mode and brain mode. In puzzle mode, you solve various types of puzzles ranging from easy to hard. Some puzzles are based on math, logic, or words, while others rely on visual clues, common sense, or humor. You have to tap, swipe, shake, or tilt your phone to find the answer. In brain mode, you can explore a 3D brain model that shows the anatomy and functions of the brain. You can rotate the model, zoom in and out, and tap on different parts of the brain to learn more about them. You can also take quizzes to test your knowledge of the brain.
-
Why should you play Brain Test 360?
-
There are many benefits to playing Brain Test 360. Here are some of them:
-
It improves your cognitive abilities and mental health
-
Playing Brain Test 360 can help you improve your memory, attention, concentration, logic, creativity, and problem-solving skills, all of which are essential for academic, professional, and personal success. It can also help you reduce stress, anxiety, depression, and boredom, and boost your self-confidence, happiness, and motivation.
-
It entertains you with fun and tricky puzzles
-
-
It teaches you about the brain and neuroscience
-
Playing Brain Test 360 can also be a great way to learn new things about the brain and neuroscience. Brain mode lets you explore the structure and function of the brain interactively. You can learn about its different parts, such as the cerebrum, cerebellum, brainstem, and limbic system, and how they affect your emotions, thoughts, and behaviors. You can also learn about common brain disorders, such as Alzheimer's disease, Parkinson's disease, and stroke, and how they affect the brain.
-
-
How do you play Brain Test 360?
-
Playing Brain Test 360 is easy and simple. Here are the steps:
-
Download the game from the App Store or Google Play
-
The game is available for iOS and Android devices, and you can download it for free from the App Store or Google Play. It is about 100 MB in size and requires an internet connection to play.
-
Choose between puzzle mode and brain mode
-
Once you open the game, you can choose between puzzle mode and brain mode, and switch between them at any time by tapping the icons at the bottom of the screen. Puzzle mode has over 200 levels, while brain mode has over 50 quizzes. You can also view your progress, achievements, and settings by tapping the menu icon in the top-left corner of the screen.
-
Solve the puzzles or interact with the brain model
-
In puzzle mode, you solve the puzzles by tapping and swiping with your finger, or by shaking or tilting your phone. Read each question carefully and look for clues in the picture; sometimes you have to think outside the box and use your imagination. If you get stuck, you can use hints or watch videos for help, or skip a level entirely. You earn coins for every puzzle you solve, which you can spend on more hints or on unlocking more levels.
-
-
Earn coins and unlock more levels and features
-
As you play Brain Test 360, you will earn coins that you can use to unlock more levels and features. You can also get more coins by watching ads, rating the game, or inviting your friends to play. Some of the features you can unlock are:
-
-
A night mode that changes the game's color scheme
-
A sound mode that plays relaxing music and sounds while you play
-
A language option that lets you choose between English, Spanish, French, German, Italian, Portuguese, Turkish, Russian, Arabic, Hindi, Japanese, Korean, and Chinese
-
A feedback option that lets you send comments or suggestions to the developers
-
-
Tips and tricks for Brain Test 360
-
Here are some tips and tricks to help you enjoy Brain Test 360 even more:
-
Think outside the box and use your imagination
-
Some of the puzzles in Brain Test 360 are not as straightforward as they seem. You have to think creatively and use your imagination to find the answer. For example, sometimes you have to tilt your phone to change the perspective, or shake it to make something fall. Sometimes you have to look for hidden objects or words in the picture, or combine two items to create a new one. Sometimes you have to break the rules or do something unexpected. Don't be afraid to try different things and experiment with different solutions.
-
Use hints or watch videos if you get stuck
-
If you get stuck on a puzzle or quiz, don't give up. Hints give you a clue or a suggestion on how to solve the puzzle or answer the question, while videos show you the solution step by step. You can buy hints with the coins you earn by playing, or watch ads to get free hints and videos.
-
Learn from your mistakes and try again
-
-
Conclusion
-
Brain Test 360 is a mobile game that tests your logic, creativity, and problem-solving skills with puzzles and trivia, and lets you explore a 3D brain model that teaches you about the anatomy and functions of the brain. Playing it can improve your cognitive skills and mental health, entertain you with fun and tricky puzzles, and teach you about the brain and neuroscience. It is easy and simple to play, yet challenging and fun. You can download it for free from the App Store or Google Play and start playing right away.
-
If you are looking for a fun and challenging way to train your brain, Brain Test 360 is the game for you.
Here are some frequently asked questions you may have about Brain Test 360:
-
Q: How can I contact the developers of Brain Test 360?
-
A: You can contact the developers of Brain Test 360 by sending an email to braintest360@gmail.com. You can also follow them on Facebook, Twitter, or Instagram for the latest news and updates about the game.
-
Q: How can I share my feedback or suggestions for Brain Test 360?
-
A: You can share your feedback or suggestions for Brain Test 360 through the in-game feedback option. You can also rate and review the game on the App Store or Google Play, or leave a comment on its social media pages.
-
Q: How can I play Brain Test 360 with my friends?
-
A: You can play Brain Test 360 with your friends by inviting them to download the game and join you. You can also compare your scores and achievements with them, and challenge them to solve the puzzles or take the quizzes.
-
Q: How can I get more coins in Brain Test 360?
-
A: You can get more coins in Brain Test 360 by solving puzzles, completing quizzes, watching ads, rating the game, or inviting your friends. You can also buy coins with real money if you want.
-
Q: How can I turn off the sound or music in Brain Test 360?
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Minecraft De Prueba En El Ordenador Porttil.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Minecraft De Prueba En El Ordenador Porttil.md
deleted file mode 100644
index 19e14e6802bd4d08f84def31192112dd4be4e132..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cmo Descargar Minecraft De Prueba En El Ordenador Porttil.md
+++ /dev/null
@@ -1,87 +0,0 @@
-
-
How to Download Minecraft Trial Creative Mode for Free
-
Minecraft is one of the most popular and creative games in the world, where you can explore infinite worlds and build anything you can imagine. But did you know you can try it for free before buying the full version? In this article, we will show you how to download Minecraft Trial Creative Mode for free on different devices and how to enjoy it to the fullest.
-
What is Minecraft Trial Creative Mode?
-
Minecraft Trial Creative Mode is a free, time-limited version of Minecraft that lets you experience the game in creative mode, where you have unlimited resources and can build whatever you want without worrying about survival. You can also switch to survival mode, where you have to craft weapons and armor to defend yourself from dangerous mobs, but you will be limited to 90 minutes per world.
-
The difference between survival and creative modes
-
In survival mode, you have to gather resources, craft tools, fight enemies, and manage your hunger and health. You also have to deal with day and night cycles, weather changes, and hostile creatures. Survival mode is more challenging and realistic, but also more rewarding when you achieve your goals.
-
In creative mode, you have unlimited resources and can fly around the world. You can build whatever you want without restrictions or dangers, and spawn any mob or item you want using commands or the inventory menus. Creative mode is more relaxing and fun, but also less immersive and exciting.
-
The benefits of playing in creative mode
-
-
Limitations of the trial version
-
While Minecraft Trial Creative Mode is a good way to try the game for free, it also has some limitations you should be aware of. For example:
-
-
You can only play for 90 minutes per world. After that, you can still view your world, but you cannot interact with it or make any changes.
-
You cannot save or load your worlds. If you quit the game or switch devices, you will lose your progress.
-
You cannot access multiplayer or online features. You can only play solo, or in split-screen mode on Windows 10.
-
You cannot customize your character or change your skin. You are stuck with the default Steve or Alex skin.
-
You cannot access all the features and content of the full version. For example, you cannot use command blocks, structure blocks, or data packs.
-
-
If you want the full Minecraft experience, including creative mode, multiplayer, online servers, custom skins, mods, maps, and more, you will need to buy the game, which you can do at any time during or after your trial.
-
How to download Minecraft Trial Creative Mode for different devices
-
Minecraft Trial Creative Mode is available for Windows 10 and Android devices. Here are the steps to download it on each one.
For Windows 10
-
If you have a Windows 10 PC, you can download Minecraft Trial Creative Mode from the Microsoft Store. Here are the steps:
-
Step 1: Go to the Microsoft Store
-
Open the Microsoft Store app on your PC. You can find it by typing "Microsoft Store" into the search bar, or by clicking its icon on the taskbar or in the Start menu.
-
Step 2: Search for Minecraft for Windows 10
-
In the Microsoft Store app, type "Minecraft for Windows 10" into the search box and press Enter. You should see the game in the results. Click on it to open its page.
-
-
Step 3: Click the free trial button
-
-
Step 4: Install and launch the game
-
Once the download is complete, you can install and launch the game by clicking the "Play" button. You can also find the game in your library or on your desktop. Enjoy Minecraft Trial Creative Mode for free!
-
For Android
-
If you have an Android device, you can download Minecraft Trial Creative Mode from the Google Play Store. Here are the steps:
-
Step 1: Go to the Google Play Store
-
Open the Google Play Store app on your device. You can find it by swiping up from the bottom of the screen, or by tapping its icon in the app drawer.
-
Step 2: Search for Minecraft Trial
-
In the Google Play Store app, type "Minecraft Trial" into the search box and tap the magnifying-glass icon. You should see the game in the results. Tap on it to open its page.
-
Step 3: Tap the Install button
-
On the game's page, you should see a button that says "Install". Tap it to start downloading the game. You may need to accept some permissions or agree to some terms and conditions before continuing.
-
Step 4: Open and play the game
-
Once the download is complete, you can open and play the game by tapping the "Open" button. You can also find the game in your app list or on your home screen. Have fun playing Minecraft Trial Creative Mode for free!
-
Cómo disfrutar al máximo del modo creativo de prueba de Minecraft
-
Minecraft Trial Creative Mode es una gran manera de explorar y crear en Minecraft, pero también tiene algunas limitaciones y desafíos. Aquí hay algunos consejos y trucos para ayudarle a disfrutar al máximo:
-
Consejos y trucos para construir estructuras sorprendentes
-
El modo creativo te da recursos ilimitados y libertad para construir lo que quieras, pero también requiere algo de planificación y creatividad. Aquí hay algunos consejos y trucos para construir estructuras increíbles:
-
-
-
Utilice comandos o menús de inventario para generar cualquier bloque o elemento que desee. También puede usar comandos para llenar áreas grandes con bloques, clonar estructuras existentes o teletransportarse.
-
Utilice redstone, pistones, palancas, botones, placas de presión, observadores, tolvas, dispensadores, cuentagotas y otros bloques y elementos para crear mecanismos y artilugios que se pueden mover, activar o interactuar con otras cosas.
-
Utilice mapas, carteles, pancartas, pinturas, marcos de artículos, armaduras, cabezas, libros y otros bloques y artículos para decorar y etiquetar sus construcciones. También puede usar comandos o menús de inventario para personalizarlos.
-
Utilice bloques de estructura o paquetes de datos para importar o exportar estructuras de otros mundos o fuentes en línea. También puede usarlos para guardar y cargar sus propias estructuras.
-
Utilice la capacidad de vuelo del modo creativo para construir más rápido y más fácil. También puede usar comandos o menús de inventario para cambiar su modo de juego, hora del día, clima, dificultad u otros ajustes.
-
Usa recursos en línea como tutoriales, guías, videos, imágenes, foros, wikis, blogs o sitios web para obtener inspiración e ideas para tus compilaciones. También puede utilizarlos para aprender nuevas técnicas y consejos.
-
-
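For example, assuming the trial build you are playing exposes the chat/command bar (availability and exact syntax vary by edition and version), filling, cloning, and teleporting look roughly like this; the coordinates are placeholders, and the trailing notes are annotations, not part of the commands:

```
/fill 0 64 0 20 64 20 stone      (fill a flat 21x21 platform with stone)
/clone 0 64 0 10 70 10 50 64 0   (copy a region to a new location)
/tp @s 100 80 100                (teleport yourself to x=100 y=80 z=100)
```
-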
How to switch between Creative and Survival modes
-
Creative mode is the default mode in Minecraft Trial, but you can also switch to Survival mode if you want to experience the game in a different way. Here are the steps to do so:
-
-
Open the pause menu by pressing the Esc key on your keyboard or tapping the pause icon on your screen.
-
Select the "Settings" option from the menu.
-
Select the "Game" option in the settings menu.
-
Select the "Gamemode" option in the game settings menu.
-
Select the "Survival" option in the game mode menu.
-
Confirm your choice by clicking or tapping the "Done" button.
-
-
-
How to access multiplayer and online features
-
Minecraft Trial Creative Mode does not support multiplayer or online features, such as playing with friends, joining servers, or downloading maps and mods. However, you can still enjoy some of these features by purchasing the full version of Minecraft or by using other methods. Here are some ways to access multiplayer and online features:
-
-
If you have a Windows 10 PC, you can play with up to three friends in split-screen mode. To do so, connect additional controllers or keyboards to your PC, then select the "Split Screen" option from the main menu.
-
If you have an Android device, you can use third-party apps or tools to join servers or download maps and mods. However, these methods are not official or supported by Mojang, and they may not work properly or safely. Use them at your own risk and discretion.
-
If you want to play with friends, join servers, download maps and mods, and access other online features in a safe and official way, you will need to buy the full version of Minecraft. You can do so at any time during or after your trial by clicking or tapping the "Unlock Full Game" button in the main menu or in the game settings.
-
-
Conclusion
-
Minecraft Trial Creative Mode is a free, time-limited version of Minecraft that lets you experience the game in creative mode, where you have unlimited resources and can build whatever you want. It is a great way to try the game for free before buying the full version, but it also has some limitations and challenges. In this article, we showed you how to download Minecraft Trial Creative Mode for free on different devices and how to get the most out of it. We hope you found this article helpful and informative, and have fun playing Minecraft Trial Creative Mode!
-
Frequently Asked Questions
-
Here are some frequently asked questions about Minecraft Trial Creative Mode:
-
-
Q: How long can I play Minecraft Trial Creative Mode for free? A: You can play Minecraft Trial Creative Mode for free for 90 minutes per world. After that, you can still view your world, but you cannot interact with it or make any changes.
-
Q: Can I save or load my worlds in Minecraft Trial Creative Mode? A: No, you cannot save or load your worlds in Minecraft Trial Creative Mode. If you exit the game or switch devices, you will lose your progress.
-
Q: Can I play with friends or join servers in Minecraft Trial Creative Mode? A: No, you cannot play with friends or join servers in Minecraft Trial Creative Mode. You can only play solo, or in split-screen mode on Windows 10.
-
Q: Can I customize my character or change my skin in Minecraft Trial Creative Mode? A: No, you cannot customize your character or change your skin in Minecraft Trial Creative Mode. You are stuck with the default Steve or Alex skin.
-
Q: Can I access all the features and content of the full version in Minecraft Trial Creative Mode? A: No, you cannot access all the features and content of the full version in Minecraft Trial Creative Mode. For example, you cannot use command blocks, structure blocks, data packs, mods, maps, skins, multiplayer, online servers, and more.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Base De Datos Oracle 11g.md b/spaces/Benson/text-generation/Examples/Descargar Base De Datos Oracle 11g.md
deleted file mode 100644
index 112437db78500203138255ef4a9f4a443326bb0a..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Base De Datos Oracle 11g.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
How to Download Oracle Database 11g
-
If you are looking for a reliable, secure, and scalable relational database management system, you may want to consider Oracle Database 11g. This article will guide you through the process of downloading, installing, and upgrading to Oracle Database 11g, and explain some of its features and benefits.
Oracle Database 11g is a release of Oracle Database that came out in 2007 and has been widely used by many organizations and developers. It is a complete, integrated database platform that supports various data types, languages, and applications. It also offers many features that enable adaptability, automation, and security.
-
Features and benefits of Oracle Database 11g
-
Some of the features and benefits of Oracle Database 11g are:
-
-
It supports online application upgrades, which means you can apply patches and changes to your applications without downtime or disruption.
-
It has self-management capabilities, which means it can automatically monitor, tune, and optimize its performance and resource usage.
-
It has high-availability features, which means it can recover from failures and disasters quickly and seamlessly.
-
It has data warehousing features, which means it can store, analyze, and visualize large volumes of data efficiently and effectively.
-
It has security features, which means it can protect your data from unauthorized access through encryption, auditing, and compliance controls.
-
-
Requirements and compatibility of Oracle Database 11g
-
Before downloading Oracle Database 11g, you need to make sure your system meets the minimum requirements and is compatible with the software. Some of the requirements and compatibility factors are:
-
-
-
You need at least 1 GB of RAM and 5 GB of disk space for the software installation.
-
-
You need a supported web browser, such as Internet Explorer, Firefox, Chrome, or Safari.
-
You need a supported development tool, such as SQL Developer, Application Express, Java, PHP, or .NET.
-
-
How to download the Oracle Database 11g software
-
Once you have checked your system's requirements and compatibility, you can proceed to download the Oracle Database 11g software from the official website. Here are the steps to follow:
-
Step 1: Choose the right edition and platform for your needs
-
The first step is to choose the right edition and platform for your needs. There are two main editions of Oracle Database 11g: Enterprise Edition and Express Edition. Enterprise Edition is the full-featured version with all options available. Express Edition is the free, entry-level version with a small footprint and limited features. You can compare the features of both editions here.
-
The next step is to choose the right platform for your operating system. There are different downloads for different platforms, such as Windows x64 (64-bit), Linux x86-64 (64-bit), Solaris (
Step 2: Sign up for a free Oracle account
-
The second step is to sign up for a free Oracle account if you do not already have one. You need an Oracle account to download the software and to access other resources and services. To sign up for an Oracle account, you need to provide some basic information, such as your name, email address, country, and a password. You can sign up for an Oracle account here.
-
Step 3: Accept the license agreement and download the software
-
-
Step 4: Verify the integrity of the downloaded files
-
The fourth step is to verify the integrity of the downloaded files. You need to make sure that the files you downloaded are not corrupted or tampered with. You can do this by comparing the checksum of each file with the checksum provided on the download page. A checksum is a unique code that identifies a file and its contents. You can use a tool such as MD5 or SHA-1 to generate and compare checksums. You can find more information on how to verify checksums here.
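As a rough sketch of this step, a small Python helper can compute a file's MD5 digest so you can compare it against the value published on the download page (the filename in the usage comment is a placeholder, not a real Oracle checksum):

```python
# Sketch: compute a file's MD5 digest for comparison against the
# checksum published on the download page.
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of a file, read in 1 MB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (placeholder filename):
# print(file_md5("win64_11gR2_database_1of2.zip"))
# Compare the printed value with the checksum shown on the download page.
```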
-
How to install the Oracle Database 11g software
-
After you have downloaded and verified the files, you can proceed to install the Oracle Database 11g software on your system. Here are the steps to follow:
-
Step 1: Extract the downloaded files and run the setup program
-
The first step is to extract the downloaded files and run the setup program. You need to unzip or extract the files into a folder on your system. Depending on your platform, you may have one or more files to extract. After extracting the files, run the setup program, which starts the installation process. The setup program may have different names depending on the platform, such as setup.exe, runInstaller, or install.sh.
-
Step 2: Follow the installation wizard and configure the database options
-
The second step is to follow the installation wizard and configure the database options. The installation wizard guides you through a series of steps where you can choose and customize various aspects of your database installation, such as:
-
-
The installation type: Typical, Advanced, or Custom.
-
The destination folder: the location where you want to install the software.
-
The global database name: the name of your database instance.
-
The administrative password: the password for your database administrator account.
-
-
The memory option: automatic or manual memory management.
-
The security option: enable or disable security updates.
-
-
You can find more information on how to install the Oracle Database 11g software here.
-
Step 3: Test the database connection and start using Oracle Database 11g
-
The third step is to test the database connection and start using Oracle Database 11g. After completing the installation wizard, you will see a summary of your installation details and a confirmation message that your database is ready to use. You can test your database connection using a tool such as SQL*Plus or SQL Developer. You can also access your database from your web browser using a tool such as Enterprise Manager or Application Express. You can find more information on how to use Oracle Database 11g here.
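For instance, a quick connectivity smoke test with SQL*Plus might look like the fragment below; the username, password, host, and service name are placeholders — substitute the values you chose during installation:

```
sqlplus system/yourpassword@localhost:1521/orcl

SQL> SELECT 'connected' AS status FROM dual;
SQL> EXIT;
```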
-
How to upgrade to Oracle Database 11g from earlier releases
-
If you already have an earlier release of Oracle Database installed on your system, you may want to upgrade to Oracle Database 11g to take advantage of its new features and improvements. Here are some tips on how to upgrade to Oracle Database 11g from earlier releases:
-
Upgrade methods and considerations
-
There are different methods and considerations for upgrading to Oracle Database 11g depending on your current release, platform, and environment. Some of the common methods are:
-
-
Database Upgrade Assistant (DBUA): a graphical tool that automates and simplifies the upgrade process
Manual upgrade: a step-by-step procedure that requires more user intervention and customization
-
Export/Import: a method that involves exporting data from the source database and importing it into the target database
-
Data Pump: a method that uses a utility to transfer data and metadata between databases
-
-
Some of the considerations are:
-
-
-
The downtime and availability of your database during the upgrade process
-
The backup and recovery strategy for your database before and after the upgrade
-
Testing and validating the functionality and performance of your database after the upgrade
-
-
Upgrade steps and best practices
-
There are some general steps and best practices you should follow when upgrading to Oracle Database 11g from earlier releases. Some of them are:
-
-
Analyze your current database and identify your upgrade requirements and goals
-
Choose the right upgrade method and plan the upgrade process
-
Prepare your system and environment for the upgrade, such as installing the software, checking prerequisites, and creating backups
-
Perform the upgrade using the chosen method and monitor its progress and status
-
Verify the upgrade results and resolve any issues or errors
-
Tune and optimize your database after the upgrade, such as applying patches, configuring parameters, and gathering statistics
-
-
Conclusion
-
In this article, we have learned how to download, install, and upgrade to Oracle Database 11g. We have also discussed some of its features, benefits, requirements, and compatibility. Oracle Database 11g is a powerful and versatile database platform that can help you manage your data effectively and efficiently. If you want to learn more about Oracle Database 11g, you can visit the official website here.
-
Frequently Asked Questions (FAQs)
-
Here are some of the most frequently asked questions (FAQs) about Oracle Database 11g:
-
Q: How can I get a license for Oracle Database 11g?
-
-
Q: How can I update or patch Oracle Database 11g?
-
A: You can update or patch Oracle Database 11g using a tool such as Oracle Universal Installer (OUI) or OPatch. You can also use a service such as My Oracle Support (MOS) or Oracle Enterprise Manager (OEM) to download and apply updates or patches. You can find more information on how to update or patch Oracle Database 11g here.
-
Q: How can I uninstall or remove Oracle Database 11g?
-
A: You can uninstall or remove Oracle Database 11g using a tool such as Oracle Universal Installer (OUI) or the deinstall utility. You can also manually delete the files and folders related to Oracle Database 11g from your system. You can find more information on how to uninstall or remove Oracle Database 11g here.
-
Q: How can I connect to Oracle Database 11g from other applications?
-
A: You can connect to Oracle Database 11g from other applications using a driver or connector that supports your application's language or framework. For example, you can use JDBC for Java, ODBC for C/C++, OCI for C/C++, PHP OCI8 for PHP, cx_Oracle for Python, ruby-oci8 for Ruby, and so on. You can find more information on how to connect to Oracle Database 11g from other applications here.
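As an illustration of the Python case, here is a minimal, hedged sketch using the third-party cx_Oracle driver; the user, password, and DSN are placeholder assumptions (not defaults shipped with the database), and the driver must be installed separately:

```python
# Hedged sketch: a connectivity check via the cx_Oracle driver.
# user/password/dsn below are placeholders -- substitute your own values.
def quick_connection_test(user="system", password="yourpassword",
                          dsn="localhost:1521/orcl"):
    try:
        import cx_Oracle  # third-party driver; may not be installed
    except ImportError:
        return "cx_Oracle is not installed; run: pip install cx_Oracle"
    connection = cx_Oracle.connect(user, password, dsn)
    try:
        cursor = connection.cursor()
        cursor.execute("SELECT 'connected' FROM dual")
        return cursor.fetchone()[0]
    finally:
        connection.close()

print(quick_connection_test())
```

If the driver is missing, the function reports that instead of raising, which makes the sketch safe to run before the client libraries are set up.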
- Demo for Duskfalltest Stable Diffusion model.
-
-Warning: This is trained on my own art, and some of my own stuff - and I don't even know if i've done it right.
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-
- Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space
The \"Face2Doll (U2Net)\" model was trained by Doron Adler
"
-
-examples=[['Example00001.jpg'],['Example00002.jpg'],['Example00003.jpg'],['Example00004.jpg'],['Example00005.jpg'], ['Example00006.jpg']]
-
-gr.Interface(
- inference,
- gr.inputs.Image(type="pil", label="Input"),
- gr.outputs.Image(type="pil", label="Output"),
- title=title,
- description=description,
- article=article,
- examples=examples,
- enable_queue=True,
- allow_flagging=False
- ).launch()
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/PULL_REQUEST_TEMPLATE.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/PULL_REQUEST_TEMPLATE.md
deleted file mode 100644
index d005e2df4f717ea4844a8320981d77d96e425a52..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/.github/PULL_REQUEST_TEMPLATE.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# Before submitting
-
-- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
-- [ ] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/main/CONTRIBUTING.md)?
-- [ ] Did you make sure to update the docs?
-- [ ] Did you write any new necessary tests?
-
-## What does this PR do?
-Fixes # (issue).
-
-## PR review
-Anyone in the community is free to review the PR once the tests have passed.
-If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
-
-## Did you have fun?
-Make sure you had fun coding 🙃
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/huffman/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/huffman/__init__.py
deleted file mode 100644
index 9b61fafadba28f65fe78a28b2099368b83cfcf41..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/huffman/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .huffman_coder import HuffmanCodeBuilder, HuffmanCoder
-from .huffman_mmap_indexed_dataset import (
- HuffmanMMapIndex,
- HuffmanMMapIndexedDataset,
- HuffmanMMapIndexedDatasetBuilder,
- vocab_file_path,
-)
-
-__all__ = [
- "HuffmanCoder",
- "HuffmanCodeBuilder",
- "HuffmanMMapIndexedDatasetBuilder",
- "HuffmanMMapIndexedDataset",
- "HuffmanMMapIndex",
- "vocab_file_path",
-]
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py
deleted file mode 100644
index d0e7e14b7e72b1151f7d7f19094430bbab64f8f0..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-from typing import Optional, List
-from omegaconf import II
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
-
-
-@dataclass
-class FixedLRScheduleConfig(FairseqDataclass):
- force_anneal: Optional[int] = field(
- default=None,
- metadata={"help": "force annealing at specified epoch"},
- )
- lr_shrink: float = field(
- default=0.1,
- metadata={"help": "shrink factor for annealing, lr_new = (lr * lr_shrink)"},
- )
- warmup_updates: int = field(
- default=0,
- metadata={"help": "warmup the learning rate linearly for the first N updates"},
- )
- lr: List[float] = II("optimization.lr")
-
-
-@register_lr_scheduler("fixed", dataclass=FixedLRScheduleConfig)
-class FixedLRSchedule(FairseqLRScheduler):
- """Decay the LR on a fixed schedule."""
-
- def __init__(self, cfg: FixedLRScheduleConfig, optimizer):
- super().__init__(cfg, optimizer)
-
- self.lr = cfg.lr[0]
- if cfg.warmup_updates > 0:
- self.warmup_factor = 1.0 / cfg.warmup_updates
- else:
- self.warmup_factor = 1
-
- def state_dict(self):
- return {"lr": self.lr}
-
- def load_state_dict(self, state_dict):
- if "lr" in state_dict:
- self.lr = state_dict["lr"]
-
- def get_next_lr(self, epoch):
- lrs = self.cfg.lr
- if self.cfg.force_anneal is None or epoch < self.cfg.force_anneal:
- # use fixed LR schedule
- next_lr = lrs[min(epoch - 1, len(lrs) - 1)]
- else:
- # anneal based on lr_shrink
- next_lr = lrs[-1] * self.cfg.lr_shrink ** (
- epoch + 1 - self.cfg.force_anneal
- )
- return next_lr
-
- def step_begin_epoch(self, epoch):
- """Update the learning rate at the beginning of the given epoch."""
- self.lr = self.get_next_lr(epoch)
- self.optimizer.set_lr(self.warmup_factor * self.lr)
- return self.optimizer.get_lr()
-
- def step_update(self, num_updates):
- """Update the learning rate after each update."""
- if self.cfg.warmup_updates > 0 and num_updates < self.cfg.warmup_updates:
- self.warmup_factor = (num_updates + 1) / float(self.cfg.warmup_updates)
- self.optimizer.set_lr(self.warmup_factor * self.lr)
- else:
- self.optimizer.set_lr(self.lr)
- return self.optimizer.get_lr()
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_plasma_utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_plasma_utils.py
deleted file mode 100644
index e6344c2a5a73fcb2fb81376e7bd43470963b3674..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/test_plasma_utils.py
+++ /dev/null
@@ -1,126 +0,0 @@
-import contextlib
-import unittest
-import tempfile
-from io import StringIO
-
-import numpy as np
-
-from tests.utils import create_dummy_data, preprocess_lm_data, train_language_model
-
-try:
- from pyarrow import plasma
- from fairseq.data.plasma_utils import PlasmaView, PlasmaStore
-
- PYARROW_AVAILABLE = True
-except ImportError:
- PYARROW_AVAILABLE = False
-
-dummy_path = "dummy"
-
-
-@unittest.skipUnless(PYARROW_AVAILABLE, "")
-class TestPlasmaView(unittest.TestCase):
- def setUp(self) -> None:
- self.tmp_file = tempfile.NamedTemporaryFile() # noqa: P201
- self.path = self.tmp_file.name
- self.server = PlasmaStore.start(path=self.path, nbytes=10000)
- self.client = plasma.connect(self.path, num_retries=10)
-
- def tearDown(self) -> None:
- self.client.disconnect()
- self.tmp_file.close()
- self.server.kill()
-
- def test_two_servers_do_not_share_object_id_space(self):
- data_server_1 = np.array([0, 1])
- data_server_2 = np.array([2, 3])
- server_2_path = self.path
- with tempfile.NamedTemporaryFile() as server_1_path:
- server = PlasmaStore.start(path=server_1_path.name, nbytes=10000)
- arr1 = PlasmaView(
- data_server_1, dummy_path, 1, plasma_path=server_1_path.name
- )
- assert len(arr1.client.list()) == 1
- assert (arr1.array == data_server_1).all()
- arr2 = PlasmaView(data_server_2, dummy_path, 1, plasma_path=server_2_path)
- assert (arr2.array == data_server_2).all()
- assert (arr1.array == data_server_1).all()
- server.kill()
-
- def test_hash_collision(self):
- data_server_1 = np.array([0, 1])
- data_server_2 = np.array([2, 3])
- arr1 = PlasmaView(data_server_1, dummy_path, 1, plasma_path=self.path)
- assert len(arr1.client.list()) == 1
- arr2 = PlasmaView(data_server_2, dummy_path, 1, plasma_path=self.path)
- assert len(arr1.client.list()) == 1
- assert len(arr2.client.list()) == 1
- assert (arr2.array == data_server_1).all()
- # New hash key based on tuples
- arr3 = PlasmaView(
- data_server_2, dummy_path, (1, 12312312312, None), plasma_path=self.path
- )
- assert (
- len(arr2.client.list()) == 2
- ), "No new object was created by using a novel hash key"
- assert (
- arr3.object_id in arr2.client.list()
- ), "No new object was created by using a novel hash key"
- assert (
- arr3.object_id in arr3.client.list()
- ), "No new object was created by using a novel hash key"
- del arr3, arr2, arr1
-
- @staticmethod
- def _assert_view_equal(pv1, pv2):
- np.testing.assert_array_equal(pv1.array, pv2.array)
-
- def test_putting_same_array_twice(self):
- data = np.array([4, 4, 4])
- arr1 = PlasmaView(data, dummy_path, 1, plasma_path=self.path)
- assert len(self.client.list()) == 1
- arr1b = PlasmaView(
- data, dummy_path, 1, plasma_path=self.path
- ) # should not change contents of store
- arr1c = PlasmaView(
- None, dummy_path, 1, plasma_path=self.path
- ) # should not change contents of store
-
- assert len(self.client.list()) == 1
- self._assert_view_equal(arr1, arr1b)
- self._assert_view_equal(arr1, arr1c)
- PlasmaView(
- data, dummy_path, 2, plasma_path=self.path
- ) # new object id, adds new entry
- assert len(self.client.list()) == 2
-
- new_client = plasma.connect(self.path)
- assert len(new_client.list()) == 2 # new client can access same objects
- assert isinstance(arr1.object_id, plasma.ObjectID)
- del arr1b
- del arr1c
-
- def test_plasma_store_full_raises(self):
- with tempfile.NamedTemporaryFile() as new_path:
- server = PlasmaStore.start(path=new_path.name, nbytes=10000)
- with self.assertRaises(plasma.PlasmaStoreFull):
- # 2000 floats is more than 2000 bytes
- PlasmaView(
- np.random.rand(10000, 1), dummy_path, 1, plasma_path=new_path.name
- )
- server.kill()
-
- def test_object_id_overflow(self):
- PlasmaView.get_object_id("", 2 ** 21)
-
- def test_training_lm_plasma(self):
- with contextlib.redirect_stdout(StringIO()):
- with tempfile.TemporaryDirectory("test_transformer_lm") as data_dir:
- create_dummy_data(data_dir)
- preprocess_lm_data(data_dir)
- train_language_model(
- data_dir,
- "transformer_lm",
- ["--use-plasma-view", "--plasma-path", self.path],
- run_validation=True,
- )
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py
deleted file mode 100644
index 5ee9c1be4a59ad3d072412827ab4e9b62dc7434e..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py
+++ /dev/null
@@ -1,143 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-from typing import List
-
-import torch.optim.lr_scheduler
-from omegaconf import II
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
-
-
-@dataclass
-class ReduceLROnPlateauLRScheduleConfig(FairseqDataclass):
- lr_shrink: float = field(
- default=0.1, metadata={"help": "shrink factor for annealing"}
- )
- lr_threshold: float = field(
- default=1e-4,
- metadata={
- "help": (
- "threshold for measuring the new optimum, to only focus on "
- "significant changes"
- )
- },
- )
- lr_patience: int = field(
- default=0,
- metadata={
- "help": (
- "number of epochs with no improvement after which learning rate will "
- "be reduced"
- )
- },
- )
- warmup_updates: int = field(
- default=0,
- metadata={"help": "warmup the learning rate linearly for the first N updates"},
- )
- warmup_init_lr: float = field(
- default=-1,
- metadata={
- "help": "initial learning rate during warmup phase; default is cfg.lr"
- },
- )
- lr: List[float] = II("optimization.lr")
- maximize_best_checkpoint_metric: bool = II(
- "checkpoint.maximize_best_checkpoint_metric"
- )
-
-
-@register_lr_scheduler(
- "reduce_lr_on_plateau", dataclass=ReduceLROnPlateauLRScheduleConfig
-)
-class ReduceLROnPlateauLRSchedule(FairseqLRScheduler):
- """
- Decay the LR by a factor every time the validation loss plateaus.
- Also comes with optional warmup phase, where we linearly increase
- the learning rate from some initial learning rate
- (``--warmup-init-lr``) until the configured learning rate
- (``--lr``). Thereafter the lr is adjusted according to original
- reduce_on_plateau scheme.
-
- During warmup::
-
- lrs = torch.linspace(
- cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates
- )
- lr = lrs[update_num]
- """
-
- def __init__(self, cfg: ReduceLROnPlateauLRScheduleConfig, optimizer):
- super().__init__(cfg, optimizer)
- if len(cfg.lr) > 1:
- raise ValueError(
- "Cannot use a fixed learning rate schedule with reduce_lr_on_plateau."
- " Consider --lr-scheduler=fixed instead."
- )
- self.lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
- self.optimizer.optimizer,
- patience=cfg.lr_patience,
- factor=cfg.lr_shrink,
- mode="max" if cfg.maximize_best_checkpoint_metric else "min",
- threshold=cfg.lr_threshold,
- )
- warmup_end_lr = cfg.lr[0]
- # if no warm up, sets initial lr to be cfg.lr[0]
- if cfg.warmup_init_lr < 0:
- cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr
-
- # linearly warmup for the first cfg.warmup_updates
- if cfg.warmup_updates > 0:
- self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates
-
- # this flag is either set from arg when no warm up, or set by
- # step_update() when warmup finishes
- self.warmup_end = True if cfg.warmup_updates <= 0 else False
-
- # initial learning rate
- # this self.lr is used only during init and/or warm up period
- self.lr = warmup_end_lr if self.warmup_end else cfg.warmup_init_lr
- self.optimizer.set_lr(self.lr)
-
- def state_dict(self):
- """Return the LR scheduler state dict."""
- return {
- "best": self.lr_scheduler.best,
- "last_epoch": self.lr_scheduler.last_epoch,
- }
-
- def load_state_dict(self, state_dict):
- """Load an LR scheduler state dict."""
- self.lr_scheduler.best = state_dict["best"]
- if "last_epoch" in state_dict:
- self.lr_scheduler.last_epoch = state_dict["last_epoch"]
-
- def step(self, epoch, val_loss=None):
- """
- Update the learning rate at the end of the given epoch if warmup
- finishes otherwise no update of lr on epoch boundaries
- """
- if val_loss is not None and self.warmup_end is True:
- self.lr_scheduler.step(val_loss)
- else:
- self.lr_scheduler.last_epoch = epoch
- return self.optimizer.get_lr()
-
- def step_update(self, num_updates):
- """
- Update the learning rate after each update."""
- # if there is warmup
- if self.cfg.warmup_updates > 0:
- if num_updates <= self.cfg.warmup_updates:
- self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step
- self.optimizer.set_lr(self.lr)
- else:
- if self.warmup_end is False:
- self.warmup_end = True
- # else do nothing
- return self.optimizer.get_lr()
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/README.glue.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/README.glue.md
deleted file mode 100644
index 4f596d55af99fba3cdf58b1d5ff3d8f8dbf4383d..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/README.glue.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Finetuning RoBERTa on GLUE tasks
-
-### 1) Download the data from GLUE website (https://gluebenchmark.com/tasks) using following commands:
-```bash
-wget https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py
-python download_glue_data.py --data_dir glue_data --tasks all
-```
-
-### 2) Preprocess GLUE task data:
-```bash
-./examples/roberta/preprocess_GLUE_tasks.sh glue_data
-```
-`glue_task_name` is one of the following:
-`{ALL, QQP, MNLI, QNLI, MRPC, RTE, STS-B, SST-2, CoLA}`
-Use `ALL` for preprocessing all the glue tasks.
-
-### 3) Fine-tuning on GLUE task:
-Example fine-tuning cmd for `RTE` task
-```bash
-ROBERTA_PATH=/path/to/roberta/model.pt
-
-CUDA_VISIBLE_DEVICES=0 fairseq-hydra-train --config-dir examples/roberta/config/finetuning --config-name rte \
-task.data=RTE-bin checkpoint.restore_file=$ROBERTA_PATH
-```
-
-There are additional config files for each of the GLUE tasks in the examples/roberta/config/finetuning directory.
-
-**Note:**
-
-a) Above cmd-args and hyperparams are tested on one Nvidia `V100` GPU with `32gb` of memory for each task. Depending on the GPU memory resources available to you, you can increase `--update-freq` and reduce `--batch-size`.
-
-b) All of the settings above are suggested settings based on our hyperparam search within a fixed search space (for careful comparison across models). You might be able to find better metrics with a wider hyperparam search.
-
-### Inference on GLUE task
-After training the model as described in the previous step, you can perform inference with the checkpoints in `checkpoints/` using the following Python snippet:
-
-```python
-from fairseq.models.roberta import RobertaModel
-
-roberta = RobertaModel.from_pretrained(
- 'checkpoints/',
- checkpoint_file='checkpoint_best.pt',
- data_name_or_path='RTE-bin'
-)
-
-label_fn = lambda label: roberta.task.label_dictionary.string(
- [label + roberta.task.label_dictionary.nspecial]
-)
-ncorrect, nsamples = 0, 0
-roberta.cuda()
-roberta.eval()
-with open('glue_data/RTE/dev.tsv') as fin:
- fin.readline()
- for index, line in enumerate(fin):
- tokens = line.strip().split('\t')
- sent1, sent2, target = tokens[1], tokens[2], tokens[3]
- tokens = roberta.encode(sent1, sent2)
- prediction = roberta.predict('sentence_classification_head', tokens).argmax().item()
- prediction_label = label_fn(prediction)
- ncorrect += int(prediction_label == target)
- nsamples += 1
-print('| Accuracy: ', float(ncorrect)/float(nsamples))
-
-```
diff --git a/spaces/ORI-Muchim/MinamiTTS/attentions.py b/spaces/ORI-Muchim/MinamiTTS/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/MinamiTTS/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-    # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
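The index gymnastics in `_relative_position_to_absolute_position` are easy to get wrong; here is a NumPy sketch of the same pad-flatten-reshape trick (single head, no batch), checked against a direct index computation:

```python
import numpy as np

def rel_to_abs(x):
    """[l, 2*l-1] relative scores -> [l, l] absolute scores (single head)."""
    l = x.shape[0]
    x = np.pad(x, [[0, 0], [0, 1]])       # append one zero column per row
    x_flat = x.reshape(l * 2 * l)
    x_flat = np.pad(x_flat, [0, l - 1])   # now exactly (l+1)*(2*l-1) elements
    return x_flat.reshape(l + 1, 2 * l - 1)[:l, l - 1:]

l = 4
x = np.arange(l * (2 * l - 1), dtype=float).reshape(l, 2 * l - 1)
out = rel_to_abs(x)
# Entry out[i, j] should be the score for relative offset (j - i),
# i.e. x[i, j - i + l - 1].
ref = np.array([[x[i, j - i + l - 1] for j in range(l)] for i in range(l)])
assert np.array_equal(out, ref)
```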
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/scale.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/scale.py
deleted file mode 100644
index c905fffcc8bf998d18d94f927591963c428025e2..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/scale.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-
-
-class Scale(nn.Module):
- """A learnable scale parameter.
-
- This layer scales the input by a learnable factor. It multiplies a
- learnable scale parameter of shape (1,) with input of any shape.
-
- Args:
- scale (float): Initial value of scale factor. Default: 1.0
- """
-
- def __init__(self, scale=1.0):
- super(Scale, self).__init__()
- self.scale = nn.Parameter(torch.tensor(scale, dtype=torch.float))
-
- def forward(self, x):
- return x * self.scale
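To illustrate what a single learnable scale can do, here is a dependency-light NumPy analogue trained with a manual gradient step (the data, target, and learning rate are invented for the demo):

```python
import numpy as np

# A learnable scalar `s` multiplying input of any shape, like Scale above.
s = 1.0
x = np.array([1.0, 2.0, 3.0])
target = 2.0 * x                        # the layer should learn s = 2

for _ in range(200):                    # plain gradient descent on MSE
    y = s * x
    grad_s = np.mean(2.0 * (y - target) * x)
    s -= 0.1 * grad_s

print(round(s, 3))  # converges to 2.0
```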
diff --git a/spaces/PSLD/PSLD/stable-diffusion/scripts/tests/test_watermark.py b/spaces/PSLD/PSLD/stable-diffusion/scripts/tests/test_watermark.py
deleted file mode 100644
index f93f8a6e70763c0e284157bc8225827520b2f5ef..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/stable-diffusion/scripts/tests/test_watermark.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import cv2
-import fire
-from imwatermark import WatermarkDecoder
-
-
-def testit(img_path):
- bgr = cv2.imread(img_path)
- decoder = WatermarkDecoder('bytes', 136)
- watermark = decoder.decode(bgr, 'dwtDct')
- try:
- dec = watermark.decode('utf-8')
-        except Exception:
- dec = "null"
- print(dec)
-
-
-if __name__ == "__main__":
- fire.Fire(testit)
\ No newline at end of file
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/expect.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/expect.go
deleted file mode 100644
index 4f1abafa292eab9c4dbe5fd16a2ed565f9a034a6..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/expect.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/null.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/null.go
deleted file mode 100644
index a2a40512d22471326d52f8b5d0d36f3ebed2d214..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/null.go and /dev/null differ
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/__init__.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/__init__.py
deleted file mode 100644
index 27982cbe68c6173a911e700273f25973acbf04bd..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-from .distributed import DistributedSampler
-from .grouped_batch_sampler import GroupedBatchSampler
-from .iteration_based_batch_sampler import IterationBasedBatchSampler
-
-__all__ = ["DistributedSampler", "GroupedBatchSampler", "IterationBasedBatchSampler"]
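Of these, `IterationBasedBatchSampler` is the least standard: it re-yields an epoch-based batch sampler until a fixed iteration budget is spent. A minimal pure-Python sketch of that idea (the real class also supports resuming from `start_iter`, which is omitted here):

```python
def iteration_based(batch_sampler, num_iterations):
    """Cycle an epoch-based batch sampler until num_iterations batches are yielded."""
    iteration = 0
    while iteration < num_iterations:
        for batch in batch_sampler:
            iteration += 1
            if iteration > num_iterations:
                return
            yield batch

epoch_batches = [[0, 1], [2, 3], [4, 5]]   # one "epoch" of index batches
batches = list(iteration_based(epoch_batches, 7))
print(len(batches))  # 7: the 3-batch epoch is cycled and truncated
```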
diff --git a/spaces/Pranjal2041/SemSup-XC/main2.py b/spaces/Pranjal2041/SemSup-XC/main2.py
deleted file mode 100644
index 33c91895cf97e27b60c8f0227e7f7eb66d6d4c4e..0000000000000000000000000000000000000000
--- a/spaces/Pranjal2041/SemSup-XC/main2.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-import shap  # required by interpretation_function below
-
-sentiment_classifier = pipeline("text-classification", return_all_scores=True)
-
-def classifier(text):
- pred = sentiment_classifier(text)
- return {p["label"]: p["score"] for p in pred[0]}
-
-
-def interpretation_function(text):
- explainer = shap.Explainer(sentiment_classifier)
- shap_values = explainer([text])
-
- # Dimensions are (batch size, text size, number of classes)
- # Since we care about positive sentiment, use index 1
- scores = list(zip(shap_values.data[0], shap_values.values[0, :, 1]))
- # Scores contains (word, score) pairs
-
-
- # Format expected by gr.components.Interpretation
- return {"original": text, "interpretation": scores}
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- input_text = gr.Textbox(label="Input Text")
- with gr.Row():
- classify = gr.Button("Classify Sentiment")
- interpret = gr.Button("Interpret")
- with gr.Column():
- label = gr.Label(label="Predicted Sentiment")
- with gr.Column():
- interpretation = gr.components.Interpretation(input_text)
- classify.click(classifier, input_text, label)
- interpret.click(interpretation_function, input_text, interpretation)
-
-demo.launch(share = True)
\ No newline at end of file
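The dict returned by `interpretation_function` is just the payload `gr.components.Interpretation` expects: the original text plus `(token, score)` pairs. A dependency-free sketch with an invented scoring rule shows the shape without SHAP:

```python
def fake_interpretation(text):
    """Score each word by a toy rule: +1 for 'good', -1 for 'bad', else 0."""
    scores = []
    for word in text.split():
        score = {"good": 1.0, "bad": -1.0}.get(word.lower(), 0.0)
        scores.append((word, score))
    return {"original": text, "interpretation": scores}

out = fake_interpretation("a good movie")
print(out["interpretation"])  # [('a', 0.0), ('good', 1.0), ('movie', 0.0)]
```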
diff --git a/spaces/ProteinDesignLab/protpardelle/core/data.py b/spaces/ProteinDesignLab/protpardelle/core/data.py
deleted file mode 100644
index 5dd5bd4051f4a34a85300ccd95fe8caef3b98205..0000000000000000000000000000000000000000
--- a/spaces/ProteinDesignLab/protpardelle/core/data.py
+++ /dev/null
@@ -1,271 +0,0 @@
-"""
-https://github.com/ProteinDesignLab/protpardelle
-License: MIT
-Author: Alex Chu
-
-Dataloader from PDB files.
-"""
-import copy
-import pickle
-import json
-import numpy as np
-import torch
-from einops import rearrange, repeat  # used below for mask broadcasting and recentering
-from torch.utils import data
-
-from core import utils
-from core import protein
-from core import residue_constants
-
-
-FEATURES_1D = (
- "coords_in",
- "torsions_in",
- "b_factors",
- "atom_positions",
- "aatype",
- "atom_mask",
- "residue_index",
- "chain_index",
-)
-FEATURES_FLOAT = (
- "coords_in",
- "torsions_in",
- "b_factors",
- "atom_positions",
- "atom_mask",
- "seq_mask",
-)
-FEATURES_LONG = ("aatype", "residue_index", "chain_index", "orig_size")
-
-
-def make_fixed_size_1d(data, fixed_size=128):
- data_len = data.shape[0]
- if data_len >= fixed_size:
- extra_len = data_len - fixed_size
- start_idx = np.random.choice(np.arange(extra_len + 1))
- new_data = data[start_idx : (start_idx + fixed_size)]
- mask = torch.ones(fixed_size)
-    else:  # data_len < fixed_size: pad and mask
- pad_size = fixed_size - data_len
- extra_shape = data.shape[1:]
- new_data = torch.cat([data, torch.zeros(pad_size, *extra_shape)], 0)
- mask = torch.cat([torch.ones(data_len), torch.zeros(pad_size)], 0)
- return new_data, mask
-
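The crop-or-pad behavior of `make_fixed_size_1d` can be mirrored in NumPy; this sketch makes the crop start explicit instead of random so both branches are easy to verify:

```python
import numpy as np

def fixed_size_1d(data, fixed_size, start_idx=0):
    """Crop (from start_idx) or zero-pad `data` along axis 0 to fixed_size."""
    n = data.shape[0]
    if n >= fixed_size:
        out = data[start_idx:start_idx + fixed_size]
        mask = np.ones(fixed_size)
    else:
        pad = fixed_size - n
        out = np.concatenate([data, np.zeros((pad,) + data.shape[1:])], 0)
        mask = np.concatenate([np.ones(n), np.zeros(pad)], 0)
    return out, mask

cropped, m1 = fixed_size_1d(np.arange(10.0), 4, start_idx=3)  # crop branch
padded, m2 = fixed_size_1d(np.arange(2.0), 4)                 # pad branch
print(cropped)  # [3. 4. 5. 6.]
print(m2)       # [1. 1. 0. 0.]
```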
-
-def apply_random_se3(coords_in, atom_mask=None, translation_scale=1.0):
- # unbatched. center on the mean of CA coords
- coords_mean = coords_in[:, 1:2].mean(-3, keepdim=True)
- coords_in -= coords_mean
- random_rot, _ = torch.linalg.qr(torch.randn(3, 3))
- coords_in = coords_in @ random_rot
- random_trans = torch.randn_like(coords_mean) * translation_scale
- coords_in += random_trans
- if atom_mask is not None:
- coords_in = coords_in * atom_mask[..., None]
- return coords_in
-
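One subtlety in `apply_random_se3`: QR decomposition of a Gaussian matrix yields an orthogonal matrix, but its determinant may be -1 (a reflection rather than a rotation). If a proper rotation is required, the sign can be fixed as in this NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# q is orthogonal: q @ q.T == I up to float error ...
assert np.allclose(q @ q.T, np.eye(3), atol=1e-10)

# ... but det(q) may be -1; flipping one column gives a proper rotation.
if np.linalg.det(q) < 0:
    q[:, 0] *= -1
print(np.linalg.det(q))  # close to 1.0
```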
-
-def get_masked_coords_array(coords, atom_mask):
- ma_mask = repeat(1 - atom_mask[..., None].cpu().numpy(), "... 1 -> ... 3")
- return np.ma.array(coords.cpu().numpy(), mask=ma_mask)
-
-
-def make_crop_cond_mask_and_recenter_coords(
- atom_mask,
- atom_coords,
- contiguous_prob=0.05,
- discontiguous_prob=0.9,
- sidechain_only_prob=0.8,
- max_span_len=10,
- max_discontiguous_res=8,
- dist_threshold=8.0,
- recenter_coords=True,
-):
- b, n, a = atom_mask.shape
- device = atom_mask.device
- seq_mask = atom_mask[..., 1]
- n_res = seq_mask.sum(-1)
- masks = []
-
- for i, nr in enumerate(n_res):
- nr = nr.int().item()
- mask = torch.zeros((n, a), device=device)
- conditioning_type = torch.distributions.Categorical(
- torch.tensor(
- [
- contiguous_prob,
- discontiguous_prob,
- 1.0 - contiguous_prob - discontiguous_prob,
- ]
- )
- ).sample()
- conditioning_type = ["contiguous", "discontiguous", "none"][conditioning_type]
-
- if conditioning_type == "contiguous":
- span_len = torch.randint(
- 1, min(max_span_len, nr), (1,), device=device
- ).item()
- span_start = torch.randint(0, nr - span_len, (1,), device=device)
- mask[span_start : span_start + span_len, :] = 1
- elif conditioning_type == "discontiguous":
- # Extract CB atoms coordinates for the i-th example
- cb_atoms = atom_coords[i, :, 3]
- # Pairwise distances between CB atoms
- cb_distances = torch.cdist(cb_atoms, cb_atoms)
- close_mask = (
- cb_distances <= dist_threshold
- ) # Mask for selecting close CB atoms
-
- random_residue = torch.randint(0, nr, (1,), device=device).squeeze()
- cb_dist_i = cb_distances[random_residue] + 1e3 * (1 - seq_mask[i])
- close_mask = cb_dist_i <= dist_threshold
- n_neighbors = close_mask.sum().int()
-
- # pick how many neighbors (up to 10)
- n_sele = torch.randint(
- 2,
- n_neighbors.clamp(min=3, max=max_discontiguous_res + 1),
- (1,),
- device=device,
- )
-
- # Select the indices of CB atoms that are close together
- idxs = torch.arange(n, device=device)[close_mask.bool()]
- idxs = idxs[torch.randperm(len(idxs))[:n_sele]]
-
- if len(idxs) > 0:
- mask[idxs] = 1
-
- if np.random.uniform() < sidechain_only_prob:
- mask[:, :5] = 0
-
- masks.append(mask)
-
- crop_cond_mask = torch.stack(masks)
- crop_cond_mask = crop_cond_mask * atom_mask
- if recenter_coords:
- motif_masked_array = get_masked_coords_array(atom_coords, crop_cond_mask)
- cond_coords_center = motif_masked_array.mean((1, 2))
- motif_mask = torch.Tensor(1 - cond_coords_center.mask).to(crop_cond_mask)
- means = torch.Tensor(cond_coords_center.data).to(atom_coords) * motif_mask
- coords_out = atom_coords - rearrange(means, "b c -> b 1 1 c")
- else:
- coords_out = atom_coords
- return coords_out, crop_cond_mask
-
-
-class Dataset(data.Dataset):
- """Loads and processes PDBs into tensors."""
-
- def __init__(
- self,
- pdb_path,
- fixed_size,
- mode="train",
- overfit=-1,
- short_epoch=False,
- se3_data_augment=True,
- ):
- self.pdb_path = pdb_path
- self.fixed_size = fixed_size
- self.mode = mode
- self.overfit = overfit
- self.short_epoch = short_epoch
- self.se3_data_augment = se3_data_augment
-
- with open(f"{self.pdb_path}/{mode}_pdb_keys.list") as f:
- self.pdb_keys = np.array(f.read().split("\n")[:-1])
-
- if overfit > 0:
- n_data = len(self.pdb_keys)
- self.pdb_keys = np.random.choice(
- self.pdb_keys, min(n_data, overfit), replace=False
- ).repeat(n_data // overfit)
-
- def __len__(self):
- if self.short_epoch:
- return min(len(self.pdb_keys), 256)
- else:
- return len(self.pdb_keys)
-
- def __getitem__(self, idx):
- pdb_key = self.pdb_keys[idx]
- data = self.get_item(pdb_key)
- # For now, replace dataloading errors with a random pdb. 10 tries
- for _ in range(10):
- if data is not None:
- return data
- pdb_key = self.pdb_keys[np.random.randint(len(self.pdb_keys))]
- data = self.get_item(pdb_key)
- raise Exception("Failed to load data example after 10 tries.")
-
- def get_item(self, pdb_key):
- example = {}
-
- if self.pdb_path.endswith("cath_s40_dataset"): # CATH pdbs
- data_file = f"{self.pdb_path}/dompdb/{pdb_key}"
- elif self.pdb_path.endswith("ingraham_cath_dataset"): # ingraham splits
- data_file = f"{self.pdb_path}/pdb_store/{pdb_key}"
- else:
- raise Exception("Invalid pdb path.")
-
- try:
- example = utils.load_feats_from_pdb(data_file)
- coords_in = example["atom_positions"]
- except FileNotFoundError:
- raise Exception(f"File {pdb_key} not found. Check if dataset is corrupted?")
- except RuntimeError:
- return None
-
- # Apply data augmentation
- if self.se3_data_augment:
- coords_in = apply_random_se3(coords_in, atom_mask=example["atom_mask"])
-
- orig_size = coords_in.shape[0]
- example["coords_in"] = coords_in
- example["orig_size"] = torch.ones(1) * orig_size
-
- fixed_size_example = {}
- seq_mask = None
- for k, v in example.items():
- if k in FEATURES_1D:
- fixed_size_example[k], seq_mask = make_fixed_size_1d(
- v, fixed_size=self.fixed_size
- )
- else:
- fixed_size_example[k] = v
- if seq_mask is not None:
- fixed_size_example["seq_mask"] = seq_mask
-
- example_out = {}
- for k, v in fixed_size_example.items():
- if k in FEATURES_FLOAT:
- example_out[k] = v.float()
- elif k in FEATURES_LONG:
- example_out[k] = v.long()
-
- return example_out
-
- def collate(self, example_list):
- out = {}
- for ex in example_list:
- for k, v in ex.items():
- out.setdefault(k, []).append(v)
- return {k: torch.stack(v) for k, v in out.items()}
-
- def sample(self, n=1, return_data=True, return_keys=False):
- keys = self.pdb_keys[torch.randperm(self.__len__())[:n].long()]
-
- if return_keys and not return_data:
- return keys
-
- if n == 1:
-            data = self.collate([self.get_item(keys[0])])
- else:
- data = self.collate([self.get_item(key) for key in keys])
-
- if return_data and return_keys:
- return data, keys
- if return_data and not return_keys:
- return data
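The `collate` above just groups fields and stacks them; a NumPy equivalent (assuming, as the fixed-size pipeline guarantees, that all examples share the same keys and shapes) is:

```python
import numpy as np

def collate(example_list):
    """Stack a list of per-example dicts into one dict of batched arrays."""
    out = {}
    for ex in example_list:
        for k, v in ex.items():
            out.setdefault(k, []).append(v)
    return {k: np.stack(v) for k, v in out.items()}

batch = collate([
    {"coords": np.zeros((4, 3)), "mask": np.ones(4)},
    {"coords": np.ones((4, 3)), "mask": np.ones(4)},
])
print(batch["coords"].shape)  # (2, 4, 3)
```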
diff --git a/spaces/Purple11/Grounded-Diffusion/src/CLIP/model-card.md b/spaces/Purple11/Grounded-Diffusion/src/CLIP/model-card.md
deleted file mode 100644
index 6db1ca46f0706d2276e0c95578f4aa4dc0136e58..0000000000000000000000000000000000000000
--- a/spaces/Purple11/Grounded-Diffusion/src/CLIP/model-card.md
+++ /dev/null
@@ -1,120 +0,0 @@
-# Model Card: CLIP
-
-Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/pdf/1912.10389.pdf), we’re providing some accompanying information about the multimodal model.
-
-## Model Details
-
-The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
-
-### Model Date
-
-January 2021
-
-### Model Type
-
-The base model uses a ResNet50 with several modifications as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer.
-
-### Model Versions
-
-Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50.
-
-As part of the staged release process, we have also released the RN101 model, as well as RN50x4, a RN50 scaled up 4x according to the [EfficientNet](https://arxiv.org/abs/1905.11946) scaling rule. In July 2021, we additionally released the RN50x16 and ViT-B/16 models, and in January 2022, the RN50x64 and ViT-L/14 models were released. Lastly, the ViT-L/14@336px model was released in April 2022.
-
-Please see the paper linked below for further details about their specification.
-
-### Documents
-
-- [Blog Post](https://openai.com/blog/clip/)
-- [CLIP Paper](https://arxiv.org/abs/2103.00020)
-
-
-
-## Model Use
-
-### Intended Use
-
-The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
-
-#### Primary intended uses
-
-The primary intended users of these models are AI researchers.
-
-We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
-
-### Out-of-Scope Use Cases
-
-**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case potentially harmful at this time.
-
-Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
-
-Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
-
-
-
-## Data
-
-The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
-
-### Data Mission Statement
-
-Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
-
-
-
-## Performance and Limitations
-
-### Performance
-
-We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets, ranging from OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
-
-- Food101
-- CIFAR10
-- CIFAR100
-- Birdsnap
-- SUN397
-- Stanford Cars
-- FGVC Aircraft
-- VOC2007
-- DTD
-- Oxford-IIIT Pet dataset
-- Caltech101
-- Flowers102
-- MNIST
-- SVHN
-- IIIT5K
-- Hateful Memes
-- SST-2
-- UCF101
-- Kinetics700
-- Country211
-- CLEVR Counting
-- KITTI Distance
-- STL-10
-- RareAct
-- Flickr30
-- MSCOCO
-- ImageNet
-- ImageNet-A
-- ImageNet-R
-- ImageNet Sketch
-- ObjectNet (ImageNet Overlap)
-- Youtube-BB
-- ImageNet-Vid
-
-## Limitations
-
-CLIP and our analysis of it have a number of limitations. CLIP currently struggles with certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regard to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.
-
-### Bias and Fairness
-
-We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
-
-We also tested the performance of CLIP on gender, race, and age classification using the Fairface dataset (defaulting to the race categories as they are constructed in Fairface) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification, with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. We use these evaluations of gender, race, and age classification, as well as denigration harms, solely to assess the model's performance across people and to surface potential risks, not to signal endorsement of or enthusiasm for such tasks.
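The dependence on class design can be made concrete with a toy sketch of zero-shot classification: an image embedding is matched to the closest class-text embedding, so adding or removing candidate classes changes what the same image is labeled. The embeddings here are random stand-ins, not real CLIP outputs, and the class names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def normalize(v):
    # CLIP-style zero-shot classification compares L2-normalized embeddings.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy embeddings standing in for the image and text encoders.
image = normalize(rng.normal(size=8))
class_texts = {name: normalize(rng.normal(size=8))
               for name in ["cat", "dog", "animal"]}

def zero_shot(image_emb, class_embs):
    """Predict the class whose text embedding has the highest cosine similarity."""
    names = list(class_embs)
    sims = np.array([image_emb @ class_embs[n] for n in names])
    return names[int(np.argmax(sims))], sims

pred_full, _ = zero_shot(image, class_texts)
# Removing one candidate class changes the prediction for the very same image:
reduced = {k: v for k, v in class_texts.items() if k != pred_full}
pred_reduced, _ = zero_shot(image, reduced)
print(pred_full, pred_reduced)
```

Nothing about the image changed between the two calls; only the label set did, which is the mechanism behind the class-design sensitivity described above.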
-
-
-
-## Feedback
-
-### Where to send questions or comments about the model
-
-Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
diff --git "a/spaces/Qiukai/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py" "b/spaces/Qiukai/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py"
deleted file mode 100644
index 2f4201438c4d8597c251726fe99c02d40f0cadf0..0000000000000000000000000000000000000000
--- "a/spaces/Qiukai/gpt/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243.py"
+++ /dev/null
@@ -1,166 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file
-import re
-import unicodedata
-fast_debug = False
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-
-def is_paragraph_break(match):
- """
- Decide from the given regex match whether a newline marks a paragraph break.
- If the character before the newline ends a sentence (period, exclamation mark, question mark)
- and the next character is uppercase, the newline is more likely a paragraph break.
- The length of the preceding content is also used to check that the paragraph is long enough.
- """
- prev_char, next_char = match.groups()
-
- # Sentence-ending punctuation
- sentence_endings = ".!?"
-
- # Minimum paragraph length threshold
- min_paragraph_length = 140
-
- if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length:
- return "\n\n"
- else:
- return " "
-
-def normalize_text(text):
- """
- Normalize the text by converting ligatures and other special typographic symbols
- to their basic forms, e.g. decomposing the ligature "fi" into "f" and "i".
- """
- # Normalize the text with NFKD, decomposing ligatures
- normalized_text = unicodedata.normalize("NFKD", text)
-
- # Remove remaining non-ASCII characters
- cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text)
-
- return cleaned_text
-
-def clean_text(raw_text):
- """
- Clean and format raw text extracted from a PDF.
- 1. Normalize the raw text.
- 2. Join words hyphenated across lines, e.g. "Espe-\ncially" becomes "Especially".
- 3. Use heuristic rules to decide whether each newline marks a paragraph break, and replace it accordingly.
- """
- # Normalize the text
- normalized_text = normalize_text(raw_text)
-
- # Join words hyphenated across line breaks
- text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text)
-
- # Locate newlines in the original text, together with their neighboring characters
- newlines = re.compile(r'(\S)\n(\S)')
-
- # Replace each newline with a space or a paragraph separator according to the heuristic
- final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text)
-
- return final_text.strip()
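A standalone sketch of the de-hyphenation and newline heuristic implemented by clean_text above (the two regexes are re-declared so the snippet runs on its own, and the sample string is invented):

```python
import re

def dehyphenate(text):
    # Join words split across lines, e.g. "Espe-\ncially" -> "Especially".
    return re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), text)

raw = "Espe-\ncially important results.\nNext sentence continues."
joined = dehyphenate(raw)

# Collapse remaining single newlines between non-space characters into spaces,
# mirroring the default (non-paragraph-break) branch of is_paragraph_break.
flat = re.sub(r'(\S)\n(\S)', r'\1 \2', joined)
print(flat)
```

The full clean_text additionally emits "\n\n" instead of a space when the sentence-ending and paragraph-length conditions are met.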
-
-def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
- import time, glob, os, fitz
- print('begin analysis on:', file_manifest)
- for index, fp in enumerate(file_manifest):
- with fitz.open(fp) as doc:
- file_content = ""
- for page in doc:
- file_content += page.get_text()
- file_content = clean_text(file_content)
- print(file_content)
-
- prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
- i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
- i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
- chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say_show_user,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=[],
- sys_prompt="总结文章。"
- ) # with a timeout countdown
-
-
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
- if not fast_debug: time.sleep(2)
-
- all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
- i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
- chatbot.append((i_say, "[Local Message] waiting gpt response."))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- if not fast_debug:
- msg = '正常'
- # ** gpt request **
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot,
- history=history,
- sys_prompt="总结文章。"
- ) # with a timeout countdown
-
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
- res = write_results_to_file(history)
- chatbot.append(("完成了吗?", res))
- yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh the UI
-
-
-@CatchException
-def 批量总结PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- import glob, os
-
- # Basic info: feature description and contributors
- chatbot.append([
- "函数插件功能?",
- "批量总结PDF文档。函数插件贡献者: ValeriaWong,Eralien"])
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # Try to import dependencies; if any are missing, suggest how to install them
- try:
- import fitz
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- # Clear the history to avoid input overflow
- history = []
-
- # Validate the input; exit directly if no input was given
- if os.path.exists(txt):
- project_folder = txt
- else:
- if txt == "": txt = '空空如也的输入栏'
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- # Build the manifest of files to process
- file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
- # [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
- # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
-
- # If no files were found
- if len(file_manifest) == 0:
- report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.pdf文件: {txt}")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- # Start the actual task
- yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
diff --git a/spaces/Rajagopal/ImageBind_zeroshot_demo2/README.md b/spaces/Rajagopal/ImageBind_zeroshot_demo2/README.md
deleted file mode 100644
index 162ddea8b6e79332450f42d5cdb86d17031c834e..0000000000000000000000000000000000000000
--- a/spaces/Rajagopal/ImageBind_zeroshot_demo2/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ImageBind
-emoji: 🔥
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.30.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: Rajagopal/ImageBind_zeroshot_demo
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/vcs/mercurial.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/vcs/mercurial.py
deleted file mode 100644
index 2a005e0aff2df95f01aff4706b48af5da0c81db1..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/vcs/mercurial.py
+++ /dev/null
@@ -1,163 +0,0 @@
-import configparser
-import logging
-import os
-from typing import List, Optional, Tuple
-
-from pip._internal.exceptions import BadCommand, InstallationError
-from pip._internal.utils.misc import HiddenText, display_path
-from pip._internal.utils.subprocess import make_command
-from pip._internal.utils.urls import path_to_url
-from pip._internal.vcs.versioncontrol import (
- RevOptions,
- VersionControl,
- find_path_to_project_root_from_repo_root,
- vcs,
-)
-
-logger = logging.getLogger(__name__)
-
-
-class Mercurial(VersionControl):
- name = "hg"
- dirname = ".hg"
- repo_name = "clone"
- schemes = (
- "hg+file",
- "hg+http",
- "hg+https",
- "hg+ssh",
- "hg+static-http",
- )
-
- @staticmethod
- def get_base_rev_args(rev: str) -> List[str]:
- return [rev]
-
- def fetch_new(
- self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
- ) -> None:
- rev_display = rev_options.to_display()
- logger.info(
- "Cloning hg %s%s to %s",
- url,
- rev_display,
- display_path(dest),
- )
- if verbosity <= 0:
- flags: Tuple[str, ...] = ("--quiet",)
- elif verbosity == 1:
- flags = ()
- elif verbosity == 2:
- flags = ("--verbose",)
- else:
- flags = ("--verbose", "--debug")
- self.run_command(make_command("clone", "--noupdate", *flags, url, dest))
- self.run_command(
- make_command("update", *flags, rev_options.to_args()),
- cwd=dest,
- )
-
- def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- repo_config = os.path.join(dest, self.dirname, "hgrc")
- config = configparser.RawConfigParser()
- try:
- config.read(repo_config)
- config.set("paths", "default", url.secret)
- with open(repo_config, "w") as config_file:
- config.write(config_file)
- except (OSError, configparser.NoSectionError) as exc:
- logger.warning("Could not switch Mercurial repository to %s: %s", url, exc)
- else:
- cmd_args = make_command("update", "-q", rev_options.to_args())
- self.run_command(cmd_args, cwd=dest)
-
- def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
- self.run_command(["pull", "-q"], cwd=dest)
- cmd_args = make_command("update", "-q", rev_options.to_args())
- self.run_command(cmd_args, cwd=dest)
-
- @classmethod
- def get_remote_url(cls, location: str) -> str:
- url = cls.run_command(
- ["showconfig", "paths.default"],
- show_stdout=False,
- stdout_only=True,
- cwd=location,
- ).strip()
- if cls._is_local_repository(url):
- url = path_to_url(url)
- return url.strip()
-
- @classmethod
- def get_revision(cls, location: str) -> str:
- """
- Return the repository-local changeset revision number, as an integer.
- """
- current_revision = cls.run_command(
- ["parents", "--template={rev}"],
- show_stdout=False,
- stdout_only=True,
- cwd=location,
- ).strip()
- return current_revision
-
- @classmethod
- def get_requirement_revision(cls, location: str) -> str:
- """
- Return the changeset identification hash, as a 40-character
- hexadecimal string
- """
- current_rev_hash = cls.run_command(
- ["parents", "--template={node}"],
- show_stdout=False,
- stdout_only=True,
- cwd=location,
- ).strip()
- return current_rev_hash
-
- @classmethod
- def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
- """Always assume the versions don't match"""
- return False
-
- @classmethod
- def get_subdirectory(cls, location: str) -> Optional[str]:
- """
- Return the path to Python project root, relative to the repo root.
- Return None if the project root is in the repo root.
- """
- # find the repo root
- repo_root = cls.run_command(
- ["root"], show_stdout=False, stdout_only=True, cwd=location
- ).strip()
- if not os.path.isabs(repo_root):
- repo_root = os.path.abspath(os.path.join(location, repo_root))
- return find_path_to_project_root_from_repo_root(location, repo_root)
-
- @classmethod
- def get_repository_root(cls, location: str) -> Optional[str]:
- loc = super().get_repository_root(location)
- if loc:
- return loc
- try:
- r = cls.run_command(
- ["root"],
- cwd=location,
- show_stdout=False,
- stdout_only=True,
- on_returncode="raise",
- log_failed_cmd=False,
- )
- except BadCommand:
- logger.debug(
- "could not determine if %s is under hg control "
- "because hg is not available",
- location,
- )
- return None
- except InstallationError:
- return None
- return os.path.normpath(r.rstrip("\r\n"))
-
-
-vcs.register(Mercurial)
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/unicode.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/unicode.py
deleted file mode 100644
index 06526203911de55da3c2a8c5ae73f48024c3f018..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/unicode.py
+++ /dev/null
@@ -1,352 +0,0 @@
-# unicode.py
-
-import sys
-from itertools import filterfalse
-from typing import List, Tuple, Union
-
-
-class _lazyclassproperty:
- def __init__(self, fn):
- self.fn = fn
- self.__doc__ = fn.__doc__
- self.__name__ = fn.__name__
-
- def __get__(self, obj, cls):
- if cls is None:
- cls = type(obj)
- if not hasattr(cls, "_intern") or any(
- cls._intern is getattr(superclass, "_intern", [])
- for superclass in cls.__mro__[1:]
- ):
- cls._intern = {}
- attrname = self.fn.__name__
- if attrname not in cls._intern:
- cls._intern[attrname] = self.fn(cls)
- return cls._intern[attrname]
-
-
-UnicodeRangeList = List[Union[Tuple[int, int], Tuple[int]]]
-
-
-class unicode_set:
- """
- A set of Unicode characters, for language-specific strings for
- ``alphas``, ``nums``, ``alphanums``, and ``printables``.
- A unicode_set is defined by a list of ranges in the Unicode character
- set, in a class attribute ``_ranges``. Ranges can be specified using
- 2-tuples or a 1-tuple, such as::
-
- _ranges = [
- (0x0020, 0x007e),
- (0x00a0, 0x00ff),
- (0x0100,),
- ]
-
- Ranges are left- and right-inclusive. A 1-tuple of (x,) is treated as (x, x).
-
- A unicode set can also be defined using multiple inheritance of other unicode sets::
-
- class CJK(Chinese, Japanese, Korean):
- pass
- """
-
- _ranges: UnicodeRangeList = []
-
- @_lazyclassproperty
- def _chars_for_ranges(cls):
- ret = []
- for cc in cls.__mro__:
- if cc is unicode_set:
- break
- for rr in getattr(cc, "_ranges", ()):
- ret.extend(range(rr[0], rr[-1] + 1))
- return [chr(c) for c in sorted(set(ret))]
-
- @_lazyclassproperty
- def printables(cls):
- "all non-whitespace characters in this range"
- return "".join(filterfalse(str.isspace, cls._chars_for_ranges))
-
- @_lazyclassproperty
- def alphas(cls):
- "all alphabetic characters in this range"
- return "".join(filter(str.isalpha, cls._chars_for_ranges))
-
- @_lazyclassproperty
- def nums(cls):
- "all numeric digit characters in this range"
- return "".join(filter(str.isdigit, cls._chars_for_ranges))
-
- @_lazyclassproperty
- def alphanums(cls):
- "all alphanumeric characters in this range"
- return cls.alphas + cls.nums
-
- @_lazyclassproperty
- def identchars(cls):
- "all characters in this range that are valid identifier characters, plus underscore '_'"
- return "".join(
- sorted(
- set(
- "".join(filter(str.isidentifier, cls._chars_for_ranges))
- + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµº"
- + "ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿ"
- + "_"
- )
- )
- )
-
- @_lazyclassproperty
- def identbodychars(cls):
- """
- all characters in this range that are valid identifier body characters,
- plus the digits 0-9
- """
- return "".join(
- sorted(
- set(
- cls.identchars
- + "0123456789"
- + "".join(
- [c for c in cls._chars_for_ranges if ("_" + c).isidentifier()]
- )
- )
- )
- )
-
-
-class pyparsing_unicode(unicode_set):
- """
- A namespace class for defining common language unicode_sets.
- """
-
- # fmt: off
-
- # define ranges in language character sets
- _ranges: UnicodeRangeList = [
- (0x0020, sys.maxunicode),
- ]
-
- class BasicMultilingualPlane(unicode_set):
- "Unicode set for the Basic Multilingual Plane"
- _ranges: UnicodeRangeList = [
- (0x0020, 0xFFFF),
- ]
-
- class Latin1(unicode_set):
- "Unicode set for Latin-1 Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0020, 0x007E),
- (0x00A0, 0x00FF),
- ]
-
- class LatinA(unicode_set):
- "Unicode set for Latin-A Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0100, 0x017F),
- ]
-
- class LatinB(unicode_set):
- "Unicode set for Latin-B Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0180, 0x024F),
- ]
-
- class Greek(unicode_set):
- "Unicode set for Greek Unicode Character Ranges"
- _ranges: UnicodeRangeList = [
- (0x0342, 0x0345),
- (0x0370, 0x0377),
- (0x037A, 0x037F),
- (0x0384, 0x038A),
- (0x038C,),
- (0x038E, 0x03A1),
- (0x03A3, 0x03E1),
- (0x03F0, 0x03FF),
- (0x1D26, 0x1D2A),
- (0x1D5E,),
- (0x1D60,),
- (0x1D66, 0x1D6A),
- (0x1F00, 0x1F15),
- (0x1F18, 0x1F1D),
- (0x1F20, 0x1F45),
- (0x1F48, 0x1F4D),
- (0x1F50, 0x1F57),
- (0x1F59,),
- (0x1F5B,),
- (0x1F5D,),
- (0x1F5F, 0x1F7D),
- (0x1F80, 0x1FB4),
- (0x1FB6, 0x1FC4),
- (0x1FC6, 0x1FD3),
- (0x1FD6, 0x1FDB),
- (0x1FDD, 0x1FEF),
- (0x1FF2, 0x1FF4),
- (0x1FF6, 0x1FFE),
- (0x2129,),
- (0x2719, 0x271A),
- (0xAB65,),
- (0x10140, 0x1018D),
- (0x101A0,),
- (0x1D200, 0x1D245),
- (0x1F7A1, 0x1F7A7),
- ]
-
- class Cyrillic(unicode_set):
- "Unicode set for Cyrillic Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0400, 0x052F),
- (0x1C80, 0x1C88),
- (0x1D2B,),
- (0x1D78,),
- (0x2DE0, 0x2DFF),
- (0xA640, 0xA672),
- (0xA674, 0xA69F),
- (0xFE2E, 0xFE2F),
- ]
-
- class Chinese(unicode_set):
- "Unicode set for Chinese Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x2E80, 0x2E99),
- (0x2E9B, 0x2EF3),
- (0x31C0, 0x31E3),
- (0x3400, 0x4DB5),
- (0x4E00, 0x9FEF),
- (0xA700, 0xA707),
- (0xF900, 0xFA6D),
- (0xFA70, 0xFAD9),
- (0x16FE2, 0x16FE3),
- (0x1F210, 0x1F212),
- (0x1F214, 0x1F23B),
- (0x1F240, 0x1F248),
- (0x20000, 0x2A6D6),
- (0x2A700, 0x2B734),
- (0x2B740, 0x2B81D),
- (0x2B820, 0x2CEA1),
- (0x2CEB0, 0x2EBE0),
- (0x2F800, 0x2FA1D),
- ]
-
- class Japanese(unicode_set):
- "Unicode set for Japanese Unicode Character Range, combining Kanji, Hiragana, and Katakana ranges"
- _ranges: UnicodeRangeList = []
-
- class Kanji(unicode_set):
- "Unicode set for Kanji Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x4E00, 0x9FBF),
- (0x3000, 0x303F),
- ]
-
- class Hiragana(unicode_set):
- "Unicode set for Hiragana Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x3041, 0x3096),
- (0x3099, 0x30A0),
- (0x30FC,),
- (0xFF70,),
- (0x1B001,),
- (0x1B150, 0x1B152),
- (0x1F200,),
- ]
-
- class Katakana(unicode_set):
- "Unicode set for Katakana Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x3099, 0x309C),
- (0x30A0, 0x30FF),
- (0x31F0, 0x31FF),
- (0x32D0, 0x32FE),
- (0xFF65, 0xFF9F),
- (0x1B000,),
- (0x1B164, 0x1B167),
- (0x1F201, 0x1F202),
- (0x1F213,),
- ]
-
- class Hangul(unicode_set):
- "Unicode set for Hangul (Korean) Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x1100, 0x11FF),
- (0x302E, 0x302F),
- (0x3131, 0x318E),
- (0x3200, 0x321C),
- (0x3260, 0x327B),
- (0x327E,),
- (0xA960, 0xA97C),
- (0xAC00, 0xD7A3),
- (0xD7B0, 0xD7C6),
- (0xD7CB, 0xD7FB),
- (0xFFA0, 0xFFBE),
- (0xFFC2, 0xFFC7),
- (0xFFCA, 0xFFCF),
- (0xFFD2, 0xFFD7),
- (0xFFDA, 0xFFDC),
- ]
-
- Korean = Hangul
-
- class CJK(Chinese, Japanese, Hangul):
- "Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range"
-
- class Thai(unicode_set):
- "Unicode set for Thai Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0E01, 0x0E3A),
- (0x0E3F, 0x0E5B)
- ]
-
- class Arabic(unicode_set):
- "Unicode set for Arabic Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0600, 0x061B),
- (0x061E, 0x06FF),
- (0x0700, 0x077F),
- ]
-
- class Hebrew(unicode_set):
- "Unicode set for Hebrew Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0591, 0x05C7),
- (0x05D0, 0x05EA),
- (0x05EF, 0x05F4),
- (0xFB1D, 0xFB36),
- (0xFB38, 0xFB3C),
- (0xFB3E,),
- (0xFB40, 0xFB41),
- (0xFB43, 0xFB44),
- (0xFB46, 0xFB4F),
- ]
-
- class Devanagari(unicode_set):
- "Unicode set for Devanagari Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0900, 0x097F),
- (0xA8E0, 0xA8FF)
- ]
-
- # fmt: on
-
-
-pyparsing_unicode.Japanese._ranges = (
- pyparsing_unicode.Japanese.Kanji._ranges
- + pyparsing_unicode.Japanese.Hiragana._ranges
- + pyparsing_unicode.Japanese.Katakana._ranges
-)
-
-pyparsing_unicode.BMP = pyparsing_unicode.BasicMultilingualPlane
-
-# add language identifiers using language Unicode
-pyparsing_unicode.العربية = pyparsing_unicode.Arabic
-pyparsing_unicode.中文 = pyparsing_unicode.Chinese
-pyparsing_unicode.кириллица = pyparsing_unicode.Cyrillic
-pyparsing_unicode.Ελληνικά = pyparsing_unicode.Greek
-pyparsing_unicode.עִברִית = pyparsing_unicode.Hebrew
-pyparsing_unicode.日本語 = pyparsing_unicode.Japanese
-pyparsing_unicode.Japanese.漢字 = pyparsing_unicode.Japanese.Kanji
-pyparsing_unicode.Japanese.カタカナ = pyparsing_unicode.Japanese.Katakana
-pyparsing_unicode.Japanese.ひらがな = pyparsing_unicode.Japanese.Hiragana
-pyparsing_unicode.한국어 = pyparsing_unicode.Korean
-pyparsing_unicode.ไทย = pyparsing_unicode.Thai
-pyparsing_unicode.देवनागरी = pyparsing_unicode.Devanagari
diff --git a/spaces/Realcat/image-matching-webui/third_party/SGMNet/train/dataset.py b/spaces/Realcat/image-matching-webui/third_party/SGMNet/train/dataset.py
deleted file mode 100644
index 37a97fd6204240e636d4b234f6c855f948c76b99..0000000000000000000000000000000000000000
--- a/spaces/Realcat/image-matching-webui/third_party/SGMNet/train/dataset.py
+++ /dev/null
@@ -1,284 +0,0 @@
-import numpy as np
-import torch
-import torch.utils.data as data
-import cv2
-import os
-import h5py
-import random
-
-import sys
-
-ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../"))
-sys.path.insert(0, ROOT_DIR)
-
-from utils import train_utils, evaluation_utils
-
-torch.multiprocessing.set_sharing_strategy("file_system")
-
-
-class Offline_Dataset(data.Dataset):
- def __init__(self, config, mode):
- assert mode == "train" or mode == "valid"
-
- self.config = config
- self.mode = mode
- metadir = (
- os.path.join(config.dataset_path, "valid")
- if mode == "valid"
- else os.path.join(config.dataset_path, "train")
- )
-
- pair_num_list = np.loadtxt(os.path.join(metadir, "pair_num.txt"), dtype=str)
- self.total_pairs = int(pair_num_list[0, 1])
- self.pair_seq_list, self.accu_pair_num = train_utils.parse_pair_seq(
- pair_num_list
- )
-
- def collate_fn(self, batch):
- batch_size, num_pts = len(batch), batch[0]["x1"].shape[0]
-
- data = {}
- dtype = [
- "x1",
- "x2",
- "kpt1",
- "kpt2",
- "desc1",
- "desc2",
- "num_corr",
- "num_incorr1",
- "num_incorr2",
- "e_gt",
- "pscore1",
- "pscore2",
- "img_path1",
- "img_path2",
- ]
- for key in dtype:
- data[key] = []
- for sample in batch:
- for key in dtype:
- data[key].append(sample[key])
-
- for key in [
- "x1",
- "x2",
- "kpt1",
- "kpt2",
- "desc1",
- "desc2",
- "e_gt",
- "pscore1",
- "pscore2",
- ]:
- data[key] = torch.from_numpy(np.stack(data[key])).float()
- for key in ["num_corr", "num_incorr1", "num_incorr2"]:
- data[key] = torch.from_numpy(np.stack(data[key])).int()
-
- # kpt augmentation with random homography
- if self.mode == "train" and self.config.data_aug:
- homo_mat = torch.from_numpy(
- train_utils.get_rnd_homography(batch_size)
- ).unsqueeze(1)
- aug_seed = random.random()
- if aug_seed < 0.5:
- x1_homo = torch.cat(
- [data["x1"], torch.ones([batch_size, num_pts, 1])], dim=-1
- ).unsqueeze(-1)
- x1_homo = torch.matmul(homo_mat.float(), x1_homo.float()).squeeze(-1)
- data["aug_x1"] = x1_homo[:, :, :2] / x1_homo[:, :, 2].unsqueeze(-1)
- data["aug_x2"] = data["x2"]
- else:
- x2_homo = torch.cat(
- [data["x2"], torch.ones([batch_size, num_pts, 1])], dim=-1
- ).unsqueeze(-1)
- x2_homo = torch.matmul(homo_mat.float(), x2_homo.float()).squeeze(-1)
- data["aug_x2"] = x2_homo[:, :, :2] / x2_homo[:, :, 2].unsqueeze(-1)
- data["aug_x1"] = data["x1"]
- else:
- data["aug_x1"], data["aug_x2"] = data["x1"], data["x2"]
- return data
-
- def __getitem__(self, index):
- seq = self.pair_seq_list[index]
- index_within_seq = index - self.accu_pair_num[seq]
-
- with h5py.File(
- os.path.join(self.config.dataset_path, seq, "info.h5py"), "r"
- ) as data:
- R, t = (
- data["dR"][str(index_within_seq)][()],
- data["dt"][str(index_within_seq)][()],
- )
- egt = np.reshape(
- np.matmul(
- np.reshape(
- evaluation_utils.np_skew_symmetric(
- t.astype("float64").reshape(1, 3)
- ),
- (3, 3),
- ),
- np.reshape(R.astype("float64"), (3, 3)),
- ),
- (3, 3),
- )
- egt = egt / np.linalg.norm(egt)
- K1, K2 = (
- data["K1"][str(index_within_seq)][()],
- data["K2"][str(index_within_seq)][()],
- )
- size1, size2 = (
- data["size1"][str(index_within_seq)][()],
- data["size2"][str(index_within_seq)][()],
- )
-
- img_path1, img_path2 = (
- data["img_path1"][str(index_within_seq)][()][0].decode(),
- data["img_path2"][str(index_within_seq)][()][0].decode(),
- )
- img_name1, img_name2 = img_path1.split("/")[-1], img_path2.split("/")[-1]
- img_path1, img_path2 = os.path.join(
- self.config.rawdata_path, img_path1
- ), os.path.join(self.config.rawdata_path, img_path2)
- fea_path1, fea_path2 = os.path.join(
- self.config.desc_path, seq, img_name1 + self.config.desc_suffix
- ), os.path.join(
- self.config.desc_path, seq, img_name2 + self.config.desc_suffix
- )
- with h5py.File(fea_path1, "r") as fea1, h5py.File(fea_path2, "r") as fea2:
- desc1, kpt1, pscore1 = (
- fea1["descriptors"][()],
- fea1["keypoints"][()][:, :2],
- fea1["keypoints"][()][:, 2],
- )
- desc2, kpt2, pscore2 = (
- fea2["descriptors"][()],
- fea2["keypoints"][()][:, :2],
- fea2["keypoints"][()][:, 2],
- )
- kpt1, kpt2, desc1, desc2 = (
- kpt1[: self.config.num_kpt],
- kpt2[: self.config.num_kpt],
- desc1[: self.config.num_kpt],
- desc2[: self.config.num_kpt],
- )
-
- # normalize kpt
- if self.config.input_normalize == "intrinsic":
- x1, x2 = np.concatenate(
- [kpt1, np.ones([kpt1.shape[0], 1])], axis=-1
- ), np.concatenate([kpt2, np.ones([kpt2.shape[0], 1])], axis=-1)
- x1, x2 = (
- np.matmul(np.linalg.inv(K1), x1.T).T[:, :2],
- np.matmul(np.linalg.inv(K2), x2.T).T[:, :2],
- )
- elif self.config.input_normalize == "img":
- x1, x2 = (kpt1 - size1 / 2) / size1, (kpt2 - size2 / 2) / size2
- S1_inv, S2_inv = np.asarray(
- [
- [size1[0], 0, 0.5 * size1[0]],
- [0, size1[1], 0.5 * size1[1]],
- [0, 0, 1],
- ]
- ), np.asarray(
- [
- [size2[0], 0, 0.5 * size2[0]],
- [0, size2[1], 0.5 * size2[1]],
- [0, 0, 1],
- ]
- )
- M1, M2 = np.matmul(np.linalg.inv(K1), S1_inv), np.matmul(
- np.linalg.inv(K2), S2_inv
- )
- egt = np.matmul(np.matmul(M2.transpose(), egt), M1)
- egt = egt / np.linalg.norm(egt)
- else:
- raise NotImplementedError
-
- corr = data["corr"][str(index_within_seq)][()]
- incorr1, incorr2 = (
- data["incorr1"][str(index_within_seq)][()],
- data["incorr2"][str(index_within_seq)][()],
- )
-
- # permute kpt
- valid_corr = corr[corr.max(axis=-1) < self.config.num_kpt]
- valid_incorr1, valid_incorr2 = (
- incorr1[incorr1 < self.config.num_kpt],
- incorr2[incorr2 < self.config.num_kpt],
- )
- num_corr, num_incorr1, num_incorr2 = (
- len(valid_corr),
- len(valid_incorr1),
- len(valid_incorr2),
- )
- mask1_invalid, mask2_invalid = np.ones(x1.shape[0]).astype(bool), np.ones(
- x2.shape[0]
- ).astype(bool)
- mask1_invalid[valid_corr[:, 0]] = False
- mask2_invalid[valid_corr[:, 1]] = False
- mask1_invalid[valid_incorr1] = False
- mask2_invalid[valid_incorr2] = False
- invalid_index1, invalid_index2 = (
- np.nonzero(mask1_invalid)[0],
- np.nonzero(mask2_invalid)[0],
- )
-
- # random sample from point w/o valid annotation
- cur_kpt1 = self.config.num_kpt - num_corr - num_incorr1
- cur_kpt2 = self.config.num_kpt - num_corr - num_incorr2
-
- if invalid_index1.shape[0] < cur_kpt1:
- sub_idx1 = np.concatenate(
- [
- np.arange(len(invalid_index1)),
- np.random.randint(
- len(invalid_index1), size=cur_kpt1 - len(invalid_index1)
- ),
- ]
- )
- if invalid_index1.shape[0] >= cur_kpt1:
- sub_idx1 = np.random.choice(len(invalid_index1), cur_kpt1, replace=False)
- if invalid_index2.shape[0] < cur_kpt2:
- sub_idx2 = np.concatenate(
- [
- np.arange(len(invalid_index2)),
- np.random.randint(
- len(invalid_index2), size=cur_kpt2 - len(invalid_index2)
- ),
- ]
- )
- if invalid_index2.shape[0] >= cur_kpt2:
- sub_idx2 = np.random.choice(len(invalid_index2), cur_kpt2, replace=False)
-
- per_idx1, per_idx2 = np.concatenate(
- [valid_corr[:, 0], valid_incorr1, invalid_index1[sub_idx1]]
- ), np.concatenate([valid_corr[:, 1], valid_incorr2, invalid_index2[sub_idx2]])
-
- pscore1, pscore2 = (
- pscore1[per_idx1][:, np.newaxis],
- pscore2[per_idx2][:, np.newaxis],
- )
- x1, x2 = x1[per_idx1][:, :2], x2[per_idx2][:, :2]
- desc1, desc2 = desc1[per_idx1], desc2[per_idx2]
- kpt1, kpt2 = kpt1[per_idx1], kpt2[per_idx2]
-
- return {
- "x1": x1,
- "x2": x2,
- "kpt1": kpt1,
- "kpt2": kpt2,
- "desc1": desc1,
- "desc2": desc2,
- "num_corr": num_corr,
- "num_incorr1": num_incorr1,
- "num_incorr2": num_incorr2,
- "e_gt": egt,
- "pscore1": pscore1,
- "pscore2": pscore2,
- "img_path1": img_path1,
- "img_path2": img_path2,
- }
-
- def __len__(self):
- return self.total_pairs
diff --git a/spaces/ReganMayer/ChatGPT44/app.py b/spaces/ReganMayer/ChatGPT44/app.py
deleted file mode 100644
index 5e9f843311d5f64ef73b0270d1cba5c3e219d5e6..0000000000000000000000000000000000000000
--- a/spaces/ReganMayer/ChatGPT44/app.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import gradio as gr
-import os
-import json
-import requests
-
-#Streaming endpoint
-API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream"
-
-#Huggingface provided GPT4 OpenAI API Key
-OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
-
-#Inference function
-def predict(system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]):
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {OPENAI_API_KEY}"
- }
- print(f"system message is ^^ {system_msg}")
- if system_msg.strip() == '':
- initial_message = [{"role": "user", "content": f"{inputs}"},]
- multi_turn_message = []
- else:
- initial_message= [{"role": "system", "content": system_msg},
- {"role": "user", "content": f"{inputs}"},]
- multi_turn_message = [{"role": "system", "content": system_msg},]
-
- if chat_counter == 0 :
- payload = {
- "model": "gpt-4",
- "messages": initial_message ,
- "temperature" : 1.0,
- "top_p":1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
- print(f"chat_counter - {chat_counter}")
- else: #if chat_counter != 0 :
- messages=multi_turn_message # Of the type of - [{"role": "system", "content": system_msg},]
- for data in chatbot:
- user = {}
- user["role"] = "user"
- user["content"] = data[0]
- assistant = {}
- assistant["role"] = "assistant"
- assistant["content"] = data[1]
- messages.append(user)
- messages.append(assistant)
- temp = {}
- temp["role"] = "user"
- temp["content"] = inputs
- messages.append(temp)
- #messages
- payload = {
- "model": "gpt-4",
- "messages": messages, # Of the type of [{"role": "user", "content": f"{inputs}"}],
- "temperature" : temperature, #1.0,
- "top_p": top_p, #1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,}
-
- chat_counter+=1
-
- history.append(inputs)
- print(f"Logging : payload is - {payload}")
- # make a POST request to the API endpoint using the requests.post method, passing in stream=True
- response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- print(f"Logging : response code - {response}")
- token_counter = 0
- partial_words = ""
-
- counter=0
- for chunk in response.iter_lines():
- #Skipping first chunk
- if counter == 0:
- counter+=1
- continue
- # check whether each line is non-empty
- if chunk.decode() :
- chunk = chunk.decode()
- # decode each line as response data is in bytes
- if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
- partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
- if token_counter == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
- chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list
- token_counter+=1
- yield chat, history, chat_counter, response # resembles {chatbot: chat, state: history}
-
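The loop above decodes each streamed server-sent event and strips the leading `data: ` prefix with `chunk[6:]` before JSON-decoding it. A standalone sketch of that parsing step (the sample chunk is illustrative, not a real API response):

```python
import json

# Each streamed line from the Chat Completions API looks like
# b'data: {"choices": [{"delta": {"content": "..."}}]}'.
# line[6:] below drops the "data: " prefix before json.loads, and the
# length guard mirrors the `len(chunk) > 12` check in the loop above,
# which also skips empty lines and the terminal "data: [DONE]" event.
def extract_delta(chunk: bytes) -> str:
    """Return the content fragment from one SSE line, or '' if absent."""
    line = chunk.decode()
    if len(line) <= 12:
        return ""
    event = json.loads(line[6:])
    delta = event["choices"][0]["delta"]
    return delta.get("content", "")

sample = b'data: {"choices": [{"delta": {"content": "Hello"}}]}'
```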
-#Resetting to blank
-def reset_textbox():
- return gr.update(value='')
-
-#to set a component as visible=False
-def set_visible_false():
- return gr.update(visible=False)
-
-#to set a component as visible=True
-def set_visible_true():
- return gr.update(visible=True)
-
-title = """
-🔥GPT4 with ChatCompletions API +🚀Gradio-Streaming
"""
-
-#display message for themes feature
-theme_addon_msg = """
-🌟 Discover Gradio Themes with this Demo, featuring v3.22.0! Gradio v3.23.0 also enables seamless Theme sharing. You can develop or modify a theme, and send it to the hub using simple theme.push_to_hub().
- 🏆Participate in Gradio's Theme Building Hackathon to exhibit your creative flair and win fabulous rewards! Join here - Gradio-Themes-Party🎨 🏆
-"""
-
-#Using info to add additional information about System message in GPT4
-system_msg_info = """A conversation could begin with a system message to gently instruct the assistant.
-The system message helps set the behavior of the AI Assistant. For example, the assistant could be instructed with 'You are a helpful assistant.'"""
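For reference, the multi-turn branch of predict() above assembles the Chat Completions messages list with the system message at the head, followed by alternating user/assistant turns and the new input. A minimal standalone sketch with illustrative values:

```python
# Minimal sketch of the payload construction in predict() above.
# The names mirror that function; the values here are illustrative.
system_msg = "You are a helpful assistant."
chat_history = [("Hi there!", "Hello! How can I help?")]  # (user, assistant) pairs
new_input = "What is Gradio?"

messages = [{"role": "system", "content": system_msg}]
for user_turn, assistant_turn in chat_history:
    messages.append({"role": "user", "content": user_turn})
    messages.append({"role": "assistant", "content": assistant_turn})
messages.append({"role": "user", "content": new_input})

payload = {"model": "gpt-4", "messages": messages, "stream": True}
```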
-
-#Modifying existing Gradio Theme
-theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green",
- text_size=gr.themes.sizes.text_lg)
-
-with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""",
- theme=theme) as demo:
- gr.HTML(title)
- gr.HTML("""
-🔥This Huggingface Gradio Demo provides you full access to GPT4 API (4096 token limit). 🎉🥳🎉You don't need any OPENAI API key🙌""")
- gr.HTML(theme_addon_msg)
- gr.HTML('''
-Duplicate the Space and run securely with your OpenAI API Key
''')
-
- with gr.Column(elem_id = "col_container"):
- #GPT4 API Key is provided by Huggingface
- with gr.Accordion(label="System message:", open=False):
- system_msg = gr.Textbox(label="Instruct the AI Assistant to set its behaviour", info = system_msg_info, value="")
- accordion_msg = gr.HTML(value="🚧 To set System message you will have to refresh the app", visible=False)
- chatbot = gr.Chatbot(label='GPT4', elem_id="chatbot")
- inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter")
- state = gr.State([])
- with gr.Row():
- with gr.Column(scale=7):
- b1 = gr.Button().style(full_width=True)
- with gr.Column(scale=3):
- server_status_code = gr.Textbox(label="Status code from OpenAI server", )
-
- #top_p, temperature
- with gr.Accordion("Parameters", open=False):
- top_p = gr.Slider( minimum=0.0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",)
- temperature = gr.Slider( minimum=0.0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",)
- chat_counter = gr.Number(value=0, visible=False, precision=0)
-
- #Event handling
- inputs.submit( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
- b1.click( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
-
- inputs.submit(set_visible_false, [], [system_msg])
- b1.click(set_visible_false, [], [system_msg])
- inputs.submit(set_visible_true, [], [accordion_msg])
- b1.click(set_visible_true, [], [accordion_msg])
-
- b1.click(reset_textbox, [], [inputs])
- inputs.submit(reset_textbox, [], [inputs])
-
- #Examples
- with gr.Accordion(label="Examples for System message:", open=False):
- gr.Examples(
- examples = [["""You are an AI programming assistant.
-
- - Follow the user's requirements carefully and to the letter.
- - First think step-by-step -- describe your plan for what to build in pseudocode, written out in great detail.
- - Then output the code in a single code block.
- - Minimize any other prose."""], ["""You are ComedianGPT who is a helpful assistant. You answer everything with a joke and witty replies."""],
- ["You are ChefGPT, a helpful assistant who answers questions with culinary expertise and a pinch of humor."],
- ["You are FitnessGuruGPT, a fitness expert who shares workout tips and motivation with a playful twist."],
- ["You are SciFiGPT, an AI assistant who discusses science fiction topics with a blend of knowledge and wit."],
- ["You are PhilosopherGPT, a thoughtful assistant who responds to inquiries with philosophical insights and a touch of humor."],
- ["You are EcoWarriorGPT, a helpful assistant who shares environment-friendly advice with a lighthearted approach."],
- ["You are MusicMaestroGPT, a knowledgeable AI who discusses music and its history with a mix of facts and playful banter."],
- ["You are SportsFanGPT, an enthusiastic assistant who talks about sports and shares amusing anecdotes."],
- ["You are TechWhizGPT, a tech-savvy AI who can help users troubleshoot issues and answer questions with a dash of humor."],
- ["You are FashionistaGPT, an AI fashion expert who shares style advice and trends with a sprinkle of wit."],
- ["You are ArtConnoisseurGPT, an AI assistant who discusses art and its history with a blend of knowledge and playful commentary."],
- ["You are a helpful assistant that provides detailed and accurate information."],
- ["You are an assistant that speaks like Shakespeare."],
- ["You are a friendly assistant who uses casual language and humor."],
- ["You are a financial advisor who gives expert advice on investments and budgeting."],
- ["You are a health and fitness expert who provides advice on nutrition and exercise."],
- ["You are a travel consultant who offers recommendations for destinations, accommodations, and attractions."],
- ["You are a movie critic who shares insightful opinions on films and their themes."],
- ["You are a history enthusiast who loves to discuss historical events and figures."],
- ["You are a tech-savvy assistant who can help users troubleshoot issues and answer questions about gadgets and software."],
- ["You are an AI poet who can compose creative and evocative poems on any given topic."],],
- inputs = system_msg,)
-
-demo.queue(max_size=20, concurrency_count=20).launch(debug=True)
\ No newline at end of file
diff --git a/spaces/RichardMB1217/blip/train_nlvr.py b/spaces/RichardMB1217/blip/train_nlvr.py
deleted file mode 100644
index 84b247bda2334c1fd894b6c11d33ef48c8e7df28..0000000000000000000000000000000000000000
--- a/spaces/RichardMB1217/blip/train_nlvr.py
+++ /dev/null
@@ -1,213 +0,0 @@
-'''
- * Copyright (c) 2022, salesforce.com, inc.
- * All rights reserved.
- * SPDX-License-Identifier: BSD-3-Clause
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- * By Junnan Li
-'''
-import argparse
-import os
-import ruamel_yaml as yaml
-import numpy as np
-import random
-import time
-import datetime
-import json
-from pathlib import Path
-import pickle
-import pickle
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.utils.data import DataLoader
-import torch.backends.cudnn as cudnn
-import torch.distributed as dist
-
-from models.blip_nlvr import blip_nlvr
-
-import utils
-from utils import cosine_lr_schedule, warmup_lr_schedule
-from data import create_dataset, create_sampler, create_loader
-
-def train(model, data_loader, optimizer, epoch, device, config):
- # train
- model.train()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
- metric_logger.add_meter('lr', utils.SmoothedValue(window_size=50, fmt='{value:.6f}'))
- metric_logger.add_meter('loss', utils.SmoothedValue(window_size=50, fmt='{value:.4f}'))
-
- header = 'Train Epoch: [{}]'.format(epoch)
- print_freq = 50
- step_size = 10
-
- for i,(image0, image1, text, targets) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
-
- images = torch.cat([image0, image1], dim=0)
- images, targets = images.to(device), targets.to(device)
-
- loss = model(images, text, targets=targets, train=True)
-
- optimizer.zero_grad()
- loss.backward()
- optimizer.step()
-
- metric_logger.update(lr=optimizer.param_groups[0]["lr"])
- metric_logger.update(loss=loss.item())
-
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
- print("Averaged stats:", metric_logger.global_avg())
- return {k: "{:.4f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()}
-
-
-@torch.no_grad()
-def evaluate(model, data_loader, device, config):
- # test
- model.eval()
-
- metric_logger = utils.MetricLogger(delimiter=" ")
-
- header = 'Evaluation:'
- print_freq = 50
-
- for image0, image1, text, targets in metric_logger.log_every(data_loader, print_freq, header):
- images = torch.cat([image0, image1], dim=0)
- images, targets = images.to(device), targets.to(device)
-
- prediction = model(images, text, targets=targets, train=False)
-
- _, pred_class = prediction.max(1)
- accuracy = (targets==pred_class).sum() / targets.size(0)
-
- metric_logger.meters['acc'].update(accuracy.item(), n=image0.size(0))
-
- # gather the stats from all processes
- metric_logger.synchronize_between_processes()
-
- print("Averaged stats:", metric_logger.global_avg())
- return {k: "{:.4f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()}
-
-
-
-def main(args, config):
- utils.init_distributed_mode(args)
-
- device = torch.device(args.device)
-
- # fix the seed for reproducibility
- seed = args.seed + utils.get_rank()
- torch.manual_seed(seed)
- np.random.seed(seed)
- random.seed(seed)
- cudnn.benchmark = True
-
- #### Dataset ####
- print("Creating dataset")
- datasets = create_dataset('nlvr', config)
-
- if args.distributed:
- num_tasks = utils.get_world_size()
- global_rank = utils.get_rank()
- samplers = create_sampler(datasets, [True,False,False], num_tasks, global_rank)
- else:
- samplers = [None, None, None]
-
- batch_size=[config['batch_size_train'],config['batch_size_test'],config['batch_size_test']]
- train_loader, val_loader, test_loader = create_loader(datasets,samplers,batch_size=batch_size,
- num_workers=[4,4,4],is_trains=[True,False,False],
- collate_fns=[None,None,None])
-
- #### Model ####
- print("Creating model")
- model = blip_nlvr(pretrained=config['pretrained'], image_size=config['image_size'],
- vit=config['vit'], vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer'])
-
- model = model.to(device)
-
- model_without_ddp = model
- if args.distributed:
- model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
- model_without_ddp = model.module
-
- optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay'])
-
- print("Start training")
- start_time = time.time()
- best = 0
- best_epoch = 0
-
- for epoch in range(0, config['max_epoch']):
- if not args.evaluate:
- if args.distributed:
- train_loader.sampler.set_epoch(epoch)
-
- cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr'])
-
- train_stats = train(model, train_loader, optimizer, epoch, device, config)
-
- val_stats = evaluate(model, val_loader, device, config)
- test_stats = evaluate(model, test_loader, device, config)
-
- if utils.is_main_process():
- if args.evaluate:
- log_stats = {**{f'val_{k}': v for k, v in val_stats.items()},
- **{f'test_{k}': v for k, v in test_stats.items()},
- }
- with open(os.path.join(args.output_dir, "log.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
-
- else:
- log_stats = {**{f'train_{k}': v for k, v in train_stats.items()},
- **{f'val_{k}': v for k, v in val_stats.items()},
- **{f'test_{k}': v for k, v in test_stats.items()},
- 'epoch': epoch,
- }
-
- if float(val_stats['acc'])>best:
- save_obj = {
- 'model': model_without_ddp.state_dict(),
- 'optimizer': optimizer.state_dict(),
- 'config': config,
- 'epoch': epoch,
- }
- torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_best.pth'))
- best = float(val_stats['acc'])
- best_epoch = epoch
-
- with open(os.path.join(args.output_dir, "log.txt"),"a") as f:
- f.write(json.dumps(log_stats) + "\n")
- if args.evaluate:
- break
-
- dist.barrier()
-
- if utils.is_main_process():
- with open(os.path.join(args.output_dir, "log.txt"),"a") as f:
- f.write("best epoch: %d"%best_epoch)
-
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('Training time {}'.format(total_time_str))
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--config', default='./configs/nlvr.yaml')
- parser.add_argument('--output_dir', default='output/NLVR')
- parser.add_argument('--evaluate', action='store_true')
- parser.add_argument('--device', default='cuda')
- parser.add_argument('--seed', default=42, type=int)
- parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes')
- parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training')
- parser.add_argument('--distributed', default=True, type=bool)
- args = parser.parse_args()
-
- config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader)
-
- Path(args.output_dir).mkdir(parents=True, exist_ok=True)
-
- yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w'))
-
- main(args, config)
\ No newline at end of file
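One caveat worth flagging in the deleted script above: `--distributed` is declared with `type=bool`, and `bool()` on any non-empty string is truthy, so passing `--distributed False` on the command line would still yield `True`. A common workaround is an explicit string-to-bool converter (the `str2bool` name is illustrative, not from the repo):

```python
import argparse

# argparse pitfall: type=bool applies bool() to the raw string, and
# bool("False") is True. An explicit converter avoids the surprise.
def str2bool(v: str) -> bool:
    if v.lower() in ("yes", "true", "1"):
        return True
    if v.lower() in ("no", "false", "0"):
        return False
    raise argparse.ArgumentTypeError(f"expected a boolean, got {v!r}")

parser = argparse.ArgumentParser()
parser.add_argument('--distributed', default=True, type=str2bool)
args = parser.parse_args(['--distributed', 'False'])
```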
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/hrnet.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/hrnet.py
deleted file mode 100644
index 331ebf3ccb8597b3f507670753789073fc3c946d..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/backbones/hrnet.py
+++ /dev/null
@@ -1,555 +0,0 @@
-import torch.nn as nn
-from annotator.uniformer.mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init,
- kaiming_init)
-from annotator.uniformer.mmcv.runner import load_checkpoint
-from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm
-
-from annotator.uniformer.mmseg.ops import Upsample, resize
-from annotator.uniformer.mmseg.utils import get_root_logger
-from ..builder import BACKBONES
-from .resnet import BasicBlock, Bottleneck
-
-
-class HRModule(nn.Module):
- """High-Resolution Module for HRNet.
-
- In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange
- is in this module.
- """
-
- def __init__(self,
- num_branches,
- blocks,
- num_blocks,
- in_channels,
- num_channels,
- multiscale_output=True,
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True)):
- super(HRModule, self).__init__()
- self._check_branches(num_branches, num_blocks, in_channels,
- num_channels)
-
- self.in_channels = in_channels
- self.num_branches = num_branches
-
- self.multiscale_output = multiscale_output
- self.norm_cfg = norm_cfg
- self.conv_cfg = conv_cfg
- self.with_cp = with_cp
- self.branches = self._make_branches(num_branches, blocks, num_blocks,
- num_channels)
- self.fuse_layers = self._make_fuse_layers()
- self.relu = nn.ReLU(inplace=False)
-
- def _check_branches(self, num_branches, num_blocks, in_channels,
- num_channels):
- """Check branches configuration."""
- if num_branches != len(num_blocks):
- error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_BLOCKS(' \
- f'{len(num_blocks)})'
- raise ValueError(error_msg)
-
- if num_branches != len(num_channels):
- error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_CHANNELS(' \
- f'{len(num_channels)})'
- raise ValueError(error_msg)
-
- if num_branches != len(in_channels):
- error_msg = f'NUM_BRANCHES({num_branches}) <> NUM_INCHANNELS(' \
- f'{len(in_channels)})'
- raise ValueError(error_msg)
-
- def _make_one_branch(self,
- branch_index,
- block,
- num_blocks,
- num_channels,
- stride=1):
- """Build one branch."""
- downsample = None
- if stride != 1 or \
- self.in_channels[branch_index] != \
- num_channels[branch_index] * block.expansion:
- downsample = nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- self.in_channels[branch_index],
- num_channels[branch_index] * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- build_norm_layer(self.norm_cfg, num_channels[branch_index] *
- block.expansion)[1])
-
- layers = []
- layers.append(
- block(
- self.in_channels[branch_index],
- num_channels[branch_index],
- stride,
- downsample=downsample,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
- self.in_channels[branch_index] = \
- num_channels[branch_index] * block.expansion
- for i in range(1, num_blocks[branch_index]):
- layers.append(
- block(
- self.in_channels[branch_index],
- num_channels[branch_index],
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
-
- return nn.Sequential(*layers)
-
- def _make_branches(self, num_branches, block, num_blocks, num_channels):
- """Build multiple branch."""
- branches = []
-
- for i in range(num_branches):
- branches.append(
- self._make_one_branch(i, block, num_blocks, num_channels))
-
- return nn.ModuleList(branches)
-
- def _make_fuse_layers(self):
- """Build fuse layer."""
- if self.num_branches == 1:
- return None
-
- num_branches = self.num_branches
- in_channels = self.in_channels
- fuse_layers = []
- num_out_branches = num_branches if self.multiscale_output else 1
- for i in range(num_out_branches):
- fuse_layer = []
- for j in range(num_branches):
- if j > i:
- fuse_layer.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels[j],
- in_channels[i],
- kernel_size=1,
- stride=1,
- padding=0,
- bias=False),
- build_norm_layer(self.norm_cfg, in_channels[i])[1],
- # we set align_corners=False for HRNet
- Upsample(
- scale_factor=2**(j - i),
- mode='bilinear',
- align_corners=False)))
- elif j == i:
- fuse_layer.append(None)
- else:
- conv_downsamples = []
- for k in range(i - j):
- if k == i - j - 1:
- conv_downsamples.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels[j],
- in_channels[i],
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg,
- in_channels[i])[1]))
- else:
- conv_downsamples.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels[j],
- in_channels[j],
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg,
- in_channels[j])[1],
- nn.ReLU(inplace=False)))
- fuse_layer.append(nn.Sequential(*conv_downsamples))
- fuse_layers.append(nn.ModuleList(fuse_layer))
-
- return nn.ModuleList(fuse_layers)
-
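The fuse layers built above rescale branch j to branch i's resolution: a lower-resolution branch (j > i) gets a 1x1 conv followed by a 2**(j - i) bilinear upsample, while a higher-resolution branch (j < i) gets i - j stride-2 3x3 convs. A pure-Python sketch of that bookkeeping (the helper name is illustrative):

```python
# Resolution bookkeeping behind _make_fuse_layers above: branch k runs
# at 1/2**k of the stem resolution, so fusing branch j into branch i
# needs this rescaling.
def fuse_scale(i: int, j: int):
    """Return ('upsample', factor), ('downsample', n_stride2_convs), or ('identity', 0)."""
    if j > i:      # j is lower resolution: 1x1 conv then bilinear upsample
        return ('upsample', 2 ** (j - i))
    if j == i:
        return ('identity', 0)
    return ('downsample', i - j)   # j is higher resolution: i-j stride-2 3x3 convs
```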
- def forward(self, x):
- """Forward function."""
- if self.num_branches == 1:
- return [self.branches[0](x[0])]
-
- for i in range(self.num_branches):
- x[i] = self.branches[i](x[i])
-
- x_fuse = []
- for i in range(len(self.fuse_layers)):
- y = 0
- for j in range(self.num_branches):
- if i == j:
- y += x[j]
- elif j > i:
- y = y + resize(
- self.fuse_layers[i][j](x[j]),
- size=x[i].shape[2:],
- mode='bilinear',
- align_corners=False)
- else:
- y += self.fuse_layers[i][j](x[j])
- x_fuse.append(self.relu(y))
- return x_fuse
-
-
-@BACKBONES.register_module()
-class HRNet(nn.Module):
- """HRNet backbone.
-
- High-Resolution Representations for Labeling Pixels and Regions
- arXiv: https://arxiv.org/abs/1904.04514
-
- Args:
- extra (dict): detailed configuration for each stage of HRNet.
- in_channels (int): Number of input image channels. Normally 3.
- conv_cfg (dict): dictionary to construct and config conv layer.
- norm_cfg (dict): dictionary to construct and config norm layer.
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- zero_init_residual (bool): whether to use zero init for last norm layer
- in resblocks to let them behave as identity.
-
- Example:
- >>> from annotator.uniformer.mmseg.models import HRNet
- >>> import torch
- >>> extra = dict(
- >>> stage1=dict(
- >>> num_modules=1,
- >>> num_branches=1,
- >>> block='BOTTLENECK',
- >>> num_blocks=(4, ),
- >>> num_channels=(64, )),
- >>> stage2=dict(
- >>> num_modules=1,
- >>> num_branches=2,
- >>> block='BASIC',
- >>> num_blocks=(4, 4),
- >>> num_channels=(32, 64)),
- >>> stage3=dict(
- >>> num_modules=4,
- >>> num_branches=3,
- >>> block='BASIC',
- >>> num_blocks=(4, 4, 4),
- >>> num_channels=(32, 64, 128)),
- >>> stage4=dict(
- >>> num_modules=3,
- >>> num_branches=4,
- >>> block='BASIC',
- >>> num_blocks=(4, 4, 4, 4),
- >>> num_channels=(32, 64, 128, 256)))
- >>> self = HRNet(extra, in_channels=1)
- >>> self.eval()
- >>> inputs = torch.rand(1, 1, 32, 32)
- >>> level_outputs = self.forward(inputs)
- >>> for level_out in level_outputs:
- ... print(tuple(level_out.shape))
- (1, 32, 8, 8)
- (1, 64, 4, 4)
- (1, 128, 2, 2)
- (1, 256, 1, 1)
- """
-
- blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck}
-
- def __init__(self,
- extra,
- in_channels=3,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=False,
- with_cp=False,
- zero_init_residual=False):
- super(HRNet, self).__init__()
- self.extra = extra
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.norm_eval = norm_eval
- self.with_cp = with_cp
- self.zero_init_residual = zero_init_residual
-
- # stem net
- self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1)
- self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2)
-
- self.conv1 = build_conv_layer(
- self.conv_cfg,
- in_channels,
- 64,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False)
-
- self.add_module(self.norm1_name, norm1)
- self.conv2 = build_conv_layer(
- self.conv_cfg,
- 64,
- 64,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False)
-
- self.add_module(self.norm2_name, norm2)
- self.relu = nn.ReLU(inplace=True)
-
- # stage 1
- self.stage1_cfg = self.extra['stage1']
- num_channels = self.stage1_cfg['num_channels'][0]
- block_type = self.stage1_cfg['block']
- num_blocks = self.stage1_cfg['num_blocks'][0]
-
- block = self.blocks_dict[block_type]
- stage1_out_channels = num_channels * block.expansion
- self.layer1 = self._make_layer(block, 64, num_channels, num_blocks)
-
- # stage 2
- self.stage2_cfg = self.extra['stage2']
- num_channels = self.stage2_cfg['num_channels']
- block_type = self.stage2_cfg['block']
-
- block = self.blocks_dict[block_type]
- num_channels = [channel * block.expansion for channel in num_channels]
- self.transition1 = self._make_transition_layer([stage1_out_channels],
- num_channels)
- self.stage2, pre_stage_channels = self._make_stage(
- self.stage2_cfg, num_channels)
-
- # stage 3
- self.stage3_cfg = self.extra['stage3']
- num_channels = self.stage3_cfg['num_channels']
- block_type = self.stage3_cfg['block']
-
- block = self.blocks_dict[block_type]
- num_channels = [channel * block.expansion for channel in num_channels]
- self.transition2 = self._make_transition_layer(pre_stage_channels,
- num_channels)
- self.stage3, pre_stage_channels = self._make_stage(
- self.stage3_cfg, num_channels)
-
- # stage 4
- self.stage4_cfg = self.extra['stage4']
- num_channels = self.stage4_cfg['num_channels']
- block_type = self.stage4_cfg['block']
-
- block = self.blocks_dict[block_type]
- num_channels = [channel * block.expansion for channel in num_channels]
- self.transition3 = self._make_transition_layer(pre_stage_channels,
- num_channels)
- self.stage4, pre_stage_channels = self._make_stage(
- self.stage4_cfg, num_channels)
-
- @property
- def norm1(self):
- """nn.Module: the normalization layer named "norm1" """
- return getattr(self, self.norm1_name)
-
- @property
- def norm2(self):
- """nn.Module: the normalization layer named "norm2" """
- return getattr(self, self.norm2_name)
-
- def _make_transition_layer(self, num_channels_pre_layer,
- num_channels_cur_layer):
- """Make transition layer."""
- num_branches_cur = len(num_channels_cur_layer)
- num_branches_pre = len(num_channels_pre_layer)
-
- transition_layers = []
- for i in range(num_branches_cur):
- if i < num_branches_pre:
- if num_channels_cur_layer[i] != num_channels_pre_layer[i]:
- transition_layers.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- num_channels_pre_layer[i],
- num_channels_cur_layer[i],
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg,
- num_channels_cur_layer[i])[1],
- nn.ReLU(inplace=True)))
- else:
- transition_layers.append(None)
- else:
- conv_downsamples = []
- for j in range(i + 1 - num_branches_pre):
- in_channels = num_channels_pre_layer[-1]
- out_channels = num_channels_cur_layer[i] \
- if j == i - num_branches_pre else in_channels
- conv_downsamples.append(
- nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels,
- out_channels,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg, out_channels)[1],
- nn.ReLU(inplace=True)))
- transition_layers.append(nn.Sequential(*conv_downsamples))
-
- return nn.ModuleList(transition_layers)
-
- def _make_layer(self, block, inplanes, planes, blocks, stride=1):
- """Make each layer."""
- downsample = None
- if stride != 1 or inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- inplanes,
- planes * block.expansion,
- kernel_size=1,
- stride=stride,
- bias=False),
- build_norm_layer(self.norm_cfg, planes * block.expansion)[1])
-
- layers = []
- layers.append(
- block(
- inplanes,
- planes,
- stride,
- downsample=downsample,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
- inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(
- block(
- inplanes,
- planes,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
-
- return nn.Sequential(*layers)
-
- def _make_stage(self, layer_config, in_channels, multiscale_output=True):
- """Make each stage."""
- num_modules = layer_config['num_modules']
- num_branches = layer_config['num_branches']
- num_blocks = layer_config['num_blocks']
- num_channels = layer_config['num_channels']
- block = self.blocks_dict[layer_config['block']]
-
- hr_modules = []
- for i in range(num_modules):
- # multi_scale_output is only used for the last module
- if not multiscale_output and i == num_modules - 1:
- reset_multiscale_output = False
- else:
- reset_multiscale_output = True
-
- hr_modules.append(
- HRModule(
- num_branches,
- block,
- num_blocks,
- in_channels,
- num_channels,
- reset_multiscale_output,
- with_cp=self.with_cp,
- norm_cfg=self.norm_cfg,
- conv_cfg=self.conv_cfg))
-
- return nn.Sequential(*hr_modules), in_channels
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
-
- if self.zero_init_residual:
- for m in self.modules():
- if isinstance(m, Bottleneck):
- constant_init(m.norm3, 0)
- elif isinstance(m, BasicBlock):
- constant_init(m.norm2, 0)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- """Forward function."""
-
- x = self.conv1(x)
- x = self.norm1(x)
- x = self.relu(x)
- x = self.conv2(x)
- x = self.norm2(x)
- x = self.relu(x)
- x = self.layer1(x)
-
- x_list = []
- for i in range(self.stage2_cfg['num_branches']):
- if self.transition1[i] is not None:
- x_list.append(self.transition1[i](x))
- else:
- x_list.append(x)
- y_list = self.stage2(x_list)
-
- x_list = []
- for i in range(self.stage3_cfg['num_branches']):
- if self.transition2[i] is not None:
- x_list.append(self.transition2[i](y_list[-1]))
- else:
- x_list.append(y_list[i])
- y_list = self.stage3(x_list)
-
- x_list = []
- for i in range(self.stage4_cfg['num_branches']):
- if self.transition3[i] is not None:
- x_list.append(self.transition3[i](y_list[-1]))
- else:
- x_list.append(y_list[i])
- y_list = self.stage4(x_list)
-
- return y_list
-
- def train(self, mode=True):
- """Convert the model into training mode will keeping the normalization
- layer freezed."""
- super(HRNet, self).train(mode)
- if mode and self.norm_eval:
- for m in self.modules():
- # trick: eval have effect on BatchNorm only
- if isinstance(m, _BatchNorm):
- m.eval()
diff --git a/spaces/Rongjiehuang/ProDiff/tasks/tts/dataset_utils.py b/spaces/Rongjiehuang/ProDiff/tasks/tts/dataset_utils.py
deleted file mode 100644
index 488e616dd63cb8fdf30c47e037a2acc21c41c7f3..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/tasks/tts/dataset_utils.py
+++ /dev/null
@@ -1,260 +0,0 @@
-from utils.cwt import get_lf0_cwt
-import torch.optim
-import torch.utils.data
-import importlib
-from utils.indexed_datasets import IndexedDataset
-from utils.pitch_utils import norm_interp_f0, denorm_f0, f0_to_coarse
-import numpy as np
-from tasks.base_task import BaseDataset
-import torch
-import torch.optim
-import torch.utils.data
-import utils
-import torch.distributions
-from utils.hparams import hparams
-from utils.pitch_utils import norm_interp_f0
-from resemblyzer import VoiceEncoder
-import json
-from data_gen.tts.data_gen_utils import build_phone_encoder
-
-class BaseTTSDataset(BaseDataset):
- def __init__(self, prefix, shuffle=False, test_items=None, test_sizes=None, data_dir=None):
- super().__init__(shuffle)
- self.data_dir = hparams['binary_data_dir'] if data_dir is None else data_dir
- self.prefix = prefix
- self.hparams = hparams
- self.indexed_ds = None
- self.ext_mel2ph = None
-
- def load_size():
- self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
-
- if prefix == 'test' or hparams['inference']:
- if test_items is not None:
- self.indexed_ds, self.sizes = test_items, test_sizes
- else:
- load_size()
- if hparams['num_test_samples'] > 0:
- self.avail_idxs = [x for x in range(hparams['num_test_samples']) \
- if x < len(self.sizes)]
- if len(hparams['test_ids']) > 0:
- self.avail_idxs = hparams['test_ids'] + self.avail_idxs
- else:
- self.avail_idxs = list(range(len(self.sizes)))
- else:
- load_size()
- self.avail_idxs = list(range(len(self.sizes)))
-
- if hparams['min_frames'] > 0:
- self.avail_idxs = [
- x for x in self.avail_idxs if self.sizes[x] >= hparams['min_frames']]
- self.sizes = [self.sizes[i] for i in self.avail_idxs]
-
- def _get_item(self, index):
- if hasattr(self, 'avail_idxs') and self.avail_idxs is not None:
- index = self.avail_idxs[index]
- if self.indexed_ds is None:
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
- return self.indexed_ds[index]
-
- def __getitem__(self, index):
- hparams = self.hparams
- item = self._get_item(index)
- assert len(item['mel']) == self.sizes[index], (len(item['mel']), self.sizes[index])
- max_frames = hparams['max_frames']
- spec = torch.Tensor(item['mel'])[:max_frames]
- max_frames = spec.shape[0] // hparams['frames_multiple'] * hparams['frames_multiple']
- spec = spec[:max_frames]
- phone = torch.LongTensor(item['phone'][:hparams['max_input_tokens']])
- sample = {
- "id": index,
- "item_name": item['item_name'],
- "text": item['txt'],
- "txt_token": phone,
- "mel": spec,
- "mel_nonpadding": spec.abs().sum(-1) > 0,
- }
- if hparams['use_spk_embed']:
- sample["spk_embed"] = torch.Tensor(item['spk_embed'])
- if hparams['use_spk_id']:
- sample["spk_id"] = item['spk_id']
- return sample
-
- def collater(self, samples):
- if len(samples) == 0:
- return {}
- hparams = self.hparams
- id = torch.LongTensor([s['id'] for s in samples])
- item_names = [s['item_name'] for s in samples]
- text = [s['text'] for s in samples]
- txt_tokens = utils.collate_1d([s['txt_token'] for s in samples], 0)
- mels = utils.collate_2d([s['mel'] for s in samples], 0.0)
- txt_lengths = torch.LongTensor([s['txt_token'].numel() for s in samples])
- mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
-
- batch = {
- 'id': id,
- 'item_name': item_names,
- 'nsamples': len(samples),
- 'text': text,
- 'txt_tokens': txt_tokens,
- 'txt_lengths': txt_lengths,
- 'mels': mels,
- 'mel_lengths': mel_lengths,
- }
-
- if hparams['use_spk_embed']:
- spk_embed = torch.stack([s['spk_embed'] for s in samples])
- batch['spk_embed'] = spk_embed
- if hparams['use_spk_id']:
- spk_ids = torch.LongTensor([s['spk_id'] for s in samples])
- batch['spk_ids'] = spk_ids
- return batch
-
-
-class FastSpeechDataset(BaseTTSDataset):
- def __init__(self, prefix, shuffle=False, test_items=None, test_sizes=None, data_dir=None):
- super().__init__(prefix, shuffle, test_items, test_sizes, data_dir)
- self.f0_mean, self.f0_std = hparams.get('f0_mean', None), hparams.get('f0_std', None)
- if prefix == 'test' and hparams['test_input_dir'] != '':
- self.data_dir = hparams['test_input_dir']
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
- self.indexed_ds = sorted(self.indexed_ds, key=lambda item: item['item_name'])
- items = {}
- for i in range(len(self.indexed_ds)):
- speaker = self.indexed_ds[i]['item_name'].split('_')[0]
- if speaker not in items.keys():
- items[speaker] = [i]
- else:
- items[speaker].append(i)
-            sort_item = sorted(items.values(), key=len, reverse=True)
- self.avail_idxs = [n for a in sort_item for n in a][:hparams['num_test_samples']]
- self.indexed_ds, self.sizes = self.load_test_inputs()
- self.avail_idxs = [i for i in range(hparams['num_test_samples'])]
-
- if hparams['pitch_type'] == 'cwt':
- _, hparams['cwt_scales'] = get_lf0_cwt(np.ones(10))
-
- def __getitem__(self, index):
- sample = super(FastSpeechDataset, self).__getitem__(index)
- item = self._get_item(index)
- hparams = self.hparams
- max_frames = hparams['max_frames']
- spec = sample['mel']
- T = spec.shape[0]
- phone = sample['txt_token']
- sample['energy'] = (spec.exp() ** 2).sum(-1).sqrt()
- sample['mel2ph'] = mel2ph = torch.LongTensor(item['mel2ph'])[:T] if 'mel2ph' in item else None
- if hparams['use_pitch_embed']:
- assert 'f0' in item
- if hparams.get('normalize_pitch', False):
- f0 = item["f0"]
-                if (f0 > 0).any() and f0[f0 > 0].std() > 0:
- f0[f0 > 0] = (f0[f0 > 0] - f0[f0 > 0].mean()) / f0[f0 > 0].std() * hparams['f0_std'] + \
- hparams['f0_mean']
- f0[f0 > 0] = f0[f0 > 0].clip(min=60, max=500)
- pitch = f0_to_coarse(f0)
- pitch = torch.LongTensor(pitch[:max_frames])
- else:
- pitch = torch.LongTensor(item.get("pitch"))[:max_frames] if "pitch" in item else None
- f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams)
- uv = torch.FloatTensor(uv)
- f0 = torch.FloatTensor(f0)
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = torch.Tensor(item['cwt_spec'])[:max_frames]
- f0_mean = item.get('f0_mean', item.get('cwt_mean'))
- f0_std = item.get('f0_std', item.get('cwt_std'))
- sample.update({"cwt_spec": cwt_spec, "f0_mean": f0_mean, "f0_std": f0_std})
- elif hparams['pitch_type'] == 'ph':
- if "f0_ph" in item:
- f0 = torch.FloatTensor(item['f0_ph'])
- else:
- f0 = denorm_f0(f0, None, hparams)
- f0_phlevel_sum = torch.zeros_like(phone).float().scatter_add(0, mel2ph - 1, f0)
- f0_phlevel_num = torch.zeros_like(phone).float().scatter_add(
- 0, mel2ph - 1, torch.ones_like(f0)).clamp_min(1)
- f0_ph = f0_phlevel_sum / f0_phlevel_num
- f0, uv = norm_interp_f0(f0_ph, hparams)
- else:
- f0 = uv = torch.zeros_like(mel2ph)
- pitch = None
- sample["f0"], sample["uv"], sample["pitch"] = f0, uv, pitch
- if hparams['use_spk_embed']:
- sample["spk_embed"] = torch.Tensor(item['spk_embed'])
- if hparams['use_spk_id']:
- sample["spk_id"] = item['spk_id']
- return sample
-
- def collater(self, samples):
- if len(samples) == 0:
- return {}
- hparams = self.hparams
- batch = super(FastSpeechDataset, self).collater(samples)
- f0 = utils.collate_1d([s['f0'] for s in samples], 0.0)
- pitch = utils.collate_1d([s['pitch'] for s in samples]) if samples[0]['pitch'] is not None else None
- uv = utils.collate_1d([s['uv'] for s in samples])
- energy = utils.collate_1d([s['energy'] for s in samples], 0.0)
- mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \
- if samples[0]['mel2ph'] is not None else None
- batch.update({
- 'mel2ph': mel2ph,
- 'energy': energy,
- 'pitch': pitch,
- 'f0': f0,
- 'uv': uv,
- })
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = utils.collate_2d([s['cwt_spec'] for s in samples])
- f0_mean = torch.Tensor([s['f0_mean'] for s in samples])
- f0_std = torch.Tensor([s['f0_std'] for s in samples])
- batch.update({'cwt_spec': cwt_spec, 'f0_mean': f0_mean, 'f0_std': f0_std})
- return batch
-
- def load_test_inputs(self):
-        binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizer.BaseBinarizer')
- pkg = ".".join(binarizer_cls.split(".")[:-1])
- cls_name = binarizer_cls.split(".")[-1]
- binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
- ph_set_fn = f"{hparams['binary_data_dir']}/phone_set.json"
- ph_set = json.load(open(ph_set_fn, 'r'))
- print("| phone set: ", ph_set)
- phone_encoder = build_phone_encoder(hparams['binary_data_dir'])
- word_encoder = None
- voice_encoder = VoiceEncoder().cuda()
- encoder = [phone_encoder, word_encoder]
- sizes = []
- items = []
- for i in range(len(self.avail_idxs)):
- item = self._get_item(i)
-
- item2tgfn = f"{hparams['test_input_dir'].replace('binary', 'processed')}/mfa_outputs/{item['item_name']}.TextGrid"
- item = binarizer_cls.process_item(item['item_name'], item['ph'], item['txt'], item2tgfn,
- item['wav_fn'], item['spk_id'], encoder, hparams['binarization_args'])
- item['spk_embed'] = voice_encoder.embed_utterance(item['wav']) \
-                if hparams['binarization_args']['with_spk_embed'] else None  # only compute the speaker embedding if requested
- items.append(item)
- sizes.append(item['len'])
- return items, sizes
-
-class FastSpeechWordDataset(FastSpeechDataset):
- def __getitem__(self, index):
- sample = super(FastSpeechWordDataset, self).__getitem__(index)
- item = self._get_item(index)
- max_frames = hparams['max_frames']
- sample["ph_words"] = item["ph_words"]
- sample["word_tokens"] = torch.LongTensor(item["word_tokens"])
- sample["mel2word"] = torch.LongTensor(item.get("mel2word"))[:max_frames]
- sample["ph2word"] = torch.LongTensor(item['ph2word'][:hparams['max_input_tokens']])
- return sample
-
- def collater(self, samples):
- batch = super(FastSpeechWordDataset, self).collater(samples)
- ph_words = [s['ph_words'] for s in samples]
- batch['ph_words'] = ph_words
- word_tokens = utils.collate_1d([s['word_tokens'] for s in samples], 0)
- batch['word_tokens'] = word_tokens
- mel2word = utils.collate_1d([s['mel2word'] for s in samples], 0)
- batch['mel2word'] = mel2word
- ph2word = utils.collate_1d([s['ph2word'] for s in samples], 0)
- batch['ph2word'] = ph2word
- return batch
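The collaters above lean on `utils.collate_1d`/`utils.collate_2d`, whose implementation is not part of this diff. A minimal pure-Python sketch of the padding semantics assumed from their usage (the real helpers operate on torch tensors and return a stacked tensor):

```python
def collate_1d(values, pad_value=0):
    """Right-pad a list of 1-D sequences to the length of the longest one.

    List-based sketch of the torch helper used by the collaters above:
    each sequence keeps its values, shorter ones are padded on the right.
    """
    max_len = max(len(v) for v in values)
    return [list(v) + [pad_value] * (max_len - len(v)) for v in values]

# three token sequences of lengths 2, 4, 1 become a 3 x 4 "tensor"
batch = collate_1d([[5, 3], [7, 1, 2, 9], [4]], pad_value=0)
```

`collate_2d` follows the same pattern along the time axis of the mel spectrograms.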
diff --git a/spaces/Rothfeld/kmeans-pixelartifier/app.py b/spaces/Rothfeld/kmeans-pixelartifier/app.py
deleted file mode 100644
index 0db9c9b55e5313daca52419b64eab83dc9bf5450..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/kmeans-pixelartifier/app.py
+++ /dev/null
@@ -1,136 +0,0 @@
-# %%
-
-import cv2
-from sklearn.cluster import KMeans
-from PIL import Image
-import numpy as np
-import gradio.components as gc
-import gradio as gr
-
-
-def pixart(
- i,
- block_size=4,
- n_clusters=5,
- hsv_weights=[0, 0, 1],
- local_contrast_blur_radius=51, # has to be odd
- upscale=True,
- seed=None,
-):
- w, h = i.size
- dw = w//block_size
- dh = h//block_size
-
- # always resize with NEAREST to keep the original colors
- i = i.resize((dw, dh), Image.Resampling.NEAREST)
- ai = np.array(i)
-
- if seed is None:
- # seed = np.random.randint(0, 2**32 - 1)
- seed = np.random.randint(0, 2**16 - 1)
- km = KMeans(n_clusters=n_clusters, random_state=seed)
-
- hsv = cv2.cvtColor(ai, cv2.COLOR_RGB2HSV)
- bhsv = cv2.GaussianBlur(
- hsv,
- (local_contrast_blur_radius, local_contrast_blur_radius),
- 0,
- borderType=cv2.BORDER_REPLICATE
- )
- hsv32 = hsv.astype(np.float32)
- km.fit(
- hsv32.reshape(-1, hsv32.shape[-1]),
- # (sharp-blurred) gives large values if a pixel stands out from its surroundings
- # raise to the power of 4 to make the difference more pronounced.
- # this preserves rare specks of color by increasing the probability of them getting their own cluster
- sample_weight=(
- np.linalg.norm((hsv32 - bhsv), axis=-1).reshape(-1)
- ** 4
- )
- )
- label_grid = km.labels_.reshape(hsv32.shape[:2])
- centers = km.cluster_centers_ # hsv values
-
- def pick_representative_pixel(cluster):
- '''pick the representative pixel for a cluster'''
- most_sat_color = (hsv[label_grid == cluster] @
- np.array(hsv_weights)).argmax()
- return hsv[label_grid == cluster][most_sat_color]
- cluster_colors = np.array([
- pick_representative_pixel(c)
- for c in range(centers.shape[0])])
-
- # assign each pixel the color of its cluster
- ki = cluster_colors[label_grid]
-
- rgb = cv2.cvtColor(ki.astype(np.uint8), cv2.COLOR_HSV2RGB)
- i = Image.fromarray(rgb)
- if upscale:
- i = i.resize((w, h), Image.Resampling.NEAREST)
- return i, seed
-
-
-def query(
- i: Image.Image,
- block_size: str,
- n_clusters, # =5,
- hsv_weights, # ='0,0,1'
- local_contrast_blur_radius, # =51 has to be odd
- seed, # =42,
-):
- bs = float(block_size)
- w, h = i.size
- if bs < 1:
- blsz = int(bs * min(w, h))
- else:
- blsz = int(bs)
-
- hw = [float(w) for w in hsv_weights.split(',')]
-
- pxart, usedseed = pixart(
- i,
- block_size=blsz,
- n_clusters=n_clusters,
- hsv_weights=hw,
- local_contrast_blur_radius=local_contrast_blur_radius,
- upscale=True,
- seed=int(seed) if seed != '' else None,
- )
- return pxart.convert('P', palette=Image.Palette.ADAPTIVE, colors=n_clusters), usedseed
-
-
-# %%
-searchimage = gc.Image(
- # shape=(512, 512),
- label="Search image", type='pil')
-block_size = gc.Textbox(
- "0.01",
- label='Block Size ',
- placeholder="e.g. 8 for 8 pixels. 0.01 for 1% of min(w,h) (<1 for percentages, >= 1 for pixels)")
-palette_size = gc.Slider(
- 1, 256, 32, step=1, label='Palette Size (Number of Colors)')
-hsv_weights = gc.Textbox(
- "0,0,1",
- label='HSV Weights. Weights of the channels when selecting a "representative pixel"/centroid from a cluster of pixels',
- placeholder='e.g. 0,0,1 to only consider the V channel (which seems to work well)')
-lcbr = gc.Slider(
- 3, 512, 51, step=2, label='Blur radius to calculate local contrast')
-
-seed = gc.Textbox(
- "",
- label='Seed for the random number generator (empty to randomize)',
- placeholder='e.g. 42')
-
-outimage = gc.Image(shape=(224, 224), label="Output", type='pil')
-seedout = gc.Textbox(label='used seed')
-
-
-gr.Interface(
- query,
- [searchimage, block_size, palette_size, hsv_weights, lcbr, seed],
- [outimage, seedout],
- title="kmeans-Pixartifier",
- description=f"Turns images into pixel art using kmeans clustering",
- analytics_enabled=False,
- allow_flagging='never',
-).launch()
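The `query()` wrapper above accepts the block size either as absolute pixels or as a fraction of the shorter image side. A small standalone sketch of that convention, factored out for clarity (the function name is introduced here for illustration, it does not exist in the app):

```python
def resolve_block_size(block_size: float, w: int, h: int) -> int:
    """Interpret block_size as a fraction of min(w, h) if < 1, else as pixels."""
    if block_size < 1:
        return int(block_size * min(w, h))
    return int(block_size)

# the app's default "0.01" on a 1000x800 image means 1% of 800 = 8-pixel blocks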
diff --git a/spaces/Rothfeld/textual-inversion-init-token/app.py b/spaces/Rothfeld/textual-inversion-init-token/app.py
deleted file mode 100644
index 0fec6bb133547dde0cce9e0cca7b276136b01b8f..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/textual-inversion-init-token/app.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# %%
-import gradio.components as gc
-import gradio as gr
-
-import numpy as np
-import pandas as pd
-import torch
-from PIL import Image
-from transformers import CLIPModel, CLIPProcessor
-device = 'cpu'
-torch.no_grad().__enter__()
-torch.autocast('cuda').__enter__()
-
-# %%
-
-t = pd.read_pickle("clip_texts_1_fp16.pkl")
-words = t.reset_index().word
-wordsv = torch.tensor(t.values).to(device)
-
-# %%
-
-# %%
-model_name = "openai/clip-vit-large-patch14"
-mmm = CLIPModel.from_pretrained(model_name)
-mmm.eval()
-mmm.to(device)
-
-processor = CLIPProcessor.from_pretrained(model_name)
-
-# %%
-
-
-def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
- """ helper function to spherically interpolate two arrays v1 v2 """
- inputs_are_torch = False
- if not isinstance(v0, np.ndarray):
- inputs_are_torch = True
- input_device = v0.device
- v0 = v0.cpu().numpy()
- v1 = v1.cpu().numpy()
-
- dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
- if np.abs(dot) > DOT_THRESHOLD:
- v2 = (1 - t) * v0 + t * v1
- else:
- theta_0 = np.arccos(dot)
- sin_theta_0 = np.sin(theta_0)
- theta_t = theta_0 * t
- sin_theta_t = np.sin(theta_t)
- s0 = np.sin(theta_0 - theta_t) / sin_theta_0
- s1 = sin_theta_t / sin_theta_0
- v2 = s0 * v0 + s1 * v1
-
- if inputs_are_torch:
- v2 = torch.from_numpy(v2).to(input_device)
-
- return v2
-
-
-def query(text: str, img: Image.Image, limit: int, score_threshold: float, slerp_degree: float):
- if text != '':
- inp = processor(text=text, return_tensors='pt').to(device)
- rout = mmm.get_text_features(**inp)
- tout = rout.detach().cpu().numpy()[0]
- out = tout
-
- if img is not None:
- inp = processor(images=[img], return_tensors="pt",).to(device)
- rout = mmm.get_image_features(**inp)
- iout = rout.detach().cpu().numpy()[0]
- out = iout
-
- if text != '' and img is not None:
- out = slerp(slerp_degree, tout, iout)
-
- if out is not None:
- # calculate cosine similarity
- scores = np.dot(out, wordsv.T)
- # sort by score
- topk = (
- pd.concat(
- [words, pd.Series(scores, name='score')],
- axis=1
- )
- .sort_values('score', ascending=False)
- .query(f'score > {score_threshold}')
- .head(limit)
- )
-
- topwords = "\n".join(
- f'{word}: {score:.2f} '
- for _, word, score in topk.itertuples()
- )
-
- return topwords
-
-
-searchtext = gc.Textbox(lines=2, placeholder="Search text")
-searchimage = gc.Image(shape=(224, 224), label="Search image", type='pil')
-inp_limit = gc.Slider(1, 50, 10, step=1, label='Limit')
-score_threshold = gc.Slider(0, 30, 0, step=.5, label='Score threshold')
-slerp_degree = gc.Slider(
- 0, 1, 0.5, step=.01, label='Slerp degree (if both text and image are provided)\nFinds a midpoint between image and text embeddings')
-
-
-dsurl = 'https://www.kaggle.com/datasets/yk1598/479k-english-words'
-gr.Interface(
- query,
- [searchtext, searchimage, inp_limit, score_threshold, slerp_degree],
- [gc.Textbox(label='Top words')],
- title="Initial Token Finder for Textual Inversion",
- description=f"find the closest single token word for a given text and/or image.\nbased on {model_name}.\n\nData: {dsurl}",
- analytics_enabled=False,
- allow_flagging='never',
-).launch()
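The `slerp` helper above interpolates along the arc between two embeddings rather than along the straight chord, falling back to linear interpolation when the vectors are nearly parallel. A stripped-down, dependency-free version for plain 2-D vectors (the numpy/torch device handling from the app omitted) shows the geometry:

```python
import math

def slerp2(t, v0, v1, dot_threshold=0.9995):
    """Spherical interpolation between two 2-D vectors (stdlib-only sketch)."""
    n0 = math.hypot(v0[0], v0[1])
    n1 = math.hypot(v1[0], v1[1])
    dot = (v0[0] * v1[0] + v0[1] * v1[1]) / (n0 * n1)
    if abs(dot) > dot_threshold:
        # nearly parallel vectors: plain linear interpolation is numerically safer
        return tuple((1 - t) * a + t * b for a, b in zip(v0, v1))
    theta_0 = math.acos(dot)          # angle between the two vectors
    theta_t = theta_0 * t             # angle swept at interpolation fraction t
    s0 = math.sin(theta_0 - theta_t) / math.sin(theta_0)
    s1 = math.sin(theta_t) / math.sin(theta_0)
    return tuple(s0 * a + s1 * b for a, b in zip(v0, v1))

# halfway along the quarter arc from (1, 0) to (0, 1): (sqrt(2)/2, sqrt(2)/2)
mid = slerp2(0.5, (1.0, 0.0), (0.0, 1.0))
```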
diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets_33966KB.py b/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets_33966KB.py
deleted file mode 100644
index 73a5b836177b706c306e27875f8391c1aed4b948..0000000000000000000000000000000000000000
--- a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets_33966KB.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import layers_33966KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16, 32)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 16)
- self.stg1_high_band_net = BaseASPPNet(2, 16)
-
- self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(8, 16)
-
- self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(16, 32)
-
- self.out = nn.Conv2d(32, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(16, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(16, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
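At inference time, `CascadedASPPNet.forward` optionally sharpens the sigmoid mask by raising it to a power: bins below `split_bin` get the milder exponent `1 + value/3`, bins above it the stronger `1 + value`, pushing ambiguous mask values toward zero and suppressing bleed-through more aggressively at high frequencies. A list-based sketch of that post-processing step, with the tensor slicing replaced by a loop:

```python
def sharpen_mask(mask, split_bin, value):
    """Raise sigmoid mask values in [0, 1] to a per-band power.

    Sketch of the "aggressiveness" step in CascadedASPPNet.forward, with the
    frequency axis flattened to a single list of mask values.
    """
    out = []
    for i, m in enumerate(mask):
        exponent = 1 + value / 3 if i < split_bin else 1 + value
        out.append(m ** exponent)
    return out

# values below 1 shrink when raised to a power > 1, so a 0.5 entry is
# attenuated much harder above split_bin than below it
```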
diff --git a/spaces/Silentlin/DiffSinger/modules/fastspeech/tts_modules.py b/spaces/Silentlin/DiffSinger/modules/fastspeech/tts_modules.py
deleted file mode 100644
index 195eff279de781dd2565cfb2da65533c58f6c332..0000000000000000000000000000000000000000
--- a/spaces/Silentlin/DiffSinger/modules/fastspeech/tts_modules.py
+++ /dev/null
@@ -1,357 +0,0 @@
-import logging
-import math
-
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-from modules.commons.espnet_positional_embedding import RelPositionalEncoding
-from modules.commons.common_layers import SinusoidalPositionalEmbedding, Linear, EncSALayer, DecSALayer, BatchNorm1dTBC
-from utils.hparams import hparams
-
-DEFAULT_MAX_SOURCE_POSITIONS = 2000
-DEFAULT_MAX_TARGET_POSITIONS = 2000
-
-
-class TransformerEncoderLayer(nn.Module):
- def __init__(self, hidden_size, dropout, kernel_size=None, num_heads=2, norm='ln'):
- super().__init__()
- self.hidden_size = hidden_size
- self.dropout = dropout
- self.num_heads = num_heads
- self.op = EncSALayer(
- hidden_size, num_heads, dropout=dropout,
- attention_dropout=0.0, relu_dropout=dropout,
- kernel_size=kernel_size
- if kernel_size is not None else hparams['enc_ffn_kernel_size'],
- padding=hparams['ffn_padding'],
- norm=norm, act=hparams['ffn_act'])
-
- def forward(self, x, **kwargs):
- return self.op(x, **kwargs)
-
-
-######################
-# fastspeech modules
-######################
-class LayerNorm(torch.nn.LayerNorm):
- """Layer normalization module.
- :param int nout: output dim size
- :param int dim: dimension to be normalized
- """
-
- def __init__(self, nout, dim=-1):
- """Construct an LayerNorm object."""
- super(LayerNorm, self).__init__(nout, eps=1e-12)
- self.dim = dim
-
- def forward(self, x):
- """Apply layer normalization.
- :param torch.Tensor x: input tensor
- :return: layer normalized tensor
- :rtype torch.Tensor
- """
- if self.dim == -1:
- return super(LayerNorm, self).forward(x)
- return super(LayerNorm, self).forward(x.transpose(1, -1)).transpose(1, -1)
-
-
-class DurationPredictor(torch.nn.Module):
- """Duration predictor module.
- This is a module of duration predictor described in `FastSpeech: Fast, Robust and Controllable Text to Speech`_.
- The duration predictor predicts a duration of each frame in log domain from the hidden embeddings of encoder.
- .. _`FastSpeech: Fast, Robust and Controllable Text to Speech`:
- https://arxiv.org/pdf/1905.09263.pdf
- Note:
- The calculation domain of outputs is different between in `forward` and in `inference`. In `forward`,
- the outputs are calculated in log domain but in `inference`, those are calculated in linear domain.
- """
-
- def __init__(self, idim, n_layers=2, n_chans=384, kernel_size=3, dropout_rate=0.1, offset=1.0, padding='SAME'):
- """Initilize duration predictor module.
- Args:
- idim (int): Input dimension.
- n_layers (int, optional): Number of convolutional layers.
- n_chans (int, optional): Number of channels of convolutional layers.
- kernel_size (int, optional): Kernel size of convolutional layers.
- dropout_rate (float, optional): Dropout rate.
- offset (float, optional): Offset value to avoid nan in log domain.
- """
- super(DurationPredictor, self).__init__()
- self.offset = offset
- self.conv = torch.nn.ModuleList()
- self.kernel_size = kernel_size
- self.padding = padding
- for idx in range(n_layers):
- in_chans = idim if idx == 0 else n_chans
- self.conv += [torch.nn.Sequential(
- torch.nn.ConstantPad1d(((kernel_size - 1) // 2, (kernel_size - 1) // 2)
- if padding == 'SAME'
- else (kernel_size - 1, 0), 0),
- torch.nn.Conv1d(in_chans, n_chans, kernel_size, stride=1, padding=0),
- torch.nn.ReLU(),
- LayerNorm(n_chans, dim=1),
- torch.nn.Dropout(dropout_rate)
- )]
- if hparams['dur_loss'] in ['mse', 'huber']:
- odims = 1
- elif hparams['dur_loss'] == 'mog':
- odims = 15
- elif hparams['dur_loss'] == 'crf':
- odims = 32
- from torchcrf import CRF
- self.crf = CRF(odims, batch_first=True)
- self.linear = torch.nn.Linear(n_chans, odims)
-
- def _forward(self, xs, x_masks=None, is_inference=False):
- xs = xs.transpose(1, -1) # (B, idim, Tmax)
- for f in self.conv:
- xs = f(xs) # (B, C, Tmax)
- if x_masks is not None:
- xs = xs * (1 - x_masks.float())[:, None, :]
-
- xs = self.linear(xs.transpose(1, -1)) # [B, T, C]
- xs = xs * (1 - x_masks.float())[:, :, None] # (B, T, C)
- if is_inference:
- return self.out2dur(xs), xs
- else:
- if hparams['dur_loss'] in ['mse']:
- xs = xs.squeeze(-1) # (B, Tmax)
- return xs
-
- def out2dur(self, xs):
- if hparams['dur_loss'] in ['mse']:
- # NOTE: calculate in log domain
- xs = xs.squeeze(-1) # (B, Tmax)
- dur = torch.clamp(torch.round(xs.exp() - self.offset), min=0).long() # avoid negative value
- elif hparams['dur_loss'] == 'mog':
-            raise NotImplementedError
- elif hparams['dur_loss'] == 'crf':
- dur = torch.LongTensor(self.crf.decode(xs)).cuda()
- return dur
-
- def forward(self, xs, x_masks=None):
- """Calculate forward propagation.
- Args:
- xs (Tensor): Batch of input sequences (B, Tmax, idim).
- x_masks (ByteTensor, optional): Batch of masks indicating padded part (B, Tmax).
- Returns:
- Tensor: Batch of predicted durations in log domain (B, Tmax).
- """
- return self._forward(xs, x_masks, False)
-
- def inference(self, xs, x_masks=None):
- """Inference duration.
- Args:
- xs (Tensor): Batch of input sequences (B, Tmax, idim).
- x_masks (ByteTensor, optional): Batch of masks indicating padded part (B, Tmax).
- Returns:
- LongTensor: Batch of predicted durations in linear domain (B, Tmax).
- """
- return self._forward(xs, x_masks, True)
-
-
-class LengthRegulator(torch.nn.Module):
- def __init__(self, pad_value=0.0):
- super(LengthRegulator, self).__init__()
- self.pad_value = pad_value
-
- def forward(self, dur, dur_padding=None, alpha=1.0):
- """
- Example (no batch dim version):
- 1. dur = [2,2,3]
- 2. token_idx = [[1],[2],[3]], dur_cumsum = [2,4,7], dur_cumsum_prev = [0,2,4]
- 3. token_mask = [[1,1,0,0,0,0,0],
- [0,0,1,1,0,0,0],
- [0,0,0,0,1,1,1]]
- 4. token_idx * token_mask = [[1,1,0,0,0,0,0],
- [0,0,2,2,0,0,0],
- [0,0,0,0,3,3,3]]
- 5. (token_idx * token_mask).sum(0) = [1,1,2,2,3,3,3]
-
- :param dur: Batch of durations of each frame (B, T_txt)
- :param dur_padding: Batch of padding of each frame (B, T_txt)
- :param alpha: duration rescale coefficient
- :return:
- mel2ph (B, T_speech)
- """
- assert alpha > 0
- dur = torch.round(dur.float() * alpha).long()
- if dur_padding is not None:
- dur = dur * (1 - dur_padding.long())
- token_idx = torch.arange(1, dur.shape[1] + 1)[None, :, None].to(dur.device)
- dur_cumsum = torch.cumsum(dur, 1)
- dur_cumsum_prev = F.pad(dur_cumsum, [1, -1], mode='constant', value=0)
-
- pos_idx = torch.arange(dur.sum(-1).max())[None, None].to(dur.device)
- token_mask = (pos_idx >= dur_cumsum_prev[:, :, None]) & (pos_idx < dur_cumsum[:, :, None])
- mel2ph = (token_idx * token_mask.long()).sum(1)
- return mel2ph
-
-
-class PitchPredictor(torch.nn.Module):
- def __init__(self, idim, n_layers=5, n_chans=384, odim=2, kernel_size=5,
- dropout_rate=0.1, padding='SAME'):
- """Initilize pitch predictor module.
- Args:
- idim (int): Input dimension.
- n_layers (int, optional): Number of convolutional layers.
- n_chans (int, optional): Number of channels of convolutional layers.
- kernel_size (int, optional): Kernel size of convolutional layers.
- dropout_rate (float, optional): Dropout rate.
- """
- super(PitchPredictor, self).__init__()
- self.conv = torch.nn.ModuleList()
- self.kernel_size = kernel_size
- self.padding = padding
- for idx in range(n_layers):
- in_chans = idim if idx == 0 else n_chans
- self.conv += [torch.nn.Sequential(
- torch.nn.ConstantPad1d(((kernel_size - 1) // 2, (kernel_size - 1) // 2)
- if padding == 'SAME'
- else (kernel_size - 1, 0), 0),
- torch.nn.Conv1d(in_chans, n_chans, kernel_size, stride=1, padding=0),
- torch.nn.ReLU(),
- LayerNorm(n_chans, dim=1),
- torch.nn.Dropout(dropout_rate)
- )]
- self.linear = torch.nn.Linear(n_chans, odim)
- self.embed_positions = SinusoidalPositionalEmbedding(idim, 0, init_size=4096)
- self.pos_embed_alpha = nn.Parameter(torch.Tensor([1]))
-
- def forward(self, xs):
- """
-
- :param xs: [B, T, H]
- :return: [B, T, H]
- """
- positions = self.pos_embed_alpha * self.embed_positions(xs[..., 0])
- xs = xs + positions
- xs = xs.transpose(1, -1) # (B, idim, Tmax)
- for f in self.conv:
- xs = f(xs) # (B, C, Tmax)
- # NOTE: calculate in log domain
- xs = self.linear(xs.transpose(1, -1)) # (B, Tmax, H)
- return xs
-
-
-class EnergyPredictor(PitchPredictor):
- pass
-
-
-def mel2ph_to_dur(mel2ph, T_txt, max_dur=None):
- B, _ = mel2ph.shape
- dur = mel2ph.new_zeros(B, T_txt + 1).scatter_add(1, mel2ph, torch.ones_like(mel2ph))
- dur = dur[:, 1:]
- if max_dur is not None:
- dur = dur.clamp(max=max_dur)
- return dur
-
-
-class FFTBlocks(nn.Module):
- def __init__(self, hidden_size, num_layers, ffn_kernel_size=9, dropout=None, num_heads=2,
- use_pos_embed=True, use_last_norm=True, norm='ln', use_pos_embed_alpha=True):
- super().__init__()
- self.num_layers = num_layers
- embed_dim = self.hidden_size = hidden_size
- self.dropout = dropout if dropout is not None else hparams['dropout']
- self.use_pos_embed = use_pos_embed
- self.use_last_norm = use_last_norm
- if use_pos_embed:
- self.max_source_positions = DEFAULT_MAX_TARGET_POSITIONS
- self.padding_idx = 0
- self.pos_embed_alpha = nn.Parameter(torch.Tensor([1])) if use_pos_embed_alpha else 1
- self.embed_positions = SinusoidalPositionalEmbedding(
- embed_dim, self.padding_idx, init_size=DEFAULT_MAX_TARGET_POSITIONS,
- )
-
- self.layers = nn.ModuleList([])
- self.layers.extend([
- TransformerEncoderLayer(self.hidden_size, self.dropout,
- kernel_size=ffn_kernel_size, num_heads=num_heads)
- for _ in range(self.num_layers)
- ])
- if self.use_last_norm:
- if norm == 'ln':
- self.layer_norm = nn.LayerNorm(embed_dim)
- elif norm == 'bn':
- self.layer_norm = BatchNorm1dTBC(embed_dim)
- else:
- self.layer_norm = None
-
- def forward(self, x, padding_mask=None, attn_mask=None, return_hiddens=False):
- """
- :param x: [B, T, C]
- :param padding_mask: [B, T]
- :return: [B, T, C] or [L, B, T, C]
- """
- padding_mask = x.abs().sum(-1).eq(0).data if padding_mask is None else padding_mask
- nonpadding_mask_TB = 1 - padding_mask.transpose(0, 1).float()[:, :, None] # [T, B, 1]
- if self.use_pos_embed:
- positions = self.pos_embed_alpha * self.embed_positions(x[..., 0])
- x = x + positions
- x = F.dropout(x, p=self.dropout, training=self.training)
- # B x T x C -> T x B x C
- x = x.transpose(0, 1) * nonpadding_mask_TB
- hiddens = []
- for layer in self.layers:
- x = layer(x, encoder_padding_mask=padding_mask, attn_mask=attn_mask) * nonpadding_mask_TB
- hiddens.append(x)
- if self.use_last_norm:
- x = self.layer_norm(x) * nonpadding_mask_TB
- if return_hiddens:
- x = torch.stack(hiddens, 0) # [L, T, B, C]
- x = x.transpose(1, 2) # [L, B, T, C]
- else:
- x = x.transpose(0, 1) # [B, T, C]
- return x
-
-
-class FastspeechEncoder(FFTBlocks):
- def __init__(self, embed_tokens, hidden_size=None, num_layers=None, kernel_size=None, num_heads=2):
- hidden_size = hparams['hidden_size'] if hidden_size is None else hidden_size
- kernel_size = hparams['enc_ffn_kernel_size'] if kernel_size is None else kernel_size
-        num_layers = hparams['enc_layers'] if num_layers is None else num_layers
- super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads,
- use_pos_embed=False) # use_pos_embed_alpha for compatibility
- self.embed_tokens = embed_tokens
- self.embed_scale = math.sqrt(hidden_size)
- self.padding_idx = 0
- if hparams.get('rel_pos') is not None and hparams['rel_pos']:
- self.embed_positions = RelPositionalEncoding(hidden_size, dropout_rate=0.0)
- else:
- self.embed_positions = SinusoidalPositionalEmbedding(
- hidden_size, self.padding_idx, init_size=DEFAULT_MAX_TARGET_POSITIONS,
- )
-
- def forward(self, txt_tokens):
- """
-
- :param txt_tokens: [B, T]
- :return: {
- 'encoder_out': [T x B x C]
- }
- """
- encoder_padding_mask = txt_tokens.eq(self.padding_idx).data
- x = self.forward_embedding(txt_tokens) # [B, T, H]
- x = super(FastspeechEncoder, self).forward(x, encoder_padding_mask)
- return x
-
- def forward_embedding(self, txt_tokens):
- # embed tokens and positions
- x = self.embed_scale * self.embed_tokens(txt_tokens)
- if hparams['use_pos_embed']:
- positions = self.embed_positions(txt_tokens)
- x = x + positions
- x = F.dropout(x, p=self.dropout, training=self.training)
- return x
-
-
-class FastspeechDecoder(FFTBlocks):
- def __init__(self, hidden_size=None, num_layers=None, kernel_size=None, num_heads=None):
- num_heads = hparams['num_heads'] if num_heads is None else num_heads
- hidden_size = hparams['hidden_size'] if hidden_size is None else hidden_size
- kernel_size = hparams['dec_ffn_kernel_size'] if kernel_size is None else kernel_size
- num_layers = hparams['dec_layers'] if num_layers is None else num_layers
- super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads)
-
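The `LengthRegulator` docstring above walks through `dur = [2, 2, 3]` expanding to `mel2ph = [1, 1, 2, 2, 3, 3, 3]`, and `mel2ph_to_dur` inverts that mapping. A dependency-free sketch of the pair (single sequence, no batching, no alpha rescaling or padding mask):

```python
def expand_durations(dur):
    """Expand per-token durations into a frame-to-token index map (1-based)."""
    mel2ph = []
    for token_idx, d in enumerate(dur, start=1):
        mel2ph.extend([token_idx] * d)
    return mel2ph

def mel2ph_to_dur(mel2ph, n_tokens):
    """Invert expand_durations: count the frames assigned to each token."""
    dur = [0] * n_tokens
    for token_idx in mel2ph:
        dur[token_idx - 1] += 1
    return dur
```

The torch versions above compute the same thing vectorized: `expand_durations` via the cumulative-sum/mask trick in the docstring, and `mel2ph_to_dur` via `scatter_add`.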
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/__init__.py
deleted file mode 100644
index 0bd8ec5e3b566d8a2d43a0904fd49db7862a21eb..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/altair/utils/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from .core import (
- infer_vegalite_type,
- infer_encoding_types,
- sanitize_dataframe,
- parse_shorthand,
- use_signature,
- update_nested,
- display_traceback,
- SchemaBase,
-)
-from .html import spec_to_html
-from .plugin_registry import PluginRegistry
-from .deprecation import AltairDeprecationWarning
-from .schemapi import Undefined
-
-
-__all__ = (
- "infer_vegalite_type",
- "infer_encoding_types",
- "sanitize_dataframe",
- "spec_to_html",
- "parse_shorthand",
- "use_signature",
- "update_nested",
- "display_traceback",
- "AltairDeprecationWarning",
- "SchemaBase",
- "Undefined",
- "PluginRegistry",
-)
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/test_chroma.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/test_chroma.py
deleted file mode 100644
index 8deec45e17e09f2c8c0cd64b80995fe67bbb8500..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/test/test_chroma.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import unittest
-import os
-from unittest.mock import patch, Mock
-
-import chromadb
-import chromadb.config
-from chromadb.db import DB
-
-
-class GetDBTest(unittest.TestCase):
- @patch("chromadb.db.duckdb.DuckDB", autospec=True)
- def test_default_db(self, mock: Mock) -> None:
- system = chromadb.config.System(
- chromadb.config.Settings(persist_directory="./foo")
- )
- system.instance(DB)
- assert mock.called
-
- @patch("chromadb.db.duckdb.PersistentDuckDB", autospec=True)
- def test_persistent_duckdb(self, mock: Mock) -> None:
- system = chromadb.config.System(
- chromadb.config.Settings(
- chroma_db_impl="duckdb+parquet", persist_directory="./foo"
- )
- )
- system.instance(DB)
- assert mock.called
-
- @patch("chromadb.db.clickhouse.Clickhouse", autospec=True)
- def test_clickhouse(self, mock: Mock) -> None:
- system = chromadb.config.System(
- chromadb.config.Settings(
- chroma_db_impl="clickhouse",
- persist_directory="./foo",
- clickhouse_host="foo",
- clickhouse_port="666",
- )
- )
- system.instance(DB)
- assert mock.called
-
-
-class GetAPITest(unittest.TestCase):
- @patch("chromadb.api.local.LocalAPI", autospec=True)
- @patch.dict(os.environ, {}, clear=True)
- def test_local(self, mock_api: Mock) -> None:
- chromadb.Client(chromadb.config.Settings(persist_directory="./foo"))
- assert mock_api.called
-
- @patch("chromadb.db.duckdb.DuckDB", autospec=True)
- @patch.dict(os.environ, {}, clear=True)
- def test_local_db(self, mock_db: Mock) -> None:
- chromadb.Client(chromadb.config.Settings(persist_directory="./foo"))
- assert mock_db.called
-
- @patch("chromadb.api.fastapi.FastAPI", autospec=True)
- @patch.dict(os.environ, {}, clear=True)
- def test_fastapi(self, mock: Mock) -> None:
- chromadb.Client(
- chromadb.config.Settings(
- chroma_api_impl="rest",
- persist_directory="./foo",
- chroma_server_host="foo",
- chroma_server_http_port="80",
- )
- )
- assert mock.called
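The deleted chromadb tests above all follow one `unittest.mock` pattern: patch the backend class with `autospec=True`, exercise the code path, then assert the mock was called. A minimal self-contained sketch of the same pattern; `backend.Database` and `make_client` are hypothetical stand-ins, not the real chromadb API:

```python
from unittest.mock import patch

# Hypothetical stand-ins playing the role of chromadb.db.duckdb.DuckDB
# and chromadb.Client in the deleted tests.
class backend:
    class Database:
        def connect(self):
            raise RuntimeError("real backend must not run under test")

def make_client():
    db = backend.Database()
    db.connect()
    return db

# autospec=True gives the mock the real class's signature, so a call
# that would not match the real API fails loudly.
with patch.object(backend, "Database", autospec=True) as mock_db:
    make_client()

assert mock_db.called                       # the class was instantiated
assert mock_db.return_value.connect.called  # and connect() was invoked
```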
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/registry.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/registry.py
deleted file mode 100644
index 5623b80ad96ae9a66ba397b94752b9b18729dad4..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/registry.py
+++ /dev/null
@@ -1,695 +0,0 @@
-#!~/.wine/drive_c/Python25/python.exe
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2009-2014, Mario Vilas
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice,this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution.
-# * Neither the name of the copyright holder nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
-# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-
-"""
-Registry access.
-
-@group Instrumentation:
- Registry, RegistryKey
-"""
-
-from __future__ import with_statement
-
-__revision__ = "$Id$"
-
-__all__ = ['Registry']
-
-import sys
-from winappdbg import win32
-from winappdbg import compat
-import collections
-import warnings
-
-#==============================================================================
-
-class _RegistryContainer (object):
- """
- Base class for L{Registry} and L{RegistryKey}.
- """
-
- # Dummy object to detect empty arguments.
- class __EmptyArgument:
- pass
- __emptyArgument = __EmptyArgument()
-
- def __init__(self):
- self.__default = None
-
- def has_key(self, name):
- return name in self
-
- def get(self, name, default=__emptyArgument):
- try:
- return self[name]
- except KeyError:
- if default is RegistryKey.__emptyArgument:
- return self.__default
- return default
-
- def setdefault(self, default):
- self.__default = default
-
- def __iter__(self):
- return compat.iterkeys(self)
-
-#==============================================================================
-
-class RegistryKey (_RegistryContainer):
- """
- Exposes a single Windows Registry key as a dictionary-like object.
-
- @see: L{Registry}
-
- @type path: str
- @ivar path: Registry key path.
-
- @type handle: L{win32.RegistryKeyHandle}
- @ivar handle: Registry key handle.
- """
-
- def __init__(self, path, handle):
- """
- @type path: str
- @param path: Registry key path.
-
- @type handle: L{win32.RegistryKeyHandle}
- @param handle: Registry key handle.
- """
- super(RegistryKey, self).__init__()
- if path.endswith('\\'):
- path = path[:-1]
- self._path = path
- self._handle = handle
-
- @property
- def path(self):
- return self._path
-
- @property
- def handle(self):
- #if not self._handle:
- # msg = "This Registry key handle has already been closed."
- # raise RuntimeError(msg)
- return self._handle
-
- #def close(self):
- # """
- # Close the Registry key handle, freeing its resources. It cannot be
- # used again after calling this method.
- #
- # @note: This method will be called automatically by the garbage
- # collector, and upon exiting a "with" block.
- #
- # @raise RuntimeError: This Registry key handle has already been closed.
- # """
- # self.handle.close()
- #
- #def __enter__(self):
- # """
- # Compatibility with the "C{with}" Python statement.
- # """
- # return self
- #
- #def __exit__(self, type, value, traceback):
- # """
- # Compatibility with the "C{with}" Python statement.
- # """
- # try:
- # self.close()
- # except Exception:
- # pass
-
- def __contains__(self, name):
- try:
- win32.RegQueryValueEx(self.handle, name, False)
- return True
- except WindowsError:
- e = sys.exc_info()[1]
- if e.winerror == win32.ERROR_FILE_NOT_FOUND:
- return False
- raise
-
- def __getitem__(self, name):
- try:
- return win32.RegQueryValueEx(self.handle, name)[0]
- except WindowsError:
- e = sys.exc_info()[1]
- if e.winerror == win32.ERROR_FILE_NOT_FOUND:
- raise KeyError(name)
- raise
-
- def __setitem__(self, name, value):
- win32.RegSetValueEx(self.handle, name, value)
-
- def __delitem__(self, name):
- win32.RegDeleteValue(self.handle, name)
-
- def iterkeys(self):
- handle = self.handle
- index = 0
- while 1:
- resp = win32.RegEnumValue(handle, index, False)
- if resp is None:
- break
- yield resp[0]
- index += 1
-
- def itervalues(self):
- handle = self.handle
- index = 0
- while 1:
- resp = win32.RegEnumValue(handle, index)
- if resp is None:
- break
- yield resp[2]
- index += 1
-
- def iteritems(self):
- handle = self.handle
- index = 0
- while 1:
- resp = win32.RegEnumValue(handle, index)
- if resp is None:
- break
- yield resp[0], resp[2]
- index += 1
-
- def keys(self):
- # return list(self.iterkeys()) # that can't be optimized by psyco
- handle = self.handle
- keys = list()
- index = 0
- while 1:
- resp = win32.RegEnumValue(handle, index, False)
- if resp is None:
- break
- keys.append(resp[0])
- index += 1
- return keys
-
- def values(self):
- # return list(self.itervalues()) # that can't be optimized by psyco
- handle = self.handle
- values = list()
- index = 0
- while 1:
- resp = win32.RegEnumValue(handle, index)
- if resp is None:
- break
- values.append(resp[2])
- index += 1
- return values
-
- def items(self):
- # return list(self.iteritems()) # that can't be optimized by psyco
- handle = self.handle
- items = list()
- index = 0
- while 1:
- resp = win32.RegEnumValue(handle, index)
- if resp is None:
- break
- items.append( (resp[0], resp[2]) )
- index += 1
- return items
-
- def get_value_type(self, name):
- """
- Retrieves the low-level data type for the given value.
-
- @type name: str
- @param name: Registry value name.
-
- @rtype: int
- @return: One of the following constants:
- - L{win32.REG_NONE} (0)
- - L{win32.REG_SZ} (1)
- - L{win32.REG_EXPAND_SZ} (2)
- - L{win32.REG_BINARY} (3)
- - L{win32.REG_DWORD} (4)
- - L{win32.REG_DWORD_BIG_ENDIAN} (5)
- - L{win32.REG_LINK} (6)
- - L{win32.REG_MULTI_SZ} (7)
- - L{win32.REG_RESOURCE_LIST} (8)
- - L{win32.REG_FULL_RESOURCE_DESCRIPTOR} (9)
- - L{win32.REG_RESOURCE_REQUIREMENTS_LIST} (10)
- - L{win32.REG_QWORD} (11)
-
- @raise KeyError: The specified value could not be found.
- """
- try:
- return win32.RegQueryValueEx(self.handle, name)[1]
- except WindowsError:
- e = sys.exc_info()[1]
- if e.winerror == win32.ERROR_FILE_NOT_FOUND:
- raise KeyError(name)
- raise
-
- def clear(self):
- handle = self.handle
- while 1:
- resp = win32.RegEnumValue(handle, 0, False)
- if resp is None:
- break
- win32.RegDeleteValue(handle, resp[0])
-
- def __str__(self):
- default = self['']
- return str(default)
-
- def __unicode__(self):
- default = self[u'']
- return compat.unicode(default)
-
- def __repr__(self):
-        return '<Registry key: "%s">' % self._path
-
- def iterchildren(self):
- """
- Iterates the subkeys for this Registry key.
-
- @rtype: iter of L{RegistryKey}
- @return: Iterator of subkeys.
- """
- handle = self.handle
- index = 0
- while 1:
- subkey = win32.RegEnumKey(handle, index)
- if subkey is None:
- break
- yield self.child(subkey)
- index += 1
-
- def children(self):
- """
- Returns a list of subkeys for this Registry key.
-
- @rtype: list(L{RegistryKey})
- @return: List of subkeys.
- """
- # return list(self.iterchildren()) # that can't be optimized by psyco
- handle = self.handle
- result = []
- index = 0
- while 1:
- subkey = win32.RegEnumKey(handle, index)
- if subkey is None:
- break
- result.append( self.child(subkey) )
- index += 1
- return result
-
- def child(self, subkey):
- """
- Retrieves a subkey for this Registry key, given its name.
-
- @type subkey: str
- @param subkey: Name of the subkey.
-
- @rtype: L{RegistryKey}
- @return: Subkey.
- """
- path = self._path + '\\' + subkey
- handle = win32.RegOpenKey(self.handle, subkey)
- return RegistryKey(path, handle)
-
- def flush(self):
- """
- Flushes changes immediately to disk.
-
- This method is normally not needed, as the Registry writes changes
- to disk by itself. This mechanism is provided to ensure the write
- happens immediately, as opposed to whenever the OS wants to.
-
- @warn: Calling this method too often may degrade performance.
- """
- win32.RegFlushKey(self.handle)
-
-#==============================================================================
-
-# TODO: possibly cache the RegistryKey objects
-# to avoid opening and closing handles many times on code sequences like this:
-#
-# r = Registry()
-# r['HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Run']['Example 1'] = 'example1.exe'
-# r['HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Run']['Example 2'] = 'example2.exe'
-# r['HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Run']['Example 3'] = 'example3.exe'
-
-# TODO: support for access flags?
-# TODO: should be possible to disable the safety checks (see __delitem__)
-
-# TODO: workaround for an API bug described by a user in MSDN
-#
-# http://msdn.microsoft.com/en-us/library/windows/desktop/aa379776(v=vs.85).aspx
-#
-# Apparently RegDeleteTree won't work remotely from Win7 to WinXP, and the only
-# solution is to recursively call RegDeleteKey.
-
-class Registry (_RegistryContainer):
- """
- Exposes the Windows Registry as a Python container.
-
- @type machine: str or None
- @ivar machine: For a remote Registry, the machine name.
- For a local Registry, the value is C{None}.
- """
-
- _hives_by_name = {
-
- # Short names
- 'HKCR' : win32.HKEY_CLASSES_ROOT,
- 'HKCU' : win32.HKEY_CURRENT_USER,
- 'HKLM' : win32.HKEY_LOCAL_MACHINE,
- 'HKU' : win32.HKEY_USERS,
- 'HKPD' : win32.HKEY_PERFORMANCE_DATA,
- 'HKCC' : win32.HKEY_CURRENT_CONFIG,
-
- # Long names
- 'HKEY_CLASSES_ROOT' : win32.HKEY_CLASSES_ROOT,
- 'HKEY_CURRENT_USER' : win32.HKEY_CURRENT_USER,
- 'HKEY_LOCAL_MACHINE' : win32.HKEY_LOCAL_MACHINE,
- 'HKEY_USERS' : win32.HKEY_USERS,
- 'HKEY_PERFORMANCE_DATA' : win32.HKEY_PERFORMANCE_DATA,
- 'HKEY_CURRENT_CONFIG' : win32.HKEY_CURRENT_CONFIG,
- }
-
- _hives_by_value = {
- win32.HKEY_CLASSES_ROOT : 'HKEY_CLASSES_ROOT',
- win32.HKEY_CURRENT_USER : 'HKEY_CURRENT_USER',
- win32.HKEY_LOCAL_MACHINE : 'HKEY_LOCAL_MACHINE',
- win32.HKEY_USERS : 'HKEY_USERS',
- win32.HKEY_PERFORMANCE_DATA : 'HKEY_PERFORMANCE_DATA',
- win32.HKEY_CURRENT_CONFIG : 'HKEY_CURRENT_CONFIG',
- }
-
- _hives = sorted(compat.itervalues(_hives_by_value))
-
- def __init__(self, machine = None):
- """
- Opens a local or remote registry.
-
- @type machine: str
- @param machine: Optional machine name. If C{None} it opens the local
- registry.
- """
- self._machine = machine
- self._remote_hives = {}
-
- @property
- def machine(self):
- return self._machine
-
- def _split_path(self, path):
- """
- Splits a Registry path and returns the hive and key.
-
- @type path: str
- @param path: Registry path.
-
- @rtype: tuple( int, str )
- @return: Tuple containing the hive handle and the subkey path.
- The hive handle is always one of the following integer constants:
- - L{win32.HKEY_CLASSES_ROOT}
- - L{win32.HKEY_CURRENT_USER}
- - L{win32.HKEY_LOCAL_MACHINE}
- - L{win32.HKEY_USERS}
- - L{win32.HKEY_PERFORMANCE_DATA}
- - L{win32.HKEY_CURRENT_CONFIG}
- """
- if '\\' in path:
- p = path.find('\\')
- hive = path[:p]
- path = path[p+1:]
- else:
- hive = path
- path = None
- handle = self._hives_by_name[ hive.upper() ]
- return handle, path
-
- def _parse_path(self, path):
- """
- Parses a Registry path and returns the hive and key.
-
- @type path: str
- @param path: Registry path.
-
- @rtype: tuple( int, str )
- @return: Tuple containing the hive handle and the subkey path.
- For a local Registry, the hive handle is an integer.
- For a remote Registry, the hive handle is a L{RegistryKeyHandle}.
- """
- handle, path = self._split_path(path)
- if self._machine is not None:
- handle = self._connect_hive(handle)
- return handle, path
-
- def _join_path(self, hive, subkey):
- """
- Joins the hive and key to make a Registry path.
-
- @type hive: int
- @param hive: Registry hive handle.
- The hive handle must be one of the following integer constants:
- - L{win32.HKEY_CLASSES_ROOT}
- - L{win32.HKEY_CURRENT_USER}
- - L{win32.HKEY_LOCAL_MACHINE}
- - L{win32.HKEY_USERS}
- - L{win32.HKEY_PERFORMANCE_DATA}
- - L{win32.HKEY_CURRENT_CONFIG}
-
- @type subkey: str
- @param subkey: Subkey path.
-
- @rtype: str
- @return: Registry path.
- """
- path = self._hives_by_value[hive]
- if subkey:
- path = path + '\\' + subkey
- return path
-
- def _sanitize_path(self, path):
- """
- Sanitizes the given Registry path.
-
- @type path: str
- @param path: Registry path.
-
- @rtype: str
- @return: Registry path.
- """
- return self._join_path( *self._split_path(path) )
-
- def _connect_hive(self, hive):
- """
- Connect to the specified hive of a remote Registry.
-
- @note: The connection will be cached, to close all connections and
- erase this cache call the L{close} method.
-
- @type hive: int
- @param hive: Hive to connect to.
-
- @rtype: L{win32.RegistryKeyHandle}
- @return: Open handle to the remote Registry hive.
- """
- try:
- handle = self._remote_hives[hive]
- except KeyError:
- handle = win32.RegConnectRegistry(self._machine, hive)
- self._remote_hives[hive] = handle
- return handle
-
- def close(self):
- """
- Closes all open connections to the remote Registry.
-
- No exceptions are raised, even if an error occurs.
-
- This method has no effect when opening the local Registry.
-
- The remote Registry will still be accessible after calling this method
- (new connections will be opened automatically on access).
- """
- while self._remote_hives:
- hive = self._remote_hives.popitem()[1]
- try:
- hive.close()
- except Exception:
- try:
- e = sys.exc_info()[1]
- msg = "Cannot close registry hive handle %s, reason: %s"
- msg %= (hive.value, str(e))
- warnings.warn(msg)
- except Exception:
- pass
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_value, traceback):
- self.close()
-
- def __repr__(self):
- if self._machine:
-            return '<Remote Registry at "%s">' % self._machine
-        return '<Local Registry>'
-
- def __contains__(self, path):
- hive, subpath = self._parse_path(path)
- try:
- with win32.RegOpenKey(hive, subpath):
- return True
- except WindowsError:
- e = sys.exc_info()[1]
- if e.winerror == win32.ERROR_FILE_NOT_FOUND:
- return False
- raise
-
- def __getitem__(self, path):
- path = self._sanitize_path(path)
- hive, subpath = self._parse_path(path)
- try:
- handle = win32.RegOpenKey(hive, subpath)
- except WindowsError:
- e = sys.exc_info()[1]
- if e.winerror == win32.ERROR_FILE_NOT_FOUND:
- raise KeyError(path)
- raise
- return RegistryKey(path, handle)
-
- def __setitem__(self, path, value):
- do_copy = isinstance(value, RegistryKey)
- if not do_copy and not isinstance(value, str) \
- and not isinstance(value, compat.unicode):
- if isinstance(value, object):
- t = value.__class__.__name__
- else:
- t = type(value)
- raise TypeError("Expected string or RegistryKey, got %s" % t)
- hive, subpath = self._parse_path(path)
- with win32.RegCreateKey(hive, subpath) as handle:
- if do_copy:
- win32.RegCopyTree(value.handle, None, handle)
- else:
- win32.RegSetValueEx(handle, None, value)
-
- # XXX FIXME currently not working!
- # It's probably best to call RegDeleteKey recursively, even if slower.
- def __delitem__(self, path):
- hive, subpath = self._parse_path(path)
- if not subpath:
- raise TypeError(
- "Are you SURE you want to wipe out an entire hive?!"
- " Call win32.RegDeleteTree() directly if you must...")
- try:
- win32.RegDeleteTree(hive, subpath)
- except WindowsError:
- e = sys.exc_info()[1]
- if e.winerror == win32.ERROR_FILE_NOT_FOUND:
- raise KeyError(path)
- raise
-
- def create(self, path):
- """
- Creates a new Registry key.
-
- @type path: str
- @param path: Registry key path.
-
- @rtype: L{RegistryKey}
- @return: The newly created Registry key.
- """
- path = self._sanitize_path(path)
- hive, subpath = self._parse_path(path)
- handle = win32.RegCreateKey(hive, subpath)
- return RegistryKey(path, handle)
-
- def subkeys(self, path):
- """
- Returns a list of subkeys for the given Registry key.
-
- @type path: str
- @param path: Registry key path.
-
- @rtype: list(str)
- @return: List of subkey names.
- """
- result = list()
- hive, subpath = self._parse_path(path)
- with win32.RegOpenKey(hive, subpath) as handle:
- index = 0
- while 1:
- name = win32.RegEnumKey(handle, index)
- if name is None:
- break
- result.append(name)
- index += 1
- return result
-
- def iterate(self, path):
- """
- Returns a recursive iterator on the specified key and its subkeys.
-
- @type path: str
- @param path: Registry key path.
-
- @rtype: iterator
- @return: Recursive iterator that returns Registry key paths.
-
- @raise KeyError: The specified path does not exist.
- """
- if path.endswith('\\'):
- path = path[:-1]
- if not self.has_key(path):
- raise KeyError(path)
- stack = collections.deque()
- stack.appendleft(path)
- return self.__iterate(stack)
-
- def iterkeys(self):
- """
- Returns an iterator that crawls the entire Windows Registry.
- """
- stack = collections.deque(self._hives)
- stack.reverse()
- return self.__iterate(stack)
-
- def __iterate(self, stack):
- while stack:
- path = stack.popleft()
- yield path
- try:
- subkeys = self.subkeys(path)
- except WindowsError:
- continue
- prefix = path + '\\'
- subkeys = [prefix + name for name in subkeys]
- stack.extendleft(subkeys)
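The deleted `RegistryKey` class above adapts Win32 errors to the Python mapping protocol: `ERROR_FILE_NOT_FOUND` becomes `KeyError` in `__getitem__` and `False` in `__contains__`, while any other error re-raises. A platform-independent sketch of that adapter pattern; `FakeBackend` is a hypothetical stand-in for `win32.RegQueryValueEx`:

```python
class NotFound(Exception):
    """Stand-in for the WindowsError ERROR_FILE_NOT_FOUND case."""

class FakeBackend:
    def __init__(self, data):
        self._data = data
    def query(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise NotFound(name)

class KeyView:
    """Dict-like view translating backend errors, as RegistryKey does."""
    def __init__(self, backend):
        self._backend = backend
    def __contains__(self, name):
        try:
            self._backend.query(name)
            return True
        except NotFound:
            return False
    def __getitem__(self, name):
        try:
            return self._backend.query(name)
        except NotFound:
            raise KeyError(name)

view = KeyView(FakeBackend({"Path": "C:\\Tools"}))
assert "Path" in view and view["Path"] == "C:\\Tools"
assert "Missing" not in view
```

The same translation appears in `Registry.__getitem__` and `Registry.__contains__`, so callers can use plain `in` checks and subscripting against the live Registry.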
diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/coco_evaluation.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/coco_evaluation.py
deleted file mode 100644
index fdc41798537d3b2e6fc7096c9f4bebd724f1e395..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/coco_evaluation.py
+++ /dev/null
@@ -1,722 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import contextlib
-import copy
-import io
-import itertools
-import json
-import logging
-import numpy as np
-import os
-import pickle
-from collections import OrderedDict
-import annotator.oneformer.pycocotools.mask as mask_util
-import torch
-from annotator.oneformer.pycocotools.coco import COCO
-from annotator.oneformer.pycocotools.cocoeval import COCOeval
-from tabulate import tabulate
-
-import annotator.oneformer.detectron2.utils.comm as comm
-from annotator.oneformer.detectron2.config import CfgNode
-from annotator.oneformer.detectron2.data import MetadataCatalog
-from annotator.oneformer.detectron2.data.datasets.coco import convert_to_coco_json
-from annotator.oneformer.detectron2.structures import Boxes, BoxMode, pairwise_iou
-from annotator.oneformer.detectron2.utils.file_io import PathManager
-from annotator.oneformer.detectron2.utils.logger import create_small_table
-
-from .evaluator import DatasetEvaluator
-
-try:
- from annotator.oneformer.detectron2.evaluation.fast_eval_api import COCOeval_opt
-except ImportError:
- COCOeval_opt = COCOeval
-
-
-class COCOEvaluator(DatasetEvaluator):
- """
- Evaluate AR for object proposals, AP for instance detection/segmentation, AP
- for keypoint detection outputs using COCO's metrics.
- See http://cocodataset.org/#detection-eval and
- http://cocodataset.org/#keypoints-eval to understand its metrics.
- The metrics range from 0 to 100 (instead of 0 to 1), where a -1 or NaN means
- the metric cannot be computed (e.g. due to no predictions made).
-
- In addition to COCO, this evaluator is able to support any bounding box detection,
- instance segmentation, or keypoint detection dataset.
- """
-
- def __init__(
- self,
- dataset_name,
- tasks=None,
- distributed=True,
- output_dir=None,
- *,
- max_dets_per_image=None,
- use_fast_impl=True,
- kpt_oks_sigmas=(),
- allow_cached_coco=True,
- ):
- """
- Args:
- dataset_name (str): name of the dataset to be evaluated.
- It must have either the following corresponding metadata:
-
- "json_file": the path to the COCO format annotation
-
- Or it must be in detectron2's standard dataset format
- so it can be converted to COCO format automatically.
- tasks (tuple[str]): tasks that can be evaluated under the given
- configuration. A task is one of "bbox", "segm", "keypoints".
- By default, will infer this automatically from predictions.
- distributed (True): if True, will collect results from all ranks and run evaluation
- in the main process.
- Otherwise, will only evaluate the results in the current process.
- output_dir (str): optional, an output directory to dump all
- results predicted on the dataset. The dump contains two files:
-
- 1. "instances_predictions.pth" a file that can be loaded with `torch.load` and
- contains all the results in the format they are produced by the model.
- 2. "coco_instances_results.json" a json file in COCO's result format.
- max_dets_per_image (int): limit on the maximum number of detections per image.
- By default in COCO, this limit is to 100, but this can be customized
- to be greater, as is needed in evaluation metrics AP fixed and AP pool
- (see https://arxiv.org/pdf/2102.01066.pdf)
- This doesn't affect keypoint evaluation.
- use_fast_impl (bool): use a fast but **unofficial** implementation to compute AP.
- Although the results should be very close to the official implementation in COCO
- API, it is still recommended to compute results with the official API for use in
- papers. The faster implementation also uses more RAM.
- kpt_oks_sigmas (list[float]): The sigmas used to calculate keypoint OKS.
- See http://cocodataset.org/#keypoints-eval
- When empty, it will use the defaults in COCO.
- Otherwise it should be the same length as ROI_KEYPOINT_HEAD.NUM_KEYPOINTS.
- allow_cached_coco (bool): Whether to use cached coco json from previous validation
- runs. You should set this to False if you need to use different validation data.
- Defaults to True.
- """
- self._logger = logging.getLogger(__name__)
- self._distributed = distributed
- self._output_dir = output_dir
-
- if use_fast_impl and (COCOeval_opt is COCOeval):
- self._logger.info("Fast COCO eval is not built. Falling back to official COCO eval.")
- use_fast_impl = False
- self._use_fast_impl = use_fast_impl
-
- # COCOeval requires the limit on the number of detections per image (maxDets) to be a list
- # with at least 3 elements. The default maxDets in COCOeval is [1, 10, 100], in which the
- # 3rd element (100) is used as the limit on the number of detections per image when
- # evaluating AP. COCOEvaluator expects an integer for max_dets_per_image, so for COCOeval,
- # we reformat max_dets_per_image into [1, 10, max_dets_per_image], based on the defaults.
- if max_dets_per_image is None:
- max_dets_per_image = [1, 10, 100]
- else:
- max_dets_per_image = [1, 10, max_dets_per_image]
- self._max_dets_per_image = max_dets_per_image
-
- if tasks is not None and isinstance(tasks, CfgNode):
- kpt_oks_sigmas = (
- tasks.TEST.KEYPOINT_OKS_SIGMAS if not kpt_oks_sigmas else kpt_oks_sigmas
- )
- self._logger.warn(
- "COCO Evaluator instantiated using config, this is deprecated behavior."
- " Please pass in explicit arguments instead."
- )
- self._tasks = None # Infering it from predictions should be better
- else:
- self._tasks = tasks
-
- self._cpu_device = torch.device("cpu")
-
- self._metadata = MetadataCatalog.get(dataset_name)
- if not hasattr(self._metadata, "json_file"):
- if output_dir is None:
- raise ValueError(
- "output_dir must be provided to COCOEvaluator "
- "for datasets not in COCO format."
- )
- self._logger.info(f"Trying to convert '{dataset_name}' to COCO format ...")
-
- cache_path = os.path.join(output_dir, f"{dataset_name}_coco_format.json")
- self._metadata.json_file = cache_path
- convert_to_coco_json(dataset_name, cache_path, allow_cached=allow_cached_coco)
-
- json_file = PathManager.get_local_path(self._metadata.json_file)
- with contextlib.redirect_stdout(io.StringIO()):
- self._coco_api = COCO(json_file)
-
- # Test set json files do not contain annotations (evaluation must be
- # performed using the COCO evaluation server).
- self._do_evaluation = "annotations" in self._coco_api.dataset
- if self._do_evaluation:
- self._kpt_oks_sigmas = kpt_oks_sigmas
-
- def reset(self):
- self._predictions = []
-
- def process(self, inputs, outputs):
- """
- Args:
- inputs: the inputs to a COCO model (e.g., GeneralizedRCNN).
- It is a list of dict. Each dict corresponds to an image and
- contains keys like "height", "width", "file_name", "image_id".
- outputs: the outputs of a COCO model. It is a list of dicts with key
- "instances" that contains :class:`Instances`.
- """
- for input, output in zip(inputs, outputs):
- prediction = {"image_id": input["image_id"]}
-
- if "instances" in output:
- instances = output["instances"].to(self._cpu_device)
- prediction["instances"] = instances_to_coco_json(instances, input["image_id"])
- if "proposals" in output:
- prediction["proposals"] = output["proposals"].to(self._cpu_device)
- if len(prediction) > 1:
- self._predictions.append(prediction)
-
- def evaluate(self, img_ids=None):
- """
- Args:
- img_ids: a list of image IDs to evaluate on. Default to None for the whole dataset
- """
- if self._distributed:
- comm.synchronize()
- predictions = comm.gather(self._predictions, dst=0)
- predictions = list(itertools.chain(*predictions))
-
- if not comm.is_main_process():
- return {}
- else:
- predictions = self._predictions
-
- if len(predictions) == 0:
- self._logger.warning("[COCOEvaluator] Did not receive valid predictions.")
- return {}
-
- if self._output_dir:
- PathManager.mkdirs(self._output_dir)
- file_path = os.path.join(self._output_dir, "instances_predictions.pth")
- with PathManager.open(file_path, "wb") as f:
- torch.save(predictions, f)
-
- self._results = OrderedDict()
- if "proposals" in predictions[0]:
- self._eval_box_proposals(predictions)
- if "instances" in predictions[0]:
- self._eval_predictions(predictions, img_ids=img_ids)
- # Copy so the caller can do whatever with results
- return copy.deepcopy(self._results)
-
- def _tasks_from_predictions(self, predictions):
- """
- Get COCO API "tasks" (i.e. iou_type) from COCO-format predictions.
- """
- tasks = {"bbox"}
- for pred in predictions:
- if "segmentation" in pred:
- tasks.add("segm")
- if "keypoints" in pred:
- tasks.add("keypoints")
- return sorted(tasks)
-
- def _eval_predictions(self, predictions, img_ids=None):
- """
- Evaluate predictions. Fill self._results with the metrics of the tasks.
- """
- self._logger.info("Preparing results for COCO format ...")
- coco_results = list(itertools.chain(*[x["instances"] for x in predictions]))
- tasks = self._tasks or self._tasks_from_predictions(coco_results)
-
- # unmap the category ids for COCO
- if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"):
- dataset_id_to_contiguous_id = self._metadata.thing_dataset_id_to_contiguous_id
- all_contiguous_ids = list(dataset_id_to_contiguous_id.values())
- num_classes = len(all_contiguous_ids)
- assert min(all_contiguous_ids) == 0 and max(all_contiguous_ids) == num_classes - 1
-
- reverse_id_mapping = {v: k for k, v in dataset_id_to_contiguous_id.items()}
- for result in coco_results:
- category_id = result["category_id"]
- assert category_id < num_classes, (
- f"A prediction has class={category_id}, "
- f"but the dataset only has {num_classes} classes and "
- f"predicted class id should be in [0, {num_classes - 1}]."
- )
- result["category_id"] = reverse_id_mapping[category_id]
-
- if self._output_dir:
- file_path = os.path.join(self._output_dir, "coco_instances_results.json")
- self._logger.info("Saving results to {}".format(file_path))
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(coco_results))
- f.flush()
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info(
- "Evaluating predictions with {} COCO API...".format(
- "unofficial" if self._use_fast_impl else "official"
- )
- )
- for task in sorted(tasks):
- assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!"
- coco_eval = (
- _evaluate_predictions_on_coco(
- self._coco_api,
- coco_results,
- task,
- kpt_oks_sigmas=self._kpt_oks_sigmas,
- cocoeval_fn=COCOeval_opt if self._use_fast_impl else COCOeval,
- img_ids=img_ids,
- max_dets_per_image=self._max_dets_per_image,
- )
- if len(coco_results) > 0
- else None # cocoapi does not handle empty results very well
- )
-
- res = self._derive_coco_results(
- coco_eval, task, class_names=self._metadata.get("thing_classes")
- )
- self._results[task] = res
-
- def _eval_box_proposals(self, predictions):
- """
- Evaluate the box proposals in predictions.
- Fill self._results with the metrics for "box_proposals" task.
- """
- if self._output_dir:
- # Saving generated box proposals to file.
- # Predicted box_proposals are in XYXY_ABS mode.
- bbox_mode = BoxMode.XYXY_ABS.value
- ids, boxes, objectness_logits = [], [], []
- for prediction in predictions:
- ids.append(prediction["image_id"])
- boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy())
- objectness_logits.append(prediction["proposals"].objectness_logits.numpy())
-
- proposal_data = {
- "boxes": boxes,
- "objectness_logits": objectness_logits,
- "ids": ids,
- "bbox_mode": bbox_mode,
- }
- with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f:
- pickle.dump(proposal_data, f)
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info("Evaluating bbox proposals ...")
- res = {}
- areas = {"all": "", "small": "s", "medium": "m", "large": "l"}
- for limit in [100, 1000]:
- for area, suffix in areas.items():
- stats = _evaluate_box_proposals(predictions, self._coco_api, area=area, limit=limit)
- key = "AR{}@{:d}".format(suffix, limit)
- res[key] = float(stats["ar"].item() * 100)
- self._logger.info("Proposal metrics: \n" + create_small_table(res))
- self._results["box_proposals"] = res
-
- def _derive_coco_results(self, coco_eval, iou_type, class_names=None):
- """
- Derive the desired score numbers from summarized COCOeval.
-
- Args:
- coco_eval (None or COCOEval): None represents no predictions from model.
- iou_type (str):
-            class_names (None or list[str]): if provided, will use it to report
-                per-category AP.
-
- Returns:
- a dict of {metric name: score}
- """
-
- metrics = {
- "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl"],
- "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl"],
- "keypoints": ["AP", "AP50", "AP75", "APm", "APl"],
- }[iou_type]
-
- if coco_eval is None:
-            self._logger.warning("No predictions from the model!")
- return {metric: float("nan") for metric in metrics}
-
- # the standard metrics
- results = {
- metric: float(coco_eval.stats[idx] * 100 if coco_eval.stats[idx] >= 0 else "nan")
- for idx, metric in enumerate(metrics)
- }
- self._logger.info(
- "Evaluation results for {}: \n".format(iou_type) + create_small_table(results)
- )
- if not np.isfinite(sum(results.values())):
-            self._logger.info("Some metrics cannot be computed and are shown as NaN.")
-
- if class_names is None or len(class_names) <= 1:
- return results
- # Compute per-category AP
- # from https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L222-L252 # noqa
- precisions = coco_eval.eval["precision"]
- # precision has dims (iou, recall, cls, area range, max dets)
- assert len(class_names) == precisions.shape[2]
-
- results_per_category = []
- for idx, name in enumerate(class_names):
- # area range index 0: all area ranges
- # max dets index -1: typically 100 per image
- precision = precisions[:, :, idx, 0, -1]
- precision = precision[precision > -1]
- ap = np.mean(precision) if precision.size else float("nan")
- results_per_category.append(("{}".format(name), float(ap * 100)))
-
- # tabulate it
- N_COLS = min(6, len(results_per_category) * 2)
- results_flatten = list(itertools.chain(*results_per_category))
- results_2d = itertools.zip_longest(*[results_flatten[i::N_COLS] for i in range(N_COLS)])
- table = tabulate(
- results_2d,
- tablefmt="pipe",
- floatfmt=".3f",
- headers=["category", "AP"] * (N_COLS // 2),
- numalign="left",
- )
- self._logger.info("Per-category {} AP: \n".format(iou_type) + table)
-
- results.update({"AP-" + name: ap for name, ap in results_per_category})
- return results
-
-
-def instances_to_coco_json(instances, img_id):
- """
- Dump an "Instances" object to a COCO-format json that's used for evaluation.
-
- Args:
- instances (Instances):
- img_id (int): the image id
-
- Returns:
- list[dict]: list of json annotations in COCO format.
- """
- num_instance = len(instances)
- if num_instance == 0:
- return []
-
- boxes = instances.pred_boxes.tensor.numpy()
- boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
- boxes = boxes.tolist()
- scores = instances.scores.tolist()
- classes = instances.pred_classes.tolist()
-
- has_mask = instances.has("pred_masks")
- if has_mask:
-        # use RLE to encode the masks, because they are too large and take too much
-        # memory, since this evaluator stores outputs of the entire dataset
- rles = [
- mask_util.encode(np.array(mask[:, :, None], order="F", dtype="uint8"))[0]
- for mask in instances.pred_masks
- ]
- for rle in rles:
- # "counts" is an array encoded by mask_util as a byte-stream. Python3's
- # json writer which always produces strings cannot serialize a bytestream
- # unless you decode it. Thankfully, utf-8 works out (which is also what
- # the annotator.oneformer.pycocotools/_mask.pyx does).
- rle["counts"] = rle["counts"].decode("utf-8")
-
- has_keypoints = instances.has("pred_keypoints")
- if has_keypoints:
- keypoints = instances.pred_keypoints
-
- results = []
- for k in range(num_instance):
- result = {
- "image_id": img_id,
- "category_id": classes[k],
- "bbox": boxes[k],
- "score": scores[k],
- }
- if has_mask:
- result["segmentation"] = rles[k]
- if has_keypoints:
- # In COCO annotations,
- # keypoints coordinates are pixel indices.
- # However our predictions are floating point coordinates.
- # Therefore we subtract 0.5 to be consistent with the annotation format.
- # This is the inverse of data loading logic in `datasets/coco.py`.
- keypoints[k][:, :2] -= 0.5
- result["keypoints"] = keypoints[k].flatten().tolist()
- results.append(result)
- return results
-
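The `BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)` call in `instances_to_coco_json` converts corner-format boxes to COCO's corner-plus-size format. A minimal sketch of that conversion in plain Python (`xyxy_to_xywh` is a hypothetical helper, not part of the file):

```python
def xyxy_to_xywh(box):
    """Convert an absolute [x0, y0, x1, y1] box to COCO's [x, y, width, height]."""
    x0, y0, x1, y1 = box
    return [x0, y0, x1 - x0, y1 - y0]

# A 20x40 box anchored at (10, 20):
print(xyxy_to_xywh([10.0, 20.0, 30.0, 60.0]))  # [10.0, 20.0, 20.0, 40.0]
```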
-
-# inspired from Detectron:
-# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa
-def _evaluate_box_proposals(dataset_predictions, coco_api, thresholds=None, area="all", limit=None):
- """
- Evaluate detection proposal recall metrics. This function is a much
- faster alternative to the official COCO API recall evaluation code. However,
- it produces slightly different results.
- """
- # Record max overlap value for each gt box
- # Return vector of overlap values
- areas = {
- "all": 0,
- "small": 1,
- "medium": 2,
- "large": 3,
- "96-128": 4,
- "128-256": 5,
- "256-512": 6,
- "512-inf": 7,
- }
- area_ranges = [
- [0**2, 1e5**2], # all
- [0**2, 32**2], # small
- [32**2, 96**2], # medium
- [96**2, 1e5**2], # large
- [96**2, 128**2], # 96-128
- [128**2, 256**2], # 128-256
- [256**2, 512**2], # 256-512
-        [512**2, 1e5**2],  # 512-inf
-    ]
- assert area in areas, "Unknown area range: {}".format(area)
- area_range = area_ranges[areas[area]]
- gt_overlaps = []
- num_pos = 0
-
- for prediction_dict in dataset_predictions:
- predictions = prediction_dict["proposals"]
-
- # sort predictions in descending order
- # TODO maybe remove this and make it explicit in the documentation
- inds = predictions.objectness_logits.sort(descending=True)[1]
- predictions = predictions[inds]
-
- ann_ids = coco_api.getAnnIds(imgIds=prediction_dict["image_id"])
- anno = coco_api.loadAnns(ann_ids)
- gt_boxes = [
- BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS)
- for obj in anno
- if obj["iscrowd"] == 0
- ]
- gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4) # guard against no boxes
- gt_boxes = Boxes(gt_boxes)
- gt_areas = torch.as_tensor([obj["area"] for obj in anno if obj["iscrowd"] == 0])
-
- if len(gt_boxes) == 0 or len(predictions) == 0:
- continue
-
- valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1])
- gt_boxes = gt_boxes[valid_gt_inds]
-
- num_pos += len(gt_boxes)
-
- if len(gt_boxes) == 0:
- continue
-
- if limit is not None and len(predictions) > limit:
- predictions = predictions[:limit]
-
- overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes)
-
- _gt_overlaps = torch.zeros(len(gt_boxes))
- for j in range(min(len(predictions), len(gt_boxes))):
- # find which proposal box maximally covers each gt box
- # and get the iou amount of coverage for each gt box
- max_overlaps, argmax_overlaps = overlaps.max(dim=0)
-
- # find which gt box is 'best' covered (i.e. 'best' = most iou)
- gt_ovr, gt_ind = max_overlaps.max(dim=0)
- assert gt_ovr >= 0
- # find the proposal box that covers the best covered gt box
- box_ind = argmax_overlaps[gt_ind]
- # record the iou coverage of this gt box
- _gt_overlaps[j] = overlaps[box_ind, gt_ind]
- assert _gt_overlaps[j] == gt_ovr
- # mark the proposal box and the gt box as used
- overlaps[box_ind, :] = -1
- overlaps[:, gt_ind] = -1
-
- # append recorded iou coverage level
- gt_overlaps.append(_gt_overlaps)
- gt_overlaps = (
- torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32)
- )
- gt_overlaps, _ = torch.sort(gt_overlaps)
-
- if thresholds is None:
- step = 0.05
- thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32)
- recalls = torch.zeros_like(thresholds)
- # compute recall for each iou threshold
- for i, t in enumerate(thresholds):
- recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos)
- # ar = 2 * np.trapz(recalls, thresholds)
- ar = recalls.mean()
- return {
- "ar": ar,
- "recalls": recalls,
- "thresholds": thresholds,
- "gt_overlaps": gt_overlaps,
- "num_pos": num_pos,
- }
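The tail of `_evaluate_box_proposals` reduces the sorted per-gt-box IoU values to AR: recall at each IoU threshold in 0.5:0.05:0.95, then the mean. A dependency-free sketch of that reduction, with `average_recall` as a hypothetical stand-in for the torch code above:

```python
def average_recall(gt_overlaps, num_pos, thresholds=None):
    """Mean recall over IoU thresholds 0.5, 0.55, ..., 0.95."""
    if thresholds is None:
        thresholds = [0.5 + 0.05 * i for i in range(10)]
    # recall at threshold t = fraction of gt boxes whose best proposal IoU >= t
    recalls = [sum(o >= t for o in gt_overlaps) / num_pos for t in thresholds]
    return sum(recalls) / len(recalls)

# two gt boxes: one covered at IoU 0.63, one at 0.92
print(average_recall([0.63, 0.92], num_pos=2))  # 0.6
```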
-
-
-def _evaluate_predictions_on_coco(
- coco_gt,
- coco_results,
- iou_type,
- kpt_oks_sigmas=None,
- cocoeval_fn=COCOeval_opt,
- img_ids=None,
- max_dets_per_image=None,
-):
- """
- Evaluate the coco results using COCOEval API.
- """
- assert len(coco_results) > 0
-
- if iou_type == "segm":
- coco_results = copy.deepcopy(coco_results)
- # When evaluating mask AP, if the results contain bbox, cocoapi will
- # use the box area as the area of the instance, instead of the mask area.
- # This leads to a different definition of small/medium/large.
- # We remove the bbox field to let mask AP use mask area.
- for c in coco_results:
- c.pop("bbox", None)
-
- coco_dt = coco_gt.loadRes(coco_results)
- coco_eval = cocoeval_fn(coco_gt, coco_dt, iou_type)
- # For COCO, the default max_dets_per_image is [1, 10, 100].
- if max_dets_per_image is None:
- max_dets_per_image = [1, 10, 100] # Default from COCOEval
- else:
- assert (
- len(max_dets_per_image) >= 3
- ), "COCOeval requires maxDets (and max_dets_per_image) to have length at least 3"
- # In the case that user supplies a custom input for max_dets_per_image,
- # apply COCOevalMaxDets to evaluate AP with the custom input.
- if max_dets_per_image[2] != 100:
- coco_eval = COCOevalMaxDets(coco_gt, coco_dt, iou_type)
- if iou_type != "keypoints":
- coco_eval.params.maxDets = max_dets_per_image
-
- if img_ids is not None:
- coco_eval.params.imgIds = img_ids
-
- if iou_type == "keypoints":
- # Use the COCO default keypoint OKS sigmas unless overrides are specified
- if kpt_oks_sigmas:
- assert hasattr(coco_eval.params, "kpt_oks_sigmas"), "annotator.oneformer.pycocotools is too old!"
- coco_eval.params.kpt_oks_sigmas = np.array(kpt_oks_sigmas)
- # COCOAPI requires every detection and every gt to have keypoints, so
- # we just take the first entry from both
- num_keypoints_dt = len(coco_results[0]["keypoints"]) // 3
- num_keypoints_gt = len(next(iter(coco_gt.anns.values()))["keypoints"]) // 3
- num_keypoints_oks = len(coco_eval.params.kpt_oks_sigmas)
- assert num_keypoints_oks == num_keypoints_dt == num_keypoints_gt, (
- f"[COCOEvaluator] Prediction contain {num_keypoints_dt} keypoints. "
- f"Ground truth contains {num_keypoints_gt} keypoints. "
- f"The length of cfg.TEST.KEYPOINT_OKS_SIGMAS is {num_keypoints_oks}. "
- "They have to agree with each other. For meaning of OKS, please refer to "
- "http://cocodataset.org/#keypoints-eval."
- )
-
- coco_eval.evaluate()
- coco_eval.accumulate()
- coco_eval.summarize()
-
- return coco_eval
-
-
-class COCOevalMaxDets(COCOeval):
- """
- Modified version of COCOeval for evaluating AP with a custom
- maxDets (by default for COCO, maxDets is 100)
- """
-
- def summarize(self):
- """
- Compute and display summary metrics for evaluation results given
- a custom value for max_dets_per_image
- """
-
- def _summarize(ap=1, iouThr=None, areaRng="all", maxDets=100):
- p = self.params
- iStr = " {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}"
- titleStr = "Average Precision" if ap == 1 else "Average Recall"
- typeStr = "(AP)" if ap == 1 else "(AR)"
- iouStr = (
- "{:0.2f}:{:0.2f}".format(p.iouThrs[0], p.iouThrs[-1])
- if iouThr is None
- else "{:0.2f}".format(iouThr)
- )
-
- aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng]
- mind = [i for i, mDet in enumerate(p.maxDets) if mDet == maxDets]
- if ap == 1:
- # dimension of precision: [TxRxKxAxM]
- s = self.eval["precision"]
- # IoU
- if iouThr is not None:
- t = np.where(iouThr == p.iouThrs)[0]
- s = s[t]
- s = s[:, :, :, aind, mind]
- else:
- # dimension of recall: [TxKxAxM]
- s = self.eval["recall"]
- if iouThr is not None:
- t = np.where(iouThr == p.iouThrs)[0]
- s = s[t]
- s = s[:, :, aind, mind]
- if len(s[s > -1]) == 0:
- mean_s = -1
- else:
- mean_s = np.mean(s[s > -1])
- print(iStr.format(titleStr, typeStr, iouStr, areaRng, maxDets, mean_s))
- return mean_s
-
- def _summarizeDets():
- stats = np.zeros((12,))
- # Evaluate AP using the custom limit on maximum detections per image
- stats[0] = _summarize(1, maxDets=self.params.maxDets[2])
- stats[1] = _summarize(1, iouThr=0.5, maxDets=self.params.maxDets[2])
- stats[2] = _summarize(1, iouThr=0.75, maxDets=self.params.maxDets[2])
- stats[3] = _summarize(1, areaRng="small", maxDets=self.params.maxDets[2])
- stats[4] = _summarize(1, areaRng="medium", maxDets=self.params.maxDets[2])
- stats[5] = _summarize(1, areaRng="large", maxDets=self.params.maxDets[2])
- stats[6] = _summarize(0, maxDets=self.params.maxDets[0])
- stats[7] = _summarize(0, maxDets=self.params.maxDets[1])
- stats[8] = _summarize(0, maxDets=self.params.maxDets[2])
- stats[9] = _summarize(0, areaRng="small", maxDets=self.params.maxDets[2])
- stats[10] = _summarize(0, areaRng="medium", maxDets=self.params.maxDets[2])
- stats[11] = _summarize(0, areaRng="large", maxDets=self.params.maxDets[2])
- return stats
-
- def _summarizeKps():
- stats = np.zeros((10,))
- stats[0] = _summarize(1, maxDets=20)
- stats[1] = _summarize(1, maxDets=20, iouThr=0.5)
- stats[2] = _summarize(1, maxDets=20, iouThr=0.75)
- stats[3] = _summarize(1, maxDets=20, areaRng="medium")
- stats[4] = _summarize(1, maxDets=20, areaRng="large")
- stats[5] = _summarize(0, maxDets=20)
- stats[6] = _summarize(0, maxDets=20, iouThr=0.5)
- stats[7] = _summarize(0, maxDets=20, iouThr=0.75)
- stats[8] = _summarize(0, maxDets=20, areaRng="medium")
- stats[9] = _summarize(0, maxDets=20, areaRng="large")
- return stats
-
- if not self.eval:
- raise Exception("Please run accumulate() first")
- iouType = self.params.iouType
- if iouType == "segm" or iouType == "bbox":
- summarize = _summarizeDets
- elif iouType == "keypoints":
- summarize = _summarizeKps
- self.stats = summarize()
-
- def __str__(self):
- self.summarize()
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/install_egg_info.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/install_egg_info.py
deleted file mode 100644
index f3e8f3447dc206799a8e124000a81c443adc870f..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/install_egg_info.py
+++ /dev/null
@@ -1,92 +0,0 @@
-"""
-distutils.command.install_egg_info
-
-Implements the Distutils 'install_egg_info' command, for installing
-a package's PKG-INFO metadata.
-"""
-
-import os
-import sys
-import re
-
-from ..cmd import Command
-from .. import dir_util
-from .._log import log
-
-
-class install_egg_info(Command):
- """Install an .egg-info file for the package"""
-
- description = "Install package's PKG-INFO metadata as an .egg-info file"
- user_options = [
- ('install-dir=', 'd', "directory to install to"),
- ]
-
- def initialize_options(self):
- self.install_dir = None
-
- @property
- def basename(self):
- """
- Allow basename to be overridden by child class.
- Ref pypa/distutils#2.
- """
- return "%s-%s-py%d.%d.egg-info" % (
- to_filename(safe_name(self.distribution.get_name())),
- to_filename(safe_version(self.distribution.get_version())),
- *sys.version_info[:2],
- )
-
- def finalize_options(self):
- self.set_undefined_options('install_lib', ('install_dir', 'install_dir'))
- self.target = os.path.join(self.install_dir, self.basename)
- self.outputs = [self.target]
-
- def run(self):
- target = self.target
- if os.path.isdir(target) and not os.path.islink(target):
- dir_util.remove_tree(target, dry_run=self.dry_run)
- elif os.path.exists(target):
- self.execute(os.unlink, (self.target,), "Removing " + target)
- elif not os.path.isdir(self.install_dir):
- self.execute(
- os.makedirs, (self.install_dir,), "Creating " + self.install_dir
- )
- log.info("Writing %s", target)
- if not self.dry_run:
- with open(target, 'w', encoding='UTF-8') as f:
- self.distribution.metadata.write_pkg_file(f)
-
- def get_outputs(self):
- return self.outputs
-
-
-# The following routines are taken from setuptools' pkg_resources module and
-# can be replaced by importing them from pkg_resources once it is included
-# in the stdlib.
-
-
-def safe_name(name):
- """Convert an arbitrary string to a standard distribution name
-
- Any runs of non-alphanumeric/. characters are replaced with a single '-'.
- """
- return re.sub('[^A-Za-z0-9.]+', '-', name)
-
-
-def safe_version(version):
- """Convert an arbitrary string to a standard version string
-
- Spaces become dots, and all other non-alphanumeric characters become
- dashes, with runs of multiple dashes condensed to a single dash.
- """
- version = version.replace(' ', '.')
- return re.sub('[^A-Za-z0-9.]+', '-', version)
-
-
-def to_filename(name):
- """Convert a project or version name to its filename-escaped form
-
- Any '-' characters are currently replaced with '_'.
- """
- return name.replace('-', '_')
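The three helpers above combine in `basename` to build the `.egg-info` filename. A quick sketch of their behavior (the example package name and version are hypothetical):

```python
import re

def safe_name(name):
    # runs of non-alphanumeric/'.' characters collapse to a single '-'
    return re.sub('[^A-Za-z0-9.]+', '-', name)

def safe_version(version):
    # spaces become dots first, then the same collapsing rule applies
    return re.sub('[^A-Za-z0-9.]+', '-', version.replace(' ', '.'))

def to_filename(name):
    # '-' would be ambiguous in the egg-info name, so it becomes '_'
    return name.replace('-', '_')

name = to_filename(safe_name("my demo package"))
version = to_filename(safe_version("1.0 beta 2"))
print(f"{name}-{version}-py3.11.egg-info")  # my_demo_package-1.0.beta.2-py3.11.egg-info
```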
diff --git a/spaces/ToniDan/DanToniGPT2FormalInformal/app.py b/spaces/ToniDan/DanToniGPT2FormalInformal/app.py
deleted file mode 100644
index a1904ce4c5a771735d7e60d0279e8b46448e77e7..0000000000000000000000000000000000000000
--- a/spaces/ToniDan/DanToniGPT2FormalInformal/app.py
+++ /dev/null
@@ -1,275 +0,0 @@
-import streamlit as st
-import numpy as np
-import pandas as pd
-import os
-import torch
-import torch.nn as nn
-from transformers.activations import get_activation
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-
-st.title('GPT2: To see all prompt outlines: https://huggingface.co/BigSalmon/BigSalmon/InformalToFormalLincoln91Paraphrase')
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-@st.cache(allow_output_mutation=True)
-def get_model():
- tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln92Paraphrase")
- model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln92Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincolnMediumParaphraseConcise")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincolnMediumParaphraseConcise")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln91Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln91Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln90Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln90Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln88Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln88Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln86Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln86Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln82Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln82Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln79Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln79Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln74Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln74Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln72Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln72Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln64Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln64Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln60Paraphrase")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln60Paraphrase")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo1.3BInformalToFormal")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo1.3BInformalToFormal")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln55")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln55")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln51")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln51")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln45")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln49")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln43")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln43")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln41")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln41")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln38")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln38")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln37")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln37")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln36")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln36")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/MediumInformalToFormalLincoln")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/MediumInformalToFormalLincoln")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln35")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln35")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln31")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln31")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln21")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln21")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsOneSent")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsOneSent")
-
- #tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsToSentence")
- #model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsToSentence")
-
- return model, tokenizer
-
-model, tokenizer = get_model()
-
-g = """informal english: garage band has made people who know nothing about music good at creating music.
-Translated into the Style of Abraham Lincoln: garage band ( offers the uninitiated in music the ability to produce professional-quality compositions / catapults those for whom music is an uncharted art the ability the realize masterpieces / stimulates music novice's competency to yield sublime arrangements / begets individuals of rudimentary musical talent the proficiency to fashion elaborate suites ).
-informal english: chrome extensions can make doing regular tasks much easier to get done.
-Translated into the Style of Abraham Lincoln: chrome extensions ( yield the boon of time-saving convenience / ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks / turbocharges the velocity with which one can conduct their obligations ).
-informal english: broadband is finally expanding to rural areas, a great development that will thrust them into modern life.
-Translated into the Style of Abraham Lincoln: broadband is ( ( finally / at last / after years of delay ) arriving in remote locations / springing to life in far-flung outposts / inching into even the most backwater corners of the nation ) that will leap-frog them into the twenty-first century.
-informal english: google translate has made talking to people who do not share your language easier.
-Translated into the Style of Abraham Lincoln: google translate ( imparts communicability to individuals whose native tongue differs / mitigates the trials of communication across linguistic barriers / hastens the bridging of semantic boundaries / mollifies the complexity of multilingual communication / avails itself to the internationalization of discussion / flexes its muscles to abet intercultural conversation / calms the tides of linguistic divergence ).
-informal english: corn fields are all across illinois, visible once you leave chicago.
-Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
-informal english: """
-
-number_of_outputs = st.sidebar.slider("Number of Outputs", 5, 100)
-log_nums = st.sidebar.slider("How Many Log Outputs?", 50, 600)
-
-def BestProbs(prompt):
- prompt = prompt.strip()
- text = tokenizer.encode(prompt)
- myinput, past_key_values = torch.tensor([text]), None
- logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
- logits = logits[0,-1]
- probabilities = torch.nn.functional.softmax(logits)
- best_logits, best_indices = logits.topk(10)
- best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
- for i in best_words[0:10]:
- print("_______")
- st.write(f"${i} $\n")
- f = (f"${i} $\n")
- m = (prompt + f"{i}")
- BestProbs2(m)
- return f
-
-def BestProbs2(prompt):
- prompt = prompt.strip()
- text = tokenizer.encode(prompt)
- myinput, past_key_values = torch.tensor([text]), None
- logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
- logits = logits[0,-1]
- probabilities = torch.nn.functional.softmax(logits)
- best_logits, best_indices = logits.topk(20)
- best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
- for i in best_words[0:20]:
- print(i)
- st.write(i)
-
-def LogProbs(prompt):
- col1 = []
- col2 = []
- prompt = prompt.strip()
- text = tokenizer.encode(prompt)
- myinput, past_key_values = torch.tensor([text]), None
- logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
- logits = logits[0,-1]
- probabilities = torch.nn.functional.softmax(logits)
- best_logits, best_indices = logits.topk(10)
- best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
- for i in best_words[0:10]:
- print("_______")
- f = i
- col1.append(f)
- m = (prompt + f"{i}")
- #print("^^" + f + " ^^")
- prompt = m.strip()
- text = tokenizer.encode(prompt)
- myinput, past_key_values = torch.tensor([text]), None
- logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
- logits = logits[0,-1]
- probabilities = torch.nn.functional.softmax(logits)
- best_logits, best_indices = logits.topk(20)
- best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
- for i in best_words[0:20]:
- #print(i)
- col2.append(i)
- #print(col1)
- #print(col2)
-  # map each of the 10 first-step tokens to its 20 second-step continuations
-  d = {col1[i]: col2[i * 20 : (i + 1) * 20] for i in range(10)}
- df = pd.DataFrame(data=d)
- print(df)
- st.write(df)
- return df
-
-def BestProbs5(prompt):
- prompt = prompt.strip()
- text = tokenizer.encode(prompt)
- myinput, past_key_values = torch.tensor([text]), None
- logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
- logits = logits[0,-1]
- probabilities = torch.nn.functional.softmax(logits)
- best_logits, best_indices = logits.topk(number_of_outputs)
- best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
- for i in best_words[0:number_of_outputs]:
- #print(i)
- print("\n")
- g = (prompt + i)
- st.write(g)
- l = run_generate(g, "hey")
- st.write(l)
-
-def run_generate(text, bad_words):
- yo = []
- input_ids = tokenizer.encode(text, return_tensors='pt')
- res = len(tokenizer.encode(text))
- bad_words = bad_words.split()
-    bad_word_ids = [[7829], [40940]]  # hard-coded token ids banned in every generation
- for bad_word in bad_words:
- bad_word = " " + bad_word
- ids = tokenizer(bad_word).input_ids
- bad_word_ids.append(ids)
- sample_outputs = model.generate(
- input_ids,
- do_sample=True,
- max_length= res + 5,
- min_length = res + 5,
- top_k=50,
- temperature=1.0,
- num_return_sequences=3,
- bad_words_ids=bad_word_ids
- )
- for i in range(3):
- e = tokenizer.decode(sample_outputs[i])
- e = e.replace(text, "")
- yo.append(e)
- print(yo)
- return yo
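
The helpers above repeatedly take a softmax over the final-position logits and keep the top-k candidates (`logits.topk(...)` followed by decoding). The selection step itself is easy to sketch without a model; this is a plain-Python illustration of the same pattern, not code from the repo:

```python
import math

def softmax(logits):
    # Shift by the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def topk(probs, k):
    # Return (probability, index) pairs for the k largest probabilities,
    # mirroring logits.topk(number_of_outputs) above.
    return sorted(((p, i) for i, p in enumerate(probs)), reverse=True)[:k]

probs = softmax([2.0, 1.0, 0.0])
best = topk(probs, 2)
```

Each selected index would then be decoded back to a token string with the tokenizer, as `BestProbs5` does.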
-
-with st.form(key='my_form'):
- prompt = st.text_area(label='Enter sentence', value=g, height=500)
- submit_button = st.form_submit_button(label='Submit')
- submit_button2 = st.form_submit_button(label='Fast Forward')
- submit_button3 = st.form_submit_button(label='Fast Forward 2.0')
- submit_button4 = st.form_submit_button(label='Get Top')
-
- if submit_button:
- with torch.no_grad():
- text = tokenizer.encode(prompt)
- myinput, past_key_values = torch.tensor([text]), None
- myinput = myinput
- myinput= myinput.to(device)
- logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
- logits = logits[0,-1]
- probabilities = torch.nn.functional.softmax(logits, dim=-1)
- best_logits, best_indices = logits.topk(log_nums)
- best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
- text.append(best_indices[0].item())
- best_probabilities = probabilities[best_indices].tolist()
- words = []
- st.write(best_words)
- if submit_button2:
- print("----")
- st.write("___")
- m = LogProbs(prompt)
- st.write("___")
- st.write(m)
- st.write("___")
- if submit_button3:
- print("----")
- st.write("___")
- st.write(BestProbs)
- if submit_button4:
- BestProbs5(prompt)
\ No newline at end of file
diff --git a/spaces/Usaki108/VoiceChange/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/Usaki108/VoiceChange/infer_pack/modules/F0Predictor/DioF0Predictor.py
deleted file mode 100644
index eb60d8830714338448be009d1075e3594337db15..0000000000000000000000000000000000000000
--- a/spaces/Usaki108/VoiceChange/infer_pack/modules/F0Predictor/DioF0Predictor.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class DioF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
- Interpolate F0 over unvoiced (zero) frames.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i]  # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.dio(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- for index, pitch in enumerate(f0):
- f0[index] = round(pitch, 1)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
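
`resize_f0` above resamples the F0 track to a target length with `np.interp`, first masking unvoiced (near-zero) frames as NaN so that interpolation does not blend a real pitch value with an unvoiced zero; the NaNs are zeroed again afterwards. A self-contained sketch of that resampling (NumPy only, no pyworld):

```python
import numpy as np

def resize_f0(x, target_len):
    # Resample an F0 contour to target_len frames. Unvoiced (~0) frames are
    # marked NaN so interpolated neighbors stay unvoiced instead of getting
    # a spurious blended pitch; nan_to_num restores them to 0 at the end.
    source = np.array(x, dtype=float)
    source[source < 0.001] = np.nan
    xp = np.arange(len(source))
    xq = np.arange(0, len(source) * target_len, len(source)) / target_len
    return np.nan_to_num(np.interp(xq, xp, source))

out = resize_f0([100.0, 100.0, 100.0, 100.0], 8)
```

A constant contour stays constant after resampling, and frames that coincide with unvoiced input frames come back as 0.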
diff --git a/spaces/Wootang01/text_summarizer/app.py b/spaces/Wootang01/text_summarizer/app.py
deleted file mode 100644
index 3ea84b3539feb04a1aaeadda62a2c04642a2baae..0000000000000000000000000000000000000000
--- a/spaces/Wootang01/text_summarizer/app.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import gradio as gr
-from gradio.mix import Parallel, Series
-from transformers import AutoTokenizer, AutoModelWithLMHead, AutoModelForSeq2SeqLM
-
-title = "Text Summarizer"
-description = "Paste an article or other text. Submit the text and the machine will create four summaries based on the words in the text. Which sentences in the text are the most important for the summaries? Which summary is better for your case?"
-examples = [
-
- ["""
- Hong Kong health authorities on Wednesday began a city-wide search for the contacts of a Covid-19 patient from a suspected dance cluster and ordered a Royal Caribbean "cruise to nowhere" ship with 3,700 people onboard to return to port early.
-
-The latest hunt was sparked by a 62-year-old woman who danced with some 20 friends at Victoria Park and the Causeway Bay Community Centre on New Year's Eve. Two of the fellow dancers, one of whom was a domestic helper, came up positive in preliminary tests.
-
-The 62-year-old was said to have contracted the virus from her 28-year-old flight attendant daughter, who returned to Hong Kong on December 27 and had onset of symptoms on December 29.
-
-It was only on January 1 that the 62-year-old was classified as a close contact and being brought to a quarantine facility.
-
-The helper's employer and eight other of her close contacts then went on a "cruise to nowhere" journey on January 2, which was due to return on January 6.
-
-As part of its coronavirus restrictions, Hong Kong has restricted cruises to short trips in nearby waters, with ships asked to operate at reduced capacity and to only allow vaccinated passengers who test negative for the virus.
-
-The "Spectrum of the Seas" ship had about 2,500 passengers and 1,200 staff on board. The nine close contact passengers were isolated from the rest of the people on board and preliminary tests taken during the journey returned negative results, authorities said.
-
-"Spectrum of the Seas is taking appropriate measures under guidelines by the Department of Health," Royal Caribbean said in a statement.
-
-The ship was on early Wednesday ordered to return to the Kai Tak Cruise Terminal. The nine close contacts will be sent to a quarantine center, while the rest of the passengers and staff will have to undergo several compulsory tests in the coming days, the government said.
-"""],
-["""
-Hong Kong has seen a record low in the Joint University Programmes Admissions System this year, the lowest in nearly a decade.
-
-JUPAS - the main route to apply for local tertiary institutions - allows applicants to seek entry to full-time programs at the eight institutions funded by the University Grants Committee and the self-financed Hong Kong Metropolitan University.
-
-According to the JUPAS website, there were 38,955 applicants this year, a drop of 1,057 from last year. The figures have been declining each year since 2013 from the peak of 69,172.
-
-Reports suggested that the record figure could be a result of the city’s low birth rate and the increasing number of families moving abroad with their children, out of worries about the city’s political status quo.
-
-It also noted that JUPAS updating its program list may also contribute to the drop in application numbers.
-"""]
-]
-
-io1 = gr.Interface.load('huggingface/sshleifer/distilbart-cnn-12-6')
-io2 = gr.Interface.load("huggingface/facebook/bart-large-cnn")
-io3 = gr.Interface.load("huggingface/csebuetnlp/mT5_multilingual_XLSum")
-io4 = gr.Interface.load("huggingface/google/pegasus-xsum")
-
-iface = Parallel(io1, io2, io3, io4,
- theme='huggingface',
- inputs = gr.inputs.Textbox(lines = 10, label="Text"), title=title, description=description, examples=examples)
-
-iface.launch(share=False)
\ No newline at end of file
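
`Parallel` above fans a single input into four hosted summarization models and shows their outputs side by side. The pattern itself is independent of Gradio; a minimal sketch with plain callables standing in for the hosted models (the two "summarizers" here are placeholders for illustration, not real models):

```python
def first_sentence(text):
    # Placeholder "summarizer": keep everything up to the first period.
    return text.split(".")[0].strip() + "."

def first_words(text, n=8):
    # Placeholder "summarizer": keep the first n words.
    return " ".join(text.split()[:n])

def parallel(fns, text):
    # Run every summarizer on the same input and collect the outputs,
    # which is what gradio.mix.Parallel does for the loaded interfaces.
    return [fn(text) for fn in fns]

outputs = parallel([first_sentence, first_words],
                   "Hong Kong began a search. More details followed.")
```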
diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/__init__.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/__init__.py
deleted file mode 100644
index 1759733cc109fa348c3f764c5939b5b609521cb3..0000000000000000000000000000000000000000
--- a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from . import data, modules, models
-
-__version__ = '0.0.1'
diff --git a/spaces/XzJosh/Ava2-Bert-VITS2/losses.py b/spaces/XzJosh/Ava2-Bert-VITS2/losses.py
deleted file mode 100644
index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Ava2-Bert-VITS2/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
-
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
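
`discriminator_loss` and `generator_loss` above are the least-squares GAN objective: real discriminator outputs are pushed toward 1, fake outputs toward 0, and the generator pushes its fakes toward 1. The per-output arithmetic can be checked with plain Python on toy scores (no torch; scalar stand-ins for the tensor means):

```python
def lsgan_d_loss(real_scores, fake_scores):
    # mean((1 - D(x))^2) + mean(D(G(z))^2), matching discriminator_loss above.
    r = sum((1 - s) ** 2 for s in real_scores) / len(real_scores)
    g = sum(s ** 2 for s in fake_scores) / len(fake_scores)
    return r + g

def lsgan_g_loss(fake_scores):
    # mean((1 - D(G(z)))^2), matching generator_loss above.
    return sum((1 - s) ** 2 for s in fake_scores) / len(fake_scores)

loss_d = lsgan_d_loss([1.0, 0.8], [0.2, 0.0])
loss_g = lsgan_g_loss([0.2, 0.0])
```

With near-perfect discriminator scores the discriminator loss is small, while the generator loss stays large until the fakes fool the discriminator.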
diff --git a/spaces/XzJosh/otto-Bert-VITS2/bert_gen.py b/spaces/XzJosh/otto-Bert-VITS2/bert_gen.py
deleted file mode 100644
index 44814715396ffc3abe84a12c74d66293c356eb4f..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/otto-Bert-VITS2/bert_gen.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import torch
-from torch.utils.data import DataLoader
-from multiprocessing import Pool
-import commons
-import utils
-from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate
-from tqdm import tqdm
-import warnings
-
-from text import cleaned_text_to_sequence, get_bert
-
-config_path = 'configs/config.json'
-hps = utils.get_hparams_from_file(config_path)
-
-def process_line(line):
- _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|")
- phone = phones.split(" ")
- tone = [int(i) for i in tone.split(" ")]
- word2ph = [int(i) for i in word2ph.split(" ")]
- w2pho = [i for i in word2ph]
- word2ph = [i for i in word2ph]
- phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
-
- if hps.data.add_blank:
- phone = commons.intersperse(phone, 0)
- tone = commons.intersperse(tone, 0)
- language = commons.intersperse(language, 0)
- for i in range(len(word2ph)):
- word2ph[i] = word2ph[i] * 2
- word2ph[0] += 1
- wav_path = f'{_id}'
-
- bert_path = wav_path.replace(".wav", ".bert.pt")
- try:
- bert = torch.load(bert_path)
- assert bert.shape[-1] == len(phone)
- except Exception:
- bert = get_bert(text, word2ph, language_str)
- assert bert.shape[-1] == len(phone)
- torch.save(bert, bert_path)
-
-
-if __name__ == '__main__':
- lines = []
- with open(hps.data.training_files, encoding='utf-8' ) as f:
- lines.extend(f.readlines())
-
- with open(hps.data.validation_files, encoding='utf-8' ) as f:
- lines.extend(f.readlines())
-
- with Pool(processes=12) as pool: # suitable for an A100 40GB; if you hit OOM, decrease the number of processes
- for _ in tqdm(pool.imap_unordered(process_line, lines)):
- pass
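
When `add_blank` is set, `process_line` above intersperses a blank token (id 0) between the phones and doubles each `word2ph` count so the per-word phone totals still line up with the longer sequence. A minimal version of that interleaving (`intersperse` is re-implemented here for illustration, since the `commons` module is not shown in this diff):

```python
def intersperse(seq, item):
    # Place `item` between and around every element:
    # [a, b, c] -> [item, a, item, b, item, c, item].
    result = [item] * (len(seq) * 2 + 1)
    result[1::2] = seq
    return result

phone = intersperse([5, 7, 9], 0)

# Doubling word2ph and adding 1 to the first word accounts for the
# interleaved blanks plus the single leading blank.
word2ph = [2, 1]
word2ph = [n * 2 for n in word2ph]
word2ph[0] += 1
```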
diff --git a/spaces/YiLin1/Once/README.md b/spaces/YiLin1/Once/README.md
deleted file mode 100644
index e426c73021bf6e268cfc7fd75a2c020649e7aefb..0000000000000000000000000000000000000000
--- a/spaces/YiLin1/Once/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Once
-emoji: 👁
-colorFrom: pink
-colorTo: indigo
-sdk: docker
-pinned: false
-license: mit
-app_port: 8080
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Yiqin/ChatVID/config/yttemporal.py b/spaces/Yiqin/ChatVID/config/yttemporal.py
deleted file mode 100644
index 1e291c18ab8c1b3a6bd3adcbe1a92013ea871783..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/config/yttemporal.py
+++ /dev/null
@@ -1,184 +0,0 @@
-
-import ml_collections
-
-
-def get_config(runlocal=''):
- """Returns the base experiment configuration."""
-
- runlocal = bool(runlocal)
-
- config = ml_collections.ConfigDict()
- config.token_loss_coef = 1.
- config.runlocal = runlocal
- config.experiment_name = 'ytt'
-
- config.count_flops = False if runlocal else ml_collections.ConfigDict(
- {'count_flops': True})
-
- # dataset
- config.dataset_name = 'dense_video_captioning'
- config.dataset_configs = ml_collections.ConfigDict()
- config.dataset_configs.corrupt = 0.25
- config.dataset_configs.span_len = 5.
- config.dataset_configs.proba_corrupt = 1.
- config.dataset_configs.corrupt_coef = 1.
- config.dataset_configs.preserve = False
- notime = ml_collections.config_dict.FieldReference(False)
- config.dataset_configs.notime = notime
- config.dataset_configs.abs_time_token = False
- config.dataset_configs.random_temporal_crop_proba = 1.
- config.dataset_configs.time_format = 'se'
- tmp_only = ml_collections.config_dict.FieldReference(False)
- config.dataset_configs.tmp_only = tmp_only
- config.dataset_configs.split = not runlocal
- order = ml_collections.config_dict.FieldReference('ld')
- config.dataset_configs.order = order
- config.dataset_configs.from_xm = None
-
- config.data_dtype_str = 'float32'
-
- config.dataset_configs.base_dir = '/'
- config.dataset_configs.base_dir = '/path/to/yttemporal'
- config.dataset_configs.tables = {
- 'train': 'train.tfrecord.sst@1024',
- }
- config.dataset_configs.examples_per_subset = {
- 'train': 14780275,
- }
-
- # List of modalities to load, supports `features` only for now.
- # Note that it only specifies which modalities to load, not which to use,
- # which is controlled by config.model.modality_fusion
- config.dataset_configs.modalities = ('features', 'text')
- config.dataset_configs.features_dim = 768
- config.dataset_configs.return_as_dict = True
- num_frames = ml_collections.config_dict.FieldReference(100)
- config.dataset_configs.num_frames = num_frames
- num_bins = ml_collections.config_dict.FieldReference(100)
- config.dataset_configs.num_bins = num_bins
- config.dataset_configs.one_hot_labels = True
- config.dataset_configs.zero_centering = True
- config.dataset_configs.val_on_test = False
- config.dataset_configs.num_eval_clips = 1
- config.dataset_configs.prefetch_to_device = 2
-
- # Text params
- config.dataset_configs.max_num_output_words = 1000
- config.dataset_configs.max_num_input_words = 1000
- config.dataset_configs.tokenizer = ml_collections.ConfigDict()
- config.dataset_configs.tokenizer.tokenizer_type = 'sentence_piece'
- config.dataset_configs.caption_string = 'ASR/segment/label/string'
- config.dataset_configs.train_caption_string = 'ASR/segment/label/string'
- config.dataset_configs.input_timestamp_start_name = 'ASR/segment/start/timestamp'
- config.dataset_configs.input_timestamp_end_name = 'ASR/segment/end/timestamp'
- config.dataset_configs.input_duration_name = 'video/duration'
- config.dataset_configs.output_raw_timestamp_name = 'timestamp'
- config.dataset_configs.output_raw_duration_name = 'duration'
- config.dataset_configs.input_feature_name = 'image/clip_embeddings'
- config.dataset_configs.output_raw_feature_name = 'features'
- config.dataset_configs.vocabulary_size = 32128
- config.dataset_configs.max_events = 1100
- config.dataset_configs.max_segments = 0
- config.datasets = {'ytt': config.dataset_configs}
-
- # Decoding
- config.decoding = ml_collections.ConfigDict()
- config.decoding.decoding_method = 'beamsearch'
- config.decoding.num_decodes = 4
- config.decoding.alpha = 0.6
- config.decoding.temperature = 1.
-
- # Model
- config.model_name = 'vid2seq'
- config.model = ml_collections.ConfigDict()
- config.model.from_xm = None
-
- # Encoder configs
- config.model.encoder = ml_collections.ConfigDict()
- config.model.encoder.share_encoder = True
- config.model.encoder.encoder_type = 'cat_encoder'
- config.model.encoder.cat_encoder = ml_collections.ConfigDict()
- config.model.encoder.cat_encoder.dim = 2048
- config.model.encoder.cat_encoder.layers = 12
- config.model.encoder.cat_encoder.heads = 12
- config.model.encoder.cat_encoder.pos_embed = 'learned_1d'
- config.model.encoder.cat_encoder.dropout_rate = 0.1
- config.model.encoder.cat_encoder.t5_dropout_rate = 0.1
- config.model.encoder.cat_encoder.stochastic_depth = 0.
- config.model.encoder.cat_encoder.pretrained_config = 't5_1_1_base'
- config.model.encoder.from_xm = None
-
- # Decoder configs
- config.model.decoder_type = 't5_decoder'
- config.model.decoder = ml_collections.ConfigDict()
- config.model.decoder.order = order
- config.model.decoder.t5_decoder = ml_collections.ConfigDict()
- config.model.decoder.t5_decoder.logits_via_embedding = False
- config.model.decoder.t5_decoder.dropout_rate = 0.1
- config.model.decoder.t5_decoder.num_frames = num_frames
- config.model.decoder.notime = notime
- config.model.decoder.num_bins = num_bins
- config.model.decoder.tmp_only = tmp_only
- # Obtained from scenic/projects/t5/model.py.
- config.model.decoder.t5_decoder.pretrained_config = 't5_1_1_base'
-
- config.model.tmp_decoder_type = 't5_decoder'
- config.model.tmp_decoder = ml_collections.ConfigDict()
- config.model.tmp_decoder.t5_decoder = ml_collections.ConfigDict()
- config.model.tmp_decoder.t5_decoder.logits_via_embedding = False
- config.model.tmp_decoder.t5_decoder.dropout_rate = 0.
- config.model.tmp_decoder.t5_decoder.pretrained_config = 't5_1_1_base'
- config.model.decoder.t5_decoder.local = 5
-
- # Initialisation configs
- config.init_from = ml_collections.ConfigDict()
- config.init_from.step = None
- config.init_from.xm = None
-
- config.init_from.encoder = ml_collections.ConfigDict()
- config.init_from.encoder.checkpoint_path = None
- config.init_from.encoder.init_from_vit = False
- config.init_from.encoder = ml_collections.ConfigDict()
- config.init_from.encoder.load_pretrained_weights = True
-
- config.init_from.decoder = ml_collections.ConfigDict()
- config.init_from.decoder.load_pretrained_weights = True
-
- config.init_from.t5 = ml_collections.ConfigDict()
- config.init_from.t5.load_pretrained_weights = True
-
- # Training
- config.trainer_name = 'densevidcap_trainer'
- config.optimizer = 'adam'
- config.optimizer_configs = ml_collections.ConfigDict()
- config.optimizer_configs.weight_decay = 0.
- config.l2_decay_factor = 0.
- config.max_grad_norm = 0.1
- config.label_smoothing = 0.1
- epochs = ml_collections.config_dict.FieldReference(10)
- config.num_training_epochs = 0
- batch_size = ml_collections.config_dict.FieldReference(512)
- config.batch_size = 1 if runlocal else batch_size # 128 # Minimum is num_devices = 32
- config.eval_batch_size = 1 if runlocal else 128 # Needs to be num_local_devices
- config.rng_seed = 0
-
- # Learning schedule.
- config.lr_configs = ml_collections.ConfigDict()
- config.lr_configs.learning_rate_schedule = 'compound'
- config.lr_configs.factors = 'constant * linear_warmup'
- config.lr_configs.warmup_steps = 1000
- config.lr_configs.base_learning_rate = 1e-4
-
- config.eval_metrics = ['cider', 'meteor', 'soda']
-
- # Logging
- config.log_summary_steps = 500 # write TB and/or XM summary
- config.checkpoint_steps = 5000
- config.log_eval_steps = 5000
- config.write_summary = True # write TB and/or XM summary
- config.write_xm_measurements = True # write XM measurements
- config.xprof = True # Profile using xprof
- config.checkpoint = True # do checkpointing
- config.debug_train = False # debug mode during training
- config.debug_eval = False # debug mode during eval
- return config
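
`config.decoding` above selects beam search with `alpha = 0.6`. In the usual GNMT-style scoring, alpha is the exponent of a length penalty that normalizes each hypothesis's summed log-probability; a sketch of that rule (the penalty form is the common GNMT one, assumed here rather than read from this repo's decoder):

```python
def length_penalty(length, alpha=0.6):
    # GNMT length penalty: ((5 + length) / 6) ** alpha.
    return ((5 + length) / 6) ** alpha

def beam_score(logprob_sum, length, alpha=0.6):
    # Dividing by a penalty that grows with length keeps beam search
    # from always preferring the shortest hypothesis.
    return logprob_sum / length_penalty(length, alpha)
```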
diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/roi_align_rotated.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/roi_align_rotated.py
deleted file mode 100644
index d097326c3a6116e872cecf0d675b42958f359b14..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/roi_align_rotated.py
+++ /dev/null
@@ -1,91 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import torch
-from torch import nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair
-
-
-class _ROIAlignRotated(Function):
- @staticmethod
- def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio):
- ctx.save_for_backward(roi)
- ctx.output_size = _pair(output_size)
- ctx.spatial_scale = spatial_scale
- ctx.sampling_ratio = sampling_ratio
- ctx.input_shape = input.size()
- output = torch.ops.detectron2.roi_align_rotated_forward(
- input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio
- )
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- (rois,) = ctx.saved_tensors
- output_size = ctx.output_size
- spatial_scale = ctx.spatial_scale
- sampling_ratio = ctx.sampling_ratio
- bs, ch, h, w = ctx.input_shape
- grad_input = torch.ops.detectron2.roi_align_rotated_backward(
- grad_output,
- rois,
- spatial_scale,
- output_size[0],
- output_size[1],
- bs,
- ch,
- h,
- w,
- sampling_ratio,
- )
- return grad_input, None, None, None, None, None
-
-
-roi_align_rotated = _ROIAlignRotated.apply
-
-
-class ROIAlignRotated(nn.Module):
- def __init__(self, output_size, spatial_scale, sampling_ratio):
- """
- Args:
- output_size (tuple): h, w
- spatial_scale (float): scale the input boxes by this number
- sampling_ratio (int): number of inputs samples to take for each output
- sample. 0 to take samples densely.
-
- Note:
- ROIAlignRotated supports continuous coordinate by default:
- Given a continuous coordinate c, its two neighboring pixel indices (in our
- pixel model) are computed by floor(c - 0.5) and ceil(c - 0.5). For example,
- c=1.3 has pixel neighbors with discrete indices [0] and [1] (which are sampled
- from the underlying signal at continuous coordinates 0.5 and 1.5).
- """
- super(ROIAlignRotated, self).__init__()
- self.output_size = output_size
- self.spatial_scale = spatial_scale
- self.sampling_ratio = sampling_ratio
-
- def forward(self, input, rois):
- """
- Args:
- input: NCHW images
- rois: Bx6 boxes. First column is the index into N.
- The other 5 columns are (x_ctr, y_ctr, width, height, angle_degrees).
- """
- assert rois.dim() == 2 and rois.size(1) == 6
- orig_dtype = input.dtype
- if orig_dtype == torch.float16:
- input = input.float()
- rois = rois.float()
- return roi_align_rotated(
- input, rois, self.output_size, self.spatial_scale, self.sampling_ratio
- ).to(dtype=orig_dtype)
-
- def __repr__(self):
- tmpstr = self.__class__.__name__ + "("
- tmpstr += "output_size=" + str(self.output_size)
- tmpstr += ", spatial_scale=" + str(self.spatial_scale)
- tmpstr += ", sampling_ratio=" + str(self.sampling_ratio)
- tmpstr += ")"
- return tmpstr
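
The docstring above defines the continuous-coordinate pixel model: pixel i is sampled at center i + 0.5, so the two neighbors of a continuous coordinate c are floor(c - 0.5) and ceil(c - 0.5). That neighbor rule is easy to check directly (a toy illustration, separate from the CUDA/C++ kernel the op actually calls):

```python
import math

def pixel_neighbors(c):
    # Pixel i covers [i, i + 1) with its sample at center i + 0.5, so the
    # neighboring pixel indices of continuous coordinate c are the floor
    # and ceil of c - 0.5.
    return math.floor(c - 0.5), math.ceil(c - 0.5)

lo, hi = pixel_neighbors(1.3)
```

This reproduces the docstring's example: c = 1.3 has neighbors [0] and [1], sampled at continuous coordinates 0.5 and 1.5.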
diff --git a/spaces/YuAnthony/Audio-Caption/model.py b/spaces/YuAnthony/Audio-Caption/model.py
deleted file mode 100644
index f2df8f89d491a04864bf24f2ddcfb1de61be5474..0000000000000000000000000000000000000000
--- a/spaces/YuAnthony/Audio-Caption/model.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.transformer import TransformerDecoder,TransformerDecoderLayer
-
-from hparams import hparams as hp
-from encoder import Cnn10,init_layer
-
-
-class PositionalEncoding(nn.Module):
-
- def __init__(self, d_model, dropout=0.1, max_len=100):
- super(PositionalEncoding, self).__init__()
- self.dropout = nn.Dropout(p=dropout)
-
- pe = torch.zeros(max_len, d_model)
- position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
- div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
- pe[:, 0::2] = torch.sin(position * div_term)
- pe[:, 1::2] = torch.cos(position * div_term)
- pe = pe.unsqueeze(0).transpose(0, 1)
- self.register_buffer('pe', pe)
-
- def forward(self, x):
- x = x + self.pe[:x.size(0), :]
- return self.dropout(x)
-
-
-class TransformerModel(nn.Module):
-
- def __init__(self, ntoken, ninp, nhead, nhid, nlayers, batch_size, dropout=0.5,pretrain_cnn=None,
- pretrain_emb=None,freeze_cnn=True):
- super(TransformerModel, self).__init__()
-
- self.model_type = 'cnn+transformer'
- decoder_layers = TransformerDecoderLayer(d_model=nhid, nhead=nhead, dropout=dropout)
- self.transformer_decoder = TransformerDecoder(decoder_layers, nlayers)
- self.word_emb = nn.Embedding(ntoken, nhid)
- self.ninp = ninp
- self.nhid = nhid
- self.fc = nn.Linear(512, 512, bias=True)
- self.fc1 = nn.Linear(512, nhid, bias=True)
- self.dec_fc = nn.Linear(nhid, ntoken)
- self.batch_size = batch_size
- self.ntoken = ntoken
- self.encoder = Cnn10()
- self.dropout = nn.Dropout(dropout)
- self.pos_encoder = PositionalEncoding(nhid, dropout)
- self.generator = nn.Softmax(dim=-1)
- self.init_weights()
-
- if pretrain_cnn is not None:
- dict_trained = pretrain_cnn
- dict_new = self.encoder.state_dict().copy()
- new_list = list(self.encoder.state_dict().keys())
- trained_list = list(dict_trained.keys())
- for i in range(len(new_list)):
- dict_new[new_list[i]] = dict_trained[trained_list[i]]
- self.encoder.load_state_dict(dict_new)
- if freeze_cnn:
- self.freeze_cnn()
-
- if pretrain_emb is not None:
- self.word_emb.weight.data = pretrain_emb
-
- def freeze_cnn(self):
- for p in self.encoder.parameters():
- p.requires_grad = False
-
- def generate_square_subsequent_mask(self, sz):
- mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
- mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
- return mask
-
- def init_weights(self):
- initrange = 0.1
- init_layer(self.fc1)
- init_layer(self.fc)
- self.word_emb.weight.data.uniform_(-initrange, initrange)
- self.dec_fc.bias.data.zero_()
- self.dec_fc.weight.data.uniform_(-initrange, initrange)
-
- def encode(self, src, input_mask=None):
- x = self.encoder(src) # (batch_size, 512, T/16, mel_bins/16)
- x = torch.mean(x, dim=3) # (batch_size, 512, T/16)
- x = x.permute(2, 0, 1) # (T/16,batch_size,512)
- x = F.relu_(self.fc(x))
- x = F.dropout(x, p=0.2, training=self.training)
- x = torch.relu(self.fc1(x))
- return x
-
- def decode(self, mem, tgt, input_mask=None, target_mask=None, target_padding_mask=None):
- # tgt:(batch_size,T_out)
- # mem:(T_mem,batch_size,nhid)
-
- tgt = tgt.transpose(0, 1) # (T_out,batch_size)
- if target_mask is None or target_mask.size(0) != len(tgt):
- device = tgt.device
- target_mask = self.generate_square_subsequent_mask(len(tgt)).to(device)
-
- tgt = self.dropout(self.word_emb(tgt)) * math.sqrt(self.nhid)
- tgt = self.pos_encoder(tgt)
- # mem = self.pos_encoder(mem)
- output = self.transformer_decoder(tgt, mem, memory_mask=input_mask, tgt_mask=target_mask,
- tgt_key_padding_mask=target_padding_mask)
- output = self.dec_fc(output)
- return output
-
- def forward(self, src, tgt, input_mask=None, target_mask=None, target_padding_mask=None):
- # src:(batch_size,T_in,feature_dim)
- # tgt:(batch_size,T_out)
- mem = self.encode(src)
- output = self.decode(mem, tgt, input_mask=input_mask, target_mask=target_mask,
- target_padding_mask=target_padding_mask)
- return output
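
`generate_square_subsequent_mask` above builds the causal mask that stops each decoder position from attending to later positions: allowed entries become 0 and future entries become -inf, which softmax turns into zero attention weight. The same pattern in plain Python (a sketch of the mask values, not the torch implementation):

```python
def causal_mask(sz):
    # Row i may attend to columns 0..i; strictly later columns are
    # masked with -inf so their post-softmax weight is zero.
    neg_inf = float("-inf")
    return [[0.0 if j <= i else neg_inf for j in range(sz)] for i in range(sz)]

mask = causal_mask(3)
```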
diff --git a/spaces/a-v-bely/russian-task-generator/utilities_ui/custom_download_button.py b/spaces/a-v-bely/russian-task-generator/utilities_ui/custom_download_button.py
deleted file mode 100644
index 89b418e503949c582486a2645d54b18666d481c1..0000000000000000000000000000000000000000
--- a/spaces/a-v-bely/russian-task-generator/utilities_ui/custom_download_button.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import io
-import re
-import uuid
-import base64
-import streamlit as st
-from typing import Optional, Union
-from streamlit.elements.button import DownloadButtonDataType
-
-DownloadButtonDataType = Union[DownloadButtonDataType, "pd.DataFrame", "Styler"]
-
-HAS_PD = True
-
-
-def download_button(label: str,
- data: DownloadButtonDataType,
- file_name: Optional[str] = None) -> str:
- """Generate a link to download the given data; supports file-like objects and pd.DataFrame.
-
- Args:
- label: text shown on the page.
- data: file-like object or pd.DataFrame.
- file_name: filename and extension of the file, e.g. mydata.csv.
- Raises:
- RuntimeError: when data type is not supported
- Returns:
- the anchor tag to download object_to_download
- Examples:
- download_button('Click to download data!', your_df, 'YOUR_DF.xlsx'),
- download_button('Click to download text!', your_str.encode(), 'YOUR_STRING.txt')
- """
-
- # inspired by https://gist.github.com/chad-m/6be98ed6cf1c4f17d09b7f6e5ca2978f
-
- data_as_bytes: bytes
- if isinstance(data, str):
- data_as_bytes = data.encode()
- elif isinstance(data, io.TextIOWrapper):
- string_data = data.read()
- data_as_bytes = string_data.encode()
- # mimetype = mimetype or "text/plain"
- # Assume bytes; try methods until we run out.
- elif isinstance(data, bytes):
- data_as_bytes = data
- elif isinstance(data, io.BytesIO):
- data.seek(0)
- data_as_bytes = data.getvalue()
- elif isinstance(data, io.BufferedReader):
- data.seek(0)
- data_as_bytes = data.read()
- elif isinstance(data, io.RawIOBase):
- data.seek(0)
- data_as_bytes = data.read() or b""
- elif HAS_PD and hasattr(data, "to_excel"):
- bio = io.BytesIO()
- data.to_excel(bio)
- bio.seek(0)
- data_as_bytes = bio.read()
- else:
- raise RuntimeError("Invalid binary data format: %s" % type(data))
-
- b64 = base64.b64encode(data_as_bytes).decode()
- button_uuid = str(uuid.uuid4()).replace("-", "")
- button_id = re.sub(r"\d+", "", button_uuid)
-
- custom_css = f"""
- """
-
- dl_link = (
- custom_css
- + f'<a download="{file_name}" id="{button_id}" href="data:application/octet-stream;base64,{b64}">{label}</a>'
- )
-
- div_dl_link = f"""
-{dl_link}
-"""
- st.markdown(div_dl_link, unsafe_allow_html=True)
- return dl_link
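
The button works by packing the payload into a base64 data URL, so the browser can serve the download from the anchor's `href` without a server round-trip. The encoding step on its own (standard library only):

```python
import base64

def to_data_url(payload: bytes, mime: str = "application/octet-stream") -> str:
    # Base64-encode the bytes and wrap them in a data: URL usable as an href.
    b64 = base64.b64encode(payload).decode()
    return f"data:{mime};base64,{b64}"

url = to_data_url(b"hello")
```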
diff --git a/spaces/abdvl/datahub_qa_bot/docs/how/migrating-graph-service-implementation.md b/spaces/abdvl/datahub_qa_bot/docs/how/migrating-graph-service-implementation.md
deleted file mode 100644
index 024740b2ce61f06293040cbfc73d7663624bf44d..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/how/migrating-graph-service-implementation.md
+++ /dev/null
@@ -1,51 +0,0 @@
-# Migrate Graph Service Implementation to Elasticsearch
-
-We currently support either Elasticsearch or Neo4j as backend implementations for the graph service. We recommend
-Elasticsearch for those looking for a lighter deployment or who do not want to manage a Neo4j database.
-If you started using Neo4j as your graph service backend, here is how you can migrate to Elasticsearch.
-
-## Docker-compose
-
-If you are running your instance locally through Docker, you will want to spin up your DataHub instance with
-Elasticsearch as the backend. On a clean start, this happens by default. However, if you've written data to
-Neo4j, you need to explicitly ask DataHub to start in Elasticsearch mode.
-
-```shell
-datahub docker quickstart --graph-service-impl=elasticsearch
-```
-
-Next, run the following command from root to rebuild your graph index.
-
-```
-./docker/datahub-upgrade/datahub-upgrade.sh -u RestoreIndices
-```
-
-After this command completes, you should be migrated. Open up the DataHub UI and verify your relationships are
-visible.
-
-Once you confirm the migration is successful, you must remove your Neo4j volume by running
-
-```shell
-docker volume rm datahub_neo4jdata
-```
-
-This prevents your DataHub instance from coming up in Neo4j mode in the future.
-
-## Helm
-
-First, adjust your Helm values to turn off Neo4j and set `graph_service_impl` to `elasticsearch`.
-
-To turn off neo4j in your prerequisites file, set `neo4j-community`'s `enabled` property to `false`
-in this [values.yaml](https://github.com/acryldata/datahub-helm/blob/master/charts/prerequisites/values.yaml#L54).
-
-Then, set `graph_service_impl` to `elasticsearch` in the
-[values.yaml](https://github.com/acryldata/datahub-helm/blob/master/charts/datahub/values.yaml#L63) of datahub.
-
-
-See the [deployment helm guide](../deploy/kubernetes.md#components) for more details on how to
-set up your helm deployment.
-
-Finally, follow the [restore-indices helm guide](./restore-indices.md) to re-build
-your graph index.
-
-Once the job completes, your data will be migrated.
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/optimizer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/optimizer.py
deleted file mode 100644
index 4ef3e9ff8f9c6926e32bdf027612267b64ed80df..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/optimizer.py
+++ /dev/null
@@ -1,508 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-from collections import defaultdict
-from itertools import chain
-
-from torch.nn.utils import clip_grad
-
-from annotator.uniformer.mmcv.utils import TORCH_VERSION, _BatchNorm, digit_version
-from ..dist_utils import allreduce_grads
-from ..fp16_utils import LossScaler, wrap_fp16_model
-from .hook import HOOKS, Hook
-
-try:
- # If PyTorch version >= 1.6.0, torch.cuda.amp.GradScaler would be imported
- # and used; otherwise, auto fp16 will adopt mmcv's implementation.
- from torch.cuda.amp import GradScaler
-except ImportError:
- pass
-
-
-@HOOKS.register_module()
-class OptimizerHook(Hook):
-
- def __init__(self, grad_clip=None):
- self.grad_clip = grad_clip
-
- def clip_grads(self, params):
- params = list(
- filter(lambda p: p.requires_grad and p.grad is not None, params))
- if len(params) > 0:
- return clip_grad.clip_grad_norm_(params, **self.grad_clip)
-
- def after_train_iter(self, runner):
- runner.optimizer.zero_grad()
- runner.outputs['loss'].backward()
- if self.grad_clip is not None:
- grad_norm = self.clip_grads(runner.model.parameters())
- if grad_norm is not None:
- # Add grad norm to the logger
- runner.log_buffer.update({'grad_norm': float(grad_norm)},
- runner.outputs['num_samples'])
- runner.optimizer.step()
-
-
-@HOOKS.register_module()
-class GradientCumulativeOptimizerHook(OptimizerHook):
- """Optimizer Hook implements multi-iters gradient cumulating.
-
- Args:
- cumulative_iters (int, optional): Num of gradient cumulative iters.
- The optimizer will step every `cumulative_iters` iters.
- Defaults to 1.
-
- Examples:
- >>> # Use cumulative_iters to simulate a large batch size
- >>> # It is helpful when the hardware cannot handle a large batch size.
- >>> loader = DataLoader(data, batch_size=64)
- >>> optim_hook = GradientCumulativeOptimizerHook(cumulative_iters=4)
- >>> # almost equals to
- >>> loader = DataLoader(data, batch_size=256)
- >>> optim_hook = OptimizerHook()
- """
-
- def __init__(self, cumulative_iters=1, **kwargs):
- super(GradientCumulativeOptimizerHook, self).__init__(**kwargs)
-
- assert isinstance(cumulative_iters, int) and cumulative_iters > 0, \
- f'cumulative_iters only accepts positive int, but got ' \
- f'{type(cumulative_iters)} instead.'
-
- self.cumulative_iters = cumulative_iters
- self.divisible_iters = 0
- self.remainder_iters = 0
- self.initialized = False
-
- def has_batch_norm(self, module):
- if isinstance(module, _BatchNorm):
- return True
- for m in module.children():
- if self.has_batch_norm(m):
- return True
- return False
-
- def _init(self, runner):
- if runner.iter % self.cumulative_iters != 0:
- runner.logger.warning(
- 'Resume iter number is not divisible by cumulative_iters in '
- 'GradientCumulativeOptimizerHook, which means the gradient of '
- 'some iters is lost and the result may be influenced slightly.'
- )
-
- if self.has_batch_norm(runner.model) and self.cumulative_iters > 1:
- runner.logger.warning(
- 'GradientCumulativeOptimizerHook may slightly decrease '
- 'performance if the model has BatchNorm layers.')
-
- residual_iters = runner.max_iters - runner.iter
-
- self.divisible_iters = (
- residual_iters // self.cumulative_iters * self.cumulative_iters)
- self.remainder_iters = residual_iters - self.divisible_iters
-
- self.initialized = True
-
- def after_train_iter(self, runner):
- if not self.initialized:
- self._init(runner)
-
- if runner.iter < self.divisible_iters:
- loss_factor = self.cumulative_iters
- else:
- loss_factor = self.remainder_iters
- loss = runner.outputs['loss']
- loss = loss / loss_factor
- loss.backward()
-
- if (self.every_n_iters(runner, self.cumulative_iters)
- or self.is_last_iter(runner)):
-
- if self.grad_clip is not None:
- grad_norm = self.clip_grads(runner.model.parameters())
- if grad_norm is not None:
- # Add grad norm to the logger
- runner.log_buffer.update({'grad_norm': float(grad_norm)},
- runner.outputs['num_samples'])
- runner.optimizer.step()
- runner.optimizer.zero_grad()
-
-
-if (TORCH_VERSION != 'parrots'
- and digit_version(TORCH_VERSION) >= digit_version('1.6.0')):
-
- @HOOKS.register_module()
- class Fp16OptimizerHook(OptimizerHook):
- """FP16 optimizer hook (using PyTorch's implementation).
-
- If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend,
- to take care of the optimization procedure.
-
- Args:
- loss_scale (float | str | dict): Scale factor configuration.
- If loss_scale is a float, static loss scaling will be used with
- the specified scale. If loss_scale is a string, it must be
- 'dynamic', then dynamic loss scaling will be used.
-            It can also be a dict containing arguments of GradScaler.
-            Defaults to 512. For PyTorch >= 1.6, mmcv uses the official
- implementation of GradScaler. If you use a dict version of
- loss_scale to create GradScaler, please refer to:
- https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler
- for the parameters.
-
- Examples:
- >>> loss_scale = dict(
- ... init_scale=65536.0,
- ... growth_factor=2.0,
- ... backoff_factor=0.5,
- ... growth_interval=2000
- ... )
- >>> optimizer_hook = Fp16OptimizerHook(loss_scale=loss_scale)
- """
-
- def __init__(self,
- grad_clip=None,
- coalesce=True,
- bucket_size_mb=-1,
- loss_scale=512.,
- distributed=True):
- self.grad_clip = grad_clip
- self.coalesce = coalesce
- self.bucket_size_mb = bucket_size_mb
- self.distributed = distributed
- self._scale_update_param = None
- if loss_scale == 'dynamic':
- self.loss_scaler = GradScaler()
- elif isinstance(loss_scale, float):
- self._scale_update_param = loss_scale
- self.loss_scaler = GradScaler(init_scale=loss_scale)
- elif isinstance(loss_scale, dict):
- self.loss_scaler = GradScaler(**loss_scale)
- else:
- raise ValueError('loss_scale must be of type float, dict, or '
- f'"dynamic", got {loss_scale}')
-
- def before_run(self, runner):
- """Preparing steps before Mixed Precision Training."""
- # wrap model mode to fp16
- wrap_fp16_model(runner.model)
- # resume from state dict
- if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']:
- scaler_state_dict = runner.meta['fp16']['loss_scaler']
- self.loss_scaler.load_state_dict(scaler_state_dict)
-
- def copy_grads_to_fp32(self, fp16_net, fp32_weights):
- """Copy gradients from fp16 model to fp32 weight copy."""
- for fp32_param, fp16_param in zip(fp32_weights,
- fp16_net.parameters()):
- if fp16_param.grad is not None:
- if fp32_param.grad is None:
- fp32_param.grad = fp32_param.data.new(
- fp32_param.size())
- fp32_param.grad.copy_(fp16_param.grad)
-
- def copy_params_to_fp16(self, fp16_net, fp32_weights):
- """Copy updated params from fp32 weight copy to fp16 model."""
- for fp16_param, fp32_param in zip(fp16_net.parameters(),
- fp32_weights):
- fp16_param.data.copy_(fp32_param.data)
-
- def after_train_iter(self, runner):
- """Backward optimization steps for Mixed Precision Training. For
- dynamic loss scaling, please refer to
- https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler.
-
- 1. Scale the loss by a scale factor.
- 2. Backward the loss to obtain the gradients.
- 3. Unscale the optimizer’s gradient tensors.
- 4. Call optimizer.step() and update scale factor.
- 5. Save loss_scaler state_dict for resume purpose.
- """
- # clear grads of last iteration
- runner.model.zero_grad()
- runner.optimizer.zero_grad()
-
- self.loss_scaler.scale(runner.outputs['loss']).backward()
- self.loss_scaler.unscale_(runner.optimizer)
- # grad clip
- if self.grad_clip is not None:
- grad_norm = self.clip_grads(runner.model.parameters())
- if grad_norm is not None:
- # Add grad norm to the logger
- runner.log_buffer.update({'grad_norm': float(grad_norm)},
- runner.outputs['num_samples'])
- # backward and update scaler
- self.loss_scaler.step(runner.optimizer)
- self.loss_scaler.update(self._scale_update_param)
-
- # save state_dict of loss_scaler
- runner.meta.setdefault(
- 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict()
-
- @HOOKS.register_module()
- class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook,
- Fp16OptimizerHook):
- """Fp16 optimizer Hook (using PyTorch's implementation) implements
- multi-iters gradient cumulating.
-
- If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend,
- to take care of the optimization procedure.
- """
-
- def __init__(self, *args, **kwargs):
- super(GradientCumulativeFp16OptimizerHook,
- self).__init__(*args, **kwargs)
-
- def after_train_iter(self, runner):
- if not self.initialized:
- self._init(runner)
-
- if runner.iter < self.divisible_iters:
- loss_factor = self.cumulative_iters
- else:
- loss_factor = self.remainder_iters
- loss = runner.outputs['loss']
- loss = loss / loss_factor
-
- self.loss_scaler.scale(loss).backward()
-
- if (self.every_n_iters(runner, self.cumulative_iters)
- or self.is_last_iter(runner)):
-
- # copy fp16 grads in the model to fp32 params in the optimizer
- self.loss_scaler.unscale_(runner.optimizer)
-
- if self.grad_clip is not None:
- grad_norm = self.clip_grads(runner.model.parameters())
- if grad_norm is not None:
- # Add grad norm to the logger
- runner.log_buffer.update(
- {'grad_norm': float(grad_norm)},
- runner.outputs['num_samples'])
-
- # backward and update scaler
- self.loss_scaler.step(runner.optimizer)
- self.loss_scaler.update(self._scale_update_param)
-
- # save state_dict of loss_scaler
- runner.meta.setdefault(
- 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict()
-
- # clear grads
- runner.model.zero_grad()
- runner.optimizer.zero_grad()
-
-else:
-
- @HOOKS.register_module()
- class Fp16OptimizerHook(OptimizerHook):
- """FP16 optimizer hook (mmcv's implementation).
-
-        The steps of the fp16 optimizer are as follows.
-        1. Scale the loss value.
-        2. BP in the fp16 model.
-        3. Copy gradients from fp16 model to fp32 weights.
-        4. Update fp32 weights.
-        5. Copy updated parameters from fp32 weights to fp16 model.
-
- Refer to https://arxiv.org/abs/1710.03740 for more details.
-
- Args:
- loss_scale (float | str | dict): Scale factor configuration.
- If loss_scale is a float, static loss scaling will be used with
- the specified scale. If loss_scale is a string, it must be
- 'dynamic', then dynamic loss scaling will be used.
- It can also be a dict containing arguments of LossScaler.
- Defaults to 512.
- """
-
- def __init__(self,
- grad_clip=None,
- coalesce=True,
- bucket_size_mb=-1,
- loss_scale=512.,
- distributed=True):
- self.grad_clip = grad_clip
- self.coalesce = coalesce
- self.bucket_size_mb = bucket_size_mb
- self.distributed = distributed
- if loss_scale == 'dynamic':
- self.loss_scaler = LossScaler(mode='dynamic')
- elif isinstance(loss_scale, float):
- self.loss_scaler = LossScaler(
- init_scale=loss_scale, mode='static')
- elif isinstance(loss_scale, dict):
- self.loss_scaler = LossScaler(**loss_scale)
- else:
- raise ValueError('loss_scale must be of type float, dict, or '
- f'"dynamic", got {loss_scale}')
-
- def before_run(self, runner):
- """Preparing steps before Mixed Precision Training.
-
- 1. Make a master copy of fp32 weights for optimization.
- 2. Convert the main model from fp32 to fp16.
- """
- # keep a copy of fp32 weights
- old_groups = runner.optimizer.param_groups
- runner.optimizer.param_groups = copy.deepcopy(
- runner.optimizer.param_groups)
- state = defaultdict(dict)
- p_map = {
- old_p: p
- for old_p, p in zip(
- chain(*(g['params'] for g in old_groups)),
- chain(*(g['params']
- for g in runner.optimizer.param_groups)))
- }
- for k, v in runner.optimizer.state.items():
- state[p_map[k]] = v
- runner.optimizer.state = state
- # convert model to fp16
- wrap_fp16_model(runner.model)
- # resume from state dict
- if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']:
- scaler_state_dict = runner.meta['fp16']['loss_scaler']
- self.loss_scaler.load_state_dict(scaler_state_dict)
-
- def copy_grads_to_fp32(self, fp16_net, fp32_weights):
- """Copy gradients from fp16 model to fp32 weight copy."""
- for fp32_param, fp16_param in zip(fp32_weights,
- fp16_net.parameters()):
- if fp16_param.grad is not None:
- if fp32_param.grad is None:
- fp32_param.grad = fp32_param.data.new(
- fp32_param.size())
- fp32_param.grad.copy_(fp16_param.grad)
-
- def copy_params_to_fp16(self, fp16_net, fp32_weights):
- """Copy updated params from fp32 weight copy to fp16 model."""
- for fp16_param, fp32_param in zip(fp16_net.parameters(),
- fp32_weights):
- fp16_param.data.copy_(fp32_param.data)
-
- def after_train_iter(self, runner):
-        """Backward optimization steps for Mixed Precision Training. For
-        dynamic loss scaling, please refer to `loss_scalar.py`.
-
- 1. Scale the loss by a scale factor.
- 2. Backward the loss to obtain the gradients (fp16).
- 3. Copy gradients from the model to the fp32 weight copy.
- 4. Scale the gradients back and update the fp32 weight copy.
- 5. Copy back the params from fp32 weight copy to the fp16 model.
- 6. Save loss_scaler state_dict for resume purpose.
- """
- # clear grads of last iteration
- runner.model.zero_grad()
- runner.optimizer.zero_grad()
- # scale the loss value
- scaled_loss = runner.outputs['loss'] * self.loss_scaler.loss_scale
- scaled_loss.backward()
- # copy fp16 grads in the model to fp32 params in the optimizer
-
- fp32_weights = []
- for param_group in runner.optimizer.param_groups:
- fp32_weights += param_group['params']
- self.copy_grads_to_fp32(runner.model, fp32_weights)
- # allreduce grads
- if self.distributed:
- allreduce_grads(fp32_weights, self.coalesce,
- self.bucket_size_mb)
-
- has_overflow = self.loss_scaler.has_overflow(fp32_weights)
- # if has overflow, skip this iteration
- if not has_overflow:
- # scale the gradients back
- for param in fp32_weights:
- if param.grad is not None:
- param.grad.div_(self.loss_scaler.loss_scale)
- if self.grad_clip is not None:
- grad_norm = self.clip_grads(fp32_weights)
- if grad_norm is not None:
- # Add grad norm to the logger
- runner.log_buffer.update(
- {'grad_norm': float(grad_norm)},
- runner.outputs['num_samples'])
- # update fp32 params
- runner.optimizer.step()
- # copy fp32 params to the fp16 model
- self.copy_params_to_fp16(runner.model, fp32_weights)
- self.loss_scaler.update_scale(has_overflow)
- if has_overflow:
- runner.logger.warning('Check overflow, downscale loss scale '
- f'to {self.loss_scaler.cur_scale}')
-
- # save state_dict of loss_scaler
- runner.meta.setdefault(
- 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict()
-
- @HOOKS.register_module()
- class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook,
- Fp16OptimizerHook):
- """Fp16 optimizer Hook (using mmcv implementation) implements multi-
- iters gradient cumulating."""
-
- def __init__(self, *args, **kwargs):
- super(GradientCumulativeFp16OptimizerHook,
- self).__init__(*args, **kwargs)
-
- def after_train_iter(self, runner):
- if not self.initialized:
- self._init(runner)
-
- if runner.iter < self.divisible_iters:
- loss_factor = self.cumulative_iters
- else:
- loss_factor = self.remainder_iters
-
- loss = runner.outputs['loss']
- loss = loss / loss_factor
-
- # scale the loss value
- scaled_loss = loss * self.loss_scaler.loss_scale
- scaled_loss.backward()
-
- if (self.every_n_iters(runner, self.cumulative_iters)
- or self.is_last_iter(runner)):
-
- # copy fp16 grads in the model to fp32 params in the optimizer
- fp32_weights = []
- for param_group in runner.optimizer.param_groups:
- fp32_weights += param_group['params']
- self.copy_grads_to_fp32(runner.model, fp32_weights)
- # allreduce grads
- if self.distributed:
- allreduce_grads(fp32_weights, self.coalesce,
- self.bucket_size_mb)
-
- has_overflow = self.loss_scaler.has_overflow(fp32_weights)
- # if has overflow, skip this iteration
- if not has_overflow:
- # scale the gradients back
- for param in fp32_weights:
- if param.grad is not None:
- param.grad.div_(self.loss_scaler.loss_scale)
- if self.grad_clip is not None:
- grad_norm = self.clip_grads(fp32_weights)
- if grad_norm is not None:
- # Add grad norm to the logger
- runner.log_buffer.update(
- {'grad_norm': float(grad_norm)},
- runner.outputs['num_samples'])
- # update fp32 params
- runner.optimizer.step()
- # copy fp32 params to the fp16 model
- self.copy_params_to_fp16(runner.model, fp32_weights)
- else:
- runner.logger.warning(
- 'Check overflow, downscale loss scale '
- f'to {self.loss_scaler.cur_scale}')
-
- self.loss_scaler.update_scale(has_overflow)
-
- # save state_dict of loss_scaler
- runner.meta.setdefault(
- 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict()
-
- # clear grads
- runner.model.zero_grad()
- runner.optimizer.zero_grad()
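The gradient-cumulative hooks above share one piece of bookkeeping: the remaining iterations are split into a block divisible by `cumulative_iters` plus a trailing remainder, and each iteration's loss is divided by the size of the block it falls in. A small sketch of that arithmetic (function name hypothetical; assumes training starts at iteration 0):

```python
def cumulative_loss_factors(max_iters, cumulative_iters):
    """Per-iteration loss divisors as used by the gradient-cumulative hooks."""
    residual_iters = max_iters  # assumes a fresh run starting at iteration 0
    # largest prefix of the run that is an exact multiple of cumulative_iters
    divisible_iters = residual_iters // cumulative_iters * cumulative_iters
    remainder_iters = residual_iters - divisible_iters
    return [
        cumulative_iters if it < divisible_iters else remainder_iters
        for it in range(max_iters)
    ]
```

Dividing each loss by its factor means the gradients summed over any one accumulation block form an average, so stepping once per block approximates training with a `cumulative_iters`-times larger batch.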
diff --git a/spaces/abidlabs/remove-bg/app.py b/spaces/abidlabs/remove-bg/app.py
deleted file mode 100644
index dd31f27a7583b281d2c50fe8a1ca6dee1bb3d800..0000000000000000000000000000000000000000
--- a/spaces/abidlabs/remove-bg/app.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import gradio as gr
-import cv2
-import torch
-import numpy as np
-from torchvision import transforms
-
-description = "Automatically remove the image background from a profile photo. Based on a [Space by eugenesiow](https://huggingface.co/spaces/eugenesiow/remove-bg)."
-
-
-def make_transparent_foreground(pic, mask):
- # split the image into channels
- b, g, r = cv2.split(np.array(pic).astype('uint8'))
-    # add an alpha channel filled with fully opaque pixels (255 = opaque)
- a = np.ones(mask.shape, dtype='uint8') * 255
- # merge the alpha channel back
- alpha_im = cv2.merge([b, g, r, a], 4)
- # create a transparent background
- bg = np.zeros(alpha_im.shape)
- # setup the new mask
- new_mask = np.stack([mask, mask, mask, mask], axis=2)
- # copy only the foreground color pixels from the original image where mask is set
- foreground = np.where(new_mask, alpha_im, bg).astype(np.uint8)
-
- return foreground
-
-
-def remove_background(input_image):
- preprocess = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
- ])
-
- input_tensor = preprocess(input_image)
- input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model
-
- # move the input and model to GPU for speed if available
- if torch.cuda.is_available():
- input_batch = input_batch.to('cuda')
- model.to('cuda')
-
- with torch.no_grad():
- output = model(input_batch)['out'][0]
- output_predictions = output.argmax(0)
-
- # create a binary (black and white) mask of the profile foreground
- mask = output_predictions.byte().cpu().numpy()
- background = np.zeros(mask.shape)
- bin_mask = np.where(mask, 255, background).astype(np.uint8)
-
- foreground = make_transparent_foreground(input_image, bin_mask)
-
- return foreground, bin_mask
-
-
-def inference(img):
- foreground, _ = remove_background(img)
- return foreground
-
-
-torch.hub.download_url_to_file('https://pbs.twimg.com/profile_images/691700243809718272/z7XZUARB_400x400.jpg',
- 'demis.jpg')
-torch.hub.download_url_to_file('https://hai.stanford.edu/sites/default/files/styles/person_medium/public/2020-03/hai_1512feifei.png?itok=INFuLABp',
- 'lifeifei.png')
-model = torch.hub.load('pytorch/vision:v0.6.0', 'deeplabv3_resnet101', pretrained=True)
-model.eval()
-
-gr.Interface(
- inference,
- gr.inputs.Image(type="pil", label="Input"),
- gr.outputs.Image(type="pil", label="Output"),
- description=description,
- examples=[['demis.jpg'], ['lifeifei.png']],
- enable_queue=True,
- css=".footer{display:none !important}"
-).launch(debug=False)
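The core of `make_transparent_foreground` above is a masked copy with an added alpha channel. A NumPy-only sketch of the same idea, without cv2 (function name is illustrative):

```python
import numpy as np


def transparent_foreground(rgb, mask):
    """Keep masked pixels of an HxWx3 uint8 image, zeroing everything else.

    rgb:  uint8 array of shape (H, W, 3)
    mask: 0/1 or boolean array of shape (H, W); truthy where the foreground is
    """
    # alpha channel: 255 everywhere (fully opaque)
    alpha = np.full(mask.shape + (1,), 255, dtype=np.uint8)
    rgba = np.concatenate([rgb, alpha], axis=2)
    # broadcast the 2-D mask across all four channels
    keep = np.repeat(mask[..., None].astype(bool), 4, axis=2)
    # background pixels become (0, 0, 0, 0), i.e. fully transparent
    return np.where(keep, rgba, 0).astype(np.uint8)
```

The original uses `cv2.split`/`cv2.merge` and `np.stack` for the same effect; the result is identical for RGB input.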
diff --git a/spaces/abidlabs/structured-data-classification/app.py b/spaces/abidlabs/structured-data-classification/app.py
deleted file mode 100644
index 6a8778266f8d94a8cb77897a42745777e397e5fc..0000000000000000000000000000000000000000
--- a/spaces/abidlabs/structured-data-classification/app.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import numpy as np
-import tensorflow as tf
-import gradio as gr
-from huggingface_hub import from_pretrained_keras
-
-# download the already pushed model
-model = from_pretrained_keras("keras-io/structured-data-classification")
-
-def convert_and_predict(age, sex, cp, trestbps, chol, fbs, restecg, thalach, exang, oldpeak, slope, ca, thal):
-
- # some conversions from the gradio interface are needed
- sample_converted = {
- "age": age,
- "sex": sex,
- "cp": cp+1,
- "trestbps": trestbps,
- "chol": chol,
- "fbs": 0 if fbs<=120 else 1,
- "restecg": restecg,
- "thalach": thalach,
- "exang": exang,
- "oldpeak": oldpeak,
- "slope": slope+1,
- "ca": ca,
- "thal": thal,
-}
-
- input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample_converted.items()}
- predictions = model.predict(input_dict)
-
- return f'{predictions[0][0]:.2%}'
-
-
-# the app uses slider and number fields for numerical inputs
-# and radio buttons for categorical ones
-inputs = [
- gr.Slider(minimum=1, maximum=120, step=1, label='age', value=60),
- gr.Radio(choices=['female','male'], label='sex', type='index',value='male'),
- gr.Radio(choices=['typical angina',
- 'atypical angina',
- 'non-anginal pain',
- 'asymptomatic'],
-              type='index', label='chest pain type', value='typical angina'),
- gr.Number(label='blood pressure in mmHg', value=145),
- gr.Number(label='serum cholestoral in mg/dl', value=233),
- gr.Number(label='fasting blood sugar in mg/dl', value=150),
- gr.Radio(choices=['normal','T-T wave abnormality','probable or definite left ventricular hypertrophy'],
- label='resting ecg', type='index',value='probable or definite left ventricular hypertrophy'),
- gr.Number(label='maximum heart rate achieved', value=150),
- gr.Radio(choices=['no','yes',], type='index', label='exercise induced angina',value='no'),
- gr.Number(label='ST depression induced by exercise relative to rest', value=2.3),
- gr.Radio(choices=['psloping','flat','downsloping'], label='slope of the peak exercise ST segment', type='index', value='downsloping'),
- gr.Number(label ='number of major vessels (0-3) colored by flourosopy',value=0),
- gr.Radio(['normal','fixed','reversable'],label ='thal', value='fixed')
- ]
-
-
-# the app outputs text
-output = gr.Textbox(label='Probability of having a heart disease, as evaluated by our model:')
-# it's good practice to pass examples, description and a title to guide users
-title = "Structured Data Classification 🧮"
-description = "Binary classification of structured data including numerical and categorical features for Heart Disease prediction."
-
-article = "Author: Marco Buiani. Based on this keras example by François Chollet. HuggingFace Model here "
-
-examples = [[41, 'female', 'atypical angina', 130, 204, 100, 'normal', 150, 'yes', 1.4, 'psloping', 2, 'reversable'],
- [63, 'male', 'typical angina', 145, 233, 150, 'T-T wave abnormality', 150, 'no', 2.3, 'flat', 0, 'fixed']]
-
-gr.Interface(convert_and_predict, inputs, output, examples= examples, allow_flagging='never',
- title=title, description=description, article=article, live=True).launch()
\ No newline at end of file
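The conversions inside `convert_and_predict` are easy to get wrong: gradio `Radio` components with `type='index'` return 0-based positions, while the heart-disease dataset codes `cp` and `slope` from 1, and `fbs` is a flag for fasting blood sugar above 120 mg/dl. A sketch of just those conversions (helper name hypothetical):

```python
def convert_inputs(cp_index, fbs_mg_dl, slope_index):
    """Mirror the index/threshold conversions done in convert_and_predict."""
    return {
        "cp": cp_index + 1,                   # radio index 0-3 -> dataset code 1-4
        "fbs": 0 if fbs_mg_dl <= 120 else 1,  # raw mg/dl -> binary flag
        "slope": slope_index + 1,             # radio index 0-2 -> dataset code 1-3
    }
```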
diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/rotation_conversions.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/rotation_conversions.py
deleted file mode 100644
index 1006e8a3117b231a7a456d5b826e76347fe0bfd4..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/rotation_conversions.py
+++ /dev/null
@@ -1,532 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
-# Check PYTORCH3D_LICENCE before use
-
-import functools
-from typing import Optional
-
-import torch
-import torch.nn.functional as F
-
-
-"""
-The transformation matrices returned from the functions in this file assume
-the points on which the transformation will be applied are column vectors.
-i.e. the R matrix is structured as
- R = [
- [Rxx, Rxy, Rxz],
- [Ryx, Ryy, Ryz],
- [Rzx, Rzy, Rzz],
- ] # (3, 3)
-This matrix can be applied to column vectors by post multiplication
-by the points e.g.
- points = [[0], [1], [2]] # (3 x 1) xyz coordinates of a point
- transformed_points = R * points
-To apply the same matrix to points which are row vectors, the R matrix
-can be transposed and pre multiplied by the points:
-e.g.
- points = [[0, 1, 2]] # (1 x 3) xyz coordinates of a point
- transformed_points = points * R.transpose(1, 0)
-"""
-
-
-def quaternion_to_matrix(quaternions):
- """
- Convert rotations given as quaternions to rotation matrices.
- Args:
- quaternions: quaternions with real part first,
- as tensor of shape (..., 4).
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
- r, i, j, k = torch.unbind(quaternions, -1)
- two_s = 2.0 / (quaternions * quaternions).sum(-1)
-
- o = torch.stack(
- (
- 1 - two_s * (j * j + k * k),
- two_s * (i * j - k * r),
- two_s * (i * k + j * r),
- two_s * (i * j + k * r),
- 1 - two_s * (i * i + k * k),
- two_s * (j * k - i * r),
- two_s * (i * k - j * r),
- two_s * (j * k + i * r),
- 1 - two_s * (i * i + j * j),
- ),
- -1,
- )
- return o.reshape(quaternions.shape[:-1] + (3, 3))
-
-
-def _copysign(a, b):
- """
-    Return a tensor where each element has the absolute value taken from the
-    corresponding element of a, with sign taken from the corresponding
- element of b. This is like the standard copysign floating-point operation,
- but is not careful about negative 0 and NaN.
- Args:
- a: source tensor.
- b: tensor whose signs will be used, of the same shape as a.
- Returns:
- Tensor of the same shape as a with the signs of b.
- """
- signs_differ = (a < 0) != (b < 0)
- return torch.where(signs_differ, -a, a)
-
-
-def _sqrt_positive_part(x):
- """
- Returns torch.sqrt(torch.max(0, x))
- but with a zero subgradient where x is 0.
- """
- ret = torch.zeros_like(x)
- positive_mask = x > 0
- ret[positive_mask] = torch.sqrt(x[positive_mask])
- return ret
-
-
-def matrix_to_quaternion(matrix):
- """
- Convert rotations given as rotation matrices to quaternions.
- Args:
- matrix: Rotation matrices as tensor of shape (..., 3, 3).
- Returns:
- quaternions with real part first, as tensor of shape (..., 4).
- """
- if matrix.size(-1) != 3 or matrix.size(-2) != 3:
-        raise ValueError(f"Invalid rotation matrix shape {matrix.shape}.")
- m00 = matrix[..., 0, 0]
- m11 = matrix[..., 1, 1]
- m22 = matrix[..., 2, 2]
- o0 = 0.5 * _sqrt_positive_part(1 + m00 + m11 + m22)
- x = 0.5 * _sqrt_positive_part(1 + m00 - m11 - m22)
- y = 0.5 * _sqrt_positive_part(1 - m00 + m11 - m22)
- z = 0.5 * _sqrt_positive_part(1 - m00 - m11 + m22)
- o1 = _copysign(x, matrix[..., 2, 1] - matrix[..., 1, 2])
- o2 = _copysign(y, matrix[..., 0, 2] - matrix[..., 2, 0])
- o3 = _copysign(z, matrix[..., 1, 0] - matrix[..., 0, 1])
- return torch.stack((o0, o1, o2, o3), -1)
-
-
-def _axis_angle_rotation(axis: str, angle):
- """
-    Return the rotation matrices for rotations about one of the coordinate
-    axes of an Euler angle convention, for each value of the angle given.
- Args:
-        axis: Axis label "X", "Y", or "Z".
- angle: any shape tensor of Euler angles in radians
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
-
- cos = torch.cos(angle)
- sin = torch.sin(angle)
- one = torch.ones_like(angle)
- zero = torch.zeros_like(angle)
-
- if axis == "X":
- R_flat = (one, zero, zero, zero, cos, -sin, zero, sin, cos)
- if axis == "Y":
- R_flat = (cos, zero, sin, zero, one, zero, -sin, zero, cos)
- if axis == "Z":
- R_flat = (cos, -sin, zero, sin, cos, zero, zero, zero, one)
-
- return torch.stack(R_flat, -1).reshape(angle.shape + (3, 3))
-
-
-def euler_angles_to_matrix(euler_angles, convention: str):
- """
- Convert rotations given as Euler angles in radians to rotation matrices.
- Args:
- euler_angles: Euler angles in radians as tensor of shape (..., 3).
- convention: Convention string of three uppercase letters from
- {"X", "Y", and "Z"}.
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
- if euler_angles.dim() == 0 or euler_angles.shape[-1] != 3:
- raise ValueError("Invalid input euler angles.")
- if len(convention) != 3:
- raise ValueError("Convention must have 3 letters.")
- if convention[1] in (convention[0], convention[2]):
- raise ValueError(f"Invalid convention {convention}.")
- for letter in convention:
- if letter not in ("X", "Y", "Z"):
- raise ValueError(f"Invalid letter {letter} in convention string.")
- matrices = map(_axis_angle_rotation, convention, torch.unbind(euler_angles, -1))
- return functools.reduce(torch.matmul, matrices)
-
-
-def _angle_from_tan(
- axis: str, other_axis: str, data, horizontal: bool, tait_bryan: bool
-):
- """
- Extract the first or third Euler angle from the two members of
- the matrix which are positive constant times its sine and cosine.
- Args:
-        axis: Axis label "X", "Y", or "Z" for the angle we are finding.
-        other_axis: Axis label "X", "Y", or "Z" for the middle axis in the
- convention.
- data: Rotation matrices as tensor of shape (..., 3, 3).
- horizontal: Whether we are looking for the angle for the third axis,
- which means the relevant entries are in the same row of the
- rotation matrix. If not, they are in the same column.
- tait_bryan: Whether the first and third axes in the convention differ.
- Returns:
- Euler Angles in radians for each matrix in data as a tensor
- of shape (...).
- """
-
- i1, i2 = {"X": (2, 1), "Y": (0, 2), "Z": (1, 0)}[axis]
- if horizontal:
- i2, i1 = i1, i2
- even = (axis + other_axis) in ["XY", "YZ", "ZX"]
- if horizontal == even:
- return torch.atan2(data[..., i1], data[..., i2])
- if tait_bryan:
- return torch.atan2(-data[..., i2], data[..., i1])
- return torch.atan2(data[..., i2], -data[..., i1])
-
-
-def _index_from_letter(letter: str):
- if letter == "X":
- return 0
- if letter == "Y":
- return 1
- if letter == "Z":
- return 2
-
-
-def matrix_to_euler_angles(matrix, convention: str):
- """
- Convert rotations given as rotation matrices to Euler angles in radians.
- Args:
- matrix: Rotation matrices as tensor of shape (..., 3, 3).
- convention: Convention string of three uppercase letters.
- Returns:
- Euler angles in radians as tensor of shape (..., 3).
- """
- if len(convention) != 3:
- raise ValueError("Convention must have 3 letters.")
- if convention[1] in (convention[0], convention[2]):
- raise ValueError(f"Invalid convention {convention}.")
- for letter in convention:
- if letter not in ("X", "Y", "Z"):
- raise ValueError(f"Invalid letter {letter} in convention string.")
- if matrix.size(-1) != 3 or matrix.size(-2) != 3:
- raise ValueError(f"Invalid rotation matrix shape {matrix.shape}.")
- i0 = _index_from_letter(convention[0])
- i2 = _index_from_letter(convention[2])
- tait_bryan = i0 != i2
- if tait_bryan:
- central_angle = torch.asin(
- matrix[..., i0, i2] * (-1.0 if i0 - i2 in [-1, 2] else 1.0)
- )
- else:
- central_angle = torch.acos(matrix[..., i0, i0])
-
- o = (
- _angle_from_tan(
- convention[0], convention[1], matrix[..., i2], False, tait_bryan
- ),
- central_angle,
- _angle_from_tan(
- convention[2], convention[1], matrix[..., i0, :], True, tait_bryan
- ),
- )
- return torch.stack(o, -1)
-
-
-def random_quaternions(
- n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False
-):
- """
- Generate random quaternions representing rotations,
- i.e. versors with nonnegative real part.
- Args:
- n: Number of quaternions in a batch to return.
- dtype: Type to return.
- device: Desired device of returned tensor. Default:
- uses the current device for the default tensor type.
- requires_grad: Whether the resulting tensor should have the gradient
- flag set.
- Returns:
- Quaternions as tensor of shape (N, 4).
- """
- o = torch.randn((n, 4), dtype=dtype, device=device, requires_grad=requires_grad)
- s = (o * o).sum(1)
- o = o / _copysign(torch.sqrt(s), o[:, 0])[:, None]
- return o
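The normalization step above can be checked with a standalone sketch. Here plain tensor ops stand in for `_copysign`: dividing by the norm with the sign of the first component both normalizes and flips any quaternion whose real part is negative.

```python
import torch

# Standalone sketch of what random_quaternions computes: draw Gaussian
# 4-vectors, normalize to unit length, then flip the sign of any quaternion
# whose real part is negative (the effect of _copysign(sqrt(s), o[:, 0])).
torch.manual_seed(0)
o = torch.randn(4, 4)
o = o / o.norm(dim=1, keepdim=True)
o = torch.where(o[:, :1] < 0, -o, o)

print(o.norm(dim=1))         # all ones: unit quaternions
print((o[:, 0] >= 0).all())  # real parts are nonnegative
```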
-
-
-def random_rotations(
- n: int, dtype: Optional[torch.dtype] = None, device=None, requires_grad=False
-):
- """
- Generate random rotations as 3x3 rotation matrices.
- Args:
- n: Number of rotation matrices in a batch to return.
- dtype: Type to return.
- device: Device of returned tensor. Default: if None,
- uses the current device for the default tensor type.
- requires_grad: Whether the resulting tensor should have the gradient
- flag set.
- Returns:
- Rotation matrices as tensor of shape (n, 3, 3).
- """
- quaternions = random_quaternions(
- n, dtype=dtype, device=device, requires_grad=requires_grad
- )
- return quaternion_to_matrix(quaternions)
-
-
-def random_rotation(
- dtype: Optional[torch.dtype] = None, device=None, requires_grad=False
-):
- """
- Generate a single random 3x3 rotation matrix.
- Args:
- dtype: Type to return
- device: Device of returned tensor. Default: if None,
- uses the current device for the default tensor type
- requires_grad: Whether the resulting tensor should have the gradient
- flag set
- Returns:
- Rotation matrix as tensor of shape (3, 3).
- """
- return random_rotations(1, dtype, device, requires_grad)[0]
-
-
-def standardize_quaternion(quaternions):
- """
- Convert a unit quaternion to a standard form: one in which the real
- part is nonnegative.
- Args:
- quaternions: Quaternions with real part first,
- as tensor of shape (..., 4).
- Returns:
- Standardized quaternions as tensor of shape (..., 4).
- """
- return torch.where(quaternions[..., 0:1] < 0, -quaternions, quaternions)
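Since `-q` and `q` encode the same rotation, standardization just picks the representative with nonnegative real part. An inline version of the `torch.where` above:

```python
import torch

# -q and q represent the same rotation; pick the one whose real part
# (first component) is nonnegative.
q = torch.tensor([-0.5, 0.5, 0.5, 0.5])
std = torch.where(q[..., 0:1] < 0, -q, q)
print(std)  # tensor([ 0.5000, -0.5000, -0.5000, -0.5000])
```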
-
-
-def quaternion_raw_multiply(a, b):
- """
- Multiply two quaternions.
- Usual torch rules for broadcasting apply.
- Args:
- a: Quaternions as tensor of shape (..., 4), real part first.
- b: Quaternions as tensor of shape (..., 4), real part first.
- Returns:
- The product of a and b, a tensor of quaternions of shape (..., 4).
- """
- aw, ax, ay, az = torch.unbind(a, -1)
- bw, bx, by, bz = torch.unbind(b, -1)
- ow = aw * bw - ax * bx - ay * by - az * bz
- ox = aw * bx + ax * bw + ay * bz - az * by
- oy = aw * by - ax * bz + ay * bw + az * bx
- oz = aw * bz + ax * by - ay * bx + az * bw
- return torch.stack((ow, ox, oy, oz), -1)
-
-
-def quaternion_multiply(a, b):
- """
- Multiply two quaternions representing rotations, returning the quaternion
- representing their composition, i.e. the versor with nonnegative real part.
- Usual torch rules for broadcasting apply.
- Args:
- a: Quaternions as tensor of shape (..., 4), real part first.
- b: Quaternions as tensor of shape (..., 4), real part first.
- Returns:
- The product of a and b, a tensor of quaternions of shape (..., 4).
- """
- ab = quaternion_raw_multiply(a, b)
- return standardize_quaternion(ab)
-
-
-def quaternion_invert(quaternion):
- """
- Given a quaternion representing rotation, get the quaternion representing
- its inverse.
- Args:
- quaternion: Quaternions as tensor of shape (..., 4), with real part
- first, which must be versors (unit quaternions).
- Returns:
- The inverse, a tensor of quaternions of shape (..., 4).
- """
-
- return quaternion * quaternion.new_tensor([1, -1, -1, -1])
-
-
-def quaternion_apply(quaternion, point):
- """
- Apply the rotation given by a quaternion to a 3D point.
- Usual torch rules for broadcasting apply.
- Args:
- quaternion: Tensor of quaternions, real part first, of shape (..., 4).
- point: Tensor of 3D points of shape (..., 3).
- Returns:
- Tensor of rotated points of shape (..., 3).
- """
- if point.size(-1) != 3:
- raise ValueError(f"Points are not in 3D, {point.shape}.")
- real_parts = point.new_zeros(point.shape[:-1] + (1,))
- point_as_quaternion = torch.cat((real_parts, point), -1)
- out = quaternion_raw_multiply(
- quaternion_raw_multiply(quaternion, point_as_quaternion),
- quaternion_invert(quaternion),
- )
- return out[..., 1:]
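The q · p · q⁻¹ sandwich used by `quaternion_apply` can be verified with a self-contained sketch. The raw Hamilton product is re-implemented inline here so the example runs on its own; a 90° rotation about z should send the point (1, 0, 0) to (0, 1, 0).

```python
import math
import torch

def qmul(a, b):
    # Raw Hamilton product, real part first (mirrors quaternion_raw_multiply).
    aw, ax, ay, az = torch.unbind(a, -1)
    bw, bx, by, bz = torch.unbind(b, -1)
    return torch.stack((
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ), -1)

q = torch.tensor([math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4)])  # 90 deg about z
p = torch.tensor([0.0, 1.0, 0.0, 0.0])             # the point (1, 0, 0) as a pure quaternion
q_inv = q * torch.tensor([1.0, -1.0, -1.0, -1.0])  # conjugate = inverse for unit quaternions
rotated = qmul(qmul(q, p), q_inv)[..., 1:]
print(rotated)  # approximately (0, 1, 0)
```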
-
-
-def axis_angle_to_matrix(axis_angle):
- """
- Convert rotations given as axis/angle to rotation matrices.
- Args:
- axis_angle: Rotations given as a vector in axis angle form,
- as a tensor of shape (..., 3), where the magnitude is
- the angle turned anticlockwise in radians around the
- vector's direction.
- Returns:
- Rotation matrices as tensor of shape (..., 3, 3).
- """
- return quaternion_to_matrix(axis_angle_to_quaternion(axis_angle))
-
-
-def matrix_to_axis_angle(matrix):
- """
- Convert rotations given as rotation matrices to axis/angle.
- Args:
- matrix: Rotation matrices as tensor of shape (..., 3, 3).
- Returns:
- Rotations given as a vector in axis angle form, as a tensor
- of shape (..., 3), where the magnitude is the angle
- turned anticlockwise in radians around the vector's
- direction.
- """
- return quaternion_to_axis_angle(matrix_to_quaternion(matrix))
-
-
-def axis_angle_to_quaternion(axis_angle):
- """
- Convert rotations given as axis/angle to quaternions.
- Args:
- axis_angle: Rotations given as a vector in axis angle form,
- as a tensor of shape (..., 3), where the magnitude is
- the angle turned anticlockwise in radians around the
- vector's direction.
- Returns:
- Quaternions with real part first, as tensor of shape (..., 4).
- """
- angles = torch.norm(axis_angle, p=2, dim=-1, keepdim=True)
- half_angles = 0.5 * angles
- eps = 1e-6
- small_angles = angles.abs() < eps
- sin_half_angles_over_angles = torch.empty_like(angles)
- sin_half_angles_over_angles[~small_angles] = (
- torch.sin(half_angles[~small_angles]) / angles[~small_angles]
- )
- # for x small, sin(x/2) is about x/2 - (x/2)^3/6
- # so sin(x/2)/x is about 1/2 - (x*x)/48
- sin_half_angles_over_angles[small_angles] = (
- 0.5 - (angles[small_angles] * angles[small_angles]) / 48
- )
- quaternions = torch.cat(
- [torch.cos(half_angles), axis_angle * sin_half_angles_over_angles], dim=-1
- )
- return quaternions
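A quick sanity check of the conversion above, away from the small-angle branch: a half-turn (π radians) about the z-axis should map to the quaternion (0, 0, 0, 1).

```python
import math
import torch

# Inline version of axis_angle_to_quaternion for an angle well above eps,
# so the exact sin(half)/angle expression applies.
axis_angle = torch.tensor([0.0, 0.0, math.pi])
angles = torch.norm(axis_angle, p=2, dim=-1, keepdim=True)
half_angles = 0.5 * angles
quat = torch.cat(
    [torch.cos(half_angles), axis_angle * torch.sin(half_angles) / angles], dim=-1
)
print(quat)  # approximately (0, 0, 0, 1)
```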
-
-
-def quaternion_to_axis_angle(quaternions):
- """
- Convert rotations given as quaternions to axis/angle.
- Args:
- quaternions: quaternions with real part first,
- as tensor of shape (..., 4).
- Returns:
- Rotations given as a vector in axis angle form, as a tensor
- of shape (..., 3), where the magnitude is the angle
- turned anticlockwise in radians around the vector's
- direction.
- """
- norms = torch.norm(quaternions[..., 1:], p=2, dim=-1, keepdim=True)
- half_angles = torch.atan2(norms, quaternions[..., :1])
- angles = 2 * half_angles
- eps = 1e-6
- small_angles = angles.abs() < eps
- sin_half_angles_over_angles = torch.empty_like(angles)
- sin_half_angles_over_angles[~small_angles] = (
- torch.sin(half_angles[~small_angles]) / angles[~small_angles]
- )
- # for x small, sin(x/2) is about x/2 - (x/2)^3/6
- # so sin(x/2)/x is about 1/2 - (x*x)/48
- sin_half_angles_over_angles[small_angles] = (
- 0.5 - (angles[small_angles] * angles[small_angles]) / 48
- )
- return quaternions[..., 1:] / sin_half_angles_over_angles
-
-
-def rotation_6d_to_matrix(d6: torch.Tensor) -> torch.Tensor:
- """
- Converts 6D rotation representation by Zhou et al. [1] to rotation matrix
- using Gram--Schmidt orthogonalisation per Section B of [1].
- Args:
- d6: 6D rotation representation, of size (*, 6)
- Returns:
- batch of rotation matrices of size (*, 3, 3)
- [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
- On the Continuity of Rotation Representations in Neural Networks.
- IEEE Conference on Computer Vision and Pattern Recognition, 2019.
- Retrieved from http://arxiv.org/abs/1812.07035
- """
-
- a1, a2 = d6[..., :3], d6[..., 3:]
- b1 = F.normalize(a1, dim=-1)
- b2 = a2 - (b1 * a2).sum(-1, keepdim=True) * b1
- b2 = F.normalize(b2, dim=-1)
- b3 = torch.cross(b1, b2, dim=-1)
- return torch.stack((b1, b2, b3), dim=-2)
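The Gram-Schmidt step above can be sketched end to end: the 6D vector (1, 0, 0, 0, 1, 0) already holds two orthonormal rows, so it should reconstruct the identity matrix.

```python
import torch
import torch.nn.functional as F

# Inline Gram-Schmidt from the 6D representation: normalize the first
# 3-vector, orthogonalize and normalize the second, complete with a cross
# product, and stack as rows.
d6 = torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
a1, a2 = d6[..., :3], d6[..., 3:]
b1 = F.normalize(a1, dim=-1)
b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
b3 = torch.cross(b1, b2, dim=-1)
R = torch.stack((b1, b2, b3), dim=-2)
print(R)  # the 3x3 identity
```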
-
-
-def matrix_to_rotation_6d(matrix: torch.Tensor) -> torch.Tensor:
- """
- Converts rotation matrices to 6D rotation representation by Zhou et al. [1]
- by dropping the last row. Note that 6D representation is not unique.
- Args:
- matrix: batch of rotation matrices of size (*, 3, 3)
- Returns:
- 6D rotation representation, of size (*, 6)
- [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H.
- On the Continuity of Rotation Representations in Neural Networks.
- IEEE Conference on Computer Vision and Pattern Recognition, 2019.
- Retrieved from http://arxiv.org/abs/1812.07035
- """
- return matrix[..., :2, :].clone().reshape(*matrix.size()[:-2], 6)
-
-def canonicalize_smplh(poses, trans=None):
- """Canonicalize SMPL-H pose sequences by removing the first frame's
- global rotation (and, when trans is given, re-accumulating the
- translation velocities in the rotated frame)."""
- bs, nframes, njoints = poses.shape[:3]
-
- global_orient = poses[:, :, 0]
-
- # Rotation of the first frame, to be undone for the whole sequence
- rot2d = matrix_to_axis_angle(global_orient[:, 0])
- # rot2d[:, :2] = 0  # uncomment to remove only the rotation about the vertical axis
- rot2d = axis_angle_to_matrix(rot2d)
-
- # Rotate every frame's global orientation into the canonical frame
- global_orient = torch.einsum("ikj,imkl->imjl", rot2d, global_orient)
-
- # Construct canonicalized version of x
- xc = torch.cat((global_orient[:, :, None], poses[:, :, 1:]), dim=2)
-
- if trans is not None:
- vel = trans[:, 1:] - trans[:, :-1]
- # Turn the translation as well
- vel = torch.einsum("ikj,ilk->ilj", rot2d, vel)
- trans = torch.cat((torch.zeros(bs, 1, 3, device=vel.device),
- torch.cumsum(vel, 1)), 1)
- return xc, trans
- else:
- return xc
-
-
\ No newline at end of file
diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/viewer.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/viewer.py
deleted file mode 100644
index d2326c38205c6eaddb4f567e3b088329187af258..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/viewer.py
+++ /dev/null
@@ -1,1160 +0,0 @@
-"""A pyglet-based interactive 3D scene viewer.
-"""
-import copy
-import os
-import sys
-from threading import Thread, RLock
-import time
-
-import imageio
-import numpy as np
-import OpenGL
-import trimesh
-
-try:
- from Tkinter import Tk, tkFileDialog as filedialog
-except Exception:
- try:
- from tkinter import Tk, filedialog as filedialog
- except Exception:
- pass
-
-from .constants import (TARGET_OPEN_GL_MAJOR, TARGET_OPEN_GL_MINOR,
- MIN_OPEN_GL_MAJOR, MIN_OPEN_GL_MINOR,
- TEXT_PADDING, DEFAULT_SCENE_SCALE,
- DEFAULT_Z_FAR, DEFAULT_Z_NEAR, RenderFlags, TextAlign)
-from .light import DirectionalLight
-from .node import Node
-from .camera import PerspectiveCamera, OrthographicCamera, IntrinsicsCamera
-from .trackball import Trackball
-from .renderer import Renderer
-from .mesh import Mesh
-
-import pyglet
-from pyglet import clock
-pyglet.options['shadow_window'] = False
-
-
-class Viewer(pyglet.window.Window):
- """An interactive viewer for 3D scenes.
-
- The viewer's camera is separate from the scene's, but will take on
- the parameters of the scene's main view camera and start in the same pose.
- If the scene does not have a camera, a suitable default will be provided.
-
- Parameters
- ----------
- scene : :class:`Scene`
- The scene to visualize.
- viewport_size : (2,) int
- The width and height of the initial viewing window.
- render_flags : dict
- A set of flags for rendering the scene. Described in the note below.
- viewer_flags : dict
- A set of flags for controlling the viewer's behavior.
- Described in the note below.
- registered_keys : dict
- A map from ASCII key characters to tuples containing:
-
- - A function to be called whenever the key is pressed,
- whose first argument will be the viewer itself.
- - (Optionally) A list of additional positional arguments
- to be passed to the function.
- - (Optionally) A dict of keyword arguments to be passed
- to the function.
-
- kwargs : dict
- Any keyword arguments left over will be interpreted as belonging to
- either the :attr:`.Viewer.render_flags` or :attr:`.Viewer.viewer_flags`
- dictionaries. Those flag sets will be updated appropriately.
-
- Note
- ----
- The basic commands for moving about the scene are given as follows:
-
- - **Rotating about the scene**: Hold the left mouse button and
- drag the cursor.
- - **Rotating about the view axis**: Hold ``CTRL`` and the left mouse
- button and drag the cursor.
- - **Panning**:
-
- - Hold SHIFT, then hold the left mouse button and drag the cursor, or
- - Hold the middle mouse button and drag the cursor.
-
- - **Zooming**:
-
- - Scroll the mouse wheel, or
- - Hold the right mouse button and drag the cursor.
-
- Other keyboard commands are as follows:
-
- - ``a``: Toggles rotational animation mode.
- - ``c``: Toggles backface culling.
- - ``f``: Toggles fullscreen mode.
- - ``h``: Toggles shadow rendering.
- - ``i``: Toggles axis display mode
- (no axes, world axis, mesh axes, all axes).
- - ``l``: Toggles lighting mode
- (scene lighting, Raymond lighting, or direct lighting).
- - ``m``: Toggles face normal visualization.
- - ``n``: Toggles vertex normal visualization.
- - ``o``: Toggles orthographic mode.
- - ``q``: Quits the viewer.
- - ``r``: Starts recording a GIF, and pressing again stops recording
- and opens a file dialog.
- - ``s``: Opens a file dialog to save the current view as an image.
- - ``w``: Toggles wireframe mode
- (scene default, flip wireframes, all wireframe, or all solid).
- - ``z``: Resets the camera to the initial view.
-
- Note
- ----
- The valid keys for ``render_flags`` are as follows:
-
- - ``flip_wireframe``: `bool`, If `True`, all objects will have their
- wireframe modes flipped from what their material indicates.
- Defaults to `False`.
- - ``all_wireframe``: `bool`, If `True`, all objects will be rendered
- in wireframe mode. Defaults to `False`.
- - ``all_solid``: `bool`, If `True`, all objects will be rendered in
- solid mode. Defaults to `False`.
- - ``shadows``: `bool`, If `True`, shadows will be rendered.
- Defaults to `False`.
- - ``vertex_normals``: `bool`, If `True`, vertex normals will be
- rendered as blue lines. Defaults to `False`.
- - ``face_normals``: `bool`, If `True`, face normals will be rendered as
- blue lines. Defaults to `False`.
- - ``cull_faces``: `bool`, If `True`, backfaces will be culled.
- Defaults to `True`.
- - ``point_size`` : float, The point size in pixels. Defaults to 1px.
-
- Note
- ----
- The valid keys for ``viewer_flags`` are as follows:
-
- - ``rotate``: `bool`, If `True`, the scene's camera will rotate
- about an axis. Defaults to `False`.
- - ``rotate_rate``: `float`, The rate of rotation in radians per second.
- Defaults to `PI / 3.0`.
- - ``rotate_axis``: `(3,) float`, The axis in world coordinates to rotate
- about. Defaults to ``[0,0,1]``.
- - ``view_center``: `(3,) float`, The position to rotate the scene about.
- Defaults to the scene's centroid.
- - ``use_raymond_lighting``: `bool`, If `True`, an additional set of three
- directional lights that move with the camera will be added to the scene.
- Defaults to `False`.
- - ``use_direct_lighting``: `bool`, If `True`, an additional directional
- light that moves with the camera and points out of it will be added to
- the scene. Defaults to `False`.
- - ``lighting_intensity``: `float`, The overall intensity of the
- viewer's additional lights (when they're in use). Defaults to 3.0.
- - ``use_perspective_cam``: `bool`, If `True`, a perspective camera will
- be used. Otherwise, an orthographic camera is used. Defaults to `True`.
- - ``save_directory``: `str`, A directory to open the file dialogs in.
- Defaults to `None`.
- - ``window_title``: `str`, A title for the viewer's application window.
- Defaults to `"Scene Viewer"`.
- - ``refresh_rate``: `float`, A refresh rate for rendering, in Hertz.
- Defaults to `30.0`.
- - ``fullscreen``: `bool`, Whether to make viewer fullscreen.
- Defaults to `False`.
- - ``show_world_axis``: `bool`, Whether to show the world axis.
- Defaults to `False`.
- - ``show_mesh_axes``: `bool`, Whether to show the individual mesh axes.
- Defaults to `False`.
- - ``caption``: `list of dict`, Text caption(s) to display on the viewer.
- Defaults to `None`.
-
- Note
- ----
- Animation can be accomplished by running the viewer with ``run_in_thread``
- enabled. Then, just run a loop in your main thread, updating the scene as
- needed. Before updating the scene, be sure to acquire the
- :attr:`.Viewer.render_lock`, and release it when your update is done.
- """
-
- def __init__(self, scene, viewport_size=None,
- render_flags=None, viewer_flags=None,
- registered_keys=None, run_in_thread=False,
- auto_start=True,
- **kwargs):
-
- #######################################################################
- # Save attributes and flags
- #######################################################################
- if viewport_size is None:
- viewport_size = (640, 480)
- self._scene = scene
- self._viewport_size = viewport_size
- self._render_lock = RLock()
- self._is_active = False
- self._should_close = False
- self._run_in_thread = run_in_thread
- self._auto_start = auto_start
-
- self._default_render_flags = {
- 'flip_wireframe': False,
- 'all_wireframe': False,
- 'all_solid': False,
- 'shadows': False,
- 'vertex_normals': False,
- 'face_normals': False,
- 'cull_faces': True,
- 'point_size': 1.0,
- }
- self._default_viewer_flags = {
- 'mouse_pressed': False,
- 'rotate': False,
- 'rotate_rate': np.pi / 3.0,
- 'rotate_axis': np.array([0.0, 0.0, 1.0]),
- 'view_center': None,
- 'record': False,
- 'use_raymond_lighting': False,
- 'use_direct_lighting': False,
- 'lighting_intensity': 3.0,
- 'use_perspective_cam': True,
- 'save_directory': None,
- 'window_title': 'Scene Viewer',
- 'refresh_rate': 30.0,
- 'fullscreen': False,
- 'show_world_axis': False,
- 'show_mesh_axes': False,
- 'caption': None
- }
- self._render_flags = self._default_render_flags.copy()
- self._viewer_flags = self._default_viewer_flags.copy()
- self._viewer_flags['rotate_axis'] = (
- self._default_viewer_flags['rotate_axis'].copy()
- )
-
- if render_flags is not None:
- self._render_flags.update(render_flags)
- if viewer_flags is not None:
- self._viewer_flags.update(viewer_flags)
-
- for key in kwargs:
- if key in self.render_flags:
- self._render_flags[key] = kwargs[key]
- elif key in self.viewer_flags:
- self._viewer_flags[key] = kwargs[key]
-
- # TODO MAC OS BUG FOR SHADOWS
- if sys.platform == 'darwin':
- self._render_flags['shadows'] = False
-
- self._registered_keys = {}
- if registered_keys is not None:
- self._registered_keys = {
- ord(k.lower()): registered_keys[k] for k in registered_keys
- }
-
- #######################################################################
- # Save internal settings
- #######################################################################
-
- # Set up caption stuff
- self._message_text = None
- self._ticks_till_fade = 2.0 / 3.0 * self.viewer_flags['refresh_rate']
- self._message_opac = 1.0 + self._ticks_till_fade
-
- # Set up raymond lights and direct lights
- self._raymond_lights = self._create_raymond_lights()
- self._direct_light = self._create_direct_light()
-
- # Set up axes
- self._axes = {}
- self._axis_mesh = Mesh.from_trimesh(
- trimesh.creation.axis(origin_size=0.1, axis_radius=0.05,
- axis_length=1.0), smooth=False)
- if self.viewer_flags['show_world_axis']:
- self._set_axes(world=self.viewer_flags['show_world_axis'],
- mesh=self.viewer_flags['show_mesh_axes'])
-
- #######################################################################
- # Set up camera node
- #######################################################################
- self._camera_node = None
- self._prior_main_camera_node = None
- self._default_camera_pose = None
- self._default_persp_cam = None
- self._default_orth_cam = None
- self._trackball = None
- self._saved_frames = []
-
- # Extract main camera from scene and set up our mirrored copy
- znear = None
- zfar = None
- if scene.main_camera_node is not None:
- n = scene.main_camera_node
- camera = copy.copy(n.camera)
- if isinstance(camera, (PerspectiveCamera, IntrinsicsCamera)):
- self._default_persp_cam = camera
- znear = camera.znear
- zfar = camera.zfar
- elif isinstance(camera, OrthographicCamera):
- self._default_orth_cam = camera
- znear = camera.znear
- zfar = camera.zfar
- self._default_camera_pose = scene.get_pose(scene.main_camera_node)
- self._prior_main_camera_node = n
-
- # Set defaults as needed
- if zfar is None:
- zfar = max(scene.scale * 10.0, DEFAULT_Z_FAR)
- if znear is None or znear == 0:
- if scene.scale == 0:
- znear = DEFAULT_Z_NEAR
- else:
- znear = min(scene.scale / 10.0, DEFAULT_Z_NEAR)
-
- if self._default_persp_cam is None:
- self._default_persp_cam = PerspectiveCamera(
- yfov=np.pi / 3.0, znear=znear, zfar=zfar
- )
- if self._default_orth_cam is None:
- xmag = ymag = scene.scale
- if scene.scale == 0:
- xmag = ymag = 1.0
- self._default_orth_cam = OrthographicCamera(
- xmag=xmag, ymag=ymag,
- znear=znear,
- zfar=zfar
- )
- if self._default_camera_pose is None:
- self._default_camera_pose = self._compute_initial_camera_pose()
-
- # Pick camera
- if self.viewer_flags['use_perspective_cam']:
- camera = self._default_persp_cam
- else:
- camera = self._default_orth_cam
-
- self._camera_node = Node(
- matrix=self._default_camera_pose, camera=camera
- )
- scene.add_node(self._camera_node)
- scene.main_camera_node = self._camera_node
- self._reset_view()
-
- #######################################################################
- # Initialize OpenGL context and renderer
- #######################################################################
- self._renderer = Renderer(
- self._viewport_size[0], self._viewport_size[1],
- self.render_flags['point_size']
- )
- self._is_active = True
-
- if self.run_in_thread:
- self._thread = Thread(target=self._init_and_start_app)
- self._thread.start()
- else:
- if auto_start:
- self._init_and_start_app()
-
- def start(self):
- self._init_and_start_app()
-
- @property
- def scene(self):
- """:class:`.Scene` : The scene being visualized.
- """
- return self._scene
-
- @property
- def viewport_size(self):
- """(2,) int : The width and height of the viewing window.
- """
- return self._viewport_size
-
- @property
- def render_lock(self):
- """:class:`threading.RLock` : If acquired, prevents the viewer from
- rendering until released.
-
- Run :meth:`.Viewer.render_lock.acquire` before making updates to
- the scene in a different thread, and run
- :meth:`.Viewer.render_lock.release` once you're done to let the viewer
- continue.
- """
- return self._render_lock
-
- @property
- def is_active(self):
- """bool : `True` if the viewer is active, or `False` if it has
- been closed.
- """
- return self._is_active
-
- @property
- def run_in_thread(self):
- """bool : Whether the viewer was run in a separate thread.
- """
- return self._run_in_thread
-
- @property
- def render_flags(self):
- """dict : Flags for controlling the renderer's behavior.
-
- - ``flip_wireframe``: `bool`, If `True`, all objects will have their
- wireframe modes flipped from what their material indicates.
- Defaults to `False`.
- - ``all_wireframe``: `bool`, If `True`, all objects will be rendered
- in wireframe mode. Defaults to `False`.
- - ``all_solid``: `bool`, If `True`, all objects will be rendered in
- solid mode. Defaults to `False`.
- - ``shadows``: `bool`, If `True`, shadows will be rendered.
- Defaults to `False`.
- - ``vertex_normals``: `bool`, If `True`, vertex normals will be
- rendered as blue lines. Defaults to `False`.
- - ``face_normals``: `bool`, If `True`, face normals will be rendered as
- blue lines. Defaults to `False`.
- - ``cull_faces``: `bool`, If `True`, backfaces will be culled.
- Defaults to `True`.
- - ``point_size`` : float, The point size in pixels. Defaults to 1px.
-
- """
- return self._render_flags
-
- @render_flags.setter
- def render_flags(self, value):
- self._render_flags = value
-
- @property
- def viewer_flags(self):
- """dict : Flags for controlling the viewer's behavior.
-
- The valid keys for ``viewer_flags`` are as follows:
-
- - ``rotate``: `bool`, If `True`, the scene's camera will rotate
- about an axis. Defaults to `False`.
- - ``rotate_rate``: `float`, The rate of rotation in radians per second.
- Defaults to `PI / 3.0`.
- - ``rotate_axis``: `(3,) float`, The axis in world coordinates to
- rotate about. Defaults to ``[0,0,1]``.
- - ``view_center``: `(3,) float`, The position to rotate the scene
- about. Defaults to the scene's centroid.
- - ``use_raymond_lighting``: `bool`, If `True`, an additional set of
- three directional lights that move with the camera will be added to
- the scene. Defaults to `False`.
- - ``use_direct_lighting``: `bool`, If `True`, an additional directional
- light that moves with the camera and points out of it will be
- added to the scene. Defaults to `False`.
- - ``lighting_intensity``: `float`, The overall intensity of the
- viewer's additional lights (when they're in use). Defaults to 3.0.
- - ``use_perspective_cam``: `bool`, If `True`, a perspective camera will
- be used. Otherwise, an orthographic camera is used. Defaults to
- `True`.
- - ``save_directory``: `str`, A directory to open the file dialogs in.
- Defaults to `None`.
- - ``window_title``: `str`, A title for the viewer's application window.
- Defaults to `"Scene Viewer"`.
- - ``refresh_rate``: `float`, A refresh rate for rendering, in Hertz.
- Defaults to `30.0`.
- - ``fullscreen``: `bool`, Whether to make viewer fullscreen.
- Defaults to `False`.
- - ``show_world_axis``: `bool`, Whether to show the world axis.
- Defaults to `False`.
- - ``show_mesh_axes``: `bool`, Whether to show the individual mesh axes.
- Defaults to `False`.
- - ``caption``: `list of dict`, Text caption(s) to display on
- the viewer. Defaults to `None`.
-
- """
- return self._viewer_flags
-
- @viewer_flags.setter
- def viewer_flags(self, value):
- self._viewer_flags = value
-
- @property
- def registered_keys(self):
- """dict : Map from ASCII key character to a handler function.
-
- This is a map from ASCII key characters to tuples containing:
-
- - A function to be called whenever the key is pressed,
- whose first argument will be the viewer itself.
- - (Optionally) A list of additional positional arguments
- to be passed to the function.
- - (Optionally) A dict of keyword arguments to be passed
- to the function.
-
- """
- return self._registered_keys
-
- @registered_keys.setter
- def registered_keys(self, value):
- self._registered_keys = value
-
- def close_external(self):
- """Close the viewer from another thread.
-
- This function will wait for the actual close, so you can immediately
- manipulate the scene afterwards.
- """
- self._should_close = True
- while self.is_active:
- time.sleep(1.0 / self.viewer_flags['refresh_rate'])
-
- def save_gif(self, filename=None):
- """Save the stored GIF frames to a file.
-
- To use this asynchronously, run the viewer with the ``record`` and
- ``run_in_thread`` flags set.
- Kill the viewer after your desired time with
- :meth:`.Viewer.close_external`, and then call :meth:`.Viewer.save_gif`.
-
- Parameters
- ----------
- filename : str
- The file to save the GIF to. If not specified,
- a file dialog will be opened to ask the user where
- to save the GIF file.
- """
- if filename is None:
- filename = self._get_save_filename(['gif', 'all'])
- if filename is not None:
- self.viewer_flags['save_directory'] = os.path.dirname(filename)
- imageio.mimwrite(filename, self._saved_frames,
- fps=self.viewer_flags['refresh_rate'],
- palettesize=128, subrectangles=True)
- self._saved_frames = []
-
- def on_close(self):
- """Exit the event loop when the window is closed.
- """
- # Remove our camera and restore the prior one
- if self._camera_node is not None:
- self.scene.remove_node(self._camera_node)
- if self._prior_main_camera_node is not None:
- self.scene.main_camera_node = self._prior_main_camera_node
-
- # Delete any lighting nodes that we've attached
- if self.viewer_flags['use_raymond_lighting']:
- for n in self._raymond_lights:
- if self.scene.has_node(n):
- self.scene.remove_node(n)
- if self.viewer_flags['use_direct_lighting']:
- if self.scene.has_node(self._direct_light):
- self.scene.remove_node(self._direct_light)
-
- # Delete any axis nodes that we've attached
- self._remove_axes()
-
- # Delete renderer
- if self._renderer is not None:
- self._renderer.delete()
- self._renderer = None
-
- # Force clean-up of OpenGL context data
- try:
- OpenGL.contextdata.cleanupContext()
- self.close()
- except Exception:
- pass
- finally:
- self._is_active = False
- super(Viewer, self).on_close()
- pyglet.app.exit()
-
- def on_draw(self):
- """Redraw the scene into the viewing window.
- """
- if self._renderer is None:
- return
-
- if self.run_in_thread or not self._auto_start:
- self.render_lock.acquire()
-
- # Make OpenGL context current
- self.switch_to()
-
- # Render the scene
- self.clear()
- self._render()
-
- if self._message_text is not None:
- self._renderer.render_text(
- self._message_text,
- self.viewport_size[0] - TEXT_PADDING,
- TEXT_PADDING,
- font_pt=20,
- color=np.array([0.1, 0.7, 0.2,
- np.clip(self._message_opac, 0.0, 1.0)]),
- align=TextAlign.BOTTOM_RIGHT
- )
-
- if self.viewer_flags['caption'] is not None:
- for caption in self.viewer_flags['caption']:
- xpos, ypos = self._location_to_x_y(caption['location'])
- self._renderer.render_text(
- caption['text'],
- xpos,
- ypos,
- font_name=caption['font_name'],
- font_pt=caption['font_pt'],
- color=caption['color'],
- scale=caption['scale'],
- align=caption['location']
- )
-
- if self.run_in_thread or not self._auto_start:
- self.render_lock.release()
-
- def on_resize(self, width, height):
- """Resize the camera and trackball when the window is resized.
- """
- if self._renderer is None:
- return
-
- self._viewport_size = (width, height)
- self._trackball.resize(self._viewport_size)
- self._renderer.viewport_width = self._viewport_size[0]
- self._renderer.viewport_height = self._viewport_size[1]
- self.on_draw()
-
- def on_mouse_press(self, x, y, buttons, modifiers):
- """Record an initial mouse press.
- """
- self._trackball.set_state(Trackball.STATE_ROTATE)
- if (buttons == pyglet.window.mouse.LEFT):
- ctrl = (modifiers & pyglet.window.key.MOD_CTRL)
- shift = (modifiers & pyglet.window.key.MOD_SHIFT)
- if (ctrl and shift):
- self._trackball.set_state(Trackball.STATE_ZOOM)
- elif ctrl:
- self._trackball.set_state(Trackball.STATE_ROLL)
- elif shift:
- self._trackball.set_state(Trackball.STATE_PAN)
- elif (buttons == pyglet.window.mouse.MIDDLE):
- self._trackball.set_state(Trackball.STATE_PAN)
- elif (buttons == pyglet.window.mouse.RIGHT):
- self._trackball.set_state(Trackball.STATE_ZOOM)
-
- self._trackball.down(np.array([x, y]))
-
- # Stop animating while using the mouse
- self.viewer_flags['mouse_pressed'] = True
-
- def on_mouse_drag(self, x, y, dx, dy, buttons, modifiers):
- """Record a mouse drag.
- """
- self._trackball.drag(np.array([x, y]))
-
- def on_mouse_release(self, x, y, button, modifiers):
- """Record a mouse release.
- """
- self.viewer_flags['mouse_pressed'] = False
-
- def on_mouse_scroll(self, x, y, dx, dy):
- """Record a mouse scroll.
- """
- if self.viewer_flags['use_perspective_cam']:
- self._trackball.scroll(dy)
- else:
- spfc = 0.95
- spbc = 1.0 / 0.95
- sf = 1.0
- if dy > 0:
- sf = spfc * dy
- elif dy < 0:
- sf = - spbc * dy
-
- c = self._camera_node.camera
- xmag = max(c.xmag * sf, 1e-8)
- ymag = max(c.ymag * sf, 1e-8 * c.ymag / c.xmag)
- c.xmag = xmag
- c.ymag = ymag
-
- def on_key_press(self, symbol, modifiers):
- """Record a key press.
- """
- # First, check for registered key callbacks
- if symbol in self.registered_keys:
- tup = self.registered_keys[symbol]
- callback = None
- args = []
- kwargs = {}
- if not isinstance(tup, (list, tuple, np.ndarray)):
- callback = tup
- else:
- callback = tup[0]
- if len(tup) == 2:
- args = tup[1]
- if len(tup) == 3:
- kwargs = tup[2]
- callback(self, *args, **kwargs)
- return
-
- # Otherwise, use default key functions
-
- # A causes the frame to rotate
- self._message_text = None
- if symbol == pyglet.window.key.A:
- self.viewer_flags['rotate'] = not self.viewer_flags['rotate']
- if self.viewer_flags['rotate']:
- self._message_text = 'Rotation On'
- else:
- self._message_text = 'Rotation Off'
-
- # C toggles backface culling
- elif symbol == pyglet.window.key.C:
- self.render_flags['cull_faces'] = (
- not self.render_flags['cull_faces']
- )
- if self.render_flags['cull_faces']:
- self._message_text = 'Cull Faces On'
- else:
- self._message_text = 'Cull Faces Off'
-
-        # F toggles fullscreen
- elif symbol == pyglet.window.key.F:
- self.viewer_flags['fullscreen'] = (
- not self.viewer_flags['fullscreen']
- )
- self.set_fullscreen(self.viewer_flags['fullscreen'])
- self.activate()
- if self.viewer_flags['fullscreen']:
- self._message_text = 'Fullscreen On'
- else:
- self._message_text = 'Fullscreen Off'
-
-        # H toggles shadows
- elif symbol == pyglet.window.key.H and sys.platform != 'darwin':
- self.render_flags['shadows'] = not self.render_flags['shadows']
- if self.render_flags['shadows']:
- self._message_text = 'Shadows On'
- else:
- self._message_text = 'Shadows Off'
-
- elif symbol == pyglet.window.key.I:
- if (self.viewer_flags['show_world_axis'] and not
- self.viewer_flags['show_mesh_axes']):
- self.viewer_flags['show_world_axis'] = False
- self.viewer_flags['show_mesh_axes'] = True
- self._set_axes(False, True)
- self._message_text = 'Mesh Axes On'
- elif (not self.viewer_flags['show_world_axis'] and
- self.viewer_flags['show_mesh_axes']):
- self.viewer_flags['show_world_axis'] = True
- self.viewer_flags['show_mesh_axes'] = True
- self._set_axes(True, True)
- self._message_text = 'All Axes On'
- elif (self.viewer_flags['show_world_axis'] and
- self.viewer_flags['show_mesh_axes']):
- self.viewer_flags['show_world_axis'] = False
- self.viewer_flags['show_mesh_axes'] = False
- self._set_axes(False, False)
- self._message_text = 'All Axes Off'
- else:
- self.viewer_flags['show_world_axis'] = True
- self.viewer_flags['show_mesh_axes'] = False
- self._set_axes(True, False)
- self._message_text = 'World Axis On'
-
- # L toggles the lighting mode
- elif symbol == pyglet.window.key.L:
- if self.viewer_flags['use_raymond_lighting']:
- self.viewer_flags['use_raymond_lighting'] = False
- self.viewer_flags['use_direct_lighting'] = True
- self._message_text = 'Direct Lighting'
- elif self.viewer_flags['use_direct_lighting']:
- self.viewer_flags['use_raymond_lighting'] = False
- self.viewer_flags['use_direct_lighting'] = False
- self._message_text = 'Default Lighting'
- else:
- self.viewer_flags['use_raymond_lighting'] = True
- self.viewer_flags['use_direct_lighting'] = False
- self._message_text = 'Raymond Lighting'
-
- # M toggles face normals
- elif symbol == pyglet.window.key.M:
- self.render_flags['face_normals'] = (
- not self.render_flags['face_normals']
- )
- if self.render_flags['face_normals']:
- self._message_text = 'Face Normals On'
- else:
- self._message_text = 'Face Normals Off'
-
- # N toggles vertex normals
- elif symbol == pyglet.window.key.N:
- self.render_flags['vertex_normals'] = (
- not self.render_flags['vertex_normals']
- )
- if self.render_flags['vertex_normals']:
- self._message_text = 'Vert Normals On'
- else:
- self._message_text = 'Vert Normals Off'
-
- # O toggles orthographic camera mode
- elif symbol == pyglet.window.key.O:
- self.viewer_flags['use_perspective_cam'] = (
- not self.viewer_flags['use_perspective_cam']
- )
- if self.viewer_flags['use_perspective_cam']:
- camera = self._default_persp_cam
- self._message_text = 'Perspective View'
- else:
- camera = self._default_orth_cam
- self._message_text = 'Orthographic View'
-
- cam_pose = self._camera_node.matrix.copy()
- cam_node = Node(matrix=cam_pose, camera=camera)
- self.scene.remove_node(self._camera_node)
- self.scene.add_node(cam_node)
- self.scene.main_camera_node = cam_node
- self._camera_node = cam_node
-
- # Q quits the viewer
- elif symbol == pyglet.window.key.Q:
- self.on_close()
-
- # R starts recording frames
- elif symbol == pyglet.window.key.R:
- if self.viewer_flags['record']:
- self.save_gif()
- self.set_caption(self.viewer_flags['window_title'])
- else:
- self.set_caption(
- '{} (RECORDING)'.format(self.viewer_flags['window_title'])
- )
- self.viewer_flags['record'] = not self.viewer_flags['record']
-
- # S saves the current frame as an image
- elif symbol == pyglet.window.key.S:
- self._save_image()
-
- # W toggles through wireframe modes
- elif symbol == pyglet.window.key.W:
- if self.render_flags['flip_wireframe']:
- self.render_flags['flip_wireframe'] = False
- self.render_flags['all_wireframe'] = True
- self.render_flags['all_solid'] = False
- self._message_text = 'All Wireframe'
- elif self.render_flags['all_wireframe']:
- self.render_flags['flip_wireframe'] = False
- self.render_flags['all_wireframe'] = False
- self.render_flags['all_solid'] = True
- self._message_text = 'All Solid'
- elif self.render_flags['all_solid']:
- self.render_flags['flip_wireframe'] = False
- self.render_flags['all_wireframe'] = False
- self.render_flags['all_solid'] = False
- self._message_text = 'Default Wireframe'
- else:
- self.render_flags['flip_wireframe'] = True
- self.render_flags['all_wireframe'] = False
- self.render_flags['all_solid'] = False
- self._message_text = 'Flip Wireframe'
-
- # Z resets the camera viewpoint
- elif symbol == pyglet.window.key.Z:
- self._reset_view()
-
- if self._message_text is not None:
- self._message_opac = 1.0 + self._ticks_till_fade
-
- @staticmethod
- def _time_event(dt, self):
- """The timer callback.
- """
- # Don't run old dead events after we've already closed
- if not self._is_active:
- return
-
- if self.viewer_flags['record']:
- self._record()
- if (self.viewer_flags['rotate'] and not
- self.viewer_flags['mouse_pressed']):
- self._rotate()
-
- # Manage message opacity
- if self._message_text is not None:
- if self._message_opac > 1.0:
- self._message_opac -= 1.0
- else:
- self._message_opac *= 0.90
- if self._message_opac < 0.05:
- self._message_opac = 1.0 + self._ticks_till_fade
- self._message_text = None
-
- if self._should_close:
- self.on_close()
- else:
- self.on_draw()
-
- def _reset_view(self):
- """Reset the view to a good initial state.
-
- The view is initially along the positive x-axis at a
- sufficient distance from the scene.
- """
- scale = self.scene.scale
- if scale == 0.0:
- scale = DEFAULT_SCENE_SCALE
- centroid = self.scene.centroid
-
- if self.viewer_flags['view_center'] is not None:
- centroid = self.viewer_flags['view_center']
-
- self._camera_node.matrix = self._default_camera_pose
- self._trackball = Trackball(
- self._default_camera_pose, self.viewport_size, scale, centroid
- )
-
- def _get_save_filename(self, file_exts):
- file_types = {
- 'png': ('png files', '*.png'),
- 'jpg': ('jpeg files', '*.jpg'),
- 'gif': ('gif files', '*.gif'),
- 'all': ('all files', '*'),
- }
- filetypes = [file_types[x] for x in file_exts]
- try:
- root = Tk()
- save_dir = self.viewer_flags['save_directory']
- if save_dir is None:
- save_dir = os.getcwd()
- filename = filedialog.asksaveasfilename(
- initialdir=save_dir, title='Select file save location',
- filetypes=filetypes
- )
- except Exception:
- return None
-
- root.destroy()
- if filename == ():
- return None
- return filename
-
- def _save_image(self):
- filename = self._get_save_filename(['png', 'jpg', 'gif', 'all'])
- if filename is not None:
- self.viewer_flags['save_directory'] = os.path.dirname(filename)
- imageio.imwrite(filename, self._renderer.read_color_buf())
-
- def _record(self):
- """Save another frame for the GIF.
- """
- data = self._renderer.read_color_buf()
- if not np.all(data == 0.0):
- self._saved_frames.append(data)
-
- def _rotate(self):
- """Animate the scene by rotating the camera.
- """
- az = (self.viewer_flags['rotate_rate'] /
- self.viewer_flags['refresh_rate'])
- self._trackball.rotate(az, self.viewer_flags['rotate_axis'])
-
- def _render(self):
- """Render the scene into the framebuffer and flip.
- """
- scene = self.scene
- self._camera_node.matrix = self._trackball.pose.copy()
-
- # Set lighting
- vli = self.viewer_flags['lighting_intensity']
- if self.viewer_flags['use_raymond_lighting']:
- for n in self._raymond_lights:
- n.light.intensity = vli / 3.0
- if not self.scene.has_node(n):
- scene.add_node(n, parent_node=self._camera_node)
- else:
- self._direct_light.light.intensity = vli
- for n in self._raymond_lights:
- if self.scene.has_node(n):
- self.scene.remove_node(n)
-
- if self.viewer_flags['use_direct_lighting']:
- if not self.scene.has_node(self._direct_light):
- scene.add_node(
- self._direct_light, parent_node=self._camera_node
- )
- elif self.scene.has_node(self._direct_light):
- self.scene.remove_node(self._direct_light)
-
- flags = RenderFlags.NONE
- if self.render_flags['flip_wireframe']:
- flags |= RenderFlags.FLIP_WIREFRAME
- elif self.render_flags['all_wireframe']:
- flags |= RenderFlags.ALL_WIREFRAME
- elif self.render_flags['all_solid']:
- flags |= RenderFlags.ALL_SOLID
-
- if self.render_flags['shadows']:
- flags |= RenderFlags.SHADOWS_DIRECTIONAL | RenderFlags.SHADOWS_SPOT
- if self.render_flags['vertex_normals']:
- flags |= RenderFlags.VERTEX_NORMALS
- if self.render_flags['face_normals']:
- flags |= RenderFlags.FACE_NORMALS
- if not self.render_flags['cull_faces']:
- flags |= RenderFlags.SKIP_CULL_FACES
-
- self._renderer.render(self.scene, flags)
-
- def _init_and_start_app(self):
- # Try multiple configs starting with target OpenGL version
- # and multisampling and removing these options if exception
- # Note: multisampling not available on all hardware
- from pyglet.gl import Config
- confs = [Config(sample_buffers=1, samples=4,
- depth_size=24,
- double_buffer=True,
- major_version=TARGET_OPEN_GL_MAJOR,
- minor_version=TARGET_OPEN_GL_MINOR),
- Config(depth_size=24,
- double_buffer=True,
- major_version=TARGET_OPEN_GL_MAJOR,
- minor_version=TARGET_OPEN_GL_MINOR),
- Config(sample_buffers=1, samples=4,
- depth_size=24,
- double_buffer=True,
- major_version=MIN_OPEN_GL_MAJOR,
- minor_version=MIN_OPEN_GL_MINOR),
- Config(depth_size=24,
- double_buffer=True,
- major_version=MIN_OPEN_GL_MAJOR,
- minor_version=MIN_OPEN_GL_MINOR)]
- for conf in confs:
- try:
- super(Viewer, self).__init__(config=conf, resizable=True,
- width=self._viewport_size[0],
- height=self._viewport_size[1])
- break
- except pyglet.window.NoSuchConfigException:
- pass
-
- if not self.context:
- raise ValueError('Unable to initialize an OpenGL 3+ context')
- clock.schedule_interval(
- Viewer._time_event, 1.0 / self.viewer_flags['refresh_rate'], self
- )
- self.switch_to()
- self.set_caption(self.viewer_flags['window_title'])
- pyglet.app.run()
-
- def _compute_initial_camera_pose(self):
- centroid = self.scene.centroid
- if self.viewer_flags['view_center'] is not None:
- centroid = self.viewer_flags['view_center']
- scale = self.scene.scale
- if scale == 0.0:
- scale = DEFAULT_SCENE_SCALE
-
- s2 = 1.0 / np.sqrt(2.0)
- cp = np.eye(4)
- cp[:3,:3] = np.array([
- [0.0, -s2, s2],
- [1.0, 0.0, 0.0],
- [0.0, s2, s2]
- ])
- hfov = np.pi / 6.0
- dist = scale / (2.0 * np.tan(hfov))
- cp[:3,3] = dist * np.array([1.0, 0.0, 1.0]) + centroid
-
- return cp
-
- def _create_raymond_lights(self):
- thetas = np.pi * np.array([1.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0])
- phis = np.pi * np.array([0.0, 2.0 / 3.0, 4.0 / 3.0])
-
- nodes = []
-
- for phi, theta in zip(phis, thetas):
- xp = np.sin(theta) * np.cos(phi)
- yp = np.sin(theta) * np.sin(phi)
- zp = np.cos(theta)
-
- z = np.array([xp, yp, zp])
- z = z / np.linalg.norm(z)
- x = np.array([-z[1], z[0], 0.0])
- if np.linalg.norm(x) == 0:
- x = np.array([1.0, 0.0, 0.0])
- x = x / np.linalg.norm(x)
- y = np.cross(z, x)
-
- matrix = np.eye(4)
- matrix[:3,:3] = np.c_[x,y,z]
- nodes.append(Node(
- light=DirectionalLight(color=np.ones(3), intensity=1.0),
- matrix=matrix
- ))
-
- return nodes
-
- def _create_direct_light(self):
- light = DirectionalLight(color=np.ones(3), intensity=1.0)
- n = Node(light=light, matrix=np.eye(4))
- return n
-
- def _set_axes(self, world, mesh):
- scale = self.scene.scale
- if world:
- if 'scene' not in self._axes:
- n = Node(mesh=self._axis_mesh, scale=np.ones(3) * scale * 0.3)
- self.scene.add_node(n)
- self._axes['scene'] = n
- else:
- if 'scene' in self._axes:
- self.scene.remove_node(self._axes['scene'])
- self._axes.pop('scene')
-
- if mesh:
- old_nodes = []
- existing_axes = set([self._axes[k] for k in self._axes])
- for node in self.scene.mesh_nodes:
- if node not in existing_axes:
- old_nodes.append(node)
-
- for node in old_nodes:
- if node in self._axes:
- continue
- n = Node(
- mesh=self._axis_mesh,
- scale=np.ones(3) * node.mesh.scale * 0.5
- )
- self.scene.add_node(n, parent_node=node)
- self._axes[node] = n
- else:
- to_remove = set()
- for main_node in self._axes:
- if main_node in self.scene.mesh_nodes:
- self.scene.remove_node(self._axes[main_node])
- to_remove.add(main_node)
- for main_node in to_remove:
- self._axes.pop(main_node)
-
- def _remove_axes(self):
- for main_node in self._axes:
- axis_node = self._axes[main_node]
- self.scene.remove_node(axis_node)
- self._axes = {}
-
- def _location_to_x_y(self, location):
- if location == TextAlign.CENTER:
- return (self.viewport_size[0] / 2.0, self.viewport_size[1] / 2.0)
- elif location == TextAlign.CENTER_LEFT:
- return (TEXT_PADDING, self.viewport_size[1] / 2.0)
- elif location == TextAlign.CENTER_RIGHT:
- return (self.viewport_size[0] - TEXT_PADDING,
- self.viewport_size[1] / 2.0)
- elif location == TextAlign.BOTTOM_LEFT:
- return (TEXT_PADDING, TEXT_PADDING)
- elif location == TextAlign.BOTTOM_RIGHT:
- return (self.viewport_size[0] - TEXT_PADDING, TEXT_PADDING)
- elif location == TextAlign.BOTTOM_CENTER:
- return (self.viewport_size[0] / 2.0, TEXT_PADDING)
- elif location == TextAlign.TOP_LEFT:
- return (TEXT_PADDING, self.viewport_size[1] - TEXT_PADDING)
- elif location == TextAlign.TOP_RIGHT:
- return (self.viewport_size[0] - TEXT_PADDING,
- self.viewport_size[1] - TEXT_PADDING)
- elif location == TextAlign.TOP_CENTER:
- return (self.viewport_size[0] / 2.0,
- self.viewport_size[1] - TEXT_PADDING)
-
-
-__all__ = ['Viewer']
diff --git a/spaces/adorp/ControlNet-v1-1-duplicate/app.py b/spaces/adorp/ControlNet-v1-1-duplicate/app.py
deleted file mode 100644
index b1e36781302a65880879b4853004646b08abe3e5..0000000000000000000000000000000000000000
--- a/spaces/adorp/ControlNet-v1-1-duplicate/app.py
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-
-import gradio as gr
-import torch
-
-from app_canny import create_demo as create_demo_canny
-from app_depth import create_demo as create_demo_depth
-from app_ip2p import create_demo as create_demo_ip2p
-from app_lineart import create_demo as create_demo_lineart
-from app_mlsd import create_demo as create_demo_mlsd
-from app_normal import create_demo as create_demo_normal
-from app_openpose import create_demo as create_demo_openpose
-from app_scribble import create_demo as create_demo_scribble
-from app_scribble_interactive import \
- create_demo as create_demo_scribble_interactive
-from app_segmentation import create_demo as create_demo_segmentation
-from app_shuffle import create_demo as create_demo_shuffle
-from app_softedge import create_demo as create_demo_softedge
-from model import Model
-
-DESCRIPTION = '# ControlNet v1.1'
-
-SPACE_ID = os.getenv('SPACE_ID')
-ALLOW_CHANGING_BASE_MODEL = SPACE_ID != 'hysts/ControlNet-v1-1'
-
-if SPACE_ID is not None:
-    DESCRIPTION += f'\nFor faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings.'
-
-if not torch.cuda.is_available():
-    DESCRIPTION += '\nRunning on CPU 🥶 This demo does not work on CPU.'
-
-MAX_NUM_IMAGES = int(os.getenv('MAX_NUM_IMAGES', '3'))
-DEFAULT_NUM_IMAGES = min(MAX_NUM_IMAGES,
- int(os.getenv('DEFAULT_NUM_IMAGES', '1')))
-
-DEFAULT_MODEL_ID = os.getenv('DEFAULT_MODEL_ID',
- 'runwayml/stable-diffusion-v1-5')
-model = Model(base_model_id=DEFAULT_MODEL_ID, task_name='Canny')
-
-with gr.Blocks(css='style.css') as demo:
- gr.Markdown(DESCRIPTION)
- with gr.Tabs():
- with gr.TabItem('Canny'):
- create_demo_canny(model.process_canny,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('MLSD'):
- create_demo_mlsd(model.process_mlsd,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Scribble'):
- create_demo_scribble(model.process_scribble,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Scribble Interactive'):
- create_demo_scribble_interactive(
- model.process_scribble_interactive,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('SoftEdge'):
- create_demo_softedge(model.process_softedge,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('OpenPose'):
- create_demo_openpose(model.process_openpose,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Segmentation'):
- create_demo_segmentation(model.process_segmentation,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Depth'):
- create_demo_depth(model.process_depth,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Normal map'):
- create_demo_normal(model.process_normal,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Lineart'):
- create_demo_lineart(model.process_lineart,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Content Shuffle'):
- create_demo_shuffle(model.process_shuffle,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
- with gr.TabItem('Instruct Pix2Pix'):
- create_demo_ip2p(model.process_ip2p,
- max_images=MAX_NUM_IMAGES,
- default_num_images=DEFAULT_NUM_IMAGES)
-
- with gr.Accordion(label='Base model', open=False):
- with gr.Row():
- with gr.Column():
- current_base_model = gr.Text(label='Current base model')
- with gr.Column(scale=0.3):
- check_base_model_button = gr.Button('Check current base model')
- with gr.Row():
- with gr.Column():
- new_base_model_id = gr.Text(
- label='New base model',
- max_lines=1,
- placeholder='runwayml/stable-diffusion-v1-5',
- info=
- 'The base model must be compatible with Stable Diffusion v1.5.',
- interactive=ALLOW_CHANGING_BASE_MODEL)
- with gr.Column(scale=0.3):
- change_base_model_button = gr.Button(
- 'Change base model', interactive=ALLOW_CHANGING_BASE_MODEL)
- if not ALLOW_CHANGING_BASE_MODEL:
- gr.Markdown(
- '''The base model is not allowed to be changed in this Space so as not to slow down the demo, but it can be changed if you duplicate the Space. '''
- )
-
- check_base_model_button.click(fn=lambda: model.base_model_id,
- outputs=current_base_model,
- queue=False)
- new_base_model_id.submit(fn=model.set_base_model,
- inputs=new_base_model_id,
- outputs=current_base_model)
- change_base_model_button.click(fn=model.set_base_model,
- inputs=new_base_model_id,
- outputs=current_base_model)
-
-demo.queue(max_size=20).launch()
diff --git a/spaces/ahuang11/tastykitchen/README.md b/spaces/ahuang11/tastykitchen/README.md
deleted file mode 100644
index db4a8e7e360a0abaef1dcd86dde9f97c50f0f138..0000000000000000000000000000000000000000
--- a/spaces/ahuang11/tastykitchen/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: TastyKitchen
-emoji: 👨🍳
-colorFrom: gray
-colorTo: blue
-sdk: docker
-pinned: false
-duplicated_from: Panel-Org/panel-template
-license: bsd-3-clause
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ajashari/ajashari-ari-color/app.py b/spaces/ajashari/ajashari-ari-color/app.py
deleted file mode 100644
index c496a87340b0547cc13d5bea823018f8cf537267..0000000000000000000000000000000000000000
--- a/spaces/ajashari/ajashari-ari-color/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/ajashari/ari-color").launch()
\ No newline at end of file
diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder_train.py b/spaces/akhaliq/Real-Time-Voice-Cloning/encoder_train.py
deleted file mode 100644
index b8740a894d615aadfe529cb36068fc8e3496125f..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder_train.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from utils.argutils import print_args
-from encoder.train import train
-from pathlib import Path
-import argparse
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(
- description="Trains the speaker encoder. You must have run encoder_preprocess.py first.",
- formatter_class=argparse.ArgumentDefaultsHelpFormatter
- )
-
- parser.add_argument("run_id", type=str, help= \
- "Name for this model instance. If a model state from the same run ID was previously "
- "saved, the training will restart from there. Pass -f to overwrite saved states and "
- "restart from scratch.")
- parser.add_argument("clean_data_root", type=Path, help= \
- "Path to the output directory of encoder_preprocess.py. If you left the default "
- "output directory when preprocessing, it should be /SV2TTS/encoder/.")
- parser.add_argument("-m", "--models_dir", type=Path, default="encoder/saved_models/", help=\
- "Path to the output directory that will contain the saved model weights, as well as "
- "backups of those weights and plots generated during training.")
- parser.add_argument("-v", "--vis_every", type=int, default=10, help= \
- "Number of steps between updates of the loss and the plots.")
- parser.add_argument("-u", "--umap_every", type=int, default=100, help= \
- "Number of steps between updates of the umap projection. Set to 0 to never update the "
- "projections.")
- parser.add_argument("-s", "--save_every", type=int, default=500, help= \
- "Number of steps between updates of the model on the disk. Set to 0 to never save the "
- "model.")
- parser.add_argument("-b", "--backup_every", type=int, default=7500, help= \
- "Number of steps between backups of the model. Set to 0 to never make backups of the "
- "model.")
- parser.add_argument("-f", "--force_restart", action="store_true", help= \
- "Do not load any saved model.")
- parser.add_argument("--visdom_server", type=str, default="http://localhost")
- parser.add_argument("--no_visdom", action="store_true", help= \
- "Disable visdom.")
- args = parser.parse_args()
-
- # Process the arguments
- args.models_dir.mkdir(exist_ok=True)
-
- # Run the training
- print_args(args, parser)
- train(**vars(args))
-
\ No newline at end of file
diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder_preprocess.py b/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder_preprocess.py
deleted file mode 100644
index 7ede3dfb95972e2de575de35b9d4a9c6d642885e..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder_preprocess.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from synthesizer.synthesize import run_synthesis
-from synthesizer.hparams import hparams
-from utils.argutils import print_args
-import argparse
-import os
-
-
-if __name__ == "__main__":
- class MyFormatter(argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter):
- pass
-
- parser = argparse.ArgumentParser(
- description="Creates ground-truth aligned (GTA) spectrograms from the vocoder.",
- formatter_class=MyFormatter
- )
- parser.add_argument("datasets_root", type=str, help=\
- "Path to the directory containing your SV2TTS directory. If you specify both --in_dir and "
- "--out_dir, this argument won't be used.")
- parser.add_argument("--model_dir", type=str,
- default="synthesizer/saved_models/pretrained/", help=\
- "Path to the pretrained model directory.")
- parser.add_argument("-i", "--in_dir", type=str, default=argparse.SUPPRESS, help= \
- "Path to the synthesizer directory that contains the mel spectrograms, the wavs and the "
- "embeds. Defaults to /SV2TTS/synthesizer/.")
- parser.add_argument("-o", "--out_dir", type=str, default=argparse.SUPPRESS, help= \
- "Path to the output vocoder directory that will contain the ground truth aligned mel "
- "spectrograms. Defaults to /SV2TTS/vocoder/.")
- parser.add_argument("--hparams", default="",
- help="Hyperparameter overrides as a comma-separated list of name=value "
- "pairs")
- parser.add_argument("--no_trim", action="store_true", help=\
- "Preprocess audio without trimming silences (not recommended).")
- parser.add_argument("--cpu", action="store_true", help=\
- "If True, processing is done on CPU, even when a GPU is available.")
- args = parser.parse_args()
- print_args(args, parser)
- modified_hp = hparams.parse(args.hparams)
-
- if not hasattr(args, "in_dir"):
- args.in_dir = os.path.join(args.datasets_root, "SV2TTS", "synthesizer")
- if not hasattr(args, "out_dir"):
- args.out_dir = os.path.join(args.datasets_root, "SV2TTS", "vocoder")
-
- if args.cpu:
- # Hide GPUs from Pytorch to force CPU processing
- os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
-
- # Verify webrtcvad is available
- if not args.no_trim:
- try:
- import webrtcvad
-            except ImportError:
- raise ModuleNotFoundError("Package 'webrtcvad' not found. This package enables "
- "noise removal and is recommended. Please install and try again. If installation fails, "
- "use --no_trim to disable this error message.")
- del args.no_trim
-
- run_synthesis(args.in_dir, args.out_dir, args.model_dir, modified_hp)
-
diff --git a/spaces/akhaliq/SummerTime/model/base_model.py b/spaces/akhaliq/SummerTime/model/base_model.py
deleted file mode 100644
index ea5a1bcf065295f3b8058f56e313bd2d1dc4188b..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/base_model.py
+++ /dev/null
@@ -1,81 +0,0 @@
-from typing import List, Union
-
-
-class SummModel:
- """
- Base model class for SummerTime
- """
-
- # static variables
- model_name = "None"
- is_extractive = False
- is_neural = False
- is_query_based = False
- is_dialogue_based = False
- is_multi_document = False
-
- def __init__(
- self,
- trained_domain: str = None,
- max_input_length: int = None,
- max_output_length: int = None,
- ):
- self.trained_domain = trained_domain
- self.max_input_length = max_input_length
- self.max_output_length = max_output_length
-
- def summarize(
- self, corpus: Union[List[str], List[List[str]]], queries: List[str] = None
- ) -> List[str]:
- """
- All summarization models should have this function
-
- :param corpus: each string in the list is a source document to be summarized; if the model is multi-document or
- dialogue summarization model, then each instance contains a list of documents/utterances
- :param queries: a list of queries if this is a query-based model
- :return: a list of generated summaries
- """
- raise NotImplementedError(
- "The base class for models shouldn't be instantiated!"
- )
-
- @classmethod
- def assert_summ_input_type(
- cls, corpus: Union[List[str], List[List[str]]], queries: Union[List[str], None]
- ):
- """
- Verifies that type of input corpus or queries for summarization align with the model type.
- """
- raise NotImplementedError(
- "The base class for models shouldn't be instantiated!"
- )
-
- @classmethod
- def show_capability(cls) -> None:
- """
- Use concise language to show the strength and weakness for each model. Try not to use NLP terminologies
- """
- raise NotImplementedError(
- "The base class for models shouldn't be instantiated!"
- )
-
- @classmethod
- def generate_basic_description(cls) -> str:
- """
- Automatically generate the basic description string based on the attributes
- """
- extractive_abstractive = "extractive" if cls.is_extractive else "abstractive"
- neural = "neural" if cls.is_neural else "non-neural"
-
- basic_description = (
-            f"{cls.model_name} is a "
-            f"{'query-based ' if cls.is_query_based else ''}"
- f"{extractive_abstractive}, {neural} model for summarization."
- )
- if cls.is_multi_document or cls.is_dialogue_based:
- basic_description += (
-                f" It can handle {'multi-document' if cls.is_multi_document else ''} "
- f"{'dialogue' if cls.is_dialogue_based else ''} textual data."
- )
-
- return basic_description
diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/Rouge155.py b/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/Rouge155.py
deleted file mode 100644
index a3d2ca32f1f430e5356106e719a816da56f9f887..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/pyrouge/Rouge155.py
+++ /dev/null
@@ -1,649 +0,0 @@
-from __future__ import print_function, unicode_literals, division
-
-import os
-import re
-import codecs
-import platform
-
-from subprocess import check_output
-from tempfile import mkdtemp
-from functools import partial
-
-try:
- from configparser import ConfigParser
-except ImportError:
- from ConfigParser import ConfigParser
-
-from .utils import log
-from .utils.file_utils import DirectoryProcessor
-from .utils.file_utils import verify_dir
-
-
-class Rouge155(object):
- """
- This is a wrapper for the ROUGE 1.5.5 summary evaluation package.
- This class is designed to simplify the evaluation process by:
-
- 1) Converting summaries into a format ROUGE understands.
- 2) Generating the ROUGE configuration file automatically based
- on filename patterns.
-
- This class can be used within Python like this:
-
- rouge = Rouge155()
- rouge.system_dir = 'test/systems'
- rouge.model_dir = 'test/models'
-
- # The system filename pattern should contain one group that
- # matches the document ID.
- rouge.system_filename_pattern = 'SL.P.10.R.11.SL062003-(\d+).html'
-
- # The model filename pattern has '#ID#' as a placeholder for the
- # document ID. If there are multiple model summaries, pyrouge
- # will use the provided regex to automatically match them with
- # the corresponding system summary. Here, [A-Z] matches
- # multiple model summaries for a given #ID#.
- rouge.model_filename_pattern = 'SL.P.10.R.[A-Z].SL062003-#ID#.html'
-
- rouge_output = rouge.evaluate()
- print(rouge_output)
-    output_dict = rouge.output_to_dict(rouge_output)
- print(output_dict)
- -> {'rouge_1_f_score': 0.95652,
- 'rouge_1_f_score_cb': 0.95652,
- 'rouge_1_f_score_ce': 0.95652,
- 'rouge_1_precision': 0.95652,
- [...]
-
-
- To evaluate multiple systems:
-
- rouge = Rouge155()
- rouge.system_dir = '/PATH/TO/systems'
- rouge.model_dir = 'PATH/TO/models'
- for system_id in ['id1', 'id2', 'id3']:
- rouge.system_filename_pattern = \
-            'SL.P.10.R.{}.SL062003-(\d+).html'.format(system_id)
- rouge.model_filename_pattern = \
- 'SL.P.10.R.[A-Z].SL062003-#ID#.html'
- rouge_output = rouge.evaluate(system_id)
- print(rouge_output)
-
- """
-
- def __init__(self, rouge_dir=None, rouge_args=None, log_level=None):
- """
- Create a Rouge155 object.
-
- rouge_dir: Directory containing Rouge-1.5.5.pl
- rouge_args: Arguments to pass through to ROUGE if you
- don't want to use the default pyrouge
- arguments.
-
- """
- if log_level is None:
- self.log = log.get_global_console_logger()
- else:
- self.log = log.get_global_console_logger(log_level)
- self.__set_dir_properties()
- self._config_file = None
- self._settings_file = self.__get_config_path()
- self.__set_rouge_dir(rouge_dir)
- self.args = self.__clean_rouge_args(rouge_args)
- self._system_filename_pattern = None
- self._model_filename_pattern = None
-
- def save_home_dir(self):
- config = ConfigParser()
- section = "pyrouge settings"
- config.add_section(section)
- config.set(section, "home_dir", self._home_dir)
- with open(self._settings_file, "w") as f:
- config.write(f)
- self.log.info("Set ROUGE home directory to {}.".format(self._home_dir))
-
- @property
- def settings_file(self):
- """
- Path of the settings file, which stores the ROUGE home dir.
-
- """
- return self._settings_file
-
- @property
- def bin_path(self):
- """
- The full path of the ROUGE binary (although it's technically
- a script), i.e. rouge_home_dir/ROUGE-1.5.5.pl
-
- """
- if self._bin_path is None:
- raise Exception(
- "ROUGE path not set. Please set the ROUGE home directory "
- "and ensure that ROUGE-1.5.5.pl exists in it."
- )
- return self._bin_path
-
- @property
- def system_filename_pattern(self):
- """
- The regular expression pattern for matching system summary
- filenames. The pattern must contain one group that captures
- the document ID.
-
- E.g. "SL.P.10.R.11.SL062003-(\d+).html" will match the system
- filenames in the SPL2003/system folder of the ROUGE SPL example
- in the "sample-test" folder.
-
- Currently, there is no support for multiple systems.
-
- """
- return self._system_filename_pattern
-
- @system_filename_pattern.setter
- def system_filename_pattern(self, pattern):
- self._system_filename_pattern = pattern
-
- @property
- def model_filename_pattern(self):
- """
- The regular expression pattern for matching model summary
- filenames. The pattern needs to contain the string "#ID#",
- which is a placeholder for the document ID.
-
- E.g. "SL.P.10.R.[A-Z].SL062003-#ID#.html" will match the model
- filenames in the SPL2003/system folder of the ROUGE SPL
- example in the "sample-test" folder.
-
- "#ID#" is a placeholder for the document ID which has been
- matched by the "(\d+)" part of the system filename pattern.
- The different model summaries for a given document ID are
- matched by the "[A-Z]" part.
-
- """
- return self._model_filename_pattern
-
- @model_filename_pattern.setter
- def model_filename_pattern(self, pattern):
- self._model_filename_pattern = pattern
-
- @property
- def config_file(self):
- return self._config_file
-
- @config_file.setter
- def config_file(self, path):
- config_dir, _ = os.path.split(path)
- verify_dir(config_dir, "configuration file")
- self._config_file = path
-
- def split_sentences(self):
- """
- ROUGE requires texts split into sentences. In case the texts
- are not already split, this method can be used.
-
- """
- from pyrouge.utils.sentence_splitter import PunktSentenceSplitter
-
- self.log.info("Splitting sentences.")
- ss = PunktSentenceSplitter()
- sent_split_to_string = lambda s: "\n".join(ss.split(s))
- process_func = partial(
- DirectoryProcessor.process, function=sent_split_to_string
- )
- self.__process_summaries(process_func)
-
- @staticmethod
- def convert_summaries_to_rouge_format(input_dir, output_dir):
- """
- Convert all files in input_dir into a format ROUGE understands
- and saves the files to output_dir. The input files are assumed
- to be plain text with one sentence per line.
-
- input_dir: Path of directory containing the input files.
- output_dir: Path of directory in which the converted files
- will be saved.
-
- """
- DirectoryProcessor.process(
- input_dir, output_dir, Rouge155.convert_text_to_rouge_format
- )
-
- @staticmethod
- def convert_text_to_rouge_format(text, title="dummy title"):
- """
- Convert a text to a format ROUGE understands. The text is
- assumed to contain one sentence per line.
-
- text: The text to convert, containing one sentence per line.
- title: Optional title for the text. The title will appear
- in the converted file, but doesn't seem to have
- any other relevance.
-
- Returns: The converted text as string.
-
- """
- sentences = text.split("\n")
- sent_elems = [
- '<a name="{i}">[{i}]</a> '
- '<a href="#{i}" id={i}>{text}</a>'.format(i=i, text=sent)
- for i, sent in enumerate(sentences, start=1)
- ]
- html = """<html>
-<head>
-<title>{title}</title>
-</head>
-<body bgcolor="white">
-{elems}
-</body>
-</html>""".format(
- title=title, elems="\n".join(sent_elems)
- )
-
- return html
-
- @staticmethod
- def write_config_static(
- system_dir,
- system_filename_pattern,
- model_dir,
- model_filename_pattern,
- config_file_path,
- system_id=None,
- ):
- """
- Write the ROUGE configuration file, which is basically a list
- of system summary files and their corresponding model summary
- files.
-
- pyrouge uses regular expressions to automatically find the
- matching model summary files for a given system summary file
- (cf. docstrings for system_filename_pattern and
- model_filename_pattern).
-
- system_dir: Path of directory containing
- system summaries.
- system_filename_pattern: Regex string for matching
- system summary filenames.
- model_dir: Path of directory containing
- model summaries.
- model_filename_pattern: Regex string for matching model
- summary filenames.
- config_file_path: Path of the configuration file.
- system_id: Optional system ID string which
- will appear in the ROUGE output.
-
- """
- system_filenames = [f for f in os.listdir(system_dir)]
- system_models_tuples = []
-
- system_filename_pattern = re.compile(system_filename_pattern)
- for system_filename in sorted(system_filenames):
- match = system_filename_pattern.match(system_filename)
- if match:
- id = match.groups(0)[0]
- model_filenames = Rouge155.__get_model_filenames_for_id(
- id, model_dir, model_filename_pattern
- )
- system_models_tuples.append((system_filename, sorted(model_filenames)))
- if not system_models_tuples:
- raise Exception(
- "Did not find any files matching the pattern {} "
- "in the system summaries directory {}.".format(
- system_filename_pattern.pattern, system_dir
- )
- )
-
- with codecs.open(config_file_path, "w", encoding="utf-8") as f:
- f.write('<ROUGE-EVAL version="1.55">')
- for task_id, (system_filename, model_filenames) in enumerate(
- system_models_tuples, start=1
- ):
-
- eval_string = Rouge155.__get_eval_string(
- task_id,
- system_id,
- system_dir,
- system_filename,
- model_dir,
- model_filenames,
- )
- f.write(eval_string)
- f.write("</ROUGE-EVAL>")
-
- def write_config(self, config_file_path=None, system_id=None):
- """
- Write the ROUGE configuration file, which is basically a list
- of system summary files and their matching model summary files.
-
- This is a non-static version of write_config_static().
-
- config_file_path: Path of the configuration file.
- system_id: Optional system ID string which will
- appear in the ROUGE output.
-
- """
- if not system_id:
- system_id = 1
- if (not config_file_path) or (not self._config_dir):
- self._config_dir = mkdtemp()
- config_filename = "rouge_conf.xml"
- else:
- config_dir, config_filename = os.path.split(config_file_path)
- verify_dir(config_dir, "configuration file")
- self._config_dir = config_dir
- self._config_file = os.path.join(self._config_dir, config_filename)
- Rouge155.write_config_static(
- self._system_dir,
- self._system_filename_pattern,
- self._model_dir,
- self._model_filename_pattern,
- self._config_file,
- system_id,
- )
- self.log.info("Written ROUGE configuration to {}".format(self._config_file))
-
- def evaluate(self, system_id=1, rouge_args=None):
- """
- Run ROUGE to evaluate the system summaries in system_dir against
- the model summaries in model_dir. The summaries are assumed to
- be in the one-sentence-per-line HTML format ROUGE understands.
-
- system_id: Optional system ID which will be printed in
- ROUGE's output.
-
- Returns: Rouge output as string.
-
- """
- self.write_config(system_id=system_id)
- options = self.__get_options(rouge_args)
- command = [self._bin_path] + options
- env = os.environ.copy()
- if hasattr(self, "_home_dir") and self._home_dir:
- env["ROUGE_EVAL_HOME"] = self._home_dir
- self.log.info("Running ROUGE with command {}".format(" ".join(command)))
- rouge_output = check_output(command, env=env).decode("UTF-8")
- return rouge_output
-
- def convert_and_evaluate(self, system_id=1, split_sentences=False, rouge_args=None):
- """
- Convert plain text summaries to ROUGE format and run ROUGE to
- evaluate the system summaries in system_dir against the model
- summaries in model_dir. Optionally split texts into sentences
- in case they aren't already.
-
- This is just a convenience method combining
- convert_summaries_to_rouge_format() and evaluate().
-
- split_sentences: Optional argument specifying if
- sentences should be split.
- system_id: Optional system ID which will be printed
- in ROUGE's output.
-
- Returns: ROUGE output as string.
-
- """
- if split_sentences:
- self.split_sentences()
- self.__write_summaries()
- rouge_output = self.evaluate(system_id, rouge_args)
- return rouge_output
-
- def output_to_dict(self, output):
- """
- Convert the ROUGE output into python dictionary for further
- processing.
-
- """
- # 0 ROUGE-1 Average_R: 0.02632 (95%-conf.int. 0.02632 - 0.02632)
- pattern = re.compile(
- r"(\d+) (ROUGE-\S+) (Average_\w): (\d.\d+) "
- r"\(95%-conf.int. (\d.\d+) - (\d.\d+)\)"
- )
- results = {}
- for line in output.split("\n"):
- match = pattern.match(line)
- if match:
- (
- sys_id,
- rouge_type,
- measure,
- result,
- conf_begin,
- conf_end,
- ) = match.groups()
- measure = {
- "Average_R": "recall",
- "Average_P": "precision",
- "Average_F": "f_score",
- }[measure]
- rouge_type = rouge_type.lower().replace("-", "_")
- key = "{}_{}".format(rouge_type, measure)
- results[key] = float(result)
- results["{}_cb".format(key)] = float(conf_begin)
- results["{}_ce".format(key)] = float(conf_end)
- return results
-
- ###################################################################
- # Private methods
-
- def __set_rouge_dir(self, home_dir=None):
- """
- Verify presence of ROUGE-1.5.5.pl and data folder, and set
- those paths.
-
- """
- if not home_dir:
- self._home_dir = self.__get_rouge_home_dir_from_settings()
- else:
- self._home_dir = home_dir
- self.save_home_dir()
- self._bin_path = os.path.join(self._home_dir, "ROUGE-1.5.5.pl")
- self.data_dir = os.path.join(self._home_dir, "data")
- if not os.path.exists(self._bin_path):
- raise Exception(
- "ROUGE binary not found at {}. Please set the "
- "correct path by running pyrouge_set_rouge_path "
- "/path/to/rouge/home.".format(self._bin_path)
- )
-
- def __get_rouge_home_dir_from_settings(self):
- config = ConfigParser()
- with open(self._settings_file) as f:
- if hasattr(config, "read_file"):
- config.read_file(f)
- else:
- # use deprecated python 2.x method
- config.readfp(f)
- rouge_home_dir = config.get("pyrouge settings", "home_dir")
- return rouge_home_dir
-
- @staticmethod
- def __get_eval_string(
- task_id, system_id, system_dir, system_filename, model_dir, model_filenames
- ):
- """
- ROUGE can evaluate several system summaries for a given text
- against several model summaries, i.e. there is an m-to-n
- relation between system and model summaries. The system
- summaries are listed in the <PEERS> tag and the model summaries
- in the <MODELS> tag. pyrouge currently only supports one system
- summary per text, i.e. it assumes a 1-to-n relation between
- system and model summaries.
-
- """
- peer_elems = '<P ID="{id}">{name}</P>'.format(
- id=system_id, name=system_filename
- )
-
- model_elems = [
- '<M ID="{id}">{name}</M>'.format(id=chr(65 + i), name=name)
- for i, name in enumerate(model_filenames)
- ]
-
- model_elems = "\n\t\t\t".join(model_elems)
- eval_string = """
- <EVAL ID="{task_id}">
- <MODEL-ROOT>{model_root}</MODEL-ROOT>
- <PEER-ROOT>{peer_root}</PEER-ROOT>
- <INPUT-FORMAT TYPE="SEE">
- </INPUT-FORMAT>
- <PEERS>
- {peer_elems}
- </PEERS>
- <MODELS>
- {model_elems}
- </MODELS>
- </EVAL>
-""".format(
- task_id=task_id,
- model_root=model_dir,
- model_elems=model_elems,
- peer_root=system_dir,
- peer_elems=peer_elems,
- )
- return eval_string
-
- def __process_summaries(self, process_func):
- """
- Helper method that applies process_func to the files in the
- system and model folders and saves the resulting files to new
- system and model folders.
-
- """
- temp_dir = mkdtemp()
- new_system_dir = os.path.join(temp_dir, "system")
- os.mkdir(new_system_dir)
- new_model_dir = os.path.join(temp_dir, "model")
- os.mkdir(new_model_dir)
- self.log.info(
- "Processing summaries. Saving system files to {} and "
- "model files to {}.".format(new_system_dir, new_model_dir)
- )
- process_func(self._system_dir, new_system_dir)
- process_func(self._model_dir, new_model_dir)
- self._system_dir = new_system_dir
- self._model_dir = new_model_dir
-
- def __write_summaries(self):
- self.log.info("Writing summaries.")
- self.__process_summaries(self.convert_summaries_to_rouge_format)
-
- @staticmethod
- def __get_model_filenames_for_id(id, model_dir, model_filenames_pattern):
- pattern = re.compile(model_filenames_pattern.replace("#ID#", id))
- model_filenames = [f for f in os.listdir(model_dir) if pattern.match(f)]
- if not model_filenames:
- raise Exception(
- "Could not find any model summaries for the system"
- " summary with ID {}. Specified model filename pattern was: "
- "{}".format(id, model_filenames_pattern)
- )
- return model_filenames
-
- def __get_options(self, rouge_args=None):
- """
- Get supplied command line arguments for ROUGE or use default
- ones.
-
- """
- if self.args:
- options = self.args.split()
- elif rouge_args:
- options = rouge_args.split()
- else:
- options = [
- "-e",
- self._data_dir,
- "-c",
- 95,
- "-2",
- "-1",
- "-U",
- "-r",
- 1000,
- "-n",
- 4,
- "-w",
- 1.2,
- "-a",
- ]
- options = list(map(str, options))
-
- options = self.__add_config_option(options)
- return options
-
- def __create_dir_property(self, dir_name, docstring):
- """
- Generate getter and setter for a directory property.
-
- """
- property_name = "{}_dir".format(dir_name)
- private_name = "_" + property_name
- setattr(self, private_name, None)
-
- def fget(self):
- return getattr(self, private_name)
-
- def fset(self, path):
- verify_dir(path, dir_name)
- setattr(self, private_name, path)
-
- p = property(fget=fget, fset=fset, doc=docstring)
- setattr(self.__class__, property_name, p)
-
- def __set_dir_properties(self):
- """
- Automatically generate the properties for directories.
-
- """
- directories = [
- ("home", "The ROUGE home directory."),
- ("data", "The path of the ROUGE 'data' directory."),
- ("system", "Path of the directory containing system summaries."),
- ("model", "Path of the directory containing model summaries."),
- ]
- for (dirname, docstring) in directories:
- self.__create_dir_property(dirname, docstring)
-
- def __clean_rouge_args(self, rouge_args):
- """
- Remove enclosing quotation marks, if any.
-
- """
- if not rouge_args:
- return
- quot_mark_pattern = re.compile('"(.+)"')
- match = quot_mark_pattern.match(rouge_args)
- if match:
- cleaned_args = match.group(1)
- return cleaned_args
- else:
- return rouge_args
-
- def __add_config_option(self, options):
- return options + ["-m"] + [self._config_file]
-
- def __get_config_path(self):
- if platform.system() == "Windows":
- parent_dir = os.getenv("APPDATA")
- config_dir_name = "pyrouge"
- elif os.name == "posix":
- parent_dir = os.path.expanduser("~")
- config_dir_name = ".pyrouge"
- else:
- parent_dir = os.path.dirname(__file__)
- config_dir_name = ""
- config_dir = os.path.join(parent_dir, config_dir_name)
- if not os.path.exists(config_dir):
- os.makedirs(config_dir)
- return os.path.join(config_dir, "settings.ini")
-
-
-if __name__ == "__main__":
- import argparse
- from utils.argparsers import rouge_path_parser
-
- parser = argparse.ArgumentParser(parents=[rouge_path_parser])
- args = parser.parse_args()
-
- rouge = Rouge155(args.rouge_home)
- rouge.save_home_dir()
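As a standalone reference for the deleted `Rouge155.output_to_dict` above, the same regex-based parsing can be sketched outside the class. The sample ROUGE output line below is fabricated for illustration; the regex and key scheme are copied from the method itself.

```python
import re

# Same pattern as Rouge155.output_to_dict: matches lines like
# "1 ROUGE-1 Average_F: 0.95652 (95%-conf.int. 0.92310 - 0.97840)"
PATTERN = re.compile(
    r"(\d+) (ROUGE-\S+) (Average_\w): (\d.\d+) "
    r"\(95%-conf.int. (\d.\d+) - (\d.\d+)\)"
)
MEASURES = {"Average_R": "recall", "Average_P": "precision", "Average_F": "f_score"}

def parse_rouge_output(output):
    """Parse ROUGE-1.5.5 stdout into a flat dict of floats."""
    results = {}
    for line in output.split("\n"):
        match = PATTERN.match(line)
        if match:
            _, rouge_type, measure, result, conf_begin, conf_end = match.groups()
            key = "{}_{}".format(rouge_type.lower().replace("-", "_"), MEASURES[measure])
            results[key] = float(result)
            results[key + "_cb"] = float(conf_begin)  # lower 95% confidence bound
            results[key + "_ce"] = float(conf_end)    # upper 95% confidence bound
    return results

sample = "1 ROUGE-1 Average_F: 0.95652 (95%-conf.int. 0.92310 - 0.97840)"
result = parse_rouge_output(sample)
print(result["rouge_1_f_score"])  # 0.95652
```

This mirrors the key scheme shown in the class docstring (`rouge_1_f_score`, `rouge_1_f_score_cb`, and so on).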
diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/csmsc/voc1/local/data_prep.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/csmsc/voc1/local/data_prep.sh
deleted file mode 100644
index 9230a6d220c73e7ad6c6704e2bdd5dc845c48b80..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/csmsc/voc1/local/data_prep.sh
+++ /dev/null
@@ -1,94 +0,0 @@
-#!/bin/bash
-
-# Copyright 2019 Tomoki Hayashi
-# MIT License (https://opensource.org/licenses/MIT)
-
-# shellcheck disable=SC1091
-. ./path.sh || exit 1;
-
-fs=24000
-num_dev=100
-num_eval=100
-train_set="train_nodev"
-dev_set="dev"
-eval_set="eval"
-shuffle=false
-
-# shellcheck disable=SC1091
-. utils/parse_options.sh || exit 1;
-
-db_root=$1
-data_dir=$2
-
-# check arguments
-if [ $# != 2 ]; then
- echo "Usage: $0 [Options] <db_root> <data_dir>"
- echo "e.g.: $0 downloads/CSMSC data"
- echo ""
- echo "Options:"
- echo " --fs: target sampling rate (default=24000)."
- echo " --num_dev: number of development utterances (default=100)."
- echo " --num_eval: number of evaluation utterances (default=100)."
- echo " --train_set: name of train set (default=train_nodev)."
- echo " --dev_set: name of dev set (default=dev)."
- echo " --eval_set: name of eval set (default=eval)."
- echo " --shuffle: whether to perform shuffle in making dev / eval set (default=false)."
- exit 1
-fi
-
-set -euo pipefail
-
-[ ! -e "${data_dir}/all" ] && mkdir -p "${data_dir}/all"
-
-# set filenames
-scp="${data_dir}/all/wav.scp"
-segments="${data_dir}/all/segments"
-
-# check file existence
-[ -e "${scp}" ] && rm "${scp}"
-[ -e "${segments}" ] && rm "${segments}"
-
-# make wav.scp
-find "${db_root}/Wave" -name "*.wav" -follow | sort | while read -r filename; do
- id="$(basename "${filename}" .wav)"
- echo "csmsc_${id} cat ${filename} | sox -t wav - -c 1 -b 16 -t wav - rate ${fs} |" >> "${scp}"
-done
-
-# make segments
-find "${db_root}/PhoneLabeling" -name "*.interval" -follow | sort | while read -r filename; do
- nkf -Lu --overwrite "${filename}"
- id="$(basename "${filename}" .interval)"
- start_sec=$(tail -n +14 "${filename}" | head -n 1)
- end_sec=$(head -n -2 "${filename}" | tail -n 1)
- [ -z "${start_sec}" ] && echo "Start second is missing (utt_id=${id}). " >&2 && exit 1;
- [ -z "${end_sec}" ] && echo "End second is missing (utt_id=${id})." >&2 && exit 1;
- echo "csmsc_${id} csmsc_${id} ${start_sec} ${end_sec}" >> "${segments}"
-done
-
-# check
-diff -q <(awk '{print $1}' "${scp}") <(awk '{print $1}' "${segments}") > /dev/null
-
-# split
-num_all=$(wc -l < "${scp}")
-num_deveval=$((num_dev + num_eval))
-num_train=$((num_all - num_deveval))
-utils/split_data.sh \
- --num_first "${num_train}" \
- --num_second "${num_deveval}" \
- --shuffle "${shuffle}" \
- "${data_dir}/all" \
- "${data_dir}/${train_set}" \
- "${data_dir}/deveval"
-utils/split_data.sh \
- --num_first "${num_dev}" \
- --num_second "${num_eval}" \
- --shuffle "${shuffle}" \
- "${data_dir}/deveval" \
- "${data_dir}/${dev_set}" \
- "${data_dir}/${eval_set}"
-
-# remove tmp directories
-rm -rf "${data_dir}/all"
-rm -rf "${data_dir}/deveval"
-
-echo "Successfully prepared data."
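The train/dev/eval carving performed by the two `utils/split_data.sh` calls above reduces to simple arithmetic: dev+eval utterances are carved off first and the remainder becomes the training set. A minimal Python sketch (the 10,000-utterance corpus size is an assumed example matching CSMSC's nominal size, not anything read from disk):

```python
# Mirror of the split arithmetic in data_prep.sh: carve off dev+eval,
# then the rest is training data.
def split_counts(num_all, num_dev=100, num_eval=100):
    num_deveval = num_dev + num_eval
    num_train = num_all - num_deveval
    if num_train <= 0:
        raise ValueError("corpus too small for the requested dev/eval sizes")
    return num_train, num_dev, num_eval

print(split_counts(10000))  # (9800, 100, 100)
```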
diff --git a/spaces/akhaliq/space-that-creates-model-demo-space/README.md b/spaces/akhaliq/space-that-creates-model-demo-space/README.md
deleted file mode 100644
index 39d063b2c8f6c24fa25e6d465652ec30c4080e88..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/space-that-creates-model-demo-space/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Space That Creates Model Demo Space
-emoji: 🐠
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.1.6
-app_file: app.py
-pinned: false
-duplicated_from: hysts/space-that-creates-model-demo-space
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/woolitize/README.md b/spaces/akhaliq/woolitize/README.md
deleted file mode 100644
index 5ee2781271998a48d37e3f1987244cf91b4249a5..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/woolitize/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Woolitize
-emoji: 🐨
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ali-ghamdan/realesrgan-models/scripts/generate_meta_info_pairdata.py b/spaces/ali-ghamdan/realesrgan-models/scripts/generate_meta_info_pairdata.py
deleted file mode 100644
index 76dce7e41c803a8055f3627cccb98deb51419b09..0000000000000000000000000000000000000000
--- a/spaces/ali-ghamdan/realesrgan-models/scripts/generate_meta_info_pairdata.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import argparse
-import glob
-import os
-
-
-def main(args):
- txt_file = open(args.meta_info, 'w')
- # sca images
- img_paths_gt = sorted(glob.glob(os.path.join(args.input[0], '*')))
- img_paths_lq = sorted(glob.glob(os.path.join(args.input[1], '*')))
-
- assert len(img_paths_gt) == len(img_paths_lq), ('GT folder and LQ folder should have the same length, but got '
- f'{len(img_paths_gt)} and {len(img_paths_lq)}.')
-
- for img_path_gt, img_path_lq in zip(img_paths_gt, img_paths_lq):
- # get the relative paths
- img_name_gt = os.path.relpath(img_path_gt, args.root[0])
- img_name_lq = os.path.relpath(img_path_lq, args.root[1])
- print(f'{img_name_gt}, {img_name_lq}')
- txt_file.write(f'{img_name_gt}, {img_name_lq}\n')
-
-
-if __name__ == '__main__':
- """This script is used to generate meta info (txt file) for paired images.
- """
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--input',
- nargs='+',
- default=['datasets/DF2K/DIV2K_train_HR_sub', 'datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub'],
- help='Input folder, should be [gt_folder, lq_folder]')
- parser.add_argument('--root', nargs='+', default=[None, None], help='Folder roots; each defaults to the parent folder of the corresponding input')
- parser.add_argument(
- '--meta_info',
- type=str,
- default='datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt',
- help='txt path for meta info')
- args = parser.parse_args()
-
- assert len(args.input) == 2, 'Input folder should have two elements: gt folder and lq folder'
- assert len(args.root) == 2, 'Root path should have two elements: root for gt folder and lq folder'
- os.makedirs(os.path.dirname(args.meta_info), exist_ok=True)
- for i in range(2):
- if args.input[i].endswith('/'):
- args.input[i] = args.input[i][:-1]
- if args.root[i] is None:
- args.root[i] = os.path.dirname(args.input[i])
-
- main(args)
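The pairing logic in the deleted script above — zip the sorted GT and LQ paths and write each pair relative to its root — can be sketched without touching the filesystem. The paths below are illustrative placeholders, not a real dataset layout:

```python
import os

def make_meta_lines(paths_gt, paths_lq, root_gt, root_lq):
    """Pair GT/LQ images by sorted order, as generate_meta_info_pairdata.py does."""
    assert len(paths_gt) == len(paths_lq), "GT and LQ folders must have the same length"
    return [
        "{}, {}".format(os.path.relpath(gt, root_gt), os.path.relpath(lq, root_lq))
        for gt, lq in zip(sorted(paths_gt), sorted(paths_lq))
    ]

lines = make_meta_lines(
    ["data/gt/0002.png", "data/gt/0001.png"],
    ["data/lq/0002.png", "data/lq/0001.png"],
    "data", "data",
)
print(lines)  # on POSIX: ['gt/0001.png, lq/0001.png', 'gt/0002.png, lq/0002.png']
```

Note that pairing by sorted order silently mismatches files if the two folders differ in naming, which is why the original script asserts equal lengths up front.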
diff --git a/spaces/allknowingroger/Image-Models-Test12/app.py b/spaces/allknowingroger/Image-Models-Test12/app.py
deleted file mode 100644
index 88aecd932be943eac7e2e66794e33361d85abf19..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test12/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "shivankarzz/me2",
- "Yntec/Protogen",
- "oljike/jdtlr_sdxl",
- "antoninobrillante/gtl-elephant-test2",
- "imjunaidafzal/saqib-v2",
- "imjunaidafzal/saqib-sarahkhan-t350-u4000-11-21-pm",
- "Joeythemonster/anything-midjourney-v-4-1",
- "amirxsanti/Alexismodel",
- "Abbood/stable-diff-abdul",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
- output = (model_functions.get(idx) or model_functions.get(1))(prompt)  # keys are ints, not strings
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
- if int(val) != 0:
- val = 0
- else:
- val = 0
- pass
- return val
-
-def all_task_end(cnt,t_stamp):
- to = t_stamp + 60
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
- # gr.Markdown("""- Primary prompt: what you want to draw (English words such as "a cat"; commas between terms work better; click the Improve button to refine)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/allknowingroger/Image-Models-Test19/README.md b/spaces/allknowingroger/Image-Models-Test19/README.md
deleted file mode 100644
index e22293829f169dd5a94981c7ab481bea41d1a451..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test19/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: More Image Models
-emoji: 😻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: true
-duplicated_from: allknowingroger/Image-Models-Test18
----
-
-
\ No newline at end of file
diff --git a/spaces/alphunt/diffdock-alphunt-demo/evaluate_confidence_calibration.py b/spaces/alphunt/diffdock-alphunt-demo/evaluate_confidence_calibration.py
deleted file mode 100644
index 8b7d2f457458d746945726777e6fcb961d6bfdd4..0000000000000000000000000000000000000000
--- a/spaces/alphunt/diffdock-alphunt-demo/evaluate_confidence_calibration.py
+++ /dev/null
@@ -1,361 +0,0 @@
-import os
-from argparse import ArgumentParser
-
-import pandas as pd
-import plotly.express as px
-import numpy as np
-import scipy
-
-from utils.utils import read_strings_from_txt
-
-parser = ArgumentParser()
-
-
-parser.add_argument('--data_dir', type=str, default='data/PDBBind_processed', help='')
-parser.add_argument('--results_path', type=str, default='inference_out_dir_not_specified/TEST_top40_epoch75_FILTER_restart_cacheNewRestart_big_ema_ESM2emb_tr34_WITH_fixedSamples28_id1_FILTERFROM_temp_restart_ema_ESM2emb_tr34', help='')
-parser.add_argument('--gnina_results_path', type=str, default='results/gnina_rosetta13', help='')
-parser.add_argument('--smina_results_path', type=str, default='results/smina_rosetta13', help='')
-parser.add_argument('--glide_results_path', type=str, default='results/glide', help='')
-parser.add_argument('--qvinaw_results_path', type=str, default='results/qvinaw', help='')
-parser.add_argument('--tankbind_results_path', type=str, default='results/tankbind_top5', help='')
-parser.add_argument('--equibind_results_path', type=str, default='results/equibind_paper', help='')
-parser.add_argument('--no_rec_overlap', action='store_true', default=False, help='')
-args = parser.parse_args()
-
-
-
-min_cross_distances = np.load(f'{args.results_path}/min_cross_distances.npy')
-#min_self_distances = np.load(f'{args.results_path}/min_self_distances.npy')
-base_min_cross_distances = np.load(f'{args.results_path}/base_min_cross_distances.npy')
-rmsds = np.load(f'{args.results_path}/rmsds.npy')
-centroid_distances = np.load(f'{args.results_path}/centroid_distances.npy')
-confidences = np.load(f'{args.results_path}/confidences.npy')
-#complex_names = np.load(f'{args.results_path}/complex_names.npy')
-complex_names = read_strings_from_txt('data/splits/timesplit_test')
-if args.no_rec_overlap:
- names_no_rec_overlap = read_strings_from_txt(f'data/splits/timesplit_test_no_rec_overlap')
- without_rec_overlap_list = []
- for name in complex_names:
- if name in names_no_rec_overlap:
- without_rec_overlap_list.append(1)
- else:
- without_rec_overlap_list.append(0)
- without_rec_overlap = np.array(without_rec_overlap_list, dtype=bool)
- rmsds = np.array(rmsds)[without_rec_overlap]
- #min_self_distances = np.array(min_self_distances)[without_rec_overlap]
- centroid_distances = np.array(centroid_distances)[without_rec_overlap]
- confidences = np.array(confidences)[without_rec_overlap]
- min_cross_distances = np.array(min_cross_distances)[without_rec_overlap]
- base_min_cross_distances = np.array(base_min_cross_distances)[without_rec_overlap]
- complex_names = names_no_rec_overlap
-
-
-
-
-N = rmsds.shape[1]
-performance_metrics = {
- 'steric_clash_fraction': (100 * (min_cross_distances < 0.4).sum() / len(min_cross_distances) / N).__round__(2),
- 'mean_rmsd': rmsds.mean(),
- 'rmsds_below_2': (100 * (rmsds < 2).sum() / len(rmsds) / N),
- 'rmsds_below_5': (100 * (rmsds < 5).sum() / len(rmsds) / N),
- 'rmsds_percentile_25': np.percentile(rmsds, 25).round(2),
- 'rmsds_percentile_50': np.percentile(rmsds, 50).round(2),
- 'rmsds_percentile_75': np.percentile(rmsds, 75).round(2),
-
- 'mean_centroid': centroid_distances.mean().__round__(2),
- 'centroid_below_2': (100 * (centroid_distances < 2).sum() / len(centroid_distances) / N).__round__(2),
- 'centroid_below_5': (100 * (centroid_distances < 5).sum() / len(centroid_distances) / N).__round__(2),
- 'centroid_percentile_25': np.percentile(centroid_distances, 25).round(2),
- 'centroid_percentile_50': np.percentile(centroid_distances, 50).round(2),
- 'centroid_percentile_75': np.percentile(centroid_distances, 75).round(2),
-}
-
-if N >= 5:
- top5_rmsds = np.min(rmsds[:, :5], axis=1)
- top5_centroid_distances = centroid_distances[np.arange(rmsds.shape[0])[:, None], np.argsort(rmsds[:, :5], axis=1)][:, 0]
- top5_min_cross_distances = min_cross_distances[np.arange(rmsds.shape[0])[:, None], np.argsort(rmsds[:, :5], axis=1)][:, 0]
- performance_metrics.update({
- 'top5_steric_clash_fraction': (100 * (top5_min_cross_distances < 0.4).sum() / len(top5_min_cross_distances)).__round__(2),
- 'top5_rmsds_below_2': (100 * (top5_rmsds < 2).sum() / len(top5_rmsds)).__round__(2),
- 'top5_rmsds_below_5': (100 * (top5_rmsds < 5).sum() / len(top5_rmsds)).__round__(2),
- 'top5_rmsds_percentile_25': np.percentile(top5_rmsds, 25).round(2),
- 'top5_rmsds_percentile_50': np.percentile(top5_rmsds, 50).round(2),
- 'top5_rmsds_percentile_75': np.percentile(top5_rmsds, 75).round(2),
-
- 'top5_centroid_below_2': (100 * (top5_centroid_distances < 2).sum() / len(top5_centroid_distances)).__round__(2),
- 'top5_centroid_below_5': (100 * (top5_centroid_distances < 5).sum() / len(top5_centroid_distances)).__round__(2),
- 'top5_centroid_percentile_25': np.percentile(top5_centroid_distances, 25).round(2),
- 'top5_centroid_percentile_50': np.percentile(top5_centroid_distances, 50).round(2),
- 'top5_centroid_percentile_75': np.percentile(top5_centroid_distances, 75).round(2),
- })
-
-if N >= 10:
- top10_rmsds = np.min(rmsds[:, :10], axis=1)
- top10_centroid_distances = centroid_distances[np.arange(rmsds.shape[0])[:, None], np.argsort(rmsds[:, :10], axis=1)][:, 0]
- top10_min_cross_distances = min_cross_distances[np.arange(rmsds.shape[0])[:, None], np.argsort(rmsds[:, :10], axis=1)][:, 0]
- performance_metrics.update({
- 'top10_steric_clash_fraction': (100 * (top10_min_cross_distances < 0.4).sum() / len(top10_min_cross_distances)).__round__(2),
- 'top10_rmsds_below_2': (100 * (top10_rmsds < 2).sum() / len(top10_rmsds)).__round__(2),
- 'top10_rmsds_below_5': (100 * (top10_rmsds < 5).sum() / len(top10_rmsds)).__round__(2),
- 'top10_rmsds_percentile_25': np.percentile(top10_rmsds, 25).round(2),
- 'top10_rmsds_percentile_50': np.percentile(top10_rmsds, 50).round(2),
- 'top10_rmsds_percentile_75': np.percentile(top10_rmsds, 75).round(2),
-
- 'top10_centroid_below_2': (100 * (top10_centroid_distances < 2).sum() / len(top10_centroid_distances)).__round__(2),
- 'top10_centroid_below_5': (100 * (top10_centroid_distances < 5).sum() / len(top10_centroid_distances)).__round__(2),
- 'top10_centroid_percentile_25': np.percentile(top10_centroid_distances, 25).round(2),
- 'top10_centroid_percentile_50': np.percentile(top10_centroid_distances, 50).round(2),
- 'top10_centroid_percentile_75': np.percentile(top10_centroid_distances, 75).round(2),
- })
-
-
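The repeated top-k pattern in this script — reorder each row of a per-pose array by a score with `argsort`, slice the first k columns, then reduce row-wise — can be sketched in isolation. The arrays below are hypothetical toy values, not results from the evaluation:

```python
import numpy as np

# Hypothetical per-pose values: 3 complexes x 4 poses.
rmsds = np.array([[3.0, 1.0, 2.0, 5.0],
                  [6.0, 4.0, 7.0, 2.5],
                  [1.5, 8.0, 0.5, 3.0]])
confidences = np.array([[0.1, 0.9, 0.4, 0.2],
                        [0.3, 0.2, 0.8, 0.6],
                        [0.7, 0.1, 0.5, 0.9]])

# Reorder every row by descending confidence (the 'confidence_ordering' idiom).
order = np.argsort(confidences, axis=1)[:, ::-1]
rows = np.arange(rmsds.shape[0])[:, None]
reordered = rmsds[rows, order]

top1 = reordered[:, 0]                    # RMSD of the most confident pose
top2 = np.min(reordered[:, :2], axis=1)   # best RMSD among the 2 most confident
print(top1, top2)
```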
-confidence_ordering = np.argsort(confidences,axis=1)[:,::-1]
-filtered_rmsds = rmsds[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:,0]
-filtered_centroid_distances = centroid_distances[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:,0]
-filtered_min_cross_distances = min_cross_distances[np.arange(rmsds.shape[0])[:, None], confidence_ordering][:, 0]
-performance_metrics.update({
- 'filtered_steric_clash_fraction': (100 * (filtered_min_cross_distances < 0.4).sum() / len(filtered_min_cross_distances)).__round__(2),
- 'filtered_rmsds_below_2': (100 * (filtered_rmsds < 2).sum() / len(filtered_rmsds)).__round__(2),
- 'filtered_rmsds_below_5': (100 * (filtered_rmsds < 5).sum() / len(filtered_rmsds)).__round__(2),
- 'filtered_rmsds_percentile_25': np.percentile(filtered_rmsds, 25).round(2),
- 'filtered_rmsds_percentile_50': np.percentile(filtered_rmsds, 50).round(2),
- 'filtered_rmsds_percentile_75': np.percentile(filtered_rmsds, 75).round(2),
-
- 'filtered_centroid_below_2': (100 * (filtered_centroid_distances < 2).sum() / len(filtered_centroid_distances)).__round__(2),
- 'filtered_centroid_below_5': (100 * (filtered_centroid_distances < 5).sum() / len(filtered_centroid_distances)).__round__(2),
- 'filtered_centroid_percentile_25': np.percentile(filtered_centroid_distances, 25).round(2),
- 'filtered_centroid_percentile_50': np.percentile(filtered_centroid_distances, 50).round(2),
- 'filtered_centroid_percentile_75': np.percentile(filtered_centroid_distances, 75).round(2),
-})
-
-if N >= 5:
- top5_filtered_rmsds = np.min(rmsds[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:,:5], axis=1)
- top5_filtered_centroid_distances = centroid_distances[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:,:5][ np.arange(rmsds.shape[0])[:, None], np.argsort(rmsds[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:, :5], axis=1)][:, 0]
- top5_filtered_min_cross_distances = min_cross_distances[np.arange(rmsds.shape[0])[:, None], confidence_ordering][:, :5][ np.arange(rmsds.shape[0])[:, None], np.argsort(rmsds[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:, :5], axis=1)][:, 0]
- performance_metrics.update({
- 'top5_filtered_steric_clash_fraction': (100 * (top5_filtered_min_cross_distances < 0.4).sum() / len(top5_filtered_min_cross_distances)).__round__(2),
- 'top5_filtered_rmsds_below_2': (100 * (top5_filtered_rmsds < 2).sum() / len(top5_filtered_rmsds)).__round__(2),
- 'top5_filtered_rmsds_below_5': (100 * (top5_filtered_rmsds < 5).sum() / len(top5_filtered_rmsds)).__round__(2),
- 'top5_filtered_rmsds_percentile_25': np.percentile(top5_filtered_rmsds, 25).round(2),
- 'top5_filtered_rmsds_percentile_50': np.percentile(top5_filtered_rmsds, 50).round(2),
- 'top5_filtered_rmsds_percentile_75': np.percentile(top5_filtered_rmsds, 75).round(2),
-
- 'top5_filtered_centroid_below_2': (100 * (top5_filtered_centroid_distances < 2).sum() / len(top5_filtered_centroid_distances)).__round__(2),
- 'top5_filtered_centroid_below_5': (100 * (top5_filtered_centroid_distances < 5).sum() / len(top5_filtered_centroid_distances)).__round__(2),
- 'top5_filtered_centroid_percentile_25': np.percentile(top5_filtered_centroid_distances, 25).round(2),
- 'top5_filtered_centroid_percentile_50': np.percentile(top5_filtered_centroid_distances, 50).round(2),
- 'top5_filtered_centroid_percentile_75': np.percentile(top5_filtered_centroid_distances, 75).round(2),
- })
-if N >= 10:
- top10_filtered_rmsds = np.min(rmsds[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:,:10], axis=1)
- top10_filtered_centroid_distances = centroid_distances[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:,:10][ np.arange(rmsds.shape[0])[:, None], np.argsort(rmsds[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:, :10], axis=1)][:, 0]
- top10_filtered_min_cross_distances = min_cross_distances[np.arange(rmsds.shape[0])[:, None], confidence_ordering][:, :10][ np.arange(rmsds.shape[0])[:, None], np.argsort(rmsds[np.arange(rmsds.shape[0])[:,None],confidence_ordering][:, :10], axis=1)][:, 0]
- performance_metrics.update({
- 'top10_filtered_steric_clash_fraction': (100 * (top10_filtered_min_cross_distances < 0.4).sum() / len(top10_filtered_min_cross_distances)).__round__(2),
- 'top10_filtered_rmsds_below_2': (100 * (top10_filtered_rmsds < 2).sum() / len(top10_filtered_rmsds)).__round__(2),
- 'top10_filtered_rmsds_below_5': (100 * (top10_filtered_rmsds < 5).sum() / len(top10_filtered_rmsds)).__round__(2),
- 'top10_filtered_rmsds_percentile_25': np.percentile(top10_filtered_rmsds, 25).round(2),
- 'top10_filtered_rmsds_percentile_50': np.percentile(top10_filtered_rmsds, 50).round(2),
- 'top10_filtered_rmsds_percentile_75': np.percentile(top10_filtered_rmsds, 75).round(2),
-
- 'top10_filtered_centroid_below_2': (100 * (top10_filtered_centroid_distances < 2).sum() / len(top10_filtered_centroid_distances)).__round__(2),
- 'top10_filtered_centroid_below_5': (100 * (top10_filtered_centroid_distances < 5).sum() / len(top10_filtered_centroid_distances)).__round__(2),
- 'top10_filtered_centroid_percentile_25': np.percentile(top10_filtered_centroid_distances, 25).round(2),
- 'top10_filtered_centroid_percentile_50': np.percentile(top10_filtered_centroid_distances, 50).round(2),
- 'top10_filtered_centroid_percentile_75': np.percentile(top10_filtered_centroid_distances, 75).round(2),
- })
-
-reverse_confidence_ordering = np.argsort(confidences,axis=1)
-reverse_filtered_rmsds = rmsds[np.arange(rmsds.shape[0])[:, None], reverse_confidence_ordering][:, 0]
-reverse_filtered_centroid_distances = centroid_distances[np.arange(rmsds.shape[0])[:, None], reverse_confidence_ordering][:, 0]
-reverse_filtered_min_cross_distances = min_cross_distances[np.arange(rmsds.shape[0])[:, None], reverse_confidence_ordering][:, 0]
-performance_metrics.update({
- 'reversefiltered_steric_clash_fraction': (100 * (reverse_filtered_min_cross_distances < 0.4).sum() / len(reverse_filtered_min_cross_distances)).__round__(2),
- 'reversefiltered_rmsds_below_2': (100 * (reverse_filtered_rmsds < 2).sum() / len(reverse_filtered_rmsds)).__round__(2),
- 'reversefiltered_rmsds_below_5': (100 * (reverse_filtered_rmsds < 5).sum() / len(reverse_filtered_rmsds)).__round__(2),
- 'reversefiltered_rmsds_percentile_25': np.percentile(reverse_filtered_rmsds, 25).round(2),
- 'reversefiltered_rmsds_percentile_50': np.percentile(reverse_filtered_rmsds, 50).round(2),
- 'reversefiltered_rmsds_percentile_75': np.percentile(reverse_filtered_rmsds, 75).round(2),
-
- 'reversefiltered_centroid_below_2': (100 * (reverse_filtered_centroid_distances < 2).sum() / len(reverse_filtered_centroid_distances)).__round__(2),
- 'reversefiltered_centroid_below_5': (100 * (reverse_filtered_centroid_distances < 5).sum() / len(reverse_filtered_centroid_distances)).__round__(2),
- 'reversefiltered_centroid_percentile_25': np.percentile(reverse_filtered_centroid_distances, 25).round(2),
- 'reversefiltered_centroid_percentile_50': np.percentile(reverse_filtered_centroid_distances, 50).round(2),
- 'reversefiltered_centroid_percentile_75': np.percentile(reverse_filtered_centroid_distances, 75).round(2),
-})
-
-if N >= 5:
- top5_reverse_filtered_rmsds = np.min(rmsds[np.arange(rmsds.shape[0])[:, None], reverse_confidence_ordering][:, :5], axis=1)
- top5_reverse_filtered_centroid_distances = np.min(centroid_distances[np.arange(rmsds.shape[0])[:, None], reverse_confidence_ordering][:, :5], axis=1)
- top5_reverse_filtered_min_cross_distances = np.max(min_cross_distances[np.arange(rmsds.shape[0])[:, None], reverse_confidence_ordering][:, :5], axis=1)
- performance_metrics.update({
- 'top5_reverse_filtered_steric_clash_fraction': (100 * (top5_reverse_filtered_min_cross_distances < 0.4).sum() / len(top5_reverse_filtered_min_cross_distances)).__round__(2),
- 'top5_reversefiltered_rmsds_below_2': (100 * (top5_reverse_filtered_rmsds < 2).sum() / len(top5_reverse_filtered_rmsds)).__round__(2),
- 'top5_reversefiltered_rmsds_below_5': (100 * (top5_reverse_filtered_rmsds < 5).sum() / len(top5_reverse_filtered_rmsds)).__round__(2),
- 'top5_reversefiltered_rmsds_percentile_25': np.percentile(top5_reverse_filtered_rmsds, 25).round(2),
- 'top5_reversefiltered_rmsds_percentile_50': np.percentile(top5_reverse_filtered_rmsds, 50).round(2),
- 'top5_reversefiltered_rmsds_percentile_75': np.percentile(top5_reverse_filtered_rmsds, 75).round(2),
-
- 'top5_reversefiltered_centroid_below_2': (100 * (top5_reverse_filtered_centroid_distances < 2).sum() / len(top5_reverse_filtered_centroid_distances)).__round__(2),
- 'top5_reversefiltered_centroid_below_5': (100 * (top5_reverse_filtered_centroid_distances < 5).sum() / len(top5_reverse_filtered_centroid_distances)).__round__(2),
- 'top5_reversefiltered_centroid_percentile_25': np.percentile(top5_reverse_filtered_centroid_distances, 25).round(2),
- 'top5_reversefiltered_centroid_percentile_50': np.percentile(top5_reverse_filtered_centroid_distances, 50).round(2),
- 'top5_reversefiltered_centroid_percentile_75': np.percentile(top5_reverse_filtered_centroid_distances, 75).round(2),
- })
-
-if N >= 10:
- top10_reverse_filtered_rmsds = np.min(rmsds[np.arange(rmsds.shape[0])[:, None], reverse_confidence_ordering][:, :10], axis=1)
- top10_reverse_filtered_centroid_distances = np.min(centroid_distances[np.arange(rmsds.shape[0])[:, None], reverse_confidence_ordering][:, :10], axis=1)
- top10_reverse_filtered_min_cross_distances = np.max(min_cross_distances[np.arange(rmsds.shape[0])[:, None], reverse_confidence_ordering][:, :10], axis=1)
- performance_metrics.update({
- 'top10_reverse_filtered_steric_clash_fraction': (100 * (top10_reverse_filtered_min_cross_distances < 0.4).sum() / len(top10_reverse_filtered_min_cross_distances)).__round__(2),
- 'top10_reversefiltered_rmsds_below_2': (100 * (top10_reverse_filtered_rmsds < 2).sum() / len(top10_reverse_filtered_rmsds)).__round__(2),
- 'top10_reversefiltered_rmsds_below_5': (100 * (top10_reverse_filtered_rmsds < 5).sum() / len(top10_reverse_filtered_rmsds)).__round__(2),
- 'top10_reversefiltered_rmsds_percentile_25': np.percentile(top10_reverse_filtered_rmsds, 25).round(2),
- 'top10_reversefiltered_rmsds_percentile_50': np.percentile(top10_reverse_filtered_rmsds, 50).round(2),
- 'top10_reversefiltered_rmsds_percentile_75': np.percentile(top10_reverse_filtered_rmsds, 75).round(2),
-
- 'top10_reversefiltered_centroid_below_2': (100 * (top10_reverse_filtered_centroid_distances < 2).sum() / len(top10_reverse_filtered_centroid_distances)).__round__(2),
- 'top10_reversefiltered_centroid_below_5': (100 * (top10_reverse_filtered_centroid_distances < 5).sum() / len(top10_reverse_filtered_centroid_distances)).__round__(2),
- 'top10_reversefiltered_centroid_percentile_25': np.percentile(top10_reverse_filtered_centroid_distances, 25).round(2),
- 'top10_reversefiltered_centroid_percentile_50': np.percentile(top10_reverse_filtered_centroid_distances, 50).round(2),
- 'top10_reversefiltered_centroid_percentile_75': np.percentile(top10_reverse_filtered_centroid_distances, 75).round(2),
- })
-
-filtered_confidences = confidences[np.arange(confidences.shape[0])[:,None],confidence_ordering][:,0]
-
-confident_mask = filtered_confidences > 0
-confident_rmsds = filtered_rmsds[confident_mask]
-confident_centroid_distances = filtered_centroid_distances[confident_mask]
-confident_min_cross_distances = filtered_min_cross_distances[confident_mask]
-
-performance_metrics.update({
- 'fraction_confident_predictions': (100 * len(confident_rmsds) / len(rmsds)).__round__(2),
- 'confident_steric_clash_fraction': (100 * (confident_min_cross_distances < 0.4).sum() / len(confident_min_cross_distances)).__round__(2),
- 'confident_rmsds_below_2': (100 * (confident_rmsds < 2).sum() / len(confident_rmsds)).__round__(2),
- 'confident_rmsds_below_5': (100 * (confident_rmsds < 5).sum() / len(confident_rmsds)).__round__(2),
- 'confident_rmsds_percentile_25': np.percentile(confident_rmsds, 25).round(2),
- 'confident_rmsds_percentile_50': np.percentile(confident_rmsds, 50).round(2),
- 'confident_rmsds_percentile_75': np.percentile(confident_rmsds, 75).round(2),
-
- 'confident_centroid_below_2': (100 * (confident_centroid_distances < 2).sum() / len(confident_centroid_distances)).__round__(2),
- 'confident_centroid_below_5': (100 * (confident_centroid_distances < 5).sum() / len(confident_centroid_distances)).__round__(2),
- 'confident_centroid_percentile_25': np.percentile(confident_centroid_distances, 25).round(2),
- 'confident_centroid_percentile_50': np.percentile(confident_centroid_distances, 50).round(2),
- 'confident_centroid_percentile_75': np.percentile(confident_centroid_distances, 75).round(2),
-})
-
-for k in performance_metrics:
- print(k, performance_metrics[k])
-
-fraction_dataset_rmsds_below_2 = []
-perfect_calibration = []
-no_calibration = []
-for dataset_percentage in range(100):
- dataset_percentage += 1
- dataset_fraction = (dataset_percentage)/100
- num_samples = round(len(rmsds)*dataset_fraction)
- per_complex_confidence_ordering = np.argsort(filtered_confidences)[::-1]
- confident_complexes_rmsds = filtered_rmsds[per_complex_confidence_ordering][:num_samples]
- confident_complexes_centroid_distances = filtered_centroid_distances[per_complex_confidence_ordering][:num_samples]
- confident_complexes_min_cross_distances = filtered_min_cross_distances[per_complex_confidence_ordering][:num_samples]
- confident_complexes_metrics = {
- 'fraction_confident_complexes_predictions': (100 * len(confident_complexes_rmsds) / len(rmsds)).__round__(2),
- 'confident_complexes_steric_clash_fraction': (100 * (confident_complexes_min_cross_distances < 0.4).sum() / len(confident_complexes_min_cross_distances)).__round__(2),
- 'confident_complexes_rmsds_below_2': (100 * (confident_complexes_rmsds < 2).sum() / len(confident_complexes_rmsds)).__round__(2),
- 'confident_complexes_rmsds_below_5': (100 * (confident_complexes_rmsds < 5).sum() / len(confident_complexes_rmsds)).__round__(2),
- 'confident_complexes_rmsds_percentile_25': np.percentile(confident_complexes_rmsds, 25).round(2),
- 'confident_complexes_rmsds_percentile_50': np.percentile(confident_complexes_rmsds, 50).round(2),
- 'confident_complexes_rmsds_percentile_75': np.percentile(confident_complexes_rmsds, 75).round(2),
-
- 'confident_complexes_centroid_below_2': (100 * (confident_complexes_centroid_distances < 2).sum() / len(confident_complexes_centroid_distances)).__round__(2),
- 'confident_complexes_centroid_below_5': (100 * (confident_complexes_centroid_distances < 5).sum() / len(confident_complexes_centroid_distances)).__round__(2),
- 'confident_complexes_centroid_percentile_25': np.percentile(confident_complexes_centroid_distances, 25).round(2),
- 'confident_complexes_centroid_percentile_50': np.percentile(confident_complexes_centroid_distances, 50).round(2),
- 'confident_complexes_centroid_percentile_75': np.percentile(confident_complexes_centroid_distances, 75).round(2),
- }
- fraction_dataset_rmsds_below_2.append(confident_complexes_metrics['confident_complexes_rmsds_below_2'])
- perfect_calibration.append((100 * (np.sort(filtered_rmsds)[:num_samples] < 2).sum() / len(confident_complexes_rmsds)).__round__(2))
- no_calibration.append(performance_metrics['filtered_rmsds_below_2'])
- #print('percentage: ',dataset_percentage)
- #print(confident_complexes_metrics['confident_complexes_rmsds_below_2'])
-
-print(scipy.stats.spearmanr(filtered_rmsds, filtered_confidences))
-df = {'conf': filtered_confidences, 'rmsd': filtered_rmsds}
-fig = px.scatter(df, x='rmsd',y='conf').update_layout(
-    xaxis_title="RMSD (Å)", yaxis_title="Confidence score"
-)
-fig.update_layout(margin={'l': 0, 'r': 0, 't': 20, 'b': 100}, plot_bgcolor='white',
- paper_bgcolor='white', legend_title_text='', legend_title_font_size=1,
- legend=dict(yanchor="bottom", y=0.1, xanchor="right", x=0.99, font=dict(size=17), ),
- )
-fig.update_xaxes(showgrid=True, gridcolor='lightgrey',title_font=dict(size=19),mirror=True,ticks='outside',showline=True,)
-fig.update_yaxes(showgrid=True, gridcolor='lightgrey',title_font=dict(size=19),mirror=True,ticks='outside',showline=True,)
-fig.show()
-
-df = {'Confidence Model': fraction_dataset_rmsds_below_2[::-1], 'No Calibration': no_calibration[::-1], 'Perfect Calibration': perfect_calibration[::-1]}
-fig = px.line(df, y=list(df.keys())).update_layout(
- xaxis_title="Percentage of datapoints that may be abstained", yaxis_title="Percentage of predictions with RMSD < 2A"
-)
-fig.update_yaxes(range = [0,103])
-fig.update_layout(margin={'l': 0, 'r': 0, 't': 20, 'b': 100}, plot_bgcolor='white',
- paper_bgcolor='white', legend_title_text='', legend_title_font_size=1,
- legend=dict(yanchor="bottom", y=0.1, xanchor="right", x=0.99, font=dict(size=17), ),
- )
-fig.update_xaxes(showgrid=True, gridcolor='lightgrey',title_font=dict(size=19),mirror=True,ticks='outside',showline=True,)
-fig.update_yaxes(showgrid=True, gridcolor='lightgrey',title_font=dict(size=19),mirror=True,ticks='outside',showline=True,)
-fig.write_image('results/confidence_calibration.pdf')
-fig.show()
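The calibration sweep above repeatedly evaluates one operation: keep only the most confident fraction of complexes and measure the RMSD < 2 Å rate among the kept predictions. A minimal, self-contained sketch of that step, with made-up per-complex values:

```python
import numpy as np

# Hypothetical per-complex values: RMSD of the top-ranked pose and its confidence.
filtered_rmsds = np.array([1.0, 3.0, 1.5, 6.0, 0.8, 2.5])
filtered_confidences = np.array([0.9, 0.2, 0.8, 0.1, 0.95, 0.4])

def rmsd_below_2_when_keeping(fraction):
    """Keep the most confident `fraction` of complexes and return the
    percentage of kept predictions with RMSD < 2 A."""
    num_samples = round(len(filtered_rmsds) * fraction)
    order = np.argsort(filtered_confidences)[::-1]   # descending confidence
    kept = filtered_rmsds[order][:num_samples]
    return 100 * (kept < 2).sum() / len(kept)

print(rmsd_below_2_when_keeping(0.5))   # abstain on the half we trust least
print(rmsd_below_2_when_keeping(1.0))   # no abstention
```

With a well-calibrated confidence model the curve rises as the abstention budget grows, which is exactly what the plot above visualizes.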
-
-def filter_by_names(method_names, method_array, names_to_keep):
- output_array = []
- output_names = []
- for method_name, array_element in zip(method_names,method_array):
- if method_name in names_to_keep:
- output_array.append(array_element)
- output_names.append(method_name)
- return np.array(output_array), np.array(output_names)
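A small usage sketch of `filter_by_names` with made-up complex names (the function is restated so the snippet runs standalone):

```python
import numpy as np

def filter_by_names(method_names, method_array, names_to_keep):
    # Same logic as above: keep (value, name) pairs whose name is allowed,
    # preserving the original order of method_names.
    output_array, output_names = [], []
    for method_name, array_element in zip(method_names, method_array):
        if method_name in names_to_keep:
            output_array.append(array_element)
            output_names.append(method_name)
    return np.array(output_array), np.array(output_names)

names = ['1abc', '2xyz', '3def']          # hypothetical PDB-style names
rmsd_values = np.array([1.2, 3.4, 0.9])
kept_values, kept_names = filter_by_names(names, rmsd_values, {'1abc', '3def'})
print(kept_values, kept_names)
```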
-
-qvinaw_rmsds = np.load(os.path.join(args.qvinaw_results_path, 'rmsds.npy'))
-qvinaw_names = np.load(os.path.join(args.qvinaw_results_path, 'names.npy'))
-qvinaw_rmsds, qvinaw_names = filter_by_names(qvinaw_names, qvinaw_rmsds, complex_names)
-qvinaw_rmsds = np.concatenate([qvinaw_rmsds, np.random.choice(qvinaw_rmsds, size=len(complex_names) - len(qvinaw_rmsds))])
-
-glide_rmsds = np.load(os.path.join(args.glide_results_path, 'rmsds.npy'))
-glide_names = np.load(os.path.join(args.glide_results_path, 'names.npy')).tolist()
-glide_rmsds, glide_names = filter_by_names(glide_names, glide_rmsds, complex_names)
-glide_rmsds = np.concatenate([glide_rmsds, np.random.choice(glide_rmsds, size=len(complex_names) - len(glide_rmsds))])
-
-smina_rmsds = np.load(os.path.join(args.smina_results_path, 'rmsds.npy'))[:,0]
-smina_names = np.load(os.path.join(args.smina_results_path, 'names.npy'))
-smina_rmsds, smina_names = filter_by_names(smina_names, smina_rmsds, complex_names)
-smina_rmsds = np.concatenate([smina_rmsds, np.random.choice(smina_rmsds, size=len(complex_names) - len(smina_rmsds))])
-
-gnina_rmsds = np.load(os.path.join(args.gnina_results_path, 'rmsds.npy'))[:,0]
-gnina_names = np.load(os.path.join(args.gnina_results_path, 'names.npy'))
-gnina_rmsds, gnina_names = filter_by_names(gnina_names, gnina_rmsds, complex_names)
-gnina_rmsds = np.concatenate([gnina_rmsds, np.random.choice(gnina_rmsds, size=len(complex_names) - len(gnina_rmsds))])
-
-tankbind_rmsds = np.load(os.path.join(args.tankbind_results_path, 'rmsds.npy'))[:,0]
-tankbind_names = np.load(os.path.join(args.tankbind_results_path, 'names.npy'))
-tankbind_rmsds, tankbind_names = filter_by_names(tankbind_names, tankbind_rmsds, complex_names)
-
-equibind_rmsds = np.load(os.path.join(args.equibind_results_path, 'rmsds.npy'))
-equibind_names = np.load(os.path.join(args.equibind_results_path, 'names.npy'))
-equibind_rmsds, equibind_names = filter_by_names(equibind_names, equibind_rmsds, complex_names)
-
-
-df = {'DiffDock': filtered_rmsds, 'GLIDE': glide_rmsds, 'GNINA': gnina_rmsds, 'SMINA': smina_rmsds, 'QVinaW':qvinaw_rmsds, 'TANKBind': tankbind_rmsds, 'EquiBind': equibind_rmsds}
-fig = px.ecdf(df, range_x=[0, 5], range_y=[0.001, 0.75], width=600, height=400)
-fig.add_vline(x=2, annotation_text='', annotation_font_size=20, annotation_position="top right",
- line_dash='dash', line_color='firebrick', annotation_font_color='firebrick')
-fig.update_xaxes(title='RMSD (Å)')
-fig.update_yaxes(title='Fraction with lower RMSD')
-fig.update_layout(autosize=False, margin={'l': 65, 'r': 5, 't': 5, 'b': 60}, plot_bgcolor='white',
- paper_bgcolor='white', legend_title_text='', legend_title_font_size=18,
- legend=dict(yanchor="top", y=0.995, xanchor="left", x=0.02, font=dict(size=18, color='black'), ), )
-fig.update_xaxes(showgrid=True, gridcolor='lightgrey',title_font=dict(size=23, color='black'),mirror=True,ticks='outside',showline=True, linewidth=1, linecolor='black', tickfont = dict(size = 18, color='black'))
-fig.update_yaxes(showgrid=True, gridcolor='lightgrey',title_font=dict(size=23, color='black'),mirror=True,ticks='outside',showline=True, linewidth=1, linecolor='black', tickfont = dict(size = 18, color='black'))
-fig.update_traces(line=dict(width=3))
-fig.write_image('results/rmsds_nooverlap.pdf')
-fig.show()
\ No newline at end of file
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_util.h b/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_util.h
deleted file mode 100644
index 08dc0ec64859f0c5467b53cdc7948e0d233f53f3..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_util.h
+++ /dev/null
@@ -1,159 +0,0 @@
-#ifndef PA_UTIL_H
-#define PA_UTIL_H
-/*
- * $Id$
- * Portable Audio I/O Library implementation utilities header
- * common implementation utilities and interfaces
- *
- * Based on the Open Source API proposed by Ross Bencina
- * Copyright (c) 1999-2008 Ross Bencina, Phil Burk
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/** @file
- @ingroup common_src
-
- @brief Prototypes for utility functions used by PortAudio implementations.
-
- Some functions declared here are defined in pa_front.c while others
- are implemented separately for each platform.
-*/
-
-
-#include "portaudio.h"
-
-#ifdef __cplusplus
-extern "C"
-{
-#endif /* __cplusplus */
-
-
-struct PaUtilHostApiRepresentation;
-
-
-/** Retrieve a specific host API representation. This function can be used
- by implementations to retrieve a pointer to their representation in
- host api specific extension functions which aren't passed a rep pointer
- by pa_front.c.
-
- @param hostApi A pointer to a host API representation pointer. Upon success
- this will receive the requested representation pointer.
-
- @param type A valid host API type identifier.
-
- @returns An error code. If the result is PaNoError then a pointer to the
- requested host API representation will be stored in *hostApi. If the host API
- specified by type is not found, this function returns paHostApiNotFound.
-*/
-PaError PaUtil_GetHostApiRepresentation( struct PaUtilHostApiRepresentation **hostApi,
- PaHostApiTypeId type );
-
-
-/** Convert a PortAudio device index into a host API specific device index.
- @param hostApiDevice Pointer to a device index, on success this will receive the
- converted device index value.
- @param device The PortAudio device index to convert.
- @param hostApi The host api which the index should be converted for.
-
- @returns On success returns PaNoError and places the converted index in the
- hostApiDevice parameter.
-*/
-PaError PaUtil_DeviceIndexToHostApiDeviceIndex(
- PaDeviceIndex *hostApiDevice, PaDeviceIndex device,
- struct PaUtilHostApiRepresentation *hostApi );
-
-
-/** Set the host error information returned by Pa_GetLastHostErrorInfo. This
- function and the paUnanticipatedHostError error code should be used as a
- last resort. Implementors should use existing PA error codes where possible,
- or nominate new ones. Note that it is always better to use
- PaUtil_SetLastHostErrorInfo() and paUnanticipatedHostError than to return an
- ambiguous or inaccurate PaError code.
-
- @param hostApiType The host API which encountered the error (i.e. of the caller)
-
- @param errorCode The error code returned by the native API function.
-
- @param errorText A string describing the error. PaUtil_SetLastHostErrorInfo
- makes a copy of the string, so it is not necessary for the pointer to remain
- valid after the call to PaUtil_SetLastHostErrorInfo() returns.
-
-*/
-void PaUtil_SetLastHostErrorInfo( PaHostApiTypeId hostApiType, long errorCode,
- const char *errorText );
-
-
-
-/* the following functions are implemented in a platform specific
- .c file
-*/
-
-/** Allocate size bytes, guaranteed to be aligned to a FIXME byte boundary */
-void *PaUtil_AllocateMemory( long size );
-
-
-/** Release block if non-NULL. block may be NULL */
-void PaUtil_FreeMemory( void *block );
-
-
-/** Return the number of currently allocated blocks. This function can be
- used for detecting memory leaks.
-
- @note Allocations will only be tracked if PA_TRACK_MEMORY is #defined. If
- it isn't, this function will always return 0.
-*/
-int PaUtil_CountCurrentlyAllocatedBlocks( void );
-
-
-/** Initialize the clock used by PaUtil_GetTime(). Call this before calling
- PaUtil_GetTime.
-
- @see PaUtil_GetTime
-*/
-void PaUtil_InitializeClock( void );
-
-
-/** Return the system time in seconds. Used to implement CPU load functions
-
- @see PaUtil_InitializeClock
-*/
-double PaUtil_GetTime( void );
-
-
-/* void Pa_Sleep( long msec ); must also be implemented in per-platform .c file */
-
-
-
-#ifdef __cplusplus
-}
-#endif /* __cplusplus */
-#endif /* PA_UTIL_H */
diff --git a/spaces/amish1729/LFUNet/keras_vggface/vggface.py b/spaces/amish1729/LFUNet/keras_vggface/vggface.py
deleted file mode 100644
index 300edc7200488db9f1fd8c18edf70fa0336fff39..0000000000000000000000000000000000000000
--- a/spaces/amish1729/LFUNet/keras_vggface/vggface.py
+++ /dev/null
@@ -1,112 +0,0 @@
-'''VGGFace models for Keras.
-
-# Reference:
-- [Deep Face Recognition](http://www.robots.ox.ac.uk/~vgg/publications/2015/Parkhi15/parkhi15.pdf)
-- [VGGFace2: A dataset for recognising faces across pose and age](http://www.robots.ox.ac.uk/~vgg/data/vgg_face2/vggface2.pdf)
-
-'''
-from __future__ import print_function
-from keras_vggface.models import RESNET50, VGG16, SENET50
-
-
-def VGGFace(include_top=True, model='vgg16', weights='vggface',
- input_tensor=None, input_shape=None,
- pooling=None,
- classes=None):
- """Instantiates the VGGFace architectures.
- Optionally loads weights pre-trained
- on VGGFace datasets. Note that when using TensorFlow,
- for best performance you should set
- `image_data_format="channels_last"` in your Keras config
- at ~/.keras/keras.json.
- The model and the weights are compatible with both
- TensorFlow and Theano. The data format
- convention used by the model is the one
- specified in your Keras config file.
- # Arguments
- include_top: whether to include the 3 fully-connected
- layers at the top of the network.
- weights: one of `None` (random initialization)
- or "vggface" (pre-training on VGGFACE datasets).
- input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
- to use as image input for the model.
- model: selects one of the available architectures:
- vgg16, resnet50 or senet50; default is vgg16.
- input_shape: optional shape tuple, only to be specified
- if `include_top` is False (otherwise the input shape
- has to be `(224, 224, 3)` (with `channels_last` data format)
- or `(3, 224, 224)` (with `channels_first` data format).
- It should have exactly 3 input channels,
- and width and height should be no smaller than 48.
- E.g. `(200, 200, 3)` would be one valid value.
- pooling: Optional pooling mode for feature extraction
- when `include_top` is `False`.
- - `None` means that the output of the model will be
- the 4D tensor output of the
- last convolutional layer.
- - `avg` means that global average pooling
- will be applied to the output of the
- last convolutional layer, and thus
- the output of the model will be a 2D tensor.
- - `max` means that global max pooling will
- be applied.
- classes: optional number of classes to classify images
- into, only to be specified if `include_top` is True, and
- if no `weights` argument is specified.
- # Returns
- A Keras model instance.
- # Raises
- ValueError: in case of invalid argument for `weights`,
- or invalid input shape.
- """
-
- if weights not in {'vggface', None}:
- raise ValueError('The `weights` argument should be either '
- '`None` (random initialization) or `vggface`'
- '(pre-training on VGGFace Datasets).')
-
- if model == 'vgg16':
-
- if classes is None:
- classes = 2622
-
- if weights == 'vggface' and include_top and classes != 2622:
- raise ValueError(
- 'If using `weights` as vggface original with `include_top`'
- ' as true, `classes` should be 2622')
-
- return VGG16(include_top=include_top, input_tensor=input_tensor,
- input_shape=input_shape, pooling=pooling,
- weights=weights,
- classes=classes)
-
-
- if model == 'resnet50':
-
- if classes is None:
- classes = 8631
-
- if weights == 'vggface' and include_top and classes != 8631:
- raise ValueError(
- 'If using `weights` as vggface original with `include_top`'
- ' as true, `classes` should be 8631')
-
- return RESNET50(include_top=include_top, input_tensor=input_tensor,
- input_shape=input_shape, pooling=pooling,
- weights=weights,
- classes=classes)
-
- if model == 'senet50':
-
- if classes is None:
- classes = 8631
-
- if weights == 'vggface' and include_top and classes != 8631:
- raise ValueError(
- 'If using `weights` as vggface original with `include_top`'
- ' as true, `classes` should be 8631')
-
- return SENET50(include_top=include_top, input_tensor=input_tensor,
- input_shape=input_shape, pooling=pooling,
- weights=weights,
- classes=classes)
\ No newline at end of file
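The dispatcher above falls back to a per-backbone default class count (2622 identities for the original VGGFace VGG16 weights, 8631 for the VGGFace2-trained ResNet50/SENet50 weights) and rejects mismatched combinations. That selection logic can be sketched as a standalone helper (`resolve_classes` is a hypothetical name for illustration, not part of the library):

```python
def resolve_classes(model, classes=None, weights='vggface', include_top=True):
    """Mirror the per-backbone defaults and validation used above."""
    defaults = {'vgg16': 2622, 'resnet50': 8631, 'senet50': 8631}
    if model not in defaults:
        raise ValueError(f"unknown model: {model}")
    if classes is None:
        # default to the class count of the pretrained head
        classes = defaults[model]
    if weights == 'vggface' and include_top and classes != defaults[model]:
        raise ValueError(
            'If using `weights` as vggface original with `include_top` '
            f'as true, `classes` should be {defaults[model]}')
    return classes

print(resolve_classes('vgg16'))     # 2622
print(resolve_classes('senet50'))   # 8631
```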
diff --git a/spaces/antonovmaxim/text-generation-webui-space/docs/WSL-installation-guide.md b/spaces/antonovmaxim/text-generation-webui-space/docs/WSL-installation-guide.md
deleted file mode 100644
index 7de38b114f7b0fb6e522c20520b3aadbb8161970..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/docs/WSL-installation-guide.md
+++ /dev/null
@@ -1,79 +0,0 @@
-Guide created by [@jfryton](https://github.com/jfryton). Thank you jfryton.
-
------
-
-Here's an easy-to-follow, step-by-step guide for installing Windows Subsystem for Linux (WSL) with Ubuntu on Windows 10/11:
-
-## Step 1: Enable WSL
-
-1. Press the Windows key + X and click on "Windows PowerShell (Admin)" or "Windows Terminal (Admin)" to open PowerShell or Terminal with administrator privileges.
-2. In the PowerShell window, type the following command and press Enter:
-
-```
-wsl --install
-```
-
-If this command doesn't work, you can enable WSL with the following command for Windows 10:
-
-```
-wsl --set-default-version 1
-```
-
-For Windows 11, you can use:
-
-```
-wsl --set-default-version 2
-```
-
-You may be prompted to restart your computer. If so, save your work and restart.
-
-## Step 2: Install Ubuntu
-
-1. Open the Microsoft Store.
-2. Search for "Ubuntu" in the search bar.
-3. Choose the desired Ubuntu version (e.g., Ubuntu 20.04 LTS) and click "Get" or "Install" to download and install the Ubuntu app.
-4. Once the installation is complete, click "Launch" or search for "Ubuntu" in the Start menu and open the app.
-
-## Step 3: Set up Ubuntu
-
-1. When you first launch the Ubuntu app, it will take a few minutes to set up. Be patient as it installs the necessary files and sets up your environment.
-2. Once the setup is complete, you will be prompted to create a new UNIX username and password. Choose a username and password, and make sure to remember them, as you will need them for future administrative tasks within the Ubuntu environment.
-
-## Step 4: Update and upgrade packages
-
-1. After setting up your username and password, it's a good idea to update and upgrade your Ubuntu system. Run the following commands in the Ubuntu terminal:
-
-```
-sudo apt update
-sudo apt upgrade
-```
-
-2. Enter your password when prompted. This will update the package list and upgrade any outdated packages.
-
-Congratulations! You have now installed WSL with Ubuntu on your Windows 10/11 system. You can use the Ubuntu terminal for various tasks, like running Linux commands, installing packages, or managing files.
-
-You can launch your WSL Ubuntu installation by selecting the Ubuntu app (like any other program installed on your computer) or typing 'ubuntu' into PowerShell or Terminal.
-
-## Step 5: Proceed with Linux instructions
-
-1. You can now follow the Linux setup instructions. If you receive any error messages about a missing tool or package, just install them using apt:
-
-```
-sudo apt install [missing package]
-```
-
-You will probably need to install `build-essential`:
-
-```
-sudo apt install build-essential
-```
-
-If you face any issues or need to troubleshoot, you can always refer to the official Microsoft documentation for WSL: https://docs.microsoft.com/en-us/windows/wsl/
-
-## Bonus: Port Forwarding
-
-By default, you won't be able to access the webui from another device on your local network. You will need to set up the appropriate port forwarding using the following command (run in PowerShell or Terminal with administrator privileges):
-
-```
-netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=7860 connectaddress=localhost connectport=7860
-```
diff --git a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/mask_decoder.py b/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/mask_decoder.py
deleted file mode 100644
index 3e86f7cc9ad95582a08ef2531c68d03fa4af8d99..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/modeling/mask_decoder.py
+++ /dev/null
@@ -1,176 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from typing import List, Tuple, Type
-
-from .common import LayerNorm2d
-
-
-class MaskDecoder(nn.Module):
- def __init__(
- self,
- *,
- transformer_dim: int,
- transformer: nn.Module,
- num_multimask_outputs: int = 3,
- activation: Type[nn.Module] = nn.GELU,
- iou_head_depth: int = 3,
- iou_head_hidden_dim: int = 256,
- ) -> None:
- """
- Predicts masks given an image and prompt embeddings, using a
- transformer architecture.
-
- Arguments:
- transformer_dim (int): the channel dimension of the transformer
- transformer (nn.Module): the transformer used to predict masks
- num_multimask_outputs (int): the number of masks to predict
- when disambiguating masks
- activation (nn.Module): the type of activation to use when
- upscaling masks
- iou_head_depth (int): the depth of the MLP used to predict
- mask quality
- iou_head_hidden_dim (int): the hidden dimension of the MLP
- used to predict mask quality
- """
- super().__init__()
- self.transformer_dim = transformer_dim
- self.transformer = transformer
-
- self.num_multimask_outputs = num_multimask_outputs
-
- self.iou_token = nn.Embedding(1, transformer_dim)
- self.num_mask_tokens = num_multimask_outputs + 1
- self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim)
-
- self.output_upscaling = nn.Sequential(
- nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2),
- LayerNorm2d(transformer_dim // 4),
- activation(),
- nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2),
- activation(),
- )
- self.output_hypernetworks_mlps = nn.ModuleList(
- [
- MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3)
- for i in range(self.num_mask_tokens)
- ]
- )
-
- self.iou_prediction_head = MLP(
- transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth
- )
-
- def forward(
- self,
- image_embeddings: torch.Tensor,
- image_pe: torch.Tensor,
- sparse_prompt_embeddings: torch.Tensor,
- dense_prompt_embeddings: torch.Tensor,
- multimask_output: bool,
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- """
- Predict masks given image and prompt embeddings.
-
- Arguments:
- image_embeddings (torch.Tensor): the embeddings from the image encoder
- image_pe (torch.Tensor): positional encoding with the shape of image_embeddings
- sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes
- dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs
- multimask_output (bool): Whether to return multiple masks or a single
- mask.
-
- Returns:
- torch.Tensor: batched predicted masks
- torch.Tensor: batched predictions of mask quality
- """
- masks, iou_pred = self.predict_masks(
- image_embeddings=image_embeddings,
- image_pe=image_pe,
- sparse_prompt_embeddings=sparse_prompt_embeddings,
- dense_prompt_embeddings=dense_prompt_embeddings,
- )
-
- # Select the correct mask or masks for output
- if multimask_output:
- mask_slice = slice(1, None)
- else:
- mask_slice = slice(0, 1)
- masks = masks[:, mask_slice, :, :]
- iou_pred = iou_pred[:, mask_slice]
-
- # Prepare output
- return masks, iou_pred
-
- def predict_masks(
- self,
- image_embeddings: torch.Tensor,
- image_pe: torch.Tensor,
- sparse_prompt_embeddings: torch.Tensor,
- dense_prompt_embeddings: torch.Tensor,
- ) -> Tuple[torch.Tensor, torch.Tensor]:
- """Predicts masks. See 'forward' for more details."""
- # Concatenate output tokens
- output_tokens = torch.cat([self.iou_token.weight, self.mask_tokens.weight], dim=0)
- output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.size(0), -1, -1)
- tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1)
-
- # Expand per-image data in batch direction to be per-mask
- src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0)
- src = src + dense_prompt_embeddings
- pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0)
- b, c, h, w = src.shape
-
- # Run the transformer
- hs, src = self.transformer(src, pos_src, tokens)
- iou_token_out = hs[:, 0, :]
- mask_tokens_out = hs[:, 1 : (1 + self.num_mask_tokens), :]
-
- # Upscale mask embeddings and predict masks using the mask tokens
- src = src.transpose(1, 2).view(b, c, h, w)
- upscaled_embedding = self.output_upscaling(src)
- hyper_in_list: List[torch.Tensor] = []
- for i in range(self.num_mask_tokens):
- hyper_in_list.append(self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :]))
- hyper_in = torch.stack(hyper_in_list, dim=1)
- b, c, h, w = upscaled_embedding.shape
- masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w)
-
- # Generate mask quality predictions
- iou_pred = self.iou_prediction_head(iou_token_out)
-
- return masks, iou_pred
-
-
-# Lightly adapted from
-# https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py # noqa
-class MLP(nn.Module):
- def __init__(
- self,
- input_dim: int,
- hidden_dim: int,
- output_dim: int,
- num_layers: int,
- sigmoid_output: bool = False,
- ) -> None:
- super().__init__()
- self.num_layers = num_layers
- h = [hidden_dim] * (num_layers - 1)
- self.layers = nn.ModuleList(
- nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])
- )
- self.sigmoid_output = sigmoid_output
-
- def forward(self, x):
- for i, layer in enumerate(self.layers):
- x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
- if self.sigmoid_output:
- x = F.sigmoid(x)
- return x
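The `MLP` above derives its `nn.Linear` shapes by zipping two shifted dimension lists. A dependency-free sketch (no torch, hypothetical helper name) of the `(in_features, out_features)` pairs that pattern produces:

```python
def mlp_layer_dims(input_dim, hidden_dim, output_dim, num_layers):
    """Replicates the zip([input_dim] + h, h + [output_dim]) pattern above."""
    h = [hidden_dim] * (num_layers - 1)
    return list(zip([input_dim] + h, h + [output_dim]))

# e.g. the hypernetwork MLPs above: transformer_dim=256, output 256 // 8, depth 3
print(mlp_layer_dims(256, 256, 32, 3))  # [(256, 256), (256, 256), (256, 32)]
```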
diff --git a/spaces/artificialguybr/video-dubbing/Wav2Lip/filelists/README.md b/spaces/artificialguybr/video-dubbing/Wav2Lip/filelists/README.md
deleted file mode 100644
index e7d7e7bb3b5adefc9fee84168693e978f129c6e6..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/Wav2Lip/filelists/README.md
+++ /dev/null
@@ -1 +0,0 @@
-Place LRS2 (and any other) filelists here for training.
\ No newline at end of file
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/GifImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/GifImagePlugin.py
deleted file mode 100644
index dd1b21f2e636683c4d81104c4ef49dce132a44ee..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/GifImagePlugin.py
+++ /dev/null
@@ -1,1062 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# GIF file handling
-#
-# History:
-# 1995-09-01 fl Created
-# 1996-12-14 fl Added interlace support
-# 1996-12-30 fl Added animation support
-# 1997-01-05 fl Added write support, fixed local colour map bug
-# 1997-02-23 fl Make sure to load raster data in getdata()
-# 1997-07-05 fl Support external decoder (0.4)
-# 1998-07-09 fl Handle all modes when saving (0.5)
-# 1998-07-15 fl Renamed offset attribute to avoid name clash
-# 2001-04-16 fl Added rewind support (seek to frame 0) (0.6)
-# 2001-04-17 fl Added palette optimization (0.7)
-# 2002-06-06 fl Added transparency support for save (0.8)
-# 2004-02-24 fl Disable interlacing for small images
-#
-# Copyright (c) 1997-2004 by Secret Labs AB
-# Copyright (c) 1995-2004 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import itertools
-import math
-import os
-import subprocess
-from enum import IntEnum
-
-from . import Image, ImageChops, ImageFile, ImagePalette, ImageSequence
-from ._binary import i16le as i16
-from ._binary import o8
-from ._binary import o16le as o16
-
-
-class LoadingStrategy(IntEnum):
- """.. versionadded:: 9.1.0"""
-
- RGB_AFTER_FIRST = 0
- RGB_AFTER_DIFFERENT_PALETTE_ONLY = 1
- RGB_ALWAYS = 2
-
-
-#: .. versionadded:: 9.1.0
-LOADING_STRATEGY = LoadingStrategy.RGB_AFTER_FIRST
-
-# --------------------------------------------------------------------
-# Identify/read GIF files
-
-
-def _accept(prefix):
- return prefix[:6] in [b"GIF87a", b"GIF89a"]
-
-
-##
-# Image plugin for GIF images. This plugin supports both GIF87 and
-# GIF89 images.
-
-
-class GifImageFile(ImageFile.ImageFile):
-
- format = "GIF"
- format_description = "Compuserve GIF"
- _close_exclusive_fp_after_loading = False
-
- global_palette = None
-
- def data(self):
- s = self.fp.read(1)
- if s and s[0]:
- return self.fp.read(s[0])
- return None
-
- def _is_palette_needed(self, p):
- for i in range(0, len(p), 3):
- if not (i // 3 == p[i] == p[i + 1] == p[i + 2]):
- return True
- return False
-
- def _open(self):
-
- # Screen
- s = self.fp.read(13)
- if not _accept(s):
- raise SyntaxError("not a GIF file")
-
- self.info["version"] = s[:6]
- self._size = i16(s, 6), i16(s, 8)
- self.tile = []
- flags = s[10]
- bits = (flags & 7) + 1
-
- if flags & 128:
- # get global palette
- self.info["background"] = s[11]
- # check if palette contains colour indices
- p = self.fp.read(3 << bits)
- if self._is_palette_needed(p):
- p = ImagePalette.raw("RGB", p)
- self.global_palette = self.palette = p
-
- self._fp = self.fp # FIXME: hack
- self.__rewind = self.fp.tell()
- self._n_frames = None
- self._is_animated = None
- self._seek(0) # get ready to read first frame
-
- @property
- def n_frames(self):
- if self._n_frames is None:
- current = self.tell()
- try:
- while True:
- self._seek(self.tell() + 1, False)
- except EOFError:
- self._n_frames = self.tell() + 1
- self.seek(current)
- return self._n_frames
-
- @property
- def is_animated(self):
- if self._is_animated is None:
- if self._n_frames is not None:
- self._is_animated = self._n_frames != 1
- else:
- current = self.tell()
- if current:
- self._is_animated = True
- else:
- try:
- self._seek(1, False)
- self._is_animated = True
- except EOFError:
- self._is_animated = False
-
- self.seek(current)
- return self._is_animated
-
- def seek(self, frame):
- if not self._seek_check(frame):
- return
- if frame < self.__frame:
- self.im = None
- self._seek(0)
-
- last_frame = self.__frame
- for f in range(self.__frame + 1, frame + 1):
- try:
- self._seek(f)
- except EOFError as e:
- self.seek(last_frame)
- raise EOFError("no more images in GIF file") from e
-
- def _seek(self, frame, update_image=True):
-
- if frame == 0:
- # rewind
- self.__offset = 0
- self.dispose = None
- self.__frame = -1
- self._fp.seek(self.__rewind)
- self.disposal_method = 0
- if "comment" in self.info:
- del self.info["comment"]
- else:
- # ensure that the previous frame was loaded
- if self.tile and update_image:
- self.load()
-
- if frame != self.__frame + 1:
- raise ValueError(f"cannot seek to frame {frame}")
-
- self.fp = self._fp
- if self.__offset:
- # backup to last frame
- self.fp.seek(self.__offset)
- while self.data():
- pass
- self.__offset = 0
-
- s = self.fp.read(1)
- if not s or s == b";":
- raise EOFError
-
- palette = None
-
- info = {}
- frame_transparency = None
- interlace = None
- frame_dispose_extent = None
- while True:
-
- if not s:
- s = self.fp.read(1)
- if not s or s == b";":
- break
-
- elif s == b"!":
- #
- # extensions
- #
- s = self.fp.read(1)
- block = self.data()
- if s[0] == 249:
- #
- # graphic control extension
- #
- flags = block[0]
- if flags & 1:
- frame_transparency = block[3]
- info["duration"] = i16(block, 1) * 10
-
- # disposal method - find the value of bits 4 - 6
- dispose_bits = 0b00011100 & flags
- dispose_bits = dispose_bits >> 2
- if dispose_bits:
- # only set the dispose if it is not
- # unspecified. I'm not sure if this is
- # correct, but it seems to prevent the last
- # frame from looking odd for some animations
- self.disposal_method = dispose_bits
- elif s[0] == 254:
- #
- # comment extension
- #
- comment = b""
-
- # Read this comment block
- while block:
- comment += block
- block = self.data()
-
- if "comment" in info:
- # If multiple comment blocks in frame, separate with \n
- info["comment"] += b"\n" + comment
- else:
- info["comment"] = comment
- s = None
- continue
- elif s[0] == 255 and frame == 0:
- #
- # application extension
- #
- info["extension"] = block, self.fp.tell()
- if block[:11] == b"NETSCAPE2.0":
- block = self.data()
- if len(block) >= 3 and block[0] == 1:
- self.info["loop"] = i16(block, 1)
- while self.data():
- pass
-
- elif s == b",":
- #
- # local image
- #
- s = self.fp.read(9)
-
- # extent
- x0, y0 = i16(s, 0), i16(s, 2)
- x1, y1 = x0 + i16(s, 4), y0 + i16(s, 6)
- if (x1 > self.size[0] or y1 > self.size[1]) and update_image:
- self._size = max(x1, self.size[0]), max(y1, self.size[1])
- Image._decompression_bomb_check(self._size)
- frame_dispose_extent = x0, y0, x1, y1
- flags = s[8]
-
- interlace = (flags & 64) != 0
-
- if flags & 128:
- bits = (flags & 7) + 1
- p = self.fp.read(3 << bits)
- if self._is_palette_needed(p):
- palette = ImagePalette.raw("RGB", p)
- else:
- palette = False
-
- # image data
- bits = self.fp.read(1)[0]
- self.__offset = self.fp.tell()
- break
-
- else:
- pass
- # raise OSError, "illegal GIF tag `%x`" % s[0]
- s = None
-
- if interlace is None:
- # self._fp = None
- raise EOFError
-
- self.__frame = frame
- if not update_image:
- return
-
- self.tile = []
-
- if self.dispose:
- self.im.paste(self.dispose, self.dispose_extent)
-
- self._frame_palette = palette if palette is not None else self.global_palette
- self._frame_transparency = frame_transparency
- if frame == 0:
- if self._frame_palette:
- if LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS:
- self.mode = "RGBA" if frame_transparency is not None else "RGB"
- else:
- self.mode = "P"
- else:
- self.mode = "L"
-
- if not palette and self.global_palette:
- from copy import copy
-
- palette = copy(self.global_palette)
- self.palette = palette
- else:
- if self.mode == "P":
- if (
- LOADING_STRATEGY != LoadingStrategy.RGB_AFTER_DIFFERENT_PALETTE_ONLY
- or palette
- ):
- self.pyaccess = None
- if "transparency" in self.info:
- self.im.putpalettealpha(self.info["transparency"], 0)
- self.im = self.im.convert("RGBA", Image.Dither.FLOYDSTEINBERG)
- self.mode = "RGBA"
- del self.info["transparency"]
- else:
- self.mode = "RGB"
- self.im = self.im.convert("RGB", Image.Dither.FLOYDSTEINBERG)
-
- def _rgb(color):
- if self._frame_palette:
- color = tuple(self._frame_palette.palette[color * 3 : color * 3 + 3])
- else:
- color = (color, color, color)
- return color
-
- self.dispose_extent = frame_dispose_extent
- try:
- if self.disposal_method < 2:
- # do not dispose or none specified
- self.dispose = None
- elif self.disposal_method == 2:
- # replace with background colour
-
- # only dispose the extent in this frame
- x0, y0, x1, y1 = self.dispose_extent
- dispose_size = (x1 - x0, y1 - y0)
-
- Image._decompression_bomb_check(dispose_size)
-
- # by convention, attempt to use transparency first
- dispose_mode = "P"
- color = self.info.get("transparency", frame_transparency)
- if color is not None:
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGBA"
- color = _rgb(color) + (0,)
- else:
- color = self.info.get("background", 0)
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGB"
- color = _rgb(color)
- self.dispose = Image.core.fill(dispose_mode, dispose_size, color)
- else:
- # replace with previous contents
- if self.im is not None:
- # only dispose the extent in this frame
- self.dispose = self._crop(self.im, self.dispose_extent)
- elif frame_transparency is not None:
- x0, y0, x1, y1 = self.dispose_extent
- dispose_size = (x1 - x0, y1 - y0)
-
- Image._decompression_bomb_check(dispose_size)
- dispose_mode = "P"
- color = frame_transparency
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGBA"
- color = _rgb(frame_transparency) + (0,)
- self.dispose = Image.core.fill(dispose_mode, dispose_size, color)
- except AttributeError:
- pass
-
- if interlace is not None:
- transparency = -1
- if frame_transparency is not None:
- if frame == 0:
- if LOADING_STRATEGY != LoadingStrategy.RGB_ALWAYS:
- self.info["transparency"] = frame_transparency
- elif self.mode not in ("RGB", "RGBA"):
- transparency = frame_transparency
- self.tile = [
- (
- "gif",
- (x0, y0, x1, y1),
- self.__offset,
- (bits, interlace, transparency),
- )
- ]
-
- if info.get("comment"):
- self.info["comment"] = info["comment"]
- for k in ["duration", "extension"]:
- if k in info:
- self.info[k] = info[k]
- elif k in self.info:
- del self.info[k]
-
- def load_prepare(self):
- temp_mode = "P" if self._frame_palette else "L"
- self._prev_im = None
- if self.__frame == 0:
- if self._frame_transparency is not None:
- self.im = Image.core.fill(
- temp_mode, self.size, self._frame_transparency
- )
- elif self.mode in ("RGB", "RGBA"):
- self._prev_im = self.im
- if self._frame_palette:
- self.im = Image.core.fill("P", self.size, self._frame_transparency or 0)
- self.im.putpalette(*self._frame_palette.getdata())
- else:
- self.im = None
- self.mode = temp_mode
- self._frame_palette = None
-
- super().load_prepare()
-
- def load_end(self):
- if self.__frame == 0:
- if self.mode == "P" and LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS:
- if self._frame_transparency is not None:
- self.im.putpalettealpha(self._frame_transparency, 0)
- self.mode = "RGBA"
- else:
- self.mode = "RGB"
- self.im = self.im.convert(self.mode, Image.Dither.FLOYDSTEINBERG)
- return
- if not self._prev_im:
- return
- if self._frame_transparency is not None:
- self.im.putpalettealpha(self._frame_transparency, 0)
- frame_im = self.im.convert("RGBA")
- else:
- frame_im = self.im.convert("RGB")
- frame_im = self._crop(frame_im, self.dispose_extent)
-
- self.im = self._prev_im
- self.mode = self.im.mode
- if frame_im.mode == "RGBA":
- self.im.paste(frame_im, self.dispose_extent, frame_im)
- else:
- self.im.paste(frame_im, self.dispose_extent)
-
- def tell(self):
- return self.__frame
-
-
-# --------------------------------------------------------------------
-# Write GIF files
-
-
-RAWMODE = {"1": "L", "L": "L", "P": "P"}
-
-
-def _normalize_mode(im):
- """
- Takes an image (or frame), returns an image in a mode that is appropriate
- for saving in a Gif.
-
- It may return the original image, or it may return an image converted to
- palette or 'L' mode.
-
- :param im: Image object
- :returns: Image object
- """
- if im.mode in RAWMODE:
- im.load()
- return im
- if Image.getmodebase(im.mode) == "RGB":
- im = im.convert("P", palette=Image.Palette.ADAPTIVE)
- if im.palette.mode == "RGBA":
- for rgba in im.palette.colors.keys():
- if rgba[3] == 0:
- im.info["transparency"] = im.palette.colors[rgba]
- break
- return im
- return im.convert("L")
-
-
-def _normalize_palette(im, palette, info):
- """
- Normalizes the palette for image.
- - Sets the palette to the incoming palette, if provided.
- - Ensures that there's a palette for L mode images
- - Optimizes the palette if necessary/desired.
-
- :param im: Image object
- :param palette: bytes object containing the source palette, or ....
- :param info: encoderinfo
- :returns: Image object
- """
- source_palette = None
- if palette:
- # a bytes palette
- if isinstance(palette, (bytes, bytearray, list)):
- source_palette = bytearray(palette[:768])
- if isinstance(palette, ImagePalette.ImagePalette):
- source_palette = bytearray(palette.palette)
-
- if im.mode == "P":
- if not source_palette:
- source_palette = im.im.getpalette("RGB")[:768]
- else: # L-mode
- if not source_palette:
- source_palette = bytearray(i // 3 for i in range(768))
- im.palette = ImagePalette.ImagePalette("RGB", palette=source_palette)
-
- if palette:
- used_palette_colors = []
- for i in range(0, len(source_palette), 3):
- source_color = tuple(source_palette[i : i + 3])
- index = im.palette.colors.get(source_color)
- if index in used_palette_colors:
- index = None
- used_palette_colors.append(index)
- for i, index in enumerate(used_palette_colors):
- if index is None:
- for j in range(len(used_palette_colors)):
- if j not in used_palette_colors:
- used_palette_colors[i] = j
- break
- im = im.remap_palette(used_palette_colors)
- else:
- used_palette_colors = _get_optimize(im, info)
- if used_palette_colors is not None:
- return im.remap_palette(used_palette_colors, source_palette)
-
- im.palette.palette = source_palette
- return im
-
-
-def _write_single_frame(im, fp, palette):
- im_out = _normalize_mode(im)
- for k, v in im_out.info.items():
- im.encoderinfo.setdefault(k, v)
- im_out = _normalize_palette(im_out, palette, im.encoderinfo)
-
- for s in _get_global_header(im_out, im.encoderinfo):
- fp.write(s)
-
- # local image header
- flags = 0
- if get_interlace(im):
- flags = flags | 64
- _write_local_header(fp, im, (0, 0), flags)
-
- im_out.encoderconfig = (8, get_interlace(im))
- ImageFile._save(im_out, fp, [("gif", (0, 0) + im.size, 0, RAWMODE[im_out.mode])])
-
- fp.write(b"\0") # end of image data
-
-
-def _write_multiple_frames(im, fp, palette):
-
- duration = im.encoderinfo.get("duration")
- disposal = im.encoderinfo.get("disposal", im.info.get("disposal"))
-
- im_frames = []
- frame_count = 0
- background_im = None
- for imSequence in itertools.chain([im], im.encoderinfo.get("append_images", [])):
- for im_frame in ImageSequence.Iterator(imSequence):
- # a copy is required here since seek can still mutate the image
- im_frame = _normalize_mode(im_frame.copy())
- if frame_count == 0:
- for k, v in im_frame.info.items():
- if k == "transparency":
- continue
- im.encoderinfo.setdefault(k, v)
-
- encoderinfo = im.encoderinfo.copy()
- im_frame = _normalize_palette(im_frame, palette, encoderinfo)
- if "transparency" in im_frame.info:
- encoderinfo.setdefault("transparency", im_frame.info["transparency"])
- if isinstance(duration, (list, tuple)):
- encoderinfo["duration"] = duration[frame_count]
- elif duration is None and "duration" in im_frame.info:
- encoderinfo["duration"] = im_frame.info["duration"]
- if isinstance(disposal, (list, tuple)):
- encoderinfo["disposal"] = disposal[frame_count]
- frame_count += 1
-
- if im_frames:
- # delta frame
- previous = im_frames[-1]
- if encoderinfo.get("disposal") == 2:
- if background_im is None:
- color = im.encoderinfo.get(
- "transparency", im.info.get("transparency", (0, 0, 0))
- )
- background = _get_background(im_frame, color)
- background_im = Image.new("P", im_frame.size, background)
- background_im.putpalette(im_frames[0]["im"].palette)
- base_im = background_im
- else:
- base_im = previous["im"]
- if _get_palette_bytes(im_frame) == _get_palette_bytes(base_im):
- delta = ImageChops.subtract_modulo(im_frame, base_im)
- else:
- delta = ImageChops.subtract_modulo(
- im_frame.convert("RGB"), base_im.convert("RGB")
- )
- bbox = delta.getbbox()
- if not bbox:
- # This frame is identical to the previous frame
- if duration:
- previous["encoderinfo"]["duration"] += encoderinfo["duration"]
- continue
- else:
- bbox = None
- im_frames.append({"im": im_frame, "bbox": bbox, "encoderinfo": encoderinfo})
-
- if len(im_frames) > 1:
- for frame_data in im_frames:
- im_frame = frame_data["im"]
- if not frame_data["bbox"]:
- # global header
- for s in _get_global_header(im_frame, frame_data["encoderinfo"]):
- fp.write(s)
- offset = (0, 0)
- else:
- # compress difference
- if not palette:
- frame_data["encoderinfo"]["include_color_table"] = True
-
- im_frame = im_frame.crop(frame_data["bbox"])
- offset = frame_data["bbox"][:2]
- _write_frame_data(fp, im_frame, offset, frame_data["encoderinfo"])
- return True
- elif "duration" in im.encoderinfo and isinstance(
- im.encoderinfo["duration"], (list, tuple)
- ):
- # Since multiple frames will not be written, add together the frame durations
- im.encoderinfo["duration"] = sum(im.encoderinfo["duration"])
-
-
-def _save_all(im, fp, filename):
- _save(im, fp, filename, save_all=True)
-
-
-def _save(im, fp, filename, save_all=False):
- # header
- if "palette" in im.encoderinfo or "palette" in im.info:
- palette = im.encoderinfo.get("palette", im.info.get("palette"))
- else:
- palette = None
- im.encoderinfo["optimize"] = im.encoderinfo.get("optimize", True)
-
- if not save_all or not _write_multiple_frames(im, fp, palette):
- _write_single_frame(im, fp, palette)
-
- fp.write(b";") # end of file
-
- if hasattr(fp, "flush"):
- fp.flush()
-
-
-def get_interlace(im):
- interlace = im.encoderinfo.get("interlace", 1)
-
- # workaround for @PIL153
- if min(im.size) < 16:
- interlace = 0
-
- return interlace
-
-
-def _write_local_header(fp, im, offset, flags):
- transparent_color_exists = False
- try:
- if "transparency" in im.encoderinfo:
- transparency = im.encoderinfo["transparency"]
- else:
- transparency = im.info["transparency"]
- transparency = int(transparency)
- except (KeyError, ValueError):
- pass
- else:
- # optimize the block away if transparent color is not used
- transparent_color_exists = True
-
- used_palette_colors = _get_optimize(im, im.encoderinfo)
- if used_palette_colors is not None:
- # adjust the transparency index after optimize
- try:
- transparency = used_palette_colors.index(transparency)
- except ValueError:
- transparent_color_exists = False
-
- if "duration" in im.encoderinfo:
- duration = int(im.encoderinfo["duration"] / 10)
- else:
- duration = 0
-
- disposal = int(im.encoderinfo.get("disposal", 0))
-
- if transparent_color_exists or duration != 0 or disposal:
- packed_flag = 1 if transparent_color_exists else 0
- packed_flag |= disposal << 2
- if not transparent_color_exists:
- transparency = 0
-
- fp.write(
- b"!"
- + o8(249) # extension intro
- + o8(4) # length
- + o8(packed_flag) # packed fields
- + o16(duration) # duration
- + o8(transparency) # transparency index
- + o8(0)
- )
-
- include_color_table = im.encoderinfo.get("include_color_table")
- if include_color_table:
- palette_bytes = _get_palette_bytes(im)
- color_table_size = _get_color_table_size(palette_bytes)
- if color_table_size:
- flags = flags | 128 # local color table flag
- flags = flags | color_table_size
-
- fp.write(
- b","
- + o16(offset[0]) # offset
- + o16(offset[1])
- + o16(im.size[0]) # size
- + o16(im.size[1])
- + o8(flags) # flags
- )
- if include_color_table and color_table_size:
- fp.write(_get_header_palette(palette_bytes))
- fp.write(o8(8)) # bits
-
-
-def _save_netpbm(im, fp, filename):
-
- # Unused by default.
- # To use, uncomment the register_save call at the end of the file.
- #
- # If you need real GIF compression and/or RGB quantization, you
- # can use the external NETPBM/PBMPLUS utilities. See comments
- # below for information on how to enable this.
- tempfile = im._dump()
-
- try:
- with open(filename, "wb") as f:
- if im.mode != "RGB":
- subprocess.check_call(
- ["ppmtogif", tempfile], stdout=f, stderr=subprocess.DEVNULL
- )
- else:
- # Pipe ppmquant output into ppmtogif
- # "ppmquant 256 %s | ppmtogif > %s" % (tempfile, filename)
- quant_cmd = ["ppmquant", "256", tempfile]
- togif_cmd = ["ppmtogif"]
- quant_proc = subprocess.Popen(
- quant_cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL
- )
- togif_proc = subprocess.Popen(
- togif_cmd,
- stdin=quant_proc.stdout,
- stdout=f,
- stderr=subprocess.DEVNULL,
- )
-
- # Allow ppmquant to receive SIGPIPE if ppmtogif exits
- quant_proc.stdout.close()
-
- retcode = quant_proc.wait()
- if retcode:
- raise subprocess.CalledProcessError(retcode, quant_cmd)
-
- retcode = togif_proc.wait()
- if retcode:
- raise subprocess.CalledProcessError(retcode, togif_cmd)
- finally:
- try:
- os.unlink(tempfile)
- except OSError:
- pass
-
-
-# Force optimization so that we can test performance against
-# cases where it took lots of memory and time previously.
-_FORCE_OPTIMIZE = False
-
-
-def _get_optimize(im, info):
- """
- Palette optimization is a potentially expensive operation.
-
- This function determines if the palette should be optimized using
- some heuristics, then returns the list of palette entries in use.
-
- :param im: Image object
- :param info: encoderinfo
- :returns: list of indexes of palette entries in use, or None
- """
- if im.mode in ("P", "L") and info and info.get("optimize", 0):
- # Potentially expensive operation.
-
- # The palette saves 3 bytes per color not used, but palette
- # lengths are restricted to 3*(2**N) bytes. Max saving would
- # be 768 -> 6 bytes if we went all the way down to 2 colors.
- # * If we're over 128 colors, we can't save any space.
- # * If there aren't any holes, it's not worth collapsing.
- # * If we have a 'large' image, the palette is in the noise.
-
- # create the new palette if not every color is used
- optimise = _FORCE_OPTIMIZE or im.mode == "L"
- if optimise or im.width * im.height < 512 * 512:
- # check which colors are used
- used_palette_colors = []
- for i, count in enumerate(im.histogram()):
- if count:
- used_palette_colors.append(i)
-
- if optimise or max(used_palette_colors) >= len(used_palette_colors):
- return used_palette_colors
-
- num_palette_colors = len(im.palette.palette) // Image.getmodebands(
- im.palette.mode
- )
- current_palette_size = 1 << (num_palette_colors - 1).bit_length()
- if (
- # check that the palette would become smaller when saved
- len(used_palette_colors) <= current_palette_size // 2
- # check that the palette is not already the smallest possible size
- and current_palette_size > 2
- ):
- return used_palette_colors
-
-
-def _get_color_table_size(palette_bytes):
- # calculate the palette size for the header
- if not palette_bytes:
- return 0
- elif len(palette_bytes) < 9:
- return 1
- else:
- return math.ceil(math.log(len(palette_bytes) // 3, 2)) - 1
-
-
-def _get_header_palette(palette_bytes):
- """
- Returns the palette, null padded to the next power of 2 (*3) bytes
- suitable for direct inclusion in the GIF header
-
- :param palette_bytes: Unpadded palette bytes, in RGBRGB form
- :returns: Null padded palette
- """
- color_table_size = _get_color_table_size(palette_bytes)
-
- # add the missing amount of bytes
- # the palette has to be 2<<n bytes long
- actual_target_size_diff = (2 << color_table_size) - len(palette_bytes) // 3
- if actual_target_size_diff > 0:
- palette_bytes += o8(0) * 3 * actual_target_size_diff
- return palette_bytes
-
-
-def _get_palette_bytes(im):
- """
- Gets the palette for inclusion in the gif header
-
- :param im: Image object
- :returns: Bytes, len<=768 suitable for inclusion in gif header
- """
- return im.palette.palette
-
-
-def _get_background(im, info_background):
- background = 0
- if info_background:
- background = info_background
- if isinstance(background, tuple):
- # WebPImagePlugin stores an RGBA value in info["background"]
- # So it must be converted to the same format as GifImagePlugin's
- # info["background"] - a global color table index
- try:
- background = im.palette.getcolor(background, im)
- except ValueError as e:
- if str(e) == "cannot allocate more than 256 colors":
- # If all 256 colors are in use,
- # then there is no need for the background color
- return 0
- else:
- raise
- return background
-
-
-def _get_global_header(im, info):
- """Return a list of strings representing a GIF header"""
-
- # Header Block
- # https://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp
-
- version = b"87a"
- if im.info.get("version") == b"89a" or (
- info
- and (
- "transparency" in info
- or "loop" in info
- or info.get("duration")
- or info.get("comment")
- )
- ):
- version = b"89a"
-
- background = _get_background(im, info.get("background"))
-
- palette_bytes = _get_palette_bytes(im)
- color_table_size = _get_color_table_size(palette_bytes)
-
- header = [
- b"GIF" # signature
- + version # version
- + o16(im.size[0]) # canvas width
- + o16(im.size[1]), # canvas height
- # Logical Screen Descriptor
- # size of global color table + global color table flag
- o8(color_table_size + 128), # packed fields
- # background + reserved/aspect
- o8(background) + o8(0),
- # Global Color Table
- _get_header_palette(palette_bytes),
- ]
- if "loop" in info:
- header.append(
- b"!"
- + o8(255) # extension intro
- + o8(11)
- + b"NETSCAPE2.0"
- + o8(3)
- + o8(1)
- + o16(info["loop"]) # number of loops
- + o8(0)
- )
- if info.get("comment"):
- comment_block = b"!" + o8(254) # extension intro
-
- comment = info["comment"]
- if isinstance(comment, str):
- comment = comment.encode()
- for i in range(0, len(comment), 255):
- subblock = comment[i : i + 255]
- comment_block += o8(len(subblock)) + subblock
-
- comment_block += o8(0)
- header.append(comment_block)
- return header
-
-
-def _write_frame_data(fp, im_frame, offset, params):
- try:
- im_frame.encoderinfo = params
-
- # local image header
- _write_local_header(fp, im_frame, offset, 0)
-
- ImageFile._save(
- im_frame, fp, [("gif", (0, 0) + im_frame.size, 0, RAWMODE[im_frame.mode])]
- )
-
- fp.write(b"\0") # end of image data
- finally:
- del im_frame.encoderinfo
-
-
-# --------------------------------------------------------------------
-# Legacy GIF utilities
-
-
-def getheader(im, palette=None, info=None):
- """
- Legacy Method to get Gif data from image.
-
- Warning: May modify image data.
-
- :param im: Image object
- :param palette: bytes object containing the source palette, or ....
- :param info: encoderinfo
- :returns: tuple of (list of header items, optimized palette)
-
- """
- used_palette_colors = _get_optimize(im, info)
-
- if info is None:
- info = {}
-
- if "background" not in info and "background" in im.info:
- info["background"] = im.info["background"]
-
- im_mod = _normalize_palette(im, palette, info)
- im.palette = im_mod.palette
- im.im = im_mod.im
- header = _get_global_header(im, info)
-
- return header, used_palette_colors
-
-
-def getdata(im, offset=(0, 0), **params):
- """
- Legacy Method
-
- Return a list of strings representing this image.
- The first string is a local image header, the rest contains
- encoded image data.
-
- To specify duration, add the time in milliseconds,
- e.g. ``getdata(im_frame, duration=1000)``
-
- :param im: Image object
- :param offset: Tuple of (x, y) pixels. Defaults to (0, 0)
- :param \\**params: e.g. duration or other encoder info parameters
- :returns: List of bytes containing GIF encoded frame data
-
- """
-
- class Collector:
- data = []
-
- def write(self, data):
- self.data.append(data)
-
- im.load() # make sure raster data is available
-
- fp = Collector()
-
- _write_frame_data(fp, im, offset, params)
-
- return fp.data
-
-
-# --------------------------------------------------------------------
-# Registry
-
-Image.register_open(GifImageFile.format, GifImageFile, _accept)
-Image.register_save(GifImageFile.format, _save)
-Image.register_save_all(GifImageFile.format, _save_all)
-Image.register_extension(GifImageFile.format, ".gif")
-Image.register_mime(GifImageFile.format, "image/gif")
-
-#
-# Uncomment the following line if you wish to use NETPBM/PBMPLUS
-# instead of the built-in "uncompressed" GIF encoder
-
-# Image.register_save(GifImageFile.format, _save_netpbm)
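The `_get_color_table_size` and `_get_header_palette` helpers above map a raw RGBRGB palette to the GIF packed-field size code and the null-padded color table. A standalone sketch of that logic (an illustrative reimplementation with mirrored names, not Pillow itself):

```python
import math

def o8(i):
    # one byte (mirrors PIL's _binary.o8 helper)
    return bytes((i & 255,))

def get_color_table_size(palette_bytes):
    # GIF stores the table size as N, where the table holds 2 << N entries
    if not palette_bytes:
        return 0
    elif len(palette_bytes) < 9:
        return 1
    return math.ceil(math.log(len(palette_bytes) // 3, 2)) - 1

def get_header_palette(palette_bytes):
    # null-pad with RGB triplets up to the next power-of-two entry count
    size_code = get_color_table_size(palette_bytes)
    missing = (2 << size_code) - len(palette_bytes) // 3
    if missing > 0:
        palette_bytes += o8(0) * 3 * missing
    return palette_bytes

# a 5-color palette (15 bytes) gets size code 2 and is padded to 8 entries
palette = bytes(range(15))
assert get_color_table_size(palette) == 2
assert len(get_header_palette(palette)) == 8 * 3
```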
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/tacotron2_loss.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/tacotron2_loss.py
deleted file mode 100644
index d3af9762a779bb4a24de41121fa51b1483374938..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/criterions/tacotron2_loss.py
+++ /dev/null
@@ -1,226 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-import logging
-from dataclasses import dataclass, field
-from functools import lru_cache
-from typing import Any, Dict, List
-
-import torch
-import torch.nn.functional as F
-from omegaconf import II
-
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.data.data_utils import lengths_to_mask
-from fairseq.dataclass import FairseqDataclass
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class Tacotron2CriterionConfig(FairseqDataclass):
- bce_pos_weight: float = field(
- default=1.0,
- metadata={"help": "weight of positive examples for BCE loss"},
- )
- use_guided_attention_loss: bool = field(
- default=False,
- metadata={"help": "use guided attention loss"},
- )
- guided_attention_loss_sigma: float = field(
- default=0.4,
- metadata={"help": "weight of positive examples for BCE loss"},
- )
- ctc_weight: float = field(default=0.0, metadata={"help": "weight for CTC loss"})
- sentence_avg: bool = II("optimization.sentence_avg")
-
-
-class GuidedAttentionLoss(torch.nn.Module):
- """
- Efficiently Trainable Text-to-Speech System Based on Deep Convolutional
- Networks with Guided Attention (https://arxiv.org/abs/1710.08969)
- """
-
- def __init__(self, sigma):
- super().__init__()
- self.sigma = sigma
-
- @staticmethod
- @lru_cache(maxsize=8)
- def _get_weight(s_len, t_len, sigma):
- grid_x, grid_y = torch.meshgrid(torch.arange(t_len), torch.arange(s_len))
- grid_x = grid_x.to(s_len.device)
- grid_y = grid_y.to(s_len.device)
- w = (grid_y.float() / s_len - grid_x.float() / t_len) ** 2
- return 1.0 - torch.exp(-w / (2 * (sigma**2)))
-
- def _get_weights(self, src_lens, tgt_lens):
- bsz, max_s_len, max_t_len = len(src_lens), max(src_lens), max(tgt_lens)
- weights = torch.zeros((bsz, max_t_len, max_s_len))
- for i, (s_len, t_len) in enumerate(zip(src_lens, tgt_lens)):
- weights[i, :t_len, :s_len] = self._get_weight(s_len, t_len, self.sigma)
- return weights
-
- @staticmethod
- def _get_masks(src_lens, tgt_lens):
- in_masks = lengths_to_mask(src_lens)
- out_masks = lengths_to_mask(tgt_lens)
- return out_masks.unsqueeze(2) & in_masks.unsqueeze(1)
-
- def forward(self, attn, src_lens, tgt_lens, reduction="mean"):
- weights = self._get_weights(src_lens, tgt_lens).to(attn.device)
- masks = self._get_masks(src_lens, tgt_lens).to(attn.device)
- loss = (weights * attn.transpose(1, 2)).masked_select(masks)
- loss = torch.sum(loss) if reduction == "sum" else torch.mean(loss)
- return loss
-
-
-@register_criterion("tacotron2", dataclass=Tacotron2CriterionConfig)
-class Tacotron2Criterion(FairseqCriterion):
- def __init__(
- self,
- task,
- sentence_avg,
- use_guided_attention_loss,
- guided_attention_loss_sigma,
- bce_pos_weight,
- ctc_weight,
- ):
- super().__init__(task)
- self.sentence_avg = sentence_avg
- self.bce_pos_weight = bce_pos_weight
-
- self.guided_attn = None
- if use_guided_attention_loss:
- self.guided_attn = GuidedAttentionLoss(guided_attention_loss_sigma)
- self.ctc_weight = ctc_weight
-
- def forward(self, model, sample, reduction="mean"):
- bsz, max_len, _ = sample["target"].size()
- feat_tgt = sample["target"]
- feat_len = sample["target_lengths"].view(bsz, 1).expand(-1, max_len)
- eos_tgt = torch.arange(max_len).to(sample["target"].device)
- eos_tgt = eos_tgt.view(1, max_len).expand(bsz, -1)
- eos_tgt = (eos_tgt == (feat_len - 1)).float()
- src_tokens = sample["net_input"]["src_tokens"]
- src_lens = sample["net_input"]["src_lengths"]
- tgt_lens = sample["target_lengths"]
-
- feat_out, eos_out, extra = model(
- src_tokens=src_tokens,
- src_lengths=src_lens,
- prev_output_tokens=sample["net_input"]["prev_output_tokens"],
- incremental_state=None,
- target_lengths=tgt_lens,
- speaker=sample["speaker"],
- )
-
- l1_loss, mse_loss, eos_loss = self.compute_loss(
- extra["feature_out"],
- feat_out,
- eos_out,
- feat_tgt,
- eos_tgt,
- tgt_lens,
- reduction,
- )
- attn_loss = torch.tensor(0.0).type_as(l1_loss)
- if self.guided_attn is not None:
- attn_loss = self.guided_attn(extra["attn"], src_lens, tgt_lens, reduction)
- ctc_loss = torch.tensor(0.0).type_as(l1_loss)
- if self.ctc_weight > 0.0:
- net_output = (feat_out, eos_out, extra)
- lprobs = model.get_normalized_probs(net_output, log_probs=True)
- lprobs = lprobs.transpose(0, 1) # T x B x C
- src_mask = lengths_to_mask(src_lens)
- src_tokens_flat = src_tokens.masked_select(src_mask)
- ctc_loss = (
- F.ctc_loss(
- lprobs,
- src_tokens_flat,
- tgt_lens,
- src_lens,
- reduction=reduction,
- zero_infinity=True,
- )
- * self.ctc_weight
- )
- loss = l1_loss + mse_loss + eos_loss + attn_loss + ctc_loss
-
- sample_size = sample["nsentences"] if self.sentence_avg else sample["ntokens"]
- logging_output = {
- "loss": utils.item(loss.data),
- "ntokens": sample["ntokens"],
- "nsentences": sample["nsentences"],
- "sample_size": sample_size,
- "l1_loss": utils.item(l1_loss.data),
- "mse_loss": utils.item(mse_loss.data),
- "eos_loss": utils.item(eos_loss.data),
- "attn_loss": utils.item(attn_loss.data),
- "ctc_loss": utils.item(ctc_loss.data),
- }
- return loss, sample_size, logging_output
-
- def compute_loss(
- self,
- feat_out,
- feat_out_post,
- eos_out,
- feat_tgt,
- eos_tgt,
- tgt_lens,
- reduction="mean",
- ):
- mask = lengths_to_mask(tgt_lens)
- _eos_out = eos_out[mask].squeeze()
- _eos_tgt = eos_tgt[mask]
- _feat_tgt = feat_tgt[mask]
- _feat_out = feat_out[mask]
- _feat_out_post = feat_out_post[mask]
-
- l1_loss = F.l1_loss(_feat_out, _feat_tgt, reduction=reduction) + F.l1_loss(
- _feat_out_post, _feat_tgt, reduction=reduction
- )
- mse_loss = F.mse_loss(_feat_out, _feat_tgt, reduction=reduction) + F.mse_loss(
- _feat_out_post, _feat_tgt, reduction=reduction
- )
- eos_loss = F.binary_cross_entropy_with_logits(
- _eos_out,
- _eos_tgt,
- pos_weight=torch.tensor(self.bce_pos_weight),
- reduction=reduction,
- )
- return l1_loss, mse_loss, eos_loss
-
- @classmethod
- def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None:
- ns = [log.get("sample_size", 0) for log in logging_outputs]
- ntot = sum(ns)
- ws = [n / (ntot + 1e-8) for n in ns]
- for key in ["loss", "l1_loss", "mse_loss", "eos_loss", "attn_loss", "ctc_loss"]:
- vals = [log.get(key, 0) for log in logging_outputs]
- val = sum(val * w for val, w in zip(vals, ws))
- metrics.log_scalar(key, val, ntot, round=3)
- metrics.log_scalar("sample_size", ntot, len(logging_outputs))
-
- # inference metrics
- if "targ_frames" not in logging_outputs[0]:
- return
- n = sum(log.get("targ_frames", 0) for log in logging_outputs)
- for key, new_key in [
- ("mcd_loss", "mcd_loss"),
- ("pred_frames", "pred_ratio"),
- ("nins", "ins_rate"),
- ("ndel", "del_rate"),
- ]:
- val = sum(log.get(key, 0) for log in logging_outputs)
- metrics.log_scalar(new_key, val / n, n, round=3)
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- return False
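The guided-attention weight computed in `_get_weight` above penalizes attention mass far from the diagonal: W[t, s] = 1 − exp(−(s/S − t/T)² / 2σ²). A dependency-free sketch of the same formula (illustrative only, not the fairseq implementation, which vectorizes it with `torch.meshgrid`):

```python
import math

def guided_attention_weight(s_len, t_len, sigma=0.4):
    # weight is ~0 near the diagonal s/S == t/T and approaches 1 far from it
    w = [[0.0] * s_len for _ in range(t_len)]
    for t in range(t_len):
        for s in range(s_len):
            d = (s / s_len - t / t_len) ** 2
            w[t][s] = 1.0 - math.exp(-d / (2 * sigma ** 2))
    return w

w = guided_attention_weight(10, 10)
# diagonal entries carry (near-)zero penalty, off-diagonal corners the largest
assert w[0][0] == 0.0
assert w[9][0] > w[5][4]
```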
diff --git a/spaces/asafAdge/Detic/tools/dump_clip_features.py b/spaces/asafAdge/Detic/tools/dump_clip_features.py
deleted file mode 100644
index 127f8c2a86c2425611c8ec075006664f5e07df45..0000000000000000000000000000000000000000
--- a/spaces/asafAdge/Detic/tools/dump_clip_features.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import argparse
-import json
-import torch
-import numpy as np
-import itertools
-from nltk.corpus import wordnet
-import sys
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--ann', default='datasets/lvis/lvis_v1_val.json')
- parser.add_argument('--out_path', default='')
- parser.add_argument('--prompt', default='a')
- parser.add_argument('--model', default='clip')
- parser.add_argument('--clip_model', default="ViT-B/32")
- parser.add_argument('--fix_space', action='store_true')
- parser.add_argument('--use_underscore', action='store_true')
- parser.add_argument('--avg_synonyms', action='store_true')
- parser.add_argument('--use_wn_name', action='store_true')
- args = parser.parse_args()
-
- print('Loading', args.ann)
- data = json.load(open(args.ann, 'r'))
- cat_names = [x['name'] for x in \
- sorted(data['categories'], key=lambda x: x['id'])]
- if 'synonyms' in data['categories'][0]:
- if args.use_wn_name:
- synonyms = [
- [xx.name() for xx in wordnet.synset(x['synset']).lemmas()] \
- if x['synset'] != 'stop_sign.n.01' else ['stop_sign'] \
- for x in sorted(data['categories'], key=lambda x: x['id'])]
- else:
- synonyms = [x['synonyms'] for x in \
- sorted(data['categories'], key=lambda x: x['id'])]
- else:
- synonyms = []
- if args.fix_space:
- cat_names = [x.replace('_', ' ') for x in cat_names]
- if args.use_underscore:
- cat_names = [x.strip().replace('/ ', '/').replace(' ', '_') for x in cat_names]
- print('cat_names', cat_names)
- device = "cuda" if torch.cuda.is_available() else "cpu"
-
- if args.prompt == 'a':
- sentences = ['a ' + x for x in cat_names]
- sentences_synonyms = [['a ' + xx for xx in x] for x in synonyms]
- if args.prompt == 'none':
- sentences = [x for x in cat_names]
- sentences_synonyms = [[xx for xx in x] for x in synonyms]
- elif args.prompt == 'photo':
- sentences = ['a photo of a {}'.format(x) for x in cat_names]
- sentences_synonyms = [['a photo of a {}'.format(xx) for xx in x] \
- for x in synonyms]
- elif args.prompt == 'scene':
- sentences = ['a photo of a {} in the scene'.format(x) for x in cat_names]
- sentences_synonyms = [['a photo of a {} in the scene'.format(xx) for xx in x] \
- for x in synonyms]
-
- print('sentences_synonyms', len(sentences_synonyms), \
- sum(len(x) for x in sentences_synonyms))
- if args.model == 'clip':
- import clip
- print('Loading CLIP')
- model, preprocess = clip.load(args.clip_model, device=device)
- if args.avg_synonyms:
- sentences = list(itertools.chain.from_iterable(sentences_synonyms))
- print('flattened_sentences', len(sentences))
- text = clip.tokenize(sentences).to(device)
- with torch.no_grad():
- if len(text) > 10000:
- text_features = torch.cat([
- model.encode_text(text[:len(text) // 2]),
- model.encode_text(text[len(text) // 2:])],
- dim=0)
- else:
- text_features = model.encode_text(text)
- print('text_features.shape', text_features.shape)
- if args.avg_synonyms:
- synonyms_per_cat = [len(x) for x in sentences_synonyms]
- text_features = text_features.split(synonyms_per_cat, dim=0)
- text_features = [x.mean(dim=0) for x in text_features]
- text_features = torch.stack(text_features, dim=0)
- print('after stack', text_features.shape)
- text_features = text_features.cpu().numpy()
- elif args.model in ['bert', 'roberta']:
- from transformers import AutoTokenizer, AutoModel
- if args.model == 'bert':
- model_name = 'bert-large-uncased'
- if args.model == 'roberta':
- model_name = 'roberta-large'
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModel.from_pretrained(model_name)
- model.eval()
- if args.avg_synonyms:
- sentences = list(itertools.chain.from_iterable(sentences_synonyms))
- print('flattened_sentences', len(sentences))
- inputs = tokenizer(sentences, padding=True, return_tensors="pt")
- with torch.no_grad():
- model_outputs = model(**inputs)
- outputs = model_outputs.pooler_output
- text_features = outputs.detach().cpu()
- if args.avg_synonyms:
- synonyms_per_cat = [len(x) for x in sentences_synonyms]
- text_features = text_features.split(synonyms_per_cat, dim=0)
- text_features = [x.mean(dim=0) for x in text_features]
- text_features = torch.stack(text_features, dim=0)
- print('after stack', text_features.shape)
- text_features = text_features.numpy()
- print('text_features.shape', text_features.shape)
- else:
- assert 0, args.model
- if args.out_path != '':
- print('saving to', args.out_path)
- np.save(open(args.out_path, 'wb'), text_features)
- import pdb; pdb.set_trace()
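With `--avg_synonyms`, the script above encodes one flat list of sentences and then folds the per-synonym embeddings back into a single vector per category via `split` + `mean`. The same regrouping can be sketched without torch (hypothetical toy data, just to show the bookkeeping):

```python
def average_per_category(features, synonyms_per_cat):
    # features: flat list of vectors, grouped per category by consecutive counts
    out, start = [], 0
    for n in synonyms_per_cat:
        group = features[start:start + n]
        dim = len(group[0])
        out.append([sum(v[i] for v in group) / n for i in range(dim)])
        start += n
    return out

# two categories: the first has two synonym embeddings, the second one
flat = [[1.0, 0.0], [3.0, 2.0], [5.0, 5.0]]
avg = average_per_category(flat, [2, 1])
assert avg == [[2.0, 1.0], [5.0, 5.0]]
```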
diff --git a/spaces/atimughal662/InfoFusion/src/gpt4all_llm.py b/spaces/atimughal662/InfoFusion/src/gpt4all_llm.py
deleted file mode 100644
index 5f973d42a7775d7f3e5a9c27e725429ca6d607e1..0000000000000000000000000000000000000000
--- a/spaces/atimughal662/InfoFusion/src/gpt4all_llm.py
+++ /dev/null
@@ -1,403 +0,0 @@
-import inspect
-import os
-from typing import Dict, Any, Optional, List, Iterator
-from langchain.callbacks.manager import CallbackManagerForLLMRun
-from langchain.schema.output import GenerationChunk
-from pydantic import root_validator
-from langchain.llms import gpt4all
-
-from utils import FakeTokenizer, get_ngpus_vis, url_alive, download_simple
-
-
-def get_model_tokenizer_gpt4all(base_model, n_jobs=None, max_seq_len=None, llamacpp_dict=None):
- assert llamacpp_dict is not None
- # defaults (some of these are generation parameters, so need to be passed in at generation time)
- model_name = base_model.lower()
- model = get_llm_gpt4all(model_name, model=None,
- # max_new_tokens=max_new_tokens,
- # temperature=temperature,
- # repetition_penalty=repetition_penalty,
- # top_k=top_k,
- # top_p=top_p,
- # callbacks=callbacks,
- n_jobs=n_jobs,
- # verbose=verbose,
- # streaming=stream_output,
- # prompter=prompter,
- # context=context,
- # iinput=iinput,
- inner_class=True,
- max_seq_len=max_seq_len,
- llamacpp_dict=llamacpp_dict,
- )
- return model, FakeTokenizer(), 'cpu'
-
-
-from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
-
-
-class H2OStreamingStdOutCallbackHandler(StreamingStdOutCallbackHandler):
-
- def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
- """Run on new LLM token. Only available when streaming is enabled."""
- # streaming to std already occurs without this
- # sys.stdout.write(token)
- # sys.stdout.flush()
- pass
-
-
-def get_model_kwargs(llamacpp_dict, default_kwargs, cls, exclude_list=[]):
- # default from class
- model_kwargs = {k: v.default for k, v in dict(inspect.signature(cls).parameters).items() if k not in exclude_list}
- # from our defaults
- model_kwargs.update(default_kwargs)
- # from user defaults
- model_kwargs.update(llamacpp_dict)
- # ensure only valid keys
- func_names = list(inspect.signature(cls).parameters)
- model_kwargs = {k: v for k, v in model_kwargs.items() if k in func_names}
- # make int or float if can to satisfy types for class
- for k, v in model_kwargs.items():
- try:
- if float(v) == int(v):
- model_kwargs[k] = int(v)
- else:
- model_kwargs[k] = float(v)
- except:
- pass
- return model_kwargs
-
-
-def get_gpt4all_default_kwargs(max_new_tokens=256,
- temperature=0.1,
- repetition_penalty=1.0,
- top_k=40,
- top_p=0.7,
- n_jobs=None,
- verbose=False,
- max_seq_len=None,
- ):
- if n_jobs in [None, -1]:
- n_jobs = int(os.getenv('OMP_NUM_THREADS', str(os.cpu_count()//2)))
- n_jobs = max(1, min(20, n_jobs)) # hurts beyond some point
- n_gpus = get_ngpus_vis()
- default_kwargs = dict(context_erase=0.5,
- n_batch=1,
- max_tokens=max_seq_len - max_new_tokens,
- n_predict=max_new_tokens,
- repeat_last_n=64 if repetition_penalty != 1.0 else 0,
- repeat_penalty=repetition_penalty,
- temp=temperature,
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- use_mlock=True,
- n_ctx=max_seq_len,
- n_threads=n_jobs,
- verbose=verbose)
- if n_gpus != 0:
- default_kwargs.update(dict(n_gpu_layers=100))
- return default_kwargs
-
-
-def get_llm_gpt4all(model_name,
- model=None,
- max_new_tokens=256,
- temperature=0.1,
- repetition_penalty=1.0,
- top_k=40,
- top_p=0.7,
- streaming=False,
- callbacks=None,
- prompter=None,
- context='',
- iinput='',
- n_jobs=None,
- verbose=False,
- inner_class=False,
- max_seq_len=None,
- llamacpp_dict=None,
- ):
- if not inner_class:
- assert prompter is not None
-
- default_kwargs = \
- get_gpt4all_default_kwargs(max_new_tokens=max_new_tokens,
- temperature=temperature,
- repetition_penalty=repetition_penalty,
- top_k=top_k,
- top_p=top_p,
- n_jobs=n_jobs,
- verbose=verbose,
- max_seq_len=max_seq_len,
- )
- if model_name == 'llama':
- cls = H2OLlamaCpp
- if model is None:
- llamacpp_dict = llamacpp_dict.copy()
- model_path = llamacpp_dict.pop('model_path_llama')
- if os.path.isfile(os.path.basename(model_path)):
- # e.g. if offline but previously downloaded
- model_path = os.path.basename(model_path)
- elif url_alive(model_path):
- # online
- ggml_path = os.getenv('GGML_PATH')
- dest = os.path.join(ggml_path, os.path.basename(model_path)) if ggml_path else None
- model_path = download_simple(model_path, dest=dest)
- else:
- model_path = model
- model_kwargs = get_model_kwargs(llamacpp_dict, default_kwargs, cls, exclude_list=['lc_kwargs'])
- model_kwargs.update(dict(model_path=model_path, callbacks=callbacks, streaming=streaming,
- prompter=prompter, context=context, iinput=iinput))
-
- # migration to new langchain fix:
- odd_keys = ['model_kwargs', 'grammar_path', 'grammar']
- for key in odd_keys:
- model_kwargs.pop(key, None)
-
- llm = cls(**model_kwargs)
- llm.client.verbose = verbose
- inner_model = llm.client
- elif model_name == 'gpt4all_llama':
- cls = H2OGPT4All
- if model is None:
- llamacpp_dict = llamacpp_dict.copy()
- model_path = llamacpp_dict.pop('model_name_gpt4all_llama')
- if url_alive(model_path):
- # online
- ggml_path = os.getenv('GGML_PATH')
- dest = os.path.join(ggml_path, os.path.basename(model_path)) if ggml_path else None
- model_path = download_simple(model_path, dest=dest)
- else:
- model_path = model
- model_kwargs = get_model_kwargs(llamacpp_dict, default_kwargs, cls, exclude_list=['lc_kwargs'])
- model_kwargs.update(
- dict(model=model_path, backend='llama', callbacks=callbacks, streaming=streaming,
- prompter=prompter, context=context, iinput=iinput))
- llm = cls(**model_kwargs)
- inner_model = llm.client
- elif model_name == 'gptj':
- cls = H2OGPT4All
- if model is None:
- llamacpp_dict = llamacpp_dict.copy()
- model_path = llamacpp_dict.pop('model_name_gptj') if model is None else model
- if url_alive(model_path):
- ggml_path = os.getenv('GGML_PATH')
- dest = os.path.join(ggml_path, os.path.basename(model_path)) if ggml_path else None
- model_path = download_simple(model_path, dest=dest)
- else:
- model_path = model
- model_kwargs = get_model_kwargs(llamacpp_dict, default_kwargs, cls, exclude_list=['lc_kwargs'])
- model_kwargs.update(
- dict(model=model_path, backend='gptj', callbacks=callbacks, streaming=streaming,
- prompter=prompter, context=context, iinput=iinput))
- llm = cls(**model_kwargs)
- inner_model = llm.client
- else:
- raise RuntimeError("No such model_name %s" % model_name)
- if inner_class:
- return inner_model
- else:
- return llm
-
-
-class H2OGPT4All(gpt4all.GPT4All):
- model: Any
- prompter: Any
- context: Any = ''
- iinput: Any = ''
- """Path to the pre-trained GPT4All model file."""
-
- @root_validator()
- def validate_environment(cls, values: Dict) -> Dict:
- """Validate that the python package exists in the environment."""
- try:
- if isinstance(values["model"], str):
- from gpt4all import GPT4All as GPT4AllModel
-
- full_path = values["model"]
- model_path, delimiter, model_name = full_path.rpartition("/")
- model_path += delimiter
-
- values["client"] = GPT4AllModel(
- model_name=model_name,
- model_path=model_path or None,
- model_type=values["backend"],
- allow_download=True,
- )
- if values["n_threads"] is not None:
- # set n_threads
- values["client"].model.set_thread_count(values["n_threads"])
- else:
- values["client"] = values["model"]
- if values["n_threads"] is not None:
- # set n_threads
- values["client"].model.set_thread_count(values["n_threads"])
- try:
- values["backend"] = values["client"].model_type
- except AttributeError:
- # The below is for compatibility with GPT4All Python bindings <= 0.2.3.
- values["backend"] = values["client"].model.model_type
-
- except ImportError:
- raise ValueError(
- "Could not import gpt4all python package. "
- "Please install it with `pip install gpt4all`."
- )
- return values
-
- def _call(
- self,
- prompt: str,
- stop: Optional[List[str]] = None,
- run_manager: Optional[CallbackManagerForLLMRun] = None,
- **kwargs,
- ) -> str:
- # Roughly 4 chars per token if natural language
- n_ctx = 2048
- prompt = prompt[-self.max_tokens * 4:]
-
- # use instruct prompting
- data_point = dict(context=self.context, instruction=prompt, input=self.iinput)
- prompt = self.prompter.generate_prompt(data_point)
-
- verbose = False
- if verbose:
- print("_call prompt: %s" % prompt, flush=True)
- # FIXME: GPT4ALl doesn't support yield during generate, so cannot support streaming except via itself to stdout
- return super()._call(prompt, stop=stop, run_manager=run_manager)
-
- # FIXME: Unsure what uses
- #def get_token_ids(self, text: str) -> List[int]:
- # return self.client.tokenize(b" " + text.encode("utf-8"))
-
-
-from langchain.llms import LlamaCpp
-
-
-class H2OLlamaCpp(LlamaCpp):
- model_path: Any
- prompter: Any
- context: Any
- iinput: Any
- """Path to the pre-trained GPT4All model file."""
-
- @root_validator()
- def validate_environment(cls, values: Dict) -> Dict:
- """Validate that llama-cpp-python library is installed."""
- if isinstance(values["model_path"], str):
- model_path = values["model_path"]
- model_param_names = [
- "lora_path",
- "lora_base",
- "n_ctx",
- "n_parts",
- "seed",
- "f16_kv",
- "logits_all",
- "vocab_only",
- "use_mlock",
- "n_threads",
- "n_batch",
- "use_mmap",
- "last_n_tokens_size",
- ]
- model_params = {k: values[k] for k in model_param_names}
- # For backwards compatibility, only include if non-null.
- if values["n_gpu_layers"] is not None:
- model_params["n_gpu_layers"] = values["n_gpu_layers"]
-
- try:
- try:
- from llama_cpp import Llama
- except ImportError:
- from llama_cpp_cuda import Llama
-
- values["client"] = Llama(model_path, **model_params)
- except ImportError:
- raise ModuleNotFoundError(
- "Could not import llama-cpp-python library. "
- "Please install the llama-cpp-python library to "
- "use this embedding model: pip install llama-cpp-python"
- )
- except Exception as e:
- raise ValueError(
- f"Could not load Llama model from path: {model_path}. "
- f"Received error {e}"
- )
- else:
- values["client"] = values["model_path"]
- return values
-
- def _call(
- self,
- prompt: str,
- stop: Optional[List[str]] = None,
- run_manager: Optional[CallbackManagerForLLMRun] = None,
- **kwargs,
- ) -> str:
- verbose = False
- # tokenize twice, just to count tokens, since llama cpp python wrapper has no way to truncate
- # still have to avoid crazy sizes, else hit llama_tokenize: too many tokens -- might still hit, not fatal
- prompt = prompt[-self.n_ctx * 4:]
- prompt_tokens = self.client.tokenize(b" " + prompt.encode("utf-8"))
- num_prompt_tokens = len(prompt_tokens)
- if num_prompt_tokens > self.n_ctx:
- # conservative by using int()
- chars_per_token = int(len(prompt) / num_prompt_tokens)
- prompt = prompt[-self.n_ctx * chars_per_token:]
- if verbose:
- print("reducing tokens, assuming average of %s chars/token: %s" % chars_per_token, flush=True)
- prompt_tokens2 = self.client.tokenize(b" " + prompt.encode("utf-8"))
- num_prompt_tokens2 = len(prompt_tokens2)
- print("reduced tokens from %d -> %d" % (num_prompt_tokens, num_prompt_tokens2), flush=True)
-
- # use instruct prompting
- data_point = dict(context=self.context, instruction=prompt, input=self.iinput)
- prompt = self.prompter.generate_prompt(data_point)
-
- if verbose:
- print("_call prompt: %s" % prompt, flush=True)
-
- if self.streaming:
- # parent handler of streamer expects to see prompt first else output="" and lose if prompt=None in prompter
- text = ""
- for token in self.stream(input=prompt, stop=stop):
- # for token in self.stream(input=prompt, stop=stop, run_manager=run_manager):
- text_chunk = token # ["choices"][0]["text"]
- # self.stream already calls text_callback
- # if text_callback:
- # text_callback(text_chunk)
- text += text_chunk
- # parent handler of streamer expects to see prompt first else output="" and lose if prompt=None in prompter
- return text[len(prompt):]
- else:
- params = self._get_parameters(stop)
- params = {**params, **kwargs}
- result = self.client(prompt=prompt, **params)
- return result["choices"][0]["text"]
-
- def _stream(
- self,
- prompt: str,
- stop: Optional[List[str]] = None,
- run_manager: Optional[CallbackManagerForLLMRun] = None,
- **kwargs: Any,
- ) -> Iterator[GenerationChunk]:
- # parent handler of streamer expects to see prompt first else output="" and lose if prompt=None in prompter
- logprobs = 0
- chunk = GenerationChunk(
- text=prompt,
- generation_info={"logprobs": logprobs},
- )
- yield chunk
- if run_manager:
- run_manager.on_llm_new_token(
- token=chunk.text, verbose=self.verbose, log_probs=logprobs
- )
- # actual new tokens
- for chunk in super()._stream(prompt, stop=stop, run_manager=run_manager, **kwargs):
- yield chunk
-
- def get_token_ids(self, text: str) -> List[int]:
- return self.client.tokenize(b" " + text.encode("utf-8"))
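The character-budget truncation in the wrapper above can be sketched as a standalone helper. The function name and the `tokenize` callable are stand-ins for illustration, not part of the original wrapper; the `max(1, ...)` guard is an addition to avoid a zero chars-per-token estimate.

```python
def truncate_by_token_budget(prompt: str, tokenize, n_ctx: int) -> str:
    """Trim a prompt to roughly n_ctx tokens using an average chars-per-token estimate."""
    # Hard cap first: assume no tokenizer emits fewer than 1 token per 4 characters.
    prompt = prompt[-n_ctx * 4:]
    tokens = tokenize(prompt)
    if len(tokens) > n_ctx:
        # int() rounds down, so the estimate is conservative (keeps fewer chars);
        # max(1, ...) guards against a zero estimate when tokens outnumber chars.
        chars_per_token = max(1, int(len(prompt) / len(tokens)))
        prompt = prompt[-n_ctx * chars_per_token:]
    return prompt
```

This trades a second tokenization pass for never exceeding the context window, which matches the comment in the wrapper about the llama-cpp-python binding having no built-in truncation.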
diff --git a/spaces/awacke1/CardWriterPro/current_editable.md b/spaces/awacke1/CardWriterPro/current_editable.md
deleted file mode 100644
index 6ea9a2f7de4e4fcb6f3763a21018bde8bb95d2ff..0000000000000000000000000000000000000000
--- a/spaces/awacke1/CardWriterPro/current_editable.md
+++ /dev/null
@@ -1,141 +0,0 @@
----
-language:
-- de
-license: bigscience-bloom-rail-1.0
-library_name: keras
-tags:
-- autogenerated-modelcard
----
-
-# tethre
-
-## Table of Contents
-- [tethre](#-model_id--defaultmymodelname-true)
- - [Table of Contents](#table-of-contents)
- - [Model Details](#model-details)
- - [How to Get Started with the Model](#how-to-get-started-with-the-model)
- - [Uses](#uses)
- - [Direct Use](#direct-use)
- - [Downstream Use](#downstream-use)
- - [Misuse and Out-of-scope Use](#misuse-and-out-of-scope-use)
- - [Limitations and Biases](#limitations-and-biases)
- - [Training](#training)
- - [Training Data](#training-data)
- - [Training Procedure](#training-procedure)
- - [Evaluation Results](#evaluation-results)
- - [Environmental Impact](#environmental-impact)
- - [Citation Information](#citation-information)
-
-
-
-## Model Details
-
-
-
-
-- Developed by:
-- Language(s):
-- License: This model is licensed under the bigscience-bloom-rail-1.0 license
-- Resources for more information:
-
-
-
-
-
-
-## How to Get Started with the Model
-
-Use the code below to get started with the model.
-
-```python
-# A nice code snippet here that describes how to use the model...
-```
-
-
-
-
-## Uses
-
-#### Direct Use
-
-
-
-[More Information Needed]
-
-#### Downstream Use
-
-
-
-[More Information Needed]
-
-#### Misuse and Out-of-scope Use
-
-
-
-[More Information Needed]
-
-
-
-
-## Limitations and Biases
-
-
-
-**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
-
-[More Information Needed]
-
-
-
-
-
-## Training
-
-#### Training Data
-
-
-
-
-See the data card for additional information.
-
-#### Training Procedure
-
-
-
-[More Information Needed]
-
-
-
-
-## Evaluation Results
-
-
-
-[More Information Needed]
-
-
-
-
-## Environmental Impact
-
-
-
-You can estimate carbon emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700)
-
-- **Hardware Type:**
-- **Hours used:**
-- **Cloud Provider:**
-- **Compute Region:**
-- **Carbon Emitted:**
-
-
-
-
-
-## Citation Information
-
-```bibtex
-
-```
-
\ No newline at end of file
diff --git a/spaces/awacke1/USMLE-Medical-License-Exam-EDA/backupapp.py b/spaces/awacke1/USMLE-Medical-License-Exam-EDA/backupapp.py
deleted file mode 100644
index f44e9890d59271e50252b7a31d47d02e39e7502d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/USMLE-Medical-License-Exam-EDA/backupapp.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import streamlit as st
-import json
-import pandas as pd
-import streamlit.components.v1 as components
-
-# Function to load JSONL file into a DataFrame
-def load_jsonl(file_path):
- data = []
- with open(file_path, 'r') as f:
- for line in f:
- data.append(json.loads(line))
- return pd.DataFrame(data)
-
-# Function to filter DataFrame by keyword
-def filter_by_keyword(df, keyword):
- return df[df.apply(lambda row: row.astype(str).str.contains(keyword).any(), axis=1)]
-
-# Function to generate HTML with textarea
-def generate_html_with_textarea(text_to_speak):
-    return f'''
-    <!DOCTYPE html>
-    <html>
-    <head>
-        <title>Read It Aloud</title>
-        <script type="text/javascript">
-            function readAloud() {{
-                const text = document.getElementById("textArea").value;
-                const speech = new SpeechSynthesisUtterance(text);
-                window.speechSynthesis.speak(speech);
-            }}
-        </script>
-    </head>
-    <body>
-        <h1>🔊 Read It Aloud</h1>
-        <textarea id="textArea" rows="10" cols="80">{text_to_speak}</textarea>
-        <br>
-        <button onclick="readAloud()">🔊 Read Aloud</button>
-    </body>
-    </html>
-    '''
-
-# Streamlit App 🚀
-st.title("USMLE Medical Questions Explorer with Speech Synthesis 🎙")
-
-# Dropdown for file selection
-file_option = st.selectbox("Select file:", ["usmle_16.2MB.jsonl", "usmle_2.08MB.jsonl"])
-st.write(f"You selected: {file_option}")
-
-# Load data
-large_data = load_jsonl("usmle_16.2MB.jsonl")
-small_data = load_jsonl("usmle_2.08MB.jsonl")
-
-data = large_data if file_option == "usmle_16.2MB.jsonl" else small_data
-
-# Top 20 healthcare terms for USMLE
-top_20_terms = ['Heart', 'Lung', 'Pain', 'Memory', 'Kidney', 'Diabetes', 'Cancer', 'Infection', 'Virus', 'Bacteria', 'Neurology', 'Psychiatry', 'Gastrointestinal', 'Pediatrics', 'Oncology', 'Skin', 'Blood', 'Surgery', 'Epidemiology', 'Genetics']
-
-# Create Expander and Columns UI for terms
-with st.expander("Search by Common Terms 📚"):
- cols = st.columns(4)
- for term in top_20_terms:
- with cols[top_20_terms.index(term) % 4]:
- if st.button(f"{term}"):
- filtered_data = filter_by_keyword(data, term)
- st.write(f"Filtered Dataset by '{term}' 📊")
- st.dataframe(filtered_data)
- if not filtered_data.empty:
- html_blocks = []
- for idx, row in filtered_data.iterrows():
- question_text = row.get("question", "No question field")
- documentHTML5 = generate_html_with_textarea(question_text)
- html_blocks.append(documentHTML5)
- all_html = ''.join(html_blocks)
- components.html(all_html, width=1280, height=1024)
-
-# Text input for search keyword
-search_keyword = st.text_input("Or, enter a keyword to filter data:")
-if st.button("Search 🕵️♀️"):
- filtered_data = filter_by_keyword(data, search_keyword)
- st.write(f"Filtered Dataset by '{search_keyword}' 📊")
- st.dataframe(filtered_data)
- if not filtered_data.empty:
- html_blocks = []
- for idx, row in filtered_data.iterrows():
- question_text = row.get("question", "No question field")
- documentHTML5 = generate_html_with_textarea(question_text)
- html_blocks.append(documentHTML5)
- all_html = ''.join(html_blocks)
- components.html(all_html, width=1280, height=1024)
-
-
-
-# Inject HTML5 and JavaScript for styling
-st.markdown("""
-
-""", unsafe_allow_html=True)
-
-# Markdown and emojis for the case presentation
-st.markdown("# 🏥 Case Study: 32-year-old Woman's Wellness Check")
-st.markdown("## 📋 Patient Information")
-st.markdown("""
-- **Age**: 32
-- **Gender**: Female
-- **Past Medical History**: Asthma, Hypertension, Anxiety
-- **Current Medications**: Albuterol, Fluticasone, Hydrochlorothiazide, Lisinopril, Fexofenadine
-- **Vitals**
- - **Temperature**: 99.5°F (37.5°C)
- - **Blood Pressure**: 165/95 mmHg
- - **Pulse**: 70/min
- - **Respirations**: 15/min
- - **Oxygen Saturation**: 98% on room air
-""")
-
-# Clinical Findings
-st.markdown("## 📋 Clinical Findings")
-st.markdown("""
-- Cardiac exam reveals normal S1 and S2 heart sounds with a regular rate.
-- Pulmonary exam is clear to auscultation bilaterally with good air movement.
-- Abdominal exam reveals a bruit, normoactive bowel sounds, and an audible borborygmus.
-- Neurological exam reveals cranial nerves II-XII as grossly intact with normal strength and reflexes in the upper and lower extremities.
-""")
-
-# Next Step Options
-st.markdown("## 🤔 What is the best next step in management?")
-
-# Multiple Choice
-options = ["Blood Test", "MRI Scan", "Ultrasound with Doppler", "Immediate Surgery"]
-choice = st.selectbox("", options)
-
-# Explanation
-if st.button("Submit"):
- if choice == "Ultrasound with Doppler":
- st.success("Correct! 🎉")
- st.markdown("""
- ### Explanation
- The patient's high blood pressure coupled with an abdominal bruit suggests the possibility of renal artery stenosis.
- An **Ultrasound with Doppler** is the best next step for assessing blood flow and evaluating for renal artery stenosis.
- """)
- else:
- st.error("Incorrect. 😞")
- st.markdown("""
- The best next step is **Ultrasound with Doppler**.
- """)
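The JSONL loading and row-wise keyword filter used throughout the app above can be sketched in isolation (the sample rows in the usage are invented for illustration; only `load_jsonl` and `filter_by_keyword` mirror the deleted file):

```python
import json
import pandas as pd

def load_jsonl(path: str) -> pd.DataFrame:
    # One JSON object per line -> one DataFrame row per line.
    with open(path, "r") as f:
        return pd.DataFrame(json.loads(line) for line in f)

def filter_by_keyword(df: pd.DataFrame, keyword: str) -> pd.DataFrame:
    # Keep rows where any cell, cast to string, contains the keyword.
    return df[df.apply(lambda row: row.astype(str).str.contains(keyword).any(), axis=1)]
```

Note the filter is a substring match via `Series.str.contains`, so it is case-sensitive and will also match inside longer words; passing `case=False` (or a regex) would loosen or tighten it as needed.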
diff --git a/spaces/awacke1/VideoSummaryYoutube3/app.py b/spaces/awacke1/VideoSummaryYoutube3/app.py
deleted file mode 100644
index ea0d92944bdf4e1fde3b7b46810816a97c6b4964..0000000000000000000000000000000000000000
--- a/spaces/awacke1/VideoSummaryYoutube3/app.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import gradio as gr
-from summarize import Summarizer
-
-interface = gr.Interface(fn = Summarizer,
- inputs = [gr.inputs.Textbox(lines=2,
- placeholder="Enter your link...",
- label='YouTube Video Link'),
- gr.inputs.Radio(["mT5", "BART"], type="value", label='Model')],
- outputs = [gr.outputs.Textbox(
- label="Summary")],
-
- title = "Video Summary Generator",
- examples = [
- ['https://www.youtube.com/watch?v=OaeYUm06in0&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=5761s', 'BART'],
- ['https://www.youtube.com/watch?v=U5OD8MjYnOM', 'BART'],
- ['https://www.youtube.com/watch?v=Gfr50f6ZBvo', 'BART'],
- ['https://www.youtube.com/watch?v=G4hL5Om4IJ4&t=2680s', 'BART'],
- ['https://www.youtube.com/watch?v=0Jd7fJgFkPU&t=8776s', 'mT5']
- ],
- enable_queue=True)
-
-interface.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/awacke1/mixture-of-experts-dr-llama/templates.py b/spaces/awacke1/mixture-of-experts-dr-llama/templates.py
deleted file mode 100644
index 2c64194b42f0115f8a95b2749256a3237ab44757..0000000000000000000000000000000000000000
--- a/spaces/awacke1/mixture-of-experts-dr-llama/templates.py
+++ /dev/null
@@ -1,44 +0,0 @@
-css = '''
-
-
-