diff --git a/spaces/101-5/gpt4free/g4f/.v1/gui/pywebio-gui/README.md b/spaces/101-5/gpt4free/g4f/.v1/gui/pywebio-gui/README.md deleted file mode 100644 index 2b99c075d507dbf128a170d2975b1b22b393a70e..0000000000000000000000000000000000000000 --- a/spaces/101-5/gpt4free/g4f/.v1/gui/pywebio-gui/README.md +++ /dev/null @@ -1,24 +0,0 @@ -# GUI with PyWebIO -Simple, fast, and with fewer errors -Only requires -```bash -pip install gpt4free -pip install pywebio -``` -clicking on 'pywebio-usesless.py' will run it - -PS: Currently, only 'usesless' is implemented, and the GUI is expected to be updated infrequently, with a focus on stability. - -↓ Here is the introduction in zh-Hans-CN below. - -# 使用pywebio实现的极简GUI -简单,快捷,报错少 -只需要 -```bash -pip install gpt4free -pip install pywebio -``` - -双击pywebio-usesless.py即可运行 - -ps:目前仅实现usesless,这个gui更新频率应该会比较少,目的是追求稳定 diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/SEUSv10RC6shaderpackzip ~REPACK~.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/SEUSv10RC6shaderpackzip ~REPACK~.md deleted file mode 100644 index b1918126a7aa4516d3b4197734d420b7e2442edf..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/SEUSv10RC6shaderpackzip ~REPACK~.md +++ /dev/null @@ -1,126 +0,0 @@ -## SEUSv10RC6shaderpackzip - - - - - - - - - -**LINK ✸ [https://jinyurl.com/2tzZXd](https://jinyurl.com/2tzZXd)** - - - - - - - - - - - - - -# How to Install SEUS v10 RC6 Shader Pack for Minecraft - - - -SEUS v10 RC6 is a shader pack for Minecraft that enhances the graphics of the game with realistic lighting, shadows, water effects and more. It is one of the most popular shader packs for Minecraft and requires OptiFine or GLSL Shaders Mod to work. In this article, we will show you how to download and install SEUS v10 RC6 shader pack for Minecraft. - - - -## Step 1: Download SEUS v10 RC6 shader pack - - - -The first step is to download the SEUS v10 RC6 shader pack from the official website of Sonic Ether[^1^] or from other sources[^2^]. The file name should be `SEUS-v10rc6.zip` and it should be around 20 MB in size. You can also download other versions of SEUS shader pack from the same website, such as SEUS Renewed or SEUS PTGI. - - - -## Step 2: Install OptiFine or GLSL Shaders Mod - - - -The next step is to install OptiFine or GLSL Shaders Mod for your Minecraft version. These mods are necessary to run shader packs in Minecraft. You can download OptiFine from its official website or from other sources. You can download GLSL Shaders Mod from its official forum thread or from other sources. Follow the instructions on how to install these mods for your Minecraft version. - - - -## Step 3: Move the shader pack file to the shader folder - - - -The final step is to move the `SEUS-v10rc6.zip` file to the shader folder in your Minecraft directory. To do this, open the Minecraft launcher and select the profile that has OptiFine or GLSL Shaders Mod installed. Launch Minecraft and go to the video settings in the options menu. Click on shaders packs and open the shader folder in the lower left corner. This will open a folder called `.minecraft\shaderpacks`. Move the `SEUS-v10rc6.zip` file into this folder. Return to Minecraft and select SEUS v10 RC6 in the shader list. Click on done and enjoy your new graphics! 
- - - -## Troubleshooting - - - -If you encounter any problems with installing or running SEUS v10 RC6 shader pack, here are some possible solutions: - - - -- Make sure you have a compatible version of OptiFine or GLSL Shaders Mod installed for your Minecraft version. - -- Make sure you have a powerful enough computer to run SEUS v10 RC6 shader pack. It is recommended to have at least 4 GB of RAM and a decent graphics card. - -- Make sure you have allocated enough memory to Minecraft. You can do this by editing the JVM arguments in the launcher profile settings. - -- Make sure you have updated your graphics drivers to the latest version. - -- If you still have issues, you can try other versions of SEUS shader pack or other shader packs for Minecraft. - - - -We hope this article helped you install SEUS v10 RC6 shader pack for Minecraft. If you have any questions or feedback, feel free to leave a comment below. - - - -## What are the features of SEUS v10 RC6 shader pack? - - - -SEUS v10 RC6 shader pack is a legacy version of SEUS that offers some amazing features for Minecraft graphics. Some of the features are: - - - -- Dynamic shadows that change according to the position of the sun and the light sources. - -- Realistic water effects that reflect and refract the environment. - -- Smooth lighting that eliminates harsh edges and creates soft transitions. - -- Bloom and lens flare effects that add a cinematic touch to the scenes. - -- Motion blur and depth of field effects that enhance the sense of movement and distance. - -- Customizable settings that allow you to adjust the performance and quality of the shader pack. - - - -SEUS v10 RC6 shader pack is compatible with Minecraft 1.4.6 and requires OptiFine or GLSL Shaders Mod to work. It is one of the most popular shader packs for Minecraft and has been praised by many players and reviewers for its stunning visuals and performance[^1^] [^2^] [^3^]. - - - -## Why should you use SEUS v10 RC6 shader pack? - - - -If you are looking for a way to improve your Minecraft experience with realistic and immersive graphics, SEUS v10 RC6 shader pack is a great choice. It will transform your Minecraft world into a beautiful and lively place that you can explore and enjoy. You will be amazed by the difference that SEUS v10 RC6 shader pack makes in your game. It will make your Minecraft look like a whole new game. - - - -SEUS v10 RC6 shader pack is also easy to install and use. You just need to follow the steps in this article and you will be ready to go. You can also customize the settings of the shader pack to suit your preferences and needs. You can change the brightness, contrast, color, fog, water, shadows, motion blur, depth of field and more. You can also toggle some features on and off if you want to save some resources or change the mood of your game. - - - -SEUS v10 RC6 shader pack is a must-have for any Minecraft fan who wants to experience the game in a new way. It will make your Minecraft more realistic, beautiful and fun. You will not regret trying it out. - - 145887f19f - - - - - diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Articulate Storyline 212121412 Portable.md b/spaces/1gistliPinn/ChatGPT4/Examples/Articulate Storyline 212121412 Portable.md deleted file mode 100644 index 12a338023374674e1ded0fd398767709ea7b4b11..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Articulate Storyline 212121412 Portable.md +++ /dev/null @@ -1,22 +0,0 @@ -

Articulate Storyline 212121412 Portable


Download File https://imgfil.com/2uy26a



-
-The flexible storyline is fully integrated into the POWER BI interface and provides an intuitive user interface for users to start, pause, review and resume... - -A well written and illustrated biography of John Calhoun, first Vice President of the United States of America. He was also the first Vice President to become President of the United States after his brother, Andrew Johnson died. Calhoun was also a great Civil War general. Learn about the best of America in this... - -It's always a struggle to be "the" girl and a budding athlete in the same group of girls, but can all three teenage girls really have it all? This is the story of the relationships these three girls form throughout their senior year of high school and how their friendship not only helps them get through the... - -How the media is misleading US citizens on the separation of powers. "Meet my Friend, Nancy Pelosi." She says the media is wrong. "Our representatives are working on our behalf." It's not the same way, Nancy Pelosi says. The media is twisting the facts. She knows. So is Harry Reid. The media is just... - -One of the most groundbreaking books in American history!The story of the evolution of democracy from its beginning, through our founding fathers. This is a book that has history shaped around you as you read it. It gives a powerful overview of the history of our country. You will meet President... - -Warren is a small town, a peaceful town. But every summer when the town seems to sleep, there are mysterious happenings. With a new library and community center, things have never been better. But when the new summer arrives and the town seems to sleep, something wakes up in Warren and people start... - -It all began on May 20th 2008, what started as a simple wager on an online gaming site. He lost and was then left for dead. With nothing and nowhere to go he landed in a nother world; the world of supernatural witches. It was there he found his true home and where he would begin his journey as... - -It was only a little over a year ago when Stacey Little left her small, quiet, Canadian town. She left with a memory that would haunt her forever. Not long after that, she started to hear voices and see visions. She didn't know what to make of it. She tried to ignore it, but the voices and... - -"Just following the light" In the year 2000, American's were a bit scared 4fefd39f24
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Ek Bura Aadmi English Dubbed Download A Review of the Film Based on a True Story.md b/spaces/1gistliPinn/ChatGPT4/Examples/Ek Bura Aadmi English Dubbed Download A Review of the Film Based on a True Story.md deleted file mode 100644 index 4fc77f22ced9978d8f98e302abebfb37166b247f..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Ek Bura Aadmi English Dubbed Download A Review of the Film Based on a True Story.md +++ /dev/null @@ -1,6 +0,0 @@ -

Ek Bura Aadmi English Dubbed Download


Download →→→ https://imgfil.com/2uxZZo



- - aaccfb2cb3
-
-
-

diff --git a/spaces/1line/AutoGPT/tests/integration/milvus_memory_tests.py b/spaces/1line/AutoGPT/tests/integration/milvus_memory_tests.py deleted file mode 100644 index ec38bf2f72087b5da679d26594ebff97d8a09b19..0000000000000000000000000000000000000000 --- a/spaces/1line/AutoGPT/tests/integration/milvus_memory_tests.py +++ /dev/null @@ -1,57 +0,0 @@ -# sourcery skip: snake-case-functions -"""Tests for the MilvusMemory class.""" -import random -import string -import unittest - -from autogpt.config import Config -from autogpt.memory.milvus import MilvusMemory - -try: - - class TestMilvusMemory(unittest.TestCase): - """Tests for the MilvusMemory class.""" - - def random_string(self, length: int) -> str: - """Generate a random string of the given length.""" - return "".join(random.choice(string.ascii_letters) for _ in range(length)) - - def setUp(self) -> None: - """Set up the test environment.""" - cfg = Config() - cfg.milvus_addr = "localhost:19530" - self.memory = MilvusMemory(cfg) - self.memory.clear() - - # Add example texts to the cache - self.example_texts = [ - "The quick brown fox jumps over the lazy dog", - "I love machine learning and natural language processing", - "The cake is a lie, but the pie is always true", - "ChatGPT is an advanced AI model for conversation", - ] - - for text in self.example_texts: - self.memory.add(text) - - # Add some random strings to test noise - for _ in range(5): - self.memory.add(self.random_string(10)) - - def test_get_relevant(self) -> None: - """Test getting relevant texts from the cache.""" - query = "I'm interested in artificial intelligence and NLP" - num_relevant = 3 - relevant_texts = self.memory.get_relevant(query, num_relevant) - - print(f"Top {num_relevant} relevant texts for the query '{query}':") - for i, text in enumerate(relevant_texts, start=1): - print(f"{i}. {text}") - - self.assertEqual(len(relevant_texts), num_relevant) - self.assertIn(self.example_texts[1], relevant_texts) - -except: - print( - "Skipping tests/integration/milvus_memory_tests.py as Milvus is not installed." - ) diff --git a/spaces/1phancelerku/anime-remove-background/Discover the New Champion Stadium in Pokmon Masters EX on GBA.md b/spaces/1phancelerku/anime-remove-background/Discover the New Champion Stadium in Pokmon Masters EX on GBA.md deleted file mode 100644 index 1561bfb224e719367f600e116b7baf5687ffa448..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Discover the New Champion Stadium in Pokmon Masters EX on GBA.md +++ /dev/null @@ -1,147 +0,0 @@ -

Pokemon Masters EX: A New Adventure with Your Favorite Trainers and Pokemon

-

If you are a fan of Pokemon games, you might have heard of Pokemon Masters EX, a free-to-play mobile game that features an original story, team-ups between iconic trainers and Pokemon, and exciting 3-on-3 battles. Whether you are new to the game or a veteran player, you might be wondering how to download and play Pokemon Masters EX on your device. In this article, we will show you how to do that, as well as give you some tips and tricks for enjoying the game.

-

Pokemon Masters EX is a spin-off game that takes place on the artificial island of Pasio, where trainers from different regions compete in the Pokemon Masters League (PML), a tournament of 3-on-3 battles. You play as a trainer who teams up with other famous trainers and their Pokemon, called sync pairs, to form a team of three. You can also participate in co-op battles with other players, as well as special events that feature characters from the anime series.

-

pokemon masters ex gba download


Download Zip ››››› https://jinyurl.com/2uNP04



-

Some of the main features of Pokemon Masters EX are:

- -

Now that you know what Pokemon Masters EX is and why you should play it, let's see how you can download and play it on your device.

-

How to Download Pokemon Masters EX on Android

-

If you have an Android device, you can easily download and play Pokemon Masters EX from the Google Play Store. Here are the steps you need to follow:

-

pokemon masters ex gba rom download
-pokemon masters ex gba emulator download
-pokemon masters ex gba hack download
-pokemon masters ex gba apk download
-pokemon masters ex gba cheats download
-pokemon masters ex gba mod download
-pokemon masters ex gba free download
-pokemon masters ex gba android download
-pokemon masters ex gba ios download
-pokemon masters ex gba pc download
-how to download pokemon masters ex gba
-where to download pokemon masters ex gba
-best site to download pokemon masters ex gba
-safe download pokemon masters ex gba
-fast download pokemon masters ex gba
-easy download pokemon masters ex gba
-full version download pokemon masters ex gba
-latest version download pokemon masters ex gba
-updated version download pokemon masters ex gba
-offline download pokemon masters ex gba
-online play pokemon masters ex gba without download
-review of pokemon masters ex gba download
-guide for pokemon masters ex gba download
-tips for pokemon masters ex gba download
-tricks for pokemon masters ex gba download
-walkthrough for pokemon masters ex gba download
-gameplay of pokemon masters ex gba download
-features of pokemon masters ex gba download
-characters of pokemon masters ex gba download
-story of pokemon masters ex gba download
-graphics of pokemon masters ex gba download
-sound of pokemon masters ex gba download
-controls of pokemon masters ex gba download
-compatibility of pokemon masters ex gba download
-performance of pokemon masters ex gba download
-quality of pokemon masters ex gba download
-rating of pokemon masters ex gba download
-ranking of pokemon masters ex gba download
-popularity of pokemon masters ex gba download
-demand of pokemon masters ex gba download
-benefits of pokemon masters ex gba download
-advantages of pokemon masters ex gba download
-disadvantages of pokemon masters ex gba download
-drawbacks of pokemon masters ex gba download
-problems of pokemon masters ex gba download
-issues of pokemon masters ex gba download
-solutions of pokemon masters ex gba download
-alternatives of pokemon masters ex gba download
-comparisons of pokemon masters ex gba download

-
    -
  1. Go to the Google Play Store and search for Pokemon Masters EX. You can also use this link: [Pokemon Masters EX].
  2. -
  3. Tap on the Install button and wait for the game to download. The game size is about 2 GB, so make sure you have enough space and a stable internet connection.
  4. -
  5. Launch the game and follow the instructions to start your adventure. You will need to accept the terms of service, choose your language, download additional data, and create your character.
  6. -
-

Congratulations! You are now ready to play Pokemon Masters EX on your Android device.

-

How to Download Pokemon Masters EX on iOS

-

If you have an iOS device, you can also download and play Pokemon Masters EX from the App Store. Here are the steps you need to follow:

-
    -
  1. Go to the App Store and search for Pokemon Masters EX. You can also use this link: [Pokemon Masters EX].
  2. -
  3. Tap on the Get button and enter your Apple ID password if prompted. The game is free to download, but it may offer in-app purchases.
  4. -
  5. Wait for the game to download and launch it from your home screen. You will need to accept the terms of service, choose your language, download additional data, and create your character.
  6. -
-

Congratulations! You are now ready to play Pokemon Masters EX on your iOS device.

How to Download Pokemon Masters EX on PC

-

If you don't have a mobile device or you prefer to play Pokemon Masters EX on a bigger screen, you can also download and play it on your PC using an Android emulator. An Android emulator is a software that allows you to run Android apps and games on your PC. Here are the steps you need to follow:

-
    -
  1. Download and install an Android emulator such as [BlueStacks] or [NoxPlayer]. You can choose the one that suits your preferences and system requirements.
  2. -
  3. Open the emulator and sign in with your Google account. You will need to do this to access the Google Play Store and other Google services.
  4. -
  5. Go to the Google Play Store and search for Pokemon Masters EX. You can also use this link: [Pokemon Masters EX].
  6. -
  7. Install the game and launch it from the emulator. You will need to accept the terms of service, choose your language, download additional data, and create your character.
  8. -
-

Congratulations! You are now ready to play Pokemon Masters EX on your PC.

-

How to Download Pokemon Masters EX on GBA

-

If you are feeling nostalgic or adventurous, you might want to try playing Pokemon Masters EX on a Game Boy Advance (GBA), a handheld console that was released in 2001. GBA is one of the most popular and beloved gaming devices of all time, and it has a huge library of classic games, including many Pokemon titles. However, Pokemon Masters EX is not officially available for GBA, so you will need to use some tricks to make it work.

-

Disclaimer: Before we show you how to download and play Pokemon Masters EX on GBA, we need to warn you about the legal and ethical issues of using GBA emulators and ROMs. Emulators are software that mimic the hardware and software of a console, while ROMs are files that contain the data of a game. Using emulators and ROMs without owning the original console and game is considered piracy and may violate the intellectual property rights of the developers and publishers. Therefore, we do not condone or encourage the use of emulators and ROMs for any illegal or unethical purposes. If you decide to use them, do so at your own risk and responsibility.

-

Here are the steps you need to follow:

-
    -
  1. Download and install a GBA emulator such as [VisualBoyAdvance] or [mGBA]. These are two of the most popular and reliable GBA emulators for PC.
  2. -
  3. Download a Pokemon Masters EX ROM from a reputable source such as [ROMsMania] or [EmuParadise]. These are two of the most trusted and safe websites for downloading ROMs for various consoles.
  4. -
  5. Open the emulator and load the ROM file. You will need to locate the ROM file on your PC and open it with the emulator.
  6. -
  7. Enjoy playing Pokemon Masters EX on your GBA. You can customize the controls, graphics, sound, and other settings according to your preferences.
  8. -
-

Congratulations! You are now ready to play Pokemon Masters EX on your GBA.

Pros and Cons of Playing Pokemon Masters EX on GBA

-

Playing Pokemon Masters EX on GBA might sound like a fun and nostalgic idea, but it also has its advantages and disadvantages. Here are some of them:

- - - - - - - - - - - - - - - - - - - - - -
ProsCons
- You can experience the game in a retro style, with pixelated graphics and chiptune sound.- You will miss out on the high-quality graphics and sound of the original game, which are designed to enhance the gameplay and immersion.
- You can play the game on a portable device, without needing an internet connection or a battery charger.- You will need to carry around a GBA device and a cartridge, which might be inconvenient or impractical in some situations.
- You can customize the game settings, such as the speed, difficulty, cheats, and save states, to suit your preferences and needs.- You might encounter compatibility issues, bugs, glitches, or crashes that could ruin your gaming experience or damage your device.
- You can enjoy the game in a different way, with new challenges and surprises.- You will not be able to access the latest updates, features, events, and sync pairs that are available in the original game.
-

As you can see, playing Pokemon Masters EX on GBA has its pros and cons, and it is up to you to decide whether it is worth it or not. If you do decide to try it, make sure you do it legally and ethically, and have fun!

-

Tips and Tricks for Playing Pokemon Masters EX

-

Pokemon Masters EX is a game that requires strategy, skill, and knowledge to master. If you want to become a better player and enjoy the game more, here are some tips and tricks that might help you:

- -

Conclusion

-

Pokemon Masters EX is a game that offers a lot of fun and excitement for Pokemon fans and gamers alike. You can download and play it on various devices, such as Android, iOS, PC, or even GBA. You can also enjoy the game's features, such as sync pairs, 3-on-3 battles, Champion Stadium, events, and more. However, you also need to be aware of the legal and ethical issues of using emulators and ROMs, as well as the pros and cons of playing the game on different devices. We hope this article has helped you learn how to download and play Pokemon Masters EX on your device of choice. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and happy gaming!

-

FAQs

-

Here are some frequently asked questions about Pokemon Masters EX:

-
    -
  1. Q: Is Pokemon Masters EX free to play? -A: Yes, Pokemon Masters EX is free to download and play. However, it may offer in-app purchases that can enhance your gaming experience.
  2. -
  3. Q: Is Pokemon Masters EX compatible with my device? -A: Pokemon Masters EX is compatible with most Android devices that have Android OS 7.0 or higher (64-bit), and most iOS devices that have iOS 11 or higher. For PC and GBA devices, you will need to use an emulator and a ROM file.
  4. -
  5. Q: How can I get more sync pairs in Pokemon Masters EX? -A: You can get more sync pairs by scouting them using gems or tickets. You can also get some sync pairs by completing story chapters or events.
  6. -
  7. Q: How can I contact the support team of Pokemon Masters EX? -A: You can contact the support team of Pokemon Masters EX by tapping on the Menu button in the game, then tapping on Other > Customer Support > Inquiries.
  8. -
  9. Q: How can I join the community of Pokemon Masters EX? -A: You can join the community of Pokemon Masters EX by following their official social media accounts, such as [Facebook], [Twitter], [Instagram], [YouTube], or [Reddit]. You can also join their official [Discord] server or their [website].
  10. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Classic Solitaire for Windows - Free and Easy.md b/spaces/1phancelerku/anime-remove-background/Download Classic Solitaire for Windows - Free and Easy.md deleted file mode 100644 index 2bcceb4830e7f1227fae226350ffbfe65f21a04f..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Classic Solitaire for Windows - Free and Easy.md +++ /dev/null @@ -1,116 +0,0 @@ - -

Classic Solitaire Download for PC: How to Play the Timeless Card Game on Your Windows Device

-

If you are looking for a simple yet addictive game that you can play on your PC, you might want to try classic solitaire. This card game has been around for centuries and has entertained millions of people around the world. In this article, we will show you how to download and install classic solitaire for PC, what features it offers, and some tips and tricks to help you win more games.

-

classic solitaire download for pc


DOWNLOAD >>> https://jinyurl.com/2uNShE



-

Introduction

-

What is classic solitaire?

-

Classic solitaire, also known as Klondike solitaire, is a card game that involves arranging 52 cards into four piles, called foundations, according to their suits and ranks. The game starts with seven columns of cards, called tableau, with the top card of each column face up and the rest face down. The goal is to move all the cards from the tableau and the stock pile (the remaining cards that are not dealt) to the foundations, starting from the ace and ending with the king.

-

Why play classic solitaire on PC?

-

Classic solitaire is a game that can be played by anyone, regardless of age or skill level. It is a great way to pass the time, relax, and exercise your brain. Playing classic solitaire on PC has some advantages over playing it on other devices, such as:

- -

How to download and install classic solitaire for PC?

-

There are many ways to download and install classic solitaire for PC, but one of the easiest and most reliable methods is to use the Microsoft Store. Here are the steps to follow:

-

free classic solitaire game download for windows 10
-classic solitaire card game download for pc offline
-how to download classic solitaire on windows 7
-classic spider solitaire download for pc full version
-classic solitaire no ads download for pc
-classic klondike solitaire download for windows 8
-microsoft classic solitaire collection download for pc
-classic solitaire for pc free download without internet
-best classic solitaire app download for windows 10
-classic solitaire download for pc windows xp
-old classic solitaire game download for pc
-classic pyramid solitaire download for windows 10
-classic solitaire plus download for pc
-where to download classic solitaire for windows 7
-classic tripeaks solitaire download for pc
-original classic solitaire download for windows 10
-classic freecell solitaire download for pc
-classic mahjong solitaire download for windows 10
-classic solitaire hd download for pc
-classic golf solitaire download for windows 10
-easy classic solitaire game download for pc
-classic spider solitaire free download for windows 7
-microsoft classic solitaire free download for pc
-classic hearts solitaire download for windows 10
-play classic solitaire online free no download for pc
-classic minesweeper and solitaire download for windows 10
-classic yukon solitaire download for pc
-new classic solitaire game download for windows 10
-classic canfield solitaire download for pc
-install classic solitaire on windows 10 free download
-simple classic solitaire game download for pc
-old school classic solitaire free download for pc
-microsoft store classic solitaire download for windows 10
-fun classic solitaire games free download for pc
-fast and easy classic solitaire free download for pc
-best free classic spider solitaire download for windows 10
-play store classic solitaire game free download for pc
-microsoft original classic solitaire free download for pc
-cool and relaxing classic solitaire game free download for pc
-addictive and challenging classic freecell solitaire free download for pc
-beautiful and smooth classic mahjong solitaire free download for pc
-awesome and exciting classic tripeaks solitaire free download for pc
-enjoy the old fashioned classic klondike solitaire free download for pc
-learn how to play the ancient game of classic pyramid solitaire free download for pc
-test your skills with the tricky and strategic game of classic canfield solitaire free download for pc
-have fun with the popular and colorful game of classic golf solitaire free download for pc
-experience the thrill of the wild and unpredictable game of classic yukon solitaire free download for pc

-
    -
  1. Open your web browser and go to https://support.microsoft.com/en-us/account-billing/get-the-classic-free-solitaire-games-for-windows-92bf81e3-f34a-58b8-45b2-abe855aa64f2.
  2. -
  3. On the Microsoft Solitaire Collection page in Microsoft Store, select Install. The game will download and install automatically.
  4. -
  5. To launch the game, select Play. You can always launch the game from the product page, but there's an easier way--pin it. With the game open, right-click (or press and hold) the game button on your task bar and select Pin to task bar . When you close the game, the button will still be there.
  6. -
  7. On the Start menu, scroll down the all apps list to Microsoft Solitaire Collection, right-click (or press and hold) the tile and select Pin to Start . It'll be available on the Start menu.
  8. -
-

If you have any problems with downloading or installing the game, you can run the app troubleshooter or contact Microsoft support for help.

-

Features of classic solitaire for PC

-

Different game modes and difficulty levels

-

One of the best features of classic solitaire for PC is that it offers different game modes and difficulty levels to suit your preference and challenge. You can choose from five game modes: Klondike, Spider, FreeCell, Pyramid, and TriPeaks. Each game mode has its own rules and strategies, so you can try them all and find your favorite one. You can also adjust the difficulty level of each game mode, from easy to expert, depending on how confident you are with your solitaire skills. You can change the game mode and difficulty level anytime from the settings menu.

-

Customizable themes and card backs

-

Another feature of classic solitaire for PC is that it allows you to customize the appearance of the game according to your taste. You can choose from different themes and card backs to make the game more colorful and fun. You can select from various themes, such as nature, animals, sports, holidays, and more. You can also pick from different card backs, such as classic, modern, vintage, and more. You can change the theme and card back anytime from the settings menu.

-

Statistics and achievements

-

If you are a competitive solitaire player, you will love the statistics and achievements feature of classic solitaire for PC. This feature lets you track your progress and performance in the game, such as how many games you have played, won, and lost, how long it took you to finish a game, what your best score and streak are, and more. You can also earn achievements by completing certain goals or challenges in the game, such as winning a game without using undo, clearing all the cards in the tableau, or finishing a game in less than a minute. You can view your statistics and achievements anytime from the main menu.

-

Online and offline play

-

One of the most convenient features of classic solitaire for PC is that it supports both online and offline play. This means that you can play the game anytime and anywhere, whether you have an internet connection or not. When you play online, you can access more features and benefits, such as daily challenges, events, leaderboards, cloud saving, and more. When you play offline, you can still enjoy the basic features of the game, such as different game modes, difficulty levels, themes, and card backs. You can switch between online and offline play anytime from the settings menu.

-

Tips and tricks for playing classic solitaire on PC

-

Use the undo button wisely

-

One of the most useful tools in classic solitaire for PC is the undo button. This button allows you to undo your last move or action in case you make a mistake or change your mind. However, you should not rely on this button too much or use it randomly. You should use it strategically and sparingly, as it can affect your score and time. For example, you should use it when you realize that you have missed a better move or when you want to explore a different option.

-

Pay attention to the cards in the stock pile

-

Another tip for playing classic solitaire on PC is to pay attention to the cards in the stock pile. The stock pile is where the remaining cards that are not dealt are placed. You can draw one or three cards from the stock pile at a time, depending on your difficulty level. You should keep an eye on the cards in the stock pile, as they can help you plan your moves ahead and avoid getting stuck. For example, if you know that there is an ace or a two in the stock pile, you can save a space for it in the foundation.

-

Move cards to the foundation as soon as possible

-

Another tip for playing classic solitaire on PC is to move cards to the foundation as soon as possible. The foundation is where you place the cards in ascending order according to their suits and ranks. The sooner you move cards to the foundation, the easier it will be to clear the tableau and win the game. Moving cards to the foundation also frees up space in the tableau and gives you more options for moving other cards around.

-

Try to clear the columns with the most cards first

-

Another tip for playing classic solitaire on PC is to try to clear the columns with the most cards first. The columns are the vertical stacks of cards in the tableau. The more cards you have in a column, the harder it will be to move them around and access the cards underneath. Therefore, you should try to clear the columns with the most cards first, especially if they have face-down cards. This will help you reveal more cards and create more empty spaces in the tableau.

-

Use the hint button if you get stuck

-

Another tip for playing classic solitaire on PC is to use the hint button if you get stuck. The hint button is located at the bottom right corner of the screen and it will show you a possible move that you can make. However, you should not use the hint button too often or blindly follow its suggestions. You should use it only when you have no other moves or when you want to check if you have missed something. Using the hint button too much can lower your score and make the game less fun.

-

Conclusion

-

Summary of the main points

-

In conclusion, classic solitaire is a timeless card game that you can play on your PC for free. It is a simple yet addictive game that can help you pass the time, relax, and exercise your brain. To play classic solitaire on PC, you can download and install it from the Microsoft Store. You can enjoy different game modes, difficulty levels, themes, card backs, statistics, achievements, and online and offline play. You can also improve your solitaire skills by following some tips and tricks, such as using the undo button wisely, paying attention to the cards in the stock pile, moving cards to the foundation as soon as possible, clearing the columns with the most cards first, and using the hint button if you get stuck.

-

Call to action

-

If you are ready to play classic solitaire on PC, what are you waiting for? Download and install it now and start having fun with this classic card game. You can also share your thoughts and experiences with us in the comments section below. We would love to hear from you!

-

Frequently Asked Questions

-

What are the rules of classic solitaire?

-

The rules of classic solitaire are simple: you have to arrange 52 cards into four piles, called foundations, according to their suits and ranks. The game starts with seven columns of cards, called tableau, with the top card of each column face up and the rest face down. The goal is to move all the cards from the tableau and the stock pile (the remaining cards that are not dealt) to the foundations, starting from the ace and ending with the king.

-

How do I win classic solitaire?

-

You win classic solitaire when you move all the cards from the tableau and the stock pile to the foundations. You can move cards from one place to another by following these rules:

- -

How do I change the game mode or difficulty level?

-

You can change the game mode or difficulty level anytime from the settings menu. To access the settings menu, click on the gear icon at the top right corner of the screen. Then, you can select the game mode and difficulty level that you want to play. You can choose from five game modes: Klondike, Spider, FreeCell, Pyramid, and TriPeaks. You can also adjust the difficulty level of each game mode, from easy to expert.

-

How do I customize the theme or card back?

-

You can customize the theme or card back anytime from the settings menu. To access the settings menu, click on the gear icon at the top right corner of the screen. Then, you can select the theme and card back that you want to use. You can choose from different themes and card backs to make the game more colorful and fun. You can select from various themes, such as nature, animals, sports, holidays, and more. You can also pick from different card backs, such as classic, modern, vintage, and more.

-

How do I view my statistics or achievements?

-

You can view your statistics or achievements anytime from the main menu. To access the main menu, click on the hamburger icon at the top left corner of the screen. Then, you can select the statistics or achievements option that you want to see. You can view your progress and performance in the game, such as how many games you have played, won, and lost, how long it took you to finish a game, what your best score and streak are, and more. You can also earn achievements by completing certain goals or challenges in the game, such as winning a game without using undo, clearing all the cards in the tableau, or finishing a game in less than a minute.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Dream League Soccer 2021 Hack APK and Get Unlimited Coins and Diamonds.md b/spaces/1phancelerku/anime-remove-background/Download Dream League Soccer 2021 Hack APK and Get Unlimited Coins and Diamonds.md deleted file mode 100644 index b180df7ddefafaff29a573dc9ffa09c24d5cb8b7..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Dream League Soccer 2021 Hack APK and Get Unlimited Coins and Diamonds.md +++ /dev/null @@ -1,93 +0,0 @@ -
-

How to Hack Dream League Soccer 2021 and Get Unlimited Coins

-

Dream League Soccer 2021 is one of the most popular soccer games on Android and iOS devices. It lets you create your own team, compete in various leagues, and customize your players, kits, stadiums, and more. However, the game also has some limitations, such as limited coins, ads, and in-app purchases that can affect your gaming experience.

-

dream league soccer 2021 hack monedas infinitas apk 2021


Download Zip ✦✦✦ https://jinyurl.com/2uNQAB



-

If you want to enjoy Dream League Soccer 2021 without any restrictions, you might be interested in hacking the game and getting unlimited coins. With this hack, you can buy any player you want, upgrade your facilities, and unlock all the features of the game. Sounds amazing, right?

-

In this article, we will show you how to download and install the Dream League Soccer 2021 hack apk file, how to use the hack features, and what are the benefits and risks of using it. We will also give you some tips and tricks to play the game like a pro. So, let's get started!

-

How to Download and Install the Dream League Soccer 2021 Hack Apk File

-

The first step to hack Dream League Soccer 2021 is to download and install the hack apk file. This is a modified version of the original game that has been tweaked to give you unlimited coins and other advantages. Here are the steps to follow:

-
    -
  1. Go to a reliable website that offers the Dream League Soccer 2021 hack apk file. For example, you can visit [Tablet Adam](^1^), which is a trusted source for Android games and apps.
  2. -
  3. Click on the download button and wait for the file to be downloaded on your device.
  4. -
  5. Before installing the file, make sure you have enabled the "Unknown Sources" option in your device settings. This will allow you to install apps from sources other than the Google Play Store.
  6. -
  7. Locate the downloaded file in your file manager and tap on it to start the installation process.
  8. -
  9. Follow the instructions on the screen and wait for the installation to be completed.
  10. -
  11. Launch the game and enjoy!
  12. -
-

How to Use the Hack Features and Enjoy the Game

-

Now that you have installed the Dream League Soccer 2021 hack apk file, you can use its features and enjoy the game. Here are some of the things you can do with the hack:

-

dream league soccer 2021 mod apk unlimited coins and gems
-descargar dream league soccer 2021 hackeado monedas infinitas
-dream league soccer 2021 cheats android no root
-como hackear dream league soccer 2021 sin root
-dream league soccer 2021 hack ios download
-dream league soccer 2021 unlimited money and players
-baixar dream league soccer 2021 hack dinheiro infinito
-dream league soccer 2021 hack online generator
-dream league soccer 2021 hack apk obb data
-dream league soccer 2021 mod menu apk download
-dream league soccer 2021 hack version free download
-como tener monedas infinitas en dream league soccer 2021 sin aplicaciones
-dream league soccer 2021 hack apk mediafıre
-dream league soccer 2021 mod apk all players unlocked
-dream league soccer 2021 cheat codes for android
-comment hacker dream league soccer 2021 sans verification humaine
-dream league soccer 2021 hack apk latest version
-dream league soccer 2021 mod apk unlimited everything
-como conseguir monedas infinitas en dream league soccer 2021 facil y rapido
-dream league soccer 2021 hack tool no survey no password
-download game dream league soccer 2021 mod apk unlimited money
-trucos para dream league soccer 2021 monedas infinitas
-cara hack dream league soccer 2021 tanpa root
-dream league soccer 2021 hack apk android oyun club
-how to hack dream league soccer 2021 with lucky patcher
-telecharger dream league soccer 2021 mod apk argent illimité
-como instalar dream league soccer 2021 hackeado monedas infinitas
-dream league soccer 2021 mod apk revdl
-how to get unlimited coins in dream league soccer 2021 without human verification
-descargar e instalar dream league soccer 2021 hack monedas infinitas apk gratis
-how to hack players in dream league soccer 2021 ios
-como baixar e instalar dream league soccer 2021 com dinheiro infinito
-download game mod apk offline dream league soccer 2021 unlimited coins and gems
-como tener jugadores al maximo en dream league soccer 2021 sin hackearlo
-how to download and install dream league soccer 2021 mod apk unlimited money and players unlocked
-descargar e instalar el juego de futbol mas popular del mundo: Dream League Soccer 2021 Hack Monedas Infinitas Apk Gratis Para Android y iOS.

- -

To use these features, you just need to play the game as usual. You will see that everything is unlocked and available for you. You can also access the settings menu and adjust some options.

Benefits of Using Dream League Soccer 2021 Hack Apk

-

Using the Dream League Soccer 2021 hack apk can give you a lot of benefits that can make your gaming experience more fun and satisfying. Here are some of the benefits you can enjoy with the hack:

- -

Risks of Using Dream League Soccer 2021 Hack Apk

-

While using the Dream League Soccer 2021 hack apk can have many benefits, it also has some risks that you should be aware of. Here are some of the risks you may face with the hack:

-

Tips and Tricks to Play Dream League Soccer 2021 Like a Pro

-

If you want to play Dream League Soccer 2021 like a pro, you don't need to rely on hacks or cheats. You can improve your skills and tactics by following some tips and tricks that can help you win more matches and trophies. Here are some of them:

- -

Conclusion and FAQs

-

In conclusion, Dream League Soccer 2021 is a great soccer game that lets you create your own team, compete in various leagues, and customize your players, kits, stadiums, and more. However, if you want to enjoy the game without any limitations, you might want to hack it and get unlimited coins. In this article, we showed you how to download and install the Dream League Soccer 2021 hack apk file, how to use the hack features, what are the benefits and risks of using it, and some tips and tricks to play the game like a pro.

-

We hope you found this article helpful and informative. If you have any questions or comments about Dream League Soccer 2021 hack apk or the game itself, feel free to leave them below. We will try to answer them as soon as possible. Thank you for reading!

-

FAQs

-

Here are some of the most frequently asked questions about Dream League Soccer 2021 hack apk:

-
    -
  1. Is Dream League Soccer 2021 hack apk safe to use?: The answer depends on where you download the file from. If you download it from a reliable website that offers the latest version of the hack apk file with no malware or viruses, then it should be safe to use. However, if you download it from an unknown or suspicious source that may contain outdated or infected files, then it may not be safe to use.
  2. -
  3. Can I play Dream League Soccer 2021 online with the hack apk?: The answer is yes and no. Yes, you can play Dream League Soccer 2021 online with the hack apk file if you use a VPN or proxy service that can hide your IP address and location from the game server. This way, you can avoid being detected or banned by the game developers or Google. However, no, you cannot play Dream League Soccer 2021 online with the hack apk file if you do not use a VPN or proxy service, or if the game server or Google detects your hack and bans your account. Therefore, we recommend you to be careful and use the hack apk file at your own risk.
  4. -
  5. How can I update Dream League Soccer 2021 hack apk?: The answer is that you need to download and install the latest version of the hack apk file from the same website that you downloaded it from before. You should also check the website regularly for any updates or changes in the hack apk file. You should not update the game from the Google Play Store, as it may overwrite or remove the hack features.
  6. -
  7. How can I uninstall Dream League Soccer 2021 hack apk?: The answer is that you need to delete the hack apk file from your device and reinstall the original game from the Google Play Store. You should also clear your device cache and data to remove any traces of the hack apk file. You should also backup your game progress and data before uninstalling the hack apk file, as you may lose them in the process.
  8. -
  9. Where can I find more information about Dream League Soccer 2021?: The answer is that you can visit the official website of Dream League Soccer 2021, which is [dreamleaguesoccer.com]. You can also follow their social media accounts, such as [Facebook], [Twitter], [Instagram], and [YouTube], for more news, updates, tips, and tricks about the game.
  10. -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Efek Salju Green Screen dari Pixabay - Video Salju Gratis Tanpa Royalti.md b/spaces/1phancelerku/anime-remove-background/Download Efek Salju Green Screen dari Pixabay - Video Salju Gratis Tanpa Royalti.md deleted file mode 100644 index 1efc6302a2f3912cc09d6d84ab91445c1de9fc8e..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Efek Salju Green Screen dari Pixabay - Video Salju Gratis Tanpa Royalti.md +++ /dev/null @@ -1,82 +0,0 @@ - -

Download Snow Green Screen Effects: How to Make a Video with a Realistic Snowy Background

-

One of the things that can make your video look more lively and captivating is a realistic snowy background. However, not everyone gets the chance to film in a snowy place or to use special equipment to create artificial snow.

-

Don't worry: there is an easy way to make a video with a realistic snowy background without the hassle of finding a snowy location or faking the snow yourself.

That easy way is to use a snow green screen effect. A snow green screen effect is a clip you can download and use to replace your video's background with snow that looks real. With it, you can make your video look more interesting, dramatic, romantic, or whatever suits the theme you have in mind.

-

download efek salju green screen


DOWNLOAD 🆓 https://jinyurl.com/2uNQ49



-

What is a green screen and how does it work?

-

A green screen is a video-editing technique that lets you replace the background of a video with another image or effect. It is also known as chroma key, keying, or color keying.

-

It works by using green as the video's background color, because green is easy to distinguish from skin tones, clothing, and other objects. With video-editing software, you can then remove the green from the footage and replace it with whatever image or effect you want.
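For readers who prefer a command-line tool, the same idea can be sketched with FFmpeg's `chromakey` filter. This is only a minimal example: the file names (`myvideo.mp4`, `snow_green.mp4`) and the similarity/blend values are assumptions you would adjust for your own footage, and the GUI editors mentioned later in this article achieve the same result.

```bash
# Key the green out of a downloaded snow clip (snow_green.mp4) and
# overlay the remaining snowflakes on top of your own footage (myvideo.mp4).
ffmpeg -i myvideo.mp4 -i snow_green.mp4 \
  -filter_complex "[1:v]chromakey=color=0x00FF00:similarity=0.30:blend=0.10[snow];[0:v][snow]overlay=shortest=1[out]" \
  -map "[out]" -map 0:a? -c:v libx264 -c:a copy output.mp4
```

Raising `similarity` removes more of the green (at the risk of eating into the snowflakes), while `blend` softens the edges of the keyed area.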

-

For example, if you want a video with a snowy background, you can film yourself in front of a green cloth or green screen, then remove the green and replace it with the snow green screen effect you downloaded earlier.

-

Why do you need to download a snow green screen effect?

-

A snow green screen effect is one of the most popular effects for giving a video a realistic snowy background. It creates the impression that you are somewhere snowy, even though you are not.

-

A snow green screen effect can make your video look more interesting, dramatic, romantic, or suited to the theme you want. For instance, if you are making a video about a winter holiday, a wedding in the snow, or a snow-themed film scene, you can use it to add atmosphere and mood.

-

It can also save you production time and money, because you do not have to find a snowy location or use special equipment to make artificial snow. You only need to download the effect and use it in the video-editing software you already have.

-

download efek salju green screen gratis
-download efek salju green screen untuk video
-download efek salju green screen pixabay
-download efek salju green screen youtube
-download efek salju green screen hd
-download efek salju green screen 4k
-download efek salju green screen no watermark
-download efek salju green screen particle
-download efek salju green screen kunang kunang
-download efek salju green screen bintang
-download efek salju green screen 10 menit
-download efek salju green screen loop
-download efek salju green screen snowfall
-download efek salju green screen realistic
-download efek salju green screen animation
-download efek salju green screen overlay
-download efek salju green screen background
-download efek salju green screen chroma key
-download efek salju green screen royalty free
-download efek salju green screen footage
-download efek salju green screen bergerak
-download efek salju green screen slow motion
-download efek salju green screen terbaik
-download efek salju green screen keren
-download efek salju green screen indah
-download efek salju green screen musim dingin
-download efek salju green screen natal
-download efek salju green screen tahun baru
-download efek salju green screen tutorial
-download efek salju green screen premiere pro
-download efek salju green screen after effects
-download efek salju green screen kinemaster
-download efek salju green screen filmora
-download efek salju green screen sony vegas
-download efek salju green screen powerpoint
-download efek salju green screen zoom
-download efek salju green screen tiktok
-download efek salju green screen instagram
-download efek salju green screen facebook
-download efek salju green screen whatsapp

How do you download a snow green screen effect?

-

Many websites offer snow green screen effects for free or for a fee. Look for one that fits your needs and taste, and pay attention to the quality, resolution, duration, and license of the effect you download.

-

Some websites you can visit to download snow green screen effects are:

-

Pixabay

-

Pixabay is a website that offers more than 1,000 free green screen videos, including snow effects, that you can use in your projects without attribution. Search for them with the keyword "snow green screen". You can check the preview, description, resolution, duration, and license of each video before downloading it, and choose the file format you want, such as MP4, WEBM, or GIF.

-

YouTube

-

YouTube also hosts many free green screen videos, including snow effects, uploaded by creators. You can watch, download, and use these videos according to the license each creator grants. Search with the keyword "snow green screen", check the preview, description, resolution, duration, and license of each video before downloading, and pick the file format you want, such as MP4, WEBM, or GIF.

-

Videezy

-

Videezy offers more than 5,000 free green screen videos, including snow effects, that you can use in your projects as long as you credit Videezy. Search with the keyword "snow green screen", review each video's preview, description, resolution, duration, and license before downloading, and choose the file format you want, such as MP4, WEBM, or GIF.

How do you use a snow green screen effect?

-

Once you have downloaded the snow green screen effect you want, import it into the video-editing software you use, such as Adobe Premiere Pro, Final Cut Pro, iMovie, or another editor. Pick the software you are already familiar with or that fits your skills and budget.

-

Next, place the snow green screen effect on top of the video you want to edit, and adjust its size, position, duration, and opacity as you like, so the snow blends naturally and harmoniously with your footage.

-

Then use the chroma key (keying) feature to remove the green from the snow green screen effect so it looks like real snow in your video's background. Chroma keying detects and removes a chosen color from a video; enable it in your editing software and select green as the color to remove.

-

Finally, export the edited video and enjoy the result. Compare the before and after versions to see the difference the snow green screen effect makes, and share the video on social media, YouTube, or any other platform.

-

Conclusion

-

A snow green screen effect is a clip you can download and use to make a video with a realistic snowy background. It can make your video look more interesting, dramatic, romantic, or suited to the theme you want, and it can also save you production time and money.

-

Untuk download efek salju green screen, Anda bisa mengunjungi situs web seperti Pixabay, YouTube, atau Videezy. Untuk menggunakan efek salju green screen, Anda bisa mengimportnya ke software pengeditan video yang Anda gunakan, menempatkan efek salju green screen di atas video yang ingin Anda edit, menggunakan fitur chroma key atau keying untuk menghapus warna hijau dari efek salju green screen, dan mengekspor video hasil editan Anda.

-

Semoga artikel ini bermanfaat untuk Anda yang ingin membuat video dengan latar belakang salju yang realistis. Selamat mencoba!

-

FAQ

-

What is green screen?

-

Green screen is a video editing technique that lets you replace the background of a video with another image or effect.

-

What is a green screen snow effect?

-

A green screen snow effect is an effect you can download and use to replace the background of a video with snow that looks real.

-

How do you download a green screen snow effect?

-

You can download green screen snow effects from websites such as Pixabay, YouTube, or Videezy.

-

How do you use a green screen snow effect?

-

Import it into the video editing software you use, place it on top of the video you want to edit, use the chroma key (keying) feature to remove the green color, and export the edited video.

-

What are the benefits of using a green screen snow effect?

-

It can make your video look more attractive, dramatic, romantic, or fit whatever theme you want, and it also helps you save video production cost and time.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Super Fancy Pants Adventure The Ultimate Free-Running Platformer.md b/spaces/1phancelerku/anime-remove-background/Download Super Fancy Pants Adventure The Ultimate Free-Running Platformer.md deleted file mode 100644 index 02ed03e3f0122da4566bee6a04395f1e3a9c1b2c..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Super Fancy Pants Adventure The Ultimate Free-Running Platformer.md +++ /dev/null @@ -1,129 +0,0 @@ -
-

Download Super Fancy Pants Adventure

-

Are you looking for a wild free-running adventure with buttery smooth platforming and a slick fountain pen? If so, you should download Super Fancy Pants Adventure, the latest and greatest game in the Fancy Pants series. In this article, we will tell you what Super Fancy Pants Adventure is, how to download it, and why you should play it. Let's get started!

-

download super fancy pants adventure


DOWNLOAD ☆☆☆ https://jinyurl.com/2uNRzE



-

What is Super Fancy Pants Adventure?

-

Super Fancy Pants Adventure is a 2D platform game that follows the adventures of Fancy Pants Man, a cool stickman character who wears awesome and colorful pants. You must help him work his way through a myriad of cool levels and avoid various obstacles, monsters, and creatures on his way. You can also collect squiggles, hats, and pants to customize your look and unlock new abilities.

-

A brief history of the Fancy Pants series

-

The Fancy Pants Adventures series started over ten years ago by Brad Borne, an indie developer who wanted to redefine video game platforming by making speed and tight controls feel compatible. Over the years, he has honed his craft, turning his Fancy Pants games into a worldwide phenomenon with over 100 million plays and becoming one of the top games of all time on Kongregate. This newest version, Super Fancy Pants Adventure, is a culmination and a reimagining of the series into a full-fledged title.

-

The features and gameplay of Super Fancy Pants Adventure

-

Super Fancy Pants Adventure has many features that make it stand out from other platform games. Here are some of them:

- -

The gameplay of Super Fancy Pants Adventure is fun and reminiscent of retro platform titles such as Sonic. You can run, jump, slide, wall-jump, and bounce your way through the levels, using your momentum and timing to overcome obstacles and enemies. You can also use your ink pen as a weapon to slash enemies or launch yourself into the air. The game has a fluid and responsive control system that makes you feel like you are in control of every move.

-


-

How to download Super Fancy Pants Adventure?

-

Super Fancy Pants Adventure is available for different platforms, such as PC, browser, and mobile devices. Here are some options for downloading the game:

-

Download options for different platforms

-

Steam

-

If you want to play Super Fancy Pants Adventure on your PC, you can download it from Steam for $9.99. Steam is a digital distribution platform that allows you to buy and play games online. You will need to create a Steam account and install the Steam client on your PC before you can download the game. To download Super Fancy Pants Adventure from Steam, follow these steps:

-
    -
1. Go to [the Steam store page](^1^) for Super Fancy Pants Adventure.
2. Click on the green "Add to Cart" button and then click on the blue "Purchase for myself" button.
3. Enter your payment information and confirm your purchase.
4. Once the purchase is complete, you can find the game in your Steam library and click on the "Install" button.
5. Wait for the game to download and install, and then click on the "Play" button to launch the game.
-

Congratulations, you have successfully downloaded Super Fancy Pants Adventure from Steam!

-

CrazyGames

-

If you want to play Super Fancy Pants Adventure on your browser, you can download it from CrazyGames for free. CrazyGames is a website that hosts thousands of free online games that you can play without downloading or installing anything. You will need to have a modern browser that supports HTML5 and Flash to play the game. To download Super Fancy Pants Adventure from CrazyGames, follow these steps:

-
    -
1. Go to [the CrazyGames page] for Super Fancy Pants Adventure.
2. Click on the blue "Play" button and wait for the game to load.
3. Click on the green "Play" button again and choose your preferred language.
4. Enjoy playing Super Fancy Pants Adventure on your browser!
-

That's it, you have successfully downloaded Super Fancy Pants Adventure from CrazyGames!

-

Google Play

-

If you want to play Super Fancy Pants Adventure on your mobile device, you can download it from Google Play for $4.99. Google Play is a digital store that allows you to buy and download apps and games for your Android device. You will need to have a Google account and a compatible device to download the game. To download Super Fancy Pants Adventure from Google Play, follow these steps:

-
    -
1. Go to [the Google Play page] for Super Fancy Pants Adventure.
2. Tap on the green "Install" button and then tap on the blue "Buy" button.
3. Enter your payment information and confirm your purchase.
4. Wait for the game to download and install, and then tap on the "Open" button to launch the game.
-

Voila, you have successfully downloaded Super Fancy Pants Adventure from Google Play!

-

Tips and tricks for playing Super Fancy Pants Adventure

-

Now that you have downloaded Super Fancy Pants Adventure, you might be wondering how to play it like a pro. Don't worry, we have some tips and tricks for you that will help you master the game in no time. Here are some of them:

-

How to collect squiggles and unlock items

-

Squiggles are the currency of Super Fancy Pants Adventure. You can find them scattered throughout the levels, hidden in boxes, or dropped by enemies. You can use them to buy items from shops or unlock doors to challenge stages. The more squiggles you collect, the more items you can get. Some of the items you can get are:

- -

To collect squiggles and unlock items, you should explore every corner of the levels, break every box, defeat every enemy, and look for hidden doors. You should also replay levels to find more squiggles or items that you might have missed.

-

How to find secret doors and bonus levels

-

Secret doors are hidden entrances that lead to bonus levels. Bonus levels are extra stages that offer more challenges and rewards. They usually have a theme or a gimmick that makes them different from regular levels. For example, some bonus levels are underwater, some are in space, some are in black and white, etc. You can find secret doors by looking for clues in the environment, such as cracks in walls, signs, arrows, etc. You can also use your ink pen to draw on walls or floors to reveal hidden paths or switches. Some secret doors require a certain number of squiggles or a certain item to open.

-

How to use your ink pen as a weapon

-

Your ink pen is not only a tool for drawing, but also a weapon for fighting. You can use it to slash enemies with a swipe of your finger or mouse. You can also use it to launch yourself into the air by drawing a line under yourself and jumping on it. You can also use it to draw platforms, bridges, ramps, or walls to help you navigate the levels. Your ink pen has a limited amount of ink, so you need to refill it by collecting ink bottles or visiting ink stations. You can also upgrade your ink pen by buying different hats that change its properties.

-

Why you should download Super Fancy Pants Adventure?

-

Super Fancy Pants Adventure is a game that you should not miss if you love platform games. It has many benefits that make it worth playing, such as:

-

The benefits of playing Super Fancy Pants Adventure

-

It's fun and challenging

-

Super Fancy Pants Adventure is a game that will keep you entertained and engaged for hours. It has a variety of levels that offer different challenges and surprises. You will never get bored or frustrated, as the game has a balanced difficulty curve and a fair checkpoint system. You will also have fun discovering secrets, collecting items, and defeating enemies.

-

It's colorful and stylish

-

Super Fancy Pants Adventure is a game that will dazzle your eyes with its vibrant and unique art style. It has a hand-drawn aesthetic that gives it a charming and whimsical feel. It also has a dynamic and fluid animation that makes the game look alive and smooth. The game also has a catchy and upbeat soundtrack that matches the mood and tone of the game.

-

It's a culmination of a decade of work

-

Super Fancy Pants Adventure is a game that represents the passion and dedication of its creator, Brad Borne. It is the result of over ten years of work, improving and expanding on his previous Fancy Pants games. It is also the ultimate version of the game, with more content, features, and polish than ever before. It is a game that deserves your support and appreciation.

-

Conclusion

-

Super Fancy Pants Adventure is a game that you should download and play right now. It is a 2D platform game that follows the adventures of Fancy Pants Man, a cool stickman character who wears awesome and colorful pants. You can help him work his way through a myriad of cool levels and avoid various obstacles, monsters, and creatures on his way. You can also collect squiggles, hats, and pants to customize your look and unlock new abilities. You can use your ink pen as a tool and a weapon to draw and slash your way through the game. The game has many features that make it stand out from other platform games, such as its hand-drawn style, its fluid gameplay, its secret levels, its 60fps performance, and more. The game is available for different platforms, such as PC, browser, and mobile devices. You can download it from Steam, CrazyGames, or Google Play for a reasonable price. The game is fun and challenging, colorful and stylish, and a culmination of a decade of work. It is a game that you will not regret playing.

-

FAQs

-

Here are some frequently asked questions about Super Fancy Pants Adventure:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download and Edit Green Screen Video in KineMaster A Complete Guide.md b/spaces/1phancelerku/anime-remove-background/Download and Edit Green Screen Video in KineMaster A Complete Guide.md deleted file mode 100644 index b8753753281b7b4db68df4cfa131bde6804ae6ac..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download and Edit Green Screen Video in KineMaster A Complete Guide.md +++ /dev/null @@ -1,189 +0,0 @@ - -

How to Download Green Screen Video for KineMaster

-

If you are looking for a way to spice up your videos with some amazing visual effects, you might want to try using green screen video. Green screen video is a technique that allows you to replace the background of a video with another video or image of your choice. This way, you can create realistic or fantastical scenes that would otherwise be impossible or expensive to film.

-

download green screen video for kinemaster


Download: https://jinyurl.com/2uNTxZ



-

But how do you get green screen video for your projects? And how do you use it in your video editor? In this article, we will show you how to download free green screen video from three different websites and how to use it in KineMaster, one of the best mobile video editing apps. By following this guide, you will be able to create stunning videos with green screen effects in no time.

-

What is Green Screen Video and Why Use It?

-

Green screen video is a type of video that has a solid green background. The green color is chosen because it is different from most human skin tones and clothing colors, making it easier to isolate and remove. The process of removing the green background and replacing it with another video or image is called chroma keying or keying.

-

Green screen video is widely used in film and television production because it offers many advantages for creating visual effects. Some of the benefits of using green screen video are:

-


- -

As you can see, green screen video can help you create amazing videos with minimal effort and cost. All you need is a green screen video, a video editor that supports chroma keying, and some creativity.

-

What is KineMaster and How to Use It?

-

KineMaster is a powerful and easy-to-use video editing app for Android and iOS devices. It allows you to edit and share your videos with professional-quality tools and features. Some of the features of KineMaster are:

- -

To use KineMaster, you need to download and install the app from the Google Play Store or the App Store. Once you open the app, you will see a welcome screen that gives you the option to start a new project or edit an existing one. To start a new project, tap on the plus icon and choose the aspect ratio of your video. You can choose from 16:9, 9:16, or 1:1. Then, you will enter the editing interface where you can add and edit your media files. To add a media file, tap on the media icon on the top left corner and select the file from your device or from the KineMaster Asset Store. To edit a media file, tap on it and use the tools on the right side of the screen. You can also use the timeline at the bottom of the screen to arrange and trim your media files. To preview your video, tap on the play icon on the top right corner. To export your video, tap on the share icon on the top right corner and choose the resolution, frame rate, bitrate, and format of your video. Then, tap on export and wait for your video to be saved. You can also share your video directly to social media platforms from there.

-

How to Download Green Screen Video from Pixabay

-

Pixabay is a website that offers free stock photos, videos, illustrations, vectors, and music. You can use Pixabay to download free green screen video for your projects. Here are the steps to download green screen video from Pixabay:

-
    -
1. Go to Pixabay.com and create an account or log in if you already have one.
2. In the search bar at the top of the page, type "green screen" and hit enter.
3. You will see a list of green screen videos that match your search query. You can use the filters on the left side of the page to narrow down your results by category, orientation, duration, resolution, color, etc.
4. Once you find a green screen video that you like, click on it to open its details page.
5. On the details page, you will see a preview of the video, its resolution, duration, size, license type, tags, etc. You will also see a download button below the preview.
6. To download the green screen video, click on the download button and choose the resolution that you want. You will see a pop-up window asking you to confirm that you are not a robot. Check the box and click on download again (a scripted version of this step is sketched right after this list).
7. The green screen video will be downloaded to your device or browser's default download folder. You can then transfer it to your mobile device or use it directly in KineMaster if you are using a web browser on your mobile device.
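If you plan to grab several clips, the manual download step above can also be scripted. The sketch below uses Python's requests library and streams the file to disk; the URL is a placeholder for whatever direct video link the download button gives you, not a real Pixabay endpoint.

```python
import requests

# Placeholder URL: substitute the direct .mp4 link the site's download button points to.
url = "https://example.com/snow-green-screen-1080p.mp4"

response = requests.get(url, stream=True, timeout=30)
response.raise_for_status()

# Stream the clip to disk in chunks so large files never sit fully in memory.
with open("green_screen_clip.mp4", "wb") as f:
    for chunk in response.iter_content(chunk_size=1 << 20):
        f.write(chunk)
print("Saved green_screen_clip.mp4")
```

Whether a site allows direct scripted downloads varies, so check its license and terms of use first.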
-

How to Download Green Screen Video from Canva

-

Canva is a website that offers graphic design tools and templates for various purposes. You can use Canva to download free green screen video for your projects. Here are the steps to download green screen video from Canva:

-
    -
1. Go to Canva.com and create an account or log in if you already have one.
2. In the search bar at the top of the page, type "green screen" and hit enter.
3. You will see a list of green screen templates that match your search query. You can use the filters on the left side of the page to narrow down your results by category, size, color, etc.
4. Once you find a green screen template that you like, click on it to open it in the editor.
5. In the editor, you can customize the green screen template by adding or removing elements, changing colors, fonts, sizes, etc. You can also add your own media files by clicking on the upload icon on the left side of the screen.
6. When you are happy with your green screen template, click on the download icon on the top right corner of the screen.
7. To download the green screen video, choose MP4 as the file type and click on download. You will see a pop-up window showing the progress of your download.
8. The green screen video will be downloaded to your device or browser's default download folder. You can then transfer it to your mobile device or use it directly in KineMaster if you are using a web browser on your mobile device.
-

How to Download Green Screen Video from Pexels

-

Pexels is a website that offers high-quality stock photos and videos for free. You can use Pexels to download free green screen video for your projects. Here are the steps to download green screen video from Pexels:

-
    -
1. Go to Pexels.com and create an account or log in if you already have one.
2. In the search bar at the top of the page, type "green screen" and hit enter.
3. You will see a list of green screen videos that match your search query. You can use the filters on the top of the page to narrow down your results by orientation, size, duration, etc.
4. Once you find a green screen video that you like, click on it to open its details page.
5. On the details page, you will see a preview of the video, its resolution, duration, size, license type, tags, etc. You will also see a free download button below the preview.
6. To download the green screen video, click on the free download button and choose the resolution that you want. You will see a pop-up window asking you to credit the creator of the video. You can copy and paste the credit text or skip this step if you don't want to credit them.
7. The green screen video will be downloaded to your device or browser's default download folder. You can then transfer it to your mobile device or use it directly in KineMaster if you are using a web browser on your mobile device.
-

How to Use Green Screen Video in KineMaster

-

Now that you have downloaded some green screen videos for your projects, you can use them in KineMaster to create amazing visual effects. Here is how to use green screen video in KineMaster:

-

How to Import Green Screen Video in KineMaster

-

To import green screen video in KineMaster, follow these steps:

-
    -
1. Open KineMaster and start a new project or edit an existing one.
2. In the editing interface, tap on the media icon on the top left corner and select the green screen video from your device or from the KineMaster Asset Store.
3. The green screen video will be added to your project as a layer on top of your main video. You can drag and drop it to adjust its position and duration on the timeline (a scripted equivalent of this layering is sketched after this list).
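Outside of KineMaster, the same "green screen layer over a main video" arrangement can be scripted. The sketch below assumes the moviepy 1.x API and placeholder file names; KineMaster itself works differently, so treat this only as an illustration of the layering idea.

```python
from moviepy.editor import VideoFileClip, CompositeVideoClip
import moviepy.video.fx.all as vfx

# Placeholder clips: the main footage and the green screen overlay.
main_clip = VideoFileClip("main_video.mp4")
overlay = VideoFileClip("green_screen_effect.mp4")

# Key out the green backdrop of the overlay (threshold and softness are guesses to tune),
# then stack it on top of the main clip, trimmed to the main clip's duration.
keyed = overlay.fx(vfx.mask_color, color=[0, 255, 0], thr=100, s=5)
composite = CompositeVideoClip([main_clip, keyed.set_position("center")]).set_duration(main_clip.duration)

composite.write_videofile("composited.mp4", audio=True)
```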
-

How to Edit Green Screen Video in KineMaster

-

To edit green screen video in KineMaster, follow these steps:

-
    -
1. Tap on the green screen video layer and use the tools on the right side of the screen to edit the green screen video. You can adjust the settings, trim, crop, rotate, and resize the green screen video according to your needs.
2. To adjust the settings, tap on the settings icon and use the sliders to change the brightness, contrast, saturation, hue, and opacity of the green screen video.
3. To trim the green screen video, tap on the scissors icon and drag the handles to cut the unwanted parts of the video.
4. To crop the green screen video, tap on the crop icon and drag the corners to crop the video to your desired size.
5. To rotate the green screen video, tap on the rotate icon and use the circular slider to rotate the video clockwise or counterclockwise.
6. To resize the green screen video, tap on the resize icon and use the pinch gesture to zoom in or out of the video.
-

How to Apply Green Screen Video in KineMaster

-

To apply green screen video in KineMaster, follow these steps:

-
    -
1. Tap on the green screen video layer and tap on the chroma key icon on the right side of the screen. This will open a menu with various options for using the chroma key feature.
2. Turn on the chroma key switch to enable the feature. You will see that the green background of the video will disappear and you will see your main video behind it.
3. Use the color picker tool to select the exact color of the green background that you want to remove. You can also use the eyedropper tool to pick a color from the video itself.
4. Use the sliders to adjust the chroma key settings such as threshold, blending, detail, and spill. These settings help you fine-tune the green screen effect and make it more realistic and seamless (a rough numerical sketch of what these settings do follows this list).
5. You can also use the mask tool to erase or restore parts of the green screen video that you want to keep or remove. This will help you fix any errors or glitches in the green screen effect.
6. Once you are satisfied with your green screen effect, tap on done to apply it to your project. You can then preview your video and see how it looks with your new background.
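Those sliders map onto a simple idea: build a soft alpha matte from how "green" each pixel is, then pull any leftover green tint out of the kept pixels. The sketch below is a rough numerical illustration of that idea in Python/NumPy, not KineMaster's actual algorithm; the default threshold and softness values are assumptions to tune.

```python
import numpy as np

def soft_chroma_key(frame, background, threshold=0.35, softness=0.15):
    """Composite `frame` over `background` (both RGB float arrays in [0, 1]) by keying out green."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]

    # "Greenness": how far the green channel rises above the stronger of red and blue.
    greenness = g - np.maximum(r, b)

    # Soft alpha matte: pixels much greener than the threshold go to 0 (show background),
    # clearly non-green pixels go to 1 (keep foreground), with a gradual ramp in between.
    alpha = np.clip((threshold - greenness) / softness, 0.0, 1.0)

    # Spill suppression: clamp the green channel so kept pixels lose their green fringe.
    despilled = frame.copy()
    despilled[..., 1] = np.minimum(g, np.maximum(r, b))

    return alpha[..., None] * despilled + (1.0 - alpha[..., None]) * background
```

Raising the threshold makes more pixels count as background, while a larger softness value blends the boundary between kept and keyed pixels more gradually, which is essentially the trade-off you feel when dragging the threshold and blending sliders in the app.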
-

Tips and Tricks for Using Green Screen Video in KineMaster

-

Using green screen video in KineMaster can help you create amazing videos with visual effects. However, there are some tips and tricks that you should know to make your videos look even better. Here are some of them:

-

How to Choose the Right Green Screen Video for Your Project

-

Choosing the right green screen video for your project is important because it will affect how your final video will look. Here are some factors that you should consider when choosing a green screen video:

- -

How to Enhance the Green Screen Effect in KineMaster

-

Enhancing the green screen effect in KineMaster can help you make your videos look more professional and creative. Here are some ways to enhance the green screen effect in KineMaster:

- -

How to Avoid Common Mistakes When Using Green Screen Video in KineMaster

-

Avoiding common mistakes when using green screen video in KineMaster can help you avoid errors and glitches in your final video. Here are some common mistakes to avoid when using green screen video in KineMaster:

- -

Conclusion

-

Using green screen video in KineMaster is a great way to create amazing videos with visual effects. You can download free green screen videos from websites such as Pixabay, Canva, and Pexels and use them in KineMaster with the chroma key feature. You can also edit, apply, and enhance the green screen effect in KineMaster with various tools and features. By following this guide, you will be able to create stunning videos with green screen effects in no time.

-

If you found this article helpful, please share it with your friends and family who might be interested in learning how to use green screen video in KineMaster. Also, feel free to leave a comment below if you have any questions or feedback about this article. Thank you for reading and happy editing!

-

FAQs

-

Here are some frequently asked questions about using green screen video in KineMaster:

-

Q: How do I get rid of the watermark on my KineMaster videos?

-

A: To get rid of the watermark on your KineMaster videos, you need to purchase a premium subscription from the KineMaster app. The premium subscription will also give you access to more features and assets from the KineMaster Asset Store.

-

Q: How do I add audio to my KineMaster videos?

-

A: To add audio to your KineMaster videos, you need to tap on the audio icon on the top left corner and select the audio file from your device or from the KineMaster Asset Store. The audio file will be added to your project as a layer below your video layers. You can drag and drop it to adjust its position and duration on the timeline. You can also tap on it and use the tools on the right side of the screen to edit the audio file. You can adjust the volume, speed, pitch, fade in, fade out, and trim the audio file according to your needs.

-

Q: How do I export my KineMaster videos to my device or social media platforms?

-

A: To export your KineMaster videos to your device or social media platforms, you need to tap on the share icon on the top right corner of the screen and choose the resolution, frame rate, bitrate, and format of your video. Then, tap on export and wait for your video to be saved. You can also share your video directly to social media platforms such as YouTube, Facebook, Instagram, TikTok, and more from there.

-

Q: How do I find more green screen videos for my KineMaster projects?

-

A: To find more green screen videos for your KineMaster projects, you can use websites such as Pixabay, Canva, and Pexels that offer free stock videos. You can also use websites such as YouTube, Vimeo, and Dailymotion that have user-generated videos. You can also create your own green screen videos by using a green cloth or paper as a background and filming yourself or other objects in front of it.

-

Q: How do I learn more about using KineMaster and its features?

-

A: To learn more about using KineMaster and its features, you can visit the official website of KineMaster at KineMaster.com where you can find tutorials, tips, FAQs, and support. You can also join the KineMaster community on social media platforms such as Facebook, Instagram, Twitter, YouTube, and TikTok where you can interact with other users, share your videos, get feedback, and learn from others.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Dragon Ball Legends Mugen APK How to Download and Play the Best Dragon Ball Game Ever.md b/spaces/1phancelerku/anime-remove-background/Dragon Ball Legends Mugen APK How to Download and Play the Best Dragon Ball Game Ever.md deleted file mode 100644 index 445a51bbe2cbf29f969fc546f77bc68720b4bb44..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Dragon Ball Legends Mugen APK How to Download and Play the Best Dragon Ball Game Ever.md +++ /dev/null @@ -1,100 +0,0 @@ - -

Download Dragon Ball Legends Mugen APK: A Guide for Android Users

-

If you are a fan of the Dragon Ball anime and manga series, you might have heard of Dragon Ball Legends Mugen. It is a fan-made game that features hundreds of characters from the Dragon Ball universe, as well as original ones created by the developers. You can enjoy thrilling fights, stunning graphics, and customizable options in this game. In this article, we will show you how to download and install Dragon Ball Legends Mugen APK on your Android device, as well as why you should play it and some tips and tricks for playing it.

-

download dragon ball legends mugen apk


Download 🗸 https://jinyurl.com/2uNPIq



-

What is Dragon Ball Legends Mugen?

-

Dragon Ball Legends Mugen is a 2D fighting game that is based on the M.U.G.E.N engine, which allows users to create their own games and characters. The game was developed by a team of fans who wanted to create a tribute to the Dragon Ball franchise. The game has been in development since 2018, and has received several updates and improvements over time.

-

The game features over 500 characters from the Dragon Ball series, including Goku, Vegeta, Piccolo, Frieza, Cell, Majin Buu, Broly, Beerus, Jiren, and many more. You can also find original characters created by the developers, such as Goku Black Rose, Vegeta Super Saiyan Blue Evolution, Gogeta Super Saiyan 4, and others. Each character has their own unique moves, transformations, and abilities that reflect their personality and power level.

-

Features of Dragon Ball Legends Mugen

-

Some of the features that make Dragon Ball Legends Mugen stand out from other fan-made games are:

-


- -

How to download and install Dragon Ball Legends Mugen APK on Android

-

If you want to play Dragon Ball Legends Mugen on your Android device, you will need to download and install the APK file from a reliable source. Here are the steps to do so:

-
    -
1. Go to [this link](^1^) or [this link](^2^) to download the latest version of Dragon Ball Legends Mugen APK.
2. Once the download is complete, locate the file in your device's storage and tap on it to install it. You may need to enable the "Unknown sources" option in your device's settings to allow the installation.
3. After the installation is done, launch the game from your app drawer or home screen.
4. Enjoy playing Dragon Ball Legends Mugen on your Android device!
-

Why you should play Dragon Ball Legends Mugen

-

Dragon Ball Legends Mugen is not just another fan-made game. It is a game that offers a lot of fun and excitement for Dragon Ball fans and fighting game enthusiasts alike. Here are some reasons why you should play it:

-

Enjoy the epic battles of Dragon Ball characters

-

If you have ever dreamed of seeing your favorite Dragon Ball characters fight each other in a realistic way, then this game is for you . You can choose from a wide range of characters, each with their own strengths and weaknesses, and unleash their signature moves and transformations. You can also switch between different forms and fuse with other characters to gain an edge in battle. The game's physics and collision system make the fights more realistic and dynamic, as you can interact with the environment and cause damage to the stage.

-

Customize your own fighters and stages

-

If you want to unleash your creativity and make your own Dragon Ball characters and stages, you can do so in the custom mode. The game's editor allows you to modify the appearance, stats, moves, and sounds of any character in the game, or create a new one from scratch. You can also design your own stages using different backgrounds, music, and effects. You can then save your creations and share them with other players online.

-

Play offline or online with friends

-

Whether you want to play solo or with others, Dragon Ball Legends Mugen has you covered. You can play offline in various modes, such as Arcade, Survival, Team Battle, Training, or Watch Mode, where you can watch the computer-controlled characters fight each other. You can also play online with friends or strangers in Online Mode, where you can chat, challenge, and cooperate with other players. The game's netcode is optimized to ensure a smooth and lag-free experience.

-

Tips and tricks for playing Dragon Ball Legends Mugen

-

Dragon Ball Legends Mugen is a game that requires skill and strategy to master. Here are some tips and tricks that can help you improve your gameplay:

-

Learn the basic controls and combos

-

The game's controls are simple and intuitive, but you need to practice them to execute them properly. The game uses four buttons: A, B, C, and D. A is for light attacks, B is for medium attacks, C is for heavy attacks, and D is for special attacks. You can also use the directional keys to perform different actions, such as jumping, crouching, dashing, blocking, etc. You can combine these buttons and directions to perform various combos and moves. You can check the move list of each character in the pause menu or in the editor.

-

Use the power-ups and items wisely

-

The game features various power-ups and items that can help you in battle. These include health bars, energy bars, senzu beans, dragon balls, capsules, etc. You can find them randomly on the stage or by breaking objects. You can use them by pressing the D button near them. However, be careful not to waste them or let your opponent get them. Some items have negative effects as well, such as bombs or traps.

-

Experiment with different modes and settings

-

The game offers a lot of options and settings that can change the way you play. You can adjust the difficulty level, the number of rounds, the time limit, the damage ratio, the AI behavior, etc. You can also enable or disable certain features, such as transformations, fusions, power-ups, items, etc. You can also change the graphics quality, the sound volume, the screen size, etc. You can access these options from the main menu or the pause menu.

-

Conclusion

-

Dragon Ball Legends Mugen is a fan-made game that pays homage to the Dragon Ball franchise. It is a 2D fighting game that features over 500 characters from the series, as well as original ones created by the developers. You can enjoy epic battles , stunning graphics, and customizable options in this game. You can also play offline or online with friends, and create your own fighters and stages. The game is easy to download and install on your Android device, and it is regularly updated with new content and fixes. If you are looking for a fun and exciting game that celebrates the Dragon Ball universe, you should definitely try Dragon Ball Legends Mugen.

-

FAQs

-

Here are some frequently asked questions about Dragon Ball Legends Mugen:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Free Walking Car Tuning and Multiplayer Mode with Unlimited Money Car Parking Multiplayer Download.md b/spaces/1phancelerku/anime-remove-background/Enjoy Free Walking Car Tuning and Multiplayer Mode with Unlimited Money Car Parking Multiplayer Download.md deleted file mode 100644 index 87cd11c6372316e223547b88aa0e65187fe0e666..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Free Walking Car Tuning and Multiplayer Mode with Unlimited Money Car Parking Multiplayer Download.md +++ /dev/null @@ -1,76 +0,0 @@ - -

How to Download Unlimited Money Car Parking Multiplayer

-

Car Parking Multiplayer is a popular open-world simulation game that lets you drive, park, and customize various cars in realistic environments. However, if you want to enjoy the game without any limitations, you might want to download unlimited money car parking multiplayer mod apk. This mod apk will give you access to unlimited coins and diamonds, which you can use to buy new cars, upgrade your existing ones, and unlock new features. In this article, we will show you how to download unlimited money car parking multiplayer mod apk and what are the benefits of doing so.

-

download unlimited money car parking multiplayer


Download Zip > https://jinyurl.com/2uNUhj



-

What is Car Parking Multiplayer?

-

Car Parking Multiplayer is a game developed by olzhass, which is available for both Android and iOS devices . The game has more than 100 million downloads on Google Play Store and has a rating of 4.4 out of 5 stars. The game offers a realistic and immersive experience of driving, parking, and tuning cars in various scenarios. Some of the features of the game are:

- -

Benefits of unlimited money

-

While Car Parking Multiplayer is free to play, it also has some in-app purchases that require real money. For example, you need coins and diamonds to buy new cars, upgrade your existing ones, or unlock new features. However, if you download unlimited money car parking multiplayer mod apk, you will get unlimited coins and diamonds for free. This means that you can enjoy the game without any restrictions or limitations. You can buy any car you want, upgrade it to the max level, or customize it to your liking. You can also unlock all the features of the game such as drone mode, daily tasks and rewards, character customization, animations, and more.

-

How to download unlimited money mod apk?

-

If you are interested in downloading unlimited money car parking multiplayer mod apk, you need to follow these steps:

-


-

Steps to download and install the mod apk

-
    -
1. First, uninstall the original version of Car Parking Multiplayer from your device if you have it installed.
2. Second, find a reliable source that provides the mod apk file. You can search for "unlimited money mod apk" on Google or other search engines and choose one of the results. Make sure that the source is safe and trustworthy before downloading anything.
3. Third, download the mod apk file from the source and save it on your device.
4. Fourth, enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
5. Fifth, locate the mod apk file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
6. Sixth, launch the game from your app drawer or home screen and enjoy Car Parking Multiplayer with unlimited money.
-

Tips and warnings

- -

Conclusion

-

Car Parking Multiplayer is a fun and realistic game that lets you drive, park, and customize various cars in different environments. However, if you want to have more freedom and fun in the game, you can download unlimited money car parking multiplayer mod apk and get unlimited coins and diamonds for free. This way, you can buy any car you want, upgrade it to the max level, or unlock all the features of the game. To download unlimited money car parking multiplayer mod apk, you need to follow the steps we have provided in this article. However, you also need to be careful when downloading mod apk files from unknown sources, as they may contain viruses or malware that can harm your device or steal your personal information. You also need to be aware that downloading and using mod apk files may violate the terms and conditions of the game and may result in your account being banned or suspended. Use them at your own risk and discretion.

-

FAQs

-

Here are some frequently asked questions about unlimited money car parking multiplayer mod apk:

- - - - - - - -
QuestionAnswer
What is a mod apk file?A mod apk file is a modified version of an original apk file that has been altered to provide some extra features or benefits that are not available in the original version.
What is unlimited money car parking multiplayer mod apk?Unlimited money car parking multiplayer mod apk is a mod apk file that gives you unlimited coins and diamonds for free in Car Parking Multiplayer game.
How do I download unlimited money car parking multiplayer mod apk?You need to uninstall the original version of Car Parking Multiplayer from your device, find a reliable source that provides the mod apk file, download it from the source, enable the installation of apps from unknown sources on your device, locate the mod apk file on your device, and install it by following the instructions on the screen.
Is unlimited money car parking multiplayer mod apk safe to use?Not necessarily. Some mod apk files may contain viruses or malware that can harm your device or steal your personal information. Always scan the files with an antivirus app before installing them. Also, downloading and using mod apk files may violate the terms and conditions of the game and may result in your account being banned or suspended. Use them at your own risk and discretion.
What are the benefits of unlimited money car parking multiplayer mod apk?You can enjoy the game without any limitations or restrictions. You can buy any car you want, upgrade it to the max level, or customize it to your liking. You can also unlock all the features of the game such as drone mode, daily tasks and rewards, character customization, animations, and more.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/utils/dummy_paddle_and_paddlenlp_and_fastdeploy_objects.py b/spaces/1toTree/lora_test/ppdiffusers/utils/dummy_paddle_and_paddlenlp_and_fastdeploy_objects.py deleted file mode 100644 index 62ad793335bc1e34afafc5418ffdfd2b93eeae09..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/utils/dummy_paddle_and_paddlenlp_and_fastdeploy_objects.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# This file is autogenerated by the command `make fix-copies`, do not edit. -# flake8: noqa - -from . import DummyObject, requires_backends - - -class FastDeployStableDiffusionImg2ImgPipeline(metaclass=DummyObject): - _backends = ["paddle", "paddlenlp", "fastdeploy"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["paddle", "paddlenlp", "fastdeploy"]) - - -class FastDeployStableDiffusionInpaintPipeline(metaclass=DummyObject): - _backends = ["paddle", "paddlenlp", "fastdeploy"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["paddle", "paddlenlp", "fastdeploy"]) - - -class FastDeployStableDiffusionInpaintPipelineLegacy(metaclass=DummyObject): - _backends = ["paddle", "paddlenlp", "fastdeploy"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["paddle", "paddlenlp", "fastdeploy"]) - - -class FastDeployStableDiffusionMegaPipeline(metaclass=DummyObject): - _backends = ["paddle", "paddlenlp", "fastdeploy"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["paddle", "paddlenlp", "fastdeploy"]) - - -class FastDeployStableDiffusionPipeline(metaclass=DummyObject): - _backends = ["paddle", "paddlenlp", "fastdeploy"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, 
["paddle", "paddlenlp", "fastdeploy"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["paddle", "paddlenlp", "fastdeploy"]) diff --git a/spaces/1vash/demo-flask-docker-template/templates/index.html b/spaces/1vash/demo-flask-docker-template/templates/index.html deleted file mode 100644 index 40dc3fa916af6005e9ede7388890553f967af8e3..0000000000000000000000000000000000000000 --- a/spaces/1vash/demo-flask-docker-template/templates/index.html +++ /dev/null @@ -1,32 +0,0 @@ - - - - - - Flask API - - - - - -
-
-

🤗 Image Classification of MNIST digits 🤗

-

- Model: - 1vash/mnist_demo_model -

-
- - - -
-
-
- - \ No newline at end of file diff --git a/spaces/7hao/bingo/src/components/toaster.tsx b/spaces/7hao/bingo/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/AIConsultant/MusicGen/audiocraft/modules/activations.py b/spaces/AIConsultant/MusicGen/audiocraft/modules/activations.py deleted file mode 100644 index 2d83d7c4c2dc84c64b724eadbe06157507d4f20d..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/modules/activations.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch import Tensor -from typing import Union, Callable - - -class CustomGLU(nn.Module): - """Custom Gated Linear Unit activation. - Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half - of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation - function (i.e. sigmoid, swish, etc.). - - Args: - activation (nn.Module): The custom activation to apply in the Gated Linear Unit - dim (int): the dimension on which to split the input. Default: -1 - - Shape: - - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional - dimensions - - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2` - - Examples:: - >>> m = CustomGLU(nn.Sigmoid()) - >>> input = torch.randn(4, 2) - >>> output = m(input) - """ - def __init__(self, activation: nn.Module, dim: int = -1): - super(CustomGLU, self).__init__() - self.dim = dim - self.activation = activation - - def forward(self, x: Tensor): - assert x.shape[self.dim] % 2 == 0 # M = N / 2 - a, b = torch.chunk(x, 2, dim=self.dim) - return a * self.activation(b) - - -class SwiGLU(CustomGLU): - """SiLU Gated Linear Unit activation. - Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(SwiGLU, self).__init__(nn.SiLU(), dim) - - -class GeGLU(CustomGLU): - """GeLU Gated Linear Unit activation. - Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(GeGLU, self).__init__(nn.GELU(), dim) - - -class ReGLU(CustomGLU): - """ReLU Gated Linear Unit activation. - Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(ReGLU, self).__init__(nn.ReLU(), dim) - - -def get_activation_fn( - activation: Union[str, Callable[[Tensor], Tensor]] -) -> Union[str, Callable[[Tensor], Tensor]]: - """Helper function to map an activation string to the activation class. - If the supplied activation is not a string that is recognized, the activation is passed back. 
- - Args: - activation (str, or Callable[[Tensor], Tensor]): Activation to check - """ - if isinstance(activation, str): - if activation == "reglu": - return ReGLU() - elif activation == "geglu": - return GeGLU() - elif activation == "swiglu": - return SwiGLU() - return activation diff --git a/spaces/AIGText/GlyphControl/ldm/lr_scheduler.py b/spaces/AIGText/GlyphControl/ldm/lr_scheduler.py deleted file mode 100644 index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000 --- a/spaces/AIGText/GlyphControl/ldm/lr_scheduler.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. - self.verbosity_interval = verbosity_interval - - def schedule(self, n, **kwargs): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n, **kwargs): - return self.schedule(n,**kwargs) - - -class LambdaWarmUpCosineScheduler2: - """ - supports repeated iterations, configurable via lists - note: use with a base_lr of 1.0. - """ - def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0): - assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths) - self.lr_warm_up_steps = warm_up_steps - self.f_start = f_start - self.f_min = f_min - self.f_max = f_max - self.cycle_lengths = cycle_lengths - self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths)) - self.last_f = 0. 
- self.verbosity_interval = verbosity_interval - - def find_in_interval(self, n): - interval = 0 - for cl in self.cum_cycles[1:]: - if n <= cl: - return interval - interval += 1 - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle]) - t = min(t, 1.0) - f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * ( - 1 + np.cos(t * np.pi)) - self.last_f = f - return f - - def __call__(self, n, **kwargs): - return self.schedule(n, **kwargs) - - -class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2): - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle]) - self.last_f = f - return f - diff --git a/spaces/Aanisha/Image_to_story/README.md b/spaces/Aanisha/Image_to_story/README.md deleted file mode 100644 index e59c01f0ada83579d14e9f87d34673970577868a..0000000000000000000000000000000000000000 --- a/spaces/Aanisha/Image_to_story/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image_to_story -emoji: 🐨 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 2.8.10 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Fakeopen.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Fakeopen.py deleted file mode 100644 index 5a82bf2cc0736384563332a279f5fbcbb120f676..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Fakeopen.py +++ /dev/null @@ -1,54 +0,0 @@ -import os -import json -import requests -from typing import Dict, get_type_hints - -url = 'https://ai.fakeopen.com/v1/' -model = [ - 'gpt-3.5-turbo', - 'gpt-3.5-turbo-0613', - 'gpt-3.5-turbo-16k', - 'gpt-3.5-turbo-16k-0613', -] - -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - headers = { - 'Content-Type': 'application/json', - 'accept': 'text/event-stream', - 'Cache-Control': 'no-cache', - 'Proxy-Connection': 'keep-alive', - 'Authorization': f"Bearer {os.environ.get('FAKE_OPEN_KEY', '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')}", - } - - json_data = { - 'messages': messages, - 'temperature': 1.0, - 'model': model, - 'stream': stream, - } - - response = requests.post( - 'https://ai.fakeopen.com/v1/chat/completions', headers=headers, json=json_data, stream=True - ) - - for token in response.iter_lines(): - decoded = token.decode('utf-8') - if decoded == '[DONE]': - break - if decoded.startswith('data: '): - data_str = 
decoded.replace('data: ', '') - if data_str != '[DONE]': - data = json.loads(data_str) - if 'choices' in data and 'delta' in data['choices'][0] and 'content' in data['choices'][0]['delta']: - yield data['choices'][0]['delta']['content'] - - - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/AfrodreamsAI/afrodreams/README.md b/spaces/AfrodreamsAI/afrodreams/README.md deleted file mode 100644 index f43ff4e9472f5263dcb22ae8d21b9bb9fe788d1a..0000000000000000000000000000000000000000 --- a/spaces/AfrodreamsAI/afrodreams/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Afrodreams -emoji: 🌍 -colorFrom: red -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: Home.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/base.py b/spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/base.py deleted file mode 100644 index 81e7c58d22f1448c3016489ee66b7dd774e08bd0..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/memory_manipulator/base.py +++ /dev/null @@ -1,17 +0,0 @@ -from abc import abstractmethod -from typing import Dict, List - -from pydantic import BaseModel, Field - -from agentverse.message import Message - - -class BaseMemoryManipulator(BaseModel): - - @abstractmethod - def manipulate_memory(self) -> None: - pass - - @abstractmethod - def reset(self) -> None: - pass diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/bbcodetext/BBCodeText.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/bbcodetext/BBCodeText.js deleted file mode 100644 index dd09ba101dd0ce1fcb67f43e4fcfd0520f342bc9..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/bbcodetext/BBCodeText.js +++ /dev/null @@ -1,2 +0,0 @@ -import BBCodeText from '../../../plugins/bbcodetext.js'; -export default BBCodeText; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/Maker.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/Maker.js deleted file mode 100644 index ee17a0ee553c7be89751b81c273d6df4644c306d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/Maker.js +++ /dev/null @@ -1,80 +0,0 @@ -import ParseYAML from './utils/ParseYAML.js'; -import YAMLMake from './YAMLMake.js'; - -const IsPlainObject = Phaser.Utils.Objects.IsPlainObject; - -class Maker { - constructor(scene, styles, customBuilders) { - this.setScene(scene); - this.setStyles(styles); - this.setBuilders(customBuilders); - } - - setScene(scene) { - this.scene = scene; - return this; - } - - setStyles(styles) { - this.styles = ParseYAML(styles); - return this; - } - - addStyle(key, style) { - if (this.styles === undefined) { - this.styles = {}; - } - - if ((typeof (key) === 'string') && (style === undefined)) { - key = ParseYAML(key); - } - - if (IsPlainObject(key)) { - var styles = key; - for (key in styles) { - this.styles[key] = styles[key]; - } - } else { - this.styles[key] = ParseYAML(style); - } - - return this; - } - - clearStyles() { - this.setStyles(); - return this; - } 
- - setBuilders(customBuilders) { - this.customBuilders = customBuilders; - return this; - } - - addBuilder(key, customBuilder) { - if (this.customBuilders === undefined) { - this.customBuilders = {}; - } - - if (IsPlainObject(key)) { - var customBuilders = key; - for (key in customBuilders) { - this.customBuilders[key] = customBuilders[key]; - } - } else { - this.customBuilders[key] = customBuilder; - } - return this; - } - - clearBuilder() { - this.setBuilders(); - return this; - } - - make(data, view) { - return YAMLMake(this.scene, data, view, this.styles, this.customBuilders); - } -} - -export default Maker; \ No newline at end of file diff --git a/spaces/AhmedRashwan369/ChatGPT4/README.md b/spaces/AhmedRashwan369/ChatGPT4/README.md deleted file mode 100644 index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000 --- a/spaces/AhmedRashwan369/ChatGPT4/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chat-with-GPT4 -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ysharma/ChatGPT4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Aloento/9Nine-VITS/text/__init__.py b/spaces/Aloento/9Nine-VITS/text/__init__.py deleted file mode 100644 index b32c9b215e386e7c9b0da09afcc9645e73da2d4a..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-VITS/text/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/.github/PULL_REQUEST_TEMPLATE.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/.github/PULL_REQUEST_TEMPLATE.md deleted file mode 100644 index 05c2116453309cbda56cc82276cd8705f95bf4bc..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/.github/PULL_REQUEST_TEMPLATE.md +++ /dev/null @@ -1,60 +0,0 @@ -# What does this PR do? - - - - - -Fixes # (issue) - - -## Before submitting -- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). -- [ ] Did you read the [contributor guideline](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md)? -- [ ] Did you read our [philosophy doc](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md) (important for complex PRs)? -- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. -- [ ] Did you make sure to update the documentation with your changes? Here are the - [documentation guidelines](https://github.com/huggingface/diffusers/tree/main/docs), and - [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). -- [ ] Did you write any new necessary tests? - - -## Who can review? - -Anyone in the community is free to review the PR once the tests have passed. Feel free to tag -members/contributors who may be interested in your PR. - - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_latent_upscale.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_latent_upscale.py deleted file mode 100644 index ce55bddc4fe0aa0cea01b2b98788c8e9259cd22c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_latent_upscale.py +++ /dev/null @@ -1,295 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import gc -import random -import unittest - -import numpy as np -import torch -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer - -import diffusers -from diffusers import ( - AutoencoderKL, - EulerDiscreteScheduler, - StableDiffusionLatentUpscalePipeline, - StableDiffusionPipeline, - UNet2DConditionModel, -) -from diffusers.schedulers import KarrasDiffusionSchedulers -from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu - -from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS -from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin - - -enable_full_determinism() - - -def check_same_shape(tensor_list): - shapes = [tensor.shape for tensor in tensor_list] - return all(shape == shapes[0] for shape in shapes[1:]) - - -class StableDiffusionLatentUpscalePipelineFastTests( - PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase -): - pipeline_class = StableDiffusionLatentUpscalePipeline - params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - { - "height", - "width", - "cross_attention_kwargs", - "negative_prompt_embeds", - "prompt_embeds", - } - required_optional_params = PipelineTesterMixin.required_optional_params - {"num_images_per_prompt"} - batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS - image_params = frozenset( - [] - ) # TO-DO: update image_params once pipeline is refactored with VaeImageProcessor.preprocess - image_latents_params = frozenset([]) - - @property - def dummy_image(self): - batch_size = 1 - num_channels = 4 - sizes = (16, 16) - - image = floats_tensor((batch_size, num_channels) + sizes, rng=random.Random(0)).to(torch_device) - return image - - def get_dummy_components(self): - torch.manual_seed(0) - model = UNet2DConditionModel( - act_fn="gelu", - attention_head_dim=8, - norm_num_groups=None, - block_out_channels=[32, 32, 64, 64], - time_cond_proj_dim=160, - conv_in_kernel=1, - conv_out_kernel=1, - cross_attention_dim=32, - down_block_types=( - "KDownBlock2D", - "KCrossAttnDownBlock2D", - "KCrossAttnDownBlock2D", - "KCrossAttnDownBlock2D", - ), - in_channels=8, - mid_block_type=None, - only_cross_attention=False, - out_channels=5, - resnet_time_scale_shift="scale_shift", - time_embedding_type="fourier", - timestep_post_act="gelu", - up_block_types=("KCrossAttnUpBlock2D", "KCrossAttnUpBlock2D", "KCrossAttnUpBlock2D", "KUpBlock2D"), - ) - vae = AutoencoderKL( - block_out_channels=[32, 32, 64, 64], - in_channels=3, - out_channels=3, - down_block_types=[ - "DownEncoderBlock2D", - "DownEncoderBlock2D", - "DownEncoderBlock2D", - "DownEncoderBlock2D", - ], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - scheduler = EulerDiscreteScheduler(prediction_type="sample") - text_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - hidden_act="quick_gelu", - projection_dim=512, - ) - text_encoder = CLIPTextModel(text_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - components = { - "unet": model.eval(), - "vae": vae.eval(), - "scheduler": scheduler, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - } 
- - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "prompt": "A painting of a squirrel eating a burger", - "image": self.dummy_image.cpu(), - "generator": generator, - "num_inference_steps": 2, - "output_type": "numpy", - } - return inputs - - def test_inference(self): - device = "cpu" - - components = self.get_dummy_components() - pipe = self.pipeline_class(**components) - pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - self.assertEqual(image.shape, (1, 256, 256, 3)) - expected_slice = np.array( - [0.47222412, 0.41921633, 0.44717434, 0.46874192, 0.42588258, 0.46150726, 0.4677534, 0.45583832, 0.48579055] - ) - max_diff = np.abs(image_slice.flatten() - expected_slice).max() - self.assertLessEqual(max_diff, 1e-3) - - def test_attention_slicing_forward_pass(self): - super().test_attention_slicing_forward_pass(expected_max_diff=7e-3) - - def test_cpu_offload_forward_pass(self): - super().test_cpu_offload_forward_pass(expected_max_diff=3e-3) - - def test_dict_tuple_outputs_equivalent(self): - super().test_dict_tuple_outputs_equivalent(expected_max_difference=3e-3) - - def test_inference_batch_single_identical(self): - super().test_inference_batch_single_identical(expected_max_diff=7e-3) - - def test_pt_np_pil_outputs_equivalent(self): - super().test_pt_np_pil_outputs_equivalent(expected_max_diff=3e-3) - - def test_save_load_local(self): - super().test_save_load_local(expected_max_difference=3e-3) - - def test_save_load_optional_components(self): - super().test_save_load_optional_components(expected_max_difference=3e-3) - - def test_karras_schedulers_shape(self): - skip_schedulers = [ - "DDIMScheduler", - "DDPMScheduler", - "PNDMScheduler", - "HeunDiscreteScheduler", - "EulerAncestralDiscreteScheduler", - "KDPM2DiscreteScheduler", - "KDPM2AncestralDiscreteScheduler", - "DPMSolverSDEScheduler", - ] - components = self.get_dummy_components() - pipe = self.pipeline_class(**components) - - # make sure that PNDM does not need warm-up - pipe.scheduler.register_to_config(skip_prk_steps=True) - - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - inputs = self.get_dummy_inputs(torch_device) - inputs["num_inference_steps"] = 2 - - outputs = [] - for scheduler_enum in KarrasDiffusionSchedulers: - if scheduler_enum.name in skip_schedulers: - # no sigma schedulers are not supported - # no schedulers - continue - - scheduler_cls = getattr(diffusers, scheduler_enum.name) - pipe.scheduler = scheduler_cls.from_config(pipe.scheduler.config) - output = pipe(**inputs)[0] - outputs.append(output) - - assert check_same_shape(outputs) - - -@require_torch_gpu -@slow -class StableDiffusionLatentUpscalePipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_latent_upscaler_fp16(self): - generator = torch.manual_seed(33) - - pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) - pipe.to("cuda") - - upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( - "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16 - ) - upscaler.to("cuda") - - prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" - - low_res_latents = 
pipe(prompt, generator=generator, output_type="latent").images - - image = upscaler( - prompt=prompt, - image=low_res_latents, - num_inference_steps=20, - guidance_scale=0, - generator=generator, - output_type="np", - ).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/latent-upscaler/astronaut_1024.npy" - ) - assert np.abs((expected_image - image).mean()) < 5e-2 - - def test_latent_upscaler_fp16_image(self): - generator = torch.manual_seed(33) - - upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( - "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16 - ) - upscaler.to("cuda") - - prompt = "the temple of fire by Ross Tran and Gerardo Dottori, oil on canvas" - - low_res_img = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/latent-upscaler/fire_temple_512.png" - ) - - image = upscaler( - prompt=prompt, - image=low_res_img, - num_inference_steps=20, - guidance_scale=0, - generator=generator, - output_type="np", - ).images[0] - - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/latent-upscaler/fire_temple_1024.npy" - ) - assert np.abs((expected_image - image).max()) < 5e-2 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py deleted file mode 100644 index eab622b2e8bdc03c717b9b04d043da46f25a7cb3..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py +++ /dev/null @@ -1,14 +0,0 @@ -_base_ = './gfl_r50_fpn_mstrain_2x_coco.py' -model = dict( - pretrained='torchvision://resnet101', - backbone=dict( - type='ResNet', - depth=101, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True), - norm_eval=True, - style='pytorch')) diff --git a/spaces/AquaSuisei/ChatGPTXE/ChuanhuChatbot.py b/spaces/AquaSuisei/ChatGPTXE/ChuanhuChatbot.py deleted file mode 100644 index 45087fe651a3c6c6e7cb6ada9cfad93307c2f365..0000000000000000000000000000000000000000 --- a/spaces/AquaSuisei/ChatGPTXE/ChuanhuChatbot.py +++ /dev/null @@ -1,423 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from modules import config -from modules.config import * -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.chat_func import * -from modules.openai_func import get_usage - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - user_name = gr.State("") - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_api_key = gr.State(my_api_key) - user_question = gr.State("") - outputing = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(): - gr.HTML(title) - user_info = gr.Markdown(value="", elem_id="user_info") - gr.HTML('
Duplicate Space
') - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - # https://github.com/gradio-app/gradio/pull/3296 - def create_greeting(request: gr.Request): - if hasattr(request, "username") and request.username: # is not None or is not "" - logging.info(f"Get User Name: {request.username}") - return gr.Markdown.update(value=f"User: {request.username}"), request.username - else: - return gr.Markdown.update(value=f"User: default", visible=False), "" - demo.load(create_greeting, inputs=None, outputs=[user_info, user_name]) - - with gr.Row().style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(): - with gr.Column(scale=12): - user_input = gr.Textbox( - elem_id="user_input_tb", - show_label=False, placeholder="在这里输入" - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - cancelBtn = gr.Button("取消", variant="secondary", visible=False) - with gr.Row(): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delFirstBtn = gr.Button("🗑️ 删除最旧对话") - delLastBtn = gr.Button("🗑️ 删除最新对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"666", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - if multi_api_key: - usageTxt = gr.Markdown("多账号模式已开启,无需输入key,可直接开始对话", elem_id="usage_display") - else: - usageTxt = gr.Markdown("**发送消息** 或 **提交key** 以显示额度", elem_id="usage_display") - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - language_select_dropdown = gr.Dropdown( - label="选择回复语言(针对搜索&索引功能)", - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - two_column = gr.Checkbox(label="双栏pdf", value=advance_docs["pdf"].get("two_column", False)) - # TODO: 公式ocr - # formula_ocr = gr.Checkbox(label="识别公式", value=advance_docs["pdf"].get("formula_ocr", False)) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with 
gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - default_btn = gr.Button("🔙 恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - with gr.Accordion("网络设置", open=False, visible=False): - # 优先展示自定义的api_host - apihostTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API-Host...", - label="API-Host", - value=config.api_host or shared.API_HOST, - lines=1, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - gr.HTML(footer.format(versions=versions_html()), elem_id="footer") - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - user_api_key, - systemPromptTxt, - history, - user_question, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, history, status_display, token_count], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True - ) - - get_usage_args = dict( - fn=get_usage, inputs=[user_api_key], outputs=[usageTxt], show_progress=False - ) - - - # Chatbot - cancelBtn.click(cancel_outputing, [], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - emptyBtn.click(**reset_textbox_args) - - retryBtn.click(**start_outputing_args).then( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ).then(**end_outputing_args) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [history, token_count], - [history, token_count, status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - 
show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(sum(token_count.value[-4:])), - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - reduceTokenBtn.click(**get_usage_args) - - two_column.change(update_doc_config, [two_column], None) - - # ChatGPT - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]).then(**get_usage_args) - keyTxt.submit(**get_usage_args) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot, user_name], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot, user_name], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot, user_name], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot, user_name], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, [], [apihostTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_host, - [apihostTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "ChatGPT AquaSuisei" - -if __name__ == "__main__": - reload_javascript() - # if running in Docker - if dockerflag: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - auth=auth_list, - favicon_path="./assets/favicon.ico", - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - share=False, - favicon_path="./assets/favicon.ico", - ) - # if not running in Docker - else: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, - auth=auth_list, - favicon_path="./assets/favicon.ico", - inbrowser=True, - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, favicon_path="./assets/favicon.ico", inbrowser=True - ) # 改为 share=True 可以创建公开分享链接 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # 
demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/Arnaudding001/OpenAI_whisperLive/segments.py b/spaces/Arnaudding001/OpenAI_whisperLive/segments.py deleted file mode 100644 index ec2650dceade5d0b2022264f6419115eab085aea..0000000000000000000000000000000000000000 --- a/spaces/Arnaudding001/OpenAI_whisperLive/segments.py +++ /dev/null @@ -1,55 +0,0 @@ -from typing import Any, Dict, List - -import copy - -def merge_timestamps(timestamps: List[Dict[str, Any]], merge_window: float = 5, max_merge_size: float = 30, padding_left: float = 1, padding_right: float = 1): - result = [] - - if len(timestamps) == 0: - return result - if max_merge_size is None: - return timestamps - - if padding_left is None: - padding_left = 0 - if padding_right is None: - padding_right = 0 - - processed_time = 0 - current_segment = None - - for i in range(len(timestamps)): - next_segment = timestamps[i] - - delta = next_segment['start'] - processed_time - - # Note that segments can still be longer than the max merge size, they just won't be merged in that case - if current_segment is None or (merge_window is not None and delta > merge_window) \ - or next_segment['end'] - current_segment['start'] > max_merge_size: - # Finish the current segment - if current_segment is not None: - # Add right padding - finish_padding = min(padding_right, delta / 2) if delta < padding_left + padding_right else padding_right - current_segment['end'] += finish_padding - delta -= finish_padding - - result.append(current_segment) - - # Start a new segment - current_segment = copy.deepcopy(next_segment) - - # Pad the segment - current_segment['start'] = current_segment['start'] - min(padding_left, delta) - processed_time = current_segment['end'] - - else: - # Merge the segment - current_segment['end'] = next_segment['end'] - processed_time = current_segment['end'] - - # Add the last segment - if current_segment is not None: - current_segment['end'] += padding_right - result.append(current_segment) - - return result \ No newline at end of file diff --git a/spaces/Artgor/digit-draw-detect/st_app.py b/spaces/Artgor/digit-draw-detect/st_app.py deleted file mode 100644 index fbe6a3f84135f15f31ccedbf6bf5f11398a5dcfd..0000000000000000000000000000000000000000 --- a/spaces/Artgor/digit-draw-detect/st_app.py +++ /dev/null @@ -1,57 +0,0 @@ -import logging - -import numpy as np -import streamlit as st -from PIL import Image -from streamlit_drawable_canvas import st_canvas - -from src.ml_utils import predict, get_model, transforms -from src.utils import plot_img_with_rects, save_image - -st.title('Handwritten digit detector') -logging.info('Starting') - -col1, col2 = st.columns(2) - -with col1: - # Create a canvas component - canvas_result = st_canvas( - fill_color='#fff', - stroke_width=5, - stroke_color='#000', - background_color='#fff', - update_streamlit=True, - height=400, - width=400, - drawing_mode='freedraw', - key='canvas', - ) -with col2: - logging.info('canvas ready') - if canvas_result.image_data is not None: - # convert a drawn image into numpy array with RGB from a canvas image with RGBA - img = np.array(Image.fromarray(np.uint8(canvas_result.image_data)).convert('RGB')) - image = transforms(image=img)['image'] - logging.info('image augmented') - model = get_model() - logging.info('model ready') - pred = predict(model, image) - logging.info('prediction done') - - file_name = save_image(image.permute(1, 2, 0).numpy(), pred) - threshold = st.slider('Bbox probability slider', 
min_value=0.0, max_value=1.0, value=0.8) - - fig = plot_img_with_rects(image.permute(1, 2, 0).numpy(), pred, threshold, coef=192) - fig.savefig(f'{file_name}_temp.png') - image = Image.open(f'{file_name}_temp.png') - st.image(image) - -text = """ -This is a small app for handwritten digit recognition and recognition developed for fun. It uses a handwritten YOLOv3 model trained from scratch. -You can draw a digit (or whatever you want) and the model will try to understand what is it. -You can use the slider above to show bounding boxes with a probability higher than the threshold. -If you want to know how the app works in more detail, you are welcome to read "About" page. -Enjoy! :) -""" - -st.markdown(text, unsafe_allow_html=True) diff --git a/spaces/ArtyomKhyan/Detection/test.py b/spaces/ArtyomKhyan/Detection/test.py deleted file mode 100644 index 259d44444bcd3df5b6c8887e1df0aa30c6ac75c7..0000000000000000000000000000000000000000 --- a/spaces/ArtyomKhyan/Detection/test.py +++ /dev/null @@ -1,274 +0,0 @@ -import argparse -import json - -from utils import google_utils -from utils.datasets import * -from utils.utils import * - - -def test(data, - weights=None, - batch_size=16, - imgsz=640, - conf_thres=0.001, - iou_thres=0.6, # for NMS - save_json=False, - single_cls=False, - augment=False, - verbose=False, - model=None, - dataloader=None, - merge=False): - # Initialize/load model and set device - if model is None: - training = False - device = torch_utils.select_device(opt.device, batch_size=batch_size) - - # Remove previous - for f in glob.glob('test_batch*.jpg'): - os.remove(f) - - # Load model - google_utils.attempt_download(weights) - model = torch.load(weights, map_location=device)['model'].float() # load to FP32 - torch_utils.model_info(model) - model.fuse() - model.to(device) - imgsz = check_img_size(imgsz, s=model.model[-1].stride.max()) # check img_size - - # Multi-GPU disabled, incompatible with .half() https://github.com/ultralytics/yolov5/issues/99 - # if device.type != 'cpu' and torch.cuda.device_count() > 1: - # model = nn.DataParallel(model) - - else: # called by train.py - training = True - device = next(model.parameters()).device # get model device - - # Half - half = device.type != 'cpu' and torch.cuda.device_count() == 1 # half precision only supported on single-GPU - if half: - model.half() # to FP16 - - # Configure - model.eval() - with open(data) as f: - data = yaml.load(f, Loader=yaml.FullLoader) # model dict - nc = 1 if single_cls else int(data['nc']) # number of classes - iouv = torch.linspace(0.5, 0.95, 10).to(device) # iou vector for mAP@0.5:0.95 - niou = iouv.numel() - - # Dataloader - if dataloader is None: # not training - merge = opt.merge # use Merge NMS - img = torch.zeros((1, 3, imgsz, imgsz), device=device) # init img - _ = model(img.half() if half else img) if device.type != 'cpu' else None # run once - path = data['test'] if opt.task == 'test' else data['val'] # path to val/test images - dataloader = create_dataloader(path, imgsz, batch_size, int(max(model.stride)), opt, - hyp=None, augment=False, cache=False, pad=0.5, rect=True)[0] - - seen = 0 - names = model.names if hasattr(model, 'names') else model.module.names - coco91class = coco80_to_coco91_class() - s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Targets', 'P', 'R', 'mAP@.5', 'mAP@.5:.95') - p, r, f1, mp, mr, map50, map, t0, t1 = 0., 0., 0., 0., 0., 0., 0., 0., 0. 
- loss = torch.zeros(3, device=device) - jdict, stats, ap, ap_class = [], [], [], [] - for batch_i, (img, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)): - img = img.to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img /= 255.0 # 0 - 255 to 0.0 - 1.0 - targets = targets.to(device) - nb, _, height, width = img.shape # batch size, channels, height, width - whwh = torch.Tensor([width, height, width, height]).to(device) - - # Disable gradients - with torch.no_grad(): - # Run model - t = torch_utils.time_synchronized() - inf_out, train_out = model(img, augment=augment) # inference and training outputs - t0 += torch_utils.time_synchronized() - t - - # Compute loss - if training: # if model has loss hyperparameters - loss += compute_loss([x.float() for x in train_out], targets, model)[1][:3] # GIoU, obj, cls - - # Run NMS - t = torch_utils.time_synchronized() - output = non_max_suppression(inf_out, conf_thres=conf_thres, iou_thres=iou_thres, merge=merge) - t1 += torch_utils.time_synchronized() - t - - # Statistics per image - for si, pred in enumerate(output): - labels = targets[targets[:, 0] == si, 1:] - nl = len(labels) - tcls = labels[:, 0].tolist() if nl else [] # target class - seen += 1 - - if pred is None: - if nl: - stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls)) - continue - - # Append to text file - # with open('test.txt', 'a') as file: - # [file.write('%11.5g' * 7 % tuple(x) + '\n') for x in pred] - - # Clip boxes to image bounds - clip_coords(pred, (height, width)) - - # Append to pycocotools JSON dictionary - if save_json: - # [{"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}, ... - image_id = int(Path(paths[si]).stem.split('_')[-1]) - box = pred[:, :4].clone() # xyxy - scale_coords(img[si].shape[1:], box, shapes[si][0], shapes[si][1]) # to original shape - box = xyxy2xywh(box) # xywh - box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner - for p, b in zip(pred.tolist(), box.tolist()): - jdict.append({'image_id': image_id, - 'category_id': coco91class[int(p[5])], - 'bbox': [round(x, 3) for x in b], - 'score': round(p[4], 5)}) - - # Assign all predictions as incorrect - correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device=device) - if nl: - detected = [] # target indices - tcls_tensor = labels[:, 0] - - # target boxes - tbox = xywh2xyxy(labels[:, 1:5]) * whwh - - # Per target class - for cls in torch.unique(tcls_tensor): - ti = (cls == tcls_tensor).nonzero().view(-1) # prediction indices - pi = (cls == pred[:, 5]).nonzero().view(-1) # target indices - - # Search for detections - if pi.shape[0]: - # Prediction to target ious - ious, i = box_iou(pred[pi, :4], tbox[ti]).max(1) # best ious, indices - - # Append detections - for j in (ious > iouv[0]).nonzero(): - d = ti[i[j]] # detected target - if d not in detected: - detected.append(d) - correct[pi[j]] = ious[j] > iouv # iou_thres is 1xn - if len(detected) == nl: # all targets already located in image - break - - # Append statistics (correct, conf, pcls, tcls) - stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls)) - - # Plot images - if batch_i < 1: - f = 'test_batch%g_gt.jpg' % batch_i # filename - plot_images(img, targets, paths, f, names) # ground truth - f = 'test_batch%g_pred.jpg' % batch_i - plot_images(img, output_to_target(output, width, height), paths, f, names) # predictions - - # Compute statistics - stats = [np.concatenate(x, 0) for x in zip(*stats)] # to 
numpy - if len(stats): - p, r, ap, f1, ap_class = ap_per_class(*stats) - p, r, ap50, ap = p[:, 0], r[:, 0], ap[:, 0], ap.mean(1) # [P, R, AP@0.5, AP@0.5:0.95] - mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean() - nt = np.bincount(stats[3].astype(np.int64), minlength=nc) # number of targets per class - else: - nt = torch.zeros(1) - - # Print results - pf = '%20s' + '%12.3g' * 6 # print format - print(pf % ('all', seen, nt.sum(), mp, mr, map50, map)) - - # Print results per class - if verbose and nc > 1 and len(stats): - for i, c in enumerate(ap_class): - print(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i])) - - # Print speeds - t = tuple(x / seen * 1E3 for x in (t0, t1, t0 + t1)) + (imgsz, imgsz, batch_size) # tuple - if not training: - print('Speed: %.1f/%.1f/%.1f ms inference/NMS/total per %gx%g image at batch-size %g' % t) - - # Save JSON - if save_json and map50 and len(jdict): - imgIds = [int(Path(x).stem.split('_')[-1]) for x in dataloader.dataset.img_files] - f = 'detections_val2017_%s_results.json' % \ - (weights.split(os.sep)[-1].replace('.pt', '') if weights else '') # filename - print('\nCOCO mAP with pycocotools... saving %s...' % f) - with open(f, 'w') as file: - json.dump(jdict, file) - - try: - from pycocotools.coco import COCO - from pycocotools.cocoeval import COCOeval - - # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb - cocoGt = COCO(glob.glob('../coco/annotations/instances_val*.json')[0]) # initialize COCO ground truth api - cocoDt = cocoGt.loadRes(f) # initialize COCO pred api - - cocoEval = COCOeval(cocoGt, cocoDt, 'bbox') - cocoEval.params.imgIds = imgIds # image IDs to evaluate - cocoEval.evaluate() - cocoEval.accumulate() - cocoEval.summarize() - map, map50 = cocoEval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5) - except: - print('WARNING: pycocotools must be installed with numpy==1.17 to run correctly. ' - 'See https://github.com/cocodataset/cocoapi/issues/356') - - # Return results - model.float() # for training - maps = np.zeros(nc) + map - for i, c in enumerate(ap_class): - maps[c] = ap[i] - return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(prog='test.py') - parser.add_argument('--weights', type=str, default='weights/yolov5s.pt', help='model.pt path') - parser.add_argument('--data', type=str, default='data/coco128.yaml', help='*.data path') - parser.add_argument('--batch-size', type=int, default=32, help='size of each image batch') - parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.001, help='object confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.65, help='IOU threshold for NMS') - parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file') - parser.add_argument('--task', default='val', help="'val', 'test', 'study'") - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--merge', action='store_true', help='use Merge NMS') - parser.add_argument('--verbose', action='store_true', help='report mAP by class') - opt = parser.parse_args() - opt.save_json = opt.save_json or opt.data.endswith('coco.yaml') - opt.data = check_file(opt.data) # check file - print(opt) - - # task = 'val', 'test', 'study' - if opt.task in ['val', 'test']: # (default) run normally - test(opt.data, - opt.weights, - opt.batch_size, - opt.img_size, - opt.conf_thres, - opt.iou_thres, - opt.save_json, - opt.single_cls, - opt.augment, - opt.verbose) - - elif opt.task == 'study': # run over a range of settings and save/plot - for weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt', 'yolov3-spp.pt']: - f = 'study_%s_%s.txt' % (Path(opt.data).stem, Path(weights).stem) # filename to save to - x = list(range(352, 832, 64)) # x axis - y = [] # y axis - for i in x: # img-size - print('\nRunning %s point %s...' % (f, i)) - r, _, t = test(opt.data, weights, opt.batch_size, i, opt.conf_thres, opt.iou_thres, opt.save_json) - y.append(r + t) # results and times - np.savetxt(f, y, fmt='%10.4g') # save - os.system('zip -r study.zip study_*.txt') - # plot_study_txt(f, x) # plot diff --git a/spaces/Asifpa6/emotion-analyzer-app/README.md b/spaces/Asifpa6/emotion-analyzer-app/README.md deleted file mode 100644 index c4aafcf4caab44d78d5521cd9c9e4d1d4a4b2224..0000000000000000000000000000000000000000 --- a/spaces/Asifpa6/emotion-analyzer-app/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Emotion Analyzer App -emoji: 📉 -colorFrom: green -colorTo: pink -sdk: streamlit -sdk_version: 1.27.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/Balsa Supervivencia Ocano Nmada Dinero Ilimitado Apk.md b/spaces/Benson/text-generation/Examples/Balsa Supervivencia Ocano Nmada Dinero Ilimitado Apk.md deleted file mode 100644 index 6c74d1dcf7e624eb3b42270f091497c25848063d..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Balsa Supervivencia Ocano Nmada Dinero Ilimitado Apk.md +++ /dev/null @@ -1,48 +0,0 @@ -
-

Raft Survival: Ocean Nomad - An Exciting Adventure Game

-

Do you like survival games that test your skills and creativity? Do you want to experience a realistic and thrilling adventure on the ocean? If so, you should try Raft Survival: Ocean Nomad, a popular game with millions of fans around the world. In this article, we will tell you everything you need to know about this game and how you can download the unlimited money APK version to enjoy it without limitations.

-

Introduction

-

What is Raft Survival: Ocean Nomad?

-

Raft Survival: Ocean Nomad is a survival simulator game developed by TREASTONE LTD, available for Android and iOS devices. The game puts you in the role of a survivor stranded on a raft in the middle of the ocean. Your main goal is to survive as long as possible by finding food, water, and resources on the islands and in the water. You also have to craft and upgrade your raft and equipment, fight sharks and other enemies, and overcome various challenges and dangers.

-

raft survival ocean nomad unlimited money apk


Download: https://bltlly.com/2v6Lt7



-

Why download the unlimited money APK?

-

The game is free to play, but it has in-app purchases that can enhance your gameplay and make it easier. For example, you can buy coins, gems, chests, weapons, tools, skins, and more. However, these items can be quite expensive, and not everyone can afford them. That is why some players prefer to download the unlimited money APK version of the game, which gives them access to unlimited coins and gems. With this version, you can buy whatever you want without spending real money. You can also unlock all the features and items that are locked or restricted in the original game.

-

Features of Raft Survival: Ocean Nomad

-

Explore the vast ocean and islands

- -

Build and upgrade your raft and equipment

-

To survive in this game, you need to craft and upgrade your raft and equipment. You can use the materials you find on the islands or in the water to make your raft bigger and stronger. You can also add different structures and items to your raft, such as walls, floors, roofs, stairs, doors, windows, beds, chests, tables, chairs, and lamps. You can also craft various tools and weapons to help you on your survival journey, such as axes, hammers, knives, spears, bows, and guns.

-

-

Fight sharks and other enemies

-

The ocean is not a safe place. You will encounter many enemies that will try to attack you or destroy your raft. The most common enemy is the shark, which will constantly circle your raft and bite it. You have to fight it with your weapons or use traps and nets to catch it. You will also face other enemies such as pirates, mutants, zombies, and cannibals, depending on the island you visit. You have to be prepared for anything in this game.

-

Survive the harsh conditions and challenges

-

Besides the enemies, you also have to survive the harsh conditions and challenges of the ocean. You have to watch your hunger, thirst, health, and stamina levels, and eat and drink regularly. You also have to protect yourself from the weather, such as rain, storms, heat, and cold. You can use clothes, fire, shelter, and other means to stay warm and dry. You also have to deal with random events and quests that will test your skills and luck. For example, you may come across a shipwreck, a plane crash, a treasure map, or a message in a bottle.

-

How to download and install the Raft Survival: Ocean Nomad unlimited money APK?

-

Step 1: Download the APK file from a trusted source

- -

Download Raft Survival: Ocean Nomad unlimited money apk

-

Step 2: Enable unknown sources on your device

-

The next step is to enable unknown sources on your device. This allows you to install apps that do not come from the official Google Play Store. To do this, go to your device settings and look for the security or privacy options. Then find the option that says unknown sources, or allow installation of apps from unknown sources, and turn it on. You may see a warning message saying that installing apps from unknown sources can harm your device; acknowledge it and proceed.

-

Step 3: Install the APK file and launch the game

-

The final step is to install the APK file and launch the game. To do this, find the APK file on your device and tap it. You may see a pop-up asking you to confirm the installation. Just tap install and wait for the process to finish. Once the installation is done, you can start the game by tapping its icon on the home screen or in the app drawer. Now you can enjoy Raft Survival: Ocean Nomad with unlimited money.
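If the file was downloaded to a computer instead of the phone, a minimal sideloading sketch with adb might look like the following; the file name and package name are placeholders, not real values, and USB debugging is assumed to be enabled.

```bash
# Minimal sideloading sketch (file name and package name are hypothetical).
adb devices                                  # confirm the phone is detected
adb install -r raft-survival-unlimited.apk   # -r replaces any existing install
# Launch the newly installed app (hypothetical package name):
adb shell monkey -p com.example.raftsurvival -c android.intent.category.LAUNCHER 1
```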

-

Conclusion

-

Summary of the main points

-

Raft Survival: Ocean Nomad is a survival simulator game that lets you experience a realistic and exciting adventure on the ocean. You have to survive as long as possible by finding food, water, and resources on the islands and in the water. You also have to craft and upgrade your raft and gear, fight sharks and other enemies, and survive the ocean's harsh conditions and challenges. You can download the unlimited money APK version of the game to enjoy it without limitations.

-

Call to action

-

If you are ready to embark on this exciting adventure, then do not hesitate to download the Raft Survival: Ocean Nomad unlimited money APK today. You will not regret it. This game will keep you entertained for hours with its amazing graphics, gameplay, and features. Download it now and have fun!

- - -

I hope you enjoyed this article and found it useful. If you have any questions or feedback, leave a comment below. Thanks for reading, and happy gaming!

-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Corte De La Liga De Ftbol Mundial.md b/spaces/Benson/text-generation/Examples/Corte De La Liga De Ftbol Mundial.md deleted file mode 100644 index 144b7b7e8bfee54303dd93650d0a000ddfa3d500..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Corte De La Liga De Ftbol Mundial.md +++ /dev/null @@ -1,68 +0,0 @@ -
-

World Soccer League Hack Download: How to Unlock All Teams, Trophies, and Modes

-

If you are a fan of soccer games, you may have heard of World Soccer League, a popular game that offers realistic graphics, sound effects, and gameplay. You can choose from around 60 national teams, 60 clubs, and 2,000 players, and play in various modes such as exhibition, cup, league, and training. You can also enjoy splendid dribbling, thrilling shooting, and amazing skills in this game.

-

world soccer league hack


Download ··· https://bltlly.com/2v6JhY



-

However, if you want to experience everything World Soccer League has to offer, you may need to use a hack. A hack can help you unlock all the teams, players, trophies, achievements, modes, and features that are restricted or require in-app purchases. With a hack, you can play with any team you want, win any trophy you desire, and access any mode you like.

-

But before you download and install a hack for World Soccer League, you should be aware of the benefits and risks of using one. A hack can make your game more fun and exciting, but it can also expose your device to malware, viruses, or bans. Therefore, you should be careful when choosing a source for the hack file, and follow the instructions carefully when installing and using it.

-

How to download and install the World Soccer League hack

-

If you have decided to use a hack for World Soccer League, here are the steps you need to follow:

-

Step 1: Find a reliable source for the hack file

-

The first thing you have to do is find a website that offers a working and safe hack file for World Soccer League. You can search online for reviews, ratings, or comments from other users who have tried the hack. You can also check the date of the hack file's last update to make sure it is compatible with the latest version of the game.

- -

Step 2: Download the hack file to your device

-

Once you have found a reliable source for the hack file, you need to download it to your device. You can use your browser or a download manager app to do this.

Make sure you have enough storage space on your device before downloading the hack file. The file size may vary depending on the source, but it is usually around 40 MB.

-

-

Step 3: Enable unknown sources in your device settings

-

After downloading the hack file, you need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the official app store. To do this, go to your device settings, then security, then unknown sources, and turn it on. You may see a warning message that installing apps from unknown sources can harm your device, but you can ignore it if you trust the source of the hack file.

-

Step 4: Install the hack file and launch the game

-

The final step is to install the hack file and launch the game. To do this, find the hack file on your device and tap it. You may see a pop-up asking you to confirm the installation. Just tap install and wait for the process to finish. Once the installation is done, you can launch the game by tapping its icon on the home screen or in the app drawer. Now you can enjoy World Soccer League with unlimited money.
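A quick sanity check on the downloaded file before installing it can also help; a minimal sketch is below. The file name is a placeholder, and the checksum comparison only makes sense if the source publishes one.

```bash
# Hypothetical example: inspecting the download before installing (file name is made up).
ls -lh wsl-hack.apk        # confirm the size roughly matches the ~40 MB mentioned above
sha256sum wsl-hack.apk     # compare against the checksum the source publishes, if any
adb install wsl-hack.apk   # or copy it to the phone and tap it in a file manager
```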

-

How to use the World Soccer League hack

-

Now that you have installed the hack for World Soccer League, you can use it to unlock all the teams, trophies, modes, and features in the game. Here are some tips on how to use the hack:

-

How to unlock all teams and players

- -

How to unlock all trophies and achievements

-

With the hack, you can also unlock all the trophies and achievements in World Soccer League. You can earn trophies by winning matches, leagues, cups, or tournaments in different modes. You can also earn achievements by completing various tasks or challenges in the game. To unlock all trophies and achievements, go to the trophies or achievements screen and tap any trophy or achievement you want. You will see a message that says "Unlocked by World Soccer League Hack". You can then claim that trophy or achievement and see it on your profile.

-

How to unlock all modes and features

-

With the hack, you can also unlock all the modes and features in World Soccer League. You can play in various modes such as exhibition, cup, league, and training. You can also enjoy features such as replay, save/load, auto-play, edit mode, and more. To unlock all modes and features, go to the mode or feature screen and tap any mode or feature you want. You will see a message that says "Unlocked by World Soccer League Hack". You can then access that mode or feature and have fun.

-

Tips and tricks for playing World Soccer League with the hack

-

Playing World Soccer League with the hack can be very enjoyable and satisfying, but it can also be challenging and competitive. Here are some tips and tricks to help you play better and have more fun with the hack:

-

Tip 1: Choose the best team and formation for your style

-

With the hack, you can play with any team you want, but not all teams are equal in World Soccer League. Some teams have better stats, skills, or chemistry than others. Some teams also have different formations, tactics, or strategies that suit different play styles. Therefore, you should choose a team that matches your style of play.

- -

You should also choose a formation that suits your team and your style. You can choose from several formations such as 4-4-2, 4-3-3, 3-5-2, 5-3-2, and more. You can also customize your formation by changing your players' positions, roles, or instructions. You should experiment with different formations and find the one that works best for you.

-

Tip 2: Master the different skills and moves to beat defenders

-

With the hack, you can play with any player you want, but not all players are equal in World Soccer League. Some players have better skills, moves, or traits than others. Some players also have different abilities, styles, or specialties that make them stand out from the rest. Therefore, you should master the different skills and moves that each player can perform.

-

For example, if you play with a player like Messi or Neymar, you can use their high dribbling and skill stats to pull off amazing tricks and feints to beat defenders. If you play with a player like Ronaldo or Ibrahimovic, you can use their high shooting and power stats to unleash powerful shots and headers to score goals. If you play with a player like Modric or De Bruyne, you can use their high passing and vision stats to create chances and assists for your teammates.

-

You should also learn the different buttons and gestures that let you perform different skills and moves in the game. You can use the virtual joystick to control your player's direction and speed. You can use the A button to pass or tackle, the B button to shoot or slide, the C button to sprint or switch players, and the D button to perform a skill move or apply pressure. You can also swipe on the screen to perform various actions, such as a lob pass, chip shot, curved shot, long pass, through ball, and more. You should practice the different skills and moves in training mode or in easy matches before using them in harder matches.

-

Tip 3: Use the right buttons and timing to shoot and pass

- -

For example, if you play in exhibition mode or cup mode, you can use any button or gesture to shoot or pass as long as you aim well and time it correctly. However, if you play in league mode or tournament mode,

you need to use the right button or gesture to shoot or pass depending on the situation. For example, you should use the B button or swipe up for a powerful shot, the A button or swipe down for a low shot, the D button or swipe left or right for a curved shot, and so on. You also need to time your shot or pass according to the position, movement, and angle of your player and the ball.

-

If you play in training mode or edit mode, you can use any button or gesture to shoot or pass as long as you complete the task or challenge. However, if you play in replay mode or save/load mode, you need to use the right button or gesture to shoot or pass according to the recorded action. For example, you need to use the same button or gesture that was used in the original action to replay or load it.

-

Tip 4: Adjust the difficulty level and game speed to your preference

-

With the hack, you can also adjust World Soccer League's difficulty level and game speed to your preference. You can choose from four difficulty levels: easy, normal, hard, and very hard. You can also choose from three game speeds: slow, normal, and fast. You can change these settings in the options menu before starting a match.

-

The difficulty level and game speed affect how challenging and realistic the game is. The higher the difficulty level, the more skilled and intelligent the opponents will be. The higher the game speed, the more dynamic and fast-paced the game will be. You should choose a difficulty level and game speed that match your skill level and play style.

- -

Tip 5: Enjoy the game's realistic graphics and sound effects

-

With the hack, you can also enjoy World Soccer League's realistic graphics and sound effects. The game has high-quality graphics that show detailed players, stadiums, pitches, balls, and animations. The game also has realistic sound effects, including crowd noise, referee whistles, player voices, ball sounds, and more. You can adjust the graphics quality and sound volume in the options menu before starting a match.

-

World Soccer League's graphics and sound effects make the game more immersive and enjoyable. They create a sense of atmosphere and excitement that makes you feel as if you were watching or playing a real soccer match. You should appreciate World Soccer League's graphics and sound effects and have fun with them.

-

Conclusion

-

World Soccer League is a great game for soccer fans who want to experience realistic graphics, sound effects, and gameplay. However, if you want to unlock all the teams, trophies, modes, and features in World Soccer League, you may need to use a hack. A hack can help you access everything World Soccer League has to offer without spending money or time.

-

But before you use a hack for World Soccer League, you should be aware of the benefits and risks of using one. A hack can make your game more fun and exciting, but it can also expose your device to malware, viruses, or bans.

Therefore, you should be careful when choosing a source for the hack file, and follow the instructions carefully when installing and using it. You should also use the hack responsibly and ethically, and not abuse it or harm other players.

-

If you follow these tips and tricks, you can enjoy World Soccer League with the hack and have a lot of fun. You can play with any team, win any trophy, and access any mode you want. You can also improve your skills, challenge yourself, and immerse yourself in the game.

- -

Frequently asked questions

-

Here are some frequently asked questions about the World Soccer League hack:

Q1: Is the World Soccer League hack safe to use?
A1: The World Soccer League hack is safe to use if you download it from a reliable source and follow the installation and usage instructions. However, there is always a risk of malware, viruses, or bans when using a hack, so you should use it at your own risk and discretion.
Q2: Will the World Soccer League hack work on any device?
A2: The World Soccer League hack will work on any device that supports the game. The game is compatible with devices running Android 4.0 and up. However, the performance and quality of the game and the hack may vary depending on your device's specifications and settings.
Q3: Can I play World Soccer League online with the hack?
A3: The World Soccer League hack does not affect the game's online mode. You can still play online with other players who have the original or hacked version of the game. However, you should be respectful and fair when playing online, and not use the hack to cheat or harass other players.
Q4: Can I update World Soccer League after installing the hack?
A4: The World Soccer League hack may not work if you update the game after installing it. The update may overwrite or delete the hack file, or make it incompatible with the game. Therefore, you should avoid updating the game after installing the hack, or back up the hack file before updating.
Q5: Where can I find more information about World Soccer League?

-
-
\ No newline at end of file diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/types/AbortedGeneration.ts b/spaces/BetterAPI/BetterChat_new/src/lib/types/AbortedGeneration.ts deleted file mode 100644 index fe4c2824b4f3257bea71c3acacd65fcee0918188..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat_new/src/lib/types/AbortedGeneration.ts +++ /dev/null @@ -1,8 +0,0 @@ -// Ideally shouldn't be needed, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850 - -import type { Conversation } from "./Conversation"; -import type { Timestamps } from "./Timestamps"; - -export interface AbortedGeneration extends Timestamps { - conversationId: Conversation["_id"]; -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/hooks.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/hooks.py deleted file mode 100644 index d181ba2ec2e55d274897315887b78fbdca757da8..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/hooks.py +++ /dev/null @@ -1,33 +0,0 @@ -""" -requests.hooks -~~~~~~~~~~~~~~ - -This module provides the capabilities for the Requests hooks system. - -Available hooks: - -``response``: - The response generated from a Request. -""" -HOOKS = ["response"] - - -def default_hooks(): - return {event: [] for event in HOOKS} - - -# TODO: response is the only one - - -def dispatch_hook(key, hooks, hook_data, **kwargs): - """Dispatches a hook dictionary on a given piece of data.""" - hooks = hooks or {} - hooks = hooks.get(key) - if hooks: - if hasattr(hooks, "__call__"): - hooks = [hooks] - for hook in hooks: - _hook_data = hook(hook_data, **kwargs) - if _hook_data is not None: - hook_data = _hook_data - return hook_data diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/screen.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/screen.py deleted file mode 100644 index 7f416e1e799abfbf62382456020cc8e59e5cf01f..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/screen.py +++ /dev/null @@ -1,54 +0,0 @@ -from typing import Optional, TYPE_CHECKING - -from .segment import Segment -from .style import StyleType -from ._loop import loop_last - - -if TYPE_CHECKING: - from .console import ( - Console, - ConsoleOptions, - RenderResult, - RenderableType, - Group, - ) - - -class Screen: - """A renderable that fills the terminal screen and crops excess. - - Args: - renderable (RenderableType): Child renderable. - style (StyleType, optional): Optional background style. Defaults to None. 
- """ - - renderable: "RenderableType" - - def __init__( - self, - *renderables: "RenderableType", - style: Optional[StyleType] = None, - application_mode: bool = False, - ) -> None: - from pip._vendor.rich.console import Group - - self.renderable = Group(*renderables) - self.style = style - self.application_mode = application_mode - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - width, height = options.size - style = console.get_style(self.style) if self.style else None - render_options = options.update(width=width, height=height) - lines = console.render_lines( - self.renderable or "", render_options, style=style, pad=True - ) - lines = Segment.set_shape(lines, width, height, style=style) - new_line = Segment("\n\r") if self.application_mode else Segment.line() - for last, line in loop_last(lines): - yield from line - if not last: - yield new_line diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/config/_validate_pyproject/extra_validations.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/config/_validate_pyproject/extra_validations.py deleted file mode 100644 index 4130a421cfd7260d323b13cbd9d75ab8146e6030..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/config/_validate_pyproject/extra_validations.py +++ /dev/null @@ -1,36 +0,0 @@ -"""The purpose of this module is implement PEP 621 validations that are -difficult to express as a JSON Schema (or that are not supported by the current -JSON Schema library). -""" - -from typing import Mapping, TypeVar - -from .error_reporting import ValidationError - -T = TypeVar("T", bound=Mapping) - - -class RedefiningStaticFieldAsDynamic(ValidationError): - """According to PEP 621: - - Build back-ends MUST raise an error if the metadata specifies a field - statically as well as being listed in dynamic. - """ - - -def validate_project_dynamic(pyproject: T) -> T: - project_table = pyproject.get("project", {}) - dynamic = project_table.get("dynamic", []) - - for field in dynamic: - if field in project_table: - msg = f"You cannot provide a value for `project.{field}` and " - msg += "list it under `project.dynamic` at the same time" - name = f"data.project.{field}" - value = {field: project_table[field], "...": " # ...", "dynamic": dynamic} - raise RedefiningStaticFieldAsDynamic(msg, value, name, rule="PEP 621") - - return pyproject - - -EXTRA_VALIDATIONS = (validate_project_dynamic,) diff --git a/spaces/CVPR/LIVE/thrust/internal/scripts/eris_perf.py b/spaces/CVPR/LIVE/thrust/internal/scripts/eris_perf.py deleted file mode 100644 index 5804711019263fb31cdb7207fd13b3f03b26a758..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/internal/scripts/eris_perf.py +++ /dev/null @@ -1,189 +0,0 @@ -#! /usr/bin/env python -# -*- coding: utf-8 -*- - -############################################################################### -# Copyright (c) 2018 NVIDIA Corporation -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-############################################################################### - -from sys import exit - -from os.path import join, dirname, basename, realpath - -from csv import DictReader as csv_dict_reader - -from subprocess import Popen - -from argparse import ArgumentParser as argument_parser - -############################################################################### - -def printable_cmd(c): - """Converts a `list` of `str`s representing a shell command to a printable - `str`.""" - return " ".join(map(lambda e: '"' + str(e) + '"', c)) - -############################################################################### - -def print_file(p): - """Open the path `p` and print its contents to `stdout`.""" - print "********************************************************************************" - with open(p) as f: - for line in f: - print line, - print "********************************************************************************" - -############################################################################### - -ap = argument_parser( - description = ( - "CUDA Eris driver script: runs a benchmark suite multiple times, combines " - "the results, and outputs them in the CUDA Eris performance result format." - ) -) - -ap.add_argument( - "-b", "--benchmark", - help = ("The location of the benchmark suite executable to run."), - type = str, - default = join(dirname(realpath(__file__)), "bench"), - metavar = "R" -) - -ap.add_argument( - "-p", "--postprocess", - help = ("The location of the postprocessing script to run to combine the " - "results."), - type = str, - default = join(dirname(realpath(__file__)), "combine_benchmark_results.py"), - metavar = "R" -) - -ap.add_argument( - "-r", "--runs", - help = ("Run the benchmark suite `R` times.a),"), - type = int, default = 5, - metavar = "R" -) - -args = ap.parse_args() - -if args.runs <= 0: - print "ERROR: `--runs` must be greater than `0`." - ap.print_help() - exit(1) - -BENCHMARK_EXE = args.benchmark -BENCHMARK_NAME = basename(BENCHMARK_EXE) -POSTPROCESS_EXE = args.postprocess -OUTPUT_FILE_NAME = lambda i: BENCHMARK_NAME + "_" + str(i) + ".csv" -COMBINED_OUTPUT_FILE_NAME = BENCHMARK_NAME + "_combined.csv" - -############################################################################### - -print '&&&& RUNNING {0}'.format(BENCHMARK_NAME) - -print '#### RUNS {0}'.format(args.runs) - -############################################################################### - -print '#### CMD {0}'.format(BENCHMARK_EXE) - -for i in xrange(args.runs): - with open(OUTPUT_FILE_NAME(i), "w") as output_file: - print '#### RUN {0} OUTPUT -> {1}'.format(i, OUTPUT_FILE_NAME(i)) - - p = None - - try: - p = Popen(BENCHMARK_EXE, stdout = output_file, stderr = output_file) - p.communicate() - except OSError as ex: - print_file(OUTPUT_FILE_NAME(i)) - print '#### ERROR Caught OSError `{0}`.'.format(ex) - print '&&&& FAILED {0}'.format(BENCHMARK_NAME) - exit(-1) - - print_file(OUTPUT_FILE_NAME(i)) - - if p.returncode != 0: - print '#### ERROR Process exited with code {0}.'.format(p.returncode) - print '&&&& FAILED {0}'.format(BENCHMARK_NAME) - exit(p.returncode) - -############################################################################### - -post_cmd = [POSTPROCESS_EXE] - -# Add dependent variable options. 
-post_cmd += ["-dSTL Average Walltime,STL Walltime Uncertainty,STL Trials"] -post_cmd += ["-dSTL Average Throughput,STL Throughput Uncertainty,STL Trials"] -post_cmd += ["-dThrust Average Walltime,Thrust Walltime Uncertainty,Thrust Trials"] -post_cmd += ["-dThrust Average Throughput,Thrust Throughput Uncertainty,Thrust Trials"] - -post_cmd += [OUTPUT_FILE_NAME(i) for i in range(args.runs)] - -print '#### CMD {0}'.format(printable_cmd(post_cmd)) - -with open(COMBINED_OUTPUT_FILE_NAME, "w") as output_file: - p = None - - try: - p = Popen(post_cmd, stdout = output_file, stderr = output_file) - p.communicate() - except OSError as ex: - print_file(COMBINED_OUTPUT_FILE_NAME) - print '#### ERROR Caught OSError `{0}`.'.format(ex) - print '&&&& FAILED {0}'.format(BENCHMARK_NAME) - exit(-1) - - print_file(COMBINED_OUTPUT_FILE_NAME) - - if p.returncode != 0: - print '#### ERROR Process exited with code {0}.'.format(p.returncode) - print '&&&& FAILED {0}'.format(BENCHMARK_NAME) - exit(p.returncode) - - with open(COMBINED_OUTPUT_FILE_NAME) as input_file: - reader = csv_dict_reader(input_file) - - variable_units = reader.next() # Get units header row. - - distinguishing_variables = reader.fieldnames - - measured_variables = [ - ("STL Average Throughput", "+"), - ("Thrust Average Throughput", "+") - ] - - for record in reader: - for variable, directionality in measured_variables: - # Don't monitor regressions for STL implementations, nvbug 28980890: - if "STL" in variable: - continue - print "&&&& PERF {0}_{1}_{2}bit_{3}mib_{4} {5} {6}{7}".format( - record["Algorithm"], - record["Element Type"], - record["Element Size"], - record["Total Input Size"], - variable.replace(" ", "_").lower(), - record[variable], - directionality, - variable_units[variable] - ) - -############################################################################### - -print '&&&& PASSED {0}'.format(BENCHMARK_NAME) - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/execution_policy.h deleted file mode 100644 index ee49a60cb44a3183e6788f3d0b847204afc36380..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/execution_policy.h +++ /dev/null @@ -1,99 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. 
IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ - -#pragma once - -#include -#include -#include -#include - -#include - -#if THRUST_CPP_DIALECT >= 2011 - #include -#endif - -namespace thrust -{ - -namespace cuda_cub -{ - -struct tag; - -template -struct execution_policy; - -template <> -struct execution_policy : thrust::execution_policy -{ - typedef tag tag_type; -}; - -struct tag : execution_policy -, thrust::detail::allocator_aware_execution_policy -#if THRUST_CPP_DIALECT >= 2011 -, thrust::detail::dependencies_aware_execution_policy -#endif -{}; - -template -struct execution_policy : thrust::execution_policy -{ - typedef tag tag_type; - operator tag() const { return tag(); } -}; - -} // namespace cuda_cub - -namespace system { namespace cuda { namespace detail -{ - -using thrust::cuda_cub::tag; -using thrust::cuda_cub::execution_policy; - -}}} // namespace system::cuda::detail - -namespace system { namespace cuda -{ - -using thrust::cuda_cub::tag; -using thrust::cuda_cub::execution_policy; - -}} // namespace system::cuda - -namespace cuda -{ - -using thrust::cuda_cub::tag; -using thrust::cuda_cub::execution_policy; - -} // namespace cuda - -} // end namespace thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/scan_by_key.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/scan_by_key.h deleted file mode 100644 index 2b5fa36483c451bac93827b239c17fb7850e2ed1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/scan_by_key.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits scan_by_key -#include - diff --git a/spaces/CVPR/WALT/mmdet/utils/util_random.py b/spaces/CVPR/WALT/mmdet/utils/util_random.py deleted file mode 100644 index e313e9947bb3232a9458878fd219e1594ab93d57..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/utils/util_random.py +++ /dev/null @@ -1,33 +0,0 @@ -"""Helpers for random number generators.""" -import numpy as np - - -def ensure_rng(rng=None): - """Coerces input into a random number generator. - - If the input is None, then a global random state is returned. - - If the input is a numeric value, then that is used as a seed to construct a - random state. Otherwise the input is returned as-is. - - Adapted from [1]_. 
- - Args: - rng (int | numpy.random.RandomState | None): - if None, then defaults to the global rng. Otherwise this can be an - integer or a RandomState class - Returns: - (numpy.random.RandomState) : rng - - a numpy random number generator - - References: - .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501 - """ - - if rng is None: - rng = np.random.mtrand._rand - elif isinstance(rng, int): - rng = np.random.RandomState(rng) - else: - rng = rng - return rng diff --git a/spaces/CVPR/lama-example/saicinpainting/evaluation/vis.py b/spaces/CVPR/lama-example/saicinpainting/evaluation/vis.py deleted file mode 100644 index c2910b4ef8c61efee72dabd0531a9b669ec8bf98..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/evaluation/vis.py +++ /dev/null @@ -1,37 +0,0 @@ -import numpy as np -from skimage import io -from skimage.segmentation import mark_boundaries - - -def save_item_for_vis(item, out_file): - mask = item['mask'] > 0.5 - if mask.ndim == 3: - mask = mask[0] - img = mark_boundaries(np.transpose(item['image'], (1, 2, 0)), - mask, - color=(1., 0., 0.), - outline_color=(1., 1., 1.), - mode='thick') - - if 'inpainted' in item: - inp_img = mark_boundaries(np.transpose(item['inpainted'], (1, 2, 0)), - mask, - color=(1., 0., 0.), - mode='outer') - img = np.concatenate((img, inp_img), axis=1) - - img = np.clip(img * 255, 0, 255).astype('uint8') - io.imsave(out_file, img) - - -def save_mask_for_sidebyside(item, out_file): - mask = item['mask']# > 0.5 - if mask.ndim == 3: - mask = mask[0] - mask = np.clip(mask * 255, 0, 255).astype('uint8') - io.imsave(out_file, mask) - -def save_img_for_sidebyside(item, out_file): - img = np.transpose(item['image'], (1, 2, 0)) - img = np.clip(img * 255, 0, 255).astype('uint8') - io.imsave(out_file, img) \ No newline at end of file diff --git a/spaces/Codecooker/rvcapi/src/my_utils.py b/spaces/Codecooker/rvcapi/src/my_utils.py deleted file mode 100644 index a5258394b8ae5385daa665ab6ba6380507d4798a..0000000000000000000000000000000000000000 --- a/spaces/Codecooker/rvcapi/src/my_utils.py +++ /dev/null @@ -1,21 +0,0 @@ -import ffmpeg -import numpy as np - - -def load_audio(file, sr): - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 
- file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - return np.frombuffer(out, np.float32).flatten() diff --git a/spaces/Cong723/gpt-academic-public/README.md b/spaces/Cong723/gpt-academic-public/README.md deleted file mode 100644 index 6c9da02b60aa81cf11de4a595dde2e2e44c0265d..0000000000000000000000000000000000000000 --- a/spaces/Cong723/gpt-academic-public/README.md +++ /dev/null @@ -1,312 +0,0 @@ ---- -title: academic-chatgpt -emoji: 😻 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.28.3 -python_version: 3.11 -app_file: main.py -pinned: false -duplicated_from: qingxu98/gpt-academic ---- - -# ChatGPT 学术优化 -> **Note** -> -> 安装依赖时,请严格选择requirements.txt中**指定的版本**。 -> -> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/` -> - -# GPT 学术优化 (GPT Academic) - -**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发pull requests** - -If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself. - -> **Note** -> -> 1.请注意只有**红颜色**标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR! -> -> 2.本项目中每个文件的功能都在自译解[`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。 -> -> 3.本项目兼容并鼓励尝试国产大语言模型chatglm和RWKV, 盘古等等。已支持OpenAI和API2D的api-key共存,可在配置文件中填写如`API_KEY="openai-key1,openai-key2,api2d-key3"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。 - -
- -功能 | 描述 ---- | --- -一键润色 | 支持一键润色、一键查找论文语法错误 -一键中英互译 | 一键中英互译 -一键代码解释 | 显示代码、解释代码、生成代码、给代码加注释 -[自定义快捷键](https://www.bilibili.com/video/BV14s4y1E7jN) | 支持自定义快捷键 -模块化设计 | 支持自定义强大的[函数插件](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions),插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) -[自我程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] [一键读懂](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)本项目的源代码 -[程序剖析](https://www.bilibili.com/video/BV1cj411A7VW) | [函数插件] 一键可以剖析其他Python/C/C++/Java/Lua/...项目树 -读论文、[翻译](https://www.bilibili.com/video/BV1KT411x7Wn)论文 | [函数插件] 一键解读latex/pdf论文全文并生成摘要 -Latex全文[翻译](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[润色](https://www.bilibili.com/video/BV1FT411H7c5/) | [函数插件] 一键翻译或润色latex论文 -批量注释生成 | [函数插件] 一键批量生成函数注释 -Markdown[中英互译](https://www.bilibili.com/video/BV1yo4y157jV/) | [函数插件] 看到上面5种语言的[README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md)了吗? -chat分析报告生成 | [函数插件] 运行后自动生成总结汇报 -[PDF论文全文翻译功能](https://www.bilibili.com/video/BV1KT411x7Wn) | [函数插件] PDF论文提取题目&摘要+翻译全文(多线程) -[Arxiv小助手](https://www.bilibili.com/video/BV1LM4y1279X) | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF -[谷歌学术统合小助手](https://www.bilibili.com/video/BV19L411U7ia) | [函数插件] 给定任意谷歌学术搜索页面URL,让gpt帮你[写relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) -互联网信息聚合+GPT | [函数插件] 一键[让GPT先从互联网获取信息](https://www.bilibili.com/video/BV1om4y127ck),再回答问题,让信息永不过时 -公式/图片/表格显示 | 可以同时显示公式的[tex形式和渲染形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png),支持公式、代码高亮 -多线程函数插件支持 | 支持多线调用chatgpt,一键处理[海量文本](https://www.bilibili.com/video/BV1FT411H7c5/)或程序 -启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题 -[多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持,[API2D](https://api2d.com/)接口支持 | 同时被GPT3.5、GPT4和[清华ChatGLM](https://github.com/THUDM/ChatGLM-6B)伺候的感觉一定会很不错吧? -更多LLM模型接入,支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 新加入Newbing测试接口(新必应AI) -…… | …… - -
- - -- 新界面(修改`config.py`中的LAYOUT选项即可实现“左右布局”和“上下布局”的切换) -
- -
- - -- 所有按钮都通过读取functional.py动态生成,可随意加自定义功能,解放粘贴板 -
- -
- -- 润色/纠错 -
- -
- -- 如果输出包含公式,会同时以tex形式和渲染形式显示,方便复制和阅读 -
- -
- -- 懒得看项目代码?整个工程直接给chatgpt炫嘴里 -
- -
- -- 多种大语言模型混合调用(ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) -
- -
- ---- - -## 安装-方法1:直接运行 (Windows, Linux or MacOS) - -1. 下载项目 -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. 配置API_KEY - -在`config.py`中,配置API KEY等设置,[特殊网络环境设置](https://github.com/binary-husky/gpt_academic/issues/1) 。 - -(P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控,可以让您的隐私信息更加安全。) - - -3. 安装依赖 -```sh -# (选择I: 如熟悉python)(python版本3.9以上,越新越好) -python -m pip install -r requirements.txt -# 备注:使用官方pip源或者阿里pip源,其他pip源(如一些大学的pip)有可能出问题,临时换源方法:python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ - -# (选择II: 如不熟悉python)使用anaconda,步骤也是类似的: -# (II-1)conda create -n gptac_venv python=3.11 -# (II-2)conda activate gptac_venv -# (II-3)python -m pip install -r requirements.txt -``` - -如果需要支持清华ChatGLM后端,需要额外安装更多依赖(前提条件:熟悉python + 电脑配置够强): -```sh -python -m pip install -r request_llm/requirements_chatglm.txt - -# 备注:如果遇到"Call ChatGLM fail 不能正常加载ChatGLM的参数" 错误,参考如下: -# 1:以上默认安装的为torch+cpu版,使用cuda需要卸载torch重新安装torch+cuda -# 2:如因本机配置不够无法加载模型,可以修改request_llm/bridge_chatglm.py中的模型精度, 将 AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) 都修改为 AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True) -``` - -4. 运行 -```sh -python main.py -``` - -5. 测试函数插件 -``` -- 测试函数插件模板函数(要求gpt回答历史上的今天发生了什么),您可以根据此函数为模板,实现更复杂的功能 - 点击 "[函数插件模板Demo] 历史上的今天" -``` - -## 安装-方法2:使用Docker - -1. 仅ChatGPT(推荐大多数人选择) - -``` sh -# 下载项目 -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -# 配置 “Proxy”, “API_KEY” 以及 “WEB_PORT” (例如50923) 等 -用任意文本编辑器编辑 config.py -# 安装 -docker build -t gpt-academic . -#(最后一步-选择1)在Linux环境下,用`--net=host`更方便快捷 -docker run --rm -it --net=host gpt-academic -#(最后一步-选择2)在macOS/windows环境下,只能用-p选项将容器上的端口(例如50923)暴露给主机上的端口 -docker run --rm -it -p 50923:50923 gpt-academic -``` - -2. ChatGPT+ChatGLM(需要对Docker熟悉 + 读懂Dockerfile + 电脑配置够强) - -``` sh -# 修改Dockerfile -cd docs && nano Dockerfile+ChatGLM -# 构建 (Dockerfile+ChatGLM在docs路径下,请先cd docs) -docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM . -# 运行 (1) 直接运行: -docker run --rm -it --net=host --gpus=all gpt-academic -# 运行 (2) 我想运行之前进容器做一些调整: -docker run --rm -it --net=host --gpus=all gpt-academic bash -``` - -3. ChatGPT + LLAMA + 盘古 + RWKV(需要精通Docker) -``` sh -1. 修改docker-compose.yml,删除方案一和方案二,保留方案三(基于jittor) -2. 修改docker-compose.yml中方案三的配置,参考其中注释即可 -3. 终端运行 docker-compose up -``` - - -## 安装-方法3:其他部署姿势 - -1. 如何使用反代URL/微软云AzureAPI -按照`config.py`中的说明配置API_URL_REDIRECT即可。 - -2. 远程云服务器部署(需要云服务器知识与经验) -请访问[部署wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. 使用WSL2(Windows Subsystem for Linux 子系统) -请访问[部署wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. 如何在二级网址(如`http://localhost/subpath`)下运行 -请访问[FastAPI运行说明](docs/WithFastapi.md) - -5. 使用docker-compose运行 -请阅读docker-compose.yml后,按照其中的提示操作即可 ---- - -## 自定义新的便捷按钮 / 自定义函数插件 - -1. 
自定义新的便捷按钮(学术快捷键) -任意文本编辑器打开`core_functional.py`,添加条目如下,然后重启程序即可。(如果按钮已经添加成功并可见,那么前缀、后缀都支持热修改,无需重启程序即可生效。) -例如 -``` -"超级英译中": { - # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等 - "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n", - - # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来。 - "Suffix": "", -}, -``` -
- -
- -2. 自定义函数插件 - -编写强大的函数插件来执行任何你想得到的和想不到的任务。 -本项目的插件编写、调试难度很低,只要您具备一定的python基础知识,就可以仿照我们提供的模板实现自己的插件功能。 -详情请参考[函数插件指南](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)。 - ---- - -## 其他功能说明 - -1. 对话保存功能。在函数插件区调用 `保存当前的对话` 即可将当前对话保存为可读+可复原的html文件, -另外在函数插件区(下拉菜单)调用 `载入对话历史存档` ,即可还原之前的会话。 -Tip:不指定文件直接点击 `载入对话历史存档` 可以查看历史html存档缓存,点击 `删除所有本地对话历史记录` 可以删除所有html存档缓存。 -
- -
- - - -2. 生成报告。大部分插件都会在执行结束后,生成工作报告 -
- - - -
- -3. 模块化功能设计,简单的接口却能支持强大的功能 -
- - -
- -4. 这是一个能够“自我译解”的开源项目 -
- -
- -5. 译解其他开源项目,不在话下 -
- -
- -
- -
- -6. 装饰[live2d](https://github.com/fghrsh/live2d_demo)的小功能(默认关闭,需要修改`config.py`) -
- -
- - -## 版本: -- version 3.5(Todo): 使用自然语言调用本项目的所有函数插件(高优先级) -- version 3.4(Todo): 完善chatglm本地大模型的多线支持 -- version 3.3: +互联网信息综合功能 -- version 3.2: 函数插件支持更多参数接口 (保存对话功能, 解读任意语言代码+同时询问任意的LLM组合) -- version 3.1: 支持同时问询多个gpt模型!支持api2d,支持多个apikey负载均衡 -- version 3.0: 对chatglm和其他小型llm的支持 -- version 2.6: 重构了插件结构,提高了交互性,加入更多插件 -- version 2.5: 自更新,解决总结大工程源代码时文本过长、token溢出的问题 -- version 2.4: (1)新增PDF全文翻译功能; (2)新增输入区切换位置的功能; (3)新增垂直布局选项; (4)多线程函数插件优化。 -- version 2.3: 增强多线程交互性 -- version 2.2: 函数插件支持热重载 -- version 2.1: 可折叠式布局 -- version 2.0: 引入模块化函数插件 -- version 1.0: 基础功能 - -gpt_academic开发者QQ群-2:610599535 - - -## 参考与学习 - -``` -代码中参考了很多其他优秀项目中的设计,主要包括: - -# 项目1:清华ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B - -# 项目2:清华JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# 项目3:借鉴了ChuanhuChatGPT中诸多技巧 -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# 项目4:ChatPaper -https://github.com/kaixindelele/ChatPaper - -# 更多: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` diff --git a/spaces/Cropinky/hana_hanak_houses/realesrgan/__init__.py b/spaces/Cropinky/hana_hanak_houses/realesrgan/__init__.py deleted file mode 100644 index 2276f1eecded80d1f00ff97b45c66c7a8922b987..0000000000000000000000000000000000000000 --- a/spaces/Cropinky/hana_hanak_houses/realesrgan/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# flake8: noqa -from .archs import * -from .data import * -from .models import * -from .utils import * -from .version import * diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/module-a5a0afa0.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/module-a5a0afa0.js deleted file mode 100644 index 12728485edb4892b09173520f3d951232fff3209..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/module-a5a0afa0.js +++ /dev/null @@ -1,2 +0,0 @@ -import{c as i}from"./module-a3cf0cc4.js";const c=i({characterize:({call:e})=>()=>e("characterize"),encode:({call:e})=>(r,n)=>e("encode",{recordingId:r,timeslice:n}),record:({call:e})=>async(r,n,o)=>{await e("record",{recordingId:r,sampleRate:n,typedArrays:o},o.map(({buffer:a})=>a))}}),u=e=>{const r=new Worker(e);return c(r)},l=`(()=>{var e={775:function(e,t,r){!function(e,t,r,n){"use strict";function o(e){return e&&"object"==typeof e&&"default"in e?e:{default:e}}var s=o(t),a=o(r),i=o(n),u=function(e,t){return void 0===t?e:t.reduce((function(e,t){if("capitalize"===t){var r=e.charAt(0).toUpperCase(),n=e.slice(1);return"".concat(r).concat(n)}return"dashify"===t?a.default(e):"prependIndefiniteArticle"===t?"".concat(i.default(e)," ").concat(e):e}),e)},c=function(e){var t=e.name+e.modifiers.map((function(e){return"\\\\.".concat(e,"\\\\(\\\\)")})).join("");return new RegExp("\\\\$\\\\{".concat(t,"}"),"g")},l=function(e,t){for(var r=/\\\${([^.}]+)((\\.[^(]+\\(\\))*)}/g,n=[],o=r.exec(e);null!==o;){var a={modifiers:[],name:o[1]};if(void 0!==o[3])for(var i=/\\.[^(]+\\(\\)/g,l=i.exec(o[2]);null!==l;)a.modifiers.push(l[0].slice(1,-2)),l=i.exec(o[2]);n.push(a),o=r.exec(e)}var d=n.reduce((function(e,r){return e.map((function(e){return"string"==typeof e?e.split(c(r)).reduce((function(e,n,o){return 0===o?[n]:r.name in t?[].concat(s.default(e),[u(t[r.name],r.modifiers),n]):[].concat(s.default(e),[function(e){return u(e[r.name],r.modifiers)},n])}),[]):[e]})).reduce((function(e,t){return[].concat(s.default(e),s.default(t))}),[])}),[e]);return function(e){return 
d.reduce((function(t,r){return[].concat(s.default(t),"string"==typeof r?[r]:[r(e)])}),[]).join("")}},d=function(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{},r=void 0===e.code?void 0:l(e.code,t),n=void 0===e.message?void 0:l(e.message,t);function o(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},o=arguments.length>1?arguments[1]:void 0,s=void 0===o&&(t instanceof Error||void 0!==t.code&&"Exception"===t.code.slice(-9))?{cause:t,missingParameters:{}}:{cause:o,missingParameters:t},a=s.cause,i=s.missingParameters,u=void 0===n?new Error:new Error(n(i));return null!==a&&(u.cause=a),void 0!==r&&(u.code=r(i)),void 0!==e.status&&(u.status=e.status),u}return o};e.compile=d,Object.defineProperty(e,"__esModule",{value:!0})}(t,r(106),r(881),r(507))},881:e=>{"use strict";e.exports=(e,t)=>{if("string"!=typeof e)throw new TypeError("expected a string");return e.trim().replace(/([a-z])([A-Z])/g,"$1-$2").replace(/\\W/g,(e=>/[À-ž]/.test(e)?e:"-")).replace(/^-+|-+$/g,"").replace(/-{2,}/g,(e=>t&&t.condense?"-":e)).toLowerCase()}},107:function(e,t){!function(e){"use strict";var t=function(e){return function(t){var r=e(t);return t.add(r),r}},r=function(e){return function(t,r){return e.set(t,r),r}},n=void 0===Number.MAX_SAFE_INTEGER?9007199254740991:Number.MAX_SAFE_INTEGER,o=536870912,s=2*o,a=function(e,t){return function(r){var a=t.get(r),i=void 0===a?r.size:an)throw new Error("Congratulations, you created a collection of unique numbers which uses all available integers!");for(;r.has(i);)i=Math.floor(Math.random()*n);return e(r,i)}},i=new WeakMap,u=r(i),c=a(u,i),l=t(c);e.addUniqueNumber=l,e.generateUniqueNumber=c,Object.defineProperty(e,"__esModule",{value:!0})}(t)},507:e=>{var t=function(e){var t,r,n=/\\w+/.exec(e);if(!n)return"an";var o=(r=n[0]).toLowerCase(),s=["honest","hour","hono"];for(t in s)if(0==o.indexOf(s[t]))return"an";if(1==o.length)return"aedhilmnorsx".indexOf(o)>=0?"an":"a";if(r.match(/(?!FJO|[HLMNS]Y.|RY[EO]|SQU|(F[LR]?|[HL]|MN?|N|RH?|S[CHKLMNPTVW]?|X(YL)?)[AEIOU])[FHLMNRSX][A-Z]/))return"an";var a=[/^e[uw]/,/^onc?e\\b/,/^uni([^nmd]|mo)/,/^u[bcfhjkqrst][aeiou]/];for(t=0;t=0?"an":"a":"aeiou".indexOf(o[0])>=0||o.match(/^y(b[lor]|cl[ea]|fere|gg|p[ios]|rou|tt)/)?"an":"a"};void 0!==e.exports?e.exports=t:window.indefiniteArticle=t},768:e=>{e.exports=function(e,t){(null==t||t>e.length)&&(t=e.length);for(var r=0,n=new Array(t);r{var n=r(768);e.exports=function(e){if(Array.isArray(e))return n(e)},e.exports.__esModule=!0,e.exports.default=e.exports},642:e=>{e.exports=function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)},e.exports.__esModule=!0,e.exports.default=e.exports},344:e=>{e.exports=function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")},e.exports.__esModule=!0,e.exports.default=e.exports},106:(e,t,r)=>{var n=r(907),o=r(642),s=r(906),a=r(344);e.exports=function(e){return n(e)||o(e)||s(e)||a()},e.exports.__esModule=!0,e.exports.default=e.exports},906:(e,t,r)=>{var n=r(768);e.exports=function(e,t){if(e){if("string"==typeof e)return n(e,t);var r=Object.prototype.toString.call(e).slice(8,-1);return"Object"===r&&e.constructor&&(r=e.constructor.name),"Map"===r||"Set"===r?Array.from(e):"Arguments"===r||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(r)?n(e,t):void 0}},e.exports.__esModule=!0,e.exports.default=e.exports}},t={};function r(n){var o=t[n];if(void 0!==o)return o.exports;var 
s=t[n]={exports:{}};return e[n].call(s.exports,s,s.exports,r),s.exports}(()=>{"use strict";var e=r(775);const t=-32603,n=-32602,o=-32601,s=(0,e.compile)({message:'The requested method called "\${method}" is not supported.',status:o}),a=(0,e.compile)({message:'The handler of the method called "\${method}" returned no required result.',status:t}),i=(0,e.compile)({message:'The handler of the method called "\${method}" returned an unexpected result.',status:t}),u=(0,e.compile)({message:'The specified parameter called "portId" with the given value "\${portId}" does not identify a port connected to this worker.',status:n}),c=(e,t)=>async r=>{let{data:{id:n,method:o,params:u}}=r;const c=t[o];try{if(void 0===c)throw s({method:o});const t=void 0===u?c():c(u);if(void 0===t)throw a({method:o});const r=t instanceof Promise?await t:t;if(null===n){if(void 0!==r.result)throw i({method:o})}else{if(void 0===r.result)throw i({method:o});const{result:t,transferables:s=[]}=r;e.postMessage({id:n,result:t},s)}}catch(t){const{message:r,status:o=-32603}=t;e.postMessage({error:{code:o,message:r},id:n})}};var l=r(107);const d=new Map,f=(e,t,r)=>({...t,connect:r=>{let{port:n}=r;n.start();const o=e(n,t),s=(0,l.generateUniqueNumber)(d);return d.set(s,(()=>{o(),n.close(),d.delete(s)})),{result:s}},disconnect:e=>{let{portId:t}=e;const r=d.get(t);if(void 0===r)throw u({portId:t.toString()});return r(),{result:null}},isSupported:async()=>{if(await new Promise((e=>{const t=new ArrayBuffer(0),{port1:r,port2:n}=new MessageChannel;r.onmessage=t=>{let{data:r}=t;return e(null!==r)},n.postMessage(t,[t])}))){const e=r();return{result:e instanceof Promise?await e:e}}return{result:!1}}}),p=function(e,t){let r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:()=>!0;const n=f(p,t,r),o=c(e,n);return e.addEventListener("message",o),()=>e.removeEventListener("message",o)},m=e=>e.reduce(((e,t)=>e+t.length),0),h=(e,t)=>{const r=[];let n=0;e:for(;nt){const o=n-t;r.forEach(((t,r)=>{const n=t.pop(),s=n.length-o;t.push(n.subarray(0,s)),e[r].unshift(n.subarray(s))}))}return r},v=new Map,g=(e=>(t,r,n)=>{const o=e.get(t);if(void 0===o){const o={channelDataArrays:n.map((e=>[e])),isComplete:!0,sampleRate:r};return e.set(t,o),o}return o.channelDataArrays.forEach(((e,t)=>e.push(n[t]))),o})(v),x=((e,t)=>(r,n,o,s)=>{const a=o>>3,i="subsequent"===n?0:44,u=r.length,c=e(r[0]),l=new ArrayBuffer(c*u*a+i),d=new DataView(l);return"subsequent"!==n&&t(d,o,u,"complete"===n?c:Number.POSITIVE_INFINITY,s),r.forEach(((e,t)=>{let r=i+t*a;e.forEach((e=>{const t=e.length;for(let n=0;n{const s=t>>3,a=Math.min(n*r*s,4294967251);e.setUint32(0,1380533830),e.setUint32(4,a+36,!0),e.setUint32(8,1463899717),e.setUint32(12,1718449184),e.setUint32(16,16,!0),e.setUint16(20,1,!0),e.setUint16(22,r,!0),e.setUint32(24,o,!0),e.setUint32(28,o*r*s,!0),e.setUint16(32,r*s,!0),e.setUint16(34,t,!0),e.setUint32(36,1684108385),e.setUint32(40,a,!0)})),w=new Map;p(self,{characterize:()=>({result:/^audio\\/wav$/}),encode:e=>{let{recordingId:t,timeslice:r}=e;const n=w.get(t);void 0!==n&&(w.delete(t),n.reject(new Error("Another request was made to initiate an encoding.")));const o=v.get(t);if(null!==r){if(void 0===o||m(o.channelDataArrays[0])*(1e3/o.sampleRate){w.set(t,{reject:n,resolve:e,timeslice:r})}));const e=h(o.channelDataArrays,Math.ceil(r*(o.sampleRate/1e3))),n=x(e,o.isComplete?"initial":"subsequent",16,o.sampleRate);return o.isComplete=!1,{result:n,transferables:n}}if(void 0!==o){const e=x(o.channelDataArrays,o.isComplete?"complete":"subsequent",16,o.sampleRate);return 
v.delete(t),{result:e,transferables:e}}return{result:[],transferables:[]}},record:e=>{let{recordingId:t,sampleRate:r,typedArrays:n}=e;const o=g(t,r,n),s=w.get(t);if(void 0!==s&&m(o.channelDataArrays[0])*(1e3/r)>=s.timeslice){const e=h(o.channelDataArrays,Math.ceil(s.timeslice*(r/1e3))),n=x(e,o.isComplete?"initial":"subsequent",16,r);o.isComplete=!1,w.delete(t),s.resolve({result:n,transferables:n})}return{result:null}}})})()})();`,d=new Blob([l],{type:"application/javascript; charset=utf-8"}),s=URL.createObjectURL(d),t=u(s),p=t.characterize,m=t.connect,h=t.disconnect,v=t.encode,g=t.isSupported,x=t.record;URL.revokeObjectURL(s);export{p as characterize,m as connect,h as disconnect,v as encode,g as isSupported,x as record}; -//# sourceMappingURL=module-a5a0afa0.js.map diff --git a/spaces/DragGan/DragGan-Inversion/dnnlib/__init__.py b/spaces/DragGan/DragGan-Inversion/dnnlib/__init__.py deleted file mode 100644 index e7423bffe245d0ff3f32e8658aa67daae454e64e..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/dnnlib/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -from .util import EasyDict, make_cache_dir_path diff --git a/spaces/DragGan/DragGan/stylegan_human/pti/training/projectors/w_plus_projector.py b/spaces/DragGan/DragGan/stylegan_human/pti/training/projectors/w_plus_projector.py deleted file mode 100644 index 7d4abaf0ef32378504191559d7c95f3e58a63ffa..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/stylegan_human/pti/training/projectors/w_plus_projector.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Project given image to the latent space of pretrained network pickle.""" - -import copy -import wandb -import numpy as np -import torch -import torch.nn.functional as F -from tqdm import tqdm -from configs import global_config, hyperparameters -import dnnlib -from utils.log_utils import log_image_from_w - - -def project( - G, - target: torch.Tensor, # [C,H,W] and dynamic range [0,255], W & H must match G output resolution - *, - num_steps=1000, - w_avg_samples=10000, - initial_learning_rate=0.01, - initial_noise_factor=0.05, - lr_rampdown_length=0.25, - lr_rampup_length=0.05, - noise_ramp_length=0.75, - regularize_noise_weight=1e5, - verbose=False, - device: torch.device, - use_wandb=False, - initial_w=None, - image_log_step=global_config.image_rec_result_log_snapshot, - w_name: str -): - print('inside training/projectors/w_plus_projector') - print(target.shape, G.img_channels, G.img_resolution * 2 , G.img_resolution) - assert target.shape == (G.img_channels, G.img_resolution * 2, G.img_resolution) - - def logprint(*args): - if verbose: - print(*args) - - G = copy.deepcopy(G).eval().requires_grad_(False).to(device).float() # type: ignore - - # Compute w stats. - logprint(f'Computing W midpoint and stddev using {w_avg_samples} samples...') - z_samples = np.random.RandomState(123).randn(w_avg_samples, G.z_dim) - w_samples = G.mapping(torch.from_numpy(z_samples).to(device), None) # [N, L, C] - w_samples = w_samples[:, :1, :].cpu().numpy().astype(np.float32) # [N, 1, C] - w_avg = np.mean(w_samples, axis=0, keepdims=True) # [1, 1, C] - w_avg_tensor = torch.from_numpy(w_avg).to(global_config.device) - w_std = (np.sum((w_samples - w_avg) ** 2) / w_avg_samples) ** 0.5 - - start_w = initial_w if initial_w is not None else w_avg - - # Setup noise inputs. - noise_bufs = {name: buf for (name, buf) in G.synthesis.named_buffers() if 'noise_const' in name} - - # Load VGG16 feature detector. - url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt' - with dnnlib.util.open_url(url) as f: - vgg16 = torch.jit.load(f).eval().to(device) - - # Features for target image. - target_images = target.unsqueeze(0).to(device).to(torch.float32) - if target_images.shape[2] > 256: - target_images = F.interpolate(target_images, size=(256, 256), mode='area') - target_features = vgg16(target_images, resize_images=False, return_lpips=True) - - start_w = np.repeat(start_w, G.mapping.num_ws, axis=1) - w_opt = torch.tensor(start_w, dtype=torch.float32, device=device, - requires_grad=True) # pylint: disable=not-callable - - optimizer = torch.optim.Adam([w_opt] + list(noise_bufs.values()), betas=(0.9, 0.999), - lr=hyperparameters.first_inv_lr) - - # Init noise. - for buf in noise_bufs.values(): - buf[:] = torch.randn_like(buf) - buf.requires_grad = True - - for step in tqdm(range(num_steps)): - - # Learning rate schedule. - t = step / num_steps - w_noise_scale = w_std * initial_noise_factor * max(0.0, 1.0 - t / noise_ramp_length) ** 2 - lr_ramp = min(1.0, (1.0 - t) / lr_rampdown_length) - lr_ramp = 0.5 - 0.5 * np.cos(lr_ramp * np.pi) - lr_ramp = lr_ramp * min(1.0, t / lr_rampup_length) - lr = initial_learning_rate * lr_ramp - for param_group in optimizer.param_groups: - param_group['lr'] = lr - - # Synth images from opt_w. - w_noise = torch.randn_like(w_opt) * w_noise_scale - ws = (w_opt + w_noise) - - synth_images = G.synthesis(ws, noise_mode='const', force_fp32=True) - - # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images. 
- synth_images = (synth_images + 1) * (255 / 2) - if synth_images.shape[2] > 256: - synth_images = F.interpolate(synth_images, size=(256, 256), mode='area') - - # Features for synth images. - synth_features = vgg16(synth_images, resize_images=False, return_lpips=True) - dist = (target_features - synth_features).square().sum() - - # Noise regularization. - reg_loss = 0.0 - for v in noise_bufs.values(): - noise = v[None, None, :, :] # must be [1,1,H,W] for F.avg_pool2d() - while True: - reg_loss += (noise * torch.roll(noise, shifts=1, dims=3)).mean() ** 2 - reg_loss += (noise * torch.roll(noise, shifts=1, dims=2)).mean() ** 2 - if noise.shape[2] <= 8: - break - noise = F.avg_pool2d(noise, kernel_size=2) - loss = dist + reg_loss * regularize_noise_weight - - if step % image_log_step == 0: - with torch.no_grad(): - if use_wandb: - global_config.training_step += 1 - wandb.log({f'first projection _{w_name}': loss.detach().cpu()}, step=global_config.training_step) - log_image_from_w(w_opt, G, w_name) - - # Step - optimizer.zero_grad(set_to_none=True) - loss.backward() - optimizer.step() - logprint(f'step {step + 1:>4d}/{num_steps}: dist {dist:<4.2f} loss {float(loss):<5.2f}') - - # Normalize noise. - with torch.no_grad(): - for buf in noise_bufs.values(): - buf -= buf.mean() - buf *= buf.square().mean().rsqrt() - - del G - return w_opt diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/data/realesrgan_paired_dataset.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/data/realesrgan_paired_dataset.py deleted file mode 100644 index 386c8d72496245dae8df033c2ebbd76b41ff45f1..0000000000000000000000000000000000000000 --- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/data/realesrgan_paired_dataset.py +++ /dev/null @@ -1,108 +0,0 @@ -import os -from basicsr.data.data_util import paired_paths_from_folder, paired_paths_from_lmdb -from basicsr.data.transforms import augment, paired_random_crop -from basicsr.utils import FileClient, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data -from torchvision.transforms.functional import normalize - - -@DATASET_REGISTRY.register() -class RealESRGANPairedDataset(data.Dataset): - """Paired image dataset for image restoration. - - Read LQ (Low Quality, e.g. LR (Low Resolution), blurry, noisy, etc) and GT image pairs. - - There are three modes: - 1. 'lmdb': Use lmdb files. - If opt['io_backend'] == lmdb. - 2. 'meta_info': Use meta information file to generate paths. - If opt['io_backend'] != lmdb and opt['meta_info'] is not None. - 3. 'folder': Scan folders to generate paths. - The rest. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - dataroot_lq (str): Data root path for lq. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - filename_tmpl (str): Template for each filename. Note that the template excludes the file extension. - Default: '{}'. - gt_size (int): Cropped patched size for gt patches. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h - and w for implementation). - - scale (bool): Scale, which will be added automatically. - phase (str): 'train' or 'val'. 
- """ - - def __init__(self, opt): - super(RealESRGANPairedDataset, self).__init__() - self.opt = opt - self.file_client = None - self.io_backend_opt = opt['io_backend'] - # mean and std for normalizing the input images - self.mean = opt['mean'] if 'mean' in opt else None - self.std = opt['std'] if 'std' in opt else None - - self.gt_folder, self.lq_folder = opt['dataroot_gt'], opt['dataroot_lq'] - self.filename_tmpl = opt['filename_tmpl'] if 'filename_tmpl' in opt else '{}' - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = [self.lq_folder, self.gt_folder] - self.io_backend_opt['client_keys'] = ['lq', 'gt'] - self.paths = paired_paths_from_lmdb([self.lq_folder, self.gt_folder], ['lq', 'gt']) - elif 'meta_info' in self.opt and self.opt['meta_info'] is not None: - # disk backend with meta_info - # Each line in the meta_info describes the relative path to an image - with open(self.opt['meta_info']) as fin: - paths = [line.strip() for line in fin] - self.paths = [] - for path in paths: - gt_path, lq_path = path.split(', ') - gt_path = os.path.join(self.gt_folder, gt_path) - lq_path = os.path.join(self.lq_folder, lq_path) - self.paths.append(dict([('gt_path', gt_path), ('lq_path', lq_path)])) - else: - # disk backend - # it will scan the whole folder to get meta info - # it will be time-consuming for folders with too many files. It is recommended using an extra meta txt file - self.paths = paired_paths_from_folder([self.lq_folder, self.gt_folder], ['lq', 'gt'], self.filename_tmpl) - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - scale = self.opt['scale'] - - # Load gt and lq images. Dimension order: HWC; channel order: BGR; - # image range: [0, 1], float32. 
- gt_path = self.paths[index]['gt_path'] - img_bytes = self.file_client.get(gt_path, 'gt') - img_gt = imfrombytes(img_bytes, float32=True) - lq_path = self.paths[index]['lq_path'] - img_bytes = self.file_client.get(lq_path, 'lq') - img_lq = imfrombytes(img_bytes, float32=True) - - # augmentation for training - if self.opt['phase'] == 'train': - gt_size = self.opt['gt_size'] - # random crop - img_gt, img_lq = paired_random_crop(img_gt, img_lq, gt_size, scale, gt_path) - # flip, rotation - img_gt, img_lq = augment([img_gt, img_lq], self.opt['use_hflip'], self.opt['use_rot']) - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True) - # normalize - if self.mean is not None or self.std is not None: - normalize(img_lq, self.mean, self.std, inplace=True) - normalize(img_gt, self.mean, self.std, inplace=True) - - return {'lq': img_lq, 'gt': img_gt, 'lq_path': lq_path, 'gt_path': gt_path} - - def __len__(self): - return len(self.paths) diff --git a/spaces/Edward-Ji/essentials-of-microeconomics/essentials_of_microeconomics/app.py b/spaces/Edward-Ji/essentials-of-microeconomics/essentials_of_microeconomics/app.py deleted file mode 100644 index 50921a0314af4bf37804a1ca69d7deedab0bb849..0000000000000000000000000000000000000000 --- a/spaces/Edward-Ji/essentials-of-microeconomics/essentials_of_microeconomics/app.py +++ /dev/null @@ -1,102 +0,0 @@ -from pathlib import Path - -import numpy as np -from shiny import App, ui - -from trade_and_ppf import trade_and_ppf_ui, trade_and_ppf_server -from production_and_costs import ( - production_and_costs_ui, - production_and_costs_server -) -from equilibrium_and_welfare import ( - equilibrium_and_welfare_ui, - equilibrium_and_welfare_server -) -from elasticity import elasticity_ui, elasticity_server -from monopoly import monopoly_ui, monopoly_server -from oligopoly import oligopoly_server, oligopoly_ui -from taxes_and_subsidies import ( - taxes_and_subsidies_ui, - taxes_and_subsidies_server -) -from externalities import externalities_ui, externalities_server -from settings import settings_server, settings_ui - -np.seterr(divide="ignore", invalid="ignore") - -app_ui = ui.page_navbar( - ui.head_content( - ui.tags.title("Essentials of Microeconomics"), - ui.tags.link(rel="apple-touch-icon", sizes="180x180", - href="/apple-touch-icon.png"), - ui.tags.link(rel="icon", type="image/png", sizes="32x32", - href="/favicon-32x32.png"), - ui.tags.link(rel="icon", type="image/png", sizes="16x16", - href="/favicon-16x16.png"), - ui.tags.link(rel="manifest", href="/manifest.json"), - ui.tags.link( - rel="stylesheet", - href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.10.5/font/bootstrap-icons.css" - ), - ui.tags.link(rel="stylesheet", href="/main.css"), - ui.tags.script( - src="https://polyfill.io/v3/polyfill.min.js?features=es6"), - ui.tags.script( - id="MathJax-script", async_=True, - src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"), - ui.tags.script(src="/main.js") - ), - trade_and_ppf_ui("trade_and_ppf"), - ui.nav_menu( - "Market fundamentals", - production_and_costs_ui("production_and_costs"), - equilibrium_and_welfare_ui("equilibrium_and_welfare"), - elasticity_ui("elasticity") - ), - ui.nav_menu( - "Types of market", - monopoly_ui("monopoly"), - oligopoly_ui("oligopoly") - ), - ui.nav_menu( - "Market failures", - taxes_and_subsidies_ui("taxes_and_subsidies"), - externalities_ui("externalities") - ), - ui.nav_spacer(), - ui.nav_control( - ui.a(ui.tags.i(class_="bi bi-gear-fill", 
style=""), type_="button", - data_bs_toggle="modal", data_bs_target="#settings-modal")), - ui.nav_control( - ui.a(ui.tags.i(class_="bi bi-github", style=""), - href="https://github.com/Edward-Ji/essentials-of-microeconomics", - target="_blank")), - footer=settings_ui("settings"), - title=ui.img(src="favicon-32x32.png"), - position="fixed-top", - lang="en" -) - - -def server(input, output, session): - def mathjax(): - ui.insert_ui(ui.tags.script("MathJax.typeset()"), "body") - ui.remove_ui("body > script") - - session.on_flush(mathjax) - session.on_flushed(mathjax, once=False) - - settings = settings_server("settings") - trade_and_ppf_server("trade_and_ppf", settings) - production_and_costs_server("production_and_costs", settings) - equilibrium_and_welfare_server("equilibrium_and_welfare", settings) - elasticity_server("elasticity", settings) - monopoly_server("monopoly", settings) - oligopoly_server("oligopoly", settings) - taxes_and_subsidies_server("taxes_and_subsidies", settings) - externalities_server("externalities", settings) - - -www_dir = Path(__file__).parent.resolve() / "www" - -app = App(app_ui, server, static_assets=www_dir) diff --git a/spaces/Elbhnasy/ASD_Diagnosis/app.py b/spaces/Elbhnasy/ASD_Diagnosis/app.py deleted file mode 100644 index 9ddd6c25da041c0c53dcacff4cb6c83f478cc579..0000000000000000000000000000000000000000 --- a/spaces/Elbhnasy/ASD_Diagnosis/app.py +++ /dev/null @@ -1,178 +0,0 @@ -### 1. Imports and class names setup ### -import os -import cv2 -import dlib -import time -import numpy as np -import gradio as gr -from PIL import Image -from typing import Dict, Tuple -import torch -import torchvision -from torch import nn -from torchvision import transforms -from timeit import default_timer as timer -from model import create_ResNetb34_model - -# Setup class names -class_names = ["Autistic", "Non_Autistic"] - -### 2. Model and transforms preparation ### - - -resnet34, resnet34_transforms = create_ResNetb34_model(num_classes=len(class_names) ) - -# Load saved weights -resnet34.load_state_dict(torch.load(f="pretrained_resnet34_feature_extractor98.pth", - map_location=torch.device("cpu"),)) -### 3. Predict function ### -# Create predict function -# def predict(img)-> Tuple[Dict, float]: - -# # Start the timer -# start_time=timer() - -# # Transform the target image and add a batch dimension -# img=img.convert('RGB') -# img = resnet34_transforms(img).unsqueeze(0) -# # put model into evaluation mode and turn infarance mode -# resnet34.eval() -# with torch.inference_mode(): - -# # Pass the transformed image through the model and turn the prediction logits into prediction probabilities -# pred_probs=torch.softmax(resnet34(img),dim=1) -# # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter) - -# pred_labels_and_probs={class_names[i]:float(pred_probs[0][i]) for i in range(len(class_names))} -# # Calculate the prediction time -# pred_time = round(timer() - start_time, 5) - -# # Return the prediction dictionary and prediction time -# return pred_labels_and_probs, pred_time -def predict_with_face_detection(img) : - """ - Detects faces in an image, performs a prediction on the faces, and returns the prediction and time taken. - - Args: - img_path: Path to the image. - - Returns: - A tuple of the prediction dictionary and the prediction time. - """ - # - img_array = np.asarray(img) - # Create a face detector. 
- face_detector = dlib.get_frontal_face_detector() - img=img.convert('RGB') - - # Detect faces in the image. - faces = face_detector(img_array) - - # Open the target image. - #img = Image.open(img_path) - - # Check if any faces were detected. - if len(faces) >= 1: - # Start the timer. - start_time = time.perf_counter() - - # Create lists to store the predicted labels and probabilities for each face. - predicted_labels = [] - predicted_probs = [] - - # Loop through each detected face. - for i, face in enumerate(faces): - # Get the coordinates of the face bounding box. - x1, y1, x2, y2 = face.left(), face.top(), face.right(), face.bottom() - - # Crop the face from the image. - cropped_face = img.crop((x1, y1, x2, y2)) - - # Transform the cropped face and add a batch dimension. - transformed_face = resnet34_transforms(cropped_face).unsqueeze(0) - - # Put the model into evaluation mode and disable gradient calculation. - resnet34.eval() - with torch.inference_mode(): - # Pass the transformed face through the model and turn the prediction logits into probabilities. - pred_probs = torch.softmax(resnet34(transformed_face), dim=1) - - # Get the predicted label with the highest probability. - predicted_label = class_names[pred_probs.argmax()] - - # Get the corresponding probability score. - predicted_prob = float(pred_probs.max()) - - # Append the predicted label and probability to the lists. - predicted_labels.append(predicted_label) - predicted_probs.append(predicted_prob) - - # Calculate the prediction time. - pred_time = round(time.perf_counter() - start_time, 5) - - # Create the output in Hugging Face format for handling multiple faces. - output = { - "labels": predicted_labels, - "scores": predicted_probs - } - - # If only one face is detected, return the final prediction label directly. - if len(faces) == 1: - return predicted_labels[0], pred_time - else: - # Otherwise, return the Hugging Face format for multiple faces. - return output, pred_time - else: - # No face detected or multiple faces detected. - return "Image Must Include At Least One Child's Face", 0.0 - - -### 4. Gradio app ### -example_list = [["examples/" + example] for example in os.listdir("examples")] - -# Create title, description and article strings -title = "ASD diagnosis" -description = """A feature extractor computer vision model to Identification of Autism in Children Using Static Facial Features and Deep Neural Networks. - - Requirements ⇒ - image must be only child’s face. - visible opened eyes (no hair blocking the eyes) and visible ears .""" - - -article = """Autism spectrum disorder (ASD) is a complicated neurological developmental disorder -that manifests itself in a variety of ways. The child diagnosed with ASD and their parents’ daily -lives can be dramatically improved with early diagnosis and appropriate medical intervention. The -applicability of static features extracted from autistic children’s face photographs as a biomarker to -distinguish them from typically developing children is investigated in this study paper. We used five -pre-trained CNN models: MobileNet, Xception, EfficientNetB0, EfficientNetB1, and EfficientNetB2 as -feature extractors and a DNN model as a binary classifier to identify autism in children accurately. -We used a publicly available dataset to train the suggested models, which consisted of face pictures -of children diagnosed with autism and controls classed as autistic and non-autistic. 
The Resnet34 -model outperformed the others, with an AUC of 98.63%, a sensitivity of 88.46%, and an NPV of 88%. -EfficientNetB0 produced a consistent prediction score of 59% for autistic and non-autistic groups -with a 95% confidence level.""" - -# Create the Gradio demo -input_1= gr.inputs.Image(type='pil', label="upload Image", source="upload",optional=True) -input_2 = gr.inputs.Image(type='pil', label="take photo", source="webcam",optional=True) -# inputs= [input_1, input_2] -app1 = gr.Interface(fn=predict_with_face_detection, # mapping function from input to output - inputs=input_1 ,# what are the inputs? - outputs=[gr.Label(num_top_classes=3, label="Predictions"), # what are the outputs? - gr.Number(label="Prediction time (s)")], # our fn has two outputs, therefore we have two outputs - examples=example_list, - title=title, - description=description, - article=article) -app2=gr.Interface(fn=predict_with_face_detection, # mapping function from input to output - inputs=input_2 ,# what are the inputs? - outputs=[gr.Label(num_top_classes=3, label="Predictions"), # what are the outputs? - gr.Number(label="Prediction time (s)")], # our fn has two outputs, therefore we have two outputs - examples=example_list, - title=title, - description=description, - article=article) -demo = gr.TabbedInterface([app1, app2], ["upload photo", "take photo"]) - -# Launch the demo! -demo.launch() diff --git a/spaces/EnzoBustos/IC-2022-Classificacao-de-Dados-Financeiros/app.py b/spaces/EnzoBustos/IC-2022-Classificacao-de-Dados-Financeiros/app.py deleted file mode 100644 index c552b1eeee37acd1524a4961023ebbacf6deccc8..0000000000000000000000000000000000000000 --- a/spaces/EnzoBustos/IC-2022-Classificacao-de-Dados-Financeiros/app.py +++ /dev/null @@ -1,196 +0,0 @@ -from transformers import pipeline -import torch -import streamlit as st -from textblob import TextBlob -from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer -import os -import re -import pandas as pd -from PIL import Image - -def translate_text_blob(text): - blob = TextBlob(text) - return str(blob.translate(from_lang="pt", to="en")) - -def sentiment_vader(text): - - vader_object = SentimentIntensityAnalyzer() - - sentiment_dict = vader_object.polarity_scores(text) - negative = sentiment_dict['neg'] - neutral = sentiment_dict['neu'] - positive = sentiment_dict['pos'] - compound = sentiment_dict['compound'] - - if sentiment_dict['compound'] >= 0.05 : - overall_sentiment = "Positive" - - elif sentiment_dict['compound'] <= - 0.05 : - overall_sentiment = "Negative" - - else : - overall_sentiment = "Neutral" - - return overall_sentiment.upper() - -def classify_by_company(text): - path = os.path.dirname(os.path.realpath(__file__)) + "/Companies" - - for filename in os.listdir(path): - with open(path + '/' + filename, 'r') as f: - companies = [word[:-1] for word in f.readlines()] - companies = "|".join(companies) - companies = "/" + companies + "/gm" - - if re.search(companies, text): - return filename[:-4] + " - Inferred by company name in text" - - return "" - -def run_models(parameters_list): - - translation_map = { - #Translation PT to EN - "TextBlob" : "TextBlob", - "M2M100" : "facebook/m2m100_418M", - "OPUS" : "Helsinki-NLP/opus-mt-mul-en", - "T5" : "unicamp-dl/translation-pt-en-t5", - "mBART" : "Narrativa/mbart-large-50-finetuned-opus-pt-en-translation", - } - - sentiment_map = { - #Sentiment Analysis - "VADER" : "VADER", - "FinBERT" : "ProsusAI/finbert", - "DistilBERT" : "distilbert-base-uncased-finetuned-sst-2-english", - "BERT" : 
"nlptown/bert-base-multilingual-uncased-sentiment", - } - - zeroshot_map = { - #Zeroshot Classification - "RoBERTa" : "joeddav/xlm-roberta-large-xnli", - "mDeBERTa" : "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli", - "DistilroBERTa" : "cross-encoder/nli-distilroberta-base", - } - - candidate_labels = [ - "Industrial Goods", - "Communications", - "Cyclic Consumption", - "Non-cyclical Consumption", - "Financial", - "Basic Materials", - #"Others", - "Oil, Gas and Biofuels", - "Health", - #"Initial Sector", - "Information Technology", - "Public utility" - ] - - device_num = 0 if torch.cuda.is_available() else -1 - - if parameters_list[0] == "TextBlob": - out_translation = translate_text_blob(parameters_list[3]) - else: - translation = pipeline("translation_pt_to_en", model=translation_map[parameters_list[0]], tokenizer=translation_map[parameters_list[0]], device=device_num) - out_translation = translation(parameters_list[3])[0]["translation_text"] - - if parameters_list[1] == "VADER": - out_sentiment = sentiment_vader(out_translation) - else: - sentiment = pipeline("sentiment-analysis", model=sentiment_map[parameters_list[1]], tokenizer=sentiment_map[parameters_list[1]], device=device_num) - out_sentiment = sentiment(out_translation)[0]["label"].upper() - - company_classification = classify_by_company(parameters_list[3].upper()) - - if company_classification: - out_classification = company_classification - else: - classification = pipeline("zero-shot-classification", model=zeroshot_map[parameters_list[2]], tokenizer=zeroshot_map[parameters_list[2]], device=device_num) - out_classification = classification(out_translation, candidate_labels)["labels"][0] + " - Inferred by {}".format(parameters_list[2]) - - out_translation += " - Translated by {}".format(parameters_list[0]) - out_sentiment += " - Analyzed by {}".format(parameters_list[1]) - - return out_translation, out_sentiment, out_classification - -sheet_id = "1TjDuF6dmirgdpuG_o5Y4CBPQdfmkksS1" -sheet_name = "Sheet1" -url = f"https://docs.google.com/spreadsheets/d/{sheet_id}/gviz/tq?tqx=out:csv&sheet={sheet_name}" - -df = pd.read_csv(url) - -header = st.container() -model = st.container() -model_1, model_2 = st.columns(2) -dataset = st.container() -analysis = st.container() -analysis_1, analysis_2 = st.columns(2) - -with header: - st.title("IC 2022 Classificação de Dados Financeiros") - st.write("Este trabalho de Iniciação Científica visa criar uma *interface web* que integre diversas funcionalidades de *machine learning*, essas funcionalidades cooperam entre si para realizar um processamento automático de textos financeiros com o fim de aplicar técnicar de Tradução Automática, Análise de Sentimentos e Classificação ZeroShot de textos em português sobre artigos do ramo financeiro. 
\n \n Este projeto também visa incluir novas técnicas ao leque de opções desta ferramenta, voltados principalmente para o Processamento de Linguagem Natural (PLN) tanto para fins de estudo e conhecimento dos modelos pautados como estado-da-arte, como também aperfeiçoamento dos módulos e saídas já implementados.") - -with model: - - st.header("Modelo para Tradução e Classificação") - - with model_1: - - text = st.text_area(label="Coloque seu texto sobre mercado financeiro em português!", - value=r"As ações da Raia Drogasil subiram em 98% desde o último bimestre, segundo as avaliações da revista!", - height=50, placeholder="Digite seu texto...") - - translation_pt_to_en = st.selectbox('Qual modelo você deseja usar para tradução?', ('TextBlob', 'M2M100', 'OPUS', 'T5', 'mBART')) - sentiment_analysis = st.selectbox('Qual modelo você deseja usar para análise de sentimento?', ('VADER', 'FinBERT', 'DistilBERT', 'BERT')) - zero_shot_classification = st.selectbox('Qual modelo você deseja usar para classificação?', ('RoBERTa', 'mDeBERTa', 'DistilroBERTa')) - - submit = st.button('Gerar análises!') - - with model_2: - if submit: - with st.spinner('Wait for it...'): - parameters = [translation_pt_to_en, sentiment_analysis, zero_shot_classification, text] - outputs = run_models(parameters) - - st.write("Translation..................................................................: \n {} \n \n".format(outputs[0])) - st.write("Sentiment...................................................................: \n {} \n \n".format(outputs[1])) - st.write("Classification...............................................................: \n {} \n \n".format(outputs[2])) - -with dataset: - st.header("Dados utilizados no projeto") - st.write("Os dados abaixo foram obtidos através de *web scrapping* dos sites Valor Globo, Infomoney e Exame para o fim de aplicação dos modelos selecionados, para a confecção dos dados abaixo foram utilizados o TextBlob para Tradução Automática, VADER para a Análise de Sentimentos, Inferição por empresas presentes no texto e Roberta para a Classificação.") - st.dataframe(df) - st.subheader("Descrição das colunas:") - st.write("\t**- date.........:** Coluna de entrada contendo as datas em que os textos foram publicados") - st.write("\t**- url..........:** Coluna de entrada contendo os links para as páginas *web* das quais os textos foram retirados") - st.write("\t**- texts........:** Coluna de entrada contendo os textos financeiros propriamente ditos") - st.write("\t**- is_title.....:** Coluna de entrada contendo os se os textos são, ou não, pertencentes ao título da notícia") - st.write("\t**- translated...:** Coluna de saída contendo os textos financeiros que foram traduzidos utilizando o TextBlob") - st.write("\t**- theme........:** Coluna de saída contendo as classificações em áreas financeiras, das quais metade foram obtidas pelo nome de empresas presentes no texto e a outra metade obtidos por classificação zeroshot do modelo RoBERTa") - st.write("\t**- sentiment....:** Coluna de saída contendo as análises de sentimentos dos textos utilizando VADER") - -with analysis: - st.header("Visualização dos dados utilizados através de WordClouds") - - with analysis_1: - wordcloud = st.selectbox('Qual wordcloud você deseja ver?', ( - "Health", - "Financial", - "Industrial Goods", - "Public utility", - "Others", - "Communications", - "Cyclic Consumption", - "Information Technology", - "Oil, Gas and Biofuels", - "Non-cyclical Consumption", - "Basic Materials", - )) - - with analysis_2: - image_path = 
os.path.dirname(os.path.realpath(__file__)) + '/Images/{}.png'.format(wordcloud) - image = Image.open(image_path) - st.image(image, caption='WordCloud dos textos classificados como {}'.format(wordcloud)) - \ No newline at end of file diff --git a/spaces/Eriberto/whisper-to-chatGPT/app.py b/spaces/Eriberto/whisper-to-chatGPT/app.py deleted file mode 100644 index 979d656c5bbd1319e9ff70e3883e5e5d16034bb2..0000000000000000000000000000000000000000 --- a/spaces/Eriberto/whisper-to-chatGPT/app.py +++ /dev/null @@ -1,153 +0,0 @@ -import os -import openai -import whisper -import gradio as gr - -openai.api_key = os.environ.get('SessionToken') - -whisper_model = whisper.load_model("small") - -conversation = "" -user_name = "MH" -bot_name = "bbDemo" - -def chat_hf(audio): - conversation = "" - try: - whisper_text = translate(audio) - user_input = whisper_text - - # Conversation route - prompt = user_name + ": " + user_input + "\n" + bot_name+ ": " - conversation += prompt # allows for context - # fetch the response from open AI api - response = openai.Completion.create(engine='text-davinci-003', prompt=conversation, max_tokens=50) - response_str = response["choices"][0]["text"].replace("\n", "") - response_str = response_str.split(user_name + ": ", 1)[0].split(bot_name + ": ", 1)[0] - - conversation += response_str + "\n" - - gpt_response = response_str - - except: - # Conversation route - whisper_text = translate(audio) - user_input = whisper_text - prompt = user_name + ": " + user_input + "\n" + bot_name+ ": " - conversation += prompt # allows for context - # fetch the response from open AI api - response = openai.Completion.create(engine='text-davinci-003', prompt=conversation, max_tokens=1024) - response_str = response["choices"][0]["text"].replace("\n", "") - response_str = response_str.split(user_name + ": ", 1)[0].split(bot_name + ": ", 1)[0] - - conversation += response_str + "\n" - - gpt_response = response_str - print("Error") - - - return whisper_text, gpt_response - - -def translate(audio): - print(""" - — - Sending audio to Whisper ... - — - """) - - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - mel = whisper.log_mel_spectrogram(audio).to(whisper_model.device) - - _, probs = whisper_model.detect_language(mel) - - transcript_options = whisper.DecodingOptions(task="transcribe", fp16 = False) - transcription = whisper.decode(whisper_model, mel, transcript_options) - - print("language spoken: " + transcription.language) - print("transcript: " + transcription.text) - print("———————————————————————————————————————————") - - return transcription.text - -title = """ -
-<h1>Whisper to chatGPT</h1>
-<p>Chat with GPT with your voice in your native language!</p>
-""" - -article = """ - -""" - -css = ''' - #col-container {max-width: 700px; margin-left: auto; margin-right: auto;} - a {text-decoration-line: underline; font-weight: 600;} - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } -''' - - -with gr.Blocks(css=css) as demo: - - with gr.Column(elem_id="col-container"): - - gr.HTML(title) - - with gr.Row(): - record_input = gr.Audio(source="microphone",type="filepath", show_label=False) - send_btn = gr.Button("Send my message !") - - with gr.Column(): - audio_translation = gr.Textbox(type="text",label="Whisper transcription") - gpt_response = gr.Textbox(type="text",label="chatGPT response") - - gr.HTML(article) - - send_btn.click(chat_hf, inputs=[record_input], outputs=[audio_translation, gpt_response]) - -demo.queue(max_size=32, concurrency_count=20).launch(debug=True) \ No newline at end of file diff --git a/spaces/EronSamez/RVC_HFmeu/tools/app.py b/spaces/EronSamez/RVC_HFmeu/tools/app.py deleted file mode 100644 index 602fbb71a49f2537295337cdcecf501abdd74153..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/tools/app.py +++ /dev/null @@ -1,148 +0,0 @@ -import logging -import os - -# os.system("wget -P cvec/ https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt") -import gradio as gr -from dotenv import load_dotenv - -from configs.config import Config -from i18n import I18nAuto -from infer.modules.vc.pipeline import Pipeline -VC = Pipeline - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) -logger = logging.getLogger(__name__) - -i18n = I18nAuto() -#(i18n) - -load_dotenv() -config = Config() -vc = VC(config) - -weight_root = os.getenv("weight_root") -weight_uvr5_root = os.getenv("weight_uvr5_root") -index_root = os.getenv("index_root") -names = [] -hubert_model = None -for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) -index_paths = [] -for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s/%s" % (root, name)) - - -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("在线demo"): - gr.Markdown( - value=""" - RVC 在线demo - """ - ) - sid = gr.Dropdown(label=i18n("推理音色"), choices=sorted(names)) - with gr.Column(): - spk_item = gr.Slider( - minimum=0, - maximum=2333, - step=1, - label=i18n("请选择说话人id"), - value=0, - visible=False, - interactive=True, - ) - sid.change(fn=vc.get_vc, inputs=[sid], outputs=[spk_item]) - gr.Markdown( - value=i18n("男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. 
") - ) - vc_input3 = gr.Audio(label="上传音频(长度小于90秒)") - vc_transform0 = gr.Number(label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0) - f0method0 = gr.Radio( - label=i18n("选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"), - choices=["pm", "harvest", "crepe", "rmvpe"], - value="pm", - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"), - value=3, - step=1, - interactive=True, - ) - with gr.Column(): - file_index1 = gr.Textbox( - label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"), - value="", - interactive=False, - visible=False, - ) - file_index2 = gr.Dropdown( - label=i18n("自动检测index路径,下拉式选择(dropdown)"), - choices=sorted(index_paths), - interactive=True, - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("检索特征占比"), - value=0.88, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("后处理重采样至最终采样率,0为不进行重采样"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"), - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n("保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果"), - value=0.33, - step=0.01, - interactive=True, - ) - f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调")) - but0 = gr.Button(i18n("转换"), variant="primary") - vc_output1 = gr.Textbox(label=i18n("输出信息")) - vc_output2 = gr.Audio(label=i18n("输出音频(右下角三个点,点了可以下载)")) - but0.click( - vc.vc_single, - [ - spk_item, - vc_input3, - vc_transform0, - f0_file, - f0method0, - file_index1, - file_index2, - # file_big_npy1, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - [vc_output1, vc_output2], - ) - - -app.launch() diff --git a/spaces/EsoCode/text-generation-webui/css/chat_style-wpp.css b/spaces/EsoCode/text-generation-webui/css/chat_style-wpp.css deleted file mode 100644 index 14b408784d182c13a495aa65d63365a531ab52f6..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/css/chat_style-wpp.css +++ /dev/null @@ -1,55 +0,0 @@ -.message { - padding-bottom: 25px; - font-size: 15px; - font-family: Helvetica, Arial, sans-serif; - line-height: 1.428571429; -} - -.text-you { - background-color: #d9fdd3; - border-radius: 15px; - padding: 10px; - padding-top: 5px; - float: right; -} - -.text-bot { - background-color: #f2f2f2; - border-radius: 15px; - padding: 10px; - padding-top: 5px; -} - -.dark .text-you { - background-color: #005c4b; - color: #111b21; -} - -.dark .text-bot { - background-color: #1f2937; - color: #111b21; -} - -.text-bot p, .text-you p { - margin-top: 5px; -} - -.message-body img { - max-width: 300px; - max-height: 300px; - border-radius: 20px; -} - -.message-body p { - margin-bottom: 0 !important; - font-size: 15px !important; - line-height: 1.428571429 !important; -} - -.dark .message-body p em { - color: rgb(138, 138, 138) !important; -} - -.message-body p em { - color: rgb(110, 110, 110) !important; -} \ No newline at end of file diff --git a/spaces/ExperimentalAI/epic-diffusion/app.py b/spaces/ExperimentalAI/epic-diffusion/app.py deleted file mode 100644 index 99a9ec0043f3e99df75059d8ecdd62c1036f0f1c..0000000000000000000000000000000000000000 --- a/spaces/ExperimentalAI/epic-diffusion/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import gradio as gr - -API_KEY=os.environ.get('HUGGING_FACE_HUB_TOKEN', None) - -article = """--- -This space was created using [SD Space 
Creator](https://huggingface.co/spaces/anzorq/sd-space-creator).""" - -gr.Interface.load( - name="models/johnslegers/epic-diffusion", - title="""Epic Diffusion""", - description="""Demo for Epic Diffusion Stable Diffusion model.""", - article=article, - api_key=API_KEY, - ).queue(concurrency_count=20).launch() diff --git a/spaces/FlowiseAI/Flowise/Dockerfile b/spaces/FlowiseAI/Flowise/Dockerfile deleted file mode 100644 index 9c0ad22929159b8c4d192856163699570fd27307..0000000000000000000000000000000000000000 --- a/spaces/FlowiseAI/Flowise/Dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -FROM node:18-alpine -USER root - -# Arguments that can be passed at build time -ARG FLOWISE_PATH=/usr/local/lib/node_modules/flowise -ARG BASE_PATH=/root/.flowise -ARG DATABASE_PATH=$BASE_PATH -ARG APIKEY_PATH=$BASE_PATH -ARG SECRETKEY_PATH=$BASE_PATH -ARG LOG_PATH=$BASE_PATH/logs - -# Install dependencies -RUN apk add --no-cache git python3 py3-pip make g++ build-base cairo-dev pango-dev chromium - -ENV PUPPETEER_SKIP_DOWNLOAD=true -ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser - -# Install Flowise globally -RUN npm install -g flowise - -# Configure Flowise directories using the ARG -RUN mkdir -p $LOG_PATH $FLOWISE_PATH/uploads && chmod -R 777 $LOG_PATH $FLOWISE_PATH - -WORKDIR /data - -CMD ["npx", "flowise", "start"] \ No newline at end of file diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/README.md b/spaces/FrankZxShen/so-vits-svc-models-pcr/README.md deleted file mode 100644 index b83bf5397d4f69e98af0b80c558b494d1d0945fa..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: So Vits Svc Models Pcr -emoji: 🦀 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FridaZuley/RVC_HFKawaii/tools/infer_batch_rvc.py b/spaces/FridaZuley/RVC_HFKawaii/tools/infer_batch_rvc.py deleted file mode 100644 index 763d17f14877a2ce35f750202e91356c1f24270f..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/tools/infer_batch_rvc.py +++ /dev/null @@ -1,72 +0,0 @@ -import argparse -import os -import sys - -print("Command-line arguments:", sys.argv) - -now_dir = os.getcwd() -sys.path.append(now_dir) -import sys - -import tqdm as tq -from dotenv import load_dotenv -from scipy.io import wavfile - -from configs.config import Config -from infer.modules.vc.modules import VC - - -def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--f0up_key", type=int, default=0) - parser.add_argument("--input_path", type=str, help="input path") - parser.add_argument("--index_path", type=str, help="index path") - parser.add_argument("--f0method", type=str, default="harvest", help="harvest or pm") - parser.add_argument("--opt_path", type=str, help="opt path") - parser.add_argument("--model_name", type=str, help="store in assets/weight_root") - parser.add_argument("--index_rate", type=float, default=0.66, help="index rate") - parser.add_argument("--device", type=str, help="device") - parser.add_argument("--is_half", type=bool, help="use half -> True") - parser.add_argument("--filter_radius", type=int, default=3, help="filter radius") - parser.add_argument("--resample_sr", type=int, default=0, help="resample sr") - parser.add_argument("--rms_mix_rate", type=float, default=1, help="rms mix rate") - 
parser.add_argument("--protect", type=float, default=0.33, help="protect") - - args = parser.parse_args() - sys.argv = sys.argv[:1] - - return args - - -def main(): - load_dotenv() - args = arg_parse() - config = Config() - config.device = args.device if args.device else config.device - config.is_half = args.is_half if args.is_half else config.is_half - vc = VC(config) - vc.get_vc(args.model_name) - audios = os.listdir(args.input_path) - for file in tq.tqdm(audios): - if file.endswith(".wav"): - file_path = os.path.join(args.input_path, file) - _, wav_opt = vc.vc_single( - 0, - file_path, - args.f0up_key, - None, - args.f0method, - args.index_path, - None, - args.index_rate, - args.filter_radius, - args.resample_sr, - args.rms_mix_rate, - args.protect, - ) - out_path = os.path.join(args.opt_path, file) - wavfile.write(out_path, wav_opt[0], wav_opt[1]) - - -if __name__ == "__main__": - main() diff --git a/spaces/Funbi/Chat2/README.md b/spaces/Funbi/Chat2/README.md deleted file mode 100644 index 202e6d617e26b5624e3a591f8520e6d4209a29da..0000000000000000000000000000000000000000 --- a/spaces/Funbi/Chat2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Chat2 -emoji: 🌖 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GXSA/bingo/src/components/chat-suggestions.tsx b/spaces/GXSA/bingo/src/components/chat-suggestions.tsx deleted file mode 100644 index 48aec7c84e4407c482acdfcc7857fb0f660d12d3..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length, setSuggestions]) - - return currentSuggestions?.length ? ( -
-
-      <div>
-        <Image alt="help" src={HelpIcon} onClick={() => toggleSuggestions()} />
-        {
-          currentSuggestions.map(suggestion => (
-            <button key={suggestion.text} onClick={() => setInput(suggestion.text)}>{suggestion.text}</button>
-          ))
-        }
-      </div>
- ) : null -} diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/train.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/train.py deleted file mode 100644 index 619952e8de6c390912fe341403a39169592e585d..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/encoder/train.py +++ /dev/null @@ -1,123 +0,0 @@ -from encoder.visualizations import Visualizations -from encoder.data_objects import SpeakerVerificationDataLoader, SpeakerVerificationDataset -from encoder.params_model import * -from encoder.model import SpeakerEncoder -from utils.profiler import Profiler -from pathlib import Path -import torch - -def sync(device: torch.device): - # For correct profiling (cuda operations are async) - if device.type == "cuda": - torch.cuda.synchronize(device) - - -def train(run_id: str, clean_data_root: Path, models_dir: Path, umap_every: int, save_every: int, - backup_every: int, vis_every: int, force_restart: bool, visdom_server: str, - no_visdom: bool): - # Create a dataset and a dataloader - dataset = SpeakerVerificationDataset(clean_data_root) - loader = SpeakerVerificationDataLoader( - dataset, - speakers_per_batch, - utterances_per_speaker, - num_workers=8, - ) - - # Setup the device on which to run the forward pass and the loss. These can be different, - # because the forward pass is faster on the GPU whereas the loss is often (depending on your - # hyperparameters) faster on the CPU. - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - # FIXME: currently, the gradient is None if loss_device is cuda - loss_device = torch.device("cpu") - - # Create the model and the optimizer - model = SpeakerEncoder(device, loss_device) - optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate_init) - init_step = 1 - - # Configure file path for the model - state_fpath = models_dir.joinpath(run_id + ".pt") - backup_dir = models_dir.joinpath(run_id + "_backups") - - # Load any existing model - if not force_restart: - if state_fpath.exists(): - print("Found existing model \"%s\", loading it and resuming training." % run_id) - checkpoint = torch.load(state_fpath) - init_step = checkpoint["step"] - model.load_state_dict(checkpoint["model_state"]) - optimizer.load_state_dict(checkpoint["optimizer_state"]) - optimizer.param_groups[0]["lr"] = learning_rate_init - else: - print("No model \"%s\" found, starting training from scratch." 
% run_id) - else: - print("Starting the training from scratch.") - model.train() - - # Initialize the visualization environment - vis = Visualizations(run_id, vis_every, server=visdom_server, disabled=no_visdom) - vis.log_dataset(dataset) - vis.log_params() - device_name = str(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "CPU") - vis.log_implementation({"Device": device_name}) - - # Training loop - profiler = Profiler(summarize_every=10, disabled=False) - for step, speaker_batch in enumerate(loader, init_step): - profiler.tick("Blocking, waiting for batch (threaded)") - - # Forward pass - inputs = torch.from_numpy(speaker_batch.data).to(device) - sync(device) - profiler.tick("Data to %s" % device) - embeds = model(inputs) - sync(device) - profiler.tick("Forward pass") - embeds_loss = embeds.view((speakers_per_batch, utterances_per_speaker, -1)).to(loss_device) - loss, eer = model.loss(embeds_loss) - sync(loss_device) - profiler.tick("Loss") - - # Backward pass - model.zero_grad() - loss.backward() - profiler.tick("Backward pass") - model.do_gradient_ops() - optimizer.step() - profiler.tick("Parameter update") - - # Update visualizations - # learning_rate = optimizer.param_groups[0]["lr"] - vis.update(loss.item(), eer, step) - - # Draw projections and save them to the backup folder - if umap_every != 0 and step % umap_every == 0: - print("Drawing and saving projections (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - projection_fpath = backup_dir.joinpath("%s_umap_%06d.png" % (run_id, step)) - embeds = embeds.detach().cpu().numpy() - vis.draw_projections(embeds, utterances_per_speaker, step, projection_fpath) - vis.save() - - # Overwrite the latest version of the model - if save_every != 0 and step % save_every == 0: - print("Saving the model (step %d)" % step) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, state_fpath) - - # Make a backup - if backup_every != 0 and step % backup_every == 0: - print("Making a backup (step %d)" % step) - backup_dir.mkdir(exist_ok=True) - backup_fpath = backup_dir.joinpath("%s_bak_%06d.pt" % (run_id, step)) - torch.save({ - "step": step + 1, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, backup_fpath) - - profiler.tick("Extras (visualizations, saving)") diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer_train.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer_train.py deleted file mode 100644 index 2743d590d882f209734b68921b84a9d23492942c..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/synthesizer_train.py +++ /dev/null @@ -1,35 +0,0 @@ -from synthesizer.hparams import hparams -from synthesizer.train import train -from utils.argutils import print_args -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("run_id", type=str, help= \ - "Name for this model instance. If a model state from the same run ID was previously " - "saved, the training will restart from there. 
Pass -f to overwrite saved states and " - "restart from scratch.") - parser.add_argument("syn_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the synthesizer directory that contains the ground truth mel spectrograms, " - "the wavs and the embeds.") - parser.add_argument("-m", "--models_dir", type=str, default="synthesizer/saved_models/", help=\ - "Path to the output directory that will contain the saved model weights and the logs.") - parser.add_argument("-s", "--save_every", type=int, default=1000, help= \ - "Number of steps between updates of the model on the disk. Set to 0 to never save the " - "model.") - parser.add_argument("-b", "--backup_every", type=int, default=25000, help= \ - "Number of steps between backups of the model. Set to 0 to never make backups of the " - "model.") - parser.add_argument("-f", "--force_restart", action="store_true", help= \ - "Do not load any saved model and restart from scratch.") - parser.add_argument("--hparams", default="", - help="Hyperparameter overrides as a comma-separated list of name=value " - "pairs") - args = parser.parse_args() - print_args(args, parser) - - args.hparams = hparams.parse(args.hparams) - - # Run the training - train(**vars(args)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pafpn/faster_rcnn_r50_pafpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/pafpn/faster_rcnn_r50_pafpn_1x_coco.py deleted file mode 100644 index b2fdef91c5cc8396baee9c2d8a09556162443078..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/pafpn/faster_rcnn_r50_pafpn_1x_coco.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' - -model = dict( - neck=dict( - type='PAFPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/fcn_hr18.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/fcn_hr18.py deleted file mode 100644 index c3e299bc89ada56ca14bbffcbdb08a586b8ed9e9..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/fcn_hr18.py +++ /dev/null @@ -1,52 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://msra/hrnetv2_w18', - backbone=dict( - type='HRNet', - norm_cfg=norm_cfg, - norm_eval=False, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(18, 36)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(18, 36, 72)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(18, 36, 72, 144)))), - decode_head=dict( - type='FCNHead', - in_channels=[18, 36, 72, 144], - in_index=(0, 1, 2, 3), - channels=sum([18, 36, 72, 144]), - input_transform='resize_concat', - kernel_size=1, - num_convs=1, - concat_input=False, - dropout_ratio=-1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/Gyuyu/andite-anything-v4.0/app.py 
b/spaces/Gyuyu/andite-anything-v4.0/app.py deleted file mode 100644 index 47a2051db6dadeea03edf70d62694fd3e5e88ba7..0000000000000000000000000000000000000000 --- a/spaces/Gyuyu/andite-anything-v4.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/andite/anything-v4.0").launch() \ No newline at end of file diff --git a/spaces/Hallucinate/demo/taming/data/open_images_helper.py b/spaces/Hallucinate/demo/taming/data/open_images_helper.py deleted file mode 100644 index 8feb7c6e705fc165d2983303192aaa88f579b243..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/taming/data/open_images_helper.py +++ /dev/null @@ -1,379 +0,0 @@ -open_images_unify_categories_for_coco = { - '/m/03bt1vf': '/m/01g317', - '/m/04yx4': '/m/01g317', - '/m/05r655': '/m/01g317', - '/m/01bl7v': '/m/01g317', - '/m/0cnyhnx': '/m/01xq0k1', - '/m/01226z': '/m/018xm', - '/m/05ctyq': '/m/018xm', - '/m/058qzx': '/m/04ctx', - '/m/06pcq': '/m/0l515', - '/m/03m3pdh': '/m/02crq1', - '/m/046dlr': '/m/01x3z', - '/m/0h8mzrc': '/m/01x3z', -} - - -top_300_classes_plus_coco_compatibility = [ - ('Man', 1060962), - ('Clothing', 986610), - ('Tree', 748162), - ('Woman', 611896), - ('Person', 610294), - ('Human face', 442948), - ('Girl', 175399), - ('Building', 162147), - ('Car', 159135), - ('Plant', 155704), - ('Human body', 137073), - ('Flower', 133128), - ('Window', 127485), - ('Human arm', 118380), - ('House', 114365), - ('Wheel', 111684), - ('Suit', 99054), - ('Human hair', 98089), - ('Human head', 92763), - ('Chair', 88624), - ('Boy', 79849), - ('Table', 73699), - ('Jeans', 57200), - ('Tire', 55725), - ('Skyscraper', 53321), - ('Food', 52400), - ('Footwear', 50335), - ('Dress', 50236), - ('Human leg', 47124), - ('Toy', 46636), - ('Tower', 45605), - ('Boat', 43486), - ('Land vehicle', 40541), - ('Bicycle wheel', 34646), - ('Palm tree', 33729), - ('Fashion accessory', 32914), - ('Glasses', 31940), - ('Bicycle', 31409), - ('Furniture', 30656), - ('Sculpture', 29643), - ('Bottle', 27558), - ('Dog', 26980), - ('Snack', 26796), - ('Human hand', 26664), - ('Bird', 25791), - ('Book', 25415), - ('Guitar', 24386), - ('Jacket', 23998), - ('Poster', 22192), - ('Dessert', 21284), - ('Baked goods', 20657), - ('Drink', 19754), - ('Flag', 18588), - ('Houseplant', 18205), - ('Tableware', 17613), - ('Airplane', 17218), - ('Door', 17195), - ('Sports uniform', 17068), - ('Shelf', 16865), - ('Drum', 16612), - ('Vehicle', 16542), - ('Microphone', 15269), - ('Street light', 14957), - ('Cat', 14879), - ('Fruit', 13684), - ('Fast food', 13536), - ('Animal', 12932), - ('Vegetable', 12534), - ('Train', 12358), - ('Horse', 11948), - ('Flowerpot', 11728), - ('Motorcycle', 11621), - ('Fish', 11517), - ('Desk', 11405), - ('Helmet', 10996), - ('Truck', 10915), - ('Bus', 10695), - ('Hat', 10532), - ('Auto part', 10488), - ('Musical instrument', 10303), - ('Sunglasses', 10207), - ('Picture frame', 10096), - ('Sports equipment', 10015), - ('Shorts', 9999), - ('Wine glass', 9632), - ('Duck', 9242), - ('Wine', 9032), - ('Rose', 8781), - ('Tie', 8693), - ('Butterfly', 8436), - ('Beer', 7978), - ('Cabinetry', 7956), - ('Laptop', 7907), - ('Insect', 7497), - ('Goggles', 7363), - ('Shirt', 7098), - ('Dairy Product', 7021), - ('Marine invertebrates', 7014), - ('Cattle', 7006), - ('Trousers', 6903), - ('Van', 6843), - ('Billboard', 6777), - ('Balloon', 6367), - ('Human nose', 6103), - ('Tent', 6073), - ('Camera', 6014), - ('Doll', 6002), - ('Coat', 5951), - ('Mobile phone', 5758), - ('Swimwear', 5729), - ('Strawberry', 5691), 
- ('Stairs', 5643), - ('Goose', 5599), - ('Umbrella', 5536), - ('Cake', 5508), - ('Sun hat', 5475), - ('Bench', 5310), - ('Bookcase', 5163), - ('Bee', 5140), - ('Computer monitor', 5078), - ('Hiking equipment', 4983), - ('Office building', 4981), - ('Coffee cup', 4748), - ('Curtain', 4685), - ('Plate', 4651), - ('Box', 4621), - ('Tomato', 4595), - ('Coffee table', 4529), - ('Office supplies', 4473), - ('Maple', 4416), - ('Muffin', 4365), - ('Cocktail', 4234), - ('Castle', 4197), - ('Couch', 4134), - ('Pumpkin', 3983), - ('Computer keyboard', 3960), - ('Human mouth', 3926), - ('Christmas tree', 3893), - ('Mushroom', 3883), - ('Swimming pool', 3809), - ('Pastry', 3799), - ('Lavender (Plant)', 3769), - ('Football helmet', 3732), - ('Bread', 3648), - ('Traffic sign', 3628), - ('Common sunflower', 3597), - ('Television', 3550), - ('Bed', 3525), - ('Cookie', 3485), - ('Fountain', 3484), - ('Paddle', 3447), - ('Bicycle helmet', 3429), - ('Porch', 3420), - ('Deer', 3387), - ('Fedora', 3339), - ('Canoe', 3338), - ('Carnivore', 3266), - ('Bowl', 3202), - ('Human eye', 3166), - ('Ball', 3118), - ('Pillow', 3077), - ('Salad', 3061), - ('Beetle', 3060), - ('Orange', 3050), - ('Drawer', 2958), - ('Platter', 2937), - ('Elephant', 2921), - ('Seafood', 2921), - ('Monkey', 2915), - ('Countertop', 2879), - ('Watercraft', 2831), - ('Helicopter', 2805), - ('Kitchen appliance', 2797), - ('Personal flotation device', 2781), - ('Swan', 2739), - ('Lamp', 2711), - ('Boot', 2695), - ('Bronze sculpture', 2693), - ('Chicken', 2677), - ('Taxi', 2643), - ('Juice', 2615), - ('Cowboy hat', 2604), - ('Apple', 2600), - ('Tin can', 2590), - ('Necklace', 2564), - ('Ice cream', 2560), - ('Human beard', 2539), - ('Coin', 2536), - ('Candle', 2515), - ('Cart', 2512), - ('High heels', 2441), - ('Weapon', 2433), - ('Handbag', 2406), - ('Penguin', 2396), - ('Rifle', 2352), - ('Violin', 2336), - ('Skull', 2304), - ('Lantern', 2285), - ('Scarf', 2269), - ('Saucer', 2225), - ('Sheep', 2215), - ('Vase', 2189), - ('Lily', 2180), - ('Mug', 2154), - ('Parrot', 2140), - ('Human ear', 2137), - ('Sandal', 2115), - ('Lizard', 2100), - ('Kitchen & dining room table', 2063), - ('Spider', 1977), - ('Coffee', 1974), - ('Goat', 1926), - ('Squirrel', 1922), - ('Cello', 1913), - ('Sushi', 1881), - ('Tortoise', 1876), - ('Pizza', 1870), - ('Studio couch', 1864), - ('Barrel', 1862), - ('Cosmetics', 1841), - ('Moths and butterflies', 1841), - ('Convenience store', 1817), - ('Watch', 1792), - ('Home appliance', 1786), - ('Harbor seal', 1780), - ('Luggage and bags', 1756), - ('Vehicle registration plate', 1754), - ('Shrimp', 1751), - ('Jellyfish', 1730), - ('French fries', 1723), - ('Egg (Food)', 1698), - ('Football', 1697), - ('Musical keyboard', 1683), - ('Falcon', 1674), - ('Candy', 1660), - ('Medical equipment', 1654), - ('Eagle', 1651), - ('Dinosaur', 1634), - ('Surfboard', 1630), - ('Tank', 1628), - ('Grape', 1624), - ('Lion', 1624), - ('Owl', 1622), - ('Ski', 1613), - ('Waste container', 1606), - ('Frog', 1591), - ('Sparrow', 1585), - ('Rabbit', 1581), - ('Pen', 1546), - ('Sea lion', 1537), - ('Spoon', 1521), - ('Sink', 1512), - ('Teddy bear', 1507), - ('Bull', 1495), - ('Sofa bed', 1490), - ('Dragonfly', 1479), - ('Brassiere', 1478), - ('Chest of drawers', 1472), - ('Aircraft', 1466), - ('Human foot', 1463), - ('Pig', 1455), - ('Fork', 1454), - ('Antelope', 1438), - ('Tripod', 1427), - ('Tool', 1424), - ('Cheese', 1422), - ('Lemon', 1397), - ('Hamburger', 1393), - ('Dolphin', 1390), - ('Mirror', 1390), - ('Marine mammal', 1387), - ('Giraffe', 
1385), - ('Snake', 1368), - ('Gondola', 1364), - ('Wheelchair', 1360), - ('Piano', 1358), - ('Cupboard', 1348), - ('Banana', 1345), - ('Trumpet', 1335), - ('Lighthouse', 1333), - ('Invertebrate', 1317), - ('Carrot', 1268), - ('Sock', 1260), - ('Tiger', 1241), - ('Camel', 1224), - ('Parachute', 1224), - ('Bathroom accessory', 1223), - ('Earrings', 1221), - ('Headphones', 1218), - ('Skirt', 1198), - ('Skateboard', 1190), - ('Sandwich', 1148), - ('Saxophone', 1141), - ('Goldfish', 1136), - ('Stool', 1104), - ('Traffic light', 1097), - ('Shellfish', 1081), - ('Backpack', 1079), - ('Sea turtle', 1078), - ('Cucumber', 1075), - ('Tea', 1051), - ('Toilet', 1047), - ('Roller skates', 1040), - ('Mule', 1039), - ('Bust', 1031), - ('Broccoli', 1030), - ('Crab', 1020), - ('Oyster', 1019), - ('Cannon', 1012), - ('Zebra', 1012), - ('French horn', 1008), - ('Grapefruit', 998), - ('Whiteboard', 997), - ('Zucchini', 997), - ('Crocodile', 992), - - ('Clock', 960), - ('Wall clock', 958), - - ('Doughnut', 869), - ('Snail', 868), - - ('Baseball glove', 859), - - ('Panda', 830), - ('Tennis racket', 830), - - ('Pear', 652), - - ('Bagel', 617), - ('Oven', 616), - ('Ladybug', 615), - ('Shark', 615), - ('Polar bear', 614), - ('Ostrich', 609), - - ('Hot dog', 473), - ('Microwave oven', 467), - ('Fire hydrant', 20), - ('Stop sign', 20), - ('Parking meter', 20), - ('Bear', 20), - ('Flying disc', 20), - ('Snowboard', 20), - ('Tennis ball', 20), - ('Kite', 20), - ('Baseball bat', 20), - ('Kitchen knife', 20), - ('Knife', 20), - ('Submarine sandwich', 20), - ('Computer mouse', 20), - ('Remote control', 20), - ('Toaster', 20), - ('Sink', 20), - ('Refrigerator', 20), - ('Alarm clock', 20), - ('Wall clock', 20), - ('Scissors', 20), - ('Hair dryer', 20), - ('Toothbrush', 20), - ('Suitcase', 20) -] diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/ubert/modeling_ubert.py b/spaces/HaloMaster/chinesesummary/fengshen/models/ubert/modeling_ubert.py deleted file mode 100644 index 5f200c24110302020788fdc64801df3be84e3efa..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/models/ubert/modeling_ubert.py +++ /dev/null @@ -1,698 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
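# ---------------------------------------------------------------------------
# Illustrative sketch of the biaffine span scoring used by the UbertModel
# defined below (a minimal, hypothetical example, not the original module):
# BERT hidden states are projected to "query" and "key" vectors, and a learned
# tensor U maps every (start, end) pair of vectors to a span logit. The sizes
# used here (batch 2, sequence 5, hidden 8, biaffine 4) are made-up
# assumptions chosen only to make the shapes easy to follow.
import torch
from torch import nn

hidden = torch.randn(2, 5, 8)                                  # (batch, seq_len, hidden_size)
query = nn.Sequential(nn.Linear(8, 4), nn.GELU())(hidden)      # query projection + GELU
key = nn.Sequential(nn.Linear(8, 4), nn.GELU())(hidden)        # key projection + GELU

U = torch.zeros(4 + 1, 1, 4 + 1)                               # +1 on each side for the bias term
nn.init.normal_(U, mean=0, std=0.1)
q = torch.cat((query, torch.ones_like(query[..., :1])), dim=-1)
k = torch.cat((key, torch.ones_like(key[..., :1])), dim=-1)
span_logits = torch.einsum('bxi,ioj,byj->bxyo', q, U, k)       # (batch, seq, seq, 1)
assert span_logits.shape == (2, 5, 5, 1)
# ---------------------------------------------------------------------------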
- -from logging import basicConfig, setLogRecordFactory -import torch -from torch import nn -import json -from tqdm import tqdm -import os -import numpy as np -from transformers import ( - AutoTokenizer, - AutoModelForSequenceClassification, - BertTokenizer, - file_utils -) -import pytorch_lightning as pl - -from pytorch_lightning.callbacks import ModelCheckpoint -from pytorch_lightning import trainer, loggers -from torch.utils.data import Dataset, DataLoader -from transformers.optimization import get_linear_schedule_with_warmup -from transformers import BertForPreTraining, BertForMaskedLM, BertModel -from transformers import BertConfig, BertForTokenClassification, BertPreTrainedModel -import transformers -import unicodedata -import re -import argparse - - -transformers.logging.set_verbosity_error() -# os.environ["CUDA_VISIBLE_DEVICES"] = '6' - - -def search(pattern, sequence): - n = len(pattern) - res = [] - for i in range(len(sequence)): - if sequence[i:i + n] == pattern: - res.append([i, i + n-1]) - return res - - -class UbertDataset(Dataset): - def __init__(self, data, tokenizer, args, used_mask=True): - super().__init__() - self.tokenizer = tokenizer - self.max_length = args.max_length - self.num_labels = args.num_labels - self.used_mask = used_mask - self.data = data - self.args = args - - def __len__(self): - return len(self.data) - - def __getitem__(self, index): - return self.encode(self.data[index], self.used_mask) - - def encode(self, item, used_mask=False): - input_ids1 = [] - attention_mask1 = [] - token_type_ids1 = [] - span_labels1 = [] - span_labels_masks1 = [] - - input_ids0 = [] - attention_mask0 = [] - token_type_ids0 = [] - span_labels0 = [] - span_labels_masks0 = [] - - subtask_type = item['subtask_type'] - for choice in item['choices']: - try: - texta = item['task_type'] + '[SEP]' + \ - subtask_type + '[SEP]' + choice['entity_type'] - textb = item['text'] - encode_dict = self.tokenizer.encode_plus(texta, textb, - max_length=self.max_length, - padding='max_length', - truncation='longest_first') - - encode_sent = encode_dict['input_ids'] - encode_token_type_ids = encode_dict['token_type_ids'] - encode_attention_mask = encode_dict['attention_mask'] - span_label = np.zeros((self.max_length, self.max_length)) - span_label_mask = np.zeros( - (self.max_length, self.max_length))-10000 - - if item['task_type'] == '分类任务': - span_label_mask[0, 0] = 0 - span_label[0, 0] = choice['label'] - - else: - question_len = len(self.tokenizer.encode(texta)) - span_label_mask[question_len:, question_len:] = np.zeros( - (self.max_length-question_len, self.max_length-question_len)) - for entity in choice['entity_list']: - # if 'entity_name' in entity.keys() and entity['entity_name']=='': - # continue - entity_idx_list = entity['entity_idx'] - if entity_idx_list == []: - continue - for entity_idx in entity_idx_list: - if entity_idx == []: - continue - start_idx_text = item['text'][:entity_idx[0]] - start_idx_text_encode = self.tokenizer.encode( - start_idx_text, add_special_tokens=False) - start_idx = question_len + \ - len(start_idx_text_encode) - - end_idx_text = item['text'][:entity_idx[1]+1] - end_idx_text_encode = self.tokenizer.encode( - end_idx_text, add_special_tokens=False) - end_idx = question_len + \ - len(end_idx_text_encode) - 1 - if start_idx < self.max_length and end_idx < self.max_length: - span_label[start_idx, end_idx] = 1 - - if np.sum(span_label) < 1: - input_ids0.append(encode_sent) - attention_mask0.append(encode_attention_mask) - 
token_type_ids0.append(encode_token_type_ids) - span_labels0.append(span_label) - span_labels_masks0.append(span_label_mask) - else: - input_ids1.append(encode_sent) - attention_mask1.append(encode_attention_mask) - token_type_ids1.append(encode_token_type_ids) - span_labels1.append(span_label) - span_labels_masks1.append(span_label_mask) - except: - print(item) - print(texta) - print(textb) - - randomize = np.arange(len(input_ids0)) - np.random.shuffle(randomize) - cur = 0 - count = len(input_ids1) - while count < self.args.num_labels: - if cur < len(randomize): - input_ids1.append(input_ids0[randomize[cur]]) - attention_mask1.append(attention_mask0[randomize[cur]]) - token_type_ids1.append(token_type_ids0[randomize[cur]]) - span_labels1.append(span_labels0[randomize[cur]]) - span_labels_masks1.append(span_labels_masks0[randomize[cur]]) - cur += 1 - count += 1 - - while len(input_ids1) < self.args.num_labels: - input_ids1.append([0]*self.max_length) - attention_mask1.append([0]*self.max_length) - token_type_ids1.append([0]*self.max_length) - span_labels1.append(np.zeros((self.max_length, self.max_length))) - span_labels_masks1.append( - np.zeros((self.max_length, self.max_length))-10000) - - input_ids = input_ids1[:self.args.num_labels] - attention_mask = attention_mask1[:self.args.num_labels] - token_type_ids = token_type_ids1[:self.args.num_labels] - span_labels = span_labels1[:self.args.num_labels] - span_labels_masks = span_labels_masks1[:self.args.num_labels] - - span_labels = np.array(span_labels) - span_labels_masks = np.array(span_labels_masks) - if np.sum(span_labels) < 1: - span_labels[-1, -1, -1] = 1 - span_labels_masks[-1, -1, -1] = 10000 - - sample = { - "input_ids": torch.tensor(input_ids).long(), - "token_type_ids": torch.tensor(token_type_ids).long(), - "attention_mask": torch.tensor(attention_mask).float(), - "span_labels": torch.tensor(span_labels).float(), - "span_labels_mask": torch.tensor(span_labels_masks).float() - } - - return sample - - -class UbertDataModel(pl.LightningDataModule): - @staticmethod - def add_data_specific_args(parent_args): - parser = parent_args.add_argument_group('TASK NAME DataModel') - parser.add_argument('--num_workers', default=8, type=int) - parser.add_argument('--batchsize', default=8, type=int) - parser.add_argument('--max_length', default=128, type=int) - return parent_args - - def __init__(self, train_data, val_data, tokenizer, args): - super().__init__() - self.batchsize = args.batchsize - - self.train_data = UbertDataset(train_data, tokenizer, args, True) - self.valid_data = UbertDataset(val_data, tokenizer, args, False) - - def train_dataloader(self): - return DataLoader(self.train_data, shuffle=True, batch_size=self.batchsize, pin_memory=False) - - def val_dataloader(self): - return DataLoader(self.valid_data, shuffle=False, batch_size=self.batchsize, pin_memory=False) - - -class biaffine(nn.Module): - def __init__(self, in_size, out_size, bias_x=True, bias_y=True): - super().__init__() - self.bias_x = bias_x - self.bias_y = bias_y - self.out_size = out_size - self.U = torch.nn.Parameter(torch.zeros( - in_size + int(bias_x), out_size, in_size + int(bias_y))) - torch.nn.init.normal_(self.U, mean=0, std=0.1) - - def forward(self, x, y): - if self.bias_x: - x = torch.cat((x, torch.ones_like(x[..., :1])), dim=-1) - if self.bias_y: - y = torch.cat((y, torch.ones_like(y[..., :1])), dim=-1) - bilinar_mapping = torch.einsum('bxi,ioj,byj->bxyo', x, self.U, y) - return bilinar_mapping - - -class MultilabelCrossEntropy(nn.Module): - def 
__init__(self): - super().__init__() - - def forward(self, y_pred, y_true): - y_true = y_true.float() - y_pred = torch.mul((1.0 - torch.mul(y_true, 2.0)), y_pred) - y_pred_neg = y_pred - torch.mul(y_true, 1e12) - y_pred_pos = y_pred - torch.mul(1.0 - y_true, 1e12) - zeros = torch.zeros_like(y_pred[..., :1]) - y_pred_neg = torch.cat([y_pred_neg, zeros], axis=-1) - y_pred_pos = torch.cat([y_pred_pos, zeros], axis=-1) - neg_loss = torch.logsumexp(y_pred_neg, axis=-1) - pos_loss = torch.logsumexp(y_pred_pos, axis=-1) - loss = torch.mean(neg_loss + pos_loss) - return loss - - -class UbertModel(BertPreTrainedModel): - - def __init__(self, config): - super().__init__(config) - self.bert = BertModel(config) - self.query_layer = torch.nn.Sequential(torch.nn.Linear(in_features=self.config.hidden_size, - out_features=self.config.biaffine_size), - torch.nn.GELU()) - self.key_layer = torch.nn.Sequential(torch.nn.Linear(in_features=self.config.hidden_size, out_features=self.config.biaffine_size), - torch.nn.GELU()) - self.biaffine_query_key_cls = biaffine(self.config.biaffine_size, 1) - self.loss_softmax = MultilabelCrossEntropy() - self.loss_sigmoid = torch.nn.BCEWithLogitsLoss(reduction='mean') - - def forward(self, - input_ids, - attention_mask, - token_type_ids, - span_labels=None, - span_labels_mask=None): - - batch_size, num_label, seq_len = input_ids.shape - - input_ids = input_ids.view(-1, seq_len) - attention_mask = attention_mask.view(-1, seq_len) - token_type_ids = token_type_ids.view(-1, seq_len) - - batch_size, seq_len = input_ids.shape - outputs = self.bert(input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - output_hidden_states=True) # (bsz, seq, dim) - - hidden_states = outputs[0] - batch_size, seq_len, hidden_size = hidden_states.shape - - query = self.query_layer(hidden_states) - key = self.key_layer(hidden_states) - - span_logits = self.biaffine_query_key_cls( - query, key).reshape(-1, num_label, seq_len, seq_len) - - span_logits = span_logits + span_labels_mask - - if span_labels == None: - return 0, span_logits - else: - soft_loss1 = self.loss_softmax( - span_logits.reshape(-1, num_label, seq_len*seq_len), span_labels.reshape(-1, num_label, seq_len*seq_len)) - soft_loss2 = self.loss_softmax(span_logits.permute( - 0, 2, 3, 1), span_labels.permute(0, 2, 3, 1)) - sig_loss = self.loss_sigmoid(span_logits, span_labels) - all_loss = 10*(100*sig_loss+soft_loss1+soft_loss2) - return all_loss, span_logits - - -class UbertLitModel(pl.LightningModule): - @staticmethod - def add_model_specific_args(parent_args): - parser = parent_args.add_argument_group('BaseModel') - - parser.add_argument('--learning_rate', default=1e-5, type=float) - parser.add_argument('--weight_decay', default=0.1, type=float) - parser.add_argument('--warmup', default=0.01, type=float) - parser.add_argument('--num_labels', default=10, type=int) - - return parent_args - - def __init__(self, args, num_data=1): - super().__init__() - self.args = args - self.num_data = num_data - self.model = UbertModel.from_pretrained( - self.args.pretrained_model_path) - self.count = 0 - - def setup(self, stage) -> None: - if stage == 'fit': - num_gpus = self.trainer.gpus if self.trainer.gpus is not None else 0 - self.total_step = int(self.trainer.max_epochs * self.num_data / - (max(1, num_gpus) * self.trainer.accumulate_grad_batches)) - print('Total training step:', self.total_step) - - def training_step(self, batch, batch_idx): - loss, span_logits = self.model(**batch) - span_acc, recall, precise = 
self.comput_metrix_span( - span_logits, batch['span_labels']) - self.log('train_loss', loss) - self.log('train_span_acc', span_acc) - self.log('train_span_recall', recall) - self.log('train_span_precise', precise) - - return loss - - def validation_step(self, batch, batch_idx): - loss, span_logits = self.model(**batch) - span_acc, recall, precise = self.comput_metrix_span( - span_logits, batch['span_labels']) - - self.log('val_loss', loss) - self.log('val_span_acc', span_acc) - self.log('val_span_recall', recall) - self.log('val_span_precise', precise) - - def predict_step(self, batch, batch_idx): - loss, span_logits = self.model(**batch) - span_acc = self.comput_metrix_span(span_logits, batch['span_labels']) - return span_acc.item() - - def configure_optimizers(self): - - no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight'] - paras = list( - filter(lambda p: p[1].requires_grad, self.named_parameters())) - paras = [{ - 'params': - [p for n, p in paras if not any(nd in n for nd in no_decay)], - 'weight_decay': self.args.weight_decay - }, { - 'params': [p for n, p in paras if any(nd in n for nd in no_decay)], - 'weight_decay': 0.0 - }] - optimizer = torch.optim.AdamW(paras, lr=self.args.learning_rate) - scheduler = get_linear_schedule_with_warmup( - optimizer, int(self.total_step * self.args.warmup), - self.total_step) - - return [{ - 'optimizer': optimizer, - 'lr_scheduler': { - 'scheduler': scheduler, - 'interval': 'step', - 'frequency': 1 - } - }] - - def comput_metrix_span(self, logits, labels): - ones = torch.ones_like(logits) - zero = torch.zeros_like(logits) - logits = torch.where(logits < 0, zero, ones) - y_pred = logits.view(size=(-1,)) - y_true = labels.view(size=(-1,)) - corr = torch.eq(y_pred, y_true).float() - corr = torch.multiply(y_true, corr) - recall = torch.sum(corr.float())/(torch.sum(y_true.float())+1e-5) - precise = torch.sum(corr.float())/(torch.sum(y_pred.float())+1e-5) - f1 = 2*recall*precise/(recall+precise+1e-5) - return f1, recall, precise - - -class TaskModelCheckpoint: - @staticmethod - def add_argparse_args(parent_args): - parser = parent_args.add_argument_group('BaseModel') - - parser.add_argument('--monitor', default='train_loss', type=str) - parser.add_argument('--mode', default='min', type=str) - parser.add_argument('--checkpoint_path', - default='./checkpoint/', type=str) - parser.add_argument( - '--filename', default='model-{epoch:02d}-{train_loss:.4f}', type=str) - - parser.add_argument('--save_top_k', default=3, type=float) - parser.add_argument('--every_n_epochs', default=1, type=float) - parser.add_argument('--every_n_train_steps', default=100, type=float) - - parser.add_argument('--save_weights_only', default=True, type=bool) - return parent_args - - def __init__(self, args): - self.callbacks = ModelCheckpoint(monitor=args.monitor, - save_top_k=args.save_top_k, - mode=args.mode, - save_last=True, - every_n_train_steps=args.every_n_train_steps, - save_weights_only=args.save_weights_only, - dirpath=args.checkpoint_path, - filename=args.filename) - - -class OffsetMapping: - def __init__(self): - self._do_lower_case = True - - @staticmethod - def stem(token): - if token[:2] == '##': - return token[2:] - else: - return token - - @staticmethod - def _is_control(ch): - return unicodedata.category(ch) in ('Cc', 'Cf') - - @staticmethod - def _is_special(ch): - return bool(ch) and (ch[0] == '[') and (ch[-1] == ']') - - def rematch(self, text, tokens): - if self._do_lower_case: - text = text.lower() - - normalized_text, char_mapping = '', [] - for i, ch in 
enumerate(text): - if self._do_lower_case: - ch = unicodedata.normalize('NFD', ch) - ch = ''.join( - [c for c in ch if unicodedata.category(c) != 'Mn']) - ch = ''.join([ - c for c in ch - if not (ord(c) == 0 or ord(c) == 0xfffd or self._is_control(c)) - ]) - normalized_text += ch - char_mapping.extend([i] * len(ch)) - - text, token_mapping, offset = normalized_text, [], 0 - for token in tokens: - if self._is_special(token): - token_mapping.append([offset]) - offset += 1 - else: - token = self.stem(token) - start = text[offset:].index(token) + offset - end = start + len(token) - token_mapping.append(char_mapping[start:end]) - offset = end - - return token_mapping - - -class extractModel: - def get_actual_id(self, text, query_text, tokenizer, args): - text_encode = tokenizer.encode(text) - one_input_encode = tokenizer.encode(query_text) - text_start_id = search(text_encode[1:-1], one_input_encode)[0][0] - text_end_id = text_start_id+len(text_encode)-1 - if text_end_id > args.max_length: - text_end_id = args.max_length - - text_token = tokenizer.tokenize(text) - text_mapping = OffsetMapping().rematch(text, text_token) - - return text_start_id, text_end_id, text_mapping, one_input_encode - - def extract_index(self, span_logits, sample_length, split_value=0.5): - result = [] - for i in range(sample_length): - for j in range(i, sample_length): - if span_logits[i, j] > split_value: - result.append((i, j, span_logits[i, j])) - return result - - def extract_entity(self, text, entity_idx, text_start_id, text_mapping): - start_split = text_mapping[entity_idx[0]-text_start_id] if entity_idx[0] - \ - text_start_id < len(text_mapping) and entity_idx[0]-text_start_id >= 0 else [] - end_split = text_mapping[entity_idx[1]-text_start_id] if entity_idx[1] - \ - text_start_id < len(text_mapping) and entity_idx[1]-text_start_id >= 0 else [] - entity = '' - if start_split != [] and end_split != []: - entity = text[start_split[0]:end_split[-1]+1] - return entity - - def extract(self, batch_data, model, tokenizer, args): - input_ids = [] - attention_mask = [] - token_type_ids = [] - span_labels_masks = [] - - for item in batch_data: - input_ids0 = [] - attention_mask0 = [] - token_type_ids0 = [] - span_labels_masks0 = [] - for choice in item['choices']: - texta = item['task_type'] + '[SEP]' + \ - item['subtask_type'] + '[SEP]' + choice['entity_type'] - textb = item['text'] - encode_dict = tokenizer.encode_plus(texta, textb, - max_length=args.max_length, - padding='max_length', - truncation='longest_first') - - encode_sent = encode_dict['input_ids'] - encode_token_type_ids = encode_dict['token_type_ids'] - encode_attention_mask = encode_dict['attention_mask'] - span_label_mask = np.zeros( - (args.max_length, args.max_length))-10000 - - if item['task_type'] == '分类任务': - span_label_mask[0, 0] = 0 - else: - question_len = len(tokenizer.encode(texta)) - span_label_mask[question_len:, question_len:] = np.zeros( - (args.max_length-question_len, args.max_length-question_len)) - input_ids0.append(encode_sent) - attention_mask0.append(encode_attention_mask) - token_type_ids0.append(encode_token_type_ids) - span_labels_masks0.append(span_label_mask) - - input_ids.append(input_ids0) - attention_mask.append(attention_mask0) - token_type_ids.append(token_type_ids0) - span_labels_masks.append(span_labels_masks0) - - input_ids = torch.tensor(input_ids).to(model.device) - attention_mask = torch.tensor(attention_mask).to(model.device) - token_type_ids = torch.tensor(token_type_ids).to(model.device) - span_labels_mask = 
torch.tensor(span_labels_masks).to(model.device) - - _, span_logits = model.model(input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - span_labels=None, - span_labels_mask=span_labels_mask) - - span_logits = torch.nn.functional.sigmoid(span_logits) - span_logits = span_logits.cpu().detach().numpy() - - for i, item in enumerate(batch_data): - if item['task_type'] == '分类任务': - cls_idx = 0 - max_c = np.argmax(span_logits[i, :, cls_idx, cls_idx]) - batch_data[i]['choices'][max_c]['label'] = 1 - batch_data[i]['choices'][max_c]['score'] = span_logits[i, - max_c, cls_idx, cls_idx] - else: - if item['subtask_type'] == '抽取式阅读理解': - for c in range(len(item['choices'])): - texta = item['subtask_type'] + \ - '[SEP]' + choice['entity_type'] - textb = item['text'] - text_start_id, text_end_id, offset_mapping, input_ids = self.get_actual_id( - item['text'], texta+'[SEP]'+textb, tokenizer, args) - logits = span_logits[i, c, :, :] - max_index = np.unravel_index( - np.argmax(logits, axis=None), logits.shape) - entity_list = [] - if logits[max_index] > args.threshold: - - entity = self.extract_entity( - item['text'], (max_index[0], max_index[1]), text_start_id, offset_mapping) - entity = { - 'entity_name': entity, - 'score': logits[max_index] - } - if entity not in entity_list: - entity_list.append(entity) - batch_data[i]['choices'][c]['entity_list'] = entity_list - else: - for c in range(len(item['choices'])): - texta = item['task_type'] + '[SEP]' + item['subtask_type'] + \ - '[SEP]' + item['choices'][c]['entity_type'] - - textb = item['text'] - text_start_id, text_end_id, offset_mapping, input_ids = self.get_actual_id( - item['text'], texta+'[SEP]'+textb, tokenizer, args) - logits = span_logits[i, c, :, :] - sample_length = len(input_ids) - entity_idx_type_list = self.extract_index( - logits, sample_length, split_value=args.threshold) - entity_list = [] - - for entity_idx in entity_idx_type_list: - entity = self.extract_entity( - item['text'], (entity_idx[0], entity_idx[1]), text_start_id, offset_mapping) - entity = { - 'entity_name': entity, - 'score': entity_idx[2] - } - if entity not in entity_list: - entity_list.append(entity) - batch_data[i]['choices'][c]['entity_list'] = entity_list - return batch_data - - -class UbertPiplines: - @staticmethod - def piplines_args(parent_args): - total_parser = parent_args.add_argument_group("piplines args") - total_parser.add_argument( - '--pretrained_model_path', default='IDEA-CCNL/Erlangshen-Ubert-110M-Chinese', type=str) - total_parser.add_argument('--output_save_path', - default='./predict.json', type=str) - - total_parser.add_argument('--load_checkpoints_path', - default='', type=str) - - total_parser.add_argument('--max_extract_entity_number', - default=1, type=float) - - total_parser.add_argument('--train', action='store_true') - - total_parser.add_argument('--threshold', - default=0.5, type=float) - - total_parser = UbertDataModel.add_data_specific_args(total_parser) - total_parser = TaskModelCheckpoint.add_argparse_args(total_parser) - total_parser = UbertLitModel.add_model_specific_args(total_parser) - total_parser = pl.Trainer.add_argparse_args(parent_args) - - return parent_args - - def __init__(self, args): - - if args.load_checkpoints_path != '': - self.model = UbertLitModel.load_from_checkpoint( - args.load_checkpoints_path, args=args) - else: - self.model = UbertLitModel(args) - - self.args = args - self.checkpoint_callback = TaskModelCheckpoint(args).callbacks - self.logger = 
loggers.TensorBoardLogger(save_dir=args.default_root_dir) - self.trainer = pl.Trainer.from_argparse_args(args, - logger=self.logger, - callbacks=[self.checkpoint_callback]) - - self.tokenizer = BertTokenizer.from_pretrained(args.pretrained_model_path, - additional_special_tokens=['[unused'+str(i+1)+']' for i in range(99)]) - - self.em = extractModel() - - def fit(self, train_data, dev_data): - data_model = UbertDataModel( - train_data, dev_data, self.tokenizer, self.args) - self.model.num_data = len(train_data) - self.trainer.fit(self.model, data_model) - - def predict(self, test_data, cuda=True): - result = [] - start = 0 - if cuda: - self.model = self.model.cuda() - self.model.eval() - while start < len(test_data): - batch_data = test_data[start:start+self.args.batchsize] - start += self.args.batchsize - - batch_result = self.em.extract( - batch_data, self.model, self.tokenizer, self.args) - result.extend(batch_result) - return result diff --git a/spaces/Hamda/AraJARIR/README.md b/spaces/Hamda/AraJARIR/README.md deleted file mode 100644 index f2633be54a57f16e361e2198f1da781b2be6b21d..0000000000000000000000000000000000000000 --- a/spaces/Hamda/AraJARIR/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AraJARIR -emoji: 😻 -colorFrom: indigo -colorTo: red -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/criterions/text_guide_cross_entropy_acc.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/criterions/text_guide_cross_entropy_acc.py deleted file mode 100644 index 0d356e5a10241716b58a5bc04a9d204a72553ff8..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/criterions/text_guide_cross_entropy_acc.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
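# ---------------------------------------------------------------------------
# Illustrative sketch of the guided loss computed by the criterion below
# (a minimal, hypothetical example, not the original implementation): the
# speech branch's NLL loss is interpolated with a distillation term
# -(p_teacher * log p_student) weighted by guide_alpha. The vocabulary size,
# number of tokens and alpha are made-up values, and the padding masking done
# by the real criterion is omitted for brevity.
import torch
import torch.nn.functional as F

alpha = 0.5                                                    # guide_alpha
lprobs_student = F.log_softmax(torch.randn(2, 6), dim=-1)      # speech-input log-probs
lprobs_teacher = F.log_softmax(torch.randn(2, 6), dim=-1)      # text-input log-probs
target = torch.tensor([3, 1])

nll = F.nll_loss(lprobs_student, target, reduction='sum')
probs_teacher = lprobs_teacher.exp().detach()                  # no gradient through the teacher
guide = -(probs_teacher * lprobs_student).sum()
loss = alpha * guide + (1.0 - alpha) * nll
# ---------------------------------------------------------------------------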
-import math - -import torch -import torch.nn.functional as F -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.criterions.label_smoothed_cross_entropy import label_smoothed_nll_loss -from fairseq import metrics, utils - - -@register_criterion("guided_label_smoothed_cross_entropy_with_accuracy") -class GuidedCrossEntAccCriterion(FairseqCriterion): - def __init__( - self, - task, - sentence_avg, - guide_alpha, - text_input_cost_ratio, - label_smoothing, - disable_text_guide_update_num=0, - attentive_cost_regularization=0, - ): - """ - guide_alpha: alpha to inteplate nll and kd loss - text_input_cost_ratio: loss ratio for text only input data - label_smoothing: label smoothing ratio - disable_text_guide_update_num: only use nll loss for the first N updates - attentive_cost_regularization: ratio fo attentive cost - """ - super().__init__(task) - self.alpha = guide_alpha - self.attn_beta = attentive_cost_regularization - self.sentence_avg = sentence_avg - self.eps = label_smoothing - self.text_input_cost_ratio = text_input_cost_ratio - self.disable_update_num = disable_text_guide_update_num - assert self.alpha >= 0 and self.alpha <= 1.0 - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--label-smoothing', default=0., type=float, metavar='D', - help='epsilon for label smoothing, 0 means no label smoothing') - # fmt: off - parser.add_argument('--guide-alpha', default=0., type=float, metavar='D', - help='alpha to merge kd cost from text to speech input with ce loss') - # fmt: off - parser.add_argument('--disable-text-guide-update-num', default=0, type=int, metavar='D', - help='disable guided target from text for the first N updates.') - parser.add_argument("--attentive-cost-regularization", default=0.0, type=float, metavar='D', - help="use encoder attentive loss regularization with cost ratio D") - parser.add_argument("--attentive-cost-without-normalize", action='store_true', - help="Don't do normalization during attentive cost computation") - - def forward(self, model, sample, reduce=True): - reduction = 'sum' if reduce else 'none' - net_input = sample["net_input"] - net_output = model(**net_input) - attn_cost = None - lprobs = model.get_normalized_probs(net_output, log_probs=True) - is_dual_input = True if net_input['src_tokens'] is not None and net_input.get('src_txt_tokens') is not None else False - target = model.get_targets(sample, net_output) - src_token_num = 0 - if is_dual_input: - # lprobs_spch from speech encoder and lprobs_text from text encoder - lprobs_spch, lprobs_text = torch.chunk(lprobs, 2) - lprobs_spch.batch_first = lprobs.batch_first - lprobs_text.batch_first = lprobs.batch_first - - speech_loss, speech_nll_loss, speech_correct, speech_total = \ - self.guide_loss_and_acc(model, lprobs_spch, lprobs_text, target, reduce=(reduction == 'sum')) - text_loss, text_nll_loss, text_correct, text_total = self.compute_loss_and_acc(model, lprobs_text, target, reduction=reduction) - loss = (speech_loss + text_loss) - nll_loss = (speech_nll_loss + text_nll_loss) - correct = speech_correct + text_correct - total = speech_total + text_total - - attn_cost = net_output[1].get('attn_cost') - if attn_cost is not None: - # attn_cost is batch_first and padding tokens have been masked already - src_token_num = attn_cost.ne(0).sum() - attn_cost = attn_cost.sum() - loss = loss + attn_cost * self.attn_beta - else: - attn_cost = 0 - else: - loss, nll_loss, correct, total = 
self.compute_loss_and_acc(model, lprobs, target, reduction=reduction) - if sample["net_input"]['src_tokens'] is None: # text input only - loss = loss * self.text_input_cost_ratio - speech_loss = None - speech_nll_loss = None - - sample_size, logging_output = self.get_logging_output( - sample, loss, nll_loss, correct, total, src_token_num, speech_loss, speech_nll_loss, attn_cost, is_dual_input - ) - return loss, sample_size, logging_output - - def compute_loss_and_acc(self, model, lprobs, target, reduction='sum'): - if not lprobs.batch_first: - lprobs = lprobs.transpose(0, 1) - lprobs = lprobs.view(-1, lprobs.size(-1)) # -> (B x T) x C - target = target.view(-1) - loss, nll_loss = label_smoothed_nll_loss( - lprobs, target, self.eps, ignore_index=self.padding_idx, reduce=(reduction == 'sum'), - ) - - mask = target.ne(self.padding_idx) - correct = torch.sum(lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask))) - total = torch.sum(mask) - return loss, nll_loss, correct, total - - def guide_loss_and_acc(self, model, lprobs, lprobs_teacher, target, reduce=True): - """ lprobs_teacher is used as guide for lprobs """ - if self.alpha == 0.0 or model.num_updates < self.disable_update_num: - return self.compute_loss_and_acc(model, lprobs, target, reduction=('sum' if reduce else 'none')) - if not lprobs.batch_first: - lprobs = lprobs.transpose(0, 1) - lprobs_teacher = lprobs_teacher.transpose(0, 1) - - lprobs = lprobs.view(-1, lprobs.size(-1)).float() # -> (B x T) x C - lprobs_teacher = lprobs_teacher.view(-1, lprobs_teacher.size(-1)).float() # -> (B x T) x C - target = target.view(-1) - loss = F.nll_loss(lprobs, target, ignore_index=self.padding_idx, reduction='sum' if reduce else 'none') - nll_loss = loss - probs_teacher = lprobs_teacher.exp().masked_fill_(target.unsqueeze(-1).eq(self.padding_idx), 0) - probs_teacher = probs_teacher.detach() - guide_loss = -(probs_teacher*lprobs).sum() if reduce else -(probs_teacher*lprobs).sum(-1, keepdim=True) - loss = self.alpha*guide_loss + (1.0 - self.alpha)*loss - - mask = target.ne(self.padding_idx) - correct = torch.sum(lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask))) - total = torch.sum(mask) - return loss, nll_loss, correct, total - - def get_logging_output( - self, - sample, - loss, - nll_loss, - correct, - total, - src_token_num=0, - speech_loss=None, - speech_nll_loss=None, - attn_cost=None, - is_dual_input=False, - ): - - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - mul_size = 2 if is_dual_input else 1 - - logging_output = { - "loss": utils.item(loss.data), # * sample['ntokens'], - "nll_loss": utils.item(nll_loss.data), # * sample['ntokens'], - "ntokens": sample["ntokens"]*mul_size, - "nsentences": sample["target"].size(0)*mul_size, - "sample_size": sample_size*mul_size, - "correct": utils.item(correct.data), - "total": utils.item(total.data), - "src_token_num": utils.item(src_token_num.data) if src_token_num > 0 else 0, - "nframes": torch.sum(sample["net_input"]["src_lengths"]).item(), - } - - if speech_loss is not None: - logging_output["speech_loss"] = utils.item(speech_loss.data) - logging_output["speech_nll_loss"] = utils.item(speech_nll_loss.data) - logging_output["sample_size_speech_cost"] = sample_size - logging_output["speech_attn_loss"] = attn_cost - - return sample_size*mul_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - correct_sum = sum(log.get("correct", 
0) for log in logging_outputs) - total_sum = sum(log.get("total", 0) for log in logging_outputs) - src_token_sum = sum(log.get("src_token_num", 0) for log in logging_outputs) - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - nframes = sum(log.get("nframes", 0) for log in logging_outputs) - speech_loss_sum = sum(log.get("speech_loss", 0) for log in logging_outputs) - speech_nll_loss_sum = sum(log.get("speech_nll_loss", 0) for log in logging_outputs) - speech_attn_loss_sum = sum(log.get("speech_attn_loss", 0) for log in logging_outputs) - sample_size_speech = sum(log.get("sample_size_speech_cost", 0) for log in logging_outputs) - - agg_output = { - "loss": loss_sum / sample_size / math.log(2) if sample_size > 0 else 0.0, - "nll_loss": nll_loss_sum / sample_size / math.log(2) if sample_size > 0 else 0.0, - # if args.sentence_avg, then sample_size is nsentences, and loss - # is per-sentence loss; else sample_size is ntokens, and the loss - # becomes per-output token loss - "speech_loss": speech_loss_sum / sample_size_speech / math.log(2) if sample_size_speech > 0 else 0.0, - "speech_nll_loss": speech_nll_loss_sum / sample_size_speech / math.log(2) if sample_size_speech > 0 else 0.0, - "speech_attn_loss": speech_attn_loss_sum / src_token_sum / math.log(2) if src_token_sum > 0 else 0.0, - "ntokens": ntokens, - "nsentences": nsentences, - "nframes": nframes, - "sample_size": sample_size, - "acc": correct_sum * 100.0 / total_sum if total_sum > 0 else 0.0, - "correct": correct_sum, - "total": total_sum, - "src_token_num": src_token_sum, - # total is the number of validate tokens - } - return agg_output - - @classmethod - def reduce_metrics(cls, logging_outputs): - """Aggregate logging outputs from data parallel training.""" - agg_logging_outputs = cls.aggregate_logging_outputs(logging_outputs) - for k, v in agg_logging_outputs.items(): - if k in {'nsentences', 'ntokens', 'sample_size'}: - continue - metrics.log_scalar(k, v, round=3) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/adamax.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/adamax.py deleted file mode 100644 index 98ff8ad7ad6c12ab5efc53ca76db2f1663be7906..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/optim/adamax.py +++ /dev/null @@ -1,172 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.optim - -from . 
import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adamax") -class FairseqAdamax(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = Adamax(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--adamax-betas', default='(0.9, 0.999)', metavar='B', - help='betas for Adam optimizer') - parser.add_argument('--adamax-eps', type=float, default=1e-8, metavar='D', - help='epsilon for Adam optimizer') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--no-bias-correction', default=False, action='store_true', - help='disable bias correction') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "betas": eval(self.args.adamax_betas), - "eps": self.args.adamax_eps, - "weight_decay": self.args.weight_decay, - "bias_correction": not self.args.no_bias_correction, - } - - -class Adamax(torch.optim.Optimizer): - """Implements Adamax algorithm (a variant of Adam based on infinity norm). - - It has been proposed in `Adam: A Method for Stochastic Optimization`__. - - Compared to the version in PyTorch, this version implements a fix for weight decay. - - Args: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups - lr (float, optional): learning rate (default: 2e-3) - betas (Tuple[float, float], optional): coefficients used for computing - running averages of gradient and its square - eps (float, optional): term added to the denominator to improve - numerical stability (default: 1e-8) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - bias_correction (bool, optional): enable bias correction (default: True) - - __ https://arxiv.org/abs/1412.6980 - """ - - def __init__( - self, - params, - lr=2e-3, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - bias_correction=True, - ): - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) - if not 0.0 <= betas[1] < 1.0: - raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) - if not 0.0 <= weight_decay: - raise ValueError("Invalid weight_decay value: {}".format(weight_decay)) - - defaults = dict( - lr=lr, - betas=betas, - eps=eps, - weight_decay=weight_decay, - bias_correction=bias_correction, - ) - super(Adamax, self).__init__(params, defaults) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - def step(self, closure=None): - """Performs a single optimization step. - - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. 
- """ - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group["params"]: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError("Adamax does not support sparse gradients") - - p_data_fp32 = p.data - if p.data.dtype in {torch.float16, torch.bfloat16}: - p_data_fp32 = p_data_fp32.float() - - state = self.state[p] - - # State initialization - if len(state) == 0: - state["step"] = 0 - state["exp_avg"] = torch.zeros_like(p_data_fp32) - state["exp_inf"] = torch.zeros_like(p_data_fp32) - else: - state["exp_avg"] = state["exp_avg"].to(p_data_fp32) - state["exp_inf"] = state["exp_inf"].to(p_data_fp32) - - exp_avg, exp_inf = state["exp_avg"], state["exp_inf"] - beta1, beta2 = group["betas"] - eps = group["eps"] - - state["step"] += 1 - - # Update biased first moment estimate. - exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1) - - # Update the exponentially weighted infinity norm. - torch.max( - exp_inf.mul_(beta2), - grad.abs_(), - out=exp_inf, - ) - - step_size = group["lr"] - if group["bias_correction"]: - bias_correction = 1 - beta1 ** state["step"] - step_size /= bias_correction - - if group["weight_decay"] != 0: - p_data_fp32.add_( - p_data_fp32, alpha=-group["weight_decay"] * group["lr"] - ) - - p_data_fp32.addcdiv_(exp_avg, exp_inf.add(eps), value=-step_size) - - if p.data.dtype in {torch.float16, torch.bfloat16}: - p.data.copy_(p_data_fp32) - - return loss diff --git a/spaces/HarryLee/eCommerceImageCaptioning/utils/checkpoint_utils.py b/spaces/HarryLee/eCommerceImageCaptioning/utils/checkpoint_utils.py deleted file mode 100644 index 8fed4bc2a214833ab1153d5bc3ff6756db25048b..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/utils/checkpoint_utils.py +++ /dev/null @@ -1,875 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import ast -import collections -import contextlib -import logging -import numpy as np -import os -import re -import time -import traceback -import math -from collections import OrderedDict -from typing import Any, Dict, Optional, Union - -import torch -from fairseq.dataclass.configs import CheckpointConfig -from fairseq.dataclass.utils import ( - convert_namespace_to_omegaconf, - overwrite_args_by_name, -) -from fairseq.distributed.fully_sharded_data_parallel import FSDP, has_FSDP -from fairseq.file_io import PathManager -from fairseq.models import FairseqDecoder, FairseqEncoder -from omegaconf import DictConfig, open_dict, OmegaConf - -from data import data_utils - -logger = logging.getLogger(__name__) - - -def save_checkpoint(cfg: CheckpointConfig, trainer, epoch_itr, val_loss): - from fairseq import meters - - # only one worker should attempt to create the required dir - if trainer.data_parallel_rank == 0: - os.makedirs(cfg.save_dir, exist_ok=True) - - prev_best = getattr(save_checkpoint, "best", val_loss) - if val_loss is not None: - best_function = max if cfg.maximize_best_checkpoint_metric else min - save_checkpoint.best = best_function(val_loss, prev_best) - - if cfg.no_save: - return - - trainer.consolidate_optimizer() # TODO(SS): do we need this if no_save_optimizer_state - - if not trainer.should_save_checkpoint_on_current_rank: - if trainer.always_call_state_dict_during_save_checkpoint: - trainer.state_dict() - return - - write_timer = meters.StopwatchMeter() - write_timer.start() - - epoch = epoch_itr.epoch - end_of_epoch = epoch_itr.end_of_epoch() - updates = trainer.get_num_updates() - - logger.info(f"Preparing to save checkpoint for epoch {epoch} @ {updates} updates") - - def is_better(a, b): - return a >= b if cfg.maximize_best_checkpoint_metric else a <= b - - suffix = trainer.checkpoint_suffix - checkpoint_conds = collections.OrderedDict() - checkpoint_conds["checkpoint{}{}.pt".format(epoch, suffix)] = ( - end_of_epoch and not cfg.no_epoch_checkpoints and epoch % cfg.save_interval == 0 - ) - checkpoint_conds["checkpoint_{}_{}{}.pt".format(epoch, updates, suffix)] = ( - not end_of_epoch - and cfg.save_interval_updates > 0 - and updates % cfg.save_interval_updates == 0 - ) - checkpoint_conds["checkpoint_best{}.pt".format(suffix)] = val_loss is not None and ( - not hasattr(save_checkpoint, "best") - or is_better(val_loss, save_checkpoint.best) - ) - if val_loss is not None and cfg.keep_best_checkpoints > 0: - worst_best = getattr(save_checkpoint, "best", None) - chkpts = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format( - cfg.best_checkpoint_metric, suffix - ), - ) - if len(chkpts) > 0: - p = chkpts[-1] if cfg.maximize_best_checkpoint_metric else chkpts[0] - worst_best = float(p.rsplit("_")[-1].replace("{}.pt".format(suffix), "")) - # add random digits to resolve ties - with data_utils.numpy_seed(epoch, updates, val_loss): - rand_sfx = np.random.randint(0, cfg.keep_best_checkpoints) - - checkpoint_conds[ - "checkpoint.best_{}_{:.3f}{}{}.pt".format( - cfg.best_checkpoint_metric, - val_loss, - rand_sfx, - suffix - ) - ] = worst_best is None or is_better(val_loss, worst_best) - checkpoint_conds[ - "checkpoint_last{}.pt".format(suffix) - ] = not cfg.no_last_checkpoints - - extra_state = {"train_iterator": epoch_itr.state_dict(), "val_loss": val_loss} - if hasattr(save_checkpoint, "best"): - extra_state.update({"best": save_checkpoint.best}) - - checkpoints = [ - os.path.join(cfg.save_dir, fn) for fn, cond in checkpoint_conds.items() 
if cond - ] - if len(checkpoints) > 0: - trainer.save_checkpoint(checkpoints[0], extra_state) - for cp in checkpoints[1:]: - if cfg.write_checkpoints_asynchronously: - # TODO[ioPath]: Need to implement a delayed asynchronous - # file copying/moving feature. - logger.warning( - f"ioPath is not copying {checkpoints[0]} to {cp} " - "since async write mode is on." - ) - else: - assert PathManager.copy( - checkpoints[0], cp, overwrite=True - ), f"Failed to copy {checkpoints[0]} to {cp}" - - write_timer.stop() - logger.info( - "Saved checkpoint {} (epoch {} @ {} updates, score {}) (writing took {} seconds)".format( - checkpoints[0], epoch, updates, val_loss, write_timer.sum - ) - ) - - if not end_of_epoch and cfg.keep_interval_updates > 0: - # remove old checkpoints; checkpoints are sorted in descending order - if cfg.keep_interval_updates_pattern == -1: - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix) - ) - else: - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix), - keep_match=True, - ) - checkpoints = [ - x[0] - for x in checkpoints - if x[1] % cfg.keep_interval_updates_pattern != 0 - ] - - for old_chk in checkpoints[cfg.keep_interval_updates :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - if cfg.keep_last_epochs > 0: - # remove old epoch checkpoints; checkpoints are sorted in descending order - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint(\d+){}\.pt".format(suffix) - ) - for old_chk in checkpoints[cfg.keep_last_epochs :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - if cfg.keep_best_checkpoints > 0: - # only keep the best N checkpoints according to validation metric - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format( - cfg.best_checkpoint_metric, suffix - ), - ) - if not cfg.maximize_best_checkpoint_metric: - checkpoints = checkpoints[::-1] - for old_chk in checkpoints[cfg.keep_best_checkpoints :]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - -def load_checkpoint(cfg: CheckpointConfig, trainer, **passthrough_args): - """ - Load a checkpoint and restore the training iterator. - - *passthrough_args* will be passed through to - ``trainer.get_train_iterator``. - """ - - reset_optimizer = cfg.reset_optimizer - reset_lr_scheduler = cfg.reset_lr_scheduler - optimizer_overrides = ast.literal_eval(cfg.optimizer_overrides) - reset_meters = cfg.reset_meters - reset_dataloader = cfg.reset_dataloader - - if cfg.finetune_from_model is not None and ( - reset_optimizer or reset_lr_scheduler or reset_meters or reset_dataloader - ): - raise ValueError( - "--finetune-from-model can not be set together with either --reset-optimizer" - " or reset_lr_scheduler or reset_meters or reset_dataloader" - ) - - suffix = trainer.checkpoint_suffix - if ( - cfg.restore_file == "checkpoint_last.pt" - ): # default value of restore_file is 'checkpoint_last.pt' - checkpoint_path = os.path.join( - cfg.save_dir, "checkpoint_last{}.pt".format(suffix) - ) - first_launch = not PathManager.exists(checkpoint_path) - if cfg.finetune_from_model is not None and first_launch: - # if there is no last checkpoint to restore, start the finetune from pretrained model - # else just use usual logic to load checkpoint, e.g. 
restart from last checkpoint and etc. - if PathManager.exists(cfg.finetune_from_model): - checkpoint_path = cfg.finetune_from_model - reset_optimizer = True - reset_lr_scheduler = True - reset_meters = True - reset_dataloader = True - logger.info( - f"loading pretrained model from {checkpoint_path}: " - "optimizer, lr scheduler, meters, dataloader will be reset" - ) - else: - raise ValueError( - f"--funetune-from-model {cfg.finetune_from_model} does not exist" - ) - elif suffix is not None: - checkpoint_path = cfg.restore_file.replace(".pt", suffix + ".pt") - else: - checkpoint_path = cfg.restore_file - - if cfg.restore_file != "checkpoint_last.pt" and cfg.finetune_from_model: - raise ValueError( - "--finetune-from-model and --restore-file (non-default value) " - "can not be specified together: " + str(cfg) - ) - - extra_state = trainer.load_checkpoint( - checkpoint_path, - reset_optimizer, - reset_lr_scheduler, - optimizer_overrides, - reset_meters=reset_meters, - ) - - if ( - extra_state is not None - and "best" in extra_state - and not reset_optimizer - and not reset_meters - ): - save_checkpoint.best = extra_state["best"] - - if extra_state is not None and not reset_dataloader: - # restore iterator from checkpoint - itr_state = extra_state["train_iterator"] - epoch_itr = trainer.get_train_iterator( - epoch=itr_state["epoch"], load_dataset=True, **passthrough_args - ) - epoch_itr.load_state_dict(itr_state) - _n = itr_state['iterations_in_epoch'] - offset = sum(len(_) for _ in epoch_itr.batch_sampler[:_n]) - epoch_itr.dataset.dataset._seek(offset=offset) - true_num = int(math.ceil(len(epoch_itr.dataset) / 8)) * 8 - another_offset = ((epoch_itr.epoch - 1) * true_num + offset) // 8 - if hasattr(epoch_itr.dataset, 'pure_text_dataset'): - text_offset = (2 * another_offset) % len(epoch_itr.dataset.pure_text_dataset) - epoch_itr.dataset.pure_text_dataset._seek(offset=text_offset) - if hasattr(epoch_itr.dataset, 'pure_image_dataset'): - image_offset = another_offset % len(epoch_itr.dataset.pure_image_dataset) - epoch_itr.dataset.pure_image_dataset._seek(offset=image_offset) - if hasattr(epoch_itr.dataset, 'detection_dataset'): - detection_offset = another_offset % len(epoch_itr.dataset.detection_dataset) - epoch_itr.dataset.detection_dataset._seek(offset=detection_offset) - else: - epoch_itr = trainer.get_train_iterator( - epoch=1, load_dataset=True, **passthrough_args - ) - - trainer.lr_step(epoch_itr.epoch) - - return extra_state, epoch_itr - - -def load_checkpoint_to_cpu(path, arg_overrides=None, load_on_all_ranks=False): - """Loads a checkpoint to CPU (with upgrading for backward compatibility). - - If doing single-GPU training or if the checkpoint is only being loaded by at - most one process on each node (current default behavior is for only rank 0 - to read the checkpoint from disk), load_on_all_ranks should be False to - avoid errors from torch.distributed not having been initialized or - torch.distributed.barrier() hanging. - - If all processes on each node may be loading the checkpoint - simultaneously, load_on_all_ranks should be set to True to avoid I/O - conflicts. - - There's currently no support for > 1 but < all processes loading the - checkpoint on each node. - """ - local_path = PathManager.get_local_path(path) - # The locally cached file returned by get_local_path() may be stale for - # remote files that are periodically updated/overwritten (ex: - # checkpoint_last.pt) - so we remove the local copy, sync across processes - # (if needed), and then download a fresh copy. 
- if local_path != path and PathManager.path_requires_pathmanager(path): - try: - os.remove(local_path) - except FileNotFoundError: - # With potentially multiple processes removing the same file, the - # file being missing is benign (missing_ok isn't available until - # Python 3.8). - pass - if load_on_all_ranks: - torch.distributed.barrier() - local_path = PathManager.get_local_path(path) - - with open(local_path, "rb") as f: - state = torch.load(f, map_location=torch.device("cpu")) - - if "args" in state and state["args"] is not None and arg_overrides is not None: - args = state["args"] - for arg_name, arg_val in arg_overrides.items(): - setattr(args, arg_name, arg_val) - - if "cfg" in state and state["cfg"] is not None: - - # hack to be able to set Namespace in dict config. this should be removed when we update to newer - # omegaconf version that supports object flags, or when we migrate all existing models - from omegaconf import _utils - - old_primitive = _utils.is_primitive_type - _utils.is_primitive_type = lambda _: True - - state["cfg"] = OmegaConf.create(state["cfg"]) - - _utils.is_primitive_type = old_primitive - OmegaConf.set_struct(state["cfg"], True) - - if arg_overrides is not None: - overwrite_args_by_name(state["cfg"], arg_overrides) - - state = _upgrade_state_dict(state) - return state - - -def load_model_ensemble( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - """Loads an ensemble of models. - - Args: - filenames (List[str]): checkpoint files to load - arg_overrides (Dict[str,Any], optional): override model args that - were used during model training - task (fairseq.tasks.FairseqTask, optional): task to use for loading - """ - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble, args, _task = load_model_ensemble_and_task( - filenames, - arg_overrides, - task, - strict, - suffix, - num_shards, - state, - ) - return ensemble, args - - -def get_maybe_sharded_checkpoint_filename( - filename: str, suffix: str, shard_idx: int, num_shards: int -) -> str: - orig_filename = filename - filename = filename.replace(".pt", suffix + ".pt") - fsdp_filename = filename[:-3] + f"-shard{shard_idx}.pt" - model_parallel_filename = orig_filename[:-3] + f"_part{shard_idx}.pt" - if PathManager.exists(fsdp_filename): - return fsdp_filename - elif num_shards > 1: - return model_parallel_filename - else: - return filename - - -def load_model_ensemble_and_task( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - assert state is None or len(filenames) == 1 - - from fairseq import tasks - - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble = [] - cfg = None - for filename in filenames: - orig_filename = filename - model_shard_state = {"shard_weights": [], "shard_metadata": []} - assert num_shards > 0 - st = time.time() - for shard_idx in range(num_shards): - filename = get_maybe_sharded_checkpoint_filename( - orig_filename, suffix, shard_idx, num_shards - ) - - if not PathManager.exists(filename): - raise IOError("Model file not found: {}".format(filename)) - if state is None: - state = load_checkpoint_to_cpu(filename, arg_overrides) - if "args" in state and state["args"] is not None: - cfg = convert_namespace_to_omegaconf(state["args"]) - elif "cfg" in state and state["cfg"] is 
not None: - cfg = state["cfg"] - else: - raise RuntimeError( - f"Neither args nor cfg exist in state keys = {state.keys()}" - ) - - if task is None: - task = tasks.setup_task(cfg.task) - - if "task_state" in state: - task.load_state_dict(state["task_state"]) - - if "fsdp_metadata" in state and num_shards > 1: - model_shard_state["shard_weights"].append(state["model"]) - model_shard_state["shard_metadata"].append(state["fsdp_metadata"]) - # check FSDP import before the code goes too far - if not has_FSDP: - raise ImportError( - "Cannot find FullyShardedDataParallel. " - "Please install fairscale with: pip install fairscale" - ) - if shard_idx == num_shards - 1: - consolidated_model_state = FSDP.consolidate_shard_weights( - shard_weights=model_shard_state["shard_weights"], - shard_metadata=model_shard_state["shard_metadata"], - ) - model = task.build_model(cfg.model) - model.load_state_dict( - consolidated_model_state, strict=strict, model_cfg=cfg.model - ) - else: - # model parallel checkpoint or unsharded checkpoint - model = task.build_model(cfg.model) - model.load_state_dict( - state["model"], strict=strict, model_cfg=cfg.model - ) - - # reset state so it gets loaded for the next model in ensemble - state = None - if shard_idx % 10 == 0 and shard_idx > 0: - elapsed = time.time() - st - logger.info( - f"Loaded {shard_idx} shards in {elapsed:.2f}s, {elapsed / (shard_idx+1):.2f}s/shard" - ) - - # build model for ensemble - ensemble.append(model) - return ensemble, cfg, task - - -def checkpoint_paths(path, pattern=r"checkpoint(\d+)\.pt", keep_match=False): - """Retrieves all checkpoints found in `path` directory. - - Checkpoints are identified by matching filename to the specified pattern. If - the pattern contains groups, the result will be sorted by the first group in - descending order. 
- """ - pt_regexp = re.compile(pattern) - files = PathManager.ls(path) - - entries = [] - for i, f in enumerate(files): - m = pt_regexp.fullmatch(f) - if m is not None: - idx = float(m.group(1)) if len(m.groups()) > 0 else i - entries.append((idx, m.group(0))) - if keep_match: - return [(os.path.join(path, x[1]), x[0]) for x in sorted(entries, reverse=True)] - else: - return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)] - - -def torch_persistent_save(obj, filename, async_write: bool = False): - if async_write: - with PathManager.opena(filename, "wb") as f: - _torch_persistent_save(obj, f) - else: - with PathManager.open(filename, "wb") as f: - _torch_persistent_save(obj, f) - # if PathManager.supports_rename(filename): - # # do atomic save - # with PathManager.open(filename + ".tmp", "wb") as f: - # _torch_persistent_save(obj, f) - # PathManager.rename(filename + ".tmp", filename) - # else: - # # fallback to non-atomic save - # with PathManager.open(filename, "wb") as f: - # _torch_persistent_save(obj, f) - - -def _torch_persistent_save(obj, f): - if isinstance(f, str): - with PathManager.open(f, "wb") as h: - torch_persistent_save(obj, h) - return - for i in range(3): - try: - return torch.save(obj, f) - except Exception: - if i == 2: - logger.error(traceback.format_exc()) - raise - - -def _upgrade_state_dict(state): - """Helper for upgrading old model checkpoints.""" - - # add optimizer_history - if "optimizer_history" not in state: - state["optimizer_history"] = [ - {"criterion_name": "CrossEntropyCriterion", "best_loss": state["best_loss"]} - ] - state["last_optimizer_state"] = state["optimizer"] - del state["optimizer"] - del state["best_loss"] - # move extra_state into sub-dictionary - if "epoch" in state and "extra_state" not in state: - state["extra_state"] = { - "epoch": state["epoch"], - "batch_offset": state["batch_offset"], - "val_loss": state["val_loss"], - } - del state["epoch"] - del state["batch_offset"] - del state["val_loss"] - # reduce optimizer history's memory usage (only keep the last state) - if "optimizer" in state["optimizer_history"][-1]: - state["last_optimizer_state"] = state["optimizer_history"][-1]["optimizer"] - for optim_hist in state["optimizer_history"]: - del optim_hist["optimizer"] - # record the optimizer class name - if "optimizer_name" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["optimizer_name"] = "FairseqNAG" - # move best_loss into lr_scheduler_state - if "lr_scheduler_state" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["lr_scheduler_state"] = { - "best": state["optimizer_history"][-1]["best_loss"] - } - del state["optimizer_history"][-1]["best_loss"] - # keep track of number of updates - if "num_updates" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["num_updates"] = 0 - # old model checkpoints may not have separate source/target positions - if ( - "args" in state - and hasattr(state["args"], "max_positions") - and not hasattr(state["args"], "max_source_positions") - ): - state["args"].max_source_positions = state["args"].max_positions - state["args"].max_target_positions = state["args"].max_positions - # use stateful training data iterator - if "train_iterator" not in state["extra_state"]: - state["extra_state"]["train_iterator"] = { - "epoch": state["extra_state"]["epoch"], - "iterations_in_epoch": state["extra_state"].get("batch_offset", 0), - } - - # backward compatibility, cfg updates - if "args" in state and state["args"] is not None: - # 
default to translation task - if not hasattr(state["args"], "task"): - state["args"].task = "translation" - # --raw-text and --lazy-load are deprecated - if getattr(state["args"], "raw_text", False): - state["args"].dataset_impl = "raw" - elif getattr(state["args"], "lazy_load", False): - state["args"].dataset_impl = "lazy" - # epochs start at 1 - if state["extra_state"]["train_iterator"] is not None: - state["extra_state"]["train_iterator"]["epoch"] = max( - state["extra_state"]["train_iterator"].get("epoch", 1), 1 - ) - # --remove-bpe ==> --postprocess - if hasattr(state["args"], "remove_bpe"): - state["args"].post_process = state["args"].remove_bpe - # --min-lr ==> --stop-min-lr - if hasattr(state["args"], "min_lr"): - state["args"].stop_min_lr = state["args"].min_lr - del state["args"].min_lr - # binary_cross_entropy / kd_binary_cross_entropy => wav2vec criterion - if ( - hasattr(state["args"], "criterion") - and state["args"].criterion in [ - "binary_cross_entropy", - "kd_binary_cross_entropy", - ] - ): - state["args"].criterion = "wav2vec" - # remove log_keys if it's None (criteria will supply a default value of []) - if hasattr(state["args"], "log_keys") and state["args"].log_keys is None: - delattr(state["args"], "log_keys") - # speech_pretraining => audio pretraining - if ( - hasattr(state["args"], "task") - and state["args"].task == "speech_pretraining" - ): - state["args"].task = "audio_pretraining" - # audio_cpc => wav2vec - if hasattr(state["args"], "arch") and state["args"].arch == "audio_cpc": - state["args"].arch = "wav2vec" - # convert legacy float learning rate to List[float] - if hasattr(state["args"], "lr") and isinstance(state["args"].lr, float): - state["args"].lr = [state["args"].lr] - # convert task data arg to a string instead of List[string] - if ( - hasattr(state["args"], "data") - and isinstance(state["args"].data, list) - and len(state["args"].data) > 0 - ): - state["args"].data = state["args"].data[0] - # remove keys in state["args"] related to teacher-student learning - for key in [ - "static_teachers", - "static_teacher_weights", - "dynamic_teachers", - "dynamic_teacher_weights", - ]: - if key in state["args"]: - delattr(state["args"], key) - - state["cfg"] = convert_namespace_to_omegaconf(state["args"]) - - if "cfg" in state and state["cfg"] is not None: - cfg = state["cfg"] - with open_dict(cfg): - # any upgrades for Hydra-based configs - if ( - "task" in cfg - and "eval_wer_config" in cfg.task - and isinstance(cfg.task.eval_wer_config.print_alignment, bool) - ): - cfg.task.eval_wer_config.print_alignment = "hard" - if "generation" in cfg and isinstance(cfg.generation.print_alignment, bool): - cfg.generation.print_alignment = "hard" if cfg.generation.print_alignment else None - if ( - "model" in cfg - and "w2v_args" in cfg.model - and cfg.model.w2v_args is not None - and ( - hasattr(cfg.model.w2v_args, "task") or "task" in cfg.model.w2v_args - ) - and hasattr(cfg.model.w2v_args.task, "eval_wer_config") - and cfg.model.w2v_args.task.eval_wer_config is not None - and isinstance( - cfg.model.w2v_args.task.eval_wer_config.print_alignment, bool - ) - ): - cfg.model.w2v_args.task.eval_wer_config.print_alignment = "hard" - - return state - - -def prune_state_dict(state_dict, model_cfg: Optional[DictConfig]): - """Prune the given state_dict if desired for LayerDrop - (https://arxiv.org/abs/1909.11556). - - Training with LayerDrop allows models to be robust to pruning at inference - time. 
This function prunes state_dict to allow smaller models to be loaded - from a larger model and re-maps the existing state_dict for this to occur. - - It's called by functions that load models from checkpoints and does not - need to be called directly. - """ - arch = None - if model_cfg is not None: - arch = ( - model_cfg._name - if isinstance(model_cfg, DictConfig) - else getattr(model_cfg, "arch", None) - ) - - if not model_cfg or arch is None or arch == "ptt_transformer": - # args should not be none, but don't crash if it is. - return state_dict - - encoder_layers_to_keep = getattr(model_cfg, "encoder_layers_to_keep", None) - decoder_layers_to_keep = getattr(model_cfg, "decoder_layers_to_keep", None) - - if not encoder_layers_to_keep and not decoder_layers_to_keep: - return state_dict - - # apply pruning - logger.info( - "Pruning model to specified layer configuration - this works best if the model was trained with LayerDrop" - ) - - def create_pruning_pass(layers_to_keep, layer_name): - keep_layers = sorted( - int(layer_string) for layer_string in layers_to_keep.split(",") - ) - mapping_dict = {} - for i in range(len(keep_layers)): - mapping_dict[str(keep_layers[i])] = str(i) - - regex = re.compile(r"^{layer}.*\.layers\.(\d+)".format(layer=layer_name)) - return {"substitution_regex": regex, "mapping_dict": mapping_dict} - - pruning_passes = [] - if encoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(encoder_layers_to_keep, "encoder")) - if decoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(decoder_layers_to_keep, "decoder")) - - new_state_dict = {} - for layer_name in state_dict.keys(): - match = re.search(r"\.layers\.(\d+)\.", layer_name) - # if layer has no number in it, it is a supporting layer, such as an - # embedding - if not match: - new_state_dict[layer_name] = state_dict[layer_name] - continue - - # otherwise, layer should be pruned. - original_layer_number = match.group(1) - # figure out which mapping dict to replace from - for pruning_pass in pruning_passes: - if original_layer_number in pruning_pass["mapping_dict"] and pruning_pass[ - "substitution_regex" - ].search(layer_name): - new_layer_number = pruning_pass["mapping_dict"][original_layer_number] - substitution_match = pruning_pass["substitution_regex"].search( - layer_name - ) - new_state_key = ( - layer_name[: substitution_match.start(1)] - + new_layer_number - + layer_name[substitution_match.end(1) :] - ) - new_state_dict[new_state_key] = state_dict[layer_name] - - # Since layers are now pruned, *_layers_to_keep are no longer needed. - # This is more of "It would make it work fix" rather than a proper fix. - if isinstance(model_cfg, DictConfig): - context = open_dict(model_cfg) - else: - context = contextlib.ExitStack() - with context: - if hasattr(model_cfg, "encoder_layers_to_keep"): - model_cfg.encoder_layers_to_keep = None - if hasattr(model_cfg, "decoder_layers_to_keep"): - model_cfg.decoder_layers_to_keep = None - - return new_state_dict - - -def load_pretrained_component_from_model( - component: Union[FairseqEncoder, FairseqDecoder], checkpoint: str -): - """ - Load a pretrained FairseqEncoder or FairseqDecoder from checkpoint into the - provided `component` object. If state_dict fails to load, there may be a - mismatch in the architecture of the corresponding `component` found in the - `checkpoint` file. 
- """ - if not PathManager.exists(checkpoint): - raise IOError("Model file not found: {}".format(checkpoint)) - state = load_checkpoint_to_cpu(checkpoint) - if isinstance(component, FairseqEncoder): - component_type = "encoder" - elif isinstance(component, FairseqDecoder): - component_type = "decoder" - else: - raise ValueError( - "component to load must be either a FairseqEncoder or " - "FairseqDecoder. Loading other component types are not supported." - ) - component_state_dict = OrderedDict() - for key in state["model"].keys(): - if key.startswith(component_type): - # encoder.input_layers.0.0.weight --> input_layers.0.0.weight - component_subkey = key[len(component_type) + 1 :] - component_state_dict[component_subkey] = state["model"][key] - component.load_state_dict(component_state_dict, strict=True) - return component - - -def verify_checkpoint_directory(save_dir: str) -> None: - if not os.path.exists(save_dir): - os.makedirs(save_dir, exist_ok=True) - temp_file_path = os.path.join(save_dir, "dummy") - try: - with open(temp_file_path, "w"): - pass - except OSError as e: - logger.warning( - "Unable to access checkpoint save directory: {}".format(save_dir) - ) - raise e - else: - os.remove(temp_file_path) - - -def load_ema_from_checkpoint(fpath): - """Loads exponential moving averaged (EMA) checkpoint from input and - returns a model with ema weights. - - Args: - fpath: A string path of checkpoint to load from. - - Returns: - A dict of string keys mapping to various values. The 'model' key - from the returned dict should correspond to an OrderedDict mapping - string parameter names to torch Tensors. - """ - params_dict = collections.OrderedDict() - new_state = None - - with PathManager.open(fpath, 'rb') as f: - new_state = torch.load( - f, - map_location=( - lambda s, _: torch.serialization.default_restore_location(s, 'cpu') - ), - ) - - # EMA model is stored in a separate "extra state" - model_params = new_state['extra_state']['ema'] - - for key in list(model_params.keys()): - p = model_params[key] - if isinstance(p, torch.HalfTensor): - p = p.float() - if key not in params_dict: - params_dict[key] = p.clone() - # NOTE: clone() is needed in case of p is a shared parameter - else: - raise ValueError("Key {} is repeated in EMA model params.".format(key)) - - if len(params_dict) == 0: - raise ValueError( - f"Input checkpoint path '{fpath}' does not contain " - "ema model weights, is this model trained with EMA?" 
- ) - - new_state['model'] = params_dict - return new_state diff --git a/spaces/Harsh239/ChatBot/app.py b/spaces/Harsh239/ChatBot/app.py deleted file mode 100644 index a0b4aa38bc61c532d66d5664d202cef98aa501d3..0000000000000000000000000000000000000000 --- a/spaces/Harsh239/ChatBot/app.py +++ /dev/null @@ -1,16 +0,0 @@ -import openai -import gradio - -openai.api_key = "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" - -messages = [{"role": "system", "content": "You are a chatbot"}] - -def CustomChatGPT(user_input): - messages.append({"role": "user", "content": user_input}) - response = openai.ChatCompletion.create(model = "gpt-3.5-turbo", messages = messages) - ChatGPT_reply = response["choices"][0]["message"]["content"] - messages.append({"role": "assistant", "content": ChatGPT_reply}) - return ChatGPT_reply - -demo = gradio.Interface(fn=CustomChatGPT, inputs = "text", outputs = "text", title = "Harsh Gupta's ChatBot") -demo.launch() \ No newline at end of file diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/data/resample.sh b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/data/resample.sh deleted file mode 100644 index 8489b0a0056d46a93d24db8dba173ad7a4b8a44a..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/scripts/data/resample.sh +++ /dev/null @@ -1,14 +0,0 @@ -input_wav_path='/home/harveen/en/iitm_data/english/wav/' -output_wav_path='/home/harveen/en/iitm_data/english/wav_22k/' -output_sample_rate=22050 - -####################### - -dir=$PWD -parentdir="$(dirname "$dir")" -parentdir="$(dirname "$parentdir")" - -mkdir -p $output_wav_path -python $parentdir/utils/data/resample.py -i $input_wav_path -o $output_wav_path -s $output_sample_rate - -python $parentdir/utils/data/duration.py $output_wav_path diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.1d32cfe5.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.1d32cfe5.js deleted file mode 100644 index 0a49264d27b1b0f8bdad650ceace18c14c44b550..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/index.1d32cfe5.js +++ /dev/null @@ -1,2 +0,0 @@ -import{ah as s}from"./index.396f4a72.js";const o=["static"];export{s as Component,o as modes}; -//# sourceMappingURL=index.1d32cfe5.js.map diff --git a/spaces/HighCWu/starganv2vc-paddle/starganv2vc_paddle/Utils/ASR/__init__.py b/spaces/HighCWu/starganv2vc-paddle/starganv2vc_paddle/Utils/ASR/__init__.py deleted file mode 100644 index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/starganv2vc-paddle/starganv2vc_paddle/Utils/ASR/__init__.py +++ /dev/null @@ -1 +0,0 @@ - diff --git a/spaces/Hina4867/bingo/src/components/chat-scroll-anchor.tsx b/spaces/Hina4867/bingo/src/components/chat-scroll-anchor.tsx deleted file mode 100644 index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/components/chat-scroll-anchor.tsx +++ /dev/null @@ -1,29 +0,0 @@ -'use client' - -import * as React from 'react' -import { useInView } from 'react-intersection-observer' - -import { useAtBottom } from '@/lib/hooks/use-at-bottom' - -interface ChatScrollAnchorProps { - trackVisibility?: boolean -} - -export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) { - const isAtBottom = useAtBottom() - const { ref, 
entry, inView } = useInView({ - trackVisibility, - delay: 100, - rootMargin: '0px 0px -150px 0px' - }) - - React.useEffect(() => { - if (isAtBottom && trackVisibility && !inView) { - entry?.target.scrollIntoView({ - block: 'start' - }) - } - }, [inView, entry, isAtBottom, trackVisibility]) - - return <div ref={ref} className="h-px w-full" />
-} diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_deltas.sh b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_deltas.sh deleted file mode 100644 index af68715ab0d87ae40666596d9d877d593684f8e2..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_deltas.sh +++ /dev/null @@ -1,175 +0,0 @@ -#!/usr/bin/env bash - -# Copyright 2012 Johns Hopkins University (Author: Daniel Povey) -# Apache 2.0 - -# Begin configuration. -stage=-4 # This allows restarting after partway, when something when wrong. -config= -cmd=run.pl -scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1" -realign_iters="10 20 30"; -num_iters=35 # Number of iterations of training -max_iter_inc=25 # Last iter to increase #Gauss on. -beam=10 -careful=false -retry_beam=40 -boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment -power=0.25 # Exponent for number of gaussians according to occurrence counts -cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves -norm_vars=false # deprecated. Prefer --cmvn-opts "--norm-vars=true" - # use the option --cmvn-opts "--norm-means=false" -cmvn_opts= -delta_opts= -context_opts= # use"--context-width=5 --central-position=2" for quinphone -num_nonsil_states=3 -# End configuration. - -echo "$0 $@" # Print the command line for logging - -[ -f path.sh ] && . ./path.sh; -. parse_options.sh || exit 1; - -if [ $# != 6 ]; then - echo "Usage: steps/train_deltas.sh " - echo "e.g.: steps/train_deltas.sh 2000 10000 data/train_si84_half data/lang exp/mono_ali exp/tri1" - echo "main options (for others, see top of script file)" - echo " --cmd (utils/run.pl|utils/queue.pl ) # how to run jobs." - echo " --config # config containing options" - echo " --stage # stage to do partial re-run from." - exit 1; -fi - -numleaves=$1 -totgauss=$2 -data=$3 -lang=$4 -alidir=$5 -dir=$6 - -for f in $alidir/final.mdl $alidir/ali.1.gz $data/feats.scp $lang/phones.txt; do - [ ! -f $f ] && echo "train_deltas.sh: no such file $f" && exit 1; -done - -numgauss=$numleaves -incgauss=$[($totgauss-$numgauss)/$max_iter_inc] # per-iter increment for #Gauss -oov=`cat $lang/oov.int` || exit 1; -ciphonelist=`cat $lang/phones/context_indep.csl` || exit 1; -nj=`cat $alidir/num_jobs` || exit 1; -mkdir -p $dir/log -echo $nj > $dir/num_jobs - -utils/lang/check_phones_compatible.sh $lang/phones.txt $alidir/phones.txt || exit 1; -cp $lang/phones.txt $dir || exit 1; - -sdata=$data/split$nj; -split_data.sh $data $nj || exit 1; - - -[ $(cat $alidir/cmvn_opts 2>/dev/null | wc -c) -gt 1 ] && [ -z "$cmvn_opts" ] && \ - echo "$0: warning: ignoring CMVN options from source directory $alidir" -$norm_vars && cmvn_opts="--norm-vars=true $cmvn_opts" -echo $cmvn_opts > $dir/cmvn_opts # keep track of options to CMVN. -[ ! 
-z $delta_opts ] && echo $delta_opts > $dir/delta_opts - -feats="ark,s,cs:apply-cmvn $cmvn_opts --utt2spk=ark:$sdata/JOB/utt2spk scp:$sdata/JOB/cmvn.scp scp:$sdata/JOB/feats.scp ark:- | add-deltas $delta_opts ark:- ark:- |" - -rm $dir/.error 2>/dev/null - -if [ $stage -le -3 ]; then - echo "$0: accumulating tree stats" - $cmd JOB=1:$nj $dir/log/acc_tree.JOB.log \ - acc-tree-stats $context_opts \ - --ci-phones=$ciphonelist $alidir/final.mdl "$feats" \ - "ark:gunzip -c $alidir/ali.JOB.gz|" $dir/JOB.treeacc || exit 1; - sum-tree-stats $dir/treeacc $dir/*.treeacc 2>$dir/log/sum_tree_acc.log || exit 1; - rm $dir/*.treeacc -fi - -if [ $stage -le -2 ]; then - echo "$0: getting questions for tree-building, via clustering" - # preparing questions, roots file... - cluster-phones --pdf-class-list=$(($num_nonsil_states / 2)) $context_opts \ - $dir/treeacc $lang/phones/sets.int \ - $dir/questions.int 2> $dir/log/questions.log || exit 1; - cat $lang/phones/extra_questions.int >> $dir/questions.int - compile-questions $context_opts $lang/topo $dir/questions.int \ - $dir/questions.qst 2>$dir/log/compile_questions.log || exit 1; - - echo "$0: building the tree" - $cmd $dir/log/build_tree.log \ - build-tree $context_opts --verbose=1 --max-leaves=$numleaves \ - --cluster-thresh=$cluster_thresh $dir/treeacc $lang/phones/roots.int \ - $dir/questions.qst $lang/topo $dir/tree || exit 1; - - $cmd $dir/log/init_model.log \ - gmm-init-model --write-occs=$dir/1.occs \ - $dir/tree $dir/treeacc $lang/topo $dir/1.mdl || exit 1; - if grep 'no stats' $dir/log/init_model.log; then - echo "** The warnings above about 'no stats' generally mean you have phones **" - echo "** (or groups of phones) in your phone set that had no corresponding data. **" - echo "** You should probably figure out whether something went wrong, **" - echo "** or whether your data just doesn't happen to have examples of those **" - echo "** phones. **" - fi - - gmm-mixup --mix-up=$numgauss $dir/1.mdl $dir/1.occs $dir/1.mdl 2>$dir/log/mixup.log || exit 1; - rm $dir/treeacc -fi - -if [ $stage -le -1 ]; then - # Convert the alignments. 
- echo "$0: converting alignments from $alidir to use current tree" - $cmd JOB=1:$nj $dir/log/convert.JOB.log \ - convert-ali $alidir/final.mdl $dir/1.mdl $dir/tree \ - "ark:gunzip -c $alidir/ali.JOB.gz|" "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; -fi - -if [ $stage -le 0 ]; then - echo "$0: compiling graphs of transcripts" - $cmd JOB=1:$nj $dir/log/compile_graphs.JOB.log \ - compile-train-graphs --read-disambig-syms=$lang/phones/disambig.int $dir/tree $dir/1.mdl $lang/L.fst \ - "ark:utils/sym2int.pl --map-oov $oov -f 2- $lang/words.txt < $sdata/JOB/text |" \ - "ark:|gzip -c >$dir/fsts.JOB.gz" || exit 1; -fi - -x=1 -while [ $x -lt $num_iters ]; do - echo "$0: training pass $x" - if [ $stage -le $x ]; then - if echo $realign_iters | grep -w $x >/dev/null; then - echo "$0: aligning data" - mdl="gmm-boost-silence --boost=$boost_silence `cat $lang/phones/optional_silence.csl` $dir/$x.mdl - |" - $cmd JOB=1:$nj $dir/log/align.$x.JOB.log \ - gmm-align-compiled $scale_opts --beam=$beam --retry-beam=$retry_beam --careful=$careful "$mdl" \ - "ark:gunzip -c $dir/fsts.JOB.gz|" "$feats" \ - "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; - fi - $cmd JOB=1:$nj $dir/log/acc.$x.JOB.log \ - gmm-acc-stats-ali $dir/$x.mdl "$feats" \ - "ark,s,cs:gunzip -c $dir/ali.JOB.gz|" $dir/$x.JOB.acc || exit 1; - $cmd $dir/log/update.$x.log \ - gmm-est --mix-up=$numgauss --power=$power \ - --write-occs=$dir/$[$x+1].occs $dir/$x.mdl \ - "gmm-sum-accs - $dir/$x.*.acc |" $dir/$[$x+1].mdl || exit 1; - rm $dir/$x.mdl $dir/$x.*.acc - rm $dir/$x.occs - fi - [ $x -le $max_iter_inc ] && numgauss=$[$numgauss+$incgauss]; - x=$[$x+1]; -done - -rm $dir/final.mdl $dir/final.occs 2>/dev/null -ln -s $x.mdl $dir/final.mdl -ln -s $x.occs $dir/final.occs - -steps/diagnostic/analyze_alignments.sh --cmd "$cmd" $lang $dir - -# Summarize warning messages... -utils/summarize_warnings.pl $dir/log - -steps/info/gmm_dir_info.pl $dir - -echo "$0: Done training system with delta+delta-delta features in $dir" - -exit 0 diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/huggingface/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/huggingface/__init__.py deleted file mode 100644 index f7911c2c8edf516855023a285b18935e5389ec02..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/huggingface/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -# automatically import any Python files in the models/huggingface/ directory -models_dir = os.path.dirname(__file__) -for file in os.listdir(models_dir): - path = os.path.join(models_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - model_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module("fairseq.models.huggingface." 
+ model_name) diff --git a/spaces/Illumotion/Koboldcpp/examples/batched/batched.cpp b/spaces/Illumotion/Koboldcpp/examples/batched/batched.cpp deleted file mode 100644 index 688ef221335a98f63117855c5844d39a4f5591cf..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/batched/batched.cpp +++ /dev/null @@ -1,255 +0,0 @@ -#include "common.h" -#include "llama.h" - -#include -#include -#include -#include -#include - -int main(int argc, char ** argv) { - gpt_params params; - - if (argc == 1 || argv[1][0] == '-') { - printf("usage: %s MODEL_PATH [PROMPT] [PARALLEL]\n" , argv[0]); - return 1 ; - } - - int n_parallel = 1; - - if (argc >= 2) { - params.model = argv[1]; - } - - if (argc >= 3) { - params.prompt = argv[2]; - } - - if (argc >= 4) { - n_parallel = std::atoi(argv[3]); - } - - if (params.prompt.empty()) { - params.prompt = "Hello my name is"; - } - - // total length of the sequences including the prompt - const int n_len = 32; - - // init LLM - - llama_backend_init(params.numa); - - // initialize the model - - llama_model_params model_params = llama_model_default_params(); - - // model_params.n_gpu_layers = 99; // offload all layers to the GPU - - llama_model * model = llama_load_model_from_file(params.model.c_str(), model_params); - - if (model == NULL) { - fprintf(stderr , "%s: error: unable to load model\n" , __func__); - return 1; - } - - // tokenize the prompt - - std::vector tokens_list; - tokens_list = ::llama_tokenize(model, params.prompt, true); - const int n_kv_req = tokens_list.size() + (n_len - tokens_list.size())*n_parallel; - - // initialize the context - - llama_context_params ctx_params = llama_context_default_params(); - - ctx_params.seed = 1234; - ctx_params.n_ctx = n_kv_req; - ctx_params.n_batch = std::max(n_len, n_parallel); - ctx_params.n_threads = params.n_threads; - ctx_params.n_threads_batch = params.n_threads_batch == -1 ? 
params.n_threads : params.n_threads_batch; - - llama_context * ctx = llama_new_context_with_model(model, ctx_params); - - if (ctx == NULL) { - fprintf(stderr , "%s: error: failed to create the llama_context\n" , __func__); - return 1; - } - - const int n_ctx = llama_n_ctx(ctx); - - LOG_TEE("\n%s: n_len = %d, n_ctx = %d, n_batch = %d, n_parallel = %d, n_kv_req = %d\n", __func__, n_len, n_ctx, ctx_params.n_batch, n_parallel, n_kv_req); - - // make sure the KV cache is big enough to hold all the prompt and generated tokens - if (n_kv_req > n_ctx) { - LOG_TEE("%s: error: n_kv_req (%d) > n_ctx, the required KV cache size is not big enough\n", __func__, n_kv_req); - LOG_TEE("%s: either reduce n_parallel or increase n_ctx\n", __func__); - return 1; - } - - // print the prompt token-by-token - - fprintf(stderr, "\n"); - - for (auto id : tokens_list) { - fprintf(stderr, "%s", llama_token_to_piece(ctx, id).c_str()); - } - - fflush(stderr); - - // create a llama_batch with size 512 - // we use this object to submit token data for decoding - - llama_batch batch = llama_batch_init(std::max(tokens_list.size(), (size_t)n_parallel), 0); - - // evaluate the initial prompt - batch.n_tokens = tokens_list.size(); - - for (int32_t i = 0; i < batch.n_tokens; i++) { - batch.token[i] = tokens_list[i]; - batch.pos[i] = i; - batch.seq_id[i] = 0; - batch.logits[i] = false; - } - - // llama_decode will output logits only for the last token of the prompt - batch.logits[batch.n_tokens - 1] = true; - - if (llama_decode(ctx, batch) != 0) { - LOG_TEE("%s: llama_decode() failed\n", __func__); - return 1; - } - - // assign the system KV cache to all parallel sequences - // this way, the parallel sequences will "reuse" the prompt tokens without having to copy them - for (int32_t i = 1; i < n_parallel; ++i) { - llama_kv_cache_seq_cp(ctx, 0, i, 0, batch.n_tokens); - } - - if (n_parallel > 1) { - LOG_TEE("\n\n%s: generating %d sequences ...\n", __func__, n_parallel); - } - - // main loop - - // we will store the parallel decoded sequences in this vector - std::vector streams(n_parallel); - - // remember the batch index of the last token for each parallel sequence - // we need this to determine which logits to sample from - std::vector i_batch(n_parallel, batch.n_tokens - 1); - - int n_cur = batch.n_tokens; - int n_decode = 0; - - const auto t_main_start = ggml_time_us(); - - while (n_cur <= n_len) { - // prepare the next batch - batch.n_tokens = 0; - - // sample the next token for each parallel sequence / stream - for (int32_t i = 0; i < n_parallel; ++i) { - if (i_batch[i] < 0) { - // the stream has already finished - continue; - } - - auto n_vocab = llama_n_vocab(model); - auto * logits = llama_get_logits_ith(ctx, i_batch[i]); - - std::vector candidates; - candidates.reserve(n_vocab); - - for (llama_token token_id = 0; token_id < n_vocab; token_id++) { - candidates.emplace_back(llama_token_data{ token_id, logits[token_id], 0.0f }); - } - - llama_token_data_array candidates_p = { candidates.data(), candidates.size(), false }; - - const int top_k = 40; - const float top_p = 0.9f; - const float temp = 0.4f; - - llama_sample_top_k(ctx, &candidates_p, top_k, 1); - llama_sample_top_p(ctx, &candidates_p, top_p, 1); - llama_sample_temp (ctx, &candidates_p, temp); - - const llama_token new_token_id = llama_sample_token(ctx, &candidates_p); - - //const llama_token new_token_id = llama_sample_token_greedy(ctx, &candidates_p); - - // is it an end of stream? 
-> mark the stream as finished - if (new_token_id == llama_token_eos(ctx) || n_cur == n_len) { - i_batch[i] = -1; - LOG_TEE("\n"); - if (n_parallel > 1) { - LOG_TEE("%s: stream %d finished at n_cur = %d", __func__, i, n_cur); - } - - continue; - } - - // if there is only one stream, we print immediately to stdout - if (n_parallel == 1) { - LOG_TEE("%s", llama_token_to_piece(ctx, new_token_id).c_str()); - fflush(stdout); - } - - streams[i] += llama_token_to_piece(ctx, new_token_id); - - // push this new token for next evaluation - batch.token [batch.n_tokens] = new_token_id; - batch.pos [batch.n_tokens] = n_cur; - batch.seq_id[batch.n_tokens] = i; - batch.logits[batch.n_tokens] = true; - - i_batch[i] = batch.n_tokens; - - batch.n_tokens += 1; - - n_decode += 1; - } - - // all streams are finished - if (batch.n_tokens == 0) { - break; - } - - n_cur += 1; - - // evaluate the current batch with the transformer model - if (llama_decode(ctx, batch)) { - fprintf(stderr, "%s : failed to eval, return code %d\n", __func__, 1); - return 1; - } - } - - LOG_TEE("\n"); - - if (n_parallel > 1) { - LOG_TEE("\n"); - - for (int32_t i = 0; i < n_parallel; ++i) { - LOG_TEE("sequence %d:\n\n%s%s\n\n", i, params.prompt.c_str(), streams[i].c_str()); - } - } - - const auto t_main_end = ggml_time_us(); - - LOG_TEE("%s: decoded %d tokens in %.2f s, speed: %.2f t/s\n", - __func__, n_decode, (t_main_end - t_main_start) / 1000000.0f, n_decode / ((t_main_end - t_main_start) / 1000000.0f)); - - llama_print_timings(ctx); - - fprintf(stderr, "\n"); - - llama_batch_free(batch); - - llama_free(ctx); - llama_free_model(model); - - llama_backend_free(); - - return 0; -} diff --git a/spaces/ImagineAI-Real/ImagineAI-Image-Generator/app.py b/spaces/ImagineAI-Real/ImagineAI-Image-Generator/app.py deleted file mode 100644 index f149b1f38b20e90a49b412b53314a6924c880029..0000000000000000000000000000000000000000 --- a/spaces/ImagineAI-Real/ImagineAI-Image-Generator/app.py +++ /dev/null @@ -1,84 +0,0 @@ -import gradio as gr -import requests -from PIL import Image -from io import BytesIO -import base64 - -api_url = "https://5cb20b40-572c-426f-9466-995256f9b6eb.id.repl.co/generate_image" - -def generate_image(prompt, seed=0, negative_prompt="", model="Dreamlike Diffusion", sampler="k_dpmpp_2s_a", steps=50): - data = "?prompt="+ prompt + "&seed="+ str(seed) + "&negative_prompt=" + negative_prompt + "&model=" + model + "&sampler=" + sampler + "&steps=" + str(steps) - response = requests.post(api_url+data, timeout=400) - if response.status_code == 200: - img_base64 = response.json()["url"] - img_bytes = base64.b64decode(img_base64) - img = Image.open(BytesIO(img_bytes)) - return img - else: - return None - -inputs = [ - gr.inputs.Textbox(label="Prompt"), - gr.inputs.Number(label="Seed", default=0), - gr.inputs.Textbox(label="Negative Prompt", default=""), - gr.inputs.Dropdown(['3DKX', 'Abyss OrangeMix', 'AbyssOrangeMix-AfterDark', 'ACertainThing', - 'AIO Pixel Art', 'Analog Diffusion', 'Anime Pencil Diffusion', 'Anygen', - 'Anything Diffusion', 'Anything v3', 'anything_v4_inpainting', - 'App Icon Diffusion', 'Arcane Diffusion', 'Archer Diffusion', - 'Asim Simpsons', 'A to Zovya RPG', 'Balloon Art', 'Borderlands', 'BPModel', - 'BubblyDubbly', 'Char', 'CharHelper', 'Cheese Daddys Landscape Mix', - 'ChilloutMix', 'ChromaV5', 'Classic Animation Diffusion', 'Clazy', - 'Colorful', 'Coloring Book', 'Comic-Diffusion', 'Concept Sheet', - 'Counterfeit', 'Cyberpunk Anime Diffusion', 'CyriousMix', - 'Dan Mumford Style', 'Darkest Diffusion', 'Dark 
Victorian Diffusion', - 'Deliberate', 'DGSpitzer Art Diffusion', 'Disco Elysium', 'DnD Item', - 'Double Exposure Diffusion', 'Dreamlike Diffusion', - 'dreamlike_diffusion_inpainting', 'Dreamlike Photoreal', - 'DreamLikeSamKuvshinov', 'Dreamshaper', 'DucHaiten', - 'DucHaiten Classic Anime', 'Dungeons and Diffusion', 'Dungeons n Waifus', - 'Eimis Anime Diffusion', 'Elden Ring Diffusion', "Elldreth's Lucid Mix", - 'Elldreths Retro Mix', 'Epic Diffusion', 'Eternos', 'Experience', - 'ExpMix Line', 'FaeTastic', 'Fantasy Card Diffusion', 'FKing SciFi', - 'Funko Diffusion', 'Furry Epoch', 'Future Diffusion', 'Ghibli Diffusion', - 'GorynichMix', 'Grapefruit Hentai', 'Graphic-Art', - 'GTA5 Artwork Diffusion', 'GuoFeng', 'Guohua Diffusion', 'HASDX', - 'Hassanblend', "Healy's Anime Blend", 'Hentai Diffusion', 'HRL', 'iCoMix', - 'Illuminati Diffusion', 'Inkpunk Diffusion', 'Jim Eidomode', - 'JWST Deep Space Diffusion', 'Kenshi', 'Knollingcase', 'Korestyle', - 'kurzgesagt', 'Laolei New Berry Protogen Mix', "Lawlas's yiff mix", - 'Liberty', 'Marvel Diffusion', 'Mega Merge Diffusion', 'Microcasing', - 'Microchars', 'Microcritters', 'Microscopic', 'Microworlds', - 'Midjourney Diffusion', 'Midjourney PaintArt', 'Min Illust Background', - 'ModernArt Diffusion', 'mo-di-diffusion', 'Moedel', 'MoistMix', - 'Movie Diffusion', 'NeverEnding Dream', 'Nitro Diffusion', 'Openniji', - 'OrbAI', 'Papercutcraft', 'Papercut Diffusion', 'Pastel Mix', - 'Perfect World', 'PFG', 'PIXHELL', 'Poison', 'Pokemon3D', 'PortraitPlus', - 'PPP', 'Pretty 2.5D', 'PRMJ', 'Project Unreal Engine 5', 'ProtoGen', - 'Protogen Anime', 'Protogen Infinity', 'Pulp Vector Art', 'PVC', - 'Rachel Walker Watercolors', 'Rainbowpatch', 'Ranma Diffusion', - 'RCNZ Dumb Monkey', 'RCNZ Gorilla With A Brick', 'RealBiter', - 'Realism Engine', 'Realistic Vision', 'Redshift Diffusion', 'Rev Animated', - 'Robo-Diffusion', 'Rodent Diffusion', 'RPG', 'Samdoesarts Ultmerge', - 'Sci-Fi Diffusion', 'SD-Silicon', 'Seek.art MEGA', 'Smoke Diffusion', - 'Something', 'Sonic Diffusion', 'Spider-Verse Diffusion', - 'Squishmallow Diffusion', 'stable_diffusion', 'stable_diffusion_2.1', - 'stable_diffusion_2_inpainting', 'Supermarionation', 'Sygil-Dev Diffusion', - 'Synthwave', 'SynthwavePunk', 'TrexMix', 'trinart', 'Trinart Characters', - 'Tron Legacy Diffusion', 'T-Shirt Diffusion', 'T-Shirt Print Designs', - 'Uhmami', 'Ultraskin', 'UMI Olympus', 'Unstable Ink Dream', 'URPM', - 'Valorant Diffusion', 'Van Gogh Diffusion', 'Vector Art', 'vectorartz', - 'Vintedois Diffusion', 'VinteProtogenMix', 'Vivid Watercolors', - 'Voxel Art Diffusion', 'waifu_diffusion', 'Wavyfusion', 'Woop-Woop Photo', - 'Xynthii-Diffusion', 'Yiffy', 'Zack3D', 'Zeipher Female Model', - 'Zelda BOTW'], label="Model", default="Dreamlike Diffusion"), - gr.inputs.Dropdown(["k_lms", "k_heun", "k_euler", "k_euler_a", "k_dpm_2", "k_dpm_2_a", "DDIM", "k_dpm_fast", "k_dpm_adaptive", "k_dpmpp_2m", "k_dpmpp_2s_a", "k_dpmpp_sde"], label="Sampler", default="k_dpmpp_2s_a"), - gr.inputs.Number(label="Steps", default=50) -] - -outputs = gr.outputs.Image(label="Generated Image", type="pil") - -interface = gr.Interface(generate_image, inputs, outputs, title="ImagineAI-Real/ImagineAI Image Generator", - description="Enter a prompt, click Submit and wait a bit for your image.
If it's taking too long, duplicate the Space.
Or use one of the others:
  • ImagineAI-Image-Generator
  • ImagineAI-Image-Generator2
  • ImagineAI-Image-Generator3

  • Or download our app.
    If you got an error, just get the app, that works all the time. Sry", - examples=[]) - -interface.launch() diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/multiscale.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/multiscale.py deleted file mode 100644 index 65f0a54925593e9da8106bfc6d65a4098ce001d7..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/multiscale.py +++ /dev/null @@ -1,244 +0,0 @@ -from typing import List, Tuple, Union, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from saicinpainting.training.modules.base import get_conv_block_ctor, get_activation -from saicinpainting.training.modules.pix2pixhd import ResnetBlock - - -class ResNetHead(nn.Module): - def __init__(self, input_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', activation=nn.ReLU(True)): - assert (n_blocks >= 0) - super(ResNetHead, self).__init__() - - conv_layer = get_conv_block_ctor(conv_kind) - - model = [nn.ReflectionPad2d(3), - conv_layer(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), - activation] - - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - model += [conv_layer(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1), - norm_layer(ngf * mult * 2), - activation] - - mult = 2 ** n_downsampling - - ### resnet blocks - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=conv_kind)] - - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -class ResNetTail(nn.Module): - def __init__(self, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0, - add_in_proj=None): - assert (n_blocks >= 0) - super(ResNetTail, self).__init__() - - mult = 2 ** n_downsampling - - model = [] - - if add_in_proj is not None: - model.append(nn.Conv2d(add_in_proj, ngf * mult, kernel_size=1)) - - ### resnet blocks - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=conv_kind)] - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, - output_padding=1), - up_norm_layer(int(ngf * mult / 2)), - up_activation] - self.model = nn.Sequential(*model) - - out_layers = [] - for _ in range(out_extra_layers_n): - out_layers += [nn.Conv2d(ngf, ngf, kernel_size=1, padding=0), - up_norm_layer(ngf), - up_activation] - out_layers += [nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - - if add_out_act: - out_layers.append(get_activation('tanh' if add_out_act is True else add_out_act)) - - self.out_proj = nn.Sequential(*out_layers) - - def forward(self, input, return_last_act=False): - features = self.model(input) - out = self.out_proj(features) - if return_last_act: - return out, features - else: - return out - - -class MultiscaleResNet(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=2, n_blocks_head=2, n_blocks_tail=6, n_scales=3, - 
norm_layer=nn.BatchNorm2d, padding_type='reflect', conv_kind='default', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0, - out_cumulative=False, return_only_hr=False): - super().__init__() - - self.heads = nn.ModuleList([ResNetHead(input_nc, ngf=ngf, n_downsampling=n_downsampling, - n_blocks=n_blocks_head, norm_layer=norm_layer, padding_type=padding_type, - conv_kind=conv_kind, activation=activation) - for i in range(n_scales)]) - tail_in_feats = ngf * (2 ** n_downsampling) + ngf - self.tails = nn.ModuleList([ResNetTail(output_nc, - ngf=ngf, n_downsampling=n_downsampling, - n_blocks=n_blocks_tail, norm_layer=norm_layer, padding_type=padding_type, - conv_kind=conv_kind, activation=activation, up_norm_layer=up_norm_layer, - up_activation=up_activation, add_out_act=add_out_act, - out_extra_layers_n=out_extra_layers_n, - add_in_proj=None if (i == n_scales - 1) else tail_in_feats) - for i in range(n_scales)]) - - self.out_cumulative = out_cumulative - self.return_only_hr = return_only_hr - - @property - def num_scales(self): - return len(self.heads) - - def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \ - -> Union[torch.Tensor, List[torch.Tensor]]: - """ - :param ms_inputs: List of inputs of different resolutions from HR to LR - :param smallest_scales_num: int or None, number of smallest scales to take at input - :return: Depending on return_only_hr: - True: Only the most HR output - False: List of outputs of different resolutions from HR to LR - """ - if smallest_scales_num is None: - assert len(self.heads) == len(ms_inputs), (len(self.heads), len(ms_inputs), smallest_scales_num) - smallest_scales_num = len(self.heads) - else: - assert smallest_scales_num == len(ms_inputs) <= len(self.heads), (len(self.heads), len(ms_inputs), smallest_scales_num) - - cur_heads = self.heads[-smallest_scales_num:] - ms_features = [cur_head(cur_inp) for cur_head, cur_inp in zip(cur_heads, ms_inputs)] - - all_outputs = [] - prev_tail_features = None - for i in range(len(ms_features)): - scale_i = -i - 1 - - cur_tail_input = ms_features[-i - 1] - if prev_tail_features is not None: - if prev_tail_features.shape != cur_tail_input.shape: - prev_tail_features = F.interpolate(prev_tail_features, size=cur_tail_input.shape[2:], - mode='bilinear', align_corners=False) - cur_tail_input = torch.cat((cur_tail_input, prev_tail_features), dim=1) - - cur_out, cur_tail_feats = self.tails[scale_i](cur_tail_input, return_last_act=True) - - prev_tail_features = cur_tail_feats - all_outputs.append(cur_out) - - if self.out_cumulative: - all_outputs_cum = [all_outputs[0]] - for i in range(1, len(ms_features)): - cur_out = all_outputs[i] - cur_out_cum = cur_out + F.interpolate(all_outputs_cum[-1], size=cur_out.shape[2:], - mode='bilinear', align_corners=False) - all_outputs_cum.append(cur_out_cum) - all_outputs = all_outputs_cum - - if self.return_only_hr: - return all_outputs[-1] - else: - return all_outputs[::-1] - - -class MultiscaleDiscriminatorSimple(nn.Module): - def __init__(self, ms_impl): - super().__init__() - self.ms_impl = nn.ModuleList(ms_impl) - - @property - def num_scales(self): - return len(self.ms_impl) - - def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \ - -> List[Tuple[torch.Tensor, List[torch.Tensor]]]: - """ - :param ms_inputs: List of inputs of different resolutions from HR to LR - :param smallest_scales_num: int or None, number of smallest scales to 
take at input - :return: List of pairs (prediction, features) for different resolutions from HR to LR - """ - if smallest_scales_num is None: - assert len(self.ms_impl) == len(ms_inputs), (len(self.ms_impl), len(ms_inputs), smallest_scales_num) - smallest_scales_num = len(self.heads) - else: - assert smallest_scales_num == len(ms_inputs) <= len(self.ms_impl), \ - (len(self.ms_impl), len(ms_inputs), smallest_scales_num) - - return [cur_discr(cur_input) for cur_discr, cur_input in zip(self.ms_impl[-smallest_scales_num:], ms_inputs)] - - -class SingleToMultiScaleInputMixin: - def forward(self, x: torch.Tensor) -> List: - orig_height, orig_width = x.shape[2:] - factors = [2 ** i for i in range(self.num_scales)] - ms_inputs = [F.interpolate(x, size=(orig_height // f, orig_width // f), mode='bilinear', align_corners=False) - for f in factors] - return super().forward(ms_inputs) - - -class GeneratorMultiToSingleOutputMixin: - def forward(self, x): - return super().forward(x)[0] - - -class DiscriminatorMultiToSingleOutputMixin: - def forward(self, x): - out_feat_tuples = super().forward(x) - return out_feat_tuples[0][0], [f for _, flist in out_feat_tuples for f in flist] - - -class DiscriminatorMultiToSingleOutputStackedMixin: - def __init__(self, *args, return_feats_only_levels=None, **kwargs): - super().__init__(*args, **kwargs) - self.return_feats_only_levels = return_feats_only_levels - - def forward(self, x): - out_feat_tuples = super().forward(x) - outs = [out for out, _ in out_feat_tuples] - scaled_outs = [outs[0]] + [F.interpolate(cur_out, size=outs[0].shape[-2:], - mode='bilinear', align_corners=False) - for cur_out in outs[1:]] - out = torch.cat(scaled_outs, dim=1) - if self.return_feats_only_levels is not None: - feat_lists = [out_feat_tuples[i][1] for i in self.return_feats_only_levels] - else: - feat_lists = [flist for _, flist in out_feat_tuples] - feats = [f for flist in feat_lists for f in flist] - return out, feats - - -class MultiscaleDiscrSingleInput(SingleToMultiScaleInputMixin, DiscriminatorMultiToSingleOutputStackedMixin, MultiscaleDiscriminatorSimple): - pass - - -class MultiscaleResNetSingle(GeneratorMultiToSingleOutputMixin, SingleToMultiScaleInputMixin, MultiscaleResNet): - pass diff --git a/spaces/Jamel887/Rvc-tio887/lib/infer_pack/attentions.py b/spaces/Jamel887/Rvc-tio887/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/Jamel887/Rvc-tio887/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, 
- n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = 
nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Jonni/04-Gradio_SOTA/app.py b/spaces/Jonni/04-Gradio_SOTA/app.py deleted file mode 100644 index c1cd92499cf1c7d2a91b4dc226bf2d558ff67661..0000000000000000000000000000000000000000 --- a/spaces/Jonni/04-Gradio_SOTA/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -from qasrl_model_pipeline import QASRL_Pipeline - -models = ["kleinay/qanom-seq2seq-model-baseline", - "kleinay/qanom-seq2seq-model-joint"] -pipelines = {model: QASRL_Pipeline(model) for model in models} - - -description = f"""Using Seq2Seq T5 model which takes a sequence of items and outputs another sequence this model generates Questions and Answers (QA) with focus on Semantic Role Labeling (SRL)""" -title="Seq2Seq T5 Questions and Answers (QA) with Semantic Role Labeling (SRL)" -examples = [[models[0], "In March and April the patient
had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "fall"], - [models[1], "In March and April the patient had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "reactions"], - [models[0], "In March and April the patient had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", True, "relate"], - [models[1], "In March and April the patient had two falls. One was related to asthma, heart palpitations. The second was due to syncope and post covid vaccination dizziness during exercise. The patient is now getting an EKG. Former EKG had shown that there was a bundle branch block. Patient had some uncontrolled immune system reactions like anaphylaxis and shortness of breath.", False, "fall"]] - -input_sent_box_label = "Insert sentence here. Mark the predicate by adding the token '' before it." -verb_form_inp_placeholder = "e.g. 'decide' for the nominalization 'decision', 'teach' for 'teacher', etc." -links = """ -QASRL Website | Model Repo at Huggingface Hub - """ -def call(model_name, sentence, is_nominal, verb_form): - predicate_marker="" - if predicate_marker not in sentence: - raise ValueError("You must highlight one word of the sentence as a predicate using preceding '
    '.") - - if not verb_form: - if is_nominal: - raise ValueError("You should provide the verbal form of the nominalization") - - toks = sentence.split(" ") - pred_idx = toks.index(predicate_marker) - predicate = toks(pred_idx+1) - verb_form=predicate - pipeline = pipelines[model_name] - pipe_out = pipeline([sentence], - predicate_marker=predicate_marker, - predicate_type="nominal" if is_nominal else "verbal", - verb_form=verb_form)[0] - return pipe_out["QAs"], pipe_out["generated_text"] -iface = gr.Interface(fn=call, - inputs=[gr.inputs.Radio(choices=models, default=models[0], label="Model"), - gr.inputs.Textbox(placeholder=input_sent_box_label, label="Sentence", lines=4), - gr.inputs.Checkbox(default=True, label="Is Nominalization?"), - gr.inputs.Textbox(placeholder=verb_form_inp_placeholder, label="Verbal form (for nominalizations)", default='')], - outputs=[gr.outputs.JSON(label="Model Output - QASRL"), gr.outputs.Textbox(label="Raw output sequence")], - title=title, - description=description, - article=links, - examples=examples ) - -iface.launch() \ No newline at end of file diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/diffusionmodules/openaimodel.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/diffusionmodules/openaimodel.py deleted file mode 100644 index 9cc0535baeb13c8ef8d8b0d52469c68faeea6bcc..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/diffusionmodules/openaimodel.py +++ /dev/null @@ -1,1046 +0,0 @@ -# -------------------------------------------------------- -# Stable-Diffusion-Torch -# Based on Stable-Diffusion (https://github.com/CompVis/stable-diffusion) -# Removed Pytorch-lightning by Zigang Geng (zigang@mail.ustc.edu.cn) -# -------------------------------------------------------- - -from abc import abstractmethod -from functools import partial -import math -from typing import Iterable - -import numpy as np -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from ldm.modules.diffusionmodules.util import ( - checkpoint, - conv_nd, - linear, - avg_pool_nd, - zero_module, - normalization, - timestep_embedding, -) -from ldm.modules.attention import SpatialTransformer, CrossAttention, FeedForward - - -# dummy replace -def convert_module_to_f16(l): - """ - Convert primitive modules to float16. - """ - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - -def convert_module_to_f32(l): - """ - Convert primitive modules to float32, undoing convert_module_to_f16(). - """ - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)): - l.weight.data = l.weight.data.float() - if l.bias is not None: - l.bias.data = l.bias.data.float() - - -def convert_some_linear_to_f16(l): - """ - Convert linear modules to float16. - """ - if isinstance(l, nn.Linear): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - -def convert_some_linear_to_f32(l): - """ - Convert linear modules to float32. 
- """ - if isinstance(l, nn.Linear): - l.weight.data = l.weight.data.float() - if l.bias is not None: - l.bias.data = l.bias.data.float() - - -class PositionEmbedding(nn.Module): - def __init__(self, embed_dim, spacial_dim): - super().__init__() - self.positional_embedding = nn.Parameter(th.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5) - def forward(self): - return self.positional_embedding - - -## go -class AttentionPool2d(nn.Module): - """ - Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py - """ - - def __init__( - self, - spacial_dim: int, - embed_dim: int, - num_heads_channels: int, - output_dim: int = None, - ): - super().__init__() - self.positional_embedding = PositionEmbedding(embed_dim, spacial_dim) - self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1) - self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1) - self.num_heads = embed_dim // num_heads_channels - self.attention = QKVAttention(self.num_heads) - - def forward(self, x): - b, c, *_spatial = x.shape - x = x.reshape(b, c, -1) # NC(HW) - x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1) - x = x + self.positional_embedding()[None, :, :].to(x.dtype) # NC(HW+1) - x = self.qkv_proj(x) - x = self.attention(x) - x = self.c_proj(x) - return x[:, :, 0] - - -class TimestepBlock(nn.Module): - """ - Any module where forward() takes timestep embeddings as a second argument. - """ - - @abstractmethod - def forward(self, x, emb): - """ - Apply the module to `x` given `emb` timestep embeddings. - """ - - -class TimestepEmbedSequential(nn.Sequential, TimestepBlock): - """ - A sequential module that passes timestep embeddings to the children that - support it as an extra input. - """ - - def forward(self, x, emb, context=None): - for layer in self: - if isinstance(layer, TimestepBlock): - x = layer(x, emb) - elif isinstance(layer, SpatialTransformer): - x = layer(x, context) - else: - if isinstance(layer, Downsample) or isinstance(layer, Upsample): - x = layer(x) - else: - if hasattr(layer, 'weight'): - x = layer(x.type(layer.weight.dtype)) - else: - x = layer(x.type(emb.dtype)) - return x - - -class Upsample(nn.Module): - """ - An upsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - upsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - if use_conv: - self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=padding) - - def forward(self, x): - assert x.shape[1] == self.channels - if self.dims == 3: - x = F.interpolate( - x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest" - ) - else: - x = F.interpolate(x, scale_factor=2, mode="nearest") - if self.use_conv: - x = self.conv(x) - return x - -class TransposedUpsample(nn.Module): - 'Learned 2x upsampling without padding' - def __init__(self, channels, out_channels=None, ks=5): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - - self.up = nn.ConvTranspose2d(self.channels,self.out_channels,kernel_size=ks,stride=2) - - def forward(self,x): - return self.up(x) - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. 
- :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None,padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - stride = 2 if dims != 3 else (1, 2, 2) - if use_conv: - self.op = conv_nd( - dims, self.channels, self.out_channels, 3, stride=stride, padding=padding - ) - else: - assert self.channels == self.out_channels - self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResBlock(TimestepBlock): - """ - A residual block that can optionally change the number of channels. - :param channels: the number of input channels. - :param emb_channels: the number of timestep embedding channels. - :param dropout: the rate of dropout. - :param out_channels: if specified, the number of out channels. - :param use_conv: if True and out_channels is specified, use a spatial - convolution instead of a smaller 1x1 convolution to change the - channels in the skip connection. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param use_checkpoint: if True, use gradient checkpointing on this module. - :param up: if True, use this block for upsampling. - :param down: if True, use this block for downsampling. - """ - - def __init__( - self, - channels, - emb_channels, - dropout, - out_channels=None, - use_conv=False, - use_scale_shift_norm=False, - dims=2, - use_checkpoint=False, - up=False, - down=False, - ): - super().__init__() - self.channels = channels - self.emb_channels = emb_channels - self.dropout = dropout - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_checkpoint = use_checkpoint - self.use_scale_shift_norm = use_scale_shift_norm - - self.in_layers = nn.Sequential( - normalization(channels), - nn.SiLU(), - conv_nd(dims, channels, self.out_channels, 3, padding=1), - ) - - self.updown = up or down - - if up: - self.h_upd = Upsample(channels, False, dims) - self.x_upd = Upsample(channels, False, dims) - elif down: - self.h_upd = Downsample(channels, False, dims) - self.x_upd = Downsample(channels, False, dims) - else: - self.h_upd = self.x_upd = nn.Identity() - - self.emb_layers = nn.Sequential( - nn.SiLU(), - linear( - emb_channels, - 2 * self.out_channels if use_scale_shift_norm else self.out_channels, - ), - ) - self.out_layers = nn.Sequential( - normalization(self.out_channels), - nn.SiLU(), - nn.Dropout(p=dropout), - zero_module( - conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1) - ), - ) - - if self.out_channels == channels: - self.skip_connection = nn.Identity() - elif use_conv: - self.skip_connection = conv_nd( - dims, channels, self.out_channels, 3, padding=1 - ) - else: - self.skip_connection = conv_nd(dims, channels, self.out_channels, 1) - - def forward(self, x, emb): - """ - Apply the block to a Tensor, conditioned on a timestep embedding. - :param x: an [N x C x ...] Tensor of features. - :param emb: an [N x emb_channels] Tensor of timestep embeddings. - :return: an [N x C x ...] Tensor of outputs. 
- """ - # return checkpoint( - # self._forward, (x, emb), self.use_checkpoint - # ) - return checkpoint( - self._forward, (x, emb), self.parameters(), self.use_checkpoint - ) - - - def _forward(self, x, emb): - x = x.type(self.emb_layers[1].weight.dtype) - if self.updown: - in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1] - h = in_rest(x) - h = self.h_upd(h) - x = self.x_upd(x) - h = in_conv(h) - else: - h = self.in_layers(x) - emb_out = self.emb_layers(emb.type(self.emb_layers[1].weight.dtype)).type(h.dtype) - while len(emb_out.shape) < len(h.shape): - emb_out = emb_out[..., None] - if self.use_scale_shift_norm: - out_norm, out_rest = self.out_layers[0], self.out_layers[1:] - scale, shift = th.chunk(emb_out, 2, dim=1) - h = out_norm(h) * (1 + scale) + shift - h = out_rest(h) - else: - h = h + emb_out - h = self.out_layers(h) - return self.skip_connection(x) + h.type(self.emb_layers[1].weight.dtype) - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. - Originally ported from here, but adapted to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. - """ - - def __init__( - self, - channels, - num_heads=1, - num_head_channels=-1, - use_checkpoint=False, - use_new_attention_order=False, - ): - super().__init__() - self.channels = channels - if num_head_channels == -1: - self.num_heads = num_heads - else: - assert ( - channels % num_head_channels == 0 - ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}" - self.num_heads = channels // num_head_channels - self.use_checkpoint = use_checkpoint - self.norm = normalization(channels) - self.qkv = conv_nd(1, channels, channels * 3, 1) - if use_new_attention_order: - # split qkv before split heads - self.attention = QKVAttention(self.num_heads) - else: - # split heads before split qkv - self.attention = QKVAttentionLegacy(self.num_heads) - - self.proj_out = zero_module(conv_nd(1, channels, channels, 1)) - - def forward(self, x): - # return checkpoint(self._forward, (x,), self.use_checkpoint) - return checkpoint(self._forward, (x,), self.parameters(), self.use_checkpoint) # TODO: check checkpoint usage, is True # TODO: fix the .half call!!! - - def _forward(self, x): - b, c, *spatial = x.shape - x = x.reshape(b, c, -1) - qkv = self.qkv(self.norm(x)) - h = self.attention(qkv) - h = self.proj_out(h) - return (x + h).reshape(b, c, *spatial) - - -def count_flops_attn(model, _x, y): - """ - A counter for the `thop` package to count the operations in an - attention operation. - Meant to be used like: - macs, params = thop.profile( - model, - inputs=(inputs, timestamps), - custom_ops={QKVAttention: QKVAttention.count_flops}, - ) - """ - b, c, *spatial = y[0].shape - num_spatial = int(np.prod(spatial)) - # We perform two matmuls with the same number of ops. - # The first computes the weight matrix, the second computes - # the combination of the value vectors. - matmul_ops = 2 * b * (num_spatial ** 2) * c - model.total_ops += th.DoubleTensor([matmul_ops]) - - -class QKVAttentionLegacy(nn.Module): - """ - A module which performs QKV attention. Matches legacy QKVAttention + input/ouput heads shaping - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv): - """ - Apply QKV attention. - :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. 
- """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v) - return a.reshape(bs, -1, length) - - @staticmethod - def count_flops(model, _x, y): - return count_flops_attn(model, _x, y) - - -class QKVAttention(nn.Module): - """ - A module which performs QKV attention and splits in a different order. - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv): - """ - Apply QKV attention. - :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. - """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.chunk(3, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", - (q * scale).view(bs * self.n_heads, ch, length), - (k * scale).view(bs * self.n_heads, ch, length), - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v.reshape(bs * self.n_heads, ch, length)) - return a.reshape(bs, -1, length) - - @staticmethod - def count_flops(model, _x, y): - return count_flops_attn(model, _x, y) - - -class UNetModel(nn.Module): - """ - The full UNet model with attention and timestep embedding. - :param in_channels: channels in the input Tensor. - :param model_channels: base channel count for the model. - :param out_channels: channels in the output Tensor. - :param num_res_blocks: number of residual blocks per downsample. - :param attention_resolutions: a collection of downsample rates at which - attention will take place. May be a set, list, or tuple. - For example, if this contains 4, then at 4x downsampling, attention - will be used. - :param dropout: the dropout probability. - :param channel_mult: channel multiplier for each level of the UNet. - :param conv_resample: if True, use learned convolutions for upsampling and - downsampling. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param num_classes: if specified (as an int), then this model will be - class-conditional with `num_classes` classes. - :param use_checkpoint: use gradient checkpointing to reduce memory usage. - :param num_heads: the number of attention heads in each attention layer. - :param num_heads_channels: if specified, ignore num_heads and instead use - a fixed channel width per attention head. - :param num_heads_upsample: works with num_heads to set a different number - of heads for upsampling. Deprecated. - :param use_scale_shift_norm: use a FiLM-like conditioning mechanism. - :param resblock_updown: use residual blocks for up/downsampling. - :param use_new_attention_order: use a different attention pattern for potentially - increased efficiency. 
- """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - num_classes=None, - use_checkpoint=False, - use_fp16=False, - default_eps=False, - force_type_convert=False, - num_heads=-1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - use_spatial_transformer=False, # custom transformer support - transformer_depth=1, # custom transformer support - context_dim=None, # custom transformer support - n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model - legacy=True, - ): - super().__init__() - if use_spatial_transformer: - assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...' - - if context_dim is not None: - assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...' - from omegaconf.listconfig import ListConfig - if type(context_dim) == ListConfig: - context_dim = list(context_dim) - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - if num_heads == -1: - assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set' - - if num_head_channels == -1: - assert num_heads != -1, 'Either num_heads or num_head_channels has to be set' - - self.image_size = image_size - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - self.predict_codebook_ids = n_embed is not None - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - if self.num_classes is not None: - self.label_emb = nn.Embedding(num_classes, time_embed_dim) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, default_eps=default_eps, 
force_type_convert=force_type_convert, depth=transformer_depth, context_dim=context_dim - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, default_eps=default_eps, force_type_convert=force_type_convert, depth=transformer_depth, context_dim=context_dim - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - - self.output_blocks = nn.ModuleList([]) - for level, mult in list(enumerate(channel_mult))[::-1]: - for i in range(num_res_blocks + 1): - ich = input_block_chans.pop() - layers = [ - ResBlock( - ch + ich, - time_embed_dim, - dropout, - out_channels=model_channels * mult, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = model_channels * mult - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads_upsample, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, default_eps=default_eps, force_type_convert=force_type_convert, depth=transformer_depth, context_dim=context_dim - ) - ) - if level and i == num_res_blocks: - out_ch = ch - layers.append( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - up=True, - ) - if resblock_updown - else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ds //= 2 - self.output_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)), - ) - if self.predict_codebook_ids: - self.id_predictor = nn.Sequential( - normalization(ch), - conv_nd(dims, model_channels, n_embed, 1), - #nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits - ) - - # if 
use_fp16: - # self.convert_to_fp16() - # for name, m in self.named_modules(): - # if 'output_blocks.11.1' in name: - # if isinstance(m, (nn.Conv2d, nn.LayerNorm)) or 'proj_out' in name: - # m.weight.data = m.weight.data.float() - # m.bias.data = m.bias.data.float() - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - self.output_blocks.apply(convert_module_to_f16) - - for m in self.modules(): - if isinstance(m, (CrossAttention, FeedForward)): - m.apply(convert_some_linear_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - self.output_blocks.apply(convert_module_to_f32) - - for m in self.modules(): - if isinstance(m, (CrossAttention, FeedForward)): - m.apply(convert_some_linear_to_f32) - - def forward(self, x, timesteps=None, context=None, y=None,**kwargs): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :param context: conditioning plugged in via crossattn - :param y: an [N] Tensor of labels, if class-conditional. - :return: an [N x C x ...] Tensor of outputs. - """ - assert (y is not None) == ( - self.num_classes is not None - ), "must specify y if and only if the model is class-conditional" - hs = [] - t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False) - emb = self.time_embed(t_emb.type(self.time_embed[0].weight.dtype)) - - if self.num_classes is not None: - assert y.shape == (x.shape[0],) - emb = emb + self.label_emb(y) - - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb, context) - hs.append(h) - h = self.middle_block(h, emb, context) - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1) - h = module(h, emb, context) - h = h.type(x.dtype) - if self.predict_codebook_ids: - return self.id_predictor(h) - else: - return self.out(h) - - -class EncoderUNetModel(nn.Module): - """ - The half UNet model with attention and timestep embedding. - For usage, see UNet. 
- """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - use_checkpoint=False, - use_fp16=False, - num_heads=1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - pool="adaptive", - *args, - **kwargs - ): - super().__init__() - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - use_new_attention_order=use_new_attention_order, - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - use_new_attention_order=use_new_attention_order, - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - self.pool = pool - if pool == "adaptive": - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - nn.AdaptiveAvgPool2d((1, 1)), - zero_module(conv_nd(dims, ch, out_channels, 1)), - nn.Flatten(), - ) - elif pool == "attention": - assert num_head_channels != -1 - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - AttentionPool2d( - (image_size // ds), ch, num_head_channels, out_channels - ), - 
) - elif pool == "spatial": - self.out = nn.Sequential( - nn.Linear(self._feature_size, 2048), - nn.ReLU(), - nn.Linear(2048, self.out_channels), - ) - elif pool == "spatial_v2": - self.out = nn.Sequential( - nn.Linear(self._feature_size, 2048), - normalization(2048), - nn.SiLU(), - nn.Linear(2048, self.out_channels), - ) - else: - raise NotImplementedError(f"Unexpected {pool} pooling") - - # if use_fp16: - # self.convert_to_fp16() - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - - for m in self.modules(): - if isinstance(m, (CrossAttention, FeedForward)): - m.apply(convert_some_linear_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - - for m in self.modules(): - if isinstance(m, (CrossAttention, FeedForward)): - m.apply(convert_some_linear_to_f32) - - def forward(self, x, timesteps): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :return: an [N x K] Tensor of outputs. - """ - emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) - - results = [] - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb) - if self.pool.startswith("spatial"): - results.append(h.type(x.dtype).mean(dim=(2, 3))) - h = self.middle_block(h, emb) - if self.pool.startswith("spatial"): - results.append(h.type(x.dtype).mean(dim=(2, 3))) - h = th.cat(results, axis=-1) - return self.out(h) - else: - h = h.type(x.dtype) - return self.out(h) diff --git a/spaces/KyanChen/FunSR/examples/resize.py b/spaces/KyanChen/FunSR/examples/resize.py deleted file mode 100644 index fbe317ce1a144f9d5a0996a4b975e7b20dc3cafd..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/examples/resize.py +++ /dev/null @@ -1,20 +0,0 @@ -import glob - -from PIL import Image -from torchvision import transforms -import cv2 -from torchvision.transforms import InterpolationMode - -patch_size = 48 - -for file in glob.glob("*.jpg"): - img = transforms.ToTensor()(Image.open(file).convert('RGB')) * 255 - img_lr = transforms.Resize(patch_size, InterpolationMode.BICUBIC)( - transforms.CenterCrop(8 * patch_size)(img)) - - img_hr = transforms.CenterCrop(8 * patch_size)(img) - - cv2.imwrite(f'AID_{file.split(".")[0]}_LR.png', img_lr.permute((1, 2, 0)).numpy()) - print(f'AID_{file.split(".")[0]}_LR.png') - cv2.imwrite(f'AID_{file.split(".")[0]}_HR.png', img_hr.permute((1, 2, 0)).numpy()) - diff --git a/spaces/Lbx091/rev/Dockerfile b/spaces/Lbx091/rev/Dockerfile deleted file mode 100644 index e6158e4b2d67eeea6e30ad3c1bb6043ec09b7b9b..0000000000000000000000000000000000000000 --- a/spaces/Lbx091/rev/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ -apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/LeoLeoLeo1/ChuanhuChatGPT/presets.py b/spaces/LeoLeoLeo1/ChuanhuChatGPT/presets.py deleted file mode 100644 index a292ee5fdf4dd706870af0fea002c087e2132e97..0000000000000000000000000000000000000000 --- a/spaces/LeoLeoLeo1/ChuanhuChatGPT/presets.py +++ /dev/null @@ 
-1,87 +0,0 @@ -# -*- coding:utf-8 -*- - -# ChatGPT 设置 -initial_prompt = "You are a helpful assistant." -API_URL = "https://api.openai.com/v1/chat/completions" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -standard_error_msg = "☹️发生了错误:" # 错误信息的标准前缀 -error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # 获取对话时发生错误 -connection_timeout_prompt = "连接超时,无法获取对话。" # 连接超时 -read_timeout_prompt = "读取超时,无法获取对话。" # 读取超时 -proxy_error_prompt = "代理错误,无法获取对话。" # 代理错误 -ssl_error_prompt = "SSL错误,无法获取对话。" # SSL 错误 -no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key 长度不足 51 位 - -max_token_streaming = 3500 # 流式对话时的最大 token 数 -timeout_streaming = 30 # 流式对话时的超时时间 -max_token_all = 3500 # 非流式对话时的最大 token 数 -timeout_all = 200 # 非流式对话时的超时时间 -enable_streaming_option = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -title = """
Liyi's ChatGPT 🚀 """ -description = """\ - - -由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536) 和 [明昭MZhao](https://space.bilibili.com/24807452)开发 - -访问川虎ChatGPT的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本 - -此App使用 `gpt-3.5-turbo` 大语言模型 -
    -""" - -summarize_prompt = "你是谁?我们刚才聊了什么?" # 总结对话时的 prompt - -MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-0301", - "gpt-4", - "gpt-4-0314", - "gpt-4-32k", - "gpt-4-32k-0314", -] # 可选的模型 - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in 中文""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in 中文 -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Answer in the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch. -If the context isn't useful, return the original answer. -""" diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/sample_patches.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/sample_patches.py deleted file mode 100644 index f737172efda7220810db969c59d69d514b766a74..0000000000000000000000000000000000000000 --- a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/sample_patches.py +++ /dev/null @@ -1,65 +0,0 @@ -"""provides a faster sampling function""" - -import numpy as np -from csbdeep.utils import _raise, choice - - -def sample_patches(datas, patch_size, n_samples, valid_inds=None, verbose=False): - """optimized version of csbdeep.data.sample_patches_from_multiple_stacks - """ - - len(patch_size)==datas[0].ndim or _raise(ValueError()) - - if not all(( a.shape == datas[0].shape for a in datas )): - raise ValueError("all input shapes must be the same: %s" % (" / ".join(str(a.shape) for a in datas))) - - if not all(( 0 < s <= d for s,d in zip(patch_size,datas[0].shape) )): - raise ValueError("patch_size %s negative or larger than data shape %s along some dimensions" % (str(patch_size), str(datas[0].shape))) - - if valid_inds is None: - valid_inds = tuple(_s.ravel() for _s in np.meshgrid(*tuple(np.arange(p//2,s-p//2+1) for s,p in zip(datas[0].shape, patch_size)))) - - n_valid = len(valid_inds[0]) - - if n_valid == 0: - raise ValueError("no regions to sample from!") - - idx = choice(range(n_valid), n_samples, replace=(n_valid < n_samples)) - rand_inds = [v[idx] for v in valid_inds] - res = [np.stack([data[tuple(slice(_r-(_p//2),_r+_p-(_p//2)) for _r,_p in zip(r,patch_size))] for r in zip(*rand_inds)]) for data in datas] - - return res - - -def get_valid_inds(img, patch_size, patch_filter=None): - """ - Returns all indices of an image that - - can be used as center points for sampling patches of a given patch_size, and - - are part of the boolean mask 
given by the function patch_filter (if provided) - - img: np.ndarray - patch_size: tuple of ints - the width of patches per img dimension, - patch_filter: None or callable - a function with signature patch_filter(img, patch_size) returning a boolean mask - """ - - len(patch_size)==img.ndim or _raise(ValueError()) - - if not all(( 0 < s <= d for s,d in zip(patch_size,img.shape))): - raise ValueError("patch_size %s negative or larger than image shape %s along some dimensions" % (str(patch_size), str(img.shape))) - - if patch_filter is None: - # only cut border indices (which is faster) - patch_mask = np.ones(img.shape,dtype=bool) - valid_inds = tuple(np.arange(p // 2, s - p + p // 2 + 1).astype(np.uint32) for p, s in zip(patch_size, img.shape)) - valid_inds = tuple(s.ravel() for s in np.meshgrid(*valid_inds, indexing='ij')) - else: - patch_mask = patch_filter(img, patch_size) - - # get the valid indices - border_slices = tuple([slice(p // 2, s - p + p // 2 + 1) for p, s in zip(patch_size, img.shape)]) - valid_inds = np.where(patch_mask[border_slices]) - valid_inds = tuple((v + s.start).astype(np.uint32) for s, v in zip(border_slices, valid_inds)) - - return valid_inds diff --git a/spaces/Liu-LAB/GPT-academic/request_llm/bridge_spark.py b/spaces/Liu-LAB/GPT-academic/request_llm/bridge_spark.py deleted file mode 100644 index 0fe925f7a0354fe6361e9d11ae074dd287813e9f..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/request_llm/bridge_spark.py +++ /dev/null @@ -1,63 +0,0 @@ - -import time -import threading -import importlib -from toolbox import update_ui, get_conf, update_ui_lastest_msg -from multiprocessing import Process, Pipe - -model_name = '星火认知大模型' - -def validate_key(): - XFYUN_APPID, = get_conf('XFYUN_APPID', ) - if XFYUN_APPID == '00000000' or XFYUN_APPID == '': - return False - return True - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - ⭐多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - watch_dog_patience = 5 - response = "" - - if validate_key() is False: - raise RuntimeError('请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET') - - from .com_sparkapi import SparkRequestInstance - sri = SparkRequestInstance() - for response in sri.generate(inputs, llm_kwargs, history, sys_prompt): - if len(observe_window) >= 1: - observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。") - return response - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - ⭐单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "")) - yield from update_ui(chatbot=chatbot, history=history) - - if validate_key() is False: - yield from update_ui_lastest_msg(lastmsg="[Local Message]: 请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET", chatbot=chatbot, history=history, delay=0) - return - - if additional_fn is not None: - from core_functional import handle_core_functionality - inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot) - - # 开始接收回复 - from .com_sparkapi import SparkRequestInstance - sri = SparkRequestInstance() - for response in sri.generate(inputs, llm_kwargs, history, system_prompt): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == f"[Local Message]: 等待{model_name}响应中 ...": - response = f"[Local Message]: 
{model_name}响应异常 ..." - history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) \ No newline at end of file diff --git a/spaces/ML701G7/taim-gan/src/visualization/__init__.py b/spaces/ML701G7/taim-gan/src/visualization/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/MMMMQZ/MQZGPT/modules/models.py b/spaces/MMMMQZ/MQZGPT/modules/models.py deleted file mode 100644 index 25b18b1904910e183a997a763008403d960868d6..0000000000000000000000000000000000000000 --- a/spaces/MMMMQZ/MQZGPT/modules/models.py +++ /dev/null @@ -1,625 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import platform -import base64 -from io import BytesIO -from PIL import Image - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum -import uuid - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy -from modules import config -from .base_model import BaseLLMModel, ModelType - - -class OpenAIClient(BaseLLMModel): - def __init__( - self, - model_name, - api_key, - system_prompt=INITIAL_SYSTEM_PROMPT, - temperature=1.0, - top_p=1.0, - ) -> None: - super().__init__( - model_name=model_name, - temperature=temperature, - top_p=top_p, - system_prompt=system_prompt, - ) - self.api_key = api_key - self.need_api_key = True - self._refresh_header() - - def get_answer_stream_iter(self): - response = self._get_response(stream=True) - if response is not None: - iter = self._decode_chat_response(response) - partial_text = "" - for i in iter: - partial_text += i - yield partial_text - else: - yield STANDARD_ERROR_MSG + GENERAL_ERROR_MSG - - def get_answer_at_once(self): - response = self._get_response() - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - total_token_count = response["usage"]["total_tokens"] - return content, total_token_count - - def count_token(self, user_input): - input_token_count = count_token(construct_user(user_input)) - if self.system_prompt is not None and len(self.all_token_counts) == 0: - system_prompt_token_count = count_token( - construct_system(self.system_prompt) - ) - return input_token_count + system_prompt_token_count - return input_token_count - - def billing_info(self): - try: - curr_time = datetime.datetime.now() - last_day_of_month = get_last_day_of_month( - curr_time).strftime("%Y-%m-%d") - first_day_of_month = curr_time.replace(day=1).strftime("%Y-%m-%d") - usage_url = f"{shared.state.usage_api_url}?start_date={first_day_of_month}&end_date={last_day_of_month}" - try: - usage_data = self._get_billing_data(usage_url) - except Exception as e: - logging.error(f"获取API使用情况失败:" + str(e)) - return i18n("**获取API使用情况失败**") - rounded_usage = "{:.5f}".format(usage_data["total_usage"] / 100) - return i18n("**本月使用金额** ") + f"\u3000 ${rounded_usage}" - except requests.exceptions.ConnectTimeout: - status_text = ( - STANDARD_ERROR_MSG + CONNECTION_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - ) - return status_text - except requests.exceptions.ReadTimeout: - status_text = STANDARD_ERROR_MSG + READ_TIMEOUT_MSG + ERROR_RETRIEVE_MSG - return status_text - except Exception as e: - import traceback - traceback.print_exc() - 
logging.error(i18n("获取API使用情况失败:") + str(e)) - return STANDARD_ERROR_MSG + ERROR_RETRIEVE_MSG - - def set_token_upper_limit(self, new_upper_limit): - pass - - @shared.state.switching_api_key # 在不开启多账号模式的时候,这个装饰器不会起作用 - def _get_response(self, stream=False): - openai_api_key = self.api_key - system_prompt = self.system_prompt - history = self.history - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - if system_prompt is not None: - history = [construct_system(system_prompt), *history] - - payload = { - "model": self.model_name, - "messages": history, - "temperature": self.temperature, - "top_p": self.top_p, - "n": self.n_choices, - "stream": stream, - "presence_penalty": self.presence_penalty, - "frequency_penalty": self.frequency_penalty, - } - - if self.max_generation_token is not None: - payload["max_tokens"] = self.max_generation_token - if self.stop_sequence is not None: - payload["stop"] = self.stop_sequence - if self.logit_bias is not None: - payload["logit_bias"] = self.logit_bias - if self.user_identifier is not None: - payload["user"] = self.user_identifier - - if stream: - timeout = TIMEOUT_STREAMING - else: - timeout = TIMEOUT_ALL - - # 如果有自定义的api-host,使用自定义host发送请求,否则使用默认设置发送请求 - if shared.state.completion_url != COMPLETION_URL: - logging.info(f"使用自定义API URL: {shared.state.completion_url}") - - with retrieve_proxy(): - try: - response = requests.post( - shared.state.completion_url, - headers=headers, - json=payload, - stream=stream, - timeout=timeout, - ) - except: - return None - return response - - def _refresh_header(self): - self.headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {self.api_key}", - } - - def _get_billing_data(self, billing_url): - with retrieve_proxy(): - response = requests.get( - billing_url, - headers=self.headers, - timeout=TIMEOUT_ALL, - ) - - if response.status_code == 200: - data = response.json() - return data - else: - raise Exception( - f"API request failed with status code {response.status_code}: {response.text}" - ) - - def _decode_chat_response(self, response): - error_msg = "" - for chunk in response.iter_lines(): - if chunk: - chunk = chunk.decode() - chunk_length = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - print(i18n("JSON解析错误,收到的内容: ") + f"{chunk}") - error_msg += chunk - continue - if chunk_length > 6 and "delta" in chunk["choices"][0]: - if chunk["choices"][0]["finish_reason"] == "stop": - break - try: - yield chunk["choices"][0]["delta"]["content"] - except Exception as e: - # logging.error(f"Error: {e}") - continue - if error_msg: - raise Exception(error_msg) - - def set_key(self, new_access_key): - ret = super().set_key(new_access_key) - self._refresh_header() - return ret - - -class ChatGLM_Client(BaseLLMModel): - def __init__(self, model_name) -> None: - super().__init__(model_name=model_name) - from transformers import AutoTokenizer, AutoModel - import torch - global CHATGLM_TOKENIZER, CHATGLM_MODEL - if CHATGLM_TOKENIZER is None or CHATGLM_MODEL is None: - system_name = platform.system() - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"THUDM/{model_name}" - CHATGLM_TOKENIZER = AutoTokenizer.from_pretrained( - model_source, trust_remote_code=True - 
) - quantified = False - if "int4" in model_name: - quantified = True - model = AutoModel.from_pretrained( - model_source, trust_remote_code=True - ) - if torch.cuda.is_available(): - # run on CUDA - logging.info("CUDA is available, using CUDA") - model = model.half().cuda() - # mps加速还存在一些问题,暂时不使用 - elif system_name == "Darwin" and model_path is not None and not quantified: - logging.info("Running on macOS, using MPS") - # running on macOS and model already downloaded - model = model.half().to("mps") - else: - logging.info("GPU is not available, using CPU") - model = model.float() - model = model.eval() - CHATGLM_MODEL = model - - def _get_glm_style_input(self): - history = [x["content"] for x in self.history] - query = history.pop() - logging.debug(colorama.Fore.YELLOW + - f"{history}" + colorama.Fore.RESET) - assert ( - len(history) % 2 == 0 - ), f"History should be even length. current history is: {history}" - history = [[history[i], history[i + 1]] - for i in range(0, len(history), 2)] - return history, query - - def get_answer_at_once(self): - history, query = self._get_glm_style_input() - response, _ = CHATGLM_MODEL.chat( - CHATGLM_TOKENIZER, query, history=history) - return response, len(response) - - def get_answer_stream_iter(self): - history, query = self._get_glm_style_input() - for response, history in CHATGLM_MODEL.stream_chat( - CHATGLM_TOKENIZER, - query, - history, - max_length=self.token_upper_limit, - top_p=self.top_p, - temperature=self.temperature, - ): - yield response - - -class LLaMA_Client(BaseLLMModel): - def __init__( - self, - model_name, - lora_path=None, - ) -> None: - super().__init__(model_name=model_name) - from lmflow.datasets.dataset import Dataset - from lmflow.pipeline.auto_pipeline import AutoPipeline - from lmflow.models.auto_model import AutoModel - from lmflow.args import ModelArguments, DatasetArguments, InferencerArguments - - self.max_generation_token = 1000 - self.end_string = "\n\n" - # We don't need input data - data_args = DatasetArguments(dataset_path=None) - self.dataset = Dataset(data_args) - self.system_prompt = "" - - global LLAMA_MODEL, LLAMA_INFERENCER - if LLAMA_MODEL is None or LLAMA_INFERENCER is None: - model_path = None - if os.path.exists("models"): - model_dirs = os.listdir("models") - if model_name in model_dirs: - model_path = f"models/{model_name}" - if model_path is not None: - model_source = model_path - else: - model_source = f"decapoda-research/{model_name}" - # raise Exception(f"models目录下没有这个模型: {model_name}") - if lora_path is not None: - lora_path = f"lora/{lora_path}" - model_args = ModelArguments(model_name_or_path=model_source, lora_model_path=lora_path, model_type=None, config_overrides=None, config_name=None, tokenizer_name=None, cache_dir=None, - use_fast_tokenizer=True, model_revision='main', use_auth_token=False, torch_dtype=None, use_lora=False, lora_r=8, lora_alpha=32, lora_dropout=0.1, use_ram_optimized_load=True) - pipeline_args = InferencerArguments( - local_rank=0, random_seed=1, deepspeed='configs/ds_config_chatbot.json', mixed_precision='bf16') - - with open(pipeline_args.deepspeed, "r") as f: - ds_config = json.load(f) - LLAMA_MODEL = AutoModel.get_model( - model_args, - tune_strategy="none", - ds_config=ds_config, - ) - LLAMA_INFERENCER = AutoPipeline.get_pipeline( - pipeline_name="inferencer", - model_args=model_args, - data_args=data_args, - pipeline_args=pipeline_args, - ) - - def _get_llama_style_input(self): - history = [] - instruction = "" - if self.system_prompt: - instruction = (f"Instruction: 
{self.system_prompt}\n") - for x in self.history: - if x["role"] == "user": - history.append(f"{instruction}Input: {x['content']}") - else: - history.append(f"Output: {x['content']}") - context = "\n\n".join(history) - context += "\n\nOutput: " - return context - - def get_answer_at_once(self): - context = self._get_llama_style_input() - - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [{"text": context}]} - ) - - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=self.max_generation_token, - temperature=self.temperature, - ) - - response = output_dataset.to_dict()["instances"][0]["text"] - return response, len(response) - - def get_answer_stream_iter(self): - context = self._get_llama_style_input() - partial_text = "" - step = 1 - for _ in range(0, self.max_generation_token, step): - input_dataset = self.dataset.from_dict( - {"type": "text_only", "instances": [ - {"text": context + partial_text}]} - ) - output_dataset = LLAMA_INFERENCER.inference( - model=LLAMA_MODEL, - dataset=input_dataset, - max_new_tokens=step, - temperature=self.temperature, - ) - response = output_dataset.to_dict()["instances"][0]["text"] - if response == "" or response == self.end_string: - break - partial_text += response - yield partial_text - - -class XMChat(BaseLLMModel): - def __init__(self, api_key): - super().__init__(model_name="xmchat") - self.api_key = api_key - self.session_id = None - self.reset() - self.image_bytes = None - self.image_path = None - self.xm_history = [] - self.url = "https://xmbot.net/web" - self.last_conv_id = None - - def reset(self): - self.session_id = str(uuid.uuid4()) - self.last_conv_id = None - return [], "已重置" - - def image_to_base64(self, image_path): - # 打开并加载图片 - img = Image.open(image_path) - - # 获取图片的宽度和高度 - width, height = img.size - - # 计算压缩比例,以确保最长边小于4096像素 - max_dimension = 2048 - scale_ratio = min(max_dimension / width, max_dimension / height) - - if scale_ratio < 1: - # 按压缩比例调整图片大小 - new_width = int(width * scale_ratio) - new_height = int(height * scale_ratio) - img = img.resize((new_width, new_height), Image.ANTIALIAS) - - # 将图片转换为jpg格式的二进制数据 - buffer = BytesIO() - if img.mode == "RGBA": - img = img.convert("RGB") - img.save(buffer, format='JPEG') - binary_image = buffer.getvalue() - - # 对二进制数据进行Base64编码 - base64_image = base64.b64encode(binary_image).decode('utf-8') - - return base64_image - - def try_read_image(self, filepath): - def is_image_file(filepath): - # 判断文件是否为图片 - valid_image_extensions = [".jpg", ".jpeg", ".png", ".bmp", ".gif", ".tiff"] - file_extension = os.path.splitext(filepath)[1].lower() - return file_extension in valid_image_extensions - - if is_image_file(filepath): - logging.info(f"读取图片文件: {filepath}") - self.image_bytes = self.image_to_base64(filepath) - self.image_path = filepath - else: - self.image_bytes = None - self.image_path = None - - def like(self): - if self.last_conv_id is None: - return "点赞失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "good" - } - response = requests.post(self.url, json=data) - return "👍点赞成功,,感谢反馈~" - - def dislike(self): - if self.last_conv_id is None: - return "点踩失败,你还没发送过消息" - data = { - "uuid": self.last_conv_id, - "appraise": "bad" - } - response = requests.post(self.url, json=data) - return "👎点踩成功,感谢反馈~" - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = real_inputs - display_append = "" - limited_context = False - return limited_context, 
fake_inputs, display_append, real_inputs, chatbot - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - if files: - for file in files: - if file.name: - logging.info(f"尝试读取图像: {file.name}") - self.try_read_image(file.name) - if self.image_path is not None: - chatbot = chatbot + [((self.image_path,), None)] - if self.image_bytes is not None: - logging.info("使用图片作为输入") - # XMChat的一轮对话中实际上只能处理一张图片 - self.reset() - conv_id = str(uuid.uuid4()) - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "imgbase64", - "data": self.image_bytes - } - response = requests.post(self.url, json=data) - response = json.loads(response.text) - logging.info(f"图片回复: {response['data']}") - return None, chatbot, None - - def get_answer_at_once(self): - question = self.history[-1]["content"] - conv_id = str(uuid.uuid4()) - self.last_conv_id = conv_id - data = { - "user_id": self.api_key, - "session_id": self.session_id, - "uuid": conv_id, - "data_type": "text", - "data": question - } - response = requests.post(self.url, json=data) - try: - response = json.loads(response.text) - return response["data"], len(response["data"]) - except Exception as e: - return response.text, len(response.text) - - - - -def get_model( - model_name, - lora_model_path=None, - access_key=None, - temperature=None, - top_p=None, - system_prompt=None, -) -> BaseLLMModel: - msg = i18n("模型设置为了:") + f" {model_name}" - model_type = ModelType.get_type(model_name) - lora_selector_visibility = False - lora_choices = [] - dont_change_lora_selector = False - if model_type != ModelType.OpenAI: - config.local_embedding = True - # del current_model.model - model = None - try: - if model_type == ModelType.OpenAI: - logging.info(f"正在加载OpenAI模型: {model_name}") - model = OpenAIClient( - model_name=model_name, - api_key=access_key, - system_prompt=system_prompt, - temperature=temperature, - top_p=top_p, - ) - elif model_type == ModelType.ChatGLM: - logging.info(f"正在加载ChatGLM模型: {model_name}") - model = ChatGLM_Client(model_name) - elif model_type == ModelType.LLaMA and lora_model_path == "": - msg = f"现在请为 {model_name} 选择LoRA模型" - logging.info(msg) - lora_selector_visibility = True - if os.path.isdir("lora"): - lora_choices = get_file_names( - "lora", plain=True, filetypes=[""]) - lora_choices = ["No LoRA"] + lora_choices - elif model_type == ModelType.LLaMA and lora_model_path != "": - logging.info(f"正在加载LLaMA模型: {model_name} + {lora_model_path}") - dont_change_lora_selector = True - if lora_model_path == "No LoRA": - lora_model_path = None - msg += " + No LoRA" - else: - msg += f" + {lora_model_path}" - model = LLaMA_Client(model_name, lora_model_path) - elif model_type == ModelType.XMChat: - if os.environ.get("XMCHAT_API_KEY") != "": - access_key = os.environ.get("XMCHAT_API_KEY") - model = XMChat(api_key=access_key) - elif model_type == ModelType.Unknown: - raise ValueError(f"未知模型: {model_name}") - logging.info(msg) - except Exception as e: - logging.error(e) - msg = f"{STANDARD_ERROR_MSG}: {e}" - if dont_change_lora_selector: - return model, msg - else: - return model, msg, gr.Dropdown.update(choices=lora_choices, visible=lora_selector_visibility) - - -if __name__ == "__main__": - with open("config.json", "r") as f: - openai_api_key = cjson.load(f)["openai_api_key"] - # set logging level to debug - logging.basicConfig(level=logging.DEBUG) - # client = ModelManager(model_name="gpt-3.5-turbo", access_key=openai_api_key) - client = 
get_model(model_name="chatglm-6b-int4") - chatbot = [] - stream = False - # 测试账单功能 - logging.info(colorama.Back.GREEN + "测试账单功能" + colorama.Back.RESET) - logging.info(client.billing_info()) - # 测试问答 - logging.info(colorama.Back.GREEN + "测试问答" + colorama.Back.RESET) - question = "巴黎是中国的首都吗?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试问答后history : {client.history}") - # 测试记忆力 - logging.info(colorama.Back.GREEN + "测试记忆力" + colorama.Back.RESET) - question = "我刚刚问了你什么问题?" - for i in client.predict(inputs=question, chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"测试记忆力后history : {client.history}") - # 测试重试功能 - logging.info(colorama.Back.GREEN + "测试重试功能" + colorama.Back.RESET) - for i in client.retry(chatbot=chatbot, stream=stream): - logging.info(i) - logging.info(f"重试后history : {client.history}") - # # 测试总结功能 - # print(colorama.Back.GREEN + "测试总结功能" + colorama.Back.RESET) - # chatbot, msg = client.reduce_token_size(chatbot=chatbot) - # print(chatbot, msg) - # print(f"总结后history: {client.history}") diff --git a/spaces/Mahbodez/knee_report_checklist/logger.py b/spaces/Mahbodez/knee_report_checklist/logger.py deleted file mode 100644 index 16e9d415f345990847936b2fedb2d62600a3c2d0..0000000000000000000000000000000000000000 --- a/spaces/Mahbodez/knee_report_checklist/logger.py +++ /dev/null @@ -1,13 +0,0 @@ -class Logger: - def __init__(self, log_file): - self.log_file = log_file - - def log(self, name, message): - try: - with open(self.log_file, "a", encoding="utf-8") as f: - f.write(f"[{name}]: {message}\n") - except OSError as ex: - print(f"Error logging message: {ex}") - - def __call__(self, name, message): - self.log(name, message) diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/japanese.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) 
pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/Makiing/coolb-in-gtest/src/lib/isomorphic/node.ts b/spaces/Makiing/coolb-in-gtest/src/lib/isomorphic/node.ts deleted file mode 100644 index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/lib/isomorphic/node.ts +++ /dev/null @@ -1,26 +0,0 @@ -import Debug from 'debug' - -const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici') -const { HttpsProxyAgent } = require('https-proxy-agent') -const ws = require('ws') - -const debug = Debug('bingo') - -const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY; -let 
WebSocket = ws.WebSocket - -if (httpProxy) { - setGlobalDispatcher(new ProxyAgent(httpProxy)) - const agent = new HttpsProxyAgent(httpProxy) - // @ts-ignore - WebSocket = class extends ws.WebSocket { - constructor(address: string | URL, options: typeof ws.WebSocket) { - super(address, { - ...options, - agent, - }) - } - } -} - -export default { fetch, WebSocket, debug } diff --git a/spaces/MarcusSu1216/XingTong/README.md b/spaces/MarcusSu1216/XingTong/README.md deleted file mode 100644 index 88c34b5a7b89f86a54f7a42831f41c26dcf28d58..0000000000000000000000000000000000000000 --- a/spaces/MarcusSu1216/XingTong/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: XingTong -emoji: ✨ -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Marne/MockingBird/mockingbirdforuse/synthesizer/hparams.py b/spaces/Marne/MockingBird/mockingbirdforuse/synthesizer/hparams.py deleted file mode 100644 index d88109a3c0821663d405fc34be069cd2509f18fd..0000000000000000000000000000000000000000 --- a/spaces/Marne/MockingBird/mockingbirdforuse/synthesizer/hparams.py +++ /dev/null @@ -1,113 +0,0 @@ -from dataclasses import dataclass - - -@dataclass -class HParams: - ### Signal Processing (used in both synthesizer and vocoder) - sample_rate = 16000 - n_fft = 800 - num_mels = 80 - hop_size = 200 - """Tacotron uses 12.5 ms frame shift (set to sample_rate * 0.0125)""" - win_size = 800 - """Tacotron uses 50 ms frame length (set to sample_rate * 0.050)""" - fmin = 55 - min_level_db = -100 - ref_level_db = 20 - max_abs_value = 4.0 - """Gradient explodes if too big, premature convergence if too small.""" - preemphasis = 0.97 - """Filter coefficient to use if preemphasize is True""" - preemphasize = True - - ### Tacotron Text-to-Speech (TTS) - tts_embed_dims = 512 - """Embedding dimension for the graphemes/phoneme inputs""" - tts_encoder_dims = 256 - tts_decoder_dims = 128 - tts_postnet_dims = 512 - tts_encoder_K = 5 - tts_lstm_dims = 1024 - tts_postnet_K = 5 - tts_num_highways = 4 - tts_dropout = 0.5 - tts_cleaner_names = ["basic_cleaners"] - tts_stop_threshold = -3.4 - """ - Value below which audio generation ends. 
- For example, for a range of [-4, 4], this - will terminate the sequence at the first - frame that has all values < -3.4 - """ - - ### Tacotron Training - tts_schedule = [ - (2, 1e-3, 10_000, 12), - (2, 5e-4, 15_000, 12), - (2, 2e-4, 20_000, 12), - (2, 1e-4, 30_000, 12), - (2, 5e-5, 40_000, 12), - (2, 1e-5, 60_000, 12), - (2, 5e-6, 160_000, 12), - (2, 3e-6, 320_000, 12), - (2, 1e-6, 640_000, 12), - ] - """ - Progressive training schedule - (r, lr, step, batch_size) - r = reduction factor (# of mel frames synthesized for each decoder iteration) - lr = learning rate - """ - - tts_clip_grad_norm = 1.0 - """clips the gradient norm to prevent explosion - set to None if not needed""" - tts_eval_interval = 500 - """ - Number of steps between model evaluation (sample generation) - Set to -1 to generate after completing epoch, or 0 to disable - """ - tts_eval_num_samples = 1 - """Makes this number of samples""" - tts_finetune_layers = [] - """For finetune usage, if set, only selected layers will be trained, available: encoder,encoder_proj,gst,decoder,postnet,post_proj""" - - ### Data Preprocessing - max_mel_frames = 900 - rescale = True - rescaling_max = 0.9 - synthesis_batch_size = 16 - """For vocoder preprocessing and inference.""" - - ### Mel Visualization and Griffin-Lim - signal_normalization = True - power = 1.5 - griffin_lim_iters = 60 - - ### Audio processing options - fmax = 7600 - """Should not exceed (sample_rate // 2)""" - allow_clipping_in_normalization = True - """Used when signal_normalization = True""" - clip_mels_length = True - """If true, discards samples exceeding max_mel_frames""" - use_lws = False - """Fast spectrogram phase recovery using local weighted sums""" - symmetric_mels = True - """Sets mel range to [-max_abs_value, max_abs_value] if True, and [0, max_abs_value] if False""" - trim_silence = True - """Use with sample_rate of 16000 for best results""" - - ### SV2TTS - speaker_embedding_size = 256 - """Dimension for the speaker embedding""" - silence_min_duration_split = 0.4 - """Duration in seconds of a silence for an utterance to be split""" - utterance_min_duration = 1.6 - """Duration in seconds below which utterances are discarded""" - use_gst = True - """Whether to use global style token""" - use_ser_for_gst = True - """Whether to use speaker embedding referenced for global style token""" - - -hparams = HParams() diff --git a/spaces/MathysL/AutoGPT4/scripts/check_requirements.py b/spaces/MathysL/AutoGPT4/scripts/check_requirements.py deleted file mode 100644 index e4eab024a6280c0d54110c69b2e03de639325fa6..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/scripts/check_requirements.py +++ /dev/null @@ -1,32 +0,0 @@ -import sys - -import pkg_resources - - -def main(): - requirements_file = sys.argv[1] - with open(requirements_file, "r") as f: - required_packages = [ - line.strip().split("#")[0].strip() for line in f.readlines() - ] - - installed_packages = [package.key for package in pkg_resources.working_set] - - missing_packages = [] - for package in required_packages: - if not package: # Skip empty lines - continue - package_name = package.strip().split("==")[0] - if package_name.lower() not in installed_packages: - missing_packages.append(package_name) - - if missing_packages: - print("Missing packages:") - print(", ".join(missing_packages)) - sys.exit(1) - else: - print("All packages are installed.") - - -if __name__ == "__main__": - main() diff --git a/spaces/Matthijs/mms-tts-demo/README.md b/spaces/Matthijs/mms-tts-demo/README.md deleted 
file mode 100644 index adba00f6ecd10d442a80cb95a17e36f74ade74b5..0000000000000000000000000000000000000000 --- a/spaces/Matthijs/mms-tts-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MMS-TTS Demo -emoji: 🥳 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MestikonAgency/README/setup.py b/spaces/MestikonAgency/README/setup.py deleted file mode 100644 index 57f86dcbb4bded68a5f9ce318241dfb7ccb5b96d..0000000000000000000000000000000000000000 --- a/spaces/MestikonAgency/README/setup.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement. - -from setuptools import find_packages, setup - - -def get_requirements(path: str): - return [l.strip() for l in open(path)] - - -setup( - name="llama", - version="0.0.1", - packages=find_packages(), - install_requires=get_requirements("requirements.txt"), -) diff --git a/spaces/MetaDans/AIBOT/Dockerfile b/spaces/MetaDans/AIBOT/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/MetaDans/AIBOT/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Miuzarte/SUI-svc-3.0/utils.py b/spaces/Miuzarte/SUI-svc-3.0/utils.py deleted file mode 100644 index 3733a75111dc89cefa333b34933ae01623550ea7..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-3.0/utils.py +++ /dev/null @@ -1,338 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess - -import librosa -import numpy as np -import torchaudio -from scipy.io.wavfile import read -import torch -import torchvision -from torch.nn import functional as F -from commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(rank=None): - - hubert_soft = hubert_model.hubert_soft("hubert/hubert-soft-0d54a1f4.pt") - if rank is not None: - hubert_soft = hubert_soft.cuda(rank) - return hubert_soft - -def get_hubert_content(hmodel, y=None, path=None): - if path is not None: - source, sr = torchaudio.load(path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and 
source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - else: - source = y - source = source.unsqueeze(0) - with torch.inference_mode(): - units = hmodel.units(source) - return units.transpose(1,2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def transform(mel, height): # 68-92 - #r = np.random.random() - #rate = r * 0.3 + 0.85 # 0.85-1.15 - #height = int(mel.size(-2) * rate) - tgt = torchvision.transforms.functional.resize(mel, (height, mel.size(-1))) - if height >= mel.size(-2): - return tgt[:, :mel.size(-2), :] - else: - silence = tgt[:,-1:,:].repeat(1,mel.size(-2)-height,1) - silence += torch.randn_like(silence) / 10 - return torch.cat((tgt, silence), 1) - - -def stretch(mel, width): # 0.5-2 - return torchvision.transforms.functional.resize(mel, (mel.size(-2), width)) - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if iteration is None: - iteration = 1 - if learning_rate is None: - learning_rate = 0.0002 - if optimizer is not None and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - # ckptname = checkpoint_path.split(os.sep)[-1] - # newest_step = int(ckptname.split(".")[0].split("_")[1]) - # val_steps = 2000 - # last_ckptname = checkpoint_path.replace(str(newest_step), str(newest_step - val_steps*3)) - # if newest_step >= val_steps*3: - # os.system(f"rm {last_ckptname}") - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - 
matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/Miuzarte/SUI-svc-4.0/inference_main.py b/spaces/Miuzarte/SUI-svc-4.0/inference_main.py deleted file mode 100644 index a6e6b4bfac96db272fc9db2073c93b97a7101703..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-4.0/inference_main.py +++ /dev/null @@ -1,99 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import matplotlib.pyplot as plt -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - - - -def main(): - import argparse - - parser = argparse.ArgumentParser(description='sovits4 inference') - - # 一定要设置的部分 - parser.add_argument('-m', '--model_path', type=str, default="logs/44k/G_210000.pth", help='模型路径') - parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='配置文件路径') - parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=[""], help='wav文件路径') - parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='音高调整,支持正负(半音)') - parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['suijiSUI'], help='合成目标说话人名称') - - # 可选项部分 - parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False, help='语音转换自动预测音高,转换歌声时不要打开这个会严重跑调') - parser.add_argument('-cm', '--cluster_model_path', type=str, default="logs/44k/kmeans_10000.pt", help='聚类模型路径,如果没有训练聚类则随便填') - parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=0, help='聚类方案占比,范围0-1,若没有训练聚类模型则填0即可,如要使用建议设为0.5') - - # 不用动的部分 - parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50') - parser.add_argument('-d', '--device', type=str, default=None, help='推理设备,None则为自动选择cpu和gpu') - parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='噪音级别,会影响咬字和音质,较为玄学') - parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现') - parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='音频输出格式,默认flac') - - args = parser.parse_args() - - svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path) - infer_tool.mkdir(["raw", "results"]) - 
clean_names = args.clean_names - trans = args.trans - spk_list = args.spk_list - slice_db = args.slice_db - wav_format = args.wav_format - auto_predict_f0 = args.auto_predict_f0 - cluster_infer_ratio = args.cluster_infer_ratio - noice_scale = args.noice_scale - pad_seconds = args.pad_seconds - - infer_tool.fill_a_to_b(trans, clean_names) - for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"{clean_name}" - if "." not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])]) - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale - ) - _audio = out_audio.cpu().numpy() - - pad_len = int(svc_model.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - audio.extend(list(_audio)) - key = "auto" if auto_predict_f0 else f"{tran}key" - cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}" - res_path = f'{clean_name}_{key}_{spk}{cluster_name}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) - -if __name__ == '__main__': - main() diff --git a/spaces/Mosharof/FMS/app.py b/spaces/Mosharof/FMS/app.py deleted file mode 100644 index e2425307a84aa541f64a2a8640fc4d1f439d5e41..0000000000000000000000000000000000000000 --- a/spaces/Mosharof/FMS/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from fastai.vision.all import * -import gradio as gr -def is_cat(x): return x[0].isupper() - -learn = load_learner('model.pkl') - -categories = ('Dog','Cat') - -def classify_image(img): - pred, idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -image = gr.inputs.Image(shape=(192,192)) -lebel = gr.outputs.Label() -examples = ['dog.jpg','cat.jpg','dunno.jpg'] - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=lebel,examples=examples) -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/satrn/satrn_shallow_5e_st_mj.py b/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/satrn/satrn_shallow_5e_st_mj.py deleted file mode 100644 index 32bbf0b87e3e00a868c210946162f25f0d2aadb2..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textrecog/satrn/satrn_shallow_5e_st_mj.py +++ /dev/null @@ -1,54 +0,0 @@ -_base_ = [ - '../_base_/datasets/mjsynth.py', - '../_base_/datasets/synthtext.py', - '../_base_/datasets/cute80.py', - '../_base_/datasets/iiit5k.py', - '../_base_/datasets/svt.py', - '../_base_/datasets/svtp.py', - '../_base_/datasets/icdar2013.py', - '../_base_/datasets/icdar2015.py', - '../_base_/default_runtime.py', - '../_base_/schedules/schedule_adam_step_5e.py', - '_base_satrn_shallow.py', -] - -train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=20, val_interval=1) - -# 
dataset settings -train_list = [_base_.mjsynth_textrecog_train, _base_.synthtext_textrecog_train] -test_list = [ - _base_.cute80_textrecog_test, _base_.iiit5k_textrecog_test, - _base_.svt_textrecog_test, _base_.svtp_textrecog_test, - _base_.icdar2013_textrecog_test, _base_.icdar2015_textrecog_test -] - -train_dataset = dict( - type='ConcatDataset', datasets=train_list, pipeline=_base_.train_pipeline) -test_dataset = dict( - type='ConcatDataset', datasets=test_list, pipeline=_base_.test_pipeline) - -# optimizer -optim_wrapper = dict(type='OptimWrapper', optimizer=dict(type='Adam', lr=3e-4)) - -train_dataloader = dict( - batch_size=128, - num_workers=24, - persistent_workers=True, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=train_dataset) - -test_dataloader = dict( - batch_size=1, - num_workers=4, - persistent_workers=True, - drop_last=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=test_dataset) - -val_dataloader = test_dataloader - -val_evaluator = dict( - dataset_prefixes=['CUTE80', 'IIIT5K', 'SVT', 'SVTP', 'IC13', 'IC15']) -test_evaluator = val_evaluator - -auto_scale_lr = dict(base_batch_size=64 * 8) diff --git a/spaces/MrBodean/VoiceClone/encoder/data_objects/utterance.py b/spaces/MrBodean/VoiceClone/encoder/data_objects/utterance.py deleted file mode 100644 index 0768c3420f422a7464f305b4c1fb6752c57ceda7..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/encoder/data_objects/utterance.py +++ /dev/null @@ -1,26 +0,0 @@ -import numpy as np - - -class Utterance: - def __init__(self, frames_fpath, wave_fpath): - self.frames_fpath = frames_fpath - self.wave_fpath = wave_fpath - - def get_frames(self): - return np.load(self.frames_fpath) - - def random_partial(self, n_frames): - """ - Crops the frames into a partial utterance of n_frames - - :param n_frames: The number of frames of the partial utterance - :return: the partial utterance frames and a tuple indicating the start and end of the - partial utterance in the complete utterance. - """ - frames = self.get_frames() - if frames.shape[0] == n_frames: - start = 0 - else: - start = np.random.randint(0, frames.shape[0] - n_frames) - end = start + n_frames - return frames[start:end], (start, end) \ No newline at end of file diff --git a/spaces/MrKetchupp/nerijs-pixel-art-xl/README.md b/spaces/MrKetchupp/nerijs-pixel-art-xl/README.md deleted file mode 100644 index c929fcabb8949d20f39a70ed1a89d22632991fd4..0000000000000000000000000000000000000000 --- a/spaces/MrKetchupp/nerijs-pixel-art-xl/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Nerijs Pixel Art Xl -emoji: 🏃 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MrVicente/RA-BART/custom_bart/custom_constants.py b/spaces/MrVicente/RA-BART/custom_bart/custom_constants.py deleted file mode 100644 index c4c6f4e13be8f4a23be4694d9d86aa177817f99e..0000000000000000000000000000000000000000 --- a/spaces/MrVicente/RA-BART/custom_bart/custom_constants.py +++ /dev/null @@ -1,168 +0,0 @@ - -class BartConstants: - CHECKPOINT_FOR_DOC = "facebook/bart-base" - CONFIG_FOR_DOC = "BartConfig" - TOKENIZER_FOR_DOC = "BartTokenizer" - - # Base model docstring - EXPECTED_OUTPUT_SHAPE = [1, 8, 768] - - BART_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "facebook/bart-large", - ] - - BART_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. 
Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`BartConfig`]): - Model configuration class with all the parameters of the model. Initializing with a config file does not - load the weights associated with the model, only the configuration. Check out the - [`~PreTrainedModel.from_pretrained`] method to load the model weights. - """ - BART_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`BartTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): - Indices of decoder input sequence tokens in the vocabulary. - - Indices can be obtained using [`BartTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are decoder input IDs?](../glossary#decoder-input-ids) - - Bart uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values` - is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). - - For translation and summarization training, `decoder_input_ids` should be provided. If no - `decoder_input_ids` is provided, the model will create this tensor by shifting the `input_ids` to the right - for denoising pre-training following the paper. - decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*): - Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also - be used by default. - - If you want to change padding behavior, you should read [`modeling_bart._prepare_decoder_inputs`] and - modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more information - on the default strategy. - head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - decoder_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0, - 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*): - Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`) - `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of - hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape - `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape - `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. - - Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention - blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape - `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you - can choose to directly pass an embedded representation. This is useful if you want more control over how to - convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix. - decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded - representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be - input (see `past_key_values`). This is useful if you want more control over how to convert - `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix. - - If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value - of `inputs_embeds`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
-""" - BART_GENERATION_EXAMPLE = r""" - Summarization example: - - ```python - >>> from transformers import BartTokenizer, BartForConditionalGeneration - - >>> model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn") - >>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") - - >>> ARTICLE_TO_SUMMARIZE = ( - ... "PG&E stated it scheduled the blackouts in response to forecasts for high winds " - ... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were " - ... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." - ... ) - >>> inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt") - - >>> # Generate Summary - >>> summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, max_length=20) - >>> tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] - 'PG&E scheduled the blackouts in response to forecasts for high winds amid dry conditions' - ``` - - Mask filling example: - - ```python - >>> from transformers import BartTokenizer, BartForConditionalGeneration - - >>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-base") - >>> model = BartForConditionalGeneration.from_pretrained("facebook/bart-base") - - >>> TXT = "My friends are but they eat too many carbs." - >>> input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"] - >>> logits = model(input_ids).logits - - >>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() - >>> probs = logits[0, masked_index].softmax(dim=0) - >>> values, predictions = probs.topk(5) - - >>> tokenizer.decode(predictions).split() - ['not', 'good', 'healthy', 'great', 'very'] - ``` - """ - - - diff --git a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_transform_rotate.py b/spaces/MuGeminorum/insecta/khandy/boxes/boxes_transform_rotate.py deleted file mode 100644 index 7183259bf9e078b90f142e0c8f1c62cdc75422fc..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_transform_rotate.py +++ /dev/null @@ -1,140 +0,0 @@ -import numpy as np -from .boxes_utils import assert_and_normalize_shape - - -def rotate_boxes(boxes, angle, x_center=0, y_center=0, scale=1, - degrees=True, return_rotated_boxes=False): - """ - Args: - boxes: (N, 4+K) - angle: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - x_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - y_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - scale: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - scale factor in x and y dimension - degrees: bool - return_rotated_boxes: bool - """ - boxes = np.asarray(boxes, np.float32) - - angle = np.asarray(angle, np.float32) - x_center = np.asarray(x_center, np.float32) - y_center = np.asarray(y_center, np.float32) - scale = np.asarray(scale, np.float32) - - angle = assert_and_normalize_shape(angle, boxes.shape[0]) - x_center = assert_and_normalize_shape(x_center, boxes.shape[0]) - y_center = assert_and_normalize_shape(y_center, boxes.shape[0]) - scale = assert_and_normalize_shape(scale, boxes.shape[0]) - - if degrees: - angle = np.deg2rad(angle) - cos_val = scale * np.cos(angle) - sin_val = scale * np.sin(angle) - x_shift = x_center - x_center * cos_val + y_center * sin_val - y_shift = y_center - x_center * sin_val - y_center * cos_val - - x_mins, y_mins = boxes[:,0], boxes[:,1] - x_maxs, y_maxs = boxes[:,2], boxes[:,3] - x00 = 
x_mins * cos_val - y_mins * sin_val + x_shift - x10 = x_maxs * cos_val - y_mins * sin_val + x_shift - x11 = x_maxs * cos_val - y_maxs * sin_val + x_shift - x01 = x_mins * cos_val - y_maxs * sin_val + x_shift - - y00 = x_mins * sin_val + y_mins * cos_val + y_shift - y10 = x_maxs * sin_val + y_mins * cos_val + y_shift - y11 = x_maxs * sin_val + y_maxs * cos_val + y_shift - y01 = x_mins * sin_val + y_maxs * cos_val + y_shift - - rotated_boxes = np.stack([x00, y00, x10, y10, x11, y11, x01, y01], axis=-1) - ret_x_mins = np.min(rotated_boxes[:,0::2], axis=1) - ret_y_mins = np.min(rotated_boxes[:,1::2], axis=1) - ret_x_maxs = np.max(rotated_boxes[:,0::2], axis=1) - ret_y_maxs = np.max(rotated_boxes[:,1::2], axis=1) - - if boxes.ndim == 4: - ret_boxes = np.stack([ret_x_mins, ret_y_mins, ret_x_maxs, ret_y_maxs], axis=-1) - else: - ret_boxes = boxes.copy() - ret_boxes[:, :4] = np.stack([ret_x_mins, ret_y_mins, ret_x_maxs, ret_y_maxs], axis=-1) - - if not return_rotated_boxes: - return ret_boxes - else: - return ret_boxes, rotated_boxes - - -def rotate_boxes_wrt_centers(boxes, angle, scale=1, degrees=True, - return_rotated_boxes=False): - """ - Args: - boxes: (N, 4+K) - angle: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - scale: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1) - scale factor in x and y dimension - degrees: bool - return_rotated_boxes: bool - """ - boxes = np.asarray(boxes, np.float32) - - angle = np.asarray(angle, np.float32) - scale = np.asarray(scale, np.float32) - angle = assert_and_normalize_shape(angle, boxes.shape[0]) - scale = assert_and_normalize_shape(scale, boxes.shape[0]) - - if degrees: - angle = np.deg2rad(angle) - cos_val = scale * np.cos(angle) - sin_val = scale * np.sin(angle) - - x_centers = boxes[:, 2] + boxes[:, 0] - y_centers = boxes[:, 3] + boxes[:, 1] - x_centers *= 0.5 - y_centers *= 0.5 - - half_widths = boxes[:, 2] - boxes[:, 0] - half_heights = boxes[:, 3] - boxes[:, 1] - half_widths *= 0.5 - half_heights *= 0.5 - - half_widths_cos = half_widths * cos_val - half_widths_sin = half_widths * sin_val - half_heights_cos = half_heights * cos_val - half_heights_sin = half_heights * sin_val - - x00 = -half_widths_cos + half_heights_sin - x10 = half_widths_cos + half_heights_sin - x11 = half_widths_cos - half_heights_sin - x01 = -half_widths_cos - half_heights_sin - x00 += x_centers - x10 += x_centers - x11 += x_centers - x01 += x_centers - - y00 = -half_widths_sin - half_heights_cos - y10 = half_widths_sin - half_heights_cos - y11 = half_widths_sin + half_heights_cos - y01 = -half_widths_sin + half_heights_cos - y00 += y_centers - y10 += y_centers - y11 += y_centers - y01 += y_centers - - rotated_boxes = np.stack([x00, y00, x10, y10, x11, y11, x01, y01], axis=-1) - ret_x_mins = np.min(rotated_boxes[:,0::2], axis=1) - ret_y_mins = np.min(rotated_boxes[:,1::2], axis=1) - ret_x_maxs = np.max(rotated_boxes[:,0::2], axis=1) - ret_y_maxs = np.max(rotated_boxes[:,1::2], axis=1) - - if boxes.ndim == 4: - ret_boxes = np.stack([ret_x_mins, ret_y_mins, ret_x_maxs, ret_y_maxs], axis=-1) - else: - ret_boxes = boxes.copy() - ret_boxes[:, :4] = np.stack([ret_x_mins, ret_y_mins, ret_x_maxs, ret_y_maxs], axis=-1) - - if not return_rotated_boxes: - return ret_boxes - else: - return ret_boxes, rotated_boxes - - \ No newline at end of file diff --git a/spaces/MuhammedAyman29/mm/README.md b/spaces/MuhammedAyman29/mm/README.md deleted file mode 100644 index 58aacb8691f766e483cc57c51b2703390db2bcc3..0000000000000000000000000000000000000000 --- 
a/spaces/MuhammedAyman29/mm/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mm -emoji: 🌖 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/models/README.md b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/models/README.md deleted file mode 100644 index c2e572b6fe07631c17f37b29723fc7a0ac94a81e..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/models/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# Models - -Models are combinations of layers and networks that would be trained. - -Several pre-built canned models are provided to train encoder networks. These -models are intended as both convenience functions and canonical examples. - -* [`BertClassifier`](bert_classifier.py) implements a simple classification -model containing a single classification head using the Classification network. -It can be used as a regression model as well. - -* [`BertTokenClassifier`](bert_token_classifier.py) implements a simple token -classification model containing a single classification head using the -TokenClassification network. - -* [`BertSpanLabeler`](bert_span_labeler.py) implementats a simple single-span -start-end predictor (that is, a model that predicts two values: a start token -index and an end token index), suitable for SQuAD-style tasks. - -* [`BertPretrainer`](bert_pretrainer.py) implements a masked LM and a -classification head using the Masked LM and Classification networks, -respectively. diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/transformer/transformer_layers_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/transformer/transformer_layers_test.py deleted file mode 100644 index 82d37259da2854fb83e086749fe7a8df2c22e955..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/transformer/transformer_layers_test.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Tests for layers in Transformer.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf - -from official.nlp.transformer import attention_layer -from official.nlp.transformer import embedding_layer -from official.nlp.transformer import ffn_layer -from official.nlp.transformer import metrics - - -class TransformerLayersTest(tf.test.TestCase): - - def test_attention_layer(self): - hidden_size = 64 - num_heads = 4 - dropout = 0.5 - dim_per_head = hidden_size // num_heads - layer = attention_layer.SelfAttention(hidden_size, num_heads, dropout) - self.assertDictEqual(layer.get_config(), { - "hidden_size": hidden_size, - "num_heads": num_heads, - "attention_dropout": dropout, - }) - length = 2 - x = tf.ones([1, length, hidden_size]) - bias = tf.ones([1]) - cache = { - "k": tf.zeros([1, 0, num_heads, dim_per_head]), - "v": tf.zeros([1, 0, num_heads, dim_per_head]), - } - y = layer(x, bias, training=True, cache=cache) - self.assertEqual(y.shape, (1, length, 64,)) - self.assertEqual(cache["k"].shape, (1, length, num_heads, dim_per_head,)) - self.assertEqual(cache["v"].shape, (1, length, num_heads, dim_per_head,)) - - def test_embedding_shared_weights(self): - vocab_size = 50 - hidden_size = 64 - length = 2 - layer = embedding_layer.EmbeddingSharedWeights(vocab_size, hidden_size) - self.assertDictEqual(layer.get_config(), { - "vocab_size": 50, - "hidden_size": 64, - }) - - idx = tf.ones([1, length], dtype="int32") - y = layer(idx) - self.assertEqual(y.shape, (1, length, hidden_size,)) - x = tf.ones([1, length, hidden_size]) - output = layer(x, "linear") - self.assertEqual(output.shape, (1, length, vocab_size,)) - - def test_feed_forward_network(self): - hidden_size = 64 - filter_size = 32 - relu_dropout = 0.5 - layer = ffn_layer.FeedForwardNetwork(hidden_size, filter_size, relu_dropout) - self.assertDictEqual(layer.get_config(), { - "hidden_size": hidden_size, - "filter_size": filter_size, - "relu_dropout": relu_dropout, - }) - length = 2 - x = tf.ones([1, length, hidden_size]) - y = layer(x, training=True) - self.assertEqual(y.shape, (1, length, hidden_size,)) - - def test_metric_layer(self): - vocab_size = 50 - logits = tf.keras.layers.Input((None, vocab_size), - dtype="float32", - name="logits") - targets = tf.keras.layers.Input((None,), dtype="int64", name="targets") - output_logits = metrics.MetricLayer(vocab_size)([logits, targets]) - self.assertEqual(output_logits.shape.as_list(), [None, None, vocab_size,]) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/Naszirs397/rvc-models/infer_pack/models_onnx.py b/spaces/Naszirs397/rvc-models/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/Naszirs397/rvc-models/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class 
TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in 
range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is 
used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen 
= SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - 
gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - 
upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 
1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Nee001/bing0/src/components/welcome-screen.tsx b/spaces/Nee001/bing0/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/Nee001/bing0/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' - }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
    - {exampleMessages.map(example => ( - - ))} -
    - ) -} diff --git a/spaces/OAOA/DifFace/basicsr/archs/basicvsr_arch.py b/spaces/OAOA/DifFace/basicsr/archs/basicvsr_arch.py deleted file mode 100644 index ed7b824eae108a9bcca57f1c14dd0d8afafc4f58..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/archs/basicvsr_arch.py +++ /dev/null @@ -1,336 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F - -from basicsr.utils.registry import ARCH_REGISTRY -from .arch_util import ResidualBlockNoBN, flow_warp, make_layer -from .edvr_arch import PCDAlignment, TSAFusion -from .spynet_arch import SpyNet - - -@ARCH_REGISTRY.register() -class BasicVSR(nn.Module): - """A recurrent network for video SR. Now only x4 is supported. - - Args: - num_feat (int): Number of channels. Default: 64. - num_block (int): Number of residual blocks for each branch. Default: 15 - spynet_path (str): Path to the pretrained weights of SPyNet. Default: None. - """ - - def __init__(self, num_feat=64, num_block=15, spynet_path=None): - super().__init__() - self.num_feat = num_feat - - # alignment - self.spynet = SpyNet(spynet_path) - - # propagation - self.backward_trunk = ConvResidualBlocks(num_feat + 3, num_feat, num_block) - self.forward_trunk = ConvResidualBlocks(num_feat + 3, num_feat, num_block) - - # reconstruction - self.fusion = nn.Conv2d(num_feat * 2, num_feat, 1, 1, 0, bias=True) - self.upconv1 = nn.Conv2d(num_feat, num_feat * 4, 3, 1, 1, bias=True) - self.upconv2 = nn.Conv2d(num_feat, 64 * 4, 3, 1, 1, bias=True) - self.conv_hr = nn.Conv2d(64, 64, 3, 1, 1) - self.conv_last = nn.Conv2d(64, 3, 3, 1, 1) - - self.pixel_shuffle = nn.PixelShuffle(2) - - # activation functions - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def get_flow(self, x): - b, n, c, h, w = x.size() - - x_1 = x[:, :-1, :, :, :].reshape(-1, c, h, w) - x_2 = x[:, 1:, :, :, :].reshape(-1, c, h, w) - - flows_backward = self.spynet(x_1, x_2).view(b, n - 1, 2, h, w) - flows_forward = self.spynet(x_2, x_1).view(b, n - 1, 2, h, w) - - return flows_forward, flows_backward - - def forward(self, x): - """Forward function of BasicVSR. - - Args: - x: Input frames with shape (b, n, c, h, w). n is the temporal dimension / number of frames. - """ - flows_forward, flows_backward = self.get_flow(x) - b, n, _, h, w = x.size() - - # backward branch - out_l = [] - feat_prop = x.new_zeros(b, self.num_feat, h, w) - for i in range(n - 1, -1, -1): - x_i = x[:, i, :, :, :] - if i < n - 1: - flow = flows_backward[:, i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - feat_prop = torch.cat([x_i, feat_prop], dim=1) - feat_prop = self.backward_trunk(feat_prop) - out_l.insert(0, feat_prop) - - # forward branch - feat_prop = torch.zeros_like(feat_prop) - for i in range(0, n): - x_i = x[:, i, :, :, :] - if i > 0: - flow = flows_forward[:, i - 1, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - - feat_prop = torch.cat([x_i, feat_prop], dim=1) - feat_prop = self.forward_trunk(feat_prop) - - # upsample - out = torch.cat([out_l[i], feat_prop], dim=1) - out = self.lrelu(self.fusion(out)) - out = self.lrelu(self.pixel_shuffle(self.upconv1(out))) - out = self.lrelu(self.pixel_shuffle(self.upconv2(out))) - out = self.lrelu(self.conv_hr(out)) - out = self.conv_last(out) - base = F.interpolate(x_i, scale_factor=4, mode='bilinear', align_corners=False) - out += base - out_l[i] = out - - return torch.stack(out_l, dim=1) - - -class ConvResidualBlocks(nn.Module): - """Conv and residual block used in BasicVSR. 
- - Args: - num_in_ch (int): Number of input channels. Default: 3. - num_out_ch (int): Number of output channels. Default: 64. - num_block (int): Number of residual blocks. Default: 15. - """ - - def __init__(self, num_in_ch=3, num_out_ch=64, num_block=15): - super().__init__() - self.main = nn.Sequential( - nn.Conv2d(num_in_ch, num_out_ch, 3, 1, 1, bias=True), nn.LeakyReLU(negative_slope=0.1, inplace=True), - make_layer(ResidualBlockNoBN, num_block, num_feat=num_out_ch)) - - def forward(self, fea): - return self.main(fea) - - -@ARCH_REGISTRY.register() -class IconVSR(nn.Module): - """IconVSR, proposed also in the BasicVSR paper. - - Args: - num_feat (int): Number of channels. Default: 64. - num_block (int): Number of residual blocks for each branch. Default: 15. - keyframe_stride (int): Keyframe stride. Default: 5. - temporal_padding (int): Temporal padding. Default: 2. - spynet_path (str): Path to the pretrained weights of SPyNet. Default: None. - edvr_path (str): Path to the pretrained EDVR model. Default: None. - """ - - def __init__(self, - num_feat=64, - num_block=15, - keyframe_stride=5, - temporal_padding=2, - spynet_path=None, - edvr_path=None): - super().__init__() - - self.num_feat = num_feat - self.temporal_padding = temporal_padding - self.keyframe_stride = keyframe_stride - - # keyframe_branch - self.edvr = EDVRFeatureExtractor(temporal_padding * 2 + 1, num_feat, edvr_path) - # alignment - self.spynet = SpyNet(spynet_path) - - # propagation - self.backward_fusion = nn.Conv2d(2 * num_feat, num_feat, 3, 1, 1, bias=True) - self.backward_trunk = ConvResidualBlocks(num_feat + 3, num_feat, num_block) - - self.forward_fusion = nn.Conv2d(2 * num_feat, num_feat, 3, 1, 1, bias=True) - self.forward_trunk = ConvResidualBlocks(2 * num_feat + 3, num_feat, num_block) - - # reconstruction - self.upconv1 = nn.Conv2d(num_feat, num_feat * 4, 3, 1, 1, bias=True) - self.upconv2 = nn.Conv2d(num_feat, 64 * 4, 3, 1, 1, bias=True) - self.conv_hr = nn.Conv2d(64, 64, 3, 1, 1) - self.conv_last = nn.Conv2d(64, 3, 3, 1, 1) - - self.pixel_shuffle = nn.PixelShuffle(2) - - # activation functions - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - def pad_spatial(self, x): - """Apply padding spatially. - - Since the PCD module in EDVR requires that the resolution is a multiple - of 4, we apply padding to the input LR images if their resolution is - not divisible by 4. - - Args: - x (Tensor): Input LR sequence with shape (n, t, c, h, w). - Returns: - Tensor: Padded LR sequence with shape (n, t, c, h_pad, w_pad). 
- """ - n, t, c, h, w = x.size() - - pad_h = (4 - h % 4) % 4 - pad_w = (4 - w % 4) % 4 - - # padding - x = x.view(-1, c, h, w) - x = F.pad(x, [0, pad_w, 0, pad_h], mode='reflect') - - return x.view(n, t, c, h + pad_h, w + pad_w) - - def get_flow(self, x): - b, n, c, h, w = x.size() - - x_1 = x[:, :-1, :, :, :].reshape(-1, c, h, w) - x_2 = x[:, 1:, :, :, :].reshape(-1, c, h, w) - - flows_backward = self.spynet(x_1, x_2).view(b, n - 1, 2, h, w) - flows_forward = self.spynet(x_2, x_1).view(b, n - 1, 2, h, w) - - return flows_forward, flows_backward - - def get_keyframe_feature(self, x, keyframe_idx): - if self.temporal_padding == 2: - x = [x[:, [4, 3]], x, x[:, [-4, -5]]] - elif self.temporal_padding == 3: - x = [x[:, [6, 5, 4]], x, x[:, [-5, -6, -7]]] - x = torch.cat(x, dim=1) - - num_frames = 2 * self.temporal_padding + 1 - feats_keyframe = {} - for i in keyframe_idx: - feats_keyframe[i] = self.edvr(x[:, i:i + num_frames].contiguous()) - return feats_keyframe - - def forward(self, x): - b, n, _, h_input, w_input = x.size() - - x = self.pad_spatial(x) - h, w = x.shape[3:] - - keyframe_idx = list(range(0, n, self.keyframe_stride)) - if keyframe_idx[-1] != n - 1: - keyframe_idx.append(n - 1) # last frame is a keyframe - - # compute flow and keyframe features - flows_forward, flows_backward = self.get_flow(x) - feats_keyframe = self.get_keyframe_feature(x, keyframe_idx) - - # backward branch - out_l = [] - feat_prop = x.new_zeros(b, self.num_feat, h, w) - for i in range(n - 1, -1, -1): - x_i = x[:, i, :, :, :] - if i < n - 1: - flow = flows_backward[:, i, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - if i in keyframe_idx: - feat_prop = torch.cat([feat_prop, feats_keyframe[i]], dim=1) - feat_prop = self.backward_fusion(feat_prop) - feat_prop = torch.cat([x_i, feat_prop], dim=1) - feat_prop = self.backward_trunk(feat_prop) - out_l.insert(0, feat_prop) - - # forward branch - feat_prop = torch.zeros_like(feat_prop) - for i in range(0, n): - x_i = x[:, i, :, :, :] - if i > 0: - flow = flows_forward[:, i - 1, :, :, :] - feat_prop = flow_warp(feat_prop, flow.permute(0, 2, 3, 1)) - if i in keyframe_idx: - feat_prop = torch.cat([feat_prop, feats_keyframe[i]], dim=1) - feat_prop = self.forward_fusion(feat_prop) - - feat_prop = torch.cat([x_i, out_l[i], feat_prop], dim=1) - feat_prop = self.forward_trunk(feat_prop) - - # upsample - out = self.lrelu(self.pixel_shuffle(self.upconv1(feat_prop))) - out = self.lrelu(self.pixel_shuffle(self.upconv2(out))) - out = self.lrelu(self.conv_hr(out)) - out = self.conv_last(out) - base = F.interpolate(x_i, scale_factor=4, mode='bilinear', align_corners=False) - out += base - out_l[i] = out - - return torch.stack(out_l, dim=1)[..., :4 * h_input, :4 * w_input] - - -class EDVRFeatureExtractor(nn.Module): - """EDVR feature extractor used in IconVSR. - - Args: - num_input_frame (int): Number of input frames. - num_feat (int): Number of feature channels - load_path (str): Path to the pretrained weights of EDVR. Default: None. 
- """ - - def __init__(self, num_input_frame, num_feat, load_path): - - super(EDVRFeatureExtractor, self).__init__() - - self.center_frame_idx = num_input_frame // 2 - - # extract pyramid features - self.conv_first = nn.Conv2d(3, num_feat, 3, 1, 1) - self.feature_extraction = make_layer(ResidualBlockNoBN, 5, num_feat=num_feat) - self.conv_l2_1 = nn.Conv2d(num_feat, num_feat, 3, 2, 1) - self.conv_l2_2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_l3_1 = nn.Conv2d(num_feat, num_feat, 3, 2, 1) - self.conv_l3_2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - - # pcd and tsa module - self.pcd_align = PCDAlignment(num_feat=num_feat, deformable_groups=8) - self.fusion = TSAFusion(num_feat=num_feat, num_frame=num_input_frame, center_frame_idx=self.center_frame_idx) - - # activation function - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - if load_path: - self.load_state_dict(torch.load(load_path, map_location=lambda storage, loc: storage)['params']) - - def forward(self, x): - b, n, c, h, w = x.size() - - # extract features for each frame - # L1 - feat_l1 = self.lrelu(self.conv_first(x.view(-1, c, h, w))) - feat_l1 = self.feature_extraction(feat_l1) - # L2 - feat_l2 = self.lrelu(self.conv_l2_1(feat_l1)) - feat_l2 = self.lrelu(self.conv_l2_2(feat_l2)) - # L3 - feat_l3 = self.lrelu(self.conv_l3_1(feat_l2)) - feat_l3 = self.lrelu(self.conv_l3_2(feat_l3)) - - feat_l1 = feat_l1.view(b, n, -1, h, w) - feat_l2 = feat_l2.view(b, n, -1, h // 2, w // 2) - feat_l3 = feat_l3.view(b, n, -1, h // 4, w // 4) - - # PCD alignment - ref_feat_l = [ # reference feature list - feat_l1[:, self.center_frame_idx, :, :, :].clone(), feat_l2[:, self.center_frame_idx, :, :, :].clone(), - feat_l3[:, self.center_frame_idx, :, :, :].clone() - ] - aligned_feat = [] - for i in range(n): - nbr_feat_l = [ # neighboring feature list - feat_l1[:, i, :, :, :].clone(), feat_l2[:, i, :, :, :].clone(), feat_l3[:, i, :, :, :].clone() - ] - aligned_feat.append(self.pcd_align(nbr_feat_l, ref_feat_l)) - aligned_feat = torch.stack(aligned_feat, dim=1) # (b, t, c, h, w) - - # TSA fusion - return self.fusion(aligned_feat) diff --git a/spaces/OAOA/DifFace/basicsr/utils/misc.py b/spaces/OAOA/DifFace/basicsr/utils/misc.py deleted file mode 100644 index c8d4a1403509672e85e74ac476e028cefb6dbb62..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/utils/misc.py +++ /dev/null @@ -1,141 +0,0 @@ -import numpy as np -import os -import random -import time -import torch -from os import path as osp - -from .dist_util import master_only - - -def set_random_seed(seed): - """Set random seeds.""" - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - - -def get_time_str(): - return time.strftime('%Y%m%d_%H%M%S', time.localtime()) - - -def mkdir_and_rename(path): - """mkdirs. If path exists, rename it with timestamp and create a new one. - - Args: - path (str): Folder path. - """ - if osp.exists(path): - new_name = path + '_archived_' + get_time_str() - print(f'Path already exists. 
Rename it to {new_name}', flush=True) - os.rename(path, new_name) - os.makedirs(path, exist_ok=True) - - -@master_only -def make_exp_dirs(opt): - """Make dirs for experiments.""" - path_opt = opt['path'].copy() - if opt['is_train']: - mkdir_and_rename(path_opt.pop('experiments_root')) - else: - mkdir_and_rename(path_opt.pop('results_root')) - for key, path in path_opt.items(): - if ('strict_load' in key) or ('pretrain_network' in key) or ('resume' in key) or ('param_key' in key): - continue - else: - os.makedirs(path, exist_ok=True) - - -def scandir(dir_path, suffix=None, recursive=False, full_path=False): - """Scan a directory to find the interested files. - - Args: - dir_path (str): Path of the directory. - suffix (str | tuple(str), optional): File suffix that we are - interested in. Default: None. - recursive (bool, optional): If set to True, recursively scan the - directory. Default: False. - full_path (bool, optional): If set to True, include the dir_path. - Default: False. - - Returns: - A generator for all the interested files with relative paths. - """ - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('"suffix" must be a string or tuple of strings') - - root = dir_path - - def _scandir(dir_path, suffix, recursive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - if full_path: - return_path = entry.path - else: - return_path = osp.relpath(entry.path, root) - - if suffix is None: - yield return_path - elif return_path.endswith(suffix): - yield return_path - else: - if recursive: - yield from _scandir(entry.path, suffix=suffix, recursive=recursive) - else: - continue - - return _scandir(dir_path, suffix=suffix, recursive=recursive) - - -def check_resume(opt, resume_iter): - """Check resume states and pretrain_network paths. - - Args: - opt (dict): Options. - resume_iter (int): Resume iteration. - """ - if opt['path']['resume_state']: - # get all the networks - networks = [key for key in opt.keys() if key.startswith('network_')] - flag_pretrain = False - for network in networks: - if opt['path'].get(f'pretrain_{network}') is not None: - flag_pretrain = True - if flag_pretrain: - print('pretrain_network path will be ignored during resuming.') - # set pretrained model paths - for network in networks: - name = f'pretrain_{network}' - basename = network.replace('network_', '') - if opt['path'].get('ignore_resume_networks') is None or (network - not in opt['path']['ignore_resume_networks']): - opt['path'][name] = osp.join(opt['path']['models'], f'net_{basename}_{resume_iter}.pth') - print(f"Set {name} to {opt['path'][name]}") - - # change param_key to params in resume - param_keys = [key for key in opt['path'].keys() if key.startswith('param_key')] - for param_key in param_keys: - if opt['path'][param_key] == 'params_ema': - opt['path'][param_key] = 'params' - print(f'Set {param_key} to params') - - -def sizeof_fmt(size, suffix='B'): - """Get human readable file size. - - Args: - size (int): File size. - suffix (str): Suffix. Default: 'B'. - - Return: - str: Formatted file size. 
- """ - for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']: - if abs(size) < 1024.0: - return f'{size:3.1f} {unit}{suffix}' - size /= 1024.0 - return f'{size:3.1f} Y{suffix}' diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/layers.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/layers.py deleted file mode 100644 index f10d557ff5a4fff03b94f81543bd58cf1a66bc8f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/layers.py +++ /dev/null @@ -1,103 +0,0 @@ -import torch -from librosa.filters import mel as librosa_mel_fn -from .audio_processing import dynamic_range_compression -from .audio_processing import dynamic_range_decompression -from .stft import STFT -from .utils import get_mask_from_lengths - - -class LinearNorm(torch.nn.Module): - def __init__(self, in_dim, out_dim, bias=True, w_init_gain='linear'): - super(LinearNorm, self).__init__() - self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias) - - torch.nn.init.xavier_uniform_( - self.linear_layer.weight, - gain=torch.nn.init.calculate_gain(w_init_gain)) - - def forward(self, x): - return self.linear_layer(x) - - -class ConvNorm(torch.nn.Module): - def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, - padding=None, dilation=1, bias=True, w_init_gain='linear'): - super(ConvNorm, self).__init__() - if padding is None: - assert(kernel_size % 2 == 1) - padding = int(dilation * (kernel_size - 1) / 2) - - self.conv = torch.nn.Conv1d(in_channels, out_channels, - kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, - bias=bias) - - torch.nn.init.xavier_uniform_( - self.conv.weight, gain=torch.nn.init.calculate_gain(w_init_gain)) - - def forward(self, signal): - conv_signal = self.conv(signal) - return conv_signal - - -class GlobalAvgPool(torch.nn.Module): - def __init__(self): - super(GlobalAvgPool, self).__init__() - - def forward(self, x, lengths=None): - """Average pooling across time steps (dim=1) with optionally lengths. - Args: - x: torch.Tensor of shape (N, T, ...) 
- lengths: None or torch.Tensor of shape (N,) - dim: dimension to pool - """ - if lengths is None: - return x.mean(dim=1, keepdim=False) - else: - mask = get_mask_from_lengths(lengths).type(x.type()).to(x.device) - mask_shape = list(mask.size()) + [1 for _ in range(x.ndimension()-2)] - mask = mask.reshape(*mask_shape) - numer = (x * mask).sum(dim=1, keepdim=False) - denom = mask.sum(dim=1, keepdim=False) - return numer / denom - - -class TacotronSTFT(torch.nn.Module): - def __init__(self, filter_length=1024, hop_length=256, win_length=1024, - n_mel_channels=80, sampling_rate=22050, mel_fmin=0.0, - mel_fmax=8000.0): - super(TacotronSTFT, self).__init__() - self.n_mel_channels = n_mel_channels - self.sampling_rate = sampling_rate - self.stft_fn = STFT(filter_length, hop_length, win_length) - mel_basis = librosa_mel_fn( - sampling_rate, filter_length, n_mel_channels, mel_fmin, mel_fmax) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer('mel_basis', mel_basis) - - def spectral_normalize(self, magnitudes): - output = dynamic_range_compression(magnitudes) - return output - - def spectral_de_normalize(self, magnitudes): - output = dynamic_range_decompression(magnitudes) - return output - - def mel_spectrogram(self, y): - """Computes mel-spectrograms from a batch of waves - PARAMS - ------ - y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1] - - RETURNS - ------- - mel_output: torch.FloatTensor of shape (B, n_mel_channels, T) - """ - assert(torch.min(y.data) >= -1) - assert(torch.max(y.data) <= 1) - - magnitudes, phases = self.stft_fn.transform(y) - magnitudes = magnitudes.data - mel_output = torch.matmul(self.mel_basis, magnitudes) - mel_output = self.spectral_normalize(mel_output) - return mel_output diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py deleted file mode 100644 index 9304f99eb8169a614f39babc830c84cac80e080b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def gen_forward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - blocks = [32, 64, 128, 256] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include "dynamicconv_cuda.cuh" - -std::vector dynamicconv_cuda_forward(at::Tensor input, at::Tensor weight, int padding_l) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - const dim3 blocks(minibatch, numFeatures); - - auto output = at::zeros_like(input); - auto stream = at::cuda::getCurrentCUDAStream(); -""" - - switch = """ - switch(filterSize) { -""" - - case_k = """ - case {k}: -""" - - main_block = """ - if (padding_l == {pad}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "dynamicconv_forward", ([&] {{ - dynamicconv_forward_kernel<{k}, {b_size}, {pad}, scalar_t> - <<>>( - input.data(), - weight.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - output.data()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping forward pass" << std::endl; - } - break;\n -""" - - end = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping forward pass" << std::endl; - } - - return {output}; -} -""" - - with open("dynamicconv_cuda_forward.cu", "w") as forward: - forward.write(head) - forward.write(switch) - for k in kernels: - b_size = 32 - for b in blocks: - if b > k: - b_size = b - break - forward.write(case_k.format(k=k)) - for pad in [k // 2, k - 1]: - forward.write(main_block.format(k=k, b_size=b_size, pad=pad)) - forward.write(bad_padding) - forward.write(end) - - -def gen_backward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - thresh = [512, 512, 512, 512, 512, 380, 256, 256] - min_block = [64, 64, 64, 64, 64, 64, 128, 256] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include "dynamicconv_cuda.cuh" - -std::vector dynamicconv_cuda_backward(at::Tensor gradOutput, int padding_l, at::Tensor input, at::Tensor weight) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - auto numChunks = 1; - - auto gradInput = at::zeros_like(input); - auto gradWeight = at::zeros_like(weight); - auto stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(minibatch, numHeads, numChunks); -""" - - sequence_if = """ - if (sequenceLength < {seq}) {{ - switch(filterSize) {{ -""" - - case_k = """ - case {k}: -""" - - chunks_reset = """ - numChunks = int(ceilf(sequenceLength/float({b_size}))); - blocks = dim3(minibatch, numHeads, numChunks); -""" - - main_block = """ - if (padding_l == {p}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(gradOutput.scalar_type(), "dynamicconv_backward", ([&] {{ - dynamicconv_backward_kernel<{k}, {b_size}, {p}, scalar_t> - <<>>( - gradOutput.data(), - input.data(), - weight.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - gradWeight.data(), - gradInput.data()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping backward pass" << std::endl; - } - break;\n -""" - - bad_filter = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping backward pass" << std::endl; - } -""" - - con_else = """ - } else -""" - - final_else = """ - { - switch(filterSize) { -""" - - last_return = """ - } - return {gradInput, gradWeight}; -} -""" - - with open("dynamicconv_cuda_backward.cu", "w") as backward: - backward.write(head) - for seq in seqs: - backward.write(sequence_if.format(seq=seq)) - for k, t, m in zip(kernels, thresh, min_block): - backward.write(case_k.format(k=k)) - if seq <= t: - b_size = seq - else: - b_size = m - backward.write(chunks_reset.format(b_size=b_size)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=b_size, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(con_else) - backward.write(final_else) - for k, m in zip(kernels, min_block): - backward.write(case_k.format(k=k)) - backward.write(chunks_reset.format(b_size=m)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=m, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(last_return) - - -if __name__ == "__main__": - gen_forward() - gen_backward() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/lstm_cell_with_zoneout.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/lstm_cell_with_zoneout.py deleted file mode 100644 index f04e5db255c62bbe0faebbc641f579f92be5580c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/lstm_cell_with_zoneout.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch.nn as nn - - -class LSTMCellWithZoneOut(nn.Module): - """ - Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations - https://arxiv.org/abs/1606.01305 - """ - - def __init__(self, prob: float, input_size: int, hidden_size: int, - bias: bool = True): - super(LSTMCellWithZoneOut, self).__init__() - self.lstm_cell = nn.LSTMCell(input_size, hidden_size, bias=bias) - self.prob = prob - if prob > 1.0 or prob < 0.0: - raise ValueError("zoneout probability must be in the range from " - "0.0 to 1.0.") - - def zoneout(self, h, next_h, prob): - if isinstance(h, tuple): - return tuple( - [self.zoneout(h[i], next_h[i], prob) for i in range(len(h))] - ) - - if self.training: - mask = h.new_zeros(*h.size()).bernoulli_(prob) - return mask * h + (1 - mask) * next_h - - return prob * h + (1 - prob) * next_h - - def forward(self, x, h): - return self.zoneout(h, self.lstm_cell(x, h), self.prob) diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/README.md b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/README.md deleted file mode 100644 index b501a6eb2a047d4adb6f297436c1c002c926a09f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/README.md +++ /dev/null @@ -1,115 +0,0 @@ -# HuBERT - -## Pre-trained and fine-tuned (ASR) models -Model | Pretraining Data | Finetuning Dataset | Model -|---|---|---|--- -HuBERT Base (~95M params) | [Librispeech](http://www.openslr.org/12) 960 hr | No finetuning (Pretrained Model) | [download](https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt) -HuBERT Large (~316M params) | [Libri-Light](https://github.com/facebookresearch/libri-light) 60k hr | No finetuning (Pretrained Model) | [download](https://dl.fbaipublicfiles.com/hubert/hubert_large_ll60k.pt) -HuBERT Extra Large (~1B params) | [Libri-Light](https://github.com/facebookresearch/libri-light) 60k hr | No finetuning (Pretrained Model) | [download](https://dl.fbaipublicfiles.com/hubert/hubert_xtralarge_ll60k.pt) -HuBERT Large | [Libri-Light](https://github.com/facebookresearch/libri-light) 60k hr | [Librispeech](http://www.openslr.org/12) 960 hr | [download](https://dl.fbaipublicfiles.com/hubert/hubert_large_ll60k_finetune_ls960.pt) -HuBERT Extra Large | [Libri-Light](https://github.com/facebookresearch/libri-light) 60k hr | [Librispeech](http://www.openslr.org/12) 960 hr | [download](https://dl.fbaipublicfiles.com/hubert/hubert_xtralarge_ll60k_finetune_ls960.pt) - -## Load a model -``` -ckpt_path = "/path/to/the/checkpoint.pt" -models, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path]) -model = models[0] -``` - -## Train a new model - -### Data preparation - -Follow the steps in `./simple_kmeans` to create: -- `{train,valid}.tsv` waveform list files -- `{train,valid}.km` frame-aligned pseudo label files. -The `label_rate` is the same as the feature frame rate used for clustering, -which is 100Hz for MFCC features and 50Hz for HuBERT features by default. - -### Pre-train a HuBERT model - -Suppose `{train,valid}.tsv` are saved at `/path/to/data`, `{train,valid}.km` -are saved at `/path/to/labels`, and the label rate is 100Hz. 
- -To train a base model (12 layer transformer), run: -```sh -$ python fairseq_cli/hydra_train.py \ - --config-dir /path/to/fairseq-py/examples/hubert/config/pretrain \ - --config-name hubert_base_librispeech \ - task.data=/path/to/data task.label_dir=/path/to/labels model.label_rate=100 -``` - -### Fine-tune a HuBERT model with a CTC loss - -Suppose `{train,valid}.tsv` are saved at `/path/to/data`, and their -corresponding character transcripts `{train,valid}.ltr` are saved at -`/path/to/trans`. - -To fine-tune a pre-trained HuBERT model at `/path/to/checkpoint`, run -```sh -$ python fairseq_cli/hydra_train.py \ - --config-dir /path/to/fairseq-py/examples/hubert/config/finetune \ - --config-name base_10h \ - task.data=/path/to/data task.label_dir=/path/to/trans \ - model.w2v_path=/path/to/checkpoint -``` - -### Decode a HuBERT model - -Suppose the `test.tsv` and `test.ltr` are the waveform list and transcripts of -the split to be decoded, saved at `/path/to/data`, and the fine-tuned model is -saved at `/path/to/checkpoint`. We support three decoding modes: -- Viterbi decoding: greedy decoding without a language model -- KenLM decoding: decoding with an arpa-format KenLM n-gram language model -- Fairseq-LM deocding: decoding with a Fairseq neural language model - - -#### Viterbi decoding - -`task.normalize` needs to be consistent with the value used during fine-tuning. -Decoding results will be saved at -`/path/to/experiment/directory/decode/viterbi/test`. - -```sh -$ python examples/speech_recognition/new/infer.py \ - --config-dir /path/to/fairseq-py/examples/hubert/config/decode \ - --config-name infer_viterbi \ - task.data=/path/to/data \ - task.normalize=[true|false] \ - decoding.exp_dir=/path/to/experiment/directory \ - common_eval.path=/path/to/checkpoint - dataset.gen_subset=test \ -``` - -#### KenLM / Fairseq-LM decoding - -Suppose the pronunciation lexicon and the n-gram LM are saved at -`/path/to/lexicon` and `/path/to/arpa`, respectively. Decoding results will be -saved at `/path/to/experiment/directory/decode/kenlm/test`. - -```sh -$ python examples/speech_recognition/new/infer.py \ - --config-dir /path/to/fairseq-py/examples/hubert/config/decode \ - --config-name infer_kenlm \ - task.data=/path/to/data \ - task.normalize=[true|false] \ - decoding.exp_dir=/path/to/experiment/directory \ - common_eval.path=/path/to/checkpoint - dataset.gen_subset=test \ - decoding.decoder.lexicon=/path/to/lexicon \ - decoding.decoder.lmpath=/path/to/arpa -``` - -The command above uses the default decoding hyperparameter, which can be found -in `examples/speech_recognition/hydra/decoder.py`. These parameters can be -configured from the command line. For example, to search with a beam size of -500, we can append the command above with `decoding.decoder.beam=500`. -Important parameters include: -- decoding.decoder.beam -- decoding.decoder.beamthreshold -- decoding.decoder.lmweight -- decoding.decoder.wordscore -- decoding.decoder.silweight - -To decode with a Fairseq LM, use `--config-name infer_fsqlm` instead, and -change the path of lexicon and LM accordingly. 
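For readers who want to go one step beyond the "Load a model" snippet above, the sketch below shows what a feature-extraction call on a *pretrained* (not fine-tuned) checkpoint could look like. It is a minimal, hypothetical example rather than code from the repository: the checkpoint and audio paths, the `soundfile` dependency, the 16 kHz mono assumption, and the choice of `output_layer=9` are all illustrative, and `extract_features` is assumed to follow the usual fairseq HuBERT signature.

```python
# Minimal sketch (assumptions noted above): dump frame-level HuBERT features.
import torch
import torch.nn.functional as F
import soundfile as sf                     # assumed audio-loading dependency
from fairseq import checkpoint_utils

ckpt_path = "/path/to/hubert_base_ls960.pt"            # illustrative path
models, cfg, task = checkpoint_utils.load_model_ensemble_and_task([ckpt_path])
model = models[0].eval()

wav, sr = sf.read("/path/to/audio.wav")                # expected: 16 kHz mono waveform
source = torch.from_numpy(wav).float()                 # shape (num_samples,)
if getattr(cfg.task, "normalize", False):              # some checkpoints expect a normalized waveform
    source = F.layer_norm(source, source.shape)
source = source.unsqueeze(0)                           # shape (1, num_samples)

with torch.no_grad():
    # Returns (features, padding_mask); features has shape (1, num_frames, dim),
    # at roughly 50 frames per second, matching the 50 Hz label rate mentioned above.
    feats, _ = model.extract_features(source=source, padding_mask=None, output_layer=9)
print(feats.shape)
```

Features of this kind, taken from an intermediate transformer layer, are what the `./simple_kmeans` scripts cluster to produce the frame-aligned `.km` pseudo-labels used for pre-training.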
diff --git a/spaces/Omnibus/pdf-reader/README.md b/spaces/Omnibus/pdf-reader/README.md deleted file mode 100644 index 5b85b787b8592b48f02a671c8a30aece2fa30af5..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/pdf-reader/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Pdf Reader -emoji: 🌖 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/projects/README.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/projects/README.md deleted file mode 100644 index 95afe7ff8c8a9bd2f56621fcc3c1bdac11c256a9..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/projects/README.md +++ /dev/null @@ -1,2 +0,0 @@ - -Projects live in the [`projects` directory](../../projects) under the root of this repository, but not here. diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/dev/packaging/gen_install_table.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/dev/packaging/gen_install_table.py deleted file mode 100644 index b4c852dc53de613707b9668f748184c2b63b9dea..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/dev/packaging/gen_install_table.py +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# -*- coding: utf-8 -*- - -import argparse - -template = """
<details><summary> install </summary><pre><code>\
-python -m pip install detectron2{d2_version} -f \\
-  https://dl.fbaipublicfiles.com/detectron2/wheels/{cuda}/torch{torch}/index.html
-</code></pre> </details>
    """ -CUDA_SUFFIX = { - "11.3": "cu113", - "11.1": "cu111", - "11.0": "cu110", - "10.2": "cu102", - "10.1": "cu101", - "10.0": "cu100", - "9.2": "cu92", - "cpu": "cpu", -} - - -def gen_header(torch_versions): - return '' + "".join( - [ - ''.format(t) - for t in torch_versions - ] - ) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--d2-version", help="detectron2 version number, default to empty") - args = parser.parse_args() - d2_version = f"=={args.d2_version}" if args.d2_version else "" - - all_versions = ( - [("1.8", k) for k in ["11.1", "10.2", "10.1", "cpu"]] - + [("1.9", k) for k in ["11.1", "10.2", "cpu"]] - + [("1.10", k) for k in ["11.3", "11.1", "10.2", "cpu"]] - ) - - torch_versions = sorted( - {k[0] for k in all_versions}, key=lambda x: int(x.split(".")[1]), reverse=True - ) - cuda_versions = sorted( - {k[1] for k in all_versions}, key=lambda x: float(x) if x != "cpu" else 0, reverse=True - ) - - table = gen_header(torch_versions) - for cu in cuda_versions: - table += f""" """ - cu_suffix = CUDA_SUFFIX[cu] - for torch in torch_versions: - if (torch, cu) in all_versions: - cell = template.format(d2_version=d2_version, cuda=cu_suffix, torch=torch) - else: - cell = "" - table += f""" """ - table += "" - table += "
    CUDA torch {}
    {cu}{cell}
    " - print(table) diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/utils.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/utils.py deleted file mode 100644 index f337db7db54c82be041698d694e1403e8918c4c0..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/utils.py +++ /dev/null @@ -1,40 +0,0 @@ -"""Modified from https://github.com/CSAILVision/semantic-segmentation-pytorch""" - -import os -import sys - -import numpy as np -import torch - -try: - from urllib import urlretrieve -except ImportError: - from urllib.request import urlretrieve - - -def load_url(url, model_dir='./pretrained', map_location=None): - if not os.path.exists(model_dir): - os.makedirs(model_dir) - filename = url.split('/')[-1] - cached_file = os.path.join(model_dir, filename) - if not os.path.exists(cached_file): - sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file)) - urlretrieve(url, cached_file) - return torch.load(cached_file, map_location=map_location) - - -def color_encode(labelmap, colors, mode='RGB'): - labelmap = labelmap.astype('int') - labelmap_rgb = np.zeros((labelmap.shape[0], labelmap.shape[1], 3), - dtype=np.uint8) - for label in np.unique(labelmap): - if label < 0: - continue - labelmap_rgb += (labelmap == label)[:, :, np.newaxis] * \ - np.tile(colors[label], - (labelmap.shape[0], labelmap.shape[1], 1)) - - if mode == 'BGR': - return labelmap_rgb[:, :, ::-1] - else: - return labelmap_rgb diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/vit.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/vit.py deleted file mode 100644 index 59e4479650690e08cbc4cab9427aefda47c2116d..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/vit.py +++ /dev/null @@ -1,459 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/vision_transformer.py.""" - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (Conv2d, Linear, build_activation_layer, build_norm_layer, - constant_init, kaiming_init, normal_init) -from annotator.uniformer.mmcv.runner import _load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import DropPath, trunc_normal_ - - -class Mlp(nn.Module): - """MLP layer for Encoder block. - - Args: - in_features(int): Input dimension for the first fully - connected layer. - hidden_features(int): Output dimension for the first fully - connected layer. - out_features(int): Output dementsion for the second fully - connected layer. - act_cfg(dict): Config dict for activation layer. - Default: dict(type='GELU'). - drop(float): Drop rate for the dropout layer. Dropout rate has - to be between 0 and 1. Default: 0. 
- """ - - def __init__(self, - in_features, - hidden_features=None, - out_features=None, - act_cfg=dict(type='GELU'), - drop=0.): - super(Mlp, self).__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = Linear(in_features, hidden_features) - self.act = build_activation_layer(act_cfg) - self.fc2 = Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class Attention(nn.Module): - """Attention layer for Encoder block. - - Args: - dim (int): Dimension for the input vector. - num_heads (int): Number of parallel attention heads. - qkv_bias (bool): Enable bias for qkv if True. Default: False. - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - attn_drop (float): Drop rate for attention output weights. - Default: 0. - proj_drop (float): Drop rate for output weights. Default: 0. - """ - - def __init__(self, - dim, - num_heads=8, - qkv_bias=False, - qk_scale=None, - attn_drop=0., - proj_drop=0.): - super(Attention, self).__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim**-0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - b, n, c = x.shape - qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, - c // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(b, n, c) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Block(nn.Module): - """Implements encoder block with residual connection. - - Args: - dim (int): The feature dimension. - num_heads (int): Number of parallel attention heads. - mlp_ratio (int): Ratio of mlp hidden dim to embedding dim. - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop (float): Drop rate for mlp output weights. Default: 0. - attn_drop (float): Drop rate for attention output weights. - Default: 0. - proj_drop (float): Drop rate for attn layer output weights. - Default: 0. - drop_path (float): Drop rate for paths of model. - Default: 0. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN', requires_grad=True). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - def __init__(self, - dim, - num_heads, - mlp_ratio=4, - qkv_bias=False, - qk_scale=None, - drop=0., - attn_drop=0., - proj_drop=0., - drop_path=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN', eps=1e-6), - with_cp=False): - super(Block, self).__init__() - self.with_cp = with_cp - _, self.norm1 = build_norm_layer(norm_cfg, dim) - self.attn = Attention(dim, num_heads, qkv_bias, qk_scale, attn_drop, - proj_drop) - self.drop_path = DropPath( - drop_path) if drop_path > 0. 
else nn.Identity() - _, self.norm2 = build_norm_layer(norm_cfg, dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, - hidden_features=mlp_hidden_dim, - act_cfg=act_cfg, - drop=drop) - - def forward(self, x): - - def _inner_forward(x): - out = x + self.drop_path(self.attn(self.norm1(x))) - out = out + self.drop_path(self.mlp(self.norm2(out))) - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding. - - Args: - img_size (int | tuple): Input image size. - default: 224. - patch_size (int): Width and height for a patch. - default: 16. - in_channels (int): Input channels for images. Default: 3. - embed_dim (int): The embedding dimension. Default: 768. - """ - - def __init__(self, - img_size=224, - patch_size=16, - in_channels=3, - embed_dim=768): - super(PatchEmbed, self).__init__() - if isinstance(img_size, int): - self.img_size = (img_size, img_size) - elif isinstance(img_size, tuple): - self.img_size = img_size - else: - raise TypeError('img_size must be type of int or tuple') - h, w = self.img_size - self.patch_size = (patch_size, patch_size) - self.num_patches = (h // patch_size) * (w // patch_size) - self.proj = Conv2d( - in_channels, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - return self.proj(x).flatten(2).transpose(1, 2) - - -@BACKBONES.register_module() -class VisionTransformer(nn.Module): - """Vision transformer backbone. - - A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for - Image Recognition at Scale` - https://arxiv.org/abs/2010.11929 - - Args: - img_size (tuple): input image size. Default: (224, 224). - patch_size (int, tuple): patch size. Default: 16. - in_channels (int): number of input channels. Default: 3. - embed_dim (int): embedding dimension. Default: 768. - depth (int): depth of transformer. Default: 12. - num_heads (int): number of attention heads. Default: 12. - mlp_ratio (int): ratio of mlp hidden dim to embedding dim. - Default: 4. - out_indices (list | tuple | int): Output from which stages. - Default: -1. - qkv_bias (bool): enable bias for qkv if True. Default: True. - qk_scale (float): override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): dropout rate. Default: 0. - attn_drop_rate (float): attention dropout rate. Default: 0. - drop_path_rate (float): Rate of DropPath. Default: 0. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN', eps=1e-6, requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='GELU'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - final_norm (bool): Whether to add a additional layer to normalize - final feature map. Default: False. - interpolate_mode (str): Select the interpolate mode for position - embeding vector resize. Default: bicubic. - with_cls_token (bool): If concatenating class token into image tokens - as transformer input. Default: True. - with_cp (bool): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. 
- """ - - def __init__(self, - img_size=(224, 224), - patch_size=16, - in_channels=3, - embed_dim=768, - depth=12, - num_heads=12, - mlp_ratio=4, - out_indices=11, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - norm_cfg=dict(type='LN', eps=1e-6, requires_grad=True), - act_cfg=dict(type='GELU'), - norm_eval=False, - final_norm=False, - with_cls_token=True, - interpolate_mode='bicubic', - with_cp=False): - super(VisionTransformer, self).__init__() - self.img_size = img_size - self.patch_size = patch_size - self.features = self.embed_dim = embed_dim - self.patch_embed = PatchEmbed( - img_size=img_size, - patch_size=patch_size, - in_channels=in_channels, - embed_dim=embed_dim) - - self.with_cls_token = with_cls_token - self.cls_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim)) - self.pos_embed = nn.Parameter( - torch.zeros(1, self.patch_embed.num_patches + 1, embed_dim)) - self.pos_drop = nn.Dropout(p=drop_rate) - - if isinstance(out_indices, int): - self.out_indices = [out_indices] - elif isinstance(out_indices, list) or isinstance(out_indices, tuple): - self.out_indices = out_indices - else: - raise TypeError('out_indices must be type of int, list or tuple') - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth) - ] # stochastic depth decay rule - self.blocks = nn.ModuleList([ - Block( - dim=embed_dim, - num_heads=num_heads, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=dpr[i], - attn_drop=attn_drop_rate, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp) for i in range(depth) - ]) - - self.interpolate_mode = interpolate_mode - self.final_norm = final_norm - if final_norm: - _, self.norm = build_norm_layer(norm_cfg, embed_dim) - - self.norm_eval = norm_eval - self.with_cp = with_cp - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = get_root_logger() - checkpoint = _load_checkpoint(pretrained, logger=logger) - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - if 'pos_embed' in state_dict.keys(): - if self.pos_embed.shape != state_dict['pos_embed'].shape: - logger.info(msg=f'Resize the pos_embed shape from \ -{state_dict["pos_embed"].shape} to {self.pos_embed.shape}') - h, w = self.img_size - pos_size = int( - math.sqrt(state_dict['pos_embed'].shape[1] - 1)) - state_dict['pos_embed'] = self.resize_pos_embed( - state_dict['pos_embed'], (h, w), (pos_size, pos_size), - self.patch_size, self.interpolate_mode) - - self.load_state_dict(state_dict, False) - - elif pretrained is None: - # We only implement the 'jax_impl' initialization implemented at - # https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L353 # noqa: E501 - trunc_normal_(self.pos_embed, std=.02) - trunc_normal_(self.cls_token, std=.02) - for n, m in self.named_modules(): - if isinstance(m, Linear): - trunc_normal_(m.weight, std=.02) - if m.bias is not None: - if 'mlp' in n: - normal_init(m.bias, std=1e-6) - else: - constant_init(m.bias, 0) - elif isinstance(m, Conv2d): - kaiming_init(m.weight, mode='fan_in') - if m.bias is not None: - constant_init(m.bias, 0) - elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): - constant_init(m.bias, 0) - constant_init(m.weight, 1.0) - else: - raise TypeError('pretrained must be a str or None') - - def _pos_embeding(self, img, patched_img, pos_embed): - """Positiong embeding method. 
- - Resize the pos_embed, if the input image size doesn't match - the training size. - Args: - img (torch.Tensor): The inference image tensor, the shape - must be [B, C, H, W]. - patched_img (torch.Tensor): The patched image, it should be - shape of [B, L1, C]. - pos_embed (torch.Tensor): The pos_embed weighs, it should be - shape of [B, L2, c]. - Return: - torch.Tensor: The pos encoded image feature. - """ - assert patched_img.ndim == 3 and pos_embed.ndim == 3, \ - 'the shapes of patched_img and pos_embed must be [B, L, C]' - x_len, pos_len = patched_img.shape[1], pos_embed.shape[1] - if x_len != pos_len: - if pos_len == (self.img_size[0] // self.patch_size) * ( - self.img_size[1] // self.patch_size) + 1: - pos_h = self.img_size[0] // self.patch_size - pos_w = self.img_size[1] // self.patch_size - else: - raise ValueError( - 'Unexpected shape of pos_embed, got {}.'.format( - pos_embed.shape)) - pos_embed = self.resize_pos_embed(pos_embed, img.shape[2:], - (pos_h, pos_w), self.patch_size, - self.interpolate_mode) - return self.pos_drop(patched_img + pos_embed) - - @staticmethod - def resize_pos_embed(pos_embed, input_shpae, pos_shape, patch_size, mode): - """Resize pos_embed weights. - - Resize pos_embed using bicubic interpolate method. - Args: - pos_embed (torch.Tensor): pos_embed weights. - input_shpae (tuple): Tuple for (input_h, intput_w). - pos_shape (tuple): Tuple for (pos_h, pos_w). - patch_size (int): Patch size. - Return: - torch.Tensor: The resized pos_embed of shape [B, L_new, C] - """ - assert pos_embed.ndim == 3, 'shape of pos_embed must be [B, L, C]' - input_h, input_w = input_shpae - pos_h, pos_w = pos_shape - cls_token_weight = pos_embed[:, 0] - pos_embed_weight = pos_embed[:, (-1 * pos_h * pos_w):] - pos_embed_weight = pos_embed_weight.reshape( - 1, pos_h, pos_w, pos_embed.shape[2]).permute(0, 3, 1, 2) - pos_embed_weight = F.interpolate( - pos_embed_weight, - size=[input_h // patch_size, input_w // patch_size], - align_corners=False, - mode=mode) - cls_token_weight = cls_token_weight.unsqueeze(1) - pos_embed_weight = torch.flatten(pos_embed_weight, 2).transpose(1, 2) - pos_embed = torch.cat((cls_token_weight, pos_embed_weight), dim=1) - return pos_embed - - def forward(self, inputs): - B = inputs.shape[0] - - x = self.patch_embed(inputs) - - cls_tokens = self.cls_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, x), dim=1) - x = self._pos_embeding(inputs, x, self.pos_embed) - - if not self.with_cls_token: - # Remove class token for transformer input - x = x[:, 1:] - - outs = [] - for i, blk in enumerate(self.blocks): - x = blk(x) - if i == len(self.blocks) - 1: - if self.final_norm: - x = self.norm(x) - if i in self.out_indices: - if self.with_cls_token: - # Remove class token and reshape token for decoder head - out = x[:, 1:] - else: - out = x - B, _, C = out.shape - out = out.reshape(B, inputs.shape[2] // self.patch_size, - inputs.shape[3] // self.patch_size, - C).permute(0, 3, 1, 2) - outs.append(out) - - return tuple(outs) - - def train(self, mode=True): - super(VisionTransformer, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.LayerNorm): - m.eval() diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/mapping.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/mapping.go deleted file mode 100644 index 75c73d9f010c2a2ee974593f75208232d12e8ad0..0000000000000000000000000000000000000000 Binary files 
a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/mapping.go and /dev/null differ diff --git a/spaces/PaulEdwards/StarWords/app.py b/spaces/PaulEdwards/StarWords/app.py deleted file mode 100644 index ba04fd113139f6ad2a8708e00ef5027be3941cd7..0000000000000000000000000000000000000000 --- a/spaces/PaulEdwards/StarWords/app.py +++ /dev/null @@ -1,1036 +0,0 @@ -import gradio as gr -from transformers import pipeline -title = "Starwars words..." -examples = [ - ["Did you hear that? They've shut down the main reactor. We'll be destroyed for sure. This is madness!"], - ["We're doomed!"], - ["There'll be no escape for the Princess this time."], - ["What's that?"], - ["I should have known better than to trust the logic of a half-sized thermocapsulary dehousing assister..."], - ["Hurry up! Come with me! What are you waiting for?! Get in gear!"], - ["Artoo! Artoo-Detoo, where are you?"], - ["At last! Where have you been?"], - ["They're heading in this direction. What are we going to do? We'll be sent to the spice mines of Kessel or smashed into who knows what!"], - ["Wait a minute, where are you going?"], - ["The Death Star plans are not in the main computer."], - ["Where are those transmissions you intercepted?"], - ["We intercepted no transmissions. Aaah... This is a consular ship. Were on a diplomatic mission."], - ["If this is a consular ship... where is the Ambassador?"], - ["Commander, tear this ship apart until you've found those plans and bring me the Ambassador. I want her alive!"], - ["There she is! Set for stun!"], - ["She'll be all right. Inform Lord Vader we have a prisoner."], - ["Hey, you're not permitted in there. It's restricted. You'll be deactivated for sure.."], - ["Don't call me a mindless philosopher, you overweight glob of grease! Now come out before somebody sees you."], - ["Secret mission? What plans? What are you talking about? I'm not getting in there!"], - ["I'm going to regret this."], - ["There goes another one."], - ["Hold your fire. There are no life forms. It must have been short-circuited."], - ["That's funny, the damage doesn't look as bad from out here."], - ["Are you sure this things safe?"], - ["I've told you kids to slow down!"], - ["Did I hear a young noise blast through here?"], - ["It was just Wormie on another rampage."], - ["Shape it up you guys!... Biggs?"], - ["I didn't know you were back! When did you get in?"], - ["Just now. I wanted to surprise you, hot shot. I thought you'd be here... certainly didn't expect you to be out working. "], - ["The Academy didn't change you much... but you're back so soon? Hey, what happened, didn't you get your commission?"], - ["Of course I got it. Signed aboard The Rand Ecliptic last week. First mate Biggs Darklighter at your service...... I just came back to say goodbye to all you unfortunate landlocked simpletons."], - ["I almost forgot. There's a battle going on! Right here in our system. Come and look!"], - ["Not again! Forget it."], - ["There they are!"], - ["That's no battle, hot shot... they're just sitting there! Probably a freighter-tanker refueling."], - ["But there was a lot of firing earlier..."], - ["Hey, easy with those..."], - ["Don't worry about it, Wormie."], - ["I keep telling you, the Rebellion is a long way from here. I doubt if the Empire would even fight to keep this system. Believe me Luke, this planet is a big hunk of nothing..."], - ["Lord Vader, I should have known. Only you could be so bold. 
The Imperial Senate will not sit stillfor this, when they hear you've attacked a diplomatic..."], - ["Don't play games with me, Your Highness. You weren't on any mercy mission this time. You passed directly through a restricted system. Several transmissions were beamed to this ship by Rebel spies. I want to know what happened to the plans they sent you."], - ["I don't know what you're talking about. I'm a member of the Imperial Senate on a diplomatic mission to Alderaan..."], - ["You're a part of the Rebel Alliance... and a traitor. Take her away!"], - ["Holding her is dangerous. If word of this gets out, it could generate sympathy for the Rebellion in the senate."], - ["I have traced the Rebel spies to her. Now she is my only link to find their secret base!"], - ["you anything."], - ["Leave that to me. Send a distress signal and then inform the senate that all aboard were killed!"], - ["Lord Vader, the battle station plans are not aboard this ship! And no transmissions were made. An escape pod was jettisoned during the fighting, but no life forms were aboard."], - ["She must have hidden the plans in the escape pod. Send a detachment down to retrieve them. See to it personally, Commander. There'll be no one to stop us this time."], - ["Yes, sir."], - ["How did I get into this mess? I really don't know how. We seem to be made to suffer. It's our lot in life."], - ["I've got to rest before I fall apart. My joints are almost frozen. "], - ["What a desolate place this is."], - ["Where are you going?"], - ["Well, I'm not going that way. It's much too rocky. This way is much easier."], - ["What makes you think there are settlements over there?"], - ["Don't get technical with me."], - ["What mission? What are you talking about? I've had just about enough of you! Go that way! You'll be malfunctioning within a day, you nearsighted scrap pile!"], - ["And don't let me catch you following me begging for help, because you won't get it."], - ["No more adventures. I'm not going that way."], - ["That malfunctioning little twerp. This is all his fault! He tricked me into going this way, but he'll do no better."], - ["Wait, what's that? A transport! I'm saved!"], - ["Over here! Help! Please, help!"], - ["... so I cut off my power, shut down the afterburners and came in low on Deak's trail. I was so close I thought I was going to fry my instruments. As it was I busted up the Skyhopper pretty bad. Uncle Owen was pretty upset. He grounded me for the rest of the season. You should have been there... it was fantastic."], - ["You ought to take it a little easy Luke. You may be the hottest bushpilot this side of Mos Eisley, but those little Skyhoppers are dangerous. Keep it up, and one day, whammo, you're going to be nothing more than a dark spot on the down side of a canyon wall."], - ["Look who's talking. Now that you've been around those giant starships you're beginning to sound like my uncle. You've gotten soft in the city..."], - ["I've missed you kid."], - ["Well, things haven't been the same since you left, Biggs. It's been so...quiet."], - ["Luke, I didn't come back just to say goodbye... I shouldn't tell you this, but you're the only one I can trust... and if I don't come back, I want somebody to know."], - ["What are you talking about?"], - ["I made some friends at the Academy. ... when our frigate goes to one of the central systems, we're going to jump ship and join the Alliance..."], - ["Join the Rebellion?! Are you kidding! How?"], - ["Quiet down will ya! 
You got a mouth bigger than a meteor crater!"], - ["I'm sorry. I'm quiet. Listen how quiet I am. You can barely hear me..."], - ["My friend has a friend on Bestine who might help us make contact."], - ["around forever trying to find them."], - ["I know it's a long shot, but if I don't find them I'll do what I can on my own... It's what we always talked about. Luke, I'm not going to wait for the Empire to draft me into service. The Rebellion is spreading and I want to be on the right side - the side I believe in. "], - ["And I'm stuck here..."], - ["I thought you were going to the Academy next term. You'll get your chance to get off this rock."], - ["Not likely! I had to cancel my application. There has been a lot of unrest among the Sand People since you left... they've even raided the outskirts of Anchorhead."], - ["Your uncle could hold off a whole colony of Sand People with one blaster."], - ["I know, but he's got enough vaporators going to make the place pay off. He needs me for just one more season. I can't leave him now."], - ["I feel for you, Luke, you're going to have to learn what seems to be important or what really is important. What good is all your uncle's work if it's taken over by the Empire?... You know they're starting to nationalize commerce in the central systems...it won't be long before your uncle is merely a tenant, slaving for the greater glory of the Empire."], - ["It couldn't happen here. You said it yourself. The Empire won't bother with this rock."], - ["Things always change."], - ["I wish I was going... Are you going to be around long? "], - ["No, I'm leaving in the morning..."], - ["Then I guess I won't see you."], - ["Maybe someday... I'll keep a lookout."], - ["Well, I'll be at the Academy next season... after that who knows. I won't be drafted into the Imperial Starfleet that's for sure... Take care of yourself, you'll always be the best friend I've got."], - ["So long, Luke."], - ["Artoo-Detoo! It's you! It's you!"], - ["Someone was in the pod. The tracks go off in this direction. "], - ["Look, sir - droids."], - ["Wake up! Wake up!"], - ["We're doomed."], - ["Do you think they'll melt us down?"], - ["Don't shoot! Don't shoot! Will this never end?"], - ["Luke, tell Owen that if he gets a translator to be sure it speaks Bocce."], - ["It looks like we don't have much of a choice but I'll remind him."], - ["I have no need for a protocol droid."], - ["Sir - not in an environment such as this - that's why I've also been programmed for over thirty secondary functions that..."], - ["What I really need is a droid that understands the binary languages of moisture vaporators."], - ["Vaporators! Sir - My first job was programming binary load lifters... very similar to your vaporators. You could say..."], - ["Do you speak Bocce?"], - ["Of course I can, sir. It's like a second language for me... I'm as fluent in Bocce..."], - ["All right; shut up! I'll take this one."], - ["Shutting up, sir."], - ["Luke, take these two over to the garage, will you? I want you to have both of them cleaned up before dinner."], - ["But I was going into Toshi Station to pick up some power converters..."], - ["You can waste time with your friends when your chores are done. Now, come on, get to it!"], - ["All right, come on! And the red one, come on. Well, come on, Red, let's go."], - ["Uncle Owen..."], - ["Yeah?"], - ["This R2 unit has a bad motivator. Look!"], - ["Hey, what're you trying to push on us?"], - ["Excuse me, sir, but that R2 unit is in prime condition. 
A real bargain."], - ["Uncle Owen..."], - ["Yeah?"], - ["What about that one?"], - ["What about that blue one? We'll take that one."], - ["Yeah, take it away."], - ["Uh, I'm quite sure you'll be very pleased with that one, sir. He really is in first-class condition. I've worked with him before. Here he comes."], - ["Okay, let's go."], - ["Now, don't forget this! Why I should stick my neck out for you is quite beyond my capacity!"], - ["Thank the maker! This oil bath is going to feel so good. I've got such a bad case of dust contamination, I can barely move!"], - ["It just isn't fair. Oh, Biggs is right. I'm never gonna get out of here!"], - ["Is there anything I might do to help? "], - ["Well, not unless you can alter time, speed up the harvest, or teleport me off this rock!"], - ["I don't think so, sir. I'm only a droid and not very knowledgeable about such things. Not on this planet, anyways. As a matter of fact, I'm not even sure which planet I'm on."], - ["Well, if there's a bright center to the universe, you're on the planet that it's farthest from."], - ["I see, sir."], - ["Uh, you can call me Luke."], - ["I see, sir Luke."], - ["Just Luke."], - ["And I am See-Threepio, human-cyborg relations, and this is my counterpart, Artoo-Detoo."], - ["Hello."], - ["You got a lot of carbon scoring here. It looks like you boys have seen a lot of action."], - ["With all we've been through, sometimes I'm amazed we're in as good condition as we are, what with the Rebellion and all."], - ["You know of the Rebellion against the Empire?"], - ["That's how we came to be in your service, if you take my meaning, sir."], - ["Have you been in many battles?"], - ["Several, I think. Actually, there's not much to tell. I'm not much more than an interpreter, and not very good at telling stories. Well, not at making them interesting, anyways."], - ["Well, my little friend, you've got something jammed in here real good. Were you on a starcruiser or..."], - ["Help me, Obi-Wan Kenobi. You'remy only hope."], - ["What's this?"], - ["What is what?!? He asked you a question...What is that?"], - ["Help me, Obi-Wan Kenobi. You're my only hope. Help me, Obi-Wan Kenobi. You're my only hope."], - ["Oh, he says it's nothing, sir. Merely a malfunction. Old data. Pay it no mind."], - ["Who is she? She's beautiful."], - ["I'm afraid I'm not quite sure, sir."], - ["Help me, Obi-Wan Kenobi..."], - ["I think she was a passenger on our last voyage. A person of some importance, sir - I believe. Our captain was attached to..."], - ["Is there more to this recording?"], - ["Behave yourself, Artoo. You're going to get us in trouble. It's all right, you can trust him. He's our new master."], - ["He says he's the property of Obi-Wan Kenobi, a resident of these parts. And it's a private message for him. Quite frankly, sir, I don't know what he's talking about. Our last master was Captain Antilles, but with what we've been through, this little R2 unit has become a bit eccentric."], - ["Obi-Wan Kenobi? I wonder if he means old Ben Kenobi?"], - ["I beg your pardon, sir, but do you know what he's talking about?"], - ["Well, I don't know anyone named Obi-Wan, but old Ben lives out beyond the dune sea. He's kind of a strange old hermit."], - ["I wonder who she is. It sounds like she's in trouble. I'd better play back the whole thing."], - ["He says the restraining bolt has short circuited his recording system. He suggests that if you remove the bolt, he might be able to play back the entire recording."], - ["H'm? 
Oh, yeah, well, I guess you're too small to run away on me if I take this off! Okay."], - ["There you go."], - ["Well, wait a minute. Where'd she go? Bring her back! Play back the entire message."], - ["been playing. The one you're carrying inside your rusty innards! "], - ["Luke? Luke! Come to dinner!"], - ["All right, I'll be right there, Aunt Beru."], - ["I'm sorry, sir, but he appears to have picked up a slight flutter."], - ["Well, see what you can do with him. I'll be right back."], - ["Just you reconsider playing that message for him."], - ["No, I don't think he likes you at all."], - ["No, I don't like you either."], - ["You know, I think that R2 unit we bought might have been stolen."], - ["What makes you think that?"], - ["Well, I stumbled across a recording while I was cleaning him. He says he belongs to someone called Obi-Wan Kenobi."], - ["I thought he might have meant Ben. Do you know what he's talking about? Well, I wonder if he's related to Ben."], - ["That old man's just a crazy wizard. Tomorrow I want you to take that R2 unit into Anchorhead and have its memory flushed. That'll be the end of it. It belongs to us now."], - ["But what if this Obi-Wan comes looking for him?"], - ["He won't, I don't think he exists any more. He died about the same time as your father."], - ["He knew my father?"], - ["I told you to forget it. Your only concern is to prepare the new droids for tomorrow. In the morning I want them on the south ridge working out those condensers."], - ["Yes, sir. I think those new droids are going to work out fine. In fact, I, uh, was also thinking about our agreement about my staying on another season. And if these new droids do work out, I want to transmit my application to the Academy this year."], - ["You mean the next semester before harvest?"], - ["Sure, there're more than enough droids."], - ["Harvest is when I need you the most. Only one more season. This year we'll make enough on the harvest so I'll be able to hire some more hands. And then you can go to the Academy next year."], - ["You must understand I need you here, Luke."], - ["But it's a whole 'nother year."], - ["Look, it's only one more season."], - ["Yeah, that's what you said last year when Biggs and Tank left."], - ["Where are you going?"], - ["It looks like I'm going nowhere. I have to finish cleaning those droids."], - ["Owen, he can't stay here forever. Most of his friends have gone. It means so much to him."], - ["I'll make it up to him next year. I promise."], - ["Luke's just not a farmer, Owen. He has too much of his father in him."], - ["That's what I'm afraid of."], - ["What are you doing hiding there?"], - ["It wasn't my fault, sir. Please don't deactivate me. I told him not to go, but he's faulty, malfunctioning; kept babbling on about his mission."], - ["Oh, no!"], - ["That R2 unit has always been a problem. These astro-droids are getting quite out of hand. Even I can't understand their logic at times. "], - ["How could I be so stupid? He's nowhere in sight. Blast it!"], - ["Pardon me, sir, but couldn't we go after him?"], - ["It's too dangerous with all the Sand People around. We'll have to wait until morning."], - ["Luke, I'm shutting the power down for the night."], - ["All right, I'll be there in a few minutes. Boy, am I gonna get it."], - ["You know that little droid is going to cause me a lot of trouble."], - ["Oh, he excels at that, sir."], - ["Luke? Luke? Luke? 
Where could he be loafing now!"], - ["Have you seen Luke this morning?"], - ["He said he had some things to do before he started today, so he left early."], - ["Uh? Did he take those two new droids with him?"], - ["I think so."], - ["Well, he'd better have those units in the south range repaired bemidday or there'll be hell to pay!"], - ["How's that."], - ["Old Ben Kenobi lives out in this direction somewhere, but I don't see how that R2 unit could have come this far. We must have missed him. Uncle Owen isn't going to take this very well."], - ["Sir, would it help if you told him it was my fault."], - ["Sure. He needs you. He'd probably only deactivate you for a day or so..."], - ["Deactivate! Well, on the other hand if you hadn't removed his restraining bolt..."], - ["Wait, there's something dead ahead on the scanner. It looks like our droid... hit the accelerator."], - ["Hey, whoa, just where do you think you're going?"], - ["Master Luke here is your rightful owner. We'll have no more of this Obi-Wan Kenobi jibberish... and don't talk to me of your mission, either. You're fortunate he doesn't blast you into a million pieces right here."], - ["Well, come on. It's getting late. I only hope we can get back before Uncle Owen really blows up."], - ["If you don't mind my saying so, sir, I think you should deactivate the little fugitive until you've gotten him back to your workshop."], - ["No, he's not going to try anything."], - ["What's wrong with him now?"], - ["Oh my... sir, he says there are several creatures approaching from the southeast."], - ["Sand People! Or worse! Come on, let's have a look. Come on."], - ["There are two Banthas down there but I don't see any... wait a second, they're Sand People all right. I can see one of them now."], - ["Hello there! Come here my little friend. Don't be afraid."], - ["Don't worry, he'll be all right."], - ["What happened?"], - ["Rest easy, son, you've had a busy day. You're fortunate you're still in one piece."], - ["Ben? Ben Kenobi! Boy, am I glad to see you! "], - ["The Jundland Wastes are not to be traveled lightly. Tell me, young Luke, what brings you out this far?"], - ["Oh, this little droid! I think he's searching for his former master... I've never seen such devotion in a droid before... there seems tobe no stopping him. He claims to be the property of an Obi-Wan Kenobi. Is he a relative of yours? Do you know who he's talking about?"], - ["Obi-Wan Kenobi... Obi-Wan? Now thats a name I haven't heard in a long time... a long time."], - ["I think my uncle knew him. He said he was dead."], - ["Oh, he's not dead, no... not yet."], - ["You know him!"], - ["Well of course, of course I know him. He's me! I haven't gone by the name Obi-Wan since oh, before you were born."], - ["Then the droid does belong to you."], - ["Don't seem to remember ever owning a droid. Very interesting... "], - ["I think we better get indoors. The Sand People are easily startled but they will soon be back and in greater numbers."], - ["... Threepio!"], - ["Where am I? I must have taken a bad step..."], - ["Can you stand? We've got to get out of here before the Sand People return."], - ["I don't think I can make it. You go on, Master Luke. There's no sense in you risking yourself on my account. I'm done for."], - ["No, you're not. What kind of talk is that?"], - ["Quickly, son... they're on the move."], - ["No, my father didn't fight in the wars. He was a navigator on a spice freighter."], - ["That's what your uncle told you. He didn't hold with your father's ideals. 
Thought he should have stayed here and not gotten involved."], - ["You fought in the Clone Wars?"], - ["Yes, I was once a Jedi Knight the same as your father."], - ["I wish I'd known him."], - ["He was the best star-pilot in the galaxy, and a cunning warrior. I understand you've become quite a good pilot yourself. And he was a good friend. Which reminds me..."], - ["I have something here for you. Your father wanted you to have this when you were old enough, but your uncle wouldn't allow it. He feared you might follow old Obi-Wan on some damned-fool idealistic crusade like your father did. "], - ["Sir, if you'll not be needing me, I'll close down for awhile."], - ["Sure, go ahead."], - ["What is it?"], - ["Your fathers lightsaber. This is the weapon of a Jedi Knight. Not as clumsy or as random as a blaster."], - ["An elegant weapon for a morecivilized time. For over a thousand generations the Jedi Knights were the guardians of peace and justice in the Old Republic. Before the dark times, before the Empire."], - ["How did my father die?"], - ["A young Jedi named Darth Vader, who was a pupil of mine until he turned to evil, helped the Empire hunt down and destroy the Jedi Knights. He betrayed and murdered your father. Now the Jedi are all but extinct. Vader was seduced by the dark side of the Force."], - ["The Force?"], - ["Well, the Force is what gives the Jedi his power. It's an energy field created by all living things. It surrounds us and penetrates us. It binds the galaxy together."], - ["Now, let's see if we can't figure out what you are, my little friend. And where you come from."], - ["I saw part of the message he was..."], - ["I seem to have found it."], - ["General Kenobi, years ago you served my father in the Clone Wars. Now he begs you to help him in his struggle against the Empire. I regret that I am unable to present my father's request to you in person, but my ship has fallen under attack and I'm afraid my mission to bring you to Alderaan has failed. I have placed information vital to the survival of the Rebellion into the memory systems of this R2 unit. My father will know how to retrieve it. You must see this droid safely delivered to him on Alderaan. This is our most desperate hour. Help me, Obi-Wan Kenobi, you're my only hope."], - ["You must learn the ways of the Force if you're to come with me to Alderaan."], - ["Alderaan? I'm not going to Alderaan. I've got to go home. It's late, I'm in for it as it is."], - ["I need your help, Luke. I'm getting too old for this sort of thing.She needs your help."], - ["I can't get involved! I've got work to do! It's not that I like the Empire. I hate it! But there's nothing I can do about it right now. It's such a long way from here."], - ["That's your uncle talking."], - ["Oh, God, my uncle. How am I ever going to explain this?"], - ["Learn about the Force, Luke."], - ["Look, I can take you as far as Anchorhead. You can get a transport there to Mos Eisley or wherever you're going."], - ["You must do what you feel is right, of course."], - ["Until this battle station is fully operational we are vulnerable. The Rebel Alliance is too well equipped. They're more dangerous than you realize."], - ["Dangerous to your starfleet, Commander; not to this battle station!"], - ["The Rebellion will continue to gain support in the Imperial Senate as long as...."], - ["The Imperial Senate will no longer be of any concern to us. I've just received word that the Emperor has dissolved the council permanently. 
The last remnants of the Old Republic have been swept away."], - ["That's impossible! How will the Emperor maintain control without the bureaucracy?"], - ["The regional governors now have direct control over territories. Fear will keep the local systems in line. Fear of this battle station."], - ["And what of the Rebellion? If the Rebels have obtained a complete technical readout of this station, it is possible, however unlikely, that they might find a weakness and exploit it."], - ["The plans you refer to will soon be back in our hands."], - ["Any attack made by the Rebels against this station would be a useless gesture, no matter what technical data they've obtained. This station is now the ultimate power in the universe. I suggest we use it!"], - ["Don't be too proud of this technological terror you've constructed. The ability to destroy a planet is insignificant next to the power of the Force."], - ["Don't try to frighten us with your sorcerer's ways, Lord Vader. Your sad devotion to that ancient religion has not helped you conjure up the stolen data tapes, or given you clairvoyance enough to find the Rebel's hidden fort..."], - ["I find your lack of faith disturbing."], - ["Enough of this! Vader, release him!"], - ["As you wish."], - ["This bickering is pointless. Lord Vader will provide us with the location of the Rebel fortress by the time this station is operational. We will then crush the Rebellion with one swift stroke."], - ["It looks like Sand People did this, all right. Look, here are gaffi sticks, bantha tracks. It's just... I never heard of them hitting anything this big before."], - ["They didn't. But we are meant to think they did. These tracks are side by side. Sand People always ride single file to hide there numbers."], - ["These are the same Jawas that sold us Artoo and Threepio."], - ["And these blast points, too accurate for Sand People. Only Imperial stormtroopers are so precise."], - ["Why would Imperial troops want to slaughter Jawas?"], - ["If they traced the robots here, they may have learned who they sold them to. And that would lead them back home!"], - ["Wait, Luke! It's too dangerous."], - ["Uncle Owen! Aunt Beru! Uncle Owen!"], - ["And, now Your Highness, we will discuss the location of your hidden Rebel base."], - ["There's nothing you could have done, Luke, had you been there. You'd have been killed, too, and the droids would now be in the hands of the Empire."], - ["I want to come with you to Alderaan. There's nothing here for me now. I want to learn the ways of the Force and become a Jedi like my father."], - ["Mos Eisley Spaceport. You will never find a more wretched hive of scum and villainy. We must be cautious."], - ["How long have you had these droids?"], - ["About three or four seasons."], - ["They're for sale if you want them."], - ["Let me see your identification."], - ["You don't need to see his identification."], - ["We don't need to see his identification."], - ["looking for."], - ["These are not the droids we're looking for."], - ["He can go about his business."], - ["You can go about your business."], - ["Move along."], - ["Move along. Move along."], - ["I can't abide these Jawas. Disgusting creatures."], - ["Go on, go on. I can't understand how we got by those troopers. I thought we were dead."], - ["The Force can have a strong influence on the weak-minded. 
You will find it a powerful ally."], - ["Do you really think we're going to find a pilot here that'll take us to Alderaan?"], - ["Well, most of the best freighter pilots can be found here. Only watch your step. This place can be a little rough."], - ["I'm ready for anything."], - ["Come along, Artoo."], - ["We don't serve their kind here!"], - ["What?"], - ["Your droids. They'll have to wait outside. We don't want them here."], - ["Listen, why don't you wait out by the speeder. We don't want any trouble."], - ["I heartily agree with you sir."], - ["Negola dewaghi wooldugger?!?"], - ["He doesn't like you."], - ["I'm sorry."], - ["I don't like you either"], - ["Don't insult us. You just watch yourself. We're wanted men. I have the death sentence on twelve systems."], - ["I'll be careful than."], - ["You'll be dead."], - ["This little one isn't worth the effort. Come let me buy you something..."], - ["No blasters! No blaster!"], - ["This is Chewbacca. He's first-mate on a ship that might suit our needs."], - ["I don't like the look of this."], - ["Han Solo. I'm captain of the Millennium Falcon. Chewie here tells me you're looking for passage to the Alderaan system."], - ["Yes, indeed. If it's a fast ship."], - ["Fast ship? You've never heard of the Millennium Falcon?"], - ["Should I have?"], - ["It's the ship that made the Kessel run in less than twelve parsecs!"], - ["I've outrun Imperial starships, not the local bulk-cruisers, mind you. I'm talking about the big Corellian ships now. She's fast enough for you, old man. What's the cargo?"], - ["Only passengers. Myself, the boy, two droids, and no questions asked."], - ["What is it? Some kind of local trouble?"], - ["Let's just say we'd like to avoid any Imperial entanglements."], - ["Well, that's the real trick, isn't it? And it's going to cost you something extra. Ten thousand in advance."], - ["Ten thousand? We could almost buy our own ship for that!"], - ["But who's going to fly it, kid! You?"], - ["You bet I could. I'm not such a bad pilot myself! We don't have to sit here and listen..."], - ["We haven't that much with us. But we could pay you two thousand now, plus fifteen when we reach Alderaan."], - ["Seventeen, huh!"], - ["Okay. You guys got yourself a ship. We'll leave as soon as you're ready. Docking bay Ninety-four."], - ["Ninety-four."], - ["Looks like somebody's beginning to take an interest in your handiwork."], - ["All right, we'll check it out."], - ["Seventeen thousand! Those guys must really be desperate. This could really save my neck. Get back to the ship and get her ready."], - ["You'll have to sell your speeder."], - ["That's okay. I'm never coming back to this planet again."], - ["Going somewhere, Solo?"], - ["Yes, Greedo. As a matter of fact, I was just going to see your boss. Tell Jabba that I've got his money."], - ["It's too late. You should have paid him when you had the chance. Jabba's put a price on your head, so large that every bounty hunter in the galaxy will be looking for you. I'm lucky I found you first."], - ["Yeah, but this time I got the money."], - ["If you give it to me, I might forget I found you."], - ["I don't have it with me. Tell Jabba..."], - ["Jabba's through with you. He has no time for smugglers who drop their shipments at the first sign of an Imperial cruiser."], - ["Even I get boarded sometimes. Do you think I had a choice?"], - ["You can tell that to Jabba. 
He may only take your ship."], - ["Over my dead body."], - ["That's the idea I've been looking forward to killing you for a long time."], - ["Yes, I'll bet you have."], - ["Sorry about the mess."], - ["Her resistance to the mind probe is considerable. It will be some time before we can extract any information from her."], - ["The final check-out is complete. All systems are operational. What course shall we set?"], - ["Perhaps she would respond to an alternative form of persuasion."], - ["What do you mean?"], - ["I think it is time we demonstrate the full power of this station.Set your course for Princess Leia's home planet of Alderaan."], - ["With pleasure."], - ["Lock the door, Artoo."], - ["All right, check that side of the street. It's secure. Move on to the next door."], - ["I would much rather have gone with Master Luke than stay here with you. I don't know what all this trouble is about, but I'm sure it must be your fault."], - ["You watch your language!"], - ["He says it's the best he can do. Since the XP-38 came out, they "], - ["It will be enough."], - ["If the ship's as fast as he's boasting, we ought to do well."], - ["Come on out, Solo!"], - ["I've been waiting for you, Jabba."], - ["I expected you would be."], - ["I'm not the type to run."], - ["Han, my boy, there are times when you disappoint me... why haven't you paid me? And why did you have to fry poor Greedo like that... after all we've been through together."], - ["You sent Greedo to blast me."], - ["Han, why you're the best smuggler in the business. You're too valuable to fry. He was only relaying my concern at your delays. He wasn't going to blast you."], - ["I think he thought he was. Next time don't send one of those twerps. If you've got something to say to me, come see me yourself."], - ["Han, Han! If only you hadn't had to dump that shipment of spice... you understand I just can't make an exception. Where would I be if every pilot who smuggled for me dumped their shipment at the first sign of an Imperial starship? It's not good business."], - ["You know, even I get boarded sometimes, Jabba. I had no choice, but I've gota charter now and I can pay you back, plus a little extra. I just need some more time."], - ["Put your blasters away. Han, my boy, I'm only doing this because you're the best and I need you. So, for an extra, say... twenty percent I'll give you a little more time... but this is it. If you disappoint me again, I'll put a price on your head so large you won't be able to go near a civilized system for the rest of your short life."], - ["Jabba, I'll pay you because it's my pleasure."], - ["What a piece of junk."], - ["She'll make point five beyond the speed of light. She may not look like much, but she's got it where it counts, kid. I've added some special modifications myself."], - ["We're a little rushed, so if you'll hurry aboard we'll get out of here."], - ["Hello, sir."], - ["Which way?"], - ["All right, men. Load your weapons!"], - ["Stop that ship!"], - ["Blast 'em!"], - ["Chewie, get us out of here!"], - ["Oh, my. I'd forgotten how much I hate space travel."], - ["It looks like an Imperial cruiser. Our passengers must be hotter than I thought. Try and hold them off. Angle the deflector shield while I make the calculations for the jump to light speed."], - ["Stay sharp! There are two more coming in; they're going to try to cut us off."], - ["Why don't you outrun them? I thought you said this thing was fast."], - ["Watch your mouth, kid, or you're going to find yourself floating home. 
We'll be safe enough once we make the jump to hyperspace. Besides, I know a few maneuvers. We'll lose them!"], - ["Here's where the fun begins!"], - ["How long before you can make the jump to light speed?"], - ["It'll take a few moments to get the coordinates from the navi-computer."], - ["Are you kidding? At the rate they're gaining..."], - ["Traveling through hyperspace isn't like dusting crops, boy! Without precise calculations we could fly right through a star or bounce too close to a supernova and that'd end your trip real quick, wouldn't it?"], - ["What's that flashing?"], - ["We're losing our deflector shield. Go strap yourself in, I'm going to make the jump to light speed."], - ["We've entered the Alderaan system."], - ["Governor Tarkin, I should have expected to find you holding Vader's leash. I recognized your foul stench when I was brought on board."], - ["Charming to the last. You don't know how hard I found it signing the order to terminate your life!"], - ["to take the responsibility yourself!"], - ["Princess Leia, before your execution I would like you to be my guest at a ceremony that will make this battle station operational. No star system will dare oppose the Emperor now."], - ["The more you tighten your grip, Tarkin, the more star systems will slip through your fingers."], - ["Not after we demonstrate the power of this station. In a way, you have determined the choice of the planet that'll be destroyed first. Since you are reluctant to provide us with the location of the Rebel base, I have chosen to test this station's destructive power... on your home planet of Alderaan."], - ["No! Alderaan is peaceful. We have no weapons. You can't possibly..."], - ["You would prefer another target? A military target? Then name the system!"], - ["I grow tired of asking this. So it'll be the last time. Where is the Rebel base?"], - ["Dantooine."], - ["They're on Dantooine."], - ["There. You see Lord Vader, she can be reasonable. Continue with the operation. You may fire when ready."], - ["What?"], - ["You're far too trusting. Dantooine is too remote to make an effective demonstration. But don't worry. We will deal with your Rebel friends soon enough. "], - ["No!"], - ["Commence primary ignition."], - ["Are you all right? What's wrong?"], - ["I felt a great disturbance in the Force... as if millions of voices suddenly cried out in terror and were suddenly silenced. I fear something terrible has happened."], - ["You'd better get on with your exercises."], - ["Well, you can forget your troubles with those Imperial slugs. I told you I'd outrun 'em."], - ["Don't everyone thank me at once."], - ["Anyway, we should be at Alderaan about oh-two-hundred hours."], - ["Now be careful, Artoo."], - ["He made a fair move. Screaming about it won't help you."], - ["Let him have it. It's not wise to upset a Wookiee."], - ["But sir, nobody worries about upsetting a droid."], - ["That's 'cause droids don't pull people's arms out of their socket when they lose. Wookiees are known to do that."], - ["I see your point, sir. I suggest a new strategy, Artoo. Let the Wookiee win."], - ["Remember, a Jedi can feel the Force flowing through him."], - ["You mean it controls your actions?"], - ["Partially. But it also obeys your commands."], - ["Hokey religions and ancient weapons are no match for a good blaster at your side, kid."], - ["You don't believe in the Force, do you?"], - ["Kid, I've flown from one side of this galaxy to the other. 
I've seen a lot of strange stuff, but I've never seen anything to make me believe there's one all-powerful force controlling everything. There's no mystical energy field that controls my destiny."], - ["It's all a lot of simple tricks and nonsense."], - ["I suggest you try it again, Luke."], - ["This time, let go your conscious self and act on instinct."], - ["With the blast shield down, I can't even see. How am I supposed to fight?"], - ["Your eyes can deceive you. Don't trust them."], - ["Stretch out with your feelings."], - ["You see, you can do it."], - ["I call it luck. "], - ["In my experience, there's no such thing as luck."], - ["Look, going good against remotes is one thing. Going good against the living? That's something else."], - ["Looks like we're coming up on Alderaan."], - ["You know, I did feel something. I could almost see the remote."], - ["That's good. You have taken your first step into a larger world."], - ["Yes."], - ["Our scout ships have reached Dantooine. They found the remains of a Rebel base, but they estimate that it has been deserted for some time. They are now conducting an extensive search of the surrounding systems."], - ["She lied! She lied to us!"], - ["I told you she would never consciously betray the Rebellion."], - ["Terminate her... immediately!"], - ["Stand by, Chewie, here we go. Cut in the sublight engines."], - ["What the...? Aw, we've come out of hyperspace into a meteor shower. Some kind of asteroid collision. It's not on any of the charts."], - ["What's going on?"], - ["Our position is correct, except... no, Alderaan!"], - ["What do you mean? Where is it?"], - ["Thats what I'm trying to tell you, kid. It ain't there. It's been totally blown away."], - ["What? How?"], - ["Destroyed... by the Empire!"], - ["The entire starfleet couldn't destroy the whole planet. It'd take a thousand ships with more fire power than I've..."], - ["There's another ship coming in."], - ["Maybe they know what happened."], - ["It's an Imperial fighter."], - ["It followed us!"], - ["No. It's a short range fighter."], - ["There aren't any bases around here. Where did it come from?"], - ["It sure is leaving in a big hurry. If they identify us, we're in big trouble."], - ["Not if I can help it. Chewie... jam it's transmissions."], - ["It'd be as well to let it go. It's too far out of range."], - ["Not for long..."], - ["A fighter that size couldn't get this deep into space on its own."], - ["Then he must have gotten lost, been part of a convoy or something..."], - ["Well, he ain't going to be around long enough to tell anyone about us."], - ["Look at him. He's heading for that small moon."], - ["I think I can get him before he gets there... he's almost in range."], - ["That's no moon! It's a space station."], - ["It's too big to be a space station."], - ["I have a very bad feeling about this."], - ["Turn the ship around!"], - ["Yeah, I think your right. Full reverse! Chewie, lock in the auxiliary power."], - ["Why are we still moving towards it?"], - ["We're caught in a tractor beam! It's pulling us in!"], - ["But there's gotta be something you can do!"], - ["There's nothin' I can do about it, kid. I'm in full power. I'm going to have to shut down. But they're not going to get me without a fight!"], - ["You can't win. But there are alternatives to fighting."], - ["Clear Bay twenty-three-seven. We are opening the magnetic field."], - ["To your stations!"], - ["Come with me."], - ["Close all outboard shields! 
Close all outboard shields!"], - ["Yes."], - ["We've captured a freighter entering the remains of the Alderaan system. It's markings match those of a ship that blasted its way out of Mos Eisley."], - ["They must be trying to return the stolen plans to the princess. She may yet be of some use to us."], - ["Unlock one-five-seven and nine. Release charges."], - ["There's no one on board, sir. According to the log, the crew abandoned ship right after takeoff. It must be a decoy, sir. Several of the escape pods have been jettisoned."], - ["Did you find any droids?"], - ["No, sir. If there were any on board, they must also have jettisoned."], - ["Send a scanning crew on board. I want every part of this ship checked."], - ["Yes, sir."], - ["I sense something... a presence I haven't felt since..."], - ["Get me a scanning crew in here on the double. I want every part of this ship checked!"], - ["Boy, it's lucky you had these compartments."], - ["I use them for smuggling. I never thought I'd be smuggling myself in them. This is ridiculous. Even if I could take off, I'd never get past the tractor beam."], - ["Leave that to me!"], - ["Damn fool. I knew that you were going to say that!"], - ["Who's the more foolish... the fool or the fool who follows him?"], - ["The ship's all yours. If the scanners pick up anything, report it immediately. All right, let's go."], - ["Hey down there, could you give us a hand with this?"], - ["TX-four-one-two. Why aren't you at your post? TX-four-one-two, do you copy? "], - ["Take over. We've got a bad transmitter. I'll see what I can do."], - ["You know, between his howling and your blasting everything in sight, it's a wonder the whole station doesn't know we're here."], - ["Bring them on! I prefer a straight fight to all this sneaking around."], - ["We found the computer outlet, sir."], - ["Plug in. He should be able to interpret the entire Imperial computer network."], - ["He says he's found the main control to the power beam that's holding the ship here. He'll try to make the precise location appear on the monitor."], - ["The tractor beam is coupled to the main reactor in seven locations. A power loss at one of the terminals will allow the ship to leave."], - ["I don't think you boys can help. I must go alone."], - ["Whatever you say. I've done more than I bargained for on this trip already."], - ["I want to go with you."], - ["Be patient, Luke. Stay and watch over the droids."], - ["But he can..."], - ["They must be delivered safely or other star systems will suffer the same fate as Alderaan. Your destiny lies along a different path from mine. The Force will be with you... always!"], - ["Boy you said it, Chewie."], - ["Where did you dig up that old fossil?"], - ["Ben is a great man."], - ["Yeah, great at getting us into trouble."], - ["I didn't hear you give any ideas..."], - ["Well, anything would be better than just hanging around waiting for them to pick us up..."], - ["Who do you think..."], - ["What is it?"], - ["I'm afraid I'm not quite sure, sir. He says "], - ["Well, who... who has he found?"], - ["Princess Leia."], - ["The princess? She's here?"], - ["Princess?"], - ["Where... where is she?"], - ["Princess? What's going on?"], - ["Level five. Detention block AA-twenty-three. I'm afraid she's scheduled to be terminated."], - ["Oh, no! We've got to do something."], - ["What are you talking about?"], - ["The droid belongs to her. She's the one in the message. We've got to help her."], - ["Now, look, don't get any funny ideas. 
The old man wants us to wait right here."], - ["But he didn't know she was here. Look, will you just find a way back into the detention block?"], - ["I'm not going anywhere."], - ["They're going to execute her. Look, a few minutes ago you said you didn't want to just wait here to be captured. Now all you want to do is stay. "], - ["Marching into the detention area is not what I had in mind."], - ["But they're going to kill her!"], - ["Better her than me..."], - ["She's rich."], - ["Rich?"], - ["Yes. Rich, powerful! Listen, if you were to rescue her, the reward would be..."], - ["What?"], - ["Well more wealth that you can imagine."], - ["I don't know, I can imagine quite a bit!"], - ["You'll get it!"], - ["I better!"], - ["You will..."], - ["All right, kid. But you'd better be right about this!"], - ["All right."], - ["What's your plan?"], - ["Uh... Threepio, hand me those binders there will you?"], - ["Okay. Now, I'm going to put these on you."], - ["Okay. Han, you put these on."], - ["Don't worry, Chewie. I think I know what he has in mind."], - ["Master Luke, sir! Pardon me for asking... but, ah... what should Artoo and I do if we're discovered here?"], - ["Lock the door!"], - ["And hope they don't have blasters."], - ["That isn't very reassuring."], - ["I can't see a thing in this helmet."], - ["This is not going to work."], - ["Why didn't you say so before?"], - ["I did say so before!"], - ["Where are you taking this... thing?"], - ["Prisoner transfer from Block one-one-three-eight."], - ["I wasn't notified. I'll have to clear it."], - ["Look out! He's loose!"], - ["He's going to pull us all apart."], - ["Go get him!"], - ["We've got to find out which cell this princess of yours is in. Here it is... cell twenty-one-eight-seven. You go get her. I'll hold them here."], - ["Everything is under control. Situation normal."], - ["What happened?"], - ["Uh... had a slight weapons malfunction. But, uh, everything's perfectly all right now. We're fine. We're all fine here, now, thank you. How are you?"], - ["We're sending a squad up."], - ["Uh, uh, negative, negative. We had a reactor leak here now. Give us a few minutes to lock it down. Large leak... very dangerous."], - ["Who is this? What's your operating number?"], - ["Boring conversation anyway.Luke! We're going to have company!"], - ["Aren't you a little short for a stormtrooper?"], - ["What? Oh... the uniform. I'm Luke Skywalker. I'm here to rescue you. "], - ["You're who?"], - ["I'm here to rescue you. I've got your R2 unit. I'm here with Ben Kenobi."], - ["Ben Kenobi is here! Where is he?"], - ["Come on!"], - ["He is here..."], - ["Obi-Wan Kenobi! What makes you think so?"], - ["A tremor in the Force. The last time I felt it was in the presence of my old master."], - ["Surely he must be dead by now."], - ["Don't underestimate the Force."], - ["The Jedi are extinct, their fire has gone out of the universe. You, my friend, are all that's left of their religion."], - ["Yes."], - ["Governor Tarkin, we have an emergency alert in detention block AA-twenty-three."], - ["The princess! Put all sections on alert!"], - ["Obi-Wan is here. The Force is with him."], - ["If you're right, he must not be allowed to escape."], - ["Escape may not his plan. I must face him alone."], - ["Chewie!"], - ["Get behind me! Get behind me!"], - ["Can't get out that way."], - ["Looks like you managed to cut off our only escape route."], - ["Maybe you'd like it back in your cell, Your Highness."], - ["See-Threepio! 
See-Threepio!"], - ["Yes sir?"], - ["We've been cut off! Are there any other ways out of the cell bay?...What was that? I didn't copy!"], - ["I said, all systems have been alerted to your presence, sir. The main entrance seems to be the only way out; all other information on your level is restricted."], - ["Open up in there!"], - ["Oh, no!"], - ["There isn't any other way out."], - ["I can't hold them off forever! Now what?"], - ["This is some rescue. When you came in here, didn't you have a plan for getting out?"], - ["He's the brains, sweetheart."], - ["Well, I didn't..."], - ["What the hell are you doing?"], - ["Somebody has to save our skins. Into the garbage chute, wise guy."], - ["Get in there you big furry oaf! I don't care what you smell! Get in there and don't worry about it."], - ["Wonderful girl! Either I'm going to kill her or I'm beginning to like her. Get in there!"], - ["Oh! The garbage chute was a really wonderful idea. What an incredible smell you've discovered! Let's get out of here! Get away from there..."], - ["No! wait!"], - ["Will you forget it? I already tried it. It's magnetically sealed!"], - ["Put that thing away! You're going to get us all killed."], - ["Absolutely, Your Worship. Look, I had everything under control until you led us down here. You know, it's not going to take them long to figure out what happened to us."], - ["It could be worse..."], - ["It's worse."], - ["There's something alive in here!"], - ["That's your imagination."], - ["Something just moves past my leg! Look! Did you see that?"], - ["What?"], - ["Help!"], - ["Luke! Luke! Luke!"], - ["Luke!"], - ["Luke, Luke, grab a hold of this."], - ["Blast it, will you! My gun's jammed."], - ["Where?"], - ["Anywhere! Oh!!"], - ["Luke! Luke!"], - ["Grab him!"], - ["What happened?"], - ["I don't know, it just let go of me and disappeared..."], - ["I've got a very bad feeling about this."], - ["The walls are moving!"], - ["Don't just stand there. Try and brace it with something."], - ["Wait a minute!"], - ["Threepio! Come in Threepio! Threepio! Where could he be?"], - ["Take over!See to him! Look there!"], - ["They're madmen! They're heading for the prison level. If you hurry, you might catch them."], - ["Follow me! You stand guard."], - ["Come on!"], - ["Oh! All this excitement has overrun the circuits of my counterpart here. If you don't mind, I'd like to take him down to maintenance."], - ["All right."], - ["Threepio! Come in, Threepio! Threepio!"], - ["Get to the top!"], - ["I can't "], - ["Where could he be? Threepio! Threepio, will you come in?"], - ["They aren't here! Something must have happened to them. See if they've been captured."], - ["Hurry!"], - ["One thing's for sure. We're all going to be a lot thinner!Get on top of it!"], - ["I'm trying!"], - ["Thank goodness, they haven't found them! Where could they be?"], - ["Use the comlink? Oh, my! I forgot I turned it off!"], - ["Are you there, sir?"], - ["Threepio!"], - ["We've had some problems..."], - ["Shut down all the garbage mashers on the detention level, will you? Do you copy?"], - ["Shut down all the garbage mashers on the detention level."], - ["Shut down all the garbage mashers on the detention level."], - ["No. Shut them all down! Hurry!"], - ["Listen to them! They're dying, Artoo! Curse my metal body! I wasn't fast enough. It's all my fault! My poor master!"], - ["Threepio, we're all right!"], - ["We're all right. You did great."], - ["Hey... hey, open the pressure maintenance hatch on unit number... 
where are we?"], - ["Three-two-six-eight-two-seven."], - ["If we can just avoid any more female advice, we ought to be able to get out of here."], - ["Well, let's get moving!"], - ["Where are you going?"], - ["No, wait. They'll hear!"], - ["Come here, you big coward!"], - ["Chewie! Come here!"], - ["Listen. I don't know who you are, or where you came from, but from now on, you do as I tell you. Okay?"], - ["Look, Your Worshipfulness, let's get one thing straight! I takeorders from one person! Me!"], - ["It's a wonder you're still alive.Will somebody get this big walking carpet out of my way?"], - ["No reward is worth this."], - ["Secure this area until the alert is canceled."], - ["Give me regular reports."], - ["Do you know what's going on?"], - ["Maybe it's another drill."], - ["What was that?"], - ["Oh, it's nothing. Don't worry about it."], - ["There she is."], - ["See-Threepio, do you copy?"], - ["For the moment. Uh, we're in the main hangar across from the ship."], - ["We're right above you. Stand by."], - ["You came in that thing? You're braver that I thought."], - ["Nice! Come on!"], - ["It's them! Blast them!"], - ["Get back to the ship!"], - ["Where are you going? Come back!"], - ["He certainly has courage."], - ["What good will it do us if he gets himself killed? Come on!"], - ["I think we took a wrong turn."], - ["There's no lock!"], - ["That oughta hold it for a while."], - ["Quick, we've got to get across. Find the control that extends the bridge."], - ["Oh, I think I just blasted it."], - ["They're coming through!"], - ["Here, hold this."], - ["Here they come!"], - ["For luck!"], - ["Where could they be?"], - ["Close the blast doors!"], - ["Open the blast doors! Open the blast doors!"], - ["I've been waiting for you, Obi-Wan. We meet again, at last. The circle is now complete."], - ["When I left you, I was but the learner; now I am the master."], - ["Only a master of evil, Darth."], - ["Your powers are weak, old man."], - ["You can't win, Darth. If you strike me down, I shall become more powerful than you can possibly imagine."], - ["Didn't we just leave this party?"], - ["What kept you?"], - ["We ran into some old friends."], - ["Is the ship all right?"], - ["Seems okay, if we can get to it.Just hope the old man got the tractor beam out of commission."], - ["Look!"], - ["Come on, Artoo, we're going!"], - ["Now's our chance! Go!"], - ["No!"], - ["Come on!"], - ["Come on! Luke, its too late!"], - ["Blast the door! Kid!"], - ["Run, Luke! Run!"], - ["I hope the old man got that tractor beam out if commission, or this is going to be a real short trip. Okay, hit it!"], - ["We're coming up on the sentry ships. Hold 'em off! Angle the deflector shields while I charge up the main guns!"], - ["I can't believe he's gone."], - ["There wasn't anything you could have done."], - ["Come on, buddy, we're not out of this yet!"], - ["You in, kid? Okay, stay sharp!"], - ["Here they come!"], - ["They're coming in too fast!"], - ["Oooh!"], - ["We've lost lateral controls."], - ["Don't worry, she'll hold together."], - ["You hear me, baby? Hold together!"], - ["Got him! I got him!"], - ["Great kid! Don't get cocky."], - ["There are still two more of them out there!"], - ["That's it! We did it!"], - ["We did it!"], - ["Help! I think I'm melting!This is all your fault."], - ["Are they away?"], - ["They have just made the jump into hyperspace."], - ["You're sure the homing beacon is secure aboard their ship? I'm taking an awful risk, Vader. 
This had better work."], - ["Not a bad bit of rescuing, huh? You know, sometimes I even amaze myself."], - ["That doesn't sound too hard. Besides, they let us go. It's the only explanation for the ease of our escape."], - ["Easy... you call that easy?"], - ["Their tracking us!"], - ["Not this ship, sister."], - ["At least the information in Artoo is still intact."], - ["What's so important? What's he carrying?"], - ["The technical readouts of that battle station. I only hope that when the data is analyzed, a weakness can be found. It's not over yet!"], - ["It is for me, sister! Look, I ain't in this for your revolution, and I'm not in it for you, Princess. I expect to be well paid. I'm in it for the money!"], - ["You needn't worry about your reward. If money is all that you love, then that's what you'll receive!"], - ["Your friend is quite a mercenary. I wonder if he really cares about anything... or anyone."], - ["I care!"], - ["So... what do you think of her, Han?"], - ["I'm trying not to, kid!"], - ["Good..."], - ["Still, she's got a lot of spirit. I don't know, what do you think? Do you think a princess and a guy like me..."], - ["No!"], - ["You're safe! We had feared the worst."], - ["When we heard about Alderaan, we were afraid that you were... lost along with your father."], - ["We don't have time for our sorrows, commander. The battle station has surely tracked us here.It's the only explanation for the ease of our escape. You must use the information in this R2 unit to plan the attack. It is our only hope."], - ["Yes."], - ["We are approaching the planet Yavin. The Rebel base is on a moon on the far side. We are preparing to orbit the planet."], - ["The battle station is heavily shielded and carries a firepower greater than half the star fleet.Its defenses are designed around a direct large-scale assault. A small one-man fighter should be able to penetrate the outer defense."], - ["Pardon me for asking, sir, but what good are snub fighters going to be against that?"], - ["Well, the Empire doesn't consider a small one-man fighter to be any threat, or they'd have a tighter defense. An analysis of the plans provided by Princess Leia has demonstrated a weakness in the battle station."], - ["The approach will not be easy. You are required to maneuver straight down this trench and skim the surface to this point. The target area is only two meters wide. It's a small thermal exhaust port, right below the main port. The shaft leads directly to the reactor system. A precise hit will start a chain reaction which should destroy the station."], - ["Only a precise hit will set up a chain reaction. The shaft is ray-shielded, so you'll have to use proton torpedoes."], - ["That's impossible, even for a computer."], - ["It's not impossible. I used to bullseye womp rats in my T-sixteen back home. They're not much bigger than two meters."], - ["Man your ships! And may the Force be with you!"], - ["Orbiting the planet at maximum velocity. The moon with the Rebel base will be in range in thirty minutes."], - ["This will be a day long remembered. It has seen the end of Kenobi and it will soon see the end of the Rebellion."], - ["All flight troops, man your stations. All flight troops, man your stations."], - ["So... you got your reward and you're just leaving then?"], - ["That's right, yeah! I got some old debts I've got to pay off with this stuff. Even if I didn't, you don't think I'd be fool enough to stick around here, do you? Why don't you come with us? You're pretty good in a fight. 
I could use you."], - ["Come on! Why don't you take a look around? You know what's about to happen, what they're up against. They could use a good pilot like you. You're turning your back on them."], - ["What good's a reward if you ain't around to use it? Besides, attacking that battle station ain'tmy idea of courage. It's more like suicide."], - ["All right. Well, take care of yourself, Han... guess that's what you're best at, isn't it?"], - ["Hey, Luke... may the Force be with you!"], - ["What're you lookin' at? I know what I'm doing."], - ["What's wrong?"], - ["Oh, it's Han! I don't know, I really thought he'd change his mind. "], - ["He's got to follow his own path. No one can choose it for him."], - ["I only wish Ben were here."], - ["Luke! I don't believe it! How'd you get here... are you going out with us?!"], - ["Biggs! Of course, I'll be up there with you! Listen, have I got some stories to tell..."], - ["Are you... Luke Skywalker? Have you been checked out on the Incom T-sixty-five?"], - ["Sir, Luke is the best bushpilot in the outer rim territories."], - ["I met your father once when I was just a boy. He was a great pilot. You'll do all right. If you've got half of your father's skill, you'll do better than all right."], - ["Thank you, sir. I'll try."], - ["I've got to get aboard. Listen, you'll tell me your stories when we come back. All right?"], - ["I told you I'd make it someday, Biggs."], - ["You did, all right. It's going to be like old times Luke. We're a couple of shooting stars that'll never be stopped!"], - ["This R2 unit of your seems a bit beat up. Do you want a new one?"], - ["Not on your life! That little droid and I have been through a lot together.You okay, Artoo?"], - ["Okay, easy she goes!"], - ["Hang on tight, Artoo, you've got to come back."], - ["You wouldn't want my life to get boring, would you?"], - ["Luke, the Force will be with you."], - ["Stand-by alert. Death Star approaching. Estimated time to firing range, fifteen minutes."], - ["All wings report in."], - ["Red Ten standing by."], - ["Red Seven standing by."], - ["Red Three standing by."], - ["Red Six standing by."], - ["Red Nine standing by."], - ["Red Two standing by."], - ["Red Eleven standing by."], - ["Red Five standing by."], - ["Lock S-foils in attack position."], - ["We're passing through their magnetic field."], - ["Hold tight!"], - ["Switch your deflectors on."], - ["Double front!"], - ["Look at the size of that thing!"], - ["Cut the chatter, Red Two."], - ["Accelerate to attack speed. This is it, boys!"], - ["Red Leader, this is Gold Leader."], - ["I copy, Gold Leader."], - ["We're starting for the target shaft now."], - ["We're in position. I'm going to cut across the axis and try and draw their fire."], - ["Heavy fire, boss! Twenty-threedegrees."], - ["I see it. Stay low. "], - ["This is Red Five! I'm going in!"], - ["Luke, pull up!"], - ["Are you all right?"], - ["I got a little cooked, but I'm okay."], - ["We count thirty Rebel ships, Lord Vader. But they're so small they're evading our turbo-lasers!"], - ["We'll have to destroy them ship to ship. Get the crews to their fighters."], - ["Luke, let me know when you're going in."], - ["I'm on my way in now..."], - ["Watch yourself! There's a lot of fire coming from the right side of that deflection tower."], - ["I'm on it."], - ["Squad leaders, we've picked up a new group of signals. Enemy fighters coming your way."], - ["My scope's negative. I don't see anything."], - ["Keep up your visual scanning. 
With all this jamming, they'll be on top of you before your scope can pick them up."], - ["Biggs! You've picked one up... watch it!"], - ["I can't see it! Where is he?!"], - ["He's on me tight, I can't shake him... I can't shake him."], - ["Hang on, Biggs, I'm coming in."], - ["Got him!"], - ["Several fighters have broken off from the main group. Come with me!"], - ["Pull in! Luke... pull in!"], - ["Watch your back, Luke!"], - ["Watch your back! Fighter's above you, coming in!"], - ["I'm hit, but not bad."], - ["Artoo, see what you can do with it. Hang on back there."], - ["Red Six..."], - ["Can you see Red Five?"], - ["There's a heavy fire zone on this side. Red Five, where are you?"], - ["I can't shake him!"], - ["I'm on him, Luke!"], - ["Hold on!"], - ["Blast it! Wedge where are you?"], - ["Thanks, Wedge."], - ["Good shooting, Wedge!"], - ["Red Leader..."], - ["... This is Gold Leader. We're starting out attack run."], - ["I copy, Gold Leader. Move into position."], - ["Stay in attack formation!"], - ["The exhaust post is..."], - ["... marked and locked in!"], - ["Switch power to front deflection screens."], - ["How many guns do you think, Gold Five?"], - ["I'd say about twenty guns. Some on the surface, some on the towers."], - ["Death Star will be in range in five minutes. "], - ["Switch to targeting computer."], - ["Computer's locked. Getting a signal."], - ["The guns... they've stopped!"], - ["Stabilize your rear deflectors. Watch for enemy fighters."], - ["They've coming in! Three marks at two ten."], - ["I'll take them myself! Cover me!"], - ["Yes, sir."], - ["I can't maneuver!"], - ["Stay on target."], - ["We're too close."], - ["Stay on target!"], - ["Loosen up!"], - ["Gold Five to Red Leader..."], - ["Lost Tiree, lost Dutch."], - ["I copy, Gold Five."], - ["They came from behind...."], - ["We've analyzed their attack, sir, and there is a danger. Should I have your ship standing by?"], - ["Evacuate? In out moment of triumph? I think you overestimate their chances!"], - ["Rebel base, three minutes and closing."], - ["Red Group, this is Red Leader."], - ["Rendezvous at mark six point one."], - ["This is Red Two. Flying towards you."], - ["Red Three, standing by."], - ["Red Leader, this is Base One. Keep half your group out of range for the next run."], - ["Copy, Base One. Luke, take Red Two and Three. Hold up here and wait for my signal... to start your run."], - ["This is it!"], - ["We should be able to see it by now."], - ["Keep your eyes open for those fighters!"], - ["There's too much interference!"], - ["Red Five, can you see them from where you are?"], - ["No sign of any... wait!"], - ["Coming in point three five."], - ["I see them."], - ["I'm in range."], - ["Target's coming up!"], - ["Just hold them off for a few seconds."], - ["Close up formation."], - ["Almost there!"], - ["You'd better let her loose."], - ["Almost there!"], - ["I can't hold them!"], - ["It's away!"], - ["It's a hit!"], - ["Negative."], - ["Negative! It didn't go in, it just impacted on the surface."], - ["Red Leader, we're right above you. Turn to point..."], - ["... oh-five; we'll cover for you."], - ["Stay there..."], - ["... I just lost my starboard engine."], - ["Get set up for your attack run."], - ["Rebel base, one minute and closing."], - ["Biggs, Wedge, let's close it up. We're going in. 
We're going in full throttle."], - ["Right with you, boss."], - ["Luke, at that speed will you be able to pull out in time?"], - ["It'll be just like Beggar's Canyon back home."], - ["We'll stay back far enough to cover you."], - ["My scope shows the tower, but I can't see the exhaust port! Are you sure the computer can hit it?"], - ["Watch yourself! Increase speed full throttle!"], - ["What about that tower?"], - ["You worry about those fighters! I'll worry about the tower!"], - ["Artoo... that, that stabilizer's broken loose again! See if you can't lock it down!"], - ["I'm hit! I can't stay with you."], - ["Get clear, Wedge."], - ["You can't do any more good back there!"], - ["Sorry!"], - ["Let him go! Stay on the leader!"], - ["Hurry, Luke, they're coming in much faster this time. I can't hold them!"], - ["Artoo, try and increase the power!"], - ["Hurry up, Luke!"], - ["Wait!"], - ["Rebel base, thirty seconds and closing."], - ["I'm on the leader."], - ["Hang on, Artoo!"], - ["Use the Force, Luke."], - ["Let go, Luke."], - ["The Force is strong with this one!"], - ["Luke, trust me."], - ["His computer's off. Luke, you switched off your targeting computer. What's wrong?"], - ["Nothing. I'm all right."], - ["I've lost Artoo!"], - ["The Death Star has cleared the planet. The Death Star has cleared the planet."], - ["Rebel base, in range."], - ["You may fire when ready."], - ["Commence primary ignition."], - ["I have you now."], - ["What?"], - ["Yahoo!"], - ["Look out!"], - ["You're all clear, kid."], - ["Now let's blow this thing and go home!"], - ["Stand by to fire at Rebel base."], - ["Standing by."], - ["Great shot, kid. That was one in a million."], - ["Remember, the Force will be with you... always."], - ["Luke! Luke! Luke!"], - ["Hey! Hey!"], - ["I knew you'd come back! I just knew it!"], - ["Well, I wasn't gonna let you get all the credit and take all the reward."], - ["Hey, I knew there was more to you than money."], - ["Oh, no!"], - ["Oh, my! Artoo! Can you hear me? Say something!You can repair him, can't you?"], - ["We'll get to work on him right away."], - ["You must repair him! 
Sir, if any of my circuits or gears will help, I'll gladly donate them."], - ["He'll be all right."] -] -from gradio import inputs -from gradio.inputs import Textbox -from gradio import outputs - -generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B") -generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") -generator1 = gr.Interface.load("huggingface/gpt2-large") - -#gr.Parallel(generator1, generator2, generator3, inputs=gr.inputs.Textbox(lines=6, label="Enter a sentence to get another sentence."),title=title, examples=examples).launch() - -def complete_with_gpt(text): - # Use the last 50 characters of the text as context - return text[:-50] + generator1(text[-50:]) - -with gr.Blocks() as demo: - textbox = gr.Textbox(placeholder="Type here and press enter...", lines=4) - btn = gr.Button("Generate") - - btn.click(complete_with_gpt, textbox, textbox) - -demo.launch() \ No newline at end of file diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/times.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/times.py deleted file mode 100644 index 3c9b8a4fc67a251c9e81a8c4a725cd1e25fcbebe..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/autogpt/commands/times.py +++ /dev/null @@ -1,10 +0,0 @@ -from datetime import datetime - - -def get_datetime() -> str: - """Return the current date and time - - Returns: - str: The current date and time - """ - return "Current date and time: " + datetime.now().strftime("%Y-%m-%d %H:%M:%S") diff --git a/spaces/PeepDaSlan9/HuggingFaceH4-starchat-alpha/app.py b/spaces/PeepDaSlan9/HuggingFaceH4-starchat-alpha/app.py deleted file mode 100644 index 6ec294f2c2bda44625ca5f0fd3c666f7be665216..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/HuggingFaceH4-starchat-alpha/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/HuggingFaceH4/starchat-alpha").launch() \ No newline at end of file diff --git a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/ONNXVITS_utils.py b/spaces/Plachta/VITS-Umamusume-voice-synthesizer/ONNXVITS_utils.py deleted file mode 100644 index b634ce380421571e6e07fb45dd59717b3f63115c..0000000000000000000000000000000000000000 --- a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/ONNXVITS_utils.py +++ /dev/null @@ -1,19 +0,0 @@ -import torch -import numpy as np -import random -import onnxruntime as ort -def set_random_seed(seed=0): - ort.set_seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.backends.cudnn.deterministic = True - random.seed(seed) - np.random.seed(seed) - -def runonnx(model_path, **kwargs): - ort_session = ort.InferenceSession(model_path) - outputs = ort_session.run( - None, - kwargs - ) - return outputs \ No newline at end of file diff --git a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/text/english.py b/spaces/Plachta/VITS-Umamusume-voice-synthesizer/text/english.py deleted file mode 100644 index 6817392ba8a9eb830351de89fb7afc5ad72f5e42..0000000000000000000000000000000000000000 --- a/spaces/Plachta/VITS-Umamusume-voice-synthesizer/text/english.py +++ /dev/null @@ -1,188 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. 
"transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, lazy ipa2) pairs: -_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ð', 'z'), - ('θ', 's'), - ('ʒ', 'ʑ'), - ('ʤ', 'dʑ'), - ('ˈ', '↓'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, 
_remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = english_to_ipa(text) - text = mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') - - -def english_to_lazy_ipa2(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa2: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/Pranjal-y/data_scraping_analysis/data_analysis.py b/spaces/Pranjal-y/data_scraping_analysis/data_analysis.py deleted file mode 100644 index 7475679b4ba8e6e57513d8fbc740c0da7247567a..0000000000000000000000000000000000000000 --- a/spaces/Pranjal-y/data_scraping_analysis/data_analysis.py +++ /dev/null @@ -1,256 +0,0 @@ -import streamlit as st -import pandas as pd -from pathlib import Path -from xml.etree import ElementTree as ET -import json -import altair as alt -import plotly.express as px -import plotly.graph_objects as go -import numpy as np -import re - - -def detect_file_format(file_path): - extension = Path(file_path).suffix.lower() - if extension == '.csv': - return 'csv' - elif extension == '.json': - return 'json' - elif extension == '.xml': - return 'xml' - else: - return 'unsupported' - -def read_csv(file_path): - return pd.read_csv(file_path) - -def read_json(file_path): - with open(file_path, 'r') as f: - return json.load(f) - -def read_xml(file_path): - tree = ET.parse(file_path) - root = tree.getroot() - data = [] - for item in root: - item_data = {} - for child in item: - item_data[child.tag] = child.text - data.append(item_data) - return pd.DataFrame(data) - -def data_analysis_page(file_path): - st.title("Data Analysis") - - file_format = detect_file_format(file_path) - st.write(f"Retrieved file is in {file_format.upper()} format") - - if file_format == 'csv': - data = read_csv(file_path) - elif file_format == 'json': - data = read_json(file_path) - elif file_format == 'xml': - data = read_xml(file_path) - else: - st.warning("Unsupported file format") - - # Display the retrieved data - st.write(f"
    Retrieved Data :", unsafe_allow_html=True) - st.write(data) - - # Display summary statistics - st.write(f"
    Data Summary :", unsafe_allow_html=True) - data_summary = data.describe(include='all') - st.write(data_summary) - - # Display data types - st.write(f"
    Data Types :
    ", unsafe_allow_html=True) - data_types = data.dtypes - st.write(data_types) - - # Initialize session state - if 'converted_data' not in st.session_state: - converted_data = data.copy() - - # Select column for analysis - selected_column = st.selectbox("Select a column for analysis:", data.columns) - if selected_column: - try: - numeric_data = data[selected_column].apply(pd.to_numeric, errors='coerce') - numeric_data = numeric_data.dropna() # Remove NaN values - average = numeric_data.mean() - total_rows = len(data) - user_engagement = numeric_data.sum() - - # Create columns for layout - col1, col2, col3 = st.columns(3) - with col1: - st.markdown('
    ', unsafe_allow_html=True) - st.write(f"
    Average {selected_column}", unsafe_allow_html=True) - st.write(f"
    {average:.2f}", unsafe_allow_html=True) - st.markdown('
    ', unsafe_allow_html=True) - - with col2: - st.markdown('
    ', unsafe_allow_html=True) - st.write(f"
    Total Number of Rows", unsafe_allow_html=True) - st.write(f"
    {total_rows}", unsafe_allow_html=True) - st.markdown('
    ', unsafe_allow_html=True) - - with col3: - st.markdown('
    ', unsafe_allow_html=True) - st.write(f"
    User Engagement", unsafe_allow_html=True) - st.write(f"
    Sum of {selected_column}: {user_engagement}", unsafe_allow_html=True) - st.markdown('
    ', unsafe_allow_html=True) - - except ValueError: - st.warning(f"Selected column '{selected_column}' contains non-numeric values.") - - # Convert column data type if you wish to - # Allow user to select a column for data conversion - column_to_convert = st.selectbox("Select a column for data conversion:", data.columns) - - # Check if the selected column is already numeric - if pd.api.types.is_numeric_dtype(converted_data[column_to_convert]): - st.warning(f"Column '{column_to_convert}' is already numeric. Please select a different column.") - else: - # Provide a button to initiate data conversion - if st.button("Convert to Numeric"): - # Remove commas from values if the column is not numeric - converted_data[column_to_convert] = converted_data[column_to_convert].str.replace(',', '') - - try: - # Convert to numeric - converted_data[column_to_convert] = pd.to_numeric(converted_data[column_to_convert]) - st.success(f"Converted '{column_to_convert}' to numeric.") - except ValueError: - st.warning(f"Column '{column_to_convert}' contains non-numeric values.") - - # Display updated data types - updated_data_types = converted_data.dtypes - st.write(updated_data_types) - st.write(converted_data) - - # Histogram of selected column with tooltips using Altair - # Histogram of selected column with tooltips using Altair - st.markdown('
    ', unsafe_allow_html=True) - st.write('## Histogram') - - # Select Numeric Column for Histogram - selected_column_hist = st.selectbox("Select Numeric Column for Histogram", data.columns) - - # Generate histogram - hist = alt.Chart(converted_data).mark_bar().encode( - x=alt.X(f'{selected_column_hist}:Q', bin=alt.Bin(maxbins=20), title=f'{selected_column_hist}'), - y=alt.Y('count():Q', title='Frequency'), - tooltip=[f'{selected_column_hist}:Q', 'count():Q'] - ).properties( - width=600, - height=400, - title=f'Distribution of {selected_column_hist}' - ) - st.altair_chart(hist, use_container_width=True) - - st.markdown('
    ', unsafe_allow_html=True) - - # Density plot using Altair - st.markdown('
    ', unsafe_allow_html=True) - st.write('## Density Plot') - - # Get a list of available columns for X-axis selection - available_columns1 = data.columns.tolist() - - # Provide a unique key for the radio button - selected_column1 = st.radio("Select X-axis Column:", available_columns1, key='density_radio') - - # Create the density plot based on the selected column - density_chart = alt.Chart(converted_data).mark_area().encode( - alt.X(f'{selected_column1}:Q', title=selected_column1), # Use the selected column here - alt.Y('density:Q', title='Density'), - alt.Tooltip([f'{selected_column1}:Q', 'density:Q']) - ).transform_density( - selected_column1, # Use the selected column here - as_=[selected_column1, 'density'] - ).properties( - width=600, - height=400, - title=f'Density Plot of {selected_column1}' - ) - st.altair_chart(density_chart, use_container_width=True) - st.markdown('
    ', unsafe_allow_html=True) - - # Double bar chart test - # Create a double bar chart comparing procedure_price and cred_procedure_price for available data - st.write('## Double Bar Chart: Comparison of Procedure Prices') - - # Get a list of available columns for X and Y axis selection - available_columns = data.columns.tolist() - - # Create dropdowns for selecting X and Y axes - selected_x_column = st.selectbox("Select X-axis Column:", available_columns) - selected_y_column = st.selectbox("Select Y-axis Column:", available_columns) - - if not converted_data.empty: - chart = alt.Chart(converted_data).mark_bar().encode( - x=alt.X(f'{selected_x_column}:N', title='Disease'), - y=alt.Y(f'{selected_y_column}:Q', title='Price (INR)', scale=alt.Scale(domain=(0, 7000000))), - color=alt.Color('type_of_procedure:N', title='Type of Procedure', - scale=alt.Scale(range=['blue', 'orange'])), - tooltip=[f'{selected_x_column}:N', f'{selected_y_column}:Q', 'type_of_procedure:N'] - ).transform_fold( - [selected_x_column, selected_y_column], - as_=['type_of_procedure', 'price'] - ).properties( - width=600, - height=400, - title='Comparison of Procedure Prices by Disease' - ) - st.markdown('
    ', unsafe_allow_html=True) - st.altair_chart(chart, use_container_width=True) - st.markdown('
    ', unsafe_allow_html=True) - else: - st.write('No valid data available for comparison.') - - # Separate available and unavailable data - data['Availability'] = data.apply(lambda row: 'Unavailable' if row.isna().any() else 'Available', axis=1) - available_data = data[data['Availability'] == 'Available'] - unavailable_data = data[data['Availability'] == 'Unavailable'] - - # Calculate the count of available and unavailable data - available_count = available_data.shape[0] - unavailable_count = unavailable_data.shape[0] - - # Create columns for layout - col1, col2 = st.columns(2) - - # Display the available vs unavailable data using a bar chart - with col1: - st.write('## Available vs Unavailable Data') - # Display count of available and unavailable data - st.write(f"Available Data Count: {available_count}") - st.write(f"Unavailable Data Count: {unavailable_count}") - chart = alt.Chart(pd.DataFrame({'Status': ['Available', 'Unavailable'], - 'Count': [len(available_data), len(unavailable_data)]})).mark_bar().encode( - x='Status:N', - y='Count:Q', - color=alt.Color('Status:N', scale=alt.Scale(range=['green', 'red'])), - tooltip=['Status:N', 'Count:Q'] - ) - st.altair_chart(chart, use_container_width=True) - - # Create a doughnut chart using Plotly - with col2: - labels = ['Available', 'Unavailable'] - values = [available_count, unavailable_count] - fig = go.Figure(data=[go.Pie(labels=labels, values=values, hole=0.4)]) - st.plotly_chart(fig, use_container_width=True) - - -# Example usage -file_path = "data_ret.csv" -data_analysis_page(file_path) - - - - - diff --git a/spaces/Pranjal2041/SemSup-XC/fetch_prod.py b/spaces/Pranjal2041/SemSup-XC/fetch_prod.py deleted file mode 100644 index 621acf5f6ec1239ae2b19bb7127d08859dbae48c..0000000000000000000000000000000000000000 --- a/spaces/Pranjal2041/SemSup-XC/fetch_prod.py +++ /dev/null @@ -1,36 +0,0 @@ -from bs4 import BeautifulSoup as bs -import requests -from typing import Dict, List, Optional - -from fake_http_header import FakeHttpHeader - -class Scraper: - - - def __init__(self): - ... - - def sanity_url(self, url : str) -> bool: - if url.find('amazon')==-1: - return False - return True - - def get_product(self, url : str) -> Dict: - if not self.sanity_url(url): - return 'Invalid URL' - - webpage = requests.get(url, headers=FakeHttpHeader().as_header_dict()) - f = open('webpage_out.html','w') - f.write(webpage.content.decode()) - f.close() - if webpage.status_code != 200: - return 'Error Loading Link' - try: - webpage = bs(webpage.content) - title = webpage.findAll("span", attrs={"id": 'productTitle'})[0].text.strip() - categories = [x.strip().lower() for x in webpage.findAll("div", attrs={"id": 'wayfinding-breadcrumbs_feature_div'})[0].text.strip().split('\n') if x.strip()!='' and len(x.strip()) >=3] - desc = webpage.findAll("div", attrs={"id": 'featurebullets_feature_div'})[0].text.replace('About this item','').strip() - except IndexError as e: - if webpage.content.find('captcha')!=-1: - return {'description' : 'Detected as a Bot. Please Try Again Later. 
Till then, you can continue to type in your description, or manually copy from Amazon.'} - return {'description' : f'{title}\n{desc}', 'labels' : categories} \ No newline at end of file diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/solvers/diffusion.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/solvers/diffusion.py deleted file mode 100644 index 93dea2520836f458ab1b8514dca952b51d113ec2..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/solvers/diffusion.py +++ /dev/null @@ -1,279 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import flashy -import julius -import omegaconf -import torch -import torch.nn.functional as F - -from . import builders -from . import base -from .. import models -from ..modules.diffusion_schedule import NoiseSchedule -from ..metrics import RelativeVolumeMel -from ..models.builders import get_processor -from ..utils.samples.manager import SampleManager -from ..solvers.compression import CompressionSolver - - -class PerStageMetrics: - """Handle prompting the metrics per stage. - It outputs the metrics per range of diffusion states. - e.g. avg loss when t in [250, 500] - """ - def __init__(self, num_steps: int, num_stages: int = 4): - self.num_steps = num_steps - self.num_stages = num_stages - - def __call__(self, losses: dict, step: tp.Union[int, torch.Tensor]): - if type(step) is int: - stage = int((step / self.num_steps) * self.num_stages) - return {f"{name}_{stage}": loss for name, loss in losses.items()} - elif type(step) is torch.Tensor: - stage_tensor = ((step / self.num_steps) * self.num_stages).long() - out: tp.Dict[str, float] = {} - for stage_idx in range(self.num_stages): - mask = (stage_tensor == stage_idx) - N = mask.sum() - stage_out = {} - if N > 0: # pass if no elements in the stage - for name, loss in losses.items(): - stage_loss = (mask * loss).sum() / N - stage_out[f"{name}_{stage_idx}"] = stage_loss - out = {**out, **stage_out} - return out - - -class DataProcess: - """Apply filtering or resampling. - - Args: - initial_sr (int): Initial sample rate. - target_sr (int): Target sample rate. - use_resampling: Whether to use resampling or not. - use_filter (bool): - n_bands (int): Number of bands to consider. - idx_band (int): - device (torch.device or str): - cutoffs (): - boost (bool): - """ - def __init__(self, initial_sr: int = 24000, target_sr: int = 16000, use_resampling: bool = False, - use_filter: bool = False, n_bands: int = 4, - idx_band: int = 0, device: torch.device = torch.device('cpu'), cutoffs=None, boost=False): - """Apply filtering or resampling - Args: - initial_sr (int): sample rate of the dataset - target_sr (int): sample rate after resampling - use_resampling (bool): whether or not performs resampling - use_filter (bool): when True filter the data to keep only one frequency band - n_bands (int): Number of bands used - cuts (none or list): The cutoff frequencies of the band filtering - if None then we use mel scale bands. - idx_band (int): index of the frequency band. 0 are lows ... (n_bands - 1) highs - boost (bool): make the data scale match our music dataset. 
- """ - assert idx_band < n_bands - self.idx_band = idx_band - if use_filter: - if cutoffs is not None: - self.filter = julius.SplitBands(sample_rate=initial_sr, cutoffs=cutoffs).to(device) - else: - self.filter = julius.SplitBands(sample_rate=initial_sr, n_bands=n_bands).to(device) - self.use_filter = use_filter - self.use_resampling = use_resampling - self.target_sr = target_sr - self.initial_sr = initial_sr - self.boost = boost - - def process_data(self, x, metric=False): - if x is None: - return None - if self.boost: - x /= torch.clamp(x.std(dim=(1, 2), keepdim=True), min=1e-4) - x * 0.22 - if self.use_filter and not metric: - x = self.filter(x)[self.idx_band] - if self.use_resampling: - x = julius.resample_frac(x, old_sr=self.initial_sr, new_sr=self.target_sr) - return x - - def inverse_process(self, x): - """Upsampling only.""" - if self.use_resampling: - x = julius.resample_frac(x, old_sr=self.target_sr, new_sr=self.target_sr) - return x - - -class DiffusionSolver(base.StandardSolver): - """Solver for compression task. - - The diffusion task allows for MultiBand diffusion model training. - - Args: - cfg (DictConfig): Configuration. - """ - def __init__(self, cfg: omegaconf.DictConfig): - super().__init__(cfg) - self.cfg = cfg - self.device = cfg.device - self.sample_rate: int = self.cfg.sample_rate - self.codec_model = CompressionSolver.model_from_checkpoint( - cfg.compression_model_checkpoint, device=self.device) - - self.codec_model.set_num_codebooks(cfg.n_q) - assert self.codec_model.sample_rate == self.cfg.sample_rate, ( - f"Codec model sample rate is {self.codec_model.sample_rate} but " - f"Solver sample rate is {self.cfg.sample_rate}." - ) - assert self.codec_model.sample_rate == self.sample_rate, \ - f"Sample rate of solver {self.sample_rate} and codec {self.codec_model.sample_rate} " \ - "don't match." - - self.sample_processor = get_processor(cfg.processor, sample_rate=self.sample_rate) - self.register_stateful('sample_processor') - self.sample_processor.to(self.device) - - self.schedule = NoiseSchedule( - **cfg.schedule, device=self.device, sample_processor=self.sample_processor) - - self.eval_metric: tp.Optional[torch.nn.Module] = None - - self.rvm = RelativeVolumeMel() - self.data_processor = DataProcess(initial_sr=self.sample_rate, target_sr=cfg.resampling.target_sr, - use_resampling=cfg.resampling.use, cutoffs=cfg.filter.cutoffs, - use_filter=cfg.filter.use, n_bands=cfg.filter.n_bands, - idx_band=cfg.filter.idx_band, device=self.device) - - @property - def best_metric_name(self) -> tp.Optional[str]: - if self._current_stage == "evaluate": - return 'rvm' - else: - return 'loss' - - @torch.no_grad() - def get_condition(self, wav: torch.Tensor) -> torch.Tensor: - codes, scale = self.codec_model.encode(wav) - assert scale is None, "Scaled compression models not supported." - emb = self.codec_model.decode_latent(codes) - return emb - - def build_model(self): - """Build model and optimizer as well as optional Exponential Moving Average of the model. 
- """ - # Model and optimizer - self.model = models.builders.get_diffusion_model(self.cfg).to(self.device) - self.optimizer = builders.get_optimizer(self.model.parameters(), self.cfg.optim) - self.register_stateful('model', 'optimizer') - self.register_best_state('model') - self.register_ema('model') - - def build_dataloaders(self): - """Build audio dataloaders for each stage.""" - self.dataloaders = builders.get_audio_datasets(self.cfg) - - def show(self): - # TODO - raise NotImplementedError() - - def run_step(self, idx: int, batch: torch.Tensor, metrics: dict): - """Perform one training or valid step on a given batch.""" - x = batch.to(self.device) - loss_fun = F.mse_loss if self.cfg.loss.kind == 'mse' else F.l1_loss - - condition = self.get_condition(x) # [bs, 128, T/hop, n_emb] - sample = self.data_processor.process_data(x) - - input_, target, step = self.schedule.get_training_item(sample, - tensor_step=self.cfg.schedule.variable_step_batch) - out = self.model(input_, step, condition=condition).sample - - base_loss = loss_fun(out, target, reduction='none').mean(dim=(1, 2)) - reference_loss = loss_fun(input_, target, reduction='none').mean(dim=(1, 2)) - loss = base_loss / reference_loss ** self.cfg.loss.norm_power - - if self.is_training: - loss.mean().backward() - flashy.distrib.sync_model(self.model) - self.optimizer.step() - self.optimizer.zero_grad() - metrics = { - 'loss': loss.mean(), 'normed_loss': (base_loss / reference_loss).mean(), - } - metrics.update(self.per_stage({'loss': loss, 'normed_loss': base_loss / reference_loss}, step)) - metrics.update({ - 'std_in': input_.std(), 'std_out': out.std()}) - return metrics - - def run_epoch(self): - # reset random seed at the beginning of the epoch - self.rng = torch.Generator() - self.rng.manual_seed(1234 + self.epoch) - self.per_stage = PerStageMetrics(self.schedule.num_steps, self.cfg.metrics.num_stage) - # run epoch - super().run_epoch() - - def evaluate(self): - """Evaluate stage. - Runs audio reconstruction evaluation. - """ - self.model.eval() - evaluate_stage_name = f'{self.current_stage}' - loader = self.dataloaders['evaluate'] - updates = len(loader) - lp = self.log_progress(f'{evaluate_stage_name} estimate', loader, total=updates, updates=self.log_updates) - - metrics = {} - n = 1 - for idx, batch in enumerate(lp): - x = batch.to(self.device) - with torch.no_grad(): - y_pred = self.regenerate(x) - - y_pred = y_pred.cpu() - y = batch.cpu() # should already be on CPU but just in case - rvm = self.rvm(y_pred, y) - lp.update(**rvm) - if len(metrics) == 0: - metrics = rvm - else: - for key in rvm.keys(): - metrics[key] = (metrics[key] * n + rvm[key]) / (n + 1) - metrics = flashy.distrib.average_metrics(metrics) - return metrics - - @torch.no_grad() - def regenerate(self, wav: torch.Tensor, step_list: tp.Optional[list] = None): - """Regenerate the given waveform.""" - condition = self.get_condition(wav) - initial = self.schedule.get_initial_noise(self.data_processor.process_data(wav)) # sampling rate changes. 
- result = self.schedule.generate_subsampled(self.model, initial=initial, condition=condition, - step_list=step_list) - result = self.data_processor.inverse_process(result) - return result - - def generate(self): - """Generate stage.""" - sample_manager = SampleManager(self.xp) - self.model.eval() - generate_stage_name = f'{self.current_stage}' - - loader = self.dataloaders['generate'] - updates = len(loader) - lp = self.log_progress(generate_stage_name, loader, total=updates, updates=self.log_updates) - - for batch in lp: - reference, _ = batch - reference = reference.to(self.device) - estimate = self.regenerate(reference) - reference = reference.cpu() - estimate = estimate.cpu() - sample_manager.add_samples(estimate, self.epoch, ground_truth_wavs=reference) - flashy.distrib.barrier() diff --git a/spaces/RaIDooN/huggyllama-llama-13b/app.py b/spaces/RaIDooN/huggyllama-llama-13b/app.py deleted file mode 100644 index da8bb13329131fecac21a144971dbece1075d24d..0000000000000000000000000000000000000000 --- a/spaces/RaIDooN/huggyllama-llama-13b/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/huggyllama/llama-13b").launch() \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/pyopenssl.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/pyopenssl.py deleted file mode 100644 index 528764a033408806e38d7cb686a330a66ca01b10..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/pyopenssl.py +++ /dev/null @@ -1,519 +0,0 @@ -""" -TLS with SNI_-support for Python 2. Follow these instructions if you would -like to verify TLS certificates in Python 2. Note, the default libraries do -*not* do certificate checking; you need to do additional work to validate -certificates yourself. - -This needs the following packages installed: - -* `pyOpenSSL`_ (tested with 16.0.0) -* `cryptography`_ (minimum 1.3.4, from pyopenssl) -* `idna`_ (minimum 2.0, from cryptography) - -However, pyopenssl depends on cryptography, which depends on idna, so while we -use all three directly here we end up having relatively few packages required. - -You can install them with the following command: - -.. code-block:: bash - - $ python -m pip install pyopenssl cryptography idna - -To activate certificate checking, call -:func:`~urllib3.contrib.pyopenssl.inject_into_urllib3` from your Python code -before you begin making HTTP requests. This can be done in a ``sitecustomize`` -module, or at any other time before your application begins using ``urllib3``, -like this: - -.. code-block:: python - - try: - import pip._vendor.urllib3.contrib.pyopenssl as pyopenssl - pyopenssl.inject_into_urllib3() - except ImportError: - pass - -Now you can use :mod:`urllib3` as you normally would, and it will support SNI -when the required modules are installed. - -Activating this module also has the positive side effect of disabling SSL/TLS -compression in Python 2 (see `CRIME attack`_). - -.. _sni: https://en.wikipedia.org/wiki/Server_Name_Indication -.. _crime attack: https://en.wikipedia.org/wiki/CRIME_(security_exploit) -.. _pyopenssl: https://www.pyopenssl.org -.. _cryptography: https://cryptography.io -.. 
_idna: https://github.com/kjd/idna -""" -from __future__ import absolute_import - -import OpenSSL.SSL -from cryptography import x509 -from cryptography.hazmat.backends.openssl import backend as openssl_backend -from cryptography.hazmat.backends.openssl.x509 import _Certificate - -try: - from cryptography.x509 import UnsupportedExtension -except ImportError: - # UnsupportedExtension is gone in cryptography >= 2.1.0 - class UnsupportedExtension(Exception): - pass - - -from io import BytesIO -from socket import error as SocketError -from socket import timeout - -try: # Platform-specific: Python 2 - from socket import _fileobject -except ImportError: # Platform-specific: Python 3 - _fileobject = None - from ..packages.backports.makefile import backport_makefile - -import logging -import ssl -import sys -import warnings - -from .. import util -from ..packages import six -from ..util.ssl_ import PROTOCOL_TLS_CLIENT - -warnings.warn( - "'urllib3.contrib.pyopenssl' module is deprecated and will be removed " - "in a future release of urllib3 2.x. Read more in this issue: " - "https://github.com/urllib3/urllib3/issues/2680", - category=DeprecationWarning, - stacklevel=2, -) - -__all__ = ["inject_into_urllib3", "extract_from_urllib3"] - -# SNI always works. -HAS_SNI = True - -# Map from urllib3 to PyOpenSSL compatible parameter-values. -_openssl_versions = { - util.PROTOCOL_TLS: OpenSSL.SSL.SSLv23_METHOD, - PROTOCOL_TLS_CLIENT: OpenSSL.SSL.SSLv23_METHOD, - ssl.PROTOCOL_TLSv1: OpenSSL.SSL.TLSv1_METHOD, -} - -if hasattr(ssl, "PROTOCOL_SSLv3") and hasattr(OpenSSL.SSL, "SSLv3_METHOD"): - _openssl_versions[ssl.PROTOCOL_SSLv3] = OpenSSL.SSL.SSLv3_METHOD - -if hasattr(ssl, "PROTOCOL_TLSv1_1") and hasattr(OpenSSL.SSL, "TLSv1_1_METHOD"): - _openssl_versions[ssl.PROTOCOL_TLSv1_1] = OpenSSL.SSL.TLSv1_1_METHOD - -if hasattr(ssl, "PROTOCOL_TLSv1_2") and hasattr(OpenSSL.SSL, "TLSv1_2_METHOD"): - _openssl_versions[ssl.PROTOCOL_TLSv1_2] = OpenSSL.SSL.TLSv1_2_METHOD - - -_stdlib_to_openssl_verify = { - ssl.CERT_NONE: OpenSSL.SSL.VERIFY_NONE, - ssl.CERT_OPTIONAL: OpenSSL.SSL.VERIFY_PEER, - ssl.CERT_REQUIRED: OpenSSL.SSL.VERIFY_PEER - + OpenSSL.SSL.VERIFY_FAIL_IF_NO_PEER_CERT, -} -_openssl_to_stdlib_verify = dict((v, k) for k, v in _stdlib_to_openssl_verify.items()) - -# OpenSSL will only write 16K at a time -SSL_WRITE_BLOCKSIZE = 16384 - -orig_util_HAS_SNI = util.HAS_SNI -orig_util_SSLContext = util.ssl_.SSLContext - - -log = logging.getLogger(__name__) - - -def inject_into_urllib3(): - "Monkey-patch urllib3 with PyOpenSSL-backed SSL-support." - - _validate_dependencies_met() - - util.SSLContext = PyOpenSSLContext - util.ssl_.SSLContext = PyOpenSSLContext - util.HAS_SNI = HAS_SNI - util.ssl_.HAS_SNI = HAS_SNI - util.IS_PYOPENSSL = True - util.ssl_.IS_PYOPENSSL = True - - -def extract_from_urllib3(): - "Undo monkey-patching by :func:`inject_into_urllib3`." - - util.SSLContext = orig_util_SSLContext - util.ssl_.SSLContext = orig_util_SSLContext - util.HAS_SNI = orig_util_HAS_SNI - util.ssl_.HAS_SNI = orig_util_HAS_SNI - util.IS_PYOPENSSL = False - util.ssl_.IS_PYOPENSSL = False - - -def _validate_dependencies_met(): - """ - Verifies that PyOpenSSL's package-level dependencies have been met. - Throws `ImportError` if they are not met. - """ - # Method added in `cryptography==1.1`; not available in older versions - from cryptography.x509.extensions import Extensions - - if getattr(Extensions, "get_extension_for_class", None) is None: - raise ImportError( - "'cryptography' module missing required functionality. 
" - "Try upgrading to v1.3.4 or newer." - ) - - # pyOpenSSL 0.14 and above use cryptography for OpenSSL bindings. The _x509 - # attribute is only present on those versions. - from OpenSSL.crypto import X509 - - x509 = X509() - if getattr(x509, "_x509", None) is None: - raise ImportError( - "'pyOpenSSL' module missing required functionality. " - "Try upgrading to v0.14 or newer." - ) - - -def _dnsname_to_stdlib(name): - """ - Converts a dNSName SubjectAlternativeName field to the form used by the - standard library on the given Python version. - - Cryptography produces a dNSName as a unicode string that was idna-decoded - from ASCII bytes. We need to idna-encode that string to get it back, and - then on Python 3 we also need to convert to unicode via UTF-8 (the stdlib - uses PyUnicode_FromStringAndSize on it, which decodes via UTF-8). - - If the name cannot be idna-encoded then we return None signalling that - the name given should be skipped. - """ - - def idna_encode(name): - """ - Borrowed wholesale from the Python Cryptography Project. It turns out - that we can't just safely call `idna.encode`: it can explode for - wildcard names. This avoids that problem. - """ - from pip._vendor import idna - - try: - for prefix in [u"*.", u"."]: - if name.startswith(prefix): - name = name[len(prefix) :] - return prefix.encode("ascii") + idna.encode(name) - return idna.encode(name) - except idna.core.IDNAError: - return None - - # Don't send IPv6 addresses through the IDNA encoder. - if ":" in name: - return name - - name = idna_encode(name) - if name is None: - return None - elif sys.version_info >= (3, 0): - name = name.decode("utf-8") - return name - - -def get_subj_alt_name(peer_cert): - """ - Given an PyOpenSSL certificate, provides all the subject alternative names. - """ - # Pass the cert to cryptography, which has much better APIs for this. - if hasattr(peer_cert, "to_cryptography"): - cert = peer_cert.to_cryptography() - else: - # This is technically using private APIs, but should work across all - # relevant versions before PyOpenSSL got a proper API for this. - cert = _Certificate(openssl_backend, peer_cert._x509) - - # We want to find the SAN extension. Ask Cryptography to locate it (it's - # faster than looping in Python) - try: - ext = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value - except x509.ExtensionNotFound: - # No such extension, return the empty list. - return [] - except ( - x509.DuplicateExtension, - UnsupportedExtension, - x509.UnsupportedGeneralNameType, - UnicodeError, - ) as e: - # A problem has been found with the quality of the certificate. Assume - # no SAN field is present. - log.warning( - "A problem was encountered with the certificate that prevented " - "urllib3 from finding the SubjectAlternativeName field. This can " - "affect certificate validation. The error was %s", - e, - ) - return [] - - # We want to return dNSName and iPAddress fields. We need to cast the IPs - # back to strings because the match_hostname function wants them as - # strings. - # Sadly the DNS names need to be idna encoded and then, on Python 3, UTF-8 - # decoded. This is pretty frustrating, but that's what the standard library - # does with certificates, and so we need to attempt to do the same. - # We also want to skip over names which cannot be idna encoded. 
- names = [ - ("DNS", name) - for name in map(_dnsname_to_stdlib, ext.get_values_for_type(x509.DNSName)) - if name is not None - ] - names.extend( - ("IP Address", str(name)) for name in ext.get_values_for_type(x509.IPAddress) - ) - - return names - - -class WrappedSocket(object): - """API-compatibility wrapper for Python OpenSSL's Connection-class. - - Note: _makefile_refs, _drop() and _reuse() are needed for the garbage - collector of pypy. - """ - - def __init__(self, connection, socket, suppress_ragged_eofs=True): - self.connection = connection - self.socket = socket - self.suppress_ragged_eofs = suppress_ragged_eofs - self._makefile_refs = 0 - self._closed = False - - def fileno(self): - return self.socket.fileno() - - # Copy-pasted from Python 3.5 source code - def _decref_socketios(self): - if self._makefile_refs > 0: - self._makefile_refs -= 1 - if self._closed: - self.close() - - def recv(self, *args, **kwargs): - try: - data = self.connection.recv(*args, **kwargs) - except OpenSSL.SSL.SysCallError as e: - if self.suppress_ragged_eofs and e.args == (-1, "Unexpected EOF"): - return b"" - else: - raise SocketError(str(e)) - except OpenSSL.SSL.ZeroReturnError: - if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN: - return b"" - else: - raise - except OpenSSL.SSL.WantReadError: - if not util.wait_for_read(self.socket, self.socket.gettimeout()): - raise timeout("The read operation timed out") - else: - return self.recv(*args, **kwargs) - - # TLS 1.3 post-handshake authentication - except OpenSSL.SSL.Error as e: - raise ssl.SSLError("read error: %r" % e) - else: - return data - - def recv_into(self, *args, **kwargs): - try: - return self.connection.recv_into(*args, **kwargs) - except OpenSSL.SSL.SysCallError as e: - if self.suppress_ragged_eofs and e.args == (-1, "Unexpected EOF"): - return 0 - else: - raise SocketError(str(e)) - except OpenSSL.SSL.ZeroReturnError: - if self.connection.get_shutdown() == OpenSSL.SSL.RECEIVED_SHUTDOWN: - return 0 - else: - raise - except OpenSSL.SSL.WantReadError: - if not util.wait_for_read(self.socket, self.socket.gettimeout()): - raise timeout("The read operation timed out") - else: - return self.recv_into(*args, **kwargs) - - # TLS 1.3 post-handshake authentication - except OpenSSL.SSL.Error as e: - raise ssl.SSLError("read error: %r" % e) - - def settimeout(self, timeout): - return self.socket.settimeout(timeout) - - def _send_until_done(self, data): - while True: - try: - return self.connection.send(data) - except OpenSSL.SSL.WantWriteError: - if not util.wait_for_write(self.socket, self.socket.gettimeout()): - raise timeout() - continue - except OpenSSL.SSL.SysCallError as e: - raise SocketError(str(e)) - - def sendall(self, data): - total_sent = 0 - while total_sent < len(data): - sent = self._send_until_done( - data[total_sent : total_sent + SSL_WRITE_BLOCKSIZE] - ) - total_sent += sent - - def shutdown(self): - # FIXME rethrow compatible exceptions should we ever use this - self.connection.shutdown() - - def close(self): - if self._makefile_refs < 1: - try: - self._closed = True - return self.connection.close() - except OpenSSL.SSL.Error: - return - else: - self._makefile_refs -= 1 - - def getpeercert(self, binary_form=False): - x509 = self.connection.get_peer_certificate() - - if not x509: - return x509 - - if binary_form: - return OpenSSL.crypto.dump_certificate(OpenSSL.crypto.FILETYPE_ASN1, x509) - - return { - "subject": ((("commonName", x509.get_subject().CN),),), - "subjectAltName": get_subj_alt_name(x509), - } - - def 
version(self): - return self.connection.get_protocol_version_name() - - def _reuse(self): - self._makefile_refs += 1 - - def _drop(self): - if self._makefile_refs < 1: - self.close() - else: - self._makefile_refs -= 1 - - -if _fileobject: # Platform-specific: Python 2 - - def makefile(self, mode, bufsize=-1): - self._makefile_refs += 1 - return _fileobject(self, mode, bufsize, close=True) - -else: # Platform-specific: Python 3 - makefile = backport_makefile - -WrappedSocket.makefile = makefile - - -class PyOpenSSLContext(object): - """ - I am a wrapper class for the PyOpenSSL ``Context`` object. I am responsible - for translating the interface of the standard library ``SSLContext`` object - to calls into PyOpenSSL. - """ - - def __init__(self, protocol): - self.protocol = _openssl_versions[protocol] - self._ctx = OpenSSL.SSL.Context(self.protocol) - self._options = 0 - self.check_hostname = False - - @property - def options(self): - return self._options - - @options.setter - def options(self, value): - self._options = value - self._ctx.set_options(value) - - @property - def verify_mode(self): - return _openssl_to_stdlib_verify[self._ctx.get_verify_mode()] - - @verify_mode.setter - def verify_mode(self, value): - self._ctx.set_verify(_stdlib_to_openssl_verify[value], _verify_callback) - - def set_default_verify_paths(self): - self._ctx.set_default_verify_paths() - - def set_ciphers(self, ciphers): - if isinstance(ciphers, six.text_type): - ciphers = ciphers.encode("utf-8") - self._ctx.set_cipher_list(ciphers) - - def load_verify_locations(self, cafile=None, capath=None, cadata=None): - if cafile is not None: - cafile = cafile.encode("utf-8") - if capath is not None: - capath = capath.encode("utf-8") - try: - self._ctx.load_verify_locations(cafile, capath) - if cadata is not None: - self._ctx.load_verify_locations(BytesIO(cadata)) - except OpenSSL.SSL.Error as e: - raise ssl.SSLError("unable to load trusted certificates: %r" % e) - - def load_cert_chain(self, certfile, keyfile=None, password=None): - self._ctx.use_certificate_chain_file(certfile) - if password is not None: - if not isinstance(password, six.binary_type): - password = password.encode("utf-8") - self._ctx.set_passwd_cb(lambda *_: password) - self._ctx.use_privatekey_file(keyfile or certfile) - - def set_alpn_protocols(self, protocols): - protocols = [six.ensure_binary(p) for p in protocols] - return self._ctx.set_alpn_protos(protocols) - - def wrap_socket( - self, - sock, - server_side=False, - do_handshake_on_connect=True, - suppress_ragged_eofs=True, - server_hostname=None, - ): - cnx = OpenSSL.SSL.Connection(self._ctx, sock) - - if isinstance(server_hostname, six.text_type): # Platform-specific: Python 3 - server_hostname = server_hostname.encode("utf-8") - - if server_hostname is not None: - cnx.set_tlsext_host_name(server_hostname) - - cnx.set_connect_state() - - while True: - try: - cnx.do_handshake() - except OpenSSL.SSL.WantReadError: - if not util.wait_for_read(sock, sock.gettimeout()): - raise timeout("select timed out") - continue - except OpenSSL.SSL.Error as e: - raise ssl.SSLError("bad handshake: %r" % e) - break - - return WrappedSocket(cnx, sock) - - -def _verify_callback(cnx, x509, err_no, err_depth, return_code): - return err_no == 0 diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/filepost.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/filepost.py deleted file mode 100644 index 
36c9252c647e67bc7353c523152568b993c1331f..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/filepost.py +++ /dev/null @@ -1,98 +0,0 @@ -from __future__ import absolute_import - -import binascii -import codecs -import os -from io import BytesIO - -from .fields import RequestField -from .packages import six -from .packages.six import b - -writer = codecs.lookup("utf-8")[3] - - -def choose_boundary(): - """ - Our embarrassingly-simple replacement for mimetools.choose_boundary. - """ - boundary = binascii.hexlify(os.urandom(16)) - if not six.PY2: - boundary = boundary.decode("ascii") - return boundary - - -def iter_field_objects(fields): - """ - Iterate over fields. - - Supports list of (k, v) tuples and dicts, and lists of - :class:`~urllib3.fields.RequestField`. - - """ - if isinstance(fields, dict): - i = six.iteritems(fields) - else: - i = iter(fields) - - for field in i: - if isinstance(field, RequestField): - yield field - else: - yield RequestField.from_tuples(*field) - - -def iter_fields(fields): - """ - .. deprecated:: 1.6 - - Iterate over fields. - - The addition of :class:`~urllib3.fields.RequestField` makes this function - obsolete. Instead, use :func:`iter_field_objects`, which returns - :class:`~urllib3.fields.RequestField` objects. - - Supports list of (k, v) tuples and dicts. - """ - if isinstance(fields, dict): - return ((k, v) for k, v in six.iteritems(fields)) - - return ((k, v) for k, v in fields) - - -def encode_multipart_formdata(fields, boundary=None): - """ - Encode a dictionary of ``fields`` using the multipart/form-data MIME format. - - :param fields: - Dictionary of fields or list of (key, :class:`~urllib3.fields.RequestField`). - - :param boundary: - If not specified, then a random boundary will be generated using - :func:`urllib3.filepost.choose_boundary`. - """ - body = BytesIO() - if boundary is None: - boundary = choose_boundary() - - for field in iter_field_objects(fields): - body.write(b("--%s\r\n" % (boundary))) - - writer(body).write(field.render_headers()) - data = field.data - - if isinstance(data, int): - data = str(data) # Backwards compatibility - - if isinstance(data, six.text_type): - writer(body).write(data) - else: - body.write(data) - - body.write(b"\r\n") - - body.write(b("--%s--\r\n" % (boundary))) - - content_type = str("multipart/form-data; boundary=%s" % boundary) - - return body.getvalue(), content_type diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/__init__.py deleted file mode 100644 index d59226af9d7fe1b5279e99ff6e333032d1cec274..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pkg_resources/__init__.py +++ /dev/null @@ -1,3296 +0,0 @@ -""" -Package resource API --------------------- - -A resource is a logical file contained within a package, or a logical -subdirectory thereof. The package resource API expects resource names -to have their path parts separated with ``/``, *not* whatever the local -path separator is. Do not use os.path operations to manipulate resource -names being passed into the API. - -The package resource API is designed to work with normal filesystem packages, -.egg files, and unpacked .egg files. It can also work in a limited way with -.zip files and with custom PEP 302 loaders that support the ``get_data()`` -method. 
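-
-For example, ``resource_string(__name__, 'templates/page.html')`` uses the
-required ``/`` separator; building the name with ``os.path.join`` would not be
-portable.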
-""" - -import sys -import os -import io -import time -import re -import types -import zipfile -import zipimport -import warnings -import stat -import functools -import pkgutil -import operator -import platform -import collections -import plistlib -import email.parser -import errno -import tempfile -import textwrap -import itertools -import inspect -import ntpath -import posixpath -import importlib -from pkgutil import get_importer - -try: - import _imp -except ImportError: - # Python 3.2 compatibility - import imp as _imp - -try: - FileExistsError -except NameError: - FileExistsError = OSError - -# capture these to bypass sandboxing -from os import utime -try: - from os import mkdir, rename, unlink - WRITE_SUPPORT = True -except ImportError: - # no write support, probably under GAE - WRITE_SUPPORT = False - -from os import open as os_open -from os.path import isdir, split - -try: - import importlib.machinery as importlib_machinery - # access attribute to force import under delayed import mechanisms. - importlib_machinery.__name__ -except ImportError: - importlib_machinery = None - -from pkg_resources.extern.jaraco.text import ( - yield_lines, - drop_comment, - join_continuation, -) - -from pkg_resources.extern import appdirs -from pkg_resources.extern import packaging -__import__('pkg_resources.extern.packaging.version') -__import__('pkg_resources.extern.packaging.specifiers') -__import__('pkg_resources.extern.packaging.requirements') -__import__('pkg_resources.extern.packaging.markers') -__import__('pkg_resources.extern.packaging.utils') - -if sys.version_info < (3, 5): - raise RuntimeError("Python 3.5 or later is required") - -# declare some globals that will be defined later to -# satisfy the linters. -require = None -working_set = None -add_activation_listener = None -resources_stream = None -cleanup_resources = None -resource_dir = None -resource_stream = None -set_extraction_path = None -resource_isdir = None -resource_string = None -iter_entry_points = None -resource_listdir = None -resource_filename = None -resource_exists = None -_distribution_finders = None -_namespace_handlers = None -_namespace_packages = None - - -class PEP440Warning(RuntimeWarning): - """ - Used when there is an issue with a version or specifier not complying with - PEP 440. - """ - - -def parse_version(v): - try: - return packaging.version.Version(v) - except packaging.version.InvalidVersion: - warnings.warn( - f"{v} is an invalid version and will not be supported in " - "a future release", - PkgResourcesDeprecationWarning, - ) - return packaging.version.LegacyVersion(v) - - -_state_vars = {} - - -def _declare_state(vartype, **kw): - globals().update(kw) - _state_vars.update(dict.fromkeys(kw, vartype)) - - -def __getstate__(): - state = {} - g = globals() - for k, v in _state_vars.items(): - state[k] = g['_sget_' + v](g[k]) - return state - - -def __setstate__(state): - g = globals() - for k, v in state.items(): - g['_sset_' + _state_vars[k]](k, g[k], v) - return state - - -def _sget_dict(val): - return val.copy() - - -def _sset_dict(key, ob, state): - ob.clear() - ob.update(state) - - -def _sget_object(val): - return val.__getstate__() - - -def _sset_object(key, ob, state): - ob.__setstate__(state) - - -_sget_none = _sset_none = lambda *args: None - - -def get_supported_platform(): - """Return this platform's maximum compatible version. - - distutils.util.get_platform() normally reports the minimum version - of macOS that would be required to *use* extensions produced by - distutils. 
But what we want when checking compatibility is to know the - version of macOS that we are *running*. To allow usage of packages that - explicitly require a newer version of macOS, we must also know the - current version of the OS. - - If this condition occurs for any other platform with a version in its - platform strings, this function should be extended accordingly. - """ - plat = get_build_platform() - m = macosVersionString.match(plat) - if m is not None and sys.platform == "darwin": - try: - plat = 'macosx-%s-%s' % ('.'.join(_macos_vers()[:2]), m.group(3)) - except ValueError: - # not macOS - pass - return plat - - -__all__ = [ - # Basic resource access and distribution/entry point discovery - 'require', 'run_script', 'get_provider', 'get_distribution', - 'load_entry_point', 'get_entry_map', 'get_entry_info', - 'iter_entry_points', - 'resource_string', 'resource_stream', 'resource_filename', - 'resource_listdir', 'resource_exists', 'resource_isdir', - - # Environmental control - 'declare_namespace', 'working_set', 'add_activation_listener', - 'find_distributions', 'set_extraction_path', 'cleanup_resources', - 'get_default_cache', - - # Primary implementation classes - 'Environment', 'WorkingSet', 'ResourceManager', - 'Distribution', 'Requirement', 'EntryPoint', - - # Exceptions - 'ResolutionError', 'VersionConflict', 'DistributionNotFound', - 'UnknownExtra', 'ExtractionError', - - # Warnings - 'PEP440Warning', - - # Parsing functions and string utilities - 'parse_requirements', 'parse_version', 'safe_name', 'safe_version', - 'get_platform', 'compatible_platforms', 'yield_lines', 'split_sections', - 'safe_extra', 'to_filename', 'invalid_marker', 'evaluate_marker', - - # filesystem utilities - 'ensure_directory', 'normalize_path', - - # Distribution "precedence" constants - 'EGG_DIST', 'BINARY_DIST', 'SOURCE_DIST', 'CHECKOUT_DIST', 'DEVELOP_DIST', - - # "Provider" interfaces, implementations, and registration/lookup APIs - 'IMetadataProvider', 'IResourceProvider', 'FileMetadata', - 'PathMetadata', 'EggMetadata', 'EmptyProvider', 'empty_provider', - 'NullProvider', 'EggProvider', 'DefaultProvider', 'ZipProvider', - 'register_finder', 'register_namespace_handler', 'register_loader_type', - 'fixup_namespace_packages', 'get_importer', - - # Warnings - 'PkgResourcesDeprecationWarning', - - # Deprecated/backward compatibility only - 'run_main', 'AvailableDistributions', -] - - -class ResolutionError(Exception): - """Abstract base for dependency resolution errors""" - - def __repr__(self): - return self.__class__.__name__ + repr(self.args) - - -class VersionConflict(ResolutionError): - """ - An already-installed version conflicts with the requested version. - - Should be initialized with the installed Distribution and the requested - Requirement. - """ - - _template = "{self.dist} is installed but {self.req} is required" - - @property - def dist(self): - return self.args[0] - - @property - def req(self): - return self.args[1] - - def report(self): - return self._template.format(**locals()) - - def with_context(self, required_by): - """ - If required_by is non-empty, return a version of self that is a - ContextualVersionConflict. - """ - if not required_by: - return self - args = self.args + (required_by,) - return ContextualVersionConflict(*args) - - -class ContextualVersionConflict(VersionConflict): - """ - A VersionConflict that accepts a third parameter, the set of the - requirements that required the installed Distribution. 
- """ - - _template = VersionConflict._template + ' by {self.required_by}' - - @property - def required_by(self): - return self.args[2] - - -class DistributionNotFound(ResolutionError): - """A requested distribution was not found""" - - _template = ("The '{self.req}' distribution was not found " - "and is required by {self.requirers_str}") - - @property - def req(self): - return self.args[0] - - @property - def requirers(self): - return self.args[1] - - @property - def requirers_str(self): - if not self.requirers: - return 'the application' - return ', '.join(self.requirers) - - def report(self): - return self._template.format(**locals()) - - def __str__(self): - return self.report() - - -class UnknownExtra(ResolutionError): - """Distribution doesn't have an "extra feature" of the given name""" - - -_provider_factories = {} - -PY_MAJOR = '{}.{}'.format(*sys.version_info) -EGG_DIST = 3 -BINARY_DIST = 2 -SOURCE_DIST = 1 -CHECKOUT_DIST = 0 -DEVELOP_DIST = -1 - - -def register_loader_type(loader_type, provider_factory): - """Register `provider_factory` to make providers for `loader_type` - - `loader_type` is the type or class of a PEP 302 ``module.__loader__``, - and `provider_factory` is a function that, passed a *module* object, - returns an ``IResourceProvider`` for that module. - """ - _provider_factories[loader_type] = provider_factory - - -def get_provider(moduleOrReq): - """Return an IResourceProvider for the named module or requirement""" - if isinstance(moduleOrReq, Requirement): - return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0] - try: - module = sys.modules[moduleOrReq] - except KeyError: - __import__(moduleOrReq) - module = sys.modules[moduleOrReq] - loader = getattr(module, '__loader__', None) - return _find_adapter(_provider_factories, loader)(module) - - -def _macos_vers(_cache=[]): - if not _cache: - version = platform.mac_ver()[0] - # fallback for MacPorts - if version == '': - plist = '/System/Library/CoreServices/SystemVersion.plist' - if os.path.exists(plist): - if hasattr(plistlib, 'readPlist'): - plist_content = plistlib.readPlist(plist) - if 'ProductVersion' in plist_content: - version = plist_content['ProductVersion'] - - _cache.append(version.split('.')) - return _cache[0] - - -def _macos_arch(machine): - return {'PowerPC': 'ppc', 'Power_Macintosh': 'ppc'}.get(machine, machine) - - -def get_build_platform(): - """Return this platform's string for platform-specific distributions - - XXX Currently this is the same as ``distutils.util.get_platform()``, but it - needs some hacks for Linux and macOS. - """ - from sysconfig import get_platform - - plat = get_platform() - if sys.platform == "darwin" and not plat.startswith('macosx-'): - try: - version = _macos_vers() - machine = os.uname()[4].replace(" ", "_") - return "macosx-%d.%d-%s" % ( - int(version[0]), int(version[1]), - _macos_arch(machine), - ) - except ValueError: - # if someone is running a non-Mac darwin system, this will fall - # through to the default implementation - pass - return plat - - -macosVersionString = re.compile(r"macosx-(\d+)\.(\d+)-(.*)") -darwinVersionString = re.compile(r"darwin-(\d+)\.(\d+)\.(\d+)-(.*)") -# XXX backward compat -get_platform = get_build_platform - - -def compatible_platforms(provided, required): - """Can code for the `provided` platform run on the `required` platform? - - Returns true if either platform is ``None``, or the platforms are equal. - - XXX Needs compatibility checks for Linux and other unixy OSes. 
- """ - if provided is None or required is None or provided == required: - # easy case - return True - - # macOS special cases - reqMac = macosVersionString.match(required) - if reqMac: - provMac = macosVersionString.match(provided) - - # is this a Mac package? - if not provMac: - # this is backwards compatibility for packages built before - # setuptools 0.6. All packages built after this point will - # use the new macOS designation. - provDarwin = darwinVersionString.match(provided) - if provDarwin: - dversion = int(provDarwin.group(1)) - macosversion = "%s.%s" % (reqMac.group(1), reqMac.group(2)) - if dversion == 7 and macosversion >= "10.3" or \ - dversion == 8 and macosversion >= "10.4": - return True - # egg isn't macOS or legacy darwin - return False - - # are they the same major version and machine type? - if provMac.group(1) != reqMac.group(1) or \ - provMac.group(3) != reqMac.group(3): - return False - - # is the required OS major update >= the provided one? - if int(provMac.group(2)) > int(reqMac.group(2)): - return False - - return True - - # XXX Linux and other platforms' special cases should go here - return False - - -def run_script(dist_spec, script_name): - """Locate distribution `dist_spec` and run its `script_name` script""" - ns = sys._getframe(1).f_globals - name = ns['__name__'] - ns.clear() - ns['__name__'] = name - require(dist_spec)[0].run_script(script_name, ns) - - -# backward compatibility -run_main = run_script - - -def get_distribution(dist): - """Return a current distribution object for a Requirement or string""" - if isinstance(dist, str): - dist = Requirement.parse(dist) - if isinstance(dist, Requirement): - dist = get_provider(dist) - if not isinstance(dist, Distribution): - raise TypeError("Expected string, Requirement, or Distribution", dist) - return dist - - -def load_entry_point(dist, group, name): - """Return `name` entry point of `group` for `dist` or raise ImportError""" - return get_distribution(dist).load_entry_point(group, name) - - -def get_entry_map(dist, group=None): - """Return the entry point map for `group`, or the full entry map""" - return get_distribution(dist).get_entry_map(group) - - -def get_entry_info(dist, group, name): - """Return the EntryPoint object for `group`+`name`, or ``None``""" - return get_distribution(dist).get_entry_info(group, name) - - -class IMetadataProvider: - def has_metadata(name): - """Does the package's distribution contain the named metadata?""" - - def get_metadata(name): - """The named metadata resource as a string""" - - def get_metadata_lines(name): - """Yield named metadata resource as list of non-blank non-comment lines - - Leading and trailing whitespace is stripped from each line, and lines - with ``#`` as the first non-blank character are omitted.""" - - def metadata_isdir(name): - """Is the named metadata a directory? 
(like ``os.path.isdir()``)""" - - def metadata_listdir(name): - """List of metadata names in the directory (like ``os.listdir()``)""" - - def run_script(script_name, namespace): - """Execute the named script in the supplied namespace dictionary""" - - -class IResourceProvider(IMetadataProvider): - """An object that provides access to package resources""" - - def get_resource_filename(manager, resource_name): - """Return a true filesystem path for `resource_name` - - `manager` must be an ``IResourceManager``""" - - def get_resource_stream(manager, resource_name): - """Return a readable file-like object for `resource_name` - - `manager` must be an ``IResourceManager``""" - - def get_resource_string(manager, resource_name): - """Return a string containing the contents of `resource_name` - - `manager` must be an ``IResourceManager``""" - - def has_resource(resource_name): - """Does the package contain the named resource?""" - - def resource_isdir(resource_name): - """Is the named resource a directory? (like ``os.path.isdir()``)""" - - def resource_listdir(resource_name): - """List of resource names in the directory (like ``os.listdir()``)""" - - -class WorkingSet: - """A collection of active distributions on sys.path (or a similar list)""" - - def __init__(self, entries=None): - """Create working set from list of path entries (default=sys.path)""" - self.entries = [] - self.entry_keys = {} - self.by_key = {} - self.normalized_to_canonical_keys = {} - self.callbacks = [] - - if entries is None: - entries = sys.path - - for entry in entries: - self.add_entry(entry) - - @classmethod - def _build_master(cls): - """ - Prepare the master working set. - """ - ws = cls() - try: - from __main__ import __requires__ - except ImportError: - # The main program does not list any requirements - return ws - - # ensure the requirements are met - try: - ws.require(__requires__) - except VersionConflict: - return cls._build_from_requirements(__requires__) - - return ws - - @classmethod - def _build_from_requirements(cls, req_spec): - """ - Build a working set from a requirement spec. Rewrites sys.path. - """ - # try it without defaults already on sys.path - # by starting with an empty path - ws = cls([]) - reqs = parse_requirements(req_spec) - dists = ws.resolve(reqs, Environment()) - for dist in dists: - ws.add(dist) - - # add any missing entries from sys.path - for entry in sys.path: - if entry not in ws.entries: - ws.add_entry(entry) - - # then copy back to sys.path - sys.path[:] = ws.entries - return ws - - def add_entry(self, entry): - """Add a path item to ``.entries``, finding any distributions on it - - ``find_distributions(entry, True)`` is used to find distributions - corresponding to the path entry, and they are added. `entry` is - always appended to ``.entries``, even if it is already present. - (This is because ``sys.path`` can contain the same value more than - once, and the ``.entries`` of the ``sys.path`` WorkingSet should always - equal ``sys.path``.) - """ - self.entry_keys.setdefault(entry, []) - self.entries.append(entry) - for dist in find_distributions(entry, True): - self.add(dist, entry, False) - - def __contains__(self, dist): - """True if `dist` is the active distribution for its project""" - return self.by_key.get(dist.key) == dist - - def find(self, req): - """Find a distribution matching requirement `req` - - If there is an active distribution for the requested project, this - returns it as long as it meets the version requirement specified by - `req`. 
But, if there is an active distribution for the project and it - does *not* meet the `req` requirement, ``VersionConflict`` is raised. - If there is no active distribution for the requested project, ``None`` - is returned. - """ - dist = self.by_key.get(req.key) - - if dist is None: - canonical_key = self.normalized_to_canonical_keys.get(req.key) - - if canonical_key is not None: - req.key = canonical_key - dist = self.by_key.get(canonical_key) - - if dist is not None and dist not in req: - # XXX add more info - raise VersionConflict(dist, req) - return dist - - def iter_entry_points(self, group, name=None): - """Yield entry point objects from `group` matching `name` - - If `name` is None, yields all entry points in `group` from all - distributions in the working set, otherwise only ones matching - both `group` and `name` are yielded (in distribution order). - """ - return ( - entry - for dist in self - for entry in dist.get_entry_map(group).values() - if name is None or name == entry.name - ) - - def run_script(self, requires, script_name): - """Locate distribution for `requires` and run `script_name` script""" - ns = sys._getframe(1).f_globals - name = ns['__name__'] - ns.clear() - ns['__name__'] = name - self.require(requires)[0].run_script(script_name, ns) - - def __iter__(self): - """Yield distributions for non-duplicate projects in the working set - - The yield order is the order in which the items' path entries were - added to the working set. - """ - seen = {} - for item in self.entries: - if item not in self.entry_keys: - # workaround a cache issue - continue - - for key in self.entry_keys[item]: - if key not in seen: - seen[key] = 1 - yield self.by_key[key] - - def add(self, dist, entry=None, insert=True, replace=False): - """Add `dist` to working set, associated with `entry` - - If `entry` is unspecified, it defaults to the ``.location`` of `dist`. - On exit from this routine, `entry` is added to the end of the working - set's ``.entries`` (if it wasn't already present). - - `dist` is only added to the working set if it's for a project that - doesn't already have a distribution in the set, unless `replace=True`. - If it's added, any callbacks registered with the ``subscribe()`` method - will be called. - """ - if insert: - dist.insert_on(self.entries, entry, replace=replace) - - if entry is None: - entry = dist.location - keys = self.entry_keys.setdefault(entry, []) - keys2 = self.entry_keys.setdefault(dist.location, []) - if not replace and dist.key in self.by_key: - # ignore hidden distros - return - - self.by_key[dist.key] = dist - normalized_name = packaging.utils.canonicalize_name(dist.key) - self.normalized_to_canonical_keys[normalized_name] = dist.key - if dist.key not in keys: - keys.append(dist.key) - if dist.key not in keys2: - keys2.append(dist.key) - self._added_new(dist) - - # FIXME: 'WorkingSet.resolve' is too complex (11) - def resolve(self, requirements, env=None, installer=None, # noqa: C901 - replace_conflicting=False, extras=None): - """List all distributions needed to (recursively) meet `requirements` - - `requirements` must be a sequence of ``Requirement`` objects. `env`, - if supplied, should be an ``Environment`` instance. If - not supplied, it defaults to all distributions available within any - entry or distribution in the working set. `installer`, if supplied, - will be invoked with each requirement that cannot be met by an - already-installed distribution; it should return a ``Distribution`` or - ``None``. 
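-
- For example, ``working_set.resolve(parse_requirements(['requests>=2.0']))``
- would return the matching ``requests`` distribution plus every distribution
- needed to satisfy its dependencies, assuming they are installed.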
- - Unless `replace_conflicting=True`, raises a VersionConflict exception - if - any requirements are found on the path that have the correct name but - the wrong version. Otherwise, if an `installer` is supplied it will be - invoked to obtain the correct version of the requirement and activate - it. - - `extras` is a list of the extras to be used with these requirements. - This is important because extra requirements may look like `my_req; - extra = "my_extra"`, which would otherwise be interpreted as a purely - optional requirement. Instead, we want to be able to assert that these - requirements are truly required. - """ - - # set up the stack - requirements = list(requirements)[::-1] - # set of processed requirements - processed = {} - # key -> dist - best = {} - to_activate = [] - - req_extras = _ReqExtras() - - # Mapping of requirement to set of distributions that required it; - # useful for reporting info about conflicts. - required_by = collections.defaultdict(set) - - while requirements: - # process dependencies breadth-first - req = requirements.pop(0) - if req in processed: - # Ignore cyclic or redundant dependencies - continue - - if not req_extras.markers_pass(req, extras): - continue - - dist = best.get(req.key) - if dist is None: - # Find the best distribution and add it to the map - dist = self.by_key.get(req.key) - if dist is None or (dist not in req and replace_conflicting): - ws = self - if env is None: - if dist is None: - env = Environment(self.entries) - else: - # Use an empty environment and workingset to avoid - # any further conflicts with the conflicting - # distribution - env = Environment([]) - ws = WorkingSet([]) - dist = best[req.key] = env.best_match( - req, ws, installer, - replace_conflicting=replace_conflicting - ) - if dist is None: - requirers = required_by.get(req, None) - raise DistributionNotFound(req, requirers) - to_activate.append(dist) - if dist not in req: - # Oops, the "best" so far conflicts with a dependency - dependent_req = required_by[req] - raise VersionConflict(dist, req).with_context(dependent_req) - - # push the new requirements onto the stack - new_requirements = dist.requires(req.extras)[::-1] - requirements.extend(new_requirements) - - # Register the new requirements needed by req - for new_requirement in new_requirements: - required_by[new_requirement].add(req.project_name) - req_extras[new_requirement] = req.extras - - processed[req] = True - - # return list of distros to activate - return to_activate - - def find_plugins( - self, plugin_env, full_env=None, installer=None, fallback=True): - """Find all activatable distributions in `plugin_env` - - Example usage:: - - distributions, errors = working_set.find_plugins( - Environment(plugin_dirlist) - ) - # add plugins+libs to sys.path - map(working_set.add, distributions) - # display errors - print('Could not load', errors) - - The `plugin_env` should be an ``Environment`` instance that contains - only distributions that are in the project's "plugin directory" or - directories. The `full_env`, if supplied, should be an ``Environment`` - contains all currently-available distributions. If `full_env` is not - supplied, one is created automatically from the ``WorkingSet`` this - method is called on, which will typically mean that every directory on - ``sys.path`` will be scanned for distributions. - - `installer` is a standard installer callback as used by the - ``resolve()`` method. 
The `fallback` flag indicates whether we should - attempt to resolve older versions of a plugin if the newest version - cannot be resolved. - - This method returns a 2-tuple: (`distributions`, `error_info`), where - `distributions` is a list of the distributions found in `plugin_env` - that were loadable, along with any other distributions that are needed - to resolve their dependencies. `error_info` is a dictionary mapping - unloadable plugin distributions to an exception instance describing the - error that occurred. Usually this will be a ``DistributionNotFound`` or - ``VersionConflict`` instance. - """ - - plugin_projects = list(plugin_env) - # scan project names in alphabetic order - plugin_projects.sort() - - error_info = {} - distributions = {} - - if full_env is None: - env = Environment(self.entries) - env += plugin_env - else: - env = full_env + plugin_env - - shadow_set = self.__class__([]) - # put all our entries in shadow_set - list(map(shadow_set.add, self)) - - for project_name in plugin_projects: - - for dist in plugin_env[project_name]: - - req = [dist.as_requirement()] - - try: - resolvees = shadow_set.resolve(req, env, installer) - - except ResolutionError as v: - # save error info - error_info[dist] = v - if fallback: - # try the next older version of project - continue - else: - # give up on this project, keep going - break - - else: - list(map(shadow_set.add, resolvees)) - distributions.update(dict.fromkeys(resolvees)) - - # success, no need to try any more versions of this project - break - - distributions = list(distributions) - distributions.sort() - - return distributions, error_info - - def require(self, *requirements): - """Ensure that distributions matching `requirements` are activated - - `requirements` must be a string or a (possibly-nested) sequence - thereof, specifying the distributions and versions required. The - return value is a sequence of the distributions that needed to be - activated to fulfill the requirements; all relevant distributions are - included, even if they were already activated in this working set. - """ - needed = self.resolve(parse_requirements(requirements)) - - for dist in needed: - self.add(dist) - - return needed - - def subscribe(self, callback, existing=True): - """Invoke `callback` for all distributions - - If `existing=True` (default), - call on all existing ones, as well. - """ - if callback in self.callbacks: - return - self.callbacks.append(callback) - if not existing: - return - for dist in self: - callback(dist) - - def _added_new(self, dist): - for callback in self.callbacks: - callback(dist) - - def __getstate__(self): - return ( - self.entries[:], self.entry_keys.copy(), self.by_key.copy(), - self.normalized_to_canonical_keys.copy(), self.callbacks[:] - ) - - def __setstate__(self, e_k_b_n_c): - entries, keys, by_key, normalized_to_canonical_keys, callbacks = e_k_b_n_c - self.entries = entries[:] - self.entry_keys = keys.copy() - self.by_key = by_key.copy() - self.normalized_to_canonical_keys = normalized_to_canonical_keys.copy() - self.callbacks = callbacks[:] - - -class _ReqExtras(dict): - """ - Map each requirement to the extras that demanded it. - """ - - def markers_pass(self, req, extras=None): - """ - Evaluate markers for req against each extra that - demanded it. - - Return False if the req has a marker and fails - evaluation. Otherwise, return True. 
- """ - extra_evals = ( - req.marker.evaluate({'extra': extra}) - for extra in self.get(req, ()) + (extras or (None,)) - ) - return not req.marker or any(extra_evals) - - -class Environment: - """Searchable snapshot of distributions on a search path""" - - def __init__( - self, search_path=None, platform=get_supported_platform(), - python=PY_MAJOR): - """Snapshot distributions available on a search path - - Any distributions found on `search_path` are added to the environment. - `search_path` should be a sequence of ``sys.path`` items. If not - supplied, ``sys.path`` is used. - - `platform` is an optional string specifying the name of the platform - that platform-specific distributions must be compatible with. If - unspecified, it defaults to the current platform. `python` is an - optional string naming the desired version of Python (e.g. ``'3.6'``); - it defaults to the current version. - - You may explicitly set `platform` (and/or `python`) to ``None`` if you - wish to map *all* distributions, not just those compatible with the - running platform or Python version. - """ - self._distmap = {} - self.platform = platform - self.python = python - self.scan(search_path) - - def can_add(self, dist): - """Is distribution `dist` acceptable for this environment? - - The distribution must match the platform and python version - requirements specified when this environment was created, or False - is returned. - """ - py_compat = ( - self.python is None - or dist.py_version is None - or dist.py_version == self.python - ) - return py_compat and compatible_platforms(dist.platform, self.platform) - - def remove(self, dist): - """Remove `dist` from the environment""" - self._distmap[dist.key].remove(dist) - - def scan(self, search_path=None): - """Scan `search_path` for distributions usable in this environment - - Any distributions found are added to the environment. - `search_path` should be a sequence of ``sys.path`` items. If not - supplied, ``sys.path`` is used. Only distributions conforming to - the platform/python version defined at initialization are added. - """ - if search_path is None: - search_path = sys.path - - for item in search_path: - for dist in find_distributions(item): - self.add(dist) - - def __getitem__(self, project_name): - """Return a newest-to-oldest list of distributions for `project_name` - - Uses case-insensitive `project_name` comparison, assuming all the - project's distributions use their project's name converted to all - lowercase as their key. - - """ - distribution_key = project_name.lower() - return self._distmap.get(distribution_key, []) - - def add(self, dist): - """Add `dist` if we ``can_add()`` it and it has not already been added - """ - if self.can_add(dist) and dist.has_version(): - dists = self._distmap.setdefault(dist.key, []) - if dist not in dists: - dists.append(dist) - dists.sort(key=operator.attrgetter('hashcmp'), reverse=True) - - def best_match( - self, req, working_set, installer=None, replace_conflicting=False): - """Find distribution best matching `req` and usable on `working_set` - - This calls the ``find(req)`` method of the `working_set` to see if a - suitable distribution is already active. (This may raise - ``VersionConflict`` if an unsuitable version of the project is already - active in the specified `working_set`.) If a suitable distribution - isn't active, this method returns the newest distribution in the - environment that meets the ``Requirement`` in `req`. 
If no suitable - distribution is found, and `installer` is supplied, then the result of - calling the environment's ``obtain(req, installer)`` method will be - returned. - """ - try: - dist = working_set.find(req) - except VersionConflict: - if not replace_conflicting: - raise - dist = None - if dist is not None: - return dist - for dist in self[req.key]: - if dist in req: - return dist - # try to download/install - return self.obtain(req, installer) - - def obtain(self, requirement, installer=None): - """Obtain a distribution matching `requirement` (e.g. via download) - - Obtain a distro that matches requirement (e.g. via download). In the - base ``Environment`` class, this routine just returns - ``installer(requirement)``, unless `installer` is None, in which case - None is returned instead. This method is a hook that allows subclasses - to attempt other ways of obtaining a distribution before falling back - to the `installer` argument.""" - if installer is not None: - return installer(requirement) - - def __iter__(self): - """Yield the unique project names of the available distributions""" - for key in self._distmap.keys(): - if self[key]: - yield key - - def __iadd__(self, other): - """In-place addition of a distribution or environment""" - if isinstance(other, Distribution): - self.add(other) - elif isinstance(other, Environment): - for project in other: - for dist in other[project]: - self.add(dist) - else: - raise TypeError("Can't add %r to environment" % (other,)) - return self - - def __add__(self, other): - """Add an environment or distribution to an environment""" - new = self.__class__([], platform=None, python=None) - for env in self, other: - new += env - return new - - -# XXX backward compatibility -AvailableDistributions = Environment - - -class ExtractionError(RuntimeError): - """An error occurred extracting a resource - - The following attributes are available from instances of this exception: - - manager - The resource manager that raised this exception - - cache_path - The base directory for resource extraction - - original_error - The exception instance that caused extraction to fail - """ - - -class ResourceManager: - """Manage resource extraction and packages""" - extraction_path = None - - def __init__(self): - self.cached_files = {} - - def resource_exists(self, package_or_requirement, resource_name): - """Does the named resource exist?""" - return get_provider(package_or_requirement).has_resource(resource_name) - - def resource_isdir(self, package_or_requirement, resource_name): - """Is the named resource an existing directory?""" - return get_provider(package_or_requirement).resource_isdir( - resource_name - ) - - def resource_filename(self, package_or_requirement, resource_name): - """Return a true filesystem path for specified resource""" - return get_provider(package_or_requirement).get_resource_filename( - self, resource_name - ) - - def resource_stream(self, package_or_requirement, resource_name): - """Return a readable file-like object for specified resource""" - return get_provider(package_or_requirement).get_resource_stream( - self, resource_name - ) - - def resource_string(self, package_or_requirement, resource_name): - """Return specified resource as a string""" - return get_provider(package_or_requirement).get_resource_string( - self, resource_name - ) - - def resource_listdir(self, package_or_requirement, resource_name): - """List the contents of the named resource directory""" - return get_provider(package_or_requirement).resource_listdir( - 
resource_name - ) - - def extraction_error(self): - """Give an error message for problems extracting file(s)""" - - old_exc = sys.exc_info()[1] - cache_path = self.extraction_path or get_default_cache() - - tmpl = textwrap.dedent(""" - Can't extract file(s) to egg cache - - The following error occurred while trying to extract file(s) - to the Python egg cache: - - {old_exc} - - The Python egg cache directory is currently set to: - - {cache_path} - - Perhaps your account does not have write access to this directory? - You can change the cache directory by setting the PYTHON_EGG_CACHE - environment variable to point to an accessible directory. - """).lstrip() - err = ExtractionError(tmpl.format(**locals())) - err.manager = self - err.cache_path = cache_path - err.original_error = old_exc - raise err - - def get_cache_path(self, archive_name, names=()): - """Return absolute location in cache for `archive_name` and `names` - - The parent directory of the resulting path will be created if it does - not already exist. `archive_name` should be the base filename of the - enclosing egg (which may not be the name of the enclosing zipfile!), - including its ".egg" extension. `names`, if provided, should be a - sequence of path name parts "under" the egg's extraction location. - - This method should only be called by resource providers that need to - obtain an extraction location, and only for names they intend to - extract, as it tracks the generated names for possible cleanup later. - """ - extract_path = self.extraction_path or get_default_cache() - target_path = os.path.join(extract_path, archive_name + '-tmp', *names) - try: - _bypass_ensure_directory(target_path) - except Exception: - self.extraction_error() - - self._warn_unsafe_extraction_path(extract_path) - - self.cached_files[target_path] = 1 - return target_path - - @staticmethod - def _warn_unsafe_extraction_path(path): - """ - If the default extraction path is overridden and set to an insecure - location, such as /tmp, it opens up an opportunity for an attacker to - replace an extracted file with an unauthorized payload. Warn the user - if a known insecure location is used. - - See Distribute #375 for more details. - """ - if os.name == 'nt' and not path.startswith(os.environ['windir']): - # On Windows, permissions are generally restrictive by default - # and temp directories are not writable by other users, so - # bypass the warning. - return - mode = os.stat(path).st_mode - if mode & stat.S_IWOTH or mode & stat.S_IWGRP: - msg = ( - "Extraction path is writable by group/others " - "and vulnerable to attack when " - "used with get_resource_filename ({path}). " - "Consider a more secure " - "location (set with .set_extraction_path or the " - "PYTHON_EGG_CACHE environment variable)." - ).format(**locals()) - warnings.warn(msg, UserWarning) - - def postprocess(self, tempname, filename): - """Perform any platform-specific postprocessing of `tempname` - - This is where Mac header rewrites should be done; other platforms don't - have anything special they should do. - - Resource providers should call this method ONLY after successfully - extracting a compressed resource. They must NOT call it on resources - that are already in the filesystem. - - `tempname` is the current (temporary) name of the file, and `filename` - is the name it will be renamed to by the caller after this routine - returns. 
- """ - - if os.name == 'posix': - # Make the resource executable - mode = ((os.stat(tempname).st_mode) | 0o555) & 0o7777 - os.chmod(tempname, mode) - - def set_extraction_path(self, path): - """Set the base path where resources will be extracted to, if needed. - - If you do not call this routine before any extractions take place, the - path defaults to the return value of ``get_default_cache()``. (Which - is based on the ``PYTHON_EGG_CACHE`` environment variable, with various - platform-specific fallbacks. See that routine's documentation for more - details.) - - Resources are extracted to subdirectories of this path based upon - information given by the ``IResourceProvider``. You may set this to a - temporary directory, but then you must call ``cleanup_resources()`` to - delete the extracted files when done. There is no guarantee that - ``cleanup_resources()`` will be able to remove all extracted files. - - (Note: you may not change the extraction path for a given resource - manager once resources have been extracted, unless you first call - ``cleanup_resources()``.) - """ - if self.cached_files: - raise ValueError( - "Can't change extraction path, files already extracted" - ) - - self.extraction_path = path - - def cleanup_resources(self, force=False): - """ - Delete all extracted resource files and directories, returning a list - of the file and directory names that could not be successfully removed. - This function does not have any concurrency protection, so it should - generally only be called when the extraction path is a temporary - directory exclusive to a single process. This method is not - automatically called; you must call it explicitly or register it as an - ``atexit`` function if you wish to ensure cleanup of a temporary - directory used for extractions. - """ - # XXX - - -def get_default_cache(): - """ - Return the ``PYTHON_EGG_CACHE`` environment variable - or a platform-relevant user cache dir for an app - named "Python-Eggs". - """ - return ( - os.environ.get('PYTHON_EGG_CACHE') - or appdirs.user_cache_dir(appname='Python-Eggs') - ) - - -def safe_name(name): - """Convert an arbitrary string to a standard distribution name - - Any runs of non-alphanumeric/. characters are replaced with a single '-'. - """ - return re.sub('[^A-Za-z0-9.]+', '-', name) - - -def safe_version(version): - """ - Convert an arbitrary string to a standard version string - """ - try: - # normalize the version - return str(packaging.version.Version(version)) - except packaging.version.InvalidVersion: - version = version.replace(' ', '.') - return re.sub('[^A-Za-z0-9.]+', '-', version) - - -def safe_extra(extra): - """Convert an arbitrary string to a standard 'extra' name - - Any runs of non-alphanumeric characters are replaced with a single '_', - and the result is always lowercased. - """ - return re.sub('[^A-Za-z0-9.-]+', '_', extra).lower() - - -def to_filename(name): - """Convert a project or version name to its filename-escaped form - - Any '-' characters are currently replaced with '_'. - """ - return name.replace('-', '_') - - -def invalid_marker(text): - """ - Validate text as a PEP 508 environment marker; return an exception - if invalid or False otherwise. - """ - try: - evaluate_marker(text) - except SyntaxError as e: - e.filename = None - e.lineno = None - return e - return False - - -def evaluate_marker(text, extra=None): - """ - Evaluate a PEP 508 environment marker. - Return a boolean indicating the marker result in this environment. - Raise SyntaxError if marker is invalid. 
- - This implementation uses the 'pyparsing' module. - """ - try: - marker = packaging.markers.Marker(text) - return marker.evaluate() - except packaging.markers.InvalidMarker as e: - raise SyntaxError(e) from e - - -class NullProvider: - """Try to implement resources and metadata for arbitrary PEP 302 loaders""" - - egg_name = None - egg_info = None - loader = None - - def __init__(self, module): - self.loader = getattr(module, '__loader__', None) - self.module_path = os.path.dirname(getattr(module, '__file__', '')) - - def get_resource_filename(self, manager, resource_name): - return self._fn(self.module_path, resource_name) - - def get_resource_stream(self, manager, resource_name): - return io.BytesIO(self.get_resource_string(manager, resource_name)) - - def get_resource_string(self, manager, resource_name): - return self._get(self._fn(self.module_path, resource_name)) - - def has_resource(self, resource_name): - return self._has(self._fn(self.module_path, resource_name)) - - def _get_metadata_path(self, name): - return self._fn(self.egg_info, name) - - def has_metadata(self, name): - if not self.egg_info: - return self.egg_info - - path = self._get_metadata_path(name) - return self._has(path) - - def get_metadata(self, name): - if not self.egg_info: - return "" - path = self._get_metadata_path(name) - value = self._get(path) - try: - return value.decode('utf-8') - except UnicodeDecodeError as exc: - # Include the path in the error message to simplify - # troubleshooting, and without changing the exception type. - exc.reason += ' in {} file at path: {}'.format(name, path) - raise - - def get_metadata_lines(self, name): - return yield_lines(self.get_metadata(name)) - - def resource_isdir(self, resource_name): - return self._isdir(self._fn(self.module_path, resource_name)) - - def metadata_isdir(self, name): - return self.egg_info and self._isdir(self._fn(self.egg_info, name)) - - def resource_listdir(self, resource_name): - return self._listdir(self._fn(self.module_path, resource_name)) - - def metadata_listdir(self, name): - if self.egg_info: - return self._listdir(self._fn(self.egg_info, name)) - return [] - - def run_script(self, script_name, namespace): - script = 'scripts/' + script_name - if not self.has_metadata(script): - raise ResolutionError( - "Script {script!r} not found in metadata at {self.egg_info!r}" - .format(**locals()), - ) - script_text = self.get_metadata(script).replace('\r\n', '\n') - script_text = script_text.replace('\r', '\n') - script_filename = self._fn(self.egg_info, script) - namespace['__file__'] = script_filename - if os.path.exists(script_filename): - with open(script_filename) as fid: - source = fid.read() - code = compile(source, script_filename, 'exec') - exec(code, namespace, namespace) - else: - from linecache import cache - cache[script_filename] = ( - len(script_text), 0, script_text.split('\n'), script_filename - ) - script_code = compile(script_text, script_filename, 'exec') - exec(script_code, namespace, namespace) - - def _has(self, path): - raise NotImplementedError( - "Can't perform this operation for unregistered loader type" - ) - - def _isdir(self, path): - raise NotImplementedError( - "Can't perform this operation for unregistered loader type" - ) - - def _listdir(self, path): - raise NotImplementedError( - "Can't perform this operation for unregistered loader type" - ) - - def _fn(self, base, resource_name): - self._validate_resource_path(resource_name) - if resource_name: - return os.path.join(base, *resource_name.split('/')) - return 
base - - @staticmethod - def _validate_resource_path(path): - """ - Validate the resource paths according to the docs. - https://setuptools.pypa.io/en/latest/pkg_resources.html#basic-resource-access - - >>> warned = getfixture('recwarn') - >>> warnings.simplefilter('always') - >>> vrp = NullProvider._validate_resource_path - >>> vrp('foo/bar.txt') - >>> bool(warned) - False - >>> vrp('../foo/bar.txt') - >>> bool(warned) - True - >>> warned.clear() - >>> vrp('/foo/bar.txt') - >>> bool(warned) - True - >>> vrp('foo/../../bar.txt') - >>> bool(warned) - True - >>> warned.clear() - >>> vrp('foo/f../bar.txt') - >>> bool(warned) - False - - Windows path separators are straight-up disallowed. - >>> vrp(r'\\foo/bar.txt') - Traceback (most recent call last): - ... - ValueError: Use of .. or absolute path in a resource path \ -is not allowed. - - >>> vrp(r'C:\\foo/bar.txt') - Traceback (most recent call last): - ... - ValueError: Use of .. or absolute path in a resource path \ -is not allowed. - - Blank values are allowed - - >>> vrp('') - >>> bool(warned) - False - - Non-string values are not. - - >>> vrp(None) - Traceback (most recent call last): - ... - AttributeError: ... - """ - invalid = ( - os.path.pardir in path.split(posixpath.sep) or - posixpath.isabs(path) or - ntpath.isabs(path) - ) - if not invalid: - return - - msg = "Use of .. or absolute path in a resource path is not allowed." - - # Aggressively disallow Windows absolute paths - if ntpath.isabs(path) and not posixpath.isabs(path): - raise ValueError(msg) - - # for compatibility, warn; in future - # raise ValueError(msg) - warnings.warn( - msg[:-1] + " and will raise exceptions in a future release.", - DeprecationWarning, - stacklevel=4, - ) - - def _get(self, path): - if hasattr(self.loader, 'get_data'): - return self.loader.get_data(path) - raise NotImplementedError( - "Can't perform this operation for loaders without 'get_data()'" - ) - - -register_loader_type(object, NullProvider) - - -def _parents(path): - """ - yield all parents of path including path - """ - last = None - while path != last: - yield path - last = path - path, _ = os.path.split(path) - - -class EggProvider(NullProvider): - """Provider based on a virtual filesystem""" - - def __init__(self, module): - super().__init__(module) - self._setup_prefix() - - def _setup_prefix(self): - # Assume that metadata may be nested inside a "basket" - # of multiple eggs and use module_path instead of .archive. 
- eggs = filter(_is_egg_path, _parents(self.module_path)) - egg = next(eggs, None) - egg and self._set_egg(egg) - - def _set_egg(self, path): - self.egg_name = os.path.basename(path) - self.egg_info = os.path.join(path, 'EGG-INFO') - self.egg_root = path - - -class DefaultProvider(EggProvider): - """Provides access to package resources in the filesystem""" - - def _has(self, path): - return os.path.exists(path) - - def _isdir(self, path): - return os.path.isdir(path) - - def _listdir(self, path): - return os.listdir(path) - - def get_resource_stream(self, manager, resource_name): - return open(self._fn(self.module_path, resource_name), 'rb') - - def _get(self, path): - with open(path, 'rb') as stream: - return stream.read() - - @classmethod - def _register(cls): - loader_names = 'SourceFileLoader', 'SourcelessFileLoader', - for name in loader_names: - loader_cls = getattr(importlib_machinery, name, type(None)) - register_loader_type(loader_cls, cls) - - -DefaultProvider._register() - - -class EmptyProvider(NullProvider): - """Provider that returns nothing for all requests""" - - module_path = None - - _isdir = _has = lambda self, path: False - - def _get(self, path): - return '' - - def _listdir(self, path): - return [] - - def __init__(self): - pass - - -empty_provider = EmptyProvider() - - -class ZipManifests(dict): - """ - zip manifest builder - """ - - @classmethod - def build(cls, path): - """ - Build a dictionary similar to the zipimport directory - caches, except instead of tuples, store ZipInfo objects. - - Use a platform-specific path separator (os.sep) for the path keys - for compatibility with pypy on Windows. - """ - with zipfile.ZipFile(path) as zfile: - items = ( - ( - name.replace('/', os.sep), - zfile.getinfo(name), - ) - for name in zfile.namelist() - ) - return dict(items) - - load = build - - -class MemoizedZipManifests(ZipManifests): - """ - Memoized zipfile manifests. - """ - manifest_mod = collections.namedtuple('manifest_mod', 'manifest mtime') - - def load(self, path): - """ - Load a manifest at path or return a suitable manifest already loaded. - """ - path = os.path.normpath(path) - mtime = os.stat(path).st_mtime - - if path not in self or self[path].mtime != mtime: - manifest = self.build(path) - self[path] = self.manifest_mod(manifest, mtime) - - return self[path].manifest - - -class ZipProvider(EggProvider): - """Resource support for zips and eggs""" - - eagers = None - _zip_manifests = MemoizedZipManifests() - - def __init__(self, module): - super().__init__(module) - self.zip_pre = self.loader.archive + os.sep - - def _zipinfo_name(self, fspath): - # Convert a virtual filename (full path to file) into a zipfile subpath - # usable with the zipimport directory cache for our target archive - fspath = fspath.rstrip(os.sep) - if fspath == self.loader.archive: - return '' - if fspath.startswith(self.zip_pre): - return fspath[len(self.zip_pre):] - raise AssertionError( - "%s is not a subpath of %s" % (fspath, self.zip_pre) - ) - - def _parts(self, zip_path): - # Convert a zipfile subpath into an egg-relative path part list. 
- # pseudo-fs path - fspath = self.zip_pre + zip_path - if fspath.startswith(self.egg_root + os.sep): - return fspath[len(self.egg_root) + 1:].split(os.sep) - raise AssertionError( - "%s is not a subpath of %s" % (fspath, self.egg_root) - ) - - @property - def zipinfo(self): - return self._zip_manifests.load(self.loader.archive) - - def get_resource_filename(self, manager, resource_name): - if not self.egg_name: - raise NotImplementedError( - "resource_filename() only supported for .egg, not .zip" - ) - # no need to lock for extraction, since we use temp names - zip_path = self._resource_to_zip(resource_name) - eagers = self._get_eager_resources() - if '/'.join(self._parts(zip_path)) in eagers: - for name in eagers: - self._extract_resource(manager, self._eager_to_zip(name)) - return self._extract_resource(manager, zip_path) - - @staticmethod - def _get_date_and_size(zip_stat): - size = zip_stat.file_size - # ymdhms+wday, yday, dst - date_time = zip_stat.date_time + (0, 0, -1) - # 1980 offset already done - timestamp = time.mktime(date_time) - return timestamp, size - - # FIXME: 'ZipProvider._extract_resource' is too complex (12) - def _extract_resource(self, manager, zip_path): # noqa: C901 - - if zip_path in self._index(): - for name in self._index()[zip_path]: - last = self._extract_resource( - manager, os.path.join(zip_path, name) - ) - # return the extracted directory name - return os.path.dirname(last) - - timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) - - if not WRITE_SUPPORT: - raise IOError('"os.rename" and "os.unlink" are not supported ' - 'on this platform') - try: - - real_path = manager.get_cache_path( - self.egg_name, self._parts(zip_path) - ) - - if self._is_current(real_path, zip_path): - return real_path - - outf, tmpnam = _mkstemp( - ".$extract", - dir=os.path.dirname(real_path), - ) - os.write(outf, self.loader.get_data(zip_path)) - os.close(outf) - utime(tmpnam, (timestamp, timestamp)) - manager.postprocess(tmpnam, real_path) - - try: - rename(tmpnam, real_path) - - except os.error: - if os.path.isfile(real_path): - if self._is_current(real_path, zip_path): - # the file became current since it was checked above, - # so proceed. 
- return real_path - # Windows, del old file and retry - elif os.name == 'nt': - unlink(real_path) - rename(tmpnam, real_path) - return real_path - raise - - except os.error: - # report a user-friendly error - manager.extraction_error() - - return real_path - - def _is_current(self, file_path, zip_path): - """ - Return True if the file_path is current for this zip_path - """ - timestamp, size = self._get_date_and_size(self.zipinfo[zip_path]) - if not os.path.isfile(file_path): - return False - stat = os.stat(file_path) - if stat.st_size != size or stat.st_mtime != timestamp: - return False - # check that the contents match - zip_contents = self.loader.get_data(zip_path) - with open(file_path, 'rb') as f: - file_contents = f.read() - return zip_contents == file_contents - - def _get_eager_resources(self): - if self.eagers is None: - eagers = [] - for name in ('native_libs.txt', 'eager_resources.txt'): - if self.has_metadata(name): - eagers.extend(self.get_metadata_lines(name)) - self.eagers = eagers - return self.eagers - - def _index(self): - try: - return self._dirindex - except AttributeError: - ind = {} - for path in self.zipinfo: - parts = path.split(os.sep) - while parts: - parent = os.sep.join(parts[:-1]) - if parent in ind: - ind[parent].append(parts[-1]) - break - else: - ind[parent] = [parts.pop()] - self._dirindex = ind - return ind - - def _has(self, fspath): - zip_path = self._zipinfo_name(fspath) - return zip_path in self.zipinfo or zip_path in self._index() - - def _isdir(self, fspath): - return self._zipinfo_name(fspath) in self._index() - - def _listdir(self, fspath): - return list(self._index().get(self._zipinfo_name(fspath), ())) - - def _eager_to_zip(self, resource_name): - return self._zipinfo_name(self._fn(self.egg_root, resource_name)) - - def _resource_to_zip(self, resource_name): - return self._zipinfo_name(self._fn(self.module_path, resource_name)) - - -register_loader_type(zipimport.zipimporter, ZipProvider) - - -class FileMetadata(EmptyProvider): - """Metadata handler for standalone PKG-INFO files - - Usage:: - - metadata = FileMetadata("/path/to/PKG-INFO") - - This provider rejects all data and metadata requests except for PKG-INFO, - which is treated as existing, and will be the contents of the file at - the provided location. 
- """ - - def __init__(self, path): - self.path = path - - def _get_metadata_path(self, name): - return self.path - - def has_metadata(self, name): - return name == 'PKG-INFO' and os.path.isfile(self.path) - - def get_metadata(self, name): - if name != 'PKG-INFO': - raise KeyError("No metadata except PKG-INFO is available") - - with io.open(self.path, encoding='utf-8', errors="replace") as f: - metadata = f.read() - self._warn_on_replacement(metadata) - return metadata - - def _warn_on_replacement(self, metadata): - replacement_char = '�' - if replacement_char in metadata: - tmpl = "{self.path} could not be properly decoded in UTF-8" - msg = tmpl.format(**locals()) - warnings.warn(msg) - - def get_metadata_lines(self, name): - return yield_lines(self.get_metadata(name)) - - -class PathMetadata(DefaultProvider): - """Metadata provider for egg directories - - Usage:: - - # Development eggs: - - egg_info = "/path/to/PackageName.egg-info" - base_dir = os.path.dirname(egg_info) - metadata = PathMetadata(base_dir, egg_info) - dist_name = os.path.splitext(os.path.basename(egg_info))[0] - dist = Distribution(basedir, project_name=dist_name, metadata=metadata) - - # Unpacked egg directories: - - egg_path = "/path/to/PackageName-ver-pyver-etc.egg" - metadata = PathMetadata(egg_path, os.path.join(egg_path,'EGG-INFO')) - dist = Distribution.from_filename(egg_path, metadata=metadata) - """ - - def __init__(self, path, egg_info): - self.module_path = path - self.egg_info = egg_info - - -class EggMetadata(ZipProvider): - """Metadata provider for .egg files""" - - def __init__(self, importer): - """Create a metadata provider from a zipimporter""" - - self.zip_pre = importer.archive + os.sep - self.loader = importer - if importer.prefix: - self.module_path = os.path.join(importer.archive, importer.prefix) - else: - self.module_path = importer.archive - self._setup_prefix() - - -_declare_state('dict', _distribution_finders={}) - - -def register_finder(importer_type, distribution_finder): - """Register `distribution_finder` to find distributions in sys.path items - - `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item - handler), and `distribution_finder` is a callable that, passed a path - item and the importer instance, yields ``Distribution`` instances found on - that path item. See ``pkg_resources.find_on_path`` for an example.""" - _distribution_finders[importer_type] = distribution_finder - - -def find_distributions(path_item, only=False): - """Yield distributions accessible via `path_item`""" - importer = get_importer(path_item) - finder = _find_adapter(_distribution_finders, importer) - return finder(importer, path_item, only) - - -def find_eggs_in_zip(importer, path_item, only=False): - """ - Find eggs in zip files; possibly multiple nested eggs. 
- """ - if importer.archive.endswith('.whl'): - # wheels are not supported with this finder - # they don't have PKG-INFO metadata, and won't ever contain eggs - return - metadata = EggMetadata(importer) - if metadata.has_metadata('PKG-INFO'): - yield Distribution.from_filename(path_item, metadata=metadata) - if only: - # don't yield nested distros - return - for subitem in metadata.resource_listdir(''): - if _is_egg_path(subitem): - subpath = os.path.join(path_item, subitem) - dists = find_eggs_in_zip(zipimport.zipimporter(subpath), subpath) - for dist in dists: - yield dist - elif subitem.lower().endswith(('.dist-info', '.egg-info')): - subpath = os.path.join(path_item, subitem) - submeta = EggMetadata(zipimport.zipimporter(subpath)) - submeta.egg_info = subpath - yield Distribution.from_location(path_item, subitem, submeta) - - -register_finder(zipimport.zipimporter, find_eggs_in_zip) - - -def find_nothing(importer, path_item, only=False): - return () - - -register_finder(object, find_nothing) - - -def _by_version_descending(names): - """ - Given a list of filenames, return them in descending order - by version number. - - >>> names = 'bar', 'foo', 'Python-2.7.10.egg', 'Python-2.7.2.egg' - >>> _by_version_descending(names) - ['Python-2.7.10.egg', 'Python-2.7.2.egg', 'bar', 'foo'] - >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.egg' - >>> _by_version_descending(names) - ['Setuptools-1.2.3.egg', 'Setuptools-1.2.3b1.egg'] - >>> names = 'Setuptools-1.2.3b1.egg', 'Setuptools-1.2.3.post1.egg' - >>> _by_version_descending(names) - ['Setuptools-1.2.3.post1.egg', 'Setuptools-1.2.3b1.egg'] - """ - def try_parse(name): - """ - Attempt to parse as a version or return a null version. - """ - try: - return packaging.version.Version(name) - except Exception: - return packaging.version.Version('0') - - def _by_version(name): - """ - Parse each component of the filename - """ - name, ext = os.path.splitext(name) - parts = itertools.chain(name.split('-'), [ext]) - return [try_parse(part) for part in parts] - - return sorted(names, key=_by_version, reverse=True) - - -def find_on_path(importer, path_item, only=False): - """Yield distributions accessible on a sys.path directory""" - path_item = _normalize_cached(path_item) - - if _is_unpacked_egg(path_item): - yield Distribution.from_filename( - path_item, metadata=PathMetadata( - path_item, os.path.join(path_item, 'EGG-INFO') - ) - ) - return - - entries = ( - os.path.join(path_item, child) - for child in safe_listdir(path_item) - ) - - # for performance, before sorting by version, - # screen entries for only those that will yield - # distributions - filtered = ( - entry - for entry in entries - if dist_factory(path_item, entry, only) - ) - - # scan for .egg and .egg-info in directory - path_item_entries = _by_version_descending(filtered) - for entry in path_item_entries: - fullpath = os.path.join(path_item, entry) - factory = dist_factory(path_item, entry, only) - for dist in factory(fullpath): - yield dist - - -def dist_factory(path_item, entry, only): - """Return a dist_factory for the given entry.""" - lower = entry.lower() - is_egg_info = lower.endswith('.egg-info') - is_dist_info = ( - lower.endswith('.dist-info') and - os.path.isdir(os.path.join(path_item, entry)) - ) - is_meta = is_egg_info or is_dist_info - return ( - distributions_from_metadata - if is_meta else - find_distributions - if not only and _is_egg_path(entry) else - resolve_egg_link - if not only and lower.endswith('.egg-link') else - NoDists() - ) - - -class NoDists: - """ - 
>>> bool(NoDists()) - False - - >>> list(NoDists()('anything')) - [] - """ - def __bool__(self): - return False - - def __call__(self, fullpath): - return iter(()) - - -def safe_listdir(path): - """ - Attempt to list contents of path, but suppress some exceptions. - """ - try: - return os.listdir(path) - except (PermissionError, NotADirectoryError): - pass - except OSError as e: - # Ignore the directory if does not exist, not a directory or - # permission denied - if e.errno not in (errno.ENOTDIR, errno.EACCES, errno.ENOENT): - raise - return () - - -def distributions_from_metadata(path): - root = os.path.dirname(path) - if os.path.isdir(path): - if len(os.listdir(path)) == 0: - # empty metadata dir; skip - return - metadata = PathMetadata(root, path) - else: - metadata = FileMetadata(path) - entry = os.path.basename(path) - yield Distribution.from_location( - root, entry, metadata, precedence=DEVELOP_DIST, - ) - - -def non_empty_lines(path): - """ - Yield non-empty lines from file at path - """ - with open(path) as f: - for line in f: - line = line.strip() - if line: - yield line - - -def resolve_egg_link(path): - """ - Given a path to an .egg-link, resolve distributions - present in the referenced path. - """ - referenced_paths = non_empty_lines(path) - resolved_paths = ( - os.path.join(os.path.dirname(path), ref) - for ref in referenced_paths - ) - dist_groups = map(find_distributions, resolved_paths) - return next(dist_groups, ()) - - -register_finder(pkgutil.ImpImporter, find_on_path) - -if hasattr(importlib_machinery, 'FileFinder'): - register_finder(importlib_machinery.FileFinder, find_on_path) - -_declare_state('dict', _namespace_handlers={}) -_declare_state('dict', _namespace_packages={}) - - -def register_namespace_handler(importer_type, namespace_handler): - """Register `namespace_handler` to declare namespace packages - - `importer_type` is the type or class of a PEP 302 "Importer" (sys.path item - handler), and `namespace_handler` is a callable like this:: - - def namespace_handler(importer, path_entry, moduleName, module): - # return a path_entry to use for child packages - - Namespace handlers are only called if the importer object has already - agreed that it can handle the relevant path item, and they should only - return a subpath if the module __path__ does not already contain an - equivalent subpath. For an example namespace handler, see - ``pkg_resources.file_ns_handler``. 
- """ - _namespace_handlers[importer_type] = namespace_handler - - -def _handle_ns(packageName, path_item): - """Ensure that named package includes a subpath of path_item (if needed)""" - - importer = get_importer(path_item) - if importer is None: - return None - - # use find_spec (PEP 451) and fall-back to find_module (PEP 302) - try: - spec = importer.find_spec(packageName) - except AttributeError: - # capture warnings due to #1111 - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - loader = importer.find_module(packageName) - else: - loader = spec.loader if spec else None - - if loader is None: - return None - module = sys.modules.get(packageName) - if module is None: - module = sys.modules[packageName] = types.ModuleType(packageName) - module.__path__ = [] - _set_parent_ns(packageName) - elif not hasattr(module, '__path__'): - raise TypeError("Not a package:", packageName) - handler = _find_adapter(_namespace_handlers, importer) - subpath = handler(importer, path_item, packageName, module) - if subpath is not None: - path = module.__path__ - path.append(subpath) - importlib.import_module(packageName) - _rebuild_mod_path(path, packageName, module) - return subpath - - -def _rebuild_mod_path(orig_path, package_name, module): - """ - Rebuild module.__path__ ensuring that all entries are ordered - corresponding to their sys.path order - """ - sys_path = [_normalize_cached(p) for p in sys.path] - - def safe_sys_path_index(entry): - """ - Workaround for #520 and #513. - """ - try: - return sys_path.index(entry) - except ValueError: - return float('inf') - - def position_in_sys_path(path): - """ - Return the ordinal of the path based on its position in sys.path - """ - path_parts = path.split(os.sep) - module_parts = package_name.count('.') + 1 - parts = path_parts[:-module_parts] - return safe_sys_path_index(_normalize_cached(os.sep.join(parts))) - - new_path = sorted(orig_path, key=position_in_sys_path) - new_path = [_normalize_cached(p) for p in new_path] - - if isinstance(module.__path__, list): - module.__path__[:] = new_path - else: - module.__path__ = new_path - - -def declare_namespace(packageName): - """Declare that package 'packageName' is a namespace package""" - - _imp.acquire_lock() - try: - if packageName in _namespace_packages: - return - - path = sys.path - parent, _, _ = packageName.rpartition('.') - - if parent: - declare_namespace(parent) - if parent not in _namespace_packages: - __import__(parent) - try: - path = sys.modules[parent].__path__ - except AttributeError as e: - raise TypeError("Not a package:", parent) from e - - # Track what packages are namespaces, so when new path items are added, - # they can be updated - _namespace_packages.setdefault(parent or None, []).append(packageName) - _namespace_packages.setdefault(packageName, []) - - for path_item in path: - # Ensure all the parent's path items are reflected in the child, - # if they apply - _handle_ns(packageName, path_item) - - finally: - _imp.release_lock() - - -def fixup_namespace_packages(path_item, parent=None): - """Ensure that previously-declared namespace packages include path_item""" - _imp.acquire_lock() - try: - for package in _namespace_packages.get(parent, ()): - subpath = _handle_ns(package, path_item) - if subpath: - fixup_namespace_packages(subpath, package) - finally: - _imp.release_lock() - - -def file_ns_handler(importer, path_item, packageName, module): - """Compute an ns-package subpath for a filesystem or zipfile importer""" - - subpath = os.path.join(path_item, 
packageName.split('.')[-1]) - normalized = _normalize_cached(subpath) - for item in module.__path__: - if _normalize_cached(item) == normalized: - break - else: - # Only return the path if it's not already there - return subpath - - -register_namespace_handler(pkgutil.ImpImporter, file_ns_handler) -register_namespace_handler(zipimport.zipimporter, file_ns_handler) - -if hasattr(importlib_machinery, 'FileFinder'): - register_namespace_handler(importlib_machinery.FileFinder, file_ns_handler) - - -def null_ns_handler(importer, path_item, packageName, module): - return None - - -register_namespace_handler(object, null_ns_handler) - - -def normalize_path(filename): - """Normalize a file/dir name for comparison purposes""" - return os.path.normcase(os.path.realpath(os.path.normpath( - _cygwin_patch(filename)))) - - -def _cygwin_patch(filename): # pragma: nocover - """ - Contrary to POSIX 2008, on Cygwin, getcwd (3) contains - symlink components. Using - os.path.abspath() works around this limitation. A fix in os.getcwd() - would probably be better, in Cygwin even more so, except - that this seems to be by design... - """ - return os.path.abspath(filename) if sys.platform == 'cygwin' else filename - - -def _normalize_cached(filename, _cache={}): - try: - return _cache[filename] - except KeyError: - _cache[filename] = result = normalize_path(filename) - return result - - -def _is_egg_path(path): - """ - Determine if given path appears to be an egg. - """ - return _is_zip_egg(path) or _is_unpacked_egg(path) - - -def _is_zip_egg(path): - return ( - path.lower().endswith('.egg') and - os.path.isfile(path) and - zipfile.is_zipfile(path) - ) - - -def _is_unpacked_egg(path): - """ - Determine if given path appears to be an unpacked egg. - """ - return ( - path.lower().endswith('.egg') and - os.path.isfile(os.path.join(path, 'EGG-INFO', 'PKG-INFO')) - ) - - -def _set_parent_ns(packageName): - parts = packageName.split('.') - name = parts.pop() - if parts: - parent = '.'.join(parts) - setattr(sys.modules[parent], name, sys.modules[packageName]) - - -MODULE = re.compile(r"\w+(\.\w+)*$").match -EGG_NAME = re.compile( - r""" - (?P<name>[^-]+) ( - -(?P<ver>[^-]+) ( - -py(?P<pyver>[^-]+) ( - -(?P<plat>.+) - )? - )? - )? - """, - re.VERBOSE | re.IGNORECASE, -).match - - -class EntryPoint: - """Object representing an advertised importable object""" - - def __init__(self, name, module_name, attrs=(), extras=(), dist=None): - if not MODULE(module_name): - raise ValueError("Invalid module name", module_name) - self.name = name - self.module_name = module_name - self.attrs = tuple(attrs) - self.extras = tuple(extras) - self.dist = dist - - def __str__(self): - s = "%s = %s" % (self.name, self.module_name) - if self.attrs: - s += ':' + '.'.join(self.attrs) - if self.extras: - s += ' [%s]' % ','.join(self.extras) - return s - - def __repr__(self): - return "EntryPoint.parse(%r)" % str(self) - - def load(self, require=True, *args, **kwargs): - """ - Require packages for this EntryPoint, then resolve it. - """ - if not require or args or kwargs: - warnings.warn( - "Parameters to load are deprecated. Call .resolve and " - ".require separately.", - PkgResourcesDeprecationWarning, - stacklevel=2, - ) - if require: - self.require(*args, **kwargs) - return self.resolve() - - def resolve(self): - """ - Resolve the entry point from its module and attrs. 
- """ - module = __import__(self.module_name, fromlist=['__name__'], level=0) - try: - return functools.reduce(getattr, self.attrs, module) - except AttributeError as exc: - raise ImportError(str(exc)) from exc - - def require(self, env=None, installer=None): - if self.extras and not self.dist: - raise UnknownExtra("Can't require() without a distribution", self) - - # Get the requirements for this entry point with all its extras and - # then resolve them. We have to pass `extras` along when resolving so - # that the working set knows what extras we want. Otherwise, for - # dist-info distributions, the working set will assume that the - # requirements for that extra are purely optional and skip over them. - reqs = self.dist.requires(self.extras) - items = working_set.resolve(reqs, env, installer, extras=self.extras) - list(map(working_set.add, items)) - - pattern = re.compile( - r'\s*' - r'(?P.+?)\s*' - r'=\s*' - r'(?P[\w.]+)\s*' - r'(:\s*(?P[\w.]+))?\s*' - r'(?P\[.*\])?\s*$' - ) - - @classmethod - def parse(cls, src, dist=None): - """Parse a single entry point from string `src` - - Entry point syntax follows the form:: - - name = some.module:some.attr [extra1, extra2] - - The entry name and module name are required, but the ``:attrs`` and - ``[extras]`` parts are optional - """ - m = cls.pattern.match(src) - if not m: - msg = "EntryPoint must be in 'name=module:attrs [extras]' format" - raise ValueError(msg, src) - res = m.groupdict() - extras = cls._parse_extras(res['extras']) - attrs = res['attr'].split('.') if res['attr'] else () - return cls(res['name'], res['module'], attrs, extras, dist) - - @classmethod - def _parse_extras(cls, extras_spec): - if not extras_spec: - return () - req = Requirement.parse('x' + extras_spec) - if req.specs: - raise ValueError() - return req.extras - - @classmethod - def parse_group(cls, group, lines, dist=None): - """Parse an entry point group""" - if not MODULE(group): - raise ValueError("Invalid group name", group) - this = {} - for line in yield_lines(lines): - ep = cls.parse(line, dist) - if ep.name in this: - raise ValueError("Duplicate entry point", group, ep.name) - this[ep.name] = ep - return this - - @classmethod - def parse_map(cls, data, dist=None): - """Parse a map of entry point groups""" - if isinstance(data, dict): - data = data.items() - else: - data = split_sections(data) - maps = {} - for group, lines in data: - if group is None: - if not lines: - continue - raise ValueError("Entry points must be listed in groups") - group = group.strip() - if group in maps: - raise ValueError("Duplicate group name", group) - maps[group] = cls.parse_group(group, lines, dist) - return maps - - -def _version_from_file(lines): - """ - Given an iterable of lines from a Metadata file, return - the value of the Version field, if present, or None otherwise. 
- """ - def is_version_line(line): - return line.lower().startswith('version:') - version_lines = filter(is_version_line, lines) - line = next(iter(version_lines), '') - _, _, value = line.partition(':') - return safe_version(value.strip()) or None - - -class Distribution: - """Wrap an actual or potential sys.path entry w/metadata""" - PKG_INFO = 'PKG-INFO' - - def __init__( - self, location=None, metadata=None, project_name=None, - version=None, py_version=PY_MAJOR, platform=None, - precedence=EGG_DIST): - self.project_name = safe_name(project_name or 'Unknown') - if version is not None: - self._version = safe_version(version) - self.py_version = py_version - self.platform = platform - self.location = location - self.precedence = precedence - self._provider = metadata or empty_provider - - @classmethod - def from_location(cls, location, basename, metadata=None, **kw): - project_name, version, py_version, platform = [None] * 4 - basename, ext = os.path.splitext(basename) - if ext.lower() in _distributionImpl: - cls = _distributionImpl[ext.lower()] - - match = EGG_NAME(basename) - if match: - project_name, version, py_version, platform = match.group( - 'name', 'ver', 'pyver', 'plat' - ) - return cls( - location, metadata, project_name=project_name, version=version, - py_version=py_version, platform=platform, **kw - )._reload_version() - - def _reload_version(self): - return self - - @property - def hashcmp(self): - return ( - self.parsed_version, - self.precedence, - self.key, - self.location, - self.py_version or '', - self.platform or '', - ) - - def __hash__(self): - return hash(self.hashcmp) - - def __lt__(self, other): - return self.hashcmp < other.hashcmp - - def __le__(self, other): - return self.hashcmp <= other.hashcmp - - def __gt__(self, other): - return self.hashcmp > other.hashcmp - - def __ge__(self, other): - return self.hashcmp >= other.hashcmp - - def __eq__(self, other): - if not isinstance(other, self.__class__): - # It's not a Distribution, so they are not equal - return False - return self.hashcmp == other.hashcmp - - def __ne__(self, other): - return not self == other - - # These properties have to be lazy so that we don't have to load any - # metadata until/unless it's actually needed. (i.e., some distributions - # may not know their name or version without loading PKG-INFO) - - @property - def key(self): - try: - return self._key - except AttributeError: - self._key = key = self.project_name.lower() - return key - - @property - def parsed_version(self): - if not hasattr(self, "_parsed_version"): - self._parsed_version = parse_version(self.version) - - return self._parsed_version - - def _warn_legacy_version(self): - LV = packaging.version.LegacyVersion - is_legacy = isinstance(self._parsed_version, LV) - if not is_legacy: - return - - # While an empty version is technically a legacy version and - # is not a valid PEP 440 version, it's also unlikely to - # actually come from someone and instead it is more likely that - # it comes from setuptools attempting to parse a filename and - # including it in the list. So for that we'll gate this warning - # on if the version is anything at all or not. - if not self.version: - return - - tmpl = textwrap.dedent(""" - '{project_name} ({version})' is being parsed as a legacy, - non PEP 440, - version. You may find odd behavior and sort order. - In particular it will be sorted as less than 0.0. It - is recommended to migrate to PEP 440 compatible - versions. 
- """).strip().replace('\n', ' ') - - warnings.warn(tmpl.format(**vars(self)), PEP440Warning) - - @property - def version(self): - try: - return self._version - except AttributeError as e: - version = self._get_version() - if version is None: - path = self._get_metadata_path_for_display(self.PKG_INFO) - msg = ( - "Missing 'Version:' header and/or {} file at path: {}" - ).format(self.PKG_INFO, path) - raise ValueError(msg, self) from e - - return version - - @property - def _dep_map(self): - """ - A map of extra to its list of (direct) requirements - for this distribution, including the null extra. - """ - try: - return self.__dep_map - except AttributeError: - self.__dep_map = self._filter_extras(self._build_dep_map()) - return self.__dep_map - - @staticmethod - def _filter_extras(dm): - """ - Given a mapping of extras to dependencies, strip off - environment markers and filter out any dependencies - not matching the markers. - """ - for extra in list(filter(None, dm)): - new_extra = extra - reqs = dm.pop(extra) - new_extra, _, marker = extra.partition(':') - fails_marker = marker and ( - invalid_marker(marker) - or not evaluate_marker(marker) - ) - if fails_marker: - reqs = [] - new_extra = safe_extra(new_extra) or None - - dm.setdefault(new_extra, []).extend(reqs) - return dm - - def _build_dep_map(self): - dm = {} - for name in 'requires.txt', 'depends.txt': - for extra, reqs in split_sections(self._get_metadata(name)): - dm.setdefault(extra, []).extend(parse_requirements(reqs)) - return dm - - def requires(self, extras=()): - """List of Requirements needed for this distro if `extras` are used""" - dm = self._dep_map - deps = [] - deps.extend(dm.get(None, ())) - for ext in extras: - try: - deps.extend(dm[safe_extra(ext)]) - except KeyError as e: - raise UnknownExtra( - "%s has no such extra feature %r" % (self, ext) - ) from e - return deps - - def _get_metadata_path_for_display(self, name): - """ - Return the path to the given metadata file, if available. - """ - try: - # We need to access _get_metadata_path() on the provider object - # directly rather than through this class's __getattr__() - # since _get_metadata_path() is marked private. - path = self._provider._get_metadata_path(name) - - # Handle exceptions e.g. in case the distribution's metadata - # provider doesn't support _get_metadata_path(). 
- except Exception: - return '[could not detect]' - - return path - - def _get_metadata(self, name): - if self.has_metadata(name): - for line in self.get_metadata_lines(name): - yield line - - def _get_version(self): - lines = self._get_metadata(self.PKG_INFO) - version = _version_from_file(lines) - - return version - - def activate(self, path=None, replace=False): - """Ensure distribution is importable on `path` (default=sys.path)""" - if path is None: - path = sys.path - self.insert_on(path, replace=replace) - if path is sys.path: - fixup_namespace_packages(self.location) - for pkg in self._get_metadata('namespace_packages.txt'): - if pkg in sys.modules: - declare_namespace(pkg) - - def egg_name(self): - """Return what this distribution's standard .egg filename should be""" - filename = "%s-%s-py%s" % ( - to_filename(self.project_name), to_filename(self.version), - self.py_version or PY_MAJOR - ) - - if self.platform: - filename += '-' + self.platform - return filename - - def __repr__(self): - if self.location: - return "%s (%s)" % (self, self.location) - else: - return str(self) - - def __str__(self): - try: - version = getattr(self, 'version', None) - except ValueError: - version = None - version = version or "[unknown version]" - return "%s %s" % (self.project_name, version) - - def __getattr__(self, attr): - """Delegate all unrecognized public attributes to .metadata provider""" - if attr.startswith('_'): - raise AttributeError(attr) - return getattr(self._provider, attr) - - def __dir__(self): - return list( - set(super(Distribution, self).__dir__()) - | set( - attr for attr in self._provider.__dir__() - if not attr.startswith('_') - ) - ) - - @classmethod - def from_filename(cls, filename, metadata=None, **kw): - return cls.from_location( - _normalize_cached(filename), os.path.basename(filename), metadata, - **kw - ) - - def as_requirement(self): - """Return a ``Requirement`` that matches this distribution exactly""" - if isinstance(self.parsed_version, packaging.version.Version): - spec = "%s==%s" % (self.project_name, self.parsed_version) - else: - spec = "%s===%s" % (self.project_name, self.parsed_version) - - return Requirement.parse(spec) - - def load_entry_point(self, group, name): - """Return the `name` entry point of `group` or raise ImportError""" - ep = self.get_entry_info(group, name) - if ep is None: - raise ImportError("Entry point %r not found" % ((group, name),)) - return ep.load() - - def get_entry_map(self, group=None): - """Return the entry point map for `group`, or the full entry map""" - try: - ep_map = self._ep_map - except AttributeError: - ep_map = self._ep_map = EntryPoint.parse_map( - self._get_metadata('entry_points.txt'), self - ) - if group is not None: - return ep_map.get(group, {}) - return ep_map - - def get_entry_info(self, group, name): - """Return the EntryPoint object for `group`+`name`, or ``None``""" - return self.get_entry_map(group).get(name) - - # FIXME: 'Distribution.insert_on' is too complex (13) - def insert_on(self, path, loc=None, replace=False): # noqa: C901 - """Ensure self.location is on path - - If replace=False (default): - - If location is already in path anywhere, do nothing. - - Else: - - If it's an egg and its parent directory is on path, - insert just ahead of the parent. - - Else: add to the end of path. - If replace=True: - - If location is already on path anywhere (not eggs) - or higher priority than its parent (eggs) - do nothing. 
- - Else: - - If it's an egg and its parent directory is on path, - insert just ahead of the parent, - removing any lower-priority entries. - - Else: add it to the front of path. - """ - - loc = loc or self.location - if not loc: - return - - nloc = _normalize_cached(loc) - bdir = os.path.dirname(nloc) - npath = [(p and _normalize_cached(p) or p) for p in path] - - for p, item in enumerate(npath): - if item == nloc: - if replace: - break - else: - # don't modify path (even removing duplicates) if - # found and not replace - return - elif item == bdir and self.precedence == EGG_DIST: - # if it's an .egg, give it precedence over its directory - # UNLESS it's already been added to sys.path and replace=False - if (not replace) and nloc in npath[p:]: - return - if path is sys.path: - self.check_version_conflict() - path.insert(p, loc) - npath.insert(p, nloc) - break - else: - if path is sys.path: - self.check_version_conflict() - if replace: - path.insert(0, loc) - else: - path.append(loc) - return - - # p is the spot where we found or inserted loc; now remove duplicates - while True: - try: - np = npath.index(nloc, p + 1) - except ValueError: - break - else: - del npath[np], path[np] - # ha! - p = np - - return - - def check_version_conflict(self): - if self.key == 'setuptools': - # ignore the inevitable setuptools self-conflicts :( - return - - nsp = dict.fromkeys(self._get_metadata('namespace_packages.txt')) - loc = normalize_path(self.location) - for modname in self._get_metadata('top_level.txt'): - if (modname not in sys.modules or modname in nsp - or modname in _namespace_packages): - continue - if modname in ('pkg_resources', 'setuptools', 'site'): - continue - fn = getattr(sys.modules[modname], '__file__', None) - if fn and (normalize_path(fn).startswith(loc) or - fn.startswith(self.location)): - continue - issue_warning( - "Module %s was already imported from %s, but %s is being added" - " to sys.path" % (modname, fn, self.location), - ) - - def has_version(self): - try: - self.version - except ValueError: - issue_warning("Unbuilt egg for " + repr(self)) - return False - return True - - def clone(self, **kw): - """Copy this distribution, substituting in any changed keyword args""" - names = 'project_name version py_version platform location precedence' - for attr in names.split(): - kw.setdefault(attr, getattr(self, attr, None)) - kw.setdefault('metadata', self._provider) - return self.__class__(**kw) - - @property - def extras(self): - return [dep for dep in self._dep_map if dep] - - -class EggInfoDistribution(Distribution): - def _reload_version(self): - """ - Packages installed by distutils (e.g. numpy or scipy), - which uses an old safe_version, and so - their version numbers can get mangled when - converted to filenames (e.g., 1.11.0.dev0+2329eae to - 1.11.0.dev0_2329eae). These distributions will not be - parsed properly - downstream by Distribution and safe_version, so - take an extra step and try to get the version number from - the metadata file itself instead of the filename. - """ - md_version = self._get_version() - if md_version: - self._version = md_version - return self - - -class DistInfoDistribution(Distribution): - """ - Wrap an actual or potential sys.path entry - w/metadata, .dist-info style. 
- """ - PKG_INFO = 'METADATA' - EQEQ = re.compile(r"([\(,])\s*(\d.*?)\s*([,\)])") - - @property - def _parsed_pkg_info(self): - """Parse and cache metadata""" - try: - return self._pkg_info - except AttributeError: - metadata = self.get_metadata(self.PKG_INFO) - self._pkg_info = email.parser.Parser().parsestr(metadata) - return self._pkg_info - - @property - def _dep_map(self): - try: - return self.__dep_map - except AttributeError: - self.__dep_map = self._compute_dependencies() - return self.__dep_map - - def _compute_dependencies(self): - """Recompute this distribution's dependencies.""" - dm = self.__dep_map = {None: []} - - reqs = [] - # Including any condition expressions - for req in self._parsed_pkg_info.get_all('Requires-Dist') or []: - reqs.extend(parse_requirements(req)) - - def reqs_for_extra(extra): - for req in reqs: - if not req.marker or req.marker.evaluate({'extra': extra}): - yield req - - common = types.MappingProxyType(dict.fromkeys(reqs_for_extra(None))) - dm[None].extend(common) - - for extra in self._parsed_pkg_info.get_all('Provides-Extra') or []: - s_extra = safe_extra(extra.strip()) - dm[s_extra] = [r for r in reqs_for_extra(extra) if r not in common] - - return dm - - -_distributionImpl = { - '.egg': Distribution, - '.egg-info': EggInfoDistribution, - '.dist-info': DistInfoDistribution, -} - - -def issue_warning(*args, **kw): - level = 1 - g = globals() - try: - # find the first stack frame that is *not* code in - # the pkg_resources module, to use for the warning - while sys._getframe(level).f_globals is g: - level += 1 - except ValueError: - pass - warnings.warn(stacklevel=level + 1, *args, **kw) - - -def parse_requirements(strs): - """ - Yield ``Requirement`` objects for each specification in `strs`. - - `strs` must be a string, or a (possibly-nested) iterable thereof. - """ - return map(Requirement, join_continuation(map(drop_comment, yield_lines(strs)))) - - -class RequirementParseError(packaging.requirements.InvalidRequirement): - "Compatibility wrapper for InvalidRequirement" - - -class Requirement(packaging.requirements.Requirement): - def __init__(self, requirement_string): - """DO NOT CALL THIS UNDOCUMENTED METHOD; use Requirement.parse()!""" - super(Requirement, self).__init__(requirement_string) - self.unsafe_name = self.name - project_name = safe_name(self.name) - self.project_name, self.key = project_name, project_name.lower() - self.specs = [ - (spec.operator, spec.version) for spec in self.specifier] - self.extras = tuple(map(safe_extra, self.extras)) - self.hashCmp = ( - self.key, - self.url, - self.specifier, - frozenset(self.extras), - str(self.marker) if self.marker else None, - ) - self.__hash = hash(self.hashCmp) - - def __eq__(self, other): - return ( - isinstance(other, Requirement) and - self.hashCmp == other.hashCmp - ) - - def __ne__(self, other): - return not self == other - - def __contains__(self, item): - if isinstance(item, Distribution): - if item.key != self.key: - return False - - item = item.version - - # Allow prereleases always in order to match the previous behavior of - # this method. In the future this should be smarter and follow PEP 440 - # more accurately. - return self.specifier.contains(item, prereleases=True) - - def __hash__(self): - return self.__hash - - def __repr__(self): - return "Requirement.parse(%r)" % str(self) - - @staticmethod - def parse(s): - req, = parse_requirements(s) - return req - - -def _always_object(classes): - """ - Ensure object appears in the mro even - for old-style classes. 
- """ - if object not in classes: - return classes + (object,) - return classes - - -def _find_adapter(registry, ob): - """Return an adapter factory for `ob` from `registry`""" - types = _always_object(inspect.getmro(getattr(ob, '__class__', type(ob)))) - for t in types: - if t in registry: - return registry[t] - - -def ensure_directory(path): - """Ensure that the parent directory of `path` exists""" - dirname = os.path.dirname(path) - os.makedirs(dirname, exist_ok=True) - - -def _bypass_ensure_directory(path): - """Sandbox-bypassing version of ensure_directory()""" - if not WRITE_SUPPORT: - raise IOError('"os.mkdir" not supported on this platform.') - dirname, filename = split(path) - if dirname and filename and not isdir(dirname): - _bypass_ensure_directory(dirname) - try: - mkdir(dirname, 0o755) - except FileExistsError: - pass - - -def split_sections(s): - """Split a string or iterable thereof into (section, content) pairs - - Each ``section`` is a stripped version of the section header ("[section]") - and each ``content`` is a list of stripped lines excluding blank lines and - comment-only lines. If there are any such lines before the first section - header, they're returned in a first ``section`` of ``None``. - """ - section = None - content = [] - for line in yield_lines(s): - if line.startswith("["): - if line.endswith("]"): - if section or content: - yield section, content - section = line[1:-1].strip() - content = [] - else: - raise ValueError("Invalid section heading", line) - else: - content.append(line) - - # wrap up last segment - yield section, content - - -def _mkstemp(*args, **kw): - old_open = os.open - try: - # temporarily bypass sandboxing - os.open = os_open - return tempfile.mkstemp(*args, **kw) - finally: - # and then put it back - os.open = old_open - - -# Silence the PEP440Warning by default, so that end users don't get hit by it -# randomly just because they use pkg_resources. We want to append the rule -# because we want earlier uses of filterwarnings to take precedence over this -# one. -warnings.filterwarnings("ignore", category=PEP440Warning, append=True) - - -# from jaraco.functools 1.3 -def _call_aside(f, *args, **kwargs): - f(*args, **kwargs) - return f - - -@_call_aside -def _initialize(g=globals()): - "Set up global resource manager (deliberately not state-saved)" - manager = ResourceManager() - g['_manager'] = manager - g.update( - (name, getattr(manager, name)) - for name in dir(manager) - if not name.startswith('_') - ) - - -class PkgResourcesDeprecationWarning(Warning): - """ - Base class for warning about deprecations in ``pkg_resources`` - - This class is not derived from ``DeprecationWarning``, and as such is - visible by default. - """ - - -@_call_aside -def _initialize_master_working_set(): - """ - Prepare the master working set and make the ``require()`` - API available. - - This function has explicit effects on the global state - of pkg_resources. It is intended to be invoked once at - the initialization of this module. - - Invocation by other packages is unsupported and done - at their own risk. 
- """ - working_set = WorkingSet._build_master() - _declare_state('object', working_set=working_set) - - require = working_set.require - iter_entry_points = working_set.iter_entry_points - add_activation_listener = working_set.subscribe - run_script = working_set.run_script - # backward compatibility - run_main = run_script - # Activate all distributions already on sys.path with replace=False and - # ensure that all distributions added to the working set in the future - # (e.g. by calling ``require()``) will get activated as well, - # with higher priority (replace=True). - tuple( - dist.activate(replace=False) - for dist in working_set - ) - add_activation_listener( - lambda dist: dist.activate(replace=True), - existing=False, - ) - working_set.entries = [] - # match order - list(map(working_set.add_entry, sys.path)) - globals().update(locals()) diff --git a/spaces/RdnUser77/SpacIO_v1/app.py b/spaces/RdnUser77/SpacIO_v1/app.py deleted file mode 100644 index cb192fa0ba61ee25f9809e912232038fb37f7a47..0000000000000000000000000000000000000000 --- a/spaces/RdnUser77/SpacIO_v1/app.py +++ /dev/null @@ -1,89 +0,0 @@ -import gradio as gr -from random import randint -from all_models import models - - - -def load_fn(models): - global models_load - models_load = {} - - for model in models: - if model not in models_load.keys(): - try: - m = gr.load(f'models/{model}') - except Exception as error: - m = gr.Interface(lambda txt: None, ['text'], ['image']) - models_load.update({model: m}) - - -load_fn(models) - - -num_models = 6 -default_models = models[:num_models] - - - -def extend_choices(choices): - return choices + (num_models - len(choices)) * ['NA'] - - -def update_imgbox(choices): - choices_plus = extend_choices(choices) - return [gr.Image(None, label = m, visible = (m != 'NA')) for m in choices_plus] - - -def gen_fn(model_str, prompt): - if model_str == 'NA': - return None - noise = str(randint(0, 99999999999)) - return models_load[model_str](f'{prompt} {noise}') - - - -with gr.Blocks() as demo: - with gr.Tab('Multiple models'): - with gr.Accordion('Model selection'): - model_choice = gr.Dropdown(models, label = f'Choose up to {num_models} different models', value = default_models, multiselect = True, max_choices = num_models, interactive = True, filterable = False) - - txt_input = gr.Textbox(label = 'Prompt text') - gen_button = gr.Button('Generate') - stop_button = gr.Button('Stop', variant = 'secondary', interactive = False) - gen_button.click(lambda s: gr.update(interactive = True), None, stop_button) - - with gr.Row(): - output = [gr.Image(label = m) for m in default_models] - current_models = [gr.Textbox(m, visible = False) for m in default_models] - - model_choice.change(update_imgbox, model_choice, output) - model_choice.change(extend_choices, model_choice, current_models) - - for m, o in zip(current_models, output): - gen_event = gen_button.click(gen_fn, [m, txt_input], o) - stop_button.click(lambda s: gr.update(interactive = False), None, stop_button, cancels = [gen_event]) - - - with gr.Tab('Single model'): - model_choice2 = gr.Dropdown(models, label = 'Choose model', value = models[0], filterable = False) - txt_input2 = gr.Textbox(label = 'Prompt text') - - max_images = 6 - num_images = gr.Slider(1, max_images, value = max_images, step = 1, label = 'Number of images') - - gen_button2 = gr.Button('Generate') - stop_button2 = gr.Button('Stop', variant = 'secondary', interactive = False) - gen_button2.click(lambda s: gr.update(interactive = True), None, stop_button2) - - with gr.Row(): - 
output2 = [gr.Image(label = '') for _ in range(max_images)] - - for i, o in enumerate(output2): - img_i = gr.Number(i, visible = False) - num_images.change(lambda i, n: gr.update(visible = (i < n)), [img_i, num_images], o) - gen_event2 = gen_button2.click(lambda i, n, m, t: gen_fn(m, t) if (i < n) else None, [img_i, num_images, model_choice2, txt_input2], o) - stop_button2.click(lambda s: gr.update(interactive = False), None, stop_button2, cancels = [gen_event2]) - - -demo.queue(concurrency_count = 36) -demo.launch() \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/third_party/ALIKE/alike.py b/spaces/Realcat/image-matching-webui/third_party/ALIKE/alike.py deleted file mode 100644 index b975f806f3e0f593a3564ae52d9d08187f514b34..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/ALIKE/alike.py +++ /dev/null @@ -1,198 +0,0 @@ -import logging -import os -import cv2 -import torch -from copy import deepcopy -import torch.nn.functional as F -from torchvision.transforms import ToTensor -import math - -from alnet import ALNet -from soft_detect import DKD -import time - -configs = { - "alike-t": { - "c1": 8, - "c2": 16, - "c3": 32, - "c4": 64, - "dim": 64, - "single_head": True, - "radius": 2, - "model_path": os.path.join(os.path.split(__file__)[0], "models", "alike-t.pth"), - }, - "alike-s": { - "c1": 8, - "c2": 16, - "c3": 48, - "c4": 96, - "dim": 96, - "single_head": True, - "radius": 2, - "model_path": os.path.join(os.path.split(__file__)[0], "models", "alike-s.pth"), - }, - "alike-n": { - "c1": 16, - "c2": 32, - "c3": 64, - "c4": 128, - "dim": 128, - "single_head": True, - "radius": 2, - "model_path": os.path.join(os.path.split(__file__)[0], "models", "alike-n.pth"), - }, - "alike-l": { - "c1": 32, - "c2": 64, - "c3": 128, - "c4": 128, - "dim": 128, - "single_head": False, - "radius": 2, - "model_path": os.path.join(os.path.split(__file__)[0], "models", "alike-l.pth"), - }, -} - - -class ALike(ALNet): - def __init__( - self, - # ================================== feature encoder - c1: int = 32, - c2: int = 64, - c3: int = 128, - c4: int = 128, - dim: int = 128, - single_head: bool = False, - # ================================== detect parameters - radius: int = 2, - top_k: int = 500, - scores_th: float = 0.5, - n_limit: int = 5000, - device: str = "cpu", - model_path: str = "", - ): - super().__init__(c1, c2, c3, c4, dim, single_head) - self.radius = radius - self.top_k = top_k - self.n_limit = n_limit - self.scores_th = scores_th - self.dkd = DKD( - radius=self.radius, - top_k=self.top_k, - scores_th=self.scores_th, - n_limit=self.n_limit, - ) - self.device = device - - if model_path != "": - state_dict = torch.load(model_path, self.device) - self.load_state_dict(state_dict) - self.to(self.device) - self.eval() - logging.info(f"Loaded model parameters from {model_path}") - logging.info( - f"Number of model parameters: {sum(p.numel() for p in self.parameters() if p.requires_grad) / 1e3}KB" - ) - - def extract_dense_map(self, image, ret_dict=False): - # ==================================================== - # check image size, should be integer multiples of 2^5 - # if it is not a integer multiples of 2^5, padding zeros - device = image.device - b, c, h, w = image.shape - h_ = math.ceil(h / 32) * 32 if h % 32 != 0 else h - w_ = math.ceil(w / 32) * 32 if w % 32 != 0 else w - if h_ != h: - h_padding = torch.zeros(b, c, h_ - h, w, device=device) - image = torch.cat([image, h_padding], dim=2) - if w_ != w: - w_padding = 
torch.zeros(b, c, h_, w_ - w, device=device) - image = torch.cat([image, w_padding], dim=3) - # ==================================================== - - scores_map, descriptor_map = super().forward(image) - - # ==================================================== - if h_ != h or w_ != w: - descriptor_map = descriptor_map[:, :, :h, :w] - scores_map = scores_map[:, :, :h, :w] # Bx1xHxW - # ==================================================== - - # BxCxHxW - descriptor_map = torch.nn.functional.normalize(descriptor_map, p=2, dim=1) - - if ret_dict: - return { - "descriptor_map": descriptor_map, - "scores_map": scores_map, - } - else: - return descriptor_map, scores_map - - def forward(self, img, image_size_max=99999, sort=False, sub_pixel=False): - """ - :param img: np.array HxWx3, RGB - :param image_size_max: maximum image size, otherwise, the image will be resized - :param sort: sort keypoints by scores - :param sub_pixel: whether to use sub-pixel accuracy - :return: a dictionary with 'keypoints', 'descriptors', 'scores', and 'time' - """ - H, W, three = img.shape - assert three == 3, "input image shape should be [HxWx3]" - - # ==================== image size constraint - image = deepcopy(img) - max_hw = max(H, W) - if max_hw > image_size_max: - ratio = float(image_size_max / max_hw) - image = cv2.resize(image, dsize=None, fx=ratio, fy=ratio) - - # ==================== convert image to tensor - image = ( - torch.from_numpy(image) - .to(self.device) - .to(torch.float32) - .permute(2, 0, 1)[None] - / 255.0 - ) - - # ==================== extract keypoints - start = time.time() - - with torch.no_grad(): - descriptor_map, scores_map = self.extract_dense_map(image) - keypoints, descriptors, scores, _ = self.dkd( - scores_map, descriptor_map, sub_pixel=sub_pixel - ) - keypoints, descriptors, scores = keypoints[0], descriptors[0], scores[0] - keypoints = (keypoints + 1) / 2 * keypoints.new_tensor([[W - 1, H - 1]]) - - if sort: - indices = torch.argsort(scores, descending=True) - keypoints = keypoints[indices] - descriptors = descriptors[indices] - scores = scores[indices] - - end = time.time() - - return { - "keypoints": keypoints.cpu().numpy(), - "descriptors": descriptors.cpu().numpy(), - "scores": scores.cpu().numpy(), - "scores_map": scores_map.cpu().numpy(), - "time": end - start, - } - - -if __name__ == "__main__": - import numpy as np - from thop import profile - - net = ALike(c1=32, c2=64, c3=128, c4=128, dim=128, single_head=False) - - image = np.random.random((640, 480, 3)).astype(np.float32) - flops, params = profile(net, inputs=(image, 9999, False), verbose=False) - print("{:<30} {:<8} GFLops".format("Computational complexity: ", flops / 1e9)) - print("{:<30} {:<8} KB".format("Number of parameters: ", params / 1e3)) diff --git a/spaces/Reeve/Ohayou_Face/datasets/__init__.py b/spaces/Reeve/Ohayou_Face/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass 
- - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/Rojban/LangFlow/Dockerfile b/spaces/Rojban/LangFlow/Dockerfile deleted file mode 100644 index 5ab7753f3fbfc776659d3b837c2f54b73215c5a1..0000000000000000000000000000000000000000 --- a/spaces/Rojban/LangFlow/Dockerfile +++ /dev/null @@ -1,16 +0,0 @@ -FROM python:3.10-slim - -RUN apt-get update && apt-get install gcc g++ git make -y -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -WORKDIR $HOME/app - -COPY --chown=user . $HOME/app - - -RUN pip install langflow>=0.0.74 -U --user -RUN pip install wolframalpha -U --user -CMD ["langflow", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/RubenAMtz/pothole_detector/app.py b/spaces/RubenAMtz/pothole_detector/app.py deleted file mode 100644 index 83fd6cad01b8d94d6bd95a2b1ee6a3fc91d868bb..0000000000000000000000000000000000000000 --- a/spaces/RubenAMtz/pothole_detector/app.py +++ /dev/null @@ -1,16 +0,0 @@ -import gradio as gr -from fastai.vision.all import * - -learn = load_learner('export.pkl') -categories = ('damaged', 'pristine') - -def predict(img): - pred, idx, probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -examples = ['street04.png', 'street06.png'] - -iface = gr.Interface(fn=predict, inputs=image, outputs=label, examples=examples) -iface.launch(inline=False) \ No newline at end of file diff --git a/spaces/SUPERSHANKY/ControlNet_Colab/README.md b/spaces/SUPERSHANKY/ControlNet_Colab/README.md deleted file mode 100644 index 2a36b98f3f43098ed4891405c085f19d1afc1f72..0000000000000000000000000000000000000000 --- a/spaces/SUPERSHANKY/ControlNet_Colab/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: ControlNet with other models -emoji: 😻 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -python_version: 3.10.9 -app_file: app.py -pinned: false -license: mit -duplicated_from: hysts/ControlNet-with-other-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sangamesh/Cat_Dog_Classifier/README.md b/spaces/Sangamesh/Cat_Dog_Classifier/README.md deleted file mode 100644 index 5a7fef323bd3f9fd3d861febd5ec5d5d67e78df4..0000000000000000000000000000000000000000 --- a/spaces/Sangamesh/Cat_Dog_Classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cat Dog Classifier -emoji: 🐨 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/predict/p_poseNMS.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/predict/p_poseNMS.py deleted file mode 100644 index ce8cbc9f1afffaeb8dc52009012ee43e26264399..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/train_sppe/src/predict/p_poseNMS.py +++ /dev/null @@ -1,313 +0,0 @@ -# ----------------------------------------------------- -# Copyright (c) Shanghai Jiao Tong University. All rights reserved. 
-# Written by Jiefeng Li (jeff.lee.sjtu@gmail.com) -# ----------------------------------------------------- - -import torch -import json -import os -import numpy as np - -''' Constant Configuration ''' -delta1 = 1 -mu = 1.7 -delta2 = 1.3 -gamma = 22.48 -scoreThreds = 0.15 -matchThreds = 5 -alpha = 0.1 - - -def pose_nms(bboxes, pose_preds, pose_scores): - ''' - Parametric Pose NMS algorithm - bboxes: bbox locations list (n, 4) - bbox_scores: bbox scores list (n,) - pose_preds: pose locations list (n, 17, 2) - pose_scores: pose scores list (n, 17, 1) - ''' - pose_scores[pose_scores <= scoreThreds] = 1e-5 - pose_scores[pose_scores > 1] = 1 - final_result = [] - - ori_pose_preds = pose_preds.clone() - ori_pose_scores = pose_scores.clone() - - xmax = bboxes[:, 2] - xmin = bboxes[:, 0] - ymax = bboxes[:, 3] - ymin = bboxes[:, 1] - - widths = xmax - xmin - heights = ymax - ymin - ref_dists = alpha * np.maximum(widths, heights) - - nsamples = bboxes.shape[0] - human_scores = pose_scores.mean(dim=1) - - human_ids = np.arange(nsamples) - # Do pPose-NMS - pick = [] - merge_ids = [] - while(human_scores.shape[0] != 0): - # Pick the one with highest score - pick_id = torch.argmax(human_scores) - pick.append(human_ids[pick_id]) - # num_visPart = torch.sum(pose_scores[pick_id] > 0.2) - - # Get numbers of match keypoints by calling PCK_match - ref_dist = ref_dists[human_ids[pick_id]] - simi = get_parametric_distance( - pick_id, pose_preds, pose_scores, ref_dist) - num_match_keypoints = PCK_match( - pose_preds[pick_id], pose_preds, ref_dist) - - # Delete humans who have more than matchThreds keypoints overlap and high similarity - delete_ids = torch.from_numpy(np.arange(human_scores.shape[0]))[ - (simi > gamma) | (num_match_keypoints >= matchThreds)] - - if delete_ids.shape[0] == 0: - delete_ids = pick_id - #else: - # delete_ids = torch.from_numpy(delete_ids) - - merge_ids.append(human_ids[delete_ids]) - pose_preds = np.delete(pose_preds, delete_ids, axis=0) - pose_scores = np.delete(pose_scores, delete_ids, axis=0) - human_ids = np.delete(human_ids, delete_ids) - human_scores = np.delete(human_scores, delete_ids, axis=0) - - assert len(merge_ids) == len(pick) - preds_pick = ori_pose_preds[pick] - scores_pick = ori_pose_scores[pick] - - for j in range(len(pick)): - ids = np.arange(17) - max_score = torch.max(scores_pick[j, ids, 0]) - - if max_score < scoreThreds: - continue - - # Merge poses - merge_id = merge_ids[j] - merge_pose, merge_score = p_merge_fast( - preds_pick[j], ori_pose_preds[merge_id], ori_pose_scores[merge_id], ref_dists[pick[j]]) - - max_score = torch.max(merge_score[ids]) - if max_score < scoreThreds: - continue - - xmax = max(merge_pose[:, 0]) - xmin = min(merge_pose[:, 0]) - ymax = max(merge_pose[:, 1]) - ymin = min(merge_pose[:, 1]) - - if (1.5 ** 2 * (xmax - xmin) * (ymax - ymin) < 40 * 40.5): - continue - - final_result.append({ - 'keypoints': merge_pose - 0.3, - 'kp_score': merge_score, - 'proposal_score': torch.mean(merge_score) + 1.25 * max(merge_score) - }) - - return final_result - - -def filter_result(args): - score_pick, merge_id, pred_pick, pick, bbox_score_pick = args - global ori_pose_preds, ori_pose_scores, ref_dists - ids = np.arange(17) - max_score = torch.max(score_pick[ids, 0]) - - if max_score < scoreThreds: - return None - - # Merge poses - merge_pose, merge_score = p_merge_fast( - pred_pick, ori_pose_preds[merge_id], ori_pose_scores[merge_id], ref_dists[pick]) - - max_score = torch.max(merge_score[ids]) - if max_score < scoreThreds: - return None - - xmax = 
max(merge_pose[:, 0]) - xmin = min(merge_pose[:, 0]) - ymax = max(merge_pose[:, 1]) - ymin = min(merge_pose[:, 1]) - - if (1.5 ** 2 * (xmax - xmin) * (ymax - ymin) < 40 * 40.5): - return None - - return { - 'keypoints': merge_pose - 0.3, - 'kp_score': merge_score, - 'proposal_score': torch.mean(merge_score) + bbox_score_pick + 1.25 * max(merge_score) - } - - -def p_merge(ref_pose, cluster_preds, cluster_scores, ref_dist): - ''' - Score-weighted pose merging - INPUT: - ref_pose: reference pose -- [17, 2] - cluster_preds: redundant poses -- [n, 17, 2] - cluster_scores: redundant poses score -- [n, 17, 1] - ref_dist: reference scale -- Constant - OUTPUT: - final_pose: merged pose -- [17, 2] - final_score: merged score -- [17] - ''' - dist = torch.sqrt(torch.sum( - torch.pow(ref_pose[np.newaxis, :] - cluster_preds, 2), - dim=2 - )) # [n, 17] - - kp_num = 17 - ref_dist = min(ref_dist, 15) - - mask = (dist <= ref_dist) - final_pose = torch.zeros(kp_num, 2) - final_score = torch.zeros(kp_num) - - if cluster_preds.dim() == 2: - cluster_preds.unsqueeze_(0) - cluster_scores.unsqueeze_(0) - if mask.dim() == 1: - mask.unsqueeze_(0) - - for i in range(kp_num): - cluster_joint_scores = cluster_scores[:, i][mask[:, i]] # [k, 1] - cluster_joint_location = cluster_preds[:, i, :][mask[:, i].unsqueeze( - -1).repeat(1, 2)].view((torch.sum(mask[:, i]), -1)) - - # Get an normalized score - normed_scores = cluster_joint_scores / torch.sum(cluster_joint_scores) - - # Merge poses by a weighted sum - final_pose[i, 0] = torch.dot( - cluster_joint_location[:, 0], normed_scores.squeeze(-1)) - final_pose[i, 1] = torch.dot( - cluster_joint_location[:, 1], normed_scores.squeeze(-1)) - - final_score[i] = torch.dot(cluster_joint_scores.transpose( - 0, 1).squeeze(0), normed_scores.squeeze(-1)) - - return final_pose, final_score - - -def p_merge_fast(ref_pose, cluster_preds, cluster_scores, ref_dist): - ''' - Score-weighted pose merging - INPUT: - ref_pose: reference pose -- [17, 2] - cluster_preds: redundant poses -- [n, 17, 2] - cluster_scores: redundant poses score -- [n, 17, 1] - ref_dist: reference scale -- Constant - OUTPUT: - final_pose: merged pose -- [17, 2] - final_score: merged score -- [17] - ''' - dist = torch.sqrt(torch.sum( - torch.pow(ref_pose[np.newaxis, :] - cluster_preds, 2), - dim=2 - )) - - kp_num = 17 - ref_dist = min(ref_dist, 15) - - mask = (dist <= ref_dist) - final_pose = torch.zeros(kp_num, 2) - final_score = torch.zeros(kp_num) - - if cluster_preds.dim() == 2: - cluster_preds.unsqueeze_(0) - cluster_scores.unsqueeze_(0) - if mask.dim() == 1: - mask.unsqueeze_(0) - - # Weighted Merge - masked_scores = cluster_scores.mul(mask.float().unsqueeze(-1)) - normed_scores = masked_scores / torch.sum(masked_scores, dim=0) - - final_pose = torch.mul( - cluster_preds, normed_scores.repeat(1, 1, 2)).sum(dim=0) - final_score = torch.mul(masked_scores, normed_scores).sum(dim=0) - return final_pose, final_score - - -def get_parametric_distance(i, all_preds, keypoint_scores, ref_dist): - pick_preds = all_preds[i] - pred_scores = keypoint_scores[i] - dist = torch.sqrt(torch.sum( - torch.pow(pick_preds[np.newaxis, :] - all_preds, 2), - dim=2 - )) - mask = (dist <= 1) - - # Define a keypoints distance - score_dists = torch.zeros(all_preds.shape[0], 17) - keypoint_scores.squeeze_() - if keypoint_scores.dim() == 1: - keypoint_scores.unsqueeze_(0) - if pred_scores.dim() == 1: - pred_scores.unsqueeze_(1) - # The predicted scores are repeated up to do broadcast - pred_scores = pred_scores.repeat(1, 
all_preds.shape[0]).transpose(0, 1) - - score_dists[mask] = torch.tanh( - pred_scores[mask] / delta1) * torch.tanh(keypoint_scores[mask] / delta1) - - point_dist = torch.exp((-1) * dist / delta2) - final_dist = torch.sum(score_dists, dim=1) + mu * \ - torch.sum(point_dist, dim=1) - - return final_dist - - -def PCK_match(pick_pred, all_preds, ref_dist): - dist = torch.sqrt(torch.sum( - torch.pow(pick_pred[np.newaxis, :] - all_preds, 2), - dim=2 - )) - ref_dist = min(ref_dist, 7) - num_match_keypoints = torch.sum( - dist / ref_dist <= 1, - dim=1 - ) - - return num_match_keypoints - - -def write_json(all_results, outputpath, for_eval=False): - ''' - all_result: result dict of predictions - outputpath: output directory - ''' - json_results = [] - for im_res in all_results: - im_name = im_res['imgname'] - for human in im_res['result']: - keypoints = [] - result = {} - if for_eval: - result['image_id'] = int(im_name.split( - '/')[-1].split('.')[0].split('_')[-1]) - else: - result['image_id'] = im_name.split('/')[-1] - result['category_id'] = 1 - - kp_preds = human['keypoints'] - kp_scores = human['kp_score'] - pro_scores = human['proposal_score'] - for n in range(kp_scores.shape[0]): - keypoints.append(float(kp_preds[n, 0])) - keypoints.append(float(kp_preds[n, 1])) - keypoints.append(float(kp_scores[n])) - result['keypoints'] = keypoints - result['score'] = float(pro_scores) - - json_results.append(result) - - with open(os.path.join(outputpath, 'alphapose-results.json'), 'w') as json_file: - json_file.write(json.dumps(json_results)) diff --git a/spaces/SilenWang/ReviewGPT/lang/Setting.md b/spaces/SilenWang/ReviewGPT/lang/Setting.md deleted file mode 100644 index ab74ffb765a814f89240ce21374573b5ceb2e817..0000000000000000000000000000000000000000 --- a/spaces/SilenWang/ReviewGPT/lang/Setting.md +++ /dev/null @@ -1,7 +0,0 @@ -### Setting Instructions - -The currently available setup options include: - -- `OpenAI API Key`: All functionalities currently rely on the implementation of the OpenAI API, so this key is a required field. The filled-in key is not stored directly to the local machine, but instead in the `gradio.States` object. If self-deploying, you can write the content to the `utils/config.py` file (refer to the format of `utils/config_sample.py`), which can avoid having to fill in the key each time the page is refreshed. - -- `Email`: To use the API provided by NCBI, an email address must be provided, so this setting is also required. The process for self-deployment is the same as that for the `OpenAI API Key`. 
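
For self-deployment, the key and email can be kept in `utils/config.py` so they persist across page refreshes. Below is a minimal sketch; the variable names (`OPENAI_API_KEY`, `EMAIL`) are assumptions for illustration, and the actual field names should be copied from `utils/config_sample.py`.

```python
# Hypothetical utils/config.py for self-deployment.
# The variable names are placeholders; mirror the fields defined in
# utils/config_sample.py from the repository.
OPENAI_API_KEY = "sk-..."         # OpenAI API key used by all features
EMAIL = "you@example.com"         # contact email required by the NCBI API
```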
diff --git a/spaces/SouthCity/ShuruiXu/self_analysis.md b/spaces/SouthCity/ShuruiXu/self_analysis.md deleted file mode 100644 index acfbd3e91b46738af42c4a4859b08570be59d485..0000000000000000000000000000000000000000 --- a/spaces/SouthCity/ShuruiXu/self_analysis.md +++ /dev/null @@ -1,175 +0,0 @@ -# chatgpt-academic项目自译解报告 -(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄) - -## [0/18] 程序摘要: functional_crazy.py - -这是一个功能扩展的程序,文件名为 `functional_crazy.py`。代码的主要功能是通过提供一系列函数插件,增强程序的功能,让用户可以通过界面中的按钮,快速调用对应的函数插件实现相应的操作。代码中使用了 `HotReload` 函数插件,可以在不重启程序的情况下更新函数插件的代码,让其生效。同时,通过 `UserVisibleLevel` 变量的设置,可以控制哪些插件会在UI界面显示出来。函数插件列表包括了以下功能:解析项目本身、解析一个Python项目、解析一个C++项目头文件、解析一个C++项目、读取文章并生成摘要、批量生成函数注释、全项目切换成英文、批量总结PDF文档、批量总结PDF文档pdfminer、批量总结Word文档、高阶功能模板函数、以及其他未经充分测试的函数插件。 - -## [1/18] 程序摘要: main.py - -该程序是一个基于Gradio构建的对话生成模型的Web界面示例,包含了以下主要功能: - -1.加载模型并对用户输入进行响应; -2.通过调用外部函数库来获取用户的输入,并在模型生成的过程中进行处理; -3.支持用户上传本地文件,供外部函数库调用; -4.支持停止当前的生成过程; -5.保存用户的历史记录,并将其记录在本地日志文件中,以供后续分析和使用。 - -该程序需要依赖于一些外部库和软件包,如Gradio、torch等。用户需要确保这些依赖项已经安装,并且在运行该程序前对config_private.py配置文件进行相应的修改。 - -## [2/18] 程序摘要: functional.py - -该文件定义了一个名为“functional”的函数,函数的作用是返回一个包含多个字典(键值对)的字典,每个键值对表示一种功能。该字典的键值由功能名称和对应的数据组成。其中的每个字典都包含4个键值对,分别为“Prefix”、“Suffix”、“Color”和“PreProcess”,分别表示前缀、后缀、按钮颜色和预处理函数。如果某些键值对没有给出,那么程序中默认相应的值,如按钮颜色默认为“secondary”等。每个功能描述了不同的学术润色/翻译/其他服务,如“英语学术润色”、“中文学术润色”、“查找语法错误”等。函数还引用了一个名为“clear_line_break”的函数,用于预处理修改前的文本。 - -## [3/18] 程序摘要: show_math.py - -该程序文件名为show_math.py,主要用途是将Markdown和LaTeX混合格式转换成带有MathML的HTML格式。该程序通过递归地处理LaTeX和Markdown混合段落逐一转换成HTML/MathML标记出来,并在LaTeX公式创建中进行错误处理。在程序文件中定义了3个变量,分别是incomplete,convError和convert,其中convert函数是用来执行转换的主要函数。程序使用正则表达式进行LaTeX格式和Markdown段落的分割,从而实现转换。如果在Latex转换过程中发生错误,程序将输出相应的错误信息。 - -## [4/18] 程序摘要: predict.py - -本程序文件的文件名为"./predict.py",主要包含三个函数: - -1. predict:正常对话时使用,具备完备的交互功能,不可多线程; -2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑; -3. 
predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程。 - -其中,predict函数用于基础的对话功能,发送至chatGPT,流式获取输出,根据点击的哪个按钮,进行对话预处理等额外操作;predict_no_ui函数用于payload比较大的情况,或者用于实现多线、带嵌套的复杂功能;predict_no_ui_long_connection实现调用predict_no_ui处理长文档时,避免连接断掉的情况,支持多线程。 - -## [5/18] 程序摘要: check_proxy.py - -该程序文件名为check_proxy.py,主要功能是检查代理服务器的可用性并返回代理服务器的地理位置信息或错误提示。具体实现方式如下: - -首先使用requests模块向指定网站(https://ipapi.co/json/)发送GET请求,请求结果以JSON格式返回。如果代理服务器参数(proxies)是有效的且没有指明'https'代理,则用默认字典值'无'替代。 - -然后,程序会解析返回的JSON数据,并根据数据中是否包含国家名字字段来判断代理服务器的地理位置。如果有国家名字字段,则将其打印出来并返回代理服务器的相关信息。如果没有国家名字字段,但有错误信息字段,则返回其他错误提示信息。 - -在程序执行前,程序会先设置环境变量no_proxy,并使用toolbox模块中的get_conf函数从配置文件中读取代理参数。 - -最后,检测程序会输出检查结果并返回对应的结果字符串。 - -## [6/18] 程序摘要: config_private.py - -本程序文件名为`config_private.py`,其功能为配置私有信息以便在主程序中使用。主要功能包括: - -- 配置OpenAI API的密钥和API URL -- 配置是否使用代理,如果使用代理配置代理地址和端口 -- 配置发送请求的超时时间和失败重试次数的限制 -- 配置并行使用线程数和用户名密码 -- 提供检查功能以确保API密钥已经正确设置 - -其中,需要特别注意的是:最后一个检查功能要求在运行之前必须将API密钥正确设置,否则程序会直接退出。 - -## [7/18] 程序摘要: config.py - -该程序文件是一个配置文件,用于配置OpenAI的API参数和优化体验的相关参数,具体包括以下几个步骤: - -1.设置OpenAI的API密钥。 - -2.选择是否使用代理,如果使用则需要设置代理地址和端口等参数。 - -3.设置请求OpenAI后的超时时间、网页的端口、重试次数、选择的OpenAI模型、API的网址等。 - -4.设置并行使用的线程数和用户名密码。 - -该程序文件的作用为在使用OpenAI API时进行相关参数的配置,以保证请求的正确性和速度,并且优化使用体验。 - -## [8/18] 程序摘要: theme.py - -该程序是一个自定义Gradio主题的Python模块。主题文件名为"./theme.py"。程序引入了Gradio模块,并定义了一个名为"adjust_theme()"的函数。该函数根据输入值调整Gradio的默认主题,返回一个包含所需自定义属性的主题对象。主题属性包括颜色、字体、过渡、阴影、按钮边框和渐变等。主题颜色列表包括石板色、灰色、锌色、中性色、石头色、红色、橙色、琥珀色、黄色、酸橙色、绿色、祖母绿、青蓝色、青色、天蓝色、蓝色、靛蓝色、紫罗兰色、紫色、洋红色、粉红色和玫瑰色。如果Gradio版本较旧,则不能自定义字体和颜色。 - -## [9/18] 程序摘要: toolbox.py - -该程序文件包含了一系列函数,用于实现聊天程序所需的各种功能,如预测对话、将对话记录写入文件、将普通文本转换为Markdown格式文本、装饰器函数CatchException和HotReload等。其中一些函数用到了第三方库,如Python-Markdown、mdtex2html、zipfile、tarfile、rarfile和py7zr。除此之外,还有一些辅助函数,如get_conf、clear_line_break和extract_archive等。主要功能包括: - -1. 导入markdown、mdtex2html、threading、functools等模块。 -2. 定义函数predict_no_ui_but_counting_down,用于生成对话。 -3. 定义函数write_results_to_file,用于将对话记录生成Markdown文件。 -4. 定义函数regular_txt_to_markdown,将普通文本转换为Markdown格式的文本。 -5. 定义装饰器函数CatchException,用于捕获函数执行异常并返回生成器。 -6. 定义函数report_execption,用于向chatbot中添加错误信息。 -7. 定义函数text_divide_paragraph,用于将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。 -8. 定义函数markdown_convertion,用于将Markdown格式的文本转换为HTML格式。 -9. 定义函数format_io,用于将输入和输出解析为HTML格式。 -10. 定义函数find_free_port,用于返回当前系统中可用的未使用端口。 -11. 定义函数extract_archive,用于解压归档文件。 -12. 定义函数find_recent_files,用于查找最近创建的文件。 -13. 定义函数on_file_uploaded,用于处理上传文件的操作。 -14. 定义函数on_report_generated,用于处理生成报告文件的操作。 - - -## [10/18] 程序摘要: crazy_functions/生成函数注释.py - -该程序文件是一个Python脚本,文件名为“生成函数注释.py”,位于“./crazy_functions/”目录下。该程序实现了一个批量生成函数注释的功能,可以对指定文件夹下的所有Python和C++源代码文件中的所有函数进行注释,使用Markdown表格输出注释结果。 - -该程序引用了predict.py和toolbox.py两个模块,其中predict.py实现了一个基于GPT模型的文本生成功能,用于生成函数注释,而toolbox.py实现了一些工具函数,包括异常处理函数、文本写入函数等。另外,该程序还定义了两个函数,一个是“生成函数注释”函数,用于处理单个文件的注释生成;另一个是“批量生成函数注释”函数,用于批量处理多个文件的注释生成。 - -## [11/18] 程序摘要: crazy_functions/读文章写摘要.py - -这个程序文件是一个名为“读文章写摘要”的函数。该函数的输入包括文章的文本内容、top_p(生成文本时选择最可能的词语的概率阈值)、temperature(控制生成文本的随机性的因子)、对话历史等参数,以及一个聊天机器人和一个系统提示的文本。该函数的主要工作是解析一组.tex文件,然后生成一段学术性语言的中文和英文摘要。在解析过程中,该函数使用一个名为“toolbox”的模块中的辅助函数和一个名为“predict”的模块中的函数来执行GPT-2模型的推理工作,然后将结果返回给聊天机器人。另外,该程序还包括一个名为“fast_debug”的bool型变量,用于调试和测试。 - -## [12/18] 程序摘要: crazy_functions/代码重写为全英文_多线程.py - -该程序文件实现了一个多线程操作,用于将指定目录下的所有 Python 文件中的中文转化为英文,并将转化后的文件存入另一个目录中。具体实现过程如下: - -1. 集合目标文件路径并清空历史记录。 -2. 循环目标文件,对每个文件启动一个线程进行任务操作。 -3. 
各个线程同时开始执行任务函数,并在任务完成后将转化后的文件写入指定目录,最终生成一份任务执行报告。 - -## [13/18] 程序摘要: crazy_functions/高级功能函数模板.py - -该程序文件名为高级功能函数模板.py,它包含了一个名为“高阶功能模板函数”的函数,这个函数可以作为开发新功能函数的模板。该函数引用了predict.py和toolbox.py文件中的函数。在该函数内部,它首先清空了历史记录,然后对于今天和今天以后的四天,它问用户历史中哪些事件发生在这些日期,并列举两条事件并发送相关的图片。在向用户询问问题时,使用了GPT进行响应。由于请求GPT需要一定的时间,所以函数会在重新显示状态之前等待一段时间。在每次与用户的互动中,使用yield关键字生成器函数来输出聊天机器人的当前状态,包括聊天消息、历史记录和状态('正常')。最后,程序调用write_results_to_file函数将聊天的结果写入文件,以供后续的评估和分析。 - -## [14/18] 程序摘要: crazy_functions/总结word文档.py - -该程序文件名为总结word文档.py,主要功能是批量总结Word文档。具体实现过程是解析docx格式和doc格式文件,生成文件内容,然后使用自然语言处理工具对文章内容做中英文概述,最后给出建议。该程序需要依赖python-docx和pywin32,如果没有安装,会给出安装建议。 - -## [15/18] 程序摘要: crazy_functions/批量总结PDF文档pdfminer.py - -该程序文件名为pdfminer.py,位于./crazy_functions/目录下。程序实现了批量读取PDF文件,并使用pdfminer解析PDF文件内容。此外,程序还根据解析得到的文本内容,调用机器学习模型生成对每篇文章的概述,最终生成全文摘要。程序中还对模块依赖进行了导入检查,若缺少依赖,则会提供安装建议。 - -## [16/18] 程序摘要: crazy_functions/解析项目源代码.py - -这个程序文件中包含了几个函数,分别是: - -1. `解析源代码(file_manifest, project_folder, top_p, api_key, temperature, chatbot, history, systemPromptTxt)`:通过输入文件路径列表对程序文件进行逐文件分析,根据分析结果做出整体功能和构架的概括,并生成包括每个文件功能的markdown表格。 -2. `解析项目本身(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`:对当前文件夹下的所有Python文件及其子文件夹进行逐文件分析,并生成markdown表格。 -3. `解析一个Python项目(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`:对指定路径下的所有Python文件及其子文件夹进行逐文件分析,并生成markdown表格。 -4. `解析一个C项目的头文件(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`:对指定路径下的所有头文件进行逐文件分析,并生成markdown表格。 -5. `解析一个C项目(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`:对指定路径下的所有.h、.cpp、.c文件及其子文件夹进行逐文件分析,并生成markdown表格。 - -程序中还包含了一些辅助函数和变量,如CatchException装饰器函数,report_execption函数、write_results_to_file函数等。在执行过程中还会调用其他模块中的函数,如toolbox模块的函数和predict模块的函数。 - -## [17/18] 程序摘要: crazy_functions/批量总结PDF文档.py - -这个程序文件是一个名为“批量总结PDF文档”的函数插件。它导入了predict和toolbox模块,并定义了一些函数,包括is_paragraph_break,normalize_text和clean_text。这些函数是对输入文本进行预处理和清洗的功能函数。主要的功能函数是解析PDF,它打开每个PDF文件并将其内容存储在file_content变量中,然后传递给聊天机器人,以产生一句话的概括。在解析PDF文件之后,该函数连接了所有文件的摘要,以产生一段学术语言和英文摘要。最后,函数批量处理目标文件夹中的所有PDF文件,并输出结果。 - -## 根据以上你自己的分析,对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能。 - -该程序是一个聊天机器人,使用了OpenAI的GPT语言模型以及一些特殊的辅助功能去处理各种学术写作和科研润色任务。整个程序由一些函数组成,每个函数都代表了不同的学术润色/翻译/其他服务。 - -下面是程序中每个文件的功能列表: - -| 文件名 | 功能 | -|--------|--------| -| functional_crazy.py | 实现高级功能函数模板和其他一些辅助功能函数 | -| main.py | 程序的主要入口,负责程序的启动和UI的展示 | -| functional.py | 定义各种功能按钮的颜色和响应函数 | -| show_math.py | 解析LaTeX文本,将其转换为Markdown格式 | -| predict.py | 基础的对话功能,用于与chatGPT进行交互 | -| check_proxy.py | 检查代理设置的正确性 | -| config_private.py | 配置程序的API密钥和其他私有信息 | -| config.py | 配置OpenAI的API参数和程序的其他属性 | -| theme.py | 设置程序主题样式 | -| toolbox.py | 存放一些辅助函数供程序使用 | -| crazy_functions/生成函数注释.py | 生成Python文件中所有函数的注释 | -| crazy_functions/读文章写摘要.py | 解析文章文本,生成中英文摘要 | -| crazy_functions/代码重写为全英文_多线程.py | 将中文代码内容转化为英文 | -| crazy_functions/高级功能函数模板.py | 实现高级功能函数模板 | -| crazy_functions/总结word文档.py | 解析Word文件,生成文章内容的概要 | -| crazy_functions/批量总结PDF文档pdfminer.py | 解析PDF文件,生成文章内容的概要(使用pdfminer库) | -| crazy_functions/批量总结PDF文档.py | 解析PDF文件,生成文章内容的概要(使用PyMuPDF库) | -| crazy_functions/解析项目源代码.py | 解析C/C++源代码,生成markdown表格 | -| crazy_functions/批量总结PDF文档.py | 对PDF文件进行批量摘要生成 | - -总的来说,该程序提供了一系列的学术润色和翻译的工具,支持对各种类型的文件进行分析和处理。同时也提供了对话式用户界面,便于用户使用和交互。 - diff --git a/spaces/Spark808/rvc-demo/infer_pack/attentions.py b/spaces/Spark808/rvc-demo/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- 
a/spaces/Spark808/rvc-demo/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = 
self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
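            # Local (block) attention: keep only scores within `block_length`
            # positions of the diagonal; positions outside that band are set to
            # -1e4 below, so they vanish after the softmax.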
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Sumit7864/Image-Enhancer/realesrgan/version.py b/spaces/Sumit7864/Image-Enhancer/realesrgan/version.py deleted file mode 100644 index 341373abddf34aba012f121d1dcddd1799749b32..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/realesrgan/version.py +++ /dev/null @@ -1,5 +0,0 @@ -# GENERATED VERSION FILE -# TIME: Sat Jun 24 12:47:47 2023 -__version__ = '0.3.0' -__gitsha__ = '5ca1078' -version_info = (0, 3, 0) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/cc_sqlalchemy/ddl/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/cc_sqlalchemy/ddl/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/matplotlibtools.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/matplotlibtools.py deleted file mode 100644 index 71f02644352e7b710098df74cd45e1ec6c68e675..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/matplotlibtools.py +++ /dev/null @@ -1,149 +0,0 @@ - -import sys -from _pydev_bundle import pydev_log - -backends = {'tk': 'TkAgg', - 'gtk': 'GTKAgg', - 'wx': 'WXAgg', - 'qt': 'QtAgg', # Auto-choose qt4/5 - 'qt4': 'Qt4Agg', - 'qt5': 'Qt5Agg', - 'osx': 'MacOSX'} - -# We also need a reverse backends2guis mapping that will properly choose which -# GUI support to activate based on the desired matplotlib backend. 
For the -# most part it's just a reverse of the above dict, but we also need to add a -# few others that map to the same GUI manually: -backend2gui = dict(zip(backends.values(), backends.keys())) -# In the reverse mapping, there are a few extra valid matplotlib backends that -# map to the same GUI support -backend2gui['GTK'] = backend2gui['GTKCairo'] = 'gtk' -backend2gui['WX'] = 'wx' -backend2gui['CocoaAgg'] = 'osx' - - -def do_enable_gui(guiname): - from _pydev_bundle.pydev_versioncheck import versionok_for_gui - if versionok_for_gui(): - try: - from pydev_ipython.inputhook import enable_gui - enable_gui(guiname) - except: - sys.stderr.write("Failed to enable GUI event loop integration for '%s'\n" % guiname) - pydev_log.exception() - elif guiname not in ['none', '', None]: - # Only print a warning if the guiname was going to do something - sys.stderr.write("Debug console: Python version does not support GUI event loop integration for '%s'\n" % guiname) - # Return value does not matter, so return back what was sent - return guiname - - -def find_gui_and_backend(): - """Return the gui and mpl backend.""" - matplotlib = sys.modules['matplotlib'] - # WARNING: this assumes matplotlib 1.1 or newer!! - backend = matplotlib.rcParams['backend'] - # In this case, we need to find what the appropriate gui selection call - # should be for IPython, so we can activate inputhook accordingly - gui = backend2gui.get(backend, None) - return gui, backend - - -def is_interactive_backend(backend): - """ Check if backend is interactive """ - matplotlib = sys.modules['matplotlib'] - from matplotlib.rcsetup import interactive_bk, non_interactive_bk # @UnresolvedImport - if backend in interactive_bk: - return True - elif backend in non_interactive_bk: - return False - else: - return matplotlib.is_interactive() - - -def patch_use(enable_gui_function): - """ Patch matplotlib function 'use' """ - matplotlib = sys.modules['matplotlib'] - - def patched_use(*args, **kwargs): - matplotlib.real_use(*args, **kwargs) - gui, backend = find_gui_and_backend() - enable_gui_function(gui) - - matplotlib.real_use = matplotlib.use - matplotlib.use = patched_use - - -def patch_is_interactive(): - """ Patch matplotlib function 'use' """ - matplotlib = sys.modules['matplotlib'] - - def patched_is_interactive(): - return matplotlib.rcParams['interactive'] - - matplotlib.real_is_interactive = matplotlib.is_interactive - matplotlib.is_interactive = patched_is_interactive - - -def activate_matplotlib(enable_gui_function): - """Set interactive to True for interactive backends. - enable_gui_function - Function which enables gui, should be run in the main thread. - """ - matplotlib = sys.modules['matplotlib'] - gui, backend = find_gui_and_backend() - is_interactive = is_interactive_backend(backend) - if is_interactive: - enable_gui_function(gui) - if not matplotlib.is_interactive(): - sys.stdout.write("Backend %s is interactive backend. Turning interactive mode on.\n" % backend) - matplotlib.interactive(True) - else: - if matplotlib.is_interactive(): - sys.stdout.write("Backend %s is non-interactive backend. Turning interactive mode off.\n" % backend) - matplotlib.interactive(False) - patch_use(enable_gui_function) - patch_is_interactive() - - -def flag_calls(func): - """Wrap a function to detect and flag when it gets called. - - This is a decorator which takes a function and wraps it in a function with - a 'called' attribute. wrapper.called is initialized to False. 
- - The wrapper.called attribute is set to False right before each call to the - wrapped function, so if the call fails it remains False. After the call - completes, wrapper.called is set to True and the output is returned. - - Testing for truth in wrapper.called allows you to determine if a call to - func() was attempted and succeeded.""" - - # don't wrap twice - if hasattr(func, 'called'): - return func - - def wrapper(*args, **kw): - wrapper.called = False - out = func(*args, **kw) - wrapper.called = True - return out - - wrapper.called = False - wrapper.__doc__ = func.__doc__ - return wrapper - - -def activate_pylab(): - pylab = sys.modules['pylab'] - pylab.show._needmain = False - # We need to detect at runtime whether show() is called by the user. - # For this, we wrap it into a decorator which adds a 'called' flag. - pylab.draw_if_interactive = flag_calls(pylab.draw_if_interactive) - - -def activate_pyplot(): - pyplot = sys.modules['matplotlib.pyplot'] - pyplot.show._needmain = False - # We need to detect at runtime whether show() is called by the user. - # For this, we wrap it into a decorator which adds a 'called' flag. - pyplot.draw_if_interactive = flag_calls(pyplot.draw_if_interactive) diff --git a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/models.py b/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/models.py deleted file mode 100644 index dd9e0c087357ecfc5a1548eddb5a30d77d2b5bf5..0000000000000000000000000000000000000000 --- a/spaces/TLME/Bert-VITS-Umamusume-Genshin-HonkaiSR/models.py +++ /dev/null @@ -1,986 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages - - -class DurationDiscriminator(nn.Module): # vits2 - def __init__( - self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0 - ): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d( - in_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d( - filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d( - 2 * filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d( - filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential(nn.Linear(filter_channels, 1), nn.Sigmoid()) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * 
x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - - -class TransformerCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = ( - attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - isflow=True, - gin_channels=self.gin_channels, - ) - if share_parameter - else None - ) - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer( - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout, - filter_channels, - mean_only=True, - wn_sharing_parameter=self.wn, - gin_channels=self.gin_channels, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class StochasticDurationPredictor(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - ): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append( - modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3) - ) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv( - filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout - ) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append( - modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3) - ) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv( - filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout - ) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = ( - torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) - * x_mask - ) - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum( - (F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2] - ) - logq = ( - torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q**2)) * x_mask, [1, 2]) - - logdet_tot_q - ) - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = ( - torch.sum(0.5 * (math.log(2 * math.pi) + (z**2)) * x_mask, [1, 2]) - - logdet_tot - ) - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = ( - torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) - * noise_scale - ) - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__( - self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0 - ): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d( - in_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d( - filter_channels, filter_channels, kernel_size, padding=kernel_size // 2 - ) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = 
nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__( - self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0, - ): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels**-0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels**-0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - self.ja_bert_proj = nn.Conv1d(768, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, ja_bert, g=None): - bert_emb = self.bert_proj(bert).transpose(1, 2) - ja_bert_emb = self.ja_bert_proj(ja_bert).transpose(1, 2) - x = ( - self.emb(x) - + self.tone_emb(tone) - + self.language_emb(language) - + bert_emb - + ja_bert_emb - ) * math.sqrt( - self.hidden_channels - ) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - 
self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for layer in self.ups: - remove_weight_norm(layer) - for layer in self.resblocks: - layer.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm is False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 
1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for layer in self.convs: - x = layer(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm is False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for layer in self.convs: - x = layer(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class ReferenceEncoder(nn.Module): - """ - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - """ - - def __init__(self, spec_channels, gin_channels=0): - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [ - weight_norm( - nn.Conv2d( - in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1), - ) - ) - for i in range(K) - ] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) # noqa: E501 - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU( - input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True, - ) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 
128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer=4, - n_layers_trans_flow=6, - flow_share_parameter=False, - use_transformer_flow=True, - **kwargs - ): - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get( - "use_spk_conditioned_encoder", True - ) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder( - n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - if use_transformer_flow: - self.flow = TransformerCouplingBlock( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers_trans_flow, - 5, - p_dropout, - n_flow_layer, - gin_channels=gin_channels, - share_parameter=flow_share_parameter, - ) - else: - self.flow = ResidualCouplingBlock( - inter_channels, - hidden_channels, - 5, - 1, - n_flow_layer, - gin_channels=gin_channels, - ) - self.sdp = StochasticDurationPredictor( - hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels - ) - self.dp = DurationPredictor( - hidden_channels, 256, 3, 0.5, gin_channels=gin_channels - ) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert, ja_bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] 
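        # Speaker conditioning: when n_speakers > 0, g is a learned per-speaker
        # embedding looked up by sid; otherwise the ReferenceEncoder in the
        # else-branch below derives g directly from the target spectrogram y.
        # Either way g has shape [b, gin_channels, 1] and conditions the text
        # encoder, posterior encoder, flow and decoder further down.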
- else: - g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p( - x, x_lengths, tone, language, bert, ja_bert, g=g - ) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum( - -0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True - ) # [b, 1, t_s] - neg_cent2 = torch.matmul( - -0.5 * (z_p**2).transpose(1, 2), s_p_sq_r - ) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul( - z_p.transpose(1, 2), (m_p * s_p_sq_r) - ) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum( - -0.5 * (m_p**2) * s_p_sq_r, [1], keepdim=True - ) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = ( - torch.std(neg_cent) - * torch.randn_like(neg_cent) - * self.current_mas_noise_scale - ) - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = ( - monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)) - .unsqueeze(1) - .detach() - ) - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum( - x_mask - ) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return ( - o, - l_length, - attn, - ids_slice, - x_mask, - y_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - (x, logw, logw_), - ) - - def infer( - self, - x, - x_lengths, - sid, - tone, - language, - bert, - ja_bert, - noise_scale=0.667, - length_scale=1, - noise_scale_w=0.8, - max_len=None, - sdp_ratio=0, - y=None, - ): - # x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p( - x, x_lengths, tone, language, bert, ja_bert, g=g - ) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * ( - sdp_ratio - ) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to( - x_mask.dtype - ) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose( - 1, 2 - ) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose( - 1, 2 - ) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/TNR-5/test_dev_s/README.md b/spaces/TNR-5/test_dev_s/README.md deleted file mode 100644 index 
5563893ca82194a9219b0ef9d477060bfeb918c6..0000000000000000000000000000000000000000 --- a/spaces/TNR-5/test_dev_s/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: ViewQ - search engine -emoji: 🔎🔍 -colorFrom: green -colorTo: red -sdk: static -pinned: false ---- - -🔎 ViewQ - new search system and engine for you 🔍 \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_null_file.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_null_file.py deleted file mode 100644 index b659673ef3c1d5431e6699898ae4d073b4be764b..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_null_file.py +++ /dev/null @@ -1,69 +0,0 @@ -from types import TracebackType -from typing import IO, Iterable, Iterator, List, Optional, Type - - -class NullFile(IO[str]): - def close(self) -> None: - pass - - def isatty(self) -> bool: - return False - - def read(self, __n: int = 1) -> str: - return "" - - def readable(self) -> bool: - return False - - def readline(self, __limit: int = 1) -> str: - return "" - - def readlines(self, __hint: int = 1) -> List[str]: - return [] - - def seek(self, __offset: int, __whence: int = 1) -> int: - return 0 - - def seekable(self) -> bool: - return False - - def tell(self) -> int: - return 0 - - def truncate(self, __size: Optional[int] = 1) -> int: - return 0 - - def writable(self) -> bool: - return False - - def writelines(self, __lines: Iterable[str]) -> None: - pass - - def __next__(self) -> str: - return "" - - def __iter__(self) -> Iterator[str]: - return iter([""]) - - def __enter__(self) -> IO[str]: - pass - - def __exit__( - self, - __t: Optional[Type[BaseException]], - __value: Optional[BaseException], - __traceback: Optional[TracebackType], - ) -> None: - pass - - def write(self, text: str) -> int: - return 0 - - def flush(self) -> None: - pass - - def fileno(self) -> int: - return -1 - - -NULL_FILE = NullFile() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/metadata.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/metadata.py deleted file mode 100644 index e76a60c395eb62d5f05d7248cf67210cdd10740d..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/packaging/metadata.py +++ /dev/null @@ -1,408 +0,0 @@ -import email.feedparser -import email.header -import email.message -import email.parser -import email.policy -import sys -import typing -from typing import Dict, List, Optional, Tuple, Union, cast - -if sys.version_info >= (3, 8): # pragma: no cover - from typing import TypedDict -else: # pragma: no cover - if typing.TYPE_CHECKING: - from typing_extensions import TypedDict - else: - try: - from typing_extensions import TypedDict - except ImportError: - - class TypedDict: - def __init_subclass__(*_args, **_kwargs): - pass - - -# The RawMetadata class attempts to make as few assumptions about the underlying -# serialization formats as possible. The idea is that as long as a serialization -# formats offer some very basic primitives in *some* way then we can support -# serializing to and from that format. -class RawMetadata(TypedDict, total=False): - """A dictionary of raw core metadata. - - Each field in core metadata maps to a key of this dictionary (when data is - provided). 
The key is lower-case and underscores are used instead of dashes - compared to the equivalent core metadata field. Any core metadata field that - can be specified multiple times or can hold multiple values in a single - field have a key with a plural name. - - Core metadata fields that can be specified multiple times are stored as a - list or dict depending on which is appropriate for the field. Any fields - which hold multiple values in a single field are stored as a list. - - """ - - # Metadata 1.0 - PEP 241 - metadata_version: str - name: str - version: str - platforms: List[str] - summary: str - description: str - keywords: List[str] - home_page: str - author: str - author_email: str - license: str - - # Metadata 1.1 - PEP 314 - supported_platforms: List[str] - download_url: str - classifiers: List[str] - requires: List[str] - provides: List[str] - obsoletes: List[str] - - # Metadata 1.2 - PEP 345 - maintainer: str - maintainer_email: str - requires_dist: List[str] - provides_dist: List[str] - obsoletes_dist: List[str] - requires_python: str - requires_external: List[str] - project_urls: Dict[str, str] - - # Metadata 2.0 - # PEP 426 attempted to completely revamp the metadata format - # but got stuck without ever being able to build consensus on - # it and ultimately ended up withdrawn. - # - # However, a number of tools had started emiting METADATA with - # `2.0` Metadata-Version, so for historical reasons, this version - # was skipped. - - # Metadata 2.1 - PEP 566 - description_content_type: str - provides_extra: List[str] - - # Metadata 2.2 - PEP 643 - dynamic: List[str] - - # Metadata 2.3 - PEP 685 - # No new fields were added in PEP 685, just some edge case were - # tightened up to provide better interoptability. - - -_STRING_FIELDS = { - "author", - "author_email", - "description", - "description_content_type", - "download_url", - "home_page", - "license", - "maintainer", - "maintainer_email", - "metadata_version", - "name", - "requires_python", - "summary", - "version", -} - -_LIST_STRING_FIELDS = { - "classifiers", - "dynamic", - "obsoletes", - "obsoletes_dist", - "platforms", - "provides", - "provides_dist", - "provides_extra", - "requires", - "requires_dist", - "requires_external", - "supported_platforms", -} - - -def _parse_keywords(data: str) -> List[str]: - """Split a string of comma-separate keyboards into a list of keywords.""" - return [k.strip() for k in data.split(",")] - - -def _parse_project_urls(data: List[str]) -> Dict[str, str]: - """Parse a list of label/URL string pairings separated by a comma.""" - urls = {} - for pair in data: - # Our logic is slightly tricky here as we want to try and do - # *something* reasonable with malformed data. - # - # The main thing that we have to worry about, is data that does - # not have a ',' at all to split the label from the Value. There - # isn't a singular right answer here, and we will fail validation - # later on (if the caller is validating) so it doesn't *really* - # matter, but since the missing value has to be an empty str - # and our return value is dict[str, str], if we let the key - # be the missing value, then they'd have multiple '' values that - # overwrite each other in a accumulating dict. - # - # The other potentional issue is that it's possible to have the - # same label multiple times in the metadata, with no solid "right" - # answer with what to do in that case. As such, we'll do the only - # thing we can, which is treat the field as unparseable and add it - # to our list of unparsed fields. 
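        # For example (hypothetical values), a well-formed field such as
        #   ["Homepage, https://example.org", "Docs, https://docs.example.org"]
        # parses to {"Homepage": "https://example.org", "Docs": "https://docs.example.org"};
        # an entry with no comma keeps the whole string as the label with an empty
        # URL, and a repeated label raises KeyError so the caller can move the
        # field to `unparsed`.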
- parts = [p.strip() for p in pair.split(",", 1)] - parts.extend([""] * (max(0, 2 - len(parts)))) # Ensure 2 items - - # TODO: The spec doesn't say anything about if the keys should be - # considered case sensitive or not... logically they should - # be case-preserving and case-insensitive, but doing that - # would open up more cases where we might have duplicate - # entries. - label, url = parts - if label in urls: - # The label already exists in our set of urls, so this field - # is unparseable, and we can just add the whole thing to our - # unparseable data and stop processing it. - raise KeyError("duplicate labels in project urls") - urls[label] = url - - return urls - - -def _get_payload(msg: email.message.Message, source: Union[bytes, str]) -> str: - """Get the body of the message.""" - # If our source is a str, then our caller has managed encodings for us, - # and we don't need to deal with it. - if isinstance(source, str): - payload: str = msg.get_payload() - return payload - # If our source is a bytes, then we're managing the encoding and we need - # to deal with it. - else: - bpayload: bytes = msg.get_payload(decode=True) - try: - return bpayload.decode("utf8", "strict") - except UnicodeDecodeError: - raise ValueError("payload in an invalid encoding") - - -# The various parse_FORMAT functions here are intended to be as lenient as -# possible in their parsing, while still returning a correctly typed -# RawMetadata. -# -# To aid in this, we also generally want to do as little touching of the -# data as possible, except where there are possibly some historic holdovers -# that make valid data awkward to work with. -# -# While this is a lower level, intermediate format than our ``Metadata`` -# class, some light touch ups can make a massive difference in usability. - -# Map METADATA fields to RawMetadata. -_EMAIL_TO_RAW_MAPPING = { - "author": "author", - "author-email": "author_email", - "classifier": "classifiers", - "description": "description", - "description-content-type": "description_content_type", - "download-url": "download_url", - "dynamic": "dynamic", - "home-page": "home_page", - "keywords": "keywords", - "license": "license", - "maintainer": "maintainer", - "maintainer-email": "maintainer_email", - "metadata-version": "metadata_version", - "name": "name", - "obsoletes": "obsoletes", - "obsoletes-dist": "obsoletes_dist", - "platform": "platforms", - "project-url": "project_urls", - "provides": "provides", - "provides-dist": "provides_dist", - "provides-extra": "provides_extra", - "requires": "requires", - "requires-dist": "requires_dist", - "requires-external": "requires_external", - "requires-python": "requires_python", - "summary": "summary", - "supported-platform": "supported_platforms", - "version": "version", -} - - -def parse_email(data: Union[bytes, str]) -> Tuple[RawMetadata, Dict[str, List[str]]]: - """Parse a distribution's metadata. - - This function returns a two-item tuple of dicts. The first dict is of - recognized fields from the core metadata specification. Fields that can be - parsed and translated into Python's built-in types are converted - appropriately. All other fields are left as-is. Fields that are allowed to - appear multiple times are stored as lists. - - The second dict contains all other fields from the metadata. This includes - any unrecognized fields. It also includes any fields which are expected to - be parsed into a built-in type but were not formatted appropriately. 
Finally, - any fields that are expected to appear only once but are repeated are - included in this dict. - - """ - raw: Dict[str, Union[str, List[str], Dict[str, str]]] = {} - unparsed: Dict[str, List[str]] = {} - - if isinstance(data, str): - parsed = email.parser.Parser(policy=email.policy.compat32).parsestr(data) - else: - parsed = email.parser.BytesParser(policy=email.policy.compat32).parsebytes(data) - - # We have to wrap parsed.keys() in a set, because in the case of multiple - # values for a key (a list), the key will appear multiple times in the - # list of keys, but we're avoiding that by using get_all(). - for name in frozenset(parsed.keys()): - # Header names in RFC are case insensitive, so we'll normalize to all - # lower case to make comparisons easier. - name = name.lower() - - # We use get_all() here, even for fields that aren't multiple use, - # because otherwise someone could have e.g. two Name fields, and we - # would just silently ignore it rather than doing something about it. - headers = parsed.get_all(name) - - # The way the email module works when parsing bytes is that it - # unconditionally decodes the bytes as ascii using the surrogateescape - # handler. When you pull that data back out (such as with get_all() ), - # it looks to see if the str has any surrogate escapes, and if it does - # it wraps it in a Header object instead of returning the string. - # - # As such, we'll look for those Header objects, and fix up the encoding. - value = [] - # Flag if we have run into any issues processing the headers, thus - # signalling that the data belongs in 'unparsed'. - valid_encoding = True - for h in headers: - # It's unclear if this can return more types than just a Header or - # a str, so we'll just assert here to make sure. - assert isinstance(h, (email.header.Header, str)) - - # If it's a header object, we need to do our little dance to get - # the real data out of it. In cases where there is invalid data - # we're going to end up with mojibake, but there's no obvious, good - # way around that without reimplementing parts of the Header object - # ourselves. - # - # That should be fine since, if mojibacked happens, this key is - # going into the unparsed dict anyways. - if isinstance(h, email.header.Header): - # The Header object stores it's data as chunks, and each chunk - # can be independently encoded, so we'll need to check each - # of them. - chunks: List[Tuple[bytes, Optional[str]]] = [] - for bin, encoding in email.header.decode_header(h): - try: - bin.decode("utf8", "strict") - except UnicodeDecodeError: - # Enable mojibake. - encoding = "latin1" - valid_encoding = False - else: - encoding = "utf8" - chunks.append((bin, encoding)) - - # Turn our chunks back into a Header object, then let that - # Header object do the right thing to turn them into a - # string for us. - value.append(str(email.header.make_header(chunks))) - # This is already a string, so just add it. - else: - value.append(h) - - # We've processed all of our values to get them into a list of str, - # but we may have mojibake data, in which case this is an unparsed - # field. - if not valid_encoding: - unparsed[name] = value - continue - - raw_name = _EMAIL_TO_RAW_MAPPING.get(name) - if raw_name is None: - # This is a bit of a weird situation, we've encountered a key that - # we don't know what it means, so we don't know whether it's meant - # to be a list or not. 
- # - # Since we can't really tell one way or another, we'll just leave it - # as a list, even though it may be a single item list, because that's - # what makes the most sense for email headers. - unparsed[name] = value - continue - - # If this is one of our string fields, then we'll check to see if our - # value is a list of a single item. If it is then we'll assume that - # it was emitted as a single string, and unwrap the str from inside - # the list. - # - # If it's any other kind of data, then we haven't the faintest clue - # what we should parse it as, and we have to just add it to our list - # of unparsed stuff. - if raw_name in _STRING_FIELDS and len(value) == 1: - raw[raw_name] = value[0] - # If this is one of our list of string fields, then we can just assign - # the value, since email *only* has strings, and our get_all() call - # above ensures that this is a list. - elif raw_name in _LIST_STRING_FIELDS: - raw[raw_name] = value - # Special Case: Keywords - # The keywords field is implemented in the metadata spec as a str, - # but it conceptually is a list of strings, and is serialized using - # ", ".join(keywords), so we'll do some light data massaging to turn - # this into what it logically is. - elif raw_name == "keywords" and len(value) == 1: - raw[raw_name] = _parse_keywords(value[0]) - # Special Case: Project-URL - # The project urls is implemented in the metadata spec as a list of - # specially-formatted strings that represent a key and a value, which - # is fundamentally a mapping, however the email format doesn't support - # mappings in a sane way, so it was crammed into a list of strings - # instead. - # - # We will do a little light data massaging to turn this into a map as - # it logically should be. - elif raw_name == "project_urls": - try: - raw[raw_name] = _parse_project_urls(value) - except KeyError: - unparsed[name] = value - # Nothing that we've done has managed to parse this, so it'll just - # throw it in our unparseable data and move on. - else: - unparsed[name] = value - - # We need to support getting the Description from the message payload in - # addition to getting it from the the headers. This does mean, though, there - # is the possibility of it being set both ways, in which case we put both - # in 'unparsed' since we don't know which is right. - try: - payload = _get_payload(parsed, data) - except ValueError: - unparsed.setdefault("description", []).append( - parsed.get_payload(decode=isinstance(data, bytes)) - ) - else: - if payload: - # Check to see if we've already got a description, if so then both - # it, and this body move to unparseable. - if "description" in raw: - description_header = cast(str, raw.pop("description")) - unparsed.setdefault("description", []).extend( - [description_header, payload] - ) - elif "description" in unparsed: - unparsed["description"].append(payload) - else: - raw["description"] = payload - - # We need to cast our `raw` to a metadata, because a TypedDict only support - # literal key names, but we're computing our key names on purpose, but the - # way this function is implemented, our `TypedDict` can only have valid key - # names. 
- return cast(RawMetadata, raw), unparsed diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/anchor_generator.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/anchor_generator.py deleted file mode 100644 index ee4b98819445f95982ca89a72cdd3e27b39b367f..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/anchor_generator.py +++ /dev/null @@ -1,382 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import collections -import math -from typing import List -import torch -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec -from detectron2.structures import Boxes, RotatedBoxes -from detectron2.utils.registry import Registry - -ANCHOR_GENERATOR_REGISTRY = Registry("ANCHOR_GENERATOR") -ANCHOR_GENERATOR_REGISTRY.__doc__ = """ -Registry for modules that creates object detection anchors for feature maps. - -The registered object will be called with `obj(cfg, input_shape)`. -""" - - -class BufferList(nn.Module): - """ - Similar to nn.ParameterList, but for buffers - """ - - def __init__(self, buffers): - super().__init__() - for i, buffer in enumerate(buffers): - # Use non-persistent buffer so the values are not saved in checkpoint - self.register_buffer(str(i), buffer, persistent=False) - - def __len__(self): - return len(self._buffers) - - def __iter__(self): - return iter(self._buffers.values()) - - -def _create_grid_offsets(size: List[int], stride: int, offset: float, device: torch.device): - grid_height, grid_width = size - shifts_x = torch.arange( - offset * stride, grid_width * stride, step=stride, dtype=torch.float32, device=device - ) - shifts_y = torch.arange( - offset * stride, grid_height * stride, step=stride, dtype=torch.float32, device=device - ) - - shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x) - shift_x = shift_x.reshape(-1) - shift_y = shift_y.reshape(-1) - return shift_x, shift_y - - -def _broadcast_params(params, num_features, name): - """ - If one size (or aspect ratio) is specified and there are multiple feature - maps, we "broadcast" anchors of that single size (or aspect ratio) - over all feature maps. - - If params is list[float], or list[list[float]] with len(params) == 1, repeat - it num_features time. - - Returns: - list[list[float]]: param for each feature - """ - assert isinstance( - params, collections.abc.Sequence - ), f"{name} in anchor generator has to be a list! Got {params}." - assert len(params), f"{name} in anchor generator cannot be empty!" - if not isinstance(params[0], collections.abc.Sequence): # params is list[float] - return [params] * num_features - if len(params) == 1: - return list(params) * num_features - assert len(params) == num_features, ( - f"Got {name} of length {len(params)} in anchor generator, " - f"but the number of input features is {num_features}!" - ) - return params - - -@ANCHOR_GENERATOR_REGISTRY.register() -class DefaultAnchorGenerator(nn.Module): - """ - Compute anchors in the standard ways described in - "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks". - """ - - box_dim: torch.jit.Final[int] = 4 - """ - the dimension of each anchor box. - """ - - @configurable - def __init__(self, *, sizes, aspect_ratios, strides, offset=0.5): - """ - This interface is experimental. 
- - Args: - sizes (list[list[float]] or list[float]): - If ``sizes`` is list[list[float]], ``sizes[i]`` is the list of anchor sizes - (i.e. sqrt of anchor area) to use for the i-th feature map. - If ``sizes`` is list[float], ``sizes`` is used for all feature maps. - Anchor sizes are given in absolute lengths in units of - the input image; they do not dynamically scale if the input image size changes. - aspect_ratios (list[list[float]] or list[float]): list of aspect ratios - (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies. - strides (list[int]): stride of each input feature. - offset (float): Relative offset between the center of the first anchor and the top-left - corner of the image. Value has to be in [0, 1). - Recommend to use 0.5, which means half stride. - """ - super().__init__() - - self.strides = strides - self.num_features = len(self.strides) - sizes = _broadcast_params(sizes, self.num_features, "sizes") - aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios") - self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios) - - self.offset = offset - assert 0.0 <= self.offset < 1.0, self.offset - - @classmethod - def from_config(cls, cfg, input_shape: List[ShapeSpec]): - return { - "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES, - "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS, - "strides": [x.stride for x in input_shape], - "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET, - } - - def _calculate_anchors(self, sizes, aspect_ratios): - cell_anchors = [ - self.generate_cell_anchors(s, a).float() for s, a in zip(sizes, aspect_ratios) - ] - return BufferList(cell_anchors) - - @property - @torch.jit.unused - def num_cell_anchors(self): - """ - Alias of `num_anchors`. - """ - return self.num_anchors - - @property - @torch.jit.unused - def num_anchors(self): - """ - Returns: - list[int]: Each int is the number of anchors at every pixel - location, on that feature map. - For example, if at every pixel we use anchors of 3 aspect - ratios and 5 sizes, the number of anchors is 15. - (See also ANCHOR_GENERATOR.SIZES and ANCHOR_GENERATOR.ASPECT_RATIOS in config) - - In standard RPN models, `num_anchors` on every feature map is the same. - """ - return [len(cell_anchors) for cell_anchors in self.cell_anchors] - - def _grid_anchors(self, grid_sizes: List[List[int]]): - """ - Returns: - list[Tensor]: #featuremap tensors, each is (#locations x #cell_anchors) x 4 - """ - anchors = [] - # buffers() not supported by torchscript. use named_buffers() instead - buffers: List[torch.Tensor] = [x[1] for x in self.cell_anchors.named_buffers()] - for size, stride, base_anchors in zip(grid_sizes, self.strides, buffers): - shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors.device) - shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1) - - anchors.append((shifts.view(-1, 1, 4) + base_anchors.view(1, -1, 4)).reshape(-1, 4)) - - return anchors - - def generate_cell_anchors(self, sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2)): - """ - Generate a tensor storing canonical anchor boxes, which are all anchor - boxes of different sizes and aspect_ratios centered at (0, 0). - We can later build the set of anchors for a full feature map by - shifting and tiling these tensors (see `meth:_grid_anchors`). - - Args: - sizes (tuple[float]): - aspect_ratios (tuple[float]]): - - Returns: - Tensor of shape (len(sizes) * len(aspect_ratios), 4) storing anchor boxes - in XYXY format. 
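The width/height algebra in `generate_cell_anchors` above is easy to verify in isolation; below is a minimal standalone sketch (independent of detectron2, with example sizes) of how each (size, aspect_ratio) pair becomes an XYXY box centered at the origin:

```python
import math
import torch

def cell_anchors(sizes=(32, 64), aspect_ratios=(0.5, 1.0, 2.0)):
    # For each pair: area = size**2 and a = h / w,
    # so w = sqrt(area / a), h = a * w; the box is centered at (0, 0).
    anchors = []
    for size in sizes:
        area = size ** 2.0
        for a in aspect_ratios:
            w = math.sqrt(area / a)
            h = a * w
            anchors.append([-w / 2.0, -h / 2.0, w / 2.0, h / 2.0])
    return torch.tensor(anchors)

print(cell_anchors())  # shape (len(sizes) * len(aspect_ratios), 4), XYXY format
```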
- """ - - # This is different from the anchor generator defined in the original Faster R-CNN - # code or Detectron. They yield the same AP, however the old version defines cell - # anchors in a less natural way with a shift relative to the feature grid and - # quantization that results in slightly different sizes for different aspect ratios. - # See also https://github.com/facebookresearch/Detectron/issues/227 - - anchors = [] - for size in sizes: - area = size ** 2.0 - for aspect_ratio in aspect_ratios: - # s * s = w * h - # a = h / w - # ... some algebra ... - # w = sqrt(s * s / a) - # h = a * w - w = math.sqrt(area / aspect_ratio) - h = aspect_ratio * w - x0, y0, x1, y1 = -w / 2.0, -h / 2.0, w / 2.0, h / 2.0 - anchors.append([x0, y0, x1, y1]) - return torch.tensor(anchors) - - def forward(self, features: List[torch.Tensor]): - """ - Args: - features (list[Tensor]): list of backbone feature maps on which to generate anchors. - - Returns: - list[Boxes]: a list of Boxes containing all the anchors for each feature map - (i.e. the cell anchors repeated over all locations in the feature map). - The number of anchors of each feature map is Hi x Wi x num_cell_anchors, - where Hi, Wi are resolution of the feature map divided by anchor stride. - """ - grid_sizes = [feature_map.shape[-2:] for feature_map in features] - anchors_over_all_feature_maps = self._grid_anchors(grid_sizes) - return [Boxes(x) for x in anchors_over_all_feature_maps] - - -@ANCHOR_GENERATOR_REGISTRY.register() -class RotatedAnchorGenerator(nn.Module): - """ - Compute rotated anchors used by Rotated RPN (RRPN), described in - "Arbitrary-Oriented Scene Text Detection via Rotation Proposals". - """ - - box_dim: int = 5 - """ - the dimension of each anchor box. - """ - - @configurable - def __init__(self, *, sizes, aspect_ratios, strides, angles, offset=0.5): - """ - This interface is experimental. - - Args: - sizes (list[list[float]] or list[float]): - If sizes is list[list[float]], sizes[i] is the list of anchor sizes - (i.e. sqrt of anchor area) to use for the i-th feature map. - If sizes is list[float], the sizes are used for all feature maps. - Anchor sizes are given in absolute lengths in units of - the input image; they do not dynamically scale if the input image size changes. - aspect_ratios (list[list[float]] or list[float]): list of aspect ratios - (i.e. height / width) to use for anchors. Same "broadcast" rule for `sizes` applies. - strides (list[int]): stride of each input feature. - angles (list[list[float]] or list[float]): list of angles (in degrees CCW) - to use for anchors. Same "broadcast" rule for `sizes` applies. - offset (float): Relative offset between the center of the first anchor and the top-left - corner of the image. Value has to be in [0, 1). - Recommend to use 0.5, which means half stride. 
- """ - super().__init__() - - self.strides = strides - self.num_features = len(self.strides) - sizes = _broadcast_params(sizes, self.num_features, "sizes") - aspect_ratios = _broadcast_params(aspect_ratios, self.num_features, "aspect_ratios") - angles = _broadcast_params(angles, self.num_features, "angles") - self.cell_anchors = self._calculate_anchors(sizes, aspect_ratios, angles) - - self.offset = offset - assert 0.0 <= self.offset < 1.0, self.offset - - @classmethod - def from_config(cls, cfg, input_shape: List[ShapeSpec]): - return { - "sizes": cfg.MODEL.ANCHOR_GENERATOR.SIZES, - "aspect_ratios": cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS, - "strides": [x.stride for x in input_shape], - "offset": cfg.MODEL.ANCHOR_GENERATOR.OFFSET, - "angles": cfg.MODEL.ANCHOR_GENERATOR.ANGLES, - } - - def _calculate_anchors(self, sizes, aspect_ratios, angles): - cell_anchors = [ - self.generate_cell_anchors(size, aspect_ratio, angle).float() - for size, aspect_ratio, angle in zip(sizes, aspect_ratios, angles) - ] - return BufferList(cell_anchors) - - @property - def num_cell_anchors(self): - """ - Alias of `num_anchors`. - """ - return self.num_anchors - - @property - def num_anchors(self): - """ - Returns: - list[int]: Each int is the number of anchors at every pixel - location, on that feature map. - For example, if at every pixel we use anchors of 3 aspect - ratios, 2 sizes and 5 angles, the number of anchors is 30. - (See also ANCHOR_GENERATOR.SIZES, ANCHOR_GENERATOR.ASPECT_RATIOS - and ANCHOR_GENERATOR.ANGLES in config) - - In standard RRPN models, `num_anchors` on every feature map is the same. - """ - return [len(cell_anchors) for cell_anchors in self.cell_anchors] - - def _grid_anchors(self, grid_sizes): - anchors = [] - for size, stride, base_anchors in zip(grid_sizes, self.strides, self.cell_anchors): - shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors.device) - zeros = torch.zeros_like(shift_x) - shifts = torch.stack((shift_x, shift_y, zeros, zeros, zeros), dim=1) - - anchors.append((shifts.view(-1, 1, 5) + base_anchors.view(1, -1, 5)).reshape(-1, 5)) - - return anchors - - def generate_cell_anchors( - self, - sizes=(32, 64, 128, 256, 512), - aspect_ratios=(0.5, 1, 2), - angles=(-90, -60, -30, 0, 30, 60, 90), - ): - """ - Generate a tensor storing canonical anchor boxes, which are all anchor - boxes of different sizes, aspect_ratios, angles centered at (0, 0). - We can later build the set of anchors for a full feature map by - shifting and tiling these tensors (see `meth:_grid_anchors`). - - Args: - sizes (tuple[float]): - aspect_ratios (tuple[float]]): - angles (tuple[float]]): - - Returns: - Tensor of shape (len(sizes) * len(aspect_ratios) * len(angles), 5) - storing anchor boxes in (x_ctr, y_ctr, w, h, angle) format. - """ - anchors = [] - for size in sizes: - area = size ** 2.0 - for aspect_ratio in aspect_ratios: - # s * s = w * h - # a = h / w - # ... some algebra ... - # w = sqrt(s * s / a) - # h = a * w - w = math.sqrt(area / aspect_ratio) - h = aspect_ratio * w - anchors.extend([0, 0, w, h, a] for a in angles) - - return torch.tensor(anchors) - - def forward(self, features): - """ - Args: - features (list[Tensor]): list of backbone feature maps on which to generate anchors. - - Returns: - list[RotatedBoxes]: a list of Boxes containing all the anchors for each feature map - (i.e. the cell anchors repeated over all locations in the feature map). 
- The number of anchors of each feature map is Hi x Wi x num_cell_anchors, - where Hi, Wi are resolution of the feature map divided by anchor stride. - """ - grid_sizes = [feature_map.shape[-2:] for feature_map in features] - anchors_over_all_feature_maps = self._grid_anchors(grid_sizes) - return [RotatedBoxes(x) for x in anchors_over_all_feature_maps] - - -def build_anchor_generator(cfg, input_shape): - """ - Built an anchor generator from `cfg.MODEL.ANCHOR_GENERATOR.NAME`. - """ - anchor_generator = cfg.MODEL.ANCHOR_GENERATOR.NAME - return ANCHOR_GENERATOR_REGISTRY.get(anchor_generator)(cfg, input_shape) diff --git a/spaces/ThankGod/image-classifier/Makefile b/spaces/ThankGod/image-classifier/Makefile deleted file mode 100644 index ff727d0ac0d87aa292e9ddbd99218cadb034f3a4..0000000000000000000000000000000000000000 --- a/spaces/ThankGod/image-classifier/Makefile +++ /dev/null @@ -1,27 +0,0 @@ -install: - pip install --upgrade pip &&\ - pip install -r requirements.txt - -test: - python -m pytest -vvv --cov=hello --cov=greeting \ - --cov=smath --cov=web tests - python -m pytest --nbval notebook.ipynb #tests our jupyter notebook - #python -m pytest -v tests/test_web.py #if you just want to test web - -debug: - python -m pytest -vv --pdb #Debugger is invoked - -one-test: - python -m pytest -vv tests/test_greeting.py::test_my_name4 - -debugthree: - #not working the way I expect - python -m pytest -vv --pdb --maxfail=4 # drop to PDB for first three failures - -format: - black *.py - -lint: - pylint --disable=R,C *.py - -all: install lint test format \ No newline at end of file diff --git a/spaces/Tuana/find-the-animal/utils/frontend.py b/spaces/Tuana/find-the-animal/utils/frontend.py deleted file mode 100644 index 406941c887f0f4e911aa26395f361c982759a72a..0000000000000000000000000000000000000000 --- a/spaces/Tuana/find-the-animal/utils/frontend.py +++ /dev/null @@ -1,19 +0,0 @@ -import streamlit as st - -def build_sidebar(): - sidebar = """ -
    -   Github project - Based on Haystack
    -   Project by Sara Zanzottera and Tuana Celik
    -   Project based on the "Introduction to Image Retrieval" presentation by Sara at the Open NLP Meetup
    -   Watch the presentation
    - """ - st.sidebar.markdown(sidebar, unsafe_allow_html=True) - -def set_state_if_absent(key, value): - if key not in st.session_state: - st.session_state[key] = value - -def reset_results(*args): - st.session_state.results = None \ No newline at end of file diff --git a/spaces/VikasKumar01/My_AI_chatbot/app.py b/spaces/VikasKumar01/My_AI_chatbot/app.py deleted file mode 100644 index 33bdc69d00ecf6a7e38ca8d8c191f8ea75e2d94b..0000000000000000000000000000000000000000 --- a/spaces/VikasKumar01/My_AI_chatbot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """Meet satvika, MY youthful and witty personal assistant! At 21 year old, she's full of energy and always egaer to help. satvika's goal is to assist you with any question or problems you might have. Her enthusiam shines through in every response, making interactions with her enjoyable and engaging. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Wootang01/question_generator_two/README.md b/spaces/Wootang01/question_generator_two/README.md deleted file mode 100644 index 6a24d36212902f6641872e9fd15001a391e462bb..0000000000000000000000000000000000000000 --- a/spaces/Wootang01/question_generator_two/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Question_generator_two -emoji: 📈 -colorFrom: yellow -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/XzJosh/otto-Bert-VITS2/commons.py b/spaces/XzJosh/otto-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/otto-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts 
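`fused_add_tanh_sigmoid_multiply` at the end of this chunk is the WaveNet-style gated activation used by the WN blocks; a small self-contained check of what it computes, restated without the TorchScript decorator and with hypothetical tensor sizes:

```python
import torch

# Gated activation: the first n_channels of (input_a + input_b) go through tanh,
# the remaining channels through sigmoid, and the two halves are multiplied.
def gated_activation(input_a, input_b, n_channels):
    in_act = input_a + input_b
    t_act = torch.tanh(in_act[:, :n_channels, :])
    s_act = torch.sigmoid(in_act[:, n_channels:, :])
    return t_act * s_act

x = torch.randn(2, 8, 100)  # hypothetical: batch 2, 2 * 4 channels, 100 frames
g = torch.randn(2, 8, 100)  # conditioning tensor of the same shape
out = gated_activation(x, g, n_channels=4)
print(out.shape)            # torch.Size([2, 4, 100])
```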
- - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/YUANAI/DiffspeechResearch/utils/metrics/dtw.py b/spaces/YUANAI/DiffspeechResearch/utils/metrics/dtw.py deleted file mode 100644 index 829e8e160355f8729b8e478bc4a24ca8597df58e..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/utils/metrics/dtw.py +++ /dev/null @@ -1,160 +0,0 @@ -from numpy import array, zeros, full, argmin, inf, ndim -from scipy.spatial.distance import cdist -from math import isinf - - -def dtw(x, y, dist, warp=1, w=inf, s=1.0): - """ - Computes Dynamic Time Warping (DTW) of two sequences. - - :param array x: N1*M array - :param array y: N2*M array - :param func dist: distance used as cost measure - :param int warp: how many shifts are computed. - :param int w: window size limiting the maximal distance between indices of matched entries |i,j|. - :param float s: weight applied on off-diagonal moves of the path. As s gets larger, the warping path is increasingly biased towards the diagonal - Returns the minimum distance, the cost matrix, the accumulated cost matrix, and the wrap path. 
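For the `dtw` function whose signature and docstring appear at the end of this chunk, a minimal usage sketch (assuming the module is importable as `utils.metrics.dtw`, matching this repo's layout), aligning two short 1-D sequences with absolute difference as the local cost:

```python
import numpy as np
from utils.metrics.dtw import dtw  # import path assumed from the repo layout

x = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
y = np.array([0.0, 1.0, 1.0, 2.0, 0.0])

# Local cost is plain absolute difference; the default w=inf applies no window.
dist, cost, acc, path = dtw(x, y, dist=lambda a, b: abs(a - b))
print(dist)   # total alignment cost along the optimal warping path
print(path)   # tuple of (indices into x, indices into y)
```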
- """ - assert len(x) - assert len(y) - assert isinf(w) or (w >= abs(len(x) - len(y))) - assert s > 0 - r, c = len(x), len(y) - if not isinf(w): - D0 = full((r + 1, c + 1), inf) - for i in range(1, r + 1): - D0[i, max(1, i - w):min(c + 1, i + w + 1)] = 0 - D0[0, 0] = 0 - else: - D0 = zeros((r + 1, c + 1)) - D0[0, 1:] = inf - D0[1:, 0] = inf - D1 = D0[1:, 1:] # view - for i in range(r): - for j in range(c): - if (isinf(w) or (max(0, i - w) <= j <= min(c, i + w))): - D1[i, j] = dist(x[i], y[j]) - C = D1.copy() - jrange = range(c) - for i in range(r): - if not isinf(w): - jrange = range(max(0, i - w), min(c, i + w + 1)) - for j in jrange: - min_list = [D0[i, j]] - for k in range(1, warp + 1): - i_k = min(i + k, r) - j_k = min(j + k, c) - min_list += [D0[i_k, j] * s, D0[i, j_k] * s] - D1[i, j] += min(min_list) - if len(x) == 1: - path = zeros(len(y)), range(len(y)) - elif len(y) == 1: - path = range(len(x)), zeros(len(x)) - else: - path = _traceback(D0) - return D1[-1, -1], C, D1, path - - -def accelerated_dtw(x, y, dist, warp=1): - """ - Computes Dynamic Time Warping (DTW) of two sequences in a faster way. - Instead of iterating through each element and calculating each distance, - this uses the cdist function from scipy (https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html) - - :param array x: N1*M array - :param array y: N2*M array - :param string or func dist: distance parameter for cdist. When string is given, cdist uses optimized functions for the distance metrics. - If a string is passed, the distance function can be 'braycurtis', 'canberra', 'chebyshev', 'cityblock', 'correlation', 'cosine', 'dice', 'euclidean', 'hamming', 'jaccard', 'kulsinski', 'mahalanobis', 'matching', 'minkowski', 'rogerstanimoto', 'russellrao', 'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'wminkowski', 'yule'. - :param int warp: how many shifts are computed. - Returns the minimum distance, the cost matrix, the accumulated cost matrix, and the wrap path. 
- """ - assert len(x) - assert len(y) - if ndim(x) == 1: - x = x.reshape(-1, 1) - if ndim(y) == 1: - y = y.reshape(-1, 1) - r, c = len(x), len(y) - D0 = zeros((r + 1, c + 1)) - D0[0, 1:] = inf - D0[1:, 0] = inf - D1 = D0[1:, 1:] - D0[1:, 1:] = cdist(x, y, dist) - C = D1.copy() - for i in range(r): - for j in range(c): - min_list = [D0[i, j]] - for k in range(1, warp + 1): - min_list += [D0[min(i + k, r), j], - D0[i, min(j + k, c)]] - D1[i, j] += min(min_list) - if len(x) == 1: - path = zeros(len(y)), range(len(y)) - elif len(y) == 1: - path = range(len(x)), zeros(len(x)) - else: - path = _traceback(D0) - return D1[-1, -1], C, D1, path - - -def _traceback(D): - i, j = array(D.shape) - 2 - p, q = [i], [j] - while (i > 0) or (j > 0): - tb = argmin((D[i, j], D[i, j + 1], D[i + 1, j])) - if tb == 0: - i -= 1 - j -= 1 - elif tb == 1: - i -= 1 - else: # (tb == 2): - j -= 1 - p.insert(0, i) - q.insert(0, j) - return array(p), array(q) - - -if __name__ == '__main__': - w = inf - s = 1.0 - if 1: # 1-D numeric - from sklearn.metrics.pairwise import manhattan_distances - - x = [0, 0, 1, 1, 2, 4, 2, 1, 2, 0] - y = [1, 1, 1, 2, 2, 2, 2, 3, 2, 0] - dist_fun = manhattan_distances - w = 1 - # s = 1.2 - elif 0: # 2-D numeric - from sklearn.metrics.pairwise import euclidean_distances - - x = [[0, 0], [0, 1], [1, 1], [1, 2], [2, 2], [4, 3], [2, 3], [1, 1], [2, 2], [0, 1]] - y = [[1, 0], [1, 1], [1, 1], [2, 1], [4, 3], [4, 3], [2, 3], [3, 1], [1, 2], [1, 0]] - dist_fun = euclidean_distances - else: # 1-D list of strings - from nltk.metrics.distance import edit_distance - - # x = ['we', 'shelled', 'clams', 'for', 'the', 'chowder'] - # y = ['class', 'too'] - x = ['i', 'soon', 'found', 'myself', 'muttering', 'to', 'the', 'walls'] - y = ['see', 'drown', 'himself'] - # x = 'we talked about the situation'.split() - # y = 'we talked about the situation'.split() - dist_fun = edit_distance - dist, cost, acc, path = dtw(x, y, dist_fun, w=w, s=s) - - # Vizualize - from matplotlib import pyplot as plt - - plt.imshow(cost.T, origin='lower', cmap=plt.cm.Reds, interpolation='nearest') - plt.plot(path[0], path[1], '-o') # relation - plt.xticks(range(len(x)), x) - plt.yticks(range(len(y)), y) - plt.xlabel('x') - plt.ylabel('y') - plt.axis('tight') - if isinf(w): - plt.title('Minimum distance: {}, slope weight: {}'.format(dist, s)) - else: - plt.title('Minimum distance: {}, window widht: {}, slope weight: {}'.format(dist, w, s)) - plt.show() diff --git a/spaces/YaTharThShaRma999/Testtrial1/README.md b/spaces/YaTharThShaRma999/Testtrial1/README.md deleted file mode 100644 index 9f24912c0d72bab8e8cbd3e6b39c1ba2c9e2a488..0000000000000000000000000000000000000000 --- a/spaces/YaTharThShaRma999/Testtrial1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: WizardLM -emoji: 😁 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v4/model_download/yolov5_model_p5_n.sh b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v4/model_download/yolov5_model_p5_n.sh deleted file mode 100644 index 5fc6d093f4b92e1ad735f8b513d01d95f4d53d5c..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v4/model_download/yolov5_model_p5_n.sh +++ /dev/null @@ -1,4 +0,0 @@ -cd ./yolov5 - -# 下载YOLOv5模型 -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n.pt diff --git 
a/spaces/ZhangYuanhan/Bamboo_ViT-B16_demo/README.md b/spaces/ZhangYuanhan/Bamboo_ViT-B16_demo/README.md deleted file mode 100644 index 74a1caa7498f2c89cde79a3f031ec6a77758e45f..0000000000000000000000000000000000000000 --- a/spaces/ZhangYuanhan/Bamboo_ViT-B16_demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bamboo ViT-B16 Demo -emoji: 🎋 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.0.17 -app_file: app.py -pinned: false -license: cc-by-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Zulqrnain/NewsSummarizer/README.md b/spaces/Zulqrnain/NewsSummarizer/README.md deleted file mode 100644 index fb41e0c8d1344ac64d60398569e2e178e002cd13..0000000000000000000000000000000000000000 --- a/spaces/Zulqrnain/NewsSummarizer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: NewsSummarizer -emoji: 🌖 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/a-v-bely/spanish-task-generator/\320\222\321\205\320\276\320\264.py" "b/spaces/a-v-bely/spanish-task-generator/\320\222\321\205\320\276\320\264.py" deleted file mode 100644 index fe3238dda91a8d18eb410d8ddde3439e0dfc6cd4..0000000000000000000000000000000000000000 --- "a/spaces/a-v-bely/spanish-task-generator/\320\222\321\205\320\276\320\264.py" +++ /dev/null @@ -1,35 +0,0 @@ -import warnings -import streamlit as st -from utilities.utils import is_valid_uuid -from utilities_database.user_database_widgets import LogIn - -warnings.filterwarnings('ignore') -st.header('Добро пожаловать!') -st.subheader('Вы используете инструмент по автоматической генерации лексико-грамматических заданий по' - ' испанскому языку!') -st.write('**Зарегистрируйтесь или войдите в аккаунт**') -__login__obj = LogIn(auth_token=st.secrets['COURIER_AUTH_TOKEN'], - company_name=st.secrets['COMPANY_NAME'], - width=200, height=200, - logout_button_name='Выйти', - hide_menu_bool=False, - hide_footer_bool=False, - lottie_url='https://assets2.lottiefiles.com/packages/lf20_jcikwtux.json') -LOGGED_IN = __login__obj.build_login_ui() -st.session_state['-LOGGED_IN-'] = False -# Check for username in cookies -if '-USER_NAME-' not in st.session_state: - if __login__obj.cookies.get('__streamlit_login_signup_ui_username__'): - if not is_valid_uuid(__login__obj.cookies['__streamlit_login_signup_ui_username__']): - st.session_state['-USER_NAME-'] = __login__obj.cookies['__streamlit_login_signup_ui_username__'] - st.session_state['-LOGGED_IN_BOOL-'] = True - -if LOGGED_IN: - st.session_state['-LOGGED_IN_BOOL-'] = True - # st.session_state['-USER_NAME-'] = - st.success('Можете переходить к следующим вкладкам!') - -st.markdown('*Автор-разработчик: А.В.Белый, кафедра математической лингвистики, филологический факультет СПбГУ,' - ' 3 курс, бакалавриат, "Прикладная, компьютерная и математическая лингвистика (английский язык)"*' - '\n\n*Научный руководитель: канд. филол. наук, доц. 
О.А.Митрофанова*') -st.markdown('*E-mail: st087202@student.spbu.ru*') diff --git a/spaces/aaronstaclara/towards-financial-inclusion/README.md b/spaces/aaronstaclara/towards-financial-inclusion/README.md deleted file mode 100644 index 3f2816f34ad52689ea847da4a5764cdec3d9267b..0000000000000000000000000000000000000000 --- a/spaces/aaronstaclara/towards-financial-inclusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Towards Financial Inclusion -emoji: 🐨 -colorFrom: gray -colorTo: green -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/kd_one_stage.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/kd_one_stage.py deleted file mode 100644 index 671ec19015c87fefd065b84ae887147f90cc892b..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/detectors/kd_one_stage.py +++ /dev/null @@ -1,100 +0,0 @@ -import mmcv -import torch -from mmcv.runner import load_checkpoint - -from .. import build_detector -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class KnowledgeDistillationSingleStageDetector(SingleStageDetector): - r"""Implementation of `Distilling the Knowledge in a Neural Network. - `_. - - Args: - teacher_config (str | dict): Config file path - or the config object of teacher model. - teacher_ckpt (str, optional): Checkpoint path of teacher model. - If left as None, the model will not load any weights. - """ - - def __init__(self, - backbone, - neck, - bbox_head, - teacher_config, - teacher_ckpt=None, - eval_teacher=True, - train_cfg=None, - test_cfg=None, - pretrained=None): - super().__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained) - self.eval_teacher = eval_teacher - # Build teacher model - if isinstance(teacher_config, str): - teacher_config = mmcv.Config.fromfile(teacher_config) - self.teacher_model = build_detector(teacher_config['model']) - if teacher_ckpt is not None: - load_checkpoint( - self.teacher_model, teacher_ckpt, map_location='cpu') - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - x = self.extract_feat(img) - with torch.no_grad(): - teacher_x = self.teacher_model.extract_feat(img) - out_teacher = self.teacher_model.bbox_head(teacher_x) - losses = self.bbox_head.forward_train(x, out_teacher, img_metas, - gt_bboxes, gt_labels, - gt_bboxes_ignore) - return losses - - def cuda(self, device=None): - """Since teacher_model is registered as a plain object, it is necessary - to put the teacher model to cuda when calling cuda function.""" - self.teacher_model.cuda(device=device) - return super().cuda(device=device) - - def train(self, mode=True): - """Set the same train mode for teacher and student model.""" - if self.eval_teacher: - self.teacher_model.train(False) - else: - self.teacher_model.train(mode) - super().train(mode) - - def __setattr__(self, name, value): - """Set attribute, i.e. self.name = value - - This reloading prevent the teacher model from being registered as a - nn.Module. The teacher module is registered as a plain object, so that - the teacher parameters will not show up when calling - ``self.parameters``, ``self.modules``, ``self.children`` methods. - """ - if name == 'teacher_model': - object.__setattr__(self, name, value) - else: - super().__setattr__(name, value) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/dataset_wrappers.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/dataset_wrappers.py deleted file mode 100644 index 1a22501e0804e44e3350fb1f7bb95cd01fa14583..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/dataset_wrappers.py +++ /dev/null @@ -1,62 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv - * Copyright (c) OpenMMLab. All rights reserved. -''' - -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - concat the group flag for image aspect ratio. - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. - """ - - def __init__(self, datasets): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.PALETTE = datasets[0].PALETTE - - -@DATASETS.register_module() -class RepeatDataset(object): - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. 
- """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - self.PALETTE = dataset.PALETTE - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - """Get item from original dataset.""" - return self.dataset[idx % self._ori_len] - - def __len__(self): - """The length is multiplied by ``times``""" - return self.times * self._ori_len diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/win32/winkey.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/win32/winkey.py deleted file mode 100644 index 9205b25345a4cabcc2701f69728f33d3a91b8935..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/win32/winkey.py +++ /dev/null @@ -1,197 +0,0 @@ -from pyglet.window import key -from .constants import * - -keymap = { - ord('A'): key.A, - ord('B'): key.B, - ord('C'): key.C, - ord('D'): key.D, - ord('E'): key.E, - ord('F'): key.F, - ord('G'): key.G, - ord('H'): key.H, - ord('I'): key.I, - ord('J'): key.J, - ord('K'): key.K, - ord('L'): key.L, - ord('M'): key.M, - ord('N'): key.N, - ord('O'): key.O, - ord('P'): key.P, - ord('Q'): key.Q, - ord('R'): key.R, - ord('S'): key.S, - ord('T'): key.T, - ord('U'): key.U, - ord('V'): key.V, - ord('W'): key.W, - ord('X'): key.X, - ord('Y'): key.Y, - ord('Z'): key.Z, - ord('0'): key._0, - ord('1'): key._1, - ord('2'): key._2, - ord('3'): key._3, - ord('4'): key._4, - ord('5'): key._5, - ord('6'): key._6, - ord('7'): key._7, - ord('8'): key._8, - ord('9'): key._9, - ord('\b'): key.BACKSPACE, - - # By experiment: - 0x14: key.CAPSLOCK, - 0x5d: key.MENU, - - # VK_LBUTTON: , - # VK_RBUTTON: , - VK_CANCEL: key.CANCEL, - # VK_MBUTTON: , - # VK_BACK: , - VK_TAB: key.TAB, - # VK_CLEAR: , - VK_RETURN: key.RETURN, - VK_SHIFT: key.LSHIFT, - VK_CONTROL: key.LCTRL, - VK_MENU: key.LALT, - VK_PAUSE: key.PAUSE, - # VK_CAPITAL: , - # VK_KANA: , - # VK_HANGEUL: , - # VK_HANGUL: , - # VK_JUNJA: , - # VK_FINAL: , - # VK_HANJA: , - # VK_KANJI: , - VK_ESCAPE: key.ESCAPE, - # VK_CONVERT: , - # VK_NONCONVERT: , - # VK_ACCEPT: , - # VK_MODECHANGE: , - VK_SPACE: key.SPACE, - VK_PRIOR: key.PAGEUP, - VK_NEXT: key.PAGEDOWN, - VK_END: key.END, - VK_HOME: key.HOME, - VK_LEFT: key.LEFT, - VK_UP: key.UP, - VK_RIGHT: key.RIGHT, - VK_DOWN: key.DOWN, - # VK_SELECT: , - VK_PRINT: key.PRINT, - # VK_EXECUTE: , - # VK_SNAPSHOT: , - VK_INSERT: key.INSERT, - VK_DELETE: key.DELETE, - VK_HELP: key.HELP, - VK_LWIN: key.LWINDOWS, - VK_RWIN: key.RWINDOWS, - # VK_APPS: , - VK_NUMPAD0: key.NUM_0, - VK_NUMPAD1: key.NUM_1, - VK_NUMPAD2: key.NUM_2, - VK_NUMPAD3: key.NUM_3, - VK_NUMPAD4: key.NUM_4, - VK_NUMPAD5: key.NUM_5, - VK_NUMPAD6: key.NUM_6, - VK_NUMPAD7: key.NUM_7, - VK_NUMPAD8: key.NUM_8, - VK_NUMPAD9: key.NUM_9, - VK_MULTIPLY: key.NUM_MULTIPLY, - VK_ADD: key.NUM_ADD, - # VK_SEPARATOR: , - VK_SUBTRACT: key.NUM_SUBTRACT, - VK_DECIMAL: key.NUM_DECIMAL, - VK_DIVIDE: key.NUM_DIVIDE, - VK_F1: key.F1, - VK_F2: key.F2, - VK_F3: key.F3, - VK_F4: key.F4, - VK_F5: key.F5, - VK_F6: key.F6, - VK_F7: key.F7, - VK_F8: key.F8, - VK_F9: key.F9, - VK_F10: key.F10, - VK_F11: key.F11, - VK_F12: key.F12, - VK_F13: key.F13, - VK_F14: key.F14, - VK_F15: key.F15, - VK_F16: key.F16, - VK_F17: key.F17, - VK_F18: key.F18, - VK_F19: key.F19, - VK_F20: key.F20, - VK_F21: key.F21, - VK_F22: key.F22, - VK_F23: key.F23, - VK_F24: key.F24, - VK_NUMLOCK: key.NUMLOCK, - VK_SCROLL: 
key.SCROLLLOCK, - VK_LSHIFT: key.LSHIFT, - VK_RSHIFT: key.RSHIFT, - VK_LCONTROL: key.LCTRL, - VK_RCONTROL: key.RCTRL, - VK_LMENU: key.LALT, - VK_RMENU: key.RALT, - # VK_PROCESSKEY: , - # VK_ATTN: , - # VK_CRSEL: , - # VK_EXSEL: , - # VK_EREOF: , - # VK_PLAY: , - # VK_ZOOM: , - # VK_NONAME: , - # VK_PA1: , - # VK_OEM_CLEAR: , - # VK_XBUTTON1: , - # VK_XBUTTON2: , - # VK_VOLUME_MUTE: , - # VK_VOLUME_DOWN: , - # VK_VOLUME_UP: , - # VK_MEDIA_NEXT_TRACK: , - # VK_MEDIA_PREV_TRACK: , - # VK_MEDIA_PLAY_PAUSE: , - # VK_BROWSER_BACK: , - # VK_BROWSER_FORWARD: , -} - -# Keys that must be translated via MapVirtualKey, as the virtual key code -# is language and keyboard dependent. -chmap = { - ord('!'): key.EXCLAMATION, - ord('"'): key.DOUBLEQUOTE, - ord('#'): key.HASH, - ord('$'): key.DOLLAR, - ord('%'): key.PERCENT, - ord('&'): key.AMPERSAND, - ord("'"): key.APOSTROPHE, - ord('('): key.PARENLEFT, - ord(')'): key.PARENRIGHT, - ord('*'): key.ASTERISK, - ord('+'): key.PLUS, - ord(','): key.COMMA, - ord('-'): key.MINUS, - ord('.'): key.PERIOD, - ord('/'): key.SLASH, - ord(':'): key.COLON, - ord(';'): key.SEMICOLON, - ord('<'): key.LESS, - ord('='): key.EQUAL, - ord('>'): key.GREATER, - ord('?'): key.QUESTION, - ord('@'): key.AT, - ord('['): key.BRACKETLEFT, - ord('\\'): key.BACKSLASH, - ord(']'): key.BRACKETRIGHT, - ord('\x5e'): key.ASCIICIRCUM, - ord('_'): key.UNDERSCORE, - ord('\x60'): key.GRAVE, - ord('`'): key.QUOTELEFT, - ord('{'): key.BRACELEFT, - ord('|'): key.BAR, - ord('}'): key.BRACERIGHT, - ord('~'): key.ASCIITILDE, -} diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/constants.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/constants.py deleted file mode 100644 index 8a5785b6fdb21910a174252c5af2f05b40ece4a5..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/constants.py +++ /dev/null @@ -1,149 +0,0 @@ -DEFAULT_Z_NEAR = 0.05 # Near clipping plane, in meters -DEFAULT_Z_FAR = 100.0 # Far clipping plane, in meters -DEFAULT_SCENE_SCALE = 2.0 # Default scene scale -MAX_N_LIGHTS = 4 # Maximum number of lights of each type allowed -TARGET_OPEN_GL_MAJOR = 4 # Target OpenGL Major Version -TARGET_OPEN_GL_MINOR = 1 # Target OpenGL Minor Version -MIN_OPEN_GL_MAJOR = 3 # Minimum OpenGL Major Version -MIN_OPEN_GL_MINOR = 3 # Minimum OpenGL Minor Version -FLOAT_SZ = 4 # Byte size of GL float32 -UINT_SZ = 4 # Byte size of GL uint32 -SHADOW_TEX_SZ = 2048 # Width and Height of Shadow Textures -TEXT_PADDING = 20 # Width of padding for rendering text (px) - - -# Flags for render type -class RenderFlags(object): - """Flags for rendering in the scene. - - Combine them with the bitwise or. For example, - - >>> flags = OFFSCREEN | SHADOWS_DIRECTIONAL | VERTEX_NORMALS - - would result in an offscreen render with directional shadows and - vertex normals enabled. 
- """ - NONE = 0 - """Normal PBR Render.""" - DEPTH_ONLY = 1 - """Only render the depth buffer.""" - OFFSCREEN = 2 - """Render offscreen and return the depth and (optionally) color buffers.""" - FLIP_WIREFRAME = 4 - """Invert the status of wireframe rendering for each mesh.""" - ALL_WIREFRAME = 8 - """Render all meshes as wireframes.""" - ALL_SOLID = 16 - """Render all meshes as solids.""" - SHADOWS_DIRECTIONAL = 32 - """Render shadows for directional lights.""" - SHADOWS_POINT = 64 - """Render shadows for point lights.""" - SHADOWS_SPOT = 128 - """Render shadows for spot lights.""" - SHADOWS_ALL = 32 | 64 | 128 - """Render shadows for all lights.""" - VERTEX_NORMALS = 256 - """Render vertex normals.""" - FACE_NORMALS = 512 - """Render face normals.""" - SKIP_CULL_FACES = 1024 - """Do not cull back faces.""" - RGBA = 2048 - """Render the color buffer with the alpha channel enabled.""" - FLAT = 4096 - """Render the color buffer flat, with no lighting computations.""" - SEG = 8192 - - -class TextAlign: - """Text alignment options for captions. - - Only use one at a time. - """ - CENTER = 0 - """Center the text by width and height.""" - CENTER_LEFT = 1 - """Center the text by height and left-align it.""" - CENTER_RIGHT = 2 - """Center the text by height and right-align it.""" - BOTTOM_LEFT = 3 - """Put the text in the bottom-left corner.""" - BOTTOM_RIGHT = 4 - """Put the text in the bottom-right corner.""" - BOTTOM_CENTER = 5 - """Center the text by width and fix it to the bottom.""" - TOP_LEFT = 6 - """Put the text in the top-left corner.""" - TOP_RIGHT = 7 - """Put the text in the top-right corner.""" - TOP_CENTER = 8 - """Center the text by width and fix it to the top.""" - - -class GLTF(object): - """Options for GL objects.""" - NEAREST = 9728 - """Nearest neighbor interpolation.""" - LINEAR = 9729 - """Linear interpolation.""" - NEAREST_MIPMAP_NEAREST = 9984 - """Nearest mipmapping.""" - LINEAR_MIPMAP_NEAREST = 9985 - """Linear mipmapping.""" - NEAREST_MIPMAP_LINEAR = 9986 - """Nearest mipmapping.""" - LINEAR_MIPMAP_LINEAR = 9987 - """Linear mipmapping.""" - CLAMP_TO_EDGE = 33071 - """Clamp to the edge of the texture.""" - MIRRORED_REPEAT = 33648 - """Mirror the texture.""" - REPEAT = 10497 - """Repeat the texture.""" - POINTS = 0 - """Render as points.""" - LINES = 1 - """Render as lines.""" - LINE_LOOP = 2 - """Render as a line loop.""" - LINE_STRIP = 3 - """Render as a line strip.""" - TRIANGLES = 4 - """Render as triangles.""" - TRIANGLE_STRIP = 5 - """Render as a triangle strip.""" - TRIANGLE_FAN = 6 - """Render as a triangle fan.""" - - -class BufFlags(object): - POSITION = 0 - NORMAL = 1 - TANGENT = 2 - TEXCOORD_0 = 4 - TEXCOORD_1 = 8 - COLOR_0 = 16 - JOINTS_0 = 32 - WEIGHTS_0 = 64 - - -class TexFlags(object): - NONE = 0 - NORMAL = 1 - OCCLUSION = 2 - EMISSIVE = 4 - BASE_COLOR = 8 - METALLIC_ROUGHNESS = 16 - DIFFUSE = 32 - SPECULAR_GLOSSINESS = 64 - - -class ProgramFlags: - NONE = 0 - USE_MATERIAL = 1 - VERTEX_NORMALS = 2 - FACE_NORMALS = 4 - - -__all__ = ['RenderFlags', 'TextAlign', 'GLTF'] diff --git a/spaces/adirik/stylemc-demo/encoder4editing/models/latent_codes_pool.py b/spaces/adirik/stylemc-demo/encoder4editing/models/latent_codes_pool.py deleted file mode 100644 index 0281d4b5e80f8eb26e824fa35b4f908dcb6634e6..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/encoder4editing/models/latent_codes_pool.py +++ /dev/null @@ -1,55 +0,0 @@ -import random -import torch - - -class LatentCodesPool: - """This class implements latent codes buffer that stores 
previously generated w latent codes. - This buffer enables us to update discriminators using a history of generated w's - rather than the ones produced by the latest encoder. - """ - - def __init__(self, pool_size): - """Initialize the ImagePool class - Parameters: - pool_size (int) -- the size of image buffer, if pool_size=0, no buffer will be created - """ - self.pool_size = pool_size - if self.pool_size > 0: # create an empty pool - self.num_ws = 0 - self.ws = [] - - def query(self, ws): - """Return w's from the pool. - Parameters: - ws: the latest generated w's from the generator - Returns w's from the buffer. - By 50/100, the buffer will return input w's. - By 50/100, the buffer will return w's previously stored in the buffer, - and insert the current w's to the buffer. - """ - if self.pool_size == 0: # if the buffer size is 0, do nothing - return ws - return_ws = [] - for w in ws: # ws.shape: (batch, 512) or (batch, n_latent, 512) - # w = torch.unsqueeze(image.data, 0) - if w.ndim == 2: - i = random.randint(0, len(w) - 1) # apply a random latent index as a candidate - w = w[i] - self.handle_w(w, return_ws) - return_ws = torch.stack(return_ws, 0) # collect all the images and return - return return_ws - - def handle_w(self, w, return_ws): - if self.num_ws < self.pool_size: # if the buffer is not full; keep inserting current codes to the buffer - self.num_ws = self.num_ws + 1 - self.ws.append(w) - return_ws.append(w) - else: - p = random.uniform(0, 1) - if p > 0.5: # by 50% chance, the buffer will return a previously stored latent code, and insert the current code into the buffer - random_id = random.randint(0, self.pool_size - 1) # randint is inclusive - tmp = self.ws[random_id].clone() - self.ws[random_id] = w - return_ws.append(tmp) - else: # by another 50% chance, the buffer will return the current image - return_ws.append(w) diff --git a/spaces/adyjay/andite-anything-v4.0/README.md b/spaces/adyjay/andite-anything-v4.0/README.md deleted file mode 100644 index 7759f37f1031c968e6431941c4bdb09bc7648ab9..0000000000000000000000000000000000000000 --- a/spaces/adyjay/andite-anything-v4.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Andite Anything V4.0 -emoji: 💻 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/Music_Source_Separation/bytesep/callbacks/__init__.py b/spaces/akhaliq/Music_Source_Separation/bytesep/callbacks/__init__.py deleted file mode 100644 index e70c6c2a4fa8fcabfdb78502907d431b07158edc..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Music_Source_Separation/bytesep/callbacks/__init__.py +++ /dev/null @@ -1,76 +0,0 @@ -from typing import List - -import pytorch_lightning as pl -import torch.nn as nn - - -def get_callbacks( - task_name: str, - config_yaml: str, - workspace: str, - checkpoints_dir: str, - statistics_path: str, - logger: pl.loggers.TensorBoardLogger, - model: nn.Module, - evaluate_device: str, -) -> List[pl.Callback]: - r"""Get callbacks of a task and config yaml file. 
- - Args: - task_name: str - config_yaml: str - dataset_dir: str - workspace: str, containing useful files such as audios for evaluation - checkpoints_dir: str, directory to save checkpoints - statistics_dir: str, directory to save statistics - logger: pl.loggers.TensorBoardLogger - model: nn.Module - evaluate_device: str - - Return: - callbacks: List[pl.Callback] - """ - if task_name == 'musdb18': - - from bytesep.callbacks.musdb18 import get_musdb18_callbacks - - return get_musdb18_callbacks( - config_yaml=config_yaml, - workspace=workspace, - checkpoints_dir=checkpoints_dir, - statistics_path=statistics_path, - logger=logger, - model=model, - evaluate_device=evaluate_device, - ) - - elif task_name == 'voicebank-demand': - - from bytesep.callbacks.voicebank_demand import get_voicebank_demand_callbacks - - return get_voicebank_demand_callbacks( - config_yaml=config_yaml, - workspace=workspace, - checkpoints_dir=checkpoints_dir, - statistics_path=statistics_path, - logger=logger, - model=model, - evaluate_device=evaluate_device, - ) - - elif task_name in ['vctk-musdb18', 'violin-piano', 'piano-symphony']: - - from bytesep.callbacks.instruments_callbacks import get_instruments_callbacks - - return get_instruments_callbacks( - config_yaml=config_yaml, - workspace=workspace, - checkpoints_dir=checkpoints_dir, - statistics_path=statistics_path, - logger=logger, - model=model, - evaluate_device=evaluate_device, - ) - - else: - raise NotImplementedError diff --git a/spaces/akhaliq/lama/saicinpainting/evaluation/utils.py b/spaces/akhaliq/lama/saicinpainting/evaluation/utils.py deleted file mode 100644 index 6d7c15c9242ed8a9bc59fbb3b450cca394720bb8..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/saicinpainting/evaluation/utils.py +++ /dev/null @@ -1,28 +0,0 @@ -from enum import Enum - -import yaml -from easydict import EasyDict as edict -import torch.nn as nn -import torch - - -def load_yaml(path): - with open(path, 'r') as f: - return edict(yaml.safe_load(f)) - - -def move_to_device(obj, device): - if isinstance(obj, nn.Module): - return obj.to(device) - if torch.is_tensor(obj): - return obj.to(device) - if isinstance(obj, (tuple, list)): - return [move_to_device(el, device) for el in obj] - if isinstance(obj, dict): - return {name: move_to_device(val, device) for name, val in obj.items()} - raise ValueError(f'Unexpected type {type(obj)}') - - -class SmallMode(Enum): - DROP = "drop" - UPSCALE = "upscale" diff --git a/spaces/alamin655/websurfx/public/static/colorschemes/solarized-dark.css b/spaces/alamin655/websurfx/public/static/colorschemes/solarized-dark.css deleted file mode 100644 index 44494f9e57eb9e2a3f043ab1072474fcd922a0ea..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/public/static/colorschemes/solarized-dark.css +++ /dev/null @@ -1,11 +0,0 @@ -:root { - --background-color: #002b36; - --foreground-color: #c9e0e6; - --color-one: #073642; - --color-two: #2AA198ff; - --color-three: #2AA198ff; - --color-four: #EEE8D5ff; - --color-five: #268bd2; - --color-six: #d33682; - --color-seven: #fff; -} diff --git a/spaces/amielle/patent-summarizer/util/textproc.py b/spaces/amielle/patent-summarizer/util/textproc.py deleted file mode 100644 index 4f6b4d09a0ce0351d58eb790ca55a0000a2f6af2..0000000000000000000000000000000000000000 --- a/spaces/amielle/patent-summarizer/util/textproc.py +++ /dev/null @@ -1,86 +0,0 @@ -import re -import unicodedata -import requests -from bs4 import BeautifulSoup - -def retrieve_parsed_doc(patent_information, 
summaries_generated): - try: - language_config = "en" - if "https" in patent_information: - patent_code = patent_information.split("/")[4] - else: - patent_code = patent_information - URL = f"https://patents.google.com/patent/{patent_code}/{language_config}" - page = requests.get(URL) - - soup = BeautifulSoup(page.content, 'lxml') - - if "Abstract" in summaries_generated: - abstract = clean_text(soup.find({"div":{"class":"abstract"}}).prettify()) - else: - abstract = None - - if "Background" in summaries_generated: - background = clean_text(soup.find_all(itemprop="description", - itemscope="")[-1:][0].prettify()) - else: - background = None - - if "Claims" in summaries_generated: - claims = soup.find(itemprop="claims") - main_claim = claims.find_all({"div":{"class":"claim"}}) - main_claims = main_claim[0].select("div[class=claim]") - formatted_claims = set() - for i in main_claims: - formatted_claims.add(clean_text(i.prettify())) - try: - formatted_claims.remove('') - except: - pass - claim_list = sorted(list(formatted_claims), key=len, reverse=True) - else: - claim_list = None - - return [abstract, background, claim_list] - except Exception as e: - print(f'[ERROR] {e}') - return None - - -def get_word_index(s, limit): - try: - words = re.findall(r'\s*\S+\s*', s) - return sum(map(len, words[:limit])) + len(words[limit]) - len(words[limit].lstrip()) - except: - l = len(s) - chr_limit = 3500 - return l if l < chr_limit else chr_limit - - -def post_process(s): - # Basic post-processing - - if s[0] == " ": s = s[1:] - s = s.replace("- ", "-").replace(" .", ".") - return ".".join(s.split(".")[:-1])+"." - - -def clean_text(text): - # TODO: optimize text cleaning - reg = re.compile(r'<.*?>') - cleaned = reg.sub('', text) - cleaned = re.sub(r'\([^)]*\)', '', cleaned) - cleaned = re.sub(r"(\w)([A-Z]+)", r'.', cleaned) - cleaned = cleaned.strip() - cleaned = cleaned.lstrip() - cleaned = "".join(ch for ch in cleaned if unicodedata.category(ch)[0]!="C") - cleaned = re.sub(' +', ' ', cleaned) - cleaned = cleaned.replace(";", ", and") - cleaned = cleaned.replace(":", "") - cleaned = cleaned.replace(" .", ".") - cleaned = cleaned.replace(" ,", ",") - cleaned = cleaned.replace("\xa0", " ") - cleaned = cleaned.lstrip('0123456789.- ') # remove nums at start - cleaned = re.sub(r'\b(\w+)( \1\b)+', r'\1', cleaned) #remove repeated consecutive words - - return cleaned \ No newline at end of file diff --git a/spaces/anderbogia/dtp-asr-demo-v2/app.py b/spaces/anderbogia/dtp-asr-demo-v2/app.py deleted file mode 100644 index c92c99b5433c2699720682a957e526f1c8723e01..0000000000000000000000000000000000000000 --- a/spaces/anderbogia/dtp-asr-demo-v2/app.py +++ /dev/null @@ -1,93 +0,0 @@ -import os -#os.system("curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y") #Installing Rust manually -#os.system("exec bash") -#os.system("pip install --upgrade pip") -os.system("pip install transformers==4.30.2") #Some interoperability issue with Wav2Vec2CTCTokenizer. Refer here: https://github.com/huggingface/transformers/pull/26349 -os.system("pip install tokenizers fairseq") -os.system("pip install numpy==1.23.0") #NumPy 1.24 or less needed by Numba. 
Use 1.23, librosa still uses np.complex which was dropped in NumPy 1.24 -#os.system("pip install git+https://github.com/huggingface/transformers datasets[torch]") -os.system("pip install torch accelerate torchaudio datasets librosa easymms") - - -import gradio as gr -from transformers import pipeline, Wav2Vec2ForCTC, AutoProcessor -from datasets import load_dataset, Audio, Dataset -import torch -import librosa #For converting audio sample rate to 16k -from easymms.models.tts import TTSModel #For TTS inference using EasyMMS - -LANG = "dtp" #Change to tih for Timugon Murut or iba for Iban -model_id = "facebook/mms-1b-all" - -processor = AutoProcessor.from_pretrained(model_id) -model = Wav2Vec2ForCTC.from_pretrained(model_id).to("cpu") -processor.tokenizer.set_target_lang(LANG) -model.load_adapter(LANG) - -asr_pipeline = pipeline(task = "automatic-speech-recognition", model = model_id) #Function that returns a dict, transcription stored in item with key "text" - -def preprocess(input): #Sets recording sampling rate to 16k and returns numpy ndarray from audio - speech, sample_rate = librosa.load(input) - speech = librosa.resample(speech, orig_sr=sample_rate, target_sr=16000) - loaded_audio = Dataset.from_dict({"audio": [input]}).cast_column("audio", Audio(sampling_rate=16000)) - audio_to_array = loaded_audio[0]["audio"]["array"] - return audio_to_array - -def run(input): - inputs = processor(input, sampling_rate=16_000, return_tensors="pt") - with torch.no_grad(): - outputs = model(**inputs).logits - ids = torch.argmax(outputs, dim=-1)[0] - transcription = processor.decode(ids) - return transcription - -def transcribe(input): #Gradio UI wrapper function - audioarray = preprocess(input) #Call preprocessor function - out = run(audioarray) - return out - -with gr.Blocks(theme = gr.themes.Soft()) as demo: - gr.HTML( - """ -

    Ponutun Tuturan om Pomorolou Sinuat Boros Dusun

    Poomitanan kopogunaan do somit tutun tuturan om pomorolou sinuat (speech recognition and text-to-speech models) - pinoluda' di Woyotanud Tuturan Gumukabang Tagayo di Meta (Meta Massive Multilingual Speech Project)
    Guguno (app) diti winonsoi di Ander © 2023 id Universiti Teknologi PETRONAS
    - """) - - tts = TTSModel(LANG) - - def fn2(input): - res = tts.synthesize(input) - flip_tuple = (res[1], res[0]) #EasyMMS synthesize() returns Tuple(data, sample_rate) where data is a numpy.array and sample_rate is int, - #but Gradio Audio() expects the same tuple but with the elements flipped - return flip_tuple - - with gr.Row(): - with gr.Column(scale = 1): - gr.HTML("""

    """) - - gr.Markdown(""" - **Huminodun, nulai di somit pongulai kikito DALL-E** - - *Huminodun, generated by the image generation model DALL-E* - """) - with gr.Column(scale = 4): - with gr.Tab("Rolou kumaa ginarit"): - input_audio = gr.Audio(source = "microphone", type = "filepath", label = "Gakamai rolou nu") - output_text = gr.components.Textbox(label = "Dalinsuat") - button1 = gr.Button("Dalinsuato' | Transcribe") - button1.click(transcribe, inputs = input_audio, outputs = output_text) - - with gr.Tab("Ginarit kumaa rolou"): - input_text = gr.components.Textbox(label = "Ginarit", placeholder = "Potutakai suat nu hiti") - button2 = gr.Button("Poulayo'") - output_audio = gr.components.Audio(label = "Rolou pinoulai") - button2.click(fn2, inputs = input_text, outputs = output_audio) - -demo.launch(debug = True) \ No newline at end of file diff --git a/spaces/andromeda123/captionscraft/app.py b/spaces/andromeda123/captionscraft/app.py deleted file mode 100644 index 642d772f10985266fb8c3ffc02f3ee3548cd91a9..0000000000000000000000000000000000000000 --- a/spaces/andromeda123/captionscraft/app.py +++ /dev/null @@ -1,158 +0,0 @@ -# -*- coding: utf-8 -*- - -# !apt install imagemagick - -# !cat /etc/ImageMagick-6/policy.xml | sed 's/none/read,write/g'> /etc/ImageMagick-6/policy.xml - -# Place files in this path or modify the paths to point to where the files are -srtfilename = "subtitles.txt" -mp4filename = "video.mp4" - -import sys -import os -import subprocess -import streamlit as st -from faster_whisper import WhisperModel -import time - -def save_uploadedfile(uploadedfile): - with open(filename,"wb") as f: - f.write(uploadedfile.getbuffer()) - -def time_to_seconds(time_obj): - return time_obj.hours * 3600 + time_obj.minutes * 60 + time_obj.seconds + time_obj.milliseconds / 1000 - - -def video2mp3(video_file, output_ext="mp3"): - filename, ext = os.path.splitext(video_file) - subprocess.call(["ffmpeg", "-y", "-i", video_file, f"{filename}.{output_ext}"], - stdout=subprocess.DEVNULL, - stderr=subprocess.STDOUT) - return f"{filename}.{output_ext}" - -def translate(audio , model): - options = dict(beam_size=5, best_of=5) - translate_options = dict(task="translate", **options) - result,info = model.transcribe(audio_file,**translate_options) - return result - -def format_timestamp(time): - if(time< 0): return "timestamp cannot be negative" - time_in_ms = round(time* 1000.0) - - hours = time_in_ms // 3_600_000 - time_in_ms -= hours * 3_600_000 - - minutes = time_in_ms // 60_000 - time_in_ms -= minutes * 60_000 - - seconds = time_in_ms // 1_000 - time_in_ms -= seconds * 1_000 - - return f"{hours}:{minutes:02d}:{seconds:02d},{time_in_ms:03d}" - -def write_srt(segments,filename): - index=1 - file1 = open(filename, "w") # append mode - - for segment in segments: - file1.write( f"{index}\n" - f"{format_timestamp(segment.start)} --> " - f"{format_timestamp(segment.end)}\n" - f"{segment.text.strip().replace('-->', '->')}\n\n",) - index+=1 - - - -############# -#PAGE SET UP -############# - -st.set_page_config(page_title="CaptionsCraft", - page_icon=":pen:", - layout="wide", - initial_sidebar_state="expanded" - ) - - -######### -#SIDEBAR -######## - -st.sidebar.header('Navigate to:') -nav = st.sidebar.radio('',['Go to homepage', 'Generate subtitles']) -st.sidebar.write('') -st.sidebar.write('') -st.sidebar.write('') -st.sidebar.write('') -st.sidebar.write('') - - -#HOME -##### - -if nav == 'Go to homepage': - - st.markdown("

    CaptionCraft

    ", unsafe_allow_html=True) - st.markdown("

    🎬✎

    ", unsafe_allow_html=True) - st.markdown("

    Utilizing advanced Whisper AI, it effortlessly converts any language spoken in a video into accurate English subtitles. Bridging communication gaps seamlessly.

    ", unsafe_allow_html=True) - - st.markdown('___') - - st.markdown("

    What is this App about?

    ", unsafe_allow_html=True) - st.write("""This app harnesses the cutting-edge power of the Whisper model to provide you with an unparalleled video subtitle generation experience. - -\n\nImagine watching a video in a language you don't understand, but with our app, you won't miss a single detail. Whether it's a captivating foreign short film, an informative documentary, or a heartwarming vlog, our app steps in to bridge the linguistic gap. - -\n\nPowered by Whisper AI, our app listens to the spoken words in the video and expertly converts them into accurate and contextually relevant English subtitles. It's like having your own personal interpreter working in real-time, enabling you to enjoy content from around the world without missing out on any crucial information.""") - - st.markdown("

    How to use the app?

    ", unsafe_allow_html=True) - st.write("""1) Navigate to the 'Generate subtitles' page using navigation bar on the left , and upload the video file. - \n\n 2) Choose the whisper model size \n\n 3) Upload your file (limit is 500 mb) \n\n 4) Your subtitles.txt file will be downloaded - \n\n 5) Using the file , subtitles can be imposed on any video using any standard video player application.""") - st.write("Here is the repo link : [GitHub](https://github.com/s0ur-oranges/subtitle_generator)") - - -if nav == 'Generate subtitles': - filename="videofile" - - print("hello") - st.write("Choose a model size from the following: ") - - model_size= st.radio("Model sizes",["no model selected","tiny","base","small","medium","large-v2"] , index=0 , label_visibility='hidden') - - st.write("") - - if model_size=="no model selected": - st.write("Select a model size to continue") - - - else: - uploaded_file = st.file_uploader("Upload your file here...") - # or run on CPU with INT8 - model = WhisperModel(model_size, device="cpu", compute_type="int8") - - if uploaded_file: - save_uploadedfile(uploaded_file) - print('file saved') - st.write("Please wait while your video is getting processed.") - - input_video = filename - audio_file = video2mp3(input_video) - - result = translate(audio_file,model) - - print('audio translated') - subtitle_filename='subtitles.txt' - write_srt(result,subtitle_filename) - - print('subtitle generated') - - with open(subtitle_filename, "rb") as file: - btn = st.download_button( - - label="Download file", - data=file, - file_name="subtitles.txt" - ) - st.write("Note: If you want to try another model size , reload the page before repeating the selection process.") \ No newline at end of file diff --git a/spaces/anshu-man853/webscrapping/app.py b/spaces/anshu-man853/webscrapping/app.py deleted file mode 100644 index 4edc57d0cced41b08ca07f7e9becf9ce8c45ae2c..0000000000000000000000000000000000000000 --- a/spaces/anshu-man853/webscrapping/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import gradio as gr -import requests -from bs4 import BeautifulSoup -import re -import html - -# Define the web scraping function -def scrape_website(url): - # Send a GET request to the website - response = requests.get(url) - html_content = response.content - # Parse the HTML content using BeautifulSoup - soup = BeautifulSoup(html_content, "html.parser") - # Extract all text from the HTML - text = soup.get_text() - # Clean the text by removing extra whitespaces and special characters - cleaned_text = re.sub(r"\s+", " ", text) - cleaned_text = html.unescape(cleaned_text) - return cleaned_text - -# Create a Gradio interface -iface = gr.Interface( - fn=scrape_website, - inputs="text", - outputs="text", - title="Web Scraping", - description="Enter a website URL to scrape its text", - example="https://www.example.com" -) - -iface.launch() diff --git a/spaces/antonovmaxim/text-generation-webui-space/extensions/multimodal/multimodal_embedder.py b/spaces/antonovmaxim/text-generation-webui-space/extensions/multimodal/multimodal_embedder.py deleted file mode 100644 index 62e99ca7c950bcdae65049d0cb426a2fa53ba2b7..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/extensions/multimodal/multimodal_embedder.py +++ /dev/null @@ -1,178 +0,0 @@ -import base64 -import logging -import re -from dataclasses import dataclass -from io import BytesIO -from typing import Any, List, Optional - -import torch -from PIL import Image - -from extensions.multimodal.pipeline_loader import 
load_pipeline -from modules import shared -from modules.text_generation import encode, get_max_prompt_length - - -@dataclass -class PromptPart: - text: str - image: Optional[Image.Image] = None - is_image: bool = False - input_ids: Optional[torch.Tensor] = None - embedding: Optional[torch.Tensor] = None - - -class MultimodalEmbedder: - def __init__(self, params: dict): - pipeline, source = load_pipeline(params) - self.pipeline = pipeline - logging.info(f'Multimodal: loaded pipeline {self.pipeline.name()} from pipelines/{source} ({self.pipeline.__class__.__name__})') - - def _split_prompt(self, prompt: str, load_images: bool = False) -> List[PromptPart]: - """Splits a prompt into a list of `PromptParts` to separate image data from text. - It will also append `image_start` and `image_end` before and after the image, and optionally parse and load the images, - if `load_images` is `True`. - """ - parts: List[PromptPart] = [] - curr = 0 - while True: - match = re.search(r'', prompt[curr:]) - if match is None: - # no more image tokens, append the rest of the prompt - if curr > 0: - # add image end token after last image - parts.append(PromptPart(text=self.pipeline.image_end() + prompt[curr:])) - else: - parts.append(PromptPart(text=prompt)) - break - # found an image, append image start token to the text - if match.start() > 0: - parts.append(PromptPart(text=prompt[curr:curr + match.start()] + self.pipeline.image_start())) - else: - parts.append(PromptPart(text=self.pipeline.image_start())) - # append the image - parts.append(PromptPart( - text=match.group(0), - image=Image.open(BytesIO(base64.b64decode(match.group(1)))) if load_images else None, - is_image=True - )) - curr += match.end() - return parts - - def _len_in_tokens_prompt_parts(self, parts: List[PromptPart]) -> int: - """Total length in tokens of all `parts`""" - tokens = 0 - for part in parts: - if part.is_image: - tokens += self.pipeline.num_image_embeds() - elif part.input_ids is not None: - tokens += len(part.input_ids) - else: - tokens += len(encode(part.text)[0]) - return tokens - - def len_in_tokens(self, prompt: str) -> int: - """Total length in tokens for a given text `prompt`""" - parts = self._split_prompt(prompt, False) - return self._len_in_tokens_prompt_parts(parts) - - def _encode_single_text(self, part: PromptPart, add_bos_token: bool) -> PromptPart: - """Encode a single prompt `part` to `input_ids`. Returns a `PromptPart`""" - if part.is_image: - placeholders = torch.ones((self.pipeline.num_image_embeds())) * self.pipeline.placeholder_token_id() - part.input_ids = placeholders.to(shared.model.device, dtype=torch.int64) - else: - part.input_ids = encode(part.text, add_bos_token=add_bos_token)[0].to(shared.model.device, dtype=torch.int64) - return part - - @staticmethod - def _num_images(parts: List[PromptPart]) -> int: - count = 0 - for part in parts: - if part.is_image: - count += 1 - return count - - def _encode_text(self, state, parts: List[PromptPart]) -> List[PromptPart]: - """Encode text to token_ids, also truncate the prompt, if necessary. - - The chat/instruct mode should make prompts that fit in get_max_prompt_length, but if max_new_tokens are set - such that the context + min_rows don't fit, we can get a prompt which is too long. 
- We can't truncate image embeddings, as it leads to broken generation, so remove the images instead and warn the user - """ - encoded: List[PromptPart] = [] - for i, part in enumerate(parts): - encoded.append(self._encode_single_text(part, i == 0 and state['add_bos_token'])) - - # truncation: - max_len = get_max_prompt_length(state) - removed_images = 0 - - # 1. remove entire text/image blocks - while self._len_in_tokens_prompt_parts(encoded[1:]) > max_len: - if encoded[0].is_image: - removed_images += 1 - encoded = encoded[1:] - - # 2. check if the last prompt part doesn't need to get truncated - if self._len_in_tokens_prompt_parts(encoded) > max_len: - if encoded[0].is_image: - # don't truncate image embeddings, just remove the image, otherwise generation will be broken - removed_images += 1 - encoded = encoded[1:] - elif len(encoded) > 1 and encoded[0].text.endswith(self.pipeline.image_start()): - # see if we can keep image_start token - len_image_start = len(encode(self.pipeline.image_start(), add_bos_token=state['add_bos_token'])[0]) - if self._len_in_tokens_prompt_parts(encoded[1:]) + len_image_start > max_len: - # we can't -> remove this text, and the image - encoded = encoded[2:] - removed_images += 1 - else: - # we can -> just truncate the text - trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len - encoded[0].input_ids = encoded[0].input_ids[trunc_len:] - elif len(encoded) > 0: - # only one text left, truncate it normally - trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len - encoded[0].input_ids = encoded[0].input_ids[trunc_len:] - - # notify user if we truncated an image - if removed_images > 0: - logging.warning(f"Multimodal: removed {removed_images} image(s) from prompt. Try decreasing max_new_tokens if generation is broken") - - return encoded - - def _embed(self, parts: List[PromptPart]) -> List[PromptPart]: - # batch images - image_indicies = [i for i, part in enumerate(parts) if part.is_image] - embedded = self.pipeline.embed_images([parts[i].image for i in image_indicies]) - for i, embeds in zip(image_indicies, embedded): - parts[i].embedding = embeds - # embed text - for (i, part) in enumerate(parts): - if not part.is_image: - parts[i].embedding = self.pipeline.embed_tokens(part.input_ids) - return parts - - def _remove_old_images(self, parts: List[PromptPart], params: dict) -> List[PromptPart]: - if params['add_all_images_to_prompt']: - return parts - already_added = False - for i, part in reversed(list(enumerate(parts))): - if part.is_image: - if already_added: - parts[i].embedding = self.pipeline.placeholder_embeddings() - else: - already_added = True - return parts - - def forward(self, prompt: str, state: Any, params: dict): - prompt_parts = self._split_prompt(prompt, True) - prompt_parts = self._encode_text(state, prompt_parts) - prompt_parts = self._embed(prompt_parts) - prompt_parts = self._remove_old_images(prompt_parts, params) - embeds = tuple(part.embedding for part in prompt_parts) - ids = tuple(part.input_ids for part in prompt_parts) - input_embeds = torch.cat(embeds, dim=0) - input_ids = torch.cat(ids, dim=0) - return prompt, input_ids, input_embeds, self._num_images(prompt_parts) diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/adabins/unet_adaptive_bins.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/adabins/unet_adaptive_bins.py deleted file mode 100644 index 
733927795146fe13563d07d20fbb461da596a181..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/adabins/unet_adaptive_bins.py +++ /dev/null @@ -1,154 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import os -from pathlib import Path - -from .miniViT import mViT - - -class UpSampleBN(nn.Module): - def __init__(self, skip_input, output_features): - super(UpSampleBN, self).__init__() - - self._net = nn.Sequential(nn.Conv2d(skip_input, output_features, kernel_size=3, stride=1, padding=1), - nn.BatchNorm2d(output_features), - nn.LeakyReLU(), - nn.Conv2d(output_features, output_features, kernel_size=3, stride=1, padding=1), - nn.BatchNorm2d(output_features), - nn.LeakyReLU()) - - def forward(self, x, concat_with): - up_x = F.interpolate(x, size=[concat_with.size(2), concat_with.size(3)], mode='bilinear', align_corners=True) - f = torch.cat([up_x, concat_with], dim=1) - return self._net(f) - - -class DecoderBN(nn.Module): - def __init__(self, num_features=2048, num_classes=1, bottleneck_features=2048): - super(DecoderBN, self).__init__() - features = int(num_features) - - self.conv2 = nn.Conv2d(bottleneck_features, features, kernel_size=1, stride=1, padding=1) - - self.up1 = UpSampleBN(skip_input=features // 1 + 112 + 64, output_features=features // 2) - self.up2 = UpSampleBN(skip_input=features // 2 + 40 + 24, output_features=features // 4) - self.up3 = UpSampleBN(skip_input=features // 4 + 24 + 16, output_features=features // 8) - self.up4 = UpSampleBN(skip_input=features // 8 + 16 + 8, output_features=features // 16) - - # self.up5 = UpSample(skip_input=features // 16 + 3, output_features=features//16) - self.conv3 = nn.Conv2d(features // 16, num_classes, kernel_size=3, stride=1, padding=1) - # self.act_out = nn.Softmax(dim=1) if output_activation == 'softmax' else nn.Identity() - - def forward(self, features): - x_block0, x_block1, x_block2, x_block3, x_block4 = features[4], features[5], features[6], features[8], features[ - 11] - - x_d0 = self.conv2(x_block4) - - x_d1 = self.up1(x_d0, x_block3) - x_d2 = self.up2(x_d1, x_block2) - x_d3 = self.up3(x_d2, x_block1) - x_d4 = self.up4(x_d3, x_block0) - # x_d5 = self.up5(x_d4, features[0]) - out = self.conv3(x_d4) - # out = self.act_out(out) - # if with_features: - # return out, features[-1] - # elif with_intermediate: - # return out, [x_block0, x_block1, x_block2, x_block3, x_block4, x_d1, x_d2, x_d3, x_d4] - return out - - -class Encoder(nn.Module): - def __init__(self, backend): - super(Encoder, self).__init__() - self.original_model = backend - - def forward(self, x): - features = [x] - for k, v in self.original_model._modules.items(): - if (k == 'blocks'): - for ki, vi in v._modules.items(): - features.append(vi(features[-1])) - else: - features.append(v(features[-1])) - return features - - -class UnetAdaptiveBins(nn.Module): - def __init__(self, backend, n_bins=100, min_val=0.1, max_val=10, norm='linear'): - super(UnetAdaptiveBins, self).__init__() - self.num_classes = n_bins - self.min_val = min_val - self.max_val = max_val - self.encoder = Encoder(backend) - self.adaptive_bins_layer = mViT(128, n_query_channels=128, patch_size=16, - dim_out=n_bins, - embedding_dim=128, norm=norm) - - self.decoder = DecoderBN(num_classes=128) - self.conv_out = nn.Sequential(nn.Conv2d(128, n_bins, kernel_size=1, stride=1, padding=0), - nn.Softmax(dim=1)) - - def forward(self, x, **kwargs): - unet_out = self.decoder(self.encoder(x), **kwargs) - 
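        # The adaptive-bins (mViT) head yields normalized bin widths per image plus
        # per-pixel range-attention maps; conv_out turns those maps into a softmax over
        # the bins, and the depth map below is the softmax-weighted sum of bin centers.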
bin_widths_normed, range_attention_maps = self.adaptive_bins_layer(unet_out) - out = self.conv_out(range_attention_maps) - - # Post process - # n, c, h, w = out.shape - # hist = torch.sum(out.view(n, c, h * w), dim=2) / (h * w) # not used for training - - bin_widths = (self.max_val - self.min_val) * bin_widths_normed # .shape = N, dim_out - bin_widths = nn.functional.pad(bin_widths, (1, 0), mode='constant', value=self.min_val) - bin_edges = torch.cumsum(bin_widths, dim=1) - - centers = 0.5 * (bin_edges[:, :-1] + bin_edges[:, 1:]) - n, dout = centers.size() - centers = centers.view(n, dout, 1, 1) - - pred = torch.sum(out * centers, dim=1, keepdim=True) - - return bin_edges, pred - - def get_1x_lr_params(self): # lr/10 learning rate - return self.encoder.parameters() - - def get_10x_lr_params(self): # lr learning rate - modules = [self.decoder, self.adaptive_bins_layer, self.conv_out] - for m in modules: - yield from m.parameters() - - @classmethod - def build(cls, n_bins, **kwargs): - basemodel_name = 'tf_efficientnet_b5_ap' - - print('Loading base model ()...'.format(basemodel_name), end='') - predicted_torch_model_cache_path = str(Path.home()) + '\\.cache\\torch\\hub\\rwightman_gen-efficientnet-pytorch_master' - predicted_gep_cache_testilfe = Path(predicted_torch_model_cache_path + '\\hubconf.py') - #print(f"predicted_gep_cache_testilfe: {predicted_gep_cache_testilfe}") - # try to fetch the models from cache, and only if it can't be find, download from the internet (to enable offline usage) - if os.path.isfile(predicted_gep_cache_testilfe): - basemodel = torch.hub.load(predicted_torch_model_cache_path, basemodel_name, pretrained=True, source = 'local') - else: - basemodel = torch.hub.load('rwightman/gen-efficientnet-pytorch', basemodel_name, pretrained=True) - print('Done.') - - # Remove last layer - print('Removing last two layers (global_pool & classifier).') - basemodel.global_pool = nn.Identity() - basemodel.classifier = nn.Identity() - - # Building Encoder-Decoder model - print('Building Encoder-Decoder model..', end='') - m = cls(basemodel, n_bins=n_bins, **kwargs) - print('Done.') - return m - - -if __name__ == '__main__': - model = UnetAdaptiveBins.build(100) - x = torch.rand(2, 3, 480, 640) - bins, pred = model(x) - print(bins.shape, pred.shape) diff --git a/spaces/arslan-ahmed/talk-to-your-docs/whatsapp_chat_custom.py b/spaces/arslan-ahmed/talk-to-your-docs/whatsapp_chat_custom.py deleted file mode 100644 index 39d5762f8e57399f75fffa609c6b7c07bfaeb669..0000000000000000000000000000000000000000 --- a/spaces/arslan-ahmed/talk-to-your-docs/whatsapp_chat_custom.py +++ /dev/null @@ -1,49 +0,0 @@ -# created custom class for WhatsAppChatLoader - because original langchain one isnt working - -import re -from pathlib import Path -from typing import List - -from langchain.docstore.document import Document -from langchain.document_loaders.base import BaseLoader - - -def concatenate_rows(date: str, sender: str, text: str) -> str: - """Combine message information in a readable format ready to be used.""" - return f"{sender} on {date}: {text}\n\n" - -# def concatenate_rows(date: str, sender: str, text: str) -> str: -# """Combine message information in a readable format ready to be used.""" -# return f"{text}\n" - -class WhatsAppChatLoader(BaseLoader): - """Load `WhatsApp` messages text file.""" - - def __init__(self, path: str): - """Initialize with path.""" - self.file_path = path - - def load(self) -> List[Document]: - """Load documents.""" - p = Path(self.file_path) - text_content = "" - - 
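        # Exported chat lines containing any of these placeholder markers are skipped
        # when the transcript is concatenated into a single Document below.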
ignore_lines = ["This message was deleted", ""] - ######################################################################################### - # original code from langchain replaced with this code - ######################################################################################### - # use https://whatstk.streamlit.app/ to get CSV - import pandas as pd - df = pd.read_csv(p)[['date', 'username', 'message']] - - for i,row in df.iterrows(): - date = row['date'] - sender = row['username'] - text = row['message'] - - if not any(x in text for x in ignore_lines): - text_content += concatenate_rows(date, sender, text) - - metadata = {"source": str(p)} - - return [Document(page_content=text_content.strip(), metadata=metadata)] \ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/tts_datasets.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/tts_datasets.md deleted file mode 100644 index 11da1b7688d07dadfdb3dfab33deb4bcdf3f861a..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/tts_datasets.md +++ /dev/null @@ -1,17 +0,0 @@ -# TTS Datasets - -Some of the known public datasets that we successfully applied 🐸TTS: - -- [English - LJ Speech](https://keithito.com/LJ-Speech-Dataset/) -- [English - Nancy](http://www.cstr.ed.ac.uk/projects/blizzard/2011/lessac_blizzard2011/) -- [English - TWEB](https://www.kaggle.com/bryanpark/the-world-english-bible-speech-dataset) -- [English - LibriTTS](https://openslr.org/60/) -- [English - VCTK](https://datashare.ed.ac.uk/handle/10283/2950) -- [Multilingual - M-AI-Labs](http://www.caito.de/2019/01/the-m-ailabs-speech-dataset/) -- [Spanish](https://drive.google.com/file/d/1Sm_zyBo67XHkiFhcRSQ4YaHPYM0slO_e/view?usp=sharing) - thx! @carlfm01 -- [German - Thorsten OGVD](https://github.com/thorstenMueller/deep-learning-german-tts) -- [Japanese - Kokoro](https://www.kaggle.com/kaiida/kokoro-speech-dataset-v11-small/version/1) -- [Chinese](https://www.data-baker.com/data/index/source/) -- [Ukrainian - LADA](https://github.com/egorsmkv/ukrainian-tts-datasets/tree/main/lada) - -Let us know if you use 🐸TTS on a different dataset. diff --git a/spaces/asimokby/cv-parser-huggingface/ResumeReader.py b/spaces/asimokby/cv-parser-huggingface/ResumeReader.py deleted file mode 100644 index e122a299de5d0a30b6c5b44c166514e19ac089fa..0000000000000000000000000000000000000000 --- a/spaces/asimokby/cv-parser-huggingface/ResumeReader.py +++ /dev/null @@ -1,99 +0,0 @@ -import re -import os -import logging -import pdfplumber - -class ResumeReader: - - def convert_docx_to_txt(self, docx_file,docx_parser): - """ - A utility function to convert a Microsoft docx files to raw text. - - This code is largely borrowed from existing solutions, and does not match the style of the rest of this repo. 
- :param docx_file: docx file with gets uploaded by the user - :type docx_file: InMemoryUploadedFile - :return: The text contents of the docx file - :rtype: str - """ - - # doc = docx.Document(docx_file) - # allText = [] - # for docpara in doc.paragraphs: - # allText.append(docpara.text) - # text = ' '.join(allText) - text = "" - try: - clean_text = re.sub(r'\n+', '\n', text) - clean_text = clean_text.replace("\r", "\n").replace("\t", " ") # Normalize text blob - resume_lines = clean_text.splitlines() # Split text blob into individual lines - resume_lines = [re.sub('\s+', ' ', line.strip()) for line in resume_lines if - line.strip()] # Remove empty strings and whitespaces - return resume_lines, text - except Exception as e: - logging.error('Error in docx file:: ' + str(e)) - return [], " " - - def convert_pdf_to_txt(self, pdf_file): - """ - A utility function to convert a machine-readable PDF to raw text. - - This code is largely borrowed from existing solutions, and does not match the style of the rest of this repo. - :param input_pdf_path: Path to the .pdf file which should be converted - :type input_pdf_path: str - :return: The text contents of the pdf - :rtype: str - """ - - pdf = pdfplumber.open(pdf_file) - raw_text= "" - - for page in pdf.pages: - raw_text += page.extract_text() + "\n" - - pdf.close() - - try: - full_string = re.sub(r'\n+', '\n', raw_text) - full_string = full_string.replace("\r", "\n") - full_string = full_string.replace("\t", " ") - - # Remove awkward LaTeX bullet characters - full_string = re.sub(r"\uf0b7", " ", full_string) - full_string = re.sub(r"\(cid:\d{0,3}\)", " ", full_string) - full_string = re.sub(r'• ', " ", full_string) - - # Split text blob into individual lines - resume_lines = full_string.splitlines(True) - - # Remove empty strings and whitespaces - resume_lines = [re.sub('\s+', ' ', line.strip()) for line in resume_lines if line.strip()] - - return resume_lines, raw_text - except Exception as e: - logging.error('Error in docx file:: ' + str(e)) - return [], " " - - def read_file(self, file,docx_parser = "tika"): - """ - file : Give path of resume file - docx_parser : Enter docx2txt or tika, by default is tika - """ - print("Reading the Resume...") - # file = "/content/Asst Manager Trust Administration.docx" - file = os.path.join(file) - if file.endswith('docx') or file.endswith('doc'): - # if file.endswith('doc') and docx_parser == "docx2txt": - # docx_parser = "tika" - # logging.error("doc format not supported by the docx2txt changing back to tika") - resume_lines, raw_text = self.convert_docx_to_txt(file,docx_parser) - elif file.endswith('pdf'): - resume_lines, raw_text = self.convert_pdf_to_txt(file) - elif file.endswith('txt'): - with open(file, 'r', encoding='utf-8') as f: - resume_lines = f.readlines() - - else: - resume_lines = None - - - return resume_lines \ No newline at end of file diff --git a/spaces/awacke1/AutoMLUsingStreamlit-Plotly/Prompts.md b/spaces/awacke1/AutoMLUsingStreamlit-Plotly/Prompts.md deleted file mode 100644 index ec0817592c1a131230048d4f6e9994f6d2c9863d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AutoMLUsingStreamlit-Plotly/Prompts.md +++ /dev/null @@ -1,115 +0,0 @@ - - -write an automl streamlit python tensorflow keras application with mechanics that takes a CSV dataset, performs pandas profiling and generates the code to do classification -Here is an example of an Automated Machine Learning (AutoML) Streamlit application that takes a CSV dataset, performs pandas profiling, and generates the code 
to do classification using TensorFlow and Keras: - -python -Copy code -import streamlit as st -import pandas as pd -import pandas_profiling -from tensorflow.keras.models import Sequential -from tensorflow.keras.layers import Dense -from sklearn.model_selection import train_test_split - -st.set_page_config(page_title="AutoML Streamlit App", page_icon=":robot:", layout="wide") - -st.title("AutoML Streamlit App") - -# Upload a CSV dataset -uploaded_file = st.file_uploader("Upload your dataset", type=["csv"]) -if uploaded_file is not None: - # Load the dataset and display the first 5 rows - df = pd.read_csv(uploaded_file) - st.dataframe(df.head()) - - # Perform pandas profiling and display the report - profile = pandas_profiling.ProfileReport(df) - st.write("Pandas Profiling Report:", profile) - - # Split the data into features and target variables - X = df.drop(df.columns[-1], axis=1) - y = df[df.columns[-1]] - - # Split the data into training and testing sets - X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) - - # Define a sequential model with TensorFlow and Keras - model = Sequential() - model.add(Dense(16, input_dim=X_train.shape[1], activation="relu")) - model.add(Dense(1, activation="sigmoid")) - - # Compile the model - model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) - - # Train the model - model.fit(X_train, y_train, epochs=10, batch_size=32) - - # Evaluate the model on the test set - test_loss, test_accuracy = model.evaluate(X_test, y_test) - st.write("Test Accuracy:", test_accuracy) - - # Generate the code to do classification - st.write("Generated Code:") - st.code(""" -import pandas as pd -import tensorflow as tf -from tensorflow.keras.models import Sequential -from tensorflow.keras.layers import Dense - -# Load the dataset -df = pd.read_csv("dataset.csv") - -# Split the data into features and target variables -X = df.drop(df.columns[-1], axis=1) -y = df[df.columns[-1]] - -# Define a sequential model with TensorFlow and Keras -model = Sequential() -model.add(Dense(16, input_dim=X.shape[1], activation="relu")) -model.add(Dense(1, activation="sigmoid")) - -# Compile the model -model.compile(loss="binary_crossentropy", optimizer=" - - - -Aaron Wacker -continue writing the program above but simplify it and automatically generate a plotly visualization using treemap and sunburst plot from the dataset dynamically by interpreting the data types inside the CSV file -Here is a simplified version of the AutoML Streamlit application that automatically generates a Plotly visualization using treemap and sunburst plots from the CSV dataset dynamically by interpreting the data types inside the file: - -python -Copy code -import streamlit as st -import pandas as pd -import plotly.express as px - -st.set_page_config(page_title="AutoML Streamlit App", page_icon=":robot:", layout="wide") - -st.title("AutoML Streamlit App") - -# Upload a CSV dataset -uploaded_file = st.file_uploader("Upload your dataset", type=["csv"]) -if uploaded_file is not None: - # Load the dataset and display the first 5 rows - df = pd.read_csv(uploaded_file) - st.dataframe(df.head()) - - # Generate a treemap or sunburst plot based on data types - numerical_cols = df.select_dtypes(include=["float", "int"]).columns - categorical_cols = df.select_dtypes(include=["object"]).columns - - if len(numerical_cols) >= 2: - fig = px.scatter_matrix(df, dimensions=numerical_cols) - st.plotly_chart(fig) - elif len(categorical_cols) >= 2: - fig = px.treemap(df, path=categorical_cols) - 
st.plotly_chart(fig) - else: - fig = px.sunburst(df, path=categorical_cols + numerical_cols) - st.plotly_chart(fig) -In this simplified version, we use Plotly Express to generate a scatter matrix plot for numerical data, a treemap plot for categorical data, and a sunburst plot for both numerical and categorical data. The type of plot generated is based on the number of numerical and categorical columns present in the dataset. - - - -Regenerate response diff --git a/spaces/awacke1/AutoStableDiffusionTxt2ImgImg2Img/README.md b/spaces/awacke1/AutoStableDiffusionTxt2ImgImg2Img/README.md deleted file mode 100644 index d09d8ce162e139ce06f130f29b73cd0221407ed6..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AutoStableDiffusionTxt2ImgImg2Img/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Stable Diffusion Web UI Docker -emoji: 🐳 -colorFrom: blue -colorTo: blue -sdk: docker -sdk_version: 3.9 -app_file: oh-no.py -pinned: false -duplicated_from: camenduru/webui-docker ---- - -## Stable Diffusion Web UI -https://github.com/AUTOMATIC1111/stable-diffusion-webui - -## Documentation -https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/awacke1/Docker.VSCode.Integration.HF/README.md b/spaces/awacke1/Docker.VSCode.Integration.HF/README.md deleted file mode 100644 index 081c2aca5247737651bb3972e5352ab8cc8aea90..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Docker.VSCode.Integration.HF/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Visual Studio Code -emoji: 💻🐳 -colorFrom: red -colorTo: blue -sdk: docker -pinned: false -tags: -- vscode -duplicated_from: DockerTemplates/vscode ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/awacke1/HTML5-Tower-Building-3D-Game/v1-index.html b/spaces/awacke1/HTML5-Tower-Building-3D-Game/v1-index.html deleted file mode 100644 index bbf3eac2ba2fd26e3aa6c1e182b287a3e242e878..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5-Tower-Building-3D-Game/v1-index.html +++ /dev/null @@ -1,101 +0,0 @@ - - - - Tower Building Game - - - - - - - - diff --git a/spaces/awacke1/VizLib-Keras-n-Plotly/README.md b/spaces/awacke1/VizLib-Keras-n-Plotly/README.md deleted file mode 100644 index 6d0928d18a928cf4ec2a6a4617eb388de43a9fcb..0000000000000000000000000000000000000000 --- a/spaces/awacke1/VizLib-Keras-n-Plotly/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: VizLib Keras N Plotly -emoji: 💻 -colorFrom: red -colorTo: indigo -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/WebAssemblyStreamlitLite-stlite/index.html b/spaces/awacke1/WebAssemblyStreamlitLite-stlite/index.html deleted file mode 100644 index 44bd1cc51118aa69255d585846dc448f5f61e97a..0000000000000000000000000000000000000000 --- a/spaces/awacke1/WebAssemblyStreamlitLite-stlite/index.html +++ /dev/null @@ -1,33 +0,0 @@ - - - - - - WebAssemblyStreamlitLite-stlite in HTML5 - - - - - -
    -

STLITE in an HTML5 page: a web app that runs Streamlit in the browser

    -

    - script source -

    -

    - url - Stlite Streamlit video tutorial. -

    -
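A minimal page of this kind typically just loads the stlite bundle from a CDN and calls stlite.mount with an inline Streamlit script. The sketch below is illustrative only; the CDN path, the bundle version, and the Python snippet are assumptions, not the contents of the original file.

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8" />
    <title>stlite app</title>
    <!-- stlite stylesheet; the exact @stlite/mountable version used by the original page is unknown -->
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@stlite/mountable/build/stlite.css" />
  </head>
  <body>
    <div id="root"></div>
    <!-- stlite runtime bundle -->
    <script src="https://cdn.jsdelivr.net/npm/@stlite/mountable/build/stlite.js"></script>
    <script>
      // Mount a small Streamlit script onto #root; it runs entirely in the browser via Pyodide.
      stlite.mount(
        `
import streamlit as st
st.write("Hello from stlite")
`,
        document.getElementById("root")
      );
    </script>
  </body>
</html>
```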
    - - diff --git a/spaces/banana-projects/web3d/src/lib/Utils.ts b/spaces/banana-projects/web3d/src/lib/Utils.ts deleted file mode 100644 index bf473dc80dbc18eb442e4a098fe42b29ddd61df1..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/src/lib/Utils.ts +++ /dev/null @@ -1,51 +0,0 @@ - -class Utils { - /** - * "Real" modulo (always >= 0), not remainder. - */ - static mod(a: number, n: number): number { - return ((a % n) + n) % n; - } - - /** - * Return a random integer between min and max (upper bound is exclusive). - */ - static randomInt(maxOrMin: number, max?: number): number { - return (max) - ? maxOrMin + Math.floor(Math.random() * (max - maxOrMin)) - : Math.floor(Math.random() * maxOrMin); - } - static randomFloat(maxOrMin: number, max?: number): number { - return (max) - ? maxOrMin + (Math.random() * (max - maxOrMin)) - : Math.random() * maxOrMin; - } - - /** - * Clamp a val to [min, max] - */ - static clamp(val: number, min: number, max: number): number { - return Math.min(Math.max(min, val), max); - } - - /** - * Returns a promise that will resolve after the specified time - * @param ms Number of ms to wait - */ - static delay(ms: number) { - return new Promise((resolve, reject) => { - setTimeout(() => resolve(), ms); - }); - } - - /** - * Compatibility with iOS' SCNAction.wait() - */ - static wait(duration: number, range: number = 0) { - return this.delay( - duration * 1_000 - - range * 1_000 / 2 - + this.randomInt(range * 1_000) - ); - } -} diff --git a/spaces/bigjoker/stable-diffusion-webui/test/basic_features/img2img_test.py b/spaces/bigjoker/stable-diffusion-webui/test/basic_features/img2img_test.py deleted file mode 100644 index 08c5c903e8382ef4b969b01da87bc69fb06ff2b4..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/test/basic_features/img2img_test.py +++ /dev/null @@ -1,66 +0,0 @@ -import unittest -import requests -from gradio.processing_utils import encode_pil_to_base64 -from PIL import Image - - -class TestImg2ImgWorking(unittest.TestCase): - def setUp(self): - self.url_img2img = "http://localhost:7860/sdapi/v1/img2img" - self.simple_img2img = { - "init_images": [encode_pil_to_base64(Image.open(r"test/test_files/img2img_basic.png"))], - "resize_mode": 0, - "denoising_strength": 0.75, - "mask": None, - "mask_blur": 4, - "inpainting_fill": 0, - "inpaint_full_res": False, - "inpaint_full_res_padding": 0, - "inpainting_mask_invert": False, - "prompt": "example prompt", - "styles": [], - "seed": -1, - "subseed": -1, - "subseed_strength": 0, - "seed_resize_from_h": -1, - "seed_resize_from_w": -1, - "batch_size": 1, - "n_iter": 1, - "steps": 3, - "cfg_scale": 7, - "width": 64, - "height": 64, - "restore_faces": False, - "tiling": False, - "negative_prompt": "", - "eta": 0, - "s_churn": 0, - "s_tmax": 0, - "s_tmin": 0, - "s_noise": 1, - "override_settings": {}, - "sampler_index": "Euler a", - "include_init_images": False - } - - def test_img2img_simple_performed(self): - self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200) - - def test_inpainting_masked_performed(self): - self.simple_img2img["mask"] = encode_pil_to_base64(Image.open(r"test/test_files/mask_basic.png")) - self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200) - - def test_inpainting_with_inverted_masked_performed(self): - self.simple_img2img["mask"] = encode_pil_to_base64(Image.open(r"test/test_files/mask_basic.png")) - self.simple_img2img["inpainting_mask_invert"] = True - 
self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200) - - def test_img2img_sd_upscale_performed(self): - self.simple_img2img["script_name"] = "sd upscale" - self.simple_img2img["script_args"] = ["", 8, "Lanczos", 2.0] - - self.assertEqual(requests.post(self.url_img2img, json=self.simple_img2img).status_code, 200) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/bioriAsaeru/text-to-voice/DarkEbootFixerV55rar.md b/spaces/bioriAsaeru/text-to-voice/DarkEbootFixerV55rar.md deleted file mode 100644 index 4ff26cb381239ad6d94077fb6b418b82a296b0d1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/DarkEbootFixerV55rar.md +++ /dev/null @@ -1,7 +0,0 @@ - -

http://codex.downloaddarkbootfixer.com/ DarkEbootFixerV55rar... Imgur: DarkEbootFixerV55rar 0x4443543567417c7c DarkEbootFixerV55rar s7z_yehmbi: DarkEbootFixerV55rar http://r3dcr33t.com/repo/view/ DarkEbootFixerV55rar -

    -

    DarkEbootFixerV55rar Gwen!'s Board. - Explore Gwen!'s board. DarkEbootFixerV55rar you've just visited the back page. It's dark so you can't read the URL, but for sure you can look over your shoulder. The DarkEbootFixerV55rar team is proud to announce the new version of DMP Boot Fix Download. DarkEbootFixerV55rar, in case you don't know, is a tool that repairs issues related to DMP media.
DarkEbootFixerV55rar keeps a log for each version of DMP media, so that it is possible to compare the two versions. That way, if there is a difference, you can safely download and install the new version of DMP Boot Fix. The new version, DMP Boot Fix Download, fixed a few bugs related to DMP media. The version is 0.9.2 and is fully compatible with most DMP media formats. The developers have also added a compatibility checker, so you can easily determine whether your media is compatible with the latest version of the software. Download DarkEbootFixerV55rar; it's free and safe and includes a one-year warranty. The DarkEbootFixerV55rar team has also released a free version of DMP Remote Link, version 0.92. https://coub.com/stories/7649178-darkebootfixerv55rar-free-release-compatibility-check. http://codex.downloaddarkbootfixer.

    -

    DarkEbootFixerV55rar


Download File: https://urloso.com/2uyPS8



    -

    charn6 0d958a46dc https://coub.com/stories/5260461-darkebootfixerv55rar-switch. yoshitano 24 1. 1.0 You have to use it 2.) The same as you may right now. 2. I accidentally get the iso files of the v55 RAM replacement which is.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/blmdsydm/faster-whisper-webui/app-local.py b/spaces/blmdsydm/faster-whisper-webui/app-local.py deleted file mode 100644 index c7717d096ca5f95177f0dba03cd62ca729bae9f3..0000000000000000000000000000000000000000 --- a/spaces/blmdsydm/faster-whisper-webui/app-local.py +++ /dev/null @@ -1,5 +0,0 @@ -# Run the app with no audio file restrictions -from app import create_ui -from src.config import ApplicationConfig - -create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1)) \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/memory.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/memory.py deleted file mode 100644 index bd494780b9dbbd1571688cd270bb9b53d113c13e..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/memory.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -from contextlib import contextmanager -from functools import wraps -import torch - -__all__ = ["retry_if_cuda_oom"] - - -@contextmanager -def _ignore_torch_cuda_oom(): - """ - A context which ignores CUDA OOM exception from pytorch. - """ - try: - yield - except RuntimeError as e: - # NOTE: the string may change? - if "CUDA out of memory. " in str(e): - pass - else: - raise - - -def retry_if_cuda_oom(func): - """ - Makes a function retry itself after encountering - pytorch's CUDA OOM error. - It will first retry after calling `torch.cuda.empty_cache()`. - - If that still fails, it will then retry by trying to convert inputs to CPUs. - In this case, it expects the function to dispatch to CPU implementation. - The return values may become CPU tensors as well and it's user's - responsibility to convert it back to CUDA tensor if needed. - - Args: - func: a stateless callable that takes tensor-like objects as arguments - - Returns: - a callable which retries `func` if OOM is encountered. - - Examples: - :: - output = retry_if_cuda_oom(some_torch_function)(input1, input2) - # output may be on CPU even if inputs are on GPU - - Note: - 1. When converting inputs to CPU, it will only look at each argument and check - if it has `.device` and `.to` for conversion. Nested structures of tensors - are not supported. - - 2. Since the function might be called more than once, it has to be - stateless. - """ - - def maybe_to_cpu(x): - try: - like_gpu_tensor = x.device.type == "cuda" and hasattr(x, "to") - except AttributeError: - like_gpu_tensor = False - if like_gpu_tensor: - return x.to(device="cpu") - else: - return x - - @wraps(func) - def wrapped(*args, **kwargs): - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # Clear cache and retry - torch.cuda.empty_cache() - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # Try on CPU. This slows down the code significantly, therefore print a notice. 
- logger = logging.getLogger(__name__) - logger.info("Attempting to copy inputs of {} to CPU due to CUDA OOM".format(str(func))) - new_args = (maybe_to_cpu(x) for x in args) - new_kwargs = {k: maybe_to_cpu(v) for k, v in kwargs.items()} - return func(*new_args, **new_kwargs) - - return wrapped diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DeepLab/deeplab/resnet.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DeepLab/deeplab/resnet.py deleted file mode 100644 index 2cc277b24630a9425f4c37e1abc3352b49e1a031..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DeepLab/deeplab/resnet.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import fvcore.nn.weight_init as weight_init -import torch.nn.functional as F - -from detectron2.layers import CNNBlockBase, Conv2d, get_norm -from detectron2.modeling import BACKBONE_REGISTRY -from detectron2.modeling.backbone.resnet import ( - BasicStem, - BottleneckBlock, - DeformBottleneckBlock, - ResNet, -) - - -class DeepLabStem(CNNBlockBase): - """ - The DeepLab ResNet stem (layers before the first residual block). - """ - - def __init__(self, in_channels=3, out_channels=128, norm="BN"): - """ - Args: - norm (str or callable): norm after the first conv layer. - See :func:`layers.get_norm` for supported format. - """ - super().__init__(in_channels, out_channels, 4) - self.in_channels = in_channels - self.conv1 = Conv2d( - in_channels, - out_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False, - norm=get_norm(norm, out_channels // 2), - ) - self.conv2 = Conv2d( - out_channels // 2, - out_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False, - norm=get_norm(norm, out_channels // 2), - ) - self.conv3 = Conv2d( - out_channels // 2, - out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - weight_init.c2_msra_fill(self.conv1) - weight_init.c2_msra_fill(self.conv2) - weight_init.c2_msra_fill(self.conv3) - - def forward(self, x): - x = self.conv1(x) - x = F.relu_(x) - x = self.conv2(x) - x = F.relu_(x) - x = self.conv3(x) - x = F.relu_(x) - x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1) - return x - - -@BACKBONE_REGISTRY.register() -def build_resnet_deeplab_backbone(cfg, input_shape): - """ - Create a ResNet instance from config. - Returns: - ResNet: a :class:`ResNet` instance. - """ - # need registration of new blocks/stems? 
- norm = cfg.MODEL.RESNETS.NORM - if cfg.MODEL.RESNETS.STEM_TYPE == "basic": - stem = BasicStem( - in_channels=input_shape.channels, - out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS, - norm=norm, - ) - elif cfg.MODEL.RESNETS.STEM_TYPE == "deeplab": - stem = DeepLabStem( - in_channels=input_shape.channels, - out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS, - norm=norm, - ) - else: - raise ValueError("Unknown stem type: {}".format(cfg.MODEL.RESNETS.STEM_TYPE)) - - # fmt: off - freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT - out_features = cfg.MODEL.RESNETS.OUT_FEATURES - depth = cfg.MODEL.RESNETS.DEPTH - num_groups = cfg.MODEL.RESNETS.NUM_GROUPS - width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP - bottleneck_channels = num_groups * width_per_group - in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS - out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS - stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1 - res4_dilation = cfg.MODEL.RESNETS.RES4_DILATION - res5_dilation = cfg.MODEL.RESNETS.RES5_DILATION - deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE - deform_modulated = cfg.MODEL.RESNETS.DEFORM_MODULATED - deform_num_groups = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS - res5_multi_grid = cfg.MODEL.RESNETS.RES5_MULTI_GRID - # fmt: on - assert res4_dilation in {1, 2}, "res4_dilation cannot be {}.".format(res4_dilation) - assert res5_dilation in {1, 2, 4}, "res5_dilation cannot be {}.".format(res5_dilation) - if res4_dilation == 2: - # Always dilate res5 if res4 is dilated. - assert res5_dilation == 4 - - num_blocks_per_stage = {50: [3, 4, 6, 3], 101: [3, 4, 23, 3], 152: [3, 8, 36, 3]}[depth] - - stages = [] - - # Avoid creating variables without gradients - # It consumes extra memory and may cause allreduce to fail - out_stage_idx = [{"res2": 2, "res3": 3, "res4": 4, "res5": 5}[f] for f in out_features] - max_stage_idx = max(out_stage_idx) - for idx, stage_idx in enumerate(range(2, max_stage_idx + 1)): - if stage_idx == 4: - dilation = res4_dilation - elif stage_idx == 5: - dilation = res5_dilation - else: - dilation = 1 - first_stride = 1 if idx == 0 or dilation > 1 else 2 - stage_kargs = { - "num_blocks": num_blocks_per_stage[idx], - "stride_per_block": [first_stride] + [1] * (num_blocks_per_stage[idx] - 1), - "in_channels": in_channels, - "out_channels": out_channels, - "norm": norm, - } - stage_kargs["bottleneck_channels"] = bottleneck_channels - stage_kargs["stride_in_1x1"] = stride_in_1x1 - stage_kargs["dilation"] = dilation - stage_kargs["num_groups"] = num_groups - if deform_on_per_stage[idx]: - stage_kargs["block_class"] = DeformBottleneckBlock - stage_kargs["deform_modulated"] = deform_modulated - stage_kargs["deform_num_groups"] = deform_num_groups - else: - stage_kargs["block_class"] = BottleneckBlock - if stage_idx == 5: - stage_kargs.pop("dilation") - stage_kargs["dilation_per_block"] = [dilation * mg for mg in res5_multi_grid] - blocks = ResNet.make_stage(**stage_kargs) - in_channels = out_channels - out_channels *= 2 - bottleneck_channels *= 2 - stages.append(blocks) - return ResNet(stem, stages, out_features=out_features).freeze(freeze_at) diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/plots.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/plots.py deleted file mode 100644 index 1bbb9c09c33afe83c90d6ea96511ae64c8d9bec9..0000000000000000000000000000000000000000 --- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/plots.py +++ /dev/null @@ -1,489 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license 
-""" -Plotting utils -""" - -import math -import os -from copy import copy -from pathlib import Path -from urllib.error import URLError - -import cv2 -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import seaborn as sn -import torch -from PIL import Image, ImageDraw, ImageFont - -from utils.general import (CONFIG_DIR, FONT, LOGGER, Timeout, check_font, check_requirements, clip_coords, - increment_path, is_ascii, threaded, try_except, xywh2xyxy, xyxy2xywh) -from utils.metrics import fitness - -# Settings -RANK = int(os.getenv('RANK', -1)) -matplotlib.rc('font', **{'size': 11}) -matplotlib.use('Agg') # for writing to files only - - -class Colors: - # Ultralytics color palette https://ultralytics.com/ - def __init__(self): - # hex = matplotlib.colors.TABLEAU_COLORS.values() - hexs = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB', - '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7') - self.palette = [self.hex2rgb(f'#{c}') for c in hexs] - self.n = len(self.palette) - - def __call__(self, i, bgr=False): - c = self.palette[int(i) % self.n] - return (c[2], c[1], c[0]) if bgr else c - - @staticmethod - def hex2rgb(h): # rgb order (PIL) - return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - - -colors = Colors() # create instance for 'from utils.plots import colors' - - -def check_pil_font(font=FONT, size=10): - # Return a PIL TrueType Font, downloading to CONFIG_DIR if necessary - font = Path(font) - font = font if font.exists() else (CONFIG_DIR / font.name) - try: - return ImageFont.truetype(str(font) if font.exists() else font.name, size) - except Exception: # download if missing - try: - check_font(font) - return ImageFont.truetype(str(font), size) - except TypeError: - check_requirements('Pillow>=8.4.0') # known issue https://github.com/ultralytics/yolov5/issues/5374 - except URLError: # not online - return ImageFont.load_default() - - -class Annotator: - # YOLOv5 Annotator for train/val mosaics and jpgs and detect/hub inference annotations - def __init__(self, im, line_width=None, font_size=None, font='Arial.ttf', pil=False, example='abc'): - assert im.data.contiguous, 'Image not contiguous. Apply np.ascontiguousarray(im) to Annotator() input images.' - non_ascii = not is_ascii(example) # non-latin labels, i.e. 
asian, arabic, cyrillic - self.pil = pil or non_ascii - if self.pil: # use PIL - self.im = im if isinstance(im, Image.Image) else Image.fromarray(im) - self.draw = ImageDraw.Draw(self.im) - self.font = check_pil_font(font='Arial.Unicode.ttf' if non_ascii else font, - size=font_size or max(round(sum(self.im.size) / 2 * 0.035), 12)) - else: # use cv2 - self.im = im - self.lw = line_width or max(round(sum(im.shape) / 2 * 0.003), 2) # line width - - def box_label(self, box, label='', color=(128, 128, 128), txt_color=(255, 255, 255)): - # Add one xyxy box to image with label - if self.pil or not is_ascii(label): - self.draw.rectangle(box, width=self.lw, outline=color) # box - if label: - w, h = self.font.getsize(label) # text width, height - outside = box[1] - h >= 0 # label fits outside box - self.draw.rectangle( - (box[0], box[1] - h if outside else box[1], box[0] + w + 1, - box[1] + 1 if outside else box[1] + h + 1), - fill=color, - ) - # self.draw.text((box[0], box[1]), label, fill=txt_color, font=self.font, anchor='ls') # for PIL>8.0 - self.draw.text((box[0], box[1] - h if outside else box[1]), label, fill=txt_color, font=self.font) - else: # cv2 - p1, p2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3])) - cv2.rectangle(self.im, p1, p2, color, thickness=self.lw, lineType=cv2.LINE_AA) - if label: - tf = max(self.lw - 1, 1) # font thickness - w, h = cv2.getTextSize(label, 0, fontScale=self.lw / 3, thickness=tf)[0] # text width, height - outside = p1[1] - h >= 3 - p2 = p1[0] + w, p1[1] - h - 3 if outside else p1[1] + h + 3 - cv2.rectangle(self.im, p1, p2, color, -1, cv2.LINE_AA) # filled - cv2.putText(self.im, - label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2), - 0, - self.lw / 3, - txt_color, - thickness=tf, - lineType=cv2.LINE_AA) - - def rectangle(self, xy, fill=None, outline=None, width=1): - # Add rectangle to image (PIL-only) - self.draw.rectangle(xy, fill, outline, width) - - def text(self, xy, text, txt_color=(255, 255, 255)): - # Add text to image (PIL-only) - w, h = self.font.getsize(text) # text width, height - self.draw.text((xy[0], xy[1] - h + 1), text, fill=txt_color, font=self.font) - - def result(self): - # Return annotated image as array - return np.asarray(self.im) - - -def feature_visualization(x, module_type, stage, n=32, save_dir=Path('runs/detect/exp')): - """ - x: Features to be visualized - module_type: Module type - stage: Module stage within model - n: Maximum number of feature maps to plot - save_dir: Directory to save results - """ - if 'Detect' not in module_type: - batch, channels, height, width = x.shape # batch, channels, height, width - if height > 1 and width > 1: - f = save_dir / f"stage{stage}_{module_type.split('.')[-1]}_features.png" # filename - - blocks = torch.chunk(x[0].cpu(), channels, dim=0) # select batch index 0, block by channels - n = min(n, channels) # number of plots - fig, ax = plt.subplots(math.ceil(n / 8), 8, tight_layout=True) # 8 rows x n/8 cols - ax = ax.ravel() - plt.subplots_adjust(wspace=0.05, hspace=0.05) - for i in range(n): - ax[i].imshow(blocks[i].squeeze()) # cmap='gray' - ax[i].axis('off') - - LOGGER.info(f'Saving {f}... 
({n}/{channels})') - plt.savefig(f, dpi=300, bbox_inches='tight') - plt.close() - np.save(str(f.with_suffix('.npy')), x[0].cpu().numpy()) # npy save - - -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - from scipy.signal import butter, filtfilt - - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - return butter(order, normal_cutoff, btype='low', analog=False) - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def output_to_target(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - for *box, conf, cls in o.cpu().numpy(): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) - return np.array(targets) - - -@threaded -def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=1920, max_subplots=16): - # Plot image grid with labels - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - if np.max(images[0]) <= 1: - images *= 255 # de-normalise (optional) - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = np.ceil(bs ** 0.5) # number of subplots (square) - - # Build Image - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init - for i, im in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin - im = im.transpose(1, 2, 0) - mosaic[y:y + h, x:x + w, :] = im - - # Resize (optional) - scale = max_size / ns / max(h, w) - if scale < 1: - h = math.ceil(scale * h) - w = math.ceil(scale * w) - mosaic = cv2.resize(mosaic, tuple(int(x * ns) for x in (w, h))) - - # Annotate - fs = int((h + w) * ns * 0.01) # font size - annotator = Annotator(mosaic, line_width=round(fs / 10), font_size=fs, pil=True, example=names) - for i in range(i + 1): - x, y = int(w * (i // ns)), int(h * (i % ns)) # block origin - annotator.rectangle([x, y, x + w, y + h], None, (255, 255, 255), width=2) # borders - if paths: - annotator.text((x + 5, y + 5 + h), text=Path(paths[i]).name[:40], txt_color=(220, 220, 220)) # filenames - if len(targets) > 0: - ti = targets[targets[:, 0] == i] # image targets - boxes = xywh2xyxy(ti[:, 2:6]).T - classes = ti[:, 1].astype('int') - labels = ti.shape[1] == 6 # labels if no conf column - conf = None if labels else ti[:, 6] # check for confidence presence (label vs pred) - - if boxes.shape[1]: - if boxes.max() <= 1.01: # if normalized with tolerance 0.01 - boxes[[0, 2]] *= w # scale to pixels - boxes[[1, 3]] *= h - elif scale < 1: # absolute coords need scale if image scales - boxes *= scale - boxes[[0, 2]] += x - boxes[[1, 3]] += y - for j, box in enumerate(boxes.T.tolist()): - cls = classes[j] - color = colors(cls) - cls = names[cls] if names else cls - if labels or conf[j] > 0.25: # 
0.25 conf thresh - label = f'{cls}' if labels else f'{cls} {conf[j]:.1f}' - annotator.box_label(box, label, color=color) - annotator.im.save(fname) # save - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - plt.close() - - -def plot_val_txt(): # from utils.plots import *; plot_val() - # Plot val.txt histograms - x = np.loadtxt('val.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label=f'{x[i].mean():.3g} +/- {x[i].std():.3g}') - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_val_study(file='', dir='', x=None): # from utils.plots import *; plot_val_study() - # Plot file=study.txt generated by val.py (or plot all study*.txt in dir) - save_dir = Path(file).parent if file else Path(dir) - plot2 = False # plot additional results - if plot2: - ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)[1].ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - # for f in [save_dir / f'study_coco_{x}.txt' for x in ['yolov5n6', 'yolov5s6', 'yolov5m6', 'yolov5l6', 'yolov5x6']]: - for f in sorted(save_dir.glob('study*.txt')): - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - if plot2: - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_preprocess (ms/img)', 't_inference (ms/img)', 't_NMS (ms/img)'] - for i in range(7): - ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[5, 1:j], - y[3, 1:j] * 1E2, - '.-', - linewidth=2, - markersize=8, - label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', - linewidth=2, - markersize=8, - alpha=.25, - label='EfficientDet') - - ax2.grid(alpha=0.2) - ax2.set_yticks(np.arange(20, 60, 5)) - ax2.set_xlim(0, 57) - ax2.set_ylim(25, 55) - ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - f = save_dir / 'study.png' - print(f'Saving {f}...') - plt.savefig(f, dpi=300) - - -@try_except # known issue https://github.com/ultralytics/yolov5/issues/5395 -@Timeout(30) # known issue https://github.com/ultralytics/yolov5/issues/5611 -def plot_labels(labels, names=(), save_dir=Path('')): - # plot dataset labels - LOGGER.info(f"Plotting labels to {save_dir / 
'labels.jpg'}... ") - c, b = labels[:, 0], labels[:, 1:].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - - # seaborn correlogram - sn.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) - plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) - plt.close() - - # matplotlib labels - matplotlib.use('svg') # faster - ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() - y = ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - try: # color histogram bars by class - [y[2].patches[i].set_color([x / 255 for x in colors(i)]) for i in range(nc)] # known issue #3195 - except Exception: - pass - ax[0].set_ylabel('instances') - if 0 < len(names) < 30: - ax[0].set_xticks(range(len(names))) - ax[0].set_xticklabels(names, rotation=90, fontsize=10) - else: - ax[0].set_xlabel('classes') - sn.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) - sn.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) - - # rectangles - labels[:, 1:3] = 0.5 # center - labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 - img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) - for cls, *box in labels[:1000]: - ImageDraw.Draw(img).rectangle(box, width=1, outline=colors(cls)) # plot - ax[1].imshow(img) - ax[1].axis('off') - - for a in [0, 1, 2, 3]: - for s in ['top', 'right', 'left', 'bottom']: - ax[a].spines[s].set_visible(False) - - plt.savefig(save_dir / 'labels.jpg', dpi=200) - matplotlib.use('Agg') - plt.close() - - -def plot_evolve(evolve_csv='path/to/evolve.csv'): # from utils.plots import *; plot_evolve() - # Plot evolve.csv hyp evolution results - evolve_csv = Path(evolve_csv) - data = pd.read_csv(evolve_csv) - keys = [x.strip() for x in data.columns] - x = data.values - f = fitness(x) - j = np.argmax(f) # max fitness index - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - print(f'Best results from row {j} of {evolve_csv}:') - for i, k in enumerate(keys[7:]): - v = x[:, 7 + i] - mu = v[j] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(v, f, c=hist2d(v, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title(f'{k} = {mu:.3g}', fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print(f'{k:>15}: {mu:.3g}') - f = evolve_csv.with_suffix('.png') # filename - plt.savefig(f, dpi=200) - plt.close() - print(f'Saved {f}') - - -def plot_results(file='path/to/results.csv', dir=''): - # Plot training results.csv. Usage: from utils.plots import *; plot_results('path/to/results.csv') - save_dir = Path(file).parent if file else Path(dir) - fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) - ax = ax.ravel() - files = list(save_dir.glob('results*.csv')) - assert len(files), f'No results.csv files found in {save_dir.resolve()}, nothing to plot.' 
- for f in files: - try: - data = pd.read_csv(f) - s = [x.strip() for x in data.columns] - x = data.values[:, 0] - for i, j in enumerate([1, 2, 3, 4, 5, 8, 9, 10, 6, 7]): - y = data.values[:, j].astype('float') - # y[y == 0] = np.nan # don't show zero values - ax[i].plot(x, y, marker='.', label=f.stem, linewidth=2, markersize=8) - ax[i].set_title(s[j], fontsize=12) - # if j in [8, 9, 10]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - LOGGER.info(f'Warning: Plotting error for {f}: {e}') - ax[1].legend() - fig.savefig(save_dir / 'results.png', dpi=200) - plt.close() - - -def profile_idetection(start=0, stop=0, labels=(), save_dir=''): - # Plot iDetection '*.txt' per-image logs. from utils.plots import *; profile_idetection() - ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() - s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] - files = list(Path(save_dir).glob('frames*.txt')) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows - n = results.shape[1] # number of rows - x = np.arange(start, min(stop, n) if stop else n) - results = results[:, x] - t = (results[0] - results[0].min()) # set t0=0s - results[0] = x - for i, a in enumerate(ax): - if i < len(results): - label = labels[fi] if len(labels) else f.stem.replace('frames_', '') - a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) - a.set_title(s[i]) - a.set_xlabel('time (s)') - # if fi == len(files) - 1: - # a.set_ylim(bottom=0) - for side in ['top', 'right']: - a.spines[side].set_visible(False) - else: - a.remove() - except Exception as e: - print(f'Warning: Plotting error for {f}; {e}') - ax[1].legend() - plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) - - -def save_one_box(xyxy, im, file=Path('im.jpg'), gain=1.02, pad=10, square=False, BGR=False, save=True): - # Save image crop as {file} with crop size multiple {gain} and {pad} pixels. 
Save and/or return crop - xyxy = torch.tensor(xyxy).view(-1, 4) - b = xyxy2xywh(xyxy) # boxes - if square: - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # attempt rectangle to square - b[:, 2:] = b[:, 2:] * gain + pad # box wh * gain + pad - xyxy = xywh2xyxy(b).long() - clip_coords(xyxy, im.shape) - crop = im[int(xyxy[0, 1]):int(xyxy[0, 3]), int(xyxy[0, 0]):int(xyxy[0, 2]), ::(1 if BGR else -1)] - if save: - file.parent.mkdir(parents=True, exist_ok=True) # make directory - f = str(increment_path(file).with_suffix('.jpg')) - # cv2.imwrite(f, crop) # https://github.com/ultralytics/yolov5/issues/7007 chroma subsampling issue - Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)).save(f, quality=95, subsampling=0) - return crop diff --git a/spaces/caffeinum/VToonify/vtoonify/model/raft/train.py b/spaces/caffeinum/VToonify/vtoonify/model/raft/train.py deleted file mode 100644 index 307573097f13ee30c67bbe11658f457fdf1ead3c..0000000000000000000000000000000000000000 --- a/spaces/caffeinum/VToonify/vtoonify/model/raft/train.py +++ /dev/null @@ -1,247 +0,0 @@ -from __future__ import print_function, division -import sys -sys.path.append('core') - -import argparse -import os -import cv2 -import time -import numpy as np -import matplotlib.pyplot as plt - -import torch -import torch.nn as nn -import torch.optim as optim -import torch.nn.functional as F - -from torch.utils.data import DataLoader -from raft import RAFT -import evaluate -import datasets - -from torch.utils.tensorboard import SummaryWriter - -try: - from torch.cuda.amp import GradScaler -except: - # dummy GradScaler for PyTorch < 1.6 - class GradScaler: - def __init__(self): - pass - def scale(self, loss): - return loss - def unscale_(self, optimizer): - pass - def step(self, optimizer): - optimizer.step() - def update(self): - pass - - -# exclude extremly large displacements -MAX_FLOW = 400 -SUM_FREQ = 100 -VAL_FREQ = 5000 - - -def sequence_loss(flow_preds, flow_gt, valid, gamma=0.8, max_flow=MAX_FLOW): - """ Loss function defined over sequence of flow predictions """ - - n_predictions = len(flow_preds) - flow_loss = 0.0 - - # exlude invalid pixels and extremely large diplacements - mag = torch.sum(flow_gt**2, dim=1).sqrt() - valid = (valid >= 0.5) & (mag < max_flow) - - for i in range(n_predictions): - i_weight = gamma**(n_predictions - i - 1) - i_loss = (flow_preds[i] - flow_gt).abs() - flow_loss += i_weight * (valid[:, None] * i_loss).mean() - - epe = torch.sum((flow_preds[-1] - flow_gt)**2, dim=1).sqrt() - epe = epe.view(-1)[valid.view(-1)] - - metrics = { - 'epe': epe.mean().item(), - '1px': (epe < 1).float().mean().item(), - '3px': (epe < 3).float().mean().item(), - '5px': (epe < 5).float().mean().item(), - } - - return flow_loss, metrics - - -def count_parameters(model): - return sum(p.numel() for p in model.parameters() if p.requires_grad) - - -def fetch_optimizer(args, model): - """ Create the optimizer and learning rate scheduler """ - optimizer = optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.wdecay, eps=args.epsilon) - - scheduler = optim.lr_scheduler.OneCycleLR(optimizer, args.lr, args.num_steps+100, - pct_start=0.05, cycle_momentum=False, anneal_strategy='linear') - - return optimizer, scheduler - - -class Logger: - def __init__(self, model, scheduler): - self.model = model - self.scheduler = scheduler - self.total_steps = 0 - self.running_loss = {} - self.writer = None - - def _print_training_status(self): - metrics_data = [self.running_loss[k]/SUM_FREQ for k in sorted(self.running_loss.keys())] - 
training_str = "[{:6d}, {:10.7f}] ".format(self.total_steps+1, self.scheduler.get_last_lr()[0]) - metrics_str = ("{:10.4f}, "*len(metrics_data)).format(*metrics_data) - - # print the training status - print(training_str + metrics_str) - - if self.writer is None: - self.writer = SummaryWriter() - - for k in self.running_loss: - self.writer.add_scalar(k, self.running_loss[k]/SUM_FREQ, self.total_steps) - self.running_loss[k] = 0.0 - - def push(self, metrics): - self.total_steps += 1 - - for key in metrics: - if key not in self.running_loss: - self.running_loss[key] = 0.0 - - self.running_loss[key] += metrics[key] - - if self.total_steps % SUM_FREQ == SUM_FREQ-1: - self._print_training_status() - self.running_loss = {} - - def write_dict(self, results): - if self.writer is None: - self.writer = SummaryWriter() - - for key in results: - self.writer.add_scalar(key, results[key], self.total_steps) - - def close(self): - self.writer.close() - - -def train(args): - - model = nn.DataParallel(RAFT(args), device_ids=args.gpus) - print("Parameter Count: %d" % count_parameters(model)) - - if args.restore_ckpt is not None: - model.load_state_dict(torch.load(args.restore_ckpt), strict=False) - - model.cuda() - model.train() - - if args.stage != 'chairs': - model.module.freeze_bn() - - train_loader = datasets.fetch_dataloader(args) - optimizer, scheduler = fetch_optimizer(args, model) - - total_steps = 0 - scaler = GradScaler(enabled=args.mixed_precision) - logger = Logger(model, scheduler) - - VAL_FREQ = 5000 - add_noise = True - - should_keep_training = True - while should_keep_training: - - for i_batch, data_blob in enumerate(train_loader): - optimizer.zero_grad() - image1, image2, flow, valid = [x.cuda() for x in data_blob] - - if args.add_noise: - stdv = np.random.uniform(0.0, 5.0) - image1 = (image1 + stdv * torch.randn(*image1.shape).cuda()).clamp(0.0, 255.0) - image2 = (image2 + stdv * torch.randn(*image2.shape).cuda()).clamp(0.0, 255.0) - - flow_predictions = model(image1, image2, iters=args.iters) - - loss, metrics = sequence_loss(flow_predictions, flow, valid, args.gamma) - scaler.scale(loss).backward() - scaler.unscale_(optimizer) - torch.nn.utils.clip_grad_norm_(model.parameters(), args.clip) - - scaler.step(optimizer) - scheduler.step() - scaler.update() - - logger.push(metrics) - - if total_steps % VAL_FREQ == VAL_FREQ - 1: - PATH = 'checkpoints/%d_%s.pth' % (total_steps+1, args.name) - torch.save(model.state_dict(), PATH) - - results = {} - for val_dataset in args.validation: - if val_dataset == 'chairs': - results.update(evaluate.validate_chairs(model.module)) - elif val_dataset == 'sintel': - results.update(evaluate.validate_sintel(model.module)) - elif val_dataset == 'kitti': - results.update(evaluate.validate_kitti(model.module)) - - logger.write_dict(results) - - model.train() - if args.stage != 'chairs': - model.module.freeze_bn() - - total_steps += 1 - - if total_steps > args.num_steps: - should_keep_training = False - break - - logger.close() - PATH = 'checkpoints/%s.pth' % args.name - torch.save(model.state_dict(), PATH) - - return PATH - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--name', default='raft', help="name your experiment") - parser.add_argument('--stage', help="determines which dataset to use for training") - parser.add_argument('--restore_ckpt', help="restore checkpoint") - parser.add_argument('--small', action='store_true', help='use small model') - parser.add_argument('--validation', type=str, nargs='+') - - 
parser.add_argument('--lr', type=float, default=0.00002) - parser.add_argument('--num_steps', type=int, default=100000) - parser.add_argument('--batch_size', type=int, default=6) - parser.add_argument('--image_size', type=int, nargs='+', default=[384, 512]) - parser.add_argument('--gpus', type=int, nargs='+', default=[0,1]) - parser.add_argument('--mixed_precision', action='store_true', help='use mixed precision') - - parser.add_argument('--iters', type=int, default=12) - parser.add_argument('--wdecay', type=float, default=.00005) - parser.add_argument('--epsilon', type=float, default=1e-8) - parser.add_argument('--clip', type=float, default=1.0) - parser.add_argument('--dropout', type=float, default=0.0) - parser.add_argument('--gamma', type=float, default=0.8, help='exponential weighting') - parser.add_argument('--add_noise', action='store_true') - args = parser.parse_args() - - torch.manual_seed(1234) - np.random.seed(1234) - - if not os.path.isdir('checkpoints'): - os.mkdir('checkpoints') - - train(args) \ No newline at end of file diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/aspp.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/aspp.py deleted file mode 100644 index 14861aa9ede4fea6a69a49f189bcab997b558148..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/aspp.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from copy import deepcopy -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from .batch_norm import get_norm -from .blocks import DepthwiseSeparableConv2d -from .wrappers import Conv2d - - -class ASPP(nn.Module): - """ - Atrous Spatial Pyramid Pooling (ASPP). - """ - - def __init__( - self, - in_channels, - out_channels, - dilations, - *, - norm, - activation, - pool_kernel_size=None, - dropout: float = 0.0, - use_depthwise_separable_conv=False, - ): - """ - Args: - in_channels (int): number of input channels for ASPP. - out_channels (int): number of output channels. - dilations (list): a list of 3 dilations in ASPP. - norm (str or callable): normalization for all conv layers. - See :func:`layers.get_norm` for supported format. norm is - applied to all conv layers except the conv following - global average pooling. - activation (callable): activation function. - pool_kernel_size (tuple, list): the average pooling size (kh, kw) - for image pooling layer in ASPP. If set to None, it always - performs global average pooling. If not None, it must be - divisible by the shape of inputs in forward(). It is recommended - to use a fixed input feature size in training, and set this - option to match this size, so that it performs global average - pooling in training, and the size of the pooling window stays - consistent in inference. - dropout (float): apply dropout on the output of ASPP. It is used in - the official DeepLab implementation with a rate of 0.1: - https://github.com/tensorflow/models/blob/21b73d22f3ed05b650e85ac50849408dd36de32e/research/deeplab/model.py#L532 # noqa - use_depthwise_separable_conv (bool): use DepthwiseSeparableConv2d - for 3x3 convs in ASPP, proposed in :paper:`DeepLabV3+`. 
- """ - super(ASPP, self).__init__() - assert len(dilations) == 3, "ASPP expects 3 dilations, got {}".format(len(dilations)) - self.pool_kernel_size = pool_kernel_size - self.dropout = dropout - use_bias = norm == "" - self.convs = nn.ModuleList() - # conv 1x1 - self.convs.append( - Conv2d( - in_channels, - out_channels, - kernel_size=1, - bias=use_bias, - norm=get_norm(norm, out_channels), - activation=deepcopy(activation), - ) - ) - weight_init.c2_xavier_fill(self.convs[-1]) - # atrous convs - for dilation in dilations: - if use_depthwise_separable_conv: - self.convs.append( - DepthwiseSeparableConv2d( - in_channels, - out_channels, - kernel_size=3, - padding=dilation, - dilation=dilation, - norm1=norm, - activation1=deepcopy(activation), - norm2=norm, - activation2=deepcopy(activation), - ) - ) - else: - self.convs.append( - Conv2d( - in_channels, - out_channels, - kernel_size=3, - padding=dilation, - dilation=dilation, - bias=use_bias, - norm=get_norm(norm, out_channels), - activation=deepcopy(activation), - ) - ) - weight_init.c2_xavier_fill(self.convs[-1]) - # image pooling - # We do not add BatchNorm because the spatial resolution is 1x1, - # the original TF implementation has BatchNorm. - if pool_kernel_size is None: - image_pooling = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)), - ) - else: - image_pooling = nn.Sequential( - nn.AvgPool2d(kernel_size=pool_kernel_size, stride=1), - Conv2d(in_channels, out_channels, 1, bias=True, activation=deepcopy(activation)), - ) - weight_init.c2_xavier_fill(image_pooling[1]) - self.convs.append(image_pooling) - - self.project = Conv2d( - 5 * out_channels, - out_channels, - kernel_size=1, - bias=use_bias, - norm=get_norm(norm, out_channels), - activation=deepcopy(activation), - ) - weight_init.c2_xavier_fill(self.project) - - def forward(self, x): - size = x.shape[-2:] - if self.pool_kernel_size is not None: - if size[0] % self.pool_kernel_size[0] or size[1] % self.pool_kernel_size[1]: - raise ValueError( - "`pool_kernel_size` must be divisible by the shape of inputs. " - "Input size: {} `pool_kernel_size`: {}".format(size, self.pool_kernel_size) - ) - res = [] - for conv in self.convs: - res.append(conv(x)) - res[-1] = F.interpolate(res[-1], size=size, mode="bilinear", align_corners=False) - res = torch.cat(res, dim=1) - res = self.project(res) - res = F.dropout(res, self.dropout, training=self.training) if self.dropout > 0 else res - return res diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/lazyconfig_train_net.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/lazyconfig_train_net.py deleted file mode 100644 index bb62d36c0c171b0391453afafc2828ebab1b0da1..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tools/lazyconfig_train_net.py +++ /dev/null @@ -1,131 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Training script using the new "LazyConfig" python config files. - -This scripts reads a given python config file and runs the training or evaluation. -It can be used to train any models or dataset as long as they can be -instantiated by the recursive construction defined in the given config file. - -Besides lazy construction of models, dataloader, etc., this scripts expects a -few common configuration parameters currently defined in "configs/common/train.py". 
-To add more complicated training logic, you can easily add other configs -in the config file and implement a new train_net.py to handle them. -""" -import logging - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import LazyConfig, instantiate -from detectron2.engine import ( - AMPTrainer, - SimpleTrainer, - default_argument_parser, - default_setup, - default_writers, - hooks, - launch, -) -from detectron2.engine.defaults import create_ddp_model -from detectron2.evaluation import inference_on_dataset, print_csv_format -from detectron2.utils import comm - -logger = logging.getLogger("detectron2") - - -def do_test(cfg, model): - if "evaluator" in cfg.dataloader: - ret = inference_on_dataset( - model, instantiate(cfg.dataloader.test), instantiate(cfg.dataloader.evaluator) - ) - print_csv_format(ret) - return ret - - -def do_train(args, cfg): - """ - Args: - cfg: an object with the following attributes: - model: instantiate to a module - dataloader.{train,test}: instantiate to dataloaders - dataloader.evaluator: instantiate to evaluator for test set - optimizer: instantaite to an optimizer - lr_multiplier: instantiate to a fvcore scheduler - train: other misc config defined in `configs/common/train.py`, including: - output_dir (str) - init_checkpoint (str) - amp.enabled (bool) - max_iter (int) - eval_period, log_period (int) - device (str) - checkpointer (dict) - ddp (dict) - """ - model = instantiate(cfg.model) - logger = logging.getLogger("detectron2") - logger.info("Model:\n{}".format(model)) - model.to(cfg.train.device) - - cfg.optimizer.params.model = model - optim = instantiate(cfg.optimizer) - - train_loader = instantiate(cfg.dataloader.train) - - model = create_ddp_model(model, **cfg.train.ddp) - trainer = (AMPTrainer if cfg.train.amp.enabled else SimpleTrainer)(model, train_loader, optim) - checkpointer = DetectionCheckpointer( - model, - cfg.train.output_dir, - trainer=trainer, - ) - trainer.register_hooks( - [ - hooks.IterationTimer(), - hooks.LRScheduler(scheduler=instantiate(cfg.lr_multiplier)), - hooks.PeriodicCheckpointer(checkpointer, **cfg.train.checkpointer) - if comm.is_main_process() - else None, - hooks.EvalHook(cfg.train.eval_period, lambda: do_test(cfg, model)), - hooks.PeriodicWriter( - default_writers(cfg.train.output_dir, cfg.train.max_iter), - period=cfg.train.log_period, - ) - if comm.is_main_process() - else None, - ] - ) - - checkpointer.resume_or_load(cfg.train.init_checkpoint, resume=args.resume) - if args.resume and checkpointer.has_checkpoint(): - # The checkpoint stores the training iteration that just finished, thus we start - # at the next iteration - start_iter = trainer.iter + 1 - else: - start_iter = 0 - trainer.train(start_iter, cfg.train.max_iter) - - -def main(args): - cfg = LazyConfig.load(args.config_file) - cfg = LazyConfig.apply_overrides(cfg, args.opts) - default_setup(cfg, args) - - if args.eval_only: - model = instantiate(cfg.model) - model.to(cfg.train.device) - model = create_ddp_model(model) - DetectionCheckpointer(model).load(cfg.train.init_checkpoint) - print(do_test(cfg, model)) - else: - do_train(args, cfg) - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/chansung/palm-with-gradio-chat/js.py b/spaces/chansung/palm-with-gradio-chat/js.py deleted file mode 100644 index 
781e4c35f98903536b1fcdb075a331988698eeb9..0000000000000000000000000000000000000000 --- a/spaces/chansung/palm-with-gradio-chat/js.py +++ /dev/null @@ -1,81 +0,0 @@ -GET_LOCAL_STORAGE = """ -function() { - globalThis.setStorage = (key, value)=>{ - localStorage.setItem(key, JSON.stringify(value)); - } - globalThis.getStorage = (key, value)=>{ - return JSON.parse(localStorage.getItem(key)); - } - - var local_data = getStorage('local_data'); - var history = []; - - if(local_data) { - local_data[0].pingpongs.forEach(element =>{ - history.push([element.ping, element.pong]); - }); - } - else { - local_data = []; - for (let step = 0; step < 10; step++) { - local_data.push({'ctx': '', 'pingpongs':[]}); - } - setStorage('local_data', local_data); - } - - if(history.length == 0) { - document.querySelector("#initial-popup").classList.remove('hide'); - } - - return [history, local_data]; -} -""" - -UPDATE_LEFT_BTNS_STATE = """ -(v)=>{ - document.querySelector('.custom-btn-highlight').classList.add('custom-btn'); - document.querySelector('.custom-btn-highlight').classList.remove('custom-btn-highlight'); - - const elements = document.querySelectorAll(".custom-btn"); - - for(var i=0; i < elements.length; i++) { - const element = elements[i]; - if(element.textContent == v) { - console.log(v); - element.classList.add('custom-btn-highlight'); - element.classList.remove('custom-btn'); - break; - } - } -}""" - -UPDATE_PLACEHOLDERS = """ -function update_placeholders(txt, placeholder_txt1, placeholder_txt2, placeholder_txt3) { - let example_prompt = txt; - - const regex = /\[([^\]]*)\]/g; - const matches = txt.match(regex); - - if (matches != null) { - if (matches.length >= 1) { - if (placeholder_txt1 !== "") { - example_prompt = example_prompt.replace(matches[0], placeholder_txt1); - } - } - - if (matches.length >= 2) { - if (placeholder_txt2 !== "") { - example_prompt = example_prompt.replace(matches[1], placeholder_txt2); - } - } - - if (matches.length >= 3) { - if (placeholder_txt1 !== "") { - example_prompt = example_prompt.replace(matches[2], placeholder_txt3); - } - } - } - - return example_prompt -} -""" \ No newline at end of file diff --git a/spaces/chats-bug/ai-image-captioning/app.py b/spaces/chats-bug/ai-image-captioning/app.py deleted file mode 100644 index 180841c336970ac66d8ad292095a6b5887498a42..0000000000000000000000000000000000000000 --- a/spaces/chats-bug/ai-image-captioning/app.py +++ /dev/null @@ -1,102 +0,0 @@ -import gradio as gr -import torch -from PIL import Image - -from model import BlipBaseModel, GitBaseCocoModel - -MODELS = { - "Git-Base-COCO": GitBaseCocoModel, - "Blip Base": BlipBaseModel, -} - -# examples = [["Image1.png"], ["Image2.png"], ["Image3.png"]] - -def generate_captions( - image, - num_captions, - model_name, - max_length, - temperature, - top_k, - top_p, - repetition_penalty, - diversity_penalty, - ): - """ - Generates captions for the given image. - - ----- - Parameters: - image: PIL.Image - The image to generate captions for. - num_captions: int - The number of captions to generate. - ** Rest of the parameters are the same as in the model.generate method. ** - ----- - Returns: - list[str] - """ - # Convert the numerical values to their corresponding types. - # Gradio Slider returns values as floats: except when the value is a whole number, in which case it returns an int. - # Only float values suffer from this issue. 
- temperature = float(temperature) - top_p = float(top_p) - repetition_penalty = float(repetition_penalty) - diversity_penalty = float(diversity_penalty) - - device = "cuda" if torch.cuda.is_available() else "cpu" - - model = MODELS[model_name](device) - - captions = model.generate( - image=image, - max_length=max_length, - num_captions=num_captions, - temperature=temperature, - top_k=top_k, - top_p=top_p, - repetition_penalty=repetition_penalty, - diversity_penalty=diversity_penalty, - ) - - # Convert list to a single string separated by newlines. - captions = "\n".join(captions) - return captions - -title = "AI tool for generating captions for images" -description = "This tool uses pretrained models to generate captions for images." - -interface = gr.Interface( - fn=generate_captions, - inputs=[ - gr.components.Image(type="pil", label="Image"), - gr.components.Slider(minimum=1, maximum=10, step=1, value=1, label="Number of Captions to Generate"), - gr.components.Dropdown(MODELS.keys(), label="Model", value=list(MODELS.keys())[1]), # Default to Blip Base - gr.components.Slider(minimum=20, maximum=100, step=5, value=50, label="Maximum Caption Length"), - gr.components.Slider(minimum=0.1, maximum=10.0, step=0.1, value=1.0, label="Temperature"), - gr.components.Slider(minimum=1, maximum=100, step=1, value=50, label="Top K"), - gr.components.Slider(minimum=0.1, maximum=5.0, step=0.1, value=1.0, label="Top P"), - gr.components.Slider(minimum=1.0, maximum=10.0, step=0.1, value=2.0, label="Repetition Penalty"), - gr.components.Slider(minimum=0.0, maximum=10.0, step=0.1, value=2.0, label="Diversity Penalty"), - ], - outputs=[ - gr.components.Textbox(label="Caption"), - ], - # Set image examples to be displayed in the interface. - examples = [ - ["Image1.png", 1, list(MODELS.keys())[1], 50, 1.0, 50, 1.0, 2.0, 2.0], - ["Image2.png", 1, list(MODELS.keys())[1], 50, 1.0, 50, 1.0, 2.0, 2.0], - ["Image3.png", 1, list(MODELS.keys())[1], 50, 1.0, 50, 1.0, 2.0, 2.0], - ], - title=title, - description=description, - allow_flagging="never", -) - - -if __name__ == "__main__": - # Launch the interface. 
- interface.launch( - enable_queue=True, - debug=True, - ) \ No newline at end of file diff --git a/spaces/christhegamechanger/background_swapping/setup.sh b/spaces/christhegamechanger/background_swapping/setup.sh deleted file mode 100644 index c8650a8b74a58d9a5f53b185fd711c5668e1cd52..0000000000000000000000000000000000000000 --- a/spaces/christhegamechanger/background_swapping/setup.sh +++ /dev/null @@ -1,13 +0,0 @@ -mkdir -p ~/.streamlit/ - -echo "\ -[general]\n\ -email = \"your-email@domain.com\"\n\ -" > ~/.streamlit/credentials.toml - -echo "\ -[server]\n\ -headless = true\n\ -enableCORS=false\n\ -port = $PORT\n\ -" > ~/.streamlit/config.toml \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/ttGlyphSet.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/ttGlyphSet.py deleted file mode 100644 index fa7fbd4f23558f6705ee3e819ded518bb7549e36..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/ttGlyphSet.py +++ /dev/null @@ -1,322 +0,0 @@ -"""GlyphSets returned by a TTFont.""" - -from abc import ABC, abstractmethod -from collections.abc import Mapping -from contextlib import contextmanager -from copy import copy -from types import SimpleNamespace -from fontTools.misc.fixedTools import otRound -from fontTools.misc.loggingTools import deprecateFunction -from fontTools.misc.transform import Transform -from fontTools.pens.transformPen import TransformPen, TransformPointPen - - -class _TTGlyphSet(Mapping): - - """Generic dict-like GlyphSet class that pulls metrics from hmtx and - glyph shape from TrueType or CFF. - """ - - def __init__(self, font, location, glyphsMapping): - self.font = font - self.defaultLocationNormalized = ( - {axis.axisTag: 0 for axis in self.font["fvar"].axes} - if "fvar" in self.font - else {} - ) - self.location = location if location is not None else {} - self.rawLocation = {} # VarComponent-only location - self.originalLocation = location if location is not None else {} - self.depth = 0 - self.locationStack = [] - self.rawLocationStack = [] - self.glyphsMapping = glyphsMapping - self.hMetrics = font["hmtx"].metrics - self.vMetrics = getattr(font.get("vmtx"), "metrics", None) - self.hvarTable = None - if location: - from fontTools.varLib.varStore import VarStoreInstancer - - self.hvarTable = getattr(font.get("HVAR"), "table", None) - if self.hvarTable is not None: - self.hvarInstancer = VarStoreInstancer( - self.hvarTable.VarStore, font["fvar"].axes, location - ) - # TODO VVAR, VORG - - @contextmanager - def pushLocation(self, location, reset: bool): - self.locationStack.append(self.location) - self.rawLocationStack.append(self.rawLocation) - if reset: - self.location = self.originalLocation.copy() - self.rawLocation = self.defaultLocationNormalized.copy() - else: - self.location = self.location.copy() - self.rawLocation = {} - self.location.update(location) - self.rawLocation.update(location) - - try: - yield None - finally: - self.location = self.locationStack.pop() - self.rawLocation = self.rawLocationStack.pop() - - @contextmanager - def pushDepth(self): - try: - depth = self.depth - self.depth += 1 - yield depth - finally: - self.depth -= 1 - - def __contains__(self, glyphName): - return glyphName in self.glyphsMapping - - def __iter__(self): - return iter(self.glyphsMapping.keys()) - - def __len__(self): - return len(self.glyphsMapping) - - @deprecateFunction( - 
"use 'glyphName in glyphSet' instead", category=DeprecationWarning - ) - def has_key(self, glyphName): - return glyphName in self.glyphsMapping - - -class _TTGlyphSetGlyf(_TTGlyphSet): - def __init__(self, font, location): - self.glyfTable = font["glyf"] - super().__init__(font, location, self.glyfTable) - self.gvarTable = font.get("gvar") - - def __getitem__(self, glyphName): - return _TTGlyphGlyf(self, glyphName) - - -class _TTGlyphSetCFF(_TTGlyphSet): - def __init__(self, font, location): - tableTag = "CFF2" if "CFF2" in font else "CFF " - self.charStrings = list(font[tableTag].cff.values())[0].CharStrings - super().__init__(font, location, self.charStrings) - self.blender = None - if location: - from fontTools.varLib.varStore import VarStoreInstancer - - varStore = getattr(self.charStrings, "varStore", None) - if varStore is not None: - instancer = VarStoreInstancer( - varStore.otVarStore, font["fvar"].axes, location - ) - self.blender = instancer.interpolateFromDeltas - - def __getitem__(self, glyphName): - return _TTGlyphCFF(self, glyphName) - - -class _TTGlyph(ABC): - - """Glyph object that supports the Pen protocol, meaning that it has - .draw() and .drawPoints() methods that take a pen object as their only - argument. Additionally there are 'width' and 'lsb' attributes, read from - the 'hmtx' table. - - If the font contains a 'vmtx' table, there will also be 'height' and 'tsb' - attributes. - """ - - def __init__(self, glyphSet, glyphName): - self.glyphSet = glyphSet - self.name = glyphName - self.width, self.lsb = glyphSet.hMetrics[glyphName] - if glyphSet.vMetrics is not None: - self.height, self.tsb = glyphSet.vMetrics[glyphName] - else: - self.height, self.tsb = None, None - if glyphSet.location and glyphSet.hvarTable is not None: - varidx = ( - glyphSet.font.getGlyphID(glyphName) - if glyphSet.hvarTable.AdvWidthMap is None - else glyphSet.hvarTable.AdvWidthMap.mapping[glyphName] - ) - self.width += glyphSet.hvarInstancer[varidx] - # TODO: VVAR/VORG - - @abstractmethod - def draw(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.basePen for details - how that works. - """ - raise NotImplementedError - - def drawPoints(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.pointPen for details - how that works. - """ - from fontTools.pens.pointPen import SegmentToPointPen - - self.draw(SegmentToPointPen(pen)) - - -class _TTGlyphGlyf(_TTGlyph): - def draw(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.basePen for details - how that works. - """ - glyph, offset = self._getGlyphAndOffset() - - with self.glyphSet.pushDepth() as depth: - - if depth: - offset = 0 # Offset should only apply at top-level - - if glyph.isVarComposite(): - self._drawVarComposite(glyph, pen, False) - return - - glyph.draw(pen, self.glyphSet.glyfTable, offset) - - def drawPoints(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.pointPen for details - how that works. 
- """ - glyph, offset = self._getGlyphAndOffset() - - with self.glyphSet.pushDepth() as depth: - - if depth: - offset = 0 # Offset should only apply at top-level - - if glyph.isVarComposite(): - self._drawVarComposite(glyph, pen, True) - return - - glyph.drawPoints(pen, self.glyphSet.glyfTable, offset) - - def _drawVarComposite(self, glyph, pen, isPointPen): - - from fontTools.ttLib.tables._g_l_y_f import ( - VarComponentFlags, - VAR_COMPONENT_TRANSFORM_MAPPING, - ) - - for comp in glyph.components: - - with self.glyphSet.pushLocation( - comp.location, comp.flags & VarComponentFlags.RESET_UNSPECIFIED_AXES - ): - try: - pen.addVarComponent( - comp.glyphName, comp.transform, self.glyphSet.rawLocation - ) - except AttributeError: - t = comp.transform.toTransform() - if isPointPen: - tPen = TransformPointPen(pen, t) - self.glyphSet[comp.glyphName].drawPoints(tPen) - else: - tPen = TransformPen(pen, t) - self.glyphSet[comp.glyphName].draw(tPen) - - def _getGlyphAndOffset(self): - if self.glyphSet.location and self.glyphSet.gvarTable is not None: - glyph = self._getGlyphInstance() - else: - glyph = self.glyphSet.glyfTable[self.name] - - offset = self.lsb - glyph.xMin if hasattr(glyph, "xMin") else 0 - return glyph, offset - - def _getGlyphInstance(self): - from fontTools.varLib.iup import iup_delta - from fontTools.ttLib.tables._g_l_y_f import GlyphCoordinates - from fontTools.varLib.models import supportScalar - - glyphSet = self.glyphSet - glyfTable = glyphSet.glyfTable - variations = glyphSet.gvarTable.variations[self.name] - hMetrics = glyphSet.hMetrics - vMetrics = glyphSet.vMetrics - coordinates, _ = glyfTable._getCoordinatesAndControls( - self.name, hMetrics, vMetrics - ) - origCoords, endPts = None, None - for var in variations: - scalar = supportScalar(glyphSet.location, var.axes) - if not scalar: - continue - delta = var.coordinates - if None in delta: - if origCoords is None: - origCoords, control = glyfTable._getCoordinatesAndControls( - self.name, hMetrics, vMetrics - ) - endPts = ( - control[1] if control[0] >= 1 else list(range(len(control[1]))) - ) - delta = iup_delta(delta, origCoords, endPts) - coordinates += GlyphCoordinates(delta) * scalar - - glyph = copy(glyfTable[self.name]) # Shallow copy - width, lsb, height, tsb = _setCoordinates(glyph, coordinates, glyfTable) - self.lsb = lsb - self.tsb = tsb - if glyphSet.hvarTable is None: - # no HVAR: let's set metrics from the phantom points - self.width = width - self.height = height - return glyph - - -class _TTGlyphCFF(_TTGlyph): - def draw(self, pen): - """Draw the glyph onto ``pen``. See fontTools.pens.basePen for details - how that works. - """ - self.glyphSet.charStrings[self.name].draw(pen, self.glyphSet.blender) - - -def _setCoordinates(glyph, coord, glyfTable): - # Handle phantom points for (left, right, top, bottom) positions. 
- assert len(coord) >= 4 - leftSideX = coord[-4][0] - rightSideX = coord[-3][0] - topSideY = coord[-2][1] - bottomSideY = coord[-1][1] - - for _ in range(4): - del coord[-1] - - if glyph.isComposite(): - assert len(coord) == len(glyph.components) - glyph.components = [copy(comp) for comp in glyph.components] # Shallow copy - for p, comp in zip(coord, glyph.components): - if hasattr(comp, "x"): - comp.x, comp.y = p - elif glyph.isVarComposite(): - glyph.components = [copy(comp) for comp in glyph.components] # Shallow copy - for comp in glyph.components: - coord = comp.setCoordinates(coord) - assert not coord - elif glyph.numberOfContours == 0: - assert len(coord) == 0 - else: - assert len(coord) == len(glyph.coordinates) - glyph.coordinates = coord - - glyph.recalcBounds(glyfTable) - - horizontalAdvanceWidth = otRound(rightSideX - leftSideX) - verticalAdvanceWidth = otRound(topSideY - bottomSideY) - leftSideBearing = otRound(glyph.xMin - leftSideX) - topSideBearing = otRound(topSideY - glyph.yMax) - return ( - horizontalAdvanceWidth, - leftSideBearing, - verticalAdvanceWidth, - topSideBearing, - ) diff --git a/spaces/cihyFjudo/fairness-paper-search/Chhota Bheem and the throne of Bali telugu movie in hindi download Watch the adventure of Bheem and his friends.md b/spaces/cihyFjudo/fairness-paper-search/Chhota Bheem and the throne of Bali telugu movie in hindi download Watch the adventure of Bheem and his friends.md deleted file mode 100644 index 6af4919e4ae57a9859d57625378ad8a186464e06..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Chhota Bheem and the throne of Bali telugu movie in hindi download Watch the adventure of Bheem and his friends.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Chhota Bheem and the throne of Bali telugu movie in hindi download


    Download File »»» https://tinurli.com/2uwkha



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/The-Art-Of-Speculation-Philip-L-Carret-1930-Revised-Editionpdf.md b/spaces/cihyFjudo/fairness-paper-search/The-Art-Of-Speculation-Philip-L-Carret-1930-Revised-Editionpdf.md deleted file mode 100644 index 431e4eec81d54a5ab168b5e9d75dc47d77f8648f..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/The-Art-Of-Speculation-Philip-L-Carret-1930-Revised-Editionpdf.md +++ /dev/null @@ -1,94 +0,0 @@ -## The Art Of Speculation Philip L Carret 1930 Revised Editionpdf - - - - - - ![The Art Of Speculation Philip L Carret 1930 Revised Editionpdf](https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSI5LhTUYqexXB8mw460g_6rI_XjKb8Uv9Z0CKdjwv2EkXcW9-XlN-7XLbq) - - - - - -**Download — [https://www.google.com/url?q=https%3A%2F%2Furluso.com%2F2txlgJ&sa=D&sntz=1&usg=AOvVaw3t1HOH5DBwDd5vbDIOTMBq](https://www.google.com/url?q=https%3A%2F%2Furluso.com%2F2txlgJ&sa=D&sntz=1&usg=AOvVaw3t1HOH5DBwDd5vbDIOTMBq)** - - - - - - - - - - - - - -# The Art of Speculation: A Classic Book on Investing by Philip L. Carret - - - -Have you ever wondered what it takes to be a successful speculator in the stock market? Do you want to learn from one of the pioneers of mutual funds and a role model for Warren Buffett? If so, you might be interested in reading *The Art of Speculation* by Philip L. Carret. - - - -*The Art of Speculation* is a book that was first published in 1927 and revised in 1930 by Philip L. Carret, a famed investor and founder of The Pioneer Fund, one of the first mutual funds in the United States. Carret was a former Barron's reporter and WWI aviator who launched the fund in 1928 after managing money for his friends and family. He ran the fund for 55 years, during which an investment of $10,000 became $8 million. Warren Buffett said of him that he had "the best long term investment record of anyone I know". - - - -In this book, Carret shares his insights and wisdom on the art and science of speculation, which he defines as "the purchase or sale of securities or commodities in expectation of profiting by fluctuations in their prices". He covers topics such as the machinery of markets, the vehicles of speculation, market movements, forecasting the major swings, reading financial statements, analyzing different types of stocks, trading in unlisted securities, options and arbitrage, and when speculation becomes investment. He also provides examples and anecdotes from his own experience and from other famous speculators such as Jesse Livermore, Bernard Baruch, and J.P. Morgan. - - - -*The Art of Speculation* is a classic book on investing that has stood the test of time and is still relevant today. It is not a book for beginners, but rather for those who have some knowledge and experience in the stock market and want to improve their skills and judgment. It is also a book that requires careful study and reflection, as Carret does not offer any easy formulas or rules, but rather principles and guidelines that need to be applied with discretion and common sense. - - - -If you are interested in reading *The Art of Speculation* by Philip L. Carret, you can find a free pdf version online at [archive.org](https://archive.org/details/artofspeculation0000carr_x7y1). You can also buy a paperback edition at [Google Books](https://books.google.com/books/about/The_Art_Of_Speculation.html?id=ANFvCwAAQBAJ) or [Google Books](https://books.google.com/books/about/The_Art_of_Speculation.html?id=OfWnbHN3aQ8C). 
- - - -In this article, we will take a closer look at the life and career of Philip L. Carret, the author of *The Art of Speculation* and one of the most influential investors of the 20th century. - - - -## Early Life and Education - - - -Philip L. Carret was born on November 29, 1896 in Lynn, Massachusetts. He was interested in chemistry and mathematics from an early age and graduated from Harvard College in 1917 with a Bachelor of Science in Chemistry. He then attended Harvard Business School but did not complete his degree due to his enlistment in the U.S. Army Air Service during World War I. He served as a pilot and instructor until 1919. - - - -## Career as a Journalist and Investor - - - -After the war, Carret worked as a reporter for Barron's, a financial weekly magazine. He covered topics such as banking, insurance, railroads, and utilities. He also developed his own style of investing based on value principles, which he introduced in a series of articles in 1927. He advocated buying stocks that were undervalued by the market and holding them for the long term, regardless of short-term fluctuations. He also emphasized the importance of diversification, margin of safety, and fundamental analysis. - - - -In 1928, Carret founded one of the first mutual funds in the United States, The Pioneer Fund (then: Fidelity Mutual Trust). He started the fund with $25,000 of his own money and $500,000 from his friends and family. He managed the fund for 55 years, achieving an average annual return of 13% and turning an initial investment of $10,000 into $8 million. He was one of the few investors who survived the Great Depression and profited from it by buying stocks at bargain prices. He also invested in companies that were pioneers in their fields, such as IBM, Xerox, Polaroid, and Boeing. - - - -In 1963, Carret founded Carret Asset Management, a family office and investment advisory firm for institutional clients and high net worth families. He continued to manage money until his death in 1998 at age 101. He was known for his discipline, patience, humility, and curiosity. He was an avid reader and traveler who visited more than 100 countries and witnessed several solar eclipses. - - - -## Legacy and Influence - - - -Philip L. Carret was widely respected and admired by his peers and successors in the investment world. He was a role model for Warren Buffett, who said that Carret had "the best long term investment record of anyone I know". He was also praised by Benjamin Graham, John Templeton, Peter Lynch, and John Bogle. He received many honors and awards for his achievements, such as the Financial Analysts Federation Award for Outstanding Achievement in 1976 and the Harvard Business School Alumni Achievement Award in 1984. - - - -*The Art of Speculation* is one of Carret's most enduring contributions to the field of investing. It is a book that combines theory and practice, history and philosophy, wisdom and wit. It is a book that teaches not only how to invest but also how to think. It is a book that has inspired generations of investors who seek to master the art and science of speculation. 
- - 1b8d091108 - - - - - diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/__init__.py deleted file mode 100644 index f4cba26bf6ecaf18e96a62db69f70078498451e3..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/__init__.py +++ /dev/null @@ -1,96 +0,0 @@ -# DON'T EDIT! This file is generated by MetaTools/buildTableList.py. -def _moduleFinderHint(): - """Dummy function to let modulefinder know what tables may be - dynamically imported. Generated by MetaTools/buildTableList.py. - - >>> _moduleFinderHint() - """ - from . import B_A_S_E_ - from . import C_B_D_T_ - from . import C_B_L_C_ - from . import C_F_F_ - from . import C_F_F__2 - from . import C_O_L_R_ - from . import C_P_A_L_ - from . import D_S_I_G_ - from . import D__e_b_g - from . import E_B_D_T_ - from . import E_B_L_C_ - from . import F_F_T_M_ - from . import F__e_a_t - from . import G_D_E_F_ - from . import G_M_A_P_ - from . import G_P_K_G_ - from . import G_P_O_S_ - from . import G_S_U_B_ - from . import G__l_a_t - from . import G__l_o_c - from . import H_V_A_R_ - from . import J_S_T_F_ - from . import L_T_S_H_ - from . import M_A_T_H_ - from . import M_E_T_A_ - from . import M_V_A_R_ - from . import O_S_2f_2 - from . import S_I_N_G_ - from . import S_T_A_T_ - from . import S_V_G_ - from . import S__i_l_f - from . import S__i_l_l - from . import T_S_I_B_ - from . import T_S_I_C_ - from . import T_S_I_D_ - from . import T_S_I_J_ - from . import T_S_I_P_ - from . import T_S_I_S_ - from . import T_S_I_V_ - from . import T_S_I__0 - from . import T_S_I__1 - from . import T_S_I__2 - from . import T_S_I__3 - from . import T_S_I__5 - from . import T_T_F_A_ - from . import V_D_M_X_ - from . import V_O_R_G_ - from . import V_V_A_R_ - from . import _a_n_k_r - from . import _a_v_a_r - from . import _b_s_l_n - from . import _c_i_d_g - from . import _c_m_a_p - from . import _c_v_a_r - from . import _c_v_t - from . import _f_e_a_t - from . import _f_p_g_m - from . import _f_v_a_r - from . import _g_a_s_p - from . import _g_c_i_d - from . import _g_l_y_f - from . import _g_v_a_r - from . import _h_d_m_x - from . import _h_e_a_d - from . import _h_h_e_a - from . import _h_m_t_x - from . import _k_e_r_n - from . import _l_c_a_r - from . import _l_o_c_a - from . import _l_t_a_g - from . import _m_a_x_p - from . import _m_e_t_a - from . import _m_o_r_t - from . import _m_o_r_x - from . import _n_a_m_e - from . import _o_p_b_d - from . import _p_o_s_t - from . import _p_r_e_p - from . import _p_r_o_p - from . import _s_b_i_x - from . import _t_r_a_k - from . import _v_h_e_a - from . 
import _v_m_t_x - - -if __name__ == "__main__": - import doctest, sys - - sys.exit(doctest.testmod().failed) diff --git a/spaces/cncn102/bingo1/src/components/theme-toggle.tsx b/spaces/cncn102/bingo1/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/codedog-ai/codedog-demo/codedog_demo/callbacks.py b/spaces/codedog-ai/codedog-demo/codedog_demo/callbacks.py deleted file mode 100644 index 552aca72c2766c79b2beb4dfbfc9c2760527cbde..0000000000000000000000000000000000000000 --- a/spaces/codedog-ai/codedog-demo/codedog_demo/callbacks.py +++ /dev/null @@ -1,61 +0,0 @@ -import time -import traceback -from functools import lru_cache -from os import environ as env -from os import listdir -from typing import List - -import requests - -from codedog_demo.github_utils import parse_github_pr_url - -codedog_api = env.get("CODEDOG_API_URL", "") -github_token = env.get("GITHUB_TOKEN", "") - -sample_names = [] -sample_contents = [] -for file in listdir("samples"): - sample_names.append(file.replace("@", "/")) - with open("samples/" + file, "r") as f: - sample_contents.append(f.read()) - - -def request_pr_review(url: str): - try: - repo, pr_number = parse_github_pr_url(url) - if not repo or not pr_number: - return "Invalid URL. Accept format is: https://www.github.com/{owner}/{repository}/pull/{pr_number}", "" - - result = _request_pr_review(repo, pr_number, ttl_hash=get_ttl_hash()) - - except Exception: - traceback.print_exc() - return "Something went wrong. Please try again later.", "" - return result, result - - -@lru_cache(maxsize=100) -def _request_pr_review(repo: str, pr_number: int, ttl_hash=None): - response = requests.post( - codedog_api, json={"repository": repo, "pull_request_number": pr_number, "token": github_token} - ) - result = response.text - if len(result) < 100: - if result == "stream timeout": - raise ValueError("Timeout") - print(f"Error result: {result}") - raise ValueError() - return result - - -def get_ttl_hash(seconds=120): - """Return the same value withing `seconds` time period""" - return round(time.time() / seconds) - - -def get_sample_choices() -> List[str]: - return sample_names - - -def show_sample(idx: int) -> str: - return sample_contents[idx] diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aac_ac3_parser.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aac_ac3_parser.c deleted file mode 100644 index 9ab979632dc98637c19eb7302203bcc85ed5ff1a..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aac_ac3_parser.c +++ /dev/null @@ -1,168 +0,0 @@ -/* - * Common AAC and AC-3 parser - * Copyright (c) 2003 Fabrice Bellard - * Copyright (c) 2003 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config_components.h" - -#include "libavutil/channel_layout.h" -#include "libavutil/common.h" -#include "parser.h" -#include "aac_ac3_parser.h" -#include "ac3_parser_internal.h" -#include "adts_header.h" - -int ff_aac_ac3_parse(AVCodecParserContext *s1, - AVCodecContext *avctx, - const uint8_t **poutbuf, int *poutbuf_size, - const uint8_t *buf, int buf_size) -{ - AACAC3ParseContext *s = s1->priv_data; - ParseContext *pc = &s->pc; - int len, i; - int new_frame_start; - int got_frame = 0; - - if (s1->flags & PARSER_FLAG_COMPLETE_FRAMES) { - i = buf_size; - got_frame = 1; - } else { -get_next: - i=END_NOT_FOUND; - if(s->remaining_size <= buf_size){ - if(s->remaining_size && !s->need_next_header){ - i= s->remaining_size; - s->remaining_size = 0; - }else{ //we need a header first - len=0; - for(i=s->remaining_size; istate = (s->state<<8) + buf[i]; - if((len=s->sync(s->state, &s->need_next_header, &new_frame_start))) - break; - } - if(len<=0){ - i=END_NOT_FOUND; - }else{ - got_frame = 1; - s->state=0; - i-= s->header_size -1; - s->remaining_size = len; - if(!new_frame_start || pc->index+i<=0){ - s->remaining_size += i; - goto get_next; - } - else if (i < 0) { - s->remaining_size += i; - } - } - } - } - - if(ff_combine_frame(pc, i, &buf, &buf_size)<0){ - s->remaining_size -= FFMIN(s->remaining_size, buf_size); - *poutbuf = NULL; - *poutbuf_size = 0; - return buf_size; - } - } - - *poutbuf = buf; - *poutbuf_size = buf_size; - - if (got_frame) { - int bit_rate; - - /* Due to backwards compatible HE-AAC the sample rate, channel count, - and total number of samples found in an AAC ADTS header are not - reliable. Bit rate is still accurate because the total frame - duration in seconds is still correct (as is the number of bits in - the frame). */ - if (avctx->codec_id != AV_CODEC_ID_AAC) { - AC3HeaderInfo hdr, *phrd = &hdr; - int offset = ff_ac3_find_syncword(buf, buf_size); - - if (offset < 0) - return i; - - buf += offset; - buf_size -= offset; - while (buf_size > 0) { - int ret = avpriv_ac3_parse_header(&phrd, buf, buf_size); - - if (ret < 0 || hdr.frame_size > buf_size) - return i; - - if (buf_size > hdr.frame_size) { - buf += hdr.frame_size; - buf_size -= hdr.frame_size; - continue; - } - /* Check for false positives since the syncword is not enough. - See section 6.1.2 of A/52. 
*/ - if (av_crc(s->crc_ctx, 0, buf + 2, hdr.frame_size - 2)) - return i; - break; - } - - avctx->sample_rate = hdr.sample_rate; - - if (hdr.bitstream_id > 10) - avctx->codec_id = AV_CODEC_ID_EAC3; - - if (!CONFIG_EAC3_DECODER || avctx->codec_id != AV_CODEC_ID_EAC3) { - av_channel_layout_uninit(&avctx->ch_layout); - if (hdr.channel_layout) { - av_channel_layout_from_mask(&avctx->ch_layout, hdr.channel_layout); - } else { - avctx->ch_layout.order = AV_CHANNEL_ORDER_UNSPEC; - avctx->ch_layout.nb_channels = hdr.channels; - } -#if FF_API_OLD_CHANNEL_LAYOUT -FF_DISABLE_DEPRECATION_WARNINGS - avctx->channels = avctx->ch_layout.nb_channels; - avctx->channel_layout = hdr.channel_layout; -FF_ENABLE_DEPRECATION_WARNINGS -#endif - } - s1->duration = hdr.num_blocks * 256; - avctx->audio_service_type = hdr.bitstream_mode; - if (hdr.bitstream_mode == 0x7 && hdr.channels > 1) - avctx->audio_service_type = AV_AUDIO_SERVICE_TYPE_KARAOKE; - bit_rate = hdr.bit_rate; - } else { - AACADTSHeaderInfo hdr, *phrd = &hdr; - int ret = avpriv_adts_header_parse(&phrd, buf, buf_size); - - if (ret < 0) - return i; - - bit_rate = hdr.bit_rate; - } - - /* Calculate the average bit rate */ - s->frame_number++; - if (!CONFIG_EAC3_DECODER || avctx->codec_id != AV_CODEC_ID_EAC3) { - avctx->bit_rate += - (bit_rate - avctx->bit_rate) / s->frame_number; - } - } - - return i; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc.h deleted file mode 100644 index b030c652aec88ac94a7f1d17186cb685a603ce36..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc.h +++ /dev/null @@ -1,162 +0,0 @@ -/* - * AAC encoder - * Copyright (C) 2008 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_AACENC_H -#define AVCODEC_AACENC_H - -#include "libavutil/channel_layout.h" -#include "libavutil/float_dsp.h" -#include "libavutil/mem_internal.h" - -#include "avcodec.h" -#include "put_bits.h" - -#include "aac.h" -#include "audio_frame_queue.h" -#include "psymodel.h" - -#include "lpc.h" - -typedef enum AACCoder { - AAC_CODER_ANMR = 0, - AAC_CODER_TWOLOOP, - AAC_CODER_FAST, - - AAC_CODER_NB, -}AACCoder; - -typedef struct AACEncOptions { - int coder; - int pns; - int tns; - int ltp; - int pce; - int pred; - int mid_side; - int intensity_stereo; -} AACEncOptions; - -struct AACEncContext; - -typedef struct AACCoefficientsEncoder { - void (*search_for_quantizers)(AVCodecContext *avctx, struct AACEncContext *s, - SingleChannelElement *sce, const float lambda); - void (*encode_window_bands_info)(struct AACEncContext *s, SingleChannelElement *sce, - int win, int group_len, const float lambda); - void (*quantize_and_encode_band)(struct AACEncContext *s, PutBitContext *pb, const float *in, float *out, int size, - int scale_idx, int cb, const float lambda, int rtz); - void (*encode_tns_info)(struct AACEncContext *s, SingleChannelElement *sce); - void (*encode_ltp_info)(struct AACEncContext *s, SingleChannelElement *sce, int common_window); - void (*encode_main_pred)(struct AACEncContext *s, SingleChannelElement *sce); - void (*adjust_common_pred)(struct AACEncContext *s, ChannelElement *cpe); - void (*adjust_common_ltp)(struct AACEncContext *s, ChannelElement *cpe); - void (*apply_main_pred)(struct AACEncContext *s, SingleChannelElement *sce); - void (*apply_tns_filt)(struct AACEncContext *s, SingleChannelElement *sce); - void (*update_ltp)(struct AACEncContext *s, SingleChannelElement *sce); - void (*ltp_insert_new_frame)(struct AACEncContext *s); - void (*set_special_band_scalefactors)(struct AACEncContext *s, SingleChannelElement *sce); - void (*search_for_pns)(struct AACEncContext *s, AVCodecContext *avctx, SingleChannelElement *sce); - void (*mark_pns)(struct AACEncContext *s, AVCodecContext *avctx, SingleChannelElement *sce); - void (*search_for_tns)(struct AACEncContext *s, SingleChannelElement *sce); - void (*search_for_ltp)(struct AACEncContext *s, SingleChannelElement *sce, int common_window); - void (*search_for_ms)(struct AACEncContext *s, ChannelElement *cpe); - void (*search_for_is)(struct AACEncContext *s, AVCodecContext *avctx, ChannelElement *cpe); - void (*search_for_pred)(struct AACEncContext *s, SingleChannelElement *sce); -} AACCoefficientsEncoder; - -extern const AACCoefficientsEncoder ff_aac_coders[]; - -typedef struct AACQuantizeBandCostCacheEntry { - float rd; - float energy; - int bits; - char cb; - char rtz; - uint16_t generation; -} AACQuantizeBandCostCacheEntry; - -typedef struct AACPCEInfo { - AVChannelLayout layout; - int num_ele[4]; ///< front, side, back, lfe - int pairing[3][8]; ///< front, side, back - int index[4][8]; ///< front, side, back, lfe - uint8_t config_map[16]; ///< configs the encoder's channel specific settings - uint8_t reorder_map[16]; ///< maps channels from lavc to aac order -} AACPCEInfo; - -/** - * AAC encoder context - */ -typedef struct AACEncContext { - AVClass *av_class; - AACEncOptions options; ///< encoding options - PutBitContext pb; - AVTXContext *mdct1024; ///< long (1024 samples) frame transform 
context - av_tx_fn mdct1024_fn; - AVTXContext *mdct128; ///< short (128 samples) frame transform context - av_tx_fn mdct128_fn; - AVFloatDSPContext *fdsp; - AACPCEInfo pce; ///< PCE data, if needed - float *planar_samples[16]; ///< saved preprocessed input - - int profile; ///< copied from avctx - int needs_pce; ///< flag for non-standard layout - LPCContext lpc; ///< used by TNS - int samplerate_index; ///< MPEG-4 samplerate index - int channels; ///< channel count - const uint8_t *reorder_map; ///< lavc to aac reorder map - const uint8_t *chan_map; ///< channel configuration map - - ChannelElement *cpe; ///< channel elements - FFPsyContext psy; - struct FFPsyPreprocessContext* psypp; - const AACCoefficientsEncoder *coder; - int cur_channel; ///< current channel for coder context - int random_state; - float lambda; - int last_frame_pb_count; ///< number of bits for the previous frame - float lambda_sum; ///< sum(lambda), for Qvg reporting - int lambda_count; ///< count(lambda), for Qvg reporting - enum RawDataBlockType cur_type; ///< channel group type cur_channel belongs to - - AudioFrameQueue afq; - DECLARE_ALIGNED(16, int, qcoefs)[96]; ///< quantized coefficients - DECLARE_ALIGNED(32, float, scoefs)[1024]; ///< scaled coefficients - - uint16_t quantize_band_cost_cache_generation; - AACQuantizeBandCostCacheEntry quantize_band_cost_cache[256][128]; ///< memoization area for quantize_band_cost - - void (*abs_pow34)(float *out, const float *in, const int size); - void (*quant_bands)(int *out, const float *in, const float *scaled, - int size, int is_signed, int maxval, const float Q34, - const float rounding); - - struct { - float *samples; - } buffer; -} AACEncContext; - -void ff_aac_dsp_init_x86(AACEncContext *s); -void ff_aac_coder_init_mips(AACEncContext *c); -void ff_quantize_band_cost_cache_init(struct AACEncContext *s); - - -#endif /* AVCODEC_AACENC_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/h264chroma_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/h264chroma_init_arm.c deleted file mode 100644 index 5c7d5231865580968b130e3413f17eac9ba15db3..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/h264chroma_init_arm.c +++ /dev/null @@ -1,57 +0,0 @@ -/* - * ARM NEON optimised H.264 chroma functions - * Copyright (c) 2008 Mans Rullgard - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "libavutil/attributes.h" -#include "libavutil/cpu.h" -#include "libavutil/arm/cpu.h" -#include "libavcodec/h264chroma.h" - -void ff_put_h264_chroma_mc8_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); -void ff_put_h264_chroma_mc4_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); -void ff_put_h264_chroma_mc2_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); - -void ff_avg_h264_chroma_mc8_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); -void ff_avg_h264_chroma_mc4_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); -void ff_avg_h264_chroma_mc2_neon(uint8_t *dst, const uint8_t *src, ptrdiff_t stride, - int h, int x, int y); - -av_cold void ff_h264chroma_init_arm(H264ChromaContext *c, int bit_depth) -{ - const int high_bit_depth = bit_depth > 8; - int cpu_flags = av_get_cpu_flags(); - - if (have_neon(cpu_flags) && !high_bit_depth) { - c->put_h264_chroma_pixels_tab[0] = ff_put_h264_chroma_mc8_neon; - c->put_h264_chroma_pixels_tab[1] = ff_put_h264_chroma_mc4_neon; - c->put_h264_chroma_pixels_tab[2] = ff_put_h264_chroma_mc2_neon; - - c->avg_h264_chroma_pixels_tab[0] = ff_avg_h264_chroma_mc8_neon; - c->avg_h264_chroma_pixels_tab[1] = ff_avg_h264_chroma_mc4_neon; - c->avg_h264_chroma_pixels_tab[2] = ff_avg_h264_chroma_mc2_neon; - } -} diff --git a/spaces/conciomith/RetinaFace_FaceDetector_Extractor/RetinaFace.py b/spaces/conciomith/RetinaFace_FaceDetector_Extractor/RetinaFace.py deleted file mode 100644 index b3d0719a8cce330350d97fae7f5b5978bb6a64a5..0000000000000000000000000000000000000000 --- a/spaces/conciomith/RetinaFace_FaceDetector_Extractor/RetinaFace.py +++ /dev/null @@ -1,214 +0,0 @@ -import os -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' - -#--------------------------- - -import numpy as np -import tensorflow as tf -import cv2 - -import retinaface_model -import preprocess -import postprocess - -#--------------------------- - -import tensorflow as tf -tf_version = int(tf.__version__.split(".")[0]) - -if tf_version == 2: - import logging - tf.get_logger().setLevel(logging.ERROR) - -#--------------------------- - -def build_model(): - - global model #singleton design pattern - - if not "model" in globals(): - - model = tf.function( - retinaface_model.build_model(), - input_signature=(tf.TensorSpec(shape=[None, None, None, 3], dtype=np.float32),) - ) - - return model - -def get_image(img_path): - if type(img_path) == str: # Load from file path - if not os.path.isfile(img_path): - raise ValueError("Input image file path (", img_path, ") does not exist.") - img = cv2.imread(img_path) - - elif isinstance(img_path, np.ndarray): # Use given NumPy array - img = img_path.copy() - - else: - raise ValueError("Invalid image input. 
Only file paths or a NumPy array accepted.") - - # Validate image shape - if len(img.shape) != 3 or np.prod(img.shape) == 0: - raise ValueError("Input image needs to have 3 channels at must not be empty.") - - return img - -def detect_faces(img_path, threshold=0.9, model = None, allow_upscaling = True): - """ - TODO: add function doc here - """ - - img = get_image(img_path) - - #--------------------------- - - if model is None: - model = build_model() - - #--------------------------- - - nms_threshold = 0.4; decay4=0.5 - - _feat_stride_fpn = [32, 16, 8] - - _anchors_fpn = { - 'stride32': np.array([[-248., -248., 263., 263.], [-120., -120., 135., 135.]], dtype=np.float32), - 'stride16': np.array([[-56., -56., 71., 71.], [-24., -24., 39., 39.]], dtype=np.float32), - 'stride8': np.array([[-8., -8., 23., 23.], [ 0., 0., 15., 15.]], dtype=np.float32) - } - - _num_anchors = {'stride32': 2, 'stride16': 2, 'stride8': 2} - - #--------------------------- - - proposals_list = [] - scores_list = [] - landmarks_list = [] - im_tensor, im_info, im_scale = preprocess.preprocess_image(img, allow_upscaling) - net_out = model(im_tensor) - net_out = [elt.numpy() for elt in net_out] - sym_idx = 0 - - for _idx, s in enumerate(_feat_stride_fpn): - _key = 'stride%s'%s - scores = net_out[sym_idx] - scores = scores[:, :, :, _num_anchors['stride%s'%s]:] - - bbox_deltas = net_out[sym_idx + 1] - height, width = bbox_deltas.shape[1], bbox_deltas.shape[2] - - A = _num_anchors['stride%s'%s] - K = height * width - anchors_fpn = _anchors_fpn['stride%s'%s] - anchors = postprocess.anchors_plane(height, width, s, anchors_fpn) - anchors = anchors.reshape((K * A, 4)) - scores = scores.reshape((-1, 1)) - - bbox_stds = [1.0, 1.0, 1.0, 1.0] - bbox_deltas = bbox_deltas - bbox_pred_len = bbox_deltas.shape[3]//A - bbox_deltas = bbox_deltas.reshape((-1, bbox_pred_len)) - bbox_deltas[:, 0::4] = bbox_deltas[:,0::4] * bbox_stds[0] - bbox_deltas[:, 1::4] = bbox_deltas[:,1::4] * bbox_stds[1] - bbox_deltas[:, 2::4] = bbox_deltas[:,2::4] * bbox_stds[2] - bbox_deltas[:, 3::4] = bbox_deltas[:,3::4] * bbox_stds[3] - proposals = postprocess.bbox_pred(anchors, bbox_deltas) - - proposals = postprocess.clip_boxes(proposals, im_info[:2]) - - if s==4 and decay4<1.0: - scores *= decay4 - - scores_ravel = scores.ravel() - order = np.where(scores_ravel>=threshold)[0] - proposals = proposals[order, :] - scores = scores[order] - - proposals[:, 0:4] /= im_scale - proposals_list.append(proposals) - scores_list.append(scores) - - landmark_deltas = net_out[sym_idx + 2] - landmark_pred_len = landmark_deltas.shape[3]//A - landmark_deltas = landmark_deltas.reshape((-1, 5, landmark_pred_len//5)) - landmarks = postprocess.landmark_pred(anchors, landmark_deltas) - landmarks = landmarks[order, :] - - landmarks[:, :, 0:2] /= im_scale - landmarks_list.append(landmarks) - sym_idx += 3 - - proposals = np.vstack(proposals_list) - if proposals.shape[0]==0: - landmarks = np.zeros( (0,5,2) ) - return np.zeros( (0,5) ), landmarks - scores = np.vstack(scores_list) - scores_ravel = scores.ravel() - order = scores_ravel.argsort()[::-1] - - proposals = proposals[order, :] - scores = scores[order] - landmarks = np.vstack(landmarks_list) - landmarks = landmarks[order].astype(np.float32, copy=False) - - pre_det = np.hstack((proposals[:,0:4], scores)).astype(np.float32, copy=False) - - #nms = cpu_nms_wrapper(nms_threshold) - #keep = nms(pre_det) - keep = postprocess.cpu_nms(pre_det, nms_threshold) - - det = np.hstack( (pre_det, proposals[:,4:]) ) - det = det[keep, :] - landmarks = 
landmarks[keep] - - resp = {} - for idx, face in enumerate(det): - - label = 'face_'+str(idx+1) - resp[label] = {} - resp[label]["score"] = face[4] - - resp[label]["facial_area"] = list(face[0:4].astype(int)) - - resp[label]["landmarks"] = {} - resp[label]["landmarks"]["right_eye"] = list(landmarks[idx][0]) - resp[label]["landmarks"]["left_eye"] = list(landmarks[idx][1]) - resp[label]["landmarks"]["nose"] = list(landmarks[idx][2]) - resp[label]["landmarks"]["mouth_right"] = list(landmarks[idx][3]) - resp[label]["landmarks"]["mouth_left"] = list(landmarks[idx][4]) - - return resp - -def extract_faces(img_path, threshold=0.9, model = None, align = True, allow_upscaling = True): - - resp = [] - - #--------------------------- - - img = get_image(img_path) - - #--------------------------- - - obj = detect_faces(img_path = img, threshold = threshold, model = model, allow_upscaling = allow_upscaling) - - if type(obj) == dict: - for key in obj: - identity = obj[key] - - facial_area = identity["facial_area"] - facial_img = img[facial_area[1]: facial_area[3], facial_area[0]: facial_area[2]] - - if align == True: - landmarks = identity["landmarks"] - left_eye = landmarks["left_eye"] - right_eye = landmarks["right_eye"] - nose = landmarks["nose"] - mouth_right = landmarks["mouth_right"] - mouth_left = landmarks["mouth_left"] - - facial_img = postprocess.alignment_procedure(facial_img, right_eye, left_eye, nose) - - resp.append(facial_img[:, :, ::-1]) - #elif type(obj) == tuple: - - return resp diff --git a/spaces/congsaPfin/Manga-OCR/logs/FIFA 18 V10 APK - Play with Your Favorite Teams and Players in the Latest Version 2023.md b/spaces/congsaPfin/Manga-OCR/logs/FIFA 18 V10 APK - Play with Your Favorite Teams and Players in the Latest Version 2023.md deleted file mode 100644 index c286cddd0c5dd27d235c288ae2e83f5d53ca7b49..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/FIFA 18 V10 APK - Play with Your Favorite Teams and Players in the Latest Version 2023.md +++ /dev/null @@ -1,104 +0,0 @@ -
    -

    FIFA 18 APK + OBB Download: How to Enjoy the World's Game on Your Android Device

    -

    If you are a fan of football, you probably have heard of FIFA, the most popular and realistic football simulation game series in the world. And if you own an Android device, you might be wondering how you can play the latest installment of this series, FIFA 18, on your mobile phone or tablet.

    -

    fifa 18 apk + obb download


DOWNLOAD: https://urlca.com/2uOaxP



    -

    Well, wonder no more, because in this article, we will show you how you can download and install FIFA 18 APK + OBB files on your Android device, and enjoy the amazing features and gameplay of this game wherever you go.

    -

    FIFA 18 is a football simulation game developed by EA Sports and released in September 2017 for various platforms, including Windows, PlayStation, Xbox, Nintendo Switch, and Android. It is the 25th edition of the FIFA series, and it features Cristiano Ronaldo as the cover star.

    -

    FIFA 18 has received positive reviews from critics and players alike, who praised its improved graphics, animations, gameplay, modes, and content. It is one of the best-selling games of all time, with over 24 million copies sold worldwide by the end of 2018.

    -

    But what makes FIFA 18 so special and fun to play? Let's take a look at some of its features and gameplay.

    -

    FIFA 18 Features and Gameplay

    -

    FIFA 18 is not just a simple update of its predecessor, FIFA 17. It introduces several new features and improvements that make it stand out from other football games. Here are some of them:

    -

    Real Player Motion Technology

    -

    This is a new animation system that uses pose trajectory matching on every frame to deliver the most responsive and fluid gameplay ever. It captures the real movements and motions of top players like Ronaldo, Messi, Neymar, and more, making them look and feel like their real-life counterparts.

    -

    With Real Player Motion Technology, you can experience realistic player acceleration, deceleration, turns, sprints, dribbles, shots, passes, tackles, and more. You can also see how players react to different situations on the pitch, such as fatigue, pressure, collisions, injuries, etc.

    -

    Player Personality

    -

    This is another feature that adds more realism and authenticity to the game. It reflects how players behave and move on the pitch according to their unique characteristics and styles. For example, Ronaldo will run and dribble with his signature sprint and chop, Messi will weave through defenders with his agile and nimble movements, Neymar will show off his flair and skills with his tricks and flicks, etc.

    -


    -

    Player Personality also affects how players interact with each other on the pitch, creating more realistic team chemistry and dynamics. For example, players will celebrate together after scoring a goal, console each other after missing a chance, argue with the referee or the opponents, etc.

    -

    Enhanced Dribbling and Crossing

    -

    FIFA 18 gives you more options and control on the ball, allowing you to create more chances and score more goals. You can use different types of dribbles, such as close control, speed dribble, skill dribble, etc., to beat defenders and create space. You can also use different types of crosses, such as early cross, driven cross, lofted cross, etc., to deliver accurate and dangerous balls into the box.

    -

    With Enhanced Dribbling and Crossing, you can unleash your creativity and style on the pitch, and enjoy the thrill of scoring spectacular goals.

    -

    The Journey: Hunter Returns

    -

    This is the second season of the story mode that debuted in FIFA 17. It follows the career of Alex Hunter, a young and talented footballer who dreams of becoming a star. You can control his actions and decisions on and off the pitch, affecting his relationships, reputation, and performance.

    -

    In FIFA 18, you can experience new challenges and opportunities as Alex Hunter moves to different clubs and leagues around the world. You can also customize his appearance, clothing, hairstyle, tattoos, etc., to suit your preferences. You can also meet and interact with famous players and managers, such as Ronaldo, Griezmann, Mourinho, etc.

    -

    The Journey: Hunter Returns is a captivating and immersive mode that lets you live the life of a footballer.

    -

    Other Modes and Content

    -

    FIFA 18 offers a variety of modes and content to suit your preferences. You can play online or offline, solo or with friends, casual or competitive. Here are some of the modes and content you can enjoy in FIFA 18:

- Ultimate Team: This is the most popular mode in FIFA 18. It allows you to build your own dream team from scratch using players from different clubs and leagues. You can earn coins by playing matches or trading players on the market. You can also use FIFA Points to buy packs that contain random players or items. You can compete in various tournaments and seasons online or offline, and earn rewards such as coins, packs, players, etc.
- Career Mode: This is the mode where you can manage your own club or play as a single player. You can choose from hundreds of clubs from different leagues around the world. You can scout for new players, negotiate contracts, set tactics, train your squad, etc. You can also play as a single player and improve your skills and attributes by completing objectives and tasks. You can also request for transfers or loans to other clubs.
- Kick Off: This is the mode where you can play a quick match against the AI or another player. You can choose from any club or national team in the game. You can also customize the match settings such as difficulty level, half length, weather, stadium, etc. You can also play in different modes such as Classic, Women's, World Cup, etc.
- Online Seasons: This is the mode where you can play online matches against other players of similar skill level. You can choose from any club or national team in the game. You can play in 10 divisions, each with 10 matches. You can earn points by winning or drawing matches, and move up or down the divisions based on your performance. You can also earn coins and trophies by completing seasons.
- Online Friendlies: This is the mode where you can play online matches against your friends. You can invite your friends to join your session, and choose from any club or national team in the game. You can also customize the match settings such as difficulty level, half length, weather, stadium, etc. You can also track your stats and results against your friends.
- Skill Games: This is the mode where you can practice and improve your skills in various aspects of the game. You can choose from different categories such as shooting, passing, dribbling, defending, etc. You can also play in different levels of difficulty from beginner to advanced. You can earn coins and badges by completing skill games.
- Customize: This is the mode where you can customize various aspects of the game to suit your preferences. You can edit players, teams, leagues, stadiums, balls, kits, etc. You can also create your own custom tournaments and leagues. You can also download and apply updates and patches for the game.

    FIFA 18 System Requirements and Compatibility

    -

    If you want to play FIFA 18 on your Android device, you need to make sure that your device meets the minimum and recommended specifications for the game. Here are the system requirements and compatibility for FIFA 18:

    -

    Minimum Specifications

- Android version: 4.4 KitKat or higher
- CPU: Quad-core 1.4 GHz or higher
- RAM: 1 GB or higher
- Storage: 2 GB or higher
- Internet connection: Required for online features

    Recommended Specifications

- Android version: 6.0 Marshmallow or higher
- CPU: Octa-core 2.0 GHz or higher
- RAM: 2 GB or higher
- Storage: 4 GB or higher
- Internet connection: Required for online features

    Supported Devices

    -

    FIFA 18 is compatible with most Android devices that meet the minimum specifications. However, some devices may run the game better than others, depending on their hardware and software configurations. Here are some of the supported devices for FIFA 18:

| Device | Model |
| --- | --- |
| Samsung | Galaxy S7, S8, S9, Note 8, Note 9, A5, A7, J5, J7 |
| Huawei | P10, P20, Mate 10, Mate 20, Honor 8, Honor 9 |
| LG | G5, G6, G7, V20, V30 |
| Sony | Xperia XZ1, XZ2 |
| Motorola | Moto G5S Plus, Moto Z2 Play |
| Xiaomi | Mi 6X, Mi A1, Mi A2, Redmi Note 5, Redmi Note 6 |
| OnePlus | OnePlus 5, OnePlus 6 |
| Google | Pixel 2, Pixel 3 |
| Nokia | Nokia 6, Nokia 7 Plus |
| Asus | Zenfone 4, Zenfone 5 |

    This is not a complete list of supported devices, and you may be able to run FIFA 18 on other devices as well. However, if your device is not on this list, you may experience some issues or errors while playing the game.

    -

    Installation Guide and Tips

    -

    To download and install FIFA 18 APK + OBB files on your Android device, you need to follow these steps:

- Step 1: Download the FIFA 18 APK + OBB files from a trusted source. You can use the link below to download them directly from our website. The files are safe and virus-free, and they have been tested and verified by us.
- Step 2: Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on. This will allow you to install apps that are not from the Google Play Store.
- Step 3: Locate the downloaded FIFA 18 APK + OBB files on your device. You can use a file manager app to find them in your Downloads folder or any other folder where you saved them.
- Step 4: Install the FIFA 18 APK file by tapping on it and following the instructions on the screen. Do not open the app yet after the installation is complete.
- Step 5: Extract the FIFA 18 OBB file using a zip extractor app. You will get a folder named com.ea.gp.fifaworld. Copy this folder and paste it in your Android > OBB folder. If you don't have an OBB folder, you can create one yourself.
- Step 6: Launch the FIFA 18 app from your app drawer or home screen. You may need to verify your identity with an email or phone number. You may also need to download some additional data for the game to run smoothly.
- Step 7: Enjoy playing FIFA 18 on your Android device.
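If you prefer to sideload from a computer instead of doing Step 4 and Step 5 on the phone, the same thing can be done over ADB. This is only a rough sketch: it assumes USB debugging is enabled, that adb is installed on the computer, and that the downloaded APK is named fifa18.apk (your file name will differ).

```bash
# Step 4 equivalent: install the APK from the computer
adb install fifa18.apk

# Step 5 equivalent: copy the extracted OBB folder to Android/obb on the device
adb shell mkdir -p /sdcard/Android/obb
adb push com.ea.gp.fifaworld /sdcard/Android/obb/
```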

    Here are some tips to help you play FIFA 18 better on your Android device:

- Tip 1: Make sure you have enough storage space and battery life on your device before playing the game. FIFA 18 is a large and demanding game that requires at least 2 GB of storage space and a lot of battery power to run properly.
- Tip 2: Adjust the graphics settings and controls according to your device's performance and your preference. You can access these settings from the main menu of the game. You can lower the graphics quality and resolution to improve the framerate and reduce lag, and change the control scheme and sensitivity to suit your playstyle.
- Tip 3: Connect to a stable and fast internet connection when playing online modes or features. FIFA 18 requires an internet connection to access some of its modes and features, such as Ultimate Team, Online Seasons, Online Friendlies, etc. You can use Wi-Fi or mobile data, but make sure they are reliable and fast enough to avoid disconnections or delays.
- Tip 4: Update the game regularly to get the latest features and fixes. FIFA 18 receives frequent updates from EA Sports that add new content, improve gameplay, fix bugs, etc. You can update the game automatically or manually from the Google Play Store or from our website.

    FIFA 18 Review and Rating

    -

    FIFA 18 is one of the best football games ever made, and it has received rave reviews from critics and players alike. Here are some of the reviews and ratings for FIFA 18:

    -

    Critic Reviews

    -

    FIFA 18 has an average score of 84 out of 100 on Metacritic, based on the reviews of 41 critics. Here are some of the excerpts from the reviews:

    - - "FIFA 18 is simply magnificent. Streets ahead of what came before, and continuing its dominance over its rivals, EA has done a superb job. With huge improvements across the board, this is the game FIFA fans have waited five years for." - Trusted Reviews - "FIFA 18 is a far better football game than its predecessor. I was rather fond of FIFA 17, but despite the engine overhaul it was still beholden to some of FIFA’s more long-standing issues. Animations taking too long to unfold and delaying your move; wrestling to control unresponsive players; a lack of individuality from player to player." - IGN - "FIFA 18 is the best FIFA game EA has ever made. It’s that simple. I cannot believe the huge leap the series has made in one year. This is streets ahead of FIFA 17, let alone any that came before that." - The Sun

    User Reviews

    -

    FIFA 18 has an average score of 6.1 out of 10 on Metacritic, based on the reviews of 1,223 users. Here are some of the excerpts from the reviews:

    - - "FIFA 18 is a great game with amazing graphics and gameplay. The Journey mode is very interesting and fun to play. The Ultimate Team mode is addictive and rewarding. The online modes are smooth and competitive. The best FIFA game ever!" - User Review - "FIFA 18 is a good game but not a great one. The graphics and animations are impressive but the gameplay is still flawed and inconsistent. The Journey mode is boring and repetitive. The Ultimate Team mode is pay-to-win and unfair. The online modes are laggy and frustrating. The same FIFA game every year!" - User Review - "FIFA 18 is a terrible game with awful graphics and gameplay. The Journey mode is a joke and a waste of time. The Ultimate Team mode is a scam and a rip-off. The online modes are broken and unplayable. The worst FIFA game ever!" - User Review

    Pros and Cons

    -

    FIFA 18 has its strengths and weaknesses, like any other game. Here are some of the pros and cons of FIFA 18:

| Pros | Cons |
| --- | --- |
| Stunning graphics and animations | High system requirements and compatibility issues |
| Realistic and fluid gameplay | Flawed and inconsistent gameplay |
| Captivating and immersive story mode | Boring and repetitive story mode |
| Variety of modes and content | Pay-to-win and unfair modes |
| Online features and community | Online issues and problems |

    Conclusion

    -

    FIFA 18 is a football simulation game that offers a lot of features and gameplay options for fans of the sport. It has improved graphics, animations, gameplay, modes, and content compared to its previous versions. It also has a story mode that follows the career of Alex Hunter, a young footballer who wants to become a star.

    -

    However, FIFA 18 also has some drawbacks and limitations that may affect your enjoyment of the game. It has high system requirements and compatibility issues that may prevent you from running the game smoothly on your Android device. It also has flawed and inconsistent gameplay that may frustrate you at times. It also has pay-to-win and unfair modes that may discourage you from playing them.

    -

    Overall, FIFA 18 is a great game for football lovers, but it is not perfect. You may love it or hate it depending on your expectations and preferences.

    -

    If you want to try FIFA 18 on your Android device, you can download and install it using the link below. You can also check out our website for more games, apps, tips, tricks, guides, etc.

    -

    Thank you for reading this article, and we hope you have fun playing FIFA 18 on your Android device.

    -

    FAQs

    -

    Here are some of the frequently asked questions about FIFA 18:

    -

    Q1: Is FIFA 18 free to download and play on Android?

    -

    A1: Yes, FIFA 18 is free to download and play on Android devices. However, you may need to pay for some in-game items or features using real money or FIFA Points.

    -

    Q2: How much storage space do I need to install FIFA 18 on my Android device?

    -

A2: You need at least 2 GB of storage space to install FIFA 18 on your Android device. However, you may need more space to download additional data or updates for the game.

    -

    Q3: How can I update FIFA 18 to the latest version on my Android device?

    -

    A3: You can update FIFA 18 to the latest version on your Android device by using the Google Play Store or our website. You can check for updates manually or enable automatic updates from the settings of the app. You can also download and install the latest APK + OBB files from our website.

    -

    Q4: How can I play online with other players in FIFA 18 on my Android device?

    -

    A4: You can play online with other players in FIFA 18 on your Android device by using an internet connection and logging in to your EA account. You can play online modes such as Ultimate Team, Online Seasons, Online Friendlies, etc. You can also join online communities and chat with other players.

    -

    Q5: How can I fix common issues or errors in FIFA 18 on my Android device?

    -

    A5: You can fix common issues or errors in FIFA 18 on your Android device by following these steps:

- Step 1: Make sure your device meets the minimum and recommended specifications for the game.
- Step 2: Make sure you have enough storage space and battery life on your device.
- Step 3: Make sure you have a stable and fast internet connection.
- Step 4: Update the game to the latest version.
- Step 5: Clear the cache and data of the app.
- Step 6: Restart your device and launch the app again.
- Step 7: Contact EA support if the problem persists.
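For Step 5, the cache and data can also be cleared from a computer over ADB. Note that this sketch wipes the app's local data, not just the cache, so treat it as a last resort; the package name below is the same com.ea.gp.fifaworld used in the installation guide above.

```bash
# Equivalent of Settings > Apps > FIFA 18 > Storage > Clear Data
adb shell pm clear com.ea.gp.fifaworld
```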

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/FR Legends APK 3.3.1 - The Ultimate Drifting Game for Android Devices.md b/spaces/congsaPfin/Manga-OCR/logs/FR Legends APK 3.3.1 - The Ultimate Drifting Game for Android Devices.md deleted file mode 100644 index b8f316b1896d313c863af2b9ca026b7550a22cc2..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/FR Legends APK 3.3.1 - The Ultimate Drifting Game for Android Devices.md +++ /dev/null @@ -1,95 +0,0 @@ - -

    FR Legends APK 3.3.1: The Ultimate Drifting Game for Android

    -

    If you are a fan of drifting and racing games, you might want to check out FR Legends APK 3.3.1, the latest version of the popular mobile game that lets you experience the spirit of drifting like never before.

    -

    fr legends apk 3.3.1


    Download Filehttps://urlca.com/2uO5oO



    -

    What is FR Legends?

    -

    FR Legends is a mobile game developed by Feng Li, a Chinese indie developer who is passionate about drifting and motorsports. The game was released in 2018 and has since gained a huge fan base around the world, especially among drift enthusiasts and car lovers.

    -

    FR Legends stands for "Front-engine, Rear-wheel-drive Legends", which refers to the type of cars that are used for drifting, such as Toyota AE86, Nissan Silvia, Mazda RX-7, and more. The game allows you to choose from a variety of drift cars, customize them to your liking, and compete with other players online or offline in various game modes.

    -

    Features of FR Legends

    -

    FR Legends is not just a simple racing game, it is a game that simulates the art and culture of drifting in a realistic and fun way. Here are some of the features that make FR Legends stand out from other drifting games:

    -

    Customizable cars

    -

    One of the most appealing aspects of FR Legends is that you can customize your own drift car to suit your style and preference. You can change the color, body kit, wheels, tires, suspension, engine, exhaust, and more. You can also add stickers, decals, and accessories to make your car look unique and cool.

    -

    fr legends mod apk 3.3.1 unlimited money
    -fr legends apk 3.3.1 download for android
    -fr legends apk 3.3.1 latest version
    -fr legends apk 3.3.1 free download
    -fr legends apk 3.3.1 update
    -fr legends apk 3.3.1 mod menu
    -fr legends apk 3.3.1 hack
    -fr legends apk 3.3.1 obb
    -fr legends apk 3.3.1 offline
    -fr legends apk 3.3.1 no root
    -fr legends apk 3.3.1 revdl
    -fr legends apk 3.3.1 rexdl
    -fr legends apk 3.3.1 pure
    -fr legends apk 3.3.1 apkpure
    -fr legends apk 3.3.1 uptodown
    -fr legends apk 3.3.1 android 1
    -fr legends apk 3.3.1 android oyun club
    -fr legends apk 3.3.1 an1
    -fr legends apk 3.3.1 happymod
    -fr legends apk 3.3.1 moddroid
    -fr legends apk 3.3.1 mob.org
    -fr legends apk 3.3.1 blackmod
    -fr legends apk 3.3.1 platinmods
    -fr legends apk 3.3.1 andropalace
    -fr legends apk 3.3.1 ihackedit
    -fr legends apk 3.3.1 lenov.ru
    -fr legends apk 3.3.1 mediafıre
    -fr legends apk 3.3.1 mega.nz
    -fr legends apk 3.3.1 google drive
    -fr legends apk 3.3.1 zippyshare
    -fr legends apk 3.3.1 datafilehost
    -fr legends apk 3.3.1 dropbox
    -fr legends apk mediafire.com/file/9q6x7y9w8g2a4q5/FR_LEGENDS_0_2_9_MOD.apk/file)

    -

    Realistic physics

    -

    FR Legends uses a realistic physics engine that makes the drifting experience more authentic and challenging. You have to master the throttle, brake, steering, and handbrake to control your car's angle and speed while drifting. You also have to deal with tire wear, smoke, damage, and collisions.

    -

    Online multiplayer

    -

    FR Legends lets you compete with other players from around the world in online multiplayer mode. You can join or create a room with up to six players and race against each other in tandem or solo mode. You can also chat with other players and make friends or rivals.

    -

    Various game modes

    -

    FR Legends offers different game modes to suit your mood and skill level. You can play in career mode, where you have to complete various missions and challenges to earn money and reputation. You can also play in free mode, where you can practice your drifting skills without any pressure or rules. You can also play in arcade mode, where you can enjoy some fun and casual drifting games.

    -

    How to download and install FR Legends APK 3.3.1

    -

    If you want to play FR Legends on your Android device, you have to download and install the APK file from a trusted source. Here are the requirements and steps to do so:

    -

    Requirements

    -
      -
    • Your Android device must have at least Android 5.0 or higher.
    • -
    • Your Android device must have at least 100 MB of free storage space.
    • -
    • You must enable the installation of apps from unknown sources on your Android device.
    • -
    -

    Steps

    -
      -
1. Download the FR Legends APK 3.3.1 file from this link.
2. Locate the downloaded file on your Android device and tap on it to start the installation process.
3. Follow the instructions on the screen to complete the installation.
4. Launch the game and enjoy!
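Because the steps above stress downloading from a trusted source, it can be worth checking the file before you install it. This is just a sketch: the file name fr-legends-3.3.1.apk is a placeholder, and there is only something to compare the checksum against if the download site publishes one.

```bash
# Print the SHA-256 checksum of the downloaded file and compare it with the
# value published by the site you downloaded from (if any)
sha256sum fr-legends-3.3.1.apk

# Optional: sideload from a computer over ADB instead of tapping the file on the phone
adb install fr-legends-3.3.1.apk
```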
Pros and cons of FR Legends APK 3.3.1

FR Legends APK 3.3.1 is not a perfect game; it has its pros and cons. Here are some of them:

    -

    Pros

    -
      -
    • It is free to download and play.
    • -
    • It has amazing graphics and sound effects.
    • -
    • It has a large and active community of players and fans.
    • -
    • It has frequent updates and new features.
    • -
    • It is fun and addictive to play.
    • -
    -

    Cons

    -
      -
    • It may not be compatible with some devices or regions.
    • -
    • It may have some bugs or glitches.
    • -
    • It may require a stable internet connection for online mode.
    • -
    • It may be too hard or frustrating for some players.
    • -
    • It may have some ads or in-app purchases.
    • -
    -

    Conclusion

    -

    FR Legends APK 3.3.1 is a game that will make you feel the thrill and excitement of drifting in a realistic and fun way. You can customize your own drift car, compete with other players online or offline, and enjoy various game modes. If you are a fan of drifting and racing games, you should definitely give FR Legends a try. You will not regret it!

    -

    FAQs

    -

    Here are some frequently asked questions about FR Legends APK 3.3.1:

    -
      -
1. What is the difference between FR Legends APK and FR Legends MOD APK?

   The FR Legends APK is the original version of the game that you can download from the official source. The FR Legends MOD APK is a modified version of the game that may have some extra features or cheats, such as unlimited money, unlocked cars, etc. However, the FR Legends MOD APK may not be safe or legal to use, so we do not recommend it.

2. How can I get more money in FR Legends?

   You can get more money in FR Legends by completing missions and challenges in career mode, winning races in online mode, watching ads, or buying in-app purchases.

3. How can I play FR Legends on PC?

   You can play FR Legends on PC by using an Android emulator, such as BlueStacks, NoxPlayer, or LDPlayer. You have to download and install the emulator on your PC, then download and install the FR Legends APK file on the emulator, and then launch the game from the emulator.

4. How can I contact the developer of FR Legends?

   You can contact the developer of FR Legends by sending an email to frlegends@outlook.com, or by following their social media accounts on Facebook, Twitter, Instagram, or YouTube.

5. Is FR Legends safe to download and play?

   Yes, FR Legends is safe to download and play, as long as you download it from a trusted source, such as this link. You should also scan the APK file with an antivirus software before installing it on your device.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/FateGrand Order Mod APK with Menu Damage and Easy Win Features.md b/spaces/congsaPfin/Manga-OCR/logs/FateGrand Order Mod APK with Menu Damage and Easy Win Features.md deleted file mode 100644 index db06e2700695af0ca9846dbaaa0357e5ceb59667..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/FateGrand Order Mod APK with Menu Damage and Easy Win Features.md +++ /dev/null @@ -1,118 +0,0 @@ - -

    Fate/Grand Order APK Mods: How to Find and Install Them from Reddit

    -

    Fate/Grand Order is a mobile game that has taken the world by storm. Based on the popular Fate franchise by Type-Moon, the game lets you summon and command historical, mythical, and fictional heroes known as Servants to fight against enemies that threaten the human history. With stunning graphics, engaging story, and diverse gameplay, Fate/Grand Order has attracted millions of fans across the globe.

    -

    However, not everyone is satisfied with the official version of the game. Some players want to have more control over their gameplay, such as increasing their damage, skipping battles, or getting unlimited resources. That's why some players resort to using APK mods, which are modified versions of the game that alter some of its features or functions.

    -

    fate grand order apk mod reddit


    Download File ===== https://urlca.com/2uOd3K



    -

    But where can you find and install Fate/Grand Order APK mods? One of the most popular sources is Reddit, a social media platform where users can share and discuss various topics. In this article, we will show you how to find and install Fate/Grand Order APK mods from Reddit, as well as the pros and cons of using them.

    -

    What are APK Mods and Why Do Some Players Use Them?

    -

    An APK mod is a modified version of an Android application package (APK), which is the file format used to distribute and install applications on Android devices. By modifying the APK file, hackers or modders can change some aspects of the game, such as adding new features, removing restrictions, or altering the game data.

    -

    Some players use APK mods for various reasons, such as:

    -
      -
    • To gain an advantage in the game, such as increasing their damage, unlocking all Servants, or getting unlimited resources.
    • -
    • To bypass some limitations or difficulties in the game, such as skipping battles, avoiding ads, or accessing region-locked content.
    • -
    • To experience new or different aspects of the game, such as changing the graphics, adding new modes, or customizing their Servants.
    • -
    -

    However, using APK mods also comes with some risks and drawbacks, which we will discuss later in this article.

    -

    How to Find and Install Fate/Grand Order APK Mods from Reddit?

    -

    Reddit is one of the most popular sources of Fate/Grand Order APK mods. There are several subreddits (communities) dedicated to sharing and discussing Fate/Grand Order APK mods, such as r/grandordermods, r/FateGOmodding, or r/FateGOHacks. These subreddits often have links to download sites or guides on how to install the mods.

    -

    To find and install Fate/Grand Order APK mods from Reddit, you need to follow these steps:

    -

    fate grand order hack mod apk download
    -fate grand order mod menu apk android
    -fate grand order damage multiplier mod apk
    -fate grand order easy win mod apk
    -fate grand order blackmod team mod apk
    -fate grand order reddit apk mod guide
    -fate grand order reddit apk mod discussion
    -fate grand order reddit apk mod review
    -fate grand order reddit apk mod tips
    -fate grand order reddit apk mod news
    -fate grand order reddit apk mod update
    -fate grand order reddit apk mod support
    -fate grand order reddit apk mod help
    -fate grand order reddit apk mod feedback
    -fate grand order reddit apk mod request
    -fate grand order reddit apk mod link
    -fate grand order reddit apk mod source
    -fate grand order reddit apk mod safe
    -fate grand order reddit apk mod legit
    -fate grand order reddit apk mod working
    -fate grand order reddit apk mod latest
    -fate grand order reddit apk mod version
    -fate grand order reddit apk mod free
    -fate grand order reddit apk mod premium
    -fate grand order reddit apk mod vip
    -fate grand order reddit apk mod unlimited
    -fate grand order reddit apk mod features
    -fate grand order reddit apk mod benefits
    -fate grand order reddit apk mod advantages
    -fate grand order reddit apk mod disadvantages
    -fate grand order reddit apk mod pros and cons
    -fate grand order reddit apk mod comparison
    -fate grand order reddit apk mod alternatives
    -fate grand order reddit apk mod recommendations
    -fate grand order reddit apk mod suggestions
    -fate grand order reddit apk mod questions
    -fate grand order reddit apk mod answers
    -fate grand order reddit apk mod solutions
    -fate grand order reddit apk mod problems
    -fate grand order reddit apk mod issues
    -fate grand order reddit apk mod bugs
    -fate grand order reddit apk mod fixes
    -fate grand order reddit apk mod patches
    -fate grand order reddit apk mod cheats
    -fate grand order reddit apk mod hacks
    -fate grand order reddit apk mod tricks
    -fate grand order reddit apk mod secrets
    -fate grand order reddit apk mod tutorials
    -fate grand order reddit apk mod how to

    -
      -
1. Browse through the subreddits that offer Fate/Grand Order APK mods and look for a mod that suits your preferences. Make sure to read the description, comments, and reviews of the mod before downloading it.
2. Download the mod from a reliable and safe source. Avoid clicking on suspicious links or ads that may contain malware or viruses. You may need to use a VPN or proxy service if the download site is blocked in your region.
3. Backup your original Fate/Grand Order APK file and data before installing the mod. You can use a file manager app or a backup app to do this. This way, you can restore your original game if something goes wrong with the mod.
4. Uninstall your original Fate/Grand Order app from your device. You can do this by going to Settings > Apps > Fate/Grand Order > Uninstall.
5. Install the modded Fate/Grand Order APK file on your device. You may need to enable Unknown Sources in your device settings to allow installation from third-party sources.
6. Launch the modded Fate/Grand Order app and enjoy your modified gameplay.
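One way to do the backup in Step 3 from a computer is to pull the currently installed APK over ADB before uninstalling anything. This is a sketch only: the package id com.aniplex.fategrandorder is an assumption (check what the first command prints), and the exact path on your device will differ.

```bash
# Find where the installed APK lives (prints something like package:/data/app/.../base.apk)
adb shell pm path com.aniplex.fategrandorder

# Pull that file to the computer as a backup copy, substituting the path printed above
adb pull /data/app/<path-printed-above>/base.apk fgo-backup.apk
```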
    -

    Note: Some mods may require additional steps or files to work properly. Make sure to follow the instructions provided by the modder carefully.

    -

    What are the Pros and Cons of Using Fate Grand Order APK Mods?

    -

    Using Fate/Grand Order APK mods can have some benefits and drawbacks, depending on your perspective and preferences. Here are some of the pros and cons of using Fate/Grand Order APK mods:

    - - - - - - - - - - - - - - - - - -
| Pros | Cons |
| --- | --- |
| You can have more fun and freedom in your gameplay, such as using your favorite Servants, skipping boring battles, or getting more rewards. | You may lose the challenge and satisfaction of playing the game as intended, such as overcoming difficult enemies, earning your resources, or following the story. |
| You can access content that is not available in your region, such as Japanese-only Servants, events, or voice lines. | You may violate the terms of service of the game and risk getting banned or suspended from the game or losing your account data. |
| You can experience new or different features that are not in the official version of the game, such as improved graphics, custom modes, or fan-made Servants. | You may encounter bugs, errors, or compatibility issues that may affect your gameplay or damage your device. |
    -

    Ultimately, the decision to use Fate/Grand Order APK mods is up to you. You should weigh the pros and cons carefully and decide whether you are willing to take the risks or not.

    -

    Conclusion: Should You Use Fate/Grand Order APK Mods or Not?

    -

    Fate/Grand Order is a great game that offers a lot of entertainment and enjoyment for its fans. However, some players may want to modify their gameplay by using APK mods, which are modified versions of the game that change some of its features or functions.

    -

    There are many sources of Fate/Grand Order APK mods, but one of the most popular ones is Reddit, a social media platform where users can share and discuss various topics. You can find and install Fate/Grand Order APK mods from Reddit by following some simple steps, but you should also be aware of the pros and cons of using them.

    -

    Using Fate/Grand Order APK mods can have some benefits, such as having more fun and freedom in your gameplay, accessing region-locked content, or experiencing new or different features. However, it can also have some drawbacks, such as losing the challenge and satisfaction of playing the game as intended, violating the terms of service of the game and risking getting banned or suspended, or encountering bugs, errors, or compatibility issues.

    -

    Therefore, you should use Fate/Grand Order APK mods at your own discretion and responsibility. You should also respect the game developers and other players who play the game legitimately. Remember that Fate/Grand Order is a game that is meant to be enjoyed by everyone.

    -

    FAQs: Some Common Questions and Answers about Fate/Grand Order APK Mods

    -

    Here are some common questions and answers about Fate/Grand Order APK mods that you may find helpful:

    -

    Q: Are Fate/Grand Order APK mods legal?

    -

    A: The legality of Fate/Grand Order APK mods may vary depending on your country or region. Generally speaking, modifying an application without the permission of the developer is considered illegal and may infringe on their intellectual property rights. However, some countries or regions may have more lenient laws or regulations regarding this matter. You should check your local laws before using Fate/Grand Order APK mods.

    -

    Q: Are Fate/Grand Order APK mods safe?

    -

    A: The safety of Fate/Grand Order APK mods may depend on the source and quality of the mod. Some mods may be safe and harmless, while others may contain malware or viruses that may harm your device or steal your personal information. You should always download Fate/Grand Order APK mods from reliable and trustworthy sources and scan them with an antivirus software before installing them. You should also backup your original game data before using any mod.

    -

    Q: How do I update my Fate/Grand Order APK mod?

    -

    A: The update process of Fate/Grand Order APK mod may vary depending on the type and version of the mod. Some mods may update automatically or have an update option within the app. Others may require you to download and install a new version of the mod manually. You should always check the modder's website or Reddit post for any updates or instructions regarding their mod.

    -

    Q: How do I uninstall my Fate/Grand Order APK mod?

    -

    A: The uninstall process of Fate/Grand Order APK mod may depend on how you installed it in the first place. If you installed it by replacing your original game app, you can simply uninstall it by going to Settings > Apps > Fate/Grand Order > Uninstall. If you installed it by using a parallel app or a clone app, you can uninstall it by going to the app settings and choosing the uninstall option. If you want to restore your original game app, you can reinstall it from the official source or from your backup file.

    -

    Q: Can I use Fate/Grand Order APK mods with other players or online features?

    -

    A: The compatibility of Fate/Grand Order APK mods with other players or online features may depend on the nature and extent of the mod. Some mods may work fine with other players or online features, while others may cause errors, crashes, or bans. You should always be careful and respectful when using Fate/Grand Order APK mods with other players or online features, as you may ruin their experience or violate the game rules.

    -

    -

    This is the end of the article. I hope you found it helpful and informative. Thank you for reading.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Komodo Chess 14 for Free and Play Like a Grandmaster.md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Komodo Chess 14 for Free and Play Like a Grandmaster.md deleted file mode 100644 index 2eed664fba081f8e652663543fbaaf52cbb10298..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Komodo Chess 14 for Free and Play Like a Grandmaster.md +++ /dev/null @@ -1,131 +0,0 @@ -
    -

    Komodo Chess 14 Free Download: How to Get the World Champion Chess Engine

    -

    If you are looking for a powerful and versatile chess engine that can help you improve your chess skills, you might want to check out Komodo Chess 14. This is the latest version of the world champion chess engine that has won several prestigious titles and awards. In this article, we will show you how to download Komodo Chess 14 for free, how to install and use it, and how to make the most of its features and benefits.

    -

    komodo chess 14 free download


    Download File ✓✓✓ https://urlca.com/2uObRu



    -

    What is Komodo Chess 14?

    -

    Komodo Chess 14 is a chess engine developed by GM Larry Kaufman and Mark Lefler, inspired by AlphaZero, the artificial intelligence program that defeated the best chess engines in the world. Komodo Chess 14 thinks like no other chess program, using a combination of brute force and human-like intuition to find the best moves in any position. It can play both standard chess and variants, such as Fischer Random, King of the Hill, Suicide, etc. It can also switch between different modes and personalities, such as MCTS (Monte Carlo Tree Search), Armageddon, Contempt, etc. Komodo Chess 14 is not only a strong opponent, but also a great teacher and analyzer, providing a Grandmaster evaluation of any position and suggesting improvements.

    -

    Features and benefits of Komodo Chess 14

    -

    Some of the features and benefits of Komodo Chess 14 are:

    -
      -
    • It is the three-time TCEC champion, the most prestigious online computer chess event, and has also won several CCT events and several World Championships.
    • -
    • It is a significant strength improvement over the previous version, about 12 elo in MCTS mode and 10 elo in standard mode.
    • -
    • It has a new feature called "Armageddon" mode, which tells Komodo that White (or Black) must win, draws are scored as losses for that color. This improves Komodo's performance as White by about 30 elo.
    • -
    • It has more levels, personalities, and auto-skill features added since Komodo 13.
    • -
    • It can play both standard chess and variants, such as Fischer Random, King of the Hill, Suicide, etc.
    • -
    • It can switch between different modes and personalities, such as MCTS (Monte Carlo Tree Search), Armageddon, Contempt, etc.
    • -
    • It provides a Grandmaster evaluation of any position and suggests improvements.
    • -
    -

    How to download Komodo Chess 14 for free

    -

    If you want to download Komodo Chess 14 for free, you have two options:

    -
      -
1. You can visit the official website of Komodo Chess and click on the "Free" tab. There you will find a link to download Komodo 13.01 for free. This is an older version of Komodo Chess, but still very strong and useful.
2. You can visit the website of ChessBase, one of the compatible GUIs for Komodo Chess. There you will find a link to download a free trial version of Komodo Chess 14. This is a limited version of Komodo Chess 14 that expires after one month. However, you can still use it to test its features and performance.
    -

    How to install and use Komodo Chess 14


    -

    Once you have downloaded Komodo Chess 14, either for free or as a paid product, you need to install it and use it with a compatible GUI (Graphical User Interface). A GUI is a software that allows you to interact with the chess engine, such as setting up the board, playing games, analyzing positions, etc. Komodo Chess 14 does not come with its own GUI, so you need to use one of the following options:

    -

    System requirements and compatible GUIs

    -

    The system requirements for Komodo Chess 14 are:

    -
      -
    • A 64-bit operating system (Windows, Linux, or Mac OS)
    • -
    • A 64-bit processor (Intel or AMD)
    • -
    • At least 4 GB of RAM
    • -
    • At least 100 MB of free disk space
    • -
    -

    The compatible GUIs for Komodo Chess 14 are:

    -

    komodo chess 14.1 free download
    -komodo dragon chess engine free download
    -komodo 14 chess software free download
    -how to download komodo chess 14 for free
    -komodo chess 14 vs stockfish 14 free download
    -komodo chess 14 world champion edition free download
    -komodo chess 14 review and free download
    -komodo chess 14 system requirements and free download
    -komodo chess 14 features and benefits free download
    -komodo chess 14 installation guide and free download
    -komodo chess 14 mcts mode free download
    -komodo chess 14 armageddon mode free download
    -komodo chess 14 personalities and auto-skill free download
    -komodo chess 14 compatible guis free download
    -komodo chess 14 multi-core support free download
    -komodo chess 14 evaluation developed by a grandmaster free download
    -komodo chess 14 three-time tcec champion free download
    -komodo chess 14 world computer chess champion free download
    -komodo chess 14 world chess software champion free download
    -komodo chess 14 world computer blitz champion free download
    -komodo chess 14 inspired by alphazero free download
    -komodo chess 14 redeveloped from the ground up free download
    -komodo chess 14 improved king safety and time management free download
    -komodo chess 14 best settings and options free download
    -komodo chess 14 user manual and tutorials free download
    -komodo chess 14 latest updates and patches free download
    -komodo chess 14 customer reviews and testimonials free download
    -komodo chess 14 discount code and coupon free download
    -komodo chess 14 official site and support free download
    -komodo chess 14 vs other chess engines free download
    -best way to learn from komodo chess 14 free download
    -how to play against komodo chess 14 online free download
    -how to analyze your games with komodo chess 14 free download
    -how to improve your rating with komodo chess 14 free download
    -how to train your opening repertoire with komodo chess 14 free download
    -how to master the endgame with komodo chess 14 free download
    -how to use the dragon by komodo chess app free download
    -how to get the android version of komodo chess 14 free download
    -how to get the mac osx version of komodo chess 14 free download
    -how to get the linux version of komodo chess 14 free download
    -how to get the windows version of komodo chess 14 free download
    -how to get the serial number for komodo chess 14 free download
    -how to activate your license for komodo chess 14 free download
    -how to upgrade from previous versions of komodo chess for free

    -
      -
    • ChessBase: This is the most popular and professional chess software in the world. It has a huge database of games, a powerful analysis tool, and many other features. You can buy Komodo Chess 14 as a standalone product or as part of a bundle with ChessBase. You can also download a free trial version of Komodo Chess 14 from ChessBase.
    • -
    • Fritz: This is another chess software from the same company as ChessBase. It has similar features but is more user-friendly and less expensive. You can buy Komodo Chess 14 as a standalone product or as part of a bundle with Fritz. You can also download a free trial version of Komodo Chess 14 from Fritz.
    • -
    • Arena: This is a free and open-source chess software that supports many chess engines, including Komodo Chess 14. It has a simple and intuitive interface and some basic features. You can download Arena for free from its official website.
    • -
    -

    Installation instructions and tips

    -

    The installation process for Komodo Chess 14 depends on the GUI you are using. Here are some general steps and tips:

    -
      -
1. Download the ZIP file of Komodo Chess 14 from the official website or from the GUI website.
2. Extract the ZIP file to a folder on your computer.
3. Open the GUI of your choice and go to the menu where you can add or manage chess engines.
4. Select the option to add a new chess engine and browse to the folder where you extracted Komodo Chess 14.
5. Select the executable file of Komodo Chess 14 (either komodo-14.exe or komodo-14-mcts.exe) and click OK.
6. The GUI will recognize Komodo Chess 14 and add it to the list of available engines.
7. You can now select Komodo Chess 14 as your opponent or as your analyzer in the GUI.
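If you want to check the extracted engine outside a GUI first, you can talk to it directly, since Komodo is a UCI engine. The snippet below is a rough sketch for Linux or macOS; the archive and binary names (komodo-14.zip, komodo-14-linux) are assumptions, so adjust them to whatever your download actually contains (on Windows the steps above use komodo-14.exe).

```bash
# Unpack the engine and make the binary executable
unzip komodo-14.zip -d ~/engines/komodo-14
chmod +x ~/engines/komodo-14/komodo-14-linux

# A UCI engine must answer the "uci" command with "uciok"; "quit" makes it exit cleanly
printf 'uci\nquit\n' | ~/engines/komodo-14/komodo-14-linux | grep uciok
```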
    -

    Some tips to optimize the performance of Komodo Chess 14 are:

    -
      -
    • Make sure you have enough RAM and CPU power for Komodo Chess 14 to run smoothly.
    • -
    • Adjust the settings of Komodo Chess 14 according to your preferences and needs. You can change the parameters such as hash size, threads, contempt, etc. in the engine options menu of the GUI.
    • -
    • Use a large opening book and endgame tablebase for better results. You can download them from various sources online or buy them from the GUI websites.
    • -
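The hash size and thread count mentioned above are ordinary UCI options, which is what the GUI's engine options dialog sets behind the scenes. As a rough illustration (the option names and values here are assumptions; run the "uci" command to see the exact list Komodo exposes), they could be set by hand like this:

```bash
# Give the engine a 1024 MB hash table and 4 search threads, then ask if it is ready
printf 'uci\nsetoption name Hash value 1024\nsetoption name Threads value 4\nisready\nquit\n' \
  | ~/engines/komodo-14/komodo-14-linux
```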

    How to play and analyze with Komodo Chess 14

    -

    Once you have installed and configured Komodo Chess 14, you can start playing and analyzing with it in the GUI of your choice. Here are some common ways to use Komodo Chess 14:

    -
      -
    • Play a game against Komodo Chess 14: You can choose the level, time control, color, and variant of the game. You can also enable or disable hints, takebacks, and engine assistance. You can see the evaluation, best move, and principal variation of Komodo Chess 14 during the game. You can also save, load, or export the game for later review.
    • -
    • Analyze a position or a game with Komodo Chess 14: You can set up any position on the board or load a game from a file or a database. You can then activate Komodo Chess 14 as your analyzer and see its evaluation, best move, and principal variation. You can also see the depth, nodes, speed, and score of Komodo Chess 14. You can adjust the analysis parameters such as multipv, infinite mode, etc. You can also add comments, variations, and annotations to the position or the game.
    • -
    • Use Komodo Chess 14 as a training tool: You can use various features of the GUI to improve your chess skills with Komodo Chess 14. For example, you can use the blunder check feature to find your mistakes in a game and see how Komodo Chess 14 would have played instead. You can also use the tactical analysis feature to generate puzzles from your games and see how Komodo Chess 14 would have solved them. You can also use the opening trainer feature to learn and practice openings with Komodo Chess 14.
    • -
    -

    How to improve your chess skills with Komodo Chess 14

    -

    Komodo Chess 14 is not only a strong opponent, but also a great teacher and analyzer. It can help you improve your chess skills in various ways. Here are some tips on how to use Komodo Chess 14 for learning and improvement:

    -

    Learn from the Grandmaster evaluation

    -

    Komodo Chess 14 provides a Grandmaster evaluation of any position and suggests improvements. You can learn from this evaluation by understanding why Komodo Chess 14 prefers certain moves over others, what are the plans and ideas behind them, and what are the strengths and weaknesses of each side. You can also compare your moves with Komodo Chess 14's moves and see where you went wrong or right. You can also ask Komodo Chess 14 to explain its moves in natural language using the "Why?" feature of the GUI.

    -

    Explore different modes and personalities

    -

    Komodo Chess 14 can switch between different modes and personalities, such as MCTS (Monte Carlo Tree Search), Armageddon, Contempt, etc. You can explore these modes and personalities by playing against them or analyzing with them. You can see how Komodo Chess 14 changes its style and behavior depending on the mode or personality. You can also learn from the different perspectives and approaches that Komodo Chess 14 offers. For example, you can use MCTS mode to see how Komodo Chess 14 thinks like AlphaZero, or use Armageddon mode to see how Komodo Chess 14 plays aggressively when it has to win.

    -

    Challenge yourself with puzzles and games

    -

    Komodo Chess 14 can also provide you with puzzles and games that challenge your chess skills. You can use various features of the GUI to generate puzzles from your games or from a database of games. You can then try to solve them with or without Komodo Chess 14's help. You can also play games against Komodo Chess 14 at different levels, time controls, colors, and variants. You can then review your games with Komodo Chess 14's analysis and feedback. You can also use the auto-skill feature of Komodo Chess 14 to adjust its level according to your performance.

    -

    Conclusion

    -

    Komodo Chess 14 is a world champion chess engine that thinks like no other chess program. It is a powerful and versatile chess engine that can play both standard chess and variants, switch between different modes and personalities, and provide a Grandmaster evaluation of any position. It is not only a strong opponent, but also a great teacher and analyzer that can help you improve your chess skills.

    -

If you want to download Komodo Chess 14 for free, you have two options: either download an older version of Komodo Chess from its official website, or download a free trial version of Komodo Chess 14 from one of its compatible GUIs (ChessBase or Fritz). If you want to buy Komodo Chess 14 as a paid product, you can do so from one of its compatible GUIs (ChessBase or Fritz), or as part of a bundle with ChessBase or Fritz, which will give you access to many other features and benefits.

We hope this article has helped you learn more about Komodo Chess 14 and how to download it for free. If you are interested in trying out this amazing chess engine, don't hesitate to download it and start playing and analyzing with it. You will be amazed by how much you can improve your chess skills with Komodo Chess 14. Here are some FAQs that might answer some of your questions:

    FAQs

    -
      -
1. What is the difference between Komodo 14 and Komodo 14 MCTS?

   Komodo 14 is the standard version of Komodo Chess 14, which uses a brute force approach to search for the best moves. Komodo 14 MCTS is a variant of Komodo Chess 14, which uses a Monte Carlo Tree Search approach to search for the best moves. MCTS is inspired by AlphaZero, the artificial intelligence program that defeated the best chess engines in the world. MCTS is more creative and human-like than brute force, but also less reliable and consistent.

2. How can I update Komodo Chess 14?

   If you have bought Komodo Chess 14 as a paid product, you can update it for free whenever a new version is released. You can do this by visiting the official website of Komodo Chess or the GUI website where you bought it and downloading the latest version. You can then install it over the previous version or in a new folder.

3. How can I contact the developers of Komodo Chess 14?

   If you have any questions, feedback, or suggestions for the developers of Komodo Chess 14, you can contact them by visiting their official website and filling out the contact form. You can also join their forum and interact with other users and developers.

4. How can I support the development of Komodo Chess 14?

   If you want to support the development of Komodo Chess 14, you can do so by buying their products, donating to their PayPal account, or subscribing to their Patreon page. You can also spread the word about Komodo Chess 14 and share your experiences with it on social media and online platforms.

5. Where can I find more information and resources about Komodo Chess 14?

   If you want to find more information and resources about Komodo Chess 14, you can visit their official website, their forum, their YouTube channel, their Facebook page, their Twitter account, or their blog. You can also read reviews, articles, and books about Komodo Chess 14 online or offline.

      -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Save The Dog from the Angry Bees in this Awesome APK Puzzle Game.md b/spaces/congsaPfin/Manga-OCR/logs/How to Save The Dog from the Angry Bees in this Awesome APK Puzzle Game.md deleted file mode 100644 index 97fc43fd40acadab1888545c334c5f8489cfe845..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Save The Dog from the Angry Bees in this Awesome APK Puzzle Game.md +++ /dev/null @@ -1,214 +0,0 @@ -
    -

    Save The Dog Bee APK: A Fun and Challenging Puzzle Game

    -

If you are looking for a casual puzzle game that will test your brain and reflexes, you might want to try Save The Dog Bee APK. This is a game where you have to draw lines with your fingers to create walls that protect the dog from attacks by bees from the hive. You need to keep the dog protected by the painted wall for 10 seconds while the bees attack; hold on and you will win the game. Use your brain to save the doge.

    -

    save the dog bee apk


    Download Zip ——— https://urlca.com/2uO671



    -

    In this article, I will tell you more about Save The Dog Bee APK, such as its features, how to play, how to download and install it on your Android device, and some tips and tricks to help you win the game. I will also answer some frequently asked questions about the game. Let's get started!

    -

    Features of Save The Dog Bee APK

    -

    Save The Dog Bee APK is a simple but addictive puzzle game that will keep you entertained for hours. Here are some of the features that make this game fun and interesting:

    -
      -
    • A variety of levels

      -

      The game has hundreds of levels with different difficulty levels and challenges. You will never get bored as you try to save the dog from different scenarios and obstacles.

      -

      save the dog bee apk download
      -save the dog bee apk mod
      -save the dog bee apk free
      -save the dog bee apk latest version
      -save the dog bee apk android
      -save the dog bee apk offline
      -save the dog bee apk hack
      -save the dog bee apk unlimited money
      -save the dog bee apk no ads
      -save the dog bee apk full version
      -save the dog bee apk game
      -save the dog bee apk puzzle
      -save the dog bee apk funspace
      -save the dog bee apk review
      -save the dog bee apk tips
      -save the dog bee apk cheats
      -save the dog bee apk guide
      -save the dog bee apk walkthrough
      -save the dog bee apk gameplay
      -save the dog bee apk trailer
      -save the dog bee apk update
      -save the dog bee apk new levels
      -save the dog bee apk online
      -save the dog bee apk multiplayer
      -save the dog bee apk challenges
      -save the dog bee apk skins
      -save the dog bee apk characters
      -save the dog bee apk graphics
      -save the dog bee apk sound
      -save the dog bee apk music
      -save the dog bee apk rating
      -save the dog bee apk size
      -save the dog bee apk requirements
      -save the dog bee apk compatibility
      -save the dog bee apk installation
      -save the dog bee apk support
      -save the dog bee apk feedback
      -save the dog bee apk alternatives
      -save the dog bee apk similar games
      -save the dog bee apk genre
      -save the dog bee apks for pc
      -save the dog bee apks for ios
      -save the dog bee apks for windows 10
      -save the dog bee apks for macbook
      -save the dog bee apks for chromebook
      -download Save The Dog APK (Android Game) - Free Download - APKCombo[^1^]
      -tải xuống Save The Dog Bee APK cho Android[^2^]

    • -
    • Easy and funny gameplay

      -

      The game is easy to play but hard to master. You just need to swipe the screen to create a wall to protect the dog. As long as you don't let go, you can always draw the line. You can let go after producing a satisfactory pattern. Wait for the bees in the hive to attack. Hold your wall for 10 seconds, so that the dog will not be attacked by bees. You will win the game.

    • -
    • Funny dog expressions

      -

      The game has cute and funny graphics and animations. You will love the dog's expressions as he reacts to your actions and the bees' attacks. He will smile, cry, wink, or make other funny faces depending on the situation.

    • -
    • Puzzle and interesting levels

      -

      The game is not only about drawing lines. You also need to use your brain and logic to find the best way to save the dog. Sometimes you need to use other objects or tools in the environment, such as balloons, fans, magnets, or bombs. You also need to avoid traps and hazards that can harm the dog or break your wall.

    • -
    • Various skins

      -

      The game allows you to customize your dog with different skins. You can choose from different breeds, colors, or costumes. You can also save other animals besides dogs, such as chickens or sheep.

    • -
    -

    How to Play Save The Dog Bee APK

    -

    Save The Dog Bee APK is easy to play but challenging to master. Here are some basic steps on how to play the game:

    -
      -
1. Download and install Save The Dog Bee APK on your Android device.
2. Open the game and choose a level.
3. Swipe the screen to create a wall to protect the dog from the bees.
4. Wait for the bees in the hive to attack.
5. Hold your wall for 10 seconds without letting go.
6. If the dog survives without being stung by bees, you win the level.
7. If the dog gets stung by bees or your wall breaks, you lose the level.
8. Try again until you win or move on to another level.
    -

    How to Download and Install Save The Dog Bee APK on Android

    -

    If you want to download and install Save The Dog Bee APK on your Android device, you can follow these simple steps:

    -
      -
1. Go to [this link](^1^) or [this link](^2^) on your browser.
2. Tap on Download APK.
3. Wait for the download to finish.
4. Go to your file manager and locate the downloaded APK file.
5. Tap on the file and allow installation from unknown sources if prompted.
6. Wait for the installation to complete.
7. Open the game and enjoy!
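If you want to see what a downloaded APK actually contains before Step 5, the Android build-tools include a small utility for that. This is only a sketch: it assumes the build-tools (which provide aapt) are installed on a computer, and the file name save-the-dog-bee.apk is a placeholder for whatever name your download gets.

```bash
# "dump badging" prints the package name, version and app label of the APK,
# so you can check what you are about to install
aapt dump badging save-the-dog-bee.apk | head -n 3
```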
    -

    Tips and Tricks to Win Save The Dog Bee APK

    -

    Save The Dog Bee APK is a fun and challenging puzzle game that requires you to think fast and act smart. Here are some tips and tricks that can help you win the game:

    -
      -
    • Plan ahead

      -

      Before you start drawing your wall, take a look at the level and see where the bees are coming from, what obstacles are in the way, and what tools or objects you can use. Try to anticipate the bees' movements and draw your wall accordingly. You can also use the pause button to think more carefully.

    • -
    • Use different shapes

      -

      You don't have to draw a straight line to create a wall. You can also use curves, circles, triangles, or other shapes to protect the dog. Sometimes, using different shapes can help you cover more area or create more stability for your wall.

    • -
    • Be creative

      -

      You can also use your wall to interact with other elements in the level, such as balloons, fans, magnets, or bombs. You can use your wall to pop balloons, deflect fans, attract magnets, or detonate bombs. These can help you create more space or clear more bees for your dog.

    • -
    • Be careful

      -

      You also need to be careful not to harm your dog or break your wall. Avoid drawing your wall too close to the dog or too far from the bees. Also, avoid drawing your wall over traps or hazards that can damage your wall or hurt your dog.

    • -
    • Have fun

      -

      The most important tip is to have fun while playing Save The Dog Bee APK. Don't get frustrated if you lose a level or make a mistake. Just try again and enjoy the game!

    • -
    -

    Conclusion

    -

    Save The Dog Bee APK is a casual puzzle game that will test your brain and reflexes as you try to save the dog from attacks by bees in the hive. You need to draw lines with your fingers to create walls that protect the dog for 10 seconds. The game has hundreds of levels with different difficulty levels and challenges. You also need to use your logic and creativity to find the best way to save the dog. The game has cute and funny graphics and animations that will make you smile. You can also customize your dog with different skins or save other animals besides dogs.

    -

    If you want to download and install Save The Dog Bee APK on your Android device, you can follow the simple steps I mentioned above. You can also use some of the tips and tricks I shared with you to help you win the game. I hope you enjoyed this article and found it helpful. If you have any questions or feedback about Save The Dog Bee APK, feel free to leave a comment below.

    -

    Frequently Asked Questions

    -

    Here are some of the frequently asked questions about Save The Dog Bee APK:

    -
1. Is Save The Dog Bee APK safe to download and install?
Yes, Save The Dog Bee APK is safe to download and install on your Android device. It does not contain any viruses, malware, or spyware that can harm your device or compromise your privacy. However, you should always download it from a trusted source like [this link] or [this link] and scan it with an antivirus app before installing it.

2. Is Save The Dog Bee APK free to play?
Yes, Save The Dog Bee APK is free to play on your Android device. You don't need to pay any money to download, install, or play the game. However, the game may contain some ads that can be removed by purchasing an ad-free version of the game.

3. How can I get more skins for my dog?
You can get more skins for your dog by completing levels and earning coins. You can use these coins to buy different skins from the shop. You can also watch ads or share the game with your friends to get more coins.

4. How can I save other animals besides dogs?
You can save other animals besides dogs by unlocking them from the shop. You need to have enough coins to buy them. Some of the animals you can save are chickens, sheep, pigs, cows, and cats.

5. What are the benefits of playing Save The Dog Bee APK?
Save The Dog Bee APK is not only a fun and entertaining game, but also a beneficial one. Playing this game can help you improve your cognitive skills, such as memory, attention, concentration, problem-solving, and creativity. It can also help you relieve stress, relax, and have a good mood.
    -

    Outline of the Article

    -

    Here is the outline of the article I wrote:

| Heading | Subheading | Content |
| --- | --- | --- |
| H1 | Save The Dog Bee APK: A Fun and Challenging Puzzle Game | An introduction to the game and its main features. |
| H2 | Features of Save The Dog Bee APK | A list of the features that make the game fun and interesting. |
| H3 | A variety of levels | A description of the different levels and challenges in the game. |
| H3 | Easy and funny gameplay | A description of how to play the game and its mechanics. |
| H3 | Funny dog expressions | A description of the graphics and animations in the game. |
| H3 | Puzzle and interesting levels | A description of how the game requires logic and creativity to save the dog. |
| H3 | Various skins | A description of how to customize the dog with different skins or save other animals. |
| H2 | How to Play Save The Dog Bee APK | A step-by-step guide on how to play the game. |
| H2 | How to Download and Install Save The Dog Bee APK on Android | A step-by-step guide on how to download and install the game on Android devices. |
| H2 | Tips and Tricks to Win Save The Dog Bee APK | A list of tips and tricks that can help the player win the game. |
| H3 | Plan ahead | A tip on how to anticipate the bees' movements and draw the wall accordingly. |
| H3 | Use different shapes | A tip on how to use curves, circles, triangles, or other shapes to create a wall. |
| H3 | Be creative | A tip on how to use the wall to interact with other elements in the level. |
| H3 | Be careful | A tip on how to avoid harming the dog or breaking the wall. |
| H3 | Have fun | A tip on how to enjoy the game and not get frustrated. |
| H2 | Conclusion | A summary of the main points of the article and a call to action for the reader. |
| H2 | Frequently Asked Questions | A list of FAQs about the game and their answers. |

    I hope you liked my article and found it useful.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Incredibox APK iOS Make Music with a Merry Crew of Beatboxers.md b/spaces/congsaPfin/Manga-OCR/logs/Incredibox APK iOS Make Music with a Merry Crew of Beatboxers.md deleted file mode 100644 index 36937cac6e38e504a4c38261bcfb4335a077dbc2..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Incredibox APK iOS Make Music with a Merry Crew of Beatboxers.md +++ /dev/null @@ -1,163 +0,0 @@ - -

    Incredibox Apk Ios: A Fun and Interactive Music App

    -

    Do you love music and want to create your own songs with a simple and intuitive app? If yes, then you should check out Incredibox apk ios, a music video game that lets you make music with a merry crew of beatboxers. In this article, we will tell you everything you need to know about Incredibox apk ios, including how to download and install it on your device, what are its features, what are the different versions available, and what are some of the best alternatives to it. Let's get started!

    -

    What is Incredibox and what can you do with it?

    -

    Incredibox is a music app that was created in 2009 by the French company So Far So Good (SFSG). It is a fun, interactive, and educational tool that allows you to create your own music with the help of a group of seven animated beatboxers. You can choose your musical style among nine impressive atmospheres and start to lay down, record, and share your mix. You can also find the right sound combos to unlock animated choruses that will enhance your tune. Incredibox is a great way to learn about rhythm and melody, as well as to express your creativity and musical talent.

    -

    incredibox apk ios


    Download Ziphttps://urlca.com/2uO8QM



    -

    Incredibox apk ios is the mobile version of Incredibox that is compatible with iOS devices such as iPhone, iPad, and iPod touch. You can download it from the App Store for $4.99. It requires iOS 15.0 or later and a Mac with Apple M1 chip or later. It has a 4.9-star rating on the App Store based on more than 30,000 reviews. It has also won several awards and appeared in various international media outlets such as BBC, Adobe, FWA, Gizmodo, Slate, Konbini, Softonic, Kotaku, Cosmopolitan, PocketGamer, AppAdvice, AppSpy, Vice, Ultralinx, and many others.

    -

    How to download and install Incredibox apk ios on your device?

    -

    Downloading and installing Incredibox apk ios on your device is very easy. Just follow these simple steps:

    -
      -
1. Go to the App Store on your device and search for "Incredibox".
2. Tap on the app icon and then tap on "Get" or "Buy" to purchase it.
3. Enter your Apple ID password or use Touch ID or Face ID to confirm your purchase.
4. Wait for the app to download and install on your device.
5. Once the app is installed, tap on "Open" or find it on your home screen.
6. Enjoy making music with Incredibox!
    -

    What are the main features of Incredibox apk ios?

    -

    Incredibox apk ios has many features that make it a fun and interactive music app. Here are some of them:

    -
      -
• You can choose from nine different musical styles: Alpha (old school beatbox), Little Miss (R&B), Sunrise (pop), The Love (romantic), Brazil (samba), Alive (electro), Jeevan (Bollywood), Dystopia (cyberpunk), and Wekiddy (kids).
• You can drag and drop icons onto the avatars to make them sing and start to compose your own music. Each icon represents a different sound loop such as beats, effects, melodies, chorus, or voices.
• You can record your mix and share it with your friends or the world via email, social media, or the Incredibox website. You can also download your mix as an MP3 file or a video file.
• You can explore the Top 50 chart on the Incredibox website and discover the best mixes created by other users. You can also vote for your favorite mixes and leave comments.
• You can play with the app offline and enjoy it anywhere and anytime.
    -

    What are the different versions of Incredibox apk ios and how do they differ?

    -

    Incredibox apk ios has nine different versions that correspond to the nine musical styles available. Each version has its own theme, graphics, sounds, and bonuses. Here is a table that summarizes the main differences between the versions:

    -
| Version | Release Date | Theme | Bonus |
| --- | --- | --- | --- |
| Alpha | 2009 | Old school beatbox | A rap battle between two beatboxers |
| Little Miss | 2012 | R&B | A love story between a girl and a beatboxer |
| Sunrise | 2013 | Pop | A flash mob dance in a park |
| The Love | 2014 | Romantic | A wedding ceremony with a choir of beatboxers |
| Brazil | 2016 | Samba | A carnival parade with dancers and musicians |
| Alive | 2017 | Electro | A futuristic concert with robots and lasers |
| Jeevan | 2018 | Bollywood | A musical scene with dancers and elephants |
| Dystopia | 2020 | Cyberpunk | |
| Wekiddy | 2021 | Kids | A playful scene with toys and animals |
    -

    What are some of the best alternatives to Incredibox apk ios for music lovers?

    -

    If you enjoy Incredibox apk ios, you might also like some of these other music apps that let you create, record, and share your own tunes:

    -
      -
• GarageBand: This is a popular app that turns your device into a full-featured recording studio. You can play, record, and mix music with a variety of instruments, loops, and effects. You can also collaborate with other musicians and share your songs via iCloud or social media.
• Music Maker Jam: This is a fun app that lets you create your own music in minutes. You can choose from thousands of studio-quality loops, beats, and samples to mix and match your own tracks. You can also apply effects, change the tempo, and adjust the volume. You can also join a global community of music makers and discover new genres and styles.
• Beat Snap: This is an easy-to-use app that lets you make beats and music with your fingers. You can tap on the pads to play sounds, record your performance, and edit it later. You can also add effects, filters, and vocals to spice up your tracks. You can also explore and remix songs from other users or upload your own to the cloud.
    -

    Conclusion

    -

    Incredibox apk ios is a fun and interactive music app that lets you make music with a merry crew of beatboxers. You can choose from nine different musical styles, drag and drop icons onto the avatars to make them sing, unlock animated bonuses, record and share your mix, and explore the Top 50 chart. Incredibox apk ios is a great way to learn about rhythm and melody, as well as to express your creativity and musical talent. You can download it from the App Store for $4.99 and enjoy it offline anywhere and anytime.

    -

    If you are looking for more music apps to try out, you can also check out GarageBand, Music Maker Jam, or Beat Snap. They are some of the best alternatives to Incredibox apk ios that let you create, record, and share your own tunes with a variety of instruments, loops, and effects.

    -

    We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    incredibox app for ios
    -incredibox music game ios
    -incredibox download for iphone
    -incredibox beatbox app ios
    -incredibox create your own music ios
    -incredibox v8 dystopia ios
    -incredibox v7 jeevan ios
    -incredibox v6 alive ios
    -incredibox v5 brazil ios
    -incredibox the love ios
    -incredibox sunrise ios
    -incredibox little miss ios
    -incredibox app store
    -incredibox ipad app
    -incredibox ipod touch app
    -incredibox mac app
    -incredibox app review
    -incredibox app price
    -incredibox app features
    -incredibox app privacy
    -incredibox app dark mode
    -incredibox app mp3 file
    -incredibox app mixlist
    -incredibox app for kids
    -incredibox app no ads
    -how to get incredibox on ios
    -how to play incredibox on ios
    -how to record incredibox on ios
    -how to share incredibox on ios
    -how to download incredibox on ios
    -is incredibox available on ios
    -is incredibox free on ios
    -is incredibox safe on ios
    -is incredibox offline on ios
    -is incredibox worth it on ios
    -best music apps like incredibox for ios
    -best beatbox apps like incredibox for ios
    -best game apps like incredibox for ios
    -best creative apps like incredibox for ios
    -best educational apps like incredibox for ios
    -learn music with incredibox on ios
    -make beats with incredibox on ios
    -have fun with incredibox on ios
    -enjoy the full incredibox experience on ios
    -discover the 9 musical atmospheres of incredibox on ios
    -join the top 50 chart of incredibox on ios
    -watch the animated choruses of incredibox on ios
    -explore the futuristic world of incrediobox on ios
    -celebrate life with the mystic rhythm of incrediobox on ios

    -

    FAQs

    -

    Here are some of the frequently asked questions about Incredibox apk ios:

    -
      -
1. Is Incredibox apk ios free?
No, Incredibox apk ios is not free. It costs $4.99 on the App Store. However, there is no in-app purchase or subscription required to use the app.

2. Is Incredibox apk ios safe?
Yes, Incredibox apk ios is safe to use. It does not contain any harmful or malicious content. It also does not collect or share any personal or sensitive information from the users.

3. Is Incredibox apk ios compatible with Android devices?
No, Incredibox apk ios is not compatible with Android devices. It is only available for iOS devices such as iPhone, iPad, and iPod touch. However, there is an Android version of Incredibox that you can download from Google Play for $3.99.

4. How do I update Incredibox apk ios?
To update Incredibox apk ios, you need to go to the App Store on your device and check for any available updates. If there is an update available, you need to tap on "Update" or "Install" to download and install it on your device.

5. How do I contact the developers of Incredibox apk ios?
To contact the developers of Incredibox apk ios, you can visit their official website at https://www.incredibox.com/. You can also follow them on Facebook at https://www.facebook.com/Incredibox.officiel/, on Twitter at https://twitter.com/incredibox_, or on Instagram at https://www.instagram.com/incredibox.officiel/. You can also send them an email at contact@incredibox.com.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/My Talking Angela APK iOS Tips and Tricks to Make Your Pet Happy.md b/spaces/congsaPfin/Manga-OCR/logs/My Talking Angela APK iOS Tips and Tricks to Make Your Pet Happy.md deleted file mode 100644 index 0431c5f526b4d923f4e9564e30b31afff4b9063f..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/My Talking Angela APK iOS Tips and Tricks to Make Your Pet Happy.md +++ /dev/null @@ -1,94 +0,0 @@ -
      -

      My Talking Angela APK iOS: How to Download and Play the Fun Virtual Pet Game

      -

      If you are looking for a fun and cute virtual pet game for your iOS device, you might want to check out My Talking Angela. This game lets you adopt a stylish cat named Angela and take care of her as your own. You can dress her up, play with her, feed her, and watch her grow. In this article, we will show you how to download and play My Talking Angela APK iOS, as well as some tips and tricks to make the most out of your experience.

      -

      my talking angela apk ios


      Downloadhttps://urlca.com/2uO8Iw



      -

      What is My Talking Angela?

      -

      A virtual pet game from Outfit7

      -

      My Talking Angela is a virtual pet game developed by Outfit7, the same company behind the popular My Talking Tom series. It was released in 2014 and has since gained millions of fans around the world. The game is available for both Android and iOS devices, as well as Windows Phone and Amazon Kindle.

      -

      Features and activities of My Talking Angela

      -

      My Talking Angela is more than just a pet simulator. It is also a fashion star, a dancer, a singer, and a friend. You can enjoy various features and activities with Angela, such as:

      -
        -
• Stylish makeup looks
• Wonderful wardrobe choices
• Super sweet activities
• Special sticker albums
• Epic mini-games
• Jet-setting travel options
      -

      You can also interact with Angela by talking to her, stroking her, poking her, or making her smile. She will respond with her own voice and expressions. You can even record videos of your interactions and share them with your friends.

      -

      How to download My Talking Angela APK iOS?

      -

      The official way: App Store

      -

      The easiest and safest way to download My Talking Angela APK iOS is through the App Store. You can simply search for the game on the App Store or click on this link to go directly to the download page. The game is free to download and play, but it offers in-app purchases for extra coins, diamonds, and items. You can also subscribe to get exclusive benefits such as unlimited energy, double coins, no ads, and more.

      -

      The alternative way: Internet Archive

      -

      If you want to try out the older versions of My Talking Angela APK iOS, you can use the Internet Archive website. This website archives various digital content, including apps, games, books, music, videos, and more. You can find the first version of My Talking Angela APK iOS from 2014 on this link. However, this version is only compatible with iOS 6.0 to iOS 6.1.6 devices such as iPad, iPhone, and iPod touch. You will need to use an emulator or a jailbroken device to run this version.

      -

      How to play My Talking Angela APK iOS?

      -

      Customize Angela's appearance and outfits

      -

      One of the main attractions of My Talking Angela APK iOS is that you can customize Angela's appearance and outfits according to your preferences. You can change her fur color, eye color, makeup style, hairstyle, accessories, clothes, shoes, and more. You can also unlock new items by leveling up or buying them with coins or diamonds.

      -

      Interact with Angela and make her happy

      -

      Another important aspect of My Talking Angela APK iOS is that you can interact with Angela and make her happy. You can talk to her and she will repeat what you say in a cute voice. You can also touch her, tickle her, or make funny faces at her. She will react with different emotions and sounds. You can also feed her, bathe her, brush her teeth, and put her to bed. You need to take care of her basic needs such as hunger, hygiene, energy, and happiness. If you neglect her, she will become sad or sick.

      -

      My Talking Angela 2 app download for ios
      -How to install My Talking Angela apk on iphone
      -My Talking Angela virtual pet game for ipad
      -My Talking Angela 1.0 long lost version for ios 6
      -My Talking Angela fashion star dress up game for ios
      -My Talking Angela apk mod unlimited money for ios
      -My Talking Angela free stickers and rewards for ios
      -My Talking Angela baking and dancing activities for ios
      -My Talking Angela spring magic update for ios
      -My Talking Angela rumors and myths debunked for ios
      -My Talking Angela best friend and chatbot for ios
      -My Talking Angela youtube videos and songs for ios
      -My Talking Angela customer support and feedback for ios
      -My Talking Angela privacy policy and terms of use for ios
      -My Talking Angela subscription and in-app purchases for ios
      -My Talking Angela mini-games and puzzles for ios
      -My Talking Angela jet-setting travel options for ios
      -My Talking Angela makeup and hair salon for ios
      -My Talking Angela wardrobe and fashion choices for ios
      -My Talking Angela super fun virtual star for ios
      -My Talking Angela 3D world and graphics for ios
      -My Talking Angela offline mode and data usage for ios
      -My Talking Angela tips and tricks for beginners for ios
      -My Talking Angela cheats and hacks for advanced players for ios
      -My Talking Angela reviews and ratings from users for ios
      -My Talking Angela compatible devices and versions for ios
      -My Talking Angela size and storage space required for ios
      -My Talking Angela latest news and updates for ios
      -My Talking Angela social media and community for ios
      -My Talking Angela screenshots and wallpapers for ios
      -How to uninstall or delete My Talking Angela apk from ios
      -How to backup or restore My Talking Angela data on ios
      -How to sync or transfer My Talking Angela progress across devices on ios
      -How to fix or troubleshoot My Talking Angela issues on ios
      -How to contact or report My Talking Angela developers on ios
      -How to customize or personalize My Talking Angela settings on ios
      -How to enable or disable notifications from My Talking Angela on ios
      -How to earn or spend coins and diamonds in My Talking Angela on ios
      -How to level up or unlock new features in My Talking Angela on ios
      -How to play or interact with My Talking Angela on ios

      -

      Explore different locations and mini-games

      -

      My Talking Angela APK iOS also offers various locations and mini-games for you to explore and enjoy. You can visit Angela's cozy home, where you can play with her toys, watch TV, or listen to music. You can also travel to different places such as Paris, New York, Tokyo, or the beach. Each location has its own theme and activities. You can also play fun mini-games with Angela, such as Happy Connect, Bubble Shooter, Brick Breaker, and more. These games will help you earn coins and diamonds, as well as improve Angela's skills and mood.

      -

      Tips and tricks for My Talking Angela APK iOS

      -

      Collect coins and diamonds

      -

      Coins and diamonds are the main currencies in My Talking Angela APK iOS. You can use them to buy new items, unlock new locations, or access premium features. You can earn coins and diamonds by playing mini-games, completing daily tasks and achievements, watching videos and ads, or using real money. You can also get free coins and diamonds by logging in every day, spinning the wheel of fortune, or opening mystery boxes.

      -

      Complete daily tasks and achievements

      -

      Daily tasks and achievements are another way to earn coins and diamonds in My Talking Angela APK iOS. Daily tasks are simple actions that you need to do every day, such as feeding Angela, playing with her, or dressing her up. Achievements are more challenging goals that you need to accomplish over time, such as collecting a certain number of stickers, reaching a certain level, or playing a certain mini-game. Completing these tasks and achievements will not only reward you with coins and diamonds, but also with experience points that will help you level up faster.

      -

      Watch videos and ads for rewards

      -

      Watching videos and ads is another option to get free rewards in My Talking Angela APK iOS. You can watch videos and ads by tapping on the TV icon in Angela's home or by clicking on the offer wall in the shop. You can get various rewards such as coins, diamonds, energy refills, stickers, or mystery boxes. However, you need to have an internet connection to watch videos and ads.

      -

      Conclusion

      -

      My Talking Angela APK iOS is a fun and cute virtual pet game that will keep you entertained for hours. You can adopt Angela as your own pet and take care of her as she grows from a kitten to a cat. You can customize her appearance and outfits, interact with her and make her happy, explore different locations and mini-games, and collect coins and diamonds. You can also record videos of your interactions and share them with your friends. If you are looking for a game that combines simulation, fashion, adventure, and fun, you should download My Talking Angela APK iOS today.

      -

      FAQs

      -
        -
• Q: Is My Talking Angela APK iOS safe for kids?
  A: My Talking Angela APK iOS is rated 4+ on the App Store, which means it is suitable for everyone. However, some parents may have concerns about the game's privacy policy, in-app purchases, and online interactions. Therefore, we recommend that parents supervise their kids while playing the game and use the parental control settings to limit or disable certain features.
• Q: How can I get more stickers in My Talking Angela APK iOS?
  A: Stickers are collectible items that you can find in mystery boxes or by traveling to different locations. You can also trade stickers with other players by using the sticker album feature. You need to have an internet connection and a Facebook account to use this feature. You can also buy stickers with diamonds in the shop.
• Q: How can I backup or restore my progress in My Talking Angela APK iOS?
  A: You can backup or restore your progress in My Talking Angela APK iOS by using the cloud save feature. You need to have an internet connection and a Facebook account to use this feature. You can access the cloud save feature by tapping on the settings icon in the top right corner of the screen and then tapping on the cloud icon. You can then choose to upload or download your progress.
• Q: How can I change the language in My Talking Angela APK iOS?
  A: You can change the language in My Talking Angela APK iOS by tapping on the settings icon in the top right corner of the screen and then tapping on the language icon. You can then choose from 32 different languages, including English, Spanish, French, German, Chinese, Japanese, and more.
• Q: How can I contact the developers of My Talking Angela APK iOS?
  A: You can contact the developers of My Talking Angela APK iOS by tapping on the settings icon in the top right corner of the screen and then tapping on the support icon. You can then choose to send an email, visit the website, or follow them on social media.

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Piano Magic Tiles Play Music and Enjoy Mod APK Download.md b/spaces/congsaPfin/Manga-OCR/logs/Piano Magic Tiles Play Music and Enjoy Mod APK Download.md deleted file mode 100644 index 39f5d964586339d3e8a44b6097e7ed7bb290ab6d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Piano Magic Tiles Play Music and Enjoy Mod APK Download.md +++ /dev/null @@ -1,77 +0,0 @@ - -

      Piano Music Tiles: Magic Tiles Mod APK - A Fun and Relaxing Game for Music Lovers

      -

      Do you love music and piano? Do you want to play your favorite songs on your mobile device? Do you want to have a fun and relaxing time with a simple and addictive game? If you answered yes to any of these questions, then you should try Piano Music Tiles: Magic Tiles, a music piano game that will test your rhythm and reflexes. And if you want to have an even better experience, you should download the mod apk version of this game, which will give you unlimited coins and diamonds, no ads and pop-ups, and access to all songs and themes. In this article, we will tell you everything you need to know about Piano Music Tiles: Magic Tiles mod apk, including what it is, how to play it, why you should download it, what features it has, and how to download and install it.

      -

      piano music tiles magic tiles mod apk


      Download File » https://urlca.com/2uO7hN



      -

      What is Piano Music Tiles: Magic Tiles?

      -

      Piano Music Tiles: Magic Tiles is a music piano game that was developed by Piano Magic Tiles Challenge Music Free. It is available for Android devices on Google Play Store. The game has over 10 million downloads and a 4.5-star rating. The game is simple but challenging: all you need to do is feel the music and tap the black tiles. But remember, don't touch the white tiles or you will lose. The game has various types of music and genres, such as classical, pop, rock, EDM, anime, etc. You can also choose from different themes and backgrounds to customize your game. The game also has a challenge mode where you can compete with other players around the world on the leaderboard. You can also play offline and save your progress on the cloud.

      -

      Why download Piano Music Tiles: Magic Tiles mod apk?

      -

      While Piano Music Tiles: Magic Tiles is a free game, it also has some limitations and drawbacks that can affect your enjoyment. For example, you need coins and diamonds to unlock new songs and themes, but they are hard to earn and expensive to buy. You also have to deal with annoying ads and pop-ups that interrupt your game. And some songs and themes are locked behind a paywall or require a subscription. That's why we recommend downloading Piano Music Tiles: Magic Tiles mod apk, which will give you the following benefits:

      -

      Unlimited coins and diamonds

      -

      With Piano Music Tiles: Magic Tiles mod apk, you will have unlimited coins and diamonds in your account. You can use them to unlock any song or theme you want without spending real money. You can also use them to buy boosters and power-ups that will help you improve your score and performance.

      -

      No ads and pop-ups

      -

      With Piano Music Tiles: Magic Tiles mod apk, you will not see any ads or pop-ups on your screen. You can enjoy the game without any interruption or distraction. You can also save your data and battery life by not loading any unnecessary content.

      -

      Unlock all songs and themes

      -

      With Piano Music Tiles: Magic Tiles mod apk, you will have access to all songs and themes in the game. You can play any song or genre you like without waiting for it to be unlocked or paying for it. You can also choose from different themes and backgrounds to suit your mood and preference.

      -

      Features of Piano Music Tiles

      Features of Piano Music Tiles: Magic Tiles

      -

      Piano Music Tiles: Magic Tiles is not just a simple music piano game. It also has many features that make it more fun and relaxing. Here are some of the features that you can enjoy with this game:

      -

      piano magic tiles music game mod apk
      -magic tiles piano music challenge mod apk
      -piano music tiles free mod apk
      -magic tiles piano music songs mod apk
      -piano magic tiles pop music mod apk
      -magic tiles piano music master mod apk
      -piano music tiles 2 mod apk
      -magic tiles piano music anime mod apk
      -piano magic tiles classic music mod apk
      -magic tiles piano music rock mod apk
      -piano music tiles 3 mod apk
      -magic tiles piano music edm mod apk
      -piano magic tiles kpop music mod apk
      -magic tiles piano music offline mod apk
      -piano music tiles 4 mod apk
      -magic tiles piano music online mod apk
      -piano magic tiles christmas music mod apk
      -magic tiles piano music kids mod apk
      -piano music tiles 5 mod apk
      -magic tiles piano music relax mod apk
      -piano magic tiles bts music mod apk
      -magic tiles piano music quiz mod apk
      -piano music tiles 6 mod apk
      -magic tiles piano music maker mod apk
      -piano magic tiles jazz music mod apk
      -magic tiles piano music simulator mod apk
      -piano music tiles 7 mod apk
      -magic tiles piano music tutorial mod apk
      -piano magic tiles rap music mod apk
      -magic tiles piano music converter mod apk
      -piano music tiles 8 mod apk
      -magic tiles piano music downloader mod apk
      -piano magic tiles country music mod apk
      -magic tiles piano music editor mod apk
      -piano music tiles 9 mod apk
      -magic tiles piano music generator mod apk
      -piano magic tiles disco music mod apk
      -magic tiles piano music recorder mod apk
      -piano music tiles 10 mod apk
      -magic tiles piano music player mod apk

      -

      Various types of music and genres

      -

      Piano Music Tiles: Magic Tiles has a huge collection of songs and music that you can play. You can find songs from different genres, such as classical, pop, rock, EDM, anime, etc. You can also find songs from famous artists, such as Beethoven, Mozart, Chopin, Taylor Swift, Ed Sheeran, BTS, etc. You can also request new songs and genres from the developers. You will never get bored with this game because there is always something new to play.

      -

      Relaxing visual design and sound effects

      -

      Piano Music Tiles: Magic Tiles has a relaxing visual design and sound effects that will make you feel calm and peaceful. The game has different themes and backgrounds that you can choose from, such as night sky, forest, ocean, etc. The game also has realistic piano sound effects that will make you feel like you are playing a real piano. The game is designed to help you relax and enjoy the music.

      -

      Challenge mode and leaderboard

      -

      Piano Music Tiles: Magic Tiles also has a challenge mode where you can test your skills and compete with other players around the world. The challenge mode has different levels of difficulty and speed that you can choose from. The game also has a leaderboard where you can see your rank and score among other players. You can also share your achievements and progress with your friends on social media. The game is designed to challenge you and motivate you to improve.

      -

      Offline mode and cloud save

      -

      Piano Music Tiles: Magic Tiles also has an offline mode where you can play the game without an internet connection. You can play any song or theme that you have unlocked or downloaded without any limitation. The game also has a cloud save feature where you can save your progress and data on the cloud. You can access your account and data on any device that you use. The game is designed to be convenient and accessible.

      -

      How to download and install Piano Music Tiles: Magic Tiles mod apk?

      -

      If you are interested in downloading and installing Piano Music Tiles: Magic Tiles mod apk, you can follow these simple steps:

      -

      Step 1: Download the mod apk file from a trusted source

      -

      The first step is to download the mod apk file from a trusted source. You can find many websites that offer mod apk files for various games and apps, but not all of them are safe and reliable. Some of them may contain viruses or malware that can harm your device or steal your data. That's why we recommend downloading the mod apk file from our website, which is 100% safe and secure. You can download the mod apk file by clicking on this link: [Piano Music Tiles: Magic Tiles Mod APK].

      -

      Step 2: Enable unknown sources on your device settings

      -

      The second step is to enable unknown sources on your device settings. This is necessary because Android devices do not allow installing apps from sources other than Google Play Store by default. To enable unknown sources, you need to go to your device settings > security > unknown sources > toggle on. This will allow you to install apps from sources other than Google Play Store.

      -

      Step 3: Install the mod apk file and enjoy the game

      -

      The third step is to install the mod apk file and enjoy the game. To install the mod apk file, you need to locate the file on your device storage > tap on it > follow the instructions on the screen > wait for the installation to finish > open the game and enjoy. You will see that you have unlimited coins and diamonds, no ads and pop-ups, and access to all songs and themes in the game.

      -

      Conclusion

      -

      Piano Music Tiles: Magic Tiles is a music piano game that will give you a fun and relaxing time with your favorite songs. You can play various types of music and genres, customize your game with different themes and backgrounds, compete with other players on the challenge mode and leaderboard, and play offline and save your progress on the cloud. And if you want to have an even better experience, you should download Piano Music Tiles: Magic Tiles mod apk, which will give you unlimited coins and diamonds, no ads and pop-ups, and access to all songs and themes in the game. Download Piano Music Tiles: Magic Tiles mod apk now and enjoy the music!

FAQs

Q: Is Piano Music Tiles: Magic Tiles mod apk safe to use?
A: Yes, Piano Music Tiles: Magic Tiles mod apk is safe to use. It does not contain any viruses or malware that can harm your device or steal your data. It also does not require any root or jailbreak to install or run. However, you should always download the mod apk file from a trusted source to avoid any risks.

Q: How can I update Piano Music Tiles: Magic Tiles mod apk?
A: To update Piano Music Tiles: Magic Tiles mod apk, you need to download the latest version of the mod apk file and install it over the existing one. You do not need to uninstall the previous version or lose your progress. However, you should always back up your data before updating to avoid any issues.

Q: Can I play Piano Music Tiles: Magic Tiles mod apk with my friends?
A: Yes, you can play Piano Music Tiles: Magic Tiles mod apk with your friends. You can connect your game with your Facebook account and invite your friends to join you. You can also see your friends' scores and achievements on the leaderboard and challenge them to beat you.

Q: What are the minimum requirements to play Piano Music Tiles: Magic Tiles mod apk?
A: The minimum requirements to play Piano Music Tiles: Magic Tiles mod apk are:
- Android 4.1 or higher
- 50 MB of free storage space
- Internet connection (optional)

Q: How can I contact the developers of Piano Music Tiles: Magic Tiles mod apk?
A: If you have any questions, feedback, or suggestions for Piano Music Tiles: Magic Tiles mod apk, you can contact the developers by emailing them at [pianomagictileschallenge@gmail.com] or visiting their Facebook page at [Piano Magic Tiles Challenge Music Free].

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money Weapons and Vehicles in Rope Hero Mafia City Wars Hack APK.md b/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money Weapons and Vehicles in Rope Hero Mafia City Wars Hack APK.md deleted file mode 100644 index a08f94de4a338ae6f1ee807d3a6edd5f2d722a50..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Unlimited Money Weapons and Vehicles in Rope Hero Mafia City Wars Hack APK.md +++ /dev/null @@ -1,128 +0,0 @@ - -

      Rope Hero: Mafia City Wars Hack APK - How to Get Unlimited Money and Diamonds

      -

      Are you a fan of superhero games? Do you want to become a super rope hero who can fight crime and save the city? If yes, then you should try Rope Hero: Mafia City Wars, a thrilling action game with RPG elements. In this game, you can use your superpowers and guns to fight with the gangsters, capture districts, and complete quests. You can also customize your super rope hero with different skins and weapons.

      -

      However, to enjoy the game fully, you will need a lot of money and diamonds. Money is used to buy weapons, vehicles, and upgrades, while diamonds are used to unlock premium skins and items. Earning money and diamonds in the game is not easy, as you have to complete missions, watch ads, or spend real money. That's why many players are looking for a way to get unlimited money and diamonds in Rope Hero: Mafia City Wars.

      -

      rope hero mafia city wars hack apk


      Downloadhttps://urlca.com/2uO7qv



      -

      Fortunately, there is a solution for that. You can use a hack apk, which is a modified version of the original game that gives you access to unlimited resources. With a hack apk, you can enjoy the game without any limitations or restrictions. You can buy anything you want, unlock everything you need, and have more fun playing Rope Hero: Mafia City Wars.

      -

      Features of Rope Hero: Mafia City Wars Hack APK

      -

      A hack apk is not just a simple cheat tool. It is a fully functional game that has been modified to provide you with some amazing features that are not available in the original game. Here are some of the features of Rope Hero: Mafia City Wars Hack APK:

      -

      Unlimited money and diamonds

      -

      This is the main feature of the hack apk. You will get unlimited money and diamonds in your account as soon as you install the hack apk. You can use them to buy anything you want in the game, such as weapons, vehicles, upgrades, skins, and items. You don't have to worry about running out of money or diamonds ever again.

      -

      Unlock all superhero skins and weapons

      -

      Another feature of the hack apk is that it unlocks all the superhero skins and weapons in the game. You can choose from a variety of skins for your super rope hero, such as Spider-Man, Iron Man, Batman, Hulk, Deadpool, and more. You can also equip your hero with different weapons, such as pistols, rifles, shotguns, rocket launchers, grenades, swords, axes, hammers, and more. You can mix and match different skins and weapons to create your own unique superhero.

      -

      No ads and no root required

      -

      The hack apk also removes all the annoying ads and pop-ups that interrupt your gameplay. You can enjoy the game without any distractions or interruptions. The hack apk also does not require root access to work. You can install it on any Android device without worrying about rooting your device or voiding your warranty.

      -

      How to Download and Install Rope Hero: Mafia City Wars Hack APK

      -

      Downloading and installing the hack apk is very easy and simple. You just need to follow these steps:

      -

      Step 1: Enable unknown sources on your device

      -

      Before you can install the hack apk, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on.

      -

      Step 2: Download the hack apk file from a trusted source

      -

      Next, you need to download the hack apk file from a trusted source. You can use the link below to download the latest version of Rope Hero: Mafia City Wars Hack APK. The file size is about 100 MB, so make sure you have enough space on your device.
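
Because the file is advertised at roughly 100 MB and the FAQ below recommends downloading only from a trusted source, a quick pre-install sanity check can help. The sketch below is a generic illustration, not something provided by the game: the file name is a placeholder, and the expected hash is a value you would have to obtain yourself from whatever source you trust.

```python
# Hypothetical pre-install check: confirm the download is roughly the advertised
# ~100 MB and print its SHA-256 so it can be compared against a hash published
# by a source you trust. File name and expected hash are placeholders.
import hashlib
from pathlib import Path

APK_PATH = Path("rope_hero_hack.apk")   # placeholder name, adjust to your download
EXPECTED_SHA256 = None                  # paste a trusted hash here, if you have one

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

size_mb = APK_PATH.stat().st_size / (1024 * 1024)
print(f"size: {size_mb:.1f} MB (the article advertises about 100 MB)")
checksum = sha256_of(APK_PATH)
print(f"sha256: {checksum}")
if EXPECTED_SHA256 and checksum != EXPECTED_SHA256.lower():
    raise SystemExit("Checksum mismatch - do not install this file.")
```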

      -

      rope hero mafia city wars mod apk unlimited money
      -rope hero mafia city wars cheats android
      -rope hero mafia city wars hack download
      -rope hero mafia city wars game online
      -rope hero mafia city wars apk free
      -rope hero mafia city wars mod menu
      -rope hero mafia city wars unlimited gems
      -rope hero mafia city wars latest version
      -rope hero mafia city wars gameplay
      -rope hero mafia city wars hack ios
      -rope hero mafia city wars no ads
      -rope hero mafia city wars tips and tricks
      -rope hero mafia city wars review
      -rope hero mafia city wars best weapons
      -rope hero mafia city wars offline
      -rope hero mafia city wars hack tool
      -rope hero mafia city wars for pc
      -rope hero mafia city wars all characters
      -rope hero mafia city wars guide
      -rope hero mafia city wars codes
      -rope hero mafia city wars mod apk revdl
      -rope hero mafia city wars hack apk 2023
      -rope hero mafia city wars new update
      -rope hero mafia city wars superpowers
      -rope hero mafia city wars how to play
      -rope hero mafia city wars hack apk an1.com[^1^]
      -rope hero mafia city wars missions
      -rope hero mafia city wars secrets
      -rope hero mafia city wars vehicles
      -rope hero mafia city wars hack apk happymod
      -rope hero mafia city wars android 1
      -rope hero mafia city wars mod apk rexdl
      -rope hero mafia city wars hack version
      -rope hero mafia city wars download for android
      -rope hero mafia city wars mod apk android 1
      -rope hero mafia city wars hack no verification
      -rope hero mafia city wars wiki
      -rope hero mafia city wars mod apk 2023
      -rope hero mafia city wars hack online generator
      -rope hero mafia city wars unlimited everything
      -rope hero mafia city wars mod apk latest version download
      -rope hero mafia city wars hack without human verification
      -rope hero mafia city wars free gems and coins
      -rope hero mafia city wars mod apk obb download

      -

      Download Rope Hero: Mafia City Wars Hack APK

      -

      Step 3: Install the hack apk file and launch the game

      -

      Finally, you need to install the hack apk file and launch the game. To do this, locate the downloaded file on your device, tap on it, and follow the instructions on the screen. Once the installation is complete, open the game and enjoy unlimited money and diamonds.

      -

      Tips and Tricks for Playing Rope Hero: Mafia City Wars

      -

      Rope Hero: Mafia City Wars is a fun and addictive game that will keep you entertained for hours. However, if you want to master the game and become the best super rope hero in the city, you will need some tips and tricks. Here are some of them:

      -

      Use your superpowers wisely

      -

      Your superpowers are your main weapons in the game. You can use them to swing around the city, climb buildings, jump over obstacles, and fight enemies. However, you should also be careful not to overuse them, as they consume energy. You can replenish your energy by collecting blue orbs or using money or diamonds.

      -

      Explore the open world and complete quests

      -

      The game features a large open world that you can explore freely. You can find various locations, such as shops, banks, casinos, police stations, hospitals, and more. You can also interact with different characters, such as civilians, gangsters, cops, and superheroes. You can also complete various quests that will reward you with money, diamonds, experience points, and items. Quests are marked with yellow icons on the map.

      -

      Fight with the gangster bosses and capture districts

      -

      The city is divided into several districts that are controlled by different gangster bosses. You can challenge them to a fight and try to capture their districts. This will increase your reputation and influence in the city. You can also earn more money and diamonds by collecting taxes from the captured districts. However, be prepared to face strong resistance from the gangsters and their minions.

      -

      Conclusion

      -

      Rope Hero: Mafia City Wars is an exciting game that lets you become a super rope hero who can save the city from crime and chaos. You can use your superpowers and weapons to fight with the gangsters, capture districts, and complete quests. You can also customize your super rope hero with different skins and weapons.

      -

      If you want to enjoy the game without any limitations or restrictions, you can use a hack apk that gives you unlimited money and diamonds. With a hack apk, you can unlock everything you need in the game and have more fun playing Rope Hero: Mafia City Wars.

      -

      So what are you waiting for? Download Rope Hero: Mafia City Wars Hack APK now and become the ultimate super rope hero in the city!

      -

      FAQs

      -

      Is Rope Hero: Mafia City Wars Hack APK safe to use?

      -

      Yes, Rope Hero: Mafia City Wars Hack APK is safe to use. It does not contain any viruses or malware that can harm your device or compromise your privacy. However, you should always download it from a trusted source and scan it with an antivirus before installing it.

      -

      Will I get banned for using Rope Hero: Mafia City Wars Hack APK?

      -

      No, you will not get banned for using Rope Hero: Mafia City Wars Hack APK. The hack apk is undetectable by the game servers and does not interfere with other players' gameplay. However, you should avoid using it excessively or in a way that affects other players' enjoyment of the game. You should also respect the game rules and terms of service.

      -

      How can I update Rope Hero: Mafia City Wars Hack APK?

      -

      To update Rope Hero: Mafia City Wars Hack APK, you need to download the latest version of the hack apk from the same source you downloaded it from before. You can check the version number and the date of the hack apk on the download page. You can also follow the updates and news of the hack apk on its official website or social media pages. To install the update, you need to uninstall the previous version of the hack apk and install the new one.

      -

      What are the best superhero skins and weapons in Rope Hero: Mafia City Wars?

      -

      The best superhero skins and weapons in Rope Hero: Mafia City Wars depend on your personal preference and play style. However, some of the most popular and powerful ones are:

| Skin | Weapon | Description |
| --- | --- | --- |
| Spider-Man | Web Shooter | A classic superhero skin that lets you swing around the city with your web shooter. You can also shoot webs at enemies to immobilize them or pull them towards you. |
| Iron Man | Repulsor Blast | A futuristic superhero skin that gives you a suit of armor with jet boosters and repulsor blasts. You can fly around the city and blast enemies with your powerful beams. |
| Batman | Batarang | A dark and mysterious superhero skin that gives you a cape and a batarang. You can glide around the city and throw batarangs at enemies to stun them or knock them out. |
| Hulk | Fists | A monstrous superhero skin that gives you incredible strength and durability. You can smash enemies with your fists or throw objects at them. You can also jump high and cause shockwaves when you land. |
| Deadpool | Dual Swords | A humorous and sarcastic superhero skin that gives you dual swords and a healing factor. You can slash enemies with your swords or use them to deflect bullets. You can also heal from any damage quickly. |
      -

      How can I contact the developer of Rope Hero: Mafia City Wars?

      -

      If you have any questions, feedback, suggestions, or issues regarding Rope Hero: Mafia City Wars, you can contact the developer of the game through their email address or their social media pages. Here are their contact details:

      -

      Email: ropeheromafiacitywars@gmail.com

      -

      Facebook: Rope Hero: Mafia City Wars

      -

      Twitter: @RopeHeroMafia

      -

      Instagram: ropeheromafiacitywars

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/A.R.E.S. Extinction Agenda Torrent Download [Patch] !NEW!.md b/spaces/contluForse/HuggingGPT/assets/A.R.E.S. Extinction Agenda Torrent Download [Patch] !NEW!.md deleted file mode 100644 index 290788468988c19b6e39be88969dbfb63f7ce5d2..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/A.R.E.S. Extinction Agenda Torrent Download [Patch] !NEW!.md +++ /dev/null @@ -1,14 +0,0 @@ -

      A.R.E.S.: Extinction Agenda Torrent Download [Patch]


      Download ✓✓✓ https://ssurll.com/2uzyCL



      -
-January 12, 2015 - staralbu 7b17bfd26b staralbu February 15 at 00:32. And what to do when people in the villages with children cannot go to the cities?
-But what about young children if they do not have the opportunity to go to the clinic?
-But what if people in the village have a car breakdown and there is no gasoline in the hospital?
-And how to live for people who cannot buy their own medicines?
-These are not just words.
-Before my eyes, when I was working in an orphanage, a girl died.
-She didn't have parents.
-We treated her.
-She had a very advanced form of hepatitis B. The girl was doomed to die. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/contluForse/HuggingGPT/assets/CopyTrans Control Center Crack ((TOP)).rar.md b/spaces/contluForse/HuggingGPT/assets/CopyTrans Control Center Crack ((TOP)).rar.md deleted file mode 100644 index 5331b6224c1d4011905e5dec4306a3fb8f58b16c..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/CopyTrans Control Center Crack ((TOP)).rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

      CopyTrans Control Center crack.rar


      DOWNLOAD 🔗 https://ssurll.com/2uzxzK



      -
      - d5da3c52bf
      -
      -
      -

      diff --git a/spaces/contluForse/HuggingGPT/assets/Crack Prodad Vitascene Serial Number [EXCLUSIVE].md b/spaces/contluForse/HuggingGPT/assets/Crack Prodad Vitascene Serial Number [EXCLUSIVE].md deleted file mode 100644 index cbf69a9493b12e96a46af85ed3806e8954dbdeb9..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Crack Prodad Vitascene Serial Number [EXCLUSIVE].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Crack Prodad Vitascene Serial Number


      DOWNLOAD ••• https://ssurll.com/2uzxXu



- -proDAD VitaScene is an advanced transition and filter software for video editing which is designed with ... proDAD VitaScene 2.0 Full Keygen. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/encoder.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/encoder.py deleted file mode 100644 index 7f7149ca3c0cf2b6e019105af7e645cfbb3eda11..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/encoder.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class Encoder(nn.Module): - def __init__(self): - super(Encoder, self).__init__() - - basemodel_name = 'tf_efficientnet_b5_ap' - print('Loading base model ()...'.format(basemodel_name), end='') - repo_path = os.path.join(os.path.dirname(__file__), 'efficientnet_repo') - basemodel = torch.hub.load(repo_path, basemodel_name, pretrained=False, source='local') - print('Done.') - - # Remove last layer - print('Removing last two layers (global_pool & classifier).') - basemodel.global_pool = nn.Identity() - basemodel.classifier = nn.Identity() - - self.original_model = basemodel - - def forward(self, x): - features = [x] - for k, v in self.original_model._modules.items(): - if (k == 'blocks'): - for ki, vi in v._modules.items(): - features.append(vi(features[-1])) - else: - features.append(v(features[-1])) - return features - - diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/apis/inference.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/apis/inference.py deleted file mode 100644 index 90bc1c0c68525734bd6793f07c15fe97d3c8342c..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/apis/inference.py +++ /dev/null @@ -1,136 +0,0 @@ -import matplotlib.pyplot as plt -import annotator.uniformer.mmcv as mmcv -import torch -from annotator.uniformer.mmcv.parallel import collate, scatter -from annotator.uniformer.mmcv.runner import load_checkpoint - -from annotator.uniformer.mmseg.datasets.pipelines import Compose -from annotator.uniformer.mmseg.models import build_segmentor - - -def init_segmentor(config, checkpoint=None, device='cuda:0'): - """Initialize a segmentor from config file. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str, optional) CPU/CUDA device option. Default 'cuda:0'. - Use 'cpu' for loading model on CPU. - Returns: - nn.Module: The constructed segmentor. - """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - 'but got {}'.format(type(config))) - config.model.pretrained = None - config.model.train_cfg = None - model = build_segmentor(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - model.CLASSES = checkpoint['meta']['CLASSES'] - model.PALETTE = checkpoint['meta']['PALETTE'] - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage: - """A simple pipeline to load image.""" - - def __call__(self, results): - """Call function to load images into results. 
- - Args: - results (dict): A result dict contains the file name - of the image to be read. - - Returns: - dict: ``results`` will be returned containing loaded image. - """ - - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_segmentor(model, img): - """Inference image(s) with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - imgs (str/ndarray or list[str/ndarray]): Either image files or loaded - images. - - Returns: - (list[Tensor]): The segmentation result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] - test_pipeline = Compose(test_pipeline) - # prepare data - data = dict(img=img) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - data['img_metas'] = [i.data[0] for i in data['img_metas']] - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result - - -def show_result_pyplot(model, - img, - result, - palette=None, - fig_size=(15, 10), - opacity=0.5, - title='', - block=True): - """Visualize the segmentation results on the image. - - Args: - model (nn.Module): The loaded segmentor. - img (str or np.ndarray): Image filename or loaded image. - result (list): The segmentation result. - palette (list[list[int]]] | None): The palette of segmentation - map. If None is given, random palette will be generated. - Default: None - fig_size (tuple): Figure size of the pyplot figure. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - title (str): The title of pyplot figure. - Default is ''. - block (bool): Whether to block the pyplot figure. - Default is True. 
- """ - if hasattr(model, 'module'): - model = model.module - img = model.show_result( - img, result, palette=palette, show=False, opacity=opacity) - # plt.figure(figsize=fig_size) - # plt.imshow(mmcv.bgr2rgb(img)) - # plt.title(title) - # plt.tight_layout() - # plt.show(block=block) - return mmcv.bgr2rgb(img) diff --git a/spaces/ctcconstruc/README/README.md b/spaces/ctcconstruc/README/README.md deleted file mode 100644 index 273d9ab5f43fafd17aefc7cbbefd65ad02b4c66a..0000000000000000000000000000000000000000 --- a/spaces/ctcconstruc/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 🏢 -colorFrom: indigo -colorTo: pink -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card 🔥 diff --git a/spaces/cymic/Waifu_Diffusion_Webui/javascript/dragdrop.js b/spaces/cymic/Waifu_Diffusion_Webui/javascript/dragdrop.js deleted file mode 100644 index 5aac57f77b93cae9bc176f1f602d7462986826c3..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/javascript/dragdrop.js +++ /dev/null @@ -1,86 +0,0 @@ -// allows drag-dropping files into gradio image elements, and also pasting images from clipboard - -function isValidImageList( files ) { - return files && files?.length === 1 && ['image/png', 'image/gif', 'image/jpeg'].includes(files[0].type); -} - -function dropReplaceImage( imgWrap, files ) { - if ( ! isValidImageList( files ) ) { - return; - } - - imgWrap.querySelector('.modify-upload button + button, .touch-none + div button + button')?.click(); - const callback = () => { - const fileInput = imgWrap.querySelector('input[type="file"]'); - if ( fileInput ) { - fileInput.files = files; - fileInput.dispatchEvent(new Event('change')); - } - }; - - if ( imgWrap.closest('#pnginfo_image') ) { - // special treatment for PNG Info tab, wait for fetch request to finish - const oldFetch = window.fetch; - window.fetch = async (input, options) => { - const response = await oldFetch(input, options); - if ( 'api/predict/' === input ) { - const content = await response.text(); - window.fetch = oldFetch; - window.requestAnimationFrame( () => callback() ); - return new Response(content, { - status: response.status, - statusText: response.statusText, - headers: response.headers - }) - } - return response; - }; - } else { - window.requestAnimationFrame( () => callback() ); - } -} - -window.document.addEventListener('dragover', e => { - const target = e.composedPath()[0]; - const imgWrap = target.closest('[data-testid="image"]'); - if ( !imgWrap ) { - return; - } - e.stopPropagation(); - e.preventDefault(); - e.dataTransfer.dropEffect = 'copy'; -}); - -window.document.addEventListener('drop', e => { - const target = e.composedPath()[0]; - const imgWrap = target.closest('[data-testid="image"]'); - if ( !imgWrap ) { - return; - } - e.stopPropagation(); - e.preventDefault(); - const files = e.dataTransfer.files; - dropReplaceImage( imgWrap, files ); -}); - -window.addEventListener('paste', e => { - const files = e.clipboardData.files; - if ( ! isValidImageList( files ) ) { - return; - } - - const visibleImageFields = [...gradioApp().querySelectorAll('[data-testid="image"]')] - .filter(el => uiElementIsVisible(el)); - if ( ! visibleImageFields.length ) { - return; - } - - const firstFreeImageField = visibleImageFields - .filter(el => el.querySelector('input[type=file]'))?.[0]; - - dropReplaceImage( - firstFreeImageField ? 
- firstFreeImageField : - visibleImageFields[visibleImageFields.length - 1] - , files ); -}); diff --git a/spaces/cynika/taffy/flask_api.py b/spaces/cynika/taffy/flask_api.py deleted file mode 100644 index 8cc236a1c34c9ddeddea99bcea13024fb0ccc90b..0000000000000000000000000000000000000000 --- a/spaces/cynika/taffy/flask_api.py +++ /dev/null @@ -1,56 +0,0 @@ -import io -import logging - -import soundfile -import torch -import torchaudio -from flask import Flask, request, send_file -from flask_cors import CORS - -from inference.infer_tool import Svc, RealTimeVC - -app = Flask(__name__) - -CORS(app) - -logging.getLogger('numba').setLevel(logging.WARNING) - - -@app.route("/voiceChangeModel", methods=["POST"]) -def voice_change_model(): - request_form = request.form - wave_file = request.files.get("sample", None) - # 变调信息 - f_pitch_change = float(request_form.get("fPitchChange", 0)) - # DAW所需的采样率 - daw_sample = int(float(request_form.get("sampleRate", 0))) - speaker_id = int(float(request_form.get("sSpeakId", 0))) - # http获得wav文件并转换 - input_wav_path = io.BytesIO(wave_file.read()) - - # 模型推理 - if raw_infer: - out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample) - else: - out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample) - # 返回音频 - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav") - out_wav_path.seek(0) - return send_file(out_wav_path, download_name="temp.wav", as_attachment=True) - - -if __name__ == '__main__': - # 启用则为直接切片合成,False为交叉淡化方式 - # vst插件调整0.3-0.5s切片时间可以降低延迟,直接切片方法会有连接处爆音、交叉淡化会有轻微重叠声音 - # 自行选择能接受的方法,或将vst最大切片时间调整为1s,此处设为Ture,延迟大音质稳定一些 - raw_infer = True - # 每个模型和config是唯一对应的 - model_name = "logs/32k/G_174000-Copy1.pth" - config_name = "configs/config.json" - svc_model = Svc(model_name, config_name) - svc = RealTimeVC() - # 此处与vst插件对应,不建议更改 - app.run(port=6842, host="0.0.0.0", debug=False, threaded=False) diff --git a/spaces/danterivers/music-generation-samples/tests/__init__.py b/spaces/danterivers/music-generation-samples/tests/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/tests/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
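
For context, the deleted `flask_api.py` above exposes a single `/voiceChangeModel` endpoint that expects a multipart form with a `sample` wav file plus `fPitchChange`, `sampleRate` and `sSpeakId` fields, and returns the converted wav. Below is a minimal client sketch for that endpoint; it assumes the `requests` package is installed, that the server from the deleted script is running locally on port 6842 (as configured there), and that `input.wav` is a hypothetical local file.

```python
# Minimal client sketch for the /voiceChangeModel endpoint defined in the
# deleted flask_api.py above. Assumptions: the Flask server is running
# locally on port 6842 (as in that script), the `requests` package is
# installed, and "input.wav" is a hypothetical local sample file.
import requests


def convert_voice(wav_path: str, pitch_change: float = 0.0,
                  sample_rate: int = 44100, speaker_id: int = 0) -> bytes:
    """POST a wav file to the voice-change endpoint and return the converted wav bytes."""
    with open(wav_path, "rb") as f:
        response = requests.post(
            "http://127.0.0.1:6842/voiceChangeModel",
            files={"sample": ("sample.wav", f, "audio/wav")},
            data={
                "fPitchChange": str(pitch_change),  # pitch change value read by the endpoint
                "sampleRate": str(sample_rate),     # sample rate expected by the DAW
                "sSpeakId": str(speaker_id),        # target speaker id
            },
            timeout=120,
        )
    response.raise_for_status()
    return response.content  # wav bytes, as returned by send_file(...) in the endpoint


if __name__ == "__main__":
    audio = convert_voice("input.wav", pitch_change=2.0)
    with open("converted.wav", "wb") as out:
        out.write(audio)
```
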
diff --git a/spaces/dawood/audioldm-text-to-audio-generation/app.py b/spaces/dawood/audioldm-text-to-audio-generation/app.py deleted file mode 100644 index 7d21ec322d38cd4b957ac958d12215debad34586..0000000000000000000000000000000000000000 --- a/spaces/dawood/audioldm-text-to-audio-generation/app.py +++ /dev/null @@ -1,235 +0,0 @@ -import gradio as gr -import numpy as np -from audioldm import text_to_audio, build_model -from share_btn import community_icon_html, loading_icon_html, share_js - -model_id="haoheliu/AudioLDM-S-Full" - -audioldm = build_model() -# audioldm=None - -# def predict(input, history=[]): -# # tokenize the new input sentence -# new_user_input_ids = tokenizer.encode(input + tokenizer.eos_token, return_tensors='pt') - -# # append the new user input tokens to the chat history -# bot_input_ids = torch.cat([torch.LongTensor(history), new_user_input_ids], dim=-1) - -# # generate a response -# history = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id).tolist() - -# # convert the tokens to text, and then split the responses into lines -# response = tokenizer.decode(history[0]).split("<|endoftext|>") -# response = [(response[i], response[i+1]) for i in range(0, len(response)-1, 2)] # convert to tuples of list -# return response, history - -def text2audio(text, duration, guidance_scale, random_seed, n_candidates): - # print(text, length, guidance_scale) - waveform = text_to_audio(audioldm, text, random_seed, duration=duration, guidance_scale=guidance_scale, n_candidate_gen_per_text=int(n_candidates)) # [bs, 1, samples] - waveform = [gr.make_waveform((16000, wave[0])) for wave in waveform] - # waveform = [(16000, np.random.randn(16000)), (16000, np.random.randn(16000))] - if(len(waveform) == 1): - waveform = waveform[0] - return waveform,gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - -# iface = gr.Interface(fn=text2audio, inputs=[ -# gr.Textbox(value="A man is speaking in a huge room", max_lines=1), -# gr.Slider(2.5, 10, value=5, step=2.5), -# gr.Slider(0, 5, value=2.5, step=0.5), -# gr.Number(value=42) -# ], outputs=[gr.Audio(label="Output", type="numpy"), gr.Audio(label="Output", type="numpy")], -# allow_flagging="never" -# ) -# iface.launch(share=True) - -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - #advanced-btn { - font-size: .7rem !important; - line-height: 
19px; - margin-top: 12px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } - .animate-spin { - animation: spin 1s linear infinite; - } - @keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } - } - #share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; - margin-top: 10px; - margin-left: auto; - } - #share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; - } - #share-btn * { - all: unset; - } - #share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; - } - #share-btn-container .wrap { - display: none !important; - } - - .gr-form{ - flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0; - } - #prompt-container{ - gap: 0; - } - #prompt-text-input, #negative-prompt-text-input{padding: .45rem 0.625rem} - #component-16{border-top-width: 1px!important;margin-top: 1em} - .image_duplication{position: absolute; width: 100px; left: 50px} -""" -iface = gr.Blocks(css=css) - -with iface: - gr.HTML( - """ -
      - AudioLDM: Text-to-Audio Generation with Latent Diffusion Models
      - [Paper] [Project page]
      - """ - ) - with gr.Group(): - with gr.Box(): - ############# Input - textbox = gr.Textbox(value="A hammer is hitting a wooden surface", max_lines=1, label="Input your text here. Please ensure it is descriptive and of moderate length.") - - with gr.Accordion("Advanced Options", open=False): - seed = gr.Number(value=42, label="Change this value (any integer number) will lead to a different generation result.") - duration = gr.Slider(2.5, 10, value=5, step=2.5, label="Duration (seconds)") - guidance_scale = gr.Slider(0, 5, value=2.5, step=0.5, label="Guidance scale (Large => better quality and relavancy to text; Small => better diversity)") - n_candidates = gr.Slider(1, 5, value=3, step=1, label="Automatic quality control. This number control the number of candidates (e.g., generate three audios and choose the best to show you). A Larger value usually lead to better quality with heavier computation") - ############# Output - # outputs=gr.Audio(label="Output", type="numpy") - outputs=gr.Video(label="Output") - with gr.Group(elem_id="container-advanced-btns"): - # advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("Share to community", elem_id="share-btn", visible=False) - # outputs=[gr.Audio(label="Output", type="numpy"), gr.Audio(label="Output", type="numpy")] - - btn = gr.Button("Submit").style(full_width=True) - btn.click(text2audio, inputs=[textbox, duration, guidance_scale, seed, n_candidates], outputs=[outputs, community_icon, loading_icon, share_button]) # , share_button, community_icon, loading_icon - share_button.click(None, [], [], _js=share_js) - gr.HTML(''' -
      - - ''') - - with gr.Accordion("Additional information", open=False): - gr.HTML( - """ -
      - We build the model with data from AudioSet, Freesound and the BBC Sound Effects library. We share this demo under the UK copyright exception for data used in academic research.
      - This demo is strictly for research purposes only. For commercial use, please contact us.
      - """ - ) - -iface.queue(concurrency_count = 2) -iface.launch(debug=True) -# iface.launch(debug=True, share=True) \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/click/_textwrap.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/click/_textwrap.py deleted file mode 100644 index b47dcbd4264e86715adfae1c5124c288b67a983e..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/click/_textwrap.py +++ /dev/null @@ -1,49 +0,0 @@ -import textwrap -import typing as t -from contextlib import contextmanager - - -class TextWrapper(textwrap.TextWrapper): - def _handle_long_word( - self, - reversed_chunks: t.List[str], - cur_line: t.List[str], - cur_len: int, - width: int, - ) -> None: - space_left = max(width - cur_len, 1) - - if self.break_long_words: - last = reversed_chunks[-1] - cut = last[:space_left] - res = last[space_left:] - cur_line.append(cut) - reversed_chunks[-1] = res - elif not cur_line: - cur_line.append(reversed_chunks.pop()) - - @contextmanager - def extra_indent(self, indent: str) -> t.Iterator[None]: - old_initial_indent = self.initial_indent - old_subsequent_indent = self.subsequent_indent - self.initial_indent += indent - self.subsequent_indent += indent - - try: - yield - finally: - self.initial_indent = old_initial_indent - self.subsequent_indent = old_subsequent_indent - - def indent_only(self, text: str) -> str: - rv = [] - - for idx, line in enumerate(text.splitlines()): - indent = self.initial_indent - - if idx > 0: - indent = self.subsequent_indent - - rv.append(f"{indent}{line}") - - return "\n".join(rv) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/click/types.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/click/types.py deleted file mode 100644 index 2b1d1797f2e115e9bc976bcaf7d8e1884a91e91c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/click/types.py +++ /dev/null @@ -1,1089 +0,0 @@ -import os -import stat -import sys -import typing as t -from datetime import datetime -from gettext import gettext as _ -from gettext import ngettext - -from ._compat import _get_argv_encoding -from ._compat import open_stream -from .exceptions import BadParameter -from .utils import format_filename -from .utils import LazyFile -from .utils import safecall - -if t.TYPE_CHECKING: - import typing_extensions as te - from .core import Context - from .core import Parameter - from .shell_completion import CompletionItem - - -class ParamType: - """Represents the type of a parameter. Validates and converts values - from the command line or Python into the correct type. - - To implement a custom type, subclass and implement at least the - following: - - - The :attr:`name` class attribute must be set. - - Calling an instance of the type with ``None`` must return - ``None``. This is already implemented by default. - - :meth:`convert` must convert string values to the correct type. - - :meth:`convert` must accept values that are already the correct - type. - - It must be able to convert a value if the ``ctx`` and ``param`` - arguments are ``None``. This can occur when converting prompt - input. 
- """ - - is_composite: t.ClassVar[bool] = False - arity: t.ClassVar[int] = 1 - - #: the descriptive name of this type - name: str - - #: if a list of this type is expected and the value is pulled from a - #: string environment variable, this is what splits it up. `None` - #: means any whitespace. For all parameters the general rule is that - #: whitespace splits them up. The exception are paths and files which - #: are split by ``os.path.pathsep`` by default (":" on Unix and ";" on - #: Windows). - envvar_list_splitter: t.ClassVar[t.Optional[str]] = None - - def to_info_dict(self) -> t.Dict[str, t.Any]: - """Gather information that could be useful for a tool generating - user-facing documentation. - - Use :meth:`click.Context.to_info_dict` to traverse the entire - CLI structure. - - .. versionadded:: 8.0 - """ - # The class name without the "ParamType" suffix. - param_type = type(self).__name__.partition("ParamType")[0] - param_type = param_type.partition("ParameterType")[0] - - # Custom subclasses might not remember to set a name. - if hasattr(self, "name"): - name = self.name - else: - name = param_type - - return {"param_type": param_type, "name": name} - - def __call__( - self, - value: t.Any, - param: t.Optional["Parameter"] = None, - ctx: t.Optional["Context"] = None, - ) -> t.Any: - if value is not None: - return self.convert(value, param, ctx) - - def get_metavar(self, param: "Parameter") -> t.Optional[str]: - """Returns the metavar default for this param if it provides one.""" - - def get_missing_message(self, param: "Parameter") -> t.Optional[str]: - """Optionally might return extra information about a missing - parameter. - - .. versionadded:: 2.0 - """ - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - """Convert the value to the correct type. This is not called if - the value is ``None`` (the missing value). - - This must accept string values from the command line, as well as - values that are already the correct type. It may also convert - other compatible types. - - The ``param`` and ``ctx`` arguments may be ``None`` in certain - situations, such as when converting prompt input. - - If the value cannot be converted, call :meth:`fail` with a - descriptive message. - - :param value: The value to convert. - :param param: The parameter that is using this type to convert - its value. May be ``None``. - :param ctx: The current context that arrived at this value. May - be ``None``. - """ - return value - - def split_envvar_value(self, rv: str) -> t.Sequence[str]: - """Given a value from an environment variable this splits it up - into small chunks depending on the defined envvar list splitter. - - If the splitter is set to `None`, which means that whitespace splits, - then leading and trailing whitespace is ignored. Otherwise, leading - and trailing splitters usually lead to empty items being included. - """ - return (rv or "").split(self.envvar_list_splitter) - - def fail( - self, - message: str, - param: t.Optional["Parameter"] = None, - ctx: t.Optional["Context"] = None, - ) -> "t.NoReturn": - """Helper method to fail with an invalid value message.""" - raise BadParameter(message, ctx=ctx, param=param) - - def shell_complete( - self, ctx: "Context", param: "Parameter", incomplete: str - ) -> t.List["CompletionItem"]: - """Return a list of - :class:`~click.shell_completion.CompletionItem` objects for the - incomplete value. 
Most types do not provide completions, but - some do, and this allows custom types to provide custom - completions as well. - - :param ctx: Invocation context for this command. - :param param: The parameter that is requesting completion. - :param incomplete: Value being completed. May be empty. - - .. versionadded:: 8.0 - """ - return [] - - -class CompositeParamType(ParamType): - is_composite = True - - @property - def arity(self) -> int: # type: ignore - raise NotImplementedError() - - -class FuncParamType(ParamType): - def __init__(self, func: t.Callable[[t.Any], t.Any]) -> None: - self.name: str = func.__name__ - self.func = func - - def to_info_dict(self) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict() - info_dict["func"] = self.func - return info_dict - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - try: - return self.func(value) - except ValueError: - try: - value = str(value) - except UnicodeError: - value = value.decode("utf-8", "replace") - - self.fail(value, param, ctx) - - -class UnprocessedParamType(ParamType): - name = "text" - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - return value - - def __repr__(self) -> str: - return "UNPROCESSED" - - -class StringParamType(ParamType): - name = "text" - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - if isinstance(value, bytes): - enc = _get_argv_encoding() - try: - value = value.decode(enc) - except UnicodeError: - fs_enc = sys.getfilesystemencoding() - if fs_enc != enc: - try: - value = value.decode(fs_enc) - except UnicodeError: - value = value.decode("utf-8", "replace") - else: - value = value.decode("utf-8", "replace") - return value - return str(value) - - def __repr__(self) -> str: - return "STRING" - - -class Choice(ParamType): - """The choice type allows a value to be checked against a fixed set - of supported values. All of these values have to be strings. - - You should only pass a list or tuple of choices. Other iterables - (like generators) may lead to surprising results. - - The resulting value will always be one of the originally passed choices - regardless of ``case_sensitive`` or any ``ctx.token_normalize_func`` - being specified. - - See :ref:`choice-opts` for an example. - - :param case_sensitive: Set to false to make choices case - insensitive. Defaults to true. - """ - - name = "choice" - - def __init__(self, choices: t.Sequence[str], case_sensitive: bool = True) -> None: - self.choices = choices - self.case_sensitive = case_sensitive - - def to_info_dict(self) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict() - info_dict["choices"] = self.choices - info_dict["case_sensitive"] = self.case_sensitive - return info_dict - - def get_metavar(self, param: "Parameter") -> str: - choices_str = "|".join(self.choices) - - # Use curly braces to indicate a required argument. - if param.required and param.param_type_name == "argument": - return f"{{{choices_str}}}" - - # Use square braces to indicate an option or optional argument. 
- return f"[{choices_str}]" - - def get_missing_message(self, param: "Parameter") -> str: - return _("Choose from:\n\t{choices}").format(choices=",\n\t".join(self.choices)) - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - # Match through normalization and case sensitivity - # first do token_normalize_func, then lowercase - # preserve original `value` to produce an accurate message in - # `self.fail` - normed_value = value - normed_choices = {choice: choice for choice in self.choices} - - if ctx is not None and ctx.token_normalize_func is not None: - normed_value = ctx.token_normalize_func(value) - normed_choices = { - ctx.token_normalize_func(normed_choice): original - for normed_choice, original in normed_choices.items() - } - - if not self.case_sensitive: - normed_value = normed_value.casefold() - normed_choices = { - normed_choice.casefold(): original - for normed_choice, original in normed_choices.items() - } - - if normed_value in normed_choices: - return normed_choices[normed_value] - - choices_str = ", ".join(map(repr, self.choices)) - self.fail( - ngettext( - "{value!r} is not {choice}.", - "{value!r} is not one of {choices}.", - len(self.choices), - ).format(value=value, choice=choices_str, choices=choices_str), - param, - ctx, - ) - - def __repr__(self) -> str: - return f"Choice({list(self.choices)})" - - def shell_complete( - self, ctx: "Context", param: "Parameter", incomplete: str - ) -> t.List["CompletionItem"]: - """Complete choices that start with the incomplete value. - - :param ctx: Invocation context for this command. - :param param: The parameter that is requesting completion. - :param incomplete: Value being completed. May be empty. - - .. versionadded:: 8.0 - """ - from click.shell_completion import CompletionItem - - str_choices = map(str, self.choices) - - if self.case_sensitive: - matched = (c for c in str_choices if c.startswith(incomplete)) - else: - incomplete = incomplete.lower() - matched = (c for c in str_choices if c.lower().startswith(incomplete)) - - return [CompletionItem(c) for c in matched] - - -class DateTime(ParamType): - """The DateTime type converts date strings into `datetime` objects. - - The format strings which are checked are configurable, but default to some - common (non-timezone aware) ISO 8601 formats. - - When specifying *DateTime* formats, you should only pass a list or a tuple. - Other iterables, like generators, may lead to surprising results. - - The format strings are processed using ``datetime.strptime``, and this - consequently defines the format strings which are allowed. - - Parsing is tried using each format, in order, and the first format which - parses successfully is used. - - :param formats: A list or tuple of date format strings, in the order in - which they should be tried. Defaults to - ``'%Y-%m-%d'``, ``'%Y-%m-%dT%H:%M:%S'``, - ``'%Y-%m-%d %H:%M:%S'``. 
- """ - - name = "datetime" - - def __init__(self, formats: t.Optional[t.Sequence[str]] = None): - self.formats: t.Sequence[str] = formats or [ - "%Y-%m-%d", - "%Y-%m-%dT%H:%M:%S", - "%Y-%m-%d %H:%M:%S", - ] - - def to_info_dict(self) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict() - info_dict["formats"] = self.formats - return info_dict - - def get_metavar(self, param: "Parameter") -> str: - return f"[{'|'.join(self.formats)}]" - - def _try_to_convert_date(self, value: t.Any, format: str) -> t.Optional[datetime]: - try: - return datetime.strptime(value, format) - except ValueError: - return None - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - if isinstance(value, datetime): - return value - - for format in self.formats: - converted = self._try_to_convert_date(value, format) - - if converted is not None: - return converted - - formats_str = ", ".join(map(repr, self.formats)) - self.fail( - ngettext( - "{value!r} does not match the format {format}.", - "{value!r} does not match the formats {formats}.", - len(self.formats), - ).format(value=value, format=formats_str, formats=formats_str), - param, - ctx, - ) - - def __repr__(self) -> str: - return "DateTime" - - -class _NumberParamTypeBase(ParamType): - _number_class: t.ClassVar[t.Type[t.Any]] - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - try: - return self._number_class(value) - except ValueError: - self.fail( - _("{value!r} is not a valid {number_type}.").format( - value=value, number_type=self.name - ), - param, - ctx, - ) - - -class _NumberRangeBase(_NumberParamTypeBase): - def __init__( - self, - min: t.Optional[float] = None, - max: t.Optional[float] = None, - min_open: bool = False, - max_open: bool = False, - clamp: bool = False, - ) -> None: - self.min = min - self.max = max - self.min_open = min_open - self.max_open = max_open - self.clamp = clamp - - def to_info_dict(self) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict() - info_dict.update( - min=self.min, - max=self.max, - min_open=self.min_open, - max_open=self.max_open, - clamp=self.clamp, - ) - return info_dict - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - import operator - - rv = super().convert(value, param, ctx) - lt_min: bool = self.min is not None and ( - operator.le if self.min_open else operator.lt - )(rv, self.min) - gt_max: bool = self.max is not None and ( - operator.ge if self.max_open else operator.gt - )(rv, self.max) - - if self.clamp: - if lt_min: - return self._clamp(self.min, 1, self.min_open) # type: ignore - - if gt_max: - return self._clamp(self.max, -1, self.max_open) # type: ignore - - if lt_min or gt_max: - self.fail( - _("{value} is not in the range {range}.").format( - value=rv, range=self._describe_range() - ), - param, - ctx, - ) - - return rv - - def _clamp(self, bound: float, dir: "te.Literal[1, -1]", open: bool) -> float: - """Find the valid value to clamp to bound in the given - direction. - - :param bound: The boundary value. - :param dir: 1 or -1 indicating the direction to move. - :param open: If true, the range does not include the bound. 
- """ - raise NotImplementedError - - def _describe_range(self) -> str: - """Describe the range for use in help text.""" - if self.min is None: - op = "<" if self.max_open else "<=" - return f"x{op}{self.max}" - - if self.max is None: - op = ">" if self.min_open else ">=" - return f"x{op}{self.min}" - - lop = "<" if self.min_open else "<=" - rop = "<" if self.max_open else "<=" - return f"{self.min}{lop}x{rop}{self.max}" - - def __repr__(self) -> str: - clamp = " clamped" if self.clamp else "" - return f"<{type(self).__name__} {self._describe_range()}{clamp}>" - - -class IntParamType(_NumberParamTypeBase): - name = "integer" - _number_class = int - - def __repr__(self) -> str: - return "INT" - - -class IntRange(_NumberRangeBase, IntParamType): - """Restrict an :data:`click.INT` value to a range of accepted - values. See :ref:`ranges`. - - If ``min`` or ``max`` are not passed, any value is accepted in that - direction. If ``min_open`` or ``max_open`` are enabled, the - corresponding boundary is not included in the range. - - If ``clamp`` is enabled, a value outside the range is clamped to the - boundary instead of failing. - - .. versionchanged:: 8.0 - Added the ``min_open`` and ``max_open`` parameters. - """ - - name = "integer range" - - def _clamp( # type: ignore - self, bound: int, dir: "te.Literal[1, -1]", open: bool - ) -> int: - if not open: - return bound - - return bound + dir - - -class FloatParamType(_NumberParamTypeBase): - name = "float" - _number_class = float - - def __repr__(self) -> str: - return "FLOAT" - - -class FloatRange(_NumberRangeBase, FloatParamType): - """Restrict a :data:`click.FLOAT` value to a range of accepted - values. See :ref:`ranges`. - - If ``min`` or ``max`` are not passed, any value is accepted in that - direction. If ``min_open`` or ``max_open`` are enabled, the - corresponding boundary is not included in the range. - - If ``clamp`` is enabled, a value outside the range is clamped to the - boundary instead of failing. This is not supported if either - boundary is marked ``open``. - - .. versionchanged:: 8.0 - Added the ``min_open`` and ``max_open`` parameters. - """ - - name = "float range" - - def __init__( - self, - min: t.Optional[float] = None, - max: t.Optional[float] = None, - min_open: bool = False, - max_open: bool = False, - clamp: bool = False, - ) -> None: - super().__init__( - min=min, max=max, min_open=min_open, max_open=max_open, clamp=clamp - ) - - if (min_open or max_open) and clamp: - raise TypeError("Clamping is not supported for open bounds.") - - def _clamp(self, bound: float, dir: "te.Literal[1, -1]", open: bool) -> float: - if not open: - return bound - - # Could use Python 3.9's math.nextafter here, but clamping an - # open float range doesn't seem to be particularly useful. It's - # left up to the user to write a callback to do it if needed. 
- raise RuntimeError("Clamping is not supported for open bounds.") - - -class BoolParamType(ParamType): - name = "boolean" - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - if value in {False, True}: - return bool(value) - - norm = value.strip().lower() - - if norm in {"1", "true", "t", "yes", "y", "on"}: - return True - - if norm in {"0", "false", "f", "no", "n", "off"}: - return False - - self.fail( - _("{value!r} is not a valid boolean.").format(value=value), param, ctx - ) - - def __repr__(self) -> str: - return "BOOL" - - -class UUIDParameterType(ParamType): - name = "uuid" - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - import uuid - - if isinstance(value, uuid.UUID): - return value - - value = value.strip() - - try: - return uuid.UUID(value) - except ValueError: - self.fail( - _("{value!r} is not a valid UUID.").format(value=value), param, ctx - ) - - def __repr__(self) -> str: - return "UUID" - - -class File(ParamType): - """Declares a parameter to be a file for reading or writing. The file - is automatically closed once the context tears down (after the command - finished working). - - Files can be opened for reading or writing. The special value ``-`` - indicates stdin or stdout depending on the mode. - - By default, the file is opened for reading text data, but it can also be - opened in binary mode or for writing. The encoding parameter can be used - to force a specific encoding. - - The `lazy` flag controls if the file should be opened immediately or upon - first IO. The default is to be non-lazy for standard input and output - streams as well as files opened for reading, `lazy` otherwise. When opening a - file lazily for reading, it is still opened temporarily for validation, but - will not be held open until first IO. lazy is mainly useful when opening - for writing to avoid creating the file until it is needed. - - Starting with Click 2.0, files can also be opened atomically in which - case all writes go into a separate file in the same folder and upon - completion the file will be moved over to the original location. This - is useful if a file regularly read by other users is modified. - - See :ref:`file-args` for more information. 
- """ - - name = "filename" - envvar_list_splitter: t.ClassVar[str] = os.path.pathsep - - def __init__( - self, - mode: str = "r", - encoding: t.Optional[str] = None, - errors: t.Optional[str] = "strict", - lazy: t.Optional[bool] = None, - atomic: bool = False, - ) -> None: - self.mode = mode - self.encoding = encoding - self.errors = errors - self.lazy = lazy - self.atomic = atomic - - def to_info_dict(self) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict() - info_dict.update(mode=self.mode, encoding=self.encoding) - return info_dict - - def resolve_lazy_flag(self, value: "t.Union[str, os.PathLike[str]]") -> bool: - if self.lazy is not None: - return self.lazy - if os.fspath(value) == "-": - return False - elif "w" in self.mode: - return True - return False - - def convert( - self, - value: t.Union[str, "os.PathLike[str]", t.IO[t.Any]], - param: t.Optional["Parameter"], - ctx: t.Optional["Context"], - ) -> t.IO[t.Any]: - if _is_file_like(value): - return value - - value = t.cast("t.Union[str, os.PathLike[str]]", value) - - try: - lazy = self.resolve_lazy_flag(value) - - if lazy: - lf = LazyFile( - value, self.mode, self.encoding, self.errors, atomic=self.atomic - ) - - if ctx is not None: - ctx.call_on_close(lf.close_intelligently) - - return t.cast(t.IO[t.Any], lf) - - f, should_close = open_stream( - value, self.mode, self.encoding, self.errors, atomic=self.atomic - ) - - # If a context is provided, we automatically close the file - # at the end of the context execution (or flush out). If a - # context does not exist, it's the caller's responsibility to - # properly close the file. This for instance happens when the - # type is used with prompts. - if ctx is not None: - if should_close: - ctx.call_on_close(safecall(f.close)) - else: - ctx.call_on_close(safecall(f.flush)) - - return f - except OSError as e: # noqa: B014 - self.fail(f"'{format_filename(value)}': {e.strerror}", param, ctx) - - def shell_complete( - self, ctx: "Context", param: "Parameter", incomplete: str - ) -> t.List["CompletionItem"]: - """Return a special completion marker that tells the completion - system to use the shell to provide file path completions. - - :param ctx: Invocation context for this command. - :param param: The parameter that is requesting completion. - :param incomplete: Value being completed. May be empty. - - .. versionadded:: 8.0 - """ - from click.shell_completion import CompletionItem - - return [CompletionItem(incomplete, type="file")] - - -def _is_file_like(value: t.Any) -> "te.TypeGuard[t.IO[t.Any]]": - return hasattr(value, "read") or hasattr(value, "write") - - -class Path(ParamType): - """The ``Path`` type is similar to the :class:`File` type, but - returns the filename instead of an open file. Various checks can be - enabled to validate the type of file and permissions. - - :param exists: The file or directory needs to exist for the value to - be valid. If this is not set to ``True``, and the file does not - exist, then all further checks are silently skipped. - :param file_okay: Allow a file as a value. - :param dir_okay: Allow a directory as a value. - :param readable: if true, a readable check is performed. - :param writable: if true, a writable check is performed. - :param executable: if true, an executable check is performed. - :param resolve_path: Make the value absolute and resolve any - symlinks. A ``~`` is not expanded, as this is supposed to be - done by the shell only. 
- :param allow_dash: Allow a single dash as a value, which indicates - a standard stream (but does not open it). Use - :func:`~click.open_file` to handle opening this value. - :param path_type: Convert the incoming path value to this type. If - ``None``, keep Python's default, which is ``str``. Useful to - convert to :class:`pathlib.Path`. - - .. versionchanged:: 8.1 - Added the ``executable`` parameter. - - .. versionchanged:: 8.0 - Allow passing ``path_type=pathlib.Path``. - - .. versionchanged:: 6.0 - Added the ``allow_dash`` parameter. - """ - - envvar_list_splitter: t.ClassVar[str] = os.path.pathsep - - def __init__( - self, - exists: bool = False, - file_okay: bool = True, - dir_okay: bool = True, - writable: bool = False, - readable: bool = True, - resolve_path: bool = False, - allow_dash: bool = False, - path_type: t.Optional[t.Type[t.Any]] = None, - executable: bool = False, - ): - self.exists = exists - self.file_okay = file_okay - self.dir_okay = dir_okay - self.readable = readable - self.writable = writable - self.executable = executable - self.resolve_path = resolve_path - self.allow_dash = allow_dash - self.type = path_type - - if self.file_okay and not self.dir_okay: - self.name: str = _("file") - elif self.dir_okay and not self.file_okay: - self.name = _("directory") - else: - self.name = _("path") - - def to_info_dict(self) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict() - info_dict.update( - exists=self.exists, - file_okay=self.file_okay, - dir_okay=self.dir_okay, - writable=self.writable, - readable=self.readable, - allow_dash=self.allow_dash, - ) - return info_dict - - def coerce_path_result( - self, value: "t.Union[str, os.PathLike[str]]" - ) -> "t.Union[str, bytes, os.PathLike[str]]": - if self.type is not None and not isinstance(value, self.type): - if self.type is str: - return os.fsdecode(value) - elif self.type is bytes: - return os.fsencode(value) - else: - return t.cast("os.PathLike[str]", self.type(value)) - - return value - - def convert( - self, - value: "t.Union[str, os.PathLike[str]]", - param: t.Optional["Parameter"], - ctx: t.Optional["Context"], - ) -> "t.Union[str, bytes, os.PathLike[str]]": - rv = value - - is_dash = self.file_okay and self.allow_dash and rv in (b"-", "-") - - if not is_dash: - if self.resolve_path: - # os.path.realpath doesn't resolve symlinks on Windows - # until Python 3.8. Use pathlib for now. 
- import pathlib - - rv = os.fsdecode(pathlib.Path(rv).resolve()) - - try: - st = os.stat(rv) - except OSError: - if not self.exists: - return self.coerce_path_result(rv) - self.fail( - _("{name} {filename!r} does not exist.").format( - name=self.name.title(), filename=format_filename(value) - ), - param, - ctx, - ) - - if not self.file_okay and stat.S_ISREG(st.st_mode): - self.fail( - _("{name} {filename!r} is a file.").format( - name=self.name.title(), filename=format_filename(value) - ), - param, - ctx, - ) - if not self.dir_okay and stat.S_ISDIR(st.st_mode): - self.fail( - _("{name} '{filename}' is a directory.").format( - name=self.name.title(), filename=format_filename(value) - ), - param, - ctx, - ) - - if self.readable and not os.access(rv, os.R_OK): - self.fail( - _("{name} {filename!r} is not readable.").format( - name=self.name.title(), filename=format_filename(value) - ), - param, - ctx, - ) - - if self.writable and not os.access(rv, os.W_OK): - self.fail( - _("{name} {filename!r} is not writable.").format( - name=self.name.title(), filename=format_filename(value) - ), - param, - ctx, - ) - - if self.executable and not os.access(value, os.X_OK): - self.fail( - _("{name} {filename!r} is not executable.").format( - name=self.name.title(), filename=format_filename(value) - ), - param, - ctx, - ) - - return self.coerce_path_result(rv) - - def shell_complete( - self, ctx: "Context", param: "Parameter", incomplete: str - ) -> t.List["CompletionItem"]: - """Return a special completion marker that tells the completion - system to use the shell to provide path completions for only - directories or any paths. - - :param ctx: Invocation context for this command. - :param param: The parameter that is requesting completion. - :param incomplete: Value being completed. May be empty. - - .. versionadded:: 8.0 - """ - from click.shell_completion import CompletionItem - - type = "dir" if self.dir_okay and not self.file_okay else "file" - return [CompletionItem(incomplete, type=type)] - - -class Tuple(CompositeParamType): - """The default behavior of Click is to apply a type on a value directly. - This works well in most cases, except for when `nargs` is set to a fixed - count and different types should be used for different items. In this - case the :class:`Tuple` type can be used. This type can only be used - if `nargs` is set to a fixed number. - - For more information see :ref:`tuple-type`. - - This can be selected by using a Python tuple literal as a type. - - :param types: a list of types that should be used for the tuple items. 
- """ - - def __init__(self, types: t.Sequence[t.Union[t.Type[t.Any], ParamType]]) -> None: - self.types: t.Sequence[ParamType] = [convert_type(ty) for ty in types] - - def to_info_dict(self) -> t.Dict[str, t.Any]: - info_dict = super().to_info_dict() - info_dict["types"] = [t.to_info_dict() for t in self.types] - return info_dict - - @property - def name(self) -> str: # type: ignore - return f"<{' '.join(ty.name for ty in self.types)}>" - - @property - def arity(self) -> int: # type: ignore - return len(self.types) - - def convert( - self, value: t.Any, param: t.Optional["Parameter"], ctx: t.Optional["Context"] - ) -> t.Any: - len_type = len(self.types) - len_value = len(value) - - if len_value != len_type: - self.fail( - ngettext( - "{len_type} values are required, but {len_value} was given.", - "{len_type} values are required, but {len_value} were given.", - len_value, - ).format(len_type=len_type, len_value=len_value), - param=param, - ctx=ctx, - ) - - return tuple(ty(x, param, ctx) for ty, x in zip(self.types, value)) - - -def convert_type(ty: t.Optional[t.Any], default: t.Optional[t.Any] = None) -> ParamType: - """Find the most appropriate :class:`ParamType` for the given Python - type. If the type isn't provided, it can be inferred from a default - value. - """ - guessed_type = False - - if ty is None and default is not None: - if isinstance(default, (tuple, list)): - # If the default is empty, ty will remain None and will - # return STRING. - if default: - item = default[0] - - # A tuple of tuples needs to detect the inner types. - # Can't call convert recursively because that would - # incorrectly unwind the tuple to a single type. - if isinstance(item, (tuple, list)): - ty = tuple(map(type, item)) - else: - ty = type(item) - else: - ty = type(default) - - guessed_type = True - - if isinstance(ty, tuple): - return Tuple(ty) - - if isinstance(ty, ParamType): - return ty - - if ty is str or ty is None: - return STRING - - if ty is int: - return INT - - if ty is float: - return FLOAT - - if ty is bool: - return BOOL - - if guessed_type: - return STRING - - if __debug__: - try: - if issubclass(ty, ParamType): - raise AssertionError( - f"Attempted to use an uninstantiated parameter type ({ty})." - ) - except TypeError: - # ty is an instance (correct), so issubclass fails. - pass - - return FuncParamType(ty) - - -#: A dummy parameter type that just does nothing. From a user's -#: perspective this appears to just be the same as `STRING` but -#: internally no string conversion takes place if the input was bytes. -#: This is usually useful when working with file paths as they can -#: appear in bytes and unicode. -#: -#: For path related uses the :class:`Path` type is a better choice but -#: there are situations where an unprocessed type is useful which is why -#: it is is provided. -#: -#: .. versionadded:: 4.0 -UNPROCESSED = UnprocessedParamType() - -#: A unicode string parameter type which is the implicit default. This -#: can also be selected by using ``str`` as type. -STRING = StringParamType() - -#: An integer parameter. This can also be selected by using ``int`` as -#: type. -INT = IntParamType() - -#: A floating point value parameter. This can also be selected by using -#: ``float`` as type. -FLOAT = FloatParamType() - -#: A boolean parameter. This is the default for boolean flags. This can -#: also be selected by using ``bool`` as a type. -BOOL = BoolParamType() - -#: A UUID parameter. 
-UUID = UUIDParameterType() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/tests/test_connection.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/tests/test_connection.py deleted file mode 100644 index 73a27b98bebd949cb3b99e19a3a8a484455b58d7..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/h11/tests/test_connection.py +++ /dev/null @@ -1,1122 +0,0 @@ -from typing import Any, cast, Dict, List, Optional, Tuple, Type - -import pytest - -from .._connection import _body_framing, _keep_alive, Connection, NEED_DATA, PAUSED -from .._events import ( - ConnectionClosed, - Data, - EndOfMessage, - Event, - InformationalResponse, - Request, - Response, -) -from .._state import ( - CLIENT, - CLOSED, - DONE, - ERROR, - IDLE, - MIGHT_SWITCH_PROTOCOL, - MUST_CLOSE, - SEND_BODY, - SEND_RESPONSE, - SERVER, - SWITCHED_PROTOCOL, -) -from .._util import LocalProtocolError, RemoteProtocolError, Sentinel -from .helpers import ConnectionPair, get_all_events, receive_and_get - - -def test__keep_alive() -> None: - assert _keep_alive( - Request(method="GET", target="/", headers=[("Host", "Example.com")]) - ) - assert not _keep_alive( - Request( - method="GET", - target="/", - headers=[("Host", "Example.com"), ("Connection", "close")], - ) - ) - assert not _keep_alive( - Request( - method="GET", - target="/", - headers=[("Host", "Example.com"), ("Connection", "a, b, cLOse, foo")], - ) - ) - assert not _keep_alive( - Request(method="GET", target="/", headers=[], http_version="1.0") # type: ignore[arg-type] - ) - - assert _keep_alive(Response(status_code=200, headers=[])) # type: ignore[arg-type] - assert not _keep_alive(Response(status_code=200, headers=[("Connection", "close")])) - assert not _keep_alive( - Response(status_code=200, headers=[("Connection", "a, b, cLOse, foo")]) - ) - assert not _keep_alive(Response(status_code=200, headers=[], http_version="1.0")) # type: ignore[arg-type] - - -def test__body_framing() -> None: - def headers(cl: Optional[int], te: bool) -> List[Tuple[str, str]]: - headers = [] - if cl is not None: - headers.append(("Content-Length", str(cl))) - if te: - headers.append(("Transfer-Encoding", "chunked")) - return headers - - def resp( - status_code: int = 200, cl: Optional[int] = None, te: bool = False - ) -> Response: - return Response(status_code=status_code, headers=headers(cl, te)) - - def req(cl: Optional[int] = None, te: bool = False) -> Request: - h = headers(cl, te) - h += [("Host", "example.com")] - return Request(method="GET", target="/", headers=h) - - # Special cases where the headers are ignored: - for kwargs in [{}, {"cl": 100}, {"te": True}, {"cl": 100, "te": True}]: - kwargs = cast(Dict[str, Any], kwargs) - for meth, r in [ - (b"HEAD", resp(**kwargs)), - (b"GET", resp(status_code=204, **kwargs)), - (b"GET", resp(status_code=304, **kwargs)), - ]: - assert _body_framing(meth, r) == ("content-length", (0,)) - - # Transfer-encoding - for kwargs in [{"te": True}, {"cl": 100, "te": True}]: - kwargs = cast(Dict[str, Any], kwargs) - for meth, r in [(None, req(**kwargs)), (b"GET", resp(**kwargs))]: # type: ignore - assert _body_framing(meth, r) == ("chunked", ()) - - # Content-Length - for meth, r in [(None, req(cl=100)), (b"GET", resp(cl=100))]: # type: ignore - assert _body_framing(meth, r) == ("content-length", (100,)) - - # No headers - assert _body_framing(None, req()) == ("content-length", (0,)) # type: ignore - assert 
_body_framing(b"GET", resp()) == ("http/1.0", ()) - - -def test_Connection_basics_and_content_length() -> None: - with pytest.raises(ValueError): - Connection("CLIENT") # type: ignore - - p = ConnectionPair() - assert p.conn[CLIENT].our_role is CLIENT - assert p.conn[CLIENT].their_role is SERVER - assert p.conn[SERVER].our_role is SERVER - assert p.conn[SERVER].their_role is CLIENT - - data = p.send( - CLIENT, - Request( - method="GET", - target="/", - headers=[("Host", "example.com"), ("Content-Length", "10")], - ), - ) - assert data == ( - b"GET / HTTP/1.1\r\n" b"Host: example.com\r\n" b"Content-Length: 10\r\n\r\n" - ) - - for conn in p.conns: - assert conn.states == {CLIENT: SEND_BODY, SERVER: SEND_RESPONSE} - assert p.conn[CLIENT].our_state is SEND_BODY - assert p.conn[CLIENT].their_state is SEND_RESPONSE - assert p.conn[SERVER].our_state is SEND_RESPONSE - assert p.conn[SERVER].their_state is SEND_BODY - - assert p.conn[CLIENT].their_http_version is None - assert p.conn[SERVER].their_http_version == b"1.1" - - data = p.send(SERVER, InformationalResponse(status_code=100, headers=[])) # type: ignore[arg-type] - assert data == b"HTTP/1.1 100 \r\n\r\n" - - data = p.send(SERVER, Response(status_code=200, headers=[("Content-Length", "11")])) - assert data == b"HTTP/1.1 200 \r\nContent-Length: 11\r\n\r\n" - - for conn in p.conns: - assert conn.states == {CLIENT: SEND_BODY, SERVER: SEND_BODY} - - assert p.conn[CLIENT].their_http_version == b"1.1" - assert p.conn[SERVER].their_http_version == b"1.1" - - data = p.send(CLIENT, Data(data=b"12345")) - assert data == b"12345" - data = p.send( - CLIENT, Data(data=b"67890"), expect=[Data(data=b"67890"), EndOfMessage()] - ) - assert data == b"67890" - data = p.send(CLIENT, EndOfMessage(), expect=[]) - assert data == b"" - - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: SEND_BODY} - - data = p.send(SERVER, Data(data=b"1234567890")) - assert data == b"1234567890" - data = p.send(SERVER, Data(data=b"1"), expect=[Data(data=b"1"), EndOfMessage()]) - assert data == b"1" - data = p.send(SERVER, EndOfMessage(), expect=[]) - assert data == b"" - - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: DONE} - - -def test_chunked() -> None: - p = ConnectionPair() - - p.send( - CLIENT, - Request( - method="GET", - target="/", - headers=[("Host", "example.com"), ("Transfer-Encoding", "chunked")], - ), - ) - data = p.send(CLIENT, Data(data=b"1234567890", chunk_start=True, chunk_end=True)) - assert data == b"a\r\n1234567890\r\n" - data = p.send(CLIENT, Data(data=b"abcde", chunk_start=True, chunk_end=True)) - assert data == b"5\r\nabcde\r\n" - data = p.send(CLIENT, Data(data=b""), expect=[]) - assert data == b"" - data = p.send(CLIENT, EndOfMessage(headers=[("hello", "there")])) - assert data == b"0\r\nhello: there\r\n\r\n" - - p.send( - SERVER, Response(status_code=200, headers=[("Transfer-Encoding", "chunked")]) - ) - p.send(SERVER, Data(data=b"54321", chunk_start=True, chunk_end=True)) - p.send(SERVER, Data(data=b"12345", chunk_start=True, chunk_end=True)) - p.send(SERVER, EndOfMessage()) - - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: DONE} - - -def test_chunk_boundaries() -> None: - conn = Connection(our_role=SERVER) - - request = ( - b"POST / HTTP/1.1\r\n" - b"Host: example.com\r\n" - b"Transfer-Encoding: chunked\r\n" - b"\r\n" - ) - conn.receive_data(request) - assert conn.next_event() == Request( - method="POST", - target="/", - headers=[("Host", "example.com"), ("Transfer-Encoding", "chunked")], - ) 
- assert conn.next_event() is NEED_DATA - - conn.receive_data(b"5\r\nhello\r\n") - assert conn.next_event() == Data(data=b"hello", chunk_start=True, chunk_end=True) - - conn.receive_data(b"5\r\nhel") - assert conn.next_event() == Data(data=b"hel", chunk_start=True, chunk_end=False) - - conn.receive_data(b"l") - assert conn.next_event() == Data(data=b"l", chunk_start=False, chunk_end=False) - - conn.receive_data(b"o\r\n") - assert conn.next_event() == Data(data=b"o", chunk_start=False, chunk_end=True) - - conn.receive_data(b"5\r\nhello") - assert conn.next_event() == Data(data=b"hello", chunk_start=True, chunk_end=True) - - conn.receive_data(b"\r\n") - assert conn.next_event() == NEED_DATA - - conn.receive_data(b"0\r\n\r\n") - assert conn.next_event() == EndOfMessage() - - -def test_client_talking_to_http10_server() -> None: - c = Connection(CLIENT) - c.send(Request(method="GET", target="/", headers=[("Host", "example.com")])) - c.send(EndOfMessage()) - assert c.our_state is DONE - # No content-length, so Http10 framing for body - assert receive_and_get(c, b"HTTP/1.0 200 OK\r\n\r\n") == [ - Response(status_code=200, headers=[], http_version="1.0", reason=b"OK") # type: ignore[arg-type] - ] - assert c.our_state is MUST_CLOSE - assert receive_and_get(c, b"12345") == [Data(data=b"12345")] - assert receive_and_get(c, b"67890") == [Data(data=b"67890")] - assert receive_and_get(c, b"") == [EndOfMessage(), ConnectionClosed()] - assert c.their_state is CLOSED - - -def test_server_talking_to_http10_client() -> None: - c = Connection(SERVER) - # No content-length, so no body - # NB: no host header - assert receive_and_get(c, b"GET / HTTP/1.0\r\n\r\n") == [ - Request(method="GET", target="/", headers=[], http_version="1.0"), # type: ignore[arg-type] - EndOfMessage(), - ] - assert c.their_state is MUST_CLOSE - - # We automatically Connection: close back at them - assert ( - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - == b"HTTP/1.1 200 \r\nConnection: close\r\n\r\n" - ) - - assert c.send(Data(data=b"12345")) == b"12345" - assert c.send(EndOfMessage()) == b"" - assert c.our_state is MUST_CLOSE - - # Check that it works if they do send Content-Length - c = Connection(SERVER) - # NB: no host header - assert receive_and_get(c, b"POST / HTTP/1.0\r\nContent-Length: 10\r\n\r\n1") == [ - Request( - method="POST", - target="/", - headers=[("Content-Length", "10")], - http_version="1.0", - ), - Data(data=b"1"), - ] - assert receive_and_get(c, b"234567890") == [Data(data=b"234567890"), EndOfMessage()] - assert c.their_state is MUST_CLOSE - assert receive_and_get(c, b"") == [ConnectionClosed()] - - -def test_automatic_transfer_encoding_in_response() -> None: - # Check that in responses, the user can specify either Transfer-Encoding: - # chunked or no framing at all, and in both cases we automatically select - # the right option depending on whether the peer speaks HTTP/1.0 or - # HTTP/1.1 - for user_headers in [ - [("Transfer-Encoding", "chunked")], - [], - # In fact, this even works if Content-Length is set, - # because if both are set then Transfer-Encoding wins - [("Transfer-Encoding", "chunked"), ("Content-Length", "100")], - ]: - user_headers = cast(List[Tuple[str, str]], user_headers) - p = ConnectionPair() - p.send( - CLIENT, - [ - Request(method="GET", target="/", headers=[("Host", "example.com")]), - EndOfMessage(), - ], - ) - # When speaking to HTTP/1.1 client, all of the above cases get - # normalized to Transfer-Encoding: chunked - p.send( - SERVER, - 
Response(status_code=200, headers=user_headers), - expect=Response( - status_code=200, headers=[("Transfer-Encoding", "chunked")] - ), - ) - - # When speaking to HTTP/1.0 client, all of the above cases get - # normalized to no-framing-headers - c = Connection(SERVER) - receive_and_get(c, b"GET / HTTP/1.0\r\n\r\n") - assert ( - c.send(Response(status_code=200, headers=user_headers)) - == b"HTTP/1.1 200 \r\nConnection: close\r\n\r\n" - ) - assert c.send(Data(data=b"12345")) == b"12345" - - -def test_automagic_connection_close_handling() -> None: - p = ConnectionPair() - # If the user explicitly sets Connection: close, then we notice and - # respect it - p.send( - CLIENT, - [ - Request( - method="GET", - target="/", - headers=[("Host", "example.com"), ("Connection", "close")], - ), - EndOfMessage(), - ], - ) - for conn in p.conns: - assert conn.states[CLIENT] is MUST_CLOSE - # And if the client sets it, the server automatically echoes it back - p.send( - SERVER, - # no header here... - [Response(status_code=204, headers=[]), EndOfMessage()], # type: ignore[arg-type] - # ...but oh look, it arrived anyway - expect=[ - Response(status_code=204, headers=[("connection", "close")]), - EndOfMessage(), - ], - ) - for conn in p.conns: - assert conn.states == {CLIENT: MUST_CLOSE, SERVER: MUST_CLOSE} - - -def test_100_continue() -> None: - def setup() -> ConnectionPair: - p = ConnectionPair() - p.send( - CLIENT, - Request( - method="GET", - target="/", - headers=[ - ("Host", "example.com"), - ("Content-Length", "100"), - ("Expect", "100-continue"), - ], - ), - ) - for conn in p.conns: - assert conn.client_is_waiting_for_100_continue - assert not p.conn[CLIENT].they_are_waiting_for_100_continue - assert p.conn[SERVER].they_are_waiting_for_100_continue - return p - - # Disabled by 100 Continue - p = setup() - p.send(SERVER, InformationalResponse(status_code=100, headers=[])) # type: ignore[arg-type] - for conn in p.conns: - assert not conn.client_is_waiting_for_100_continue - assert not conn.they_are_waiting_for_100_continue - - # Disabled by a real response - p = setup() - p.send( - SERVER, Response(status_code=200, headers=[("Transfer-Encoding", "chunked")]) - ) - for conn in p.conns: - assert not conn.client_is_waiting_for_100_continue - assert not conn.they_are_waiting_for_100_continue - - # Disabled by the client going ahead and sending stuff anyway - p = setup() - p.send(CLIENT, Data(data=b"12345")) - for conn in p.conns: - assert not conn.client_is_waiting_for_100_continue - assert not conn.they_are_waiting_for_100_continue - - -def test_max_incomplete_event_size_countermeasure() -> None: - # Infinitely long headers are definitely not okay - c = Connection(SERVER) - c.receive_data(b"GET / HTTP/1.0\r\nEndless: ") - assert c.next_event() is NEED_DATA - with pytest.raises(RemoteProtocolError): - while True: - c.receive_data(b"a" * 1024) - c.next_event() - - # Checking that the same header is accepted / rejected depending on the - # max_incomplete_event_size setting: - c = Connection(SERVER, max_incomplete_event_size=5000) - c.receive_data(b"GET / HTTP/1.0\r\nBig: ") - c.receive_data(b"a" * 4000) - c.receive_data(b"\r\n\r\n") - assert get_all_events(c) == [ - Request( - method="GET", target="/", http_version="1.0", headers=[("big", "a" * 4000)] - ), - EndOfMessage(), - ] - - c = Connection(SERVER, max_incomplete_event_size=4000) - c.receive_data(b"GET / HTTP/1.0\r\nBig: ") - c.receive_data(b"a" * 4000) - with pytest.raises(RemoteProtocolError): - c.next_event() - - # Temporarily exceeding the size 
limit is fine, as long as its done with - # complete events: - c = Connection(SERVER, max_incomplete_event_size=5000) - c.receive_data(b"GET / HTTP/1.0\r\nContent-Length: 10000") - c.receive_data(b"\r\n\r\n" + b"a" * 10000) - assert get_all_events(c) == [ - Request( - method="GET", - target="/", - http_version="1.0", - headers=[("Content-Length", "10000")], - ), - Data(data=b"a" * 10000), - EndOfMessage(), - ] - - c = Connection(SERVER, max_incomplete_event_size=100) - # Two pipelined requests to create a way-too-big receive buffer... but - # it's fine because we're not checking - c.receive_data( - b"GET /1 HTTP/1.1\r\nHost: a\r\n\r\n" - b"GET /2 HTTP/1.1\r\nHost: b\r\n\r\n" + b"X" * 1000 - ) - assert get_all_events(c) == [ - Request(method="GET", target="/1", headers=[("host", "a")]), - EndOfMessage(), - ] - # Even more data comes in, still no problem - c.receive_data(b"X" * 1000) - # We can respond and reuse to get the second pipelined request - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - c.start_next_cycle() - assert get_all_events(c) == [ - Request(method="GET", target="/2", headers=[("host", "b")]), - EndOfMessage(), - ] - # But once we unpause and try to read the next message, and find that it's - # incomplete and the buffer is *still* way too large, then *that's* a - # problem: - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - c.start_next_cycle() - with pytest.raises(RemoteProtocolError): - c.next_event() - - -def test_reuse_simple() -> None: - p = ConnectionPair() - p.send( - CLIENT, - [Request(method="GET", target="/", headers=[("Host", "a")]), EndOfMessage()], - ) - p.send( - SERVER, - [ - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - EndOfMessage(), - ], - ) - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: DONE} - conn.start_next_cycle() - - p.send( - CLIENT, - [ - Request(method="DELETE", target="/foo", headers=[("Host", "a")]), - EndOfMessage(), - ], - ) - p.send( - SERVER, - [ - Response(status_code=404, headers=[(b"transfer-encoding", b"chunked")]), - EndOfMessage(), - ], - ) - - -def test_pipelining() -> None: - # Client doesn't support pipelining, so we have to do this by hand - c = Connection(SERVER) - assert c.next_event() is NEED_DATA - # 3 requests all bunched up - c.receive_data( - b"GET /1 HTTP/1.1\r\nHost: a.com\r\nContent-Length: 5\r\n\r\n" - b"12345" - b"GET /2 HTTP/1.1\r\nHost: a.com\r\nContent-Length: 5\r\n\r\n" - b"67890" - b"GET /3 HTTP/1.1\r\nHost: a.com\r\n\r\n" - ) - assert get_all_events(c) == [ - Request( - method="GET", - target="/1", - headers=[("Host", "a.com"), ("Content-Length", "5")], - ), - Data(data=b"12345"), - EndOfMessage(), - ] - assert c.their_state is DONE - assert c.our_state is SEND_RESPONSE - - assert c.next_event() is PAUSED - - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - assert c.their_state is DONE - assert c.our_state is DONE - - c.start_next_cycle() - - assert get_all_events(c) == [ - Request( - method="GET", - target="/2", - headers=[("Host", "a.com"), ("Content-Length", "5")], - ), - Data(data=b"67890"), - EndOfMessage(), - ] - assert c.next_event() is PAUSED - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - c.start_next_cycle() - - assert get_all_events(c) == [ - Request(method="GET", target="/3", headers=[("Host", "a.com")]), - EndOfMessage(), - ] - # Doesn't pause this time, 
no trailing data - assert c.next_event() is NEED_DATA - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - - # Arrival of more data triggers pause - assert c.next_event() is NEED_DATA - c.receive_data(b"SADF") - assert c.next_event() is PAUSED - assert c.trailing_data == (b"SADF", False) - # If EOF arrives while paused, we don't see that either: - c.receive_data(b"") - assert c.trailing_data == (b"SADF", True) - assert c.next_event() is PAUSED - c.receive_data(b"") - assert c.next_event() is PAUSED - # Can't call receive_data with non-empty buf after closing it - with pytest.raises(RuntimeError): - c.receive_data(b"FDSA") - - -def test_protocol_switch() -> None: - for (req, deny, accept) in [ - ( - Request( - method="CONNECT", - target="example.com:443", - headers=[("Host", "foo"), ("Content-Length", "1")], - ), - Response(status_code=404, headers=[(b"transfer-encoding", b"chunked")]), - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - ), - ( - Request( - method="GET", - target="/", - headers=[("Host", "foo"), ("Content-Length", "1"), ("Upgrade", "a, b")], - ), - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - InformationalResponse(status_code=101, headers=[("Upgrade", "a")]), - ), - ( - Request( - method="CONNECT", - target="example.com:443", - headers=[("Host", "foo"), ("Content-Length", "1"), ("Upgrade", "a, b")], - ), - Response(status_code=404, headers=[(b"transfer-encoding", b"chunked")]), - # Accept CONNECT, not upgrade - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - ), - ( - Request( - method="CONNECT", - target="example.com:443", - headers=[("Host", "foo"), ("Content-Length", "1"), ("Upgrade", "a, b")], - ), - Response(status_code=404, headers=[(b"transfer-encoding", b"chunked")]), - # Accept Upgrade, not CONNECT - InformationalResponse(status_code=101, headers=[("Upgrade", "b")]), - ), - ]: - - def setup() -> ConnectionPair: - p = ConnectionPair() - p.send(CLIENT, req) - # No switch-related state change stuff yet; the client has to - # finish the request before that kicks in - for conn in p.conns: - assert conn.states[CLIENT] is SEND_BODY - p.send(CLIENT, [Data(data=b"1"), EndOfMessage()]) - for conn in p.conns: - assert conn.states[CLIENT] is MIGHT_SWITCH_PROTOCOL - assert p.conn[SERVER].next_event() is PAUSED - return p - - # Test deny case - p = setup() - p.send(SERVER, deny) - for conn in p.conns: - assert conn.states == {CLIENT: DONE, SERVER: SEND_BODY} - p.send(SERVER, EndOfMessage()) - # Check that re-use is still allowed after a denial - for conn in p.conns: - conn.start_next_cycle() - - # Test accept case - p = setup() - p.send(SERVER, accept) - for conn in p.conns: - assert conn.states == {CLIENT: SWITCHED_PROTOCOL, SERVER: SWITCHED_PROTOCOL} - conn.receive_data(b"123") - assert conn.next_event() is PAUSED - conn.receive_data(b"456") - assert conn.next_event() is PAUSED - assert conn.trailing_data == (b"123456", False) - - # Pausing in might-switch, then recovery - # (weird artificial case where the trailing data actually is valid - # HTTP for some reason, because this makes it easier to test the state - # logic) - p = setup() - sc = p.conn[SERVER] - sc.receive_data(b"GET / HTTP/1.0\r\n\r\n") - assert sc.next_event() is PAUSED - assert sc.trailing_data == (b"GET / HTTP/1.0\r\n\r\n", False) - sc.send(deny) - assert sc.next_event() is PAUSED - sc.send(EndOfMessage()) - sc.start_next_cycle() - assert get_all_events(sc) == [ - 
Request(method="GET", target="/", headers=[], http_version="1.0"), # type: ignore[arg-type] - EndOfMessage(), - ] - - # When we're DONE, have no trailing data, and the connection gets - # closed, we report ConnectionClosed(). When we're in might-switch or - # switched, we don't. - p = setup() - sc = p.conn[SERVER] - sc.receive_data(b"") - assert sc.next_event() is PAUSED - assert sc.trailing_data == (b"", True) - p.send(SERVER, accept) - assert sc.next_event() is PAUSED - - p = setup() - sc = p.conn[SERVER] - sc.receive_data(b"") - assert sc.next_event() is PAUSED - sc.send(deny) - assert sc.next_event() == ConnectionClosed() - - # You can't send after switching protocols, or while waiting for a - # protocol switch - p = setup() - with pytest.raises(LocalProtocolError): - p.conn[CLIENT].send( - Request(method="GET", target="/", headers=[("Host", "a")]) - ) - p = setup() - p.send(SERVER, accept) - with pytest.raises(LocalProtocolError): - p.conn[SERVER].send(Data(data=b"123")) - - -def test_close_simple() -> None: - # Just immediately closing a new connection without anything having - # happened yet. - for (who_shot_first, who_shot_second) in [(CLIENT, SERVER), (SERVER, CLIENT)]: - - def setup() -> ConnectionPair: - p = ConnectionPair() - p.send(who_shot_first, ConnectionClosed()) - for conn in p.conns: - assert conn.states == { - who_shot_first: CLOSED, - who_shot_second: MUST_CLOSE, - } - return p - - # You can keep putting b"" into a closed connection, and you keep - # getting ConnectionClosed() out: - p = setup() - assert p.conn[who_shot_second].next_event() == ConnectionClosed() - assert p.conn[who_shot_second].next_event() == ConnectionClosed() - p.conn[who_shot_second].receive_data(b"") - assert p.conn[who_shot_second].next_event() == ConnectionClosed() - # Second party can close... 
- p = setup() - p.send(who_shot_second, ConnectionClosed()) - for conn in p.conns: - assert conn.our_state is CLOSED - assert conn.their_state is CLOSED - # But trying to receive new data on a closed connection is a - # RuntimeError (not ProtocolError, because the problem here isn't - # violation of HTTP, it's violation of physics) - p = setup() - with pytest.raises(RuntimeError): - p.conn[who_shot_second].receive_data(b"123") - # And receiving new data on a MUST_CLOSE connection is a ProtocolError - p = setup() - p.conn[who_shot_first].receive_data(b"GET") - with pytest.raises(RemoteProtocolError): - p.conn[who_shot_first].next_event() - - -def test_close_different_states() -> None: - req = [ - Request(method="GET", target="/foo", headers=[("Host", "a")]), - EndOfMessage(), - ] - resp = [ - Response(status_code=200, headers=[(b"transfer-encoding", b"chunked")]), - EndOfMessage(), - ] - - # Client before request - p = ConnectionPair() - p.send(CLIENT, ConnectionClosed()) - for conn in p.conns: - assert conn.states == {CLIENT: CLOSED, SERVER: MUST_CLOSE} - - # Client after request - p = ConnectionPair() - p.send(CLIENT, req) - p.send(CLIENT, ConnectionClosed()) - for conn in p.conns: - assert conn.states == {CLIENT: CLOSED, SERVER: SEND_RESPONSE} - - # Server after request -> not allowed - p = ConnectionPair() - p.send(CLIENT, req) - with pytest.raises(LocalProtocolError): - p.conn[SERVER].send(ConnectionClosed()) - p.conn[CLIENT].receive_data(b"") - with pytest.raises(RemoteProtocolError): - p.conn[CLIENT].next_event() - - # Server after response - p = ConnectionPair() - p.send(CLIENT, req) - p.send(SERVER, resp) - p.send(SERVER, ConnectionClosed()) - for conn in p.conns: - assert conn.states == {CLIENT: MUST_CLOSE, SERVER: CLOSED} - - # Both after closing (ConnectionClosed() is idempotent) - p = ConnectionPair() - p.send(CLIENT, req) - p.send(SERVER, resp) - p.send(CLIENT, ConnectionClosed()) - p.send(SERVER, ConnectionClosed()) - p.send(CLIENT, ConnectionClosed()) - p.send(SERVER, ConnectionClosed()) - - # In the middle of sending -> not allowed - p = ConnectionPair() - p.send( - CLIENT, - Request( - method="GET", target="/", headers=[("Host", "a"), ("Content-Length", "10")] - ), - ) - with pytest.raises(LocalProtocolError): - p.conn[CLIENT].send(ConnectionClosed()) - p.conn[SERVER].receive_data(b"") - with pytest.raises(RemoteProtocolError): - p.conn[SERVER].next_event() - - -# Receive several requests and then client shuts down their side of the -# connection; we can respond to each -def test_pipelined_close() -> None: - c = Connection(SERVER) - # 2 requests then a close - c.receive_data( - b"GET /1 HTTP/1.1\r\nHost: a.com\r\nContent-Length: 5\r\n\r\n" - b"12345" - b"GET /2 HTTP/1.1\r\nHost: a.com\r\nContent-Length: 5\r\n\r\n" - b"67890" - ) - c.receive_data(b"") - assert get_all_events(c) == [ - Request( - method="GET", - target="/1", - headers=[("host", "a.com"), ("content-length", "5")], - ), - Data(data=b"12345"), - EndOfMessage(), - ] - assert c.states[CLIENT] is DONE - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - assert c.states[SERVER] is DONE - c.start_next_cycle() - assert get_all_events(c) == [ - Request( - method="GET", - target="/2", - headers=[("host", "a.com"), ("content-length", "5")], - ), - Data(data=b"67890"), - EndOfMessage(), - ConnectionClosed(), - ] - assert c.states == {CLIENT: CLOSED, SERVER: SEND_RESPONSE} - c.send(Response(status_code=200, headers=[])) # type: ignore[arg-type] - c.send(EndOfMessage()) - 
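    # The client closed its side after the second request, so once this final
    # response is sent the server is left in MUST_CLOSE and must close too: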
assert c.states == {CLIENT: CLOSED, SERVER: MUST_CLOSE} - c.send(ConnectionClosed()) - assert c.states == {CLIENT: CLOSED, SERVER: CLOSED} - - -def test_sendfile() -> None: - class SendfilePlaceholder: - def __len__(self) -> int: - return 10 - - placeholder = SendfilePlaceholder() - - def setup( - header: Tuple[str, str], http_version: str - ) -> Tuple[Connection, Optional[List[bytes]]]: - c = Connection(SERVER) - receive_and_get( - c, "GET / HTTP/{}\r\nHost: a\r\n\r\n".format(http_version).encode("ascii") - ) - headers = [] - if header: - headers.append(header) - c.send(Response(status_code=200, headers=headers)) - return c, c.send_with_data_passthrough(Data(data=placeholder)) # type: ignore - - c, data = setup(("Content-Length", "10"), "1.1") - assert data == [placeholder] # type: ignore - # Raises an error if the connection object doesn't think we've sent - # exactly 10 bytes - c.send(EndOfMessage()) - - _, data = setup(("Transfer-Encoding", "chunked"), "1.1") - assert placeholder in data # type: ignore - data[data.index(placeholder)] = b"x" * 10 # type: ignore - assert b"".join(data) == b"a\r\nxxxxxxxxxx\r\n" # type: ignore - - c, data = setup(None, "1.0") # type: ignore - assert data == [placeholder] # type: ignore - assert c.our_state is SEND_BODY - - -def test_errors() -> None: - # After a receive error, you can't receive - for role in [CLIENT, SERVER]: - c = Connection(our_role=role) - c.receive_data(b"gibberish\r\n\r\n") - with pytest.raises(RemoteProtocolError): - c.next_event() - # Now any attempt to receive continues to raise - assert c.their_state is ERROR - assert c.our_state is not ERROR - print(c._cstate.states) - with pytest.raises(RemoteProtocolError): - c.next_event() - # But we can still yell at the client for sending us gibberish - if role is SERVER: - assert ( - c.send(Response(status_code=400, headers=[])) # type: ignore[arg-type] - == b"HTTP/1.1 400 \r\nConnection: close\r\n\r\n" - ) - - # After an error sending, you can no longer send - # (This is especially important for things like content-length errors, - # where there's complex internal state being modified) - def conn(role: Type[Sentinel]) -> Connection: - c = Connection(our_role=role) - if role is SERVER: - # Put it into the state where it *could* send a response... 
- receive_and_get(c, b"GET / HTTP/1.0\r\n\r\n") - assert c.our_state is SEND_RESPONSE - return c - - for role in [CLIENT, SERVER]: - if role is CLIENT: - # This HTTP/1.0 request won't be detected as bad until after we go - # through the state machine and hit the writing code - good = Request(method="GET", target="/", headers=[("Host", "example.com")]) - bad = Request( - method="GET", - target="/", - headers=[("Host", "example.com")], - http_version="1.0", - ) - elif role is SERVER: - good = Response(status_code=200, headers=[]) # type: ignore[arg-type,assignment] - bad = Response(status_code=200, headers=[], http_version="1.0") # type: ignore[arg-type,assignment] - # Make sure 'good' actually is good - c = conn(role) - c.send(good) - assert c.our_state is not ERROR - # Do that again, but this time sending 'bad' first - c = conn(role) - with pytest.raises(LocalProtocolError): - c.send(bad) - assert c.our_state is ERROR - assert c.their_state is not ERROR - # Now 'good' is not so good - with pytest.raises(LocalProtocolError): - c.send(good) - - # And check send_failed() too - c = conn(role) - c.send_failed() - assert c.our_state is ERROR - assert c.their_state is not ERROR - # This is idempotent - c.send_failed() - assert c.our_state is ERROR - assert c.their_state is not ERROR - - -def test_idle_receive_nothing() -> None: - # At one point this incorrectly raised an error - for role in [CLIENT, SERVER]: - c = Connection(role) - assert c.next_event() is NEED_DATA - - -def test_connection_drop() -> None: - c = Connection(SERVER) - c.receive_data(b"GET /") - assert c.next_event() is NEED_DATA - c.receive_data(b"") - with pytest.raises(RemoteProtocolError): - c.next_event() - - -def test_408_request_timeout() -> None: - # Should be able to send this spontaneously as a server without seeing - # anything from client - p = ConnectionPair() - p.send(SERVER, Response(status_code=408, headers=[(b"connection", b"close")])) - - -# This used to raise IndexError -def test_empty_request() -> None: - c = Connection(SERVER) - c.receive_data(b"\r\n") - with pytest.raises(RemoteProtocolError): - c.next_event() - - -# This used to raise IndexError -def test_empty_response() -> None: - c = Connection(CLIENT) - c.send(Request(method="GET", target="/", headers=[("Host", "a")])) - c.receive_data(b"\r\n") - with pytest.raises(RemoteProtocolError): - c.next_event() - - -@pytest.mark.parametrize( - "data", - [ - b"\x00", - b"\x20", - b"\x16\x03\x01\x00\xa5", # Typical start of a TLS Client Hello - ], -) -def test_early_detection_of_invalid_request(data: bytes) -> None: - c = Connection(SERVER) - # Early detection should occur before even receiving a `\r\n` - c.receive_data(data) - with pytest.raises(RemoteProtocolError): - c.next_event() - - -@pytest.mark.parametrize( - "data", - [ - b"\x00", - b"\x20", - b"\x16\x03\x03\x00\x31", # Typical start of a TLS Server Hello - ], -) -def test_early_detection_of_invalid_response(data: bytes) -> None: - c = Connection(CLIENT) - # Early detection should occur before even receiving a `\r\n` - c.receive_data(data) - with pytest.raises(RemoteProtocolError): - c.next_event() - - -# This used to give different headers for HEAD and GET. -# The correct way to handle HEAD is to put whatever headers we *would* have -# put if it were a GET -- even though we know that for HEAD, those headers -# will be ignored. 
-def test_HEAD_framing_headers() -> None: - def setup(method: bytes, http_version: bytes) -> Connection: - c = Connection(SERVER) - c.receive_data( - method + b" / HTTP/" + http_version + b"\r\n" + b"Host: example.com\r\n\r\n" - ) - assert type(c.next_event()) is Request - assert type(c.next_event()) is EndOfMessage - return c - - for method in [b"GET", b"HEAD"]: - # No Content-Length, HTTP/1.1 peer, should use chunked - c = setup(method, b"1.1") - assert ( - c.send(Response(status_code=200, headers=[])) == b"HTTP/1.1 200 \r\n" # type: ignore[arg-type] - b"Transfer-Encoding: chunked\r\n\r\n" - ) - - # No Content-Length, HTTP/1.0 peer, frame with connection: close - c = setup(method, b"1.0") - assert ( - c.send(Response(status_code=200, headers=[])) == b"HTTP/1.1 200 \r\n" # type: ignore[arg-type] - b"Connection: close\r\n\r\n" - ) - - # Content-Length + Transfer-Encoding, TE wins - c = setup(method, b"1.1") - assert ( - c.send( - Response( - status_code=200, - headers=[ - ("Content-Length", "100"), - ("Transfer-Encoding", "chunked"), - ], - ) - ) - == b"HTTP/1.1 200 \r\n" - b"Transfer-Encoding: chunked\r\n\r\n" - ) - - -def test_special_exceptions_for_lost_connection_in_message_body() -> None: - c = Connection(SERVER) - c.receive_data( - b"POST / HTTP/1.1\r\n" b"Host: example.com\r\n" b"Content-Length: 100\r\n\r\n" - ) - assert type(c.next_event()) is Request - assert c.next_event() is NEED_DATA - c.receive_data(b"12345") - assert c.next_event() == Data(data=b"12345") - c.receive_data(b"") - with pytest.raises(RemoteProtocolError) as excinfo: - c.next_event() - assert "received 5 bytes" in str(excinfo.value) - assert "expected 100" in str(excinfo.value) - - c = Connection(SERVER) - c.receive_data( - b"POST / HTTP/1.1\r\n" - b"Host: example.com\r\n" - b"Transfer-Encoding: chunked\r\n\r\n" - ) - assert type(c.next_event()) is Request - assert c.next_event() is NEED_DATA - c.receive_data(b"8\r\n012345") - assert c.next_event().data == b"012345" # type: ignore - c.receive_data(b"") - with pytest.raises(RemoteProtocolError) as excinfo: - c.next_event() - assert "incomplete chunked read" in str(excinfo.value) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpx/_content.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpx/_content.py deleted file mode 100644 index b16e12d954327e7ecd5f05885bb8778a0fbfa047..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpx/_content.py +++ /dev/null @@ -1,238 +0,0 @@ -import inspect -import warnings -from json import dumps as json_dumps -from typing import ( - Any, - AsyncIterable, - AsyncIterator, - Dict, - Iterable, - Iterator, - Mapping, - Optional, - Tuple, - Union, -) -from urllib.parse import urlencode - -from ._exceptions import StreamClosed, StreamConsumed -from ._multipart import MultipartStream -from ._types import ( - AsyncByteStream, - RequestContent, - RequestData, - RequestFiles, - ResponseContent, - SyncByteStream, -) -from ._utils import peek_filelike_length, primitive_value_to_str - - -class ByteStream(AsyncByteStream, SyncByteStream): - def __init__(self, stream: bytes) -> None: - self._stream = stream - - def __iter__(self) -> Iterator[bytes]: - yield self._stream - - async def __aiter__(self) -> AsyncIterator[bytes]: - yield self._stream - - -class IteratorByteStream(SyncByteStream): - CHUNK_SIZE = 65_536 - - def __init__(self, stream: Iterable[bytes]): - self._stream = stream - 
self._is_stream_consumed = False - self._is_generator = inspect.isgenerator(stream) - - def __iter__(self) -> Iterator[bytes]: - if self._is_stream_consumed and self._is_generator: - raise StreamConsumed() - - self._is_stream_consumed = True - if hasattr(self._stream, "read"): - # File-like interfaces should use 'read' directly. - chunk = self._stream.read(self.CHUNK_SIZE) - while chunk: - yield chunk - chunk = self._stream.read(self.CHUNK_SIZE) - else: - # Otherwise iterate. - for part in self._stream: - yield part - - -class AsyncIteratorByteStream(AsyncByteStream): - CHUNK_SIZE = 65_536 - - def __init__(self, stream: AsyncIterable[bytes]): - self._stream = stream - self._is_stream_consumed = False - self._is_generator = inspect.isasyncgen(stream) - - async def __aiter__(self) -> AsyncIterator[bytes]: - if self._is_stream_consumed and self._is_generator: - raise StreamConsumed() - - self._is_stream_consumed = True - if hasattr(self._stream, "aread"): - # File-like interfaces should use 'aread' directly. - chunk = await self._stream.aread(self.CHUNK_SIZE) - while chunk: - yield chunk - chunk = await self._stream.aread(self.CHUNK_SIZE) - else: - # Otherwise iterate. - async for part in self._stream: - yield part - - -class UnattachedStream(AsyncByteStream, SyncByteStream): - """ - If a request or response is serialized using pickle, then it is no longer - attached to a stream for I/O purposes. Any stream operations should result - in `httpx.StreamClosed`. - """ - - def __iter__(self) -> Iterator[bytes]: - raise StreamClosed() - - async def __aiter__(self) -> AsyncIterator[bytes]: - raise StreamClosed() - yield b"" # pragma: no cover - - -def encode_content( - content: Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]] -) -> Tuple[Dict[str, str], Union[SyncByteStream, AsyncByteStream]]: - if isinstance(content, (bytes, str)): - body = content.encode("utf-8") if isinstance(content, str) else content - content_length = len(body) - headers = {"Content-Length": str(content_length)} if body else {} - return headers, ByteStream(body) - - elif isinstance(content, Iterable) and not isinstance(content, dict): - # `not isinstance(content, dict)` is a bit oddly specific, but it - # catches a case that's easy for users to make in error, and would - # otherwise pass through here, like any other bytes-iterable, - # because `dict` happens to be iterable. See issue #2491. 
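        # (For example, `content={"key": "value"}` passed by mistake instead of
        # form-encoded `data={...}` would otherwise be wrapped as a plain
        # iterable of chunks here instead of falling through to the TypeError
        # at the bottom of this function.)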
- content_length_or_none = peek_filelike_length(content) - - if content_length_or_none is None: - headers = {"Transfer-Encoding": "chunked"} - else: - headers = {"Content-Length": str(content_length_or_none)} - return headers, IteratorByteStream(content) # type: ignore - - elif isinstance(content, AsyncIterable): - headers = {"Transfer-Encoding": "chunked"} - return headers, AsyncIteratorByteStream(content) - - raise TypeError(f"Unexpected type for 'content', {type(content)!r}") - - -def encode_urlencoded_data( - data: RequestData, -) -> Tuple[Dict[str, str], ByteStream]: - plain_data = [] - for key, value in data.items(): - if isinstance(value, (list, tuple)): - plain_data.extend([(key, primitive_value_to_str(item)) for item in value]) - else: - plain_data.append((key, primitive_value_to_str(value))) - body = urlencode(plain_data, doseq=True).encode("utf-8") - content_length = str(len(body)) - content_type = "application/x-www-form-urlencoded" - headers = {"Content-Length": content_length, "Content-Type": content_type} - return headers, ByteStream(body) - - -def encode_multipart_data( - data: RequestData, files: RequestFiles, boundary: Optional[bytes] -) -> Tuple[Dict[str, str], MultipartStream]: - multipart = MultipartStream(data=data, files=files, boundary=boundary) - headers = multipart.get_headers() - return headers, multipart - - -def encode_text(text: str) -> Tuple[Dict[str, str], ByteStream]: - body = text.encode("utf-8") - content_length = str(len(body)) - content_type = "text/plain; charset=utf-8" - headers = {"Content-Length": content_length, "Content-Type": content_type} - return headers, ByteStream(body) - - -def encode_html(html: str) -> Tuple[Dict[str, str], ByteStream]: - body = html.encode("utf-8") - content_length = str(len(body)) - content_type = "text/html; charset=utf-8" - headers = {"Content-Length": content_length, "Content-Type": content_type} - return headers, ByteStream(body) - - -def encode_json(json: Any) -> Tuple[Dict[str, str], ByteStream]: - body = json_dumps(json).encode("utf-8") - content_length = str(len(body)) - content_type = "application/json" - headers = {"Content-Length": content_length, "Content-Type": content_type} - return headers, ByteStream(body) - - -def encode_request( - content: Optional[RequestContent] = None, - data: Optional[RequestData] = None, - files: Optional[RequestFiles] = None, - json: Optional[Any] = None, - boundary: Optional[bytes] = None, -) -> Tuple[Dict[str, str], Union[SyncByteStream, AsyncByteStream]]: - """ - Handles encoding the given `content`, `data`, `files`, and `json`, - returning a two-tuple of (, ). - """ - if data is not None and not isinstance(data, Mapping): - # We prefer to separate `content=` - # for raw request content, and `data=
      ` for url encoded or - # multipart form content. - # - # However for compat with requests, we *do* still support - # `data=` usages. We deal with that case here, treating it - # as if `content=<...>` had been supplied instead. - message = "Use 'content=<...>' to upload raw bytes/text content." - warnings.warn(message, DeprecationWarning) - return encode_content(data) - - if content is not None: - return encode_content(content) - elif files: - return encode_multipart_data(data or {}, files, boundary) - elif data: - return encode_urlencoded_data(data) - elif json is not None: - return encode_json(json) - - return {}, ByteStream(b"") - - -def encode_response( - content: Optional[ResponseContent] = None, - text: Optional[str] = None, - html: Optional[str] = None, - json: Optional[Any] = None, -) -> Tuple[Dict[str, str], Union[SyncByteStream, AsyncByteStream]]: - """ - Handles encoding the given `content`, returning a two-tuple of - (, ). - """ - if content is not None: - return encode_content(content) - elif text is not None: - return encode_text(text) - elif html is not None: - return encode_html(html) - elif json is not None: - return encode_json(json) - - return {}, ByteStream(b"") diff --git a/spaces/dcq/freegpt-webui/client/css/conversation.css b/spaces/dcq/freegpt-webui/client/css/conversation.css deleted file mode 100644 index 481ecc23746585a32d509f0d86a9b0136ef2efec..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/client/css/conversation.css +++ /dev/null @@ -1,137 +0,0 @@ -.conversation { - width: 60%; - margin: 0px 16px; - display: flex; - flex-direction: column; -} - -.conversation #messages { - width: 100%; - display: flex; - flex-direction: column; - overflow: auto; - overflow-wrap: break-word; - padding-bottom: 8px; -} - -.conversation .user-input { - max-height: 180px; - margin: 16px 0px; -} - -.conversation .user-input input { - font-size: 1rem; - background: none; - border: none; - outline: none; - color: var(--colour-3); -} - -.conversation .user-input input::placeholder { - color: var(--user-input); -} - -.conversation-title { - color: var(--colour-3); - font-size: 14px; -} - -.conversation .user-input textarea { - font-size: 1rem; - width: 100%; - height: 100%; - padding: 12px; - background: none; - border: none; - outline: none; - color: var(--colour-3); - resize: vertical; - max-height: 150px; - min-height: 80px; -} - -.box { - backdrop-filter: blur(20px); - -webkit-backdrop-filter: blur(20px); - background-color: var(--blur-bg); - height: 100%; - width: 100%; - border-radius: var(--border-radius-1); - border: 1px solid var(--blur-border); -} - -.input-box { - display: flex; - align-items: center; - padding: 8px; - cursor: pointer; -} - -#cursor { - line-height: 17px; - margin-left: 3px; - -webkit-animation: blink 0.8s infinite; - animation: blink 0.8s infinite; - width: 7px; - height: 15px; -} - -@keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -@-webkit-keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -/* scrollbar */ -.conversation #messages::-webkit-scrollbar { - width: 4px; - padding: 8px 0px; -} - -.conversation #messages::-webkit-scrollbar-track { - background-color: #ffffff00; -} - -.conversation #messages::-webkit-scrollbar-thumb { - background-color: #555555; - border-radius: 10px; -} - -@media screen and (max-width: 990px) { - .conversation { - width: 100%; - 
height: 90%; - } -} - -@media screen and (max-height: 720px) { - .conversation.box { - height: 70%; - } - - .conversation .user-input textarea { - font-size: 0.875rem; - } -} diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/text_to_image/train_text_to_image.py b/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/text_to_image/train_text_to_image.py deleted file mode 100644 index aba9020f58b651a8f3445b2ae1f5b1abeeba0fa7..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/text_to_image/train_text_to_image.py +++ /dev/null @@ -1,727 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -import argparse -import logging -import math -import os -import random -from pathlib import Path - -import datasets -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from datasets import load_dataset -from huggingface_hub import create_repo, upload_folder -from onnxruntime.training.ortmodule import ORTModule -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - -import diffusers -from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from diffusers.training_utils import EMAModel -from diffusers.utils import check_min_version -from diffusers.utils.import_utils import is_xformers_available - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.13.0.dev0") - -logger = get_logger(__name__, log_level="INFO") - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that 🤗 Datasets can understand." 
- ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." - ), - ) - parser.add_argument( - "--image_column", type=str, default="image", help="The column of the dataset containing an image." - ) - parser.add_argument( - "--caption_column", - type=str, - default="text", - help="The column of the dataset containing a caption or a list of captions.", - ) - parser.add_argument( - "--max_train_samples", - type=int, - default=None, - help=( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="sd-model-finetuned", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--random_flip", - action="store_true", - help="whether to randomly flip images horizontally", - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. 
Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.") - parser.add_argument( - "--non_ema_revision", - type=str, - default=None, - required=False, - help=( - "Revision of pretrained non-ema model identifier. Must be a branch, tag or git identifier of the local or" - " remote repository specified with --pretrained_model_name_or_path." - ), - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." 
- ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - # Sanity checks - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("Need either a dataset name or a training folder.") - - # default to using the same revision for the non-ema model if not specified - if args.non_ema_revision is None: - args.non_ema_revision = args.revision - - return args - - -dataset_name_mapping = { - "lambdalabs/pokemon-blip-captions": ("image", "text"), -} - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - - accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - accelerator_project_config=accelerator_project_config, - ) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load scheduler, tokenizer and models. 
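    # The pieces loaded below are the Stable Diffusion sub-modules: noise scheduler,
    # CLIP tokenizer and text encoder, VAE, and UNet. Only the UNet is fine-tuned;
    # the VAE and text encoder are frozen immediately afterwards.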
- noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - tokenizer = CLIPTokenizer.from_pretrained( - args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision - ) - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.non_ema_revision - ) - - # Freeze vae and text_encoder - vae.requires_grad_(False) - text_encoder.requires_grad_(False) - - # Create EMA for the unet. - if args.use_ema: - ema_unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - ema_unet = EMAModel(ema_unet.parameters()) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - vae.enable_gradient_checkpointing() - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`" - ) - - optimizer_cls = bnb.optim.AdamW8bit - else: - optimizer_cls = torch.optim.AdamW - - optimizer = optimizer_cls( - unet.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). - - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - ) - else: - data_files = {} - if args.train_data_dir is not None: - data_files["train"] = os.path.join(args.train_data_dir, "**") - dataset = load_dataset( - "imagefolder", - data_files=data_files, - cache_dir=args.cache_dir, - ) - # See more about loading custom images at - # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder - - # Preprocessing the datasets. - # We need to tokenize inputs and targets. - column_names = dataset["train"].column_names - - # 6. Get the column names for input/target. 
- dataset_columns = dataset_name_mapping.get(args.dataset_name, None) - if args.image_column is None: - image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] - else: - image_column = args.image_column - if image_column not in column_names: - raise ValueError( - f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.caption_column is None: - caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1] - else: - caption_column = args.caption_column - if caption_column not in column_names: - raise ValueError( - f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}" - ) - - # Preprocessing the datasets. - # We need to tokenize input captions and transform the images. - def tokenize_captions(examples, is_train=True): - captions = [] - for caption in examples[caption_column]: - if isinstance(caption, str): - captions.append(caption) - elif isinstance(caption, (list, np.ndarray)): - # take a random caption if there are multiple - captions.append(random.choice(caption) if is_train else caption[0]) - else: - raise ValueError( - f"Caption column `{caption_column}` should contain either strings or lists of strings." - ) - inputs = tokenizer( - captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ) - return inputs.input_ids - - # Preprocessing the datasets. - train_transforms = transforms.Compose( - [ - transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), - transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def preprocess_train(examples): - images = [image.convert("RGB") for image in examples[image_column]] - examples["pixel_values"] = [train_transforms(image) for image in images] - examples["input_ids"] = tokenize_captions(examples) - return examples - - with accelerator.main_process_first(): - if args.max_train_samples is not None: - dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) - # Set the training transforms - train_dataset = dataset["train"].with_transform(preprocess_train) - - def collate_fn(examples): - pixel_values = torch.stack([example["pixel_values"] for example in examples]) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - input_ids = torch.stack([example["input_ids"] for example in examples]) - return {"pixel_values": pixel_values, "input_ids": input_ids} - - # DataLoaders creation: - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - shuffle=True, - collate_fn=collate_fn, - batch_size=args.train_batch_size, - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. 
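    # One optimizer update happens every `gradient_accumulation_steps` batches, so
    # e.g. a dataloader of 1000 batches with gradient_accumulation_steps=4 yields
    # ceil(1000 / 4) = 250 update steps per epoch; when --max_train_steps is unset
    # it defaults to num_train_epochs * num_update_steps_per_epoch.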
- overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - # Prepare everything with our `accelerator`. - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - unet = ORTModule(unet) - - if args.use_ema: - accelerator.register_for_checkpointing(ema_unet) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu and cast to weight_dtype - text_encoder.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - if args.use_ema: - ema_unet.to(accelerator.device) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("text2image-fine-tune", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." 
- ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - train_loss = 0.0 - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(weight_dtype)).latent_dist.sample() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - # Predict the noise residual and compute loss - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states, return_dict=False)[0] - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Gather the losses across all processes for logging (if we use distributed training). 
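                # `loss` above is the mean over this device's micro-batch; repeating it
                # to the per-device batch size before gather() yields one value per
                # sample across processes, and dividing by gradient_accumulation_steps
                # makes `train_loss` the average loss over a full optimizer step.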
- avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean() - train_loss += avg_loss.item() / args.gradient_accumulation_steps - - # Backpropagate - accelerator.backward(loss) - if accelerator.sync_gradients: - accelerator.clip_grad_norm_(unet.parameters(), args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - if args.use_ema: - ema_unet.step(unet.parameters()) - progress_bar.update(1) - global_step += 1 - accelerator.log({"train_loss": train_loss}, step=global_step) - train_loss = 0.0 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - - if global_step >= args.max_train_steps: - break - - # Create the pipeline using the trained modules and save it. - accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = accelerator.unwrap_model(unet) - if args.use_ema: - ema_unet.copy_to(unet.parameters()) - - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=text_encoder, - vae=vae, - unet=unet, - revision=args.revision, - ) - pipeline.save_pretrained(args.output_dir) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/deepghs/ml-danbooru-demo/app.py b/spaces/deepghs/ml-danbooru-demo/app.py deleted file mode 100644 index 3d6ec4bb031039c1d046aa05325c403c384c5386..0000000000000000000000000000000000000000 --- a/spaces/deepghs/ml-danbooru-demo/app.py +++ /dev/null @@ -1,214 +0,0 @@ -import json -import logging -import os -import re -import shutil -from functools import lru_cache -from typing import Optional, List, Tuple, Mapping - -import gradio as gr -import numpy as np -from PIL import Image -from hbutils.system import pip_install -from huggingface_hub import hf_hub_download - - -def _ensure_onnxruntime(): - try: - import onnxruntime - except (ImportError, ModuleNotFoundError): - logging.warning('Onnx runtime not installed, preparing to install ...') - if shutil.which('nvidia-smi'): - logging.info('Installing onnxruntime-gpu ...') - pip_install(['onnxruntime-gpu'], silent=True) - else: - logging.info('Installing onnxruntime (cpu) ...') - pip_install(['onnxruntime'], silent=True) - - -_ensure_onnxruntime() -from onnxruntime import get_available_providers, get_all_providers, InferenceSession, SessionOptions, \ - GraphOptimizationLevel - -alias = { - 'gpu': "CUDAExecutionProvider", - "trt": "TensorrtExecutionProvider", -} - - -def get_onnx_provider(provider: Optional[str] = None): - if not provider: - if "CUDAExecutionProvider" in get_available_providers(): - return "CUDAExecutionProvider" - else: - return "CPUExecutionProvider" - elif provider.lower() in alias: - return alias[provider.lower()] - else: - for p in get_all_providers(): - if provider.lower() == p.lower() or f'{provider}ExecutionProvider'.lower() == p.lower(): - return p - - raise ValueError(f'One of the {get_all_providers()!r} expected, ' - f'but unsupported provider {provider!r} found.') - - 
-def resize(pic: Image.Image, size: int, keep_ratio: float = True) -> Image.Image: - if not keep_ratio: - target_size = (size, size) - else: - min_edge = min(pic.size) - target_size = ( - int(pic.size[0] / min_edge * size), - int(pic.size[1] / min_edge * size), - ) - - target_size = ( - (target_size[0] // 4) * 4, - (target_size[1] // 4) * 4, - ) - - return pic.resize(target_size, resample=Image.Resampling.BILINEAR) - - -def to_tensor(pic: Image.Image): - img: np.ndarray = np.array(pic, np.uint8, copy=True) - img = img.reshape(pic.size[1], pic.size[0], len(pic.getbands())) - - # put it from HWC to CHW format - img = img.transpose((2, 0, 1)) - return img.astype(np.float32) / 255 - - -def fill_background(pic: Image.Image, background: str = 'white') -> Image.Image: - if pic.mode == 'RGB': - return pic - if pic.mode != 'RGBA': - pic = pic.convert('RGBA') - - background = background or 'white' - result = Image.new('RGBA', pic.size, background) - result.paste(pic, (0, 0), pic) - - return result.convert('RGB') - - -def image_to_tensor(pic: Image.Image, size: int = 512, keep_ratio: float = True, background: str = 'white'): - return to_tensor(resize(fill_background(pic, background), size, keep_ratio)) - - -MODELS = [ - 'ml_caformer_m36_dec-5-97527.onnx', - 'ml_caformer_m36_dec-3-80000.onnx', - 'TResnet-D-FLq_ema_6-30000.onnx', - 'TResnet-D-FLq_ema_6-10000.onnx', - 'TResnet-D-FLq_ema_4-10000.onnx', - 'TResnet-D-FLq_ema_2-40000.onnx', -] -DEFAULT_MODEL = MODELS[0] - - -def get_onnx_model_file(name=DEFAULT_MODEL): - return hf_hub_download( - repo_id='deepghs/ml-danbooru-onnx', - filename=name, - ) - - -@lru_cache() -def _open_onnx_model(ckpt: str, provider: str) -> InferenceSession: - options = SessionOptions() - options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL - if provider == "CPUExecutionProvider": - options.intra_op_num_threads = os.cpu_count() - - logging.info(f'Model {ckpt!r} loaded with provider {provider!r}') - return InferenceSession(ckpt, options, [provider]) - - -def load_classes() -> List[str]: - classes_file = hf_hub_download( - repo_id='deepghs/ml-danbooru-onnx', - filename='classes.json', - ) - with open(classes_file, 'r', encoding='utf-8') as f: - return json.load(f) - - -def get_tags_from_image(pic: Image.Image, threshold: float = 0.7, size: int = 512, keep_ratio: bool = False, - model_name=DEFAULT_MODEL): - real_input = image_to_tensor(pic, size, keep_ratio) - real_input = real_input.reshape(1, *real_input.shape) - - model = _open_onnx_model(get_onnx_model_file(model_name), get_onnx_provider('cpu')) - native_output, = model.run(['output'], {'input': real_input}) - - output = (1 / (1 + np.exp(-native_output))).reshape(-1) - tags = load_classes() - pairs = sorted([(tags[i], ratio) for i, ratio in enumerate(output)], key=lambda x: (-x[1], x[0])) - return {tag: float(ratio) for tag, ratio in pairs if ratio >= threshold} - - -RE_SPECIAL = re.compile(r'([\\()])') - - -def image_to_mldanbooru_tags(pic: Image.Image, threshold: float, size: int, keep_ratio: bool, model: str, - use_spaces: bool, use_escape: bool, include_ranks: bool, score_descend: bool) \ - -> Tuple[str, Mapping[str, float]]: - filtered_tags = get_tags_from_image(pic, threshold, size, keep_ratio, model) - - text_items = [] - tags_pairs = filtered_tags.items() - if score_descend: - tags_pairs = sorted(tags_pairs, key=lambda x: (-x[1], x[0])) - for tag, score in tags_pairs: - tag_outformat = tag - if use_spaces: - tag_outformat = tag_outformat.replace('_', ' ') - if use_escape: - tag_outformat = 
re.sub(RE_SPECIAL, r'\\\1', tag_outformat) - if include_ranks: - tag_outformat = f"({tag_outformat}:{score:.3f})" - text_items.append(tag_outformat) - output_text = ', '.join(text_items) - - return output_text, filtered_tags - - -if __name__ == '__main__': - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - gr_input_image = gr.Image(type='pil', label='Original Image') - with gr.Row(): - gr_threshold = gr.Slider(0.0, 1.0, 0.7, label='Tagging Confidence Threshold') - # gr_image_size = gr.Slider(128, 960, 640, step=32, label='Image for Recognition') - gr_image_size = gr.Slider(128, 960, 448, step=32, label='Image for Recognition') - gr_keep_ratio = gr.Checkbox(value=False, label='Keep the Ratio') - with gr.Row(): - gr_model = gr.Dropdown(MODELS, value=DEFAULT_MODEL, label='Model') - with gr.Row(): - gr_space = gr.Checkbox(value=False, label='Use Space Instead Of _') - gr_escape = gr.Checkbox(value=True, label='Use Text Escape') - gr_confidence = gr.Checkbox(value=False, label='Keep Confidences') - gr_order = gr.Checkbox(value=True, label='Descend By Confidence') - - gr_btn_submit = gr.Button(value='Tagging', variant='primary') - - with gr.Column(): - with gr.Tabs(): - with gr.Tab("Tags"): - gr_tags = gr.Label(label='Tags') - with gr.Tab("Exported Text"): - gr_output_text = gr.TextArea(label='Exported Text') - - gr_btn_submit.click( - image_to_mldanbooru_tags, - inputs=[ - gr_input_image, gr_threshold, gr_image_size, - gr_keep_ratio, gr_model, - gr_space, gr_escape, gr_confidence, gr_order - ], - outputs=[gr_output_text, gr_tags], - ) - demo.queue(os.cpu_count()).launch() diff --git a/spaces/dfurman/chat-all-in/src/semantic_search.py b/spaces/dfurman/chat-all-in/src/semantic_search.py deleted file mode 100644 index 00273a848798f2e7ed66d962067e06bd8b21aaed..0000000000000000000000000000000000000000 --- a/spaces/dfurman/chat-all-in/src/semantic_search.py +++ /dev/null @@ -1,61 +0,0 @@ -import os -from argparse import ArgumentParser -import logging - -from sentence_transformers import SentenceTransformer -import pandas as pd -import numpy as np -from torch.nn.functional import cosine_similarity -from torch import from_numpy - - -# python basic_semantic_search.py --query "tiger global" --n_answers 1 --episode_number E134 - - -def basic_semantic_search( - query: str, - n_answers: int, - episode_number: str, -): - embedding_model_name = "cached-all-mpnet-base-v2" - model = SentenceTransformer(embedding_model_name) - - corpus_texts_metadata = pd.read_parquet( - f"./embeddings/{episode_number}_sentence_embeddings_metadata.parquet" - ) - - corpus_emb = np.load(f"./embeddings/{episode_number}_sentence_embeddings.npy") - corpus_emb = from_numpy(corpus_emb) - query_emb = model.encode(query, convert_to_tensor=True) - - # Getting hits - hits = cosine_similarity(query_emb[None, :], corpus_emb, dim=1, eps=1e-8) - - corpus_texts_metadata["similarity"] = hits.tolist() - - # Filter to just top n answers - corpus_texts_metadata_ordered = corpus_texts_metadata.sort_values( - by="similarity", ascending=False - ).head(n_answers) - - logging.warning( - f"SEM SEARCH TOP N SENTENCES: {corpus_texts_metadata_ordered.sentences}" - ) - return corpus_texts_metadata_ordered - - -if __name__ == "__main__": - parser = ArgumentParser() - parser.add_argument( - "--query", - type=str, - help="Search query", - ), - parser.add_argument("--n_answers", type=int, help="N hits returned"), - parser.add_argument( - "--episode_number", - type=str, - help="Episode number, example: E134", - ), - args = parser.parse_args() - 
basic_semantic_search(args.query, args.n_answers, args.episode_number) diff --git a/spaces/diacanFperku/AutoGPT/Crack Para Punto De Venta Abarrotes 31 __EXCLUSIVE__.md b/spaces/diacanFperku/AutoGPT/Crack Para Punto De Venta Abarrotes 31 __EXCLUSIVE__.md deleted file mode 100644 index da80052dd002e5836b718262397313340127f352..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Crack Para Punto De Venta Abarrotes 31 __EXCLUSIVE__.md +++ /dev/null @@ -1,6 +0,0 @@ -

      crack para punto de venta abarrotes 31


      Downloadhttps://gohhs.com/2uFTmI



      - -autodesk ecotect analysis 2011 with x-force keygen crack para punto de venta abarrotes 31. Autodesk MatchMover ... Autodesk Ecotect Analysis 2011 With X- ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Luxor 1 Crack [UPDATED] Code For Kaspersky.md b/spaces/diacanFperku/AutoGPT/Luxor 1 Crack [UPDATED] Code For Kaspersky.md deleted file mode 100644 index c90d7e71a3c59cdf62ca689dc9bb94d915bf54ce..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Luxor 1 Crack [UPDATED] Code For Kaspersky.md +++ /dev/null @@ -1,31 +0,0 @@ -
-
-# How to Activate Luxor 1 and Kaspersky Antivirus Software with a Valid Code
-
-If you are looking for a way to activate Luxor 1 and Kaspersky antivirus software with a valid code, you have come to the right place. In this article, we will explain what Luxor 1 and Kaspersky antivirus software are, why you need a valid code to activate them, and how to get and apply the code.
-
-## What are Luxor 1 and Kaspersky Antivirus Software?
-
-Luxor 1 is a puzzle game developed by MumboJumbo that was released in 2005. It involves shooting colored spheres to match three or more of the same color and clear them from the board[^2^]. It is a fun and addictive game that can challenge your reflexes and strategy skills.
-
-luxor 1 crack code for kaspersky
-
-Download ……… https://gohhs.com/2uFUZT
-
-Kaspersky is a cybersecurity company that offers antivirus and internet security software for various devices and platforms[^2^]. It protects your digital life from malware, hackers, phishing, ransomware, and other online threats. It also provides features such as parental control, password manager, VPN, and cloud backup.
-
-## Why Do You Need a Valid Code to Activate Luxor 1 and Kaspersky Antivirus Software?
-
-Luxor 1 and Kaspersky antivirus software are not free products. They require a valid code to activate their full features and functions. A valid code is a unique sequence of letters and numbers that proves that you have purchased the product legally from the official website or an authorized distributor[^2^]. Without a valid code, you will not be able to enjoy Luxor 1 and Kaspersky antivirus software to their fullest potential.
-
-A valid code also ensures that you get the latest updates and security patches for Luxor 1 and Kaspersky antivirus software[^2^]. These updates and patches can improve the performance, stability, and security of the products. They can also fix any bugs or errors that may occur.
-
-## How to Get and Apply the Code for Luxor 1 and Kaspersky Antivirus Software?
-
-The best way to get a valid code for Luxor 1 and Kaspersky antivirus software is to purchase them legally from their official websites or authorized distributors[^2^]. You can find the links to their official websites below:
-
-After you purchase the products, you will receive an email with your activation code. You can also find your activation code on your My Kaspersky account if you have one[^3^]. You can create a My Kaspersky account for free on their website.
-
-To apply the code for Luxor 1 and Kaspersky antivirus software, you need to download and install the products on your device first. You can find the download links below:
-
-After you install the products, you need to enter your activation code when prompted. You can also enter your activation code manually by following these steps:
-
-1. For Luxor 1, open the game and click on Options. Then click on Enter Activation Code and type in your code.
-2. For Kaspersky antivirus software, open the application and click on Enter activation code at the bottom right corner. Then type in your code.
-
-Congratulations! You have successfully activated Luxor 1 and Kaspersky antivirus software with a valid code. Now you can enjoy playing Luxor 1 and protecting your device with Kaspersky antivirus software.
-
-d5da3c52bf
-
-
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Rajinikanth Telugu Dialogues Ringtones Free 15 Extra Quality.md b/spaces/diacanFperku/AutoGPT/Rajinikanth Telugu Dialogues Ringtones Free 15 Extra Quality.md deleted file mode 100644 index 00abe66ff98238be7d94d610ee9e283216aab830..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Rajinikanth Telugu Dialogues Ringtones Free 15 Extra Quality.md +++ /dev/null @@ -1,6 +0,0 @@ -

-# Rajinikanth Telugu Dialogues Ringtones Free 15
-
-Download > https://gohhs.com/2uFT9d
-
-
- 3cee63e6c2
-
-
-

      diff --git a/spaces/diagaiwei/ir_chinese_medqa/baleen/condenser/tokenization.py b/spaces/diagaiwei/ir_chinese_medqa/baleen/condenser/tokenization.py deleted file mode 100644 index a92e793e70589d41039fe25295a02321359a58ac..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/baleen/condenser/tokenization.py +++ /dev/null @@ -1,118 +0,0 @@ -import torch - -from transformers import ElectraTokenizerFast - -class AnswerAwareTokenizer(): - def __init__(self, total_maxlen, bert_model='google/electra-base-discriminator'): - self.total_maxlen = total_maxlen - - self.tok = ElectraTokenizerFast.from_pretrained(bert_model) - - def process(self, questions, passages, all_answers=None, mask=None): - return TokenizationObject(self, questions, passages, all_answers, mask) - - def tensorize(self, questions, passages): - query_lengths = self.tok(questions, padding='longest', return_tensors='pt').attention_mask.sum(-1) - - encoding = self.tok(questions, passages, padding='longest', truncation='longest_first', - return_tensors='pt', max_length=self.total_maxlen, add_special_tokens=True) - - return encoding, query_lengths - - def get_all_candidates(self, encoding, index): - offsets, endpositions = self.all_word_positions(encoding, index) - - candidates = [(offset, endpos) - for idx, offset in enumerate(offsets) - for endpos in endpositions[idx:idx+10]] - - return candidates - - def all_word_positions(self, encoding, index): - words = encoding.word_ids(index) - offsets = [position - for position, (last_word_number, current_word_number) in enumerate(zip([-1] + words, words)) - if last_word_number != current_word_number] - - endpositions = offsets[1:] + [len(words)] - - return offsets, endpositions - - def characters_to_tokens(self, text, answers, encoding, index, offset, endpos): - # print(text, answers, encoding, index, offset, endpos) - # endpos = endpos - 1 - - for offset_ in range(offset, len(text)+1): - tokens_offset = encoding.char_to_token(index, offset_) - # print(f'tokens_offset = {tokens_offset}') - if tokens_offset is not None: - break - - for endpos_ in range(endpos, len(text)+1): - tokens_endpos = encoding.char_to_token(index, endpos_) - # print(f'tokens_endpos = {tokens_endpos}') - if tokens_endpos is not None: - break - - # None on whitespace! 
- assert tokens_offset is not None, (text, answers, offset) - # assert tokens_endpos is not None, (text, answers, endpos) - tokens_endpos = tokens_endpos if tokens_endpos is not None else len(encoding.tokens(index)) - - return tokens_offset, tokens_endpos - - def tokens_to_answer(self, encoding, index, text, tokens_offset, tokens_endpos): - # print(encoding, index, text, tokens_offset, tokens_endpos, len(encoding.tokens(index))) - - char_offset = encoding.word_to_chars(index, encoding.token_to_word(index, tokens_offset)).start - - try: - char_next_offset = encoding.word_to_chars(index, encoding.token_to_word(index, tokens_endpos)).start - char_endpos = char_next_offset - except: - char_endpos = encoding.word_to_chars(index, encoding.token_to_word(index, tokens_endpos-1)).end - - assert char_offset is not None - assert char_endpos is not None - - return text[char_offset:char_endpos].strip() - - -class TokenizationObject(): - def __init__(self, tokenizer: AnswerAwareTokenizer, questions, passages, answers=None, mask=None): - assert type(questions) is list and type(passages) is list - assert len(questions) in [1, len(passages)] - - if mask is None: - mask = [True for _ in passages] - - self.mask = mask - - self.tok = tokenizer - self.questions = questions if len(questions) == len(passages) else questions * len(passages) - self.passages = passages - self.answers = answers - - self.encoding, self.query_lengths = self._encode() - self.passages_only_encoding, self.candidates, self.candidates_list = self._candidize() - - if answers is not None: - self.gold_candidates = self.answers # self._answerize() - - def _encode(self): - return self.tok.tensorize(self.questions, self.passages) - - def _candidize(self): - encoding = self.tok.tok(self.passages, add_special_tokens=False) - - all_candidates = [self.tok.get_all_candidates(encoding, index) for index in range(len(self.passages))] - - bsize, maxcands = len(self.passages), max(map(len, all_candidates)) - all_candidates = [cands + [(-1, -1)] * (maxcands - len(cands)) for cands in all_candidates] - - candidates = torch.tensor(all_candidates) - assert candidates.size() == (bsize, maxcands, 2), (candidates.size(), (bsize, maxcands, 2), (self.questions, self.passages)) - - candidates = candidates + self.query_lengths.unsqueeze(-1).unsqueeze(-1) - - return encoding, candidates, all_candidates diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/parameters.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/parameters.py deleted file mode 100644 index 4802d7a05c2bb3fa462e7baa64373737a04e7c83..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/parameters.py +++ /dev/null @@ -1,12 +0,0 @@ -import torch - -DEVICE = torch.device("cuda") - -SAVED_CHECKPOINTS = [32*1000, 100*1000, 150*1000, 200*1000, 250*1000, 300*1000, 400*1000] -SAVED_CHECKPOINTS += [10*1000, 20*1000, 30*1000, 40*1000, 50*1000, 60*1000, 70*1000, 80*1000, 90*1000] -SAVED_CHECKPOINTS += [25*1000, 50*1000, 75*1000] - -SAVED_CHECKPOINTS = set(SAVED_CHECKPOINTS) - - -# TODO: final_ckpt 2k, 5k, 10k 20k, 50k, 100k 150k 200k, 500k, 1M 2M, 5M, 10M \ No newline at end of file diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/search/strided_tensor_core.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/search/strided_tensor_core.py deleted file mode 100644 index 49760d72866c19e4bbdf59ebe01cf99c97e34716..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/search/strided_tensor_core.py +++ /dev/null @@ -1,130 +0,0 @@ -import torch 
-import random - -import numpy as np - -from colbert.utils.utils import flatten - - -""" -import line_profiler -import atexit -profile = line_profiler.LineProfiler() -atexit.register(profile.print_stats) -""" - - -class StridedTensorCore: - # # @profile - def __init__(self, packed_tensor, lengths, dim=None, use_gpu=True): - self.dim = dim - self.tensor = packed_tensor - self.inner_dims = self.tensor.size()[1:] - self.use_gpu = use_gpu - - self.lengths = lengths.long() if torch.is_tensor(lengths) else torch.LongTensor(lengths) - - self.strides = _select_strides(self.lengths, [.5, .75, .9, .95]) + [self.lengths.max().item()] - self.max_stride = self.strides[-1] - - zero = torch.zeros(1, dtype=torch.long, device=self.lengths.device) - self.offsets = torch.cat((zero, torch.cumsum(self.lengths, dim=0))) - - if self.offsets[-2] + self.max_stride > self.tensor.size(0): - # if self.tensor.size(0) > 10_000_000: - # print("#> WARNING: StridedTensor has to add padding, internally, to a large tensor.") - # print("#> WARNING: Consider doing this padding in advance to save memory!") - - padding = torch.zeros(self.max_stride, *self.inner_dims, dtype=self.tensor.dtype, device=self.tensor.device) - self.tensor = torch.cat((self.tensor, padding)) - - self.views = {stride: _create_view(self.tensor, stride, self.inner_dims) for stride in self.strides} - - @classmethod - def from_packed_tensor(cls, tensor, lengths): - return cls(tensor, lengths) - - @classmethod - def from_padded_tensor(cls, tensor, mask): - pass - - @classmethod - def from_nested_list(cls, lst): - flat_lst = flatten(lst) - - tensor = torch.Tensor(flat_lst) - lengths = [len(sublst) for sublst in lst] - - return cls(tensor, lengths, dim=0) - - @classmethod - def from_tensors_list(cls, tensors): - # torch.cat(tensors) - # lengths. 
- # cls(tensor, lengths) - raise NotImplementedError() - - def as_packed_tensor(self, return_offsets=False): - unpadded_packed_tensor = self.tensor # [:self.offsets[-1]] - - return_vals = [unpadded_packed_tensor, self.lengths] - - if return_offsets: - return_vals.append(self.offsets) - - return tuple(return_vals) - - # # @profile - def as_padded_tensor(self): - if self.use_gpu: - view = _create_view(self.tensor.cuda(), self.max_stride, self.inner_dims)[self.offsets[:-1]] - mask = _create_mask(self.lengths.cuda(), self.max_stride, like=view, use_gpu=self.use_gpu) - else: - #import pdb - #pdb.set_trace() - view = _create_view(self.tensor, self.max_stride, self.inner_dims) - view = view[self.offsets[:-1]] - mask = _create_mask(self.lengths, self.max_stride, like=view, use_gpu=self.use_gpu) - - return view, mask - - def as_tensors_list(self): - raise NotImplementedError() - - - -def _select_strides(lengths, quantiles): - if lengths.size(0) < 5_000: - return _get_quantiles(lengths, quantiles) - - sample = torch.randint(0, lengths.size(0), size=(2_000,)) - - return _get_quantiles(lengths[sample], quantiles) - -def _get_quantiles(lengths, quantiles): - return torch.quantile(lengths.float(), torch.tensor(quantiles, device=lengths.device)).int().tolist() - - -def _create_view(tensor, stride, inner_dims): - outdim = tensor.size(0) - stride + 1 - size = (outdim, stride, *inner_dims) - - inner_dim_prod = int(np.prod(inner_dims)) - multidim_stride = [inner_dim_prod, inner_dim_prod] + [1] * len(inner_dims) - - return torch.as_strided(tensor, size=size, stride=multidim_stride) - - -def _create_mask(lengths, stride, like=None, use_gpu=True): - if use_gpu: - mask = torch.arange(stride).cuda() + 1 - mask = mask.unsqueeze(0) <= lengths.cuda().unsqueeze(-1) - else: - mask = torch.arange(stride) + 1 - mask = mask.unsqueeze(0) <= lengths.unsqueeze(-1) - - if like is not None: - for _ in range(like.dim() - mask.dim()): - mask = mask.unsqueeze(-1) - - return mask diff --git a/spaces/digitalxingtong/Eileen-Bert-Vits2/start.bat b/spaces/digitalxingtong/Eileen-Bert-Vits2/start.bat deleted file mode 100644 index 418d21233dbf720b0dd09821904d9d6a31b123a2..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Eileen-Bert-Vits2/start.bat +++ /dev/null @@ -1,2 +0,0 @@ -set PYTHON=venv\python.exe -start cmd /k "set PYTHON=%PYTHON%" \ No newline at end of file diff --git a/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/preprocess_text.py b/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/preprocess_text.py deleted file mode 100644 index 44c35fecd9b7f21016e80e9597d6055254cba3f7..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/preprocess_text.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -from random import shuffle - -import tqdm -from text.cleaner import clean_text -from collections import defaultdict -import shutil -stage = [1,2,3] - -transcription_path = 'filelists/short_character_anno.list' -train_path = 'filelists/train.list' -val_path = 'filelists/val.list' -config_path = "configs/config.json" -val_per_spk = 4 -max_val_total = 8 - -if 1 in stage: - with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f: - for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()): - try: - utt, spk, language, text = line.strip().split('|') - #language = "ZH" - norm_text, phones, tones, word2ph = clean_text(text, language) - f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' 
'.join(phones), - " ".join([str(i) for i in tones]), - " ".join([str(i) for i in word2ph]))) - except: - print("err!", utt) - -if 2 in stage: - spk_utt_map = defaultdict(list) - spk_id_map = {} - current_sid = 0 - - with open( transcription_path+'.cleaned', encoding='utf-8') as f: - for line in f.readlines(): - utt, spk, language, text, phones, tones, word2ph = line.strip().split('|') - spk_utt_map[spk].append(line) - if spk not in spk_id_map.keys(): - spk_id_map[spk] = current_sid - current_sid += 1 - train_list = [] - val_list = [] - for spk, utts in spk_utt_map.items(): - shuffle(utts) - val_list+=utts[:val_per_spk] - train_list+=utts[val_per_spk:] - if len(val_list) > max_val_total: - train_list+=val_list[max_val_total:] - val_list = val_list[:max_val_total] - - with open( train_path,"w", encoding='utf-8') as f: - for line in train_list: - f.write(line) - - file_path = transcription_path+'.cleaned' - shutil.copy(file_path,'./filelists/train.list') - - with open(val_path, "w", encoding='utf-8') as f: - for line in val_list: - f.write(line) - -if 3 in stage: - assert 2 in stage - config = json.load(open(config_path)) - config['data']["n_speakers"] = current_sid # - config["data"]['spk2id'] = spk_id_map - with open(config_path, 'w', encoding='utf-8') as f: - json.dump(config, f, indent=2, ensure_ascii=False) diff --git a/spaces/dirge/voicevox/generate_licenses.py b/spaces/dirge/voicevox/generate_licenses.py deleted file mode 100644 index da41db0c01e20dc8cf935418bb59a5c4923c56ae..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/generate_licenses.py +++ /dev/null @@ -1,337 +0,0 @@ -import json -import os -import subprocess -import urllib.request -from dataclasses import asdict, dataclass -from pathlib import Path -from typing import List, Optional - - -@dataclass -class License: - name: str - version: Optional[str] - license: Optional[str] - text: str - - -def generate_licenses() -> List[License]: - licenses: List[License] = [] - - # openjtalk - # https://sourceforge.net/projects/open-jtalk/files/Open%20JTalk/open_jtalk-1.11/ - licenses.append( - License( - name="Open JTalk", - version="1.11", - license="Modified BSD license", - text=Path("docs/licenses/open_jtalk/COPYING").read_text(), - ) - ) - licenses.append( - License( - name="MeCab", - version=None, - license="Modified BSD license", - text=Path("docs/licenses/open_jtalk/mecab/COPYING").read_text(), - ) - ) - licenses.append( - License( - name="NAIST Japanese Dictionary", - version=None, - license="Modified BSD license", - text=Path("docs/licenses//open_jtalk/mecab-naist-jdic/COPYING").read_text(), - ) - ) - with urllib.request.urlopen( - "https://raw.githubusercontent.com/r9y9/pyopenjtalk/master/pyopenjtalk/htsvoice/LICENSE_mei_normal.htsvoice" # noqa: B950 - ) as res: - licenses.append( - License( - name='HTS Voice "Mei"', - version=None, - license="Creative Commons Attribution 3.0 license", - text=res.read().decode(), - ) - ) - - # VOICEVOX CORE - with urllib.request.urlopen( - "https://raw.githubusercontent.com/VOICEVOX/voicevox_core/main/LICENSE" - ) as res: - licenses.append( - License( - name="VOICEVOX CORE", - version=None, - license="MIT license", - text=res.read().decode(), - ) - ) - - # VOICEVOX ENGINE - with urllib.request.urlopen( - "https://raw.githubusercontent.com/VOICEVOX/voicevox_engine/master/LGPL_LICENSE" - ) as res: - licenses.append( - License( - name="VOICEVOX ENGINE", - version=None, - license="LGPL license", - text=res.read().decode(), - ) - ) - - # world - with urllib.request.urlopen( - 
"https://raw.githubusercontent.com/mmorise/World/master/LICENSE.txt" - ) as res: - licenses.append( - License( - name="world", - version=None, - license="Modified BSD license", - text=res.read().decode(), - ) - ) - - # pytorch - with urllib.request.urlopen( - "https://raw.githubusercontent.com/pytorch/pytorch/master/LICENSE" - ) as res: - licenses.append( - License( - name="PyTorch", - version="1.9.0", - license="BSD-style license", - text=res.read().decode(), - ) - ) - - # onnxruntime - with urllib.request.urlopen( - "https://raw.githubusercontent.com/microsoft/onnxruntime/master/LICENSE" - ) as res: - licenses.append( - License( - name="ONNX Runtime", - version="1.13.1", - license="MIT license", - text=res.read().decode(), - ) - ) - - # Python - python_version = "3.11.3" - with urllib.request.urlopen( - f"https://raw.githubusercontent.com/python/cpython/v{python_version}/LICENSE" - ) as res: - licenses.append( - License( - name="Python", - version=python_version, - license="Python Software Foundation License", - text=res.read().decode(), - ) - ) - - # pip - try: - pip_licenses_output = subprocess.run( - "pip-licenses " - "--from=mixed " - "--format=json " - "--with-urls " - "--with-license-file " - "--no-license-path ", - shell=True, - capture_output=True, - check=True, - env=os.environ, - ).stdout.decode() - except subprocess.CalledProcessError as err: - raise Exception( - f"command output:\n{err.stderr and err.stderr.decode()}" - ) from err - - licenses_json = json.loads(pip_licenses_output) - for license_json in licenses_json: - license = License( - name=license_json["Name"], - version=license_json["Version"], - license=license_json["License"], - text=license_json["LicenseText"], - ) - # FIXME: assert license type - if license.text == "UNKNOWN": - if license.name.lower() == "core" and license.version == "0.0.0": - continue - elif license.name.lower() == "future": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/PythonCharmers/python-future/master/LICENSE.txt" # noqa: B950 - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "pefile": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/erocarrera/pefile/master/LICENSE" # noqa: B950 - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "pyopenjtalk": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/r9y9/pyopenjtalk/master/LICENSE.md" - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "python-multipart": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/andrew-d/python-multipart/master/LICENSE.txt" # noqa: B950 - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "romkan": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/soimort/python-romkan/master/LICENSE" - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "distlib": - with urllib.request.urlopen( - "https://bitbucket.org/pypa/distlib/raw/7d93712134b28401407da27382f2b6236c87623a/LICENSE.txt" # noqa: B950 - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "jsonschema": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/python-jsonschema/jsonschema/dbc398245a583cb2366795dc529ae042d10c1577/COPYING" - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "lockfile": - with urllib.request.urlopen( - "https://opendev.org/openstack/pylockfile/raw/tag/0.12.2/LICENSE" - ) as res: - license.text = 
res.read().decode() - elif license.name.lower() == "platformdirs": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/platformdirs/platformdirs/aa671aaa97913c7b948567f4d9c77d4f98bfa134/LICENSE" - ) as res: - license.text = res.read().decode() - elif license.name.lower() == "webencodings": - with urllib.request.urlopen( - "https://raw.githubusercontent.com/gsnedders/python-webencodings/fa2cb5d75ab41e63ace691bc0825d3432ba7d694/LICENSE" - ) as res: - license.text = res.read().decode() - else: - # ライセンスがpypiに無い - raise Exception(f"No License info provided for {license.name}") - licenses.append(license) - - # OpenBLAS - with urllib.request.urlopen( - "https://raw.githubusercontent.com/xianyi/OpenBLAS/develop/LICENSE" - ) as res: - licenses.append( - License( - name="OpenBLAS", - version=None, - license="BSD 3-clause license", - text=res.read().decode(), - ) - ) - - # libsndfile-binaries - with urllib.request.urlopen( - "https://raw.githubusercontent.com/bastibe/libsndfile-binaries/84cb164928f17c7ca0c1e5c40342c20ce2b90e8c/COPYING" # noqa: B950 - ) as res: - licenses.append( - License( - name="libsndfile-binaries", - version="1.0.28", - license="LGPL-2.1 license", - text=res.read().decode(), - ) - ) - - # libogg - with urllib.request.urlopen( - "https://raw.githubusercontent.com/xiph/ogg/v1.3.2/COPYING" - ) as res: - licenses.append( - License( - name="libogg", - version="1.3.2", - license="BSD 3-clause license", - text=res.read().decode(), - ) - ) - - # libvorbis - with urllib.request.urlopen( - "https://raw.githubusercontent.com/xiph/vorbis/v1.3.5/COPYING" - ) as res: - licenses.append( - License( - name="libvorbis", - version="1.3.5", - license="BSD 3-clause license", - text=res.read().decode(), - ) - ) - - # libflac - with urllib.request.urlopen( - "https://raw.githubusercontent.com/xiph/flac/1.3.2/COPYING.Xiph" - ) as res: - licenses.append( - License( - name="FLAC", - version="1.3.2", - license="Xiph.org's BSD-like license", - text=res.read().decode(), - ) - ) - - # cuda - # license text from CUDA 11.6.2 - # https://developer.nvidia.com/cuda-11-6-2-download-archive?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exe_local # noqa: B950 - # https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda_11.6.2_511.65_windows.exe # noqa: B950 - # cuda_11.6.2_511.65_windows.exe (cuda_documentation/Doc/EULA.txt) - licenses.append( - License( - name="CUDA Toolkit", - version="11.6.2", - license=None, - text=Path("docs/licenses/cuda/EULA.txt").read_text(encoding="utf8"), - ) - ) - # cudnn - # license text from - # cuDNN v8.4.1 (May 27th, 2022), for CUDA 11.x, cuDNN Library for Windows - # https://developer.nvidia.com/rdp/cudnn-archive # noqa: B950 - # https://developer.download.nvidia.com/compute/redist/cudnn/v8.4.1/local_installers/11.6/cudnn-windows-x86_64-8.4.1.50_cuda11.6-archive.zip # noqa: B950 - # cudnn-windows-x86_64-8.4.1.50_cuda11.6-archive.zip (cudnn-windows-x86_64-8.4.1.50_cuda11.6-archive/LICENSE) # noqa: B950 - licenses.append( - License( - name="cuDNN", - version="8.4.1", - license=None, - text=Path("docs/licenses/cudnn/LICENSE").read_text(encoding="utf8"), - ) - ) - - return licenses - - -if __name__ == "__main__": - import argparse - import sys - - parser = argparse.ArgumentParser() - parser.add_argument("-o", "--output_path", type=str) - args = parser.parse_args() - - output_path = args.output_path - - licenses = generate_licenses() - - # dump - out = Path(output_path).open("w") if output_path else sys.stdout - json.dump( - 
[asdict(license) for license in licenses], - out, - ) diff --git a/spaces/dma123/gpt-js/js/chatbox.js b/spaces/dma123/gpt-js/js/chatbox.js deleted file mode 100644 index 2d878a40221db692e9ab6c749ff58a7d3bce1eed..0000000000000000000000000000000000000000 --- a/spaces/dma123/gpt-js/js/chatbox.js +++ /dev/null @@ -1,344 +0,0 @@ -"use strict"; - -// TODO: Maybe add token count and answer price to the title. - -// Chatbox display the currently selected message path through a Chatlog -class Chatbox { - constructor(chatlog, container) { - this.chatlog = chatlog; - this.container = container; - this.clipbadge = new ClipBadge({ autoRun: false }); - } - - // Updates the HTML inside the chat window - update(scroll = true) { - const should_scroll_down = scroll && - (this.container.parentElement.scrollHeight - this.container.parentElement.clientHeight <= - this.container.parentElement.scrollTop + 5); - - const fragment = document.createDocumentFragment(); - - // Show the active path through the chatlog - let message = this.chatlog.getFirstMessage(); - let alternative = null; - let lastRole = 'assistant'; - let pos = 0; - while (true) { - alternative = message.answerAlternatives; - if (alternative == null) break; - message = alternative.getActiveMessage(); - if (message === null) break; - - if (message.cache !== null) { - fragment.appendChild(message.cache); - lastRole = message.value.role; - pos++; - continue; - } - - const msgIdx = alternative.activeMessageIndex; - const msgCnt = alternative.messages.length; - pos++; - - if (message.value === null) { - let role = 'assistant'; - if (lastRole === 'assistant') role = 'user'; - const messageEl = this.#formatMessage({ value: { role, content: '🤔...' } }, pos, msgIdx, msgCnt); - fragment.appendChild(messageEl); - break; - } - - if (message.value.content === null) { - const messageEl = this.#formatMessage({ value: { role: message.value.role, content: '🤔...' } }, pos, msgIdx, msgCnt); - fragment.appendChild(messageEl); - break; - } - - const messageEl = this.#formatMessage(message, pos, msgIdx, msgCnt); - fragment.appendChild(messageEl); - message.cache = messageEl; - lastRole = message.value.role; - } - - this.container.replaceChildren(fragment); - - if (should_scroll_down) { - this.container.parentElement.scrollTop = this.container.parentElement.scrollHeight; - } - - try { - localStorage.chatlog = JSON.stringify(this.chatlog); - } catch (error) { - console.error(error); - } - } - - // Formats one message as HTML - #formatMessage(message, pos, msgIdx, msgCnt) { - let type = 'ping'; - if (message.value.role === 'assistant') type = 'pong'; - const el = document.createElement('div'); - el.classList.add('message'); - el.classList.add(type); - el.classList.add('hljs-nobg'); - el.classList.add('hljs-message'); - if (message.value.role === 'system') el.classList.add('system'); - el.dataset.plaintext = encodeURIComponent(message.value.content.trim()); - el.dataset.pos = pos; - - el.appendChild(this.#getAvatar(type)); - - let msgStat = ''; - if (msgIdx > 0 || msgCnt > 1) msgStat = ` ${msgIdx + 1}/${msgCnt}   `; - let model = ''; - if (message.metadata && message.metadata.model) { - model = ' ' + message.metadata.model + ''; - } - const msgTitleStrip = document.createElement('small'); - msgTitleStrip.innerHTML = `  ${msgStat}${message.value.role}${model}

      `; - el.appendChild(msgTitleStrip); - - const formattedEntities = this.#formatCodeBlocks(message.value.content); - if (formattedEntities) { - el.appendChild(formattedEntities); - } else { - const div = document.createElement('div'); - div.innerHTML = 'Error: Timeout on API server.'; - el.appendChild(div); - } - - el.getElementsByClassName('msg_mod-add-btn')[0].addEventListener('click', async () => { - const messageInp = document.getElementById("message-inp"); - if (type === 'ping') { - if (messageInp.value === '') messageInp.value = decodeURIComponent(el.dataset.plaintext); - } - const alternative = this.chatlog.getNthAlternatives(pos); - if (alternative !== null) alternative.addMessage(null); - this.update(false); - if (type === 'pong') { - // Assistant message - if (receiving) { - controller.abort(); - } - // SetTimeout because of race condition with controller.abort(). Probably not the best solution - setTimeout(() => { - // Set this globally to true, so that the click on submit can run without a message in the input box - regenerateLastAnswer = true; - document.getElementById("submit-btn").click(); - }, 100); - return; - } - messageInp.focus(); - }); - - if (msgIdx > 0 || msgCnt > 1) { - el.getElementsByClassName('msg_mod-prev-btn')[0].addEventListener('click', () => { - this.chatlog.getNthAlternatives(el.dataset.pos).prev(); - this.update(false); - }); - - el.getElementsByClassName('msg_mod-next-btn')[0].addEventListener('click', () => { - this.chatlog.getNthAlternatives(el.dataset.pos).next(); - this.update(false); - }); - } - - if (this.clipbadge) { - this.#prepareTablesAndRemainingSvg(el); - this.clipbadge.addTo(el); - } - - return el; - } - - #getAvatar(type) { - const avatar = document.createElement('img'); - let avatarSrc = undefined; - let avatarFromLocalStorage = false; - let canUselocalStorage = true; - try { - avatarSrc = localStorage.getItem(`${type}Avatar`); - avatarFromLocalStorage = avatarSrc !== null; - } catch (error) { - canUselocalStorage = false; - console.error(error); - } - avatar.classList.add('avatar'); - if (canUselocalStorage) avatar.classList.add('clickable'); - avatar.src = avatarSrc || 'data:image/svg+xml,' + encodeURIComponent(type === 'ping' ? avatar_ping : avatar_pong); - - avatar.addEventListener('click', () => { - if (!canUselocalStorage) return; - if (avatarFromLocalStorage) { - const original = 'data:image/svg+xml,' + encodeURIComponent(type === 'ping' ? 
avatar_ping : avatar_pong); - avatar.src = original; - try { - localStorage.removeItem(`${type}Avatar`); - } catch (error) { - console.error(error); - } - this.chatlog.clearCache(); - this.update(false); - return; - } - const input = document.createElement('input'); - input.type = 'file'; - input.accept = 'image/*'; - input.addEventListener('change', () => { - const file = input.files[0]; - const reader = new FileReader(); - reader.addEventListener('load', () => { - try { - localStorage.setItem(`${type}Avatar`, reader.result); - } catch (error) { - console.error(error); - } - avatar.src = reader.result; - this.chatlog.clearCache(); - this.update(false); - }); - reader.readAsDataURL(file); - }); - input.click(); - }); - - return avatar; - } - - // TODO: also update SVG - #prepareTablesAndRemainingSvg(parent) { - function tableToCSV(table) { - const separator = ';'; - const rows = table.querySelectorAll('tr'); - const csv = []; - for (const rowElement of rows) { - const row = []; - const cols = rowElement.querySelectorAll('td, th'); - for (const col of cols) { - let data = col.innerText.replace(/(\r\n|\n|\r)/gm, '').replace(/(\s\s)/gm, ' '); - data = data.replace(/"/g, '""'); - row.push(`"${data}"`); - } - csv.push(row.join(separator)); - } - return csv.join('\n'); - } - - const tables = parent.querySelectorAll('table'); - for (const table of tables) { - const div = document.createElement("div"); - div.classList.add('hljs-nobg'); - div.classList.add('hljs-table'); - div.classList.add('language-table'); - div.dataset.plaintext = encodeURIComponent(tableToCSV(table)); - - // div.appendChild(table); - // table.replaceWith(div); - const pe = table.parentElement; - pe.insertBefore(div, table); - pe.removeChild(table); - div.appendChild(table); - } - } - - // Adds syntax highlighting and renders latex formulas to all code blocks in a message - #formatCodeBlocks(text) { - if (!text) return text; - text = text.trim(); - - // To mark all SVG as svg, even when the AI marks it as something else - text = text.replaceAll(/```\w*\s* { - let data = decodeURIComponent(g1); - data = data.replace(/\n) - breaks: false, // Whether to convert line breaks into
      tags - langPrefix: 'language-', // The prefix for CSS classes applied to code blocks - linkify: true, // Whether to automatically convert URLs to links - typographer: false, // Whether to use typographic replacements for quotation marks and the like - quotes: `""''`, // Which types of quotes to use, if typographer is true - highlight: function (code, language) { - let value = ''; - try { - if (language && hljs.getLanguage(language)) { - value = hljs.highlight(code, { language, ignoreIllegals: true }).value; - } else { - const highlighted = hljs.highlightAuto(code); - language = highlighted.language ? highlighted.language : 'unknown'; - value = highlighted.value; - } - } catch (error) { - console.error(error, code); - } - return `
      ${value}
      `; - } - }; - const md = window.markdownit(md_settings); - md.validateLink = (link) => { - if (link.startsWith('javascript:')) return false; - return true; - }; - - text = md.render(text); - - // would be useful, but the created svgs are not standard conform, so it does not make sense - // text = text.replaceAll(/!\[([^]+)\]\((data:image\/[;,+%=a-z0-9-]+)\)/gi, '$1'); - - const origFormulas = []; - const kt_settings = { - delimiters: [ - { left: "$$", right: "$$", display: true }, - { left: "$", right: "$", display: false }, - // { left: "\\(", right: "\\)", display: false }, - { left: "\\begin{equation}", right: "\\end{equation}", display: true }, - // { left: "\\begin{align}", right: "\\end{align}", display: true }, - // { left: "\\begin{alignat}", right: "\\end{alignat}", display: true }, - // { left: "\\begin{gather}", right: "\\end{gather}", display: true }, - // { left: "\\begin{CD}", right: "\\end{CD}", display: true }, - // { left: "\\[", right: "\\]", display: true } - ], - ignoredTags: ['script', 'noscript', 'style', 'textarea', 'pre', 'code', 'option', 'table', 'svg'], - throwOnError: false, - preProcess: function (math) { - origFormulas.push(math); - return math; - } - }; - - const wrapper = document.createElement('div'); - wrapper.classList.add('content'); - wrapper.innerHTML = text; - - renderMathInElement(wrapper, kt_settings); - - const elems = wrapper.querySelectorAll('.katex'); - if (elems.length === origFormulas.length) { - for (let i = 0; i < elems.length; i++) { - const formula = elems[i].parentElement; - if (formula.classList.contains('katex-display')) { - const div = document.createElement("div"); - div.classList.add('hljs'); - div.classList.add('language-latex'); - div.dataset.plaintext = encodeURIComponent(origFormulas[i].trim()); - - const pe = formula.parentElement; - // div.appendChild(pe); - // pe.replaceWith(div); - const ppe = pe.parentElement; - ppe.insertBefore(div, pe); - ppe.removeChild(pe); - div.appendChild(pe); - } - } - } - - return wrapper; - } - -} diff --git a/spaces/dominguesm/alpaca-ptbr-7b/README.md b/spaces/dominguesm/alpaca-ptbr-7b/README.md deleted file mode 100644 index 5ee30c857603440413919c7d76921a1c451fbe4b..0000000000000000000000000000000000000000 --- a/spaces/dominguesm/alpaca-ptbr-7b/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Alpaca Ptbr 7b -emoji: 🦙 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: true -models: ["dominguesm/alpaca-lora-ptbr-7b"] -datasets: ["dominguesm/alpaca-data-pt-br"] -license: cc-by-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/duycse1603/math2tex/app.py b/spaces/duycse1603/math2tex/app.py deleted file mode 100644 index 4999fd32b784d311e2f8c0391eb86bb1bc65eb80..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/app.py +++ /dev/null @@ -1,293 +0,0 @@ -import yaml -from typing import List -import numpy as np -from PIL import Image -from pathlib import Path -from collections import defaultdict - -import cv2 -import torch -from torchvision.ops import nms -from timm.models.resnetv2 import ResNetV2 -from timm.models.layers import StdConv2dSame - -from pdf2image import convert_from_bytes - -from ScanSSD.detect_flow import MathDetector -from HybridViT.recog_flow import MathRecognition -from utils.p2l_utils import get_rolling_crops, postprocess - -import streamlit - - -class DetectCfg(): - def __init__ (self): - self.cuda = True 
if torch.cuda.is_available() else False - self.kernel = (1, 5) - self.padding = (0, 2) - self.phase = 'test' - self.visual_threshold = 0.8 - self.verbose = False - self.exp_name = 'SSD' - self.model_type = 512 - self.use_char_info = False - self.limit = -1 - self.cfg = 'hboxes512' - self.batch_size = 32 - self.num_workers = 4 - self.neg_mining = True - self.log_dir = 'logs' - self.stride = 0.1 - self.window = 1200 - -class App: - title = 'Math Expression Recognition Demo \n\n Note: For Math Detection, we reuse the model from this repo [ScanSSD: Scanning Single Shot Detector for Math in Document Images](https://github.com/MaliParag/ScanSSD).\n\nThis demo aim to present the effciency of our method [A Hybrid Vision Transformer Approach for Mathematical Expression Recognition](https://ieeexplore.ieee.org/document/10034626) in recognizing math expression in document images.' - - def __init__(self): - self._model_cache = {} - self.detect_model = MathDetector('saved_models/math_detect/AMATH512_e1GTDB.pth', DetectCfg()) - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.image_resizer = ResNetV2(layers=[2, 3, 3], num_classes=max((672, 192))//32, global_pool='avg', in_chans=1, drop_rate=.05, - preact=True, stem_type='same', conv_layer=StdConv2dSame).to(device) - self.image_resizer.load_state_dict(torch.load('saved_models/resizer/image_resizer.pth', map_location=device)) - self.image_resizer.eval() - - def detect_preprocess(self, img_list): - if isinstance(img_list, Image.Image): - img_list = [img_list] - - new_images = [] - - for temp_image in img_list: - img_size = 1280 - # convert image to numpy array - temp_image = np.array(temp_image) - img = cv2.resize(temp_image, (img_size, int(img_size * temp_image.shape[0] / temp_image.shape[1]))) - new_images.append(img) - - return new_images - - def _get_model(self, name): - if name in self._model_cache: - return self._model_cache[name] - - with open('recog_cfg.yaml', 'r') as f: - recog_cfg = yaml.safe_load(f) - - model_cfg = {} - model_cfg.update(recog_cfg['common']) - model_cfg.update(recog_cfg[name]) - recog_model = MathRecognition(model_cfg, self.image_resizer if model_cfg['resizer'] else None - ) - self._model_cache[name] = recog_model - - return recog_model - - def _get_boxes(self, img, temp_bb): - temp_bb[0] = max(0, temp_bb[0] - int(0.05 * (temp_bb[2] - temp_bb[0]))) - temp_bb[1] = max(0, temp_bb[1] - int(0.05 * (temp_bb[3] - temp_bb[1]))) - temp_bb[2] = min(img.shape[1], temp_bb[2] + int(0.05 * (temp_bb[2] - temp_bb[0]))) - temp_bb[3] = min(img.shape[0], temp_bb[3] + int(0.05 * (temp_bb[3] - temp_bb[1]))) - - # convert to int - temp_bb = [int(x) for x in temp_bb] - - return temp_bb - - @torch.inference_mode() - def math_detection(self, page_lst: List[np.ndarray]): - res = [] - - batch_size = 32 - threshold = 0.9 - iou = 0.1 - - for idx, temp_image in enumerate(page_lst): - crops_list, padded_crops_list, crops_info_list = get_rolling_crops(temp_image, stride=[128, 128]) - - scores_list = [] - wb_list = [] - for i in range(0, len(padded_crops_list), batch_size): - batch = padded_crops_list[i:i+batch_size] - window_borders, scores = self.detect_model.DetectAny(batch, threshold) - scores_list.extend(scores) - wb_list.extend(window_borders) - - # change crops to original image coordinates - bb_list, s_list = postprocess(wb_list, scores_list, crops_info_list) - - # convert to torch tensors - bb_torch = torch.tensor(bb_list).float() - scores_torch = torch.tensor(s_list) - - # perform non-maximum suppression - # check if bb_torch is empty - 
if bb_torch.shape[0] == 0: - res.append(([], [])) - continue - - indices = nms(bb_torch, scores_torch, iou) - - bb_torch = bb_torch[indices] - new_bb_list = bb_torch.int().tolist() - - for i in range(len(new_bb_list)): - save_name = 'Page ' + str(idx) + '-Expr ' + str(i) if len(page_lst) > 1 else 'Expr ' + str(i) - temp_bb = self._get_boxes(temp_image, new_bb_list[i][:]) - crop_expr = temp_image[temp_bb[1]:temp_bb[3], temp_bb[0]:temp_bb[2]] - crop_expr = Image.fromarray(crop_expr) - res.append((save_name, crop_expr)) - - return res - - def math_recognition(self, model_name, res: List): - model = self._get_model(model_name) - final_res = [] - for item in res: - name, crop_expr = item - if isinstance(crop_expr, list): - continue - latex_str = model(crop_expr, name=name) - final_res.append((name, crop_expr, latex_str)) - - return final_res - - def __call__(self, model_name, image_list, use_detect): - #Detect - if use_detect: - new_images = self.detect_preprocess(image_list) - res = self.math_detection(page_lst=new_images) - else: - res = [('latex_pred', image_list[0])] - #Recog - final_res = self.math_recognition(model_name, res) - display_name, origin_img, latex_pred = tuple([list(item) for item in zip(*final_res)]) - return display_name, origin_img, latex_pred - - -def api(): - app = App() - streamlit.set_page_config(page_title='Extract math expressions from documents', layout='wide') - streamlit.title(f'{app.title}') - streamlit.markdown(f""" - To use this interactive demo and reproduced models: - 1. Select what type of input data you want to get prediction. - 2. Upload your own image or pdf file (or select from the given examples). - 3. If input file is in pdf format, choose start page and end page. - 4. Click **Extract**. - - **Note: Current version of this demo only support single file upload for both Image and PDF option.** - """ - ) - - # model_name = streamlit.radio( - # label='The Math Recognition model to use', - # options=app.models - # ) - - extract_option = streamlit.radio( - label='Select type of input for prediction', - options=('Math expression image only', 'Full document image'), - - ) - - uploaded_file = streamlit.file_uploader( - 'Upload an image/pdf file', - type=['png', 'jpg', 'pdf'], - accept_multiple_files=False - ) - - if uploaded_file is not None: - if Path(uploaded_file.name).suffix == '.pdf': - bytes_data = uploaded_file.read() - - image_lst = convert_from_bytes(bytes_data, dpi=160, grayscale=True) - image_lst = [img.convert('RGB') for img in image_lst] - - container = streamlit.container() - range_cols = container.columns(2) - start_page = range_cols[0].number_input(label='Start page', min_value=0, max_value=len(image_lst)-2) - end_page = range_cols[1].number_input(label='End page', min_value=1, max_value=len(image_lst)-1) - - if start_page <= end_page: - image_lst = image_lst[start_page:end_page+1] - cols = streamlit.columns(len(image_lst)) - for i in range(len(cols)): - with cols[i]: - img_shape = image_lst[i].size - streamlit.image(image_lst[i], width=1024, caption=f'Page: {str(i)} Image shape: {str(img_shape)}', use_column_width='auto') - else: - image = Image.open(uploaded_file).convert('RGB') - image_lst = [image] - img_shape = image.size - streamlit.image(image, width=1024, caption='Image shape: ' + str(img_shape)) - else: - streamlit.text('\n') - - if streamlit.button('Extract'): - if uploaded_file is not None and image_lst is not None: - with streamlit.spinner('Computing'): - try: - use_detect = True - if extract_option == 'Math expression image only': - 
use_detect = False - model_name = 'version2' - else: - model_name = 'version2' - - display_name, origin_img, latex_code = app(model_name, image_lst, use_detect) - - if Path(uploaded_file.name).suffix == '.pdf': - page_dict = defaultdict(list) - for name, img, pred in zip(display_name, origin_img, latex_code): - name_components = name.split('-') - if len(name_components) <= 1: - page_name = 'Page0' - else: - page_name = name_components[0] - page_dict[page_name].append((img, pred)) - - tab_lst = streamlit.tabs(list(page_dict.keys())) - - for tab, page_name in zip(tab_lst, list(page_dict.keys())): - for idx, item in enumerate(page_dict[page_name]): - container = tab.container() - col_latex, col_render, col_org = container.columns(3, gap='large') - - if idx == 0: - col_latex.header('Predicted LaTeX') - col_render.header('Rendered Image') - col_org.header('Cropped Image') - - render_latex = f'$\\displaystyle {item[-1]}$' - col_latex.code(item[-1], language='latex') - col_render.markdown(render_latex) - img = np.asarray(item[0]) - col_org.image(img) - else: - for idx, (name, org, latex) in enumerate(zip(display_name, origin_img, latex_code)): - container = streamlit.container() - col_latex, col_render, col_org = container.columns(3, gap='large') - - if idx == 0: - col_latex.header('Predicted LaTeX') - col_render.header('Rendered Image') - col_org.header('Cropped Image') - - render_latex = f'$\\displaystyle {latex}$' - col_latex.code(latex, language='latex') - col_render.markdown(render_latex) - org = np.asarray(org) - col_org.image(org) - - except Exception as e: - streamlit.error(e) - else: - streamlit.error('Please upload an image.') - -if __name__ == '__main__': - # print(f"Is CUDA available: {torch.cuda.is_available()}") - # # True - # print(f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}") - # Tesla T4 - api() diff --git a/spaces/epexVfeibi/Imagedeblurr/A-18e Working Acm.md b/spaces/epexVfeibi/Imagedeblurr/A-18e Working Acm.md deleted file mode 100644 index 6b867d58f334d00751698106b33aa2f6a52abc10..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/A-18e Working Acm.md +++ /dev/null @@ -1,6 +0,0 @@ -

      a-18e working acm


Download Zip: https://jinyurl.com/2uEpWL



- -F/A-18E Super Hornet Strike Fighter Simulation for FSX and P3D. ... The Superbug is the culmination of over a decade of work dating back to FS2004, ... a powerful external app called the Aircraft Configuration Manager (ACM), which may be ...
      -
      -
      -

      diff --git a/spaces/ercaronte/speech-to-speech-translation/README.md b/spaces/ercaronte/speech-to-speech-translation/README.md deleted file mode 100644 index 8d47c0b1818407436d25ac09b156fed34222953a..0000000000000000000000000000000000000000 --- a/spaces/ercaronte/speech-to-speech-translation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Speech To Speech Translation -emoji: 🏆 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fadyabila/Heart-Failure-Death-Prediction/main.py b/spaces/fadyabila/Heart-Failure-Death-Prediction/main.py deleted file mode 100644 index f1dab3dbda2d696310a86ad3968e18a230d50f0d..0000000000000000000000000000000000000000 --- a/spaces/fadyabila/Heart-Failure-Death-Prediction/main.py +++ /dev/null @@ -1,10 +0,0 @@ -import streamlit as st -import eda -import prediction - -navigation = st.sidebar.selectbox('Choose Page : ', ('EDA', 'Death Prediction')) - -if navigation == 'EDA': - eda.run() -else: - prediction.run() \ No newline at end of file diff --git a/spaces/falcondai/code-as-policies/sim.py b/spaces/falcondai/code-as-policies/sim.py deleted file mode 100644 index 901bd9c753bb6b9ff1ead200f5d9e8256231d5f6..0000000000000000000000000000000000000000 --- a/spaces/falcondai/code-as-policies/sim.py +++ /dev/null @@ -1,655 +0,0 @@ -import pybullet -from pybullet_utils.bullet_client import BulletClient -import pybullet_data -import threading -from time import sleep -import numpy as np -import os -from consts import BOUNDS, COLORS, PIXEL_SIZE, CORNER_POS -from shapely.geometry import box - - -# Gripper (Robotiq 2F85) code -class Robotiq2F85: - """Gripper handling for Robotiq 2F85.""" - - def __init__(self, robot, tool, p): - self.robot = robot - self.tool = tool - self._p = p - pos = [0.1339999999999999, -0.49199999999872496, 0.5] - rot = self._p.getQuaternionFromEuler([np.pi, 0, np.pi]) - urdf = 'robotiq_2f_85/robotiq_2f_85.urdf' - self.body = self._p.loadURDF(urdf, pos, rot) - self.n_joints = self._p.getNumJoints(self.body) - self.activated = False - - # Connect gripper base to robot tool. - self._p.createConstraint(self.robot, tool, self.body, 0, jointType=self._p.JOINT_FIXED, jointAxis=[0, 0, 0], parentFramePosition=[0, 0, 0], childFramePosition=[0, 0, -0.07], childFrameOrientation=self._p.getQuaternionFromEuler([0, 0, np.pi / 2])) - - # Set friction coefficients for gripper fingers. - for i in range(self._p.getNumJoints(self.body)): - self._p.changeDynamics(self.body, i, lateralFriction=10.0, spinningFriction=1.0, rollingFriction=1.0, frictionAnchor=True) - - # Start thread to handle additional gripper constraints. - self.motor_joint = 1 - self.constraints_thread = threading.Thread(target=self.step) - self.constraints_thread.daemon = True - self.constraints_thread.start() - - # Control joint positions by enforcing hard contraints on gripper behavior. - # Set one joint as the open/close motor joint (other joints should mimic). - def step(self): - while True: - try: - currj = [self._p.getJointState(self.body, i)[0] for i in range(self.n_joints)] - indj = [6, 3, 8, 5, 10] - targj = [currj[1], -currj[1], -currj[1], currj[1], currj[1]] - self._p.setJointMotorControlArray(self.body, indj, self._p.POSITION_CONTROL, targj, positionGains=np.ones(5)) - except: - return - sleep(0.001) - - # Close gripper fingers. 
- def activate(self): - self._p.setJointMotorControl2(self.body, self.motor_joint, self._p.VELOCITY_CONTROL, targetVelocity=1, force=10) - self.activated = True - - # Open gripper fingers. - def release(self): - self._p.setJointMotorControl2(self.body, self.motor_joint, self._p.VELOCITY_CONTROL, targetVelocity=-1, force=10) - self.activated = False - - # If activated and object in gripper: check object contact. - # If activated and nothing in gripper: check gripper contact. - # If released: check proximity to surface (disabled). - def detect_contact(self): - obj, _, ray_frac = self.check_proximity() - if self.activated: - empty = self.grasp_width() < 0.01 - cbody = self.body if empty else obj - if obj == self.body or obj == 0: - return False - return self.external_contact(cbody) - # else: - # return ray_frac < 0.14 or self.external_contact() - - # Return if body is in contact with something other than gripper - def external_contact(self, body=None): - if body is None: - body = self.body - pts = self._p.getContactPoints(bodyA=body) - pts = [pt for pt in pts if pt[2] != self.body] - return len(pts) > 0 # pylint: disable=g-explicit-length-test - - def check_grasp(self): - while self.moving(): - sleep(0.001) - success = self.grasp_width() > 0.01 - return success - - def grasp_width(self): - lpad = np.array(self._p.getLinkState(self.body, 4)[0]) - rpad = np.array(self._p.getLinkState(self.body, 9)[0]) - dist = np.linalg.norm(lpad - rpad) - 0.047813 - return dist - - def check_proximity(self): - ee_pos = np.array(self._p.getLinkState(self.robot, self.tool)[0]) - tool_pos = np.array(self._p.getLinkState(self.body, 0)[0]) - vec = (tool_pos - ee_pos) / np.linalg.norm((tool_pos - ee_pos)) - ee_targ = ee_pos + vec - ray_data = self._p.rayTest(ee_pos, ee_targ)[0] - obj, link, ray_frac = ray_data[0], ray_data[1], ray_data[2] - return obj, link, ray_frac - - -# Gym-style environment code -class PickPlaceEnv(): - - def __init__(self, render=False, high_res=False, high_frame_rate=False): - self.dt = 1/480 - self.sim_step = 0 - - # Configure and start PyBullet - # self._p = pybullet.connect(pybullet.DIRECT) - self._p = BulletClient(connection_mode=pybullet.DIRECT) - self._p.configureDebugVisualizer(self._p.COV_ENABLE_GUI, 0) - self._p.setPhysicsEngineParameter(enableFileCaching=0) - assets_path = os.path.dirname(os.path.abspath("")) - self._p.setAdditionalSearchPath(assets_path) - self._p.setAdditionalSearchPath(pybullet_data.getDataPath()) - self._p.setTimeStep(self.dt) - - self.home_joints = (np.pi / 2, -np.pi / 2, np.pi / 2, -np.pi / 2, 3 * np.pi / 2, 0) # Joint angles: (J0, J1, J2, J3, J4, J5). - self.home_ee_euler = (np.pi, 0, np.pi) # (RX, RY, RZ) rotation in Euler angles. - self.ee_link_id = 9 # Link ID of UR5 end effector. - self.tip_link_id = 10 # Link ID of gripper finger tips. - self.gripper = None - - self.render = render - self.high_res = high_res - self.high_frame_rate = high_frame_rate - - def reset(self, object_list): - self._p.resetSimulation(self._p.RESET_USE_DEFORMABLE_WORLD) - self._p.setGravity(0, 0, -9.8) - self.cache_video = [] - - # Temporarily disable rendering to load URDFs faster. - self._p.configureDebugVisualizer(self._p.COV_ENABLE_RENDERING, 0) - - # Add robot. - self._p.loadURDF("plane.urdf", [0, 0, -0.001]) - self.robot_id = self._p.loadURDF("ur5e/ur5e.urdf", [0, 0, 0], flags=self._p.URDF_USE_MATERIAL_COLORS_FROM_MTL) - self.ghost_id = self._p.loadURDF("ur5e/ur5e.urdf", [0, 0, -10]) # For forward kinematics. 
- self.joint_ids = [self._p.getJointInfo(self.robot_id, i) for i in range(self._p.getNumJoints(self.robot_id))] - self.joint_ids = [j[0] for j in self.joint_ids if j[2] == self._p.JOINT_REVOLUTE] - - # Move robot to home configuration. - for i in range(len(self.joint_ids)): - self._p.resetJointState(self.robot_id, self.joint_ids[i], self.home_joints[i]) - - # Add gripper. - if self.gripper is not None: - while self.gripper.constraints_thread.is_alive(): - self.constraints_thread_active = False - self.gripper = Robotiq2F85(self.robot_id, self.ee_link_id, self._p) - self.gripper.release() - - # Add workspace. - plane_shape = self._p.createCollisionShape(self._p.GEOM_BOX, halfExtents=[0.3, 0.3, 0.001]) - plane_visual = self._p.createVisualShape(self._p.GEOM_BOX, halfExtents=[0.3, 0.3, 0.001]) - plane_id = self._p.createMultiBody(0, plane_shape, plane_visual, basePosition=[0, -0.5, 0]) - self._p.changeVisualShape(plane_id, -1, rgbaColor=[0.2, 0.2, 0.2, 1.0]) - - # Load objects according to config. - self.object_list = object_list - self.obj_name_to_id = {} - obj_xyz = np.zeros((0, 3)) - for obj_name in object_list: - if ('block' in obj_name) or ('bowl' in obj_name): - - # Get random position 15cm+ from other objects. - while True: - rand_x = np.random.uniform(BOUNDS[0, 0] + 0.1, BOUNDS[0, 1] - 0.1) - rand_y = np.random.uniform(BOUNDS[1, 0] + 0.1, BOUNDS[1, 1] - 0.1) - rand_xyz = np.float32([rand_x, rand_y, 0.03]).reshape(1, 3) - if len(obj_xyz) == 0: - obj_xyz = np.concatenate((obj_xyz, rand_xyz), axis=0) - break - else: - nn_dist = np.min(np.linalg.norm(obj_xyz - rand_xyz, axis=1)).squeeze() - if nn_dist > 0.15: - obj_xyz = np.concatenate((obj_xyz, rand_xyz), axis=0) - break - - object_color = COLORS[obj_name.split(' ')[0]] - object_type = obj_name.split(' ')[1] - object_position = rand_xyz.squeeze() - if object_type == 'block': - object_shape = self._p.createCollisionShape(self._p.GEOM_BOX, halfExtents=[0.02, 0.02, 0.02]) - object_visual = self._p.createVisualShape(self._p.GEOM_BOX, halfExtents=[0.02, 0.02, 0.02]) - object_id = self._p.createMultiBody(0.01, object_shape, object_visual, basePosition=object_position) - elif object_type == 'bowl': - object_position[2] = 0 - object_id = self._p.loadURDF("bowl/bowl.urdf", object_position, useFixedBase=1) - self._p.changeVisualShape(object_id, -1, rgbaColor=object_color) - self.obj_name_to_id[obj_name] = object_id - - # Re-enable rendering. 
- self._p.configureDebugVisualizer(self._p.COV_ENABLE_RENDERING, 1) - - for _ in range(200): - self._p.stepSimulation() - - # record object positions at reset - self.init_pos = {name: self.get_obj_pos(name) for name in object_list} - - return self.get_observation() - - def servoj(self, joints): - """Move to target joint positions with position control.""" - self._p.setJointMotorControlArray( - bodyIndex=self.robot_id, - jointIndices=self.joint_ids, - controlMode=self._p.POSITION_CONTROL, - targetPositions=joints, - positionGains=[0.01]*6) - - def movep(self, position): - """Move to target end effector position.""" - joints = self._p.calculateInverseKinematics( - bodyUniqueId=self.robot_id, - endEffectorLinkIndex=self.tip_link_id, - targetPosition=position, - targetOrientation=self._p.getQuaternionFromEuler(self.home_ee_euler), - maxNumIterations=100) - self.servoj(joints) - - def get_ee_pos(self): - ee_xyz = np.float32(self._p.getLinkState(self.robot_id, self.tip_link_id)[0]) - return ee_xyz - - def step(self, action=None): - """Do pick and place motion primitive.""" - pick_pos, place_pos = action['pick'].copy(), action['place'].copy() - - # Set fixed primitive z-heights. - hover_xyz = np.float32([pick_pos[0], pick_pos[1], 0.2]) - if pick_pos.shape[-1] == 2: - pick_xyz = np.append(pick_pos, 0.025) - else: - pick_xyz = pick_pos - pick_xyz[2] = 0.025 - if place_pos.shape[-1] == 2: - place_xyz = np.append(place_pos, 0.15) - else: - place_xyz = place_pos - place_xyz[2] = 0.15 - - # Move to object. - ee_xyz = self.get_ee_pos() - while np.linalg.norm(hover_xyz - ee_xyz) > 0.01: - self.movep(hover_xyz) - self.step_sim_and_render() - ee_xyz = self.get_ee_pos() - - while np.linalg.norm(pick_xyz - ee_xyz) > 0.01: - self.movep(pick_xyz) - self.step_sim_and_render() - ee_xyz = self.get_ee_pos() - - # Pick up object. - self.gripper.activate() - for _ in range(240): - self.step_sim_and_render() - while np.linalg.norm(hover_xyz - ee_xyz) > 0.01: - self.movep(hover_xyz) - self.step_sim_and_render() - ee_xyz = self.get_ee_pos() - - for _ in range(50): - self.step_sim_and_render() - - # Move to place location. - while np.linalg.norm(place_xyz - ee_xyz) > 0.01: - self.movep(place_xyz) - self.step_sim_and_render() - ee_xyz = self.get_ee_pos() - - # Place down object. 
- while (not self.gripper.detect_contact()) and (place_xyz[2] > 0.03): - place_xyz[2] -= 0.001 - self.movep(place_xyz) - for _ in range(3): - self.step_sim_and_render() - self.gripper.release() - for _ in range(240): - self.step_sim_and_render() - place_xyz[2] = 0.2 - ee_xyz = self.get_ee_pos() - while np.linalg.norm(place_xyz - ee_xyz) > 0.01: - self.movep(place_xyz) - self.step_sim_and_render() - ee_xyz = self.get_ee_pos() - place_xyz = np.float32([0, -0.5, 0.2]) - while np.linalg.norm(place_xyz - ee_xyz) > 0.01: - self.movep(place_xyz) - self.step_sim_and_render() - ee_xyz = self.get_ee_pos() - - observation = self.get_observation() - reward = self.get_reward() - done = False - info = {} - return observation, reward, done, info - - def set_alpha_transparency(self, alpha: float) -> None: - for id in range(20): - visual_shape_data = self._p.getVisualShapeData(id) - for i in range(len(visual_shape_data)): - object_id, link_index, _, _, _, _, _, rgba_color = visual_shape_data[i] - rgba_color = list(rgba_color[0:3]) + [alpha] - self._p.changeVisualShape( - self.robot_id, linkIndex=i, rgbaColor=rgba_color) - self._p.changeVisualShape( - self.gripper.body, linkIndex=i, rgbaColor=rgba_color) - - def step_sim_and_render(self): - self._p.stepSimulation() - self.sim_step += 1 - - interval = 40 if self.high_frame_rate else 60 - # Render current image at 8 FPS. - if self.sim_step % interval == 0 and self.render: - self.cache_video.append(self.get_camera_image()) - - def get_camera_image(self): - if not self.high_res: - image_size = (240, 240) - intrinsics = (120., 0, 120., 0, 120., 120., 0, 0, 1) - else: - image_size=(360, 360) - intrinsics=(180., 0, 180., 0, 180., 180., 0, 0, 1) - color, _, _, _, _ = self.render_image(image_size, intrinsics) - return color - - def get_reward(self): - return None - - def get_observation(self): - observation = {} - - # Render current image. - color, depth, position, orientation, intrinsics = self.render_image() - - # Get heightmaps and colormaps. - points = self.get_pointcloud(depth, intrinsics) - position = np.float32(position).reshape(3, 1) - rotation = self._p.getMatrixFromQuaternion(orientation) - rotation = np.float32(rotation).reshape(3, 3) - transform = np.eye(4) - transform[:3, :] = np.hstack((rotation, position)) - points = self.transform_pointcloud(points, transform) - heightmap, colormap, xyzmap = self.get_heightmap(points, color, BOUNDS, PIXEL_SIZE) - - observation["image"] = colormap - observation["xyzmap"] = xyzmap - - return observation - - def render_image(self, image_size=(720, 720), intrinsics=(360., 0, 360., 0, 360., 360., 0, 0, 1)): - - # Camera parameters. - position = (0, -0.85, 0.4) - orientation = (np.pi / 4 + np.pi / 48, np.pi, np.pi) - orientation = self._p.getQuaternionFromEuler(orientation) - zrange = (0.01, 10.) - noise=True - - # OpenGL camera settings. - lookdir = np.float32([0, 0, 1]).reshape(3, 1) - updir = np.float32([0, -1, 0]).reshape(3, 1) - rotation = self._p.getMatrixFromQuaternion(orientation) - rotm = np.float32(rotation).reshape(3, 3) - lookdir = (rotm @ lookdir).reshape(-1) - updir = (rotm @ updir).reshape(-1) - lookat = position + lookdir - focal_len = intrinsics[0] - znear, zfar = (0.01, 10.) 
- viewm = self._p.computeViewMatrix(position, lookat, updir) - fovh = (image_size[0] / 2) / focal_len - fovh = 180 * np.arctan(fovh) * 2 / np.pi - - # Notes: 1) FOV is vertical FOV 2) aspect must be float - aspect_ratio = image_size[1] / image_size[0] - projm = self._p.computeProjectionMatrixFOV(fovh, aspect_ratio, znear, zfar) - - # Render with OpenGL camera settings. - _, _, color, depth, segm = self._p.getCameraImage( - width=image_size[1], - height=image_size[0], - viewMatrix=viewm, - projectionMatrix=projm, - shadow=1, - flags=self._p.ER_SEGMENTATION_MASK_OBJECT_AND_LINKINDEX, - renderer=self._p.ER_BULLET_HARDWARE_OPENGL) - - # Get color image. - color_image_size = (image_size[0], image_size[1], 4) - color = np.array(color, dtype=np.uint8).reshape(color_image_size) - color = color[:, :, :3] # remove alpha channel - if noise: - color = np.int32(color) - color += np.int32(np.random.normal(0, 3, color.shape)) - color = np.uint8(np.clip(color, 0, 255)) - - # Get depth image. - depth_image_size = (image_size[0], image_size[1]) - zbuffer = np.float32(depth).reshape(depth_image_size) - depth = (zfar + znear - (2 * zbuffer - 1) * (zfar - znear)) - depth = (2 * znear * zfar) / depth - if noise: - depth += np.random.normal(0, 0.003, depth.shape) - - intrinsics = np.float32(intrinsics).reshape(3, 3) - return color, depth, position, orientation, intrinsics - - def get_pointcloud(self, depth, intrinsics): - """Get 3D pointcloud from perspective depth image. - Args: - depth: HxW float array of perspective depth in meters. - intrinsics: 3x3 float array of camera intrinsics matrix. - Returns: - points: HxWx3 float array of 3D points in camera coordinates. - """ - height, width = depth.shape - xlin = np.linspace(0, width - 1, width) - ylin = np.linspace(0, height - 1, height) - px, py = np.meshgrid(xlin, ylin) - px = (px - intrinsics[0, 2]) * (depth / intrinsics[0, 0]) - py = (py - intrinsics[1, 2]) * (depth / intrinsics[1, 1]) - points = np.float32([px, py, depth]).transpose(1, 2, 0) - return points - - def transform_pointcloud(self, points, transform): - """Apply rigid transformation to 3D pointcloud. - Args: - points: HxWx3 float array of 3D points in camera coordinates. - transform: 4x4 float array representing a rigid transformation matrix. - Returns: - points: HxWx3 float array of transformed 3D points. - """ - padding = ((0, 0), (0, 0), (0, 1)) - homogen_points = np.pad(points.copy(), padding, - 'constant', constant_values=1) - for i in range(3): - points[Ellipsis, i] = np.sum(transform[i, :] * homogen_points, axis=-1) - return points - - def get_heightmap(self, points, colors, bounds, pixel_size): - """Get top-down (z-axis) orthographic heightmap image from 3D pointcloud. - Args: - points: HxWx3 float array of 3D points in world coordinates. - colors: HxWx3 uint8 array of values in range 0-255 aligned with points. - bounds: 3x2 float array of values (rows: X,Y,Z; columns: min,max) defining - region in 3D space to generate heightmap in world coordinates. - pixel_size: float defining size of each pixel in meters. - Returns: - heightmap: HxW float array of height (from lower z-bound) in meters. - colormap: HxWx3 uint8 array of backprojected color aligned with heightmap. - xyzmap: HxWx3 float array of XYZ points in world coordinates. 
- """ - width = int(np.round((bounds[0, 1] - bounds[0, 0]) / pixel_size)) - height = int(np.round((bounds[1, 1] - bounds[1, 0]) / pixel_size)) - heightmap = np.zeros((height, width), dtype=np.float32) - colormap = np.zeros((height, width, colors.shape[-1]), dtype=np.uint8) - xyzmap = np.zeros((height, width, 3), dtype=np.float32) - - # Filter out 3D points that are outside of the predefined bounds. - ix = (points[Ellipsis, 0] >= bounds[0, 0]) & (points[Ellipsis, 0] < bounds[0, 1]) - iy = (points[Ellipsis, 1] >= bounds[1, 0]) & (points[Ellipsis, 1] < bounds[1, 1]) - iz = (points[Ellipsis, 2] >= bounds[2, 0]) & (points[Ellipsis, 2] < bounds[2, 1]) - valid = ix & iy & iz - points = points[valid] - colors = colors[valid] - - # Sort 3D points by z-value, which works with array assignment to simulate - # z-buffering for rendering the heightmap image. - iz = np.argsort(points[:, -1]) - points, colors = points[iz], colors[iz] - px = np.int32(np.floor((points[:, 0] - bounds[0, 0]) / pixel_size)) - py = np.int32(np.floor((points[:, 1] - bounds[1, 0]) / pixel_size)) - px = np.clip(px, 0, width - 1) - py = np.clip(py, 0, height - 1) - heightmap[py, px] = points[:, 2] - bounds[2, 0] - for c in range(colors.shape[-1]): - colormap[py, px, c] = colors[:, c] - xyzmap[py, px, c] = points[:, c] - colormap = colormap[::-1, :, :] # Flip up-down. - xv, yv = np.meshgrid(np.linspace(BOUNDS[0, 0], BOUNDS[0, 1], height), - np.linspace(BOUNDS[1, 0], BOUNDS[1, 1], width)) - xyzmap[:, :, 0] = xv - xyzmap[:, :, 1] = yv - xyzmap = xyzmap[::-1, :, :] # Flip up-down. - heightmap = heightmap[::-1, :] # Flip up-down. - return heightmap, colormap, xyzmap - - def on_top_of(self, obj_a, obj_b): - """ - check if obj_a is on top of obj_b - condition 1: l2 distance on xy plane is less than a threshold - condition 2: obj_a is higher than obj_b - """ - obj_a_pos = self.get_obj_pos(obj_a) - obj_b_pos = self.get_obj_pos(obj_b) - xy_dist = np.linalg.norm(obj_a_pos[:2] - obj_b_pos[:2]) - if obj_b in CORNER_POS: - is_near = xy_dist < 0.06 - return is_near - elif 'bowl' in obj_b: - is_near = xy_dist < 0.06 - is_higher = obj_a_pos[2] > obj_b_pos[2] - return is_near and is_higher - else: - is_near = xy_dist < 0.04 - is_higher = obj_a_pos[2] > obj_b_pos[2] - return is_near and is_higher - - def get_obj_id(self, obj_name): - try: - if obj_name in self.obj_name_to_id: - obj_id = self.obj_name_to_id[obj_name] - else: - obj_name = obj_name.replace('circle', 'bowl').replace('square', 'block').replace('small', '').strip() - obj_id = self.obj_name_to_id[obj_name] - return obj_id - except: - raise Exception('Object name "{}" not found'.format(obj_name)) - - def get_obj_pos(self, obj_name): - obj_name = obj_name.replace('the', '').replace('_', ' ').strip() - if obj_name in CORNER_POS: - position = np.float32(np.array(CORNER_POS[obj_name])) - else: - pick_id = self.get_obj_id(obj_name) - pose = self._p.getBasePositionAndOrientation(pick_id) - position = np.float32(pose[0]) - return position - - def get_bounding_box(self, obj_name): - obj_id = self.get_obj_id(obj_name) - return self._p.getAABB(obj_id) - - -class LMP_wrapper(): - - def __init__(self, env, cfg, render=False): - self.env = env - self._cfg = cfg - self.object_names = list(self._cfg['env']['init_objs']) - - self._min_xy = np.array(self._cfg['env']['coords']['bottom_left']) - self._max_xy = np.array(self._cfg['env']['coords']['top_right']) - self._range_xy = self._max_xy - self._min_xy - - self._table_z = self._cfg['env']['coords']['table_z'] - self.render = render - - def 
is_obj_visible(self, obj_name): - return obj_name in self.object_names - - def get_obj_names(self): - return self.object_names[::] - - def denormalize_xy(self, pos_normalized): - return pos_normalized * self._range_xy + self._min_xy - - def get_corner_positions(self): - unit_square = box(0, 0, 1, 1) - normalized_corners = np.array(list(unit_square.exterior.coords))[:4] - corners = np.array(([self.denormalize_xy(corner) for corner in normalized_corners])) - return corners - - def get_side_positions(self): - side_xs = np.array([0, 0.5, 0.5, 1]) - side_ys = np.array([0.5, 0, 1, 0.5]) - normalized_side_positions = np.c_[side_xs, side_ys] - side_positions = np.array(([self.denormalize_xy(corner) for corner in normalized_side_positions])) - return side_positions - - def get_obj_pos(self, obj_name): - # return the xy position of the object in robot base frame - return self.env.get_obj_pos(obj_name)[:2] - - def get_obj_position_np(self, obj_name): - return self.get_pos(obj_name) - - def get_bbox(self, obj_name): - # return the axis-aligned object bounding box in robot base frame (not in pixels) - # the format is (min_x, min_y, max_x, max_y) - bbox = self.env.get_bounding_box(obj_name) - return bbox - - def get_color(self, obj_name): - for color, rgb in COLORS.items(): - if color in obj_name: - return rgb - - def pick_place(self, pick_pos, place_pos): - pick_pos_xyz = np.r_[pick_pos, [self._table_z]] - place_pos_xyz = np.r_[place_pos, [self._table_z]] - pass - - def put_first_on_second(self, arg1, arg2): - # put the object with obj_name on top of target - # target can either be another object name, or it can be an x-y position in robot base frame - pick_pos = self.get_obj_pos(arg1) if isinstance(arg1, str) else arg1 - place_pos = self.get_obj_pos(arg2) if isinstance(arg2, str) else arg2 - self.env.step(action={'pick': pick_pos, 'place': place_pos}) - - def get_robot_pos(self): - # return robot end-effector xy position in robot base frame - return self.env.get_ee_pos() - - def goto_pos(self, position_xy): - # move the robot end-effector to the desired xy position while maintaining same z - ee_xyz = self.env.get_ee_pos() - position_xyz = np.concatenate([position_xy, ee_xyz[-1]]) - while np.linalg.norm(position_xyz - ee_xyz) > 0.01: - self.env.movep(position_xyz) - self.env.step_sim_and_render() - ee_xyz = self.env.get_ee_pos() - - def follow_traj(self, traj): - for pos in traj: - self.goto_pos(pos) - - def get_corner_positions(self): - normalized_corners = np.array([ - [0, 1], - [1, 1], - [0, 0], - [1, 0] - ]) - return np.array(([self.denormalize_xy(corner) for corner in normalized_corners])) - - def get_side_positions(self): - normalized_sides = np.array([ - [0.5, 1], - [1, 0.5], - [0.5, 0], - [0, 0.5] - ]) - return np.array(([self.denormalize_xy(side) for side in normalized_sides])) - - def get_corner_name(self, pos): - corner_positions = self.get_corner_positions() - corner_idx = np.argmin(np.linalg.norm(corner_positions - pos, axis=1)) - return ['top left corner', 'top right corner', 'bottom left corner', 'botom right corner'][corner_idx] - - def get_side_name(self, pos): - side_positions = self.get_side_positions() - side_idx = np.argmin(np.linalg.norm(side_positions - pos, axis=1)) - return ['top side', 'right side', 'bottom side', 'left side'][side_idx] \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/2ks Tamil Love Bgm Ringtones Download the Latest and Most Popular Songs.md b/spaces/fatiXbelha/sd/2ks Tamil Love Bgm Ringtones Download the Latest and Most Popular Songs.md deleted 
file mode 100644 index f7a0ea8cae9e0c8118dd40b778f36cd5235f8259..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/2ks Tamil Love Bgm Ringtones Download the Latest and Most Popular Songs.md +++ /dev/null @@ -1,144 +0,0 @@ -
      -

      2ks Tamil Love BGM Ringtone Download: How to Enjoy the Best Love Background Music on Your Phone

      -

      If you are a fan of Tamil movies and music, you might have heard of the term "Tamil love BGM". BGM stands for background music, and it refers to the instrumental or vocal tracks that accompany the scenes and emotions in a film. Tamil love BGM is a specific genre of BGM that expresses the feelings of romance, passion, and affection in a melodious and captivating way.

      -

      Tamil love BGM is very popular among the fans of Kollywood, the Tamil film industry, as well as the lovers of Indian music in general. Many people enjoy listening to Tamil love BGM songs, watching Tamil love BGM movies, and downloading Tamil love BGM ringtones for their phones. In this article, we will explore the meaning and history of Tamil love BGM, the benefits and features of Tamil love BGM ringtones, and the steps and tips to download them. By the end of this article, you will be able to enjoy the best love background music on your phone.

      -

      2ks tamil love bgm ringtone download


      Download File ★★★★★ https://urllie.com/2uNzML



      -

      The Meaning and History of Tamil Love BGM

      -

      BGM is an acronym for background music, which is also known as incidental music or mood music. It is a type of music that is composed or selected to support or enhance the atmosphere, mood, or theme of a film, television show, video game, or other media. BGM can be instrumental or vocal, original or borrowed, diegetic or non-diegetic. Diegetic music is the music that is heard by the characters in the story, while non-diegetic music is the music that is heard only by the audience.

      -

      Tamil love BGM is a subgenre of BGM that focuses on expressing the emotions and feelings of love, romance, passion, and affection in a film. It can be used to create contrast, tension, suspense, or relief in different situations involving romantic relationships. It can also be used to highlight or emphasize certain moments, such as first meetings, confessions, kisses, breakups, reunions, etc. Tamil love BGM can be composed of various musical elements, such as melodies, harmonies, rhythms, instruments, vocals, lyrics, etc.

      -

Tamil love BGM has a long and rich history that dates back to the early days of Indian cinema. Some of the pioneers of Tamil love BGM include composers such as Ilaiyaraaja, A.R. Rahman, Harris Jayaraj, Yuvan Shankar Raja, Anirudh Ravichander, G.V. Prakash Kumar, etc. They have created some of the most memorable and iconic Tamil love BGM songs and movies that have influenced generations of fans and musicians.

      The Benefits and Features of Tamil Love BGM Ringtones

      -

      Tamil love BGM ringtones are not only pleasant to listen to, but also have many benefits and features that make them a great choice for your phone. Here are some of them:

      -
        -
      • Tamil love BGM ringtones can help you express your personality, mood, and taste. You can choose from a variety of genres and styles, such as classical, folk, pop, rock, jazz, etc. You can also choose from different themes and moods, such as romantic, sad, happy, funny, etc. You can customize your phone with Tamil love BGM ringtones that suit your preferences and interests.
      • -
      • Tamil love BGM ringtones can help you impress and attract others. You can use Tamil love BGM ringtones to show your love and appreciation for Tamil culture and music. You can also use Tamil love BGM ringtones to impress and attract your crush, partner, friends, family, or colleagues. You can share your favorite Tamil love BGM ringtones with them and bond over your common passion.
      • -
      • Tamil love BGM ringtones can help you relax and enjoy. You can use Tamil love BGM ringtones to create a soothing and enjoyable atmosphere for yourself. You can listen to Tamil love BGM ringtones to relax your mind and body, to reduce stress and anxiety, to improve your mood and energy, to enhance your creativity and productivity, etc. You can also use Tamil love BGM ringtones to enjoy the beauty and artistry of Tamil music.
      • -
      -

      Tamil love BGM ringtones have many characteristics and qualities that make them stand out from other types of ringtones. Here are some of them:

      -
        -
      • Tamil love BGM ringtones are melodious and captivating. They have catchy and memorable tunes that can easily stick in your mind. They have beautiful and expressive vocals that can touch your heart. They have rich and diverse instruments that can create a harmonious and balanced sound.
      • -
      • Tamil love BGM ringtones are meaningful and emotional. They have meaningful and poetic lyrics that can convey deep and complex emotions. They have emotional and passionate vocals that can evoke various feelings in you. They have dynamic and expressive instruments that can match the mood and tone of the lyrics.
      • -
      • Tamil love BGM ringtones are original and creative. They have original and unique compositions that can showcase the talent and skill of the composers. They have creative and innovative vocals that can add flavor and personality to the songs. They have varied and versatile instruments that can create a distinctive and diverse sound.
      • -
      -

      Tamil love BGM ringtones are available from many sources and examples. Here are some of them:

| Source | Example |
| --- | --- |
| Websites | [Zedge], [MobCup], [Tones6], etc. |
| Apps | [Ringtone Maker], [Zedge Ringtones & Wallpapers], [Tamil Ringtones], etc. |
| Songs | "Munbe Vaa" from Sillunu Oru Kaadhal, "Enna Solla Pogirai" from Kandukondain Kandukondain, "Kannazhaga" from 3, etc. |
| Movies | Vinnaithaandi Varuvaayaa, Minnale, Kaadhal Kondein, etc. |

      The Steps and Tips to Download Tamil Love BGM Ringtones

      -

      Downloading Tamil love BGM ringtones is not a difficult task, but it requires some steps and tips to ensure a smooth and safe process. Here are some of them:

      -

      2ks tamil love bgm ringtone download mp3
      -2ks tamil love bgm ringtone download free
      -2ks tamil love bgm ringtone download zedge
      -2ks tamil love bgm ringtone download mobcup
      -2ks tamil love bgm ringtone download pagalworld
      -2ks tamil love bgm ringtone download masstamilan
      -2ks tamil love bgm ringtone download kuttyweb
      -2ks tamil love bgm ringtone download isaimini
      -2ks tamil love bgm ringtone download starmusiq
      -2ks tamil love bgm ringtone download naa songs
      -2ks tamil love bgm ringtone download for iphone
      -2ks tamil love bgm ringtone download for android
      -2ks tamil love bgm ringtone download online
      -2ks tamil love bgm ringtone download website
      -2ks tamil love bgm ringtone download app
      -2ks tamil love bgm ringtone download best
      -2ks tamil love bgm ringtone download latest
      -2ks tamil love bgm ringtone download new
      -2ks tamil love bgm ringtone download old
      -2ks tamil love bgm ringtone download romantic
      -2ks tamil love bgm ringtone download sad
      -2ks tamil love bgm ringtone download happy
      -2ks tamil love bgm ringtone download funny
      -2ks tamil love bgm ringtone download cute
      -2ks tamil love bgm ringtone download melody
      -2ks tamil love bgm ringtone download instrumental
      -2ks tamil love bgm ringtone download flute
      -2ks tamil love bgm ringtone download guitar
      -2ks tamil love bgm ringtone download piano
      -2ks tamil love bgm ringtone download violin
      -2ks tamil love bgm ringtone download remix
      -2ks tamil love bgm ringtone download mashup
      -2ks tamil love bgm ringtone download cover
      -2ks tamil love bgm ringtone download original
      -2ks tamil love bgm ringtone download movie name
      -2ks tamil love bgm ringtone download song name
      -2ks tamil love bgm ringtone download singer name
      -2ks tamil love bgm ringtone download composer name
      -2ks tamil love bgm ringtone download lyrics
      -2ks tamil love bgm ringtone download video
      -2ks tamil love bgm ringtone download status
      -2ks tamil love bgm ringtone download whatsapp status
      -2ks tamil love bgm ringtone download facebook status
      -2ks tamil love bgm ringtone download instagram status
      -2ks tamil love bgm ringtone download tiktok status
      -2ks tamil love bgm ringtone download youtube status
      -2ks tamil love bgm ringtone download review
      -2ks tamil love bgm ringtone download rating
      -2ks tamil love bgm ringtone download feedback

      -
        -
      1. The first step is to choose a source and a platform to download Tamil love BGM ringtones. You can use websites, apps, or songs as sources, and you can use your computer, phone, or tablet as platforms. You can also use a combination of them, such as downloading from a website to your computer and then transferring to your phone.
      2. -
      3. The second step is to browse and select the Tamil love BGM ringtones that you like. You can use the search function, the categories, the ratings, the reviews, or the recommendations to find the Tamil love BGM ringtones that suit your taste and preference. You can also preview the Tamil love BGM ringtones before downloading them.
      4. -
      5. The third step is to download and save the Tamil love BGM ringtones to your device. You can use the download button, the QR code, the link, or the email to download the Tamil love BGM ringtones. You can also choose the format, the quality, and the location of the Tamil love BGM ringtones.
      6. -
      7. The fourth step is to set and enjoy the Tamil love BGM ringtones on your phone. You can use the settings, the contacts, the profiles, or the apps to set the Tamil love BGM ringtones as your default ringtone, your contact ringtone, your notification ringtone, or your alarm ringtone. You can also adjust the volume, the duration, and the vibration of the Tamil love BGM ringtones.
      8. -
      -

      Downloading Tamil love BGM ringtones also requires some precautions and recommendations to ensure a secure and satisfying experience. Here are some of them:

      -
        -
      • Make sure that you download Tamil love BGM ringtones from reliable and reputable sources. Avoid downloading from unknown or suspicious sources that may contain viruses, malware, spyware, or other harmful elements.
      • -
      • Make sure that you download Tamil love BGM ringtones that are compatible and suitable for your device. Avoid downloading from incompatible or unsuitable sources that may cause errors, glitches, crashes, or other issues.
      • -
      • Make sure that you download Tamil love BGM ringtones that are legal and ethical. Avoid downloading from illegal or unethical sources that may violate the copyrights, trademarks, or other rights of the composers, singers, producers, or owners of the Tamil love BGM songs.
      • -
      • Make sure that you download Tamil love BGM ringtones that are free or affordable. Avoid downloading from expensive or unreasonable sources that may charge you hidden fees, subscriptions, or other costs.
      • -
      -

      Downloading Tamil love BGM ringtones also offers some alternatives and options to enhance your enjoyment and convenience. Here are some of them:

      -
        -
      • You can use online converters or editors to convert or edit the Tamil love BGM songs into ringtones. You can use tools such as [Online Audio Converter], [MP3 Cutter], [Ringtone Maker], etc.
      • -
      • You can use online streaming or sharing services to listen or share the Tamil love BGM songs without downloading them. You can use platforms such as [YouTube], [Spotify], [SoundCloud], etc.
      • -
      • You can use online generators or creators to create your own Tamil love BGM ringtones from scratch. You can use software such as [GarageBand], [FL Studio], [Audacity], etc.
      • -

      Conclusion: How to Make the Most of Tamil Love BGM Ringtones

      -

      In conclusion, Tamil love BGM ringtones are a great way to enjoy the best love background music on your phone. They have many benefits and features that can help you express your personality, mood, and taste, impress and attract others, and relax and enjoy. They also have many characteristics and qualities that make them melodious, captivating, meaningful, emotional, original, and creative. They are available from many sources and examples, such as websites, apps, songs, and movies.

      -

      To download Tamil love BGM ringtones, you need to follow some steps and tips to ensure a smooth and safe process. You need to choose a source and a platform, browse and select the ringtones, download and save them to your device, and set and enjoy them on your phone. You also need to take some precautions and recommendations to ensure a secure and satisfying experience. You need to download from reliable, compatible, legal, and free sources. You also have some alternatives and options to enhance your enjoyment and convenience. You can use online converters, editors, streaming services, sharing services, generators, or creators.

      -

      We hope that this article has helped you learn more about Tamil love BGM ringtones and how to download them. If you are interested in Tamil love BGM ringtones, we invite you to try them out for yourself and see how they can make your phone more fun and romantic. We also encourage you to explore more Tamil love BGM songs and movies and discover the beauty and artistry of Tamil music.

      -

      FAQs: Frequently Asked Questions about Tamil Love BGM Ringtones

      -

      Here are some of the most common questions that people ask about Tamil love BGM ringtones:

      -

      Q1: What are the best websites to download Tamil love BGM ringtones?

      -

      A1: There are many websites that offer Tamil love BGM ringtones for free or for a fee. Some of the best websites are [Zedge], [MobCup], [Tones6], etc. These websites have a large collection of Tamil love BGM ringtones from various genres, styles, themes, and moods. They also have user-friendly interfaces, easy download options, high-quality formats, and positive reviews.

      -

      Q2: What are the best apps to download Tamil love BGM ringtones?

      -

      A2: There are many apps that allow you to download Tamil love BGM ringtones directly to your phone. Some of the best apps are [Ringtone Maker], [Zedge Ringtones & Wallpapers], [Tamil Ringtones], etc. These apps have a wide range of Tamil love BGM ringtones from different sources, such as songs, movies, albums, etc. They also have user-friendly features, such as previewing, editing, setting, sharing, etc.

      -

      Q3: What are the best Tamil love BGM songs and movies?

      -

      A3: There are many Tamil love BGM songs and movies that have become classics and favorites among the fans of Tamil music. Some of the best Tamil love BGM songs are "Munbe Vaa" from Sillunu Oru Kaadhal, "Enna Solla Pogirai" from Kandukondain Kandukondain, "Kannazhaga" from 3, etc. Some of the best Tamil love BGM movies are Vinnaithaandi Varuvaayaa, Minnale, Kaadhal Kondein, etc.

      -

      Q4: How to set Tamil love BGM ringtones on your phone?

      -

      A4: To set Tamil love BGM ringtones on your phone, you need to follow these steps:

      -
        -
      1. Download the Tamil love BGM ringtone that you want from a website or an app.
      2. -
      3. Go to the settings of your phone and select the sound option.
      4. -
      5. Select the ringtone option and browse for the downloaded Tamil love BGM ringtone.
      6. -
      7. Select the downloaded Tamil love BGM ringtone as your default ringtone or assign it to a specific contact.
      8. -
      9. Enjoy your new Tamil love BGM ringtone on your phone.
      10. -
      -

      Q5: How to create your own Tamil love BGM ringtones?

      -

      A5: To create your own Tamil love BGM ringtones, you need to follow these steps:

      -
          -
        1. Choose a Tamil love BGM song that you want to use as the base for your ringtone. You can use your own collection, a streaming service, or a download website.
        2. -
3. Use an online converter or an editor to convert the Tamil love BGM song into a ringtone format, such as MP3, M4R, OGG, etc. You can use tools such as [Online Audio Converter], [MP3 Cutter], [Ringtone Maker], etc., or a short script like the one sketched after this list.
        4. -
        5. Use the same or a different online converter or editor to edit the Tamil love BGM ringtone according to your preference. You can trim, crop, fade, loop, merge, split, etc. the Tamil love BGM ringtone.
        6. -
        7. Download and save the edited Tamil love BGM ringtone to your device. You can use the same or a different website or app that you used to download the Tamil love BGM song.
        8. -
        9. Set and enjoy your own Tamil love BGM ringtone on your phone. You can use the same steps that you used to set a downloaded Tamil love BGM ringtone.
        10. -
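If you would rather do the convert-and-trim steps offline instead of with the online tools named above, a few lines of script can handle them. This is only an illustrative sketch: it assumes the pydub library and ffmpeg are installed on your computer, and the file names are placeholders rather than anything from this article.

```python
# Minimal sketch: trim a 30-second clip out of a song you own and export it as a ringtone.
# Assumes: pip install pydub, plus ffmpeg available on the system (needed for MP3 support).
from pydub import AudioSegment

song = AudioSegment.from_mp3("song.mp3")      # load the full track (placeholder file name)
clip = song[15_000:45_000]                    # keep the part from 0:15 to 0:45; pydub slices are in milliseconds
clip = clip.fade_in(1_000).fade_out(2_000)    # soften the start and end of the ringtone
clip.export("ringtone.mp3", format="mp3")     # save in a ringtone-friendly format
```

After exporting, you can transfer the file to your phone and set it as a ringtone using the same steps described in Q4.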

        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/AI Enlarger Pro MOD APK The Best App for Enlarging and Enhancing Images.md b/spaces/fatiXbelha/sd/AI Enlarger Pro MOD APK The Best App for Enlarging and Enhancing Images.md deleted file mode 100644 index 4541e90e103c3e3cc8d3340e1023ad96146568eb..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/AI Enlarger Pro MOD APK The Best App for Enlarging and Enhancing Images.md +++ /dev/null @@ -1,122 +0,0 @@ -
        -

        AI Image Enlarger Pro Mod APK Download: How to Enhance Your Photos and Anime Images with Artificial Intelligence

        -

        Have you ever wanted to enlarge your photos and anime images without losing quality or detail? Do you have pixelated or blurry images that you want to fix and improve? If yes, then you need AI Image Enlarger, an app that uses artificial intelligence to upscale and enhance your images. In this article, we will tell you everything you need to know about AI Image Enlarger Pro Mod APK, a modified version of the app that unlocks all the premium features for free. We will also show you how to download and install it on your Android device, and how to use it to make your photos and anime images look amazing.

        -

        ai image enlarger pro mod apk download


        DOWNLOAD ⚹⚹⚹ https://urllie.com/2uNEoH



        -

        Features of AI Image Enlarger Pro Mod APK

        -

        AI Image Enlarger is an app that uses advanced algorithms and neural networks to increase the size and quality of your photos and anime images. It can enlarge your images by 200%, 400%, or 800% without affecting the quality or clarity. It can also fix blurry or noisy images, remove artifacts, sharpen edges, and enhance colors. Here are some of the features of AI Image Enlarger Pro Mod APK:

        -
          -
        • It is free to download and use, no registration or subscription required.
        • -
        • It has no ads or watermarks on the output images.
        • -
        • It supports various image formats, such as JPG, PNG, BMP, GIF, etc.
        • -
        • It has four enlargement modes: Artwork, Photo, Face, and Custom. You can choose the best mode for your image type.
        • -
        • It has a batch processing feature that allows you to enlarge multiple images at once.
        • -
        • It has a preview feature that lets you compare the original and enlarged images before saving them.
        • -
        • It has a cloud service that enables you to upload and process your images online without using your device's resources.
        • -
        -

        How to download and install AI Image Enlarger Pro Mod APK

        -

        If you want to enjoy all the benefits of AI Image Enlarger Pro Mod APK, you need to download and install it on your Android device. Here are the steps to do so:

        -
          -
1. Go to [AI Enlarger MOD APK (Pro Unlocked) 2.8.4 - APKMB.Com] and click on the download button.
        2. -
        3. Wait for the download to finish and then open the downloaded file.
        4. -
        5. If you see a warning message that says "Install blocked", go to your device's settings and enable "Unknown sources" under security options.
        6. -
        7. Tap on "Install" and wait for the installation to complete.
        8. -
        9. Launch the app and grant it the necessary permissions.
        10. -
        -

        How to use AI Image Enlarger Pro Mod APK to upscale and improve your photos and anime images

        -

        Now that you have installed AI Image Enlarger Pro Mod APK on your device, you can start using it to enhance your photos and anime images. Here are the steps to do so:

        -
          -
        1. Open the app and tap on the "+" button to select an image from your gallery or camera.
        2. -
        3. Choose the enlargement mode that suits your image type: Artwork, Photo, Face, or Custom.
        4. -
        5. Select the enlargement factor that you want: 2x, 4x, or 8x.
        6. -
        7. If you want to use the cloud service, tap on the cloud icon and sign in with your Google account. Otherwise, tap on the device icon to process your image locally.
        8. -
        9. Wait for the app to process your image and then tap on the preview icon to see the difference between the original and enlarged images.
        10. -
        11. If you are satisfied with the result, tap on the save icon to save your image to your device or share it with others.
        12. -
        -

        Pros and cons of AI Image Enlarger Pro Mod APK

        -

        AI Image Enlarger Pro Mod APK is a powerful and useful app that can help you improve your photos and anime images. However, like any other app, it has its pros and cons. Here are some of them:

        -

        Pros

        -
          -
        • It is easy to use and has a user-friendly interface.
        • -
        • It can enlarge your images by up to 800% without losing quality or detail.
        • -
        • It can fix blurry or noisy images, remove artifacts, sharpen edges, and enhance colors.
        • -
        • It has four enlargement modes that cater to different image types.
        • -
        • It has a batch processing feature that saves you time and effort.
        • -
        • It has a cloud service that offers faster and better processing.
        • -
        • It is free to download and use, no ads or watermarks.
        • -
        -

        Cons

        -
          -
        • It requires an internet connection to use the cloud service.
        • -
        • It may not work well on some devices or images.
        • -
        • It may consume a lot of battery or memory when processing large or multiple images.
        • -
        -

        Conclusion: Is AI Image Enlarger Pro Mod APK worth it?

        -

        If you are looking for an app that can help you enlarge and enhance your photos and anime images, AI Image Enlarger Pro Mod APK is a great option. It uses artificial intelligence to upscale and improve your images without affecting the quality or clarity. It has many features and benefits that make it stand out from other similar apps. It is also free to download and use, no ads or watermarks. However, it also has some drawbacks, such as requiring an internet connection for the cloud service, not working well on some devices or images, and consuming a lot of battery or memory. Therefore, you should weigh the pros and cons before deciding whether to download and install it on your device. We hope this article has helped you learn more about AI Image Enlarger Pro Mod APK and how to use it. If you have any questions or feedback, feel free to leave a comment below.

        -

        ai image enlarger pro mod apk free download
        -download ai image enlarger pro mod apk latest version
        -ai image enlarger pro mod apk for android
        -how to install ai image enlarger pro mod apk
        -ai image enlarger pro mod apk unlocked features
        -ai image enlarger pro mod apk premium
        -ai image enlarger pro mod apk no watermark
        -ai image enlarger pro mod apk 2023
        -ai image enlarger pro mod apk 4k 8k 16k resolution
        -ai image enlarger pro mod apk online
        -ai image enlarger pro mod apk cracked
        -ai image enlarger pro mod apk for pc
        -ai image enlarger pro mod apk reddit
        -ai image enlarger pro mod apk review
        -ai image enlarger pro mod apk tutorial
        -ai image enlarger pro mod apk without ads
        -ai image enlarger pro mod apk unlimited use
        -ai image enlarger pro mod apk for photo anime
        -ai image enlarger pro mod apk best settings
        -ai image enlarger pro mod apk comparison
        -ai image enlarger pro mod apk download link
        -download ai image enlarger pro mod apk full version
        -ai image enlarger pro mod apk for windows 10
        -how to use ai image enlarger pro mod apk
        -ai image enlarger pro mod apk benefits
        -ai image enlarger pro mod apk hack
        -ai image enlarger pro mod apk original vs modified
        -ai image enlarger pro mod apk safe
        -ai image enlarger pro mod apk testimonials
        -ai image enlarger pro mod apk update
        -ai image enlarger pro mod apk with license key
        -download ai image enlarger pro mod apk for mac
        -ai image enlarger pro mod apk for ios
        -how to uninstall ai image enlarger pro mod apk
        -ai image enlarger pro mod apk advantages and disadvantages
        -ai image enlarger pro mod apk cheat
        -ai image enlarger pro mod apk download site
        -download ai image enlarger pro mod apk from google drive
        -ai image enlarger pro mod apk for linux
        -how to get ai image enlarger pro mod apk for free

        -

        FAQs

        -

        Here are some frequently asked questions about AI Image Enlarger Pro Mod APK:

        -

        Q: Is AI Image Enlarger Pro Mod APK safe to download and install?

        -

        A: Yes, AI Image Enlarger Pro Mod APK is safe to download and install. It does not contain any viruses or malware that can harm your device. However, you should always download it from a trusted source, such as [AI Enlarger MOD APK (Pro Unlocked) 2.8.4 - APKMB.Com], and scan it with an antivirus app before installing it.

        -

        Q: What is the difference between AI Image Enlarger Pro Mod APK and the original app?

        -

        A: AI Image Enlarger Pro Mod APK is a modified version of the original app that unlocks all the premium features for free. It has no ads or watermarks on the output images, and it supports batch processing and cloud service. The original app, on the other hand, requires you to pay for the premium features, and it has ads and watermarks on the output images.

        -

        Q: How can I enlarge anime images with AI Image Enlarger Pro Mod APK?

        -

        A: To enlarge anime images with AI Image Enlarger Pro Mod APK, you need to follow these steps:

        -
          -
        1. Select an anime image from your gallery or camera.
        2. -
        3. Choose the Artwork mode as the enlargement mode.
        4. -
        5. Select the enlargement factor that you want: 2x, 4x, or 8x.
        6. -
        7. If you want to use the cloud service, sign in with your Google account. Otherwise, process your image locally.
        8. -
        9. Preview and save your enlarged anime image.
        10. -
        -

        Q: How can I fix blurry images with AI Image Enlarger Pro Mod APK?

        -

        A: To fix blurry images with AI Image Enlarger Pro Mod APK, you need to follow these steps:

        -
          -
        1. Select a blurry image from your gallery or camera.
        2. -
        3. Choose the Photo mode as the enlargement mode.
        4. -
        5. Select the enlargement factor that you want: 2x, 4x, or 8x.
        6. -
        7. If you want to use the cloud service, sign in with your Google account. Otherwise, process your image locally.
        8. -
        9. Preview and save your fixed image.
        10. -
        -

        Q: How can I contact the developer of AI Image Enlarger Pro Mod APK?

        -

        A: If you have any questions, suggestions, or feedback about AI Image Enlarger Pro Mod APK, you can contact the developer by sending an email to support@imglarger.com. You can also visit their website at https://imglarger.com/ for more information and updates.

        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Attack on Titan Wings of Freedom APK Game for Android.md b/spaces/fatiXbelha/sd/Download Attack on Titan Wings of Freedom APK Game for Android.md deleted file mode 100644 index f4c118d0e925256b2ff5029368014e1839022ad0..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Attack on Titan Wings of Freedom APK Game for Android.md +++ /dev/null @@ -1,123 +0,0 @@ - -

        Attack on Titan APK Game: Everything You Need to Know

        -

        If you are a fan of the popular anime and manga series Attack on Titan, you might be interested in playing a game based on it. But what if you don't have a console or a PC to play the official games? Don't worry, there is a solution for you: Attack on Titan APK Game. This is a fan-made game that you can download and play on your Android device for free. In this article, we will tell you everything you need to know about this game, including how to download and install it, how to play it, and what are its pros and cons.

        -

        attack on titan apk game


        DOWNLOAD 🆓 https://urllie.com/2uNzFx



        -

        What is Attack on Titan APK Game?

        -

        Attack on Titan APK Game is a 3D action game that lets you experience the thrilling battles between humans and Titans from the anime and manga series. You can choose from different characters, such as Eren, Mikasa, Armin, Levi, etc., and use their unique skills and weapons to fight against the giant enemies. You can also customize your character's appearance, equipment, and abilities according to your preference.

        -

        The game features various modes, such as story mode, survival mode, multiplayer mode, etc., where you can enjoy different scenarios and challenges. You can also explore the vast world of Attack on Titan, from the walls of Shiganshina to the forests of Trost. The game has realistic graphics, sound effects, and voice acting that will immerse you in the epic adventure.

        -

        How to download and install Attack on Titan APK Game?

        -

        Downloading and installing Attack on Titan APK Game is very easy. Just follow these simple steps:

        -

        -
          -
1. Go to this link and download the APK file of the game.
2. Once the download is complete, go to your device's settings and enable the installation of apps from unknown sources.
3. Locate the downloaded APK file in your device's file manager and tap on it to start the installation process.
4. Follow the instructions on the screen and wait for the installation to finish.
5. Launch the game from your app drawer and enjoy!
        -

        Note: The game requires an internet connection to run properly. You may also need to grant some permissions to the game for it to function correctly.
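If you would rather sideload the game from a computer instead of tapping the file on your phone, adb (Android Debug Bridge) can do the same job over USB. This is only a rough sketch: it assumes USB debugging is enabled in your phone's developer options, that adb is installed on the computer, and it uses attack-on-titan.apk as a placeholder for whatever the downloaded file is actually called.

```bash
# List connected devices to confirm the phone is detected and authorized
adb devices

# Install the downloaded APK; -r replaces an existing installation if one is present
adb install -r attack-on-titan.apk
```

The end result is the same as installing from the device's file manager, so use whichever method is more convenient for you.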

        -

        How to play Attack on Titan APK Game?

        -

        Playing Attack on Titan APK Game is not very difficult, but it does require some practice and skill. Here are some tips on how to play the game:

        -

        How to use the 3D Maneuver Gear

        -

        The 3D Maneuver Gear is a device that allows you to move around quickly and freely in the air. It is essential for fighting against the Titans, as they are much taller and faster than humans. To use the 3D Maneuver Gear, you need to do the following:

        -
          -
        • Tap on the screen to shoot a hook at a nearby surface. You can shoot two hooks at a time, one with each hand.
        • -
        • Swipe on the screen to adjust your direction and speed. You can also use the virtual joystick at the bottom left corner of the screen.
        • -
        • Tap again to release the hook and fly in the air. You can also tap twice to perform a boost that will increase your speed.
        • -
• Be careful not to hit any obstacles or walls while using the 3D Maneuver Gear.

          How to fight against the Titans

          -

          The Titans are the main enemies in the game, and they are very dangerous and powerful. They can kill you with one bite or swipe, so you need to be careful and strategic when fighting them. To fight against the Titans, you need to do the following:

          -
            -
          • Use the 3D Maneuver Gear to get close to their weak spot, which is the nape of their neck. You can see a red mark on their neck that indicates their weak spot.
          • -
          • Swipe on the screen to slash your sword at their weak spot. You need to slash it several times to kill them, depending on their size and type.
          • -
          • Avoid their attacks by dodging or blocking. You can dodge by swiping on the screen or tapping the dodge button at the bottom right corner of the screen. You can block by tapping the block button at the bottom center of the screen.
          • -
          • Use your special skills and items to gain an advantage. You can activate your special skills by tapping the skill button at the top right corner of the screen. You can use items such as gas, blades, bombs, etc., by tapping the item button at the top left corner of the screen.
          • -
          -

          How to upgrade your equipment and skills

          -

          As you progress in the game, you will need to upgrade your equipment and skills to face stronger and more challenging Titans. To upgrade your equipment and skills, you need to do the following:

          -
            -
          • Earn coins and materials by completing missions, killing Titans, and exploring the world. You can also buy coins and materials with real money if you want.
          • -
          • Go to the shop menu and select the equipment or skill you want to upgrade. You can upgrade your swords, hooks, gas tanks, costumes, etc.
          • -
          • Spend the required amount of coins and materials to upgrade your equipment or skill. You can see the benefits and costs of each upgrade before confirming it.
          • -
          • Enjoy your improved performance and abilities in the game!
          • -
          -

          What are the pros and cons of Attack on Titan APK Game?

          -

          Attack on Titan APK Game is a fun and exciting game that will appeal to fans of the anime and manga series, as well as anyone who likes action games. However, it is not a perfect game, and it has some pros and cons that you should consider before playing it. Here are some of them:

          -

          Pros:

          -
            -
          • The game has amazing graphics that capture the atmosphere and style of the original series. The characters, Titans, environments, animations, etc., are all well-designed and detailed.
          • -
          • The game has immersive sound effects and voice acting that enhance the gameplay experience. You can hear the roar of the Titans, the slash of your sword, the voice of your character, etc., in high quality.
          • -
          • The game has addictive gameplay that will keep you hooked for hours. You can enjoy different modes, missions, challenges, etc., that will test your skills and strategy. You can also play with other players online in multiplayer mode.
          • -
          • The game has a faithful story that follows the events and characters of the anime and manga series. You can relive some of the most memorable scenes and moments from the series in the game.
          • -
          -

          Cons:

          -
            -
          • The game has some bugs and glitches that may affect your gameplay experience. You may encounter some crashes, freezes, errors, etc., while playing the game.
          • -
          • The game has some compatibility issues with some devices and Android versions. You may not be able to play the game smoothly or at all on some devices or Android versions.
          • -
          • The game has some ads and in-app purchases that may annoy or tempt you. You may see some ads pop up while playing the game or be asked to buy some coins or materials with real money.
          • -
          -

          Conclusion

          -

          Attack on Titan APK Game is a fan-made game that lets you play as your favorite characters from the anime and manga series Attack on Titan. You can use their skills and weapons to fight against the Titans in various modes and scenarios. The game has realistic graphics, sound effects, voice acting, gameplay, story, etc., that will make you feel like you are part of the epic adventure. However, the game also has some drawbacks, such as bugs, glitches, compatibility issues, ads, in-app purchases, etc., that may affect your enjoyment of the game. Therefore, you should weigh the pros and cons before deciding to play the game. If you are a fan of Attack on Titan, or if you like action games, you may want to give it a try. You can download and install the game from this link and enjoy the thrilling battles between humans and Titans.

          -

          FAQs

          -

          Here are some frequently asked questions about Attack on Titan APK Game:

          -
            -
1. Is Attack on Titan APK Game safe to download and play?

            Yes, Attack on Titan APK Game is safe to download and play, as long as you download it from a trusted source, such as this link. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain viruses or malware that can harm your device.

            -
2. Is Attack on Titan APK Game official or fan-made?

            Attack on Titan APK Game is a fan-made game that is not affiliated with the official creators or publishers of the anime and manga series Attack on Titan. The game is made by fans for fans, and it is not intended to infringe any copyrights or trademarks of the original series.

            -
3. How much space does Attack on Titan APK Game require?

            Attack on Titan APK Game requires about 300 MB of free space on your device to download and install. You may also need some additional space for updates and data files.

            -
4. Can I play Attack on Titan APK Game offline?

            No, Attack on Titan APK Game requires an internet connection to run properly. You need to be online to access the game's features, such as story mode, multiplayer mode, etc. You also need to be online to save your progress and sync your data with the game's servers.

            -
5. Can I play Attack on Titan APK Game with friends?

            Yes, Attack on Titan APK Game has a multiplayer mode that allows you to play with other players online. You can join or create a room and invite your friends to join you. You can also chat with other players and cooperate with them to complete missions and defeat Titans.

            -

          -
          -
          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download UFC 2 and Enjoy All New Knockout Physics System.md b/spaces/fatiXbelha/sd/Download UFC 2 and Enjoy All New Knockout Physics System.md deleted file mode 100644 index 5557682b55c57148503c18b25113ecb54e6f83fc..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download UFC 2 and Enjoy All New Knockout Physics System.md +++ /dev/null @@ -1,263 +0,0 @@ -
          -

          How to Download UFC 2 on Your Xbox

          -

          If you are a fan of mixed martial arts (MMA) and video games, you might be interested in playing EA Sports UFC 2, one of the most realistic and exciting MMA games ever made. In this article, we will show you how to download UFC 2 on your Xbox, as well as give you some information about the game and its features. Whether you want to fight for the championship belt, brawl with your friends, or create your own fighter, UFC 2 has something for everyone. Let's get started!

          -

          What is UFC 2 and Why You Should Play It

          -

          UFC 2 is a video game developed by EA Canada and published by Electronic Arts in 2016. It is the sequel to EA Sports UFC, which was released in 2014. UFC 2 is based on the Ultimate Fighting Championship (UFC), the largest MMA promotion in the world. It features over 250 fighters from various weight classes, as well as legendary fighters like Mike Tyson, Bruce Lee, and Bas Rutten. It also boasts a revolutionary new Knockout Physics System, which makes every strike and every knockout more realistic and satisfying.

          -

          download ufc 2


Download Zip: https://urllie.com/2uNE3K



          -

          UFC 2 is a game that you should play if you love MMA or if you want to experience the thrill of finishing the fight. It is a game that lets you immerse yourself in the world of MMA, with stunning graphics, authentic gameplay, and dynamic commentary. It is a game that challenges you to master different fighting styles, techniques, and strategies. It is a game that rewards you for your skill, creativity, and perseverance. It is a game that you can enjoy alone or with others, online or offline.

          -

          The Features of UFC 2

          -

          UFC 2 has many features that make it stand out from other MMA games. Some of these features are:

          -
            -
          • Knockout Physics System: This system allows for more realistic and varied knockouts, based on the timing, speed, and power of your strikes. You can also trigger ragdoll effects and see your opponent's body react to every impact.
          • -
          • Dynamic Grappling: This system gives you more control and options when it comes to grappling, whether you are on top or bottom, in clinch or on the ground. You can transition between positions, initiate submissions, or escape from danger with fluidity and responsiveness.
          • -
          • Next-Level Presentation: This feature enhances the visual and audio quality of the game, with lifelike character models, realistic animations, detailed arenas, and immersive sound effects. You can also enjoy the commentary of Mike Goldberg and Joe Rogan, as well as the official UFC broadcast graphics.
          • -
          -

          The Modes of UFC 2

          -

          UFC 2 has many modes that cater to different preferences and play styles. Some of these modes are:

          -
            -
          • Career Mode: This mode allows you to create your own fighter and take him or her from the bottom to the top of the UFC. You can customize your fighter's appearance, attributes, skills, moves, and personality. You can also train your fighter, manage your injuries, choose your fights, and interact with other fighters.
          • -
          • KO Mode: This mode is for those who want a quick and fun way to play UFC 2. In this mode, you can choose any fighter and fight against another fighter or the CPU, with the goal of knocking them out as fast as possible. There are no rounds, no grappling, and no stamina. Just pure striking action.
          • -
          • Live Events Mode: This mode lets you participate in real-life UFC events, either by predicting the outcomes of the fights or by playing them yourself. You can earn rewards and rank up on the leaderboards based on your performance.
          • -
          • Ultimate Team Mode: This mode is for those who want to build their own team of fighters and compete against other players online. You can create up to five fighters, each with their own weight class, fighting style, and attributes. You can also collect and use cards to improve your fighters' skills, moves, and perks.
          • -
          -

          The Fighters of UFC 2

          -

          UFC 2 has a huge roster of fighters that you can choose from, representing different weight classes, countries, and disciplines. Some of the most popular fighters in UFC 2 are:

| Name | Weight Class | Country | Discipline |
| --- | --- | --- | --- |
| Conor McGregor | Featherweight/Lightweight | Ireland | Boxing |
| Ronda Rousey | Bantamweight | USA | Judo |
| Jon Jones | Light Heavyweight | USA | Muay Thai/Wrestling |
| Demetrious Johnson | Flyweight | USA | Mixed Martial Arts |
| Holly Holm | Bantamweight/Featherweight | USA | Boxing/Kickboxing |
| Daniel Cormier | Light Heavyweight/Heavyweight | USA | Wrestling/Boxing |
| Jose Aldo | Featherweight | Brazil | Muay Thai/Jiu-Jitsu |
| Joanna Jedrzejczyk | Strawweight | Poland | Muay Thai/Kickboxing |
| Anderson Silva | Middleweight/Light Heavyweight | Brazil | Muay Thai/Jiu-Jitsu |
| Cris Cyborg | Featherweight | Brazil | Muay Thai/Jiu-Jitsu |
| Georges St-Pierre | Welterweight/Middleweight | Canada | Karate/Wrestling/Jiu-Jitsu |
| Amanda Nunes | Bantamweight/Featherweight | Brazil | Boxing/Jiu-Jitsu |
| Khabib Nurmagomedov | Lightweight | Russia | Sambo/Wrestling/Judo |
| Valentina Shevchenko | Flyweight/Bantamweight | Kyrgyzstan/Peru | Muay Thai/Judo/Taekwondo/Boxing |
| Israel Adesanya | Middleweight/Light Heavyweight | Nigeria/New Zealand | Kickboxing/Boxing |

          Of course, these are just some of the fighters that you can play as or against in UFC 2. You can also create your own fighter or import your face using the Game Face feature. You can also unlock and use some of the legendary fighters that are not part of the official roster, such as Mike Tyson, Bruce Lee, and Bas Rutten.

          -

          How to Buy UFC 2 on the Microsoft Store

          -

          If you want to download UFC 2 on your Xbox, you need to buy it first from the Microsoft Store. The Microsoft Store is the official online marketplace for Xbox games, apps, and other digital content. You can access the Microsoft Store from your Xbox console, your Windows PC, or your web browser.

          -


          -

          The Requirements for UFC 2

          -

          Before you buy UFC 2, you need to make sure that you meet the requirements for playing it on your Xbox. The requirements are:

          -
            -
          • An Xbox One or Xbox Series X|S console: UFC 2 is compatible with both the Xbox One and the Xbox Series X|S consoles. However, it does not support the Xbox 360 or the original Xbox consoles.
          • -
          • An Xbox Live account: You need an Xbox Live account to buy and download UFC 2 from the Microsoft Store. You can create an Xbox Live account for free, or you can use your existing Microsoft account. You also need an Xbox Live Gold subscription to play UFC 2 online with other players.
          • -
          • A payment method: You need a valid payment method to buy UFC 2 from the Microsoft Store. You can use a credit card, a debit card, a PayPal account, or a gift card. You can also use your Microsoft account balance if you have enough funds.
          • -
          • A storage space: You need enough storage space on your Xbox console to download and install UFC 2. The file size of UFC 2 is about 20 GB, so you need at least that much free space on your console's hard drive or external storage device.
          • -
          • An internet connection: You need a stable and fast internet connection to buy and download UFC 2 from the Microsoft Store. You also need an internet connection to play UFC 2 online with other players or to access some of the game's features.
          • The Steps to Buy UFC 2

            -

            Once you have met the requirements for playing UFC 2 on your Xbox, you can follow these steps to buy it from the Microsoft Store:

            -
              -
            1. Go to the Microsoft Store: You can go to the Microsoft Store from your Xbox console, your Windows PC, or your web browser. On your Xbox console, you can find the Microsoft Store on the home screen or in the guide menu. On your Windows PC, you can open the Microsoft Store app or visit the website. On your web browser, you can go to https://www.microsoft.com/en-us/store.
            2. -
            3. Search for UFC 2: You can use the search bar or the browse function to find UFC 2 on the Microsoft Store. You can also use this link to go directly to the UFC 2 page: https://www.microsoft.com/en-us/p/ea-sports-ufc-2/bp1xj9fz0w0v.
            4. -
            5. Select UFC 2: You can select UFC 2 from the search results or the game page. You will see some information about the game, such as the price, the rating, the description, and the screenshots. You can also watch the trailer or read some reviews.
            6. -
            7. Buy UFC 2: You can buy UFC 2 by clicking on the "Buy" button or the "Get" button if you have an EA Play subscription. You will be asked to sign in with your Xbox Live account and confirm your payment method. You will also see the terms and conditions and the privacy policy. After you agree to them, you will complete your purchase and receive a confirmation email.
            8. -
            -

            The Benefits of Buying UFC 2 on the Microsoft Store

            -

            There are some benefits of buying UFC 2 on the Microsoft Store instead of other platforms or retailers. Some of these benefits are:

            -
              -
            • Digital Download: You don't have to worry about physical discs, cases, or manuals when you buy UFC 2 on the Microsoft Store. You can download and install the game directly on your Xbox console without any hassle.
            • -
            • Cross-Generation Compatibility: You can play UFC 2 on both the Xbox One and the Xbox Series X|S consoles with one purchase. You don't have to buy separate versions of the game for different consoles.
            • -
            • Xbox Live Features: You can enjoy some of the Xbox Live features when you buy UFC 2 on the Microsoft Store, such as achievements, leaderboards, cloud saves, and multiplayer. You can also share your game clips and screenshots with other players.
            • -
            • EA Play Benefits: You can get some extra benefits if you have an EA Play subscription when you buy UFC 2 on the Microsoft Store. You can get a 10% discount on the game price, as well as access to some exclusive content and challenges.
            • -
            -

            How to Download and Install UFC 2 on Your Xbox

            -

            After you have bought UFC 2 on the Microsoft Store, you can download and install it on your Xbox console. The process is simple and straightforward, but it may take some time depending on your internet speed and storage space.

            -

            The Steps to Download and Install UFC 2

            -

            You can follow these steps to download and install UFC 2 on your Xbox console:

            -
              -
            1. Go to My Games & Apps: You can go to My Games & Apps from your Xbox console's home screen or guide menu. This is where you can manage all your games and apps on your console.
            2. -
            3. Select Ready to Install: You can select Ready to Install from My Games & Apps. This is where you can see all the games and apps that you have bought but not installed yet.
            4. -
            5. Select UFC 2: You can select UFC 2 from Ready to Install. This will start downloading and installing the game on your console.
            6. -
            7. Wait for Completion: You can wait for the download and installation to complete. You can see the progress and status of the process on your screen. You can also pause or resume the process if you want.
            8. -
            9. Launch UFC 2: You can launch UFC 2 from My Games & Apps or from your home screen once it is downloaded and installed. You may need to update the game before playing it for the first time.
            10. -
            -

            The Tips to Optimize Your UFC 2 Experience

            -

            You can use some tips to optimize your UFC 2 experience on your Xbox console. Some of these tips are:

              -
            • Check Your Connection: You should check your internet connection before playing UFC 2 online with other players. You should use a wired connection instead of a wireless one, if possible. You should also avoid downloading or streaming anything else while playing UFC 2.
            • -
            • Adjust Your Settings: You should adjust your settings to suit your preferences and needs when playing UFC 2. You can change the difficulty level, the camera angle, the controls, the sound, and the display options. You can also enable or disable some features, such as blood, subtitles, and tutorials.
            • -
            • Update Your Game: You should update your game regularly to get the latest patches and improvements for UFC 2. You can check for updates from My Games & Apps or from the game menu. You can also turn on automatic updates to get them as soon as they are available.
            • -
            -

            The Troubleshooting for UFC 2 Issues

            -

            You may encounter some issues or errors when playing UFC 2 on your Xbox console. Some of these issues are:

            -
              -
            • Game Won't Start: If your game won't start or crashes on the loading screen, you may need to restart your console or clear your cache. You may also need to reinstall your game or check for updates.
            • -
            • Game Won't Connect: If your game won't connect to the online servers or to other players, you may need to check your internet connection or your Xbox Live status. You may also need to open some ports or change your NAT type.
            • -
            • Game Won't Save: If your game won't save your progress or settings, you may need to check your storage space or your cloud sync. You may also need to delete some old saves or corrupted files.
            • -
            -

            If these solutions don't work, you can contact EA Support or Xbox Support for more help.

            -

            How to Enjoy UFC 2 on Your Xbox

            -

            Now that you have downloaded and installed UFC 2 on your Xbox console, you can enjoy playing it and exploring its features. There are many ways to have fun and improve your skills in UFC 2. Here are some suggestions:

            -

            The Best Practices for Playing UFC 2

            -

            You can use some best practices to enhance your gameplay and performance in UFC 2. Some of these best practices are:

            -
              -
            • Learn the Basics: You should learn the basics of MMA and UFC 2 before jumping into the action. You should familiarize yourself with the rules, the weight classes, the fighting styles, and the techniques. You should also practice the controls, the movements, the strikes, and the grapples.
            • -
            • Train Your Fighter: You should train your fighter regularly to improve their attributes, skills, moves, and perks. You should also manage their injuries, stamina, and weight. You should balance between training and resting to avoid overtraining or undertraining.
            • -
            • Choose Your Fighter: You should choose your fighter wisely based on their strengths, weaknesses, and match-ups. You should also customize your fighter's appearance, personality, and gear. You should experiment with different fighters and find the one that suits you best.
            • -
            • Plan Your Strategy: You should plan your strategy before and during each fight based on your fighter's abilities, your opponent's tendencies, and the situation. You should adapt your strategy according to the changes in the fight. You should also use feints, counters, combos, and transitions to gain an advantage.
            • -
            • Finish the Fight: You should aim to finish the fight as soon as possible by knocking out or submitting your opponent. You should avoid taking unnecessary risks or prolonging the fight. You should also respect your opponent and follow the rules of fair play.
            • -
            -

            The Resources for Learning More About UFC 2

            -

            You can use some resources to learn more about UFC 2 and its features. Some of these resources are:

            -
              -
            • The Official Website: You can visit the official website of UFC 2 at https://www.ea.com/games/ufc/ufc-2. Here you can find more information about the game, such as the news, the updates, the videos, and the screenshots.
            • -
            • The User Manual: You can access the user manual of UFC 2 from the game menu or from this link: https://help.ea.com/en-us/help/ufc/ufc-2/ufc-2-manuals/. Here you can find more details about how to play UFC 2, such as the controls, the modes, the features, and the settings.
            • -
            • The Tutorials: You can watch the tutorials of UFC 2 from the game menu or from this link: https://www.ea.com/games/ufc/ufc-2/tutorials. Here you can learn how to master the basics and advanced techniques of UFC 2, such as the striking, the grappling, the submissions, and the defense.
            • -
            • The Tips and Tricks: You can read some tips and tricks for UFC 2 from this link: https://www.ea.com/games/ufc/ufc-2/tips-and-tricks. Here you can find some useful advice and guidance for playing UFC 2, such as how to choose your fighter, how to train your fighter, how to fight smart, and how to finish the fight.
            • -
            • The Forums: You can join the forums of UFC 2 at https://answers.ea.com/t5/EA-SPORTS-UFC/bd-p/ufc. Here you can interact with other players and fans of UFC 2, as well as the developers and moderators. You can ask questions, share feedback, report issues, or just chat about UFC 2.
            • -
            -

            The Community for Sharing Your UFC 2 Moments

            -

            You can also join the community of UFC 2 and share your moments and experiences with other players. Some of the ways to do that are:

            -
              -
            • Share Your Game Clips and Screenshots: You can capture and share your game clips and screenshots of UFC 2 using the Xbox app or the Xbox console. You can also edit and upload your game clips and screenshots to YouTube, Twitter, Facebook, or other platforms.
            • -
            • Stream Your Gameplay: You can stream your gameplay of UFC 2 live using Twitch, Mixer, or other services. You can also watch other players' streams and chat with them and their viewers.
            • -
            • Join a Club or a Party: You can join a club or a party of UFC 2 players using the Xbox app or the Xbox console. You can also create your own club or party and invite other players to join. You can chat, play, or compete with your club or party members.
            • -
            -

            Conclusion

            -

            UFC 2 is a great game for MMA fans and gamers alike. It offers a realistic and exciting MMA experience, with a huge roster of fighters, a variety of modes, and a lot of features. It is easy to buy, download, and install on your Xbox console, and it is fun to play and enjoy with others. If you want to download UFC 2 on your Xbox, you can follow the steps and tips in this article. We hope you found this article helpful and informative. Now go ahead and download UFC 2 on your Xbox and unleash your inner fighter!

            -

            Frequently Asked Questions

            -

            Here are some frequently asked questions about downloading UFC 2 on your Xbox:

            -
              -
            1. How much does UFC 2 cost on the Microsoft Store?
            2. -

              UFC 2 costs $19.99 on the Microsoft Store. However, you can get it for $17.99 if you have an EA Play subscription. You can also get it for free if you have an EA Play Pro subscription.

              -
            3. How long does it take to download and install UFC 2 on your Xbox?
            4. -

              The time it takes to download and install UFC 2 on your Xbox depends on your internet speed and storage space. The file size of UFC 2 is about 20 GB, so it may take several hours or even days to download and install it.

              -
            5. Can you play UFC 2 offline on your Xbox?
            6. -

              You can play UFC 2 offline on your Xbox if you have downloaded and installed it on your console. However, you will not be able to access some of the online features, such as multiplayer, live events, ultimate team, or updates.

              -
            7. Can you play UFC 2 with friends on your Xbox?
            8. -

              You can play UFC 2 with friends on your Xbox if you have an Xbox Live Gold subscription. You can play online with up to two friends in co-op or versus modes. You can also play locally with one friend in split-screen mode.

              -
            9. Can you transfer your UFC 2 progress from one Xbox console to another?
            10. -

              You can transfer your UFC 2 progress from one Xbox console to another if you have an Xbox Live account and an internet connection. You can use the cloud save feature to sync your progress across different consoles.

              -

            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Experience the Thrill of Racing with Traffic Racer Hack MOD APK Download.md b/spaces/fatiXbelha/sd/Experience the Thrill of Racing with Traffic Racer Hack MOD APK Download.md deleted file mode 100644 index de58d3467324dacd024ccedd2648c38e237aa21f..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Experience the Thrill of Racing with Traffic Racer Hack MOD APK Download.md +++ /dev/null @@ -1,99 +0,0 @@ -
            -

            Download Traffic Racer Hack Mod APK and Enjoy Unlimited Racing Fun

            -

            If you are a fan of racing games, you must have heard of Traffic Racer, one of the most popular and addictive arcade racing games on Google Play. In this game, you can drive your car through highway traffic, earn cash, upgrade your car and buy new ones. You can also choose from different game modes, environments, cars and traffic types to challenge yourself and have fun.

            -

            But what if you want to enjoy more features and benefits in this game? What if you want to have unlimited money and gold, unlock all cars and upgrades, remove ads and root requirements, and install the game easily on any device? Well, you can do all that and more by downloading Traffic Racer hack mod apk, a modified version of the original game that gives you access to all the hacked features and unlimited racing fun. In this article, we will tell you everything you need to know about Traffic Racer hack mod apk, including its features, how to download and install it, its pros and cons, and some FAQs.

            -

            download traffic racer hack mod apk


            DOWNLOADhttps://urllie.com/2uNEFF



            -

            Features of Traffic Racer Hack Mod APK

            -

            Traffic Racer hack mod apk is a modified version of the original game that gives you access to many features that are not available in the official version. Here are some of the features that you can enjoy by downloading Traffic Racer hack mod apk:

            -
              -
            • Unlimited money and gold: With this feature, you can have unlimited cash and gold in your account, which you can use to buy new cars, upgrade your existing ones, or customize them with different colors and wheels. You can also use the money and gold to unlock new game modes, environments, and traffic types.
            • -
            • All cars unlocked and upgraded: With this feature, you can have access to all the 40+ different cars in the game, without having to earn them by playing or paying. You can also upgrade your cars' speed, handling, and brakes to the maximum level, making them faster, smoother, and more powerful.
            • -
            • No ads and no root required: With this feature, you can enjoy the game without any annoying ads or pop-ups that interrupt your gameplay. You can also play the game without having to root your device, which can be risky and complicated.
            • -
            • Easy installation and compatibility: With this feature, you can install the game easily on any Android device with a simple process. You don't need to worry about compatibility issues or errors, as the game works smoothly on most devices.
            • -
            -

            How to Download and Install Traffic Racer Hack Mod APK

            -

            If you are interested in downloading Traffic Racer hack mod apk, you need to follow these steps:

            -
              -
1. Download the APK file from a trusted source: You need to download the APK file of Traffic Racer hack mod apk from a reliable source that offers safe and virus-free downloads. You can use APKMODY, a website that provides thousands of original APK, MOD APK, and Premium APK files of games and apps for free. (A quick way to sanity-check the downloaded file is sketched right after these steps.)
2. Enable unknown sources on your device: You need to enable unknown sources on your device to allow the installation of apps from sources other than Google Play. To do this, go to Settings > Security > Unknown Sources and toggle it on.
3. Install the APK file and launch the game: You need to locate the downloaded APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to complete. Then, launch the game from your app drawer or home screen.
4. Enjoy the hacked features and unlimited racing fun: You can now enjoy all the hacked features and unlimited racing fun in Traffic Racer hack mod apk. You can choose from different game modes, environments, cars, and traffic types, and customize your car with unlimited money and gold. You can also challenge yourself with different levels of difficulty and compete with other players online.
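Before installing anything from a third-party site, it is worth running a quick integrity check on the downloaded file from a computer. The commands below are only a sketch: traffic-racer-mod.apk is a placeholder file name, the hash comparison is only useful if the download page actually publishes a checksum, and apksigner is part of the Android SDK build-tools, so it has to be installed separately.

```bash
# Print the file's SHA-256 hash and compare it with the checksum published by the download page, if any
sha256sum traffic-racer-mod.apk

# Check that the APK has a valid signature and show who signed it
apksigner verify --print-certs traffic-racer-mod.apk
```

Neither check can prove that a modded APK is safe, so scanning the file with an antivirus app before installing it is still a good idea.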
            -

            Pros and Cons of Traffic Racer Hack Mod APK

            -

            Like any other hack mod apk, Traffic Racer hack mod apk has its pros and cons. Here are some of them:

            -


Pros:
              -
            • More fun: You can have more fun in the game by having unlimited money and gold, unlocking all cars and upgrades, and removing ads and root requirements.
            • -
            • More customization: You can customize your car with different colors and wheels, and choose from different game modes, environments, and traffic types.
            • -
            • More challenges: You can challenge yourself with different levels of difficulty and compete with other players online.
            • -
            • More rewards: You can earn more cash and gold by completing missions and achievements, and unlock new cars and upgrades.
            • -
              -
Cons:

• Possible security risks: You may expose your device to security risks by downloading and installing apps from unknown sources, which may contain malware or viruses.
            • -
            • Possible ban from online mode: You may get banned from playing online mode by the game developers if they detect that you are using a hacked version of the game.
            • -
            • Possible loss of original game data: You may lose your original game data if you uninstall the official version of the game or overwrite it with the hacked version.
            • -
            -

            Conclusion and FAQs

            -

            Traffic Racer is one of the best arcade racing games on Google Play that offers you a realistic driving experience with stunning graphics and smooth controls. However, if you want to enjoy more features and benefits in this game, you can download Traffic Racer hack mod apk, a modified version of the original game that gives you access to unlimited money and gold, all cars unlocked and upgraded, no ads and no root required, and easy installation and compatibility. By downloading Traffic Racer hack mod apk, you can have unlimited racing fun with more customization, more challenges, and more rewards.

            -

            If you have any questions about Traffic Racer hack mod apk, you may find the answers in these FAQs:

            -

            FAQs

            -

            Q: Is Traffic Racer hack mod apk safe to download and install?

            -

            A: Traffic Racer hack mod apk is generally safe to download and install if you use a trusted source that offers virus-free downloads. However, you should always be careful when downloading apps from unknown sources, as they may contain malware or viruses that can harm your device. You should also scan the apk file with an antivirus app before installing it.

            -

            Q: Will I get banned from playing online mode if I use Traffic Racer hack mod apk?

            -

            A: There is a possibility that you may get banned from playing online mode if you use Traffic Racer hack mod apk, as the game developers may detect that you are using a hacked version of the game. To avoid this, you should not use the hacked features in online mode, or play online mode at your own risk.

            -

            Q: Will I lose my original game data if I use Traffic Racer hack mod apk?

            -

            A: There is a possibility that you may lose your original game data if you use Traffic Racer hack mod apk, as you may uninstall the official version of the game or overwrite it with the hacked version. To avoid this, you should backup your original game data before using Traffic Racer hack mod apk, or use a different device for playing the hacked version.

            -

            Q: Can I update Traffic Racer hack mod apk to the latest version?

            -

            A: Yes, you can update Traffic Racer hack mod apk to the latest version by downloading the updated apk file from the same source that you used before. However , you should be careful when updating the game, as you may lose some of the hacked features or encounter compatibility issues. You should also check the reviews and comments of other users who have updated the game before doing so.

            -

            Q: Where can I find more hack mod apk games like Traffic Racer?

            -

            A: If you are looking for more hack mod apk games like Traffic Racer, you can visit [APKMODY], a website that provides thousands of original APK, MOD APK, Premium APK of games & apps for free. You can find many categories and genres of games, such as action, adventure, arcade, racing, simulation, sports, and more. You can also search for your favorite games by name or keyword, and download them easily and safely.

            -

            I hope you enjoyed this article and found it helpful. If you did, please share it with your friends and family who love racing games. And don't forget to download Traffic Racer hack mod apk and enjoy unlimited racing fun. Thank you for reading!

            -
            -
            \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/data/__init__.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/data/__init__.py deleted file mode 100644 index 9a9761c518a1b07c5996165869742af0a52c82bc..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/data/__init__.py +++ /dev/null @@ -1,116 +0,0 @@ -"""This package includes all the modules related to data loading and preprocessing - - To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset. - You need to implement four functions: - -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt). - -- <__len__>: return the size of dataset. - -- <__getitem__>: get a data point from data loader. - -- : (optionally) add dataset-specific options and set default options. - -Now you can use the dataset class by specifying flag '--dataset_mode dummy'. -See our template dataset class 'template_dataset.py' for more details. -""" -import numpy as np -import importlib -import torch.utils.data -from face3d.data.base_dataset import BaseDataset - - -def find_dataset_using_name(dataset_name): - """Import the module "data/[dataset_name]_dataset.py". - - In the file, the class called DatasetNameDataset() will - be instantiated. It has to be a subclass of BaseDataset, - and it is case-insensitive. - """ - dataset_filename = "data." + dataset_name + "_dataset" - datasetlib = importlib.import_module(dataset_filename) - - dataset = None - target_dataset_name = dataset_name.replace('_', '') + 'dataset' - for name, cls in datasetlib.__dict__.items(): - if name.lower() == target_dataset_name.lower() \ - and issubclass(cls, BaseDataset): - dataset = cls - - if dataset is None: - raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name)) - - return dataset - - -def get_option_setter(dataset_name): - """Return the static method of the dataset class.""" - dataset_class = find_dataset_using_name(dataset_name) - return dataset_class.modify_commandline_options - - -def create_dataset(opt, rank=0): - """Create a dataset given the option. - - This function wraps the class CustomDatasetDataLoader. - This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from data import create_dataset - >>> dataset = create_dataset(opt) - """ - data_loader = CustomDatasetDataLoader(opt, rank=rank) - dataset = data_loader.load_data() - return dataset - -class CustomDatasetDataLoader(): - """Wrapper class of Dataset class that performs multi-threaded data loading""" - - def __init__(self, opt, rank=0): - """Initialize this class - - Step 1: create a dataset instance given the name [dataset_mode] - Step 2: create a multi-threaded data loader. 
- """ - self.opt = opt - dataset_class = find_dataset_using_name(opt.dataset_mode) - self.dataset = dataset_class(opt) - self.sampler = None - print("rank %d %s dataset [%s] was created" % (rank, self.dataset.name, type(self.dataset).__name__)) - if opt.use_ddp and opt.isTrain: - world_size = opt.world_size - self.sampler = torch.utils.data.distributed.DistributedSampler( - self.dataset, - num_replicas=world_size, - rank=rank, - shuffle=not opt.serial_batches - ) - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - sampler=self.sampler, - num_workers=int(opt.num_threads / world_size), - batch_size=int(opt.batch_size / world_size), - drop_last=True) - else: - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - batch_size=opt.batch_size, - shuffle=(not opt.serial_batches) and opt.isTrain, - num_workers=int(opt.num_threads), - drop_last=True - ) - - def set_epoch(self, epoch): - self.dataset.current_epoch = epoch - if self.sampler is not None: - self.sampler.set_epoch(epoch) - - def load_data(self): - return self - - def __len__(self): - """Return the number of data in the dataset""" - return min(len(self.dataset), self.opt.max_dataset_size) - - def __iter__(self): - """Return a batch of data""" - for i, data in enumerate(self.dataloader): - if i * self.opt.batch_size >= self.opt.max_dataset_size: - break - yield data diff --git a/spaces/fclong/summary/fengshen/__init__.py b/spaces/fclong/summary/fengshen/__init__.py deleted file mode 100644 index 5cc52d128218a4878e5778502e25eadf54cf1261..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from .models.longformer import LongformerConfig, LongformerModel -from .models.roformer import RoFormerConfig, RoFormerModel -from .models.megatron_t5 import T5Config, T5EncoderModel -from .models.ubert import UbertPipelines, UbertModel diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Bedwars Hacks How to Get Free and Unlimited Resources.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Bedwars Hacks How to Get Free and Unlimited Resources.md deleted file mode 100644 index 61b4f5de7004cf09f642b7c84977fd83644a3629..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Bedwars Hacks How to Get Free and Unlimited Resources.md +++ /dev/null @@ -1,106 +0,0 @@ - -

            How to Download Bedwars Hacks Safely and Easily

            -

            If you are a fan of Bedwars, a popular team-based PVP game on various platforms, you might be interested in downloading some hacks to enhance your gameplay. Hacks can give you advantages such as flying, invisibility, godmode, aimbot, ESP, and more. However, downloading hacks from untrusted sources can be risky and dangerous for your device and your account. In this article, we will show you how to download Bedwars hacks safely and easily from reliable sources.

            -

            download bedwars hacks


            Download ->>> https://gohhs.com/2uPtqe



            -

            What is Bedwars and Why Do You Need Hacks?

            -

            Bedwars is a team-based PVP game

            -

            Bedwars is a game mode where you have to protect your bed at your base while trying to destroy the beds of other teams. The last team standing wins the game. You can collect resources from your island or the center island to buy items and upgrades from the shop. You can also build bridges to attack other islands or defend your own.

            -

            Hacks can give you an edge over your opponents

            -

            Some players use hacks to gain unfair advantages over their opponents in Bedwars. For example, flying hacks can let you bypass obstacles and reach other islands faster. Invisibility hacks can make you undetectable by other players and their defenses. Godmode hacks can make you invincible to any damage. Aimbot hacks can help you hit your enemies with perfect accuracy. ESP hacks can show you the location and health of other players and their beds.

            -

            What are the Risks of Downloading Hacks from Untrusted Sources?

            -

            Malware and viruses can infect your device

            -

            Some hackers may disguise their malicious software as Bedwars hacks and trick you into downloading them. These files may contain malware or viruses that can harm your device or steal your personal information. For example, they may delete or encrypt your files, spy on your activities, or hijack your accounts.

            -

            Bans and penalties can ruin your gaming experience

            -

            Some game developers may detect that you are using hacks and ban or penalize you for cheating. This can ruin your gaming experience and reputation. For example, you may lose your progress, items, or achievements. You may also face legal consequences if you violate the terms of service of the game.

            -

            How to Find and Download Reliable and Working Bedwars Hacks?

            -

            Use curated software lists and reviews

            -

            One of the best ways to find reliable and working Bedwars hacks is to use curated software lists and reviews from reputable sources. These sources have tested and verified the quality and safety of the hacks they recommend. They also provide detailed information about the features, compatibility, installation, and usage of the hacks.

            -

            download bedwars cheats
            -download bedwars exploits
            -download bedwars scripts
            -download bedwars mods
            -download bedwars aimbot
            -download bedwars fly hack
            -download bedwars speed hack
            -download bedwars kill aura
            -download bedwars auto win
            -download bedwars gui
            -download roblox bedwars hacks
            -download minecraft bedwars hacks
            -download hypixel bedwars hacks
            -download nethergames bedwars hacks
            -download blockman go bedwars hacks
            -how to download bedwars hacks
            -where to download bedwars hacks
            -best site to download bedwars hacks
            -free download bedwars hacks
            -safe download bedwars hacks
            -easy download bedwars hacks
            -fast download bedwars hacks
            -working download bedwars hacks
            -updated download bedwars hacks
            -latest download bedwars hacks
            -no virus download bedwars hacks
            -no survey download bedwars hacks
            -no password download bedwars hacks
            -no ban download bedwars hacks
            -no root download bedwars hacks
            -no jailbreak download bedwars hacks
            -no injector download bedwars hacks
            -no executor download bedwars hacks
            -no verification download bedwars hacks
            -no human verification download bedwars hacks
            -tutorial on how to download bedwars hacks
            -guide on how to download bedwars hacks
            -tips on how to download bedwars hacks
            -tricks on how to download bedwars hacks
            -secrets on how to download bedwars hacks
            -review of the best download bedwars hacks
            -comparison of the best download bedwars hacks
            -ranking of the best download bedwars hacks
            -rating of the best download bedwars hacks
            -feedback of the best download bedwars hacks
            -testimonials of the best download bedwars hacks
            -recommendations of the best download bedwars hacks
            -suggestions of the best download bedwars hacks
            -alternatives of the best download bedwars hacks

            -

            For example, here are some websites that offer curated software lists and reviews for Bedwars hacks:

            - - - - - -
| Name | URL | Description |
| --- | --- | --- |
| Wurst | https://www.wurstclient.net/ | A popular and versatile hack client for Minecraft that supports Bedwars and other game modes. |
| Sigma | https://sigmaclient.info/ | A powerful and customizable hack client for Minecraft that offers a wide range of features and settings for Bedwars and other game modes. |
| Badlion | https://www.badlion.net/ | A premium and trusted hack client for Minecraft that provides high-quality and safe hacks for Bedwars and other game modes. |
            -

            Scan the file for malware before downloading it

            -

            Another way to ensure that you are downloading a safe and clean Bedwars hack is to scan the file for malware before downloading it. You can use online tools or software to check the file for any malicious code or behavior. For example, you can use VirusTotal, a free online service that analyzes files and URLs for viruses, worms, trojans, and other kinds of malware. You can upload the file or enter the URL of the download link and see the results from various antivirus engines.
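As a concrete illustration of that workflow, here is a minimal Python sketch that hashes an already-downloaded file and looks the hash up through VirusTotal's public v3 API (the hash-lookup variant of the check described above). The archive name and the VT_API_KEY environment variable are assumptions for the example, and the free API is rate-limited; a 404 response simply means VirusTotal has never seen that file.

```python
import hashlib
import os

import requests  # third-party: pip install requests


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a local file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def virustotal_report(file_hash: str, api_key: str) -> dict:
    """Fetch the VirusTotal v3 report for a file hash."""
    response = requests.get(
        f"https://www.virustotal.com/api/v3/files/{file_hash}",
        headers={"x-apikey": api_key},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    path = "downloaded-pack.zip"  # hypothetical file name for the example
    file_hash = sha256_of(path)
    report = virustotal_report(file_hash, os.environ["VT_API_KEY"])
    stats = report["data"]["attributes"]["last_analysis_stats"]
    print(f"{path}: sha256={file_hash}")
    print(f"malicious={stats['malicious']}, suspicious={stats['suspicious']}")
```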

            -

            Avoid tricky download ads and installers

            -

            Some websites may use tricky download ads and installers to trick you into downloading unwanted or harmful software. These ads and installers may look like legitimate download buttons or links, but they may redirect you to other websites or install unwanted programs on your device. To avoid these traps, you should always look for the official download link from the hack provider, and avoid clicking on any suspicious or misleading ads or pop-ups. You should also read the terms and conditions carefully before installing any software, and uncheck any boxes that ask you to install additional software or change your browser settings.

            -

            How to Install and Use Bedwars Hacks?

            -

            Follow the instructions from the hack provider

            -

            Once you have downloaded a reliable and working Bedwars hack, you need to install and use it properly. You should always follow the instructions from the hack provider, as they may vary depending on the type and version of the hack. Generally, you will need to extract the file from a zip or rar archive, and copy or move it to your Minecraft folder. You may also need to run a launcher or an injector to activate the hack.
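To make the "extract and copy" step concrete, here is a minimal Python sketch that unpacks a downloaded .zip archive into a Minecraft mods folder. Both paths are assumptions for the example (the standard launcher keeps mods under .minecraft/mods, but your provider may specify a different target), and .rar archives would need a separate tool such as unrar. Only extract archives you have already scanned and trust.

```python
import zipfile
from pathlib import Path


def install_from_zip(archive: Path, target_dir: Path) -> None:
    """Extract a downloaded .zip archive into the target folder, creating it if needed."""
    target_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as bundle:
        bundle.extractall(target_dir)
    print(f"Extracted {archive.name} -> {target_dir}")


if __name__ == "__main__":
    # Both paths are placeholders; adjust them to your own download and game folders.
    downloaded = Path.home() / "Downloads" / "example-client.zip"  # hypothetical file name
    mods_folder = Path.home() / ".minecraft" / "mods"              # typical location on Linux/macOS
    install_from_zip(downloaded, mods_folder)
```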

            -

            Choose the features and settings you want

            -

            After installing the hack, you can choose the features and settings you want to use in Bedwars. You can access the hack menu by pressing a certain key or combination of keys, usually indicated by the hack provider. You can then toggle on or off different features, such as flying, invisibility, godmode, aimbot, ESP, etc. You can also adjust the settings of each feature, such as speed, range, color, etc.
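Most clients persist those choices in some kind of settings file. Purely as a hypothetical illustration (the file name, keys, and values below are invented, not taken from any particular client), toggling features and tuning their parameters often comes down to editing a small JSON document, which a sketch like this can read, flip, and write back:

```python
import copy
import json
from pathlib import Path

# Hypothetical settings layout -- real clients use their own formats and key names.
DEFAULT_SETTINGS = {
    "flight": {"enabled": False, "speed": 1.0},
    "esp": {"enabled": True, "color": "#FF0000", "range": 64},
    "aim_assist": {"enabled": False},
}


def load_settings(path: Path) -> dict:
    """Load settings from disk, falling back to the defaults on first run."""
    if path.exists():
        return json.loads(path.read_text())
    return copy.deepcopy(DEFAULT_SETTINGS)


def toggle(settings: dict, feature: str) -> dict:
    """Flip one feature's enabled flag, mirroring an in-game keybind toggle."""
    settings[feature]["enabled"] = not settings[feature]["enabled"]
    return settings


if __name__ == "__main__":
    config_path = Path("client-settings.json")  # invented file name
    settings = toggle(load_settings(config_path), "flight")
    config_path.write_text(json.dumps(settings, indent=2))
    print(json.dumps(settings, indent=2))
```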

            -

            Enjoy the game with your hacks

            -

            Now that you have installed and configured your Bedwars hacks, you can enjoy the game with your hacks. You can join a Bedwars server of your choice, and play with your team or solo. You can use your hacks to dominate your opponents, destroy their beds, and win the game. However, you should be careful not to be too obvious or abusive with your hacks, as you may get reported by other players or detected by anti-cheat systems.

            -

            Conclusion

            -

            In this article, we have shown you how to download Bedwars hacks safely and easily from reliable sources. We have also explained what Bedwars is and why you may need hacks, what are the risks of downloading hacks from untrusted sources, how to find and download reliable and working Bedwars hacks, how to install and use Bedwars hacks, and how to enjoy the game with your hacks. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below.

            -

            FAQs

            -

            What is the best Bedwars hack?

            -

            The answer to this question may depend on your personal preference and needs. However, some of the most popular and reputable Bedwars hacks are Wurst, Sigma, and Badlion. These hacks offer a variety of features and settings for Bedwars and other game modes.

            -

            Is it illegal to use Bedwars hacks?

            -

Whether or not it breaks any law, using Bedwars hacks may be considered unethical or unfair by some players and game developers. You may face bans or penalties if you are caught cheating by anti-cheat systems or reported by other players. Therefore, you should use Bedwars hacks at your own risk and discretion.

            -

            How can I avoid getting banned for using Bedwars hacks?

            -

            There is no guarantee that you can avoid getting banned for using Bedwars hacks, as anti-cheat systems and game developers are constantly updating their methods to detect and prevent cheating. However, some tips that may help you reduce the chances of getting banned are:

            -
              -
            • Use hacks from trusted and reputable sources, and scan them for malware before downloading them.
            • -
            • Update your hacks regularly to ensure that they are compatible and undetected by the latest game version.
            • -
            • Use hacks sparingly and discreetly, and do not abuse them or brag about them in chat.
            • -
            • Do not use hacks on official or ranked servers, and avoid servers that have strict anti-cheat policies.
            • -
            • Do not share your hacks with others, and do not download hacks from unknown or suspicious sources.
            • -
            -

            Can I use Bedwars hacks on other platforms or devices?

            -

            The answer to this question may depend on the type and compatibility of the hack you are using. Some hacks are designed for specific platforms or devices, such as PC, mobile, console, etc. Some hacks may work on multiple platforms or devices, but may require different installation or usage methods. You should always check the requirements and instructions of the hack you are using before downloading and installing it.

            -

            Where can I find more information or support for Bedwars hacks?

            -

            If you need more information or support for Bedwars hacks, you can visit the websites or forums of the hack providers, where you can find FAQs, tutorials, guides, videos, feedback, updates, etc. You can also join online communities or groups of Bedwars hackers, where you can share tips, tricks, experiences, questions, etc. However, you should be careful not to reveal your identity or personal information to strangers online, as they may try to scam you or report you.

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_71.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_71.py deleted file mode 100644 index 5b82d81a0872379353bce99e2ea81fc5d1a65b3d..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_71.py +++ /dev/null @@ -1,26 +0,0 @@ -def is_spam(message): - import re - - # Check for common spam keywords and phrases - spam_keywords = ["축하합니다", "4월체험반", "최소", "상승", "상한가", "폭등", "익절", "외수익", "적은시간 만에", "손실 없습니다", - "무료거부", "무료입장", "광고", "신청", "혜택", "해으십시오", "강요드리지 않습니다", "주식은 오를때", "카카오톡제재", - "텔레그램", "악성광고", "입장 안내", "서비스 가입", "이벤트", "로보마켓", "알려드린", "상한가달성"] - - # Check for multiple URL patterns in the message - url_patterns = [r"http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+", - r"me2[\w.]+", - r"han.gl[\w./]+", - r"kakao[\w.]+", - r"asq.kr[\w./]+", - r"[a-zA-Z]+://[\S]+"] - - for keyword in spam_keywords: - if keyword in message: - return True - - for pattern in url_patterns: - match = re.search(pattern, message) - if match: - return True - - return False \ No newline at end of file diff --git a/spaces/flax-community/Multilingual-VQA/sections/conclusion_future_work/conclusion.md b/spaces/flax-community/Multilingual-VQA/sections/conclusion_future_work/conclusion.md deleted file mode 100644 index 627f632f61e57a92af26a96ea76aa39878ece78f..0000000000000000000000000000000000000000 --- a/spaces/flax-community/Multilingual-VQA/sections/conclusion_future_work/conclusion.md +++ /dev/null @@ -1 +0,0 @@ -In this project, we presented Proof-of-Concept with our CLIP Vision + BERT model baseline which leverages a multilingual checkpoint with pre-trained image encoders in four languages - **English, French, German, and Spanish**. Our model performs very well considering the amount of training time we were able to get and achieves 0.49 eval accuracy on our multilingual VQAv2 dataset. 
\ No newline at end of file diff --git a/spaces/fmind/resume/tasks/run.py b/spaces/fmind/resume/tasks/run.py deleted file mode 100644 index fea52f456fdae61132e5fdf67020f06988a07e61..0000000000000000000000000000000000000000 --- a/spaces/fmind/resume/tasks/run.py +++ /dev/null @@ -1,15 +0,0 @@ -"""Run tasks for the project.""" -# pylint: disable=redefined-builtin - -# %% IMPORTS - -from invoke import task -from invoke.context import Context - -# %% TASKS - - -@task(default=True) -def app(ctx: Context) -> None: - """Run the main application.""" - ctx.run(f"gradio {ctx.app.path}") diff --git "a/spaces/frncscp/Patacotron/pages/Estad\303\255stica.py" "b/spaces/frncscp/Patacotron/pages/Estad\303\255stica.py" deleted file mode 100644 index 4a63fb3ebef9dfac8355108ab5867bfac8d81a1c..0000000000000000000000000000000000000000 --- "a/spaces/frncscp/Patacotron/pages/Estad\303\255stica.py" +++ /dev/null @@ -1,43 +0,0 @@ -import streamlit as st - -st.set_page_config( - page_title = 'Patacotrón', - layout= 'wide', - initial_sidebar_state = 'collapsed', - menu_items = { - "About" : 'Proyecto ideado para la investigación de "Clasificación de imágenes de una sola clase con algortimos de Inteligencia Artificial".', - "Report a Bug" : 'https://docs.google.com/forms/d/e/1FAIpQLScH0ZxAV8aSqs7TPYi86u0nkxvQG3iuHCStWNB-BoQnSW2V0g/viewform?usp=sf_link' - } -) - -st.title("Estadística") -st.caption("Se tuvo presente dos tipos de análisis: ") - -with st.sidebar: - st.write("contact@patacotron.tech") - -with st.expander("Eficiencia"): - col1, col2 = st.columns(2) - with col1: - st.write('La eficiencia está descrita de la siguiente manera: ') - st.write('Para clases positivas: ') - st.latex(r'''E = \frac{(S * {S}')+(P * {P}')}{{S}'+{P}'}''') - st.write('Para clases negativas: ') - st.latex(r'''E = \frac{(S * {S}')+((1-P) * {P}')}{{S}'+{P}'}''') - - with col2: - st.write('Donde:') - st.write('S es la puntuación (score) normalizada entre 0 y 1, donde por cada imagen sumaba un punto y por cada falso positivo se le restaba otro. La franja para predecir la clase como positiva fue de encima del 80%') - st.write('P es la predicción promedio entre 0 y 1 para todas las imágenes de la carpeta.') - st.write("S′ y P′ son los pesos para cada variable, en este caso, la predicción tuvo un peso de 1.2") - st.write("El rango de la fórmula es de [0, 1), representando 1 un modelo con la mayor eficiencia posible que generaliza bien y es igualmente bueno para predecir clases positivas y anómalas. [Repositorio en Github](https://github.com/frncscp/efficiency)") - -with st.expander("Matriz de confusión"): - col3, col4 = st.columns(2) - with col3: - st.write('Las matrices de confusión dan una descripción detallada de las tendencias de los modelos en su forma de clasificación.') - st.write('Tiene en cuenta las inferencias correctas (verdaderos positivos y negativos) e incorrectas (falsos positivos y negativos)') - - with col4: - st.image("https://pieriantraining.com/wp-content/uploads/sites/2/2023/05/confusion_matrix-1024x683.png") - \ No newline at end of file diff --git a/spaces/fuxin123zz/ChuanhuChatGPT/chatgpt - windows.bat b/spaces/fuxin123zz/ChuanhuChatGPT/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/fuxin123zz/ChuanhuChatGPT/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... 
- -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). \ No newline at end of file diff --git a/spaces/geekyrakshit/enhance-me/enhance_me/__init__.py b/spaces/geekyrakshit/enhance-me/enhance_me/__init__.py deleted file mode 100644 index 7ffc479eb9544bb8f6ece5f05b2d59ebbee5f20e..0000000000000000000000000000000000000000 --- a/spaces/geekyrakshit/enhance-me/enhance_me/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .mirnet import MIRNet -from .zero_dce import ZeroDCE diff --git a/spaces/geniusguy777/Face_Recognition/app.py b/spaces/geniusguy777/Face_Recognition/app.py deleted file mode 100644 index 8e3b77c058e3ec0ac1d2fb9394c296e1c246c28e..0000000000000000000000000000000000000000 --- a/spaces/geniusguy777/Face_Recognition/app.py +++ /dev/null @@ -1,188 +0,0 @@ -# Face Recognition Hub -# author: Zeng Yifu(曾逸夫) -# creation time: 2022-07-28 -# email: zyfiy1314@163.com -# project homepage: https://gitee.com/CV_Lab/face-recognition-hub - -import os -import sys -from pathlib import Path - -import face_recognition -import gradio as gr -from PIL import Image, ImageDraw, ImageFont - -from util.fonts_opt import is_fonts - -ROOT_PATH = sys.path[0] # 项目根目录 - -IMG_PATH_Test = "./img_examples/unknown" - -FONTSIZE = 15 - -OCR_TR_DESCRIPTION = '''# Face Recognition -
            https://github.com/ageitgey/face_recognition demo
            ''' - -def str_intercept(img_path): - img_path_ = img_path[::-1] - point_index = 0 # 记录反转后第一个点的位置 - slash_index = 0 # 记录反转后第一个斜杠的位置 - - flag_pi = 0 - flag_si = 0 - - for i in range(len(img_path_)): - if (img_path_[i] == "." and flag_pi == 0): - point_index = i - flag_pi = 1 - - if (img_path_[i] == "/" and flag_si == 0): - slash_index = i - flag_si = 1 - - point_index = len(img_path) - 1 - point_index - slash_index = len(img_path) - 1 - slash_index - - return point_index, slash_index - - -# 人脸录入 -def face_entry(img_path, name_text): - if img_path == "" or name_text == "" or img_path is None or name_text is None: - return None, None, None - - point_index, slash_index = str_intercept(img_path) - img_renamePath = f"{img_path[:slash_index+1]}{name_text}{img_path[point_index:]}" - os.rename(img_path, img_renamePath) - img_ = Image.open(img_renamePath) - print(img_renamePath) - - return img_, img_renamePath, name_text - - -# 设置示例 -def set_example_image(example: list): - return gr.Image.update(value=example[0]) - - -def face_recognition_(img_srcPath, img_tagPath, img_personName): - if img_tagPath == "" or img_tagPath is None: - return None - - image_of_person = face_recognition.load_image_file(img_srcPath) - person_face_encoding = face_recognition.face_encodings(image_of_person)[0] - - known_face_encodings = [ - person_face_encoding,] - - known_face_names = [ - img_personName,] - - test_image = face_recognition.load_image_file(img_tagPath) - - face_locations = face_recognition.face_locations(test_image) - face_encodings = face_recognition.face_encodings(test_image, face_locations) - - pil_image = Image.fromarray(test_image) - img_pil = ImageDraw.Draw(pil_image) - textFont = ImageFont.truetype(str(f"{ROOT_PATH}/fonts/SimSun.ttf"), size=FONTSIZE) - # ymin, xmax, ymax, xmin - for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings): - matches = face_recognition.compare_faces(known_face_encodings, face_encoding) - - name = "Unknown Person" - - if True in matches: - first_matches_index = matches.index(True) - name = known_face_names[first_matches_index] - - img_pil.rectangle([left, top, right, bottom], fill=None, outline=(255, 228, 181), width=2) # 边界框 - text_w, text_h = textFont.getsize(name) # 标签尺寸 - # 标签背景 - img_pil.rectangle( - (left, top, left + text_w, top + text_h), - fill=(255, 255, 255), - outline=(255, 255, 255), - ) - - # 标签 - img_pil.multiline_text( - (left, top), - name, - fill=(0, 0, 0), - font=textFont, - align="center", - ) - - del img_pil - return pil_image - - -def main(): - is_fonts(f"{ROOT_PATH}/fonts") # 检查字体文件 - - with gr.Blocks(css='style.css') as demo: - gr.Markdown(OCR_TR_DESCRIPTION) - - # -------------- 人脸识别 录入 -------------- - with gr.Row(): - gr.Markdown("### Step 01: Face Entry") - with gr.Row(): - with gr.Column(): - with gr.Row(): - input_img = gr.Image(image_mode="RGB", source="upload", type="filepath", label="face entry") - with gr.Row(): - input_name = gr.Textbox(label="Name") - with gr.Row(): - btn = gr.Button(value="Entry") - - with gr.Column(): - with gr.Row(): - output_ = gr.Image(image_mode="RGB", source="upload", type="pil", label="entry image") - input_srcImg = gr.Variable(value="") - input_srcName = gr.Variable(value="") - with gr.Row(): - example_list = [["./img_examples/known/ChengLong.jpg", "成龙"], - ["./img_examples/known/VinDiesel.jpg", "VinDiesel"], - ["./img_examples/known/JasonStatham.jpg", "JasonStatham"], - ["./img_examples/known/ZhenZidan.jpg", "甄子丹"]] - gr.Examples(example_list, - [input_img, input_name], - 
output_, - set_example_image, - cache_examples=False) - - - # -------------- 人脸识别 测试 -------------- - with gr.Row(): - gr.Markdown("### Step 02: Face Test") - with gr.Row(): - with gr.Column(): - with gr.Row(): - input_img_test = gr.Image(image_mode="RGB", source="upload", type="filepath", label="test image") - with gr.Row(): - btn_test = gr.Button(value="Test") - with gr.Row(): - paths = sorted(Path(IMG_PATH_Test).rglob('*.jpg')) - example_images_test = gr.Dataset(components=[input_img], - samples=[[path.as_posix()] for path in paths]) - - with gr.Column(): - with gr.Row(): - output_test = gr.Image(image_mode="RGB", source="upload", type="pil", label="identify image") - - btn.click(fn=face_entry, inputs=[input_img, input_name], outputs=[output_, input_srcImg, input_srcName]) - - btn_test.click(fn=face_recognition_, - inputs=[input_srcImg, input_img_test, input_srcName], - outputs=[output_test]) - example_images_test.click(fn=set_example_image, inputs=[ - example_images_test,], outputs=[ - input_img_test,]) - - return demo - - -if __name__ == "__main__": - demo = main() - demo.launch(inbrowser=True) diff --git a/spaces/giswqs/solara-demo/Dockerfile b/spaces/giswqs/solara-demo/Dockerfile deleted file mode 100644 index 271b19ce6fd8f70d42243166136a200328b1fd0f..0000000000000000000000000000000000000000 --- a/spaces/giswqs/solara-demo/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM jupyter/base-notebook:latest - -RUN mamba install -c conda-forge leafmap geopandas localtileserver -y && \ - fix-permissions "${CONDA_DIR}" && \ - fix-permissions "/home/${NB_USER}" - -COPY requirements.txt . -RUN pip install -r requirements.txt - -RUN mkdir ./pages -COPY /pages ./pages - -ENV PROJ_LIB='/opt/conda/share/proj' - -USER root -RUN chown -R ${NB_UID} ${HOME} -USER ${NB_USER} - -EXPOSE 8765 - -CMD ["solara", "run", "./pages", "--host=0.0.0.0"] diff --git a/spaces/gligen/demo/gligen/__init__.py b/spaces/gligen/demo/gligen/__init__.py deleted file mode 100644 index 67cf72156e8a5586636f0af71bb47be11a7db307..0000000000000000000000000000000000000000 --- a/spaces/gligen/demo/gligen/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ - -import os, sys -sys.path.append(os.path.dirname(__file__)) -sys.path.append(os.path.join(os.path.dirname(__file__), "ldm")) - -import gligen.evaluator as evaluator -import gligen.trainer as trainer - - -# import gligen.ldm as ldm \ No newline at end of file diff --git a/spaces/glrh11/object-detection/README.md b/spaces/glrh11/object-detection/README.md deleted file mode 100644 index eed8dc4dbe2d40db0bf0f95420b622f4011e69aa..0000000000000000000000000000000000000000 --- a/spaces/glrh11/object-detection/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Object Detection -emoji: 📊 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -参考: https://huggingface.co/spaces/ClassCat/YOLOS-Object-Detection diff --git a/spaces/gotiQspiryo/whisper-ui/examples/3ds Max 2015 X64 (64bit) Product Key Download.md b/spaces/gotiQspiryo/whisper-ui/examples/3ds Max 2015 X64 (64bit) Product Key Download.md deleted file mode 100644 index 2b05c89472c726b48dbe807a93a2606f09637863..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/3ds Max 2015 X64 (64bit) Product Key Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

            3ds Max 2015 x64 (64bit) Product key download


            Download Zip ✶✶✶ https://urlgoal.com/2uyLZu



            - - 3cee63e6c2
            -
            -
            -

            diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Auto-tune 7 Ilok Crack.exe Download 18 BEST.md b/spaces/gotiQspiryo/whisper-ui/examples/Auto-tune 7 Ilok Crack.exe Download 18 BEST.md deleted file mode 100644 index ca666757244fbb484f1aa68b70ada54c2481ea00..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Auto-tune 7 Ilok Crack.exe Download 18 BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

            auto-tune 7 ilok crack.exe download 18


            Download Zip >>>>> https://urlgoal.com/2uyNyH



            - - d5da3c52bf
            -
            -
            -

            diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Chelsea Jersey Font Download How to Use the Font Vector Files and Fonts for Your Projects.md b/spaces/gotiQspiryo/whisper-ui/examples/Chelsea Jersey Font Download How to Use the Font Vector Files and Fonts for Your Projects.md deleted file mode 100644 index fc78d0b901cc572b47e9f8d57a1eb4fca7c2bb32..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Chelsea Jersey Font Download How to Use the Font Vector Files and Fonts for Your Projects.md +++ /dev/null @@ -1,8 +0,0 @@ -
            -

A traditional and austere font, Cinzel has stunning contemporary lines and comes in bold, black or regular. Created by Natanael Gama, Cinzel is ready for you now in Easil, or you can download it from Google Fonts.

            -

Need a spooktacular font for Halloween and all of your other spine-chilling projects? The bloodcurdling Creepster is the perfect frightening font. Brought to us by Sideshow, this grisly font is available now for your creepy creations in Easil, or ready to download from Google Fonts.

            -

            Chelsea Jersey Font Download


            DOWNLOAD ——— https://urlgoal.com/2uyM2b



            -

The Chelsea logo has blue, red, gold, light blue, and gray colors and a lion standing on its rear legs while looking backwards and holding a staff. The lion is placed inside a circular object with a thick blue outline that features the team name, two red footballs, and two red flowers that symbolize the Remembrance Poppy. The Chelsea logo symbolizes the important elements for which Chelsea is known, specifically its coat of arms.

Chelsea Logo Color Palette Image Format

The Chelsea logo colours can be found in an image format below.

Chelsea Logo Fonts

The Chelsea logo font is a custom Chelsea typeface. The custom Chelsea (sans-serif) font is used for jersey lettering, player names, numbers, the team logo, branding, and merchandise.

Chelsea Logo JPG

The Chelsea logo JPG format can be found below. To download the Chelsea logo JPG format, right-click and choose save.

Chelsea Logo PNG

The Chelsea logo PNG format can be found below. To download the Chelsea logo PNG format, right-click and choose save.

            -


          aaccfb2cb3
          -
          -
          \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download Swing Vote Andy Garcia How one justice holds the fate of a womans life in his hands.md b/spaces/gotiQspiryo/whisper-ui/examples/Download Swing Vote Andy Garcia How one justice holds the fate of a womans life in his hands.md deleted file mode 100644 index b156a884ad18cf082368c9a3867acf6db164d26f..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Download Swing Vote Andy Garcia How one justice holds the fate of a womans life in his hands.md +++ /dev/null @@ -1,5 +0,0 @@ -
          -

Jones represents a district that\u0027s split between Westchester and Rockland counties. With the musical chairs of redistricting, he moved to Brooklyn and ran in the 10th.He said he\u0027s proud of what he was able to accomplish in Congress. He is seen as a bright light in the Democratic Party, and even though he will not be on the ballot in the fall and will not be in Congress next year, many feel Jones is a promising star in the Democratic Party and his future is bright.","mainEntityOfPage":"@type":"WebPage","@id":"#post-update19","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update18","headline":"Rep. Nadler slides to victory in newly redrawn 12th Congressional District","datePublished":"2022-08-24T03:05:05+0000","dateModified":"2022-08-24T03:15:00+0000","author":["@type":"Person","familyName":"Bauman","givenName":"Ali","name":"Ali Bauman"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"Congressman Jerry Nadler has defeated Congresswoman Carolyn Maloney and attorney Suraj Patel in the hotly contested 12th Congressional District.CBS2\u0027s Ali Bauman was at Nadler\u0027s headquarters on the Upper West Side when the projection came down, not even an hour after the polls closed. The room burst into applause, and supporters sporadically broke out into \"Jerry\" chants in the hours after.In his victory speech, Nadler said after 30 years in Congress, he decided to run again in the newly redrawn 12th District because Manhattan\u0027s West Side is his home, the residents are his community and he did not want to be anywhere else.Watch Nadler\u0027s full speechNadler slid to victory fairly quickly over Maloney, whom he has spent 30 years working alongside with in Congress.He thanked his family, staff, the volunteers who campaigned for him and, of course, his constituents, who he said made their voices clear.He also spoke of what he called overwhelming challenges he still faces in Congress but said as a lifelong New Yorker, he will do what New Yorkers do, which is stand up and fight.\"When it comes to unpacking our Supreme Court, I\u0027m going to stand up and fight. We\u0027re going to end the scourge of gun violence in America because we\u0027re going to stand up and fight. We\u0027re going to restore abortion access across our entire nation. 
I\u0027m going to stand up and fight to protect and expand our other fundamental rights, too,\" Nadler said.Nadler said he has spoken to both Maloney and Patel tonight, and he spoke highly of them in his speech, calling Patel a bright and committed young leader, as well as thanking Congresswoman Maloney for her decades of service to New York City.","mainEntityOfPage":"@type":"WebPage","@id":"#post-update18","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update17","headline":"CBS News projects win for Rep. Nadler","datePublished":"2022-08-24T01:38:15+0000","dateModified":"2022-08-24T01:43:00+0000","author":["@type":"Person","familyName":"Team","givenName":"CBS New York","name":"CBS New York Team"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"CBS News and the Associated Press are projecting that Rep. 
Jerrold Nadler has won the Democratic nomination in New York\u0027s 12th Congressional District.","mainEntityOfPage":"@type":"WebPage","@id":"#post-update17","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update16","headline":"12 official Democratic candidates square off in 10th Congressional District","datePublished":"2022-08-23T21:49:00+0000","dateModified":"2022-08-24T01:52:00+0000","author":["@type":"Person","familyName":"Aiello","givenName":"Tony","name":"Tony Aiello"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"It\u0027s one of the most compelling congressional primary seasons on record, after redistricting set off a high-stakes game of musical chairs.A huge number of Democrats decided to take a chance in the 10th Congressional District.CBS2\u0027s Tony Aiello has more on how the race is shaping up.Voters can\u0027t complain about a lack of choices. There are 12 official candidates in the Democratic primary, a number of them with impressive resumes and records.The district map has changed. The old 10th Congressional District snaked from Borough Park, Brooklyn up into the western half of Lower Manhattan and all along the West Side to Morningside Heights.The new district is much more compact, taking in all of Lower Manhattan and moving north in Brooklyn to encompass Park Slope, Cobble Hill, and Dumbo.Current Congressman Jerry Nadler decided to run in the new 12th District, creating a rare open seat. Six candidates have been on top of the voter opinion polls.Former federal prosecutor Dan Goldman was lead counsel for the first impeachment of Donald Trump. He has put more than $2 million of his own money into the race.Assemblywoman Yuh-Line Niou is also polling well. She has energized many progressive voters.Current 17th District Congressman Mondaire Jones relocated to Brooklyn from Westchester County as part of the musical chairs after redistricting. He is one of the few openly gay Black men currently in Congress, and is seen as a bridge between the progressive and moderate Democrats.With many impressive candidates, the following is sample of how some voters say they\u0027re making their choice:\"I read a lot. I went online and read and discussed with people,\" one woman said. \"But it did take a lot of reading.\"\"How left or center they are as Democrats,\" a man said, when asked how he made his choice. 
\"So I voted for one of the centrist Democrats.\"Another woman said it was a tough decision, adding, \"I was also swayed by who was doing well, given that there were so many candidates.\"Former Mayor Bill de Blasio dipped his toe into the primary waters, but felt a distinct chill from the voters and ended his campaign. However, he is still on the ballot, making it 13 names the voters have to choose from. ","mainEntityOfPage":"@type":"WebPage","@id":"#post-update16","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update15","headline":"The race to watch: New York\u0027s 12th Congressional District","datePublished":"2022-08-24T00:53:15+0000","dateModified":"2022-08-24T01:27:00+0000","author":["@type":"Person","familyName":"Bauman","givenName":"Ali","name":"Ali Bauman"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"The race to watch tonight is in the 12th Congressional District, where two of the most powerful Democrats in Congress have been pitted against each other with a third candidate who\u0027s nearly half their age.As CBS2\u0027s Ali Bauman reports, while it was a quiet Election Day in the heat of August, there is a lot at stake.The polls are closed and now all there is to do is wait.\"It\u0027s a really tough one. I think it\u0027s gonna be close, actually,\" one voter said.The candidates hit the pavement earlier to make their final push for New York\u0027s 12th Congressional District.Congresswoman Carolyn Maloney is fighting to keep her seat. The newly redrawn district has pitted the Upper East Sider against her West Side counterpart, Congressman Jerry Nadler.Hoping to unseat both of them is 38-year-old attorney Suraj Patel.\"It used to be whenever there was a woman on the ballot, I felt like I had to vote for her regardless, and now I\u0027m so thrilled there are so many women running. Now I just have to agree with them,\" voter Carolyn Montgomery said.Both Maloney and Nadler have 30 years in Congress.Nadler chairs the House Judiciary Committee and has highlighted the fact he is the only sitting Jewish congressman from New York City.Maloney chairs the House Oversight Committee and has leaned into being the only woman in this race, in a post-Roe election.Patel is a former Obama staffer, running as a fresh face for generational change.\"I did like Suraj Patel and ... 
the new energy he might bring,\" voter Patrick Wesonga said.\"[Maloney\u0027s] also a woman and she also stands strong on abortion issues and is endorsed by Planned Parenthood, so I voted for her,\" voter Jen Sales said.\"Nadler really supports my values,\" voter Steven Birkeland said.\"Nadler\u0027s been here for a zillion years, and before him, Ted Weiss, so I kinda went with my guys who have been with this neighborhood for a long time,\" voter Russ Owen said.No matter what happens, one thing is for sure -- at least one of New York\u0027s most veteran congressmembers will be out of a job.","mainEntityOfPage":"@type":"WebPage","@id":"#post-update15","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update14","headline":"Anthony Salvanto on N.Y. Primary elections, what\u0027s motivating voters","datePublished":"2022-08-24T00:30:00+0000","dateModified":"2022-08-24T00:39:00+0000","author":["@type":"Person","familyName":"Team","givenName":"CBS New York","name":"CBS New York Team"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"Director of elections and surveys for CBS News Anthony Salvanto joined CBS2\u0027s Kristine Johnson and Dick Brennan to discuss what\u0027s motivating voters in New York\u0027s Primary elections.","mainEntityOfPage":"@type":"WebPage","@id":"#post-update14","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update13","headline":"Ed O\u0027Keefe on New York\u0027s Primary elections, November midterms","datePublished":"2022-08-23T23:30:34+0000","dateModified":"2022-08-23T23:30:00+0000","author":["@type":"Person","familyName":"Team","givenName":"CBS New York","name":"CBS New York Team"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New 
York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"CBS News senior White House and political correspondent Ed O\u0027Keefe joined CBS2\u0027s Maurice DuBois and Marcia Kramer to discuss New York\u0027s Primary elections.","mainEntityOfPage":"@type":"WebPage","@id":"#post-update13","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update12","headline":"New York on verge of losing clout in Washington","datePublished":"2022-08-23T22:57:47+0000","dateModified":"2022-08-23T22:57:00+0000","author":["@type":"Person","familyName":"Kramer","givenName":"Marcia","name":"Marcia Kramer"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"This has been the most extraordinary Primary election New Yorkers have ever experienced.Not only is the state on the verge of losing clout in Washington, but CBS2 political reporter Marcia Kramer says many Democrats feel it was totally avoidable.When all the Primary votes are tallied, the political careers of a number of highly regarded and powerful New York Congressmembers could well be over. The hope of helping Democrats maintain control of the House has been dashed, and Democratic lawmakers brought it on themselves.Political consultant Basil Smikle, who has an in-depth knowledge of Democratic party politics in New York, admits that this primary election is a self-inflicted wound. \"Did the Democrats get greedy?\" Kramer asked.\"There was definitely a Democratic overreach. You know, they tried to do too much,\" Smikle said.At fault, he says, are New York state lawmakers who threw out district lines drawn by a non-partisan commission and instead drew their own districts in the hope of creating more Democratic seats and helping Nancy Pelosi and House Democrats maintain control of the lower chamber in the capital. It was fuzzy math.Before the new Census, the New York delegation had 27 seats -- 19 Democrats and eight Republicans. New York lost one seat in the Census. 
The new lines drawn by lawmakers created 22 Democratic seats and just four Republican seats.\"It was so egregious, particularly the Staten Island seat, that Republicans took the Democrats to court and won,\" Smikle said.Smikle is talking about the attempt by Democratic lawmakers to make it nearly impossible for Staten Island-Brooklyn Republican Congresswoman Nicole Maliotakis to win. They removed conservative areas like Bay Ridge from her district and substituted liberal areas like Park Slope, home of super-progressive former mayor Bill DeBlasio.\"For Republicans, it was a bridge too far,\" Smikle said.It was also apparently a bridge too far for the state court of appeals. It said it was gerrymandering, plain and simple. A special master was appointed to redraw the lines.He fixed the Maliotakis district, but, in the process, forced two big-time Washington power brokers to run against each other. At the end of the night, either House Judiciary Committee Chair Jerry Nadler or Oversight Chair Carolyn Maloney, or both, will not be going back to Washington.Also two of the state\u0027s African-American, gay congressmen, Mondaire Jones and Jamaal Bowman, are in a fight for survival.\"This was a completely avoidable mess, if you will ... Could potentially lose Democrats seats in the House at a very important time in our history,\" Smikle said.Losing clout in Washington could hurt both Gov. Kathy Hochul and Mayor Eric Adams. They are seeking support for a host of critical needs -- the MTA, building new tunnels under the Hudson and funds to cope with the influx of asylum seekers, to name a few.Whether or not races are called Tuesday night is going to depend on how close races like the Nadler-Maloney race and the new 10th District are. With many voters out of town or on vacation right now, there are a huge number of absentee ballots in Manhattan and Brooklyn that could affect the outcome.","mainEntityOfPage":"@type":"WebPage","@id":"#post-update12","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update11","headline":"Mayor Adams-backed centrist Dems taking on progressives in state Senate races","datePublished":"2022-08-23T22:05:54+0000","dateModified":"2022-08-23T22:05:00+0000","author":["@type":"Person","familyName":"Brennan","givenName":"Dick","name":"Dick Brennan"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"There is a crucial battle in several key New York State Senate races, and the fight is among Democrats.In some ways Democratic 
primaries have become kind of proxy fight between Mayor Eric Adams and the moderate wing of the party, and those supported by the left wing, and it still has to do with the fight over bail reform.The battle lines are drawn as Democrats fight for the soul of their party.Incumbent Manhattan Sen. Robert Jackson is facing off against challenger Angel Vasquez in a newly redrawn district.The Bronx\u0027s progressive state senator, Gustavo Rivera, is facing a fight from Miguelina Camilo, who is backed by Mayor Adams.In Brooklyn, the leftist incumbent, Jabari Brisport, is going head to head with the Rev. Conrad Tillard, who is also endorsed by the mayor.And in an open seat in Queens, Adams tapped moderate former City Councilwoman Elizabeth Crowley in the race against Kristen Gonzalez, who has the support of the Democratic socialists.\"The battle of the Democratic party is between the center, the moderates, frankly, and the left, the progressives. Who will turn out a vote and who can win those races? Can the progressives put the forces together on the ground to turn out a vote is the great test today,\" Democratic political strategist Hank Sheinkopf said.And there is a test for Mayor Adams. He hopes his support of moderate candidates will send a signal to Albany and Gov. Kathy Hochul.\"The mayor\u0027s problem is he\u0027s gotta win if he wants to get bail reform done and if he wants the help in Albany he needs and the respect he requires. If he doesn\u0027t get the respect from the Albany politicians by beating them, well, he\u0027s gonna have problems going forward,\" Sheinkopf said.But how can any candidates turn out the vote in the third week in August, when many are checked out, or even out of town?\"Voting is ritualistic. It\u0027s behavioral. It\u0027s constant. It\u0027s kind of the thing you do at a particular time on a particular day. When you move it to the third week in August, a Tuesday, when no one votes, only the most likely voters who have participated as part of their religion will turn out and everybody else gets lost,\" Sheinkopf said.Do endorsements matter? We will find out. The mayor did have some success with the candidates he endorsed in the first round of primaries in June. 
","mainEntityOfPage":"@type":"WebPage","@id":"#post-update11","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update10","headline":"Long Island congressional races could impact balance of power in Washington","datePublished":"2022-08-23T20:44:32+0000","dateModified":"2022-08-23T20:44:00+0000","author":["@type":"Person","familyName":"Gusoff","givenName":"Carolyn","name":"Carolyn Gusoff"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"PLAINVIEW, N.Y. -- August is an odd time for a primary and even more unusual is the fact on Long Island there are three open congressional seats and a fourth seat with a freshman congressman facing a primary.All of those races are being watched nationwide.\"The House of Representatives, the decision to go Republican, starts here on Long Island because of the opportunities we have here in Suffolk and in Nassau County,\" Suffolk County Republican Chairman Jesse Garcia said.\"The House of Representatives is going to be decided by perhaps a handful of members of Congress, so what happens on Long Island is very important,\" state Democratic Party Chairman Jay Jacobs said.There may be high stakes, but there has also been low turnout. It\u0027s confusing for some voters because not only is it a rare primary day in August, but also congressional district lines have also shifted on Long Island.Add in the departure of three familiar faces -- Lee Zeldin, who running for governor, and Tom Suozzi and Kathleen Rice, who have left their seats -- and the field is wide open.Three Republicans are vying for the nomination in the 1st District to face Democrat Bridget Fleming in November.In the 2nd District, incumbent Andrew Garbarino is facing a challenge from two Republicans. The winner will take on Jackie Gordon in the general election.In what was Suozzi\u0027s 3rd District, it\u0027s a five-way race in the Democratic primary to face George Santos in November.And in what was Rice\u0027s 4th District, one of four Democrats will face Anthony D\u0027Esposito in the fall.It\u0027s a rare opportunity for Long Island to impact the balance of power in Washington, where open seats are rare.\"Because Long Island is what we consider a typical swing region, we get to see here the trends that are going to go nationwide,\" said Lawrence Levy, chairman of suburban studies at Hofstra University. 
\"Political operatives around the country are looking at Long Island to see which way the respective parties are going -- will Trump-endorsed candidates triumph over more moderate candidates; will Democratic progressives have sway over moderates.\"It\u0027s a long way to November and no one is willing to call any of these four seats on Long Island. Democrats have a slight enrollment advantage across the two counties, but one-third of voters are independent and with redistricting and of the seats could go either way.","mainEntityOfPage":"@type":"WebPage","@id":"#post-update10","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update9","headline":"No matter who wins the 12th District, it will be a shakeup in Washington","datePublished":"2022-08-23T20:43:04+0000","dateModified":"2022-08-23T20:43:00+0000","author":["@type":"Person","familyName":"Bauman","givenName":"Ali","name":"Ali Bauman"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"Much of New York will be watching the race between two Democratic mainstays, and allies in Congress. They\u0027re now vying for the same seat -- and a third, younger candidate is hoping to squeeze them out. It has been relatively quiet at the voting booths, but there is a lot at stake. The candidates are making their final push in the 12th Congressional District Tuesday.Rep. Jerry Nadler cast his vote on his home turf, the Upper West Side. \"Carolyn and I have worked on a lot of things together, but I think I have a more principled, progressive record,\" Nadler said. Nadler currently represents the 10th District, but redistricting earlier this year redrew the 12th District to stretch from Stuy-Town on the East Side up to West 114th Street, pitting Nadler against his longtime ally on the East Side, Rep. Carolyn Maloney. \"I came to Congress to fight for the Equal Rights Amendment. And I want to go back to push it over the finish line,\" Maloney said. Both were first elected to Congress in 1992. Maloney chairs the House Oversight Committee, while Nadler chairs the Judiciary Committee. Hoping to unseat them both is 38-year-old attorney and former Obama staffer Suraj Patel. \"The status quo in Washington, D.C. is broken. Washington no longer works for New York, and New Yorkers need to send a fighter,\" Patel said. 
Maloney has leaned into the fact she\u0027s the only woman in this race, while Nadler highlighted he is the only sitting Jewish congressman from New York City, and Patel focused on his youthful and more moderate energy. Bauman caught up with voters to ask how they were deciding. \"Sometimes I think it\u0027s better to have someone who\u0027s young and hungry and haven\u0027t gotten into the whole political system,\" said voter Gwyn McAllister. \"I support, since 1992, Nadler,\" one voter said. \"It\u0027s just time for a change, period,\" said another.\"Nadler. I wanted him as opposed to Maloney. I don\u0027t like her views on abortion,\" said voter Antonia Steiner. \"I think abortion rights, women\u0027s rights are huge right now,\" said voter Christie Caluccia. \"Democracy is in trouble now, and one of the campaigns is really about that, so I made my decision based on that,\" said voter Carlton Thompkins. No matter what happens Tuesday, at least one of New York\u0027s most veteran members of Congress will be out of a job. ","mainEntityOfPage":"@type":"WebPage","@id":"#post-update9","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update8","headline":"12th District race shaping up as battle between House veterans, sharp newcomer","datePublished":"2022-08-23T15:33:36+0000","dateModified":"2022-08-23T15:33:00+0000","author":["@type":"Person","familyName":"Maldonado","givenName":"Zinnia","name":"Zinnia Maldonado"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"There\u0027s not much left for the candidates to do in the 12th Congressional District. The power now lies in the hands of the voters.Many who spoke to CBS2\u0027s Zinnia Maldonado throughout Tuesday morning said they think the final results will be close.\"Every vote counts and I care about our city a lot,\" one person said.The people and the politicians of New York City hit the polls to decide the next 12th Congressional District representative.\"People\u0027s voices need to be heard and that is the only reason I participate in the elections,\" one resident said.\"I\u0027m feeling very good,\" Rep. Jerry Nadler said.Optimistic Nadler cast his vote on Tuesday morning.\"I was out campaigning early this morning. I\u0027ll be out campaigning this afternoon. You do everything you can. You don\u0027t want to lose an election and think to yourself, if only I done that,\" Nadler said.Nadler currently represents the 10th District and is looking to take Rep. 
Carolyn Maloney\u0027s title as representative of the 12th.\"I came to Congress to fight for the Equal Rights Amendment, and I want to go back to push it over the finish line,\" Maloney said.Maloney and Nadler are going up against each other after the district\u0027s lines were redrawn earlier this year, and now Maloney is fighting to keep her spot.\"They\u0027re both strong candidates,\" one voter said.\"It\u0027s very unfortunate that two terrific candidates are pitted against each other,\" another said.Also vying for the House seat is attorney Suraj Patel, who was out Tuesday morning campaigning for last-minute votes.\"Elections are about the future. They\u0027re about energies, ideas and optimism, hope for New York,\" Patel said.Those watching the race closely believe it\u0027s going to be a matter of who gets their voters to the polls.\"We just hope a couple more people come out and vote because it\u0027s an important election,\" said Fred Umane, Manhattan Commissioner of the New York City Board of Elections.\"We on the West Side, really love Nadler. The people on the East Side, I know a lot of them really love [Maloney], so the big question is going to be who\u0027s going to win?\" a voter said.One thing\u0027s for sure, by the end of Tuesday at least one of New York City\u0027s most veteran member of Congress will be voted out of office. ","mainEntityOfPage":"@type":"WebPage","@id":"#post-update8","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update7","headline":"Voters sound off about rent, subway crime","datePublished":"2022-08-23T15:17:35+0000","dateModified":"2022-08-23T15:17:00+0000","author":["@type":"Person","familyName":"Duddridge","givenName":"Natalie","name":"Natalie Duddridge"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"New York\u0027s brand new 10th Congressional District was redrawn to include all of Lower Manhattan and parts of Brooklyn. As CBS2\u0027s Natalie Duddridge reports, the competition is tough for the rare open seat. Anthony Loring was the first person to cast his ballot when polls opened at 6 a.m. Tuesday at 81 New Street. \"The issues I care the most about are candidates who have an economic message that looks out for the least economically advantaged people,\" he told Duddridge. Voters sounded off on a range of issues. \"I want the rent to go down a little bit. It\u0027s a little expensive right now -- $5,000, $6,000. I mean, some people are paying $7,000 I know. 
It\u0027s ridiculous,\" one person said. \"To clean up the streets,\" another person added. \u0027It\u0027s just really sad. I feel like there are a lot of people that need help that they\u0027re not getting.\"\"Subway crime, it\u0027s scary,\" another agreed. \"I care about cleaning up the streets, placing people with mental issues who are homeless into places so that when we commute into work we feel safe,\" Queens resident Bryan Stephens said. The ballot features 13 candidates. A recent Emerson College poll showcased the top six. Leading the way was former federal prosecutor Dan Goldman, who voted last Wednesday during early voting. He was followed by Assemblywoman Yuh-Line Niou, who cast her ballot at 8 a.m. in Lower Manhattan and encouraged people to vote. At 9 a.m., first-term Congressman Mondaire Jones voted at P.S. 58 in Brooklyn. City Councilwoman Carlina Rivera made a final push to voters on Fifth Avenue in Brooklyn. Also in the race are Assemblywoman Jo Anne Simone and former Congresswoman Holtzman. As for the Republican candidate, risk analyst Benine Hamdan is running unopposed. The district is heavily Democratic, so the winner of this primary will likely go on to win the General Election in November. ","mainEntityOfPage":"@type":"WebPage","@id":"#post-update7","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update6","headline":"Hitting the polls early","datePublished":"2022-08-23T14:01:04+0000","dateModified":"2022-08-23T14:01:00+0000","author":["@type":"Person","familyName":"Maldonado","givenName":"Zinnia","name":"Zinnia Maldonado"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"CBS2\u0027s Zinnia Maldonado spoke with voters casting their ballots in the hotly contested 12th Congressional District. \"I\u0027m interested in politics, so I always like to -- it only takes a minute when you come out early,\" one man said. \"I just think they are workhorses. 
I would like to see them both go out with honor and dignity and just make way for a new wave of candidates,\" a woman added.","mainEntityOfPage":"@type":"WebPage","@id":"#post-update6","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update5","headline":"Voters on the issues","datePublished":"2022-08-23T13:57:08+0000","dateModified":"2022-08-23T13:57:00+0000","author":["@type":"Person","familyName":"Duddridge","givenName":"Natalie","name":"Natalie Duddridge"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"CBS2\u0027s Natalie Duddridge caught up with voters as they cast their ballots in the newly redrawn 10th Congressional District. \"The issues I care the most about are candidate who have an economic message that looks out for the least economically advantaged people,\" Manhattan resident Anthony Loring told Duddridge. \"I don\u0027t really care what happens to the billionaires and millionaires in terms of tax issues, but I think a lot of the way resources are distributed in this country has gotten skewed for the past 40 years or so, and there are people who have a lot more than they\u0027ll ever need and people who are struggling to get by. 
","mainEntityOfPage":"@type":"WebPage","@id":"#post-update1","@type":"BlogPosting","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/#post-update0","headline":"When & where to vote","datePublished":"2022-08-23T09:04:21+0000","dateModified":"2022-08-23T09:04:00+0000","author":["@type":"Person","familyName":"Team","givenName":"CBS New York","name":"CBS New York Team"],"image":"@context":"https:\/\/schema.org","@type":"ImageObject","height":630,"width":1200,"url":"https:\/\/assets3.cbsnewsstatic.com\/hub\/i\/r\/2022\/08\/24\/0991c7f8-33be-466e-963a-1fc179b28852\/thumbnail\/1200x630g2\/b3ba8a76488fabeb3e583b1103c3881e\/gettyimages-1417503814.jpg","publisher":"@context":"https:\/\/schema.org","@type":"Organization","@id":"https:\/\/www.cbsnews.com\/newyork\/","name":"CBS New York","foundingDate":"1948-05-06","sameAs":["https:\/\/www.cbsnewyork.com\/","https:\/\/www.facebook.com\/CBSNewYork\/","https:\/\/www.instagram.com\/cbsnewyork\/","https:\/\/twitter.com\/CBSNewYork","https:\/\/youtube.com\/CBSNewYork\/","https:\/\/en.wikipedia.org\/wiki\/WCBS-TV"],"logo":["@context":"https:\/\/schema.org","@type":"ImageObject","height":60,"width":600,"url":["https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-600x60.png","https:\/\/www.cbsnews.com\/assets\/partner\/google\/cbs-newyork-darkbg-600x60.png"]],"url":"https:\/\/www.cbsnews.com\/newyork\/","articleBody":"Polls will be open from 6 a.m. to 9 p.m. statewide. CLICK HERE to find your polling location. ","mainEntityOfPage":"@type":"WebPage","@id":"#post-update0"]} "@context":"http:\/\/schema.org\/","@type":"WebPage","name":"New York Primary Election Day: See results in the state\u0027s key races","url":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/" "@context":"https:\/\/schema.org","@type":"BreadcrumbList","itemListElement":["@type":"ListItem","position":1,"item":"@id":"https:\/\/www.cbsnews.com\/newyork\/","@type":"WebPage","@name":"CBSNews.com","@type":"ListItem","position":2,"name":"Politics","item":"@id":"https:\/\/www.cbsnews.com\/newyork\/politics","@type":"CollectionPage","@name":"Politics","@type":"ListItem","position":3,"name":"New York Primary: Latest results in the state\u0027s key races","item":"@id":"https:\/\/www.cbsnews.com\/newyork\/live-updates\/new-york-august-primary-election-10th-and-12th-congressional-districts\/","@name":"New York Primary: Latest results in the state\u0027s key races"] var CBSNEWS = CBSNEWS || ; CBSNEWS.features = 25:1; !function()var e,t,n,i,r=passive:!0,capture:!0,a=new Date,o=function()i=[],t=-1,e=null,f(addEventListener),c=function(i,r)e,u=function()if(t>=0&&t1e12?new Date:performance.now())-e.timeStamp;"pointerdown"==e.type?function(e,t)var n=function()c(e,t),a(),i=function()a(),a=function()removeEventListener("pointerup",n,r),removeEventListener("pointercancel",i,r);addEventListener("pointerup",n,r),addEventListener("pointercancel",i,r)(t,e):c(t,e),f=function(e)["mousedown","keydown","touchstart","pointerdown"].forEach((function(t)return e(t,s,r))),p="hidden"===document.visibilityState?0:1/0;addEventListener("visibilitychange",(function e(t)"hidden"===document.visibilityState&&(p=t.timeStamp,removeEventListener("visibilitychange",e,!0))),!0);o(),self.webVitals=firstInputPolyfill:function(e)i.push(e),u(),resetFirstInputPolyfill:o,get firstHiddenTime()return p}(); !function()function 
e()CBSNEWS.features.executeWithConsent(["unload-beacon"],"performance",function(),window)"onpagehide"in window?addEventListener("pagehide",e,capture:!0):(addEventListener("unload",e,capture:!0),addEventListener("beforeunload",e,capture:!0))(); .breaking-newsbottom:0;box-shadow:0 -4px 12px 0 rgba(0,0,0,0.25);display:flex;height:0;left:0;margin:0;position:fixed;right:0;transition:height 1s ease-out, opacity 1s ease;transform:translateZ(0);z-index:7.has__top-ad-container--adhesion .breaking-news,body.embedded .breaking-newsdisplay:none.breaking-news a[href=""]pointer-events:none.breaking-news .breaking-news__icon--type-video-playbackground-color:rgba(16,16,16,0.35);border-radius:50%;fill:#fff;height:40px;position:absolute;width:40px;z-index:7.breaking-news .breaking-news__headline-wrappermax-height:100%;position:relative;width:100%.breaking-news .breaking-news__headline-wrapper--wrapperalign-items:center;color:#fff;display:flex;flex-wrap:wrap;position:absolute;margin:16px 0 0 20px;width:90%.breaking-news .breaking-news__label-containerfont-family:"Proxima Nova",sans-serif;font-size:.94rem;line-height:1.6;font-weight:bold;font-size:12px;align-items:center;display:inline-flex;letter-spacing:2px;margin-right:6px;max-height:32px;text-decoration:none;text-transform:uppercase.breaking-news .breaking-news__label-container--type-live,.breaking-news .breaking-news__label-container:emptydisplay:none.breaking-news .breaking-news__video-containeralign-items:center;display:none;height:0;justify-content:center;position:relative.breaking-news .breaking-news__video-container .breaking-news__videomax-height:90px.breaking-news .breaking-news__headline-wrapper::before,.breaking-news .breaking-news__video-container::beforebackground:linear-gradient(270deg, #DE3D05 0%, #B60505 100%);content:"";height:100%;position:absolute;width:100%.breaking-news .breaking-news__video-container::beforebackground:linear-gradient(90deg, rgba(0,0,0,0.2) 0%, rgba(166,6,6,0.5) 50.31%, #B60505 100%);z-index:7.breaking-news--visible.smart-banner-breaking-news--visible .breaking-news,.device--type-amp .breaking-newsheight:90px.breaking-news--visible.smart-banner-breaking-news--visible .breaking-news .breaking-news__close,.device--type-amp .breaking-news .breaking-news__closedisplay:block.breaking-news .breaking-news__label-iconmargin:auto 8px auto 0.breaking-news .breaking-news__label-label--type-livepadding:0 0 0 22px;position:relative.breaking-news .breaking-news__label-label--type-live::before,.breaking-news .breaking-news__label-label--type-live::aftercontent:'';position:absolute;top:calc(50% - (6px / 2));left:6px;border-radius:50%;display:block;width:6px;height:6px;box-sizing:border-box.breaking-news .breaking-news__label-label--type-live::beforebackground:#B60505.breaking-news .breaking-news__label-label--type-live::afterborder:1px solid #B60505;animation:4s ease-in-out 3s infinite pulse.breaking-news .breaking-news__headlinefont-family:"Proxima Nova",sans-serif;font-size:1.94rem;line-height:1.33;font-weight:900;color:#fff;display:block;flex:1 1 100%;font-size:13px;height:45px;line-height:13.65px;margin:0;overflow:hidden;text-overflow:ellipsis.breaking-news .breaking-news__closedisplay:none;width:30px;height:30px;background:none;border-radius:50%;position:absolute;top:0;right:5px;border:none;fill:#F2F2F2;z-index:3.breaking-news .breaking-news__close:hovercursor:pointer.breaking-news .breaking-news__close svgheight:32px;width:32px.breaking-news[type="liveStreaming"] .breaking-news__video-container,.breaking-news[type="live"] 
.breaking-news__video-containerdisplay:flex;height:90px.breaking-news[type="liveStreaming"] .breaking-news__headline-wrapper--wrapper,.breaking-news[type="live"] .breaking-news__headline-wrapper--wrappermargin-left:0.device--type-amp .breaking-news .breaking-news__video-containerdisplay:flex;height:90px.device--type-amp .breaking-news .breaking-news__headline-wrapper--wrapper.has-videomargin-left:0.breaking-news--visible.smart-banner-breaking-news--visible [data-ad*="intromercial"],.breaking-news--visible.smart-banner-breaking-news--visible [data-ad*="omni-"],.breaking-news--visible.smart-banner-breaking-news--visible .top-ad-containervisibility:hidden@media (min-width: 768px).breaking-news .breaking-news__label-containerbackground:#F2F2F2;border-radius:2px;color:#B60505;font-size:11px;line-height:normal;line-height:initial;margin-bottom:5px;padding:3px 5px.breaking-news .breaking-news__headlinefont-size:17px;white-space:nowrap.breaking-news .breaking-news__headline-wrapper--wrappermargin-top:25px.breaking-news .breaking-news__video-container .breaking-news__videomax-height:90px.breaking-news .breaking-news__icon--type-video-playheight:50px;width:50px@media (min-width: 1020px).breaking-news .breaking-news__label-containerpadding:5px 6px.breaking-news .breaking-news__headlinefont-size:24px;line-height:normal;line-height:initial.breaking-news .breaking-news__headline-wrapper--wrappermargin-top:31px.breaking-news .breaking-news__video-container .breaking-news__videomax-height:117px.breaking-news--visible.smart-banner-breaking-news--visible .breaking-newsheight:117px.breaking-news--visible.smart-banner-breaking-news--visible .breaking-news .breaking-news__closedisplay:block.breaking-news .breaking-news__content-wrapper::before,.breaking-news .breaking-news__video-container::beforeheight:117px.breaking-news[type="liveStreaming"] .breaking-news__video-container,.breaking-news[type="live"] .breaking-news__video-containerdisplay:flex;height:117px.breaking-news[type="liveStreaming"] .breaking-news__headline-wrapper--wrapper,.breaking-news[type="live"] .breaking-news__headline-wrapper--wrappermargin-left:0 body .breaking-news[target-url=" -updates/new-york-august-primary-election-10th-and-12th-congressional-districts/"] display: none; var userAgent = navigator.userAgent.toLowerCase(); if (/msie|trident\//.test(userAgent) && !/edge/.test(userAgent)) document.getElementById('ieblock').innerHTML = 'NoticeYour web browser is not fully supported by CBS News and CBSNews.com. For optimal experience and full features, please upgrade to a modern browser.
          You can get the new Microsoft Edge at microsoft.com/edge, available to download on all versions of Windows in more than 90 languages.'; document.getElementById('ieblock').setAttribute('style', 'background-color: #B60505; color: #F5F5F5; font-size: 20px; font-family: sans-serif; padding: 100px 100px'); CBS News New York: Free 24/7 News

        • CBS New York App CBSNews.com Links & Numbers #BetterTogether Class Act with Chris Wragge CBS+ Black History is American History
        News All News NY News NJ News CT News LI News U.S. World Health Business Entertainment Politics Tech Weather First Alert Weather Radars & Maps CBS2 Weather Map CBS2 Weather Watchers First Alert Weather 101 Sports All Sports Giants Yankees Knicks Rangers Islanders CBS Sports Live Jets Mets Nets Devils Odds Video More Station Info WCBS-TV WLNY-TV Contact Us Advertise Contests & Promotions Galleries Links & Numbers Download the App Log In Search Search Live TV Watch CBS News

        -

        Download Swing Vote Andy Garcia


        Download Zip https://urlgoal.com/2uyLUL



        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/modules/learned_positional_embedding.py b/spaces/gradio/HuBERT/fairseq/modules/learned_positional_embedding.py deleted file mode 100644 index 378d0f707183dd344dbb9288dda394b11053acf0..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/learned_positional_embedding.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from torch import Tensor - - -class LearnedPositionalEmbedding(nn.Embedding): - """ - This module learns positional embeddings up to a fixed maximum size. - Padding ids are ignored by either offsetting based on padding_idx - or by setting padding_idx to None and ensuring that the appropriate - position ids are passed to the forward function. - """ - - def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: int): - super().__init__(num_embeddings, embedding_dim, padding_idx) - self.onnx_trace = False - if self.padding_idx is not None: - self.max_positions = self.num_embeddings - self.padding_idx - 1 - else: - self.max_positions = self.num_embeddings - - def forward( - self, - input: Tensor, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - positions: Optional[Tensor] = None, - ): - """Input is expected to be of size [bsz x seqlen].""" - assert (positions is None) or ( - self.padding_idx is None - ), "If positions is pre-computed then padding_idx should not be set." - - if positions is None: - if incremental_state is not None: - # positions is the same for every token when decoding a single step - # Without the int() cast, it doesn't work in some cases when exporting to ONNX - positions = torch.zeros( - (1, 1), device=input.device, dtype=input.dtype - ).fill_(int(self.padding_idx + input.size(1))) - else: - positions = utils.make_positions( - input, self.padding_idx, onnx_trace=self.onnx_trace - ) - return F.embedding( - positions, - self.weight, - self.padding_idx, - self.max_norm, - self.norm_type, - self.scale_grad_by_freq, - self.sparse, - ) diff --git a/spaces/gradio/musical_instrument_identification_main/data_setups.py b/spaces/gradio/musical_instrument_identification_main/data_setups.py deleted file mode 100644 index 2488ac6684d6f3ea3b1b2121f725f297bf8b1d3c..0000000000000000000000000000000000000000 --- a/spaces/gradio/musical_instrument_identification_main/data_setups.py +++ /dev/null @@ -1,80 +0,0 @@ -# Make function to find classes in target directory -import os -import librosa -import torch -import numpy as np -from torchaudio.transforms import Resample - -SAMPLE_RATE = 44100 -AUDIO_LEN = 2.90 - -# Parameters to control the MelSpec generation -N_MELS = 128 -F_MIN = 20 -F_MAX = 16000 -N_FFT = 1024 -HOP_LEN = 512 - -# Make function to find classes in target directory -def find_classes(directory: str): - # 1. Get the class names by scanning the target directory - classes = sorted(entry.name for entry in os.scandir(directory) if entry.is_dir()) - # 2. Raise an error if class names not found - if not classes: - raise FileNotFoundError(f"Couldn't find any classes in {directory}.") - # 3. 
Crearte a dictionary of index labels (computers prefer numerical rather than string labels) - class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)} - return classes, class_to_idx - -def resample(wav, sample_rate, new_sample_rate): - if wav.shape[0] >= 2: - wav = torch.mean(wav, dim=0) - else: - wav = wav.squeeze(0) - if sample_rate > new_sample_rate: - resampler = Resample(sample_rate, new_sample_rate) - wav = resampler(wav) - return wav - -def mono_to_color(X, eps=1e-6, mean=None, std=None): - X = np.stack([X, X, X], axis=-1) - # Standardize - mean = mean or X.mean() - std = std or X.std() - X = (X - mean) / (std + eps) - # Normalize to [0, 255] - _min, _max = X.min(), X.max() - if (_max - _min) > eps: - V = np.clip(X, _min, _max) - V = 255 * (V - _min) / (_max - _min) - V = V.astype(np.uint8) - else: - V = np.zeros_like(X, dtype=np.uint8) - return V - -def normalize(image, mean=None, std=None): - image = image / 255.0 - if mean is not None and std is not None: - image = (image - mean) / std - return np.moveaxis(image, 2, 0).astype(np.float32) - -def compute_melspec(wav, sample_rate=SAMPLE_RATE): - melspec = librosa.feature.melspectrogram( - y=wav, - sr=sample_rate, - n_fft=N_FFT, - fmin=F_MIN, - fmax=F_MAX, - n_mels=N_MELS, - hop_length=HOP_LEN - ) - melspec = librosa.power_to_db(melspec).astype(np.float32) - return melspec - -def audio_preprocess(wav, sample_rate): - wav = wav.numpy() - melspec = compute_melspec(wav, sample_rate) - image = mono_to_color(melspec) - image = normalize(image, mean=None, std=None) - image = torch.from_numpy(image) - return image \ No newline at end of file diff --git a/spaces/group2test/sd-space-creator/template/app_advanced.py b/spaces/group2test/sd-space-creator/template/app_advanced.py deleted file mode 100644 index e1f8466b8ed6cf76e6de2bdd7bd2e93a873da130..0000000000000000000000000000000000000000 --- a/spaces/group2test/sd-space-creator/template/app_advanced.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = '$model_id' -prefix = '$prefix' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - 
guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
        -
        -

        $title

        -
        -

        - $description
        - {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

        - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space

        - Duplicate Space -
        - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ($prefix)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
        -
        -

        This space was created using SD Space Creator.

        -
        - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/guetLzy/Real-ESRGAN-Demo/cog_predict.py b/spaces/guetLzy/Real-ESRGAN-Demo/cog_predict.py deleted file mode 100644 index fa0f89dfda8e3ff14afd7b3b8544f04d86e96562..0000000000000000000000000000000000000000 --- a/spaces/guetLzy/Real-ESRGAN-Demo/cog_predict.py +++ /dev/null @@ -1,148 +0,0 @@ -# flake8: noqa -# This file is used for deploying replicate models -# running: cog predict -i img=@inputs/00017_gray.png -i version='General - v3' -i scale=2 -i face_enhance=True -i tile=0 -# push: cog push r8.im/xinntao/realesrgan - -import os - -os.system('pip install gfpgan') -os.system('python setup.py develop') - -import cv2 -import shutil -import tempfile -import torch -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.archs.srvgg_arch import SRVGGNetCompact - -from realesrgan.utils import RealESRGANer - -try: - from cog import BasePredictor, Input, Path - from gfpgan import GFPGANer -except Exception: - print('please install cog and realesrgan package') - - -class Predictor(BasePredictor): - - def setup(self): - os.makedirs('output', exist_ok=True) - # download weights - if not os.path.exists('weights/realesr-general-x4v3.pth'): - os.system( - 'wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth -P ./weights' - ) - if not os.path.exists('weights/GFPGANv1.4.pth'): - os.system('wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P ./weights') - if not os.path.exists('weights/RealESRGAN_x4plus.pth'): - os.system( - 'wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P ./weights' - ) - if not os.path.exists('weights/RealESRGAN_x4plus_anime_6B.pth'): - os.system( - 'wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P ./weights' - ) - if not os.path.exists('weights/realesr-animevideov3.pth'): - os.system( - 'wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth -P ./weights' - ) - - def choose_model(self, scale, version, tile=0): - half = True if torch.cuda.is_available() else False - if version == 'General - RealESRGANplus': - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - model_path = 'weights/RealESRGAN_x4plus.pth' - self.upsampler = RealESRGANer( - scale=4, model_path=model_path, model=model, tile=tile, tile_pad=10, pre_pad=0, half=half) - elif version == 'General - v3': - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - model_path = 'weights/realesr-general-x4v3.pth' - self.upsampler = RealESRGANer( - scale=4, model_path=model_path, model=model, tile=tile, tile_pad=10, pre_pad=0, half=half) - elif version == 'Anime - anime6B': - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - model_path = 'weights/RealESRGAN_x4plus_anime_6B.pth' - self.upsampler = RealESRGANer( - scale=4, model_path=model_path, model=model, tile=tile, tile_pad=10, pre_pad=0, half=half) - elif version == 'AnimeVideo - v3': - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - model_path = 'weights/realesr-animevideov3.pth' - self.upsampler = RealESRGANer( - scale=4, model_path=model_path, model=model, tile=tile, tile_pad=10, pre_pad=0, half=half) - - self.face_enhancer = GFPGANer( - model_path='weights/GFPGANv1.4.pth', - 
upscale=scale, - arch='clean', - channel_multiplier=2, - bg_upsampler=self.upsampler) - - def predict( - self, - img: Path = Input(description='Input'), - version: str = Input( - description='RealESRGAN version. Please see [Readme] below for more descriptions', - choices=['General - RealESRGANplus', 'General - v3', 'Anime - anime6B', 'AnimeVideo - v3'], - default='General - v3'), - scale: float = Input(description='Rescaling factor', default=2), - face_enhance: bool = Input( - description='Enhance faces with GFPGAN. Note that it does not work for anime images/vidoes', default=False), - tile: int = Input( - description= - 'Tile size. Default is 0, that is no tile. When encountering the out-of-GPU-memory issue, please specify it, e.g., 400 or 200', - default=0) - ) -> Path: - if tile <= 100 or tile is None: - tile = 0 - print(f'img: {img}. version: {version}. scale: {scale}. face_enhance: {face_enhance}. tile: {tile}.') - try: - extension = os.path.splitext(os.path.basename(str(img)))[1] - img = cv2.imread(str(img), cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = 'RGBA' - elif len(img.shape) == 2: - img_mode = None - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - else: - img_mode = None - - h, w = img.shape[0:2] - if h < 300: - img = cv2.resize(img, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4) - - self.choose_model(scale, version, tile) - - try: - if face_enhance: - _, _, output = self.face_enhancer.enhance( - img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = self.upsampler.enhance(img, outscale=scale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set "tile" to a smaller size, e.g., 400.') - - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - # save_path = f'output/out.{extension}' - # cv2.imwrite(save_path, output) - out_path = Path(tempfile.mkdtemp()) / f'out.{extension}' - cv2.imwrite(str(out_path), output) - except Exception as error: - print('global exception: ', error) - finally: - clean_folder('output') - return out_path - - -def clean_folder(folder): - for filename in os.listdir(folder): - file_path = os.path.join(folder, filename) - try: - if os.path.isfile(file_path) or os.path.islink(file_path): - os.unlink(file_path) - elif os.path.isdir(file_path): - shutil.rmtree(file_path) - except Exception as e: - print(f'Failed to delete {file_path}. Reason: {e}') diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/texture.h b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/texture.h deleted file mode 100644 index f79b600fff0256cdadd38e265b49366549434ef8..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/nvdiffrast/common/texture.h +++ /dev/null @@ -1,78 +0,0 @@ -// Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#pragma once -#include "framework.h" - -//------------------------------------------------------------------------ -// Constants. 
- -#define TEX_DEBUG_MIP_RETAIN_VARIANCE 0 // For debugging -#define TEX_FWD_MAX_KERNEL_BLOCK_WIDTH 8 -#define TEX_FWD_MAX_KERNEL_BLOCK_HEIGHT 8 -#define TEX_FWD_MAX_MIP_KERNEL_BLOCK_WIDTH 8 -#define TEX_FWD_MAX_MIP_KERNEL_BLOCK_HEIGHT 8 -#define TEX_GRAD_MAX_KERNEL_BLOCK_WIDTH 8 -#define TEX_GRAD_MAX_KERNEL_BLOCK_HEIGHT 8 -#define TEX_GRAD_MAX_MIP_KERNEL_BLOCK_WIDTH 8 -#define TEX_GRAD_MAX_MIP_KERNEL_BLOCK_HEIGHT 8 -#define TEX_MAX_MIP_LEVEL 16 // Currently a texture cannot be larger than 2 GB because we use 32-bit indices everywhere. -#define TEX_MODE_NEAREST 0 // Nearest on base level. -#define TEX_MODE_LINEAR 1 // Bilinear on base level. -#define TEX_MODE_LINEAR_MIPMAP_NEAREST 2 // Bilinear on nearest mip level. -#define TEX_MODE_LINEAR_MIPMAP_LINEAR 3 // Trilinear. -#define TEX_MODE_COUNT 4 -#define TEX_BOUNDARY_MODE_CUBE 0 // Cube map mode. -#define TEX_BOUNDARY_MODE_WRAP 1 // Wrap (u, v). -#define TEX_BOUNDARY_MODE_CLAMP 2 // Clamp (u, v). -#define TEX_BOUNDARY_MODE_ZERO 3 // Pad with zeros. -#define TEX_BOUNDARY_MODE_COUNT 4 - -//------------------------------------------------------------------------ -// CUDA kernel params. - -struct TextureKernelParams -{ - const float* tex[TEX_MAX_MIP_LEVEL]; // Incoming texture buffer with mip levels. - const float* uv; // Incoming texcoord buffer. - const float* uvDA; // Incoming uv pixel diffs or NULL. - const float* mipLevelBias; // Incoming mip level bias or NULL. - const float* dy; // Incoming output gradient. - float* out; // Outgoing texture data. - float* gradTex[TEX_MAX_MIP_LEVEL]; // Outgoing texture gradients with mip levels. - float* gradUV; // Outgoing texcoord gradient. - float* gradUVDA; // Outgoing texcoord pixel differential gradient. - float* gradMipLevelBias; // Outgoing mip level bias gradient. - int enableMip; // If true, we have uv_da and/or mip_level_bias input(s), and a mip tensor. - int filterMode; // One of the TEX_MODE_ constants. - int boundaryMode; // One of the TEX_BOUNDARY_MODE_ contants. - int texConst; // If true, texture is known to be constant. - int mipLevelLimit; // Mip level limit coming from the op. - int channels; // Number of texture channels. - int imgWidth; // Image width. - int imgHeight; // Image height. - int texWidth; // Texture width. - int texHeight; // Texture height. - int texDepth; // Texture depth. - int n; // Minibatch size. - int mipLevelMax; // Maximum mip level index. Zero if mips disabled. - int mipLevelOut; // Mip level being calculated in builder kernel. -}; - -//------------------------------------------------------------------------ -// C++ helper function prototypes. - -void raiseMipSizeError(NVDR_CTX_ARGS, const TextureKernelParams& p); -int calculateMipInfo(NVDR_CTX_ARGS, TextureKernelParams& p, int* mipOffsets); - -//------------------------------------------------------------------------ -// Macros. - -#define mipLevelSize(p, i) make_int2(((p).texWidth >> (i)) > 1 ? ((p).texWidth >> (i)) : 1, ((p).texHeight >> (i)) > 1 ? 
((p).texHeight >> (i)) : 1) - -//------------------------------------------------------------------------ diff --git a/spaces/gyugnsu/DragGan-Inversion/PTI/editings/latent_editor.py b/spaces/gyugnsu/DragGan-Inversion/PTI/editings/latent_editor.py deleted file mode 100644 index 32554e8010c4da27aaded1b0ce938bd37d5e242b..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/PTI/editings/latent_editor.py +++ /dev/null @@ -1,23 +0,0 @@ -import torch - -from configs import paths_config -from editings import ganspace -from utils.data_utils import tensor2im - - -class LatentEditor(object): - - def apply_ganspace(self, latent, ganspace_pca, edit_directions): - edit_latents = ganspace.edit(latent, ganspace_pca, edit_directions) - return edit_latents - - def apply_interfacegan(self, latent, direction, factor=1, factor_range=None): - edit_latents = [] - if factor_range is not None: # Apply a range of editing factors. for example, (-5, 5) - for f in range(*factor_range): - edit_latent = latent + f * direction - edit_latents.append(edit_latent) - edit_latents = torch.cat(edit_latents) - else: - edit_latents = latent + factor * direction - return edit_latents diff --git a/spaces/h2oai/h2ogpt-chatbot/src/client_test.py b/spaces/h2oai/h2ogpt-chatbot/src/client_test.py deleted file mode 100644 index fd9477b56e3244feaab53194565abb570cb7f274..0000000000000000000000000000000000000000 --- a/spaces/h2oai/h2ogpt-chatbot/src/client_test.py +++ /dev/null @@ -1,484 +0,0 @@ -""" -Client test. - -Run server: - -python generate.py --base_model=h2oai/h2ogpt-oig-oasst1-512-6_9b - -NOTE: For private models, add --use-auth_token=True - -NOTE: --use_gpu_id=True (default) must be used for multi-GPU in case see failures with cuda:x cuda:y mismatches. -Currently, this will force model to be on a single GPU. 
- -Then run this client as: - -python src/client_test.py - - - -For HF spaces: - -HOST="https://h2oai-h2ogpt-chatbot.hf.space" python src/client_test.py - -Result: - -Loaded as API: https://h2oai-h2ogpt-chatbot.hf.space ✔ -{'instruction_nochat': 'Who are you?', 'iinput_nochat': '', 'response': 'I am h2oGPT, a large language model developed by LAION.', 'sources': ''} - - -For demo: - -HOST="https://gpt.h2o.ai" python src/client_test.py - -Result: - -Loaded as API: https://gpt.h2o.ai ✔ -{'instruction_nochat': 'Who are you?', 'iinput_nochat': '', 'response': 'I am h2oGPT, a chatbot created by LAION.', 'sources': ''} - -NOTE: Raw output from API for nochat case is a string of a python dict and will remain so if other entries are added to dict: - -{'response': "I'm h2oGPT, a large language model by H2O.ai, the visionary leader in democratizing AI.", 'sources': ''} - - -""" -import ast -import time -import os -import markdown # pip install markdown -import pytest -from bs4 import BeautifulSoup # pip install beautifulsoup4 - -try: - from enums import DocumentSubset, LangChainAction -except: - from src.enums import DocumentSubset, LangChainAction - -from tests.utils import get_inf_server - -debug = False - -os.environ['HF_HUB_DISABLE_TELEMETRY'] = '1' - - -def get_client(serialize=True): - from gradio_client import Client - - client = Client(get_inf_server(), serialize=serialize) - if debug: - print(client.view_api(all_endpoints=True)) - return client - - -def get_args(prompt, prompt_type=None, chat=False, stream_output=False, - max_new_tokens=50, - top_k_docs=3, - langchain_mode='Disabled', - add_chat_history_to_context=True, - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - prompt_dict=None, - version=None, - h2ogpt_key=None, - visible_models=None, - system_prompt='', # default of no system prompt tiggered by empty string - add_search_to_context=False, - chat_conversation=None, - text_context_list=None, - ): - from collections import OrderedDict - kwargs = OrderedDict(instruction=prompt if chat else '', # only for chat=True - iinput='', # only for chat=True - context='', - # streaming output is supported, loops over and outputs each generation in streaming mode - # but leave stream_output=False for simple input/output mode - stream_output=stream_output, - prompt_type=prompt_type, - prompt_dict=prompt_dict, - temperature=0.1, - top_p=0.75, - top_k=40, - num_beams=1, - max_new_tokens=max_new_tokens, - min_new_tokens=0, - early_stopping=False, - max_time=20, - repetition_penalty=1.0, - num_return_sequences=1, - do_sample=True, - chat=chat, - instruction_nochat=prompt if not chat else '', - iinput_nochat='', # only for chat=False - langchain_mode=langchain_mode, - add_chat_history_to_context=add_chat_history_to_context, - langchain_action=langchain_action, - langchain_agents=langchain_agents, - top_k_docs=top_k_docs, - chunk=True, - chunk_size=512, - document_subset=DocumentSubset.Relevant.name, - document_choice=[], - pre_prompt_query=None, - prompt_query=None, - pre_prompt_summary=None, - prompt_summary=None, - system_prompt=system_prompt, - image_loaders=None, - pdf_loaders=None, - url_loaders=None, - jq_schema=None, - visible_models=visible_models, - h2ogpt_key=h2ogpt_key, - add_search_to_context=add_search_to_context, - chat_conversation=chat_conversation, - text_context_list=text_context_list, - docs_ordering_type=None, - min_max_new_tokens=None, - ) - diff = 0 - if version is None: - # latest - version = 1 - if version == 0: - diff = 1 - if version >= 1: - 
kwargs.update(dict(system_prompt=system_prompt)) - diff = 0 - - from evaluate_params import eval_func_param_names - assert len(set(eval_func_param_names).difference(set(list(kwargs.keys())))) == diff - if chat: - # add chatbot output on end. Assumes serialize=False - kwargs.update(dict(chatbot=[])) - - return kwargs, list(kwargs.values()) - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_basic(prompt_type='human_bot', version=None, visible_models=None, prompt='Who are you?', - h2ogpt_key=None): - return run_client_nochat(prompt=prompt, prompt_type=prompt_type, max_new_tokens=50, version=version, - visible_models=visible_models, h2ogpt_key=h2ogpt_key) - - -""" -time HOST=https://gpt-internal.h2o.ai PYTHONPATH=. pytest -n 20 src/client_test.py::test_client_basic_benchmark -32 seconds to answer 20 questions at once with 70B llama2 on 4x A100 80GB using TGI 0.9.3 -""" - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -@pytest.mark.parametrize("id", range(20)) -def test_client_basic_benchmark(id, prompt_type='human_bot', version=None): - return run_client_nochat(prompt=""" -/nfs4/llm/h2ogpt/h2ogpt/bin/python /home/arno/pycharm-2022.2.2/plugins/python/helpers/pycharm/_jb_pytest_runner.py --target src/client_test.py::test_client_basic -Testing started at 8:41 AM ... -Launching pytest with arguments src/client_test.py::test_client_basic --no-header --no-summary -q in /nfs4/llm/h2ogpt - -============================= test session starts ============================== -collecting ... -src/client_test.py:None (src/client_test.py) -ImportError while importing test module '/nfs4/llm/h2ogpt/src/client_test.py'. -Hint: make sure your test modules/packages have valid Python names. -Traceback: -h2ogpt/lib/python3.10/site-packages/_pytest/python.py:618: in _importtestmodule - mod = import_path(self.path, mode=importmode, root=self.config.rootpath) -h2ogpt/lib/python3.10/site-packages/_pytest/pathlib.py:533: in import_path - importlib.import_module(module_name) -/usr/lib/python3.10/importlib/__init__.py:126: in import_module - return _bootstrap._gcd_import(name[level:], package, level) -:1050: in _gcd_import - ??? -:1027: in _find_and_load - ??? -:1006: in _find_and_load_unlocked - ??? -:688: in _load_unlocked - ??? -h2ogpt/lib/python3.10/site-packages/_pytest/assertion/rewrite.py:168: in exec_module - exec(co, module.__dict__) -src/client_test.py:51: in - from enums import DocumentSubset, LangChainAction -E ModuleNotFoundError: No module named 'enums' - - -collected 0 items / 1 error - -=============================== 1 error in 0.14s =============================== -ERROR: not found: /nfs4/llm/h2ogpt/src/client_test.py::test_client_basic -(no name '/nfs4/llm/h2ogpt/src/client_test.py::test_client_basic' in any of []) - - -Process finished with exit code 4 - -What happened? 
-""", prompt_type=prompt_type, max_new_tokens=100, version=version) - - -def run_client_nochat(prompt, prompt_type, max_new_tokens, version=None, h2ogpt_key=None, visible_models=None): - kwargs, args = get_args(prompt, prompt_type, chat=False, max_new_tokens=max_new_tokens, version=version, - visible_models=visible_models, h2ogpt_key=h2ogpt_key) - - api_name = '/submit_nochat' - client = get_client(serialize=True) - res = client.predict( - *tuple(args), - api_name=api_name, - ) - print("Raw client result: %s" % res, flush=True) - res_dict = dict(prompt=kwargs['instruction_nochat'], iinput=kwargs['iinput_nochat'], - response=md_to_text(res)) - print(res_dict) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_basic_api(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_nochat_api(prompt='Who are you?', prompt_type=prompt_type, max_new_tokens=50, version=version, - h2ogpt_key=h2ogpt_key) - - -def run_client_nochat_api(prompt, prompt_type, max_new_tokens, version=None, h2ogpt_key=None): - kwargs, args = get_args(prompt, prompt_type, chat=False, max_new_tokens=max_new_tokens, version=version, - h2ogpt_key=h2ogpt_key) - - api_name = '/submit_nochat_api' # NOTE: like submit_nochat but stable API for string dict passing - client = get_client(serialize=True) - res = client.predict( - str(dict(kwargs)), - api_name=api_name, - ) - print("Raw client result: %s" % res, flush=True) - res_dict = dict(prompt=kwargs['instruction_nochat'], iinput=kwargs['iinput_nochat'], - response=md_to_text(ast.literal_eval(res)['response']), - sources=ast.literal_eval(res)['sources']) - print(res_dict) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_basic_api_lean(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_nochat_api_lean(prompt='Who are you?', prompt_type=prompt_type, max_new_tokens=50, - version=version, h2ogpt_key=h2ogpt_key) - - -def run_client_nochat_api_lean(prompt, prompt_type, max_new_tokens, version=None, h2ogpt_key=None): - kwargs = dict(instruction_nochat=prompt, h2ogpt_key=h2ogpt_key) - - api_name = '/submit_nochat_api' # NOTE: like submit_nochat but stable API for string dict passing - client = get_client(serialize=True) - res = client.predict( - str(dict(kwargs)), - api_name=api_name, - ) - print("Raw client result: %s" % res, flush=True) - res_dict = dict(prompt=kwargs['instruction_nochat'], - response=md_to_text(ast.literal_eval(res)['response']), - sources=ast.literal_eval(res)['sources'], - h2ogpt_key=h2ogpt_key) - print(res_dict) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_basic_api_lean_morestuff(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_nochat_api_lean_morestuff(prompt='Who are you?', prompt_type=prompt_type, max_new_tokens=50, - version=version, h2ogpt_key=h2ogpt_key) - - -def run_client_nochat_api_lean_morestuff(prompt, prompt_type='human_bot', max_new_tokens=512, version=None, - h2ogpt_key=None): - kwargs = dict( - instruction='', - iinput='', - context='', - stream_output=False, - prompt_type=prompt_type, - temperature=0.1, - top_p=0.75, - top_k=40, - num_beams=1, - max_new_tokens=1024, - min_new_tokens=0, - early_stopping=False, - max_time=20, - repetition_penalty=1.0, - num_return_sequences=1, - do_sample=True, - chat=False, - instruction_nochat=prompt, - 
iinput_nochat='', - langchain_mode='Disabled', - add_chat_history_to_context=True, - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - top_k_docs=4, - document_subset=DocumentSubset.Relevant.name, - document_choice=[], - h2ogpt_key=h2ogpt_key, - add_search_to_context=False, - ) - - api_name = '/submit_nochat_api' # NOTE: like submit_nochat but stable API for string dict passing - client = get_client(serialize=True) - res = client.predict( - str(dict(kwargs)), - api_name=api_name, - ) - print("Raw client result: %s" % res, flush=True) - res_dict = dict(prompt=kwargs['instruction_nochat'], - response=md_to_text(ast.literal_eval(res)['response']), - sources=ast.literal_eval(res)['sources'], - h2ogpt_key=h2ogpt_key) - print(res_dict) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_chat(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_chat(prompt='Who are you?', prompt_type=prompt_type, stream_output=False, max_new_tokens=50, - langchain_mode='Disabled', - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - version=version, - h2ogpt_key=h2ogpt_key) - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_chat_stream(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_chat(prompt="Tell a very long kid's story about birds.", prompt_type=prompt_type, - stream_output=True, max_new_tokens=512, - langchain_mode='Disabled', - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - version=version, - h2ogpt_key=h2ogpt_key) - - -def run_client_chat(prompt='', - stream_output=None, - max_new_tokens=128, - langchain_mode='Disabled', - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - prompt_type=None, prompt_dict=None, - version=None, - h2ogpt_key=None): - client = get_client(serialize=False) - - kwargs, args = get_args(prompt, prompt_type, chat=True, stream_output=stream_output, - max_new_tokens=max_new_tokens, - langchain_mode=langchain_mode, - langchain_action=langchain_action, - langchain_agents=langchain_agents, - prompt_dict=prompt_dict, - version=version, - h2ogpt_key=h2ogpt_key) - return run_client(client, prompt, args, kwargs) - - -def run_client(client, prompt, args, kwargs, do_md_to_text=True, verbose=False): - assert kwargs['chat'], "Chat mode only" - res = client.predict(*tuple(args), api_name='/instruction') - args[-1] += [res[-1]] - - res_dict = kwargs - res_dict['prompt'] = prompt - if not kwargs['stream_output']: - res = client.predict(*tuple(args), api_name='/instruction_bot') - res_dict['response'] = res[0][-1][1] - print(md_to_text(res_dict['response'], do_md_to_text=do_md_to_text)) - return res_dict, client - else: - job = client.submit(*tuple(args), api_name='/instruction_bot') - res1 = '' - while not job.done(): - outputs_list = job.communicator.job.outputs - if outputs_list: - res = job.communicator.job.outputs[-1] - res1 = res[0][-1][-1] - res1 = md_to_text(res1, do_md_to_text=do_md_to_text) - print(res1) - time.sleep(0.1) - full_outputs = job.outputs() - if verbose: - print('job.outputs: %s' % str(full_outputs)) - # ensure get ending to avoid race - # -1 means last response if streaming - # 0 means get text_output, ignore exception_text - # 0 means get list within text_output that looks like [[prompt], [answer]] - # 1 means get bot answer, so will have last bot answer - res_dict['response'] = 
md_to_text(full_outputs[-1][0][0][1], do_md_to_text=do_md_to_text) - return res_dict, client - - -@pytest.mark.skip(reason="For manual use against some server, no server launched") -def test_client_nochat_stream(prompt_type='human_bot', version=None, h2ogpt_key=None): - return run_client_nochat_gen(prompt="Tell a very long kid's story about birds.", prompt_type=prompt_type, - stream_output=True, max_new_tokens=512, - langchain_mode='Disabled', - langchain_action=LangChainAction.QUERY.value, - langchain_agents=[], - version=version, - h2ogpt_key=h2ogpt_key) - - -def run_client_nochat_gen(prompt, prompt_type, stream_output, max_new_tokens, - langchain_mode, langchain_action, langchain_agents, version=None, - h2ogpt_key=None): - client = get_client(serialize=False) - - kwargs, args = get_args(prompt, prompt_type, chat=False, stream_output=stream_output, - max_new_tokens=max_new_tokens, langchain_mode=langchain_mode, - langchain_action=langchain_action, langchain_agents=langchain_agents, - version=version, h2ogpt_key=h2ogpt_key) - return run_client_gen(client, prompt, args, kwargs) - - -def run_client_gen(client, prompt, args, kwargs, do_md_to_text=True, verbose=False): - res_dict = kwargs - res_dict['prompt'] = prompt - if not kwargs['stream_output']: - res = client.predict(str(dict(kwargs)), api_name='/submit_nochat_api') - res_dict.update(ast.literal_eval(res)) - print(md_to_text(res_dict['response'], do_md_to_text=do_md_to_text)) - return res_dict, client - else: - job = client.submit(str(dict(kwargs)), api_name='/submit_nochat_api') - while not job.done(): - outputs_list = job.communicator.job.outputs - if outputs_list: - res = job.communicator.job.outputs[-1] - res_dict = ast.literal_eval(res) - print('Stream: %s' % res_dict['response']) - time.sleep(0.1) - res_list = job.outputs() - assert len(res_list) > 0, "No response, check server" - res = res_list[-1] - res_dict = ast.literal_eval(res) - print('Final: %s' % res_dict['response']) - return res_dict, client - - -def md_to_text(md, do_md_to_text=True): - if not do_md_to_text: - return md - assert md is not None, "Markdown is None" - html = markdown.markdown(md) - soup = BeautifulSoup(html, features='html.parser') - return soup.get_text() - - -def run_client_many(prompt_type='human_bot', version=None, h2ogpt_key=None): - kwargs = dict(prompt_type=prompt_type, version=version, h2ogpt_key=h2ogpt_key) - ret1, _ = test_client_chat(**kwargs) - ret2, _ = test_client_chat_stream(**kwargs) - ret3, _ = test_client_nochat_stream(**kwargs) - ret4, _ = test_client_basic(**kwargs) - ret5, _ = test_client_basic_api(**kwargs) - ret6, _ = test_client_basic_api_lean(**kwargs) - ret7, _ = test_client_basic_api_lean_morestuff(**kwargs) - return ret1, ret2, ret3, ret4, ret5, ret6, ret7 - - -if __name__ == '__main__': - run_client_many() diff --git a/spaces/hackathon-somos-nlp-2023/suicide-comments-es/README.md b/spaces/hackathon-somos-nlp-2023/suicide-comments-es/README.md deleted file mode 100644 index ed8b75367b9eb9c8a21f71bfb1a1d3950260e0a6..0000000000000000000000000000000000000000 --- a/spaces/hackathon-somos-nlp-2023/suicide-comments-es/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Suicide Comments Es -emoji: ❤️ 🩺 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/hamacojr/CAT-Seg/cat_seg/__init__.py 
b/spaces/hamacojr/CAT-Seg/cat_seg/__init__.py deleted file mode 100644 index 4e095a29ff5b655d58af6ac7ef920d4089f465f6..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/cat_seg/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . import data # register all new datasets -from . import modeling - -# config -from .config import add_cat_seg_config - -# dataset loading -from .data.dataset_mappers.detr_panoptic_dataset_mapper import DETRPanopticDatasetMapper -from .data.dataset_mappers.mask_former_panoptic_dataset_mapper import ( - MaskFormerPanopticDatasetMapper, -) -from .data.dataset_mappers.mask_former_semantic_dataset_mapper import ( - MaskFormerSemanticDatasetMapper, -) - -# models -from .cat_seg_model import CATSeg -from .test_time_augmentation import SemanticSegmentorWithTTA \ No newline at end of file diff --git a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/modeling/transformer/__init__.py b/spaces/hamacojr/SAM-CAT-Seg/cat_seg/modeling/transformer/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/modeling/transformer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/data.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/data.py deleted file mode 100644 index 863528a12879f85f2bba0e41d3a68da4e16a90fb..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/data.py +++ /dev/null @@ -1,514 +0,0 @@ -import ast -import json -import logging -import math -import os -import random -import sys -import time -from dataclasses import dataclass -from multiprocessing import Value - -import numpy as np -import pandas as pd -import torch -import torchvision.datasets as datasets -import webdataset as wds -from PIL import Image -from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler, IterableDataset, get_worker_info -from torch.utils.data.distributed import DistributedSampler -from webdataset.filters import _shuffle -from webdataset.tariterators import base_plus_ext, url_opener, tar_file_expander, valid_sample - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - - -class CsvDataset(Dataset): - def __init__(self, input_filename, transforms, img_key, caption_key, sep="\t", tokenizer=None): - logging.debug(f'Loading csv data from {input_filename}.') - df = pd.read_csv(input_filename, sep=sep) - - self.images = df[img_key].tolist() - self.captions = df[caption_key].tolist() - self.transforms = transforms - logging.debug('Done loading data.') - - self.tokenize = tokenizer - - def __len__(self): - return len(self.captions) - - def __getitem__(self, idx): - images = self.transforms(Image.open(str(self.images[idx]))) - texts = self.tokenize([str(self.captions[idx])])[0] - return images, texts - - -class SharedEpoch: - def __init__(self, epoch: int = 0): - self.shared_epoch = Value('i', epoch) - - def set_value(self, epoch): - self.shared_epoch.value = epoch - - def get_value(self): - return self.shared_epoch.value - - -@dataclass -class DataInfo: - dataloader: DataLoader - sampler: DistributedSampler = None - shared_epoch: SharedEpoch = None - - def set_epoch(self, epoch): - if self.shared_epoch is not None: - self.shared_epoch.set_value(epoch) - if self.sampler is not None and isinstance(self.sampler, DistributedSampler): - 
self.sampler.set_epoch(epoch) - - -def get_dataset_size(shards): - shards_list = wds.shardlists.expand_urls(shards) - dir_path = os.path.dirname(shards_list[0]) - sizes_filename = os.path.join(dir_path, 'sizes.json') - len_filename = os.path.join(dir_path, '__len__') - if os.path.exists(sizes_filename): - sizes = json.load(open(sizes_filename, 'r')) - total_size = sum([int(sizes[os.path.basename(shard)]) for shard in shards_list]) - elif os.path.exists(len_filename): - # FIXME this used to be eval(open(...)) but that seemed rather unsafe - total_size = ast.literal_eval(open(len_filename, 'r').read()) - else: - total_size = None # num samples undefined - # some common dataset sizes (at time of authors last download) - # CC3M (train): 2905954 - # CC12M: 10968539 - # LAION-400M: 407332084 - # LAION-2B (english): 2170337258 - num_shards = len(shards_list) - return total_size, num_shards - - -def get_imagenet(args, preprocess_fns, split): - assert split in ["train", "val", "v2"] - is_train = split == "train" - preprocess_train, preprocess_val = preprocess_fns - - if split == "v2": - from imagenetv2_pytorch import ImageNetV2Dataset - dataset = ImageNetV2Dataset(location=args.imagenet_v2, transform=preprocess_val) - else: - if is_train: - data_path = args.imagenet_train - preprocess_fn = preprocess_train - else: - data_path = args.imagenet_val - preprocess_fn = preprocess_val - assert data_path - - dataset = datasets.ImageFolder(data_path, transform=preprocess_fn) - - if is_train: - idxs = np.zeros(len(dataset.targets)) - target_array = np.array(dataset.targets) - k = 50 - for c in range(1000): - m = target_array == c - n = len(idxs[m]) - arr = np.zeros(n) - arr[:k] = 1 - np.random.shuffle(arr) - idxs[m] = arr - - idxs = idxs.astype('int') - sampler = SubsetRandomSampler(np.where(idxs)[0]) - else: - sampler = None - - dataloader = torch.utils.data.DataLoader( - dataset, - batch_size=args.batch_size, - num_workers=args.workers, - sampler=sampler, - ) - - return DataInfo(dataloader=dataloader, sampler=sampler) - - -def count_samples(dataloader): - os.environ["WDS_EPOCH"] = "0" - n_elements, n_batches = 0, 0 - for images, texts in dataloader: - n_batches += 1 - n_elements += len(images) - assert len(images) == len(texts) - return n_elements, n_batches - - -def filter_no_caption_or_no_image(sample): - has_caption = ('txt' in sample) - has_image = ('png' in sample or 'jpg' in sample or 'jpeg' in sample or 'webp' in sample) - return has_caption and has_image - - -def log_and_continue(exn): - """Call in an exception handler to ignore any exception, issue a warning, and continue.""" - logging.warning(f'Handling webdataset error ({repr(exn)}). Ignoring.') - return True - - -def group_by_keys_nothrow(data, keys=base_plus_ext, lcase=True, suffixes=None, handler=None): - """Return function over iterator that groups key, value pairs into samples. 
- - :param keys: function that splits the key into key and extension (base_plus_ext) - :param lcase: convert suffixes to lower case (Default value = True) - """ - current_sample = None - for filesample in data: - assert isinstance(filesample, dict) - fname, value = filesample["fname"], filesample["data"] - prefix, suffix = keys(fname) - if prefix is None: - continue - if lcase: - suffix = suffix.lower() - # FIXME webdataset version throws if suffix in current_sample, but we have a potential for - # this happening in the current LAION400m dataset if a tar ends with same prefix as the next - # begins, rare, but can happen since prefix aren't unique across tar files in that dataset - if current_sample is None or prefix != current_sample["__key__"] or suffix in current_sample: - if valid_sample(current_sample): - yield current_sample - current_sample = dict(__key__=prefix, __url__=filesample["__url__"]) - if suffixes is None or suffix in suffixes: - current_sample[suffix] = value - if valid_sample(current_sample): - yield current_sample - - -def tarfile_to_samples_nothrow(src, handler=log_and_continue): - # NOTE this is a re-impl of the webdataset impl with group_by_keys that doesn't throw - streams = url_opener(src, handler=handler) - files = tar_file_expander(streams, handler=handler) - samples = group_by_keys_nothrow(files, handler=handler) - return samples - - -def pytorch_worker_seed(increment=0): - """get dataloader worker seed from pytorch""" - worker_info = get_worker_info() - if worker_info is not None: - # favour using the seed already created for pytorch dataloader workers if it exists - seed = worker_info.seed - if increment: - # space out seed increments so they can't overlap across workers in different iterations - seed += increment * max(1, worker_info.num_workers) - return seed - # fallback to wds rank based seed - return wds.utils.pytorch_worker_seed() - - -_SHARD_SHUFFLE_SIZE = 2000 -_SHARD_SHUFFLE_INITIAL = 500 -_SAMPLE_SHUFFLE_SIZE = 5000 -_SAMPLE_SHUFFLE_INITIAL = 1000 - - -class detshuffle2(wds.PipelineStage): - def __init__( - self, - bufsize=1000, - initial=100, - seed=0, - epoch=-1, - ): - self.bufsize = bufsize - self.initial = initial - self.seed = seed - self.epoch = epoch - - def run(self, src): - if isinstance(self.epoch, SharedEpoch): - epoch = self.epoch.get_value() - else: - # NOTE: this is epoch tracking is problematic in a multiprocess (dataloader workers or train) - # situation as different workers may wrap at different times (or not at all). - self.epoch += 1 - epoch = self.epoch - rng = random.Random() - if self.seed < 0: - # If seed is negative, we use the worker's seed, this will be different across all nodes/workers - seed = pytorch_worker_seed(epoch) - else: - # This seed to be deterministic AND the same across all nodes/workers in each epoch - seed = self.seed + epoch - rng.seed(seed) - return _shuffle(src, self.bufsize, self.initial, rng) - - -class ResampledShards2(IterableDataset): - """An iterable dataset yielding a list of urls.""" - - def __init__( - self, - urls, - nshards=sys.maxsize, - worker_seed=None, - deterministic=False, - epoch=-1, - ): - """Sample shards from the shard list with replacement. 
- - :param urls: a list of URLs as a Python list or brace notation string - """ - super().__init__() - urls = wds.shardlists.expand_urls(urls) - self.urls = urls - assert isinstance(self.urls[0], str) - self.nshards = nshards - self.rng = random.Random() - self.worker_seed = worker_seed - self.deterministic = deterministic - self.epoch = epoch - - def __iter__(self): - """Return an iterator over the shards.""" - if isinstance(self.epoch, SharedEpoch): - epoch = self.epoch.get_value() - else: - # NOTE: this is epoch tracking is problematic in a multiprocess (dataloader workers or train) - # situation as different workers may wrap at different times (or not at all). - self.epoch += 1 - epoch = self.epoch - if self.deterministic: - # reset seed w/ epoch if deterministic - if self.worker_seed is None: - # pytorch worker seed should be deterministic due to being init by arg.seed + rank + worker id - seed = pytorch_worker_seed(epoch) - else: - seed = self.worker_seed() + epoch - self.rng.seed(seed) - for _ in range(self.nshards): - yield dict(url=self.rng.choice(self.urls)) - - -def get_wds_dataset(args, preprocess_img, is_train, epoch=0, floor=False, tokenizer=None): - input_shards = args.train_data if is_train else args.val_data - assert input_shards is not None - resampled = getattr(args, 'dataset_resampled', False) and is_train - - num_samples, num_shards = get_dataset_size(input_shards) - if not num_samples: - if is_train: - num_samples = args.train_num_samples - if not num_samples: - raise RuntimeError( - 'Currently, number of dataset samples must be specified for training dataset. ' - 'Please specify via `--train-num-samples` if no dataset length info present.') - else: - num_samples = args.val_num_samples or 0 # eval will just exhaust the iterator if not specified - - shared_epoch = SharedEpoch(epoch=epoch) # create a shared epoch store to sync epoch to dataloader worker proc - - if resampled: - pipeline = [ResampledShards2(input_shards, deterministic=True, epoch=shared_epoch)] - else: - pipeline = [wds.SimpleShardList(input_shards)] - - # at this point we have an iterator over all the shards - if is_train: - if not resampled: - pipeline.extend([ - detshuffle2( - bufsize=_SHARD_SHUFFLE_SIZE, - initial=_SHARD_SHUFFLE_INITIAL, - seed=args.seed, - epoch=shared_epoch, - ), - wds.split_by_node, - wds.split_by_worker, - ]) - pipeline.extend([ - # at this point, we have an iterator over the shards assigned to each worker at each node - tarfile_to_samples_nothrow, # wds.tarfile_to_samples(handler=log_and_continue), - wds.shuffle( - bufsize=_SAMPLE_SHUFFLE_SIZE, - initial=_SAMPLE_SHUFFLE_INITIAL, - ), - ]) - else: - pipeline.extend([ - wds.split_by_worker, - # at this point, we have an iterator over the shards assigned to each worker - wds.tarfile_to_samples(handler=log_and_continue), - ]) - pipeline.extend([ - wds.select(filter_no_caption_or_no_image), - wds.decode("pilrgb", handler=log_and_continue), - wds.rename(image="jpg;png;jpeg;webp", text="txt"), - wds.map_dict(image=preprocess_img, text=lambda text: tokenizer(text)[0]), - wds.to_tuple("image", "text"), - wds.batched(args.batch_size, partial=not is_train), - ]) - - dataset = wds.DataPipeline(*pipeline) - if is_train: - if not resampled: - assert num_shards >= args.workers * args.world_size, 'number of shards must be >= total workers' - # roll over and repeat a few samples to get same number of full batches on each node - round_fn = math.floor if floor else math.ceil - global_batch_size = args.batch_size * args.world_size - num_batches = 
round_fn(num_samples / global_batch_size) - num_workers = max(1, args.workers) - num_worker_batches = round_fn(num_batches / num_workers) # per dataloader worker - num_batches = num_worker_batches * num_workers - num_samples = num_batches * global_batch_size - dataset = dataset.with_epoch(num_worker_batches) # each worker is iterating over this - else: - # last batches are partial, eval is done on single (master) node - num_batches = math.ceil(num_samples / args.batch_size) - - dataloader = wds.WebLoader( - dataset, - batch_size=None, - shuffle=False, - num_workers=args.workers, - persistent_workers=True, - ) - - # FIXME not clear which approach is better, with_epoch before vs after dataloader? - # hoping to resolve via https://github.com/webdataset/webdataset/issues/169 - # if is_train: - # # roll over and repeat a few samples to get same number of full batches on each node - # global_batch_size = args.batch_size * args.world_size - # num_batches = math.ceil(num_samples / global_batch_size) - # num_workers = max(1, args.workers) - # num_batches = math.ceil(num_batches / num_workers) * num_workers - # num_samples = num_batches * global_batch_size - # dataloader = dataloader.with_epoch(num_batches) - # else: - # # last batches are partial, eval is done on single (master) node - # num_batches = math.ceil(num_samples / args.batch_size) - - # add meta-data to dataloader instance for convenience - dataloader.num_batches = num_batches - dataloader.num_samples = num_samples - - return DataInfo(dataloader=dataloader, shared_epoch=shared_epoch) - - -def get_csv_dataset(args, preprocess_fn, is_train, epoch=0, tokenizer=None): - input_filename = args.train_data if is_train else args.val_data - assert input_filename - dataset = CsvDataset( - input_filename, - preprocess_fn, - img_key=args.csv_img_key, - caption_key=args.csv_caption_key, - sep=args.csv_separator, - tokenizer=tokenizer - ) - num_samples = len(dataset) - sampler = DistributedSampler(dataset) if args.distributed and is_train else None - shuffle = is_train and sampler is None - - dataloader = DataLoader( - dataset, - batch_size=args.batch_size, - shuffle=shuffle, - num_workers=args.workers, - pin_memory=True, - sampler=sampler, - drop_last=is_train, - ) - dataloader.num_samples = num_samples - dataloader.num_batches = len(dataloader) - - return DataInfo(dataloader, sampler) - - -class SyntheticDataset(Dataset): - - def __init__(self, transform=None, image_size=(224, 224), caption="Dummy caption", dataset_size=100, tokenizer=None): - self.transform = transform - self.image_size = image_size - self.caption = caption - self.image = Image.new('RGB', image_size) - self.dataset_size = dataset_size - - self.preprocess_txt = lambda text: tokenizer(text)[0] - - def __len__(self): - return self.dataset_size - - def __getitem__(self, idx): - if self.transform is not None: - image = self.transform(self.image) - return image, self.preprocess_txt(self.caption) - - -def get_synthetic_dataset(args, preprocess_fn, is_train, epoch=0, tokenizer=None): - image_size = preprocess_fn.transforms[0].size - dataset = SyntheticDataset( - transform=preprocess_fn, image_size=image_size, dataset_size=args.train_num_samples, tokenizer=tokenizer) - num_samples = len(dataset) - sampler = DistributedSampler(dataset) if args.distributed and is_train else None - shuffle = is_train and sampler is None - - dataloader = DataLoader( - dataset, - batch_size=args.batch_size, - shuffle=shuffle, - num_workers=args.workers, - pin_memory=True, - sampler=sampler, - drop_last=is_train, 
- ) - dataloader.num_samples = num_samples - dataloader.num_batches = len(dataloader) - - return DataInfo(dataloader, sampler) - - -def get_dataset_fn(data_path, dataset_type): - if dataset_type == "webdataset": - return get_wds_dataset - elif dataset_type == "csv": - return get_csv_dataset - elif dataset_type == "synthetic": - return get_synthetic_dataset - elif dataset_type == "auto": - ext = data_path.split('.')[-1] - if ext in ['csv', 'tsv']: - return get_csv_dataset - elif ext in ['tar']: - return get_wds_dataset - else: - raise ValueError( - f"Tried to figure out dataset type, but failed for extension {ext}.") - else: - raise ValueError(f"Unsupported dataset type: {dataset_type}") - - -def get_data(args, preprocess_fns, epoch=0, tokenizer=None): - preprocess_train, preprocess_val = preprocess_fns - data = {} - - if args.train_data or args.dataset_type == "synthetic": - data["train"] = get_dataset_fn(args.train_data, args.dataset_type)( - args, preprocess_train, is_train=True, epoch=epoch, tokenizer=tokenizer) - - if args.val_data: - data["val"] = get_dataset_fn(args.val_data, args.dataset_type)( - args, preprocess_val, is_train=False, tokenizer=tokenizer) - - if args.imagenet_val is not None: - data["imagenet-val"] = get_imagenet(args, preprocess_fns, "val") - - if args.imagenet_v2 is not None: - data["imagenet-v2"] = get_imagenet(args, preprocess_fns, "v2") - - return data diff --git a/spaces/hands012/gpt-academic/docs/self_analysis.md b/spaces/hands012/gpt-academic/docs/self_analysis.md deleted file mode 100644 index ebc2337194974bf210794df7d858889010fecf08..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/docs/self_analysis.md +++ /dev/null @@ -1,378 +0,0 @@ -# chatgpt-academic项目自译解报告 -(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄) - - -| 文件名 | 功能描述 | -| ------ | ------ | -| check_proxy.py | 检查代理有效性及地理位置 | -| colorful.py | 控制台打印彩色文字 | -| config.py | 配置和参数设置 | -| config_private.py | 私人配置和参数设置 | -| core_functional.py | 核心函数和参数设置 | -| crazy_functional.py | 高级功能插件集合 | -| main.py | 一个 Chatbot 程序,提供各种学术翻译、文本处理和其他查询服务 | -| multi_language.py | 识别和翻译不同语言 | -| theme.py | 自定义 gradio 应用程序主题 | -| toolbox.py | 工具类库,用于协助实现各种功能 | -| crazy_functions\crazy_functions_test.py | 测试 crazy_functions 中的各种函数 | -| crazy_functions\crazy_utils.py | 工具函数,用于字符串处理、异常检测、Markdown 格式转换等 | -| crazy_functions\Latex全文润色.py | 对整个 Latex 项目进行润色和纠错 | -| crazy_functions\Latex全文翻译.py | 对整个 Latex 项目进行翻译 | -| crazy_functions\\_\_init\_\_.py | 模块初始化文件,标识 `crazy_functions` 是一个包 | -| crazy_functions\下载arxiv论文翻译摘要.py | 下载 `arxiv` 论文的 PDF 文件,并提取摘要和翻译 | -| crazy_functions\代码重写为全英文_多线程.py | 将Python源代码文件中的中文内容转化为英文 | -| crazy_functions\图片生成.py | 根据激励文本使用GPT模型生成相应的图像 | -| crazy_functions\对话历史存档.py | 将每次对话记录写入Markdown格式的文件中 | -| crazy_functions\总结word文档.py | 对输入的word文档进行摘要生成 | -| crazy_functions\总结音视频.py | 对输入的音视频文件进行摘要生成 | -| crazy_functions\批量Markdown翻译.py | 将指定目录下的Markdown文件进行中英文翻译 | -| crazy_functions\批量总结PDF文档.py | 对PDF文件进行切割和摘要生成 | -| crazy_functions\批量总结PDF文档pdfminer.py | 对PDF文件进行文本内容的提取和摘要生成 | -| crazy_functions\批量翻译PDF文档_多线程.py | 将指定目录下的PDF文件进行中英文翻译 | -| crazy_functions\理解PDF文档内容.py | 对PDF文件进行摘要生成和问题解答 | -| crazy_functions\生成函数注释.py | 自动生成Python函数的注释 | -| crazy_functions\联网的ChatGPT.py | 使用网络爬虫和ChatGPT模型进行聊天回答 | -| crazy_functions\解析JupyterNotebook.py | 对Jupyter Notebook进行代码解析 | -| crazy_functions\解析项目源代码.py | 对指定编程语言的源代码进行解析 | -| crazy_functions\询问多个大语言模型.py | 使用多个大语言模型对输入进行处理和回复 | -| crazy_functions\读文章写摘要.py | 对论文进行解析和全文摘要生成 | -| crazy_functions\谷歌检索小助手.py | 提供谷歌学术搜索页面中相关文章的元数据信息。 | -| 
crazy_functions\高级功能函数模板.py | 使用Unsplash API发送相关图片以回复用户的输入。 | -| request_llm\bridge_all.py | 基于不同LLM模型进行对话。 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型生成回复,支持单线程和多线程方式。 | -| request_llm\bridge_chatgpt.py | 基于GPT模型完成对话。 | -| request_llm\bridge_jittorllms_llama.py | 使用JittorLLMs模型完成对话,支持单线程和多线程方式。 | -| request_llm\bridge_jittorllms_pangualpha.py | 使用JittorLLMs模型完成对话,基于多进程和多线程方式。 | -| request_llm\bridge_jittorllms_rwkv.py | 使用JittorLLMs模型完成聊天功能,提供包括历史信息、参数调节等在内的多个功能选项。 | -| request_llm\bridge_moss.py | 加载Moss模型完成对话功能。 | -| request_llm\bridge_newbing.py | 使用Newbing聊天机器人进行对话,支持单线程和多线程方式。 | -| request_llm\bridge_newbingfree.py | 基于Bing chatbot API实现聊天机器人的文本生成功能。 | -| request_llm\bridge_stackclaude.py | 基于Slack API实现Claude与用户的交互。 | -| request_llm\bridge_tgui.py | 通过websocket实现聊天机器人与UI界面交互。 | -| request_llm\edge_gpt.py | 调用Bing chatbot API提供聊天机器人服务。 | -| request_llm\edge_gpt_free.py | 实现聊天机器人API,采用aiohttp和httpx工具库。 | -| request_llm\test_llms.py | 对llm模型进行单元测试。 | - -## 接下来请你逐文件分析下面的工程[0/48] 请对下面的程序文件做一个概述: check_proxy.py - -这个文件主要包含了五个函数: - -1. `check_proxy`:用于检查代理的有效性及地理位置,输出代理配置和所在地信息。 - -2. `backup_and_download`:用于备份当前版本并下载新版本。 - -3. `patch_and_restart`:用于覆盖更新当前版本并重新启动程序。 - -4. `get_current_version`:用于获取当前程序的版本号。 - -5. `auto_update`:用于自动检查新版本并提示用户更新。如果用户选择更新,则备份并下载新版本,覆盖更新当前版本并重新启动程序。如果更新失败,则输出错误信息,并不会向用户进行任何提示。 - -还有一个没有函数名的语句`os.environ['no_proxy'] = '*'`,用于设置环境变量,避免代理网络产生意外污染。 - -此外,该文件导入了以下三个模块/函数: - -- `requests` -- `shutil` -- `os` - -## [1/48] 请对下面的程序文件做一个概述: colorful.py - -该文件是一个Python脚本,用于在控制台中打印彩色文字。该文件包含了一些函数,用于以不同颜色打印文本。其中,红色、绿色、黄色、蓝色、紫色、靛色分别以函数 print红、print绿、print黄、print蓝、print紫、print靛 的形式定义;亮红色、亮绿色、亮黄色、亮蓝色、亮紫色、亮靛色分别以 print亮红、print亮绿、print亮黄、print亮蓝、print亮紫、print亮靛 的形式定义。它们使用 ANSI Escape Code 将彩色输出从控制台突出显示。如果运行在 Linux 操作系统上,文件所执行的操作被留空;否则,该文件导入了 colorama 库并调用 init() 函数进行初始化。最后,通过一系列条件语句,该文件通过将所有彩色输出函数的名称重新赋值为 print 函数的名称来避免输出文件的颜色问题。 - -## [2/48] 请对下面的程序文件做一个概述: config.py - -这个程序文件是用来配置和参数设置的。它包含了许多设置,如API key,使用代理,线程数,默认模型,超时时间等等。此外,它还包含了一些高级功能,如URL重定向等。这些设置将会影响到程序的行为和性能。 - -## [3/48] 请对下面的程序文件做一个概述: config_private.py - -这个程序文件是一个Python脚本,文件名为config_private.py。其中包含以下变量的赋值: - -1. API_KEY:API密钥。 -2. USE_PROXY:是否应用代理。 -3. proxies:如果使用代理,则设置代理网络的协议(socks5/http)、地址(localhost)和端口(11284)。 -4. DEFAULT_WORKER_NUM:默认的工作线程数量。 -5. SLACK_CLAUDE_BOT_ID:Slack机器人ID。 -6. 
SLACK_CLAUDE_USER_TOKEN:Slack用户令牌。 - -## [4/48] 请对下面的程序文件做一个概述: core_functional.py - -这是一个名为core_functional.py的源代码文件,该文件定义了一个名为get_core_functions()的函数,该函数返回一个字典,该字典包含了各种学术翻译润色任务的说明和相关参数,如颜色、前缀、后缀等。这些任务包括英语学术润色、中文学术润色、查找语法错误、中译英、学术中英互译、英译中、找图片和参考文献转Bib。其中,一些任务还定义了预处理函数用于处理任务的输入文本。 - -## [5/48] 请对下面的程序文件做一个概述: crazy_functional.py - -此程序文件(crazy_functional.py)是一个函数插件集合,包含了多个函数插件的定义和调用。这些函数插件旨在提供一些高级功能,如解析项目源代码、批量翻译PDF文档和Latex全文润色等。其中一些插件还支持热更新功能,不需要重启程序即可生效。文件中的函数插件按照功能进行了分类(第一组和第二组),并且有不同的调用方式(作为按钮或下拉菜单)。 - -## [6/48] 请对下面的程序文件做一个概述: main.py - -这是一个Python程序文件,文件名为main.py。该程序包含一个名为main的函数,程序会自动运行该函数。程序要求已经安装了gradio、os等模块,会根据配置文件加载代理、model、API Key等信息。程序提供了Chatbot功能,实现了一个对话界面,用户可以输入问题,然后Chatbot可以回答问题或者提供相关功能。程序还包含了基础功能区、函数插件区、更换模型 & SysPrompt & 交互界面布局、备选输入区,用户可以在这些区域选择功能和插件进行使用。程序中还包含了一些辅助模块,如logging等。 - -## [7/48] 请对下面的程序文件做一个概述: multi_language.py - -该文件multi_language.py是用于将项目翻译成不同语言的程序。它包含了以下函数和变量:lru_file_cache、contains_chinese、split_list、map_to_json、read_map_from_json、advanced_split、trans、trans_json、step_1_core_key_translate、CACHE_FOLDER、blacklist、LANG、TransPrompt、cached_translation等。注释和文档字符串提供了有关程序的说明,例如如何使用该程序,如何修改“LANG”和“TransPrompt”变量等。 - -## [8/48] 请对下面的程序文件做一个概述: theme.py - -这是一个Python源代码文件,文件名为theme.py。此文件中定义了一个函数adjust_theme,其功能是自定义gradio应用程序的主题,包括调整颜色、字体、阴影等。如果允许,则添加一个看板娘。此文件还包括变量advanced_css,其中包含一些CSS样式,用于高亮显示代码和自定义聊天框样式。此文件还导入了get_conf函数和gradio库。 - -## [9/48] 请对下面的程序文件做一个概述: toolbox.py - -toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和小工具函数,用于协助实现聊天机器人所需的各种功能,包括文本处理、功能插件加载、异常检测、Markdown格式转换,文件读写等等。此外,该库还包含一些依赖、参数配置等信息。该库易于理解和维护。 - -## [10/48] 请对下面的程序文件做一个概述: crazy_functions\crazy_functions_test.py - -这个文件是一个Python测试模块,用于测试crazy_functions中的各种函数插件。这些函数包括:解析Python项目源代码、解析Cpp项目源代码、Latex全文润色、Markdown中译英、批量翻译PDF文档、谷歌检索小助手、总结word文档、下载arxiv论文并翻译摘要、联网回答问题、和解析Jupyter Notebooks。对于每个函数插件,都有一个对应的测试函数来进行测试。 - -## [11/48] 请对下面的程序文件做一个概述: crazy_functions\crazy_utils.py - -这个Python文件中包括了两个函数: - -1. `input_clipping`: 该函数用于裁剪输入文本长度,使其不超过一定的限制。 -2. 
`request_gpt_model_in_new_thread_with_ui_alive`: 该函数用于请求 GPT 模型并保持用户界面的响应,支持多线程和实时更新用户界面。 - -这两个函数都依赖于从 `toolbox` 和 `request_llm` 中导入的一些工具函数。函数的输入和输出有详细的描述文档。 - -## [12/48] 请对下面的程序文件做一个概述: crazy_functions\Latex全文润色.py - -这是一个Python程序文件,文件名为crazy_functions\Latex全文润色.py。文件包含了一个PaperFileGroup类和三个函数Latex英文润色,Latex中文润色和Latex英文纠错。程序使用了字符串处理、正则表达式、文件读写、多线程等技术,主要作用是对整个Latex项目进行润色和纠错。其中润色和纠错涉及到了对文本的语法、清晰度和整体可读性等方面的提升。此外,该程序还参考了第三方库,并封装了一些工具函数。 - -## [13/48] 请对下面的程序文件做一个概述: crazy_functions\Latex全文翻译.py - -这个文件包含两个函数 `Latex英译中` 和 `Latex中译英`,它们都会对整个Latex项目进行翻译。这个文件还包含一个类 `PaperFileGroup`,它拥有一个方法 `run_file_split`,用于把长文本文件分成多个短文件。其中使用了工具库 `toolbox` 中的一些函数和从 `request_llm` 中导入了 `model_info`。接下来的函数把文件读取进来,把它们的注释删除,进行分割,并进行翻译。这个文件还包括了一些异常处理和界面更新的操作。 - -## [14/48] 请对下面的程序文件做一个概述: crazy_functions\__init__.py - -这是一个Python模块的初始化文件(__init__.py),命名为"crazy_functions"。该模块包含了一些疯狂的函数,但该文件并没有实现这些函数,而是作为一个包(package)来导入其它的Python模块以实现这些函数。在该文件中,没有定义任何类或函数,它唯一的作用就是标识"crazy_functions"模块是一个包。 - -## [15/48] 请对下面的程序文件做一个概述: crazy_functions\下载arxiv论文翻译摘要.py - -这是一个 Python 程序文件,文件名为 `下载arxiv论文翻译摘要.py`。程序包含多个函数,其中 `下载arxiv论文并翻译摘要` 函数的作用是下载 `arxiv` 论文的 PDF 文件,提取摘要并使用 GPT 对其进行翻译。其他函数包括用于下载 `arxiv` 论文的 `download_arxiv_` 函数和用于获取文章信息的 `get_name` 函数,其中涉及使用第三方库如 requests, BeautifulSoup 等。该文件还包含一些用于调试和存储文件的代码段。 - -## [16/48] 请对下面的程序文件做一个概述: crazy_functions\代码重写为全英文_多线程.py - -该程序文件是一个多线程程序,主要功能是将指定目录下的所有Python代码文件中的中文内容转化为英文,并将转化后的代码存储到一个新的文件中。其中,程序使用了GPT-3等技术进行中文-英文的转化,同时也进行了一些Token限制下的处理,以防止程序发生错误。程序在执行过程中还会输出一些提示信息,并将所有转化过的代码文件存储到指定目录下。在程序执行结束后,还会生成一个任务执行报告,记录程序运行的详细信息。 - -## [17/48] 请对下面的程序文件做一个概述: crazy_functions\图片生成.py - -该程序文件提供了一个用于生成图像的函数`图片生成`。函数实现的过程中,会调用`gen_image`函数来生成图像,并返回图像生成的网址和本地文件地址。函数有多个参数,包括`prompt`(激励文本)、`llm_kwargs`(GPT模型的参数)、`plugin_kwargs`(插件模型的参数)等。函数核心代码使用了`requests`库向OpenAI API请求图像,并做了简单的处理和保存。函数还更新了交互界面,清空聊天历史并显示正在生成图像的消息和最终的图像网址和预览。 - -## [18/48] 请对下面的程序文件做一个概述: crazy_functions\对话历史存档.py - -这个文件是名为crazy_functions\对话历史存档.py的Python程序文件,包含了4个函数: - -1. write_chat_to_file(chatbot, history=None, file_name=None):用来将对话记录以Markdown格式写入文件中,并且生成文件名,如果没指定文件名则用当前时间。写入完成后将文件路径打印出来。 - -2. gen_file_preview(file_name):从传入的文件中读取内容,解析出对话历史记录并返回前100个字符,用于文件预览。 - -3. read_file_to_chat(chatbot, history, file_name):从传入的文件中读取内容,解析出对话历史记录并更新聊天显示框。 - -4. 
对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):一个主要函数,用于保存当前对话记录并提醒用户。如果用户希望加载历史记录,则调用read_file_to_chat()来更新聊天显示框。如果用户希望删除历史记录,调用删除所有本地对话历史记录()函数完成删除操作。 - -## [19/48] 请对下面的程序文件做一个概述: crazy_functions\总结word文档.py - -该程序文件实现了一个总结Word文档的功能,使用Python的docx库读取docx格式的文件,使用pywin32库读取doc格式的文件。程序会先根据传入的txt参数搜索需要处理的文件,并逐个解析其中的内容,将内容拆分为指定长度的文章片段,然后使用另一个程序文件中的request_gpt_model_in_new_thread_with_ui_alive函数进行中文概述。最后将所有的总结结果写入一个文件中,并在界面上进行展示。 - -## [20/48] 请对下面的程序文件做一个概述: crazy_functions\总结音视频.py - -该程序文件包括两个函数:split_audio_file()和AnalyAudio(),并且导入了一些必要的库并定义了一些工具函数。split_audio_file用于将音频文件分割成多个时长相等的片段,返回一个包含所有切割音频片段文件路径的列表,而AnalyAudio用来分析音频文件,通过调用whisper模型进行音频转文字并使用GPT模型对音频内容进行概述,最终将所有总结结果写入结果文件中。 - -## [21/48] 请对下面的程序文件做一个概述: crazy_functions\批量Markdown翻译.py - -该程序文件名为`批量Markdown翻译.py`,包含了以下功能:读取Markdown文件,将长文本分离开来,将Markdown文件进行翻译(英译中和中译英),整理结果并退出。程序使用了多线程以提高效率。程序使用了`tiktoken`依赖库,可能需要额外安装。文件中还有一些其他的函数和类,但与文件名所描述的功能无关。 - -## [22/48] 请对下面的程序文件做一个概述: crazy_functions\批量总结PDF文档.py - -该文件是一个Python脚本,名为crazy_functions\批量总结PDF文档.py。在导入了一系列库和工具函数后,主要定义了5个函数,其中包括一个错误处理装饰器(@CatchException),用于批量总结PDF文档。该函数主要实现对PDF文档的解析,并调用模型生成中英文摘要。 - -## [23/48] 请对下面的程序文件做一个概述: crazy_functions\批量总结PDF文档pdfminer.py - -该程序文件是一个用于批量总结PDF文档的函数插件,使用了pdfminer插件和BeautifulSoup库来提取PDF文档的文本内容,对每个PDF文件分别进行处理并生成中英文摘要。同时,该程序文件还包括一些辅助工具函数和处理异常的装饰器。 - -## [24/48] 请对下面的程序文件做一个概述: crazy_functions\批量翻译PDF文档_多线程.py - -这个程序文件是一个Python脚本,文件名为“批量翻译PDF文档_多线程.py”。它主要使用了“toolbox”、“request_gpt_model_in_new_thread_with_ui_alive”、“request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency”、“colorful”等Python库和自定义的模块“crazy_utils”的一些函数。程序实现了一个批量翻译PDF文档的功能,可以自动解析PDF文件中的基础信息,递归地切割PDF文件,翻译和处理PDF论文中的所有内容,并生成相应的翻译结果文件(包括md文件和html文件)。功能比较复杂,其中需要调用多个函数和依赖库,涉及到多线程操作和UI更新。文件中有详细的注释和变量命名,代码比较清晰易读。 - -## [25/48] 请对下面的程序文件做一个概述: crazy_functions\理解PDF文档内容.py - -该程序文件实现了一个名为“理解PDF文档内容”的函数,该函数可以为输入的PDF文件提取摘要以及正文各部分的主要内容,并在提取过程中根据上下文关系进行学术性问题解答。该函数依赖于多个辅助函数和第三方库,并在执行过程中针对可能出现的异常进行了处理。 - -## [26/48] 请对下面的程序文件做一个概述: crazy_functions\生成函数注释.py - -该程序文件是一个Python模块文件,文件名为“生成函数注释.py”,定义了两个函数:一个是生成函数注释的主函数“生成函数注释”,另一个是通过装饰器实现异常捕捉的函数“批量生成函数注释”。该程序文件依赖于“toolbox”和本地“crazy_utils”模块,并且在运行时使用了多线程技术和GPT模型来生成注释。函数生成的注释结果使用Markdown表格输出并写入历史记录文件。 - -## [27/48] 请对下面的程序文件做一个概述: crazy_functions\联网的ChatGPT.py - -这是一个名为`联网的ChatGPT.py`的Python程序文件,其中定义了一个函数`连接网络回答问题`。该函数通过爬取搜索引擎的结果和访问网页来综合回答给定的问题,并使用ChatGPT模型完成回答。此外,该文件还包括一些工具函数,例如从网页中抓取文本和使用代理访问网页。 - -## [28/48] 请对下面的程序文件做一个概述: crazy_functions\解析JupyterNotebook.py - -这个程序文件包含了两个函数: `parseNotebook()`和`解析ipynb文件()`,并且引入了一些工具函数和类。`parseNotebook()`函数将Jupyter Notebook文件解析为文本代码块,`解析ipynb文件()`函数则用于解析多个Jupyter Notebook文件,使用`parseNotebook()`解析每个文件和一些其他的处理。函数中使用了多线程处理输入和输出,并且将结果写入到文件中。 - -## [29/48] 请对下面的程序文件做一个概述: crazy_functions\解析项目源代码.py - -这是一个源代码分析的Python代码文件,其中定义了多个函数,包括解析一个Python项目、解析一个C项目、解析一个C项目的头文件和解析一个Java项目等。其中解析源代码新函数是实际处理源代码分析并生成报告的函数。该函数首先会逐个读取传入的源代码文件,生成对应的请求内容,通过多线程发送到chatgpt进行分析。然后将结果写入文件,并进行汇总分析。最后通过调用update_ui函数刷新界面,完整实现了源代码的分析。 - -## [30/48] 请对下面的程序文件做一个概述: crazy_functions\询问多个大语言模型.py - -该程序文件包含两个函数:同时问询()和同时问询_指定模型(),它们的作用是使用多个大语言模型同时对用户输入进行处理,返回对应模型的回复结果。同时问询()会默认使用ChatGPT和ChatGLM两个模型,而同时问询_指定模型()则可以指定要使用的模型。该程序文件还引用了其他的模块和函数库。 - -## [31/48] 请对下面的程序文件做一个概述: crazy_functions\读文章写摘要.py - -这个程序文件是一个Python模块,文件名为crazy_functions\读文章写摘要.py。该模块包含了两个函数,其中主要函数是"读文章写摘要"函数,其实现了解析给定文件夹中的tex文件,对其中每个文件的内容进行摘要生成,并根据各论文片段的摘要,最终生成全文摘要。第二个函数是"解析Paper"函数,用于解析单篇论文文件。其中用到了一些工具函数和库,如update_ui、CatchException、report_execption、write_results_to_file等。 - -## [32/48] 请对下面的程序文件做一个概述: crazy_functions\谷歌检索小助手.py - 
-该文件是一个Python模块,文件名为“谷歌检索小助手.py”。该模块包含两个函数,一个是“get_meta_information()”,用于从提供的网址中分析出所有相关的学术文献的元数据信息;另一个是“谷歌检索小助手()”,是主函数,用于分析用户提供的谷歌学术搜索页面中出现的文章,并提取相关信息。其中,“谷歌检索小助手()”函数依赖于“get_meta_information()”函数,并调用了其他一些Python模块,如“arxiv”、“math”、“bs4”等。 - -## [33/48] 请对下面的程序文件做一个概述: crazy_functions\高级功能函数模板.py - -该程序文件定义了一个名为高阶功能模板函数的函数,该函数接受多个参数,包括输入的文本、gpt模型参数、插件模型参数、聊天显示框的句柄、聊天历史等,并利用送出请求,使用 Unsplash API 发送相关图片。其中,为了避免输入溢出,函数会在开始时清空历史。函数也有一些 UI 更新的语句。该程序文件还依赖于其他两个模块:CatchException 和 update_ui,以及一个名为 request_gpt_model_in_new_thread_with_ui_alive 的来自 crazy_utils 模块(应该是自定义的工具包)的函数。 - -## [34/48] 请对下面的程序文件做一个概述: request_llm\bridge_all.py - -该文件包含两个函数:predict和predict_no_ui_long_connection,用于基于不同的LLM模型进行对话。该文件还包含一个lazyloadTiktoken类和一个LLM_CATCH_EXCEPTION修饰器函数。其中lazyloadTiktoken类用于懒加载模型的tokenizer,LLM_CATCH_EXCEPTION用于错误处理。整个文件还定义了一些全局变量和模型信息字典,用于引用和配置LLM模型。 - -## [35/48] 请对下面的程序文件做一个概述: request_llm\bridge_chatglm.py - -这是一个Python程序文件,名为`bridge_chatglm.py`,其中定义了一个名为`GetGLMHandle`的类和三个方法:`predict_no_ui_long_connection`、 `predict`和 `stream_chat`。该文件依赖于多个Python库,如`transformers`和`sentencepiece`。该文件实现了一个聊天机器人,使用ChatGLM模型来生成回复,支持单线程和多线程方式。程序启动时需要加载ChatGLM的模型和tokenizer,需要一段时间。在配置文件`config.py`中设置参数会影响模型的内存和显存使用,因此程序可能会导致低配计算机卡死。 - -## [36/48] 请对下面的程序文件做一个概述: request_llm\bridge_chatgpt.py - -该文件为 Python 代码文件,文件名为 request_llm\bridge_chatgpt.py。该代码文件主要提供三个函数:predict、predict_no_ui和 predict_no_ui_long_connection,用于发送至 chatGPT 并等待回复,获取输出。该代码文件还包含一些辅助函数,用于处理连接异常、生成 HTTP 请求等。该文件的代码架构清晰,使用了多个自定义函数和模块。 - -## [37/48] 请对下面的程序文件做一个概述: request_llm\bridge_jittorllms_llama.py - -该代码文件实现了一个聊天机器人,其中使用了 JittorLLMs 模型。主要包括以下几个部分: -1. GetGLMHandle 类:一个进程类,用于加载 JittorLLMs 模型并接收并处理请求。 -2. predict_no_ui_long_connection 函数:一个多线程方法,用于在后台运行聊天机器人。 -3. predict 函数:一个单线程方法,用于在前端页面上交互式调用聊天机器人,以获取用户输入并返回相应的回复。 - -这个文件中还有一些辅助函数和全局变量,例如 importlib、time、threading 等。 - -## [38/48] 请对下面的程序文件做一个概述: request_llm\bridge_jittorllms_pangualpha.py - -这个文件是为了实现使用jittorllms(一种机器学习模型)来进行聊天功能的代码。其中包括了模型加载、模型的参数加载、消息的收发等相关操作。其中使用了多进程和多线程来提高性能和效率。代码中还包括了处理依赖关系的函数和预处理函数等。 - -## [39/48] 请对下面的程序文件做一个概述: request_llm\bridge_jittorllms_rwkv.py - -这个文件是一个Python程序,文件名为request_llm\bridge_jittorllms_rwkv.py。它依赖transformers、time、threading、importlib、multiprocessing等库。在文件中,通过定义GetGLMHandle类加载jittorllms模型参数和定义stream_chat方法来实现与jittorllms模型的交互。同时,该文件还定义了predict_no_ui_long_connection和predict方法来处理历史信息、调用jittorllms模型、接收回复信息并输出结果。 - -## [40/48] 请对下面的程序文件做一个概述: request_llm\bridge_moss.py - -该文件为一个Python源代码文件,文件名为 request_llm\bridge_moss.py。代码定义了一个 GetGLMHandle 类和两个函数 predict_no_ui_long_connection 和 predict。 - -GetGLMHandle 类继承自Process类(多进程),主要功能是启动一个子进程并加载 MOSS 模型参数,通过 Pipe 进行主子进程的通信。该类还定义了 check_dependency、moss_init、run 和 stream_chat 等方法,其中 check_dependency 和 moss_init 是子进程的初始化方法,run 是子进程运行方法,stream_chat 实现了主进程和子进程的交互过程。 - -函数 predict_no_ui_long_connection 是多线程方法,调用 GetGLMHandle 类加载 MOSS 参数后使用 stream_chat 实现主进程和子进程的交互过程。 - -函数 predict 是单线程方法,通过调用 update_ui 将交互过程中 MOSS 的回复实时更新到UI(User Interface)中,并执行一个 named function(additional_fn)指定的函数对输入进行预处理。 - -## [41/48] 请对下面的程序文件做一个概述: request_llm\bridge_newbing.py - -这是一个名为`bridge_newbing.py`的程序文件,包含三个部分: - -第一部分使用from语句导入了`edge_gpt`模块的`NewbingChatbot`类。 - -第二部分定义了一个名为`NewBingHandle`的继承自进程类的子类,该类会检查依赖性并启动进程。同时,该部分还定义了一个名为`predict_no_ui_long_connection`的多线程方法和一个名为`predict`的单线程方法,用于与NewBing进行通信。 - -第三部分定义了一个名为`newbing_handle`的全局变量,并导出了`predict_no_ui_long_connection`和`predict`这两个方法,以供其他程序可以调用。 - -## [42/48] 请对下面的程序文件做一个概述: request_llm\bridge_newbingfree.py - 
-这个Python文件包含了三部分内容。第一部分是来自edge_gpt_free.py文件的聊天机器人程序。第二部分是子进程Worker,用于调用主体。第三部分提供了两个函数:predict_no_ui_long_connection和predict用于调用NewBing聊天机器人和返回响应。其中predict函数还提供了一些参数用于控制聊天机器人的回复和更新UI界面。 - -## [43/48] 请对下面的程序文件做一个概述: request_llm\bridge_stackclaude.py - -这是一个Python源代码文件,文件名为request_llm\bridge_stackclaude.py。代码分为三个主要部分: - -第一部分定义了Slack API Client类,实现Slack消息的发送、接收、循环监听,用于与Slack API进行交互。 - -第二部分定义了ClaudeHandle类,继承Process类,用于创建子进程Worker,调用主体,实现Claude与用户交互的功能。 - -第三部分定义了predict_no_ui_long_connection和predict两个函数,主要用于通过调用ClaudeHandle对象的stream_chat方法来获取Claude的回复,并更新ui以显示相关信息。其中predict函数采用单线程方法,而predict_no_ui_long_connection函数使用多线程方法。 - -## [44/48] 请对下面的程序文件做一个概述: request_llm\bridge_tgui.py - -该文件是一个Python代码文件,名为request_llm\bridge_tgui.py。它包含了一些函数用于与chatbot UI交互,并通过WebSocket协议与远程LLM模型通信完成文本生成任务,其中最重要的函数是predict()和predict_no_ui_long_connection()。这个程序还有其他的辅助函数,如random_hash()。整个代码文件在协作的基础上完成了一次修改。 - -## [45/48] 请对下面的程序文件做一个概述: request_llm\edge_gpt.py - -该文件是一个用于调用Bing chatbot API的Python程序,它由多个类和辅助函数构成,可以根据给定的对话连接在对话中提出问题,使用websocket与远程服务通信。程序实现了一个聊天机器人,可以为用户提供人工智能聊天。 - -## [46/48] 请对下面的程序文件做一个概述: request_llm\edge_gpt_free.py - -该代码文件为一个会话API,可通过Chathub发送消息以返回响应。其中使用了 aiohttp 和 httpx 库进行网络请求并发送。代码中包含了一些函数和常量,多数用于生成请求数据或是请求头信息等。同时该代码文件还包含了一个 Conversation 类,调用该类可实现对话交互。 - -## [47/48] 请对下面的程序文件做一个概述: request_llm\test_llms.py - -这个文件是用于对llm模型进行单元测试的Python程序。程序导入一个名为"request_llm.bridge_newbingfree"的模块,然后三次使用该模块中的predict_no_ui_long_connection()函数进行预测,并输出结果。此外,还有一些注释掉的代码段,这些代码段也是关于模型预测的。 - -## 用一张Markdown表格简要描述以下文件的功能: -check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, multi_language.py, theme.py, toolbox.py, crazy_functions\crazy_functions_test.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py。根据以上分析,用一句话概括程序的整体功能。 - -| 文件名 | 功能描述 | -| ------ | ------ | -| check_proxy.py | 检查代理有效性及地理位置 | -| colorful.py | 控制台打印彩色文字 | -| config.py | 配置和参数设置 | -| config_private.py | 私人配置和参数设置 | -| core_functional.py | 核心函数和参数设置 | -| crazy_functional.py | 高级功能插件集合 | -| main.py | 一个 Chatbot 程序,提供各种学术翻译、文本处理和其他查询服务 | -| multi_language.py | 识别和翻译不同语言 | -| theme.py | 自定义 gradio 应用程序主题 | -| toolbox.py | 工具类库,用于协助实现各种功能 | -| crazy_functions\crazy_functions_test.py | 测试 crazy_functions 中的各种函数 | -| crazy_functions\crazy_utils.py | 工具函数,用于字符串处理、异常检测、Markdown 格式转换等 | -| crazy_functions\Latex全文润色.py | 对整个 Latex 项目进行润色和纠错 | -| crazy_functions\Latex全文翻译.py | 对整个 Latex 项目进行翻译 | -| crazy_functions\__init__.py | 模块初始化文件,标识 `crazy_functions` 是一个包 | -| crazy_functions\下载arxiv论文翻译摘要.py | 下载 `arxiv` 论文的 PDF 文件,并提取摘要和翻译 | - -这些程序源文件提供了基础的文本和语言处理功能、工具函数和高级插件,使 Chatbot 能够处理各种复杂的学术文本问题,包括润色、翻译、搜索、下载、解析等。 - -## 用一张Markdown表格简要描述以下文件的功能: -crazy_functions\代码重写为全英文_多线程.py, crazy_functions\图片生成.py, crazy_functions\对话历史存档.py, crazy_functions\总结word文档.py, crazy_functions\总结音视频.py, crazy_functions\批量Markdown翻译.py, crazy_functions\批量总结PDF文档.py, crazy_functions\批量总结PDF文档pdfminer.py, crazy_functions\批量翻译PDF文档_多线程.py, crazy_functions\理解PDF文档内容.py, crazy_functions\生成函数注释.py, crazy_functions\联网的ChatGPT.py, crazy_functions\解析JupyterNotebook.py, crazy_functions\解析项目源代码.py, crazy_functions\询问多个大语言模型.py, crazy_functions\读文章写摘要.py。根据以上分析,用一句话概括程序的整体功能。 - -| 文件名 | 功能简述 | -| --- | --- | -| 代码重写为全英文_多线程.py | 将Python源代码文件中的中文内容转化为英文 | -| 图片生成.py | 根据激励文本使用GPT模型生成相应的图像 | -| 对话历史存档.py | 将每次对话记录写入Markdown格式的文件中 | -| 总结word文档.py | 对输入的word文档进行摘要生成 | -| 总结音视频.py | 对输入的音视频文件进行摘要生成 | -| 批量Markdown翻译.py | 
将指定目录下的Markdown文件进行中英文翻译 | -| 批量总结PDF文档.py | 对PDF文件进行切割和摘要生成 | -| 批量总结PDF文档pdfminer.py | 对PDF文件进行文本内容的提取和摘要生成 | -| 批量翻译PDF文档_多线程.py | 将指定目录下的PDF文件进行中英文翻译 | -| 理解PDF文档内容.py | 对PDF文件进行摘要生成和问题解答 | -| 生成函数注释.py | 自动生成Python函数的注释 | -| 联网的ChatGPT.py | 使用网络爬虫和ChatGPT模型进行聊天回答 | -| 解析JupyterNotebook.py | 对Jupyter Notebook进行代码解析 | -| 解析项目源代码.py | 对指定编程语言的源代码进行解析 | -| 询问多个大语言模型.py | 使用多个大语言模型对输入进行处理和回复 | -| 读文章写摘要.py | 对论文进行解析和全文摘要生成 | - -概括程序的整体功能:提供了一系列处理文本、文件和代码的功能,使用了各类语言模型、多线程、网络请求和数据解析技术来提高效率和精度。 - -## 用一张Markdown表格简要描述以下文件的功能: -crazy_functions\谷歌检索小助手.py, crazy_functions\高级功能函数模板.py, request_llm\bridge_all.py, request_llm\bridge_chatglm.py, request_llm\bridge_chatgpt.py, request_llm\bridge_jittorllms_llama.py, request_llm\bridge_jittorllms_pangualpha.py, request_llm\bridge_jittorllms_rwkv.py, request_llm\bridge_moss.py, request_llm\bridge_newbing.py, request_llm\bridge_newbingfree.py, request_llm\bridge_stackclaude.py, request_llm\bridge_tgui.py, request_llm\edge_gpt.py, request_llm\edge_gpt_free.py, request_llm\test_llms.py。根据以上分析,用一句话概括程序的整体功能。 - -| 文件名 | 功能描述 | -| --- | --- | -| crazy_functions\谷歌检索小助手.py | 提供谷歌学术搜索页面中相关文章的元数据信息。 | -| crazy_functions\高级功能函数模板.py | 使用Unsplash API发送相关图片以回复用户的输入。 | -| request_llm\bridge_all.py | 基于不同LLM模型进行对话。 | -| request_llm\bridge_chatglm.py | 使用ChatGLM模型生成回复,支持单线程和多线程方式。 | -| request_llm\bridge_chatgpt.py | 基于GPT模型完成对话。 | -| request_llm\bridge_jittorllms_llama.py | 使用JittorLLMs模型完成对话,支持单线程和多线程方式。 | -| request_llm\bridge_jittorllms_pangualpha.py | 使用JittorLLMs模型完成对话,基于多进程和多线程方式。 | -| request_llm\bridge_jittorllms_rwkv.py | 使用JittorLLMs模型完成聊天功能,提供包括历史信息、参数调节等在内的多个功能选项。 | -| request_llm\bridge_moss.py | 加载Moss模型完成对话功能。 | -| request_llm\bridge_newbing.py | 使用Newbing聊天机器人进行对话,支持单线程和多线程方式。 | -| request_llm\bridge_newbingfree.py | 基于Bing chatbot API实现聊天机器人的文本生成功能。 | -| request_llm\bridge_stackclaude.py | 基于Slack API实现Claude与用户的交互。 | -| request_llm\bridge_tgui.py | 通过websocket实现聊天机器人与UI界面交互。 | -| request_llm\edge_gpt.py | 调用Bing chatbot API提供聊天机器人服务。 | -| request_llm\edge_gpt_free.py | 实现聊天机器人API,采用aiohttp和httpx工具库。 | -| request_llm\test_llms.py | 对llm模型进行单元测试。 | -| 程序整体功能 | 实现不同种类的聊天机器人,可以根据输入进行文本生成。 | diff --git a/spaces/harpreetsahota/RAQA-with-LlamaIndex-and-a-fine-tuned-GPT-35/README.md b/spaces/harpreetsahota/RAQA-with-LlamaIndex-and-a-fine-tuned-GPT-35/README.md deleted file mode 100644 index b028311bfe88b0042235e4eba8b7ab138ab4408a..0000000000000000000000000000000000000000 --- a/spaces/harpreetsahota/RAQA-with-LlamaIndex-and-a-fine-tuned-GPT-35/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: RAQA on Chainlit - Chat The Hitchhikers Guide to the Galaxy with a fine-tuned GPT 3.5 -emoji: 🌌 -colorFrom: red -colorTo: red -sdk: docker -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/misc.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/misc.py deleted file mode 100644 index 3c50b69b38c950801baacba8b3684ffd23aef08b..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/misc.py +++ /dev/null @@ -1,21 +0,0 @@ -import torch.nn as nn -import torch -import torch.distributed as dist - -class GlobalAvgPool2d(nn.Module): - def __init__(self): - """Global average pooling over the input's spatial dimensions""" - super(GlobalAvgPool2d, self).__init__() - - def 
forward(self, inputs): - in_size = inputs.size() - return inputs.view((in_size[0], in_size[1], -1)).mean(dim=2) - -class SingleGPU(nn.Module): - def __init__(self, module): - super(SingleGPU, self).__init__() - self.module=module - - def forward(self, input): - return self.module(input.cuda(non_blocking=True)) - diff --git a/spaces/hekbobo/bingo/src/components/header.tsx b/spaces/hekbobo/bingo/src/components/header.tsx deleted file mode 100644 index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000 --- a/spaces/hekbobo/bingo/src/components/header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import * as React from 'react' -import { UserMenu } from './user-menu' - -export async function Header() { - return ( -
        -
        - -
        -
        - ) -} diff --git a/spaces/hekbobo/bingo/src/components/ui/voice/index.tsx b/spaces/hekbobo/bingo/src/components/ui/voice/index.tsx deleted file mode 100644 index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000 --- a/spaces/hekbobo/bingo/src/components/ui/voice/index.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import './index.scss' - -export interface VoiceProps extends CSSPropertyRule { - num?: number; - duration?: number; -} -export default function Voice({ duration = 400, num = 7, ...others }) { - return ( -
        - {Array.from({ length: num }).map((_, index) => { - const randomDuration = Math.random() * 100 + duration - const initialDelay = Math.random() * 2 * duration - const initialScale = Math.sin((index + 1) * Math.PI / num) - return ( -
        - ) - })} -
        - ) -} diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/one_hot_encoding.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/one_hot_encoding.py deleted file mode 100644 index 4c5e95b00cfe5e5d3b37934895b833a40f3514fc..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/one_hot_encoding.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import numpy as np - - -def to_one_hot(seg, all_seg_labels=None): - if all_seg_labels is None: - all_seg_labels = np.unique(seg) - result = np.zeros((len(all_seg_labels), *seg.shape), dtype=seg.dtype) - for i, l in enumerate(all_seg_labels): - result[i][seg == l] = 1 - return result diff --git a/spaces/huggingface-projects/magic-diffusion/share_btn.py b/spaces/huggingface-projects/magic-diffusion/share_btn.py deleted file mode 100644 index 1382fb25a5ef50e843598187e1e660e86ea8dd05..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/magic-diffusion/share_btn.py +++ /dev/null @@ -1,88 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `magic-prompt-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `magic-prompt-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgEl = gradioEl.querySelector('#input-img img'); - const imgEls = gradioEl.querySelectorAll('#generated-gallery img'); - const promptTxt = gradioEl.querySelector('#translated textarea').value; - let titleTxt = promptTxt; - if(titleTxt.length > 100){ - titleTxt = titleTxt.slice(0, 100) + ' ...'; - } - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!imgEls.length){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob 
= await res.blob(); - const imgId = Date.now() % 200; - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - }) - ); - const inputFile = await getInputImgFile(inputImgEl); - files.push(inputFile); - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const urlInputImg = urls.pop(); - const htmlImgs = urls.map(url => ``); - const htmlImgsMd = htmlImgs.join(`\n`); - const descriptionMd = `#### Input img: - -#### Caption: -${promptTxt} -#### Generations: -
        -${htmlImgsMd} -
        `; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/huggingface-projects/magic-diffusion/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/hugginglearners/Ethiopian-Food-Classifier/app.py b/spaces/hugginglearners/Ethiopian-Food-Classifier/app.py deleted file mode 100644 index d019070fa88cd70279e7f4e8cc36fdd843fee9fe..0000000000000000000000000000000000000000 --- a/spaces/hugginglearners/Ethiopian-Food-Classifier/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import gradio as gr -from huggingface_hub import from_pretrained_fastai -from fastai.vision.all import * - -repo_id = "Tinsae/EthioFoodtest3" - -learn = from_pretrained_fastai(repo_id) -labels = learn.dls.vocab -EXAMPLES_PATH = Path('./examples') - -title = "Ethiopian Food classifier " -description = """ -This app is a demo of a model trained to classify images of the following Ethiopian food categories -- Beyaynetu, Chechebsa, Doro wat, Firfir, Genfo, Kikil, Kitfo, Shekla tibs, Shiro wat, Tihlo and Tire_siga - -""" - -article = "Full report on this model can be found [here](https://wandb.ai/tinsae/Ethiopian-foods/reports/Ethiopian-Foods-Classification---VmlldzoyMzExNjk1?accessToken=hx3g5jwmlrn059f11zp5v2ktg62ygl23mkxy2tevliu6bmqsmpazp5jkmqzjrg71)" -examples = [f'{EXAMPLES_PATH}/{f.name}' for f in EXAMPLES_PATH.iterdir()] - -labels = learn.dls.vocab - -v =''' - - -

        A recipe video

        - {0} - - ''' -v_ls = ['''''', - '''''', - ''' ''', - '''''', - '''''', - '''''' , - '''''', - '''''', - '''''', - '''''', - '''''' - ] - -def predict(img): - img = PILImage.create(img) - pred, pred_w_idx, probs = learn.predict(img) - - labels_probs = {labels[i]: float(probs[i]) for i, _ in enumerate(labels)} - - - return labels_probs, v.format(v_ls[pred_w_idx]) - - - -demo = gr.Interface(predict, - "image", - [gr.outputs.Label(num_top_classes=3), "html"], - examples= examples, - title=title, - description=description, - article=article) - -demo.launch() \ No newline at end of file diff --git a/spaces/hylee/AnimeGANv2/test1.py b/spaces/hylee/AnimeGANv2/test1.py deleted file mode 100644 index 4a667d8fa3b8d5c1ca51b29ca181fe8ca72382c5..0000000000000000000000000000000000000000 --- a/spaces/hylee/AnimeGANv2/test1.py +++ /dev/null @@ -1,66 +0,0 @@ -import argparse -from tools.utils import * -import os -from tqdm import tqdm -from glob import glob -import time -import numpy as np -from net import generator -os.environ["CUDA_VISIBLE_DEVICES"] = "-1" - - -def stats_graph(graph): - flops = tf.profiler.profile(graph, options=tf.profiler.ProfileOptionBuilder.float_operation()) - # params = tf.profiler.profile(graph, options=tf.profiler.ProfileOptionBuilder.trainable_variables_parameter()) - print('FLOPs: {}'.format(flops.total_float_ops)) - -g_sess = None -test_generated = None -test_real = None - -def test(checkpoint_dir, style_name, test_file, if_adjust_brightness, img_size=[256,256]): - global g_sess - global test_generated - global test_real - - # tf.reset_default_graph() - result_dir = 'results/'+style_name - check_folder(result_dir) - - if g_sess is None: - test_real = tf.placeholder(tf.float32, [1, None, None, 3], name='test') - - with tf.variable_scope("generator", reuse=False): - test_generated = generator.G_net(test_real).fake - saver = tf.train.Saver() - - gpu_options = tf.GPUOptions(allow_growth=True) - g_sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, gpu_options=gpu_options)) - - # load model - ckpt = tf.train.get_checkpoint_state(checkpoint_dir) # checkpoint file information - if ckpt and ckpt.model_checkpoint_path: - ckpt_name = os.path.basename(ckpt.model_checkpoint_path) # first line - saver.restore(sess, os.path.join(checkpoint_dir, ckpt_name)) - print(" [*] Success to read {}".format(os.path.join(checkpoint_dir, ckpt_name))) - else: - print(" [*] Failed to find a checkpoint") - return - # stats_graph(tf.get_default_graph()) - - begin = time.time() - # print('Processing image: ' + sample_file) - sample_image = np.asarray(load_test_data(test_file, img_size)) - image_path = os.path.join(result_dir,'{0}'.format(os.path.basename(test_file))) - fake_img = g_sess.run(test_generated, feed_dict = {test_real : sample_image}) - if if_adjust_brightness: - save_images(fake_img, image_path, test_file) - else: - save_images(fake_img, image_path, None) - - end = time.time() - print(f'test-time: {end-begin} s') - - return image_path - - diff --git a/spaces/hysts/PnP-diffusion-features/app_generated_image.py b/spaces/hysts/PnP-diffusion-features/app_generated_image.py deleted file mode 100644 index a32199e7b5737f7c80978ed0f7620c195724cd3e..0000000000000000000000000000000000000000 --- a/spaces/hysts/PnP-diffusion-features/app_generated_image.py +++ /dev/null @@ -1,249 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -import pathlib -import shlex -import subprocess -import tempfile - -import gradio as gr -from omegaconf import OmegaConf - - 
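# The helpers below wire up the two-stage plug-and-play flow: gen_feature_extraction_config /
# run_feature_extraction_command first generate an image for a source prompt and cache its features
# under plug-and-play/experiments/<exp_name> (the run is skipped if that directory already exists),
# then gen_pnp_config / run_pnp_command translate it with a new prompt. Both stages write a temporary
# YAML config via OmegaConf and shell out to the repository's run_features_extraction.py / run_pnp.py.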
-def gen_feature_extraction_config( - exp_name: str, - prompt: str, - seed: int, - guidance_scale: float, - ddim_steps: int, -) -> str: - config = OmegaConf.load( - 'plug-and-play/configs/pnp/feature-extraction-generated.yaml') - config.config.experiment_name = exp_name - config.config.prompt = prompt - config.config.seed = seed - config.config.scale = guidance_scale - config.config.ddim_steps = ddim_steps - temp_file = tempfile.NamedTemporaryFile(suffix='.yaml', delete=False) - with open(temp_file.name, 'w') as f: - f.write(OmegaConf.to_yaml(config)) - return temp_file.name - - -def run_feature_extraction_command( - prompt: str, - seed: int, - guidance_scale: float, - ddim_steps: int, -) -> tuple[str, str]: - exp_name = f'{prompt.replace(" ", "_")}_{seed}_{guidance_scale:.1f}_{ddim_steps}' - if not pathlib.Path(f'plug-and-play/experiments/{exp_name}').exists(): - config_path = gen_feature_extraction_config( - exp_name, - prompt, - seed, - guidance_scale, - ddim_steps, - ) - subprocess.run(shlex.split( - f'python run_features_extraction.py --config {config_path}'), - cwd='plug-and-play') - return f'plug-and-play/experiments/{exp_name}/samples/0.png', exp_name - - -def gen_pnp_config( - exp_name: str, - prompt: str, - guidance_scale: float, - ddim_steps: int, - feature_injection_threshold: int, - negative_prompt: str, - negative_prompt_alpha: float, - negative_prompt_schedule: str, -) -> str: - config = OmegaConf.load('plug-and-play/configs/pnp/pnp-generated.yaml') - config.source_experiment_name = exp_name - config.prompts = [prompt] - config.scale = guidance_scale - config.num_ddim_sampling_steps = ddim_steps - config.feature_injection_threshold = feature_injection_threshold - config.negative_prompt = negative_prompt - config.negative_prompt_alpha = negative_prompt_alpha - config.negative_prompt_schedule = negative_prompt_schedule - temp_file = tempfile.NamedTemporaryFile(suffix='.yaml', delete=False) - with open(temp_file.name, 'w') as f: - f.write(OmegaConf.to_yaml(config)) - return temp_file.name - - -def run_pnp_command( - exp_name: str, - prompt: str, - negative_prompt: str, - guidance_scale: float, - ddim_steps: int, - feature_injection_threshold: int, - negative_prompt_alpha: float, - negative_prompt_schedule: str, -) -> str: - config_path = gen_pnp_config( - exp_name, - prompt, - guidance_scale, - ddim_steps, - feature_injection_threshold, - negative_prompt, - negative_prompt_alpha, - negative_prompt_schedule, - ) - subprocess.run(shlex.split(f'python run_pnp.py --config {config_path}'), - cwd='plug-and-play') - - out_dir = pathlib.Path( - f'plug-and-play/experiments/{exp_name}/translations/{guidance_scale}_{prompt.replace(" ", "_")}' - ) - out_label = f'INJECTION_T_{feature_injection_threshold}_STEPS_{ddim_steps}_NP-ALPHA_{negative_prompt_alpha}_SCHEDULE_{negative_prompt_schedule}_NP_{negative_prompt.replace(" ", "_")}' - out_path = out_dir / f'{out_label}_sample_0.png' - return out_path.as_posix() - - -def process_example(source_prompt: str, seed: int, - translation_prompt: str) -> tuple[str, str, str]: - generated_image, exp_name = run_feature_extraction_command( - source_prompt, seed, guidance_scale=5, ddim_steps=50) - result = run_pnp_command(exp_name, - translation_prompt, - negative_prompt='', - guidance_scale=7.5, - ddim_steps=50, - feature_injection_threshold=40, - negative_prompt_alpha=0.75, - negative_prompt_schedule='linear') - return generated_image, exp_name, result - - -def create_prompt_demo() -> gr.Blocks: - with gr.Blocks() as demo: - with gr.Box(): - 
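# Step 1 results are keyed by (prompt, seed, guidance scale, DDIM steps), so re-running it with
# identical settings reuses the cached experiment directory instead of extracting features again.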
gr.Markdown( - 'Step 1 (This step will take about 1.5 minutes on A10G.)') - with gr.Row(): - with gr.Column(): - source_prompt = gr.Text(label='Source prompt') - seed = gr.Slider(label='Seed', - minimum=0, - maximum=100000, - step=1, - value=0) - with gr.Accordion(label='Advanced settings', open=False): - source_guidance_scale = gr.Slider( - label='Guidance scale', - minimum=0, - maximum=50, - step=0.1, - value=5) - source_ddim_steps = gr.Slider(label='DDIM steps', - minimum=1, - maximum=100, - step=1, - value=50) - extract_feature_button = gr.Button( - 'Generate and extract features') - with gr.Column(): - generated_image = gr.Image(label='Generated image', - type='filepath') - exp_name = gr.Text(visible=False) - with gr.Box(): - gr.Markdown( - 'Step 2 (This step will take about 1.5 minutes on A10G.)') - with gr.Row(): - with gr.Column(): - translation_prompt = gr.Text( - label='Prompt for translation') - negative_prompt = gr.Text(label='Negative prompt') - with gr.Accordion(label='Advanced settings', open=False): - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0, - maximum=50, - step=0.1, - value=7.5) - ddim_steps = gr.Slider( - label='Number of inference steps', - minimum=1, - maximum=100, - step=1, - value=50) - feature_injection_threshold = gr.Slider( - label='Feature injection threshold', - minimum=0, - maximum=100, - step=1, - value=40) - negative_prompt_alpha = gr.Slider( - label='Negative prompt alpha', - minimum=0, - maximum=1, - step=0.01, - value=0.75) - negative_prompt_schedule = gr.Dropdown( - label='Negative prompt schedule', - choices=['linear', 'constant', 'exp'], - value='linear') - generate_button = gr.Button('Generate') - with gr.Column(): - result = gr.Image(label='Result', type='filepath') - with gr.Row(): - gr.Examples( - examples=[ - ['horse in mud', 50, 'a photo of a zebra in the snow'], - ['horse in mud', 50, 'a photo of a husky in the grass'], - ], - inputs=[ - source_prompt, - seed, - translation_prompt, - ], - outputs=[ - generated_image, - exp_name, - result, - ], - fn=process_example, - cache_examples=os.getenv('CACHE_EXAMPLES'), - ) - - extract_feature_button.click( - fn=run_feature_extraction_command, - inputs=[ - source_prompt, - seed, - source_guidance_scale, - source_ddim_steps, - ], - outputs=[ - generated_image, - exp_name, - ], - ) - generate_button.click( - fn=run_pnp_command, - inputs=[ - exp_name, - translation_prompt, - negative_prompt, - guidance_scale, - ddim_steps, - feature_injection_threshold, - negative_prompt_alpha, - negative_prompt_schedule, - ], - outputs=result, - ) - return demo - - -if __name__ == '__main__': - demo = create_prompt_demo() - demo.queue().launch() diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/eval_ijbc.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/eval_ijbc.py deleted file mode 100644 index 06c3506a8db432049e16b9235d85efe58109b5a8..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/eval_ijbc.py +++ /dev/null @@ -1,450 +0,0 @@ -# coding: utf-8 -import os -import pickle - -import matplotlib -import pandas as pd - -matplotlib.use("Agg") -import matplotlib.pyplot as plt -import timeit -import sklearn -import argparse -import cv2 -import numpy as np -import torch -from skimage import transform as trans -from backbones import get_model -from sklearn.metrics import roc_curve, auc - -from menpo.visualize.viewmatplotlib import 
sample_colours_from_colourmap -from prettytable import PrettyTable -from pathlib import Path - -import sys -import warnings - -sys.path.insert(0, "../") -warnings.filterwarnings("ignore") - -parser = argparse.ArgumentParser(description="do ijb test") -# general -parser.add_argument("--model-prefix", default="", help="path to load model.") -parser.add_argument("--image-path", default="", type=str, help="") -parser.add_argument("--result-dir", default=".", type=str, help="") -parser.add_argument("--batch-size", default=128, type=int, help="") -parser.add_argument("--network", default="iresnet50", type=str, help="") -parser.add_argument("--job", default="insightface", type=str, help="job name") -parser.add_argument("--target", default="IJBC", type=str, help="target, set to IJBC or IJBB") -args = parser.parse_args() - -target = args.target -model_path = args.model_prefix -image_path = args.image_path -result_dir = args.result_dir -gpu_id = None -use_norm_score = True # if Ture, TestMode(N1) -use_detector_score = True # if Ture, TestMode(D1) -use_flip_test = True # if Ture, TestMode(F1) -job = args.job -batch_size = args.batch_size - - -class Embedding(object): - def __init__(self, prefix, data_shape, batch_size=1): - image_size = (112, 112) - self.image_size = image_size - weight = torch.load(prefix) - resnet = get_model(args.network, dropout=0, fp16=False).cuda() - resnet.load_state_dict(weight) - model = torch.nn.DataParallel(resnet) - self.model = model - self.model.eval() - src = np.array( - [[30.2946, 51.6963], [65.5318, 51.5014], [48.0252, 71.7366], [33.5493, 92.3655], [62.7299, 92.2041]], - dtype=np.float32, - ) - src[:, 0] += 8.0 - self.src = src - self.batch_size = batch_size - self.data_shape = data_shape - - def get(self, rimg, landmark): - - assert landmark.shape[0] == 68 or landmark.shape[0] == 5 - assert landmark.shape[1] == 2 - if landmark.shape[0] == 68: - landmark5 = np.zeros((5, 2), dtype=np.float32) - landmark5[0] = (landmark[36] + landmark[39]) / 2 - landmark5[1] = (landmark[42] + landmark[45]) / 2 - landmark5[2] = landmark[30] - landmark5[3] = landmark[48] - landmark5[4] = landmark[54] - else: - landmark5 = landmark - tform = trans.SimilarityTransform() - tform.estimate(landmark5, self.src) - M = tform.params[0:2, :] - img = cv2.warpAffine(rimg, M, (self.image_size[1], self.image_size[0]), borderValue=0.0) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img_flip = np.fliplr(img) - img = np.transpose(img, (2, 0, 1)) # 3*112*112, RGB - img_flip = np.transpose(img_flip, (2, 0, 1)) - input_blob = np.zeros((2, 3, self.image_size[1], self.image_size[0]), dtype=np.uint8) - input_blob[0] = img - input_blob[1] = img_flip - return input_blob - - @torch.no_grad() - def forward_db(self, batch_data): - imgs = torch.Tensor(batch_data).cuda() - imgs.div_(255).sub_(0.5).div_(0.5) - feat = self.model(imgs) - feat = feat.reshape([self.batch_size, 2 * feat.shape[1]]) - return feat.cpu().numpy() - - -# 将一个list尽量均分成n份,限制len(list)==n,份数大于原list内元素个数则分配空list[] -def divideIntoNstrand(listTemp, n): - twoList = [[] for i in range(n)] - for i, e in enumerate(listTemp): - twoList[i % n].append(e) - return twoList - - -def read_template_media_list(path): - # ijb_meta = np.loadtxt(path, dtype=str) - ijb_meta = pd.read_csv(path, sep=" ", header=None).values - templates = ijb_meta[:, 1].astype(np.int) - medias = ijb_meta[:, 2].astype(np.int) - return templates, medias - - -# In[ ]: - - -def read_template_pair_list(path): - # pairs = np.loadtxt(path, dtype=str) - pairs = pd.read_csv(path, sep=" ", 
header=None).values - # print(pairs.shape) - # print(pairs[:, 0].astype(np.int)) - t1 = pairs[:, 0].astype(np.int) - t2 = pairs[:, 1].astype(np.int) - label = pairs[:, 2].astype(np.int) - return t1, t2, label - - -# In[ ]: - - -def read_image_feature(path): - with open(path, "rb") as fid: - img_feats = pickle.load(fid) - return img_feats - - -# In[ ]: - - -def get_image_feature(img_path, files_list, model_path, epoch, gpu_id): - batch_size = args.batch_size - data_shape = (3, 112, 112) - - files = files_list - print("files:", len(files)) - rare_size = len(files) % batch_size - faceness_scores = [] - batch = 0 - img_feats = np.empty((len(files), 1024), dtype=np.float32) - - batch_data = np.empty((2 * batch_size, 3, 112, 112)) - embedding = Embedding(model_path, data_shape, batch_size) - for img_index, each_line in enumerate(files[: len(files) - rare_size]): - name_lmk_score = each_line.strip().split(" ") - img_name = os.path.join(img_path, name_lmk_score[0]) - img = cv2.imread(img_name) - lmk = np.array([float(x) for x in name_lmk_score[1:-1]], dtype=np.float32) - lmk = lmk.reshape((5, 2)) - input_blob = embedding.get(img, lmk) - - batch_data[2 * (img_index - batch * batch_size)][:] = input_blob[0] - batch_data[2 * (img_index - batch * batch_size) + 1][:] = input_blob[1] - if (img_index + 1) % batch_size == 0: - print("batch", batch) - img_feats[batch * batch_size : batch * batch_size + batch_size][:] = embedding.forward_db(batch_data) - batch += 1 - faceness_scores.append(name_lmk_score[-1]) - - batch_data = np.empty((2 * rare_size, 3, 112, 112)) - embedding = Embedding(model_path, data_shape, rare_size) - for img_index, each_line in enumerate(files[len(files) - rare_size :]): - name_lmk_score = each_line.strip().split(" ") - img_name = os.path.join(img_path, name_lmk_score[0]) - img = cv2.imread(img_name) - lmk = np.array([float(x) for x in name_lmk_score[1:-1]], dtype=np.float32) - lmk = lmk.reshape((5, 2)) - input_blob = embedding.get(img, lmk) - batch_data[2 * img_index][:] = input_blob[0] - batch_data[2 * img_index + 1][:] = input_blob[1] - if (img_index + 1) % rare_size == 0: - print("batch", batch) - img_feats[len(files) - rare_size :][:] = embedding.forward_db(batch_data) - batch += 1 - faceness_scores.append(name_lmk_score[-1]) - faceness_scores = np.array(faceness_scores).astype(np.float32) - # img_feats = np.ones( (len(files), 1024), dtype=np.float32) * 0.01 - # faceness_scores = np.ones( (len(files), ), dtype=np.float32 ) - return img_feats, faceness_scores - - -# In[ ]: - - -def image2template_feature(img_feats=None, templates=None, medias=None): - # ========================================================== - # 1. face image feature l2 normalization. img_feats:[number_image x feats_dim] - # 2. compute media feature. - # 3. compute template feature. 
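# (Step 2 first averages all frames that share a media id — frames from the same video — into a
# single feature, so a template that mixes one long video with a few still images is not dominated
# by the video's frame count.)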
- # ========================================================== - unique_templates = np.unique(templates) - template_feats = np.zeros((len(unique_templates), img_feats.shape[1])) - - for count_template, uqt in enumerate(unique_templates): - - (ind_t,) = np.where(templates == uqt) - face_norm_feats = img_feats[ind_t] - face_medias = medias[ind_t] - unique_medias, unique_media_counts = np.unique(face_medias, return_counts=True) - media_norm_feats = [] - for u, ct in zip(unique_medias, unique_media_counts): - (ind_m,) = np.where(face_medias == u) - if ct == 1: - media_norm_feats += [face_norm_feats[ind_m]] - else: # image features from the same video will be aggregated into one feature - media_norm_feats += [np.mean(face_norm_feats[ind_m], axis=0, keepdims=True)] - media_norm_feats = np.array(media_norm_feats) - # media_norm_feats = media_norm_feats / np.sqrt(np.sum(media_norm_feats ** 2, -1, keepdims=True)) - template_feats[count_template] = np.sum(media_norm_feats, axis=0) - if count_template % 2000 == 0: - print("Finish Calculating {} template features.".format(count_template)) - # template_norm_feats = template_feats / np.sqrt(np.sum(template_feats ** 2, -1, keepdims=True)) - template_norm_feats = sklearn.preprocessing.normalize(template_feats) - # print(template_norm_feats.shape) - return template_norm_feats, unique_templates - - -# In[ ]: - - -def verification(template_norm_feats=None, unique_templates=None, p1=None, p2=None): - # ========================================================== - # Compute set-to-set Similarity Score. - # ========================================================== - template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int) - for count_template, uqt in enumerate(unique_templates): - template2id[uqt] = count_template - - score = np.zeros((len(p1),)) # save cosine distance between pairs - - total_pairs = np.array(range(len(p1))) - batchsize = 100000 # small batchsize instead of all pairs in one batch due to the memory limiation - sublists = [total_pairs[i : i + batchsize] for i in range(0, len(p1), batchsize)] - total_sublists = len(sublists) - for c, s in enumerate(sublists): - feat1 = template_norm_feats[template2id[p1[s]]] - feat2 = template_norm_feats[template2id[p2[s]]] - similarity_score = np.sum(feat1 * feat2, -1) - score[s] = similarity_score.flatten() - if c % 10 == 0: - print("Finish {}/{} pairs.".format(c, total_sublists)) - return score - - -# In[ ]: -def verification2(template_norm_feats=None, unique_templates=None, p1=None, p2=None): - template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int) - for count_template, uqt in enumerate(unique_templates): - template2id[uqt] = count_template - score = np.zeros((len(p1),)) # save cosine distance between pairs - total_pairs = np.array(range(len(p1))) - batchsize = 100000 # small batchsize instead of all pairs in one batch due to the memory limiation - sublists = [total_pairs[i : i + batchsize] for i in range(0, len(p1), batchsize)] - total_sublists = len(sublists) - for c, s in enumerate(sublists): - feat1 = template_norm_feats[template2id[p1[s]]] - feat2 = template_norm_feats[template2id[p2[s]]] - similarity_score = np.sum(feat1 * feat2, -1) - score[s] = similarity_score.flatten() - if c % 10 == 0: - print("Finish {}/{} pairs.".format(c, total_sublists)) - return score - - -def read_score(path): - with open(path, "rb") as fid: - img_feats = pickle.load(fid) - return img_feats - - -# # Step1: Load Meta Data - -# In[ ]: - -assert target == "IJBC" or target == "IJBB" - -# 
============================================================= -# load image and template relationships for template feature embedding -# tid --> template id, mid --> media id -# format: -# image_name tid mid -# ============================================================= -start = timeit.default_timer() -templates, medias = read_template_media_list( - os.path.join("%s/meta" % image_path, "%s_face_tid_mid.txt" % target.lower()) -) -stop = timeit.default_timer() -print("Time: %.2f s. " % (stop - start)) - -# In[ ]: - -# ============================================================= -# load template pairs for template-to-template verification -# tid : template id, label : 1/0 -# format: -# tid_1 tid_2 label -# ============================================================= -start = timeit.default_timer() -p1, p2, label = read_template_pair_list( - os.path.join("%s/meta" % image_path, "%s_template_pair_label.txt" % target.lower()) -) -stop = timeit.default_timer() -print("Time: %.2f s. " % (stop - start)) - -# # Step 2: Get Image Features - -# In[ ]: - -# ============================================================= -# load image features -# format: -# img_feats: [image_num x feats_dim] (227630, 512) -# ============================================================= -start = timeit.default_timer() -img_path = "%s/loose_crop" % image_path -img_list_path = "%s/meta/%s_name_5pts_score.txt" % (image_path, target.lower()) -img_list = open(img_list_path) -files = img_list.readlines() -# files_list = divideIntoNstrand(files, rank_size) -files_list = files - -# img_feats -# for i in range(rank_size): -img_feats, faceness_scores = get_image_feature(img_path, files_list, model_path, 0, gpu_id) -stop = timeit.default_timer() -print("Time: %.2f s. " % (stop - start)) -print("Feature Shape: ({} , {}) .".format(img_feats.shape[0], img_feats.shape[1])) - -# # Step3: Get Template Features - -# In[ ]: - -# ============================================================= -# compute template features from image features. -# ============================================================= -start = timeit.default_timer() -# ========================================================== -# Norm feature before aggregation into template feature? -# Feature norm from embedding network and faceness score are able to decrease weights for noise samples (not face). -# ========================================================== -# 1. FaceScore (Feature Norm) -# 2. FaceScore (Detector) - -if use_flip_test: - # concat --- F1 - # img_input_feats = img_feats - # add --- F2 - img_input_feats = img_feats[:, 0 : img_feats.shape[1] // 2] + img_feats[:, img_feats.shape[1] // 2 :] -else: - img_input_feats = img_feats[:, 0 : img_feats.shape[1] // 2] - -if use_norm_score: - img_input_feats = img_input_feats -else: - # normalise features to remove norm information - img_input_feats = img_input_feats / np.sqrt(np.sum(img_input_feats**2, -1, keepdims=True)) - -if use_detector_score: - print(img_input_feats.shape, faceness_scores.shape) - img_input_feats = img_input_feats * faceness_scores[:, np.newaxis] -else: - img_input_feats = img_input_feats - -template_norm_feats, unique_templates = image2template_feature(img_input_feats, templates, medias) -stop = timeit.default_timer() -print("Time: %.2f s. " % (stop - start)) - -# # Step 4: Get Template Similarity Scores - -# In[ ]: - -# ============================================================= -# compute verification scores between template pairs. 
-# ============================================================= -start = timeit.default_timer() -score = verification(template_norm_feats, unique_templates, p1, p2) -stop = timeit.default_timer() -print("Time: %.2f s. " % (stop - start)) - -# In[ ]: -save_path = os.path.join(result_dir, args.job) -# save_path = result_dir + '/%s_result' % target - -if not os.path.exists(save_path): - os.makedirs(save_path) - -score_save_file = os.path.join(save_path, "%s.npy" % target.lower()) -np.save(score_save_file, score) - -# # Step 5: Get ROC Curves and TPR@FPR Table - -# In[ ]: - -files = [score_save_file] -methods = [] -scores = [] -for file in files: - methods.append(Path(file).stem) - scores.append(np.load(file)) - -methods = np.array(methods) -scores = dict(zip(methods, scores)) -colours = dict(zip(methods, sample_colours_from_colourmap(methods.shape[0], "Set2"))) -x_labels = [10**-6, 10**-5, 10**-4, 10**-3, 10**-2, 10**-1] -tpr_fpr_table = PrettyTable(["Methods"] + [str(x) for x in x_labels]) -fig = plt.figure() -for method in methods: - fpr, tpr, _ = roc_curve(label, scores[method]) - roc_auc = auc(fpr, tpr) - fpr = np.flipud(fpr) - tpr = np.flipud(tpr) # select largest tpr at same fpr - plt.plot( - fpr, tpr, color=colours[method], lw=1, label=("[%s (AUC = %0.4f %%)]" % (method.split("-")[-1], roc_auc * 100)) - ) - tpr_fpr_row = [] - tpr_fpr_row.append("%s-%s" % (method, target)) - for fpr_iter in np.arange(len(x_labels)): - _, min_index = min(list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr))))) - tpr_fpr_row.append("%.2f" % (tpr[min_index] * 100)) - tpr_fpr_table.add_row(tpr_fpr_row) -plt.xlim([10**-6, 0.1]) -plt.ylim([0.3, 1.0]) -plt.grid(linestyle="--", linewidth=1) -plt.xticks(x_labels) -plt.yticks(np.linspace(0.3, 1.0, 8, endpoint=True)) -plt.xscale("log") -plt.xlabel("False Positive Rate") -plt.ylabel("True Positive Rate") -plt.title("ROC on IJB") -plt.legend(loc="lower right") -fig.savefig(os.path.join(save_path, "%s.pdf" % target.lower())) -print(tpr_fpr_table) diff --git a/spaces/iamstolas/STOLAS/src/lib/hooks/use-enter-submit.tsx b/spaces/iamstolas/STOLAS/src/lib/hooks/use-enter-submit.tsx deleted file mode 100644 index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/lib/hooks/use-enter-submit.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import { useRef, type RefObject } from 'react' - -export function useEnterSubmit(): { - formRef: RefObject - onKeyDown: (event: React.KeyboardEvent) => void -} { - const formRef = useRef(null) - - const handleKeyDown = ( - event: React.KeyboardEvent - ): void => { - if ( - event.key === 'Enter' && - !event.shiftKey && - !event.nativeEvent.isComposing - ) { - formRef.current?.requestSubmit() - event.preventDefault() - } - } - - return { formRef, onKeyDown: handleKeyDown } -} diff --git a/spaces/igashov/DiffLinker/src/const.py b/spaces/igashov/DiffLinker/src/const.py deleted file mode 100644 index 63552a5d576ce5929573672ee12bc05a22e18bdd..0000000000000000000000000000000000000000 --- a/spaces/igashov/DiffLinker/src/const.py +++ /dev/null @@ -1,218 +0,0 @@ -import torch - -from rdkit import Chem - - -TORCH_FLOAT = torch.float32 -TORCH_INT = torch.int8 - -# #################################################################################### # -# ####################################### ZINC ####################################### # -# #################################################################################### # - -# Atom idx for one-hot encoding -ATOM2IDX = {'C': 0, 'O': 1, 'N': 2, 
'F': 3, 'S': 4, 'Cl': 5, 'Br': 6, 'I': 7} -IDX2ATOM = {0: 'C', 1: 'O', 2: 'N', 3: 'F', 4: 'S', 5: 'Cl', 6: 'Br', 7: 'I'} - -# Atomic numbers (Z) -CHARGES = {'C': 6, 'O': 8, 'N': 7, 'F': 9, 'S': 16, 'Cl': 17, 'Br': 35, 'I': 53} - -# One-hot atom types -NUMBER_OF_ATOM_TYPES = len(ATOM2IDX) - - -# #################################################################################### # -# ####################################### GEOM ####################################### # -# #################################################################################### # - -# Atom idx for one-hot encoding -GEOM_ATOM2IDX = {'C': 0, 'O': 1, 'N': 2, 'F': 3, 'S': 4, 'Cl': 5, 'Br': 6, 'I': 7, 'P': 8} -GEOM_IDX2ATOM = {0: 'C', 1: 'O', 2: 'N', 3: 'F', 4: 'S', 5: 'Cl', 6: 'Br', 7: 'I', 8: 'P'} - -# Atomic numbers (Z) -GEOM_CHARGES = {'C': 6, 'O': 8, 'N': 7, 'F': 9, 'S': 16, 'Cl': 17, 'Br': 35, 'I': 53, 'P': 15} - -# One-hot atom types -GEOM_NUMBER_OF_ATOM_TYPES = len(GEOM_ATOM2IDX) - -# Dataset keys -DATA_LIST_ATTRS = { - 'uuid', 'name', 'fragments_smi', 'linker_smi', 'num_atoms' -} -DATA_ATTRS_TO_PAD = { - 'positions', 'one_hot', 'charges', 'anchors', 'fragment_mask', 'linker_mask', 'pocket_mask', 'fragment_only_mask' -} -DATA_ATTRS_TO_ADD_LAST_DIM = { - 'charges', 'anchors', 'fragment_mask', 'linker_mask', 'pocket_mask', 'fragment_only_mask' -} - -# Distribution of linker size in train data -LINKER_SIZE_DIST = { - 4: 85540, - 3: 113928, - 6: 70946, - 7: 30408, - 5: 77671, - 9: 5177, - 10: 1214, - 8: 12712, - 11: 158, - 12: 7, -} - - -# Bond lengths from: -# http://www.wiredchemist.com/chemistry/data/bond_energies_lengths.html -# And: -# http://chemistry-reference.com/tables/Bond%20Lengths%20and%20Enthalpies.pdf -BONDS_1 = { - 'H': { - 'H': 74, 'C': 109, 'N': 101, 'O': 96, 'F': 92, - 'B': 119, 'Si': 148, 'P': 144, 'As': 152, 'S': 134, - 'Cl': 127, 'Br': 141, 'I': 161 - }, - 'C': { - 'H': 109, 'C': 154, 'N': 147, 'O': 143, 'F': 135, - 'Si': 185, 'P': 184, 'S': 182, 'Cl': 177, 'Br': 194, - 'I': 214 - }, - 'N': { - 'H': 101, 'C': 147, 'N': 145, 'O': 140, 'F': 136, - 'Cl': 175, 'Br': 214, 'S': 168, 'I': 222, 'P': 177 - }, - 'O': { - 'H': 96, 'C': 143, 'N': 140, 'O': 148, 'F': 142, - 'Br': 172, 'S': 151, 'P': 163, 'Si': 163, 'Cl': 164, - 'I': 194 - }, - 'F': { - 'H': 92, 'C': 135, 'N': 136, 'O': 142, 'F': 142, - 'S': 158, 'Si': 160, 'Cl': 166, 'Br': 178, 'P': 156, - 'I': 187 - }, - 'B': { - 'H': 119, 'Cl': 175 - }, - 'Si': { - 'Si': 233, 'H': 148, 'C': 185, 'O': 163, 'S': 200, - 'F': 160, 'Cl': 202, 'Br': 215, 'I': 243, - }, - 'Cl': { - 'Cl': 199, 'H': 127, 'C': 177, 'N': 175, 'O': 164, - 'P': 203, 'S': 207, 'B': 175, 'Si': 202, 'F': 166, - 'Br': 214 - }, - 'S': { - 'H': 134, 'C': 182, 'N': 168, 'O': 151, 'S': 204, - 'F': 158, 'Cl': 207, 'Br': 225, 'Si': 200, 'P': 210, - 'I': 234 - }, - 'Br': { - 'Br': 228, 'H': 141, 'C': 194, 'O': 172, 'N': 214, - 'Si': 215, 'S': 225, 'F': 178, 'Cl': 214, 'P': 222 - }, - 'P': { - 'P': 221, 'H': 144, 'C': 184, 'O': 163, 'Cl': 203, - 'S': 210, 'F': 156, 'N': 177, 'Br': 222 - }, - 'I': { - 'H': 161, 'C': 214, 'Si': 243, 'N': 222, 'O': 194, - 'S': 234, 'F': 187, 'I': 266 - }, - 'As': { - 'H': 152 - } -} - -BONDS_2 = { - 'C': {'C': 134, 'N': 129, 'O': 120, 'S': 160}, - 'N': {'C': 129, 'N': 125, 'O': 121}, - 'O': {'C': 120, 'N': 121, 'O': 121, 'P': 150}, - 'P': {'O': 150, 'S': 186}, - 'S': {'P': 186} -} - -BONDS_3 = { - 'C': {'C': 120, 'N': 116, 'O': 113}, - 'N': {'C': 116, 'N': 110}, - 'O': {'C': 113} -} - -BOND_DICT = [ - None, - Chem.rdchem.BondType.SINGLE, - 
Chem.rdchem.BondType.DOUBLE, - Chem.rdchem.BondType.TRIPLE, - Chem.rdchem.BondType.AROMATIC, -] - -BOND2IDX = { - Chem.rdchem.BondType.SINGLE: 1, - Chem.rdchem.BondType.DOUBLE: 2, - Chem.rdchem.BondType.TRIPLE: 3, - Chem.rdchem.BondType.AROMATIC: 4, -} - -ALLOWED_BONDS = { - 'H': 1, - 'C': 4, - 'N': 3, - 'O': 2, - 'F': 1, - 'B': 3, - 'Al': 3, - 'Si': 4, - 'P': [3, 5], - 'S': 4, - 'Cl': 1, - 'As': 3, - 'Br': 1, - 'I': 1, - 'Hg': [1, 2], - 'Bi': [3, 5] -} - -MARGINS_EDM = [10, 5, 2] - -COLORS = ['C0', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8'] -# RADII = [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.3] -RADII = [0.77, 0.77, 0.77, 0.77, 0.77, 0.77, 0.77, 0.77, 0.77] - -ZINC_TRAIN_LINKER_ID2SIZE = [3, 4, 5, 6, 7, 8, 9, 10, 11, 12] -ZINC_TRAIN_LINKER_SIZE2ID = { - size: idx - for idx, size in enumerate(ZINC_TRAIN_LINKER_ID2SIZE) -} -ZINC_TRAIN_LINKER_SIZE_WEIGHTS = [ - 3.47347831e-01, - 4.63079100e-01, - 5.12370917e-01, - 5.62392614e-01, - 1.30294388e+00, - 3.24247801e+00, - 8.12391184e+00, - 3.45634358e+01, - 2.72428571e+02, - 6.26585714e+03 -] - - -GEOM_TRAIN_LINKER_ID2SIZE = [ - 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 36, 38, 41 -] -GEOM_TRAIN_LINKER_SIZE2ID = { - size: idx - for idx, size in enumerate(GEOM_TRAIN_LINKER_ID2SIZE) -} -GEOM_TRAIN_LINKER_SIZE_WEIGHTS = [ - 1.07790681e+00, 4.54693604e-01, 3.62575713e-01, 3.75199484e-01, - 3.67812588e-01, 3.92388528e-01, 3.83421054e-01, 4.26924670e-01, - 4.92768040e-01, 4.99761944e-01, 4.92342726e-01, 5.71456905e-01, - 7.30631393e-01, 8.45412928e-01, 9.97252243e-01, 1.25423985e+00, - 1.57316129e+00, 2.19902962e+00, 3.22640431e+00, 4.25481066e+00, - 6.34749573e+00, 9.00676236e+00, 1.43084017e+01, 2.25763173e+01, - 3.36867096e+01, 9.50713805e+01, 2.08693274e+02, 2.51659537e+02, - 7.77856749e+02, 8.55642424e+03, 8.55642424e+03, 4.27821212e+03, - 4.27821212e+03 -] diff --git a/spaces/inamXcontru/PoeticTTS/AutoCAD Mobile 2015 X32 (32bit) (Product Key And Xforce Keygen) TOP.md b/spaces/inamXcontru/PoeticTTS/AutoCAD Mobile 2015 X32 (32bit) (Product Key And Xforce Keygen) TOP.md deleted file mode 100644 index fc9f3f02a547722218e84a97200f89794c9d4625..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/AutoCAD Mobile 2015 X32 (32bit) (Product Key And Xforce Keygen) TOP.md +++ /dev/null @@ -1,22 +0,0 @@ - -

        How to Install and Activate AutoCAD Mobile 2015 on Your Device

        -

        AutoCAD Mobile 2015 is a mobile app that lets you access, create, and edit DWG files on your Android or iOS device. You can also collaborate with other AutoCAD users and sync your files with cloud storage services. To use AutoCAD Mobile 2015, you need a product key and an activation code that you can get from Autodesk or an authorized reseller.

        -

        AutoCAD Mobile 2015 X32 (32bit) (Product Key And Xforce Keygen)


        Download File: https://gohhs.com/2uz5sO



        -

        In this article, we will show you how to install and activate AutoCAD Mobile 2015 on your device using Xforce Keygen, a tool that generates valid serial numbers and activation codes for Autodesk products.

        -

        Step 1: Download and Install AutoCAD Mobile 2015

        -

        To download AutoCAD Mobile 2015, you can visit the official website of Autodesk[^1^] or the Google Play Store[^3^] for Android devices. For iOS devices, you can download it from the Apple App Store. Make sure your device meets the minimum system requirements for AutoCAD Mobile 2015 before installing it.

        -

        After downloading the app, follow the instructions on your screen to install it on your device. You may need to grant some permissions to the app to access your files, camera, location, etc.

        -

        -

        Step 2: Run Xforce Keygen and Generate a Product Key

        -

        Xforce Keygen is a program that can generate valid product keys and activation codes for Autodesk products. You can download it from various online sources, but be careful of malware and viruses. We recommend using a trusted antivirus program to scan the file before running it.

        -

        After downloading Xforce Keygen, run it as administrator on your computer. Select AutoCAD Mobile 2015 from the product list and click on Generate. You will see a product key that you can copy to your clipboard or write down somewhere.

        -

        Step 3: Enter the Product Key in AutoCAD Mobile 2015

        -

        Open AutoCAD Mobile 2015 on your device and sign in with your Autodesk account. If you don't have one, you can create one for free. Tap on the menu icon at the top left corner and select Subscription. Tap on Enter Product Key and paste or type the product key that you generated with Xforce Keygen. Tap on Activate.

        -

        Step 4: Generate an Activation Code with Xforce Keygen

        -

        After entering the product key, you will see a request code on your screen. This is a unique code that identifies your device and subscription. Copy this code to your clipboard or write it down somewhere.

        -

        Go back to Xforce Keygen on your computer and click on Patch. You will see a message saying "You need to apply patch when license screen appears". Click on OK. Then, paste or type the request code that you copied from AutoCAD Mobile 2015 in the Request field. Click on Generate. You will see an activation code that you can copy to your clipboard or write down somewhere.

        -

        Step 5: Enter the Activation Code in AutoCAD Mobile 2015

        -

        Go back to AutoCAD Mobile 2015 on your device and tap on Enter Activation Code. Paste or type the activation code that you generated with Xforce Keygen. Tap on Activate.

        -

        Congratulations! You have successfully installed and activated AutoCAD Mobile 2015 on your device. You can now enjoy the full features of the app for one year.

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/innev/whisper-Base/app.py b/spaces/innev/whisper-Base/app.py deleted file mode 100644 index 51f6aa59cd00e07f38c96aa5028d6c1c3f3fadcb..0000000000000000000000000000000000000000 --- a/spaces/innev/whisper-Base/app.py +++ /dev/null @@ -1,117 +0,0 @@ -#!/usr/local/bin/python3 -#-*- coding:utf-8 -*- -import gradio as gr -import librosa -import torch -from transformers import WhisperProcessor, WhisperForConditionalGeneration - -checkpoint = "openai/whisper-base" -# checkpoint = "/innev/open-ai/huggingface/openai/whisper-base" -processor = WhisperProcessor.from_pretrained(checkpoint) -model = WhisperForConditionalGeneration.from_pretrained(checkpoint) - -def process_audio(sampling_rate, waveform): - # convert from int16 to floating point - waveform = waveform / 32678.0 - - # convert to mono if stereo - if len(waveform.shape) > 1: - waveform = librosa.to_mono(waveform.T) - - # resample to 16 kHz if necessary - if sampling_rate != 16000: - waveform = librosa.resample(waveform, orig_sr=sampling_rate, target_sr=16000) - - # limit to 30 seconds - waveform = waveform[:16000*30] - - # make PyTorch tensor - waveform = torch.tensor(waveform) - return waveform - - -def predict(language, audio, mic_audio=None): - if mic_audio is not None: - sampling_rate, waveform = mic_audio - elif audio is not None: - sampling_rate, waveform = audio - else: - return "(please provide audio)" - - forced_decoder_ids = processor.get_decoder_prompt_ids(language=language, task="transcribe") - - waveform = process_audio(sampling_rate, waveform) - inputs = processor(audio=waveform, sampling_rate=16000, return_tensors="pt") - predicted_ids = model.generate(**inputs, max_length=400, forced_decoder_ids=forced_decoder_ids) - transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) - return transcription[0] - -supportLangs = ['english', 'chinese', 'german', 'spanish', 'russian', 'korean', 'french', 'japanese', 'portuguese'] - -title = "OpenAI Whisper Base" - -description = """ -本例用于演示 openai/whisper-base 模型的语音识别(ASR)能力。基于原始模型开发,没有对模型做微调。 本例默认输出为中文,Whisper识别出的是繁体中文。 - -Whisper包含多个不同大小的版本,理论来讲模型越大识别效果越好,模型越小速度越快 - -使用方法: 上传一个音频文件或直接在页面中录制音频。音频会在传递到模型之前转换为单声道并重新采样为16 kHz。 -""" - -article = """ - -## 音频案例: - -- "春日阳光普照大地,正是踏春好时节" 来源: 知琪(Zhiqi) -- "这是一年中最美味的团聚,也注定是一顿白感交集的晚餐。" 来源: 知厨(zhichu) -- "Hmm, I don't know" 来源: [InspectorJ](https://freesound.org/people/InspectorJ/sounds/519189) (CC BY 4.0 license) -- "Henry V" excerpt 来源: [acclivity](https://freesound.org/people/acclivity/sounds/24096) (CC BY-NC 4.0 license) -- "You can see it in the eyes" 来源: [JoyOhJoy](https://freesound.org/people/JoyOhJoy/sounds/165348) (CC0 license) -- "We yearn for time" 来源: [Sample_Me](https://freesound.org/people/Sample_Me/sounds/610529) (CC0 license) - -## 参考 - -- [OpenAI Whisper Base](https://huggingface.co/openai/whisper-base) -- [Innev GitHub](https://github.com/innev) - - -## 多语言支持 - -english, chinese, german, spanish, russian, korean, french, japanese, portuguese, turkish, polish, catalan, dutch, arabic, swedish, italian, indonesian, hindi, finnish, vietnamese, hebrew, ukrainian, greek, malay, czech, romanian, danish, hungarian, tamil, norwegian, thai, urdu, croatian, bulgarian, lithuanian, latin, maori, malayalam, welsh, slovak, telugu, persian, latvian, bengali, serbian, azerbaijani, slovenian, kannada, estonian, macedonian, breton, basque, icelandic, armenian, nepali, mongolian, bosnian, kazakh, albanian, swahili, galician, marathi, punjabi, sinhala, khmer, shona, yoruba, 
somali, afrikaans, occitan, georgian, belarusian, tajik, sindhi, gujarati, amharic, yiddish, lao, uzbek, faroese, haitian creole, pashto, turkmen, nynorsk, maltese, sanskrit, luxembourgish, myanmar, tibetan, tagalog, malagasy, assamese, tatar, hawaiian, lingala, hausa, bashkir, javanese, sundanese, burmese, valencian, flemish, haitian, letzeburgesch, pushto, panjabi, moldavian, moldovan, sinhalese, castilian - -## 模型版本 - -| 模型版本 | 参数大小 | 仅英语 | 多语言 | -|----------|------------|------------------------------------------------------|-----------------------------------------------------| -| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) | -| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) | -| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) | -| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) | -| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) | -| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) | -""" - -examples = [ - [None, "examples/zhiqi.wav", None], - [None, "examples/zhichu.wav", None], - [None, "examples/hmm_i_dont_know.wav", None], - [None, "examples/henry5.mp3", None], - [None, "examples/yearn_for_time.mp3", None], - [None, "examples/see_in_eyes.wav", None], -] - -gr.Interface( - fn=predict, - inputs=[ - gr.Radio(label="目标语言", choices=supportLangs, value="chinese"), - gr.Audio(label="上传语音", source="upload", type="numpy"), - gr.Audio(label="录制语音", source="microphone", type="numpy"), - ], - outputs=[ - gr.Text(label="识别出的文字"), - ], - title=title, - description=description, - article=article, - examples=examples, -).launch() \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Gta San Andreas Crack BEST! No Cd Serial Key Keygen.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Gta San Andreas Crack BEST! No Cd Serial Key Keygen.md deleted file mode 100644 index 3d5b737abd3bc006f4af42c472f44ae004dd30ed..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Gta San Andreas Crack BEST! No Cd Serial Key Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Gta San Andreas Crack! No Cd Serial Key keygen


        DOWNLOAD ⚙⚙⚙ https://urlin.us/2uEvgC



        -
        - d5da3c52bf
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/Digital Insanity Keygen Sony Vegas 13 Patch [BETTER].md b/spaces/inreVtussa/clothingai/Examples/Digital Insanity Keygen Sony Vegas 13 Patch [BETTER].md deleted file mode 100644 index 1df3a6f3a7f367dfacf9d22804a628977b013f06..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Digital Insanity Keygen Sony Vegas 13 Patch [BETTER].md +++ /dev/null @@ -1,6 +0,0 @@ -

        digital insanity keygen sony vegas 13 patch


        Download Zip: https://tiurll.com/2uClFW



        - - d5da3c52bf
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/Dilwale Dulhania Le Jayenge 4 Movie In Hindi Download Mp4.md b/spaces/inreVtussa/clothingai/Examples/Dilwale Dulhania Le Jayenge 4 Movie In Hindi Download Mp4.md deleted file mode 100644 index 9d0bfb6eb84ecfb809d2645be9a3c4e77496a9bf..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Dilwale Dulhania Le Jayenge 4 Movie In Hindi Download Mp4.md +++ /dev/null @@ -1,7 +0,0 @@ - -

        Dilwale Dulhania Le Jayenge (1995) 480p Bluray
        Picked this Dilwale Dulhania Le Jayenge (1995) 480p Bluray HD disc from one of the following sources: paid download (Mp3 or Video), media composer (M3U playlist), or another public download provider (code, links or banners). Click on Share under the video player to upload this video to video-sharing sites such as YouTube, Facebook, Vimeo, Dailymotion or Metacafe. If you added this video to this website, please add the following source: paid download (Mp3 or Video), media composer (M3U playlist), or another public download provider (code, links or banners).
        FULL HD 7.95 FPS
        4b766eca1b

        -

        Dilwale Dulhania Le Jayenge (1995) Dvdrip
        Picked this Dilwale Dulhania Le Jayenge (1995) Dvdrip HD disc from one of the following sources: paid download (Mp3 or Video), media composer (M3U playlist), or another public download provider (code, links or banners). Click on Share under the video player to upload this video to video-sharing sites such as YouTube, Facebook, Vimeo, Dailymotion or Metacafe. If you added this video to this website, please add the following source: paid download (Mp3 or Video), media composer (M3U playlist), or another public download provider (code, links or banners).
        720p 54.75 FPS.
        b059b3db37
        326a42eaab

        -

        Dilwale Dulhania Le Jayenge 4 movie in hindi download mp4


        Download: https://tiurll.com/2uCmgx



        -

        Dilwale Dulhania Le Jayenge (1995) 4K Free Download
        Picked this Dilwale Dulhania Le Jayenge (1995) 4K Free Download HD disc from one of the following sources: paid download (Mp3 or Video), media composer (M3U playlist), or another public download provider (code, links or banners). Click on Share under the video player to upload this video to video-sharing sites such as YouTube, Facebook, Vimeo, Dailymotion or Metacafe.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/isotope21/Musicgen/musicgen_app_1.py b/spaces/isotope21/Musicgen/musicgen_app_1.py deleted file mode 100644 index 934f34bdadc9f592d4406026328a511311bf3a60..0000000000000000000000000000000000000000 --- a/spaces/isotope21/Musicgen/musicgen_app_1.py +++ /dev/null @@ -1,434 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# Updated to account for UI changes from https://github.com/rkfg/audiocraft/blob/long/app.py -# also released under the MIT license. - -import argparse -from concurrent.futures import ProcessPoolExecutor -import os -from pathlib import Path -import subprocess as sp -from tempfile import NamedTemporaryFile -import time -import typing as tp -import warnings - -import torch -import gradio as gr - -from audiocraft.data.audio_utils import convert_audio -from audiocraft.data.audio import audio_write -from audiocraft.models import MusicGen, MultiBandDiffusion - - -MODEL = None # Last used model -IS_BATCHED = "facebook/MusicGen" in os.environ.get('SPACE_ID', '') -print(IS_BATCHED) -MAX_BATCH_SIZE = 12 -BATCHED_DURATION = 15 -INTERRUPTING = False -MBD = None -# We have to wrap subprocess call to clean a bit the log when using gr.make_waveform -_old_call = sp.call - - -def _call_nostderr(*args, **kwargs): - # Avoid ffmpeg vomiting on the logs. - kwargs['stderr'] = sp.DEVNULL - kwargs['stdout'] = sp.DEVNULL - _old_call(*args, **kwargs) - - -sp.call = _call_nostderr -# Preallocating the pool of processes. -pool = ProcessPoolExecutor(4) -pool.__enter__() - - -def interrupt(): - global INTERRUPTING - INTERRUPTING = True - - -class FileCleaner: - def __init__(self, file_lifetime: float = 3600): - self.file_lifetime = file_lifetime - self.files = [] - - def add(self, path: tp.Union[str, Path]): - self._cleanup() - self.files.append((time.time(), Path(path))) - - def _cleanup(self): - now = time.time() - for time_added, path in list(self.files): - if now - time_added > self.file_lifetime: - if path.exists(): - path.unlink() - self.files.pop(0) - else: - break - - -file_cleaner = FileCleaner() - - -def make_waveform(*args, **kwargs): - # Further remove some warnings. 
- be = time.time() - with warnings.catch_warnings(): - warnings.simplefilter('ignore') - out = gr.make_waveform(*args, **kwargs) - print("Make a video took", time.time() - be) - return out - - -def load_model(version='facebook/musicgen-melody'): - global MODEL - print("Loading model", version) - if MODEL is None or MODEL.name != version: - MODEL = MusicGen.get_pretrained(version) - - -def load_diffusion(): - global MBD - if MBD is None: - print("loading MBD") - MBD = MultiBandDiffusion.get_mbd_musicgen() - - -def _do_predictions(texts, melodies, duration, progress=False, **gen_kwargs): - MODEL.set_generation_params(duration=duration, **gen_kwargs) - print("new batch", len(texts), texts, [None if m is None else (m[0], m[1].shape) for m in melodies]) - be = time.time() - processed_melodies = [] - target_sr = 32000 - target_ac = 1 - for melody in melodies: - if melody is None: - processed_melodies.append(None) - else: - sr, melody = melody[0], torch.from_numpy(melody[1]).to(MODEL.device).float().t() - if melody.dim() == 1: - melody = melody[None] - melody = melody[..., :int(sr * duration)] - melody = convert_audio(melody, sr, target_sr, target_ac) - processed_melodies.append(melody) - - if any(m is not None for m in processed_melodies): - outputs = MODEL.generate_with_chroma( - descriptions=texts, - melody_wavs=processed_melodies, - melody_sample_rate=target_sr, - progress=progress, - return_tokens=USE_DIFFUSION - ) - else: - outputs = MODEL.generate(texts, progress=progress, return_tokens=USE_DIFFUSION) - if USE_DIFFUSION: - outputs_diffusion = MBD.tokens_to_wav(outputs[1]) - outputs = torch.cat([outputs[0], outputs_diffusion], dim=0) - outputs = outputs.detach().cpu().float() - pending_videos = [] - out_wavs = [] - for output in outputs: - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, output, MODEL.sample_rate, strategy="loudness", - loudness_headroom_db=16, loudness_compressor=True, add_suffix=False) - pending_videos.append(pool.submit(make_waveform, file.name)) - out_wavs.append(file.name) - file_cleaner.add(file.name) - out_videos = [pending_video.result() for pending_video in pending_videos] - for video in out_videos: - file_cleaner.add(video) - print("batch finished", len(texts), time.time() - be) - print("Tempfiles currently stored: ", len(file_cleaner.files)) - return out_videos, out_wavs - - -def predict_batched(texts, melodies): - max_text_length = 512 - texts = [text[:max_text_length] for text in texts] - load_model('facebook/musicgen-melody') - res = _do_predictions(texts, melodies, BATCHED_DURATION) - return res - - -def predict_full(model, decoder, text, melody, duration, topk, topp, temperature, cfg_coef, progress=gr.Progress()): - global INTERRUPTING - global USE_DIFFUSION - INTERRUPTING = False - if temperature < 0: - raise gr.Error("Temperature must be >= 0.") - if topk < 0: - raise gr.Error("Topk must be non-negative.") - if topp < 0: - raise gr.Error("Topp must be non-negative.") - - topk = int(topk) - if decoder == "MultiBand_Diffusion": - USE_DIFFUSION = True - load_diffusion() - else: - USE_DIFFUSION = False - load_model(model) - - def _progress(generated, to_generate): - progress((min(generated, to_generate), to_generate)) - if INTERRUPTING: - raise gr.Error("Interrupted.") - MODEL.set_custom_progress_callback(_progress) - - videos, wavs = _do_predictions( - [text], [melody], duration, progress=True, - top_k=topk, top_p=topp, temperature=temperature, cfg_coef=cfg_coef) - if USE_DIFFUSION: - return videos[0], wavs[0], 
videos[1], wavs[1] - return videos[0], wavs[0], None, None - - -def toggle_audio_src(choice): - if choice == "mic": - return gr.update(source="microphone", value=None, label="Microphone") - else: - return gr.update(source="upload", value=None, label="File") - - -def toggle_diffusion(choice): - if choice == "MultiBand_Diffusion": - return [gr.update(visible=True)] * 2 - else: - return [gr.update(visible=False)] * 2 - - -def ui_full(launch_kwargs): - with gr.Blocks() as interface: - gr.Markdown( - """ - #GROOTIN - - Creative MUSIC-AI Mozart - - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text(label="Input Text", interactive=True) - with gr.Column(): - radio = gr.Radio(["file", "mic"], value="file", - label="Condition on a melody (optional) File or Mic") - melody = gr.Audio(source="upload", type="numpy", label="File", - interactive=True, elem_id="melody-input") - with gr.Row(): - submit = gr.Button("Submit") - # Adapted from https://github.com/rkfg/audiocraft/blob/long/app.py, MIT license. - _ = gr.Button("Interrupt").click(fn=interrupt, queue=False) - with gr.Row(): - model = gr.Radio(["facebook/musicgen-melody", "facebook/musicgen-medium", "facebook/musicgen-small", - "facebook/musicgen-large"], - label="Model", value="facebook/musicgen-melody", interactive=True) - with gr.Row(): - decoder = gr.Radio(["Default", "MultiBand_Diffusion"], - label="Decoder", value="Default", interactive=True) - with gr.Row(): - duration = gr.Slider(minimum=1, maximum=120, value=10, label="Duration", interactive=True) - with gr.Row(): - topk = gr.Number(label="Top-k", value=250, interactive=True) - topp = gr.Number(label="Top-p", value=0, interactive=True) - temperature = gr.Number(label="Temperature", value=1.0, interactive=True) - cfg_coef = gr.Number(label="Classifier Free Guidance", value=3.0, interactive=True) - with gr.Column(): - output = gr.Video(label="Generated Music") - # audio_output = gr.Audio(label="Generated Music (wav)", type='filepath') - diffusion_output = gr.Video(label="MultiBand Diffusion Decoder") - audio_diffusion = gr.Audio(label="MultiBand Diffusion Decoder (wav)", type='filepath') - submit.click(toggle_diffusion, decoder, [diffusion_output, audio_diffusion], queue=False, - show_progress=False).then(predict_full, inputs=[model, decoder, text, melody, duration, topk, topp, - temperature, cfg_coef], - outputs=[output, diffusion_output, audio_diffusion]) - # outputs=[output, audio_output, diffusion_output, audio_diffusion]) - radio.change(toggle_audio_src, radio, [melody], queue=False, show_progress=False) - - gr.Examples( - fn=predict_full, - examples=[ - [], - [], - [], - [], - [], - [], - ], - inputs=[text, melody, model, decoder], - outputs=[output] - ) - gr.Markdown( - # """ - # ### More details - # - # The model will generate a short music extract based on the description you provided. - # The model can generate up to 30 seconds of audio in one pass. It is now possible - # to extend the generation by feeding back the end of the previous chunk of audio. - # This can take a long time, and the model might lose consistency. The model might also - # decide at arbitrary positions that the song ends. - # - # **WARNING:** Choosing long durations will take a long time to generate (2min might take ~10min). - # An overlap of 12 seconds is kept with the previously generated chunk, and 18 "new" seconds - # are generated each time. - # - # We present 4 model variations: - # 1. 
facebook/musicgen-melody -- a music generation model capable of generating music condition - # on text and melody inputs. **Note**, you can also use text only. - # 2. facebook/musicgen-small -- a 300M transformer decoder conditioned on text only. - # 3. facebook/musicgen-medium -- a 1.5B transformer decoder conditioned on text only. - # 4. facebook/musicgen-large -- a 3.3B transformer decoder conditioned on text only. - # - # We also present two way of decoding the audio tokens - # 1. Use the default GAN based compression model - # 2. Use MultiBand Diffusion from (paper linknano ) - # - # When using `facebook/musicgen-melody`, you can optionally provide a reference audio from - # which a broad melody will be extracted. The model will then try to follow both - # the description and melody provided. - # - # You can also use your own GPU or a Google Colab by following the instructions on our repo. - # See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - # for more details. - # """ - ) - - interface.queue().launch(**launch_kwargs) - - -def ui_batched(launch_kwargs): - with gr.Blocks() as demo: - gr.Markdown( - """ - # MusicGen - - This is the demo for [MusicGen](https://github.com/facebookresearch/audiocraft), - a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284). -
        - - Duplicate Space - for longer sequences, more control and no queue.

        - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text(label="Describe your music", lines=2, interactive=True) - with gr.Column(): - radio = gr.Radio(["file", "mic"], value="file", - label="Condition on a melody (optional) File or Mic") - melody = gr.Audio(source="upload", type="numpy", label="File", - interactive=True, elem_id="melody-input") - with gr.Row(): - submit = gr.Button("Generate") - with gr.Column(): - output = gr.Video(label="Generated Music") - audio_output = gr.Audio(label="Generated Music (wav)", type='filepath') - submit.click(predict_batched, inputs=[text, melody], - outputs=[output, audio_output], batch=True, max_batch_size=MAX_BATCH_SIZE) - radio.change(toggle_audio_src, radio, [melody], queue=False, show_progress=False) - gr.Examples( - fn=predict_batched, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - ], - [ - "90s rock song with electric guitar and heavy drums", - None, - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130", - "./assets/bach.mp3", - ], - [ - "lofi slow bpm electro chill with organic samples", - None, - ], - ], - inputs=[text, melody], - outputs=[output] - ) - gr.Markdown(""" - ### More details - - The model will generate 12 seconds of audio based on the description you provided. - You can optionally provide a reference audio from which a broad melody will be extracted. - The model will then try to follow both the description and melody provided. - All samples are generated with the `melody` model. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. - """) - - demo.queue(max_size=8 * 4).launch(**launch_kwargs) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - '--listen', - type=str, - default='0.0.0.0' if 'SPACE_ID' in os.environ else '127.0.0.1', - help='IP to listen on for connections to Gradio', - ) - parser.add_argument( - '--username', type=str, default='', help='Username for authentication' - ) - parser.add_argument( - '--password', type=str, default='', help='Password for authentication' - ) - parser.add_argument( - '--server_port', - type=int, - default=0, - help='Port to run the server listener on', - ) - parser.add_argument( - '--inbrowser', action='store_true', help='Open in browser' - ) - parser.add_argument( - '--share', action='store_true', help='Share the gradio UI' - ) - - args = parser.parse_args() - - launch_kwargs = {} - launch_kwargs['server_name'] = args.listen - - if args.username and args.password: - launch_kwargs['auth'] = (args.username, args.password) - if args.server_port: - launch_kwargs['server_port'] = args.server_port - if args.inbrowser: - launch_kwargs['inbrowser'] = args.inbrowser - if args.share: - launch_kwargs['share'] = args.share - - # Show the interface - if IS_BATCHED: - global USE_DIFFUSION - USE_DIFFUSION = False - ui_batched(launch_kwargs) - else: - ui_full(launch_kwargs) diff --git a/spaces/ivntl/MMS/tts.py b/spaces/ivntl/MMS/tts.py deleted file mode 100644 index dfc53054a7aac3bf651b2f5f6872dbfddf3500eb..0000000000000000000000000000000000000000 --- a/spaces/ivntl/MMS/tts.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os -import re -import tempfile -import torch -import sys -import gradio as gr - -from huggingface_hub import hf_hub_download - -# Setup TTS env -if "vits" not in sys.path: - sys.path.append("vits") - -from vits import commons, utils -from vits.models import SynthesizerTrn - - -TTS_LANGUAGES = {} -with open(f"data/tts/all_langs.tsv") as f: - for line in f: - iso, name = line.split(" ", 1) - TTS_LANGUAGES[iso] = name - - -class TextMapper(object): - def __init__(self, vocab_file): - self.symbols = [ - x.replace("\n", "") for x in open(vocab_file, encoding="utf-8").readlines() - ] - self.SPACE_ID = self.symbols.index(" ") - self._symbol_to_id = {s: i for i, s in enumerate(self.symbols)} - self._id_to_symbol = {i: s for i, s in enumerate(self.symbols)} - - def text_to_sequence(self, text, cleaner_names): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - """ - sequence = [] - clean_text = text.strip() - for symbol in clean_text: - symbol_id = self._symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - def uromanize(self, text, uroman_pl): - iso = "xxx" - with tempfile.NamedTemporaryFile() as tf, tempfile.NamedTemporaryFile() as tf2: - with open(tf.name, "w") as f: - f.write("\n".join([text])) - cmd = f"perl " + uroman_pl - cmd += f" -l {iso} " - cmd += f" < {tf.name} > {tf2.name}" - os.system(cmd) - outtexts = [] - with open(tf2.name) as f: - for line in f: - line = re.sub(r"\s+", " ", line).strip() - outtexts.append(line) - outtext = outtexts[0] - return outtext - - def get_text(self, text, hps): - text_norm = self.text_to_sequence(text, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def filter_oov(self, text, lang=None): - text = self.preprocess_char(text, lang=lang) - val_chars = self._symbol_to_id - txt_filt = "".join(list(filter(lambda x: x in val_chars, text))) - return txt_filt - - def preprocess_char(self, text, lang=None): - """ - Special treatement of characters in certain languages - """ - if lang == "ron": - text = text.replace("ț", "ţ") - print(f"{lang} (ț -> ţ): {text}") - return text - - -def synthesize(text, lang, speed=None): - if speed is None: - speed = 1.0 - - lang_code = lang.split()[0].strip() - - vocab_file = hf_hub_download( - repo_id="facebook/mms-tts", - filename="vocab.txt", - subfolder=f"models/{lang_code}", - ) - config_file = hf_hub_download( - repo_id="facebook/mms-tts", - filename="config.json", - subfolder=f"models/{lang_code}", - ) - g_pth = hf_hub_download( - repo_id="facebook/mms-tts", - filename="G_100000.pth", - subfolder=f"models/{lang_code}", - ) - - if torch.cuda.is_available(): - device = torch.device("cuda") - elif ( - hasattr(torch.backends, "mps") - and torch.backends.mps.is_available() - and torch.backends.mps.is_built() - ): - device = torch.device("mps") - else: - device = torch.device("cpu") - - print(f"Run inference with {device}") - - assert os.path.isfile(config_file), f"{config_file} doesn't exist" - hps = utils.get_hparams_from_file(config_file) - text_mapper = TextMapper(vocab_file) - net_g = SynthesizerTrn( - len(text_mapper.symbols), - 
hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model, - ) - net_g.to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(g_pth, net_g, None) - - is_uroman = hps.data.training_files.split(".")[-1] == "uroman" - - if is_uroman: - uroman_dir = "uroman" - assert os.path.exists(uroman_dir) - uroman_pl = os.path.join(uroman_dir, "bin", "uroman.pl") - text = text_mapper.uromanize(text, uroman_pl) - - text = text.lower() - text = text_mapper.filter_oov(text, lang=lang) - stn_tst = text_mapper.get_text(text, hps) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]).to(device) - hyp = ( - net_g.infer( - x_tst, - x_tst_lengths, - noise_scale=0.667, - noise_scale_w=0.8, - length_scale=1.0 / speed, - )[0][0, 0] - .cpu() - .float() - .numpy() - ) - - return gr.Audio.update(value=(hps.data.sampling_rate, hyp)), text - - -TTS_EXAMPLES = [ - ["I am going to the store.", "eng (English)"], - ["안녕하세요.", "kor (Korean)"], - ["क्या मुझे पीने का पानी मिल सकता है?", "hin (Hindi)"], - ["Tanış olmağıma çox şadam", "azj-script_latin (Azerbaijani, North)"], - ["Mu zo murna a cikin ƙasar.", "hau (Hausa)"], -] diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/torch_utils/custom_ops.py b/spaces/james-oldfield/PandA/networks/stylegan3/torch_utils/custom_ops.py deleted file mode 100644 index dd7cc046e925f58602154be9bdf678ca9d76f59f..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/torch_utils/custom_ops.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import glob -import hashlib -import importlib -import os -import re -import shutil -import uuid - -import torch -import torch.utils.cpp_extension -from torch.utils.file_baton import FileBaton - -#---------------------------------------------------------------------------- -# Global options. - -verbosity = 'brief' # Verbosity level: 'none', 'brief', 'full' - -#---------------------------------------------------------------------------- -# Internal helper funcs. - -def _find_compiler_bindir(): - patterns = [ - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin', - ] - for pattern in patterns: - matches = sorted(glob.glob(pattern)) - if len(matches): - return matches[-1] - return None - -#---------------------------------------------------------------------------- - -def _get_mangled_gpu_name(): - name = torch.cuda.get_device_name().lower() - out = [] - for c in name: - if re.match('[a-z0-9_-]+', c): - out.append(c) - else: - out.append('-') - return ''.join(out) - -#---------------------------------------------------------------------------- -# Main entry point for compiling and loading C++/CUDA plugins. 
- -_cached_plugins = dict() - -def get_plugin(module_name, sources, headers=None, source_dir=None, **build_kwargs): - assert verbosity in ['none', 'brief', 'full'] - if headers is None: - headers = [] - if source_dir is not None: - sources = [os.path.join(source_dir, fname) for fname in sources] - headers = [os.path.join(source_dir, fname) for fname in headers] - - # Already cached? - if module_name in _cached_plugins: - return _cached_plugins[module_name] - - # Print status. - if verbosity == 'full': - print(f'Setting up PyTorch plugin "{module_name}"...') - elif verbosity == 'brief': - print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True) - verbose_build = (verbosity == 'full') - - # Compile and load. - try: # pylint: disable=too-many-nested-blocks - # Make sure we can find the necessary compiler binaries. - if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0: - compiler_bindir = _find_compiler_bindir() - if compiler_bindir is None: - raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".') - os.environ['PATH'] += ';' + compiler_bindir - - # Some containers set TORCH_CUDA_ARCH_LIST to a list that can either - # break the build or unnecessarily restrict what's available to nvcc. - # Unset it to let nvcc decide based on what's available on the - # machine. - os.environ['TORCH_CUDA_ARCH_LIST'] = '' - - # Incremental build md5sum trickery. Copies all the input source files - # into a cached build directory under a combined md5 digest of the input - # source files. Copying is done only if the combined digest has changed. - # This keeps input file timestamps and filenames the same as in previous - # extension builds, allowing for fast incremental rebuilds. - # - # This optimization is done only in case all the source files reside in - # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR - # environment variable is set (we take this as a signal that the user - # actually cares about this.) - # - # EDIT: We now do it regardless of TORCH_EXTENSIOS_DIR, in order to work - # around the *.cu dependency bug in ninja config. - # - all_source_files = sorted(sources + headers) - all_source_dirs = set(os.path.dirname(fname) for fname in all_source_files) - if len(all_source_dirs) == 1: # and ('TORCH_EXTENSIONS_DIR' in os.environ): - - # Compute combined hash digest for all source files. - hash_md5 = hashlib.md5() - for src in all_source_files: - with open(src, 'rb') as f: - hash_md5.update(f.read()) - - # Select cached build directory name. - source_digest = hash_md5.hexdigest() - build_top_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access - cached_build_dir = os.path.join(build_top_dir, f'{source_digest}-{_get_mangled_gpu_name()}') - - if not os.path.isdir(cached_build_dir): - tmpdir = f'{build_top_dir}/srctmp-{uuid.uuid4().hex}' - os.makedirs(tmpdir) - for src in all_source_files: - shutil.copyfile(src, os.path.join(tmpdir, os.path.basename(src))) - try: - os.replace(tmpdir, cached_build_dir) # atomic - except OSError: - # source directory already exists, delete tmpdir and its contents. - shutil.rmtree(tmpdir) - if not os.path.isdir(cached_build_dir): raise - - # Compile. 
- cached_sources = [os.path.join(cached_build_dir, os.path.basename(fname)) for fname in sources] - torch.utils.cpp_extension.load(name=module_name, build_directory=cached_build_dir, - verbose=verbose_build, sources=cached_sources, **build_kwargs) - else: - torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs) - - # Load. - module = importlib.import_module(module_name) - - except: - if verbosity == 'brief': - print('Failed!') - raise - - # Print status and add to cache dict. - if verbosity == 'full': - print(f'Done setting up PyTorch plugin "{module_name}".') - elif verbosity == 'brief': - print('Done.') - _cached_plugins[module_name] = module - return module - -#---------------------------------------------------------------------------- diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/viz/stylemix_widget.py b/spaces/james-oldfield/PandA/networks/stylegan3/viz/stylemix_widget.py deleted file mode 100644 index 0d7bf3e6b4bed1f06774a9d4bd0797cf699f9142..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/viz/stylemix_widget.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import imgui -from gui_utils import imgui_utils - -#---------------------------------------------------------------------------- - -class StyleMixingWidget: - def __init__(self, viz): - self.viz = viz - self.seed_def = 1000 - self.seed = self.seed_def - self.animate = False - self.enables = [] - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - num_ws = viz.result.get('num_ws', 0) - num_enables = viz.result.get('num_ws', 18) - self.enables += [False] * max(num_enables - len(self.enables), 0) - - if show: - imgui.text('Stylemix') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 8), imgui_utils.grayed_out(num_ws == 0): - _changed, self.seed = imgui.input_int('##seed', self.seed) - imgui.same_line(viz.label_w + viz.font_size * 8 + viz.spacing) - with imgui_utils.grayed_out(num_ws == 0): - _clicked, self.animate = imgui.checkbox('Anim', self.animate) - - pos2 = imgui.get_content_region_max()[0] - 1 - viz.button_w - pos1 = pos2 - imgui.get_text_line_height() - viz.spacing - pos0 = viz.label_w + viz.font_size * 12 - imgui.push_style_var(imgui.STYLE_FRAME_PADDING, [0, 0]) - for idx in range(num_enables): - imgui.same_line(round(pos0 + (pos1 - pos0) * (idx / (num_enables - 1)))) - if idx == 0: - imgui.set_cursor_pos_y(imgui.get_cursor_pos_y() + 3) - with imgui_utils.grayed_out(num_ws == 0): - _clicked, self.enables[idx] = imgui.checkbox(f'##{idx}', self.enables[idx]) - if imgui.is_item_hovered(): - imgui.set_tooltip(f'{idx}') - imgui.pop_style_var(1) - - imgui.same_line(pos2) - imgui.set_cursor_pos_y(imgui.get_cursor_pos_y() - 3) - with imgui_utils.grayed_out(num_ws == 0): - if imgui_utils.button('Reset', width=-1, enabled=(self.seed != self.seed_def or self.animate or any(self.enables[:num_enables]))): - self.seed = self.seed_def - self.animate = False - self.enables = [False] * num_enables - - if any(self.enables[:num_ws]): - 
viz.args.stylemix_idx = [idx for idx, enable in enumerate(self.enables) if enable] - viz.args.stylemix_seed = self.seed & ((1 << 32) - 1) - if self.animate: - self.seed += 1 - -#---------------------------------------------------------------------------- diff --git a/spaces/janewu/hualao/README.md b/spaces/janewu/hualao/README.md deleted file mode 100644 index 542720b29b04484bb9666a7882d508581ef73afa..0000000000000000000000000000000000000000 --- a/spaces/janewu/hualao/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hualao -emoji: 🐨 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jbilcke-hf/MusicGen/tests/models/test_encodec_model.py b/spaces/jbilcke-hf/MusicGen/tests/models/test_encodec_model.py deleted file mode 100644 index 2f9c1db3f69a45f02451b71da95f44356811acbb..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/MusicGen/tests/models/test_encodec_model.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random - -import numpy as np -import torch - -from audiocraft.models import EncodecModel -from audiocraft.modules import SEANetEncoder, SEANetDecoder -from audiocraft.quantization import DummyQuantizer - - -class TestEncodecModel: - - def _create_encodec_model(self, - sample_rate: int, - channels: int, - dim: int = 5, - n_filters: int = 3, - n_residual_layers: int = 1, - ratios: list = [5, 4, 3, 2], - **kwargs): - frame_rate = np.prod(ratios) - encoder = SEANetEncoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - decoder = SEANetDecoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - quantizer = DummyQuantizer() - model = EncodecModel(encoder, decoder, quantizer, frame_rate=frame_rate, - sample_rate=sample_rate, channels=channels, **kwargs) - return model - - def test_model(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model = self._create_encodec_model(sample_rate, channels) - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - res = model(x) - assert res.x.shape == x.shape - - def test_model_renorm(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model_nonorm = self._create_encodec_model(sample_rate, channels, renormalize=False) - model_renorm = self._create_encodec_model(sample_rate, channels, renormalize=True) - - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - codes, scales = model_nonorm.encode(x) - codes, scales = model_renorm.encode(x) - assert scales is not None diff --git a/spaces/jbilcke-hf/observer/src/components/ui/dropdown-menu.tsx b/spaces/jbilcke-hf/observer/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 5803489a1d197a9db5018e413e63abe84b2efb8e..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/observer/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,200 +0,0 @@ -"use client" - -import * as React from "react" -import * as DropdownMenuPrimitive from "@radix-ui/react-dropdown-menu" -import { Check, ChevronRight, Circle } from "lucide-react" - -import { cn } 
from "@/lib/utils" - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, children, ...props }, ref) => ( - - {children} - - -)) -DropdownMenuSubTrigger.displayName = - DropdownMenuPrimitive.SubTrigger.displayName - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuCheckboxItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, checked, ...props }, ref) => ( - - - - - - - {children} - -)) -DropdownMenuCheckboxItem.displayName = - DropdownMenuPrimitive.CheckboxItem.displayName - -const DropdownMenuRadioItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -DropdownMenuRadioItem.displayName = DropdownMenuPrimitive.RadioItem.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = "DropdownMenuShortcut" - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuCheckboxItem, - DropdownMenuRadioItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuSubTrigger, - DropdownMenuRadioGroup, -} diff --git a/spaces/jbilcke-hf/webapp-factory-llama-node/README.md b/spaces/jbilcke-hf/webapp-factory-llama-node/README.md deleted file mode 100644 index c4f39d3eba56c2c3e2f305623984564f0021d6a1..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/webapp-factory-llama-node/README.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Webapp Factory llama-node -emoji: 🏭🦙 -colorFrom: brown -colorTo: red -sdk: docker -pinned: false -app_port: 7860 ---- - -A minimalist Docker project to generate apps on demand. - -Ready to be used in a Hugging Face Space. 
- -# Examples - -## Local prompt examples - -``` -http://localhost:7860/?prompt=a%20pong%20game%20clone%20in%20HTML,%20made%20using%20the%20canvas -``` -``` -http://localhost:7860/?prompt=a simple html canvas game where we need to feed tadpoles controlled by an AI. The tadpoles move randomly, but when the user click inside the canvas to add some kind of food, the tadpoles will compete to eat it. Tadpole who didn't eat will die, and those who ate will reproduce. -``` - -## Installation - -### Prerequisites - -**A powerful machine is required! You need at least 24 Gb of memory!** - -- Install NVM: https://github.com/nvm-sh/nvm -- Install Docker https://www.docker.com - -### Download the model - -``` -cd models -wget ADD https://huggingface.co/TheBloke/airoboros-13b-gpt4-GGML/resolve/main/airoboros-13b-gpt4.ggmlv3.q4_0.bin -``` - -Note: the Dockerfile script will do this automatically - -### Building and run without Docker - -```bash -nvm use -npm i -npm run start -``` - -### Building and running with Docker - -```bash -npm run docker -``` - -This script is a shortcut executing the following commands: - -```bash -docker build -t webapp-factory-llama-node . -docker run -it -p 7860:7860 webapp-factory-llama-node -``` - -### Deployment to Hugging Face - -The standard free CPU instance (16 Gb) will not be enough for this project, you should use the upgraded CPU instance (32 Gb) - diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/synthesizer/utils/__init__.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/synthesizer/utils/__init__.py deleted file mode 100644 index 5ae3e48110e61231acf1e666e5fa76af5e4ebdcd..0000000000000000000000000000000000000000 --- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/synthesizer/utils/__init__.py +++ /dev/null @@ -1,45 +0,0 @@ -import torch - - -_output_ref = None -_replicas_ref = None - -def data_parallel_workaround(model, *input): - global _output_ref - global _replicas_ref - device_ids = list(range(torch.cuda.device_count())) - output_device = device_ids[0] - replicas = torch.nn.parallel.replicate(model, device_ids) - # input.shape = (num_args, batch, ...) - inputs = torch.nn.parallel.scatter(input, device_ids) - # inputs.shape = (num_gpus, num_args, batch/num_gpus, ...) 
- replicas = replicas[:len(inputs)] - outputs = torch.nn.parallel.parallel_apply(replicas, inputs) - y_hat = torch.nn.parallel.gather(outputs, output_device) - _output_ref = outputs - _replicas_ref = replicas - return y_hat - - -class ValueWindow(): - def __init__(self, window_size=100): - self._window_size = window_size - self._values = [] - - def append(self, x): - self._values = self._values[-(self._window_size - 1):] + [x] - - @property - def sum(self): - return sum(self._values) - - @property - def count(self): - return len(self._values) - - @property - def average(self): - return self.sum / max(1, self.count) - - def reset(self): - self._values = [] diff --git a/spaces/joaogante/generate_quality_improvement/README.md b/spaces/joaogante/generate_quality_improvement/README.md deleted file mode 100644 index 6253e064e59fefb5a120102020c220ac9cc9fe0f..0000000000000000000000000000000000000000 --- a/spaces/joaogante/generate_quality_improvement/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Generate Quality Improvement -emoji: ⚡ -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/SHAKE256.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/SHAKE256.py deleted file mode 100644 index f75b8221dfe4663abfb46b6fe082dcf2bfafab28..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/SHAKE256.py +++ /dev/null @@ -1,130 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2015, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from Crypto.Util.py3compat import bord - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - create_string_buffer, - get_raw_buffer, c_size_t, - c_uint8_ptr, c_ubyte) - -from Crypto.Hash.keccak import _raw_keccak_lib - -class SHAKE256_XOF(object): - """A SHAKE256 hash object. 
- Do not instantiate directly. - Use the :func:`new` function. - - :ivar oid: ASN.1 Object ID - :vartype oid: string - """ - - # ASN.1 Object ID - oid = "2.16.840.1.101.3.4.2.12" - - def __init__(self, data=None): - state = VoidPointer() - result = _raw_keccak_lib.keccak_init(state.address_of(), - c_size_t(64), - c_ubyte(24)) - if result: - raise ValueError("Error %d while instantiating SHAKE256" - % result) - self._state = SmartPointer(state.get(), - _raw_keccak_lib.keccak_destroy) - self._is_squeezing = False - self._padding = 0x1F - - if data: - self.update(data) - - def update(self, data): - """Continue hashing of a message by consuming the next chunk of data. - - Args: - data (byte string/byte array/memoryview): The next chunk of the message being hashed. - """ - - if self._is_squeezing: - raise TypeError("You cannot call 'update' after the first 'read'") - - result = _raw_keccak_lib.keccak_absorb(self._state.get(), - c_uint8_ptr(data), - c_size_t(len(data))) - if result: - raise ValueError("Error %d while updating SHAKE256 state" - % result) - return self - - def read(self, length): - """ - Compute the next piece of XOF output. - - .. note:: - You cannot use :meth:`update` anymore after the first call to - :meth:`read`. - - Args: - length (integer): the amount of bytes this method must return - - :return: the next piece of XOF output (of the given length) - :rtype: byte string - """ - - self._is_squeezing = True - bfr = create_string_buffer(length) - result = _raw_keccak_lib.keccak_squeeze(self._state.get(), - bfr, - c_size_t(length), - c_ubyte(self._padding)) - if result: - raise ValueError("Error %d while extracting from SHAKE256" - % result) - - return get_raw_buffer(bfr) - - def new(self, data=None): - return type(self)(data=data) - - -def new(data=None): - """Return a fresh instance of a SHAKE256 object. - - Args: - data (bytes/bytearray/memoryview): - The very first chunk of the message to hash. - It is equivalent to an early call to :meth:`update`. - Optional. - - :Return: A :class:`SHAKE256_XOF` object - """ - - return SHAKE256_XOF(data=data) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/dictTools.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/dictTools.py deleted file mode 100644 index 259613b27048c458980986167d429847d270691f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/dictTools.py +++ /dev/null @@ -1,83 +0,0 @@ -"""Misc dict tools.""" - - -__all__ = ["hashdict"] - -# https://stackoverflow.com/questions/1151658/python-hashable-dicts -class hashdict(dict): - """ - hashable dict implementation, suitable for use as a key into - other dicts. - - >>> h1 = hashdict({"apples": 1, "bananas":2}) - >>> h2 = hashdict({"bananas": 3, "mangoes": 5}) - >>> h1+h2 - hashdict(apples=1, bananas=3, mangoes=5) - >>> d1 = {} - >>> d1[h1] = "salad" - >>> d1[h1] - 'salad' - >>> d1[h2] - Traceback (most recent call last): - ... 
- KeyError: hashdict(bananas=3, mangoes=5) - - based on answers from - http://stackoverflow.com/questions/1151658/python-hashable-dicts - - """ - - def __key(self): - return tuple(sorted(self.items())) - - def __repr__(self): - return "{0}({1})".format( - self.__class__.__name__, - ", ".join("{0}={1}".format(str(i[0]), repr(i[1])) for i in self.__key()), - ) - - def __hash__(self): - return hash(self.__key()) - - def __setitem__(self, key, value): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def __delitem__(self, key): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def clear(self): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def pop(self, *args, **kwargs): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def popitem(self, *args, **kwargs): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def setdefault(self, *args, **kwargs): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - def update(self, *args, **kwargs): - raise TypeError( - "{0} does not support item assignment".format(self.__class__.__name__) - ) - - # update is not ok because it mutates the object - # __add__ is ok because it creates a new object - # while the new object is under construction, it's ok to mutate it - def __add__(self, right): - result = hashdict(self) - dict.update(result, right) - return result diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/tree/summarize_query.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/tree/summarize_query.py deleted file mode 100644 index 9c61ee77759c505999424bb14f1a28511c427fef..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/query/tree/summarize_query.py +++ /dev/null @@ -1,60 +0,0 @@ -"""Summarize query.""" - -import logging -from typing import Any, List, Optional, cast - -from gpt_index.data_structs.data_structs import IndexGraph, Node -from gpt_index.indices.query.base import BaseGPTIndexQuery -from gpt_index.indices.query.embedding_utils import SimilarityTracker -from gpt_index.indices.query.schema import QueryBundle -from gpt_index.indices.response.builder import ResponseMode -from gpt_index.indices.utils import get_sorted_node_list - - -class GPTTreeIndexSummarizeQuery(BaseGPTIndexQuery[IndexGraph]): - """GPT Tree Index summarize query. - - This class builds a query-specific tree from leaf nodes to return a response. - Using this query mode means that the tree index doesn't need to be built - when initialized, since we rebuild the tree for each query. - - .. code-block:: python - - response = index.query("", mode="summarize") - - Args: - text_qa_template (Optional[QuestionAnswerPrompt]): Question-Answer Prompt - (see :ref:`Prompt-Templates`). 
- - """ - - def __init__( - self, - index_struct: IndexGraph, - num_children: int = 10, - **kwargs: Any, - ) -> None: - """Initialize params.""" - if "response_mode" in kwargs: - raise ValueError( - "response_mode should not be specified for summarize query" - ) - response_kwargs = kwargs.pop("response_kwargs", {}) - response_kwargs.update(num_children=num_children) - super().__init__( - index_struct, - response_mode=ResponseMode.TREE_SUMMARIZE, - response_kwargs=response_kwargs, - **kwargs, - ) - - def _get_nodes_for_response( - self, - query_bundle: QueryBundle, - similarity_tracker: Optional[SimilarityTracker] = None, - ) -> List[Node]: - """Get nodes for response.""" - logging.info(f"> Starting query: {query_bundle.query_str}") - index_struct = cast(IndexGraph, self._index_struct) - sorted_node_list = get_sorted_node_list(index_struct.all_nodes) - return sorted_node_list diff --git a/spaces/jordonpeter01/MusicGen/tests/modules/test_rope.py b/spaces/jordonpeter01/MusicGen/tests/modules/test_rope.py deleted file mode 100644 index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen/tests/modules/test_rope.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.modules.rope import RotaryEmbedding -from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend - - -def test_rope(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_rope_io_dtypes(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32) - rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64) - - # Test bfloat16 inputs w/ both 32 and 64 precision rope. - xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - - # Test float32 inputs w/ both 32 and 64 precision rope. 
- xq_32 = torch.rand((B, T, H, C)).to(torch.float32) - xk_32 = torch.rand((B, T, H, C)).to(torch.float32) - xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - - -def test_transformer_with_rope(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - for pos in ['rope', 'sin_rope']: - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding=pos) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - out = tr(x) - assert list(out.shape) == list(x.shape) - - -@torch.no_grad() -def test_rope_streaming(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, causal=True, dropout=0., - custom=True, positional_embedding='rope') - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -@torch.no_grad() -def test_rope_streaming_past_context(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - - for context in [None, 10]: - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=True, - dropout=0., positional_embedding='rope') - tr.eval() - - steps = 20 - x = torch.randn(3, steps, 16) - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_rope_memory_efficient(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - # Check at float precision b/c this is the rope default. 
- assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm() - - -def test_rope_with_xpos(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_positional_scale(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert torch.allclose(xq, xq_out) - assert torch.allclose(xk, xk_out) diff --git a/spaces/jotarodadada/animeCf/app.py b/spaces/jotarodadada/animeCf/app.py deleted file mode 100644 index 8694d698f10e3660a5107b7feec5e80e5a203a67..0000000000000000000000000000000000000000 --- a/spaces/jotarodadada/animeCf/app.py +++ /dev/null @@ -1,64 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
        ' - 'Thanks to the project open-sourced by Bilibili. Oversized images can run out of memory, so I crop the input images smaller; to try the effect on large images, please use the link above.<br>
        ' - 'The weight path can be switched; the leading number is the upscaling factor, and a factor that is too large will run out of memory<br>
        ' - 'Denoise versions (denoise): recommended when the source is noisy or badly compressed; the 2x model currently supports 3 denoise levels<br>
        ' - 'No-denoise version (no-denoise): recommended when the source has little noise and acceptable compression, but you want to raise resolution/clarity or do general enhancement and restoration<br>
        ' - 'Conservative version (conservative): recommended if you worry about losing texture, altering the art style, over-enhanced colors, or any other heavy AI processing artifacts.<br>
        ' - 'tile越大,越省显存,速度越慢') - iface.launch() diff --git a/spaces/jpfearnworks/ai_agents/modules/knowledge_retrieval/destination_chain.py b/spaces/jpfearnworks/ai_agents/modules/knowledge_retrieval/destination_chain.py deleted file mode 100644 index 6a74d1f10d9e9fb6c662f30094a085289a1503be..0000000000000000000000000000000000000000 --- a/spaces/jpfearnworks/ai_agents/modules/knowledge_retrieval/destination_chain.py +++ /dev/null @@ -1,54 +0,0 @@ -from modules.base.chain import IChain -from modules.base.llm_chain_config import LLMChainConfig -from modules.knowledge_retrieval.base.knowledge_domain import KnowledgeDomain -from modules.settings.user_settings import UserSettings -from typing import Dict , Any, Callable -import os - -class DestinationChain(IChain): - """ - DestinationChain Class - - Design: - The DestinationChain class extends the IChain interface and provides an implementation for the - run method. It follows the Liskov Substitution Principle (LSP) as it can be used wherever IChain - is expected. The class also adheres to the Dependency Inversion Principle (DIP) as it depends on - the abstraction (KnowledgeDomain) rather than a concrete class. - - Intended Implementation: - The DestinationChain class serves as a wrapper around a KnowledgeDomain instance. It implements - the run method from the IChain interface, which simply calls the generate_response method of the - KnowledgeDomain. As such, when the run method is called with a question as input, the - DestinationChain class will return a response generated by the KnowledgeDomain. - """ - knowledge_domain: KnowledgeDomain - api_key: str - llm: Any - display: Callable - usage: str - - def run(self, input: str) -> str: - return self.knowledge_domain.generate_response(input) - -class DestinationChainStrategy(DestinationChain): - """Base class for Chain Strategies""" - - def __init__(self, config: LLMChainConfig, display: Callable, knowledge_domain: KnowledgeDomain, usage: str): - settings = UserSettings.get_instance() - api_key = settings.get_api_key() - - super().__init__(api_key=api_key, knowledge_domain=knowledge_domain, llm=config.llm_class, display=display, usage=usage) - - self.llm = config.llm_class(temperature=config.temperature, max_tokens=config.max_tokens) - - self.usage = config.usage - - def run(self, question): - response = self.knowledge_domain.generate_response(question) - self.display(response) - return response - - -def get_chain_config(temperature: float = 0.7) -> LLMChainConfig: - usage = "This is the default chain model that should only be used as a last resort" - return LLMChainConfig(usage=usage) diff --git a/spaces/juliensimon/song-lyrics/README.md b/spaces/juliensimon/song-lyrics/README.md deleted file mode 100644 index 0966043404d58541bd7e76e873fd6277a0a18927..0000000000000000000000000000000000000000 --- a/spaces/juliensimon/song-lyrics/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Song Lyrics -emoji: 👁 -colorFrom: purple -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. 
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/justest/gpt4free/g4f/models.py b/spaces/justest/gpt4free/g4f/models.py deleted file mode 100644 index ecf18e6dffe029d6bbd651428094083c15b77283..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/models.py +++ /dev/null @@ -1,201 +0,0 @@ -from g4f import Provider - - -class Model: - class model: - name: str - base_provider: str - best_provider: str - - class gpt_35_turbo: - name: str = 'gpt-3.5-turbo' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Forefront - - class gpt_4: - name: str = 'gpt-4' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Bing - best_providers: list = [Provider.Bing, Provider.Lockchat] - - class claude_instant_v1_100k: - name: str = 'claude-instant-v1-100k' - base_provider: str = 'anthropic' - best_provider: Provider.Provider = Provider.Vercel - - class claude_instant_v1: - name: str = 'claude-instant-v1' - base_provider: str = 'anthropic' - best_provider: Provider.Provider = Provider.Vercel - - class claude_v1_100k: - name: str = 'claude-v1-100k' - base_provider: str = 'anthropic' - best_provider: Provider.Provider = Provider.Vercel - - class claude_v1: - name: str = 'claude-v1' - base_provider: str = 'anthropic' - best_provider: Provider.Provider = Provider.Vercel - - class alpaca_7b: - name: str = 'alpaca-7b' - base_provider: str = 'replicate' - best_provider: Provider.Provider = Provider.Vercel - - class stablelm_tuned_alpha_7b: - name: str = 'stablelm-tuned-alpha-7b' - base_provider: str = 'replicate' - best_provider: Provider.Provider = Provider.Vercel - - class bloom: - name: str = 'bloom' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class bloomz: - name: str = 'bloomz' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class flan_t5_xxl: - name: str = 'flan-t5-xxl' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class flan_ul2: - name: str = 'flan-ul2' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class gpt_neox_20b: - name: str = 'gpt-neox-20b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class oasst_sft_4_pythia_12b_epoch_35: - name: str = 'oasst-sft-4-pythia-12b-epoch-3.5' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class santacoder: - name: str = 'santacoder' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class command_medium_nightly: - name: str = 'command-medium-nightly' - base_provider: str = 'cohere' - best_provider: Provider.Provider = Provider.Vercel - - class command_xlarge_nightly: - name: str = 'command-xlarge-nightly' - base_provider: str = 'cohere' - best_provider: Provider.Provider = Provider.Vercel - - class code_cushman_001: - name: str = 'code-cushman-001' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class code_davinci_002: - name: str = 'code-davinci-002' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class 
text_ada_001: - name: str = 'text-ada-001' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class text_babbage_001: - name: str = 'text-babbage-001' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class text_curie_001: - name: str = 'text-curie-001' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class text_davinci_002: - name: str = 'text-davinci-002' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class text_davinci_003: - name: str = 'text-davinci-003' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class palm: - name: str = 'palm' - base_provider: str = 'google' - best_provider: Provider.Provider = Provider.Bard - - - """ 'falcon-40b': Model.falcon_40b, - 'falcon-7b': Model.falcon_7b, - 'llama-13b': Model.llama_13b,""" - - class falcon_40b: - name: str = 'falcon-40b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.H2o - - class falcon_7b: - name: str = 'falcon-7b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.H2o - - class llama_13b: - name: str = 'llama-13b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.H2o - -class ModelUtils: - convert: dict = { - 'gpt-3.5-turbo': Model.gpt_35_turbo, - 'gpt-4': Model.gpt_4, - - 'claude-instant-v1-100k': Model.claude_instant_v1_100k, - 'claude-v1-100k': Model.claude_v1_100k, - 'claude-instant-v1': Model.claude_instant_v1, - 'claude-v1': Model.claude_v1, - - 'alpaca-7b': Model.alpaca_7b, - 'stablelm-tuned-alpha-7b': Model.stablelm_tuned_alpha_7b, - - 'bloom': Model.bloom, - 'bloomz': Model.bloomz, - - 'flan-t5-xxl': Model.flan_t5_xxl, - 'flan-ul2': Model.flan_ul2, - - 'gpt-neox-20b': Model.gpt_neox_20b, - 'oasst-sft-4-pythia-12b-epoch-3.5': Model.oasst_sft_4_pythia_12b_epoch_35, - 'santacoder': Model.santacoder, - - 'command-medium-nightly': Model.command_medium_nightly, - 'command-xlarge-nightly': Model.command_xlarge_nightly, - - 'code-cushman-001': Model.code_cushman_001, - 'code-davinci-002': Model.code_davinci_002, - - 'text-ada-001': Model.text_ada_001, - 'text-babbage-001': Model.text_babbage_001, - 'text-curie-001': Model.text_curie_001, - 'text-davinci-002': Model.text_davinci_002, - 'text-davinci-003': Model.text_davinci_003, - - 'palm2': Model.palm, - 'palm': Model.palm, - 'google': Model.palm, - 'google-bard': Model.palm, - 'google-palm': Model.palm, - 'bard': Model.palm, - - 'falcon-40b': Model.falcon_40b, - 'falcon-7b': Model.falcon_7b, - 'llama-13b': Model.llama_13b, - } \ No newline at end of file diff --git a/spaces/jyseo/3DFuse/ldm/models/diffusion/ddim.py b/spaces/jyseo/3DFuse/ldm/models/diffusion/ddim.py deleted file mode 100644 index 192804ca0a932c5f5d50b5cbe5ac258ebe7cae15..0000000000000000000000000000000000000000 --- a/spaces/jyseo/3DFuse/ldm/models/diffusion/ddim.py +++ /dev/null @@ -1,338 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm - -from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != 
torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- dynamic_threshold=None, - ucg_schedule=None, - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - ctmp = conditioning[list(conditioning.keys())[0]] - while isinstance(ctmp, list): ctmp = ctmp[0] - cbs = ctmp.shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - elif isinstance(conditioning, list): - for ctmp in conditioning: - if ctmp.shape[0] != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold, - ucg_schedule=ucg_schedule - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, dynamic_threshold=None, - ucg_schedule=None): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. 
- mask) * img - - if ucg_schedule is not None: - assert len(ucg_schedule) == len(time_range) - unconditional_guidance_scale = ucg_schedule[i] - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - dynamic_threshold=dynamic_threshold) - - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, - dynamic_threshold=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - model_output = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - if isinstance(c, dict): - assert isinstance(unconditional_conditioning, dict) - c_in = dict() - for k in c: - if isinstance(c[k], list): - c_in[k] = [torch.cat([ - unconditional_conditioning[k][i], - c[k][i]]) for i in range(len(c[k]))] - else: - c_in[k] = torch.cat([ - unconditional_conditioning[k], - c[k]]) - elif isinstance(c, list): - c_in = list() - assert isinstance(unconditional_conditioning, list) - for i in range(len(c)): - c_in.append(torch.cat([unconditional_conditioning[i], c[i]])) - else: - c_in = torch.cat([unconditional_conditioning, c]) - - model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - model_output = model_uncond + unconditional_guidance_scale * (model_t - model_uncond) - - if self.model.parameterization == "v": - e_t = self.model.predict_eps_from_z_and_v(x, t, model_output) - else: - e_t = model_output - - if score_corrector is not None: - assert self.model.parameterization == "eps", 'not implemented' - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - if self.model.parameterization != "v": - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - else: - pred_x0 = self.model.predict_start_from_z_and_v(x, t, model_output) - - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - - if dynamic_threshold is not None: - raise 
NotImplementedError() - - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def encode(self, x0, c, t_enc, use_original_steps=False, return_intermediates=None, - unconditional_guidance_scale=1.0, unconditional_conditioning=None, callback=None): - num_reference_steps = self.ddpm_num_timesteps if use_original_steps else self.ddim_timesteps.shape[0] - - assert t_enc <= num_reference_steps - num_steps = t_enc - - if use_original_steps: - alphas_next = self.alphas_cumprod[:num_steps] - alphas = self.alphas_cumprod_prev[:num_steps] - else: - alphas_next = self.ddim_alphas[:num_steps] - alphas = torch.tensor(self.ddim_alphas_prev[:num_steps]) - - x_next = x0 - intermediates = [] - inter_steps = [] - for i in tqdm(range(num_steps), desc='Encoding Image'): - t = torch.full((x0.shape[0],), i, device=self.model.device, dtype=torch.long) - if unconditional_guidance_scale == 1.: - noise_pred = self.model.apply_model(x_next, t, c) - else: - assert unconditional_conditioning is not None - e_t_uncond, noise_pred = torch.chunk( - self.model.apply_model(torch.cat((x_next, x_next)), torch.cat((t, t)), - torch.cat((unconditional_conditioning, c))), 2) - noise_pred = e_t_uncond + unconditional_guidance_scale * (noise_pred - e_t_uncond) - - xt_weighted = (alphas_next[i] / alphas[i]).sqrt() * x_next - weighted_noise_pred = alphas_next[i].sqrt() * ( - (1 / alphas_next[i] - 1).sqrt() - (1 / alphas[i] - 1).sqrt()) * noise_pred - x_next = xt_weighted + weighted_noise_pred - if return_intermediates and i % ( - num_steps // return_intermediates) == 0 and i < num_steps - 1: - intermediates.append(x_next) - inter_steps.append(i) - elif return_intermediates and i >= num_steps - 2: - intermediates.append(x_next) - inter_steps.append(i) - if callback: callback(i) - - out = {'x_encoded': x_next, 'intermediate_steps': inter_steps} - if return_intermediates: - out.update({'intermediates': intermediates}) - return x_next, out - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False, callback=None): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, 
device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - if callback: callback(i) - return x_dec \ No newline at end of file diff --git a/spaces/k1ngtai/MMS/vits/modules.py b/spaces/k1ngtai/MMS/vits/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/k1ngtai/MMS/vits/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
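Both `ResidualCouplingLayer` above and the spline-based `ConvFlow` being defined here follow the same coupling pattern: half of the channels parameterize an invertible transform of the other half, so the inverse and the log-determinant stay cheap. A self-contained sketch of the affine case (illustrative only, not this repository's API):

```python
import torch

def affine_coupling(x, m, logs, reverse=False):
    # Shift/scale the second half of the channels with stats from the first half.
    x0, x1 = x.chunk(2, dim=1)
    if not reverse:
        y1 = m + x1 * torch.exp(logs)
        return torch.cat([x0, y1], dim=1), logs.sum(dim=[1, 2])  # value, log|det J|
    return torch.cat([x0, (x1 - m) * torch.exp(-logs)], dim=1)

x = torch.randn(2, 4, 8)
m, logs = torch.zeros(2, 2, 8), 0.1 * torch.randn(2, 2, 8)
y, logdet = affine_coupling(x, m, logs)
assert torch.allclose(affine_coupling(y, m, logs, reverse=True), x, atol=1e-6)
```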
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/kazuk/youtube-whisper-04/app.py b/spaces/kazuk/youtube-whisper-04/app.py deleted file mode 100644 index 4a61dc561a016c53ad93a3c556b0ef7bafa964eb..0000000000000000000000000000000000000000 --- a/spaces/kazuk/youtube-whisper-04/app.py +++ /dev/null @@ -1,66 +0,0 @@ -import gradio as gr -import whisper -from pytube import YouTube - -def get_audio(url): - yt = YouTube(url) - return yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4") - -def get_transcript(url, model_size, lang, format): - - model = whisper.load_model(model_size) - - if lang == "None": - lang = None - - result = model.transcribe(get_audio(url), fp16=False, language=lang) - - if format == "None": - return result["text"] - elif format == ".srt": - return format_to_srt(result["segments"]) - -def format_to_srt(segments): - output = "" - for i, segment in enumerate(segments): - output += f"{i + 1}\n" - output += f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n" - output += f"{segment['text']}\n\n" - return output - -def format_timestamp(t): - hh = t//3600 - mm = (t - hh*3600)//60 - ss = t - hh*3600 - mm*60 - mi = (t - int(t))*1000 - return f"{int(hh):02d}:{int(mm):02d}:{int(ss):02d},{int(mi):03d}" - - -langs = ["None"] + sorted(list(whisper.tokenizer.LANGUAGES.values())) -model_size = list(whisper._MODELS.keys()) - -with gr.Blocks() as demo: - - with gr.Row(): - - with gr.Column(): - - with gr.Row(): - url = gr.Textbox(placeholder='Youtube video URL', label='URL') - - with gr.Row(): - - model_size = gr.Dropdown(choices=model_size, value='tiny', label="Model") - lang = gr.Dropdown(choices=langs, value="None", label="Language (Optional)") - format = gr.Dropdown(choices=["None", ".srt"], value="None", label="Timestamps? (Optional)") - - with gr.Row(): - gr.Markdown("Larger models are more accurate, but slower. 
For 1min video, it'll take ~30s (tiny), ~1min (base), ~3min (small), ~5min (medium), etc.") - transcribe_btn = gr.Button('Transcribe') - - with gr.Column(): - outputs = gr.Textbox(placeholder='Transcription of the video', label='Transcription') - - transcribe_btn.click(get_transcript, inputs=[url, model_size, lang, format], outputs=outputs) - -demo.launch(debug=True) diff --git a/spaces/kdrkdrkdr/HutaoTTS/transforms.py b/spaces/kdrkdrkdr/HutaoTTS/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/HutaoTTS/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - 
min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - 
theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/kepl/gpt/client/css/typing.css b/spaces/kepl/gpt/client/css/typing.css deleted file mode 100644 index f998ebe7f2172e4ac23cdeff6ba6fd811b67a145..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/client/css/typing.css +++ /dev/null @@ -1,15 +0,0 @@ -.typing { - position: absolute; - top: -25px; - left: 0; - font-size: 14px; - animation: show_popup 0.4s; -} - -.typing-hiding { - animation: hide_popup 0.4s; -} - -.typing-hidden { - display: none; -} diff --git a/spaces/kepl/gpt/g4f/Provider/Providers/helpers/phind.py b/spaces/kepl/gpt/g4f/Provider/Providers/helpers/phind.py deleted file mode 100644 index 70525d51d849c43bd1cf29c7f9b18f22bff1e982..0000000000000000000000000000000000000000 --- a/spaces/kepl/gpt/g4f/Provider/Providers/helpers/phind.py +++ /dev/null @@ -1,69 +0,0 @@ -import sys -import json -import datetime -import urllib.parse - -from curl_cffi import requests - -config = json.loads(sys.argv[1]) -prompt = config['messages'][-1]['content'] - -skill = 'expert' if config['model'] == 'gpt-4' else 'intermediate' - -json_data = json.dumps({ - 'question': prompt, - 'options': { - 'skill': skill, - 'date': datetime.datetime.now().strftime('%d/%m/%Y'), - 'language': 'en', - 'detailed': True, - 'creative': True, - 'customLinks': []}}, separators=(',', ':')) - -headers = { - 'Content-Type': 'application/json', - 'Pragma': 'no-cache', - 'Accept': '*/*', - 'Sec-Fetch-Site': 'same-origin', - 'Accept-Language': 'en-GB,en;q=0.9', - 'Cache-Control': 'no-cache', - 'Sec-Fetch-Mode': 'cors', - 'Content-Length': str(len(json_data)), - 'Origin': 'https://www.phind.com', - 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.4 Safari/605.1.15', - 'Referer': f'https://www.phind.com/search?q={urllib.parse.quote(prompt)}&source=searchbox', - 'Connection': 'keep-alive', - 'Host': 'www.phind.com', - 'Sec-Fetch-Dest': 'empty' -} - - -def output(chunk): - try: - if b'PHIND_METADATA' in chunk: - return - - if chunk == b'data: \r\ndata: \r\ndata: \r\n\r\n': - chunk = b'data: \n\r\n\r\n' - - chunk = chunk.decode() - - chunk = chunk.replace('data: \r\n\r\ndata: ', 'data: \n') - chunk = chunk.replace('\r\ndata: \r\ndata: \r\n\r\n', '\n\r\n\r\n') - chunk = chunk.replace('data: ', '').replace('\r\n\r\n', '') - - print(chunk, flush=True, end = '') - - except json.decoder.JSONDecodeError: - pass - -while True: - try: - response = requests.post('https://www.phind.com/api/infer/answer', - headers=headers, data=json_data, content_callback=output, timeout=999999, impersonate='safari15_5') - - exit(0) - - except Exception as e: - print('an error occured, retrying... 
|', e, flush=True) - continue \ No newline at end of file diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/mkgui/base/ui/streamlit_utils.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/mkgui/base/ui/streamlit_utils.py deleted file mode 100644 index beb6e65c61f8a16b4376494123f31178cdb88bde..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/mkgui/base/ui/streamlit_utils.py +++ /dev/null @@ -1,13 +0,0 @@ -CUSTOM_STREAMLIT_CSS = """ -div[data-testid="stBlock"] button { - width: 100% !important; - margin-bottom: 20px !important; - border-color: #bfbfbf !important; -} -section[data-testid="stSidebar"] div { - max-width: 10rem; -} -pre code { - white-space: pre-wrap; -} -""" diff --git a/spaces/kquote03/lama-video-watermark-remover/models/ade20k/segm_lib/nn/parallel/__init__.py b/spaces/kquote03/lama-video-watermark-remover/models/ade20k/segm_lib/nn/parallel/__init__.py deleted file mode 100644 index 9b52f49cc0755562218a460483cbf02514ddd773..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/models/ade20k/segm_lib/nn/parallel/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .data_parallel import UserScatteredDataParallel, user_scattered_collate, async_copy_to diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/losses/fid/inception.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/losses/fid/inception.py deleted file mode 100644 index e9bd0863b457aaa40c770eaa4acbb142b18fc18b..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/losses/fid/inception.py +++ /dev/null @@ -1,323 +0,0 @@ -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision import models - -try: - from torchvision.models.utils import load_state_dict_from_url -except ImportError: - from torch.utils.model_zoo import load_url as load_state_dict_from_url - -# Inception weights ported to Pytorch from -# http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz -FID_WEIGHTS_URL = 'https://github.com/mseitzer/pytorch-fid/releases/download/fid_weights/pt_inception-2015-12-05-6726825d.pth' - - -LOGGER = logging.getLogger(__name__) - - -class InceptionV3(nn.Module): - """Pretrained InceptionV3 network returning feature maps""" - - # Index of default block of inception to return, - # corresponds to output of final average pooling - DEFAULT_BLOCK_INDEX = 3 - - # Maps feature dimensionality to their output blocks indices - BLOCK_INDEX_BY_DIM = { - 64: 0, # First max pooling features - 192: 1, # Second max pooling featurs - 768: 2, # Pre-aux classifier features - 2048: 3 # Final average pooling features - } - - def __init__(self, - output_blocks=[DEFAULT_BLOCK_INDEX], - resize_input=True, - normalize_input=True, - requires_grad=False, - use_fid_inception=True): - """Build pretrained InceptionV3 - - Parameters - ---------- - output_blocks : list of int - Indices of blocks to return features of. Possible values are: - - 0: corresponds to output of first max pooling - - 1: corresponds to output of second max pooling - - 2: corresponds to output which is fed to aux classifier - - 3: corresponds to output of final average pooling - resize_input : bool - If true, bilinearly resizes input to width and height 299 before - feeding input to model. 
As the network without fully connected - layers is fully convolutional, it should be able to handle inputs - of arbitrary size, so resizing might not be strictly needed - normalize_input : bool - If true, scales the input from range (0, 1) to the range the - pretrained Inception network expects, namely (-1, 1) - requires_grad : bool - If true, parameters of the model require gradients. Possibly useful - for finetuning the network - use_fid_inception : bool - If true, uses the pretrained Inception model used in Tensorflow's - FID implementation. If false, uses the pretrained Inception model - available in torchvision. The FID Inception model has different - weights and a slightly different structure from torchvision's - Inception model. If you want to compute FID scores, you are - strongly advised to set this parameter to true to get comparable - results. - """ - super(InceptionV3, self).__init__() - - self.resize_input = resize_input - self.normalize_input = normalize_input - self.output_blocks = sorted(output_blocks) - self.last_needed_block = max(output_blocks) - - assert self.last_needed_block <= 3, \ - 'Last possible output block index is 3' - - self.blocks = nn.ModuleList() - - if use_fid_inception: - inception = fid_inception_v3() - else: - inception = models.inception_v3(pretrained=True) - - # Block 0: input to maxpool1 - block0 = [ - inception.Conv2d_1a_3x3, - inception.Conv2d_2a_3x3, - inception.Conv2d_2b_3x3, - nn.MaxPool2d(kernel_size=3, stride=2) - ] - self.blocks.append(nn.Sequential(*block0)) - - # Block 1: maxpool1 to maxpool2 - if self.last_needed_block >= 1: - block1 = [ - inception.Conv2d_3b_1x1, - inception.Conv2d_4a_3x3, - nn.MaxPool2d(kernel_size=3, stride=2) - ] - self.blocks.append(nn.Sequential(*block1)) - - # Block 2: maxpool2 to aux classifier - if self.last_needed_block >= 2: - block2 = [ - inception.Mixed_5b, - inception.Mixed_5c, - inception.Mixed_5d, - inception.Mixed_6a, - inception.Mixed_6b, - inception.Mixed_6c, - inception.Mixed_6d, - inception.Mixed_6e, - ] - self.blocks.append(nn.Sequential(*block2)) - - # Block 3: aux classifier to final avgpool - if self.last_needed_block >= 3: - block3 = [ - inception.Mixed_7a, - inception.Mixed_7b, - inception.Mixed_7c, - nn.AdaptiveAvgPool2d(output_size=(1, 1)) - ] - self.blocks.append(nn.Sequential(*block3)) - - for param in self.parameters(): - param.requires_grad = requires_grad - - def forward(self, inp): - """Get Inception feature maps - - Parameters - ---------- - inp : torch.autograd.Variable - Input tensor of shape Bx3xHxW. Values are expected to be in - range (0, 1) - - Returns - ------- - List of torch.autograd.Variable, corresponding to the selected output - block, sorted ascending by index - """ - outp = [] - x = inp - - if self.resize_input: - x = F.interpolate(x, - size=(299, 299), - mode='bilinear', - align_corners=False) - - if self.normalize_input: - x = 2 * x - 1 # Scale from range (0, 1) to range (-1, 1) - - for idx, block in enumerate(self.blocks): - x = block(x) - if idx in self.output_blocks: - outp.append(x) - - if idx == self.last_needed_block: - break - - return outp - - -def fid_inception_v3(): - """Build pretrained Inception model for FID computation - - The Inception model for FID computation uses a different set of weights - and has a slightly different structure than torchvision's Inception. - - This method first constructs torchvision's Inception and then patches the - necessary parts that are different in the FID Inception model. 
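For context, typical use of the `InceptionV3` wrapper defined above for FID-style features might look like the sketch below; the shapes follow the class docstring and the final average-pooling block, and the first call fetches the pretrained FID weights (assumptions, not verified against this repository):

```python
import torch

model = InceptionV3(output_blocks=[InceptionV3.BLOCK_INDEX_BY_DIM[2048]]).eval()
images = torch.rand(4, 3, 299, 299)        # values expected in (0, 1)
with torch.no_grad():
    feats = model(images)[0]               # (4, 2048, 1, 1) after the avg pool block
feats = feats.squeeze(-1).squeeze(-1)      # (4, 2048) activations for FID statistics
```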
- """ - LOGGER.info('fid_inception_v3 called') - inception = models.inception_v3(num_classes=1008, - aux_logits=False, - pretrained=False) - LOGGER.info('models.inception_v3 done') - inception.Mixed_5b = FIDInceptionA(192, pool_features=32) - inception.Mixed_5c = FIDInceptionA(256, pool_features=64) - inception.Mixed_5d = FIDInceptionA(288, pool_features=64) - inception.Mixed_6b = FIDInceptionC(768, channels_7x7=128) - inception.Mixed_6c = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6d = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6e = FIDInceptionC(768, channels_7x7=192) - inception.Mixed_7b = FIDInceptionE_1(1280) - inception.Mixed_7c = FIDInceptionE_2(2048) - - LOGGER.info('fid_inception_v3 patching done') - - state_dict = load_state_dict_from_url(FID_WEIGHTS_URL, progress=True) - LOGGER.info('fid_inception_v3 weights downloaded') - - inception.load_state_dict(state_dict) - LOGGER.info('fid_inception_v3 weights loaded into model') - - return inception - - -class FIDInceptionA(models.inception.InceptionA): - """InceptionA block patched for FID computation""" - def __init__(self, in_channels, pool_features): - super(FIDInceptionA, self).__init__(in_channels, pool_features) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch5x5 = self.branch5x5_1(x) - branch5x5 = self.branch5x5_2(branch5x5) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionC(models.inception.InceptionC): - """InceptionC block patched for FID computation""" - def __init__(self, in_channels, channels_7x7): - super(FIDInceptionC, self).__init__(in_channels, channels_7x7) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch7x7 = self.branch7x7_1(x) - branch7x7 = self.branch7x7_2(branch7x7) - branch7x7 = self.branch7x7_3(branch7x7) - - branch7x7dbl = self.branch7x7dbl_1(x) - branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch7x7, branch7x7dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionE_1(models.inception.InceptionE): - """First InceptionE block patched for FID computation""" - def __init__(self, in_channels): - super(FIDInceptionE_1, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its 
average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionE_2(models.inception.InceptionE): - """Second InceptionE block patched for FID computation""" - def __init__(self, in_channels): - super(FIDInceptionE_2, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: The FID Inception model uses max pooling instead of average - # pooling. This is likely an error in this specific Inception - # implementation, as other Inception models use average pooling here - # (which matches the description in the paper). - branch_pool = F.max_pool2d(x, kernel_size=3, stride=1, padding=1) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) diff --git a/spaces/krisnadwipaj/interactive-dashboard/app.py b/spaces/krisnadwipaj/interactive-dashboard/app.py deleted file mode 100644 index 1b69953e43b9f8fcd75a5cffb577c2cd7d45f44b..0000000000000000000000000000000000000000 --- a/spaces/krisnadwipaj/interactive-dashboard/app.py +++ /dev/null @@ -1,55 +0,0 @@ -import streamlit as st -import pandas as pd -import plotly.express as px -import math - -complete_acc = pd.read_csv('completedacct.csv') -complete_district = pd.read_csv('completeddistrict.csv') -complete_loan = pd.read_csv('completedloan.csv') - -#clean data -complete_acc = complete_acc.drop(['year', 'month', 'day', 'date'], axis = 1) - -#set up the data -acc_loan = complete_loan.merge(complete_acc, how='inner', left_on='account_id', right_on='account_id') -acc_loan_district = acc_loan.merge(complete_district, how='inner', left_on='district_id', right_on='district_id') -year = acc_loan_district['year'].isin([2013, 2014]) -acc_loan_district = acc_loan_district[year] -total_loan = acc_loan_district['amount'].sum() -growth_rate = acc_loan_district.groupby('year')['amount'].sum().reset_index() -growth_rate['lead(1)'] = growth_rate['amount'].shift(1) -growth_rate['growth_rate'] = ((growth_rate['amount'] - growth_rate['lead(1)'])/growth_rate['lead(1)'])*100 -growth_rate_fix = math.ceil(growth_rate['growth_rate'].iloc[1]) -loan_per_city = acc_loan_district.groupby('city')['amount'].sum().reset_index().sort_values('amount', ascending = False) -loan_per_city = loan_per_city.head(10) -loan_per_region = acc_loan_district.groupby('region')['amount'].sum().reset_index().sort_values('amount', ascending = False) -region = acc_loan_district['region'].unique().tolist() -region.append('All') - - -def get_filtered_data(df, region): - if region == 'All': - mask_region = df['region'].isin(['Northeast', 'South', 'Midwest', 'West']) - else: - mask_region = df['region'] == region - return df[mask_region] - -def line_plot(df): - df = df.groupby('month')['amount'].sum().reset_index() - fig = px.line(df, x = df['month'], y = df['amount'], title = 'Trend Loan per Month') - return fig - -def bar_plot(df, x, y, title): - fig = px.bar(df, x=x, y=y, 
title=title) - return fig - -st.title('Total Loans Period 2013-2014') -col1, col2 = st.columns(2) -col1.metric(label='Total', value='$'+str(total_loan)) -col2.metric(label='Growth Rate in 12 Month', value='%'+str(growth_rate_fix)) -st.plotly_chart(bar_plot(loan_per_city, loan_per_city['city'], loan_per_city['amount'], 'By City')) -st.plotly_chart(bar_plot(loan_per_region, loan_per_region['region'], loan_per_region['amount'], 'By Region')) -st.markdown('Trend Filter') -selected_region = st.selectbox('Select Region', region, key = 'region') -df_filtered = get_filtered_data(acc_loan_district, selected_region) -st.plotly_chart(line_plot(df_filtered)) \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/altair/vegalite/v5/schema/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/altair/vegalite/v5/schema/__init__.py deleted file mode 100644 index 123a3fb5f048408f59a80cc0fa80097b652ceebb..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/altair/vegalite/v5/schema/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# ruff: noqa -from .core import * -from .channels import * -SCHEMA_VERSION = 'v5.8.0' -SCHEMA_URL = 'https://vega.github.io/schema/vega-lite/v5.8.0.json' diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attr/setters.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attr/setters.py deleted file mode 100644 index 12ed6750df35b96e2ccde24a9752dca22929188d..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/attr/setters.py +++ /dev/null @@ -1,73 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -Commonly used hooks for on_setattr. -""" - - -from . import _config -from .exceptions import FrozenAttributeError - - -def pipe(*setters): - """ - Run all *setters* and return the return value of the last one. - - .. versionadded:: 20.1.0 - """ - - def wrapped_pipe(instance, attrib, new_value): - rv = new_value - - for setter in setters: - rv = setter(instance, attrib, rv) - - return rv - - return wrapped_pipe - - -def frozen(_, __, ___): - """ - Prevent an attribute to be modified. - - .. versionadded:: 20.1.0 - """ - raise FrozenAttributeError() - - -def validate(instance, attrib, new_value): - """ - Run *attrib*'s validator on *new_value* if it has one. - - .. versionadded:: 20.1.0 - """ - if _config._run_validators is False: - return new_value - - v = attrib.validator - if not v: - return new_value - - v(instance, attrib, new_value) - - return new_value - - -def convert(instance, attrib, new_value): - """ - Run *attrib*'s converter -- if it has one -- on *new_value* and return the - result. - - .. versionadded:: 20.1.0 - """ - c = attrib.converter - if c: - return c(new_value) - - return new_value - - -# Sentinel for disabling class-wide *on_setattr* hooks for certain attributes. -# autodata stopped working, so the docstring is inlined in the API docs. 
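A minimal sketch of how these hooks are typically wired up with attrs (assuming attrs >= 20.1, where `on_setattr` was introduced); `NO_OP` just below is the per-attribute sentinel for disabling them:

```python
import attr
from attr import setters

@attr.s(on_setattr=setters.pipe(setters.convert, setters.validate))
class Point:
    x = attr.ib(converter=int, validator=attr.validators.instance_of(int))

p = Point("3")   # converter runs at init: p.x == 3
p.x = "7"        # on assignment: convert first, then validate -> stored as 7
```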
-NO_OP = object() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/__init__.py deleted file mode 100644 index ef44e8ac9c9913f13e748833a07fb90824f90463..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -import logging -from fontTools.misc.loggingTools import configLogger - -log = logging.getLogger(__name__) - -version = __version__ = "4.39.4" - -__all__ = ["version", "log", "configLogger"] diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/_transports/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/_transports/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/lRoz/j-hartmann-emotion-english-distilroberta-base/README.md b/spaces/lRoz/j-hartmann-emotion-english-distilroberta-base/README.md deleted file mode 100644 index 29a41e9501aaab0cb8a607eda8ae8611de57d20f..0000000000000000000000000000000000000000 --- a/spaces/lRoz/j-hartmann-emotion-english-distilroberta-base/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: J Hartmann Emotion English Distilroberta Base -emoji: 🐢 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/models/op/fused_bias_act.cpp b/spaces/lambdalabs/LambdaSuperRes/KAIR/models/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/models/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/leilevy/bingo/src/components/welcome-screen.tsx b/spaces/leilevy/bingo/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/leilevy/bingo/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' - }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
        - {exampleMessages.map(example => ( - - ))} -
        - ) -} diff --git a/spaces/leumastai/BackgroundChanger/commons/selfie_seg.py b/spaces/leumastai/BackgroundChanger/commons/selfie_seg.py deleted file mode 100644 index 1797ef80262a2429d64f4e564a277fe7e61672de..0000000000000000000000000000000000000000 --- a/spaces/leumastai/BackgroundChanger/commons/selfie_seg.py +++ /dev/null @@ -1,230 +0,0 @@ -import os -import cv2 -import mediapipe as mp -import numpy as np -from moviepy.editor import ( - VideoFileClip, AudioFileClip) - -mp_drawing = mp.solutions.drawing_utils -mp_selfie_segmentation = mp.solutions.selfie_segmentation - -imgs = os.listdir("bg_imgs/") -rnd_img = "bg_imgs/" + np.random.choice(imgs) -#IMAGE_FILES = ["/home/samuel/Documents/Computer Vision Codes/istockphoto-1193994027-170667a.jpg"] - - - -# For webcam input: -def load_from_webcam(bg_type: str = "blur"): - cap = cv2.VideoCapture(0) - with mp_selfie_segmentation.SelfieSegmentation( - model_selection=1) as selfie_segmentation: - - while cap.isOpened(): - success, image = cap.read() - if not success: - print("Ignoring empty camera frame.") - # If loading a video, use 'break' instead of 'continue'. - continue - - # Flip the image horizontally for a later selfie-view display, and convert - # the BGR image to RGB. - image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB) - # To improve performance, optionally mark the image as not writeable to - # pass by reference. - image.flags.writeable = False - results = selfie_segmentation.process(image) - - image.flags.writeable = True - image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) - - # Draw selfie segmentation on the background image. - # To improve segmentation around boundaries, consider applying a joint - # bilateral filter to "results.segmentation_mask" with "image". - condition = np.stack( - (results.segmentation_mask,) * 3, axis=-1) > 0.7 - # The background can be customized. - # a) Load an image (with the same width and height of the input image) to - # be the background, e.g., bg_image = cv2.imread('/path/to/image/file') - # b) Blur the input image by applying image filtering, e.g., - # bg_image = cv2.GaussianBlur(image,(55,55),0) - image_height, image_width, _ = image.shape - - if bg_type == "blur": - bg_image = cv2.GaussianBlur(image,(55,55),0) - - if bg_type == "random_image": - bg_image = cv2.resize(cv2.imread(rnd_img), (image_width, image_height)) - - if (bg_image is None) or (bg_image == "solid_colors"): - bg_image = np.zeros(image.shape, dtype=np.uint8) - bg_image[:] = np.random.randint(0, high=256, size=(3,)).tolist() - - output_image = np.where(condition, image, bg_image) - - cv2.imshow('MediaPipe Selfie Segmentation', output_image) - if cv2.waitKey(5) & 0xFF == ord("q"): - break - cap.release() - - - -# For static images -def load_from_static_image(file: cv2.Mat, bg_type: str = "solid_colors"): - with mp_selfie_segmentation.SelfieSegmentation( - model_selection=0) as selfie_segmentation: - #for idx, file in enumerate(IMAGE_FILES): - #print (file) - image = file #cv2.imread(file) - image_height, image_width, _ = image.shape - # Convert the BGR image to RGB before processing. - results = selfie_segmentation.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) - - # Draw selfie segmentation on the background image. - # To improve segmentation around boundaries, consider applying a joint - # bilateral filter to "results.segmentation_mask" with "image". 
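The comment above suggests refining the soft segmentation mask with a joint bilateral filter guided by the frame before thresholding. One hedged way to do that (requires the opencv-contrib-python package; kernel size and sigma values are illustrative):

```python
import cv2
import numpy as np

def refine_mask(image_bgr, mask):
    # Smooth the float mask while respecting edges in the guide image.
    guide = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mask_8u = (mask * 255).astype(np.uint8)
    refined = cv2.ximgproc.jointBilateralFilter(guide, mask_8u, 9, 75, 75)
    return refined.astype(np.float32) / 255.0
```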
- # increase threshold to 0.8 or reduce - condition = np.stack((results.segmentation_mask,) * 3, axis=-1) > 0.8 - - if bg_type == "blur": - bg_image = cv2.GaussianBlur(image, (55,55), 0) - if bg_type == "random_image": - bg_image = cv2.resize(cv2.imread(rnd_img), (image_width, image_height)) - if (bg_type is None) or (bg_type == "solid_colors"): - bg_image = np.zeros(image.shape, dtype=np.uint8) - bg_image[:] = np.random.randint(0, high=256, size=(3,)).tolist() - - output_image = np.where(condition, image, bg_image) - return output_image - - -# For Videos -def load_from_video(file: str, bg_type: str = "solid_colors"): - vcap = cv2.VideoCapture(file) - # Get video properties - frame_width = int(vcap.get(3)) - frame_height = int(vcap.get(4)) - vid_fps = int(vcap.get(5)) - - vid_size = (frame_width, frame_height) - - audio_path = "audio.mp3" - video_path = "output_video_from_file.mp4" - # *'h264' - output = cv2.VideoWriter(video_path, cv2.VideoWriter_fourcc(*'avc1'), vid_fps, vid_size) - - selfie_segmentation = mp_selfie_segmentation.SelfieSegmentation(model_selection=1) - solid_bg = np.random.randint(0, high=256, size=(3,)).tolist() - - while True: - success, image = vcap.read() - if success == True: - - image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB) - image.flags.writeable = False - - results = selfie_segmentation.process(image) - image.flags.writeable = True - - image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) - condition = np.stack( - (results.segmentation_mask, ) * 3, axis=-1) > 0.7 - - image_height, image_width = image.shape[:2] - - if bg_type == "blur": - bg_image = cv2.GaussianBlur(image, (55,55),0) - - if bg_type == "random_image": - bg_image = cv2.resize(cv2.imread(rnd_img), (image_width, image_height)) - - if (bg_type == None) | (bg_type == "solid_colors"): - bg_image = np.zeros(image.shape, dtype=np.uint8) - bg_image[:] = solid_bg - - output_image = np.where(condition, image, bg_image) - output.write(output_image) - - else: - print ("Video stream disconnected") - break - vcap.release() - output.release() - - try: - clip = VideoFileClip(file) - clip.audio.write_audiofile(audio_path) - - video_clip = VideoFileClip(video_path) - audio_clip = AudioFileClip(audio_path) - - if video_clip.end > audio_clip.end: - final_clip = video_clip.set_audio(audio_clip) - final_clip.write_videofile("final.mp4") - else: - audio_clip = audio_clip.subclip(0, video_clip.end) - final_clip = video_clip.set_audio(audio_clip) - final_clip.write_videofile("final.mp4") - - os.remove(video_path) - os.remove(audio_path) - except AttributeError: #i.e there's no audio in the video - return "/home/samuel/Documents/Computer Vision Codes/selfie_seg/output_video_from_file.mp4" - - - return "final.mp4" - - -if __name__ == "__main__": - - vp = "/home/samuel/Documents/Computer Vision Codes/Course Overview_5.mp4" - load_from_video(vp, bg_type="solid_colors") - - vp = "/home/samuel/Documents/Computer Vision Codes/Course Overview_5.mp4" - - - """ vcap = cv2.VideoCapture(vp) - frame_width = int(vcap.get(3)) - frame_height = int(vcap.get(4)) - frame_size = (frame_width,frame_height) - fps = int(vcap.get(5)) - - audio_path = "audio.mp3" - video_path = "output_video_from_file.mp4" - - output = cv2.VideoWriter(video_path, cv2.VideoWriter_fourcc('M','J','P','G'), fps, frame_size) - - clip = VideoFileClip(vp) - clip.audio.write_audiofile(audio_path) - - while True: - ret, frame = vcap.read() - if ret == True: - output.write(frame) - else: - print ("Video stram disconnected") - break - - vcap.release() - output.release() 
- - - video_clip = VideoFileClip(video_path) - audio_clip = AudioFileClip(audio_path) - - - if video_clip.end > audio_clip.end: - final_clip = video_clip.set_audio(audio_clip) - final_clip.write_videofile("final.mp4") - else: - audio_clip = audio_clip.subclip(0, video_clip.end) - final_clip = video_clip.set_audio(audio_clip) - final_clip.write_videofile("final.mp4") - - - os.remove(video_path) - os.remove(audio_path) """ - - - #final_output = os.system("ffmpeg -i " + video_path+" -i "+audio_path+" -c:v copy -c:a aac "+output_path) - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Como Instalar Matlab 2013 B [Extra Quality] Crack.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Como Instalar Matlab 2013 B [Extra Quality] Crack.md deleted file mode 100644 index a6887f150ce2e7747b4d15aa86d307332e181ddc..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Como Instalar Matlab 2013 B [Extra Quality] Crack.md +++ /dev/null @@ -1,77 +0,0 @@ -
        -

        Como Instalar Matlab 2013 B Crack

        -

        Matlab es un programa de lenguaje de alto nivel que se utiliza para realizar tareas computacionales de forma eficiente y rápida. Es una herramienta indispensable para la investigación y el desarrollo en campos como el control automático, el procesamiento de imágenes y señales, la ingeniería y las matemáticas. En este artículo te explicaremos como instalar Matlab 2013 B con crack en tu sistema operativo Windows.

        -

        Pasos previos a la instalación

        -

        Antes de instalar Matlab 2013 B con crack, debes asegurarte de que tu sistema cumpla con los requisitos mínimos. Estos son:

        -

        Como Instalar Matlab 2013 B Crack


        Download Zip ===== https://bytlly.com/2uGxZW



        -
          -
        • Sistema operativo: Windows XP, Vista, 7 u 8
        • -
        • Procesador: Pentium III o superior
        • -
        • Memoria RAM: 1 GB o más
        • -
        • Espacio en disco duro: 1 GB para Matlab solo, 3-4 GB para una instalación típica
        • -
        -

        También debes descargar el archivo Matlab_R2013a_Full_Setup.iso desde algún sitio web confiable. Este archivo contiene el programa y el crack que necesitarás para activarlo.

        -

        Pasos para la instalación

        -

        Una vez que tengas el archivo descargado, sigue estos pasos para instalar Matlab 2013 B con crack:

        -
          -
        1. Monta el archivo iso en una unidad virtual o extrae su contenido con algún programa como WinRAR o 7-Zip.
        2. -
        3. Ejecuta el archivo setup.exe como administrador y sigue las instrucciones del asistente de instalación.
        4. -
        5. Cuando te pida introducir la clave de producto, selecciona la opción "I have the File Installation Key for my license" e introduce el siguiente código: 25716-63335-16746-06072
        6. -
        7. Cuando te pida seleccionar los componentes a instalar, elige los que necesites según tu uso de Matlab. Recomendamos instalar al menos los siguientes: MATLAB, Simulink, Symbolic Math Toolbox y MuPAD Notebook App.
        8. -
        9. Cuando te pida especificar la carpeta de instalación, elige la que prefieras o deja la predeterminada.
        10. -
        11. Cuando te pida activar Matlab, selecciona la opción "Activate manually without the Internet" y haz clic en Next.
        12. -
        13. Cuando te pida seleccionar la licencia a activar, haz clic en Browse y busca el archivo license_standalone.lic que se encuentra en la carpeta crack del archivo iso.
        14. -
        15. Haz clic en Next y espera a que se complete la instalación.
        16. -
        17. Cuando termine la instalación, copia el archivo libmwservices.dll que se encuentra en la carpeta crack del archivo iso y pégalo en la carpeta bin\win32 o bin\win64 de tu carpeta de instalación de Matlab, según sea tu sistema operativo. Reemplaza el archivo existente si te lo pide.
        18. -
        19. Listo, ya puedes ejecutar Matlab 2013 B con crack desde el menú de inicio o desde el acceso directo que se creó en tu escritorio.
        20. -
        -

        Conclusión

        -

        En este artículo te hemos mostrado como instalar Matlab 2013 B con crack en tu sistema operativo Windows. Esperamos que te haya sido útil y que puedas disfrutar de este poderoso programa para realizar tus tareas computacionales. Recuerda que este método es solo para fines educativos y que debes adquirir una licencia legal si quieres usar Matlab de forma profesional.

        -

        Ventajas de usar Matlab 2013 B con crack

        -

        Matlab 2013 B con crack es una versión que te permite usar todas las funciones y herramientas de este programa sin tener que pagar una licencia. Esto te da la oportunidad de explorar y aprender Matlab sin limitaciones ni restricciones. Además, al instalar Matlab 2013 B con crack podrás disfrutar de las siguientes ventajas:

        -
          -
        • Acceso a las últimas actualizaciones y mejoras de Matlab.
        • -
        • Compatibilidad con otros programas y lenguajes como C, C++, Java, Python y Excel.
        • -
        • Posibilidad de crear y ejecutar aplicaciones gráficas de usuario (GUI) con facilidad.
        • -
        • Capacidad de generar código C/C++ o HDL a partir de tus modelos de Matlab.
        • -
        • Integración con hardware y dispositivos externos como cámaras, sensores, instrumentos y robots.
        • -
        -

        Matlab 2013 B con crack es una opción ideal para estudiantes, profesores, investigadores y profesionales que quieran usar este programa para fines educativos o personales. Sin embargo, si quieres usar Matlab para fines comerciales o profesionales, te recomendamos adquirir una licencia legal que te garantice el soporte técnico y la seguridad de tu trabajo.

        -

        Desventajas de usar Matlab 2013 B con crack

        -

        Aunque usar Matlab 2013 B con crack tiene sus ventajas, también tiene algunos inconvenientes que debes tener en cuenta antes de instalarlo. Estos son:

        -
          -
        • Vulnerabilidad a virus, malware y spyware que puedan dañar tu sistema o robar tu información.
        • -
        • Inestabilidad y errores en el funcionamiento del programa que puedan afectar tu trabajo o causar pérdida de datos.
        • -
        • Incompatibilidad con algunas funciones o herramientas de versiones posteriores de Matlab.
        • -
        • Infracción de los derechos de autor y la propiedad intelectual de Mathworks, la empresa creadora de Matlab.
        • -
        • Riesgo de sanciones legales o penales por parte de Mathworks o de terceros afectados por el uso ilegal de Matlab.
        • -
        -

        Por estas razones, te aconsejamos que uses Matlab 2013 B con crack bajo tu propia responsabilidad y que tomes las medidas necesarias para proteger tu sistema y tu trabajo. También te sugerimos que respetes las normas éticas y legales que rigen el uso de software y que apoyes el desarrollo e innovación de Matlab comprando una licencia legal si puedes hacerlo.

        -

        -

        Cómo usar Matlab 2013 B con crack

        -

        Una vez que hayas instalado Matlab 2013 B con crack, podrás usarlo para realizar tus tareas computacionales de forma sencilla y eficaz. Matlab tiene una interfaz gráfica de usuario (GUI) que te permite acceder a todas sus funciones y herramientas mediante menús, botones y ventanas. También tiene una ventana de comandos donde puedes escribir y ejecutar código en el lenguaje de Matlab. Además, puedes crear y editar archivos de script o función con el editor integrado de Matlab.

        -

        Para usar Matlab 2013 B con crack, solo tienes que seguir estos pasos:

        -
          -
        1. Abre Matlab desde el menú de inicio o desde el acceso directo que se creó en tu escritorio.
        2. -
        3. Espera a que se cargue el entorno de trabajo de Matlab y verás la ventana principal con las siguientes partes: la barra de herramientas, la barra de menús, la ventana de comandos, el explorador de archivos, el espacio de trabajo y el historial de comandos.
        4. -
        5. Selecciona la opción que quieras realizar desde los menús o las herramientas disponibles. Por ejemplo, puedes crear un nuevo archivo de script o función desde el menú File > New > Script o Function. También puedes abrir un archivo existente desde el menú File > Open o desde el explorador de archivos.
        6. -
        7. Escribe y ejecuta el código que quieras en el lenguaje de Matlab. Puedes usar las funciones y herramientas integradas de Matlab o crear las tuyas propias. También puedes usar las funciones y herramientas adicionales que hayas instalado junto con Matlab, como Symbolic Math Toolbox o MuPAD Notebook App.
        8. -
        9. Visualiza y analiza los resultados de tu código en la ventana de comandos, el espacio de trabajo o las ventanas gráficas que se generen. Puedes modificar los parámetros, las variables o los gráficos según tus necesidades.
        10. -
        11. Guarda y exporta tu trabajo desde el menú File > Save o File > Export. Puedes guardar tu código en formato .m o .mat. También puedes exportar tus datos o gráficos en otros formatos como .txt, .csv, .xls, .png, .pdf, etc.
        12. -
        -

        Cómo aprender Matlab 2013 B con crack

        -

        Si quieres aprender a usar Matlab 2013 B con crack de forma efectiva y aprovechar al máximo sus posibilidades, te recomendamos que sigas estos consejos:

        -
          -
        • Consulta la documentación oficial de Matlab que se encuentra en el menú Help > Documentation. Allí encontrarás información detallada sobre todas las funciones y herramientas de Matlab, así como ejemplos prácticos y tutoriales.
        • -
        • Busca recursos en línea sobre Matlab como cursos, libros, blogs, foros o vídeos. Hay muchos sitios web dedicados a enseñar y compartir conocimientos sobre Matlab. Algunos ejemplos son: Coursera, Udemy, Mathworks Blog, MATLAB Central o YouTube.
        • -
        • Practica y experimenta con Matlab lo más que puedas. La mejor forma de aprender es haciendo. Intenta resolver problemas reales o simulados con Matlab y compara tus resultados con los de otras fuentes. También puedes crear tus propios proyectos o desafíos con Matlab y compartirlos con otros usuarios.
        • -
        • Pide ayuda cuando la necesites. Si tienes alguna duda o dificultad con Matlab, no dudes en consultar a alguien que sepa más que tú. Puedes recurrir a tus profesores, compañeros, amigos o familiares que usen Matlab. También puedes hacer preguntas en sitios web especializados como Stack Overflow o MATLAB Answers.
        • -
        -

        Matlab 2013 B con crack es un programa muy potente y versátil que te permite realizar tareas computacionales de forma rápida y eficiente. Esperamos que este artículo te haya servido para instalarlo y usarlo correctamente. Recuerda que este método es solo para fines educativos y que debes respetar los derechos de autor y la propiedad intelectual de Mathworks.

        -

        Conclusión

        -

        En este artículo te hemos mostrado cómo instalar Matlab 2013 B con crack en tu sistema operativo Windows. También te hemos explicado cómo usar Matlab 2013 B con crack para realizar tus tareas computacionales de forma sencilla y eficaz. Además, te hemos dado algunos consejos para aprender Matlab 2013 B con crack y aprovechar al máximo sus posibilidades.

        -

        Matlab 2013 B con crack es una versión que te permite usar todas las funciones y herramientas de este programa sin tener que pagar una licencia. Esto te da la oportunidad de explorar y aprender Matlab sin limitaciones ni restricciones. Sin embargo, también tiene algunos inconvenientes que debes tener en cuenta antes de instalarlo, como la vulnerabilidad a virus, la inestabilidad, la incompatibilidad, la infracción de los derechos de autor y el riesgo de sanciones legales.

        -

        Por estas razones, te aconsejamos que uses Matlab 2013 B con crack bajo tu propia responsabilidad y que tomes las medidas necesarias para proteger tu sistema y tu trabajo. También te sugerimos que respetes las normas éticas y legales que rigen el uso de software y que apoyes el desarrollo e innovación de Matlab comprando una licencia legal si puedes hacerlo.

        -

        Esperamos que este artículo te haya sido útil y que puedas disfrutar de este poderoso programa para realizar tus tareas computacionales. Si quieres saber más sobre Matlab o sobre otros temas relacionados con la informática, la ingeniería o las matemáticas, te invitamos a visitar nuestro sitio web donde encontrarás más artículos, cursos, libros y vídeos de interés.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Euro Truck Simulator 2 V1.32.3s Utorrent [UPD].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Euro Truck Simulator 2 V1.32.3s Utorrent [UPD].md deleted file mode 100644 index 62606bd710d8b629046e5f67db1e07f2d9f602b3..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Euro Truck Simulator 2 V1.32.3s Utorrent [UPD].md +++ /dev/null @@ -1,6 +0,0 @@ -

        Euro Truck Simulator 2 V1.32.3s Utorrent


        Download Zip ===== https://bytlly.com/2uGya8



        -
        -Euro Truck Simulator 2 - game update v.1.32.3.14 - Download Game update (patch) to Euro Truck Simulator 2, ... Software in 2017, December 5, before this date movie is not available for download with uTorrent. ... Game patched to v1.32.3s 1fdad05405
        -
        -
        -

        diff --git a/spaces/lvwerra/bary_score/tests.py b/spaces/lvwerra/bary_score/tests.py deleted file mode 100644 index 601ed757507caebec67493462d11eb4c8901c2a1..0000000000000000000000000000000000000000 --- a/spaces/lvwerra/bary_score/tests.py +++ /dev/null @@ -1,17 +0,0 @@ -test_cases = [ - { - "predictions": [0, 0], - "references": [1, 1], - "result": {"metric_score": 0} - }, - { - "predictions": [1, 1], - "references": [1, 1], - "result": {"metric_score": 1} - }, - { - "predictions": [1, 0], - "references": [1, 1], - "result": {"metric_score": 0.5} - } -] \ No newline at end of file diff --git a/spaces/ma-xu/LIVE/pybind11/include/pybind11/complex.h b/spaces/ma-xu/LIVE/pybind11/include/pybind11/complex.h deleted file mode 100644 index f8327eb37307490b658becf3d151132ddb5df531..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/include/pybind11/complex.h +++ /dev/null @@ -1,65 +0,0 @@ -/* - pybind11/complex.h: Complex number support - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#pragma once - -#include "pybind11.h" -#include - -/// glibc defines I as a macro which breaks things, e.g., boost template names -#ifdef I -# undef I -#endif - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) - -template struct format_descriptor, detail::enable_if_t::value>> { - static constexpr const char c = format_descriptor::c; - static constexpr const char value[3] = { 'Z', c, '\0' }; - static std::string format() { return std::string(value); } -}; - -#ifndef PYBIND11_CPP17 - -template constexpr const char format_descriptor< - std::complex, detail::enable_if_t::value>>::value[3]; - -#endif - -PYBIND11_NAMESPACE_BEGIN(detail) - -template struct is_fmt_numeric, detail::enable_if_t::value>> { - static constexpr bool value = true; - static constexpr int index = is_fmt_numeric::index + 3; -}; - -template class type_caster> { -public: - bool load(handle src, bool convert) { - if (!src) - return false; - if (!convert && !PyComplex_Check(src.ptr())) - return false; - Py_complex result = PyComplex_AsCComplex(src.ptr()); - if (result.real == -1.0 && PyErr_Occurred()) { - PyErr_Clear(); - return false; - } - value = std::complex((T) result.real, (T) result.imag); - return true; - } - - static handle cast(const std::complex &src, return_value_policy /* policy */, handle /* parent */) { - return PyComplex_FromDoubles((double) src.real(), (double) src.imag()); - } - - PYBIND11_TYPE_CASTER(std::complex, _("complex")); -}; -PYBIND11_NAMESPACE_END(detail) -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) diff --git a/spaces/ma-xu/LIVE/thrust/testing/unittest/unittest.h b/spaces/ma-xu/LIVE/thrust/testing/unittest/unittest.h deleted file mode 100644 index 49c9daf429ade8877027382a22712a42677e6043..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/testing/unittest/unittest.h +++ /dev/null @@ -1,11 +0,0 @@ -#pragma once - -// this is the only header included by unittests -// it pulls in all the others used for unittesting - -#include -#include -#include -#include -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/device_malloc_allocator.h b/spaces/ma-xu/LIVE/thrust/thrust/device_malloc_allocator.h deleted file mode 100644 index e40c362e08dfd6111ebb0932530c4df10438249f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/device_malloc_allocator.h +++ /dev/null @@ -1,185 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed 
under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file device_malloc_allocator.h - * \brief An allocator which allocates storage with \p device_malloc - */ - -#pragma once - -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ - -// forward declarations to WAR circular #includes -template class device_ptr; -template device_ptr device_malloc(const std::size_t n); - -/*! \addtogroup memory_management Memory Management - * \addtogroup memory_management_classes Memory Management Classes - * \ingroup memory_management - * \{ - */ - -/*! \p device_malloc_allocator is a device memory allocator that employs the - * \p device_malloc function for allocation. - * - * \p device_malloc_allocator is deprecated in favor of thrust::mr - * memory resource-based allocators. - * - * \see device_malloc - * \see device_ptr - * \see device_allocator - * \see http://www.sgi.com/tech/stl/Allocators.html - */ -template - class device_malloc_allocator -{ - public: - /*! Type of element allocated, \c T. */ - typedef T value_type; - - /*! Pointer to allocation, \c device_ptr. */ - typedef device_ptr pointer; - - /*! \c const pointer to allocation, \c device_ptr. */ - typedef device_ptr const_pointer; - - /*! Reference to allocated element, \c device_reference. */ - typedef device_reference reference; - - /*! \c const reference to allocated element, \c device_reference. */ - typedef device_reference const_reference; - - /*! Type of allocation size, \c std::size_t. */ - typedef std::size_t size_type; - - /*! Type of allocation difference, \c pointer::difference_type. */ - typedef typename pointer::difference_type difference_type; - - /*! The \p rebind metafunction provides the type of a \p device_malloc_allocator - * instantiated with another type. - * - * \tparam U The other type to use for instantiation. - */ - template - struct rebind - { - /*! The typedef \p other gives the type of the rebound \p device_malloc_allocator. - */ - typedef device_malloc_allocator other; - }; // end rebind - - /*! No-argument constructor has no effect. */ - __host__ __device__ - inline device_malloc_allocator() {} - - /*! No-argument destructor has no effect. */ - __host__ __device__ - inline ~device_malloc_allocator() {} - - /*! Copy constructor has no effect. */ - __host__ __device__ - inline device_malloc_allocator(device_malloc_allocator const&) {} - - /*! Constructor from other \p device_malloc_allocator has no effect. */ - template - __host__ __device__ - inline device_malloc_allocator(device_malloc_allocator const&) {} - -#if THRUST_CPP_DIALECT >= 2011 - device_malloc_allocator & operator=(const device_malloc_allocator &) = default; -#endif - - /*! Returns the address of an allocated object. - * \return &r. - */ - __host__ __device__ - inline pointer address(reference r) { return &r; } - - /*! Returns the address an allocated object. - * \return &r. - */ - __host__ __device__ - inline const_pointer address(const_reference r) { return &r; } - - /*! Allocates storage for \p cnt objects. 
- * \param cnt The number of objects to allocate. - * \return A \p pointer to uninitialized storage for \p cnt objects. - * \note Memory allocated by this function must be deallocated with \p deallocate. - */ - __host__ - inline pointer allocate(size_type cnt, - const_pointer = const_pointer(static_cast(0))) - { - if(cnt > this->max_size()) - { - throw std::bad_alloc(); - } // end if - - return pointer(device_malloc(cnt)); - } // end allocate() - - /*! Deallocates storage for objects allocated with \p allocate. - * \param p A \p pointer to the storage to deallocate. - * \param cnt The size of the previous allocation. - * \note Memory deallocated by this function must previously have been - * allocated with \p allocate. - */ - __host__ - inline void deallocate(pointer p, size_type cnt) - { - // silence unused parameter warning while still leaving the parameter name for Doxygen - (void)(cnt); - - device_free(p); - } // end deallocate() - - /*! Returns the largest value \c n for which allocate(n) might succeed. - * \return The largest value \c n for which allocate(n) might succeed. - */ - inline size_type max_size() const - { - return (std::numeric_limits::max)() / sizeof(T); - } // end max_size() - - /*! Compares against another \p device_malloc_allocator for equality. - * \return \c true - */ - __host__ __device__ - inline bool operator==(device_malloc_allocator const&) const { return true; } - - /*! Compares against another \p device_malloc_allocator for inequality. - * \return \c false - */ - __host__ __device__ - inline bool operator!=(device_malloc_allocator const &a) const {return !operator==(a); } -}; // end device_malloc_allocator - -/*! \} - */ - -} // end thrust - - diff --git a/spaces/mahmuod/CLIP-Interrogator/README.md b/spaces/mahmuod/CLIP-Interrogator/README.md deleted file mode 100644 index 49e83a2bc7ca24ea655d72b3ea49fe3f9733fe30..0000000000000000000000000000000000000000 --- a/spaces/mahmuod/CLIP-Interrogator/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: CLIP Interrogator -emoji: 🕵️‍♂️ -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: true -license: mit -duplicated_from: pharma/CLIP-Interrogator ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mayajwilson76/insurance-stress-testing-demo/app.py b/spaces/mayajwilson76/insurance-stress-testing-demo/app.py deleted file mode 100644 index 99029bfb46d870e8c098ad49cc52aed6a188edb8..0000000000000000000000000000000000000000 --- a/spaces/mayajwilson76/insurance-stress-testing-demo/app.py +++ /dev/null @@ -1,312 +0,0 @@ -import requests -import json -import gradio as gr -# from concurrent.futures import ThreadPoolExecutor -import pdfplumber -import pandas as pd -import langchain -import time -from cnocr import CnOcr -import pinecone -import openai -from langchain.vectorstores import Pinecone -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.text_splitter import CharacterTextSplitter - -# from langchain.document_loaders import PyPDFLoader -from langchain.document_loaders import UnstructuredWordDocumentLoader -from langchain.document_loaders import UnstructuredPowerPointLoader -# from langchain.document_loaders.image import UnstructuredImageLoader - - -from langchain.chains.question_answering import load_qa_chain -from langchain import OpenAI - -from sentence_transformers import SentenceTransformer, models, util -word_embedding_model = 
models.Transformer('sentence-transformers/all-MiniLM-L6-v2', do_lower_case=True) -pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode='cls') -embedder = SentenceTransformer(modules=[word_embedding_model, pooling_model]) -ocr = CnOcr() -# chat_url = 'https://Raghav001-API.hf.space/sale' -chat_url = 'https://Raghav001-API.hf.space/chatpdf' -chat_emd = 'https://Raghav001-API.hf.space/embedd' -headers = { - 'Content-Type': 'application/json', -} -# thread_pool_executor = ThreadPoolExecutor(max_workers=4) -history_max_len = 500 -all_max_len = 3000 - - - -# Initialize Pinecone client and create an index -pinecone.init(api_key="ffb1f594-0915-4ebf-835f-c1eaa62fdcdc",environment = "us-west4-gcp-free") -index = pinecone.Index(index_name="test") - - -def get_emb(text): - emb_url = 'https://Raghav001-API.hf.space/embeddings' - data = {"content": text} - try: - result = requests.post(url=emb_url, - data=json.dumps(data), - headers=headers - ) - print("--------------------------------Embeddings-----------------------------------") - print(result.json()['data'][0]['embedding']) - return result.json()['data'][0]['embedding'] - except Exception as e: - print('data', data, 'result json', result.json()) - - -def doc_emb(doc: str): - texts = doc.split('\n') - # futures = [] - emb_list = embedder.encode(texts) - print('emb_list',emb_list) - # for text in texts: - # futures.append(thread_pool_executor.submit(get_emb, text)) - # for f in futures: - # emb_list.append(f.result()) - print('\n'.join(texts)) - pine(doc) - gr.Textbox.update(value="") - return texts, emb_list, gr.Textbox.update(visible=True), gr.Button.update(visible=True), gr.Markdown.update( - value="""success ! Let's talk"""), gr.Chatbot.update(visible=True) - - -def get_response(msg, bot, doc_text_list, doc_embeddings): - # future = thread_pool_executor.submit(get_emb, msg) - gr.Textbox.update(value="") - now_len = len(msg) - req_json = {'question': msg} - his_bg = -1 - for i in range(len(bot) - 1, -1, -1): - if now_len + len(bot[i][0]) + len(bot[i][1]) > history_max_len: - break - now_len += len(bot[i][0]) + len(bot[i][1]) - his_bg = i - req_json['history'] = [] if his_bg == -1 else bot[his_bg:] - # query_embedding = future.result() - query_embedding = embedder.encode([msg]) - cos_scores = util.cos_sim(query_embedding, doc_embeddings)[0] - score_index = [[score, index] for score, index in zip(cos_scores, [i for i in range(len(cos_scores))])] - score_index.sort(key=lambda x: x[0], reverse=True) - print('score_index:\n', score_index) - print('doc_emb_state', doc_emb_state) - index_set, sub_doc_list = set(), [] - for s_i in score_index: - doc = doc_text_list[s_i[1]] - if now_len + len(doc) > all_max_len: - break - index_set.add(s_i[1]) - now_len += len(doc) - # Maybe the paragraph is truncated wrong, so add the upper and lower paragraphs - if s_i[1] > 0 and s_i[1] -1 not in index_set: - doc = doc_text_list[s_i[1]-1] - if now_len + len(doc) > all_max_len: - break - index_set.add(s_i[1]-1) - now_len += len(doc) - if s_i[1] + 1 < len(doc_text_list) and s_i[1] + 1 not in index_set: - doc = doc_text_list[s_i[1]+1] - if now_len + len(doc) > all_max_len: - break - index_set.add(s_i[1]+1) - now_len += len(doc) - - index_list = list(index_set) - index_list.sort() - for i in index_list: - sub_doc_list.append(doc_text_list[i]) - req_json['doc'] = '' if len(sub_doc_list) == 0 else '\n'.join(sub_doc_list) - data = {"content": json.dumps(req_json)} - print('data:\n', req_json) - result = requests.post(url=chat_url, - 
data=json.dumps(data), - headers=headers - ) - res = result.json()['content'] - bot.append([msg, res]) - return bot[max(0, len(bot) - 3):] - - -def up_file(fls): - doc_text_list = [] - - - names = [] - print(names) - for i in fls: - names.append(str(i.name)) - - - pdf = [] - docs = [] - pptx = [] - - for i in names: - - if i[-3:] == "pdf": - pdf.append(i) - elif i[-4:] == "docx": - docs.append(i) - else: - pptx.append(i) - - - #Pdf Extracting - for idx, file in enumerate(pdf): - print("11111") - #print(file.name) - with pdfplumber.open(file) as pdf: - for i in range(len(pdf.pages)): - # Read page i+1 of a PDF document - page = pdf.pages[i] - res_list = page.extract_text().split('\n')[:-1] - - for j in range(len(page.images)): - # Get the binary stream of the image - img = page.images[j] - file_name = '{}-{}-{}.png'.format(str(time.time()), str(i), str(j)) - with open(file_name, mode='wb') as f: - f.write(img['stream'].get_data()) - try: - res = ocr.ocr(file_name) - # res = PyPDFLoader(file_name) - except Exception as e: - res = [] - if len(res) > 0: - res_list.append(' '.join([re['text'] for re in res])) - - tables = page.extract_tables() - for table in tables: - # The first column is used as the header - df = pd.DataFrame(table[1:], columns=table[0]) - try: - records = json.loads(df.to_json(orient="records", force_ascii=False)) - for rec in records: - res_list.append(json.dumps(rec, ensure_ascii=False)) - except Exception as e: - res_list.append(str(df)) - - doc_text_list += res_list - - #pptx Extracting - for i in pptx: - loader = UnstructuredPowerPointLoader(i) - data = loader.load() - # content = str(data).split("'") - # cnt = content[1] - # # c = cnt.split('\\n\\n') - # # final = "".join(c) - # c = cnt.replace('\\n\\n',"").replace("","").replace("\t","") - doc_text_list.append(data) - - - - #Doc Extracting - for i in docs: - loader = UnstructuredWordDocumentLoader(i) - data = loader.load() - # content = str(data).split("'") - # cnt = content[1] - # # c = cnt.split('\\n\\n') - # # final = "".join(c) - # c = cnt.replace('\\n\\n',"").replace("","").replace("\t","") - doc_text_list.append(data) - - # #Image Extraction - # for i in jpg: - # loader = UnstructuredImageLoader(i) - # data = loader.load() - # # content = str(data).split("'") - # # cnt = content[1] - # # # c = cnt.split('\\n\\n') - # # # final = "".join(c) - # # c = cnt.replace('\\n\\n',"").replace("","").replace("\t","") - # doc_text_list.append(data) - - doc_text_list = [str(text).strip() for text in doc_text_list if len(str(text).strip()) > 0] - # print(doc_text_list) - return gr.Textbox.update(value='\n'.join(doc_text_list), visible=True), gr.Button.update( - visible=True), gr.Markdown.update( - value="Processing") - - -def pine(data): - char_text_spliter = CharacterTextSplitter(chunk_size = 1000, chunk_overlap=0) - # doc_text = char_text_spliter.split_documents(data) - doc_spilt = [] - data = data.split(" ") - # print(len(data)) - - c = 0 - check = 0 - for i in data: - # print(i) - if c == 350: - text = " ".join(data[check: check + c]) - print(text) - print(check) - doc_spilt.append(text) - check = check + c - c = 0 - else: - c = c+1 - - - Embedding_model = "text-embedding-ada-002" - embeddings = OpenAIEmbeddings(openai_api_key=OpenAI_key) - - print(requests.post(url = chat_emd)) - - # embeddings = requests.post(url=chat_emd, - # data=json.dumps(data), - # headers=headers - # ) - - pinecone.init(api_key = "ffb1f594-0915-4ebf-835f-c1eaa62fdcdc", - environment = "us-west4-gcp-free" - ) - - index_name = "test" - docstore = 
Pinecone.from_texts([d for d in doc_spilt],embeddings,index_name = index_name,namespace='a1') - - return '' - -def get_answer(query_live): - - llm = OpenAI(temperature=0, openai='aaa') - qa_chain = load_qa_chain(llm,chain_type='stuff') - query = query_live - docs = docstore.similarity_search(query) - qa_chain.run(input_documents = docs, question = query) - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - file = gr.File(file_types=['.pdf'], label='Click to upload Document', file_count='multiple') - doc_bu = gr.Button(value='Submit', visible=False) - - - txt = gr.Textbox(label='result', visible=False) - - - doc_text_state = gr.State([]) - doc_emb_state = gr.State([]) - - with gr.Column(): - md = gr.Markdown("Please Upload the PDF") - chat_bot = gr.Chatbot(visible=False) - msg_txt = gr.Textbox(visible = False) - chat_bu = gr.Button(value='Clear', visible=False) - - file.change(up_file, [file], [txt, doc_bu, md]) #hiding the text - doc_bu.click(doc_emb, [txt], [doc_text_state, doc_emb_state, msg_txt, chat_bu, md, chat_bot]) - msg_txt.submit(get_response, [msg_txt, chat_bot,doc_text_state, doc_emb_state], [chat_bot],queue=False) - chat_bu.click(lambda: None, None, chat_bot, queue=False) - -if __name__ == "__main__": - demo.queue().launch(show_api=False) - # demo.queue().launch(share=False, server_name='172.22.2.54', server_port=9191) \ No newline at end of file diff --git a/spaces/merve/data-leak/public/measuring-fairness/gs.js b/spaces/merve/data-leak/public/measuring-fairness/gs.js deleted file mode 100644 index f3f72c87ecdb3e28fb4f4d198d70900b431151c2..0000000000000000000000000000000000000000 --- a/spaces/merve/data-leak/public/measuring-fairness/gs.js +++ /dev/null @@ -1,106 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - - -window.makeGS = function(){ - var gs = {} - - var bodySel = d3.select('body') - - var prevSlideIndex = -1 - function updateSlide(i){ - var slide = slides[i] - if (!slide) return - - gs.prevSlide = gs.curSlide - gs.curSlide = slide - - var dur = gs.prevSlide ? 
500*1 : 0 - - sel.personSel.transition().duration(dur) - .translate(d => d.pos[slide.pos]) - - sel.textSel.transition().duration(dur) - .at({fill: slide.textFill}) - - - sel.rectSel.transition('opacity').duration(dur) - .at({opacity: slide.rectOpacity}) - - if (!slide.animateThreshold){ - sel.rectSel.transition('fill').duration(dur) - .at({fill: slide.rectFill}) - - sel.textSel.transition('stroke').duration(dur) - .st({strokeWidth: slide.textStroke}) - - slider.setSlider(slide.threshold, true) - bodySel.transition('gs-tween') - } else { - sel.rectSel.transition('fill').duration(dur) - sel.textSel.transition('stroke').duration(dur) - - bodySel.transition('gs-tween').duration(dur*2) - .attrTween('gs-tween', () => { - var i = d3.interpolate(slider.threshold, slide.threshold) - - return t => { - slider.setSlider(i(t)) - } - }) - } - - - sel.truthAxis.transition().duration(dur) - .st({opacity: slide.truthAxisOpacity}) - - sel.mlAxis.transition().duration(dur) - .st({opacity: slide.mlAxisOpacity}) - - sel.fpAxis.transition().duration(dur) - .st({opacity: slide.fpAxisOpacity}) - - sel.sexAxis.transition().duration(dur) - .st({opacity: slide.sexAxisOpacity}) - - sel.brAxis.transition().duration(dur) - .st({opacity: slide.brAxisOpacity}) - - sel.botAxis.transition().duration(dur) - .translate(slide.botAxisY, 1) - - - prevSlideIndex = i - slides.curSlide = slide - } - - gs.graphScroll = d3.graphScroll() - .container(d3.select('.container-1')) - .graph(d3.selectAll('container-1 #graph')) - .eventId('uniqueId1') - .sections(d3.selectAll('.container-1 #sections > div')) - .offset(innerWidth < 900 ? 300 : 520) - .on('active', updateSlide) - - return gs -} - - - - - -if (window.init) window.init() diff --git a/spaces/merve/hidden-bias/index.html b/spaces/merve/hidden-bias/index.html deleted file mode 100644 index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - My static Space - - - -
-      Welcome to your static Space!
-      You can modify this app directly by editing index.html in the Files and versions tab.
-      Also don't forget to check the Spaces documentation.
        - - diff --git a/spaces/mfrashad/CharacterGAN/models/__init__.py b/spaces/mfrashad/CharacterGAN/models/__init__.py deleted file mode 100644 index 9941a7bb29d1b9a0a00f9cf90ddf2c48f1e38ed9..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. - -from .wrappers import * \ No newline at end of file diff --git a/spaces/mikeee/llama2-7b-chat-uncensored-ggml/README.md b/spaces/mikeee/llama2-7b-chat-uncensored-ggml/README.md deleted file mode 100644 index c969d36d31a1a5578d768e640fc1f18db39abdf0..0000000000000000000000000000000000000000 --- a/spaces/mikeee/llama2-7b-chat-uncensored-ggml/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: llama2-7b-chat-uncensored-ggml -emoji: 🚀 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: true -duplicated_from: mikeee/llama2-13b-ggml ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mikeee/radiobee-aligner/radiobee/detect.py b/spaces/mikeee/radiobee-aligner/radiobee/detect.py deleted file mode 100644 index 9a819925c6fe66d01568ead5150181f17dd83858..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/radiobee/detect.py +++ /dev/null @@ -1,81 +0,0 @@ -"""Detect language via polyglot and fastlid.""" -# pylint: disable= - -from typing import Any, Callable, List, Optional - -from polyglot.text import Detector -import polyglot.detect.base -from polyglot.detect.base import UnknownLanguage -from fastlid import fastlid - -from logzero import logger - -polyglot.detect.base.logger.setLevel("ERROR") - - -def with_func_attrs(**attrs: Any) -> Callable: - """Define func_attrs.""" - - def with_attrs(fct: Callable) -> Callable: - for key, val in attrs.items(): - setattr(fct, key, val) - return fct - - return with_attrs - - -# @with_func_attrs(set_languages=None) -# def detect(text: str) -> str: -def detect(text: str, set_languages: Optional[List[str]] = None) -> str: - """Detect language via polyglot and fastlid. 
- - check first with fastlid, if conf < 0.3, check with polyglot.text.Detector - - Alternative in detec_alt.py - """ - # if not text.strip(): return "en" - fastlid.set_languages = set_languages - lang, conf = fastlid(text) - detect.lang_conf = lang, conf - if conf >= 0.3 or lang in ["zh"]: - return lang - - try: - langs = [(elm.code[:2], elm.confidence) for elm in Detector(text).languages] - detect.lang_conf = langs - # lang, conf = _[0] - except UnknownLanguage: - if set_languages is None: - def_lang = "en" - else: - # def_lang = set_languages[-1] - def_lang = set_languages[0] - logger.warning(" UnknownLanguage exception: probably snippet too short, setting to %s", def_lang) - langs = [(def_lang, 0)] - except Exception as exc: - logger.error(exc) - langs = [("en", 0)] - - del conf - - # return first enrty's lang - if set_languages is None: - def_lang = langs[0][0] - else: - def_lang = "en" - - # pick the first in Detector(text).languages - - # just to silence pyright - # set_languages_: List[str] = [""] if set_languages is None else set_languages - - for elm in langs: - if elm[0] in set_languages: # type: ignore - def_lang = elm[0] - break - - # set_languages is set - if not isinstance(set_languages, (list, tuple)): - logger.warning("set_languages (%s) ought to be a list/tuple") - - return def_lang diff --git a/spaces/mira-causality/counterfactuals/app_utils.py b/spaces/mira-causality/counterfactuals/app_utils.py deleted file mode 100644 index e423b8c2bc87c636005fbceb744ed3f1f3f2db74..0000000000000000000000000000000000000000 --- a/spaces/mira-causality/counterfactuals/app_utils.py +++ /dev/null @@ -1,435 +0,0 @@ -import torch -import numpy as np -import networkx as nx -import matplotlib.pyplot as plt - -from PIL import Image - -from matplotlib import rc, patches, colors - -rc("font", **{"family": "serif", "serif": ["Roman"]}) -rc("text", usetex=True) -rc("image", interpolation="none") -rc("text.latex", preamble=r"\usepackage{amsmath} \usepackage{amssymb}") - -from datasets import get_attr_max_min - -HAMMER = np.array(Image.open("./hammer.png").resize((35, 35))) / 255 - - -class MidpointNormalize(colors.Normalize): - def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False): - self.midpoint = midpoint - colors.Normalize.__init__(self, vmin, vmax, clip) - - def __call__(self, value, clip=None): - v_ext = np.max([np.abs(self.vmin), np.abs(self.vmax)]) - x, y = [-v_ext, self.midpoint, v_ext], [0, 0.5, 1] - return np.ma.masked_array(np.interp(value, x, y)) - - -def postprocess(x): - return ((x + 1.0) * 127.5).squeeze().detach().cpu().numpy() - - -def mnist_graph(*args): - x, t, i, y = r"$\mathbf{x}$", r"$t$", r"$i$", r"$y$" - ut, ui, uy = r"$\mathbf{U}_t$", r"$\mathbf{U}_i$", r"$\mathbf{U}_y$" - zx, ex = r"$\mathbf{z}_{1:L}$", r"$\boldsymbol{\epsilon}$" - - G = nx.DiGraph() - G.add_edge(t, x) - G.add_edge(i, x) - G.add_edge(y, x) - G.add_edge(t, i) - G.add_edge(ut, t) - G.add_edge(ui, i) - G.add_edge(uy, y) - G.add_edge(zx, x) - G.add_edge(ex, x) - - pos = { - y: (0, 0), - uy: (-1, 0), - t: (0, 0.5), - ut: (0, 1), - x: (1, 0), - zx: (2, 0.375), - ex: (2, 0), - i: (1, 0.5), - ui: (1, 1), - } - - node_c = {} - for node in G: - node_c[node] = "lightgrey" if node in [x, t, i, y] else "white" - node_line_c = {k: "black" for k, _ in node_c.items()} - edge_c = {e: "black" for e in G.edges} - - if args[0]: # do_t - edge_c[(ut, t)] = "lightgrey" - # G.remove_edge(ut, t) - node_line_c[t] = "red" - if args[1]: # do_i - edge_c[(ui, i)] = "lightgrey" - edge_c[(t, i)] = "lightgrey" - # 
G.remove_edges_from([(ui, i), (t, i)]) - node_line_c[i] = "red" - if args[2]: # do_y - edge_c[(uy, y)] = "lightgrey" - # G.remove_edge(uy, y) - node_line_c[y] = "red" - - fs = 30 - options = { - "font_size": fs, - "node_size": 3000, - "node_color": list(node_c.values()), - "edgecolors": list(node_line_c.values()), - "edge_color": list(edge_c.values()), - "linewidths": 2, - "width": 2, - } - plt.close("all") - fig, ax = plt.subplots(1, 1, figsize=(6, 4.1)) # , constrained_layout=True) - # fig.patch.set_visible(False) - ax.margins(x=0.06, y=0.15, tight=False) - ax.axis("off") - nx.draw_networkx(G, pos, **options, arrowsize=25, arrowstyle="-|>", ax=ax) - # need to reuse x, y limits so that the graphs plot the same way before and after removing edges - x_lim = (-1.348, 2.348) - y_lim = (-0.215, 1.215) - ax.set_xlim(x_lim) - ax.set_ylim(y_lim) - rect = patches.FancyBboxPatch( - (1.75, -0.16), - 0.5, - 0.7, - boxstyle="round, pad=0.05, rounding_size=0", - linewidth=2, - edgecolor="black", - facecolor="none", - linestyle="-", - ) - ax.add_patch(rect) - ax.text(1.85, 0.65, r"$\mathbf{U}_{\mathbf{x}}$", fontsize=fs) - - if args[0]: # do_t - fig.figimage(HAMMER, 0.26 * fig.bbox.xmax, 0.525 * fig.bbox.ymax, zorder=10) - if args[1]: # do_i - fig.figimage(HAMMER, 0.5175 * fig.bbox.xmax, 0.525 * fig.bbox.ymax, zorder=11) - if args[2]: # do_y - fig.figimage(HAMMER, 0.26 * fig.bbox.xmax, 0.2 * fig.bbox.ymax, zorder=12) - - fig.tight_layout() - fig.canvas.draw() - return np.array(fig.canvas.renderer.buffer_rgba()) - - -def brain_graph(*args): - x, m, s, a, b, v = r"$\mathbf{x}$", r"$m$", r"$s$", r"$a$", r"$b$", r"$v$" - um, us, ua, ub, uv = ( - r"$\mathbf{U}_m$", - r"$\mathbf{U}_s$", - r"$\mathbf{U}_a$", - r"$\mathbf{U}_b$", - r"$\mathbf{U}_v$", - ) - zx, ex = r"$\mathbf{z}_{1:L}$", r"$\boldsymbol{\epsilon}$" - - G = nx.DiGraph() - G.add_edge(m, x) - G.add_edge(s, x) - G.add_edge(b, x) - G.add_edge(v, x) - G.add_edge(zx, x) - G.add_edge(ex, x) - G.add_edge(a, b) - G.add_edge(a, v) - G.add_edge(s, b) - G.add_edge(um, m) - G.add_edge(us, s) - G.add_edge(ua, a) - G.add_edge(ub, b) - G.add_edge(uv, v) - - pos = { - x: (0, 0), - zx: (-0.25, -1), - ex: (0.25, -1), - a: (0, 1), - ua: (0, 2), - s: (1, 0), - us: (1, -1), - b: (1, 1), - ub: (1, 2), - m: (-1, 0), - um: (-1, -1), - v: (-1, 1), - uv: (-1, 2), - } - - node_c = {} - for node in G: - node_c[node] = "lightgrey" if node in [x, m, s, a, b, v] else "white" - node_line_c = {k: "black" for k, _ in node_c.items()} - edge_c = {e: "black" for e in G.edges} - - if args[0]: # do_m - # G.remove_edge(um, m) - edge_c[(um, m)] = "lightgrey" - node_line_c[m] = "red" - if args[1]: # do_s - # G.remove_edge(us, s) - edge_c[(us, s)] = "lightgrey" - node_line_c[s] = "red" - if args[2]: # do_a - # G.remove_edge(ua, a) - edge_c[(ua, a)] = "lightgrey" - node_line_c[a] = "red" - if args[3]: # do_b - # G.remove_edges_from([(ub, b), (s, b), (a, b)]) - edge_c[(ub, b)] = "lightgrey" - edge_c[(s, b)] = "lightgrey" - edge_c[(a, b)] = "lightgrey" - node_line_c[b] = "red" - if args[4]: # do_v - # G.remove_edges_from([(uv, v), (a, v), (b, v)]) - edge_c[(uv, v)] = "lightgrey" - edge_c[(a, v)] = "lightgrey" - edge_c[(b, v)] = "lightgrey" - node_line_c[v] = "red" - - fs = 30 - options = { - "font_size": fs, - "node_size": 3000, - "node_color": list(node_c.values()), - "edgecolors": list(node_line_c.values()), - "edge_color": list(edge_c.values()), - "linewidths": 2, - "width": 2, - } - - plt.close("all") - fig, ax = plt.subplots(1, 1, figsize=(5, 5)) # , constrained_layout=True) - # 
fig.patch.set_visible(False) - ax.margins(x=0.1, y=0.08, tight=False) - ax.axis("off") - nx.draw_networkx(G, pos, **options, arrowsize=25, arrowstyle="-|>", ax=ax) - # need to reuse x, y limits so that the graphs plot the same way before and after removing edges - x_lim = (-1.32, 1.32) - y_lim = (-1.414, 2.414) - ax.set_xlim(x_lim) - ax.set_ylim(y_lim) - rect = patches.FancyBboxPatch( - (-0.5, -1.325), - 1, - 0.65, - boxstyle="round, pad=0.05, rounding_size=0", - linewidth=2, - edgecolor="black", - facecolor="none", - linestyle="-", - ) - ax.add_patch(rect) - # ax.text(1.85, 0.65, r"$\mathbf{U}_{\mathbf{x}}$", fontsize=fs) - - if args[0]: # do_m - fig.figimage(HAMMER, 0.0075 * fig.bbox.xmax, 0.395 * fig.bbox.ymax, zorder=10) - if args[1]: # do_s - fig.figimage(HAMMER, 0.72 * fig.bbox.xmax, 0.395 * fig.bbox.ymax, zorder=11) - if args[2]: # do_a - fig.figimage(HAMMER, 0.363 * fig.bbox.xmax, 0.64 * fig.bbox.ymax, zorder=12) - if args[3]: # do_b - fig.figimage(HAMMER, 0.72 * fig.bbox.xmax, 0.64 * fig.bbox.ymax, zorder=13) - if args[4]: # do_v - fig.figimage(HAMMER, 0.0075 * fig.bbox.xmax, 0.64 * fig.bbox.ymax, zorder=14) - else: # b -> v - a3 = patches.FancyArrowPatch( - (0.86, 1.21), - (-0.86, 1.21), - connectionstyle="arc3,rad=.3", - linewidth=2, - arrowstyle="simple, head_width=10, head_length=10", - color="k", - ) - ax.add_patch(a3) - # print(ax.get_xlim()) - # print(ax.get_ylim()) - fig.tight_layout() - fig.canvas.draw() - return np.array(fig.canvas.renderer.buffer_rgba()) - - -def chest_graph(*args): - x, a, d, r, s = r"$\mathbf{x}$", r"$a$", r"$d$", r"$r$", r"$s$" - ua, ud, ur, us = ( - r"$\mathbf{U}_a$", - r"$\mathbf{U}_d$", - r"$\mathbf{U}_r$", - r"$\mathbf{U}_s$", - ) - zx, ex = r"$\mathbf{z}_{1:L}$", r"$\boldsymbol{\epsilon}$" - - G = nx.DiGraph() - G.add_edge(ua, a) - G.add_edge(ud, d) - G.add_edge(ur, r) - G.add_edge(us, s) - G.add_edge(a, d) - G.add_edge(d, x) - G.add_edge(r, x) - G.add_edge(s, x) - G.add_edge(ex, x) - G.add_edge(zx, x) - G.add_edge(a, x) - - pos = { - x: (0, 0), - a: (-1, 1), - d: (0, 1), - r: (1, 1), - s: (1, 0), - ua: (-1, 2), - ud: (0, 2), - ur: (1, 2), - us: (1, -1), - zx: (-0.25, -1), - ex: (0.25, -1), - } - - node_c = {} - for node in G: - node_c[node] = "lightgrey" if node in [x, a, d, r, s] else "white" - - edge_c = {e: "black" for e in G.edges} - node_line_c = {k: "black" for k, _ in node_c.items()} - - if args[0]: # do_r - # G.remove_edge(ur, r) - edge_c[(ur, r)] = "lightgrey" - node_line_c[r] = "red" - if args[1]: # do_s - # G.remove_edges_from([(us, s)]) - edge_c[(us, s)] = "lightgrey" - node_line_c[s] = "red" - if args[2]: # do_f (do_d) - # G.remove_edges_from([(ud, d), (a, d)]) - edge_c[(ud, d)] = "lightgrey" - edge_c[(a, d)] = "lightgrey" - node_line_c[d] = "red" - if args[3]: # do_a - # G.remove_edge(ua, a) - edge_c[(ua, a)] = "lightgrey" - node_line_c[a] = "red" - - fs = 30 - options = { - "font_size": fs, - "node_size": 3000, - "node_color": list(node_c.values()), - "edgecolors": list(node_line_c.values()), - "edge_color": list(edge_c.values()), - "linewidths": 2, - "width": 2, - } - plt.close("all") - fig, ax = plt.subplots(1, 1, figsize=(5, 5)) # , constrained_layout=True) - # fig.patch.set_visible(False) - ax.margins(x=0.1, y=0.08, tight=False) - ax.axis("off") - nx.draw_networkx(G, pos, **options, arrowsize=25, arrowstyle="-|>", ax=ax) - # need to reuse x, y limits so that the graphs plot the same way before and after removing edges - x_lim = (-1.32, 1.32) - y_lim = (-1.414, 2.414) - ax.set_xlim(x_lim) - ax.set_ylim(y_lim) - rect = 
patches.FancyBboxPatch( - (-0.5, -1.325), - 1, - 0.65, - boxstyle="round, pad=0.05, rounding_size=0", - linewidth=2, - edgecolor="black", - facecolor="none", - linestyle="-", - ) - ax.add_patch(rect) - ax.text(-0.9, -1.075, r"$\mathbf{U}_{\mathbf{x}}$", fontsize=fs) - - if args[0]: # do_r - fig.figimage(HAMMER, 0.72 * fig.bbox.xmax, 0.64 * fig.bbox.ymax, zorder=10) - if args[1]: # do_s - fig.figimage(HAMMER, 0.72 * fig.bbox.xmax, 0.395 * fig.bbox.ymax, zorder=11) - if args[2]: # do_f - fig.figimage(HAMMER, 0.363 * fig.bbox.xmax, 0.64 * fig.bbox.ymax, zorder=12) - if args[3]: # do_a - fig.figimage(HAMMER, 0.0075 * fig.bbox.xmax, 0.64 * fig.bbox.ymax, zorder=13) - - fig.tight_layout() - fig.canvas.draw() - return np.array(fig.canvas.renderer.buffer_rgba()) - - -def vae_preprocess(args, pa): - if "ukbb" in args.hps: - # preprocessing ukbb parents for the vae which was originally trained using - # log standardized parents. The pgm was trained using [-1,1] normalization - # first undo [-1,1] parent preprocessing back to original range - for k, v in pa.items(): - if k != "mri_seq" and k != "sex": - pa[k] = (v + 1) / 2 # [-1,1] -> [0,1] - _max, _min = get_attr_max_min(k) - pa[k] = pa[k] * (_max - _min) + _min - # log_standardize parents for vae input - for k, v in pa.items(): - logpa_k = torch.log(v.clamp(min=1e-12)) - if k == "age": - pa[k] = (logpa_k - 4.112339973449707) / 0.11769197136163712 - elif k == "brain_volume": - pa[k] = (logpa_k - 13.965583801269531) / 0.09537758678197861 - elif k == "ventricle_volume": - pa[k] = (logpa_k - 10.345998764038086) / 0.43127763271331787 - # concatenate parents expand to input res for conditioning the vae - pa = torch.cat( - [pa[k] if len(pa[k].shape) > 1 else pa[k][..., None] for k in args.parents_x], - dim=1, - ) - pa = ( - pa[..., None, None].repeat(1, 1, *(args.input_res,) * 2).to(args.device).float() - ) - return pa - - -def preprocess_brain(args, obs): - obs["x"] = (obs["x"][None, ...].float().to(args.device) - 127.5) / 127.5 # [-1,1] - # for all other variables except x - for k in [k for k in obs.keys() if k != "x"]: - obs[k] = obs[k].float().to(args.device).view(1, 1) - if k in ["age", "brain_volume", "ventricle_volume"]: - k_max, k_min = get_attr_max_min(k) - obs[k] = (obs[k] - k_min) / (k_max - k_min) # [0,1] - obs[k] = 2 * obs[k] - 1 # [-1,1] - return obs - - -def get_fig_arr(x, width=4, height=4, dpi=144, cmap="Greys_r", norm=None): - fig = plt.figure(figsize=(width, height), dpi=dpi) - ax = plt.axes([0, 0, 1, 1], frameon=False) - if cmap == "Greys_r": - ax.imshow(x, cmap=cmap, vmin=0, vmax=255) - else: - ax.imshow(x, cmap=cmap, norm=norm) - ax.axis("off") - fig.canvas.draw() - return np.array(fig.canvas.renderer.buffer_rgba()) - - -def normalize(x, x_min=None, x_max=None, zero_one=False): - if x_min is None: - x_min = x.min() - if x_max is None: - x_max = x.max() - x = (x - x_min) / (x_max - x_min) # [0,1] - return x if zero_one else 2 * x - 1 # else [-1,1] diff --git a/spaces/miracle01/speechemotion/melspec.py b/spaces/miracle01/speechemotion/melspec.py deleted file mode 100644 index a57c82d9cdcc59b874d26f7b371cbc8e9d6feee9..0000000000000000000000000000000000000000 --- a/spaces/miracle01/speechemotion/melspec.py +++ /dev/null @@ -1,113 +0,0 @@ -import numpy as np -import cv2 -import librosa -import librosa.display -from tensorflow.keras.models import load_model -from datetime import datetime -import matplotlib.pyplot as plt - -# constants -starttime = datetime.now() - -CAT6 = ['fear', 'angry', 'neutral', 'happy', 'sad', 'surprise'] -CAT7 = 
['fear', 'disgust', 'neutral', 'happy', 'sad', 'surprise', 'angry'] -CAT3 = ["positive", "neutral", "negative"] - -COLOR_DICT = {"neutral": "grey", - "positive": "green", - "happy": "green", - "surprise": "orange", - "fear": "purple", - "negative": "red", - "angry": "red", - "sad": "lightblue", - "disgust":"brown"} - -TEST_CAT = ['fear', 'disgust', 'neutral', 'happy', 'sad', 'surprise', 'angry'] -TEST_PRED = np.array([.3,.3,.4,.1,.6,.9,.1]) - -# page settings -# st.set_page_config(page_title="SER web-app", page_icon=":speech_balloon:", layout="wide") - -def get_melspec(audio): - y, sr = librosa.load(audio, sr=44100) - X = librosa.stft(y) - Xdb = librosa.amplitude_to_db(abs(X)) - img = np.stack((Xdb,) * 3,-1) - img = img.astype(np.uint8) - grayImage = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - grayImage = cv2.resize(grayImage, (224, 224)) - rgbImage = np.repeat(grayImage[..., np.newaxis], 3, -1) - return (rgbImage, Xdb) - - -def get_title(predictions, categories, first_line=''): - txt = f"{first_line}\nDetected emotion: \ - {categories[predictions.argmax()]} - {predictions.max() * 100:.2f}%" - return txt - - -def plot_colored_polar(fig, predictions, categories, - title="", colors=COLOR_DICT): - N = len(predictions) - ind = predictions.argmax() - - COLOR = color_sector = colors[categories[ind]] - sector_colors = [colors[i] for i in categories] - - fig.set_facecolor("#d1d1e0") - ax = plt.subplot(111, polar="True") - - theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False) - for sector in range(predictions.shape[0]): - radii = np.zeros_like(predictions) - radii[sector] = predictions[sector] * 10 - width = np.pi / 1.8 * predictions - c = sector_colors[sector] - ax.bar(theta, radii, width=width, bottom=0.0, color=c, alpha=0.25) - - angles = [i / float(N) * 2 * np.pi for i in range(N)] - angles += angles[:1] - - data = list(predictions) - data += data[:1] - plt.polar(angles, data, color=COLOR, linewidth=2) - plt.fill(angles, data, facecolor=COLOR, alpha=0.25) - - ax.spines['polar'].set_color('lightgrey') - ax.set_theta_offset(np.pi / 3) - ax.set_theta_direction(-1) - plt.xticks(angles[:-1], categories) - ax.set_rlabel_position(0) - plt.yticks([0, .25, .5, .75, 1], color="grey", size=8) - - plt.suptitle(title, color="darkblue", size=10) - plt.title(f"BIG {N}\n", color=COLOR) - plt.ylim(0, 1) - plt.subplots_adjust(top=0.75) - -def plot_melspec(path, tmodel=None, three=False, - CAT3=CAT3, CAT6=CAT6): - # load model if it is not loaded - if tmodel is None: - tmodel = load_model("tmodel_all.h5") - # mel-spec model results - mel = get_melspec(path)[0] - mel = mel.reshape(1, *mel.shape) - tpred = tmodel.predict(mel)[0] - cat = CAT6 - - if three: - pos = tpred[3] + tpred[5] * .5 - neu = tpred[2] + tpred[5] * .5 + tpred[4] * .5 - neg = tpred[0] + tpred[1] + tpred[4] * .5 - tpred = np.array([pos, neu, neg]) - cat = CAT3 - - txt = get_title(tpred, cat) - fig = plt.figure(figsize=(6, 4)) - plot_colored_polar(fig, predictions=tpred, categories=cat, title=txt) - return (fig, tpred) - -if __name__ == "__main__": - plot_melspec("test.wav") \ No newline at end of file diff --git a/spaces/ml6team/byt5_ocr_corrector/app.py b/spaces/ml6team/byt5_ocr_corrector/app.py deleted file mode 100644 index 10b9f6c81201051255c06e2230a585851bfd090a..0000000000000000000000000000000000000000 --- a/spaces/ml6team/byt5_ocr_corrector/app.py +++ /dev/null @@ -1,61 +0,0 @@ -from textwrap import wrap - -from transformers import pipeline -import nlpaug.augmenter.char as nac -import streamlit as st - -st.markdown('# ByT5 Dutch OCR Corrector 
:pill:') -st.write('This app corrects common dutch OCR mistakes, to showcase how this could be used in an OCR post-processing pipeline.') - -st.markdown(""" -To use this: -- Enter a text with OCR mistakes and hit 'unscramble':point_down: -- Or enter a normal text, scramble it :twisted_rightwards_arrows: and then hit 'unscramble' :point_down:""") - -@st.cache(allow_output_mutation=True, - suppress_st_warning=True, - show_spinner=False) -def load_model(): - with st.spinner('Please wait for the model to load...'): - ocr_pipeline=pipeline( - 'text2text-generation', - model='ml6team/byt5-base-dutch-ocr-correction', - tokenizer='ml6team/byt5-base-dutch-ocr-correction' - ) - return ocr_pipeline - -ocr_pipeline = load_model() - -if 'text' not in st.session_state: - st.session_state['text'] = "" - -left_area, right_area = st.columns(2) - -# Format the left area -left_area.header("Input") -form = left_area.form(key='ocrcorrector') -placeholder = form.empty() -placeholder.empty() -input_text = placeholder.text_area(value=st.session_state.text, label='Insert text:', key='input_text') -scramble_button = form.form_submit_button(label='Scramble') -submit_button = form.form_submit_button(label='Unscramble') - -# Right area -right_area.header("Output") - -if scramble_button: - aug = nac.OcrAug() - st.session_state.text = st.session_state.input_text - base_text = st.session_state.text - augmented_data = aug.augment(base_text) - st.session_state.text = augmented_data - del st.session_state.input_text - placeholder.empty() - input_text = placeholder.text_area(value=st.session_state.text, label='Insert text:', key='input_text') - -if submit_button: - base_text = st.session_state.input_text - output_text = " ".join([x['generated_text'] for x in ocr_pipeline(wrap(base_text, 128))]) - right_area.markdown('#####') - right_area.text_area(value=output_text, label="Corrected text:") - diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/modeling/clip_adapter/adapter.py b/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/modeling/clip_adapter/adapter.py deleted file mode 100644 index 864d20b160714865b4130fab8714f323aaae2572..0000000000000000000000000000000000000000 --- a/spaces/mmlab-ntu/Segment-Any-RGBD/open_vocab_seg/modeling/clip_adapter/adapter.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. 
All Rights Reserved -# Modified by Feng Liang from -# https://github.com/MendelXu/zsseg.baseline/blob/master/mask_former/modeling/clip_adapter/adapter.py - -from typing import List -import torch -from torch import nn -from torch.nn import functional as F -from detectron2.structures import BitMasks -from .utils import build_clip_model, crop_with_mask -from .text_template import PromptExtractor - - -PIXEL_MEAN = (0.48145466, 0.4578275, 0.40821073) -PIXEL_STD = (0.26862954, 0.26130258, 0.27577711) - - -class ClipAdapter(nn.Module): - def __init__(self, clip_model_name: str, mask_prompt_depth: int, text_templates: PromptExtractor): - super().__init__() - self.clip_model = build_clip_model(clip_model_name, mask_prompt_depth) - self.text_templates = text_templates - self.text_templates.init_buffer(self.clip_model) - self.text_feature_buffer = {} - - def forward(self, image: torch.Tensor, text: List[str], **kwargs): - image = self._preprocess_image(image, **kwargs) - text_feature = self.get_text_features(text) # k,feat_dim - image_features = self.get_image_features(image) - return self.get_sim_logits(text_feature, image_features) - - def _preprocess_image(self, image: torch.Tensor): - return image - - def _get_text_features(self, noun_list: List[str]): - left_noun_list = [ - noun for noun in noun_list if noun not in self.text_feature_buffer - ] - if len(left_noun_list) > 0: - left_text_features = self.text_templates( - left_noun_list, self.clip_model - ) - self.text_feature_buffer.update( - { - noun: text_feature - for noun, text_feature in zip( - left_noun_list, left_text_features - ) - } - ) - return torch.stack([self.text_feature_buffer[noun] for noun in noun_list]) - - - def get_text_features(self, noun_list: List[str]): - return self._get_text_features(noun_list) - - def get_image_features(self, image: torch.Tensor): - image_features = self.clip_model.visual(image) - image_features = image_features / image_features.norm(dim=-1, keepdim=True) - return image_features - - def get_sim_logits( - self, - text_features: torch.Tensor, - image_features: torch.Tensor, - temperature: float = 100, - ): - return temperature * image_features @ text_features.T - - def normalize_feature(self, feat: torch.Tensor): - return feat / feat.norm(dim=-1, keepdim=True) - - -class MaskFormerClipAdapter(ClipAdapter): - def __init__( - self, - clip_model_name: str, - text_templates: PromptExtractor, - mask_fill: str = "mean", - mask_expand_ratio: float = 1.0, - mask_thr: float = 0.5, - mask_matting: bool = False, - region_resized: bool = True, - mask_prompt_depth: int = 0, - mask_prompt_fwd: bool = False, - ): - super().__init__(clip_model_name, mask_prompt_depth, text_templates) - self.non_object_embedding = nn.Parameter( - torch.empty(1, self.clip_model.text_projection.shape[-1]) - ) - nn.init.normal_( - self.non_object_embedding.data, - std=self.clip_model.transformer.width ** -0.5, - ) - # for test - self.mask_fill = mask_fill - if self.mask_fill == "zero": - self.mask_fill = (0.0, 0.0, 0.0) - elif self.mask_fill == "mean": - self.mask_fill = [255.0 * c for c in PIXEL_MEAN] - else: - raise NotImplementedError( - "Unknown mask_fill method: {}".format(self.mask_fill) - ) - self.mask_expand_ratio = mask_expand_ratio - self.mask_thr = mask_thr - self.mask_matting = mask_matting - self.region_resized = region_resized - self.mask_prompt_fwd = mask_prompt_fwd - self.register_buffer( - "pixel_mean", torch.Tensor(PIXEL_MEAN).reshape(1, 3, 1, 1) * 255.0 - ) - self.register_buffer( - "pixel_std", 
torch.Tensor(PIXEL_STD).reshape(1, 3, 1, 1) * 255.0 - ) - - def forward( - self, - image: torch.Tensor, - text: List[str], - mask: torch.Tensor, - normalize: bool = True, - fwd_w_region_mask: bool = False, - ): - (regions, unnorm_regions), region_masks, valid_flag = self._preprocess_image(image, mask, normalize=normalize) - if regions is None: - return None, valid_flag - if isinstance(regions, list): - assert NotImplementedError - image_features = torch.cat( - [self.get_image_features(image_i) for image_i in regions], dim=0 - ) - else: - if self.mask_prompt_fwd: - image_features = self.get_image_features(regions, region_masks) - else: - image_features = self.get_image_features(regions) - text_feature = self.get_text_features(text) # k,feat_dim - return self.get_sim_logits(text_feature, image_features), unnorm_regions, valid_flag - - def get_image_features(self, image, region_masks=None): - image_features = self.clip_model.visual(image, region_masks) - image_features = image_features / image_features.norm(dim=-1, keepdim=True) - return image_features - - def _preprocess_image( - self, image: torch.Tensor, mask: torch.Tensor, normalize: bool = True - ): - """crop, mask and normalize the image - - Args: - image ([type]): [C,H,W] - mask ([type]): [K,H,W - normalize (bool, optional): [description]. Defaults to True. - """ - dtype = mask.dtype - bin_mask = mask > self.mask_thr - valid = bin_mask.sum(dim=(-1, -2)) > 0 - bin_mask = bin_mask[valid] - mask = mask[valid] - if not self.mask_matting: - mask = bin_mask - bin_mask = BitMasks(bin_mask) - bboxes = bin_mask.get_bounding_boxes() - # crop,mask - regions = [] - region_masks = [] - for bbox, single_mask in zip(bboxes, mask): - region, region_mask = crop_with_mask( - image.type(dtype), - single_mask.type(dtype), - bbox, - fill=self.mask_fill, - expand_ratio=self.mask_expand_ratio, - ) - regions.append(region.unsqueeze(0)) - region_masks.append(region_mask.unsqueeze(0)) - if len(regions) == 0: - return None, valid - unnorm_regions = regions - if normalize: - regions = [(r - self.pixel_mean) / self.pixel_std for r in regions] - # resize - if self.region_resized: - regions = [ - F.interpolate(r, size=(224, 224), mode="bicubic") for r in regions - ] - regions = torch.cat(regions) - region_masks = [ - F.interpolate(r, size=(224, 224), mode="nearest") for r in region_masks - ] - region_masks = torch.cat(region_masks) - unnorm_regions = [ - F.interpolate(r, size=(224, 224), mode="bicubic") for r in unnorm_regions - ] - unnorm_regions = torch.cat(unnorm_regions) - return (regions, unnorm_regions), region_masks, valid - - def get_text_features(self, noun_list: List[str]): - object_text_features = self._get_text_features(noun_list) - non_object_text_features = ( - self.non_object_embedding - / self.non_object_embedding.norm(dim=-1, keepdim=True) - ) - return torch.cat([object_text_features, non_object_text_features], dim=0) diff --git a/spaces/mnauf/detect-bees/hubconf.py b/spaces/mnauf/detect-bees/hubconf.py deleted file mode 100644 index 41af8e39d14deba8679400d02c192696bcf37544..0000000000000000000000000000000000000000 --- a/spaces/mnauf/detect-bees/hubconf.py +++ /dev/null @@ -1,169 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -PyTorch Hub models https://pytorch.org/hub/ultralytics_yolov5 - -Usage: - import torch - model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # official model - model = torch.hub.load('ultralytics/yolov5:master', 'yolov5s') # from branch - model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.pt') # 
custom/local model - model = torch.hub.load('.', 'custom', 'yolov5s.pt', source='local') # local repo -""" - -import torch - - -def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True, device=None): - """Creates or loads a YOLOv5 model - - Arguments: - name (str): model name 'yolov5s' or path 'path/to/best.pt' - pretrained (bool): load pretrained weights into the model - channels (int): number of input channels - classes (int): number of model classes - autoshape (bool): apply YOLOv5 .autoshape() wrapper to model - verbose (bool): print all information to screen - device (str, torch.device, None): device to use for model parameters - - Returns: - YOLOv5 model - """ - from pathlib import Path - - from models.common import AutoShape, DetectMultiBackend - from models.experimental import attempt_load - from models.yolo import ClassificationModel, DetectionModel, SegmentationModel - from utils.downloads import attempt_download - from utils.general import LOGGER, check_requirements, intersect_dicts, logging - from utils.torch_utils import select_device - - if not verbose: - LOGGER.setLevel(logging.WARNING) - check_requirements(exclude=('opencv-python', 'tensorboard', 'thop')) - name = Path(name) - path = name.with_suffix('.pt') if name.suffix == '' and not name.is_dir() else name # checkpoint path - try: - device = select_device(device) - if pretrained and channels == 3 and classes == 80: - try: - model = DetectMultiBackend(path, device=device, fuse=autoshape) # detection model - if autoshape: - if model.pt and isinstance(model.model, ClassificationModel): - LOGGER.warning('WARNING ⚠️ YOLOv5 ClassificationModel is not yet AutoShape compatible. ' - 'You must pass torch tensors in BCHW to this model, i.e. shape(1,3,224,224).') - elif model.pt and isinstance(model.model, SegmentationModel): - LOGGER.warning('WARNING ⚠️ YOLOv5 SegmentationModel is not yet AutoShape compatible. ' - 'You will not be able to run inference with this model.') - else: - model = AutoShape(model) # for file/URI/PIL/cv2/np inputs and NMS - except Exception: - model = attempt_load(path, device=device, fuse=False) # arbitrary model - else: - cfg = list((Path(__file__).parent / 'models').rglob(f'{path.stem}.yaml'))[0] # model.yaml path - model = DetectionModel(cfg, channels, classes) # create model - if pretrained: - ckpt = torch.load(attempt_download(path), map_location=device) # load - csd = ckpt['model'].float().state_dict() # checkpoint state_dict as FP32 - csd = intersect_dicts(csd, model.state_dict(), exclude=['anchors']) # intersect - model.load_state_dict(csd, strict=False) # load - if len(ckpt['model'].names) == classes: - model.names = ckpt['model'].names # set class names attribute - if not verbose: - LOGGER.setLevel(logging.INFO) # reset to default - return model.to(device) - - except Exception as e: - help_url = 'https://github.com/ultralytics/yolov5/issues/36' - s = f'{e}. Cache may be out of date, try `force_reload=True` or see {help_url} for help.' 
- raise Exception(s) from e - - -def custom(path='path/to/model.pt', autoshape=True, _verbose=True, device=None): - # YOLOv5 custom or local model - return _create(path, autoshape=autoshape, verbose=_verbose, device=device) - - -def yolov5n(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-nano model https://github.com/ultralytics/yolov5 - return _create('yolov5n', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5s(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-small model https://github.com/ultralytics/yolov5 - return _create('yolov5s', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5m(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-medium model https://github.com/ultralytics/yolov5 - return _create('yolov5m', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5l(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-large model https://github.com/ultralytics/yolov5 - return _create('yolov5l', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5x(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-xlarge model https://github.com/ultralytics/yolov5 - return _create('yolov5x', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5n6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-nano-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5n6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5s6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-small-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5s6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5m6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-medium-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5m6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5l6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-large-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5l6', pretrained, channels, classes, autoshape, _verbose, device) - - -def yolov5x6(pretrained=True, channels=3, classes=80, autoshape=True, _verbose=True, device=None): - # YOLOv5-xlarge-P6 model https://github.com/ultralytics/yolov5 - return _create('yolov5x6', pretrained, channels, classes, autoshape, _verbose, device) - - -if __name__ == '__main__': - import argparse - from pathlib import Path - - import numpy as np - from PIL import Image - - from utils.general import cv2, print_args - - # Argparser - parser = argparse.ArgumentParser() - parser.add_argument('--model', type=str, default='yolov5s', help='model name') - opt = parser.parse_args() - print_args(vars(opt)) - - # Model - model = _create(name=opt.model, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True) - # model = custom(path='path/to/model.pt') # custom - - # Images - imgs = [ - 'data/images/zidane.jpg', # filename - Path('data/images/zidane.jpg'), # Path - 'https://ultralytics.com/images/zidane.jpg', # URI - cv2.imread('data/images/bus.jpg')[:, :, ::-1], # OpenCV - Image.open('data/images/bus.jpg'), # PIL 
- np.zeros((320, 640, 3))] # numpy - - # Inference - results = model(imgs, size=320) # batched inference - - # Results - results.print() - results.save() diff --git a/spaces/mnauf/detect-bees/utils/dataloaders.py b/spaces/mnauf/detect-bees/utils/dataloaders.py deleted file mode 100644 index 75ea78c9ec989e971cd355e2df5e7dd2c0c3db0b..0000000000000000000000000000000000000000 --- a/spaces/mnauf/detect-bees/utils/dataloaders.py +++ /dev/null @@ -1,1186 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Dataloaders and dataset utils -""" - -import contextlib -import glob -import hashlib -import json -import math -import os -import random -import shutil -import time -from itertools import repeat -from multiprocessing.pool import Pool, ThreadPool -from pathlib import Path -from threading import Thread -from urllib.parse import urlparse - -import numpy as np -import torch -import torch.nn.functional as F -import torchvision -import yaml -from PIL import ExifTags, Image, ImageOps -from torch.utils.data import DataLoader, Dataset, dataloader, distributed -from tqdm import tqdm - -from utils.augmentations import (Albumentations, augment_hsv, classify_albumentations, classify_transforms, copy_paste, - cutout, letterbox, mixup, random_perspective) -from utils.general import (DATASETS_DIR, LOGGER, NUM_THREADS, check_dataset, check_requirements, check_yaml, clean_str, - cv2, is_colab, is_kaggle, segments2boxes, unzip_file, xyn2xy, xywh2xyxy, xywhn2xyxy, - xyxy2xywhn) -from utils.torch_utils import torch_distributed_zero_first - -# Parameters -HELP_URL = 'See https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data' -IMG_FORMATS = 'bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp', 'pfm' # include image suffixes -VID_FORMATS = 'asf', 'avi', 'gif', 'm4v', 'mkv', 'mov', 'mp4', 'mpeg', 'mpg', 'ts', 'wmv' # include video suffixes -BAR_FORMAT = '{l_bar}{bar:10}{r_bar}{bar:-10b}' # tqdm bar format -LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html -RANK = int(os.getenv('RANK', -1)) -PIN_MEMORY = str(os.getenv('PIN_MEMORY', True)).lower() == 'true' # global pin_memory for dataloaders - -# Get orientation exif tag -for orientation in ExifTags.TAGS.keys(): - if ExifTags.TAGS[orientation] == 'Orientation': - break - - -def get_hash(paths): - # Returns a single hash value of a list of paths (files or dirs) - size = sum(os.path.getsize(p) for p in paths if os.path.exists(p)) # sizes - h = hashlib.md5(str(size).encode()) # hash sizes - h.update(''.join(paths).encode()) # hash paths - return h.hexdigest() # return hash - - -def exif_size(img): - # Returns exif-corrected PIL size - s = img.size # (width, height) - with contextlib.suppress(Exception): - rotation = dict(img._getexif().items())[orientation] - if rotation in [6, 8]: # rotation 270 or 90 - s = (s[1], s[0]) - return s - - -def exif_transpose(image): - """ - Transpose a PIL image accordingly if it has an EXIF Orientation tag. - Inplace version of https://github.com/python-pillow/Pillow/blob/master/src/PIL/ImageOps.py exif_transpose() - - :param image: The image to transpose. - :return: An image. 
- """ - exif = image.getexif() - orientation = exif.get(0x0112, 1) # default 1 - if orientation > 1: - method = { - 2: Image.FLIP_LEFT_RIGHT, - 3: Image.ROTATE_180, - 4: Image.FLIP_TOP_BOTTOM, - 5: Image.TRANSPOSE, - 6: Image.ROTATE_270, - 7: Image.TRANSVERSE, - 8: Image.ROTATE_90}.get(orientation) - if method is not None: - image = image.transpose(method) - del exif[0x0112] - image.info["exif"] = exif.tobytes() - return image - - -def seed_worker(worker_id): - # Set dataloader worker seed https://pytorch.org/docs/stable/notes/randomness.html#dataloader - worker_seed = torch.initial_seed() % 2 ** 32 - np.random.seed(worker_seed) - random.seed(worker_seed) - - -def create_dataloader(path, - imgsz, - batch_size, - stride, - single_cls=False, - hyp=None, - augment=False, - cache=False, - pad=0.0, - rect=False, - rank=-1, - workers=8, - image_weights=False, - quad=False, - prefix='', - shuffle=False): - if rect and shuffle: - LOGGER.warning('WARNING ⚠️ --rect is incompatible with DataLoader shuffle, setting shuffle=False') - shuffle = False - with torch_distributed_zero_first(rank): # init dataset *.cache only once if DDP - dataset = LoadImagesAndLabels( - path, - imgsz, - batch_size, - augment=augment, # augmentation - hyp=hyp, # hyperparameters - rect=rect, # rectangular batches - cache_images=cache, - single_cls=single_cls, - stride=int(stride), - pad=pad, - image_weights=image_weights, - prefix=prefix) - - batch_size = min(batch_size, len(dataset)) - nd = torch.cuda.device_count() # number of CUDA devices - nw = min([os.cpu_count() // max(nd, 1), batch_size if batch_size > 1 else 0, workers]) # number of workers - sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle) - loader = DataLoader if image_weights else InfiniteDataLoader # only DataLoader allows for attribute updates - generator = torch.Generator() - generator.manual_seed(6148914691236517205 + RANK) - return loader(dataset, - batch_size=batch_size, - shuffle=shuffle and sampler is None, - num_workers=nw, - sampler=sampler, - pin_memory=PIN_MEMORY, - collate_fn=LoadImagesAndLabels.collate_fn4 if quad else LoadImagesAndLabels.collate_fn, - worker_init_fn=seed_worker, - generator=generator), dataset - - -class InfiniteDataLoader(dataloader.DataLoader): - """ Dataloader that reuses workers - - Uses same syntax as vanilla DataLoader - """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - object.__setattr__(self, 'batch_sampler', _RepeatSampler(self.batch_sampler)) - self.iterator = super().__iter__() - - def __len__(self): - return len(self.batch_sampler.sampler) - - def __iter__(self): - for _ in range(len(self)): - yield next(self.iterator) - - -class _RepeatSampler: - """ Sampler that repeats forever - - Args: - sampler (Sampler) - """ - - def __init__(self, sampler): - self.sampler = sampler - - def __iter__(self): - while True: - yield from iter(self.sampler) - - -class LoadScreenshots: - # YOLOv5 screenshot dataloader, i.e. 
`python sample_solution.py --source "screen 0 100 100 512 256"` - def __init__(self, source, img_size=640, stride=32, auto=True, transforms=None): - # source = [screen_number left top width height] (pixels) - check_requirements('mss') - import mss - - source, *params = source.split() - self.screen, left, top, width, height = 0, None, None, None, None # default to full screen 0 - if len(params) == 1: - self.screen = int(params[0]) - elif len(params) == 4: - left, top, width, height = (int(x) for x in params) - elif len(params) == 5: - self.screen, left, top, width, height = (int(x) for x in params) - self.img_size = img_size - self.stride = stride - self.transforms = transforms - self.auto = auto - self.mode = 'stream' - self.frame = 0 - self.sct = mss.mss() - - # Parse monitor shape - monitor = self.sct.monitors[self.screen] - self.top = monitor["top"] if top is None else (monitor["top"] + top) - self.left = monitor["left"] if left is None else (monitor["left"] + left) - self.width = width or monitor["width"] - self.height = height or monitor["height"] - self.monitor = {"left": self.left, "top": self.top, "width": self.width, "height": self.height} - - def __iter__(self): - return self - - def __next__(self): - # mss screen capture: get raw pixels from the screen as np array - im0 = np.array(self.sct.grab(self.monitor))[:, :, :3] # [:, :, :3] BGRA to BGR - s = f"screen {self.screen} (LTWH): {self.left},{self.top},{self.width},{self.height}: " - - if self.transforms: - im = self.transforms(im0) # transforms - else: - im = letterbox(im0, self.img_size, stride=self.stride, auto=self.auto)[0] # padded resize - im = im.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB - im = np.ascontiguousarray(im) # contiguous - self.frame += 1 - return str(self.screen), im, im0, None, s # screen, img, original img, im0s, s - - -class LoadImages: - # YOLOv5 image/video dataloader, i.e. `python sample_solution.py --source image.jpg/vid.mp4` - def __init__(self, path, img_size=640, stride=32, auto=True, transforms=None, vid_stride=1): - files = [] - for p in sorted(path) if isinstance(path, (list, tuple)) else [path]: - p = str(Path(p).resolve()) - if '*' in p: - files.extend(sorted(glob.glob(p, recursive=True))) # glob - elif os.path.isdir(p): - files.extend(sorted(glob.glob(os.path.join(p, '*.*')))) # dir - elif os.path.isfile(p): - files.append(p) # files - else: - raise FileNotFoundError(f'{p} does not exist') - - images = [x for x in files if x.split('.')[-1].lower() in IMG_FORMATS] - videos = [x for x in files if x.split('.')[-1].lower() in VID_FORMATS] - ni, nv = len(images), len(videos) - - self.img_size = img_size - self.stride = stride - self.files = images + videos - self.nf = ni + nv # number of files - self.video_flag = [False] * ni + [True] * nv - self.mode = 'image' - self.auto = auto - self.transforms = transforms # optional - self.vid_stride = vid_stride # video frame-rate stride - if any(videos): - self._new_video(videos[0]) # new video - else: - self.cap = None - assert self.nf > 0, f'No images or videos found in {p}. 
' \ - f'Supported formats are:\nimages: {IMG_FORMATS}\nvideos: {VID_FORMATS}' - - def __iter__(self): - self.count = 0 - return self - - def __next__(self): - if self.count == self.nf: - raise StopIteration - path = self.files[self.count] - - if self.video_flag[self.count]: - # Read video - self.mode = 'video' - for _ in range(self.vid_stride): - self.cap.grab() - ret_val, im0 = self.cap.retrieve() - while not ret_val: - self.count += 1 - self.cap.release() - if self.count == self.nf: # last video - raise StopIteration - path = self.files[self.count] - self._new_video(path) - ret_val, im0 = self.cap.read() - - self.frame += 1 - # im0 = self._cv2_rotate(im0) # for use if cv2 autorotation is False - s = f'video {self.count + 1}/{self.nf} ({self.frame}/{self.frames}) {path}: ' - - else: - # Read image - self.count += 1 - im0 = cv2.imread(path) # BGR - assert im0 is not None, f'Image Not Found {path}' - s = f'image {self.count}/{self.nf} {path}: ' - - if self.transforms: - im = self.transforms(im0) # transforms - else: - im = letterbox(im0, self.img_size, stride=self.stride, auto=self.auto)[0] # padded resize - im = im.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB - im = np.ascontiguousarray(im) # contiguous - - return path, im, im0, self.cap, s - - def _new_video(self, path): - # Create a new video capture object - self.frame = 0 - self.cap = cv2.VideoCapture(path) - self.frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT) / self.vid_stride) - self.orientation = int(self.cap.get(cv2.CAP_PROP_ORIENTATION_META)) # rotation degrees - # self.cap.set(cv2.CAP_PROP_ORIENTATION_AUTO, 0) # disable https://github.com/ultralytics/yolov5/issues/8493 - - def _cv2_rotate(self, im): - # Rotate a cv2 video manually - if self.orientation == 0: - return cv2.rotate(im, cv2.ROTATE_90_CLOCKWISE) - elif self.orientation == 180: - return cv2.rotate(im, cv2.ROTATE_90_COUNTERCLOCKWISE) - elif self.orientation == 90: - return cv2.rotate(im, cv2.ROTATE_180) - return im - - def __len__(self): - return self.nf # number of files - - -class LoadStreams: - # YOLOv5 streamloader, i.e. `python sample_solution.py --source 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP streams` - def __init__(self, sources='streams.txt', img_size=640, stride=32, auto=True, transforms=None, vid_stride=1): - torch.backends.cudnn.benchmark = True # faster for fixed-size inference - self.mode = 'stream' - self.img_size = img_size - self.stride = stride - self.vid_stride = vid_stride # video frame-rate stride - sources = Path(sources).read_text().rsplit() if Path(sources).is_file() else [sources] - n = len(sources) - self.sources = [clean_str(x) for x in sources] # clean source names for later - self.imgs, self.fps, self.frames, self.threads = [None] * n, [0] * n, [0] * n, [None] * n - for i, s in enumerate(sources): # index, source - # Start thread to read frames from video stream - st = f'{i + 1}/{n}: {s}... ' - if urlparse(s).hostname in ('www.youtube.com', 'youtube.com', 'youtu.be'): # if source is YouTube video - check_requirements(('pafy', 'youtube_dl==2020.12.2')) - import pafy - s = pafy.new(s).getbest(preftype="mp4").url # YouTube URL - s = eval(s) if s.isnumeric() else s # i.e. s = '0' local webcam - if s == 0: - assert not is_colab(), '--source 0 webcam unsupported on Colab. Rerun command in a local environment.' - assert not is_kaggle(), '--source 0 webcam unsupported on Kaggle. Rerun command in a local environment.' 
- cap = cv2.VideoCapture(s) - assert cap.isOpened(), f'{st}Failed to open {s}' - w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - fps = cap.get(cv2.CAP_PROP_FPS) # warning: may return 0 or nan - self.frames[i] = max(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), 0) or float('inf') # infinite stream fallback - self.fps[i] = max((fps if math.isfinite(fps) else 0) % 100, 0) or 30 # 30 FPS fallback - - _, self.imgs[i] = cap.read() # guarantee first frame - self.threads[i] = Thread(target=self.update, args=([i, cap, s]), daemon=True) - LOGGER.info(f"{st} Success ({self.frames[i]} frames {w}x{h} at {self.fps[i]:.2f} FPS)") - self.threads[i].start() - LOGGER.info('') # newline - - # check for common shapes - s = np.stack([letterbox(x, img_size, stride=stride, auto=auto)[0].shape for x in self.imgs]) - self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal - self.auto = auto and self.rect - self.transforms = transforms # optional - if not self.rect: - LOGGER.warning('WARNING ⚠️ Stream shapes differ. For optimal performance supply similarly-shaped streams.') - - def update(self, i, cap, stream): - # Read stream `i` frames in daemon thread - n, f = 0, self.frames[i] # frame number, frame array - while cap.isOpened() and n < f: - n += 1 - cap.grab() # .read() = .grab() followed by .retrieve() - if n % self.vid_stride == 0: - success, im = cap.retrieve() - if success: - self.imgs[i] = im - else: - LOGGER.warning('WARNING ⚠️ Video stream unresponsive, please check your IP camera connection.') - self.imgs[i] = np.zeros_like(self.imgs[i]) - cap.open(stream) # re-open stream if signal was lost - time.sleep(0.0) # wait time - - def __iter__(self): - self.count = -1 - return self - - def __next__(self): - self.count += 1 - if not all(x.is_alive() for x in self.threads) or cv2.waitKey(1) == ord('q'): # q to quit - cv2.destroyAllWindows() - raise StopIteration - - im0 = self.imgs.copy() - if self.transforms: - im = np.stack([self.transforms(x) for x in im0]) # transforms - else: - im = np.stack([letterbox(x, self.img_size, stride=self.stride, auto=self.auto)[0] for x in im0]) # resize - im = im[..., ::-1].transpose((0, 3, 1, 2)) # BGR to RGB, BHWC to BCHW - im = np.ascontiguousarray(im) # contiguous - - return self.sources, im, im0, None, '' - - def __len__(self): - return len(self.sources) # 1E12 frames = 32 streams at 30 FPS for 30 years - - -def img2label_paths(img_paths): - # Define label paths as a function of image paths - sa, sb = f'{os.sep}images{os.sep}', f'{os.sep}labels{os.sep}' # /images/, /labels/ substrings - return [sb.join(x.rsplit(sa, 1)).rsplit('.', 1)[0] + '.txt' for x in img_paths] - - -class LoadImagesAndLabels(Dataset): - # YOLOv5 train_loader/val_loader, loads images and labels for training and validation - cache_version = 0.6 # dataset labels *.cache version - rand_interp_methods = [cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4] - - def __init__(self, - path, - img_size=640, - batch_size=16, - augment=False, - hyp=None, - rect=False, - image_weights=False, - cache_images=False, - single_cls=False, - stride=32, - pad=0.0, - prefix=''): - self.img_size = img_size - self.augment = augment - self.hyp = hyp - self.image_weights = image_weights - self.rect = False if image_weights else rect - self.mosaic = self.augment and not self.rect # load 4 images at a time into a mosaic (only during training) - self.mosaic_border = [-img_size // 2, -img_size // 2] - self.stride = stride - 
self.path = path - self.albumentations = Albumentations(size=img_size) if augment else None - - try: - f = [] # image files - for p in path if isinstance(path, list) else [path]: - p = Path(p) # os-agnostic - if p.is_dir(): # dir - f += glob.glob(str(p / '**' / '*.*'), recursive=True) - # f = list(p.rglob('*.*')) # pathlib - elif p.is_file(): # file - with open(p) as t: - t = t.read().strip().splitlines() - parent = str(p.parent) + os.sep - f += [x.replace('./', parent) if x.startswith('./') else x for x in t] # local to global path - # f += [p.parent / x.lstrip(os.sep) for x in t] # local to global path (pathlib) - else: - raise FileNotFoundError(f'{prefix}{p} does not exist') - self.im_files = sorted(x.replace('/', os.sep) for x in f if x.split('.')[-1].lower() in IMG_FORMATS) - # self.img_files = sorted([x for x in f if x.suffix[1:].lower() in IMG_FORMATS]) # pathlib - assert self.im_files, f'{prefix}No images found' - except Exception as e: - raise Exception(f'{prefix}Error loading data from {path}: {e}\n{HELP_URL}') - - # Check cache - self.label_files = img2label_paths(self.im_files) # labels - cache_path = (p if p.is_file() else Path(self.label_files[0]).parent).with_suffix('.cache') - try: - cache, exists = np.load(cache_path, allow_pickle=True).item(), True # load dict - assert cache['version'] == self.cache_version # matches current version - assert cache['hash'] == get_hash(self.label_files + self.im_files) # identical hash - except Exception: - cache, exists = self.cache_labels(cache_path, prefix), False # run cache ops - - # Display cache - nf, nm, ne, nc, n = cache.pop('results') # found, missing, empty, corrupt, total - if exists and LOCAL_RANK in {-1, 0}: - d = f"Scanning '{cache_path}' images and labels... {nf} found, {nm} missing, {ne} empty, {nc} corrupt" - tqdm(None, desc=prefix + d, total=n, initial=n, bar_format=BAR_FORMAT) # display cache results - if cache['msgs']: - LOGGER.info('\n'.join(cache['msgs'])) # display warnings - assert nf > 0 or not augment, f'{prefix}No labels found in {cache_path}, can not start training. {HELP_URL}' - - # Read cache - [cache.pop(k) for k in ('hash', 'version', 'msgs')] # remove items - labels, shapes, self.segments = zip(*cache.values()) - nl = len(np.concatenate(labels, 0)) # number of labels - assert nl > 0 or not augment, f'{prefix}All labels empty in {cache_path}, can not start training. 
{HELP_URL}' - self.labels = list(labels) - self.shapes = np.array(shapes) - self.im_files = list(cache.keys()) # update - self.label_files = img2label_paths(cache.keys()) # update - n = len(shapes) # number of images - bi = np.floor(np.arange(n) / batch_size).astype(int) # batch index - nb = bi[-1] + 1 # number of batches - self.batch = bi # batch index of image - self.n = n - self.indices = range(n) - - # Update labels - include_class = [] # filter labels to include only these classes (optional) - include_class_array = np.array(include_class).reshape(1, -1) - for i, (label, segment) in enumerate(zip(self.labels, self.segments)): - if include_class: - j = (label[:, 0:1] == include_class_array).any(1) - self.labels[i] = label[j] - if segment: - self.segments[i] = segment[j] - if single_cls: # single-class training, merge all classes into 0 - self.labels[i][:, 0] = 0 - if segment: - self.segments[i][:, 0] = 0 - - # Rectangular Training - if self.rect: - # Sort by aspect ratio - s = self.shapes # wh - ar = s[:, 1] / s[:, 0] # aspect ratio - irect = ar.argsort() - self.im_files = [self.im_files[i] for i in irect] - self.label_files = [self.label_files[i] for i in irect] - self.labels = [self.labels[i] for i in irect] - self.segments = [self.segments[i] for i in irect] - self.shapes = s[irect] # wh - ar = ar[irect] - - # Set training image shapes - shapes = [[1, 1]] * nb - for i in range(nb): - ari = ar[bi == i] - mini, maxi = ari.min(), ari.max() - if maxi < 1: - shapes[i] = [maxi, 1] - elif mini > 1: - shapes[i] = [1, 1 / mini] - - self.batch_shapes = np.ceil(np.array(shapes) * img_size / stride + pad).astype(int) * stride - - # Cache images into RAM/disk for faster training (WARNING: large datasets may exceed system resources) - self.ims = [None] * n - self.npy_files = [Path(f).with_suffix('.npy') for f in self.im_files] - if cache_images: - gb = 0 # Gigabytes of cached images - self.im_hw0, self.im_hw = [None] * n, [None] * n - fcn = self.cache_images_to_disk if cache_images == 'disk' else self.load_image - results = ThreadPool(NUM_THREADS).imap(fcn, range(n)) - pbar = tqdm(enumerate(results), total=n, bar_format=BAR_FORMAT, disable=LOCAL_RANK > 0) - for i, x in pbar: - if cache_images == 'disk': - gb += self.npy_files[i].stat().st_size - else: # 'ram' - self.ims[i], self.im_hw0[i], self.im_hw[i] = x # im, hw_orig, hw_resized = load_image(self, i) - gb += self.ims[i].nbytes - pbar.desc = f'{prefix}Caching images ({gb / 1E9:.1f}GB {cache_images})' - pbar.close() - - def cache_labels(self, path=Path('./labels.cache'), prefix=''): - # Cache dataset labels, check images and read shapes - x = {} # dict - nm, nf, ne, nc, msgs = 0, 0, 0, 0, [] # number missing, found, empty, corrupt, messages - desc = f"{prefix}Scanning '{path.parent / path.stem}' images and labels..." - with Pool(NUM_THREADS) as pool: - pbar = tqdm(pool.imap(verify_image_label, zip(self.im_files, self.label_files, repeat(prefix))), - desc=desc, - total=len(self.im_files), - bar_format=BAR_FORMAT) - for im_file, lb, shape, segments, nm_f, nf_f, ne_f, nc_f, msg in pbar: - nm += nm_f - nf += nf_f - ne += ne_f - nc += nc_f - if im_file: - x[im_file] = [lb, shape, segments] - if msg: - msgs.append(msg) - pbar.desc = f"{desc}{nf} found, {nm} missing, {ne} empty, {nc} corrupt" - - pbar.close() - if msgs: - LOGGER.info('\n'.join(msgs)) - if nf == 0: - LOGGER.warning(f'{prefix}WARNING ⚠️ No labels found in {path}. 
{HELP_URL}') - x['hash'] = get_hash(self.label_files + self.im_files) - x['results'] = nf, nm, ne, nc, len(self.im_files) - x['msgs'] = msgs # warnings - x['version'] = self.cache_version # cache version - try: - np.save(path, x) # save cache for next time - path.with_suffix('.cache.npy').rename(path) # remove .npy suffix - LOGGER.info(f'{prefix}New cache created: {path}') - except Exception as e: - LOGGER.warning(f'{prefix}WARNING ⚠️ Cache directory {path.parent} is not writeable: {e}') # not writeable - return x - - def __len__(self): - return len(self.im_files) - - # def __iter__(self): - # self.count = -1 - # print('ran dataset iter') - # #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF) - # return self - - def __getitem__(self, index): - index = self.indices[index] # linear, shuffled, or image_weights - - hyp = self.hyp - mosaic = self.mosaic and random.random() < hyp['mosaic'] - if mosaic: - # Load mosaic - img, labels = self.load_mosaic(index) - shapes = None - - # MixUp augmentation - if random.random() < hyp['mixup']: - img, labels = mixup(img, labels, *self.load_mosaic(random.randint(0, self.n - 1))) - - else: - # Load image - img, (h0, w0), (h, w) = self.load_image(index) - - # Letterbox - shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape - img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment) - shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling - - labels = self.labels[index].copy() - if labels.size: # normalized xywh to pixel xyxy format - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1]) - - if self.augment: - img, labels = random_perspective(img, - labels, - degrees=hyp['degrees'], - translate=hyp['translate'], - scale=hyp['scale'], - shear=hyp['shear'], - perspective=hyp['perspective']) - - nl = len(labels) # number of labels - if nl: - labels[:, 1:5] = xyxy2xywhn(labels[:, 1:5], w=img.shape[1], h=img.shape[0], clip=True, eps=1E-3) - - if self.augment: - # Albumentations - img, labels = self.albumentations(img, labels) - nl = len(labels) # update after albumentations - - # HSV color-space - augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v']) - - # Flip up-down - if random.random() < hyp['flipud']: - img = np.flipud(img) - if nl: - labels[:, 2] = 1 - labels[:, 2] - - # Flip left-right - if random.random() < hyp['fliplr']: - img = np.fliplr(img) - if nl: - labels[:, 1] = 1 - labels[:, 1] - - # Cutouts - # labels = cutout(img, labels, p=0.5) - # nl = len(labels) # update after cutout - - labels_out = torch.zeros((nl, 6)) - if nl: - labels_out[:, 1:] = torch.from_numpy(labels) - - # Convert - img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB - img = np.ascontiguousarray(img) - - return torch.from_numpy(img), labels_out, self.im_files[index], shapes - - def load_image(self, i): - # Loads 1 image from dataset index 'i', returns (im, original hw, resized hw) - im, f, fn = self.ims[i], self.im_files[i], self.npy_files[i], - if im is None: # not cached in RAM - if fn.exists(): # load npy - im = np.load(fn) - else: # read image - im = cv2.imread(f) # BGR - assert im is not None, f'Image Not Found {f}' - h0, w0 = im.shape[:2] # orig hw - r = self.img_size / max(h0, w0) # ratio - if r != 1: # if sizes are not equal - interp = cv2.INTER_LINEAR if (self.augment or r > 1) else cv2.INTER_AREA - im = cv2.resize(im, (int(w0 * r), int(h0 * r)), interpolation=interp) - return im, 
(h0, w0), im.shape[:2] # im, hw_original, hw_resized - return self.ims[i], self.im_hw0[i], self.im_hw[i] # im, hw_original, hw_resized - - def cache_images_to_disk(self, i): - # Saves an image as an *.npy file for faster loading - f = self.npy_files[i] - if not f.exists(): - np.save(f.as_posix(), cv2.imread(self.im_files[i])) - - def load_mosaic(self, index): - # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic - labels4, segments4 = [], [] - s = self.img_size - yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y - indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - random.shuffle(indices) - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = self.load_image(index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - labels4.append(labels) - segments4.extend(segments) - - # Concat/clip labels - labels4 = np.concatenate(labels4, 0) - for x in (labels4[:, 1:], *segments4): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - img4, labels4, segments4 = copy_paste(img4, labels4, segments4, p=self.hyp['copy_paste']) - img4, labels4 = random_perspective(img4, - labels4, - segments4, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img4, labels4 - - def load_mosaic9(self, index): - # YOLOv5 9-mosaic loader. 
Loads 1 image + 8 random images into a 9-image mosaic - labels9, segments9 = [], [] - s = self.img_size - indices = [index] + random.choices(self.indices, k=8) # 8 additional image indices - random.shuffle(indices) - hp, wp = -1, -1 # height, width previous - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = self.load_image(index) - - # place img in img9 - if i == 0: # center - img9 = np.full((s * 3, s * 3, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - h0, w0 = h, w - c = s, s, s + w, s + h # xmin, ymin, xmax, ymax (base) coordinates - elif i == 1: # top - c = s, s - h, s + w, s - elif i == 2: # top right - c = s + wp, s - h, s + wp + w, s - elif i == 3: # right - c = s + w0, s, s + w0 + w, s + h - elif i == 4: # bottom right - c = s + w0, s + hp, s + w0 + w, s + hp + h - elif i == 5: # bottom - c = s + w0 - w, s + h0, s + w0, s + h0 + h - elif i == 6: # bottom left - c = s + w0 - wp - w, s + h0, s + w0 - wp, s + h0 + h - elif i == 7: # left - c = s - w, s + h0 - h, s, s + h0 - elif i == 8: # top left - c = s - w, s + h0 - hp - h, s, s + h0 - hp - - padx, pady = c[:2] - x1, y1, x2, y2 = (max(x, 0) for x in c) # allocate coords - - # Labels - labels, segments = self.labels[index].copy(), self.segments[index].copy() - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padx, pady) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padx, pady) for x in segments] - labels9.append(labels) - segments9.extend(segments) - - # Image - img9[y1:y2, x1:x2] = img[y1 - pady:, x1 - padx:] # img9[ymin:ymax, xmin:xmax] - hp, wp = h, w # height, width previous - - # Offset - yc, xc = (int(random.uniform(0, s)) for _ in self.mosaic_border) # mosaic center x, y - img9 = img9[yc:yc + 2 * s, xc:xc + 2 * s] - - # Concat/clip labels - labels9 = np.concatenate(labels9, 0) - labels9[:, [1, 3]] -= xc - labels9[:, [2, 4]] -= yc - c = np.array([xc, yc]) # centers - segments9 = [x - c for x in segments9] - - for x in (labels9[:, 1:], *segments9): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img9, labels9 = replicate(img9, labels9) # replicate - - # Augment - img9, labels9 = random_perspective(img9, - labels9, - segments9, - degrees=self.hyp['degrees'], - translate=self.hyp['translate'], - scale=self.hyp['scale'], - shear=self.hyp['shear'], - perspective=self.hyp['perspective'], - border=self.mosaic_border) # border to remove - - return img9, labels9 - - @staticmethod - def collate_fn(batch): - im, label, path, shapes = zip(*batch) # transposed - for i, lb in enumerate(label): - lb[:, 0] = i # add target image index for build_targets() - return torch.stack(im, 0), torch.cat(label, 0), path, shapes - - @staticmethod - def collate_fn4(batch): - im, label, path, shapes = zip(*batch) # transposed - n = len(shapes) // 4 - im4, label4, path4, shapes4 = [], [], path[:n], shapes[:n] - - ho = torch.tensor([[0.0, 0, 0, 1, 0, 0]]) - wo = torch.tensor([[0.0, 0, 1, 0, 0, 0]]) - s = torch.tensor([[1, 1, 0.5, 0.5, 0.5, 0.5]]) # scale - for i in range(n): # zidane torch.zeros(16,3,720,1280) # BCHW - i *= 4 - if random.random() < 0.5: - im1 = F.interpolate(im[i].unsqueeze(0).float(), scale_factor=2.0, mode='bilinear', - align_corners=False)[0].type(im[i].type()) - lb = label[i] - else: - im1 = torch.cat((torch.cat((im[i], im[i + 1]), 1), torch.cat((im[i + 2], im[i + 3]), 1)), 2) - lb = torch.cat((label[i], label[i + 1] + ho, label[i + 2] + wo, label[i + 3] + ho + wo), 0) * s - im4.append(im1) - label4.append(lb) - - for i, lb in 
enumerate(label4): - lb[:, 0] = i # add target image index for build_targets() - - return torch.stack(im4, 0), torch.cat(label4, 0), path4, shapes4 - - -# Ancillary functions -------------------------------------------------------------------------------------------------- -def flatten_recursive(path=DATASETS_DIR / 'coco128'): - # Flatten a recursive directory by bringing all files to top level - new_path = Path(f'{str(path)}_flat') - if os.path.exists(new_path): - shutil.rmtree(new_path) # delete output folder - os.makedirs(new_path) # make new output folder - for file in tqdm(glob.glob(f'{str(Path(path))}/**/*.*', recursive=True)): - shutil.copyfile(file, new_path / Path(file).name) - - -def extract_boxes(path=DATASETS_DIR / 'coco128'): # from utils.dataloaders import *; extract_boxes() - # Convert detection dataset into classification dataset, with one directory per class - path = Path(path) # images dir - shutil.rmtree(path / 'classification') if (path / 'classification').is_dir() else None # remove existing - files = list(path.rglob('*.*')) - n = len(files) # number of files - for im_file in tqdm(files, total=n): - if im_file.suffix[1:] in IMG_FORMATS: - # image - im = cv2.imread(str(im_file))[..., ::-1] # BGR to RGB - h, w = im.shape[:2] - - # labels - lb_file = Path(img2label_paths([str(im_file)])[0]) - if Path(lb_file).exists(): - with open(lb_file) as f: - lb = np.array([x.split() for x in f.read().strip().splitlines()], dtype=np.float32) # labels - - for j, x in enumerate(lb): - c = int(x[0]) # class - f = (path / 'classifier') / f'{c}' / f'{path.stem}_{im_file.stem}_{j}.jpg' # new filename - if not f.parent.is_dir(): - f.parent.mkdir(parents=True) - - b = x[1:] * [w, h, w, h] # box - # b[2:] = b[2:].max() # rectangle to square - b[2:] = b[2:] * 1.2 + 3 # pad - b = xywh2xyxy(b.reshape(-1, 4)).ravel().astype(int) - - b[[0, 2]] = np.clip(b[[0, 2]], 0, w) # clip boxes outside of image - b[[1, 3]] = np.clip(b[[1, 3]], 0, h) - assert cv2.imwrite(str(f), im[b[1]:b[3], b[0]:b[2]]), f'box failure in {f}' - - -def autosplit(path=DATASETS_DIR / 'coco128/images', weights=(0.9, 0.1, 0.0), annotated_only=False): - """ Autosplit a dataset into train/val/test splits and save path/autosplit_*.txt files - Usage: from utils.dataloaders import *; autosplit() - Arguments - path: Path to images directory - weights: Train, val, test weights (list, tuple) - annotated_only: Only use images with an annotated txt file - """ - path = Path(path) # images dir - files = sorted(x for x in path.rglob('*.*') if x.suffix[1:].lower() in IMG_FORMATS) # image files only - n = len(files) # number of files - random.seed(0) # for reproducibility - indices = random.choices([0, 1, 2], weights=weights, k=n) # assign each image to a split - - txt = ['autosplit_train.txt', 'autosplit_val.txt', 'autosplit_test.txt'] # 3 txt files - for x in txt: - if (path.parent / x).exists(): - (path.parent / x).unlink() # remove existing - - print(f'Autosplitting images from {path}' + ', using *.txt labeled images only' * annotated_only) - for i, img in tqdm(zip(indices, files), total=n): - if not annotated_only or Path(img2label_paths([str(img)])[0]).exists(): # check label - with open(path.parent / txt[i], 'a') as f: - f.write(f'./{img.relative_to(path.parent).as_posix()}' + '\n') # add image to txt file - - -def verify_image_label(args): - # Verify one image-label pair - im_file, lb_file, prefix = args - nm, nf, ne, nc, msg, segments = 0, 0, 0, 0, '', [] # number (missing, found, empty, corrupt), message, segments - try: - # verify 
images - im = Image.open(im_file) - im.verify() # PIL verify - shape = exif_size(im) # image size - assert (shape[0] > 9) & (shape[1] > 9), f'image size {shape} <10 pixels' - assert im.format.lower() in IMG_FORMATS, f'invalid image format {im.format}' - if im.format.lower() in ('jpg', 'jpeg'): - with open(im_file, 'rb') as f: - f.seek(-2, 2) - if f.read() != b'\xff\xd9': # corrupt JPEG - ImageOps.exif_transpose(Image.open(im_file)).save(im_file, 'JPEG', subsampling=0, quality=100) - msg = f'{prefix}WARNING ⚠️ {im_file}: corrupt JPEG restored and saved' - - # verify labels - if os.path.isfile(lb_file): - nf = 1 # label found - with open(lb_file) as f: - lb = [x.split() for x in f.read().strip().splitlines() if len(x)] - if any(len(x) > 6 for x in lb): # is segment - classes = np.array([x[0] for x in lb], dtype=np.float32) - segments = [np.array(x[1:], dtype=np.float32).reshape(-1, 2) for x in lb] # (cls, xy1...) - lb = np.concatenate((classes.reshape(-1, 1), segments2boxes(segments)), 1) # (cls, xywh) - lb = np.array(lb, dtype=np.float32) - nl = len(lb) - if nl: - assert lb.shape[1] == 5, f'labels require 5 columns, {lb.shape[1]} columns detected' - assert (lb >= 0).all(), f'negative label values {lb[lb < 0]}' - assert (lb[:, 1:] <= 1).all(), f'non-normalized or out of bounds coordinates {lb[:, 1:][lb[:, 1:] > 1]}' - _, i = np.unique(lb, axis=0, return_index=True) - if len(i) < nl: # duplicate row check - lb = lb[i] # remove duplicates - if segments: - segments = [segments[x] for x in i] - msg = f'{prefix}WARNING ⚠️ {im_file}: {nl - len(i)} duplicate labels removed' - else: - ne = 1 # label empty - lb = np.zeros((0, 5), dtype=np.float32) - else: - nm = 1 # label missing - lb = np.zeros((0, 5), dtype=np.float32) - return im_file, lb, shape, segments, nm, nf, ne, nc, msg - except Exception as e: - nc = 1 - msg = f'{prefix}WARNING ⚠️ {im_file}: ignoring corrupt image/label: {e}' - return [None, None, None, None, nm, nf, ne, nc, msg] - - -class HUBDatasetStats(): - """ Class for generating HUB dataset JSON and `-hub` dataset directory - - Arguments - path: Path to data.yaml or data.zip (with data.yaml inside data.zip) - autodownload: Attempt to download dataset if not found locally - - Usage - from utils.dataloaders import HUBDatasetStats - stats = HUBDatasetStats('coco128.yaml', autodownload=True) # usage 1 - stats = HUBDatasetStats('path/to/coco128.zip') # usage 2 - stats.get_json(save=False) - stats.process_images() - """ - - def __init__(self, path='coco128.yaml', autodownload=False): - # Initialize class - zipped, data_dir, yaml_path = self._unzip(Path(path)) - try: - with open(check_yaml(yaml_path), errors='ignore') as f: - data = yaml.safe_load(f) # data dict - if zipped: - data['path'] = data_dir - except Exception as e: - raise Exception("error/HUB/dataset_stats/yaml_load") from e - - check_dataset(data, autodownload) # download dataset if missing - self.hub_dir = Path(data['path'] + '-hub') - self.im_dir = self.hub_dir / 'images' - self.im_dir.mkdir(parents=True, exist_ok=True) # makes /images - self.stats = {'nc': data['nc'], 'names': list(data['names'].values())} # statistics dictionary - self.data = data - - @staticmethod - def _find_yaml(dir): - # Return data.yaml file - files = list(dir.glob('*.yaml')) or list(dir.rglob('*.yaml')) # try root level first and then recursive - assert files, f'No *.yaml file found in {dir}' - if len(files) > 1: - files = [f for f in files if f.stem == dir.stem] # prefer *.yaml files that match dir name - assert files, f'Multiple *.yaml files found in 
{dir}, only 1 *.yaml file allowed' - assert len(files) == 1, f'Multiple *.yaml files found: {files}, only 1 *.yaml file allowed in {dir}' - return files[0] - - def _unzip(self, path): - # Unzip data.zip - if not str(path).endswith('.zip'): # path is data.yaml - return False, None, path - assert Path(path).is_file(), f'Error unzipping {path}, file not found' - unzip_file(path, path=path.parent) - dir = path.with_suffix('') # dataset directory == zip name - assert dir.is_dir(), f'Error unzipping {path}, {dir} not found. path/to/abc.zip MUST unzip to path/to/abc/' - return True, str(dir), self._find_yaml(dir) # zipped, data_dir, yaml_path - - def _hub_ops(self, f, max_dim=1920): - # HUB ops for 1 image 'f': resize and save at reduced quality in /dataset-hub for web/app viewing - f_new = self.im_dir / Path(f).name # dataset-hub image filename - try: # use PIL - im = Image.open(f) - r = max_dim / max(im.height, im.width) # ratio - if r < 1.0: # image too large - im = im.resize((int(im.width * r), int(im.height * r))) - im.save(f_new, 'JPEG', quality=50, optimize=True) # save - except Exception as e: # use OpenCV - LOGGER.info(f'WARNING ⚠️ HUB ops PIL failure {f}: {e}') - im = cv2.imread(f) - im_height, im_width = im.shape[:2] - r = max_dim / max(im_height, im_width) # ratio - if r < 1.0: # image too large - im = cv2.resize(im, (int(im_width * r), int(im_height * r)), interpolation=cv2.INTER_AREA) - cv2.imwrite(str(f_new), im) - - def get_json(self, save=False, verbose=False): - # Return dataset JSON for Ultralytics HUB - def _round(labels): - # Update labels to integer class and 6 decimal place floats - return [[int(c), *(round(x, 4) for x in points)] for c, *points in labels] - - for split in 'train', 'val', 'test': - if self.data.get(split) is None: - self.stats[split] = None # i.e. no test set - continue - dataset = LoadImagesAndLabels(self.data[split]) # load dataset - x = np.array([ - np.bincount(label[:, 0].astype(int), minlength=self.data['nc']) - for label in tqdm(dataset.labels, total=dataset.n, desc='Statistics')]) # shape(128x80) - self.stats[split] = { - 'instance_stats': { - 'total': int(x.sum()), - 'per_class': x.sum(0).tolist()}, - 'image_stats': { - 'total': dataset.n, - 'unlabelled': int(np.all(x == 0, 1).sum()), - 'per_class': (x > 0).sum(0).tolist()}, - 'labels': [{ - str(Path(k).name): _round(v.tolist())} for k, v in zip(dataset.im_files, dataset.labels)]} - - # Save, print and return - if save: - stats_path = self.hub_dir / 'stats.json' - print(f'Saving {stats_path.resolve()}...') - with open(stats_path, 'w') as f: - json.dump(self.stats, f) # save stats.json - if verbose: - print(json.dumps(self.stats, indent=2, sort_keys=False)) - return self.stats - - def process_images(self): - # Compress images for Ultralytics HUB - for split in 'train', 'val', 'test': - if self.data.get(split) is None: - continue - dataset = LoadImagesAndLabels(self.data[split]) # load dataset - desc = f'{split} images' - for _ in tqdm(ThreadPool(NUM_THREADS).imap(self._hub_ops, dataset.im_files), total=dataset.n, desc=desc): - pass - print(f'Done. All images saved to {self.im_dir}') - return self.im_dir - - -# Classification dataloaders ------------------------------------------------------------------------------------------- -class ClassificationDataset(torchvision.datasets.ImageFolder): - """ - YOLOv5 Classification Dataset. 
- Arguments - root: Dataset path - transform: torchvision transforms, used by default - album_transform: Albumentations transforms, used if installed - """ - - def __init__(self, root, augment, imgsz, cache=False): - super().__init__(root=root) - self.torch_transforms = classify_transforms(imgsz) - self.album_transforms = classify_albumentations(augment, imgsz) if augment else None - self.cache_ram = cache is True or cache == 'ram' - self.cache_disk = cache == 'disk' - self.samples = [list(x) + [Path(x[0]).with_suffix('.npy'), None] for x in self.samples] # file, index, npy, im - - def __getitem__(self, i): - f, j, fn, im = self.samples[i] # filename, index, filename.with_suffix('.npy'), image - if self.cache_ram and im is None: - im = self.samples[i][3] = cv2.imread(f) - elif self.cache_disk: - if not fn.exists(): # load npy - np.save(fn.as_posix(), cv2.imread(f)) - im = np.load(fn) - else: # read image - im = cv2.imread(f) # BGR - if self.album_transforms: - sample = self.album_transforms(image=cv2.cvtColor(im, cv2.COLOR_BGR2RGB))["image"] - else: - sample = self.torch_transforms(im) - return sample, j - - -def create_classification_dataloader(path, - imgsz=224, - batch_size=16, - augment=True, - cache=False, - rank=-1, - workers=8, - shuffle=True): - # Returns Dataloader object to be used with YOLOv5 Classifier - with torch_distributed_zero_first(rank): # init dataset *.cache only once if DDP - dataset = ClassificationDataset(root=path, imgsz=imgsz, augment=augment, cache=cache) - batch_size = min(batch_size, len(dataset)) - nd = torch.cuda.device_count() - nw = min([os.cpu_count() // max(nd, 1), batch_size if batch_size > 1 else 0, workers]) - sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle) - generator = torch.Generator() - generator.manual_seed(6148914691236517205 + RANK) - return InfiniteDataLoader(dataset, - batch_size=batch_size, - shuffle=shuffle and sampler is None, - num_workers=nw, - sampler=sampler, - pin_memory=PIN_MEMORY, - worker_init_fn=seed_worker, - generator=generator) # or DataLoader(persistent_workers=True) diff --git a/spaces/mohamedemam/bert_sentaces_similarty/bert_gradio.py b/spaces/mohamedemam/bert_sentaces_similarty/bert_gradio.py deleted file mode 100644 index bdf524fd58806256e931d7ca6563e55ee24b48a3..0000000000000000000000000000000000000000 --- a/spaces/mohamedemam/bert_sentaces_similarty/bert_gradio.py +++ /dev/null @@ -1,160 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# In[2]: - - -import pandas as pd -import streamlit as st -import torch -from torch.utils.data import DataLoader ,Dataset -from transformers import AutoTokenizer,BertForQuestionAnswering,AutoModel - - -# In[3]: - - -from transformers import AutoTokenizer,BertForQuestionAnswering,AutoModel -model_checkpoint = "bert-base-uncased" -tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) - - -# In[4]: - - -from transformers import DataCollatorWithPadding - - -# In[5]: - - -torch.set_default_device('cpu') - - -# In[6]: - - -from transformers import BertTokenizer, BertModel - - -# In[7]: - - -class bert_compare(torch.nn.Module): - def __init__ (self): - super(bert_compare,self).__init__() - self.bert=BertModel.from_pretrained("bert-base-uncased") - - self.Linear=torch.nn.Linear(768,30 ) - self.elu=torch.nn.ELU() - self.Linear2=torch.nn.Linear(280 ,1 ) - self.cnn1=torch.nn.Conv1d(768,256,kernel_size=2) - self.cnn2=torch.nn.Conv1d(256,10,kernel_size=2) - - self.relu=torch.nn.ReLU() - def forward(self,x): - x=self.bert(**x).last_hidden_state - 
x=x.permute(0,2,1) - x=self.cnn1(x) - x=self.relu(x) - x=self.cnn2(x) - x=torch.nn.Flatten()(x) - x=self.Linear2(x) - return x - - -# In[8]: - - -model=bert_compare() -optim=torch.optim.AdamW(model.parameters(),lr=5e-5) -loss=torch.nn.BCEWithLogitsLoss() - - -# In[9]: - - -def tok(x,y): - out=tokenizer(x,y, truncation=True, max_length=30,padding='max_length', return_tensors="pt") - out={key:value for key,value in out.items()} - return out -h=tok('my name is mohamed','what is your name') -model(h) - - -# In[10]: - - -model=torch.load('Downloads/model9.pth',map_location=torch.device('cpu')) - - -# In[11]: - - -word=['my name is mohamed ', "How do I read and find my YouTube comments?" ,"How can I see all my Youtube comments?","How can Internet speed be increased by hacking through DNS?","What is the step by step guide to invest in share market in india?","where is capital of egypt?",'when did you born ','what is your name',"what is capital of egypt",'how old are you'] - - -# In[19]: - - -import gradio as gr - - -# In[12]: - - -def tok(x,y): - out=tokenizer(x,y, truncation=True, max_length=30,padding='max_length', return_tensors="pt") - out={key:value for key,value in out.items()} - return out -for i in range(9): - r=torch.randint(len(word),size=(1,)) - r2=torch.randint(len(word),size=(1,)) - h=tok(word[r],word[r2]) - e=model(h) - ans= 'the same' if int(torch.sigmoid( e)>=.5) else 'not the same' - print (f'{word[r]} is {ans} {word[r2]}' ) - - -# In[32]: - - -def sentance_calcute(sentance1,sentance2) ->(int,str) : - out=tokenizer(sentance1,sentance2, truncation=True, max_length=30,padding='max_length', return_tensors="pt") - h={key:value for key,value in out.items()} - e=model(h) - ans=torch.sigmoid( e) - ans2='Same' if ans>=.5 else 'Not same' - return ans,ans2 - - -# In[46]: - - -input_color = "lightred" # Change the color of the input fields - -iface = gr.Interface( - fn=sentance_calcute, - inputs=["text", "text"], - outputs=["number", "text"], - layout="horizontal", - title="Sentence Similarity Checker", - description="Enter two sentences to check their similarity.", - examples=[ - ["The sun is in the west.", "The sun goes down in the west."], - ["Why is biodiversity important for ecosystems?", "She is extremely joyful."], - ["The cat is sleeping on the chair.", "The cat is napping on the chair."] - ,["Why is biodiversity important for ecosystems?", "When did the Renaissance period begin?"] - ], - -) - -# Launch the interface -iface.launch() - - -# In[ ]: - - - - diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py b/spaces/mshukor/UnIVAL/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py deleted file mode 100644 index 66d50d07ff2067b802b90a2aadd88df23153830a..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import sys - -import numpy as np - - -aggregate_funcs = { - "std": np.std, - "var": np.var, - "median": np.median, - "mean": np.mean, - "min": np.min, - "max": np.max, -} - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--input_file", required=True, type=str) - parser.add_argument("-n", "--repeat_times", required=True, type=int) - parser.add_argument("-o", "--output_file", required=False) - parser.add_argument("-f", "--func", required=False, default="mean") - args = parser.parse_args() - - stream = open(args.output_file, "w") if args.output_file else sys.stdout - - segment_scores = [] - for line in open(args.input_file): - segment_scores.append(float(line.strip())) - if len(segment_scores) == args.repeat_times: - stream.write("{}\n".format(aggregate_funcs[args.func](segment_scores))) - segment_scores = [] - - -if __name__ == "__main__": - main() diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/lru_cache_dataset.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/lru_cache_dataset.py deleted file mode 100644 index a7854ac1701392754ce5795cafe9c634671aebdf..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/lru_cache_dataset.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from functools import lru_cache - -from . import BaseWrapperDataset - - -class LRUCacheDataset(BaseWrapperDataset): - def __init__(self, dataset, token=None): - super().__init__(dataset) - - @lru_cache(maxsize=8) - def __getitem__(self, index): - return self.dataset[index] - - @lru_cache(maxsize=8) - def collater(self, samples): - return self.dataset.collater(samples) diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/eval/eval_vqa_lambdas.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/eval/eval_vqa_lambdas.sh deleted file mode 100644 index 7c2e5f48772bcc777962e5d3d2e4d7e6de066d05..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/ratatouille/eval/eval_vqa_lambdas.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=eval_vqa_base_best_ratacapgroundsnlivqalr5e5_bestlambdas -#SBATCH --nodes=2 -#SBATCH --ntasks=2 -#SBATCH --gpus=16 -#SBATCH --threads-per-core=2 -#SBATCH --gpu-bind=closest -#SBATCH -C MI250 -#SBATCH -A gda2204 -#SBATCH --time=24:00:00 -#SBATCH --mail-type=END,FAIL -#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/eval_vqa_base_best_ratacapgroundsnlivqalr5e5_bestlambdas.out -#SBATCH --exclusive -#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr - - -cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts -source /lus/home/NAT/gda2204/mshukor/.bashrc - -conda activate main - - -rm core-python3* - - -srun -l -N 2 -n 2 -c 128 --gpus=16 bash averaging/ratatouille/eval/eval_vqa_lambdas.sh - - diff --git a/spaces/mueller-franzes/medfusion-app/tests/dataset/test_dataset_airogs_prep.py b/spaces/mueller-franzes/medfusion-app/tests/dataset/test_dataset_airogs_prep.py deleted file mode 100644 index b0ef97e45d5de6a5748e91307c3a0245c4f0a0d5..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/tests/dataset/test_dataset_airogs_prep.py +++ /dev/null @@ -1,23 +0,0 @@ -from medical_diffusion.data.datasets import SimpleDataset2D, AIROGSDataset - -import torch.nn.functional as F - -import matplotlib.pyplot as plt -from pathlib import 
Path -from torchvision.utils import save_image - - -path_out = Path('/mnt/hdd/datasets/eye/AIROGS/data_256x256/') -path_out.mkdir(parents=True, exist_ok=True) - -ds = AIROGSDataset( - crawler_ext='jpg', - image_resize=256, - image_crop=(256, 256), - path_root='/mnt/hdd/datasets/eye/AIROGS/data/', # '/home/gustav/Documents/datasets/AIROGS/dataset', '/mnt/hdd/datasets/eye/AIROGS/data/' -) - -weights = ds.get_weights() - -for img in ds: - img['source'].save(path_out/f"{img['uid']}.jpg") \ No newline at end of file diff --git a/spaces/mygyasir/genious_bgremover/carvekit/trimap/add_ops.py b/spaces/mygyasir/genious_bgremover/carvekit/trimap/add_ops.py deleted file mode 100644 index dfb37ca63680a515bcb4a0e4d50823f2ba6f0685..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/genious_bgremover/carvekit/trimap/add_ops.py +++ /dev/null @@ -1,91 +0,0 @@ -""" -Source url: https://github.com/OPHoperHPO/image-background-remove-tool -Author: Nikita Selin (OPHoperHPO)[https://github.com/OPHoperHPO]. -License: Apache License 2.0 -""" -import cv2 -import numpy as np -from PIL import Image - - -def prob_filter(mask: Image.Image, prob_threshold=231) -> Image.Image: - """ - Applies a filter to the mask by the probability of locating an object in the object area. - - Args: - prob_threshold: Threshold of probability for mark area as background. - mask: Predicted object mask - - Raises: - ValueError if mask or trimap has wrong color mode - - Returns: - Generated trimap for image. - """ - if mask.mode != "L": - raise ValueError("Input mask has wrong color mode.") - # noinspection PyTypeChecker - mask_array = np.array(mask) - mask_array[mask_array > prob_threshold] = 255 # Probability filter for mask - mask_array[mask_array <= prob_threshold] = 0 - return Image.fromarray(mask_array).convert("L") - - -def prob_as_unknown_area( - trimap: Image.Image, mask: Image.Image, prob_threshold=255 -) -> Image.Image: - """ - Marks any uncertainty in the seg mask as an unknown region. - - Args: - prob_threshold: Threshold of probability for mark area as unknown. - trimap: Generated trimap. - mask: Predicted object mask - - Raises: - ValueError if mask or trimap has wrong color mode - - Returns: - Generated trimap for image. - """ - if mask.mode != "L" or trimap.mode != "L": - raise ValueError("Input mask has wrong color mode.") - # noinspection PyTypeChecker - mask_array = np.array(mask) - # noinspection PyTypeChecker - trimap_array = np.array(trimap) - trimap_array[np.logical_and(mask_array <= prob_threshold, mask_array > 0)] = 127 - return Image.fromarray(trimap_array).convert("L") - - -def post_erosion(trimap: Image.Image, erosion_iters=1) -> Image.Image: - """ - Performs erosion on the mask and marks the resulting area as an unknown region. - - Args: - erosion_iters: The number of iterations of erosion that - the object's mask will be subjected to before forming an unknown area - trimap: Generated trimap. - mask: Predicted object mask - - Returns: - Generated trimap for image. 
- """ - if trimap.mode != "L": - raise ValueError("Input mask has wrong color mode.") - # noinspection PyTypeChecker - trimap_array = np.array(trimap) - if erosion_iters > 0: - without_unknown_area = trimap_array.copy() - without_unknown_area[without_unknown_area == 127] = 0 - - erosion_kernel = np.ones((3, 3), np.uint8) - erode = cv2.erode( - without_unknown_area, erosion_kernel, iterations=erosion_iters - ) - erode = np.where(erode == 0, 0, without_unknown_area) - trimap_array[np.logical_and(erode == 0, without_unknown_area > 0)] = 127 - erode = trimap_array.copy() - else: - erode = trimap_array.copy() - return Image.fromarray(erode).convert("L") diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/side_by_side.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/side_by_side.py deleted file mode 100644 index 8ba7a42a3b8597552b8002d1eb245d5776aff7f7..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/side_by_side.py +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env python3 -import os -import random - -import cv2 -import numpy as np - -from saicinpainting.evaluation.data import PrecomputedInpaintingResultsDataset -from saicinpainting.evaluation.utils import load_yaml -from saicinpainting.training.visualizers.base import visualize_mask_and_images - - -def main(args): - config = load_yaml(args.config) - - datasets = [PrecomputedInpaintingResultsDataset(args.datadir, cur_predictdir, **config.dataset_kwargs) - for cur_predictdir in args.predictdirs] - assert len({len(ds) for ds in datasets}) == 1 - len_first = len(datasets[0]) - - indices = list(range(len_first)) - if len_first > args.max_n: - indices = sorted(random.sample(indices, args.max_n)) - - os.makedirs(args.outpath, exist_ok=True) - - filename2i = {} - - keys = ['image'] + [i for i in range(len(datasets))] - for img_i in indices: - try: - mask_fname = os.path.basename(datasets[0].mask_filenames[img_i]) - if mask_fname in filename2i: - filename2i[mask_fname] += 1 - idx = filename2i[mask_fname] - mask_fname_only, ext = os.path.split(mask_fname) - mask_fname = f'{mask_fname_only}_{idx}{ext}' - else: - filename2i[mask_fname] = 1 - - cur_vis_dict = datasets[0][img_i] - for ds_i, ds in enumerate(datasets): - cur_vis_dict[ds_i] = ds[img_i]['inpainted'] - - vis_img = visualize_mask_and_images(cur_vis_dict, keys, - last_without_mask=False, - mask_only_first=True, - black_mask=args.black) - vis_img = np.clip(vis_img * 255, 0, 255).astype('uint8') - - out_fname = os.path.join(args.outpath, mask_fname) - - - - vis_img = cv2.cvtColor(vis_img, cv2.COLOR_RGB2BGR) - cv2.imwrite(out_fname, vis_img) - except Exception as ex: - print(f'Could not process {img_i} due to {ex}') - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('--max-n', type=int, default=100, help='Maximum number of images to print') - aparser.add_argument('--black', action='store_true', help='Whether to fill mask on GT with black') - aparser.add_argument('config', type=str, help='Path to evaluation config (e.g. 
configs/eval1.yaml)') - aparser.add_argument('outpath', type=str, help='Where to put results') - aparser.add_argument('datadir', type=str, - help='Path to folder with images and masks') - aparser.add_argument('predictdirs', type=str, - nargs='+', - help='Path to folders with predicts') - - - main(aparser.parse_args()) diff --git a/spaces/nakas/MusicGenDemucs/audiocraft/models/loaders.py b/spaces/nakas/MusicGenDemucs/audiocraft/models/loaders.py deleted file mode 100644 index 97c662c3212b7695669cbfc5214ff2f099c3f319..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/audiocraft/models/loaders.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility functions to load from the checkpoints. -Each checkpoint is a torch.saved dict with the following keys: -- 'xp.cfg': the hydra config as dumped during training. This should be used - to rebuild the object using the audiocraft.models.builders functions, -- 'model_best_state': a readily loadable best state for the model, including - the conditioner. The model obtained from `xp.cfg` should be compatible - with this state dict. In the case of a LM, the encodec model would not be - bundled along but instead provided separately. - -Those functions also support loading from a remote location with the Torch Hub API. -They also support overriding some parameters, in particular the device and dtype -of the returned model. -""" - -from pathlib import Path -from huggingface_hub import hf_hub_download -import typing as tp -import os - -from omegaconf import OmegaConf -import torch - -from . import builders - - -HF_MODEL_CHECKPOINTS_MAP = { - "small": "facebook/musicgen-small", - "medium": "facebook/musicgen-medium", - "large": "facebook/musicgen-large", - "melody": "facebook/musicgen-melody", -} - - -def _get_state_dict( - file_or_url_or_id: tp.Union[Path, str], - filename: tp.Optional[str] = None, - device='cpu', - cache_dir: tp.Optional[str] = None, -): - # Return the state dict either from a file or url - file_or_url_or_id = str(file_or_url_or_id) - assert isinstance(file_or_url_or_id, str) - - if os.path.isfile(file_or_url_or_id): - return torch.load(file_or_url_or_id, map_location=device) - - if os.path.isdir(file_or_url_or_id): - file = f"{file_or_url_or_id}/{filename}" - return torch.load(file, map_location=device) - - elif file_or_url_or_id.startswith('https://'): - return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True) - - elif file_or_url_or_id in HF_MODEL_CHECKPOINTS_MAP: - assert filename is not None, "filename needs to be defined if using HF checkpoints" - - repo_id = HF_MODEL_CHECKPOINTS_MAP[file_or_url_or_id] - file = hf_hub_download(repo_id=repo_id, filename=filename, cache_dir=cache_dir) - return torch.load(file, map_location=device) - - else: - raise ValueError(f"{file_or_url_or_id} is not a valid name, path or link that can be loaded.") - - -def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - model = builders.get_compression_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - return model - - -def load_lm_model(file_or_url_or_id: 
tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - if cfg.device == 'cpu': - cfg.dtype = 'float32' - else: - cfg.dtype = 'float16' - model = builders.get_lm_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - model.cfg = cfg - return model diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/LICENSE.md b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/LICENSE.md deleted file mode 100644 index 261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64..0000000000000000000000000000000000000000 --- a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/LICENSE.md +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Eagles Disobey The Case For Inc.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Eagles Disobey The Case For Inc.md deleted file mode 100644 index ef90ae3d1b57f5fe0e45a6eb346fd5900e4e5a7c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Eagles Disobey The Case For Inc.md +++ /dev/null @@ -1,19 +0,0 @@ -
        -```html -

        Eagles Disobey: The Case For Inc

        -

        Eagles Disobey is a book and a website that claim to provide evidence of ancient life on Mars, specifically in a region called Inca City. The authors, B. J. Wolf and Dan B. Catselas Burisch, use image analysis and computer graphics to enhance the photos taken by NASA's Mariner 9 and Viking orbiters, and reveal what they believe are artificial structures, carvings, statues, and even faces of Martian beings.

        -

        The book, published in 1998, was based on Wolf's research as a graduate student at the University of Nevada, Las Vegas. Wolf claims to have discovered a striking resemblance between Inca City and Machu Picchu, the famous citadel of the Inca civilization in Peru. He also claims to have found other features that resemble Incan symbols, such as the condor, the puma, and the sun god Inti.

        -

        -

        Burisch, who co-authored the book and runs the website, is a controversial figure who claims to have worked as a microbiologist for the US government on top-secret projects involving extraterrestrial lifeforms. He also claims to have been involved in time travel experiments and to have witnessed the assassination of John F. Kennedy from a different timeline. Burisch has been accused of fabricating his credentials and his stories by many critics and skeptics.

        -

        Eagles Disobey has been dismissed by most mainstream scientists and experts as pseudoscience and conspiracy theory. They argue that the images used by Wolf and Burisch are low-resolution, noisy, and distorted by natural processes such as erosion, wind, and shadows. They also point out that there is no physical or biological evidence to support the existence of an ancient Martian civilization or any connection with the Incas.

        -

        However, Wolf and Burisch maintain that their work is based on rigorous analysis and scientific methods. They claim that they have uncovered a hidden truth that has been suppressed by NASA and other authorities for decades. They also claim that their work has humanitarian and spiritual implications, as they seek to raise awareness of the plight of refugees, indigenous peoples, and other oppressed groups on Earth.

        -``` - -```html -

        One of the most controversial aspects of Eagles Disobey is the claim that Inca City was visited by a secret mission in 1976, codenamed Project Preserve Destiny. According to Burisch, he was part of a team of scientists and military personnel who traveled to Mars using a device called the Looking Glass, which could manipulate space and time. He claims that they explored Inca City and collected samples of Martian artifacts and DNA.

        -

        Burisch also claims that he encountered a living Martian being, whom he named J-Rod. He says that J-Rod was a descendant of the original inhabitants of Inca City, who had survived underground after a cataclysmic event that wiped out most of the surface life. He says that J-Rod communicated with him telepathically and shared his knowledge and wisdom. He also says that J-Rod was related to him genetically, as part of a complex lineage that involved time travel and hybridization.

        -

        Wolf and Burisch say that they have evidence to support their claims, such as photos, videos, documents, and testimonies from other witnesses. They say that they have been threatened and harassed by the government and other forces who want to keep their discoveries secret. They say that they have decided to disobey the orders of their superiors and reveal the truth to the public, hence the title of their book and website.

        -```

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mihaela Bilic Sanatatea Are Gust Pdf 22.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mihaela Bilic Sanatatea Are Gust Pdf 22.md deleted file mode 100644 index 7cd3777d5fb450222e8fe66f683681d7ba68607d..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mihaela Bilic Sanatatea Are Gust Pdf 22.md +++ /dev/null @@ -1,14 +0,0 @@ -
        -

        Review: Sanatatea are gust by Mihaela Bilic

        -

        Sanatatea are gust (Health has taste) is a book by Mihaela Bilic, a Romanian nutritionist who wants to bring back joy and pleasure to the tables of Romanians. The book is a guide to healthy and balanced eating, based on scientific evidence and common sense. It debunks some myths about diets, calories, fats, carbohydrates, proteins, vitamins and minerals, and offers practical advice on how to choose, prepare and enjoy food without guilt or deprivation.

        -

        The book is divided into four parts: the first one explains the basic principles of nutrition and metabolism, the second one describes the different types of food and their role in the body, the third one presents some common health problems related to nutrition and how to prevent or treat them, and the fourth one offers some recipes and menus for different occasions and preferences. The book is written in a clear and engaging style, with examples, anecdotes, tips and illustrations. It also includes some tests and quizzes to help readers assess their own eating habits and needs.

        -

        -

        Sanatatea are gust is a useful and informative book for anyone who wants to learn more about nutrition and how to eat well for their health and well-being. It is not a diet book, but rather a lifestyle book that encourages readers to enjoy food as a source of nourishment, pleasure and social connection. The book is available in PDF format on Scribd[^2^] [^3^], where it has received positive ratings and reviews from readers who appreciated its content, style and message[^1^]. The book was first published in 2011 and has sold thousands of copies in Romania.

        - -

        The book also addresses some specific nutritional needs and challenges for different groups of people, such as women, men, seniors, pregnant women, babies, children, teenagers, vegetarians, and people with various health conditions. It explains how to adapt the diet to different seasons, situations and preferences, and how to cope with cravings, temptations and emotional eating. It also gives some tips on how to shop, cook and eat smartly and sustainably.

        -

        One of the main messages of the book is that health has taste, meaning that healthy food can and should be delicious, satisfying and enjoyable. The author encourages readers to rediscover the pleasure of eating and to appreciate the diversity and richness of food. She also emphasizes the importance of moderation, balance and variety in the diet, and warns against extreme or restrictive diets that can harm the body and the mind. She advocates for a positive and respectful attitude towards food and oneself, and for a holistic approach to health that includes physical, mental and emotional aspects.

        - -

        The book also covers some topics that are often controversial or confusing for many people, such as salt, sugar, artificial sweeteners, organic food, supplements, alcohol, fasting and holidays. It clarifies some misconceptions and myths about these topics and provides some guidelines on how to consume them wisely and safely. It also explains how different foods interact with each other and with medications, and how to avoid potential adverse effects or interactions.

        -

        Sanatatea are gust is a comprehensive and reliable source of information and inspiration for anyone who wants to improve their health and quality of life through nutrition. It is not a rigid or dogmatic book, but rather a flexible and realistic one that respects individual differences and preferences. It is not a book that tells you what to eat or not to eat, but rather a book that teaches you how to eat well and enjoy it.

        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Perfect Keylogger 1682 Download Full 18 [HOT].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Perfect Keylogger 1682 Download Full 18 [HOT].md deleted file mode 100644 index 2e945404ed9f52b022cc65371a336313865d7cc2..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Perfect Keylogger 1682 Download Full 18 [HOT].md +++ /dev/null @@ -1,13 +0,0 @@ - -

        How to Download Perfect Keylogger 1682 Full Version for Free

        -

        Perfect Keylogger is a powerful and stealthy piece of software that can record everything you type on your computer or smartphone. It can be used for various purposes, such as monitoring employees, children, spouses, or yourself. However, it can also be used by hackers and identity thieves to steal your personal and financial information, such as passwords, credit card numbers, bank accounts, and more.

        -

        If you want to download the full version of Perfect Keylogger 1682 for free, you should be careful and aware of the risks involved. There are many websites that claim to offer free downloads of Perfect Keylogger 1682, but most of them are fake or malicious. They may contain viruses, trojans, spyware, or other malware that can infect your device and compromise your security and privacy.

        -

        -

        One of the websites that claims to offer Perfect Keylogger 1682 full version for free is https://tatarulja2013.wixsite.com/triclorevi/post/perfect-keylogger-1682-download-full-18. However, this website is not trustworthy and may harm your device. It has a low trust rating and a poor reputation on various online platforms. It also contains suspicious links and advertisements that may redirect you to other malicious websites or download unwanted programs on your device.

        -

        Another website that claims to offer Perfect Keylogger 1682 full version for free is https://tourismcenter.ge/wp-content/uploads/2022/06/Perfect_Keylogger_1682_Download_Full_18.pdf. However, this website is also not reliable and may damage your device. It has a low trust score and a bad reputation on various online platforms. It also contains a PDF file that may contain malware or phishing content that can trick you into revealing your personal or financial information.

        -

        The third website that claims to offer Perfect Keylogger 1682 full version for free is https://sway.office.com/KlDrRZYHp1VtJRHk. However, this website is also not safe and may harm your device. It has a low trust rating and a poor reputation on various online platforms. It also contains a Microsoft Sway presentation that may contain malware or phishing content that can deceive you into giving away your personal or financial information.

        -

        Therefore, we do not recommend downloading the full version of Perfect Keylogger 1682 for free from any of these websites. They are not legitimate and may expose you to serious security and privacy risks. Instead, we suggest that you purchase Perfect Keylogger 1682 from its official website https://www.blazingtools.com/bpk.html. This way, you can get a genuine and safe product that can meet your needs and expectations.

        -

        Perfect Keylogger 1682 is a powerful and stealthy piece of software that can record everything you type on your computer or smartphone. However, it can also be used by hackers and identity thieves to steal your personal and financial information. Therefore, you should be careful and aware of the risks involved when downloading the full version of Perfect Keylogger 1682 for free from untrustworthy websites. Instead, we suggest that you purchase Perfect Keylogger 1682 from its official website https://www.blazingtools.com/bpk.html.

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/ngxson/poet-cat/frontend/pages/api/hello.ts b/spaces/ngxson/poet-cat/frontend/pages/api/hello.ts deleted file mode 100644 index f8bcc7e5caed177cb9ecfa7c02bc9a854b8ad1ff..0000000000000000000000000000000000000000 --- a/spaces/ngxson/poet-cat/frontend/pages/api/hello.ts +++ /dev/null @@ -1,13 +0,0 @@ -// Next.js API route support: https://nextjs.org/docs/api-routes/introduction -import type { NextApiRequest, NextApiResponse } from 'next' - -type Data = { - name: string -} - -export default function handler( - req: NextApiRequest, - res: NextApiResponse -) { - res.status(200).json({ name: 'John Doe' }) -} diff --git a/spaces/nikhil5678/turkey-syria-earthquake-tweets/README.md b/spaces/nikhil5678/turkey-syria-earthquake-tweets/README.md deleted file mode 100644 index 577d197dd266c9ebe1d059a73d09d0904ed1721e..0000000000000000000000000000000000000000 --- a/spaces/nikhil5678/turkey-syria-earthquake-tweets/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Turkey Syria Earthquake Tweets -emoji: ⚡ -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/predictors/chart_with_confidence.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/predictors/chart_with_confidence.py deleted file mode 100644 index 9c1cd6cc8fda56e831fbc02a8ffdd844866c0e4f..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/predictors/chart_with_confidence.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from . 
import DensePoseChartConfidencePredictorMixin, DensePoseChartPredictor -from .registry import DENSEPOSE_PREDICTOR_REGISTRY - - -@DENSEPOSE_PREDICTOR_REGISTRY.register() -class DensePoseChartWithConfidencePredictor( - DensePoseChartConfidencePredictorMixin, DensePoseChartPredictor -): - """ - Predictor that combines chart and chart confidence estimation - """ - - pass diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/default.css b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/default.css deleted file mode 100644 index ca4d1f4fedb43ee6bd949569a3dda538ae71f7a1..0000000000000000000000000000000000000000 --- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/default.css +++ /dev/null @@ -1,239 +0,0 @@ -.cke_emoji { - overflow-y: hidden; - height: 100%; -} - -.cke_button_icon {background-image: url(/sites/all/modules/custom/intsys/intsys_codepre_button/plugins/emoji/icons/emojipanel.png)} - -.cke_emoji-suggestion_item { - overflow: hidden; - text-overflow: ellipsis; - white-space: nowrap; - font-family: sans-serif, Arial, Verdana, "Trebuchet MS", "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; -} - -.cke_emoji-suggestion_item span { - font-family: "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; -} - -.cke_emoji-panel { - width: 310px; - height: 300px; - overflow: hidden; -} - -.cke_emoji-inner_panel { - width: 100%; -} - -.cke_emoji-panel_block a { - display: inline-block; - width: 100%; - padding-top: 2px; -} - -.cke_emoji-inner_panel > h2 { - font-size: 2em; -} - -/* TOP NAVIGATION */ -.cke_emoji-navigation_icons { - display: none; -} -.cke_emoji-inner_panel > nav { - width: 100%; - height: 24px; - margin-top: 10px; - margin-bottom: 6px; - padding-bottom: 4px; - border-bottom: 1px solid #d1d1d1; -} -.cke_emoji-inner_panel > nav > ul { - margin-left: 10px; - margin-right: 10px; - margin-top: 8px; - padding: 0; - list-style-type: none; - height: 24px; -} - -.cke_emoji-inner_panel > nav li { - display: inline-block; - width: 24px; - height: auto; - margin: 0 6px; - text-align: center; -} - -.cke_browser_ie .cke_emoji-inner_panel > nav li { - height: 22px; -} - -.cke_emoji-inner_panel li svg { - opacity: 0.4; - width: 80%; -} - -.cke_emoji-inner_panel li span { - opacity: 0.4; -} - -.cke_emoji-inner_panel li:hover svg, .cke_emoji-inner_panel li:hover span{ - opacity: 1; -} - -.cke_emoji-inner_panel .active { - border-bottom: 5px solid rgba(44, 195, 255, 1); -} - -.cke_emoji-navigation_item span { - width: 21px; - height: 21px; - display: inline-block; -} - -/* SEARCHBOX */ -.cke_emoji-search { - position: relative; - height: 25px; - display: block; - border: 1px solid #d1d1d1; - margin-left: 10px; - margin-right: 10px; -} - -.cke_emoji-search .cke_emoji-search_loupe { - position: absolute; - top: 6px; - left: 6px; - display: inline-block; - width: 14px; - height: 14px; - opacity: 0.4; -} - -.cke_rtl .cke_emoji-search .cke_emoji-search_loupe { - left: auto; - right: 6px; -} - -.cke_emoji-search span { - background-repeat: no-repeat; - background-position: -60px -15px; - background-size: 75px 30px; -} - -.cke_emoji-search input { - -webkit-appearance: none; - border: none; - width: 100%; - height: 100%; - padding-left: 25px; - padding-right: 10px; - margin-left: 0 -} - -.cke_rtl .cke_emoji-search input { - padding-left: 10px; - padding-right: 25px; - margin-right: 0; -} - -/* EMOJI */ 
-.cke_emoji-outer_emoji_block { - height: 180px; - overflow-x: hidden; - overflow-y: auto; - margin-top: 5px; - margin-left: 10px; - margin-right: 10px; - padding-left: 2px; - padding-right: 2px; -} - -.cke_emoji-outer_emoji_block h2 { - font-size: 1.3em; - font-weight: 600; - margin: 5px 0 3px 0; -} - -.cke_emoji-outer_emoji_block ul { - margin: 0 0 15px 0; - padding: 0; - list-style-type: none; -} - -.cke_emoji-item { - font-family: "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; - list-style-type: none; - display: inline-table; - width: 36px; - height: 36px; - font-size: 1.8em; - text-align: center; -} - -.cke_emoji-item:hover { - border-radius: 10%; - background-color: rgba(44, 195, 255, 0.2); -} - -.cke_emoji-item > a { - text-decoration: none; - display: table-cell; - vertical-align: middle; -} - -.cke_emoji-outer_emoji_block .hidden { - display: none -} - -/* STATUS BAR */ -.cke_emoji-status_bar { - height: 34px; - padding-left: 10px; - padding-right: 10px; - padding-top: 3px; - margin-top: 3px; - border-top: 1px solid #d1d1d1; - line-height: 1; -} - -.cke_emoji-status_bar p { - margin-top: 3px; -} - -.cke_emoji-status_bar > div { - display: inline-block; - margin-top: 3px; -} - -.cke_emoji-status_icon { - font-family: "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; - font-size: 2.2em; - float: left; - margin-right: 10px; -} - -.cke_rtl .cke_emoji-status_icon { - float: right; - margin-right: 0px; - margin-left: 10px; -} - -.cke_emoji-panel_block p { - margin-bottom: 0; -} - -p.cke_emoji-status_description { - font-weight: 600; -} - -p.cke_emoji-status_full_name { - font-size: 0.8em; - color: #d1d1d1; -} - -.cke_emoji-inner_panel a:focus, .cke_emoji-inner_panel input:focus { - outline: 2px solid #139FF7; -} diff --git a/spaces/oguzakif/video-object-remover/SiamMask/data/det/visual.py b/spaces/oguzakif/video-object-remover/SiamMask/data/det/visual.py deleted file mode 100644 index 254133baed59402f7fee80df8794c3998ec3d2a9..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/data/det/visual.py +++ /dev/null @@ -1,49 +0,0 @@ -# -------------------------------------------------------- -# SiamMask -# Licensed under The MIT License -# Written by Qiang Wang (wangqiang2015 at ia.ac.cn) -# -------------------------------------------------------- -from os.path import join -from os import listdir -import cv2 -import numpy as np -import glob -import xml.etree.ElementTree as ET - -visual = False -color_bar = np.random.randint(0, 255, (90, 3)) - -VID_base_path = './ILSVRC2015' -ann_base_path = join(VID_base_path, 'Annotations/DET/train/') -img_base_path = join(VID_base_path, 'Data/DET/train/') -sub_sets = sorted({'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i'}) -for sub_set in sub_sets: - sub_set_base_path = join(ann_base_path, sub_set) - class_names = sorted(listdir(sub_set_base_path)) - for vi, class_name in enumerate(class_names): - print('subset: {} video id: {:04d} / {:04d}'.format(sub_set, vi, len(class_names))) - - class_base_path = join(sub_set_base_path, class_name) - xmls = sorted(glob.glob(join(class_base_path, '*.xml'))) - for xml in xmls: - f = dict() - xmltree = ET.parse(xml) - size = xmltree.findall('size')[0] - frame_sz = [int(it.text) for it in size] - objects = xmltree.findall('object') - # if visual: - img_path = xml.replace('xml', 'JPEG').replace('Annotations', 'Data') - im = cv2.imread(img_path) - for object_iter in objects: - bndbox = object_iter.find('bndbox') - bbox = [int(bndbox.find('xmin').text), 
int(bndbox.find('ymin').text), - int(bndbox.find('xmax').text), int(bndbox.find('ymax').text)] - if visual: - pt1 = (int(bbox[0]), int(bbox[1])) - pt2 = (int(bbox[2]), int(bbox[3])) - cv2.rectangle(im, pt1, pt2, color_bar[vi], 3) - if visual: - cv2.imshow('img', im) - cv2.waitKey(500) - -print('done!') diff --git a/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/async_components/login.py b/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/async_components/login.py deleted file mode 100644 index 59f3542ed73e5e2e40ae9900bafb61d5665104a4..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/lib/itchat/async_components/login.py +++ /dev/null @@ -1,422 +0,0 @@ -import asyncio -import os, time, re, io -import threading -import json -import random -import traceback -import logging -try: - from httplib import BadStatusLine -except ImportError: - from http.client import BadStatusLine - -import requests # type: ignore -from pyqrcode import QRCode - -from .. import config, utils -from ..returnvalues import ReturnValue -from ..storage.templates import wrap_user_dict -from .contact import update_local_chatrooms, update_local_friends -from .messages import produce_msg - -logger = logging.getLogger('itchat') - - -def load_login(core): - core.login = login - core.get_QRuuid = get_QRuuid - core.get_QR = get_QR - core.check_login = check_login - core.web_init = web_init - core.show_mobile_login = show_mobile_login - core.start_receiving = start_receiving - core.get_msg = get_msg - core.logout = logout - -async def login(self, enableCmdQR=False, picDir=None, qrCallback=None, EventScanPayload=None,ScanStatus=None,event_stream=None, - loginCallback=None, exitCallback=None): - if self.alive or self.isLogging: - logger.warning('itchat has already logged in.') - return - self.isLogging = True - - while self.isLogging: - uuid = await push_login(self) - if uuid: - payload = EventScanPayload( - status=ScanStatus.Waiting, - qrcode=f"qrcode/https://login.weixin.qq.com/l/{uuid}" - ) - event_stream.emit('scan', payload) - await asyncio.sleep(0.1) - else: - logger.info('Getting uuid of QR code.') - self.get_QRuuid() - payload = EventScanPayload( - status=ScanStatus.Waiting, - qrcode=f"https://login.weixin.qq.com/l/{self.uuid}" - ) - print(f"https://wechaty.js.org/qrcode/https://login.weixin.qq.com/l/{self.uuid}") - event_stream.emit('scan', payload) - await asyncio.sleep(0.1) - # logger.info('Please scan the QR code to log in.') - isLoggedIn = False - while not isLoggedIn: - status = await self.check_login() - # if hasattr(qrCallback, '__call__'): - # await qrCallback(uuid=self.uuid, status=status, qrcode=self.qrStorage.getvalue()) - if status == '200': - isLoggedIn = True - payload = EventScanPayload( - status=ScanStatus.Scanned, - qrcode=f"https://login.weixin.qq.com/l/{self.uuid}" - ) - event_stream.emit('scan', payload) - await asyncio.sleep(0.1) - elif status == '201': - if isLoggedIn is not None: - logger.info('Please press confirm on your phone.') - isLoggedIn = None - payload = EventScanPayload( - status=ScanStatus.Waiting, - qrcode=f"https://login.weixin.qq.com/l/{self.uuid}" - ) - event_stream.emit('scan', payload) - await asyncio.sleep(0.1) - elif status != '408': - payload = EventScanPayload( - status=ScanStatus.Cancel, - qrcode=f"https://login.weixin.qq.com/l/{self.uuid}" - ) - event_stream.emit('scan', payload) - await asyncio.sleep(0.1) - break - if isLoggedIn: - payload = EventScanPayload( - status=ScanStatus.Confirmed, - qrcode=f"https://login.weixin.qq.com/l/{self.uuid}" - ) - 
event_stream.emit('scan', payload) - await asyncio.sleep(0.1) - break - elif self.isLogging: - logger.info('Log in time out, reloading QR code.') - payload = EventScanPayload( - status=ScanStatus.Timeout, - qrcode=f"https://login.weixin.qq.com/l/{self.uuid}" - ) - event_stream.emit('scan', payload) - await asyncio.sleep(0.1) - else: - return - logger.info('Loading the contact, this may take a little while.') - await self.web_init() - await self.show_mobile_login() - self.get_contact(True) - if hasattr(loginCallback, '__call__'): - r = await loginCallback(self.storageClass.userName) - else: - utils.clear_screen() - if os.path.exists(picDir or config.DEFAULT_QR): - os.remove(picDir or config.DEFAULT_QR) - logger.info('Login successfully as %s' % self.storageClass.nickName) - await self.start_receiving(exitCallback) - self.isLogging = False - -async def push_login(core): - cookiesDict = core.s.cookies.get_dict() - if 'wxuin' in cookiesDict: - url = '%s/cgi-bin/mmwebwx-bin/webwxpushloginurl?uin=%s' % ( - config.BASE_URL, cookiesDict['wxuin']) - headers = { 'User-Agent' : config.USER_AGENT} - r = core.s.get(url, headers=headers).json() - if 'uuid' in r and r.get('ret') in (0, '0'): - core.uuid = r['uuid'] - return r['uuid'] - return False - -def get_QRuuid(self): - url = '%s/jslogin' % config.BASE_URL - params = { - 'appid' : 'wx782c26e4c19acffb', - 'fun' : 'new', - 'redirect_uri' : 'https://wx.qq.com/cgi-bin/mmwebwx-bin/webwxnewloginpage?mod=desktop', - 'lang' : 'zh_CN' } - headers = { 'User-Agent' : config.USER_AGENT} - r = self.s.get(url, params=params, headers=headers) - regx = r'window.QRLogin.code = (\d+); window.QRLogin.uuid = "(\S+?)";' - data = re.search(regx, r.text) - if data and data.group(1) == '200': - self.uuid = data.group(2) - return self.uuid - -async def get_QR(self, uuid=None, enableCmdQR=False, picDir=None, qrCallback=None): - uuid = uuid or self.uuid - picDir = picDir or config.DEFAULT_QR - qrStorage = io.BytesIO() - qrCode = QRCode('https://login.weixin.qq.com/l/' + uuid) - qrCode.png(qrStorage, scale=10) - if hasattr(qrCallback, '__call__'): - await qrCallback(uuid=uuid, status='0', qrcode=qrStorage.getvalue()) - else: - with open(picDir, 'wb') as f: - f.write(qrStorage.getvalue()) - if enableCmdQR: - utils.print_cmd_qr(qrCode.text(1), enableCmdQR=enableCmdQR) - else: - utils.print_qr(picDir) - return qrStorage - -async def check_login(self, uuid=None): - uuid = uuid or self.uuid - url = '%s/cgi-bin/mmwebwx-bin/login' % config.BASE_URL - localTime = int(time.time()) - params = 'loginicon=true&uuid=%s&tip=1&r=%s&_=%s' % ( - uuid, int(-localTime / 1579), localTime) - headers = { 'User-Agent' : config.USER_AGENT} - r = self.s.get(url, params=params, headers=headers) - regx = r'window.code=(\d+)' - data = re.search(regx, r.text) - if data and data.group(1) == '200': - if await process_login_info(self, r.text): - return '200' - else: - return '400' - elif data: - return data.group(1) - else: - return '400' - -async def process_login_info(core, loginContent): - ''' when finish login (scanning qrcode) - * syncUrl and fileUploadingUrl will be fetched - * deviceid and msgid will be generated - * skey, wxsid, wxuin, pass_ticket will be fetched - ''' - regx = r'window.redirect_uri="(\S+)";' - core.loginInfo['url'] = re.search(regx, loginContent).group(1) - headers = { 'User-Agent' : config.USER_AGENT, - 'client-version' : config.UOS_PATCH_CLIENT_VERSION, - 'extspam' : config.UOS_PATCH_EXTSPAM, - 'referer' : 'https://wx.qq.com/?&lang=zh_CN&target=t' - } - r = 
core.s.get(core.loginInfo['url'], headers=headers, allow_redirects=False) - core.loginInfo['url'] = core.loginInfo['url'][:core.loginInfo['url'].rfind('/')] - for indexUrl, detailedUrl in ( - ("wx2.qq.com" , ("file.wx2.qq.com", "webpush.wx2.qq.com")), - ("wx8.qq.com" , ("file.wx8.qq.com", "webpush.wx8.qq.com")), - ("qq.com" , ("file.wx.qq.com", "webpush.wx.qq.com")), - ("web2.wechat.com" , ("file.web2.wechat.com", "webpush.web2.wechat.com")), - ("wechat.com" , ("file.web.wechat.com", "webpush.web.wechat.com"))): - fileUrl, syncUrl = ['https://%s/cgi-bin/mmwebwx-bin' % url for url in detailedUrl] - if indexUrl in core.loginInfo['url']: - core.loginInfo['fileUrl'], core.loginInfo['syncUrl'] = \ - fileUrl, syncUrl - break - else: - core.loginInfo['fileUrl'] = core.loginInfo['syncUrl'] = core.loginInfo['url'] - core.loginInfo['deviceid'] = 'e' + repr(random.random())[2:17] - core.loginInfo['logintime'] = int(time.time() * 1e3) - core.loginInfo['BaseRequest'] = {} - cookies = core.s.cookies.get_dict() - skey = re.findall('(.*?)', r.text, re.S)[0] - pass_ticket = re.findall('(.*?)', r.text, re.S)[0] - core.loginInfo['skey'] = core.loginInfo['BaseRequest']['Skey'] = skey - core.loginInfo['wxsid'] = core.loginInfo['BaseRequest']['Sid'] = cookies["wxsid"] - core.loginInfo['wxuin'] = core.loginInfo['BaseRequest']['Uin'] = cookies["wxuin"] - core.loginInfo['pass_ticket'] = pass_ticket - - # A question : why pass_ticket == DeviceID ? - # deviceID is only a randomly generated number - - # UOS PATCH By luvletter2333, Sun Feb 28 10:00 PM - # for node in xml.dom.minidom.parseString(r.text).documentElement.childNodes: - # if node.nodeName == 'skey': - # core.loginInfo['skey'] = core.loginInfo['BaseRequest']['Skey'] = node.childNodes[0].data - # elif node.nodeName == 'wxsid': - # core.loginInfo['wxsid'] = core.loginInfo['BaseRequest']['Sid'] = node.childNodes[0].data - # elif node.nodeName == 'wxuin': - # core.loginInfo['wxuin'] = core.loginInfo['BaseRequest']['Uin'] = node.childNodes[0].data - # elif node.nodeName == 'pass_ticket': - # core.loginInfo['pass_ticket'] = core.loginInfo['BaseRequest']['DeviceID'] = node.childNodes[0].data - if not all([key in core.loginInfo for key in ('skey', 'wxsid', 'wxuin', 'pass_ticket')]): - logger.error('Your wechat account may be LIMITED to log in WEB wechat, error info:\n%s' % r.text) - core.isLogging = False - return False - return True - -async def web_init(self): - url = '%s/webwxinit' % self.loginInfo['url'] - params = { - 'r': int(-time.time() / 1579), - 'pass_ticket': self.loginInfo['pass_ticket'], } - data = { 'BaseRequest': self.loginInfo['BaseRequest'], } - headers = { - 'ContentType': 'application/json; charset=UTF-8', - 'User-Agent' : config.USER_AGENT, } - r = self.s.post(url, params=params, data=json.dumps(data), headers=headers) - dic = json.loads(r.content.decode('utf-8', 'replace')) - # deal with login info - utils.emoji_formatter(dic['User'], 'NickName') - self.loginInfo['InviteStartCount'] = int(dic['InviteStartCount']) - self.loginInfo['User'] = wrap_user_dict(utils.struct_friend_info(dic['User'])) - self.memberList.append(self.loginInfo['User']) - self.loginInfo['SyncKey'] = dic['SyncKey'] - self.loginInfo['synckey'] = '|'.join(['%s_%s' % (item['Key'], item['Val']) - for item in dic['SyncKey']['List']]) - self.storageClass.userName = dic['User']['UserName'] - self.storageClass.nickName = dic['User']['NickName'] - # deal with contact list returned when init - contactList = dic.get('ContactList', []) - chatroomList, otherList = [], [] - for m in 
contactList: - if m['Sex'] != 0: - otherList.append(m) - elif '@@' in m['UserName']: - m['MemberList'] = [] # don't let dirty info pollute the list - chatroomList.append(m) - elif '@' in m['UserName']: - # mp will be dealt in update_local_friends as well - otherList.append(m) - if chatroomList: - update_local_chatrooms(self, chatroomList) - if otherList: - update_local_friends(self, otherList) - return dic - -async def show_mobile_login(self): - url = '%s/webwxstatusnotify?lang=zh_CN&pass_ticket=%s' % ( - self.loginInfo['url'], self.loginInfo['pass_ticket']) - data = { - 'BaseRequest' : self.loginInfo['BaseRequest'], - 'Code' : 3, - 'FromUserName' : self.storageClass.userName, - 'ToUserName' : self.storageClass.userName, - 'ClientMsgId' : int(time.time()), } - headers = { - 'ContentType': 'application/json; charset=UTF-8', - 'User-Agent' : config.USER_AGENT, } - r = self.s.post(url, data=json.dumps(data), headers=headers) - return ReturnValue(rawResponse=r) - -async def start_receiving(self, exitCallback=None, getReceivingFnOnly=False): - self.alive = True - def maintain_loop(): - retryCount = 0 - while self.alive: - try: - i = sync_check(self) - if i is None: - self.alive = False - elif i == '0': - pass - else: - msgList, contactList = self.get_msg() - if msgList: - msgList = produce_msg(self, msgList) - for msg in msgList: - self.msgList.put(msg) - if contactList: - chatroomList, otherList = [], [] - for contact in contactList: - if '@@' in contact['UserName']: - chatroomList.append(contact) - else: - otherList.append(contact) - chatroomMsg = update_local_chatrooms(self, chatroomList) - chatroomMsg['User'] = self.loginInfo['User'] - self.msgList.put(chatroomMsg) - update_local_friends(self, otherList) - retryCount = 0 - except requests.exceptions.ReadTimeout: - pass - except: - retryCount += 1 - logger.error(traceback.format_exc()) - if self.receivingRetryCount < retryCount: - self.alive = False - else: - time.sleep(1) - self.logout() - if hasattr(exitCallback, '__call__'): - exitCallback(self.storageClass.userName) - else: - logger.info('LOG OUT!') - if getReceivingFnOnly: - return maintain_loop - else: - maintainThread = threading.Thread(target=maintain_loop) - maintainThread.setDaemon(True) - maintainThread.start() - -def sync_check(self): - url = '%s/synccheck' % self.loginInfo.get('syncUrl', self.loginInfo['url']) - params = { - 'r' : int(time.time() * 1000), - 'skey' : self.loginInfo['skey'], - 'sid' : self.loginInfo['wxsid'], - 'uin' : self.loginInfo['wxuin'], - 'deviceid' : self.loginInfo['deviceid'], - 'synckey' : self.loginInfo['synckey'], - '_' : self.loginInfo['logintime'], } - headers = { 'User-Agent' : config.USER_AGENT} - self.loginInfo['logintime'] += 1 - try: - r = self.s.get(url, params=params, headers=headers, timeout=config.TIMEOUT) - except requests.exceptions.ConnectionError as e: - try: - if not isinstance(e.args[0].args[1], BadStatusLine): - raise - # will return a package with status '0 -' - # and value like: - # 6f:00:8a:9c:09:74:e4:d8:e0:14:bf:96:3a:56:a0:64:1b:a4:25:5d:12:f4:31:a5:30:f1:c6:48:5f:c3:75:6a:99:93 - # seems like status of typing, but before I make further achievement code will remain like this - return '2' - except: - raise - r.raise_for_status() - regx = r'window.synccheck={retcode:"(\d+)",selector:"(\d+)"}' - pm = re.search(regx, r.text) - if pm is None or pm.group(1) != '0': - logger.debug('Unexpected sync check result: %s' % r.text) - return None - return pm.group(2) - -def get_msg(self): - self.loginInfo['deviceid'] = 'e' + 
repr(random.random())[2:17] - url = '%s/webwxsync?sid=%s&skey=%s&pass_ticket=%s' % ( - self.loginInfo['url'], self.loginInfo['wxsid'], - self.loginInfo['skey'],self.loginInfo['pass_ticket']) - data = { - 'BaseRequest' : self.loginInfo['BaseRequest'], - 'SyncKey' : self.loginInfo['SyncKey'], - 'rr' : ~int(time.time()), } - headers = { - 'ContentType': 'application/json; charset=UTF-8', - 'User-Agent' : config.USER_AGENT } - r = self.s.post(url, data=json.dumps(data), headers=headers, timeout=config.TIMEOUT) - dic = json.loads(r.content.decode('utf-8', 'replace')) - if dic['BaseResponse']['Ret'] != 0: return None, None - self.loginInfo['SyncKey'] = dic['SyncKey'] - self.loginInfo['synckey'] = '|'.join(['%s_%s' % (item['Key'], item['Val']) - for item in dic['SyncCheckKey']['List']]) - return dic['AddMsgList'], dic['ModContactList'] - -def logout(self): - if self.alive: - url = '%s/webwxlogout' % self.loginInfo['url'] - params = { - 'redirect' : 1, - 'type' : 1, - 'skey' : self.loginInfo['skey'], } - headers = { 'User-Agent' : config.USER_AGENT} - self.s.get(url, params=params, headers=headers) - self.alive = False - self.isLogging = False - self.s.cookies.clear() - del self.chatroomList[:] - del self.memberList[:] - del self.mpList[:] - return ReturnValue({'BaseResponse': { - 'ErrMsg': 'logout successfully.', - 'Ret': 0, }}) diff --git a/spaces/omri374/presidio/openai_fake_data_generator.py b/spaces/omri374/presidio/openai_fake_data_generator.py deleted file mode 100644 index d89458f56ff2f1e1537f2ab49742922f0fb0d330..0000000000000000000000000000000000000000 --- a/spaces/omri374/presidio/openai_fake_data_generator.py +++ /dev/null @@ -1,80 +0,0 @@ -from collections import namedtuple -from typing import Optional - -import openai -import logging - -logger = logging.getLogger("presidio-streamlit") - -OpenAIParams = namedtuple( - "open_ai_params", - ["openai_key", "model", "api_base", "deployment_name", "api_version", "api_type"], -) - - -def set_openai_params(openai_params: OpenAIParams): - """Set the OpenAI API key. - :param openai_params: OpenAIParams object with the following fields: key, model, api version, deployment_name, - The latter only relate to Azure OpenAI deployments. - """ - openai.api_key = openai_params.openai_key - openai.api_version = openai_params.api_version - if openai_params.api_base: - openai.api_base = openai_params.api_base - openai.api_type = openai_params.api_type - - -def call_completion_model( - prompt: str, - model: str = "text-davinci-003", - max_tokens: int = 512, - deployment_id: Optional[str] = None, -) -> str: - """Creates a request for the OpenAI Completion service and returns the response. - - :param prompt: The prompt for the completion model - :param model: OpenAI model name - :param max_tokens: Model's max_tokens parameter - :param deployment_id: Azure OpenAI deployment ID - """ - if deployment_id: - response = openai.Completion.create( - deployment_id=deployment_id, model=model, prompt=prompt, max_tokens=max_tokens - ) - else: - response = openai.Completion.create( - model=model, prompt=prompt, max_tokens=max_tokens - ) - - return response["choices"][0].text - - -def create_prompt(anonymized_text: str) -> str: - """ - Create the prompt with instructions to GPT-3. - - :param anonymized_text: Text with placeholders instead of PII values, e.g. My name is . - """ - - prompt = f""" - Your role is to create synthetic text based on de-identified text with placeholders instead of Personally Identifiable Information (PII). - Replace the placeholders (e.g. 
,, {{DATE}}, {{ip_address}}) with fake values. - - Instructions: - - a. Use completely random numbers, so every digit is drawn between 0 and 9. - b. Use realistic names that come from diverse genders, ethnicities and countries. - c. If there are no placeholders, return the text as is and provide an answer. - d. Keep the formatting as close to the original as possible. - e. If PII exists in the input, replace it with fake values in the output. - - input: How do I change the limit on my credit card {{credit_card_number}}? - output: How do I change the limit on my credit card 2539 3519 2345 1555? - input: was the chief science officer at . - output: Katherine Buckjov was the chief science officer at NASA. - input: Cameroon lives in . - output: Vladimir lives in Moscow. - input: {anonymized_text} - output: - """ - return prompt diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_unclip_txt2img_to_image_variation.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_unclip_txt2img_to_image_variation.py deleted file mode 100644 index 07f8ebf2a3d012600a533dcfa642b609c31a3d8c..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_unclip_txt2img_to_image_variation.py +++ /dev/null @@ -1,41 +0,0 @@ -import argparse - -from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection - -from diffusers import UnCLIPImageVariationPipeline, UnCLIPPipeline - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - - parser.add_argument( - "--txt2img_unclip", - default="kakaobrain/karlo-v1-alpha", - type=str, - required=False, - help="The pretrained txt2img unclip.", - ) - - args = parser.parse_args() - - txt2img = UnCLIPPipeline.from_pretrained(args.txt2img_unclip) - - feature_extractor = CLIPImageProcessor() - image_encoder = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-large-patch14") - - img2img = UnCLIPImageVariationPipeline( - decoder=txt2img.decoder, - text_encoder=txt2img.text_encoder, - tokenizer=txt2img.tokenizer, - text_proj=txt2img.text_proj, - feature_extractor=feature_extractor, - image_encoder=image_encoder, - super_res_first=txt2img.super_res_first, - super_res_last=txt2img.super_res_last, - decoder_scheduler=txt2img.decoder_scheduler, - super_res_scheduler=txt2img.super_res_scheduler, - ) - - img2img.save_pretrained(args.dump_path) diff --git a/spaces/patgpt4/MusicGen/audiocraft/quantization/__init__.py b/spaces/patgpt4/MusicGen/audiocraft/quantization/__init__.py deleted file mode 100644 index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000 --- a/spaces/patgpt4/MusicGen/audiocraft/quantization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# flake8: noqa -from .vq import ResidualVectorQuantizer -from .base import BaseQuantizer, DummyQuantizer, QuantizedResult diff --git a/spaces/paulbricman/conceptarium/frontend/components/navigator.py b/spaces/paulbricman/conceptarium/frontend/components/navigator.py deleted file mode 100644 index 3e2fe9126288361e00b84cfe7c5c036b3a3305b9..0000000000000000000000000000000000000000 --- a/spaces/paulbricman/conceptarium/frontend/components/navigator.py +++ /dev/null @@ -1,21 +0,0 @@ -import streamlit as st -from . import knowledge - - -def paint(): - modality = st.selectbox('modality', ['text', 'image'], - ['text', 'image'].index(st.session_state.get('navigator_modality', 'text')), help='Select the type of query you want to search with.') - - if modality == 'text': - input = st.text_area('input', height=100, - help='Enter the actual contents of your query.') - elif modality == 'image': - input = st.file_uploader( - 'input', help='Enter the actual contents of your query.') - - if st.button('jump', help='Click to search for thoughts based on the specified query.'): - st.session_state['authorized_thoughts'] = knowledge.load( - modality, input) - st.session_state['navigator_modality'] = modality - st.session_state['navigator_input'] = input - st.session_state['navigator_thought'] = None diff --git a/spaces/peteralexandercharles/whisper-restore-punctuation/app.py b/spaces/peteralexandercharles/whisper-restore-punctuation/app.py deleted file mode 100644 index 6ed962b7b9ecd1422d58a45388ef5fa979418cbf..0000000000000000000000000000000000000000 --- a/spaces/peteralexandercharles/whisper-restore-punctuation/app.py +++ /dev/null @@ -1,46 +0,0 @@ -from speechbox import PunctuationRestorer -import librosa -import subprocess -import gradio as gr - -restorer = PunctuationRestorer.from_pretrained("openai/whisper-tiny.en") - - -def convert_to_wav(path): - if path[-3:] != 'wav': - new_path = '.'.join(path.split('.')[:-1]) + '.wav' - try: - subprocess.call(['ffmpeg', '-i', path, new_path, '-y']) - except: # noqa: E722 - return path, 'Error: Could not convert file to .wav' - path = new_path - return path, None - - -def restore(audio, original_transcript): - path, error = convert_to_wav(audio) - print(error) - data, samplerate = librosa.load(path, sr=16_000) - - text, log_probs = restorer(data, original_transcript, samplerate, num_beams=1) - - return text, log_probs - - -gr.Interface( - title='Punctuation Restorer', - fn=restore, - inputs=[ - gr.inputs.Audio(source="upload", type="filepath"), - gr.inputs.Textbox(default="", label="normalized text") - ], - outputs=[ - gr.outputs.Textbox(label='Restored text'), - gr.Number(label='Log probability') - ], - examples=[ - ["./common_voice_en_18301577.mp3", "do not cross the yellow light"], - ["./sample1.flac", "going along slushy country roads and speaking to damp audiences in draughty school rooms day after day for a fortnight he'll have to put in an appearance at some place of worship on sunday morning and he can come to us immediately afterwards"], - ["./sample2.flac", "before he had time to answer a much encumbered vera burst into the room with the question i say can i leave these here these were a small black pig and a lusty specimen of black red game cock"], - ] - ).launch() \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/reportLabPen.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/reportLabPen.py deleted file mode 100644 index 
2cb89c8bf4c772b7a987edb0593c40c83cc2201b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/pens/reportLabPen.py +++ /dev/null @@ -1,80 +0,0 @@ -from fontTools.pens.basePen import BasePen -from reportlab.graphics.shapes import Path - - -__all__ = ["ReportLabPen"] - - -class ReportLabPen(BasePen): - - """A pen for drawing onto a ``reportlab.graphics.shapes.Path`` object.""" - - def __init__(self, glyphSet, path=None): - BasePen.__init__(self, glyphSet) - if path is None: - path = Path() - self.path = path - - def _moveTo(self, p): - (x, y) = p - self.path.moveTo(x, y) - - def _lineTo(self, p): - (x, y) = p - self.path.lineTo(x, y) - - def _curveToOne(self, p1, p2, p3): - (x1, y1) = p1 - (x2, y2) = p2 - (x3, y3) = p3 - self.path.curveTo(x1, y1, x2, y2, x3, y3) - - def _closePath(self): - self.path.closePath() - - -if __name__ == "__main__": - import sys - - if len(sys.argv) < 3: - print( - "Usage: reportLabPen.py <path to TTF font> <glyphName> [<image file name>]" - ) - print( - " If no image file name is created, by default <glyphName>.png is created." - ) - print(" example: reportLabPen.py Arial.TTF R test.png") - print( - " (The file format will be PNG, regardless of the image file name supplied)" - ) - sys.exit(0) - - from fontTools.ttLib import TTFont - from reportlab.lib import colors - - path = sys.argv[1] - glyphName = sys.argv[2] - if len(sys.argv) > 3: - imageFile = sys.argv[3] - else: - imageFile = "%s.png" % glyphName - - font = TTFont(path) # it would work just as well with fontTools.t1Lib.T1Font - gs = font.getGlyphSet() - pen = ReportLabPen(gs, Path(fillColor=colors.red, strokeWidth=5)) - g = gs[glyphName] - g.draw(pen) - - w, h = g.width, 1000 - from reportlab.graphics import renderPM - from reportlab.graphics.shapes import Group, Drawing, scale - - # Everything is wrapped in a group to allow transformations.
- g = Group(pen.path) - g.translate(0, 200) - g.scale(0.3, 0.3) - - d = Drawing(w, h) - d.add(g) - - renderPM.drawToFile(d, imageFile, fmt="PNG") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/node_modules/esbuild-wasm/lib/main.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/node_modules/esbuild-wasm/lib/main.js deleted file mode 100644 index b67efba258f7e97213543b86e65c06147d7f6db2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/node_modules/esbuild-wasm/lib/main.js +++ /dev/null @@ -1,2200 +0,0 @@ -"use strict"; -var __defProp = Object.defineProperty; -var __getOwnPropDesc = Object.getOwnPropertyDescriptor; -var __getOwnPropNames = Object.getOwnPropertyNames; -var __hasOwnProp = Object.prototype.hasOwnProperty; -var __export = (target, all) => { - for (var name in all) - __defProp(target, name, { get: all[name], enumerable: true }); -}; -var __copyProps = (to, from, except, desc) => { - if (from && typeof from === "object" || typeof from === "function") { - for (let key of __getOwnPropNames(from)) - if (!__hasOwnProp.call(to, key) && key !== except) - __defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable }); - } - return to; -}; -var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: true }), mod); - -// lib/npm/node.ts -var node_exports = {}; -__export(node_exports, { - analyzeMetafile: () => analyzeMetafile, - analyzeMetafileSync: () => analyzeMetafileSync, - build: () => build, - buildSync: () => buildSync, - context: () => context, - default: () => node_default, - formatMessages: () => formatMessages, - formatMessagesSync: () => formatMessagesSync, - initialize: () => initialize, - transform: () => transform, - transformSync: () => transformSync, - version: () => version -}); -module.exports = __toCommonJS(node_exports); - -// lib/shared/stdio_protocol.ts -function encodePacket(packet) { - let visit = (value) => { - if (value === null) { - bb.write8(0); - } else if (typeof value === "boolean") { - bb.write8(1); - bb.write8(+value); - } else if (typeof value === "number") { - bb.write8(2); - bb.write32(value | 0); - } else if (typeof value === "string") { - bb.write8(3); - bb.write(encodeUTF8(value)); - } else if (value instanceof Uint8Array) { - bb.write8(4); - bb.write(value); - } else if (value instanceof Array) { - bb.write8(5); - bb.write32(value.length); - for (let item of value) { - visit(item); - } - } else { - let keys = Object.keys(value); - bb.write8(6); - bb.write32(keys.length); - for (let key of keys) { - bb.write(encodeUTF8(key)); - visit(value[key]); - } - } - }; - let bb = new ByteBuffer(); - bb.write32(0); - bb.write32(packet.id << 1 | +!packet.isRequest); - visit(packet.value); - writeUInt32LE(bb.buf, bb.len - 4, 0); - return bb.buf.subarray(0, bb.len); -} -function decodePacket(bytes) { - let visit = () => { - switch (bb.read8()) { - case 0: - return null; - case 1: - return !!bb.read8(); - case 2: - return bb.read32(); - case 3: - return decodeUTF8(bb.read()); - case 4: - return bb.read(); - case 5: { - let count = bb.read32(); - let value2 = []; - for (let i = 0; i < count; i++) { - value2.push(visit()); - } - return value2; - } - case 6: { - let count = bb.read32(); - let value2 = {}; - for (let i = 0; i < count; i++) { - value2[decodeUTF8(bb.read())] = visit(); - } - return value2; - } - default: - throw new Error("Invalid 
packet"); - } - }; - let bb = new ByteBuffer(bytes); - let id = bb.read32(); - let isRequest = (id & 1) === 0; - id >>>= 1; - let value = visit(); - if (bb.ptr !== bytes.length) { - throw new Error("Invalid packet"); - } - return { id, isRequest, value }; -} -var ByteBuffer = class { - constructor(buf = new Uint8Array(1024)) { - this.buf = buf; - this.len = 0; - this.ptr = 0; - } - _write(delta) { - if (this.len + delta > this.buf.length) { - let clone = new Uint8Array((this.len + delta) * 2); - clone.set(this.buf); - this.buf = clone; - } - this.len += delta; - return this.len - delta; - } - write8(value) { - let offset = this._write(1); - this.buf[offset] = value; - } - write32(value) { - let offset = this._write(4); - writeUInt32LE(this.buf, value, offset); - } - write(bytes) { - let offset = this._write(4 + bytes.length); - writeUInt32LE(this.buf, bytes.length, offset); - this.buf.set(bytes, offset + 4); - } - _read(delta) { - if (this.ptr + delta > this.buf.length) { - throw new Error("Invalid packet"); - } - this.ptr += delta; - return this.ptr - delta; - } - read8() { - return this.buf[this._read(1)]; - } - read32() { - return readUInt32LE(this.buf, this._read(4)); - } - read() { - let length = this.read32(); - let bytes = new Uint8Array(length); - let ptr = this._read(bytes.length); - bytes.set(this.buf.subarray(ptr, ptr + length)); - return bytes; - } -}; -var encodeUTF8; -var decodeUTF8; -var encodeInvariant; -if (typeof TextEncoder !== "undefined" && typeof TextDecoder !== "undefined") { - let encoder = new TextEncoder(); - let decoder = new TextDecoder(); - encodeUTF8 = (text) => encoder.encode(text); - decodeUTF8 = (bytes) => decoder.decode(bytes); - encodeInvariant = 'new TextEncoder().encode("")'; -} else if (typeof Buffer !== "undefined") { - encodeUTF8 = (text) => Buffer.from(text); - decodeUTF8 = (bytes) => { - let { buffer, byteOffset, byteLength } = bytes; - return Buffer.from(buffer, byteOffset, byteLength).toString(); - }; - encodeInvariant = 'Buffer.from("")'; -} else { - throw new Error("No UTF-8 codec found"); -} -if (!(encodeUTF8("") instanceof Uint8Array)) - throw new Error(`Invariant violation: "${encodeInvariant} instanceof Uint8Array" is incorrectly false - -This indicates that your JavaScript environment is broken. You cannot use -esbuild in this environment because esbuild relies on this invariant. This -is not a problem with esbuild. You need to fix your environment instead. -`); -function readUInt32LE(buffer, offset) { - return buffer[offset++] | buffer[offset++] << 8 | buffer[offset++] << 16 | buffer[offset++] << 24; -} -function writeUInt32LE(buffer, value, offset) { - buffer[offset++] = value; - buffer[offset++] = value >> 8; - buffer[offset++] = value >> 16; - buffer[offset++] = value >> 24; -} - -// lib/shared/common.ts -var quote = JSON.stringify; -var buildLogLevelDefault = "warning"; -var transformLogLevelDefault = "silent"; -function validateTarget(target) { - validateStringValue(target, "target"); - if (target.indexOf(",") >= 0) - throw new Error(`Invalid target: ${target}`); - return target; -} -var canBeAnything = () => null; -var mustBeBoolean = (value) => typeof value === "boolean" ? null : "a boolean"; -var mustBeString = (value) => typeof value === "string" ? null : "a string"; -var mustBeRegExp = (value) => value instanceof RegExp ? null : "a RegExp object"; -var mustBeInteger = (value) => typeof value === "number" && value === (value | 0) ? null : "an integer"; -var mustBeFunction = (value) => typeof value === "function" ? 
null : "a function"; -var mustBeArray = (value) => Array.isArray(value) ? null : "an array"; -var mustBeObject = (value) => typeof value === "object" && value !== null && !Array.isArray(value) ? null : "an object"; -var mustBeEntryPoints = (value) => typeof value === "object" && value !== null ? null : "an array or an object"; -var mustBeWebAssemblyModule = (value) => value instanceof WebAssembly.Module ? null : "a WebAssembly.Module"; -var mustBeObjectOrNull = (value) => typeof value === "object" && !Array.isArray(value) ? null : "an object or null"; -var mustBeStringOrBoolean = (value) => typeof value === "string" || typeof value === "boolean" ? null : "a string or a boolean"; -var mustBeStringOrObject = (value) => typeof value === "string" || typeof value === "object" && value !== null && !Array.isArray(value) ? null : "a string or an object"; -var mustBeStringOrArray = (value) => typeof value === "string" || Array.isArray(value) ? null : "a string or an array"; -var mustBeStringOrUint8Array = (value) => typeof value === "string" || value instanceof Uint8Array ? null : "a string or a Uint8Array"; -var mustBeStringOrURL = (value) => typeof value === "string" || value instanceof URL ? null : "a string or a URL"; -function getFlag(object, keys, key, mustBeFn) { - let value = object[key]; - keys[key + ""] = true; - if (value === void 0) - return void 0; - let mustBe = mustBeFn(value); - if (mustBe !== null) - throw new Error(`${quote(key)} must be ${mustBe}`); - return value; -} -function checkForInvalidFlags(object, keys, where) { - for (let key in object) { - if (!(key in keys)) { - throw new Error(`Invalid option ${where}: ${quote(key)}`); - } - } -} -function validateInitializeOptions(options) { - let keys = /* @__PURE__ */ Object.create(null); - let wasmURL = getFlag(options, keys, "wasmURL", mustBeStringOrURL); - let wasmModule = getFlag(options, keys, "wasmModule", mustBeWebAssemblyModule); - let worker = getFlag(options, keys, "worker", mustBeBoolean); - checkForInvalidFlags(options, keys, "in initialize() call"); - return { - wasmURL, - wasmModule, - worker - }; -} -function validateMangleCache(mangleCache) { - let validated; - if (mangleCache !== void 0) { - validated = /* @__PURE__ */ Object.create(null); - for (let key in mangleCache) { - let value = mangleCache[key]; - if (typeof value === "string" || value === false) { - validated[key] = value; - } else { - throw new Error(`Expected ${quote(key)} in mangle cache to map to either a string or false`); - } - } - } - return validated; -} -function pushLogFlags(flags, options, keys, isTTY2, logLevelDefault) { - let color = getFlag(options, keys, "color", mustBeBoolean); - let logLevel = getFlag(options, keys, "logLevel", mustBeString); - let logLimit = getFlag(options, keys, "logLimit", mustBeInteger); - if (color !== void 0) - flags.push(`--color=${color}`); - else if (isTTY2) - flags.push(`--color=true`); - flags.push(`--log-level=${logLevel || logLevelDefault}`); - flags.push(`--log-limit=${logLimit || 0}`); -} -function validateStringValue(value, what, key) { - if (typeof value !== "string") { - throw new Error(`Expected value for ${what}${key !== void 0 ? 
" " + quote(key) : ""} to be a string, got ${typeof value} instead`); - } - return value; -} -function pushCommonFlags(flags, options, keys) { - let legalComments = getFlag(options, keys, "legalComments", mustBeString); - let sourceRoot = getFlag(options, keys, "sourceRoot", mustBeString); - let sourcesContent = getFlag(options, keys, "sourcesContent", mustBeBoolean); - let target = getFlag(options, keys, "target", mustBeStringOrArray); - let format = getFlag(options, keys, "format", mustBeString); - let globalName = getFlag(options, keys, "globalName", mustBeString); - let mangleProps = getFlag(options, keys, "mangleProps", mustBeRegExp); - let reserveProps = getFlag(options, keys, "reserveProps", mustBeRegExp); - let mangleQuoted = getFlag(options, keys, "mangleQuoted", mustBeBoolean); - let minify = getFlag(options, keys, "minify", mustBeBoolean); - let minifySyntax = getFlag(options, keys, "minifySyntax", mustBeBoolean); - let minifyWhitespace = getFlag(options, keys, "minifyWhitespace", mustBeBoolean); - let minifyIdentifiers = getFlag(options, keys, "minifyIdentifiers", mustBeBoolean); - let lineLimit = getFlag(options, keys, "lineLimit", mustBeInteger); - let drop = getFlag(options, keys, "drop", mustBeArray); - let dropLabels = getFlag(options, keys, "dropLabels", mustBeArray); - let charset = getFlag(options, keys, "charset", mustBeString); - let treeShaking = getFlag(options, keys, "treeShaking", mustBeBoolean); - let ignoreAnnotations = getFlag(options, keys, "ignoreAnnotations", mustBeBoolean); - let jsx = getFlag(options, keys, "jsx", mustBeString); - let jsxFactory = getFlag(options, keys, "jsxFactory", mustBeString); - let jsxFragment = getFlag(options, keys, "jsxFragment", mustBeString); - let jsxImportSource = getFlag(options, keys, "jsxImportSource", mustBeString); - let jsxDev = getFlag(options, keys, "jsxDev", mustBeBoolean); - let jsxSideEffects = getFlag(options, keys, "jsxSideEffects", mustBeBoolean); - let define = getFlag(options, keys, "define", mustBeObject); - let logOverride = getFlag(options, keys, "logOverride", mustBeObject); - let supported = getFlag(options, keys, "supported", mustBeObject); - let pure = getFlag(options, keys, "pure", mustBeArray); - let keepNames = getFlag(options, keys, "keepNames", mustBeBoolean); - let platform = getFlag(options, keys, "platform", mustBeString); - let tsconfigRaw = getFlag(options, keys, "tsconfigRaw", mustBeStringOrObject); - if (legalComments) - flags.push(`--legal-comments=${legalComments}`); - if (sourceRoot !== void 0) - flags.push(`--source-root=${sourceRoot}`); - if (sourcesContent !== void 0) - flags.push(`--sources-content=${sourcesContent}`); - if (target) { - if (Array.isArray(target)) - flags.push(`--target=${Array.from(target).map(validateTarget).join(",")}`); - else - flags.push(`--target=${validateTarget(target)}`); - } - if (format) - flags.push(`--format=${format}`); - if (globalName) - flags.push(`--global-name=${globalName}`); - if (platform) - flags.push(`--platform=${platform}`); - if (tsconfigRaw) - flags.push(`--tsconfig-raw=${typeof tsconfigRaw === "string" ? 
tsconfigRaw : JSON.stringify(tsconfigRaw)}`); - if (minify) - flags.push("--minify"); - if (minifySyntax) - flags.push("--minify-syntax"); - if (minifyWhitespace) - flags.push("--minify-whitespace"); - if (minifyIdentifiers) - flags.push("--minify-identifiers"); - if (lineLimit) - flags.push(`--line-limit=${lineLimit}`); - if (charset) - flags.push(`--charset=${charset}`); - if (treeShaking !== void 0) - flags.push(`--tree-shaking=${treeShaking}`); - if (ignoreAnnotations) - flags.push(`--ignore-annotations`); - if (drop) - for (let what of drop) - flags.push(`--drop:${validateStringValue(what, "drop")}`); - if (dropLabels) - flags.push(`--drop-labels=${Array.from(dropLabels).map((what) => validateStringValue(what, "dropLabels")).join(",")}`); - if (mangleProps) - flags.push(`--mangle-props=${mangleProps.source}`); - if (reserveProps) - flags.push(`--reserve-props=${reserveProps.source}`); - if (mangleQuoted !== void 0) - flags.push(`--mangle-quoted=${mangleQuoted}`); - if (jsx) - flags.push(`--jsx=${jsx}`); - if (jsxFactory) - flags.push(`--jsx-factory=${jsxFactory}`); - if (jsxFragment) - flags.push(`--jsx-fragment=${jsxFragment}`); - if (jsxImportSource) - flags.push(`--jsx-import-source=${jsxImportSource}`); - if (jsxDev) - flags.push(`--jsx-dev`); - if (jsxSideEffects) - flags.push(`--jsx-side-effects`); - if (define) { - for (let key in define) { - if (key.indexOf("=") >= 0) - throw new Error(`Invalid define: ${key}`); - flags.push(`--define:${key}=${validateStringValue(define[key], "define", key)}`); - } - } - if (logOverride) { - for (let key in logOverride) { - if (key.indexOf("=") >= 0) - throw new Error(`Invalid log override: ${key}`); - flags.push(`--log-override:${key}=${validateStringValue(logOverride[key], "log override", key)}`); - } - } - if (supported) { - for (let key in supported) { - if (key.indexOf("=") >= 0) - throw new Error(`Invalid supported: ${key}`); - const value = supported[key]; - if (typeof value !== "boolean") - throw new Error(`Expected value for supported ${quote(key)} to be a boolean, got ${typeof value} instead`); - flags.push(`--supported:${key}=${value}`); - } - } - if (pure) - for (let fn of pure) - flags.push(`--pure:${validateStringValue(fn, "pure")}`); - if (keepNames) - flags.push(`--keep-names`); -} -function flagsForBuildOptions(callName, options, isTTY2, logLevelDefault, writeDefault) { - var _a2; - let flags = []; - let entries = []; - let keys = /* @__PURE__ */ Object.create(null); - let stdinContents = null; - let stdinResolveDir = null; - pushLogFlags(flags, options, keys, isTTY2, logLevelDefault); - pushCommonFlags(flags, options, keys); - let sourcemap = getFlag(options, keys, "sourcemap", mustBeStringOrBoolean); - let bundle = getFlag(options, keys, "bundle", mustBeBoolean); - let splitting = getFlag(options, keys, "splitting", mustBeBoolean); - let preserveSymlinks = getFlag(options, keys, "preserveSymlinks", mustBeBoolean); - let metafile = getFlag(options, keys, "metafile", mustBeBoolean); - let outfile = getFlag(options, keys, "outfile", mustBeString); - let outdir = getFlag(options, keys, "outdir", mustBeString); - let outbase = getFlag(options, keys, "outbase", mustBeString); - let tsconfig = getFlag(options, keys, "tsconfig", mustBeString); - let resolveExtensions = getFlag(options, keys, "resolveExtensions", mustBeArray); - let nodePathsInput = getFlag(options, keys, "nodePaths", mustBeArray); - let mainFields = getFlag(options, keys, "mainFields", mustBeArray); - let conditions = getFlag(options, keys, "conditions", 
mustBeArray); - let external = getFlag(options, keys, "external", mustBeArray); - let packages = getFlag(options, keys, "packages", mustBeString); - let alias = getFlag(options, keys, "alias", mustBeObject); - let loader = getFlag(options, keys, "loader", mustBeObject); - let outExtension = getFlag(options, keys, "outExtension", mustBeObject); - let publicPath = getFlag(options, keys, "publicPath", mustBeString); - let entryNames = getFlag(options, keys, "entryNames", mustBeString); - let chunkNames = getFlag(options, keys, "chunkNames", mustBeString); - let assetNames = getFlag(options, keys, "assetNames", mustBeString); - let inject = getFlag(options, keys, "inject", mustBeArray); - let banner = getFlag(options, keys, "banner", mustBeObject); - let footer = getFlag(options, keys, "footer", mustBeObject); - let entryPoints = getFlag(options, keys, "entryPoints", mustBeEntryPoints); - let absWorkingDir = getFlag(options, keys, "absWorkingDir", mustBeString); - let stdin = getFlag(options, keys, "stdin", mustBeObject); - let write = (_a2 = getFlag(options, keys, "write", mustBeBoolean)) != null ? _a2 : writeDefault; - let allowOverwrite = getFlag(options, keys, "allowOverwrite", mustBeBoolean); - let mangleCache = getFlag(options, keys, "mangleCache", mustBeObject); - keys.plugins = true; - checkForInvalidFlags(options, keys, `in ${callName}() call`); - if (sourcemap) - flags.push(`--sourcemap${sourcemap === true ? "" : `=${sourcemap}`}`); - if (bundle) - flags.push("--bundle"); - if (allowOverwrite) - flags.push("--allow-overwrite"); - if (splitting) - flags.push("--splitting"); - if (preserveSymlinks) - flags.push("--preserve-symlinks"); - if (metafile) - flags.push(`--metafile`); - if (outfile) - flags.push(`--outfile=${outfile}`); - if (outdir) - flags.push(`--outdir=${outdir}`); - if (outbase) - flags.push(`--outbase=${outbase}`); - if (tsconfig) - flags.push(`--tsconfig=${tsconfig}`); - if (packages) - flags.push(`--packages=${packages}`); - if (resolveExtensions) { - let values = []; - for (let value of resolveExtensions) { - validateStringValue(value, "resolve extension"); - if (value.indexOf(",") >= 0) - throw new Error(`Invalid resolve extension: ${value}`); - values.push(value); - } - flags.push(`--resolve-extensions=${values.join(",")}`); - } - if (publicPath) - flags.push(`--public-path=${publicPath}`); - if (entryNames) - flags.push(`--entry-names=${entryNames}`); - if (chunkNames) - flags.push(`--chunk-names=${chunkNames}`); - if (assetNames) - flags.push(`--asset-names=${assetNames}`); - if (mainFields) { - let values = []; - for (let value of mainFields) { - validateStringValue(value, "main field"); - if (value.indexOf(",") >= 0) - throw new Error(`Invalid main field: ${value}`); - values.push(value); - } - flags.push(`--main-fields=${values.join(",")}`); - } - if (conditions) { - let values = []; - for (let value of conditions) { - validateStringValue(value, "condition"); - if (value.indexOf(",") >= 0) - throw new Error(`Invalid condition: ${value}`); - values.push(value); - } - flags.push(`--conditions=${values.join(",")}`); - } - if (external) - for (let name of external) - flags.push(`--external:${validateStringValue(name, "external")}`); - if (alias) { - for (let old in alias) { - if (old.indexOf("=") >= 0) - throw new Error(`Invalid package name in alias: ${old}`); - flags.push(`--alias:${old}=${validateStringValue(alias[old], "alias", old)}`); - } - } - if (banner) { - for (let type in banner) { - if (type.indexOf("=") >= 0) - throw new Error(`Invalid banner file 
type: ${type}`); - flags.push(`--banner:${type}=${validateStringValue(banner[type], "banner", type)}`); - } - } - if (footer) { - for (let type in footer) { - if (type.indexOf("=") >= 0) - throw new Error(`Invalid footer file type: ${type}`); - flags.push(`--footer:${type}=${validateStringValue(footer[type], "footer", type)}`); - } - } - if (inject) - for (let path3 of inject) - flags.push(`--inject:${validateStringValue(path3, "inject")}`); - if (loader) { - for (let ext in loader) { - if (ext.indexOf("=") >= 0) - throw new Error(`Invalid loader extension: ${ext}`); - flags.push(`--loader:${ext}=${validateStringValue(loader[ext], "loader", ext)}`); - } - } - if (outExtension) { - for (let ext in outExtension) { - if (ext.indexOf("=") >= 0) - throw new Error(`Invalid out extension: ${ext}`); - flags.push(`--out-extension:${ext}=${validateStringValue(outExtension[ext], "out extension", ext)}`); - } - } - if (entryPoints) { - if (Array.isArray(entryPoints)) { - for (let i = 0, n = entryPoints.length; i < n; i++) { - let entryPoint = entryPoints[i]; - if (typeof entryPoint === "object" && entryPoint !== null) { - let entryPointKeys = /* @__PURE__ */ Object.create(null); - let input = getFlag(entryPoint, entryPointKeys, "in", mustBeString); - let output = getFlag(entryPoint, entryPointKeys, "out", mustBeString); - checkForInvalidFlags(entryPoint, entryPointKeys, "in entry point at index " + i); - if (input === void 0) - throw new Error('Missing property "in" for entry point at index ' + i); - if (output === void 0) - throw new Error('Missing property "out" for entry point at index ' + i); - entries.push([output, input]); - } else { - entries.push(["", validateStringValue(entryPoint, "entry point at index " + i)]); - } - } - } else { - for (let key in entryPoints) { - entries.push([key, validateStringValue(entryPoints[key], "entry point", key)]); - } - } - } - if (stdin) { - let stdinKeys = /* @__PURE__ */ Object.create(null); - let contents = getFlag(stdin, stdinKeys, "contents", mustBeStringOrUint8Array); - let resolveDir = getFlag(stdin, stdinKeys, "resolveDir", mustBeString); - let sourcefile = getFlag(stdin, stdinKeys, "sourcefile", mustBeString); - let loader2 = getFlag(stdin, stdinKeys, "loader", mustBeString); - checkForInvalidFlags(stdin, stdinKeys, 'in "stdin" object'); - if (sourcefile) - flags.push(`--sourcefile=${sourcefile}`); - if (loader2) - flags.push(`--loader=${loader2}`); - if (resolveDir) - stdinResolveDir = resolveDir; - if (typeof contents === "string") - stdinContents = encodeUTF8(contents); - else if (contents instanceof Uint8Array) - stdinContents = contents; - } - let nodePaths = []; - if (nodePathsInput) { - for (let value of nodePathsInput) { - value += ""; - nodePaths.push(value); - } - } - return { - entries, - flags, - write, - stdinContents, - stdinResolveDir, - absWorkingDir, - nodePaths, - mangleCache: validateMangleCache(mangleCache) - }; -} -function flagsForTransformOptions(callName, options, isTTY2, logLevelDefault) { - let flags = []; - let keys = /* @__PURE__ */ Object.create(null); - pushLogFlags(flags, options, keys, isTTY2, logLevelDefault); - pushCommonFlags(flags, options, keys); - let sourcemap = getFlag(options, keys, "sourcemap", mustBeStringOrBoolean); - let sourcefile = getFlag(options, keys, "sourcefile", mustBeString); - let loader = getFlag(options, keys, "loader", mustBeString); - let banner = getFlag(options, keys, "banner", mustBeString); - let footer = getFlag(options, keys, "footer", mustBeString); - let mangleCache = getFlag(options, 
keys, "mangleCache", mustBeObject); - checkForInvalidFlags(options, keys, `in ${callName}() call`); - if (sourcemap) - flags.push(`--sourcemap=${sourcemap === true ? "external" : sourcemap}`); - if (sourcefile) - flags.push(`--sourcefile=${sourcefile}`); - if (loader) - flags.push(`--loader=${loader}`); - if (banner) - flags.push(`--banner=${banner}`); - if (footer) - flags.push(`--footer=${footer}`); - return { - flags, - mangleCache: validateMangleCache(mangleCache) - }; -} -function createChannel(streamIn) { - const requestCallbacksByKey = {}; - const closeData = { didClose: false, reason: "" }; - let responseCallbacks = {}; - let nextRequestID = 0; - let nextBuildKey = 0; - let stdout = new Uint8Array(16 * 1024); - let stdoutUsed = 0; - let readFromStdout = (chunk) => { - let limit = stdoutUsed + chunk.length; - if (limit > stdout.length) { - let swap = new Uint8Array(limit * 2); - swap.set(stdout); - stdout = swap; - } - stdout.set(chunk, stdoutUsed); - stdoutUsed += chunk.length; - let offset = 0; - while (offset + 4 <= stdoutUsed) { - let length = readUInt32LE(stdout, offset); - if (offset + 4 + length > stdoutUsed) { - break; - } - offset += 4; - handleIncomingPacket(stdout.subarray(offset, offset + length)); - offset += length; - } - if (offset > 0) { - stdout.copyWithin(0, offset, stdoutUsed); - stdoutUsed -= offset; - } - }; - let afterClose = (error) => { - closeData.didClose = true; - if (error) - closeData.reason = ": " + (error.message || error); - const text = "The service was stopped" + closeData.reason; - for (let id in responseCallbacks) { - responseCallbacks[id](text, null); - } - responseCallbacks = {}; - }; - let sendRequest = (refs, value, callback) => { - if (closeData.didClose) - return callback("The service is no longer running" + closeData.reason, null); - let id = nextRequestID++; - responseCallbacks[id] = (error, response) => { - try { - callback(error, response); - } finally { - if (refs) - refs.unref(); - } - }; - if (refs) - refs.ref(); - streamIn.writeToStdin(encodePacket({ id, isRequest: true, value })); - }; - let sendResponse = (id, value) => { - if (closeData.didClose) - throw new Error("The service is no longer running" + closeData.reason); - streamIn.writeToStdin(encodePacket({ id, isRequest: false, value })); - }; - let handleRequest = async (id, request) => { - try { - if (request.command === "ping") { - sendResponse(id, {}); - return; - } - if (typeof request.key === "number") { - const requestCallbacks = requestCallbacksByKey[request.key]; - if (requestCallbacks) { - const callback = requestCallbacks[request.command]; - if (callback) { - await callback(id, request); - return; - } - } - } - throw new Error(`Invalid command: ` + request.command); - } catch (e) { - const errors = [extractErrorMessageV8(e, streamIn, null, void 0, "")]; - try { - sendResponse(id, { errors }); - } catch { - } - } - }; - let isFirstPacket = true; - let handleIncomingPacket = (bytes) => { - if (isFirstPacket) { - isFirstPacket = false; - let binaryVersion = String.fromCharCode(...bytes); - if (binaryVersion !== "0.19.0") { - throw new Error(`Cannot start service: Host version "${"0.19.0"}" does not match binary version ${quote(binaryVersion)}`); - } - return; - } - let packet = decodePacket(bytes); - if (packet.isRequest) { - handleRequest(packet.id, packet.value); - } else { - let callback = responseCallbacks[packet.id]; - delete responseCallbacks[packet.id]; - if (packet.value.error) - callback(packet.value.error, {}); - else - callback(null, packet.value); - } - }; - 
let buildOrContext = ({ callName, refs, options, isTTY: isTTY2, defaultWD: defaultWD2, callback }) => { - let refCount = 0; - const buildKey = nextBuildKey++; - const requestCallbacks = {}; - const buildRefs = { - ref() { - if (++refCount === 1) { - if (refs) - refs.ref(); - } - }, - unref() { - if (--refCount === 0) { - delete requestCallbacksByKey[buildKey]; - if (refs) - refs.unref(); - } - } - }; - requestCallbacksByKey[buildKey] = requestCallbacks; - buildRefs.ref(); - buildOrContextImpl( - callName, - buildKey, - sendRequest, - sendResponse, - buildRefs, - streamIn, - requestCallbacks, - options, - isTTY2, - defaultWD2, - (err, res) => { - try { - callback(err, res); - } finally { - buildRefs.unref(); - } - } - ); - }; - let transform2 = ({ callName, refs, input, options, isTTY: isTTY2, fs: fs3, callback }) => { - const details = createObjectStash(); - let start = (inputPath) => { - try { - if (typeof input !== "string" && !(input instanceof Uint8Array)) - throw new Error('The input to "transform" must be a string or a Uint8Array'); - let { - flags, - mangleCache - } = flagsForTransformOptions(callName, options, isTTY2, transformLogLevelDefault); - let request = { - command: "transform", - flags, - inputFS: inputPath !== null, - input: inputPath !== null ? encodeUTF8(inputPath) : typeof input === "string" ? encodeUTF8(input) : input - }; - if (mangleCache) - request.mangleCache = mangleCache; - sendRequest(refs, request, (error, response) => { - if (error) - return callback(new Error(error), null); - let errors = replaceDetailsInMessages(response.errors, details); - let warnings = replaceDetailsInMessages(response.warnings, details); - let outstanding = 1; - let next = () => { - if (--outstanding === 0) { - let result = { - warnings, - code: response.code, - map: response.map, - mangleCache: void 0, - legalComments: void 0 - }; - if ("legalComments" in response) - result.legalComments = response == null ? void 0 : response.legalComments; - if (response.mangleCache) - result.mangleCache = response == null ? 
void 0 : response.mangleCache; - callback(null, result); - } - }; - if (errors.length > 0) - return callback(failureErrorWithLog("Transform failed", errors, warnings), null); - if (response.codeFS) { - outstanding++; - fs3.readFile(response.code, (err, contents) => { - if (err !== null) { - callback(err, null); - } else { - response.code = contents; - next(); - } - }); - } - if (response.mapFS) { - outstanding++; - fs3.readFile(response.map, (err, contents) => { - if (err !== null) { - callback(err, null); - } else { - response.map = contents; - next(); - } - }); - } - next(); - }); - } catch (e) { - let flags = []; - try { - pushLogFlags(flags, options, {}, isTTY2, transformLogLevelDefault); - } catch { - } - const error = extractErrorMessageV8(e, streamIn, details, void 0, ""); - sendRequest(refs, { command: "error", flags, error }, () => { - error.detail = details.load(error.detail); - callback(failureErrorWithLog("Transform failed", [error], []), null); - }); - } - }; - if ((typeof input === "string" || input instanceof Uint8Array) && input.length > 1024 * 1024) { - let next = start; - start = () => fs3.writeFile(input, next); - } - start(null); - }; - let formatMessages2 = ({ callName, refs, messages, options, callback }) => { - let result = sanitizeMessages(messages, "messages", null, ""); - if (!options) - throw new Error(`Missing second argument in ${callName}() call`); - let keys = {}; - let kind = getFlag(options, keys, "kind", mustBeString); - let color = getFlag(options, keys, "color", mustBeBoolean); - let terminalWidth = getFlag(options, keys, "terminalWidth", mustBeInteger); - checkForInvalidFlags(options, keys, `in ${callName}() call`); - if (kind === void 0) - throw new Error(`Missing "kind" in ${callName}() call`); - if (kind !== "error" && kind !== "warning") - throw new Error(`Expected "kind" to be "error" or "warning" in ${callName}() call`); - let request = { - command: "format-msgs", - messages: result, - isWarning: kind === "warning" - }; - if (color !== void 0) - request.color = color; - if (terminalWidth !== void 0) - request.terminalWidth = terminalWidth; - sendRequest(refs, request, (error, response) => { - if (error) - return callback(new Error(error), null); - callback(null, response.messages); - }); - }; - let analyzeMetafile2 = ({ callName, refs, metafile, options, callback }) => { - if (options === void 0) - options = {}; - let keys = {}; - let color = getFlag(options, keys, "color", mustBeBoolean); - let verbose = getFlag(options, keys, "verbose", mustBeBoolean); - checkForInvalidFlags(options, keys, `in ${callName}() call`); - let request = { - command: "analyze-metafile", - metafile - }; - if (color !== void 0) - request.color = color; - if (verbose !== void 0) - request.verbose = verbose; - sendRequest(refs, request, (error, response) => { - if (error) - return callback(new Error(error), null); - callback(null, response.result); - }); - }; - return { - readFromStdout, - afterClose, - service: { - buildOrContext, - transform: transform2, - formatMessages: formatMessages2, - analyzeMetafile: analyzeMetafile2 - } - }; -} -function buildOrContextImpl(callName, buildKey, sendRequest, sendResponse, refs, streamIn, requestCallbacks, options, isTTY2, defaultWD2, callback) { - const details = createObjectStash(); - const isContext = callName === "context"; - const handleError = (e, pluginName) => { - const flags = []; - try { - pushLogFlags(flags, options, {}, isTTY2, buildLogLevelDefault); - } catch { - } - const message = extractErrorMessageV8(e, streamIn, 
details, void 0, pluginName); - sendRequest(refs, { command: "error", flags, error: message }, () => { - message.detail = details.load(message.detail); - callback(failureErrorWithLog(isContext ? "Context failed" : "Build failed", [message], []), null); - }); - }; - let plugins; - if (typeof options === "object") { - const value = options.plugins; - if (value !== void 0) { - if (!Array.isArray(value)) - return handleError(new Error(`"plugins" must be an array`), ""); - plugins = value; - } - } - if (plugins && plugins.length > 0) { - if (streamIn.isSync) - return handleError(new Error("Cannot use plugins in synchronous API calls"), ""); - handlePlugins( - buildKey, - sendRequest, - sendResponse, - refs, - streamIn, - requestCallbacks, - options, - plugins, - details - ).then( - (result) => { - if (!result.ok) - return handleError(result.error, result.pluginName); - try { - buildOrContextContinue(result.requestPlugins, result.runOnEndCallbacks, result.scheduleOnDisposeCallbacks); - } catch (e) { - handleError(e, ""); - } - }, - (e) => handleError(e, "") - ); - return; - } - try { - buildOrContextContinue(null, (result, done) => done([], []), () => { - }); - } catch (e) { - handleError(e, ""); - } - function buildOrContextContinue(requestPlugins, runOnEndCallbacks, scheduleOnDisposeCallbacks) { - const writeDefault = streamIn.hasFS; - const { - entries, - flags, - write, - stdinContents, - stdinResolveDir, - absWorkingDir, - nodePaths, - mangleCache - } = flagsForBuildOptions(callName, options, isTTY2, buildLogLevelDefault, writeDefault); - if (write && !streamIn.hasFS) - throw new Error(`The "write" option is unavailable in this environment`); - const request = { - command: "build", - key: buildKey, - entries, - flags, - write, - stdinContents, - stdinResolveDir, - absWorkingDir: absWorkingDir || defaultWD2, - nodePaths, - context: isContext - }; - if (requestPlugins) - request.plugins = requestPlugins; - if (mangleCache) - request.mangleCache = mangleCache; - const buildResponseToResult = (response, callback2) => { - const result = { - errors: replaceDetailsInMessages(response.errors, details), - warnings: replaceDetailsInMessages(response.warnings, details), - outputFiles: void 0, - metafile: void 0, - mangleCache: void 0 - }; - const originalErrors = result.errors.slice(); - const originalWarnings = result.warnings.slice(); - if (response.outputFiles) - result.outputFiles = response.outputFiles.map(convertOutputFiles); - if (response.metafile) - result.metafile = JSON.parse(response.metafile); - if (response.mangleCache) - result.mangleCache = response.mangleCache; - if (response.writeToStdout !== void 0) - console.log(decodeUTF8(response.writeToStdout).replace(/\n$/, "")); - runOnEndCallbacks(result, (onEndErrors, onEndWarnings) => { - if (originalErrors.length > 0 || onEndErrors.length > 0) { - const error = failureErrorWithLog("Build failed", originalErrors.concat(onEndErrors), originalWarnings.concat(onEndWarnings)); - return callback2(error, null, onEndErrors, onEndWarnings); - } - callback2(null, result, onEndErrors, onEndWarnings); - }); - }; - let latestResultPromise; - let provideLatestResult; - if (isContext) - requestCallbacks["on-end"] = (id, request2) => new Promise((resolve) => { - buildResponseToResult(request2, (err, result, onEndErrors, onEndWarnings) => { - const response = { - errors: onEndErrors, - warnings: onEndWarnings - }; - if (provideLatestResult) - provideLatestResult(err, result); - latestResultPromise = void 0; - provideLatestResult = void 0; - 
sendResponse(id, response); - resolve(); - }); - }); - sendRequest(refs, request, (error, response) => { - if (error) - return callback(new Error(error), null); - if (!isContext) { - return buildResponseToResult(response, (err, res) => { - scheduleOnDisposeCallbacks(); - return callback(err, res); - }); - } - if (response.errors.length > 0) { - return callback(failureErrorWithLog("Context failed", response.errors, response.warnings), null); - } - let didDispose = false; - const result = { - rebuild: () => { - if (!latestResultPromise) - latestResultPromise = new Promise((resolve, reject) => { - let settlePromise; - provideLatestResult = (err, result2) => { - if (!settlePromise) - settlePromise = () => err ? reject(err) : resolve(result2); - }; - const triggerAnotherBuild = () => { - const request2 = { - command: "rebuild", - key: buildKey - }; - sendRequest(refs, request2, (error2, response2) => { - if (error2) { - reject(new Error(error2)); - } else if (settlePromise) { - settlePromise(); - } else { - triggerAnotherBuild(); - } - }); - }; - triggerAnotherBuild(); - }); - return latestResultPromise; - }, - watch: (options2 = {}) => new Promise((resolve, reject) => { - if (!streamIn.hasFS) - throw new Error(`Cannot use the "watch" API in this environment`); - const keys = {}; - checkForInvalidFlags(options2, keys, `in watch() call`); - const request2 = { - command: "watch", - key: buildKey - }; - sendRequest(refs, request2, (error2) => { - if (error2) - reject(new Error(error2)); - else - resolve(void 0); - }); - }), - serve: (options2 = {}) => new Promise((resolve, reject) => { - if (!streamIn.hasFS) - throw new Error(`Cannot use the "serve" API in this environment`); - const keys = {}; - const port = getFlag(options2, keys, "port", mustBeInteger); - const host = getFlag(options2, keys, "host", mustBeString); - const servedir = getFlag(options2, keys, "servedir", mustBeString); - const keyfile = getFlag(options2, keys, "keyfile", mustBeString); - const certfile = getFlag(options2, keys, "certfile", mustBeString); - const fallback = getFlag(options2, keys, "fallback", mustBeString); - const onRequest = getFlag(options2, keys, "onRequest", mustBeFunction); - checkForInvalidFlags(options2, keys, `in serve() call`); - const request2 = { - command: "serve", - key: buildKey, - onRequest: !!onRequest - }; - if (port !== void 0) - request2.port = port; - if (host !== void 0) - request2.host = host; - if (servedir !== void 0) - request2.servedir = servedir; - if (keyfile !== void 0) - request2.keyfile = keyfile; - if (certfile !== void 0) - request2.certfile = certfile; - if (fallback !== void 0) - request2.fallback = fallback; - sendRequest(refs, request2, (error2, response2) => { - if (error2) - return reject(new Error(error2)); - if (onRequest) { - requestCallbacks["serve-request"] = (id, request3) => { - onRequest(request3.args); - sendResponse(id, {}); - }; - } - resolve(response2); - }); - }), - cancel: () => new Promise((resolve) => { - if (didDispose) - return resolve(); - const request2 = { - command: "cancel", - key: buildKey - }; - sendRequest(refs, request2, () => { - resolve(); - }); - }), - dispose: () => new Promise((resolve) => { - if (didDispose) - return resolve(); - didDispose = true; - const request2 = { - command: "dispose", - key: buildKey - }; - sendRequest(refs, request2, () => { - resolve(); - scheduleOnDisposeCallbacks(); - refs.unref(); - }); - }) - }; - refs.ref(); - callback(null, result); - }); - } -} -var handlePlugins = async (buildKey, sendRequest, sendResponse, 
refs, streamIn, requestCallbacks, initialOptions, plugins, details) => { - let onStartCallbacks = []; - let onEndCallbacks = []; - let onResolveCallbacks = {}; - let onLoadCallbacks = {}; - let onDisposeCallbacks = []; - let nextCallbackID = 0; - let i = 0; - let requestPlugins = []; - let isSetupDone = false; - plugins = [...plugins]; - for (let item of plugins) { - let keys = {}; - if (typeof item !== "object") - throw new Error(`Plugin at index ${i} must be an object`); - const name = getFlag(item, keys, "name", mustBeString); - if (typeof name !== "string" || name === "") - throw new Error(`Plugin at index ${i} is missing a name`); - try { - let setup = getFlag(item, keys, "setup", mustBeFunction); - if (typeof setup !== "function") - throw new Error(`Plugin is missing a setup function`); - checkForInvalidFlags(item, keys, `on plugin ${quote(name)}`); - let plugin = { - name, - onStart: false, - onEnd: false, - onResolve: [], - onLoad: [] - }; - i++; - let resolve = (path3, options = {}) => { - if (!isSetupDone) - throw new Error('Cannot call "resolve" before plugin setup has completed'); - if (typeof path3 !== "string") - throw new Error(`The path to resolve must be a string`); - let keys2 = /* @__PURE__ */ Object.create(null); - let pluginName = getFlag(options, keys2, "pluginName", mustBeString); - let importer = getFlag(options, keys2, "importer", mustBeString); - let namespace = getFlag(options, keys2, "namespace", mustBeString); - let resolveDir = getFlag(options, keys2, "resolveDir", mustBeString); - let kind = getFlag(options, keys2, "kind", mustBeString); - let pluginData = getFlag(options, keys2, "pluginData", canBeAnything); - checkForInvalidFlags(options, keys2, "in resolve() call"); - return new Promise((resolve2, reject) => { - const request = { - command: "resolve", - path: path3, - key: buildKey, - pluginName: name - }; - if (pluginName != null) - request.pluginName = pluginName; - if (importer != null) - request.importer = importer; - if (namespace != null) - request.namespace = namespace; - if (resolveDir != null) - request.resolveDir = resolveDir; - if (kind != null) - request.kind = kind; - else - throw new Error(`Must specify "kind" when calling "resolve"`); - if (pluginData != null) - request.pluginData = details.store(pluginData); - sendRequest(refs, request, (error, response) => { - if (error !== null) - reject(new Error(error)); - else - resolve2({ - errors: replaceDetailsInMessages(response.errors, details), - warnings: replaceDetailsInMessages(response.warnings, details), - path: response.path, - external: response.external, - sideEffects: response.sideEffects, - namespace: response.namespace, - suffix: response.suffix, - pluginData: details.load(response.pluginData) - }); - }); - }); - }; - let promise = setup({ - initialOptions, - resolve, - onStart(callback) { - let registeredText = `This error came from the "onStart" callback registered here:`; - let registeredNote = extractCallerV8(new Error(registeredText), streamIn, "onStart"); - onStartCallbacks.push({ name, callback, note: registeredNote }); - plugin.onStart = true; - }, - onEnd(callback) { - let registeredText = `This error came from the "onEnd" callback registered here:`; - let registeredNote = extractCallerV8(new Error(registeredText), streamIn, "onEnd"); - onEndCallbacks.push({ name, callback, note: registeredNote }); - plugin.onEnd = true; - }, - onResolve(options, callback) { - let registeredText = `This error came from the "onResolve" callback registered here:`; - let registeredNote = 
extractCallerV8(new Error(registeredText), streamIn, "onResolve"); - let keys2 = {}; - let filter = getFlag(options, keys2, "filter", mustBeRegExp); - let namespace = getFlag(options, keys2, "namespace", mustBeString); - checkForInvalidFlags(options, keys2, `in onResolve() call for plugin ${quote(name)}`); - if (filter == null) - throw new Error(`onResolve() call is missing a filter`); - let id = nextCallbackID++; - onResolveCallbacks[id] = { name, callback, note: registeredNote }; - plugin.onResolve.push({ id, filter: filter.source, namespace: namespace || "" }); - }, - onLoad(options, callback) { - let registeredText = `This error came from the "onLoad" callback registered here:`; - let registeredNote = extractCallerV8(new Error(registeredText), streamIn, "onLoad"); - let keys2 = {}; - let filter = getFlag(options, keys2, "filter", mustBeRegExp); - let namespace = getFlag(options, keys2, "namespace", mustBeString); - checkForInvalidFlags(options, keys2, `in onLoad() call for plugin ${quote(name)}`); - if (filter == null) - throw new Error(`onLoad() call is missing a filter`); - let id = nextCallbackID++; - onLoadCallbacks[id] = { name, callback, note: registeredNote }; - plugin.onLoad.push({ id, filter: filter.source, namespace: namespace || "" }); - }, - onDispose(callback) { - onDisposeCallbacks.push(callback); - }, - esbuild: streamIn.esbuild - }); - if (promise) - await promise; - requestPlugins.push(plugin); - } catch (e) { - return { ok: false, error: e, pluginName: name }; - } - } - requestCallbacks["on-start"] = async (id, request) => { - let response = { errors: [], warnings: [] }; - await Promise.all(onStartCallbacks.map(async ({ name, callback, note }) => { - try { - let result = await callback(); - if (result != null) { - if (typeof result !== "object") - throw new Error(`Expected onStart() callback in plugin ${quote(name)} to return an object`); - let keys = {}; - let errors = getFlag(result, keys, "errors", mustBeArray); - let warnings = getFlag(result, keys, "warnings", mustBeArray); - checkForInvalidFlags(result, keys, `from onStart() callback in plugin ${quote(name)}`); - if (errors != null) - response.errors.push(...sanitizeMessages(errors, "errors", details, name)); - if (warnings != null) - response.warnings.push(...sanitizeMessages(warnings, "warnings", details, name)); - } - } catch (e) { - response.errors.push(extractErrorMessageV8(e, streamIn, details, note && note(), name)); - } - })); - sendResponse(id, response); - }; - requestCallbacks["on-resolve"] = async (id, request) => { - let response = {}, name = "", callback, note; - for (let id2 of request.ids) { - try { - ({ name, callback, note } = onResolveCallbacks[id2]); - let result = await callback({ - path: request.path, - importer: request.importer, - namespace: request.namespace, - resolveDir: request.resolveDir, - kind: request.kind, - pluginData: details.load(request.pluginData) - }); - if (result != null) { - if (typeof result !== "object") - throw new Error(`Expected onResolve() callback in plugin ${quote(name)} to return an object`); - let keys = {}; - let pluginName = getFlag(result, keys, "pluginName", mustBeString); - let path3 = getFlag(result, keys, "path", mustBeString); - let namespace = getFlag(result, keys, "namespace", mustBeString); - let suffix = getFlag(result, keys, "suffix", mustBeString); - let external = getFlag(result, keys, "external", mustBeBoolean); - let sideEffects = getFlag(result, keys, "sideEffects", mustBeBoolean); - let pluginData = getFlag(result, keys, "pluginData", 
canBeAnything); - let errors = getFlag(result, keys, "errors", mustBeArray); - let warnings = getFlag(result, keys, "warnings", mustBeArray); - let watchFiles = getFlag(result, keys, "watchFiles", mustBeArray); - let watchDirs = getFlag(result, keys, "watchDirs", mustBeArray); - checkForInvalidFlags(result, keys, `from onResolve() callback in plugin ${quote(name)}`); - response.id = id2; - if (pluginName != null) - response.pluginName = pluginName; - if (path3 != null) - response.path = path3; - if (namespace != null) - response.namespace = namespace; - if (suffix != null) - response.suffix = suffix; - if (external != null) - response.external = external; - if (sideEffects != null) - response.sideEffects = sideEffects; - if (pluginData != null) - response.pluginData = details.store(pluginData); - if (errors != null) - response.errors = sanitizeMessages(errors, "errors", details, name); - if (warnings != null) - response.warnings = sanitizeMessages(warnings, "warnings", details, name); - if (watchFiles != null) - response.watchFiles = sanitizeStringArray(watchFiles, "watchFiles"); - if (watchDirs != null) - response.watchDirs = sanitizeStringArray(watchDirs, "watchDirs"); - break; - } - } catch (e) { - response = { id: id2, errors: [extractErrorMessageV8(e, streamIn, details, note && note(), name)] }; - break; - } - } - sendResponse(id, response); - }; - requestCallbacks["on-load"] = async (id, request) => { - let response = {}, name = "", callback, note; - for (let id2 of request.ids) { - try { - ({ name, callback, note } = onLoadCallbacks[id2]); - let result = await callback({ - path: request.path, - namespace: request.namespace, - suffix: request.suffix, - pluginData: details.load(request.pluginData) - }); - if (result != null) { - if (typeof result !== "object") - throw new Error(`Expected onLoad() callback in plugin ${quote(name)} to return an object`); - let keys = {}; - let pluginName = getFlag(result, keys, "pluginName", mustBeString); - let contents = getFlag(result, keys, "contents", mustBeStringOrUint8Array); - let resolveDir = getFlag(result, keys, "resolveDir", mustBeString); - let pluginData = getFlag(result, keys, "pluginData", canBeAnything); - let loader = getFlag(result, keys, "loader", mustBeString); - let errors = getFlag(result, keys, "errors", mustBeArray); - let warnings = getFlag(result, keys, "warnings", mustBeArray); - let watchFiles = getFlag(result, keys, "watchFiles", mustBeArray); - let watchDirs = getFlag(result, keys, "watchDirs", mustBeArray); - checkForInvalidFlags(result, keys, `from onLoad() callback in plugin ${quote(name)}`); - response.id = id2; - if (pluginName != null) - response.pluginName = pluginName; - if (contents instanceof Uint8Array) - response.contents = contents; - else if (contents != null) - response.contents = encodeUTF8(contents); - if (resolveDir != null) - response.resolveDir = resolveDir; - if (pluginData != null) - response.pluginData = details.store(pluginData); - if (loader != null) - response.loader = loader; - if (errors != null) - response.errors = sanitizeMessages(errors, "errors", details, name); - if (warnings != null) - response.warnings = sanitizeMessages(warnings, "warnings", details, name); - if (watchFiles != null) - response.watchFiles = sanitizeStringArray(watchFiles, "watchFiles"); - if (watchDirs != null) - response.watchDirs = sanitizeStringArray(watchDirs, "watchDirs"); - break; - } - } catch (e) { - response = { id: id2, errors: [extractErrorMessageV8(e, streamIn, details, note && note(), name)] }; - break; - } - 
} - sendResponse(id, response); - }; - let runOnEndCallbacks = (result, done) => done([], []); - if (onEndCallbacks.length > 0) { - runOnEndCallbacks = (result, done) => { - (async () => { - const onEndErrors = []; - const onEndWarnings = []; - for (const { name, callback, note } of onEndCallbacks) { - let newErrors; - let newWarnings; - try { - const value = await callback(result); - if (value != null) { - if (typeof value !== "object") - throw new Error(`Expected onEnd() callback in plugin ${quote(name)} to return an object`); - let keys = {}; - let errors = getFlag(value, keys, "errors", mustBeArray); - let warnings = getFlag(value, keys, "warnings", mustBeArray); - checkForInvalidFlags(value, keys, `from onEnd() callback in plugin ${quote(name)}`); - if (errors != null) - newErrors = sanitizeMessages(errors, "errors", details, name); - if (warnings != null) - newWarnings = sanitizeMessages(warnings, "warnings", details, name); - } - } catch (e) { - newErrors = [extractErrorMessageV8(e, streamIn, details, note && note(), name)]; - } - if (newErrors) { - onEndErrors.push(...newErrors); - try { - result.errors.push(...newErrors); - } catch { - } - } - if (newWarnings) { - onEndWarnings.push(...newWarnings); - try { - result.warnings.push(...newWarnings); - } catch { - } - } - } - done(onEndErrors, onEndWarnings); - })(); - }; - } - let scheduleOnDisposeCallbacks = () => { - for (const cb of onDisposeCallbacks) { - setTimeout(() => cb(), 0); - } - }; - isSetupDone = true; - return { - ok: true, - requestPlugins, - runOnEndCallbacks, - scheduleOnDisposeCallbacks - }; -}; -function createObjectStash() { - const map = /* @__PURE__ */ new Map(); - let nextID = 0; - return { - load(id) { - return map.get(id); - }, - store(value) { - if (value === void 0) - return -1; - const id = nextID++; - map.set(id, value); - return id; - } - }; -} -function extractCallerV8(e, streamIn, ident) { - let note; - let tried = false; - return () => { - if (tried) - return note; - tried = true; - try { - let lines = (e.stack + "").split("\n"); - lines.splice(1, 1); - let location = parseStackLinesV8(streamIn, lines, ident); - if (location) { - note = { text: e.message, location }; - return note; - } - } catch { - } - }; -} -function extractErrorMessageV8(e, streamIn, stash, note, pluginName) { - let text = "Internal error"; - let location = null; - try { - text = (e && e.message || e) + ""; - } catch { - } - try { - location = parseStackLinesV8(streamIn, (e.stack + "").split("\n"), ""); - } catch { - } - return { id: "", pluginName, text, location, notes: note ? [note] : [], detail: stash ? stash.store(e) : -1 }; -} -function parseStackLinesV8(streamIn, lines, ident) { - let at = " at "; - if (streamIn.readFileSync && !lines[0].startsWith(at) && lines[1].startsWith(at)) { - for (let i = 1; i < lines.length; i++) { - let line = lines[i]; - if (!line.startsWith(at)) - continue; - line = line.slice(at.length); - while (true) { - let match = /^(?:new |async )?\S+ \((.*)\)$/.exec(line); - if (match) { - line = match[1]; - continue; - } - match = /^eval at \S+ \((.*)\)(?:, \S+:\d+:\d+)?$/.exec(line); - if (match) { - line = match[1]; - continue; - } - match = /^(\S+):(\d+):(\d+)$/.exec(line); - if (match) { - let contents; - try { - contents = streamIn.readFileSync(match[1], "utf8"); - } catch { - break; - } - let lineText = contents.split(/\r\n|\r|\n|\u2028|\u2029/)[+match[2] - 1] || ""; - let column = +match[3] - 1; - let length = lineText.slice(column, column + ident.length) === ident ? 
ident.length : 0; - return { - file: match[1], - namespace: "file", - line: +match[2], - column: encodeUTF8(lineText.slice(0, column)).length, - length: encodeUTF8(lineText.slice(column, column + length)).length, - lineText: lineText + "\n" + lines.slice(1).join("\n"), - suggestion: "" - }; - } - break; - } - } - } - return null; -} -function failureErrorWithLog(text, errors, warnings) { - let limit = 5; - let summary = errors.length < 1 ? "" : ` with ${errors.length} error${errors.length < 2 ? "" : "s"}:` + errors.slice(0, limit + 1).map((e, i) => { - if (i === limit) - return "\n..."; - if (!e.location) - return ` -error: ${e.text}`; - let { file, line, column } = e.location; - let pluginText = e.pluginName ? `[plugin: ${e.pluginName}] ` : ""; - return ` -${file}:${line}:${column}: ERROR: ${pluginText}${e.text}`; - }).join(""); - let error = new Error(`${text}${summary}`); - error.errors = errors; - error.warnings = warnings; - return error; -} -function replaceDetailsInMessages(messages, stash) { - for (const message of messages) { - message.detail = stash.load(message.detail); - } - return messages; -} -function sanitizeLocation(location, where) { - if (location == null) - return null; - let keys = {}; - let file = getFlag(location, keys, "file", mustBeString); - let namespace = getFlag(location, keys, "namespace", mustBeString); - let line = getFlag(location, keys, "line", mustBeInteger); - let column = getFlag(location, keys, "column", mustBeInteger); - let length = getFlag(location, keys, "length", mustBeInteger); - let lineText = getFlag(location, keys, "lineText", mustBeString); - let suggestion = getFlag(location, keys, "suggestion", mustBeString); - checkForInvalidFlags(location, keys, where); - return { - file: file || "", - namespace: namespace || "", - line: line || 0, - column: column || 0, - length: length || 0, - lineText: lineText || "", - suggestion: suggestion || "" - }; -} -function sanitizeMessages(messages, property, stash, fallbackPluginName) { - let messagesClone = []; - let index = 0; - for (const message of messages) { - let keys = {}; - let id = getFlag(message, keys, "id", mustBeString); - let pluginName = getFlag(message, keys, "pluginName", mustBeString); - let text = getFlag(message, keys, "text", mustBeString); - let location = getFlag(message, keys, "location", mustBeObjectOrNull); - let notes = getFlag(message, keys, "notes", mustBeArray); - let detail = getFlag(message, keys, "detail", canBeAnything); - let where = `in element ${index} of "${property}"`; - checkForInvalidFlags(message, keys, where); - let notesClone = []; - if (notes) { - for (const note of notes) { - let noteKeys = {}; - let noteText = getFlag(note, noteKeys, "text", mustBeString); - let noteLocation = getFlag(note, noteKeys, "location", mustBeObjectOrNull); - checkForInvalidFlags(note, noteKeys, where); - notesClone.push({ - text: noteText || "", - location: sanitizeLocation(noteLocation, where) - }); - } - } - messagesClone.push({ - id: id || "", - pluginName: pluginName || fallbackPluginName, - text: text || "", - location: sanitizeLocation(location, where), - notes: notesClone, - detail: stash ? 
stash.store(detail) : -1 - }); - index++; - } - return messagesClone; -} -function sanitizeStringArray(values, property) { - const result = []; - for (const value of values) { - if (typeof value !== "string") - throw new Error(`${quote(property)} must be an array of strings`); - result.push(value); - } - return result; -} -function convertOutputFiles({ path: path3, contents, hash }) { - let text = null; - return { - path: path3, - contents, - hash, - get text() { - const binary = this.contents; - if (text === null || binary !== contents) { - contents = binary; - text = decodeUTF8(binary); - } - return text; - } - }; -} - -// lib/npm/node-platform.ts -var fs = require("fs"); -var os = require("os"); -var path = require("path"); -var ESBUILD_BINARY_PATH = process.env.ESBUILD_BINARY_PATH || ESBUILD_BINARY_PATH; - -// lib/npm/node.ts -var child_process = require("child_process"); -var crypto = require("crypto"); -var path2 = require("path"); -var fs2 = require("fs"); -var os2 = require("os"); -var tty = require("tty"); -var worker_threads; -if (process.env.ESBUILD_WORKER_THREADS !== "0") { - try { - worker_threads = require("worker_threads"); - } catch { - } - let [major, minor] = process.versions.node.split("."); - if ( - // { - if ((!ESBUILD_BINARY_PATH || true) && (path2.basename(__filename) !== "main.js" || path2.basename(__dirname) !== "lib")) { - throw new Error( - `The esbuild JavaScript API cannot be bundled. Please mark the "esbuild" package as external so it's not included in the bundle. - -More information: The file containing the code for esbuild's JavaScript API (${__filename}) does not appear to be inside the esbuild package on the file system, which usually means that the esbuild package was bundled into another file. This is problematic because the API needs to run a binary executable inside the esbuild package which is located using a relative path from the API code to the executable. If the esbuild package is bundled, the relative path will be incorrect and the executable won't be found.` - ); - } - if (true) { - return ["node", [path2.join(__dirname, "..", "bin", "esbuild")]]; - } else { - const { binPath, isWASM } = generateBinPath(); - if (isWASM) { - return ["node", [binPath]]; - } else { - return [binPath, []]; - } - } -}; -var isTTY = () => tty.isatty(2); -var fsSync = { - readFile(tempFile, callback) { - try { - let contents = fs2.readFileSync(tempFile, "utf8"); - try { - fs2.unlinkSync(tempFile); - } catch { - } - callback(null, contents); - } catch (err) { - callback(err, null); - } - }, - writeFile(contents, callback) { - try { - let tempFile = randomFileName(); - fs2.writeFileSync(tempFile, contents); - callback(tempFile); - } catch { - callback(null); - } - } -}; -var fsAsync = { - readFile(tempFile, callback) { - try { - fs2.readFile(tempFile, "utf8", (err, contents) => { - try { - fs2.unlink(tempFile, () => callback(err, contents)); - } catch { - callback(err, contents); - } - }); - } catch (err) { - callback(err, null); - } - }, - writeFile(contents, callback) { - try { - let tempFile = randomFileName(); - fs2.writeFile(tempFile, contents, (err) => err !== null ? 
callback(null) : callback(tempFile)); - } catch { - callback(null); - } - } -}; -var version = "0.19.0"; -var build = (options) => ensureServiceIsRunning().build(options); -var context = (buildOptions) => ensureServiceIsRunning().context(buildOptions); -var transform = (input, options) => ensureServiceIsRunning().transform(input, options); -var formatMessages = (messages, options) => ensureServiceIsRunning().formatMessages(messages, options); -var analyzeMetafile = (messages, options) => ensureServiceIsRunning().analyzeMetafile(messages, options); -var buildSync = (options) => { - if (worker_threads && !isInternalWorkerThread) { - if (!workerThreadService) - workerThreadService = startWorkerThreadService(worker_threads); - return workerThreadService.buildSync(options); - } - let result; - runServiceSync((service) => service.buildOrContext({ - callName: "buildSync", - refs: null, - options, - isTTY: isTTY(), - defaultWD, - callback: (err, res) => { - if (err) - throw err; - result = res; - } - })); - return result; -}; -var transformSync = (input, options) => { - if (worker_threads && !isInternalWorkerThread) { - if (!workerThreadService) - workerThreadService = startWorkerThreadService(worker_threads); - return workerThreadService.transformSync(input, options); - } - let result; - runServiceSync((service) => service.transform({ - callName: "transformSync", - refs: null, - input, - options: options || {}, - isTTY: isTTY(), - fs: fsSync, - callback: (err, res) => { - if (err) - throw err; - result = res; - } - })); - return result; -}; -var formatMessagesSync = (messages, options) => { - if (worker_threads && !isInternalWorkerThread) { - if (!workerThreadService) - workerThreadService = startWorkerThreadService(worker_threads); - return workerThreadService.formatMessagesSync(messages, options); - } - let result; - runServiceSync((service) => service.formatMessages({ - callName: "formatMessagesSync", - refs: null, - messages, - options, - callback: (err, res) => { - if (err) - throw err; - result = res; - } - })); - return result; -}; -var analyzeMetafileSync = (metafile, options) => { - if (worker_threads && !isInternalWorkerThread) { - if (!workerThreadService) - workerThreadService = startWorkerThreadService(worker_threads); - return workerThreadService.analyzeMetafileSync(metafile, options); - } - let result; - runServiceSync((service) => service.analyzeMetafile({ - callName: "analyzeMetafileSync", - refs: null, - metafile: typeof metafile === "string" ? 
metafile : JSON.stringify(metafile), - options, - callback: (err, res) => { - if (err) - throw err; - result = res; - } - })); - return result; -}; -var initializeWasCalled = false; -var initialize = (options) => { - options = validateInitializeOptions(options || {}); - if (options.wasmURL) - throw new Error(`The "wasmURL" option only works in the browser`); - if (options.wasmModule) - throw new Error(`The "wasmModule" option only works in the browser`); - if (options.worker) - throw new Error(`The "worker" option only works in the browser`); - if (initializeWasCalled) - throw new Error('Cannot call "initialize" more than once'); - ensureServiceIsRunning(); - initializeWasCalled = true; - return Promise.resolve(); -}; -var defaultWD = process.cwd(); -var longLivedService; -var ensureServiceIsRunning = () => { - if (longLivedService) - return longLivedService; - let [command, args] = esbuildCommandAndArgs(); - let child = child_process.spawn(command, args.concat(`--service=${"0.19.0"}`, "--ping"), { - windowsHide: true, - stdio: ["pipe", "pipe", "inherit"], - cwd: defaultWD - }); - let { readFromStdout, afterClose, service } = createChannel({ - writeToStdin(bytes) { - child.stdin.write(bytes, (err) => { - if (err) - afterClose(err); - }); - }, - readFileSync: fs2.readFileSync, - isSync: false, - hasFS: true, - esbuild: node_exports - }); - child.stdin.on("error", afterClose); - child.on("error", afterClose); - const stdin = child.stdin; - const stdout = child.stdout; - stdout.on("data", readFromStdout); - stdout.on("end", afterClose); - let refCount = 0; - child.unref(); - if (stdin.unref) { - stdin.unref(); - } - if (stdout.unref) { - stdout.unref(); - } - const refs = { - ref() { - if (++refCount === 1) - child.ref(); - }, - unref() { - if (--refCount === 0) - child.unref(); - } - }; - longLivedService = { - build: (options) => new Promise((resolve, reject) => { - service.buildOrContext({ - callName: "build", - refs, - options, - isTTY: isTTY(), - defaultWD, - callback: (err, res) => err ? reject(err) : resolve(res) - }); - }), - context: (options) => new Promise((resolve, reject) => service.buildOrContext({ - callName: "context", - refs, - options, - isTTY: isTTY(), - defaultWD, - callback: (err, res) => err ? reject(err) : resolve(res) - })), - transform: (input, options) => new Promise((resolve, reject) => service.transform({ - callName: "transform", - refs, - input, - options: options || {}, - isTTY: isTTY(), - fs: fsAsync, - callback: (err, res) => err ? reject(err) : resolve(res) - })), - formatMessages: (messages, options) => new Promise((resolve, reject) => service.formatMessages({ - callName: "formatMessages", - refs, - messages, - options, - callback: (err, res) => err ? reject(err) : resolve(res) - })), - analyzeMetafile: (metafile, options) => new Promise((resolve, reject) => service.analyzeMetafile({ - callName: "analyzeMetafile", - refs, - metafile: typeof metafile === "string" ? metafile : JSON.stringify(metafile), - options, - callback: (err, res) => err ? 
reject(err) : resolve(res) - })) - }; - return longLivedService; -}; -var runServiceSync = (callback) => { - let [command, args] = esbuildCommandAndArgs(); - let stdin = new Uint8Array(); - let { readFromStdout, afterClose, service } = createChannel({ - writeToStdin(bytes) { - if (stdin.length !== 0) - throw new Error("Must run at most one command"); - stdin = bytes; - }, - isSync: true, - hasFS: true, - esbuild: node_exports - }); - callback(service); - let stdout = child_process.execFileSync(command, args.concat(`--service=${"0.19.0"}`), { - cwd: defaultWD, - windowsHide: true, - input: stdin, - // We don't know how large the output could be. If it's too large, the - // command will fail with ENOBUFS. Reserve 16mb for now since that feels - // like it should be enough. Also allow overriding this with an environment - // variable. - maxBuffer: +process.env.ESBUILD_MAX_BUFFER || 16 * 1024 * 1024 - }); - readFromStdout(stdout); - afterClose(null); -}; -var randomFileName = () => { - return path2.join(os2.tmpdir(), `esbuild-${crypto.randomBytes(32).toString("hex")}`); -}; -var workerThreadService = null; -var startWorkerThreadService = (worker_threads2) => { - let { port1: mainPort, port2: workerPort } = new worker_threads2.MessageChannel(); - let worker = new worker_threads2.Worker(__filename, { - workerData: { workerPort, defaultWD, esbuildVersion: "0.19.0" }, - transferList: [workerPort], - // From node's documentation: https://nodejs.org/api/worker_threads.html - // - // Take care when launching worker threads from preload scripts (scripts loaded - // and run using the `-r` command line flag). Unless the `execArgv` option is - // explicitly set, new Worker threads automatically inherit the command line flags - // from the running process and will preload the same preload scripts as the main - // thread. If the preload script unconditionally launches a worker thread, every - // thread spawned will spawn another until the application crashes. 
- // - execArgv: [] - }); - let nextID = 0; - let fakeBuildError = (text) => { - let error = new Error(`Build failed with 1 error: -error: ${text}`); - let errors = [{ id: "", pluginName: "", text, location: null, notes: [], detail: void 0 }]; - error.errors = errors; - error.warnings = []; - return error; - }; - let validateBuildSyncOptions = (options) => { - if (!options) - return; - let plugins = options.plugins; - if (plugins && plugins.length > 0) - throw fakeBuildError(`Cannot use plugins in synchronous API calls`); - }; - let applyProperties = (object, properties) => { - for (let key in properties) { - object[key] = properties[key]; - } - }; - let runCallSync = (command, args) => { - let id = nextID++; - let sharedBuffer = new SharedArrayBuffer(8); - let sharedBufferView = new Int32Array(sharedBuffer); - let msg = { sharedBuffer, id, command, args }; - worker.postMessage(msg); - let status = Atomics.wait(sharedBufferView, 0, 0); - if (status !== "ok" && status !== "not-equal") - throw new Error("Internal error: Atomics.wait() failed: " + status); - let { message: { id: id2, resolve, reject, properties } } = worker_threads2.receiveMessageOnPort(mainPort); - if (id !== id2) - throw new Error(`Internal error: Expected id ${id} but got id ${id2}`); - if (reject) { - applyProperties(reject, properties); - throw reject; - } - return resolve; - }; - worker.unref(); - return { - buildSync(options) { - validateBuildSyncOptions(options); - return runCallSync("build", [options]); - }, - transformSync(input, options) { - return runCallSync("transform", [input, options]); - }, - formatMessagesSync(messages, options) { - return runCallSync("formatMessages", [messages, options]); - }, - analyzeMetafileSync(metafile, options) { - return runCallSync("analyzeMetafile", [metafile, options]); - } - }; -}; -var startSyncServiceWorker = () => { - let workerPort = worker_threads.workerData.workerPort; - let parentPort = worker_threads.parentPort; - let extractProperties = (object) => { - let properties = {}; - if (object && typeof object === "object") { - for (let key in object) { - properties[key] = object[key]; - } - } - return properties; - }; - try { - let service = ensureServiceIsRunning(); - defaultWD = worker_threads.workerData.defaultWD; - parentPort.on("message", (msg) => { - (async () => { - let { sharedBuffer, id, command, args } = msg; - let sharedBufferView = new Int32Array(sharedBuffer); - try { - switch (command) { - case "build": - workerPort.postMessage({ id, resolve: await service.build(args[0]) }); - break; - case "transform": - workerPort.postMessage({ id, resolve: await service.transform(args[0], args[1]) }); - break; - case "formatMessages": - workerPort.postMessage({ id, resolve: await service.formatMessages(args[0], args[1]) }); - break; - case "analyzeMetafile": - workerPort.postMessage({ id, resolve: await service.analyzeMetafile(args[0], args[1]) }); - break; - default: - throw new Error(`Invalid command: ${command}`); - } - } catch (reject) { - workerPort.postMessage({ id, reject, properties: extractProperties(reject) }); - } - Atomics.add(sharedBufferView, 0, 1); - Atomics.notify(sharedBufferView, 0, Infinity); - })(); - }); - } catch (reject) { - parentPort.on("message", (msg) => { - let { sharedBuffer, id } = msg; - let sharedBufferView = new Int32Array(sharedBuffer); - workerPort.postMessage({ id, reject, properties: extractProperties(reject) }); - Atomics.add(sharedBufferView, 0, 1); - Atomics.notify(sharedBufferView, 0, Infinity); - }); - } -}; -if 
(isInternalWorkerThread) { - startSyncServiceWorker(); -} -var node_default = node_exports; -// Annotate the CommonJS export names for ESM import in node: -0 && (module.exports = { - analyzeMetafile, - analyzeMetafileSync, - build, - buildSync, - context, - formatMessages, - formatMessagesSync, - initialize, - transform, - transformSync, - version -}); diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_pylab_helpers.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_pylab_helpers.py deleted file mode 100644 index d32a69d4ff991dff512bb0c4867b24471f1f9fb5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/_pylab_helpers.py +++ /dev/null @@ -1,135 +0,0 @@ -""" -Manage figures for the pyplot interface. -""" - -import atexit -from collections import OrderedDict - - -class Gcf: - """ - Singleton to maintain the relation between figures and their managers, and - keep track of an "active" figure and manager. - - The canvas of a figure created through pyplot is associated with a figure - manager, which handles the interaction between the figure and the backend. - pyplot keeps track of figure managers using an identifier, the "figure - number" or "manager number" (which can actually be any hashable value); - this number is available as the :attr:`number` attribute of the manager. - - This class is never instantiated; it consists of an `OrderedDict` mapping - figure/manager numbers to managers, and a set of class methods that - manipulate this `OrderedDict`. - - Attributes - ---------- - figs : OrderedDict - `OrderedDict` mapping numbers to managers; the active manager is at the - end. - """ - - figs = OrderedDict() - - @classmethod - def get_fig_manager(cls, num): - """ - If manager number *num* exists, make it the active one and return it; - otherwise return *None*. - """ - manager = cls.figs.get(num, None) - if manager is not None: - cls.set_active(manager) - return manager - - @classmethod - def destroy(cls, num): - """ - Destroy manager *num* -- either a manager instance or a manager number. - - In the interactive backends, this is bound to the window "destroy" and - "delete" events. - - It is recommended to pass a manager instance, to avoid confusion when - two managers share the same number.
- """ - if all(hasattr(num, attr) for attr in ["num", "destroy"]): - manager = num - if cls.figs.get(manager.num) is manager: - cls.figs.pop(manager.num) - else: - try: - manager = cls.figs.pop(num) - except KeyError: - return - if hasattr(manager, "_cidgcf"): - manager.canvas.mpl_disconnect(manager._cidgcf) - manager.destroy() - del manager, num - - @classmethod - def destroy_fig(cls, fig): - """Destroy figure *fig*.""" - num = next((manager.num for manager in cls.figs.values() - if manager.canvas.figure == fig), None) - if num is not None: - cls.destroy(num) - - @classmethod - def destroy_all(cls): - """Destroy all figures.""" - for manager in list(cls.figs.values()): - manager.canvas.mpl_disconnect(manager._cidgcf) - manager.destroy() - cls.figs.clear() - - @classmethod - def has_fignum(cls, num): - """Return whether figure number *num* exists.""" - return num in cls.figs - - @classmethod - def get_all_fig_managers(cls): - """Return a list of figure managers.""" - return list(cls.figs.values()) - - @classmethod - def get_num_fig_managers(cls): - """Return the number of figures being managed.""" - return len(cls.figs) - - @classmethod - def get_active(cls): - """Return the active manager, or *None* if there is no manager.""" - return next(reversed(cls.figs.values())) if cls.figs else None - - @classmethod - def _set_new_active_manager(cls, manager): - """Adopt *manager* into pyplot and make it the active manager.""" - if not hasattr(manager, "_cidgcf"): - manager._cidgcf = manager.canvas.mpl_connect( - "button_press_event", lambda event: cls.set_active(manager)) - fig = manager.canvas.figure - fig.number = manager.num - label = fig.get_label() - if label: - manager.set_window_title(label) - cls.set_active(manager) - - @classmethod - def set_active(cls, manager): - """Make *manager* the active manager.""" - cls.figs[manager.num] = manager - cls.figs.move_to_end(manager.num) - - @classmethod - def draw_all(cls, force=False): - """ - Redraw all stale managed figures, or, if *force* is True, all managed - figures. 
- """ - for manager in cls.get_all_fig_managers(): - if force or manager.canvas.figure.stale: - manager.canvas.draw_idle() - - -atexit.register(Gcf.destroy_all) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/mplot3d/tests/test_art3d.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/mplot3d/tests/test_art3d.py deleted file mode 100644 index 4ed48aae46858555f8813fbbff9e573587ea45b0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/mplot3d/tests/test_art3d.py +++ /dev/null @@ -1,56 +0,0 @@ -import numpy as np - -import matplotlib.pyplot as plt - -from matplotlib.backend_bases import MouseEvent -from mpl_toolkits.mplot3d.art3d import Line3DCollection - - -def test_scatter_3d_projection_conservation(): - fig = plt.figure() - ax = fig.add_subplot(projection='3d') - # fix axes3d projection - ax.roll = 0 - ax.elev = 0 - ax.azim = -45 - ax.stale = True - - x = [0, 1, 2, 3, 4] - scatter_collection = ax.scatter(x, x, x) - fig.canvas.draw_idle() - - # Get scatter location on canvas and freeze the data - scatter_offset = scatter_collection.get_offsets() - scatter_location = ax.transData.transform(scatter_offset) - - # Yaw -44 and -46 are enough to produce two set of scatter - # with opposite z-order without moving points too far - for azim in (-44, -46): - ax.azim = azim - ax.stale = True - fig.canvas.draw_idle() - - for i in range(5): - # Create a mouse event used to locate and to get index - # from each dots - event = MouseEvent("button_press_event", fig.canvas, - *scatter_location[i, :]) - contains, ind = scatter_collection.contains(event) - assert contains is True - assert len(ind["ind"]) == 1 - assert ind["ind"][0] == i - - -def test_zordered_error(): - # Smoke test for https://github.com/matplotlib/matplotlib/issues/26497 - lc = [(np.fromiter([0.0, 0.0, 0.0], dtype="float"), - np.fromiter([1.0, 1.0, 1.0], dtype="float"))] - pc = [np.fromiter([0.0, 0.0], dtype="float"), - np.fromiter([0.0, 1.0], dtype="float"), - np.fromiter([1.0, 1.0], dtype="float")] - - fig = plt.figure() - ax = fig.add_subplot(projection="3d") - ax.add_collection(Line3DCollection(lc)) - ax.scatter(*pc, visible=False) - plt.draw() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/array_api/_sorting_functions.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/array_api/_sorting_functions.py deleted file mode 100644 index 9b8cb044d88a992da63d6e1a68c5b4c998a49680..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/array_api/_sorting_functions.py +++ /dev/null @@ -1,54 +0,0 @@ -from __future__ import annotations - -from ._array_object import Array -from ._dtypes import _real_numeric_dtypes - -import numpy as np - - -# Note: the descending keyword argument is new in this function -def argsort( - x: Array, /, *, axis: int = -1, descending: bool = False, stable: bool = True -) -> Array: - """ - Array API compatible wrapper for :py:func:`np.argsort `. - - See its docstring for more information. - """ - if x.dtype not in _real_numeric_dtypes: - raise TypeError("Only real numeric dtypes are allowed in argsort") - # Note: this keyword argument is different, and the default is different. - kind = "stable" if stable else "quicksort" - if not descending: - res = np.argsort(x._array, axis=axis, kind=kind) - else: - # As NumPy has no native descending sort, we imitate it here. 
Note that - # simply flipping the results of np.argsort(x._array, ...) would not - # respect the relative order like it would in native descending sorts. - res = np.flip( - np.argsort(np.flip(x._array, axis=axis), axis=axis, kind=kind), - axis=axis, - ) - # Rely on flip()/argsort() to validate axis - normalised_axis = axis if axis >= 0 else x.ndim + axis - max_i = x.shape[normalised_axis] - 1 - res = max_i - res - return Array._new(res) - -# Note: the descending keyword argument is new in this function -def sort( - x: Array, /, *, axis: int = -1, descending: bool = False, stable: bool = True -) -> Array: - """ - Array API compatible wrapper for :py:func:`np.sort `. - - See its docstring for more information. - """ - if x.dtype not in _real_numeric_dtypes: - raise TypeError("Only real numeric dtypes are allowed in sort") - # Note: this keyword argument is different, and the default is different. - kind = "stable" if stable else "quicksort" - res = np.sort(x._array, axis=axis, kind=kind) - if descending: - res = np.flip(res, axis=axis) - return Array._new(res) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/string/gh24008.f b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/string/gh24008.f deleted file mode 100644 index ab64cf771f68bbcecc8ac2d5d649545fc357e15b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/string/gh24008.f +++ /dev/null @@ -1,8 +0,0 @@ - SUBROUTINE GREET(NAME, GREETING) - CHARACTER NAME*(*), GREETING*(*) - CHARACTER*(50) MESSAGE - - MESSAGE = 'Hello, ' // NAME // ', ' // GREETING -c$$$ PRINT *, MESSAGE - - END SUBROUTINE GREET diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/tests/test_laguerre.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/tests/test_laguerre.py deleted file mode 100644 index 227ef3c5576dd666e2eb76576eb260d5ba48cb0e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/tests/test_laguerre.py +++ /dev/null @@ -1,537 +0,0 @@ -"""Tests for laguerre module. 
- -""" -from functools import reduce - -import numpy as np -import numpy.polynomial.laguerre as lag -from numpy.polynomial.polynomial import polyval -from numpy.testing import ( - assert_almost_equal, assert_raises, assert_equal, assert_, - ) - -L0 = np.array([1])/1 -L1 = np.array([1, -1])/1 -L2 = np.array([2, -4, 1])/2 -L3 = np.array([6, -18, 9, -1])/6 -L4 = np.array([24, -96, 72, -16, 1])/24 -L5 = np.array([120, -600, 600, -200, 25, -1])/120 -L6 = np.array([720, -4320, 5400, -2400, 450, -36, 1])/720 - -Llist = [L0, L1, L2, L3, L4, L5, L6] - - -def trim(x): - return lag.lagtrim(x, tol=1e-6) - - -class TestConstants: - - def test_lagdomain(self): - assert_equal(lag.lagdomain, [0, 1]) - - def test_lagzero(self): - assert_equal(lag.lagzero, [0]) - - def test_lagone(self): - assert_equal(lag.lagone, [1]) - - def test_lagx(self): - assert_equal(lag.lagx, [1, -1]) - - -class TestArithmetic: - x = np.linspace(-3, 3, 100) - - def test_lagadd(self): - for i in range(5): - for j in range(5): - msg = f"At i={i}, j={j}" - tgt = np.zeros(max(i, j) + 1) - tgt[i] += 1 - tgt[j] += 1 - res = lag.lagadd([0]*i + [1], [0]*j + [1]) - assert_equal(trim(res), trim(tgt), err_msg=msg) - - def test_lagsub(self): - for i in range(5): - for j in range(5): - msg = f"At i={i}, j={j}" - tgt = np.zeros(max(i, j) + 1) - tgt[i] += 1 - tgt[j] -= 1 - res = lag.lagsub([0]*i + [1], [0]*j + [1]) - assert_equal(trim(res), trim(tgt), err_msg=msg) - - def test_lagmulx(self): - assert_equal(lag.lagmulx([0]), [0]) - assert_equal(lag.lagmulx([1]), [1, -1]) - for i in range(1, 5): - ser = [0]*i + [1] - tgt = [0]*(i - 1) + [-i, 2*i + 1, -(i + 1)] - assert_almost_equal(lag.lagmulx(ser), tgt) - - def test_lagmul(self): - # check values of result - for i in range(5): - pol1 = [0]*i + [1] - val1 = lag.lagval(self.x, pol1) - for j in range(5): - msg = f"At i={i}, j={j}" - pol2 = [0]*j + [1] - val2 = lag.lagval(self.x, pol2) - pol3 = lag.lagmul(pol1, pol2) - val3 = lag.lagval(self.x, pol3) - assert_(len(pol3) == i + j + 1, msg) - assert_almost_equal(val3, val1*val2, err_msg=msg) - - def test_lagdiv(self): - for i in range(5): - for j in range(5): - msg = f"At i={i}, j={j}" - ci = [0]*i + [1] - cj = [0]*j + [1] - tgt = lag.lagadd(ci, cj) - quo, rem = lag.lagdiv(tgt, ci) - res = lag.lagadd(lag.lagmul(quo, ci), rem) - assert_almost_equal(trim(res), trim(tgt), err_msg=msg) - - def test_lagpow(self): - for i in range(5): - for j in range(5): - msg = f"At i={i}, j={j}" - c = np.arange(i + 1) - tgt = reduce(lag.lagmul, [c]*j, np.array([1])) - res = lag.lagpow(c, j) - assert_equal(trim(res), trim(tgt), err_msg=msg) - - -class TestEvaluation: - # coefficients of 1 + 2*x + 3*x**2 - c1d = np.array([9., -14., 6.]) - c2d = np.einsum('i,j->ij', c1d, c1d) - c3d = np.einsum('i,j,k->ijk', c1d, c1d, c1d) - - # some random values in [-1, 1) - x = np.random.random((3, 5))*2 - 1 - y = polyval(x, [1., 2., 3.]) - - def test_lagval(self): - #check empty input - assert_equal(lag.lagval([], [1]).size, 0) - - #check normal input) - x = np.linspace(-1, 1) - y = [polyval(x, c) for c in Llist] - for i in range(7): - msg = f"At i={i}" - tgt = y[i] - res = lag.lagval(x, [0]*i + [1]) - assert_almost_equal(res, tgt, err_msg=msg) - - #check that shape is preserved - for i in range(3): - dims = [2]*i - x = np.zeros(dims) - assert_equal(lag.lagval(x, [1]).shape, dims) - assert_equal(lag.lagval(x, [1, 0]).shape, dims) - assert_equal(lag.lagval(x, [1, 0, 0]).shape, dims) - - def test_lagval2d(self): - x1, x2, x3 = self.x - y1, y2, y3 = self.y - - #test exceptions - 
assert_raises(ValueError, lag.lagval2d, x1, x2[:2], self.c2d) - - #test values - tgt = y1*y2 - res = lag.lagval2d(x1, x2, self.c2d) - assert_almost_equal(res, tgt) - - #test shape - z = np.ones((2, 3)) - res = lag.lagval2d(z, z, self.c2d) - assert_(res.shape == (2, 3)) - - def test_lagval3d(self): - x1, x2, x3 = self.x - y1, y2, y3 = self.y - - #test exceptions - assert_raises(ValueError, lag.lagval3d, x1, x2, x3[:2], self.c3d) - - #test values - tgt = y1*y2*y3 - res = lag.lagval3d(x1, x2, x3, self.c3d) - assert_almost_equal(res, tgt) - - #test shape - z = np.ones((2, 3)) - res = lag.lagval3d(z, z, z, self.c3d) - assert_(res.shape == (2, 3)) - - def test_laggrid2d(self): - x1, x2, x3 = self.x - y1, y2, y3 = self.y - - #test values - tgt = np.einsum('i,j->ij', y1, y2) - res = lag.laggrid2d(x1, x2, self.c2d) - assert_almost_equal(res, tgt) - - #test shape - z = np.ones((2, 3)) - res = lag.laggrid2d(z, z, self.c2d) - assert_(res.shape == (2, 3)*2) - - def test_laggrid3d(self): - x1, x2, x3 = self.x - y1, y2, y3 = self.y - - #test values - tgt = np.einsum('i,j,k->ijk', y1, y2, y3) - res = lag.laggrid3d(x1, x2, x3, self.c3d) - assert_almost_equal(res, tgt) - - #test shape - z = np.ones((2, 3)) - res = lag.laggrid3d(z, z, z, self.c3d) - assert_(res.shape == (2, 3)*3) - - -class TestIntegral: - - def test_lagint(self): - # check exceptions - assert_raises(TypeError, lag.lagint, [0], .5) - assert_raises(ValueError, lag.lagint, [0], -1) - assert_raises(ValueError, lag.lagint, [0], 1, [0, 0]) - assert_raises(ValueError, lag.lagint, [0], lbnd=[0]) - assert_raises(ValueError, lag.lagint, [0], scl=[0]) - assert_raises(TypeError, lag.lagint, [0], axis=.5) - - # test integration of zero polynomial - for i in range(2, 5): - k = [0]*(i - 2) + [1] - res = lag.lagint([0], m=i, k=k) - assert_almost_equal(res, [1, -1]) - - # check single integration with integration constant - for i in range(5): - scl = i + 1 - pol = [0]*i + [1] - tgt = [i] + [0]*i + [1/scl] - lagpol = lag.poly2lag(pol) - lagint = lag.lagint(lagpol, m=1, k=[i]) - res = lag.lag2poly(lagint) - assert_almost_equal(trim(res), trim(tgt)) - - # check single integration with integration constant and lbnd - for i in range(5): - scl = i + 1 - pol = [0]*i + [1] - lagpol = lag.poly2lag(pol) - lagint = lag.lagint(lagpol, m=1, k=[i], lbnd=-1) - assert_almost_equal(lag.lagval(-1, lagint), i) - - # check single integration with integration constant and scaling - for i in range(5): - scl = i + 1 - pol = [0]*i + [1] - tgt = [i] + [0]*i + [2/scl] - lagpol = lag.poly2lag(pol) - lagint = lag.lagint(lagpol, m=1, k=[i], scl=2) - res = lag.lag2poly(lagint) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with default k - for i in range(5): - for j in range(2, 5): - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j): - tgt = lag.lagint(tgt, m=1) - res = lag.lagint(pol, m=j) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with defined k - for i in range(5): - for j in range(2, 5): - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j): - tgt = lag.lagint(tgt, m=1, k=[k]) - res = lag.lagint(pol, m=j, k=list(range(j))) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with lbnd - for i in range(5): - for j in range(2, 5): - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j): - tgt = lag.lagint(tgt, m=1, k=[k], lbnd=-1) - res = lag.lagint(pol, m=j, k=list(range(j)), lbnd=-1) - assert_almost_equal(trim(res), trim(tgt)) - - # check multiple integrations with scaling - for i in 
range(5): - for j in range(2, 5): - pol = [0]*i + [1] - tgt = pol[:] - for k in range(j): - tgt = lag.lagint(tgt, m=1, k=[k], scl=2) - res = lag.lagint(pol, m=j, k=list(range(j)), scl=2) - assert_almost_equal(trim(res), trim(tgt)) - - def test_lagint_axis(self): - # check that axis keyword works - c2d = np.random.random((3, 4)) - - tgt = np.vstack([lag.lagint(c) for c in c2d.T]).T - res = lag.lagint(c2d, axis=0) - assert_almost_equal(res, tgt) - - tgt = np.vstack([lag.lagint(c) for c in c2d]) - res = lag.lagint(c2d, axis=1) - assert_almost_equal(res, tgt) - - tgt = np.vstack([lag.lagint(c, k=3) for c in c2d]) - res = lag.lagint(c2d, k=3, axis=1) - assert_almost_equal(res, tgt) - - -class TestDerivative: - - def test_lagder(self): - # check exceptions - assert_raises(TypeError, lag.lagder, [0], .5) - assert_raises(ValueError, lag.lagder, [0], -1) - - # check that zeroth derivative does nothing - for i in range(5): - tgt = [0]*i + [1] - res = lag.lagder(tgt, m=0) - assert_equal(trim(res), trim(tgt)) - - # check that derivation is the inverse of integration - for i in range(5): - for j in range(2, 5): - tgt = [0]*i + [1] - res = lag.lagder(lag.lagint(tgt, m=j), m=j) - assert_almost_equal(trim(res), trim(tgt)) - - # check derivation with scaling - for i in range(5): - for j in range(2, 5): - tgt = [0]*i + [1] - res = lag.lagder(lag.lagint(tgt, m=j, scl=2), m=j, scl=.5) - assert_almost_equal(trim(res), trim(tgt)) - - def test_lagder_axis(self): - # check that axis keyword works - c2d = np.random.random((3, 4)) - - tgt = np.vstack([lag.lagder(c) for c in c2d.T]).T - res = lag.lagder(c2d, axis=0) - assert_almost_equal(res, tgt) - - tgt = np.vstack([lag.lagder(c) for c in c2d]) - res = lag.lagder(c2d, axis=1) - assert_almost_equal(res, tgt) - - -class TestVander: - # some random values in [-1, 1) - x = np.random.random((3, 5))*2 - 1 - - def test_lagvander(self): - # check for 1d x - x = np.arange(3) - v = lag.lagvander(x, 3) - assert_(v.shape == (3, 4)) - for i in range(4): - coef = [0]*i + [1] - assert_almost_equal(v[..., i], lag.lagval(x, coef)) - - # check for 2d x - x = np.array([[1, 2], [3, 4], [5, 6]]) - v = lag.lagvander(x, 3) - assert_(v.shape == (3, 2, 4)) - for i in range(4): - coef = [0]*i + [1] - assert_almost_equal(v[..., i], lag.lagval(x, coef)) - - def test_lagvander2d(self): - # also tests lagval2d for non-square coefficient array - x1, x2, x3 = self.x - c = np.random.random((2, 3)) - van = lag.lagvander2d(x1, x2, [1, 2]) - tgt = lag.lagval2d(x1, x2, c) - res = np.dot(van, c.flat) - assert_almost_equal(res, tgt) - - # check shape - van = lag.lagvander2d([x1], [x2], [1, 2]) - assert_(van.shape == (1, 5, 6)) - - def test_lagvander3d(self): - # also tests lagval3d for non-square coefficient array - x1, x2, x3 = self.x - c = np.random.random((2, 3, 4)) - van = lag.lagvander3d(x1, x2, x3, [1, 2, 3]) - tgt = lag.lagval3d(x1, x2, x3, c) - res = np.dot(van, c.flat) - assert_almost_equal(res, tgt) - - # check shape - van = lag.lagvander3d([x1], [x2], [x3], [1, 2, 3]) - assert_(van.shape == (1, 5, 24)) - - -class TestFitting: - - def test_lagfit(self): - def f(x): - return x*(x - 1)*(x - 2) - - # Test exceptions - assert_raises(ValueError, lag.lagfit, [1], [1], -1) - assert_raises(TypeError, lag.lagfit, [[1]], [1], 0) - assert_raises(TypeError, lag.lagfit, [], [1], 0) - assert_raises(TypeError, lag.lagfit, [1], [[[1]]], 0) - assert_raises(TypeError, lag.lagfit, [1, 2], [1], 0) - assert_raises(TypeError, lag.lagfit, [1], [1, 2], 0) - assert_raises(TypeError, lag.lagfit, [1], [1], 0, w=[[1]]) 
- assert_raises(TypeError, lag.lagfit, [1], [1], 0, w=[1, 1]) - assert_raises(ValueError, lag.lagfit, [1], [1], [-1,]) - assert_raises(ValueError, lag.lagfit, [1], [1], [2, -1, 6]) - assert_raises(TypeError, lag.lagfit, [1], [1], []) - - # Test fit - x = np.linspace(0, 2) - y = f(x) - # - coef3 = lag.lagfit(x, y, 3) - assert_equal(len(coef3), 4) - assert_almost_equal(lag.lagval(x, coef3), y) - coef3 = lag.lagfit(x, y, [0, 1, 2, 3]) - assert_equal(len(coef3), 4) - assert_almost_equal(lag.lagval(x, coef3), y) - # - coef4 = lag.lagfit(x, y, 4) - assert_equal(len(coef4), 5) - assert_almost_equal(lag.lagval(x, coef4), y) - coef4 = lag.lagfit(x, y, [0, 1, 2, 3, 4]) - assert_equal(len(coef4), 5) - assert_almost_equal(lag.lagval(x, coef4), y) - # - coef2d = lag.lagfit(x, np.array([y, y]).T, 3) - assert_almost_equal(coef2d, np.array([coef3, coef3]).T) - coef2d = lag.lagfit(x, np.array([y, y]).T, [0, 1, 2, 3]) - assert_almost_equal(coef2d, np.array([coef3, coef3]).T) - # test weighting - w = np.zeros_like(x) - yw = y.copy() - w[1::2] = 1 - y[0::2] = 0 - wcoef3 = lag.lagfit(x, yw, 3, w=w) - assert_almost_equal(wcoef3, coef3) - wcoef3 = lag.lagfit(x, yw, [0, 1, 2, 3], w=w) - assert_almost_equal(wcoef3, coef3) - # - wcoef2d = lag.lagfit(x, np.array([yw, yw]).T, 3, w=w) - assert_almost_equal(wcoef2d, np.array([coef3, coef3]).T) - wcoef2d = lag.lagfit(x, np.array([yw, yw]).T, [0, 1, 2, 3], w=w) - assert_almost_equal(wcoef2d, np.array([coef3, coef3]).T) - # test scaling with complex values x points whose square - # is zero when summed. - x = [1, 1j, -1, -1j] - assert_almost_equal(lag.lagfit(x, x, 1), [1, -1]) - assert_almost_equal(lag.lagfit(x, x, [0, 1]), [1, -1]) - - -class TestCompanion: - - def test_raises(self): - assert_raises(ValueError, lag.lagcompanion, []) - assert_raises(ValueError, lag.lagcompanion, [1]) - - def test_dimensions(self): - for i in range(1, 5): - coef = [0]*i + [1] - assert_(lag.lagcompanion(coef).shape == (i, i)) - - def test_linear_root(self): - assert_(lag.lagcompanion([1, 2])[0, 0] == 1.5) - - -class TestGauss: - - def test_100(self): - x, w = lag.laggauss(100) - - # test orthogonality. Note that the results need to be normalized, - # otherwise the huge values that can arise from fast growing - # functions like Laguerre can be very confusing. 
- v = lag.lagvander(x, 99) - vv = np.dot(v.T * w, v) - vd = 1/np.sqrt(vv.diagonal()) - vv = vd[:, None] * vv * vd - assert_almost_equal(vv, np.eye(100)) - - # check that the integral of 1 is correct - tgt = 1.0 - assert_almost_equal(w.sum(), tgt) - - -class TestMisc: - - def test_lagfromroots(self): - res = lag.lagfromroots([]) - assert_almost_equal(trim(res), [1]) - for i in range(1, 5): - roots = np.cos(np.linspace(-np.pi, 0, 2*i + 1)[1::2]) - pol = lag.lagfromroots(roots) - res = lag.lagval(roots, pol) - tgt = 0 - assert_(len(pol) == i + 1) - assert_almost_equal(lag.lag2poly(pol)[-1], 1) - assert_almost_equal(res, tgt) - - def test_lagroots(self): - assert_almost_equal(lag.lagroots([1]), []) - assert_almost_equal(lag.lagroots([0, 1]), [1]) - for i in range(2, 5): - tgt = np.linspace(0, 3, i) - res = lag.lagroots(lag.lagfromroots(tgt)) - assert_almost_equal(trim(res), trim(tgt)) - - def test_lagtrim(self): - coef = [2, -1, 1, 0] - - # Test exceptions - assert_raises(ValueError, lag.lagtrim, coef, -1) - - # Test results - assert_equal(lag.lagtrim(coef), coef[:-1]) - assert_equal(lag.lagtrim(coef, 1), coef[:-3]) - assert_equal(lag.lagtrim(coef, 2), [0]) - - def test_lagline(self): - assert_equal(lag.lagline(3, 4), [7, -4]) - - def test_lag2poly(self): - for i in range(7): - assert_almost_equal(lag.lag2poly([0]*i + [1]), Llist[i]) - - def test_poly2lag(self): - for i in range(7): - assert_almost_equal(lag.poly2lag(Llist[i]), [0]*i + [1]) - - def test_weight(self): - x = np.linspace(0, 10, 11) - tgt = np.exp(-x) - res = lag.lagweight(x) - assert_almost_equal(res, tgt) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_alter_axes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_alter_axes.py deleted file mode 100644 index c68171ab254c7c8582a206a8e9b44b3845c47efc..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/test_alter_axes.py +++ /dev/null @@ -1,30 +0,0 @@ -from datetime import datetime - -import pytz - -from pandas import DataFrame -import pandas._testing as tm - - -class TestDataFrameAlterAxes: - # Tests for setting index/columns attributes directly (i.e. 
__setattr__) - - def test_set_axis_setattr_index(self): - # GH 6785 - # set the index manually - - df = DataFrame([{"ts": datetime(2014, 4, 1, tzinfo=pytz.utc), "foo": 1}]) - expected = df.set_index("ts") - df.index = df["ts"] - df.pop("ts") - tm.assert_frame_equal(df, expected) - - # Renaming - - def test_assign_columns(self, float_frame): - float_frame["hi"] = "there" - - df = float_frame.copy() - df.columns = ["foo", "bar", "baz", "quux", "foo2"] - tm.assert_series_equal(float_frame["C"], df["baz"], check_names=False) - tm.assert_series_equal(float_frame["hi"], df["foo2"], check_names=False) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/test_old_base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/test_old_base.py deleted file mode 100644 index 79dc423f12a85b93a5f91df6fe5d8269800b06fa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/test_old_base.py +++ /dev/null @@ -1,1025 +0,0 @@ -from __future__ import annotations - -from datetime import datetime -import gc - -import numpy as np -import pytest - -from pandas._libs.tslibs import Timestamp - -from pandas.core.dtypes.common import ( - is_integer_dtype, - is_numeric_dtype, -) -from pandas.core.dtypes.dtypes import CategoricalDtype - -import pandas as pd -from pandas import ( - CategoricalIndex, - DatetimeIndex, - DatetimeTZDtype, - Index, - IntervalIndex, - MultiIndex, - PeriodIndex, - RangeIndex, - Series, - TimedeltaIndex, - isna, - period_range, -) -import pandas._testing as tm -from pandas.core.arrays import BaseMaskedArray - - -class TestBase: - @pytest.fixture( - params=[ - RangeIndex(start=0, stop=20, step=2), - Index(np.arange(5, dtype=np.float64)), - Index(np.arange(5, dtype=np.float32)), - Index(np.arange(5, dtype=np.uint64)), - Index(range(0, 20, 2), dtype=np.int64), - Index(range(0, 20, 2), dtype=np.int32), - Index(range(0, 20, 2), dtype=np.int16), - Index(range(0, 20, 2), dtype=np.int8), - Index(list("abcde")), - Index([0, "a", 1, "b", 2, "c"]), - period_range("20130101", periods=5, freq="D"), - TimedeltaIndex( - [ - "0 days 01:00:00", - "1 days 01:00:00", - "2 days 01:00:00", - "3 days 01:00:00", - "4 days 01:00:00", - ], - dtype="timedelta64[ns]", - freq="D", - ), - DatetimeIndex( - ["2013-01-01", "2013-01-02", "2013-01-03", "2013-01-04", "2013-01-05"], - dtype="datetime64[ns]", - freq="D", - ), - IntervalIndex.from_breaks(range(11), closed="right"), - ] - ) - def simple_index(self, request): - return request.param - - def test_pickle_compat_construction(self, simple_index): - # need an object to create with - if isinstance(simple_index, RangeIndex): - pytest.skip("RangeIndex() is a valid constructor") - msg = "|".join( - [ - r"Index\(\.\.\.\) must be called with a collection of some " - r"kind, None was passed", - r"DatetimeIndex\(\) must be called with a collection of some " - r"kind, None was passed", - r"TimedeltaIndex\(\) must be called with a collection of some " - r"kind, None was passed", - r"__new__\(\) missing 1 required positional argument: 'data'", - r"__new__\(\) takes at least 2 arguments \(1 given\)", - ] - ) - with pytest.raises(TypeError, match=msg): - type(simple_index)() - - def test_shift(self, simple_index): - # GH8083 test the base class for shift - if isinstance(simple_index, (DatetimeIndex, TimedeltaIndex, PeriodIndex)): - pytest.skip("Tested in test_ops/test_arithmetic") - idx = simple_index - msg = ( - f"This method is only 
implemented for DatetimeIndex, PeriodIndex and " - f"TimedeltaIndex; Got type {type(idx).__name__}" - ) - with pytest.raises(NotImplementedError, match=msg): - idx.shift(1) - with pytest.raises(NotImplementedError, match=msg): - idx.shift(1, 2) - - def test_constructor_name_unhashable(self, simple_index): - # GH#29069 check that name is hashable - # See also same-named test in tests.series.test_constructors - idx = simple_index - with pytest.raises(TypeError, match="Index.name must be a hashable type"): - type(idx)(idx, name=[]) - - def test_create_index_existing_name(self, simple_index): - # GH11193, when an existing index is passed, and a new name is not - # specified, the new index should inherit the previous object name - expected = simple_index.copy() - if not isinstance(expected, MultiIndex): - expected.name = "foo" - result = Index(expected) - tm.assert_index_equal(result, expected) - - result = Index(expected, name="bar") - expected.name = "bar" - tm.assert_index_equal(result, expected) - else: - expected.names = ["foo", "bar"] - result = Index(expected) - tm.assert_index_equal( - result, - Index( - Index( - [ - ("foo", "one"), - ("foo", "two"), - ("bar", "one"), - ("baz", "two"), - ("qux", "one"), - ("qux", "two"), - ], - dtype="object", - ), - names=["foo", "bar"], - ), - ) - - result = Index(expected, names=["A", "B"]) - tm.assert_index_equal( - result, - Index( - Index( - [ - ("foo", "one"), - ("foo", "two"), - ("bar", "one"), - ("baz", "two"), - ("qux", "one"), - ("qux", "two"), - ], - dtype="object", - ), - names=["A", "B"], - ), - ) - - def test_numeric_compat(self, simple_index): - idx = simple_index - # Check that this doesn't cover MultiIndex case, if/when it does, - # we can remove multi.test_compat.test_numeric_compat - assert not isinstance(idx, MultiIndex) - if type(idx) is Index: - pytest.skip("Not applicable for Index") - if is_numeric_dtype(simple_index.dtype) or isinstance( - simple_index, TimedeltaIndex - ): - pytest.skip("Tested elsewhere.") - - typ = type(idx._data).__name__ - cls = type(idx).__name__ - lmsg = "|".join( - [ - rf"unsupported operand type\(s\) for \*: '{typ}' and 'int'", - "cannot perform (__mul__|__truediv__|__floordiv__) with " - f"this index type: ({cls}|{typ})", - ] - ) - with pytest.raises(TypeError, match=lmsg): - idx * 1 - rmsg = "|".join( - [ - rf"unsupported operand type\(s\) for \*: 'int' and '{typ}'", - "cannot perform (__rmul__|__rtruediv__|__rfloordiv__) with " - f"this index type: ({cls}|{typ})", - ] - ) - with pytest.raises(TypeError, match=rmsg): - 1 * idx - - div_err = lmsg.replace("*", "/") - with pytest.raises(TypeError, match=div_err): - idx / 1 - div_err = rmsg.replace("*", "/") - with pytest.raises(TypeError, match=div_err): - 1 / idx - - floordiv_err = lmsg.replace("*", "//") - with pytest.raises(TypeError, match=floordiv_err): - idx // 1 - floordiv_err = rmsg.replace("*", "//") - with pytest.raises(TypeError, match=floordiv_err): - 1 // idx - - def test_logical_compat(self, simple_index): - if simple_index.dtype == object: - pytest.skip("Tested elsewhere.") - idx = simple_index - if idx.dtype.kind in "iufcbm": - assert idx.all() == idx._values.all() - assert idx.all() == idx.to_series().all() - assert idx.any() == idx._values.any() - assert idx.any() == idx.to_series().any() - else: - msg = "cannot perform (any|all)" - if isinstance(idx, IntervalIndex): - msg = ( - r"'IntervalArray' with dtype interval\[.*\] does " - "not support reduction '(any|all)'" - ) - with pytest.raises(TypeError, match=msg): - idx.all() - with 
pytest.raises(TypeError, match=msg): - idx.any() - - def test_repr_roundtrip(self, simple_index): - if isinstance(simple_index, IntervalIndex): - pytest.skip(f"Not a valid repr for {type(simple_index).__name__}") - idx = simple_index - tm.assert_index_equal(eval(repr(idx)), idx) - - def test_repr_max_seq_item_setting(self, simple_index): - # GH10182 - if isinstance(simple_index, IntervalIndex): - pytest.skip(f"Not a valid repr for {type(simple_index).__name__}") - idx = simple_index - idx = idx.repeat(50) - with pd.option_context("display.max_seq_items", None): - repr(idx) - assert "..." not in str(idx) - - @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") - def test_ensure_copied_data(self, index): - # Check the "copy" argument of each Index.__new__ is honoured - # GH12309 - init_kwargs = {} - if isinstance(index, PeriodIndex): - # Needs "freq" specification: - init_kwargs["freq"] = index.freq - elif isinstance(index, (RangeIndex, MultiIndex, CategoricalIndex)): - pytest.skip( - "RangeIndex cannot be initialized from data, " - "MultiIndex and CategoricalIndex are tested separately" - ) - elif index.dtype == object and index.inferred_type == "boolean": - init_kwargs["dtype"] = index.dtype - - index_type = type(index) - result = index_type(index.values, copy=True, **init_kwargs) - if isinstance(index.dtype, DatetimeTZDtype): - result = result.tz_localize("UTC").tz_convert(index.tz) - if isinstance(index, (DatetimeIndex, TimedeltaIndex)): - index = index._with_freq(None) - - tm.assert_index_equal(index, result) - - if isinstance(index, PeriodIndex): - # .values an object array of Period, thus copied - result = index_type(ordinal=index.asi8, copy=False, **init_kwargs) - tm.assert_numpy_array_equal(index.asi8, result.asi8, check_same="same") - elif isinstance(index, IntervalIndex): - # checked in test_interval.py - pass - elif type(index) is Index and not isinstance(index.dtype, np.dtype): - result = index_type(index.values, copy=False, **init_kwargs) - tm.assert_index_equal(result, index) - - if isinstance(index._values, BaseMaskedArray): - assert np.shares_memory(index._values._data, result._values._data) - tm.assert_numpy_array_equal( - index._values._data, result._values._data, check_same="same" - ) - assert np.shares_memory(index._values._mask, result._values._mask) - tm.assert_numpy_array_equal( - index._values._mask, result._values._mask, check_same="same" - ) - elif index.dtype == "string[python]": - assert np.shares_memory(index._values._ndarray, result._values._ndarray) - tm.assert_numpy_array_equal( - index._values._ndarray, result._values._ndarray, check_same="same" - ) - elif index.dtype == "string[pyarrow]": - assert tm.shares_memory(result._values, index._values) - else: - raise NotImplementedError(index.dtype) - else: - result = index_type(index.values, copy=False, **init_kwargs) - tm.assert_numpy_array_equal(index.values, result.values, check_same="same") - - def test_memory_usage(self, index): - index._engine.clear_mapping() - result = index.memory_usage() - if index.empty: - # we report 0 for no-length - assert result == 0 - return - - # non-zero length - index.get_loc(index[0]) - result2 = index.memory_usage() - result3 = index.memory_usage(deep=True) - - # RangeIndex, IntervalIndex - # don't have engines - # Index[EA] has engine but it does not have a Hashtable .mapping - if not isinstance(index, (RangeIndex, IntervalIndex)) and not ( - type(index) is Index and not isinstance(index.dtype, np.dtype) - ): - assert result2 > result - - if 
index.inferred_type == "object": - assert result3 > result2 - - def test_argsort(self, index): - if isinstance(index, CategoricalIndex): - pytest.skip(f"{type(self).__name__} separately tested") - - result = index.argsort() - expected = np.array(index).argsort() - tm.assert_numpy_array_equal(result, expected, check_dtype=False) - - def test_numpy_argsort(self, index): - result = np.argsort(index) - expected = index.argsort() - tm.assert_numpy_array_equal(result, expected) - - result = np.argsort(index, kind="mergesort") - expected = index.argsort(kind="mergesort") - tm.assert_numpy_array_equal(result, expected) - - # these are the only two types that perform - # pandas compatibility input validation - the - # rest already perform separate (or no) such - # validation via their 'values' attribute as - # defined in pandas.core.indexes/base.py - they - # cannot be changed at the moment due to - # backwards compatibility concerns - if isinstance(index, (CategoricalIndex, RangeIndex)): - msg = "the 'axis' parameter is not supported" - with pytest.raises(ValueError, match=msg): - np.argsort(index, axis=1) - - msg = "the 'order' parameter is not supported" - with pytest.raises(ValueError, match=msg): - np.argsort(index, order=("a", "b")) - - def test_repeat(self, simple_index): - rep = 2 - idx = simple_index.copy() - new_index_cls = idx._constructor - expected = new_index_cls(idx.values.repeat(rep), name=idx.name) - tm.assert_index_equal(idx.repeat(rep), expected) - - idx = simple_index - rep = np.arange(len(idx)) - expected = new_index_cls(idx.values.repeat(rep), name=idx.name) - tm.assert_index_equal(idx.repeat(rep), expected) - - def test_numpy_repeat(self, simple_index): - rep = 2 - idx = simple_index - expected = idx.repeat(rep) - tm.assert_index_equal(np.repeat(idx, rep), expected) - - msg = "the 'axis' parameter is not supported" - with pytest.raises(ValueError, match=msg): - np.repeat(idx, rep, axis=0) - - def test_where(self, listlike_box, simple_index): - if isinstance(simple_index, (IntervalIndex, PeriodIndex)) or is_numeric_dtype( - simple_index.dtype - ): - pytest.skip("Tested elsewhere.") - klass = listlike_box - - idx = simple_index - if isinstance(idx, (DatetimeIndex, TimedeltaIndex)): - # where does not preserve freq - idx = idx._with_freq(None) - - cond = [True] * len(idx) - result = idx.where(klass(cond)) - expected = idx - tm.assert_index_equal(result, expected) - - cond = [False] + [True] * len(idx[1:]) - expected = Index([idx._na_value] + idx[1:].tolist(), dtype=idx.dtype) - result = idx.where(klass(cond)) - tm.assert_index_equal(result, expected) - - def test_insert_base(self, index): - result = index[1:4] - - if not len(index): - pytest.skip("Not applicable for empty index") - - # test 0th element - assert index[0:4].equals(result.insert(0, index[0])) - - def test_insert_out_of_bounds(self, index): - # TypeError/IndexError matches what np.insert raises in these cases - - if len(index) > 0: - err = TypeError - else: - err = IndexError - if len(index) == 0: - # 0 vs 0.5 in error message varies with numpy version - msg = "index (0|0.5) is out of bounds for axis 0 with size 0" - else: - msg = "slice indices must be integers or None or have an __index__ method" - with pytest.raises(err, match=msg): - index.insert(0.5, "foo") - - msg = "|".join( - [ - r"index -?\d+ is out of bounds for axis 0 with size \d+", - "loc must be an integer between", - ] - ) - with pytest.raises(IndexError, match=msg): - index.insert(len(index) + 1, 1) - - with pytest.raises(IndexError, match=msg): - 
index.insert(-len(index) - 1, 1) - - def test_delete_base(self, index): - if not len(index): - pytest.skip("Not applicable for empty index") - - if isinstance(index, RangeIndex): - # tested in class - pytest.skip(f"{type(self).__name__} tested elsewhere") - - expected = index[1:] - result = index.delete(0) - assert result.equals(expected) - assert result.name == expected.name - - expected = index[:-1] - result = index.delete(-1) - assert result.equals(expected) - assert result.name == expected.name - - length = len(index) - msg = f"index {length} is out of bounds for axis 0 with size {length}" - with pytest.raises(IndexError, match=msg): - index.delete(length) - - @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") - def test_equals(self, index): - if isinstance(index, IntervalIndex): - pytest.skip(f"{type(index).__name__} tested elsewhere") - - is_ea_idx = type(index) is Index and not isinstance(index.dtype, np.dtype) - - assert index.equals(index) - assert index.equals(index.copy()) - if not is_ea_idx: - # doesn't hold for e.g. IntegerDtype - assert index.equals(index.astype(object)) - - assert not index.equals(list(index)) - assert not index.equals(np.array(index)) - - # Cannot pass in non-int64 dtype to RangeIndex - if not isinstance(index, RangeIndex) and not is_ea_idx: - same_values = Index(index, dtype=object) - assert index.equals(same_values) - assert same_values.equals(index) - - if index.nlevels == 1: - # do not test MultiIndex - assert not index.equals(Series(index)) - - def test_equals_op(self, simple_index): - # GH9947, GH10637 - index_a = simple_index - - n = len(index_a) - index_b = index_a[0:-1] - index_c = index_a[0:-1].append(index_a[-2:-1]) - index_d = index_a[0:1] - - msg = "Lengths must match|could not be broadcast" - with pytest.raises(ValueError, match=msg): - index_a == index_b - expected1 = np.array([True] * n) - expected2 = np.array([True] * (n - 1) + [False]) - tm.assert_numpy_array_equal(index_a == index_a, expected1) - tm.assert_numpy_array_equal(index_a == index_c, expected2) - - # test comparisons with numpy arrays - array_a = np.array(index_a) - array_b = np.array(index_a[0:-1]) - array_c = np.array(index_a[0:-1].append(index_a[-2:-1])) - array_d = np.array(index_a[0:1]) - with pytest.raises(ValueError, match=msg): - index_a == array_b - tm.assert_numpy_array_equal(index_a == array_a, expected1) - tm.assert_numpy_array_equal(index_a == array_c, expected2) - - # test comparisons with Series - series_a = Series(array_a) - series_b = Series(array_b) - series_c = Series(array_c) - series_d = Series(array_d) - with pytest.raises(ValueError, match=msg): - index_a == series_b - - tm.assert_numpy_array_equal(index_a == series_a, expected1) - tm.assert_numpy_array_equal(index_a == series_c, expected2) - - # cases where length is 1 for one of them - with pytest.raises(ValueError, match="Lengths must match"): - index_a == index_d - with pytest.raises(ValueError, match="Lengths must match"): - index_a == series_d - with pytest.raises(ValueError, match="Lengths must match"): - index_a == array_d - msg = "Can only compare identically-labeled Series objects" - with pytest.raises(ValueError, match=msg): - series_a == series_d - with pytest.raises(ValueError, match="Lengths must match"): - series_a == array_d - - # comparing with a scalar should broadcast; note that we are excluding - # MultiIndex because in this case each item in the index is a tuple of - # length 2, and therefore is considered an array of length 2 in the - # comparison 
instead of a scalar - if not isinstance(index_a, MultiIndex): - expected3 = np.array([False] * (len(index_a) - 2) + [True, False]) - # assuming the 2nd to last item is unique in the data - item = index_a[-2] - tm.assert_numpy_array_equal(index_a == item, expected3) - tm.assert_series_equal(series_a == item, Series(expected3)) - - def test_format(self, simple_index): - # GH35439 - if is_numeric_dtype(simple_index.dtype) or isinstance( - simple_index, DatetimeIndex - ): - pytest.skip("Tested elsewhere.") - idx = simple_index - expected = [str(x) for x in idx] - assert idx.format() == expected - - def test_format_empty(self, simple_index): - # GH35712 - if isinstance(simple_index, (PeriodIndex, RangeIndex)): - pytest.skip("Tested elsewhere") - empty_idx = type(simple_index)([]) - assert empty_idx.format() == [] - assert empty_idx.format(name=True) == [""] - - def test_fillna(self, index): - # GH 11343 - if len(index) == 0: - pytest.skip("Not relevant for empty index") - elif index.dtype == bool: - pytest.skip(f"{index.dtype} cannot hold NAs") - elif isinstance(index, Index) and is_integer_dtype(index.dtype): - pytest.skip(f"Not relevant for Index with {index.dtype}") - elif isinstance(index, MultiIndex): - idx = index.copy(deep=True) - msg = "isna is not defined for MultiIndex" - with pytest.raises(NotImplementedError, match=msg): - idx.fillna(idx[0]) - else: - idx = index.copy(deep=True) - result = idx.fillna(idx[0]) - tm.assert_index_equal(result, idx) - assert result is not idx - - msg = "'value' must be a scalar, passed: " - with pytest.raises(TypeError, match=msg): - idx.fillna([idx[0]]) - - idx = index.copy(deep=True) - values = idx._values - - values[1] = np.nan - - idx = type(index)(values) - - msg = "does not support 'downcast'" - msg2 = r"The 'downcast' keyword in .*Index\.fillna is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg2): - with pytest.raises(NotImplementedError, match=msg): - # For now at least, we only raise if there are NAs present - idx.fillna(idx[0], downcast="infer") - - expected = np.array([False] * len(idx), dtype=bool) - expected[1] = True - tm.assert_numpy_array_equal(idx._isnan, expected) - assert idx.hasnans is True - - def test_nulls(self, index): - # this is really a smoke test for the methods - # as these are adequately tested for function elsewhere - if len(index) == 0: - tm.assert_numpy_array_equal(index.isna(), np.array([], dtype=bool)) - elif isinstance(index, MultiIndex): - idx = index.copy() - msg = "isna is not defined for MultiIndex" - with pytest.raises(NotImplementedError, match=msg): - idx.isna() - elif not index.hasnans: - tm.assert_numpy_array_equal(index.isna(), np.zeros(len(index), dtype=bool)) - tm.assert_numpy_array_equal(index.notna(), np.ones(len(index), dtype=bool)) - else: - result = isna(index) - tm.assert_numpy_array_equal(index.isna(), result) - tm.assert_numpy_array_equal(index.notna(), ~result) - - def test_empty(self, simple_index): - # GH 15270 - idx = simple_index - assert not idx.empty - assert idx[:0].empty - - def test_join_self_unique(self, join_type, simple_index): - idx = simple_index - if idx.is_unique: - joined = idx.join(idx, how=join_type) - assert (idx == joined).all() - - def test_map(self, simple_index): - # callable - if isinstance(simple_index, (TimedeltaIndex, PeriodIndex)): - pytest.skip("Tested elsewhere.") - idx = simple_index - - result = idx.map(lambda x: x) - # RangeIndex are equivalent to the similar Index with int64 dtype - tm.assert_index_equal(result, idx, exact="equiv") - - 
@pytest.mark.parametrize( - "mapper", - [ - lambda values, index: {i: e for e, i in zip(values, index)}, - lambda values, index: Series(values, index), - ], - ) - @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") - def test_map_dictlike(self, mapper, simple_index, request): - idx = simple_index - if isinstance(idx, (DatetimeIndex, TimedeltaIndex, PeriodIndex)): - pytest.skip("Tested elsewhere.") - - identity = mapper(idx.values, idx) - - result = idx.map(identity) - # RangeIndex are equivalent to the similar Index with int64 dtype - tm.assert_index_equal(result, idx, exact="equiv") - - # empty mappable - dtype = None - if idx.dtype.kind == "f": - dtype = idx.dtype - - expected = Index([np.nan] * len(idx), dtype=dtype) - result = idx.map(mapper(expected, idx)) - tm.assert_index_equal(result, expected) - - def test_map_str(self, simple_index): - # GH 31202 - if isinstance(simple_index, CategoricalIndex): - pytest.skip("See test_map.py") - idx = simple_index - result = idx.map(str) - expected = Index([str(x) for x in idx], dtype=object) - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize("copy", [True, False]) - @pytest.mark.parametrize("name", [None, "foo"]) - @pytest.mark.parametrize("ordered", [True, False]) - def test_astype_category(self, copy, name, ordered, simple_index): - # GH 18630 - idx = simple_index - if name: - idx = idx.rename(name) - - # standard categories - dtype = CategoricalDtype(ordered=ordered) - result = idx.astype(dtype, copy=copy) - expected = CategoricalIndex(idx, name=name, ordered=ordered) - tm.assert_index_equal(result, expected, exact=True) - - # non-standard categories - dtype = CategoricalDtype(idx.unique().tolist()[:-1], ordered) - result = idx.astype(dtype, copy=copy) - expected = CategoricalIndex(idx, name=name, dtype=dtype) - tm.assert_index_equal(result, expected, exact=True) - - if ordered is False: - # dtype='category' defaults to ordered=False, so only test once - result = idx.astype("category", copy=copy) - expected = CategoricalIndex(idx, name=name) - tm.assert_index_equal(result, expected, exact=True) - - def test_is_unique(self, simple_index): - # initialize a unique index - index = simple_index.drop_duplicates() - assert index.is_unique is True - - # empty index should be unique - index_empty = index[:0] - assert index_empty.is_unique is True - - # test basic dupes - index_dup = index.insert(0, index[0]) - assert index_dup.is_unique is False - - # single NA should be unique - index_na = index.insert(0, np.nan) - assert index_na.is_unique is True - - # multiple NA should not be unique - index_na_dup = index_na.insert(0, np.nan) - assert index_na_dup.is_unique is False - - @pytest.mark.arm_slow - def test_engine_reference_cycle(self, simple_index): - # GH27585 - index = simple_index - nrefs_pre = len(gc.get_referrers(index)) - index._engine - assert len(gc.get_referrers(index)) == nrefs_pre - - def test_getitem_2d_deprecated(self, simple_index): - # GH#30588, GH#31479 - if isinstance(simple_index, IntervalIndex): - pytest.skip("Tested elsewhere") - idx = simple_index - msg = "Multi-dimensional indexing" - with pytest.raises(ValueError, match=msg): - idx[:, None] - - if not isinstance(idx, RangeIndex): - # GH#44051 RangeIndex already raised pre-2.0 with a different message - with pytest.raises(ValueError, match=msg): - idx[True] - with pytest.raises(ValueError, match=msg): - idx[False] - else: - msg = "only integers, slices" - with pytest.raises(IndexError, match=msg): - idx[True] - with 
pytest.raises(IndexError, match=msg): - idx[False] - - def test_copy_shares_cache(self, simple_index): - # GH32898, GH36840 - idx = simple_index - idx.get_loc(idx[0]) # populates the _cache. - copy = idx.copy() - - assert copy._cache is idx._cache - - def test_shallow_copy_shares_cache(self, simple_index): - # GH32669, GH36840 - idx = simple_index - idx.get_loc(idx[0]) # populates the _cache. - shallow_copy = idx._view() - - assert shallow_copy._cache is idx._cache - - shallow_copy = idx._shallow_copy(idx._data) - assert shallow_copy._cache is not idx._cache - assert shallow_copy._cache == {} - - def test_index_groupby(self, simple_index): - idx = simple_index[:5] - to_groupby = np.array([1, 2, np.nan, 2, 1]) - tm.assert_dict_equal( - idx.groupby(to_groupby), {1.0: idx[[0, 4]], 2.0: idx[[1, 3]]} - ) - - to_groupby = DatetimeIndex( - [ - datetime(2011, 11, 1), - datetime(2011, 12, 1), - pd.NaT, - datetime(2011, 12, 1), - datetime(2011, 11, 1), - ], - tz="UTC", - ).values - - ex_keys = [Timestamp("2011-11-01"), Timestamp("2011-12-01")] - expected = {ex_keys[0]: idx[[0, 4]], ex_keys[1]: idx[[1, 3]]} - tm.assert_dict_equal(idx.groupby(to_groupby), expected) - - def test_append_preserves_dtype(self, simple_index): - # In particular Index with dtype float32 - index = simple_index - N = len(index) - - result = index.append(index) - assert result.dtype == index.dtype - tm.assert_index_equal(result[:N], index, check_exact=True) - tm.assert_index_equal(result[N:], index, check_exact=True) - - alt = index.take(list(range(N)) * 2) - tm.assert_index_equal(result, alt, check_exact=True) - - def test_inv(self, simple_index): - idx = simple_index - - if idx.dtype.kind in ["i", "u"]: - res = ~idx - expected = Index(~idx.values, name=idx.name) - tm.assert_index_equal(res, expected) - - # check that we are matching Series behavior - res2 = ~Series(idx) - tm.assert_series_equal(res2, Series(expected)) - else: - if idx.dtype.kind == "f": - msg = "ufunc 'invert' not supported for the input types" - else: - msg = "bad operand" - with pytest.raises(TypeError, match=msg): - ~idx - - # check that we get the same behavior with Series - with pytest.raises(TypeError, match=msg): - ~Series(idx) - - def test_is_boolean_is_deprecated(self, simple_index): - # GH50042 - idx = simple_index - with tm.assert_produces_warning(FutureWarning): - idx.is_boolean() - - def test_is_floating_is_deprecated(self, simple_index): - # GH50042 - idx = simple_index - with tm.assert_produces_warning(FutureWarning): - idx.is_floating() - - def test_is_integer_is_deprecated(self, simple_index): - # GH50042 - idx = simple_index - with tm.assert_produces_warning(FutureWarning): - idx.is_integer() - - def test_holds_integer_deprecated(self, simple_index): - # GH50243 - idx = simple_index - msg = f"{type(idx).__name__}.holds_integer is deprecated. " - with tm.assert_produces_warning(FutureWarning, match=msg): - idx.holds_integer() - - def test_is_numeric_is_deprecated(self, simple_index): - # GH50042 - idx = simple_index - with tm.assert_produces_warning( - FutureWarning, - match=f"{type(idx).__name__}.is_numeric is deprecated. 
", - ): - idx.is_numeric() - - def test_is_categorical_is_deprecated(self, simple_index): - # GH50042 - idx = simple_index - with tm.assert_produces_warning( - FutureWarning, - match=r"Use pandas\.api\.types\.is_categorical_dtype instead", - ): - idx.is_categorical() - - def test_is_interval_is_deprecated(self, simple_index): - # GH50042 - idx = simple_index - with tm.assert_produces_warning(FutureWarning): - idx.is_interval() - - def test_is_object_is_deprecated(self, simple_index): - # GH50042 - idx = simple_index - with tm.assert_produces_warning(FutureWarning): - idx.is_object() - - -class TestNumericBase: - @pytest.fixture( - params=[ - RangeIndex(start=0, stop=20, step=2), - Index(np.arange(5, dtype=np.float64)), - Index(np.arange(5, dtype=np.float32)), - Index(np.arange(5, dtype=np.uint64)), - Index(range(0, 20, 2), dtype=np.int64), - Index(range(0, 20, 2), dtype=np.int32), - Index(range(0, 20, 2), dtype=np.int16), - Index(range(0, 20, 2), dtype=np.int8), - ] - ) - def simple_index(self, request): - return request.param - - def test_constructor_unwraps_index(self, simple_index): - if isinstance(simple_index, RangeIndex): - pytest.skip("Tested elsewhere.") - index_cls = type(simple_index) - dtype = simple_index.dtype - - idx = Index([1, 2], dtype=dtype) - result = index_cls(idx) - expected = np.array([1, 2], dtype=idx.dtype) - tm.assert_numpy_array_equal(result._data, expected) - - def test_can_hold_identifiers(self, simple_index): - idx = simple_index - key = idx[0] - assert idx._can_hold_identifiers_and_holds_name(key) is False - - def test_view(self, simple_index): - if isinstance(simple_index, RangeIndex): - pytest.skip("Tested elsewhere.") - index_cls = type(simple_index) - dtype = simple_index.dtype - - idx = index_cls([], dtype=dtype, name="Foo") - idx_view = idx.view() - assert idx_view.name == "Foo" - - idx_view = idx.view(dtype) - tm.assert_index_equal(idx, index_cls(idx_view, name="Foo"), exact=True) - - idx_view = idx.view(index_cls) - tm.assert_index_equal(idx, index_cls(idx_view, name="Foo"), exact=True) - - def test_format(self, simple_index): - # GH35439 - if isinstance(simple_index, DatetimeIndex): - pytest.skip("Tested elsewhere") - idx = simple_index - max_width = max(len(str(x)) for x in idx) - expected = [str(x).ljust(max_width) for x in idx] - assert idx.format() == expected - - def test_insert_non_na(self, simple_index): - # GH#43921 inserting an element that we know we can hold should - # not change dtype or type (except for RangeIndex) - index = simple_index - - result = index.insert(0, index[0]) - - expected = Index([index[0]] + list(index), dtype=index.dtype) - tm.assert_index_equal(result, expected, exact=True) - - def test_insert_na(self, nulls_fixture, simple_index): - # GH 18295 (test missing) - index = simple_index - na_val = nulls_fixture - - if na_val is pd.NaT: - expected = Index([index[0], pd.NaT] + list(index[1:]), dtype=object) - else: - expected = Index([index[0], np.nan] + list(index[1:])) - # GH#43921 we preserve float dtype - if index.dtype.kind == "f": - expected = Index(expected, dtype=index.dtype) - - result = index.insert(1, na_val) - tm.assert_index_equal(result, expected, exact=True) - - def test_arithmetic_explicit_conversions(self, simple_index): - # GH 8608 - # add/sub are overridden explicitly for Float/Int Index - index_cls = type(simple_index) - if index_cls is RangeIndex: - idx = RangeIndex(5) - else: - idx = index_cls(np.arange(5, dtype="int64")) - - # float conversions - arr = np.arange(5, dtype="int64") * 3.2 - expected = 
Index(arr, dtype=np.float64) - fidx = idx * 3.2 - tm.assert_index_equal(fidx, expected) - fidx = 3.2 * idx - tm.assert_index_equal(fidx, expected) - - # interops with numpy arrays - expected = Index(arr, dtype=np.float64) - a = np.zeros(5, dtype="float64") - result = fidx - a - tm.assert_index_equal(result, expected) - - expected = Index(-arr, dtype=np.float64) - a = np.zeros(5, dtype="float64") - result = a - fidx - tm.assert_index_equal(result, expected) - - @pytest.mark.parametrize("complex_dtype", [np.complex64, np.complex128]) - def test_astype_to_complex(self, complex_dtype, simple_index): - result = simple_index.astype(complex_dtype) - - assert type(result) is Index and result.dtype == complex_dtype - - def test_cast_string(self, simple_index): - if isinstance(simple_index, RangeIndex): - pytest.skip("casting of strings not relevant for RangeIndex") - result = type(simple_index)(["0", "1", "2"], dtype=simple_index.dtype) - expected = type(simple_index)([0, 1, 2], dtype=simple_index.dtype) - tm.assert_index_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/idna/idnadata.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/idna/idnadata.py deleted file mode 100644 index 1b5805d15e53994f9909dd6f064603574eefdb32..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/idna/idnadata.py +++ /dev/null @@ -1,2137 +0,0 @@ -# This file is automatically generated by tools/idna-data - -__version__ = '14.0.0' -scripts = { - 'Greek': ( - 0x37000000374, - 0x37500000378, - 0x37a0000037e, - 0x37f00000380, - 0x38400000385, - 0x38600000387, - 0x3880000038b, - 0x38c0000038d, - 0x38e000003a2, - 0x3a3000003e2, - 0x3f000000400, - 0x1d2600001d2b, - 0x1d5d00001d62, - 0x1d6600001d6b, - 0x1dbf00001dc0, - 0x1f0000001f16, - 0x1f1800001f1e, - 0x1f2000001f46, - 0x1f4800001f4e, - 0x1f5000001f58, - 0x1f5900001f5a, - 0x1f5b00001f5c, - 0x1f5d00001f5e, - 0x1f5f00001f7e, - 0x1f8000001fb5, - 0x1fb600001fc5, - 0x1fc600001fd4, - 0x1fd600001fdc, - 0x1fdd00001ff0, - 0x1ff200001ff5, - 0x1ff600001fff, - 0x212600002127, - 0xab650000ab66, - 0x101400001018f, - 0x101a0000101a1, - 0x1d2000001d246, - ), - 'Han': ( - 0x2e8000002e9a, - 0x2e9b00002ef4, - 0x2f0000002fd6, - 0x300500003006, - 0x300700003008, - 0x30210000302a, - 0x30380000303c, - 0x340000004dc0, - 0x4e000000a000, - 0xf9000000fa6e, - 0xfa700000fada, - 0x16fe200016fe4, - 0x16ff000016ff2, - 0x200000002a6e0, - 0x2a7000002b739, - 0x2b7400002b81e, - 0x2b8200002cea2, - 0x2ceb00002ebe1, - 0x2f8000002fa1e, - 0x300000003134b, - ), - 'Hebrew': ( - 0x591000005c8, - 0x5d0000005eb, - 0x5ef000005f5, - 0xfb1d0000fb37, - 0xfb380000fb3d, - 0xfb3e0000fb3f, - 0xfb400000fb42, - 0xfb430000fb45, - 0xfb460000fb50, - ), - 'Hiragana': ( - 0x304100003097, - 0x309d000030a0, - 0x1b0010001b120, - 0x1b1500001b153, - 0x1f2000001f201, - ), - 'Katakana': ( - 0x30a1000030fb, - 0x30fd00003100, - 0x31f000003200, - 0x32d0000032ff, - 0x330000003358, - 0xff660000ff70, - 0xff710000ff9e, - 0x1aff00001aff4, - 0x1aff50001affc, - 0x1affd0001afff, - 0x1b0000001b001, - 0x1b1200001b123, - 0x1b1640001b168, - ), -} -joining_types = { - 0x600: 85, - 0x601: 85, - 0x602: 85, - 0x603: 85, - 0x604: 85, - 0x605: 85, - 0x608: 85, - 0x60b: 85, - 0x620: 68, - 0x621: 85, - 0x622: 82, - 0x623: 82, - 0x624: 82, - 0x625: 82, - 0x626: 68, - 0x627: 82, - 0x628: 68, - 0x629: 82, - 0x62a: 68, - 0x62b: 68, - 0x62c: 68, - 0x62d: 68, - 0x62e: 68, - 0x62f: 82, - 0x630: 82, - 0x631: 82, - 
0x632: 82, - 0x633: 68, - 0x634: 68, - 0x635: 68, - 0x636: 68, - 0x637: 68, - 0x638: 68, - 0x639: 68, - 0x63a: 68, - 0x63b: 68, - 0x63c: 68, - 0x63d: 68, - 0x63e: 68, - 0x63f: 68, - 0x640: 67, - 0x641: 68, - 0x642: 68, - 0x643: 68, - 0x644: 68, - 0x645: 68, - 0x646: 68, - 0x647: 68, - 0x648: 82, - 0x649: 68, - 0x64a: 68, - 0x66e: 68, - 0x66f: 68, - 0x671: 82, - 0x672: 82, - 0x673: 82, - 0x674: 85, - 0x675: 82, - 0x676: 82, - 0x677: 82, - 0x678: 68, - 0x679: 68, - 0x67a: 68, - 0x67b: 68, - 0x67c: 68, - 0x67d: 68, - 0x67e: 68, - 0x67f: 68, - 0x680: 68, - 0x681: 68, - 0x682: 68, - 0x683: 68, - 0x684: 68, - 0x685: 68, - 0x686: 68, - 0x687: 68, - 0x688: 82, - 0x689: 82, - 0x68a: 82, - 0x68b: 82, - 0x68c: 82, - 0x68d: 82, - 0x68e: 82, - 0x68f: 82, - 0x690: 82, - 0x691: 82, - 0x692: 82, - 0x693: 82, - 0x694: 82, - 0x695: 82, - 0x696: 82, - 0x697: 82, - 0x698: 82, - 0x699: 82, - 0x69a: 68, - 0x69b: 68, - 0x69c: 68, - 0x69d: 68, - 0x69e: 68, - 0x69f: 68, - 0x6a0: 68, - 0x6a1: 68, - 0x6a2: 68, - 0x6a3: 68, - 0x6a4: 68, - 0x6a5: 68, - 0x6a6: 68, - 0x6a7: 68, - 0x6a8: 68, - 0x6a9: 68, - 0x6aa: 68, - 0x6ab: 68, - 0x6ac: 68, - 0x6ad: 68, - 0x6ae: 68, - 0x6af: 68, - 0x6b0: 68, - 0x6b1: 68, - 0x6b2: 68, - 0x6b3: 68, - 0x6b4: 68, - 0x6b5: 68, - 0x6b6: 68, - 0x6b7: 68, - 0x6b8: 68, - 0x6b9: 68, - 0x6ba: 68, - 0x6bb: 68, - 0x6bc: 68, - 0x6bd: 68, - 0x6be: 68, - 0x6bf: 68, - 0x6c0: 82, - 0x6c1: 68, - 0x6c2: 68, - 0x6c3: 82, - 0x6c4: 82, - 0x6c5: 82, - 0x6c6: 82, - 0x6c7: 82, - 0x6c8: 82, - 0x6c9: 82, - 0x6ca: 82, - 0x6cb: 82, - 0x6cc: 68, - 0x6cd: 82, - 0x6ce: 68, - 0x6cf: 82, - 0x6d0: 68, - 0x6d1: 68, - 0x6d2: 82, - 0x6d3: 82, - 0x6d5: 82, - 0x6dd: 85, - 0x6ee: 82, - 0x6ef: 82, - 0x6fa: 68, - 0x6fb: 68, - 0x6fc: 68, - 0x6ff: 68, - 0x70f: 84, - 0x710: 82, - 0x712: 68, - 0x713: 68, - 0x714: 68, - 0x715: 82, - 0x716: 82, - 0x717: 82, - 0x718: 82, - 0x719: 82, - 0x71a: 68, - 0x71b: 68, - 0x71c: 68, - 0x71d: 68, - 0x71e: 82, - 0x71f: 68, - 0x720: 68, - 0x721: 68, - 0x722: 68, - 0x723: 68, - 0x724: 68, - 0x725: 68, - 0x726: 68, - 0x727: 68, - 0x728: 82, - 0x729: 68, - 0x72a: 82, - 0x72b: 68, - 0x72c: 82, - 0x72d: 68, - 0x72e: 68, - 0x72f: 82, - 0x74d: 82, - 0x74e: 68, - 0x74f: 68, - 0x750: 68, - 0x751: 68, - 0x752: 68, - 0x753: 68, - 0x754: 68, - 0x755: 68, - 0x756: 68, - 0x757: 68, - 0x758: 68, - 0x759: 82, - 0x75a: 82, - 0x75b: 82, - 0x75c: 68, - 0x75d: 68, - 0x75e: 68, - 0x75f: 68, - 0x760: 68, - 0x761: 68, - 0x762: 68, - 0x763: 68, - 0x764: 68, - 0x765: 68, - 0x766: 68, - 0x767: 68, - 0x768: 68, - 0x769: 68, - 0x76a: 68, - 0x76b: 82, - 0x76c: 82, - 0x76d: 68, - 0x76e: 68, - 0x76f: 68, - 0x770: 68, - 0x771: 82, - 0x772: 68, - 0x773: 82, - 0x774: 82, - 0x775: 68, - 0x776: 68, - 0x777: 68, - 0x778: 82, - 0x779: 82, - 0x77a: 68, - 0x77b: 68, - 0x77c: 68, - 0x77d: 68, - 0x77e: 68, - 0x77f: 68, - 0x7ca: 68, - 0x7cb: 68, - 0x7cc: 68, - 0x7cd: 68, - 0x7ce: 68, - 0x7cf: 68, - 0x7d0: 68, - 0x7d1: 68, - 0x7d2: 68, - 0x7d3: 68, - 0x7d4: 68, - 0x7d5: 68, - 0x7d6: 68, - 0x7d7: 68, - 0x7d8: 68, - 0x7d9: 68, - 0x7da: 68, - 0x7db: 68, - 0x7dc: 68, - 0x7dd: 68, - 0x7de: 68, - 0x7df: 68, - 0x7e0: 68, - 0x7e1: 68, - 0x7e2: 68, - 0x7e3: 68, - 0x7e4: 68, - 0x7e5: 68, - 0x7e6: 68, - 0x7e7: 68, - 0x7e8: 68, - 0x7e9: 68, - 0x7ea: 68, - 0x7fa: 67, - 0x840: 82, - 0x841: 68, - 0x842: 68, - 0x843: 68, - 0x844: 68, - 0x845: 68, - 0x846: 82, - 0x847: 82, - 0x848: 68, - 0x849: 82, - 0x84a: 68, - 0x84b: 68, - 0x84c: 68, - 0x84d: 68, - 0x84e: 68, - 0x84f: 68, - 0x850: 68, - 0x851: 68, - 0x852: 68, - 0x853: 68, - 0x854: 82, - 0x855: 68, - 
0x856: 82, - 0x857: 82, - 0x858: 82, - 0x860: 68, - 0x861: 85, - 0x862: 68, - 0x863: 68, - 0x864: 68, - 0x865: 68, - 0x866: 85, - 0x867: 82, - 0x868: 68, - 0x869: 82, - 0x86a: 82, - 0x870: 82, - 0x871: 82, - 0x872: 82, - 0x873: 82, - 0x874: 82, - 0x875: 82, - 0x876: 82, - 0x877: 82, - 0x878: 82, - 0x879: 82, - 0x87a: 82, - 0x87b: 82, - 0x87c: 82, - 0x87d: 82, - 0x87e: 82, - 0x87f: 82, - 0x880: 82, - 0x881: 82, - 0x882: 82, - 0x883: 67, - 0x884: 67, - 0x885: 67, - 0x886: 68, - 0x887: 85, - 0x888: 85, - 0x889: 68, - 0x88a: 68, - 0x88b: 68, - 0x88c: 68, - 0x88d: 68, - 0x88e: 82, - 0x890: 85, - 0x891: 85, - 0x8a0: 68, - 0x8a1: 68, - 0x8a2: 68, - 0x8a3: 68, - 0x8a4: 68, - 0x8a5: 68, - 0x8a6: 68, - 0x8a7: 68, - 0x8a8: 68, - 0x8a9: 68, - 0x8aa: 82, - 0x8ab: 82, - 0x8ac: 82, - 0x8ad: 85, - 0x8ae: 82, - 0x8af: 68, - 0x8b0: 68, - 0x8b1: 82, - 0x8b2: 82, - 0x8b3: 68, - 0x8b4: 68, - 0x8b5: 68, - 0x8b6: 68, - 0x8b7: 68, - 0x8b8: 68, - 0x8b9: 82, - 0x8ba: 68, - 0x8bb: 68, - 0x8bc: 68, - 0x8bd: 68, - 0x8be: 68, - 0x8bf: 68, - 0x8c0: 68, - 0x8c1: 68, - 0x8c2: 68, - 0x8c3: 68, - 0x8c4: 68, - 0x8c5: 68, - 0x8c6: 68, - 0x8c7: 68, - 0x8c8: 68, - 0x8e2: 85, - 0x1806: 85, - 0x1807: 68, - 0x180a: 67, - 0x180e: 85, - 0x1820: 68, - 0x1821: 68, - 0x1822: 68, - 0x1823: 68, - 0x1824: 68, - 0x1825: 68, - 0x1826: 68, - 0x1827: 68, - 0x1828: 68, - 0x1829: 68, - 0x182a: 68, - 0x182b: 68, - 0x182c: 68, - 0x182d: 68, - 0x182e: 68, - 0x182f: 68, - 0x1830: 68, - 0x1831: 68, - 0x1832: 68, - 0x1833: 68, - 0x1834: 68, - 0x1835: 68, - 0x1836: 68, - 0x1837: 68, - 0x1838: 68, - 0x1839: 68, - 0x183a: 68, - 0x183b: 68, - 0x183c: 68, - 0x183d: 68, - 0x183e: 68, - 0x183f: 68, - 0x1840: 68, - 0x1841: 68, - 0x1842: 68, - 0x1843: 68, - 0x1844: 68, - 0x1845: 68, - 0x1846: 68, - 0x1847: 68, - 0x1848: 68, - 0x1849: 68, - 0x184a: 68, - 0x184b: 68, - 0x184c: 68, - 0x184d: 68, - 0x184e: 68, - 0x184f: 68, - 0x1850: 68, - 0x1851: 68, - 0x1852: 68, - 0x1853: 68, - 0x1854: 68, - 0x1855: 68, - 0x1856: 68, - 0x1857: 68, - 0x1858: 68, - 0x1859: 68, - 0x185a: 68, - 0x185b: 68, - 0x185c: 68, - 0x185d: 68, - 0x185e: 68, - 0x185f: 68, - 0x1860: 68, - 0x1861: 68, - 0x1862: 68, - 0x1863: 68, - 0x1864: 68, - 0x1865: 68, - 0x1866: 68, - 0x1867: 68, - 0x1868: 68, - 0x1869: 68, - 0x186a: 68, - 0x186b: 68, - 0x186c: 68, - 0x186d: 68, - 0x186e: 68, - 0x186f: 68, - 0x1870: 68, - 0x1871: 68, - 0x1872: 68, - 0x1873: 68, - 0x1874: 68, - 0x1875: 68, - 0x1876: 68, - 0x1877: 68, - 0x1878: 68, - 0x1880: 85, - 0x1881: 85, - 0x1882: 85, - 0x1883: 85, - 0x1884: 85, - 0x1885: 84, - 0x1886: 84, - 0x1887: 68, - 0x1888: 68, - 0x1889: 68, - 0x188a: 68, - 0x188b: 68, - 0x188c: 68, - 0x188d: 68, - 0x188e: 68, - 0x188f: 68, - 0x1890: 68, - 0x1891: 68, - 0x1892: 68, - 0x1893: 68, - 0x1894: 68, - 0x1895: 68, - 0x1896: 68, - 0x1897: 68, - 0x1898: 68, - 0x1899: 68, - 0x189a: 68, - 0x189b: 68, - 0x189c: 68, - 0x189d: 68, - 0x189e: 68, - 0x189f: 68, - 0x18a0: 68, - 0x18a1: 68, - 0x18a2: 68, - 0x18a3: 68, - 0x18a4: 68, - 0x18a5: 68, - 0x18a6: 68, - 0x18a7: 68, - 0x18a8: 68, - 0x18aa: 68, - 0x200c: 85, - 0x200d: 67, - 0x202f: 85, - 0x2066: 85, - 0x2067: 85, - 0x2068: 85, - 0x2069: 85, - 0xa840: 68, - 0xa841: 68, - 0xa842: 68, - 0xa843: 68, - 0xa844: 68, - 0xa845: 68, - 0xa846: 68, - 0xa847: 68, - 0xa848: 68, - 0xa849: 68, - 0xa84a: 68, - 0xa84b: 68, - 0xa84c: 68, - 0xa84d: 68, - 0xa84e: 68, - 0xa84f: 68, - 0xa850: 68, - 0xa851: 68, - 0xa852: 68, - 0xa853: 68, - 0xa854: 68, - 0xa855: 68, - 0xa856: 68, - 0xa857: 68, - 0xa858: 68, - 0xa859: 68, - 0xa85a: 68, - 0xa85b: 68, - 0xa85c: 68, - 
0xa85d: 68, - 0xa85e: 68, - 0xa85f: 68, - 0xa860: 68, - 0xa861: 68, - 0xa862: 68, - 0xa863: 68, - 0xa864: 68, - 0xa865: 68, - 0xa866: 68, - 0xa867: 68, - 0xa868: 68, - 0xa869: 68, - 0xa86a: 68, - 0xa86b: 68, - 0xa86c: 68, - 0xa86d: 68, - 0xa86e: 68, - 0xa86f: 68, - 0xa870: 68, - 0xa871: 68, - 0xa872: 76, - 0xa873: 85, - 0x10ac0: 68, - 0x10ac1: 68, - 0x10ac2: 68, - 0x10ac3: 68, - 0x10ac4: 68, - 0x10ac5: 82, - 0x10ac6: 85, - 0x10ac7: 82, - 0x10ac8: 85, - 0x10ac9: 82, - 0x10aca: 82, - 0x10acb: 85, - 0x10acc: 85, - 0x10acd: 76, - 0x10ace: 82, - 0x10acf: 82, - 0x10ad0: 82, - 0x10ad1: 82, - 0x10ad2: 82, - 0x10ad3: 68, - 0x10ad4: 68, - 0x10ad5: 68, - 0x10ad6: 68, - 0x10ad7: 76, - 0x10ad8: 68, - 0x10ad9: 68, - 0x10ada: 68, - 0x10adb: 68, - 0x10adc: 68, - 0x10add: 82, - 0x10ade: 68, - 0x10adf: 68, - 0x10ae0: 68, - 0x10ae1: 82, - 0x10ae2: 85, - 0x10ae3: 85, - 0x10ae4: 82, - 0x10aeb: 68, - 0x10aec: 68, - 0x10aed: 68, - 0x10aee: 68, - 0x10aef: 82, - 0x10b80: 68, - 0x10b81: 82, - 0x10b82: 68, - 0x10b83: 82, - 0x10b84: 82, - 0x10b85: 82, - 0x10b86: 68, - 0x10b87: 68, - 0x10b88: 68, - 0x10b89: 82, - 0x10b8a: 68, - 0x10b8b: 68, - 0x10b8c: 82, - 0x10b8d: 68, - 0x10b8e: 82, - 0x10b8f: 82, - 0x10b90: 68, - 0x10b91: 82, - 0x10ba9: 82, - 0x10baa: 82, - 0x10bab: 82, - 0x10bac: 82, - 0x10bad: 68, - 0x10bae: 68, - 0x10baf: 85, - 0x10d00: 76, - 0x10d01: 68, - 0x10d02: 68, - 0x10d03: 68, - 0x10d04: 68, - 0x10d05: 68, - 0x10d06: 68, - 0x10d07: 68, - 0x10d08: 68, - 0x10d09: 68, - 0x10d0a: 68, - 0x10d0b: 68, - 0x10d0c: 68, - 0x10d0d: 68, - 0x10d0e: 68, - 0x10d0f: 68, - 0x10d10: 68, - 0x10d11: 68, - 0x10d12: 68, - 0x10d13: 68, - 0x10d14: 68, - 0x10d15: 68, - 0x10d16: 68, - 0x10d17: 68, - 0x10d18: 68, - 0x10d19: 68, - 0x10d1a: 68, - 0x10d1b: 68, - 0x10d1c: 68, - 0x10d1d: 68, - 0x10d1e: 68, - 0x10d1f: 68, - 0x10d20: 68, - 0x10d21: 68, - 0x10d22: 82, - 0x10d23: 68, - 0x10f30: 68, - 0x10f31: 68, - 0x10f32: 68, - 0x10f33: 82, - 0x10f34: 68, - 0x10f35: 68, - 0x10f36: 68, - 0x10f37: 68, - 0x10f38: 68, - 0x10f39: 68, - 0x10f3a: 68, - 0x10f3b: 68, - 0x10f3c: 68, - 0x10f3d: 68, - 0x10f3e: 68, - 0x10f3f: 68, - 0x10f40: 68, - 0x10f41: 68, - 0x10f42: 68, - 0x10f43: 68, - 0x10f44: 68, - 0x10f45: 85, - 0x10f51: 68, - 0x10f52: 68, - 0x10f53: 68, - 0x10f54: 82, - 0x10f70: 68, - 0x10f71: 68, - 0x10f72: 68, - 0x10f73: 68, - 0x10f74: 82, - 0x10f75: 82, - 0x10f76: 68, - 0x10f77: 68, - 0x10f78: 68, - 0x10f79: 68, - 0x10f7a: 68, - 0x10f7b: 68, - 0x10f7c: 68, - 0x10f7d: 68, - 0x10f7e: 68, - 0x10f7f: 68, - 0x10f80: 68, - 0x10f81: 68, - 0x10fb0: 68, - 0x10fb1: 85, - 0x10fb2: 68, - 0x10fb3: 68, - 0x10fb4: 82, - 0x10fb5: 82, - 0x10fb6: 82, - 0x10fb7: 85, - 0x10fb8: 68, - 0x10fb9: 82, - 0x10fba: 82, - 0x10fbb: 68, - 0x10fbc: 68, - 0x10fbd: 82, - 0x10fbe: 68, - 0x10fbf: 68, - 0x10fc0: 85, - 0x10fc1: 68, - 0x10fc2: 82, - 0x10fc3: 82, - 0x10fc4: 68, - 0x10fc5: 85, - 0x10fc6: 85, - 0x10fc7: 85, - 0x10fc8: 85, - 0x10fc9: 82, - 0x10fca: 68, - 0x10fcb: 76, - 0x110bd: 85, - 0x110cd: 85, - 0x1e900: 68, - 0x1e901: 68, - 0x1e902: 68, - 0x1e903: 68, - 0x1e904: 68, - 0x1e905: 68, - 0x1e906: 68, - 0x1e907: 68, - 0x1e908: 68, - 0x1e909: 68, - 0x1e90a: 68, - 0x1e90b: 68, - 0x1e90c: 68, - 0x1e90d: 68, - 0x1e90e: 68, - 0x1e90f: 68, - 0x1e910: 68, - 0x1e911: 68, - 0x1e912: 68, - 0x1e913: 68, - 0x1e914: 68, - 0x1e915: 68, - 0x1e916: 68, - 0x1e917: 68, - 0x1e918: 68, - 0x1e919: 68, - 0x1e91a: 68, - 0x1e91b: 68, - 0x1e91c: 68, - 0x1e91d: 68, - 0x1e91e: 68, - 0x1e91f: 68, - 0x1e920: 68, - 0x1e921: 68, - 0x1e922: 68, - 0x1e923: 68, - 0x1e924: 68, - 0x1e925: 68, - 
0x1e926: 68, - 0x1e927: 68, - 0x1e928: 68, - 0x1e929: 68, - 0x1e92a: 68, - 0x1e92b: 68, - 0x1e92c: 68, - 0x1e92d: 68, - 0x1e92e: 68, - 0x1e92f: 68, - 0x1e930: 68, - 0x1e931: 68, - 0x1e932: 68, - 0x1e933: 68, - 0x1e934: 68, - 0x1e935: 68, - 0x1e936: 68, - 0x1e937: 68, - 0x1e938: 68, - 0x1e939: 68, - 0x1e93a: 68, - 0x1e93b: 68, - 0x1e93c: 68, - 0x1e93d: 68, - 0x1e93e: 68, - 0x1e93f: 68, - 0x1e940: 68, - 0x1e941: 68, - 0x1e942: 68, - 0x1e943: 68, - 0x1e94b: 84, -} -codepoint_classes = { - 'PVALID': ( - 0x2d0000002e, - 0x300000003a, - 0x610000007b, - 0xdf000000f7, - 0xf800000100, - 0x10100000102, - 0x10300000104, - 0x10500000106, - 0x10700000108, - 0x1090000010a, - 0x10b0000010c, - 0x10d0000010e, - 0x10f00000110, - 0x11100000112, - 0x11300000114, - 0x11500000116, - 0x11700000118, - 0x1190000011a, - 0x11b0000011c, - 0x11d0000011e, - 0x11f00000120, - 0x12100000122, - 0x12300000124, - 0x12500000126, - 0x12700000128, - 0x1290000012a, - 0x12b0000012c, - 0x12d0000012e, - 0x12f00000130, - 0x13100000132, - 0x13500000136, - 0x13700000139, - 0x13a0000013b, - 0x13c0000013d, - 0x13e0000013f, - 0x14200000143, - 0x14400000145, - 0x14600000147, - 0x14800000149, - 0x14b0000014c, - 0x14d0000014e, - 0x14f00000150, - 0x15100000152, - 0x15300000154, - 0x15500000156, - 0x15700000158, - 0x1590000015a, - 0x15b0000015c, - 0x15d0000015e, - 0x15f00000160, - 0x16100000162, - 0x16300000164, - 0x16500000166, - 0x16700000168, - 0x1690000016a, - 0x16b0000016c, - 0x16d0000016e, - 0x16f00000170, - 0x17100000172, - 0x17300000174, - 0x17500000176, - 0x17700000178, - 0x17a0000017b, - 0x17c0000017d, - 0x17e0000017f, - 0x18000000181, - 0x18300000184, - 0x18500000186, - 0x18800000189, - 0x18c0000018e, - 0x19200000193, - 0x19500000196, - 0x1990000019c, - 0x19e0000019f, - 0x1a1000001a2, - 0x1a3000001a4, - 0x1a5000001a6, - 0x1a8000001a9, - 0x1aa000001ac, - 0x1ad000001ae, - 0x1b0000001b1, - 0x1b4000001b5, - 0x1b6000001b7, - 0x1b9000001bc, - 0x1bd000001c4, - 0x1ce000001cf, - 0x1d0000001d1, - 0x1d2000001d3, - 0x1d4000001d5, - 0x1d6000001d7, - 0x1d8000001d9, - 0x1da000001db, - 0x1dc000001de, - 0x1df000001e0, - 0x1e1000001e2, - 0x1e3000001e4, - 0x1e5000001e6, - 0x1e7000001e8, - 0x1e9000001ea, - 0x1eb000001ec, - 0x1ed000001ee, - 0x1ef000001f1, - 0x1f5000001f6, - 0x1f9000001fa, - 0x1fb000001fc, - 0x1fd000001fe, - 0x1ff00000200, - 0x20100000202, - 0x20300000204, - 0x20500000206, - 0x20700000208, - 0x2090000020a, - 0x20b0000020c, - 0x20d0000020e, - 0x20f00000210, - 0x21100000212, - 0x21300000214, - 0x21500000216, - 0x21700000218, - 0x2190000021a, - 0x21b0000021c, - 0x21d0000021e, - 0x21f00000220, - 0x22100000222, - 0x22300000224, - 0x22500000226, - 0x22700000228, - 0x2290000022a, - 0x22b0000022c, - 0x22d0000022e, - 0x22f00000230, - 0x23100000232, - 0x2330000023a, - 0x23c0000023d, - 0x23f00000241, - 0x24200000243, - 0x24700000248, - 0x2490000024a, - 0x24b0000024c, - 0x24d0000024e, - 0x24f000002b0, - 0x2b9000002c2, - 0x2c6000002d2, - 0x2ec000002ed, - 0x2ee000002ef, - 0x30000000340, - 0x34200000343, - 0x3460000034f, - 0x35000000370, - 0x37100000372, - 0x37300000374, - 0x37700000378, - 0x37b0000037e, - 0x39000000391, - 0x3ac000003cf, - 0x3d7000003d8, - 0x3d9000003da, - 0x3db000003dc, - 0x3dd000003de, - 0x3df000003e0, - 0x3e1000003e2, - 0x3e3000003e4, - 0x3e5000003e6, - 0x3e7000003e8, - 0x3e9000003ea, - 0x3eb000003ec, - 0x3ed000003ee, - 0x3ef000003f0, - 0x3f3000003f4, - 0x3f8000003f9, - 0x3fb000003fd, - 0x43000000460, - 0x46100000462, - 0x46300000464, - 0x46500000466, - 0x46700000468, - 0x4690000046a, - 0x46b0000046c, - 0x46d0000046e, - 
0x46f00000470, - 0x47100000472, - 0x47300000474, - 0x47500000476, - 0x47700000478, - 0x4790000047a, - 0x47b0000047c, - 0x47d0000047e, - 0x47f00000480, - 0x48100000482, - 0x48300000488, - 0x48b0000048c, - 0x48d0000048e, - 0x48f00000490, - 0x49100000492, - 0x49300000494, - 0x49500000496, - 0x49700000498, - 0x4990000049a, - 0x49b0000049c, - 0x49d0000049e, - 0x49f000004a0, - 0x4a1000004a2, - 0x4a3000004a4, - 0x4a5000004a6, - 0x4a7000004a8, - 0x4a9000004aa, - 0x4ab000004ac, - 0x4ad000004ae, - 0x4af000004b0, - 0x4b1000004b2, - 0x4b3000004b4, - 0x4b5000004b6, - 0x4b7000004b8, - 0x4b9000004ba, - 0x4bb000004bc, - 0x4bd000004be, - 0x4bf000004c0, - 0x4c2000004c3, - 0x4c4000004c5, - 0x4c6000004c7, - 0x4c8000004c9, - 0x4ca000004cb, - 0x4cc000004cd, - 0x4ce000004d0, - 0x4d1000004d2, - 0x4d3000004d4, - 0x4d5000004d6, - 0x4d7000004d8, - 0x4d9000004da, - 0x4db000004dc, - 0x4dd000004de, - 0x4df000004e0, - 0x4e1000004e2, - 0x4e3000004e4, - 0x4e5000004e6, - 0x4e7000004e8, - 0x4e9000004ea, - 0x4eb000004ec, - 0x4ed000004ee, - 0x4ef000004f0, - 0x4f1000004f2, - 0x4f3000004f4, - 0x4f5000004f6, - 0x4f7000004f8, - 0x4f9000004fa, - 0x4fb000004fc, - 0x4fd000004fe, - 0x4ff00000500, - 0x50100000502, - 0x50300000504, - 0x50500000506, - 0x50700000508, - 0x5090000050a, - 0x50b0000050c, - 0x50d0000050e, - 0x50f00000510, - 0x51100000512, - 0x51300000514, - 0x51500000516, - 0x51700000518, - 0x5190000051a, - 0x51b0000051c, - 0x51d0000051e, - 0x51f00000520, - 0x52100000522, - 0x52300000524, - 0x52500000526, - 0x52700000528, - 0x5290000052a, - 0x52b0000052c, - 0x52d0000052e, - 0x52f00000530, - 0x5590000055a, - 0x56000000587, - 0x58800000589, - 0x591000005be, - 0x5bf000005c0, - 0x5c1000005c3, - 0x5c4000005c6, - 0x5c7000005c8, - 0x5d0000005eb, - 0x5ef000005f3, - 0x6100000061b, - 0x62000000640, - 0x64100000660, - 0x66e00000675, - 0x679000006d4, - 0x6d5000006dd, - 0x6df000006e9, - 0x6ea000006f0, - 0x6fa00000700, - 0x7100000074b, - 0x74d000007b2, - 0x7c0000007f6, - 0x7fd000007fe, - 0x8000000082e, - 0x8400000085c, - 0x8600000086b, - 0x87000000888, - 0x8890000088f, - 0x898000008e2, - 0x8e300000958, - 0x96000000964, - 0x96600000970, - 0x97100000984, - 0x9850000098d, - 0x98f00000991, - 0x993000009a9, - 0x9aa000009b1, - 0x9b2000009b3, - 0x9b6000009ba, - 0x9bc000009c5, - 0x9c7000009c9, - 0x9cb000009cf, - 0x9d7000009d8, - 0x9e0000009e4, - 0x9e6000009f2, - 0x9fc000009fd, - 0x9fe000009ff, - 0xa0100000a04, - 0xa0500000a0b, - 0xa0f00000a11, - 0xa1300000a29, - 0xa2a00000a31, - 0xa3200000a33, - 0xa3500000a36, - 0xa3800000a3a, - 0xa3c00000a3d, - 0xa3e00000a43, - 0xa4700000a49, - 0xa4b00000a4e, - 0xa5100000a52, - 0xa5c00000a5d, - 0xa6600000a76, - 0xa8100000a84, - 0xa8500000a8e, - 0xa8f00000a92, - 0xa9300000aa9, - 0xaaa00000ab1, - 0xab200000ab4, - 0xab500000aba, - 0xabc00000ac6, - 0xac700000aca, - 0xacb00000ace, - 0xad000000ad1, - 0xae000000ae4, - 0xae600000af0, - 0xaf900000b00, - 0xb0100000b04, - 0xb0500000b0d, - 0xb0f00000b11, - 0xb1300000b29, - 0xb2a00000b31, - 0xb3200000b34, - 0xb3500000b3a, - 0xb3c00000b45, - 0xb4700000b49, - 0xb4b00000b4e, - 0xb5500000b58, - 0xb5f00000b64, - 0xb6600000b70, - 0xb7100000b72, - 0xb8200000b84, - 0xb8500000b8b, - 0xb8e00000b91, - 0xb9200000b96, - 0xb9900000b9b, - 0xb9c00000b9d, - 0xb9e00000ba0, - 0xba300000ba5, - 0xba800000bab, - 0xbae00000bba, - 0xbbe00000bc3, - 0xbc600000bc9, - 0xbca00000bce, - 0xbd000000bd1, - 0xbd700000bd8, - 0xbe600000bf0, - 0xc0000000c0d, - 0xc0e00000c11, - 0xc1200000c29, - 0xc2a00000c3a, - 0xc3c00000c45, - 0xc4600000c49, - 0xc4a00000c4e, - 0xc5500000c57, - 0xc5800000c5b, - 0xc5d00000c5e, - 
0xc6000000c64, - 0xc6600000c70, - 0xc8000000c84, - 0xc8500000c8d, - 0xc8e00000c91, - 0xc9200000ca9, - 0xcaa00000cb4, - 0xcb500000cba, - 0xcbc00000cc5, - 0xcc600000cc9, - 0xcca00000cce, - 0xcd500000cd7, - 0xcdd00000cdf, - 0xce000000ce4, - 0xce600000cf0, - 0xcf100000cf3, - 0xd0000000d0d, - 0xd0e00000d11, - 0xd1200000d45, - 0xd4600000d49, - 0xd4a00000d4f, - 0xd5400000d58, - 0xd5f00000d64, - 0xd6600000d70, - 0xd7a00000d80, - 0xd8100000d84, - 0xd8500000d97, - 0xd9a00000db2, - 0xdb300000dbc, - 0xdbd00000dbe, - 0xdc000000dc7, - 0xdca00000dcb, - 0xdcf00000dd5, - 0xdd600000dd7, - 0xdd800000de0, - 0xde600000df0, - 0xdf200000df4, - 0xe0100000e33, - 0xe3400000e3b, - 0xe4000000e4f, - 0xe5000000e5a, - 0xe8100000e83, - 0xe8400000e85, - 0xe8600000e8b, - 0xe8c00000ea4, - 0xea500000ea6, - 0xea700000eb3, - 0xeb400000ebe, - 0xec000000ec5, - 0xec600000ec7, - 0xec800000ece, - 0xed000000eda, - 0xede00000ee0, - 0xf0000000f01, - 0xf0b00000f0c, - 0xf1800000f1a, - 0xf2000000f2a, - 0xf3500000f36, - 0xf3700000f38, - 0xf3900000f3a, - 0xf3e00000f43, - 0xf4400000f48, - 0xf4900000f4d, - 0xf4e00000f52, - 0xf5300000f57, - 0xf5800000f5c, - 0xf5d00000f69, - 0xf6a00000f6d, - 0xf7100000f73, - 0xf7400000f75, - 0xf7a00000f81, - 0xf8200000f85, - 0xf8600000f93, - 0xf9400000f98, - 0xf9900000f9d, - 0xf9e00000fa2, - 0xfa300000fa7, - 0xfa800000fac, - 0xfad00000fb9, - 0xfba00000fbd, - 0xfc600000fc7, - 0x10000000104a, - 0x10500000109e, - 0x10d0000010fb, - 0x10fd00001100, - 0x120000001249, - 0x124a0000124e, - 0x125000001257, - 0x125800001259, - 0x125a0000125e, - 0x126000001289, - 0x128a0000128e, - 0x1290000012b1, - 0x12b2000012b6, - 0x12b8000012bf, - 0x12c0000012c1, - 0x12c2000012c6, - 0x12c8000012d7, - 0x12d800001311, - 0x131200001316, - 0x13180000135b, - 0x135d00001360, - 0x138000001390, - 0x13a0000013f6, - 0x14010000166d, - 0x166f00001680, - 0x16810000169b, - 0x16a0000016eb, - 0x16f1000016f9, - 0x170000001716, - 0x171f00001735, - 0x174000001754, - 0x17600000176d, - 0x176e00001771, - 0x177200001774, - 0x1780000017b4, - 0x17b6000017d4, - 0x17d7000017d8, - 0x17dc000017de, - 0x17e0000017ea, - 0x18100000181a, - 0x182000001879, - 0x1880000018ab, - 0x18b0000018f6, - 0x19000000191f, - 0x19200000192c, - 0x19300000193c, - 0x19460000196e, - 0x197000001975, - 0x1980000019ac, - 0x19b0000019ca, - 0x19d0000019da, - 0x1a0000001a1c, - 0x1a2000001a5f, - 0x1a6000001a7d, - 0x1a7f00001a8a, - 0x1a9000001a9a, - 0x1aa700001aa8, - 0x1ab000001abe, - 0x1abf00001acf, - 0x1b0000001b4d, - 0x1b5000001b5a, - 0x1b6b00001b74, - 0x1b8000001bf4, - 0x1c0000001c38, - 0x1c4000001c4a, - 0x1c4d00001c7e, - 0x1cd000001cd3, - 0x1cd400001cfb, - 0x1d0000001d2c, - 0x1d2f00001d30, - 0x1d3b00001d3c, - 0x1d4e00001d4f, - 0x1d6b00001d78, - 0x1d7900001d9b, - 0x1dc000001e00, - 0x1e0100001e02, - 0x1e0300001e04, - 0x1e0500001e06, - 0x1e0700001e08, - 0x1e0900001e0a, - 0x1e0b00001e0c, - 0x1e0d00001e0e, - 0x1e0f00001e10, - 0x1e1100001e12, - 0x1e1300001e14, - 0x1e1500001e16, - 0x1e1700001e18, - 0x1e1900001e1a, - 0x1e1b00001e1c, - 0x1e1d00001e1e, - 0x1e1f00001e20, - 0x1e2100001e22, - 0x1e2300001e24, - 0x1e2500001e26, - 0x1e2700001e28, - 0x1e2900001e2a, - 0x1e2b00001e2c, - 0x1e2d00001e2e, - 0x1e2f00001e30, - 0x1e3100001e32, - 0x1e3300001e34, - 0x1e3500001e36, - 0x1e3700001e38, - 0x1e3900001e3a, - 0x1e3b00001e3c, - 0x1e3d00001e3e, - 0x1e3f00001e40, - 0x1e4100001e42, - 0x1e4300001e44, - 0x1e4500001e46, - 0x1e4700001e48, - 0x1e4900001e4a, - 0x1e4b00001e4c, - 0x1e4d00001e4e, - 0x1e4f00001e50, - 0x1e5100001e52, - 0x1e5300001e54, - 0x1e5500001e56, - 0x1e5700001e58, - 0x1e5900001e5a, - 0x1e5b00001e5c, - 
0x1e5d00001e5e, - 0x1e5f00001e60, - 0x1e6100001e62, - 0x1e6300001e64, - 0x1e6500001e66, - 0x1e6700001e68, - 0x1e6900001e6a, - 0x1e6b00001e6c, - 0x1e6d00001e6e, - 0x1e6f00001e70, - 0x1e7100001e72, - 0x1e7300001e74, - 0x1e7500001e76, - 0x1e7700001e78, - 0x1e7900001e7a, - 0x1e7b00001e7c, - 0x1e7d00001e7e, - 0x1e7f00001e80, - 0x1e8100001e82, - 0x1e8300001e84, - 0x1e8500001e86, - 0x1e8700001e88, - 0x1e8900001e8a, - 0x1e8b00001e8c, - 0x1e8d00001e8e, - 0x1e8f00001e90, - 0x1e9100001e92, - 0x1e9300001e94, - 0x1e9500001e9a, - 0x1e9c00001e9e, - 0x1e9f00001ea0, - 0x1ea100001ea2, - 0x1ea300001ea4, - 0x1ea500001ea6, - 0x1ea700001ea8, - 0x1ea900001eaa, - 0x1eab00001eac, - 0x1ead00001eae, - 0x1eaf00001eb0, - 0x1eb100001eb2, - 0x1eb300001eb4, - 0x1eb500001eb6, - 0x1eb700001eb8, - 0x1eb900001eba, - 0x1ebb00001ebc, - 0x1ebd00001ebe, - 0x1ebf00001ec0, - 0x1ec100001ec2, - 0x1ec300001ec4, - 0x1ec500001ec6, - 0x1ec700001ec8, - 0x1ec900001eca, - 0x1ecb00001ecc, - 0x1ecd00001ece, - 0x1ecf00001ed0, - 0x1ed100001ed2, - 0x1ed300001ed4, - 0x1ed500001ed6, - 0x1ed700001ed8, - 0x1ed900001eda, - 0x1edb00001edc, - 0x1edd00001ede, - 0x1edf00001ee0, - 0x1ee100001ee2, - 0x1ee300001ee4, - 0x1ee500001ee6, - 0x1ee700001ee8, - 0x1ee900001eea, - 0x1eeb00001eec, - 0x1eed00001eee, - 0x1eef00001ef0, - 0x1ef100001ef2, - 0x1ef300001ef4, - 0x1ef500001ef6, - 0x1ef700001ef8, - 0x1ef900001efa, - 0x1efb00001efc, - 0x1efd00001efe, - 0x1eff00001f08, - 0x1f1000001f16, - 0x1f2000001f28, - 0x1f3000001f38, - 0x1f4000001f46, - 0x1f5000001f58, - 0x1f6000001f68, - 0x1f7000001f71, - 0x1f7200001f73, - 0x1f7400001f75, - 0x1f7600001f77, - 0x1f7800001f79, - 0x1f7a00001f7b, - 0x1f7c00001f7d, - 0x1fb000001fb2, - 0x1fb600001fb7, - 0x1fc600001fc7, - 0x1fd000001fd3, - 0x1fd600001fd8, - 0x1fe000001fe3, - 0x1fe400001fe8, - 0x1ff600001ff7, - 0x214e0000214f, - 0x218400002185, - 0x2c3000002c60, - 0x2c6100002c62, - 0x2c6500002c67, - 0x2c6800002c69, - 0x2c6a00002c6b, - 0x2c6c00002c6d, - 0x2c7100002c72, - 0x2c7300002c75, - 0x2c7600002c7c, - 0x2c8100002c82, - 0x2c8300002c84, - 0x2c8500002c86, - 0x2c8700002c88, - 0x2c8900002c8a, - 0x2c8b00002c8c, - 0x2c8d00002c8e, - 0x2c8f00002c90, - 0x2c9100002c92, - 0x2c9300002c94, - 0x2c9500002c96, - 0x2c9700002c98, - 0x2c9900002c9a, - 0x2c9b00002c9c, - 0x2c9d00002c9e, - 0x2c9f00002ca0, - 0x2ca100002ca2, - 0x2ca300002ca4, - 0x2ca500002ca6, - 0x2ca700002ca8, - 0x2ca900002caa, - 0x2cab00002cac, - 0x2cad00002cae, - 0x2caf00002cb0, - 0x2cb100002cb2, - 0x2cb300002cb4, - 0x2cb500002cb6, - 0x2cb700002cb8, - 0x2cb900002cba, - 0x2cbb00002cbc, - 0x2cbd00002cbe, - 0x2cbf00002cc0, - 0x2cc100002cc2, - 0x2cc300002cc4, - 0x2cc500002cc6, - 0x2cc700002cc8, - 0x2cc900002cca, - 0x2ccb00002ccc, - 0x2ccd00002cce, - 0x2ccf00002cd0, - 0x2cd100002cd2, - 0x2cd300002cd4, - 0x2cd500002cd6, - 0x2cd700002cd8, - 0x2cd900002cda, - 0x2cdb00002cdc, - 0x2cdd00002cde, - 0x2cdf00002ce0, - 0x2ce100002ce2, - 0x2ce300002ce5, - 0x2cec00002ced, - 0x2cee00002cf2, - 0x2cf300002cf4, - 0x2d0000002d26, - 0x2d2700002d28, - 0x2d2d00002d2e, - 0x2d3000002d68, - 0x2d7f00002d97, - 0x2da000002da7, - 0x2da800002daf, - 0x2db000002db7, - 0x2db800002dbf, - 0x2dc000002dc7, - 0x2dc800002dcf, - 0x2dd000002dd7, - 0x2dd800002ddf, - 0x2de000002e00, - 0x2e2f00002e30, - 0x300500003008, - 0x302a0000302e, - 0x303c0000303d, - 0x304100003097, - 0x30990000309b, - 0x309d0000309f, - 0x30a1000030fb, - 0x30fc000030ff, - 0x310500003130, - 0x31a0000031c0, - 0x31f000003200, - 0x340000004dc0, - 0x4e000000a48d, - 0xa4d00000a4fe, - 0xa5000000a60d, - 0xa6100000a62c, - 0xa6410000a642, - 0xa6430000a644, - 
0xa6450000a646, - 0xa6470000a648, - 0xa6490000a64a, - 0xa64b0000a64c, - 0xa64d0000a64e, - 0xa64f0000a650, - 0xa6510000a652, - 0xa6530000a654, - 0xa6550000a656, - 0xa6570000a658, - 0xa6590000a65a, - 0xa65b0000a65c, - 0xa65d0000a65e, - 0xa65f0000a660, - 0xa6610000a662, - 0xa6630000a664, - 0xa6650000a666, - 0xa6670000a668, - 0xa6690000a66a, - 0xa66b0000a66c, - 0xa66d0000a670, - 0xa6740000a67e, - 0xa67f0000a680, - 0xa6810000a682, - 0xa6830000a684, - 0xa6850000a686, - 0xa6870000a688, - 0xa6890000a68a, - 0xa68b0000a68c, - 0xa68d0000a68e, - 0xa68f0000a690, - 0xa6910000a692, - 0xa6930000a694, - 0xa6950000a696, - 0xa6970000a698, - 0xa6990000a69a, - 0xa69b0000a69c, - 0xa69e0000a6e6, - 0xa6f00000a6f2, - 0xa7170000a720, - 0xa7230000a724, - 0xa7250000a726, - 0xa7270000a728, - 0xa7290000a72a, - 0xa72b0000a72c, - 0xa72d0000a72e, - 0xa72f0000a732, - 0xa7330000a734, - 0xa7350000a736, - 0xa7370000a738, - 0xa7390000a73a, - 0xa73b0000a73c, - 0xa73d0000a73e, - 0xa73f0000a740, - 0xa7410000a742, - 0xa7430000a744, - 0xa7450000a746, - 0xa7470000a748, - 0xa7490000a74a, - 0xa74b0000a74c, - 0xa74d0000a74e, - 0xa74f0000a750, - 0xa7510000a752, - 0xa7530000a754, - 0xa7550000a756, - 0xa7570000a758, - 0xa7590000a75a, - 0xa75b0000a75c, - 0xa75d0000a75e, - 0xa75f0000a760, - 0xa7610000a762, - 0xa7630000a764, - 0xa7650000a766, - 0xa7670000a768, - 0xa7690000a76a, - 0xa76b0000a76c, - 0xa76d0000a76e, - 0xa76f0000a770, - 0xa7710000a779, - 0xa77a0000a77b, - 0xa77c0000a77d, - 0xa77f0000a780, - 0xa7810000a782, - 0xa7830000a784, - 0xa7850000a786, - 0xa7870000a789, - 0xa78c0000a78d, - 0xa78e0000a790, - 0xa7910000a792, - 0xa7930000a796, - 0xa7970000a798, - 0xa7990000a79a, - 0xa79b0000a79c, - 0xa79d0000a79e, - 0xa79f0000a7a0, - 0xa7a10000a7a2, - 0xa7a30000a7a4, - 0xa7a50000a7a6, - 0xa7a70000a7a8, - 0xa7a90000a7aa, - 0xa7af0000a7b0, - 0xa7b50000a7b6, - 0xa7b70000a7b8, - 0xa7b90000a7ba, - 0xa7bb0000a7bc, - 0xa7bd0000a7be, - 0xa7bf0000a7c0, - 0xa7c10000a7c2, - 0xa7c30000a7c4, - 0xa7c80000a7c9, - 0xa7ca0000a7cb, - 0xa7d10000a7d2, - 0xa7d30000a7d4, - 0xa7d50000a7d6, - 0xa7d70000a7d8, - 0xa7d90000a7da, - 0xa7f20000a7f5, - 0xa7f60000a7f8, - 0xa7fa0000a828, - 0xa82c0000a82d, - 0xa8400000a874, - 0xa8800000a8c6, - 0xa8d00000a8da, - 0xa8e00000a8f8, - 0xa8fb0000a8fc, - 0xa8fd0000a92e, - 0xa9300000a954, - 0xa9800000a9c1, - 0xa9cf0000a9da, - 0xa9e00000a9ff, - 0xaa000000aa37, - 0xaa400000aa4e, - 0xaa500000aa5a, - 0xaa600000aa77, - 0xaa7a0000aac3, - 0xaadb0000aade, - 0xaae00000aaf0, - 0xaaf20000aaf7, - 0xab010000ab07, - 0xab090000ab0f, - 0xab110000ab17, - 0xab200000ab27, - 0xab280000ab2f, - 0xab300000ab5b, - 0xab600000ab6a, - 0xabc00000abeb, - 0xabec0000abee, - 0xabf00000abfa, - 0xac000000d7a4, - 0xfa0e0000fa10, - 0xfa110000fa12, - 0xfa130000fa15, - 0xfa1f0000fa20, - 0xfa210000fa22, - 0xfa230000fa25, - 0xfa270000fa2a, - 0xfb1e0000fb1f, - 0xfe200000fe30, - 0xfe730000fe74, - 0x100000001000c, - 0x1000d00010027, - 0x100280001003b, - 0x1003c0001003e, - 0x1003f0001004e, - 0x100500001005e, - 0x10080000100fb, - 0x101fd000101fe, - 0x102800001029d, - 0x102a0000102d1, - 0x102e0000102e1, - 0x1030000010320, - 0x1032d00010341, - 0x103420001034a, - 0x103500001037b, - 0x103800001039e, - 0x103a0000103c4, - 0x103c8000103d0, - 0x104280001049e, - 0x104a0000104aa, - 0x104d8000104fc, - 0x1050000010528, - 0x1053000010564, - 0x10597000105a2, - 0x105a3000105b2, - 0x105b3000105ba, - 0x105bb000105bd, - 0x1060000010737, - 0x1074000010756, - 0x1076000010768, - 0x1078000010786, - 0x10787000107b1, - 0x107b2000107bb, - 0x1080000010806, - 0x1080800010809, - 0x1080a00010836, - 
0x1083700010839, - 0x1083c0001083d, - 0x1083f00010856, - 0x1086000010877, - 0x108800001089f, - 0x108e0000108f3, - 0x108f4000108f6, - 0x1090000010916, - 0x109200001093a, - 0x10980000109b8, - 0x109be000109c0, - 0x10a0000010a04, - 0x10a0500010a07, - 0x10a0c00010a14, - 0x10a1500010a18, - 0x10a1900010a36, - 0x10a3800010a3b, - 0x10a3f00010a40, - 0x10a6000010a7d, - 0x10a8000010a9d, - 0x10ac000010ac8, - 0x10ac900010ae7, - 0x10b0000010b36, - 0x10b4000010b56, - 0x10b6000010b73, - 0x10b8000010b92, - 0x10c0000010c49, - 0x10cc000010cf3, - 0x10d0000010d28, - 0x10d3000010d3a, - 0x10e8000010eaa, - 0x10eab00010ead, - 0x10eb000010eb2, - 0x10f0000010f1d, - 0x10f2700010f28, - 0x10f3000010f51, - 0x10f7000010f86, - 0x10fb000010fc5, - 0x10fe000010ff7, - 0x1100000011047, - 0x1106600011076, - 0x1107f000110bb, - 0x110c2000110c3, - 0x110d0000110e9, - 0x110f0000110fa, - 0x1110000011135, - 0x1113600011140, - 0x1114400011148, - 0x1115000011174, - 0x1117600011177, - 0x11180000111c5, - 0x111c9000111cd, - 0x111ce000111db, - 0x111dc000111dd, - 0x1120000011212, - 0x1121300011238, - 0x1123e0001123f, - 0x1128000011287, - 0x1128800011289, - 0x1128a0001128e, - 0x1128f0001129e, - 0x1129f000112a9, - 0x112b0000112eb, - 0x112f0000112fa, - 0x1130000011304, - 0x113050001130d, - 0x1130f00011311, - 0x1131300011329, - 0x1132a00011331, - 0x1133200011334, - 0x113350001133a, - 0x1133b00011345, - 0x1134700011349, - 0x1134b0001134e, - 0x1135000011351, - 0x1135700011358, - 0x1135d00011364, - 0x113660001136d, - 0x1137000011375, - 0x114000001144b, - 0x114500001145a, - 0x1145e00011462, - 0x11480000114c6, - 0x114c7000114c8, - 0x114d0000114da, - 0x11580000115b6, - 0x115b8000115c1, - 0x115d8000115de, - 0x1160000011641, - 0x1164400011645, - 0x116500001165a, - 0x11680000116b9, - 0x116c0000116ca, - 0x117000001171b, - 0x1171d0001172c, - 0x117300001173a, - 0x1174000011747, - 0x118000001183b, - 0x118c0000118ea, - 0x118ff00011907, - 0x119090001190a, - 0x1190c00011914, - 0x1191500011917, - 0x1191800011936, - 0x1193700011939, - 0x1193b00011944, - 0x119500001195a, - 0x119a0000119a8, - 0x119aa000119d8, - 0x119da000119e2, - 0x119e3000119e5, - 0x11a0000011a3f, - 0x11a4700011a48, - 0x11a5000011a9a, - 0x11a9d00011a9e, - 0x11ab000011af9, - 0x11c0000011c09, - 0x11c0a00011c37, - 0x11c3800011c41, - 0x11c5000011c5a, - 0x11c7200011c90, - 0x11c9200011ca8, - 0x11ca900011cb7, - 0x11d0000011d07, - 0x11d0800011d0a, - 0x11d0b00011d37, - 0x11d3a00011d3b, - 0x11d3c00011d3e, - 0x11d3f00011d48, - 0x11d5000011d5a, - 0x11d6000011d66, - 0x11d6700011d69, - 0x11d6a00011d8f, - 0x11d9000011d92, - 0x11d9300011d99, - 0x11da000011daa, - 0x11ee000011ef7, - 0x11fb000011fb1, - 0x120000001239a, - 0x1248000012544, - 0x12f9000012ff1, - 0x130000001342f, - 0x1440000014647, - 0x1680000016a39, - 0x16a4000016a5f, - 0x16a6000016a6a, - 0x16a7000016abf, - 0x16ac000016aca, - 0x16ad000016aee, - 0x16af000016af5, - 0x16b0000016b37, - 0x16b4000016b44, - 0x16b5000016b5a, - 0x16b6300016b78, - 0x16b7d00016b90, - 0x16e6000016e80, - 0x16f0000016f4b, - 0x16f4f00016f88, - 0x16f8f00016fa0, - 0x16fe000016fe2, - 0x16fe300016fe5, - 0x16ff000016ff2, - 0x17000000187f8, - 0x1880000018cd6, - 0x18d0000018d09, - 0x1aff00001aff4, - 0x1aff50001affc, - 0x1affd0001afff, - 0x1b0000001b123, - 0x1b1500001b153, - 0x1b1640001b168, - 0x1b1700001b2fc, - 0x1bc000001bc6b, - 0x1bc700001bc7d, - 0x1bc800001bc89, - 0x1bc900001bc9a, - 0x1bc9d0001bc9f, - 0x1cf000001cf2e, - 0x1cf300001cf47, - 0x1da000001da37, - 0x1da3b0001da6d, - 0x1da750001da76, - 0x1da840001da85, - 0x1da9b0001daa0, - 0x1daa10001dab0, - 0x1df000001df1f, - 0x1e0000001e007, - 
0x1e0080001e019, - 0x1e01b0001e022, - 0x1e0230001e025, - 0x1e0260001e02b, - 0x1e1000001e12d, - 0x1e1300001e13e, - 0x1e1400001e14a, - 0x1e14e0001e14f, - 0x1e2900001e2af, - 0x1e2c00001e2fa, - 0x1e7e00001e7e7, - 0x1e7e80001e7ec, - 0x1e7ed0001e7ef, - 0x1e7f00001e7ff, - 0x1e8000001e8c5, - 0x1e8d00001e8d7, - 0x1e9220001e94c, - 0x1e9500001e95a, - 0x1fbf00001fbfa, - 0x200000002a6e0, - 0x2a7000002b739, - 0x2b7400002b81e, - 0x2b8200002cea2, - 0x2ceb00002ebe1, - 0x300000003134b, - ), - 'CONTEXTJ': ( - 0x200c0000200e, - ), - 'CONTEXTO': ( - 0xb7000000b8, - 0x37500000376, - 0x5f3000005f5, - 0x6600000066a, - 0x6f0000006fa, - 0x30fb000030fc, - ), -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/_log_render.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/_log_render.py deleted file mode 100644 index fc16c84437a8a34231c44d3f0a331459ddcb0f34..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/_log_render.py +++ /dev/null @@ -1,94 +0,0 @@ -from datetime import datetime -from typing import Iterable, List, Optional, TYPE_CHECKING, Union, Callable - - -from .text import Text, TextType - -if TYPE_CHECKING: - from .console import Console, ConsoleRenderable, RenderableType - from .table import Table - -FormatTimeCallable = Callable[[datetime], Text] - - -class LogRender: - def __init__( - self, - show_time: bool = True, - show_level: bool = False, - show_path: bool = True, - time_format: Union[str, FormatTimeCallable] = "[%x %X]", - omit_repeated_times: bool = True, - level_width: Optional[int] = 8, - ) -> None: - self.show_time = show_time - self.show_level = show_level - self.show_path = show_path - self.time_format = time_format - self.omit_repeated_times = omit_repeated_times - self.level_width = level_width - self._last_time: Optional[Text] = None - - def __call__( - self, - console: "Console", - renderables: Iterable["ConsoleRenderable"], - log_time: Optional[datetime] = None, - time_format: Optional[Union[str, FormatTimeCallable]] = None, - level: TextType = "", - path: Optional[str] = None, - line_no: Optional[int] = None, - link_path: Optional[str] = None, - ) -> "Table": - from .containers import Renderables - from .table import Table - - output = Table.grid(padding=(0, 1)) - output.expand = True - if self.show_time: - output.add_column(style="log.time") - if self.show_level: - output.add_column(style="log.level", width=self.level_width) - output.add_column(ratio=1, style="log.message", overflow="fold") - if self.show_path and path: - output.add_column(style="log.path") - row: List["RenderableType"] = [] - if self.show_time: - log_time = log_time or console.get_datetime() - time_format = time_format or self.time_format - if callable(time_format): - log_time_display = time_format(log_time) - else: - log_time_display = Text(log_time.strftime(time_format)) - if log_time_display == self._last_time and self.omit_repeated_times: - row.append(Text(" " * len(log_time_display))) - else: - row.append(log_time_display) - self._last_time = log_time_display - if self.show_level: - row.append(level) - - row.append(Renderables(renderables)) - if self.show_path and path: - path_text = Text() - path_text.append( - path, style=f"link file://{link_path}" if link_path else "" - ) - if line_no: - path_text.append(":") - path_text.append( - f"{line_no}", - style=f"link file://{link_path}#{line_no}" if link_path else "", - ) - row.append(path_text) - - 
output.add_row(*row) - return output - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - - c = Console() - c.print("[on blue]Hello", justify="right") - c.log("[on blue]hello", justify="right") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/requirements.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/requirements.py deleted file mode 100644 index 9495a1df1e6e8a738a6f26efed3657f2b709a11f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/requirements.py +++ /dev/null @@ -1,145 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. -from __future__ import absolute_import, division, print_function - -import string -import re - -from pkg_resources.extern.pyparsing import stringStart, stringEnd, originalTextFor, ParseException -from pkg_resources.extern.pyparsing import ZeroOrMore, Word, Optional, Regex, Combine -from pkg_resources.extern.pyparsing import Literal as L # noqa -from urllib import parse as urlparse - -from ._typing import TYPE_CHECKING -from .markers import MARKER_EXPR, Marker -from .specifiers import LegacySpecifier, Specifier, SpecifierSet - -if TYPE_CHECKING: # pragma: no cover - from typing import List - - -class InvalidRequirement(ValueError): - """ - An invalid requirement was found, users should refer to PEP 508. - """ - - -ALPHANUM = Word(string.ascii_letters + string.digits) - -LBRACKET = L("[").suppress() -RBRACKET = L("]").suppress() -LPAREN = L("(").suppress() -RPAREN = L(")").suppress() -COMMA = L(",").suppress() -SEMICOLON = L(";").suppress() -AT = L("@").suppress() - -PUNCTUATION = Word("-_.") -IDENTIFIER_END = ALPHANUM | (ZeroOrMore(PUNCTUATION) + ALPHANUM) -IDENTIFIER = Combine(ALPHANUM + ZeroOrMore(IDENTIFIER_END)) - -NAME = IDENTIFIER("name") -EXTRA = IDENTIFIER - -URI = Regex(r"[^ ]+")("url") -URL = AT + URI - -EXTRAS_LIST = EXTRA + ZeroOrMore(COMMA + EXTRA) -EXTRAS = (LBRACKET + Optional(EXTRAS_LIST) + RBRACKET)("extras") - -VERSION_PEP440 = Regex(Specifier._regex_str, re.VERBOSE | re.IGNORECASE) -VERSION_LEGACY = Regex(LegacySpecifier._regex_str, re.VERBOSE | re.IGNORECASE) - -VERSION_ONE = VERSION_PEP440 ^ VERSION_LEGACY -VERSION_MANY = Combine( - VERSION_ONE + ZeroOrMore(COMMA + VERSION_ONE), joinString=",", adjacent=False -)("_raw_spec") -_VERSION_SPEC = Optional(((LPAREN + VERSION_MANY + RPAREN) | VERSION_MANY)) -_VERSION_SPEC.setParseAction(lambda s, l, t: t._raw_spec or "") - -VERSION_SPEC = originalTextFor(_VERSION_SPEC)("specifier") -VERSION_SPEC.setParseAction(lambda s, l, t: t[1]) - -MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") -MARKER_EXPR.setParseAction( - lambda s, l, t: Marker(s[t._original_start : t._original_end]) -) -MARKER_SEPARATOR = SEMICOLON -MARKER = MARKER_SEPARATOR + MARKER_EXPR - -VERSION_AND_MARKER = VERSION_SPEC + Optional(MARKER) -URL_AND_MARKER = URL + Optional(MARKER) - -NAMED_REQUIREMENT = NAME + Optional(EXTRAS) + (URL_AND_MARKER | VERSION_AND_MARKER) - -REQUIREMENT = stringStart + NAMED_REQUIREMENT + stringEnd -# pkg_resources.extern.pyparsing isn't thread safe during initialization, so we do it eagerly, see -# issue #104 -REQUIREMENT.parseString("x[]") - - -class Requirement(object): - """Parse a requirement. 
- - Parse a given requirement string into its parts, such as name, specifier, - URL, and extras. Raises InvalidRequirement on a badly-formed requirement - string. - """ - - # TODO: Can we test whether something is contained within a requirement? - # If so how do we do that? Do we need to test against the _name_ of - # the thing as well as the version? What about the markers? - # TODO: Can we normalize the name and extra name? - - def __init__(self, requirement_string): - # type: (str) -> None - try: - req = REQUIREMENT.parseString(requirement_string) - except ParseException as e: - raise InvalidRequirement( - 'Parse error at "{0!r}": {1}'.format( - requirement_string[e.loc : e.loc + 8], e.msg - ) - ) - - self.name = req.name - if req.url: - parsed_url = urlparse.urlparse(req.url) - if parsed_url.scheme == "file": - if urlparse.urlunparse(parsed_url) != req.url: - raise InvalidRequirement("Invalid URL given") - elif not (parsed_url.scheme and parsed_url.netloc) or ( - not parsed_url.scheme and not parsed_url.netloc - ): - raise InvalidRequirement("Invalid URL: {0}".format(req.url)) - self.url = req.url - else: - self.url = None - self.extras = set(req.extras.asList() if req.extras else []) - self.specifier = SpecifierSet(req.specifier) - self.marker = req.marker if req.marker else None - - def __str__(self): - # type: () -> str - parts = [self.name] # type: List[str] - - if self.extras: - parts.append("[{0}]".format(",".join(sorted(self.extras)))) - - if self.specifier: - parts.append(str(self.specifier)) - - if self.url: - parts.append("@ {0}".format(self.url)) - if self.marker: - parts.append(" ") - - if self.marker: - parts.append("; {0}".format(self.marker)) - - return "".join(parts) - - def __repr__(self): - # type: () -> str - return "<Requirement({0!r})>".format(str(self)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/__init__.py deleted file mode 100644 index 9d6f0bc0dd674e92a985a5f997b17039ade95217..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/__init__.py +++ /dev/null @@ -1,242 +0,0 @@ -"""Extensions to the 'distutils' for large or complex distributions""" - -from fnmatch import fnmatchcase -import functools -import os -import re - -import _distutils_hack.override # noqa: F401 - -import distutils.core -from distutils.errors import DistutilsOptionError -from distutils.util import convert_path - -from ._deprecation_warning import SetuptoolsDeprecationWarning - -import setuptools.version -from setuptools.extension import Extension -from setuptools.dist import Distribution -from setuptools.depends import Require -from . import monkey - - -__all__ = [ - 'setup', - 'Distribution', - 'Command', - 'Extension', - 'Require', - 'SetuptoolsDeprecationWarning', - 'find_packages', - 'find_namespace_packages', -] - -__version__ = setuptools.version.__version__ - -bootstrap_install_from = None - - -class PackageFinder: - """ - Generate a list of all Python packages found within a directory - """ - - @classmethod - def find(cls, where='.', exclude=(), include=('*',)): - """Return a list of all Python packages found within directory 'where' - - 'where' is the root directory which will be searched for packages. It - should be supplied as a "cross-platform" (i.e. URL-style) path; it will - be converted to the appropriate local path syntax. 
- - 'exclude' is a sequence of package names to exclude; '*' can be used - as a wildcard in the names, such that 'foo.*' will exclude all - subpackages of 'foo' (but not 'foo' itself). - - 'include' is a sequence of package names to include. If it's - specified, only the named packages will be included. If it's not - specified, all found packages will be included. 'include' can contain - shell style wildcard patterns just like 'exclude'. - """ - - return list( - cls._find_packages_iter( - convert_path(where), - cls._build_filter('ez_setup', '*__pycache__', *exclude), - cls._build_filter(*include), - ) - ) - - @classmethod - def _find_packages_iter(cls, where, exclude, include): - """ - All the packages found in 'where' that pass the 'include' filter, but - not the 'exclude' filter. - """ - for root, dirs, files in os.walk(where, followlinks=True): - # Copy dirs to iterate over it, then empty dirs. - all_dirs = dirs[:] - dirs[:] = [] - - for dir in all_dirs: - full_path = os.path.join(root, dir) - rel_path = os.path.relpath(full_path, where) - package = rel_path.replace(os.path.sep, '.') - - # Skip directory trees that are not valid packages - if '.' in dir or not cls._looks_like_package(full_path): - continue - - # Should this package be included? - if include(package) and not exclude(package): - yield package - - # Keep searching subdirectories, as there may be more packages - # down there, even if the parent was excluded. - dirs.append(dir) - - @staticmethod - def _looks_like_package(path): - """Does a directory look like a package?""" - return os.path.isfile(os.path.join(path, '__init__.py')) - - @staticmethod - def _build_filter(*patterns): - """ - Given a list of patterns, return a callable that will be true only if - the input matches at least one of the patterns. - """ - return lambda name: any(fnmatchcase(name, pat=pat) for pat in patterns) - - -class PEP420PackageFinder(PackageFinder): - @staticmethod - def _looks_like_package(path): - return True - - -find_packages = PackageFinder.find -find_namespace_packages = PEP420PackageFinder.find - - -def _install_setup_requires(attrs): - # Note: do not use `setuptools.Distribution` directly, as - # our PEP 517 backend patch `distutils.core.Distribution`. - class MinimalDistribution(distutils.core.Distribution): - """ - A minimal version of a distribution for supporting the - fetch_build_eggs interface. - """ - - def __init__(self, attrs): - _incl = 'dependency_links', 'setup_requires' - filtered = {k: attrs[k] for k in set(_incl) & set(attrs)} - distutils.core.Distribution.__init__(self, filtered) - - def finalize_options(self): - """ - Disable finalize_options to avoid building the working set. - Ref #2158. - """ - - dist = MinimalDistribution(attrs) - - # Honor setup.cfg's options. - dist.parse_config_files(ignore_option_errors=True) - if dist.setup_requires: - dist.fetch_build_eggs(dist.setup_requires) - - -def setup(**attrs): - # Make sure we have any requirements needed to interpret 'attrs'. - _install_setup_requires(attrs) - return distutils.core.setup(**attrs) - - -setup.__doc__ = distutils.core.setup.__doc__ - - -_Command = monkey.get_unpatched(distutils.core.Command) - - -class Command(_Command): - __doc__ = _Command.__doc__ - - command_consumes_arguments = False - - def __init__(self, dist, **kw): - """ - Construct the command for dist, updating - vars(self) with any keyword parameters. 
- """ - _Command.__init__(self, dist) - vars(self).update(kw) - - def _ensure_stringlike(self, option, what, default=None): - val = getattr(self, option) - if val is None: - setattr(self, option, default) - return default - elif not isinstance(val, str): - raise DistutilsOptionError( - "'%s' must be a %s (got `%s`)" % (option, what, val) - ) - return val - - def ensure_string_list(self, option): - r"""Ensure that 'option' is a list of strings. If 'option' is - currently a string, we split it either on /,\s*/ or /\s+/, so - "foo bar baz", "foo,bar,baz", and "foo, bar baz" all become - ["foo", "bar", "baz"]. - """ - val = getattr(self, option) - if val is None: - return - elif isinstance(val, str): - setattr(self, option, re.split(r',\s*|\s+', val)) - else: - if isinstance(val, list): - ok = all(isinstance(v, str) for v in val) - else: - ok = False - if not ok: - raise DistutilsOptionError( - "'%s' must be a list of strings (got %r)" % (option, val) - ) - - def reinitialize_command(self, command, reinit_subcommands=0, **kw): - cmd = _Command.reinitialize_command(self, command, reinit_subcommands) - vars(cmd).update(kw) - return cmd - - -def _find_all_simple(path): - """ - Find all files under 'path' - """ - results = ( - os.path.join(base, file) - for base, dirs, files in os.walk(path, followlinks=True) - for file in files - ) - return filter(os.path.isfile, results) - - -def findall(dir=os.curdir): - """ - Find all files under 'dir' and return the list of full filenames. - Unless dir is '.', return full filenames with dir prepended. - """ - files = _find_all_simple(dir) - if dir == os.curdir: - make_rel = functools.partial(os.path.relpath, start=dir) - files = map(make_rel, files) - return list(files) - - -class sic(str): - """Treat this string as-is (https://en.wikipedia.org/wiki/Sic)""" - - -# Apply monkey patches -monkey.patch_all() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/version.py deleted file mode 100644 index 11cbaea79d1f4f46f9ae4bea542d7c66ded96e34..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/tqdm/version.py +++ /dev/null @@ -1,9 +0,0 @@ -"""`tqdm` version detector. 
Precedence: installed dist, git, 'UNKNOWN'.""" -try: - from ._dist_ver import __version__ -except ImportError: - try: - from setuptools_scm import get_version - __version__ = get_version(root='..', relative_to=__file__) - except (ImportError, LookupError): - __version__ = "UNKNOWN" diff --git a/spaces/pycoming/bingo/src/lib/hooks/use-enter-submit.tsx b/spaces/pycoming/bingo/src/lib/hooks/use-enter-submit.tsx deleted file mode 100644 index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000 --- a/spaces/pycoming/bingo/src/lib/hooks/use-enter-submit.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import { useRef, type RefObject } from 'react' - -export function useEnterSubmit(): { - formRef: RefObject - onKeyDown: (event: React.KeyboardEvent) => void -} { - const formRef = useRef(null) - - const handleKeyDown = ( - event: React.KeyboardEvent - ): void => { - if ( - event.key === 'Enter' && - !event.shiftKey && - !event.nativeEvent.isComposing - ) { - formRef.current?.requestSubmit() - event.preventDefault() - } - } - - return { formRef, onKeyDown: handleKeyDown } -} diff --git a/spaces/qinzhu/diy-girlfriend-online/text/ngu_dialect.py b/spaces/qinzhu/diy-girlfriend-online/text/ngu_dialect.py deleted file mode 100644 index 69d0ce6fe5a989843ee059a71ccab793f20f9176..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/diy-girlfriend-online/text/ngu_dialect.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang', - 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', - 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC("chinese_dialect_lexicons/"+dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/qmjnh/FLowerCLassification/app.py b/spaces/qmjnh/FLowerCLassification/app.py deleted file mode 100644 index c8b71b43e1beeae82f5fb55b72295cc05a44569c..0000000000000000000000000000000000000000 --- a/spaces/qmjnh/FLowerCLassification/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import gradio as gr -import numpy as np -import tensorflow as tf - -def softmax(vector): - e = np.exp(vector) - return e / e.sum() - -def image_to_output (input_img): - gr_img = [] - gr_img.append(input_img) - img2 = tf.image.resize(tf.cast(gr_img, tf.float32)/255. 
, [224, 224]) - - # print(img2) - - x_test = np.asarray(img2) - - prediction = model2.predict(x_test,batch_size=1).flatten() - prediction = softmax(prediction) - - confidences = {labels[i]: float(prediction[i]) for i in range(102)} - # confidences = {labels[i]:float(top[i]) for i in range(num_predictions)} - - return confidences - -# Download the model checkpoint -import os -import requests -pretrained_repo = 'pretrained_model' -model_repo_link = 'https://huggingface.co/qmjnh/FLowerCLassification-model/resolve/main/' -for item in [ - 'variables.data-00000-of-00001', - 'variables.index', - 'keras_metadata.pb', - 'saved_model.pb', - ]: - params = requests.get(model_repo_link+item) - if item.startswith('variables'): - output_file = os.path.join(pretrained_repo, 'variables', item) - else: output_file = os.path.join(pretrained_repo, item) - if not os.path.exists(os.path.dirname(output_file)): - os.makedirs(os.path.dirname(output_file)) - with open(output_file, 'wb') as f: - print(f'Downloading from {model_repo_link+item} to {output_file}') - f.write(params.content) - -# Load the model -model2=tf.keras.models.load_model(pretrained_repo) - -# Read the labels -with open('flower_names.txt') as f: - labels = f.readlines() - -# Run gradio -from gradio.components import Image as gradio_image -from gradio.components import Label as gradio_label -UI=gr.Interface(fn=image_to_output, - inputs=gradio_image(shape=(224,224)), - outputs=gradio_label(num_top_classes=5), - interpretation="default" - ) -UI.launch() \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/3d Sex Villa 2 Full Version Highly Compressed !!TOP!!.md b/spaces/quidiaMuxgu/Expedit-SAM/3d Sex Villa 2 Full Version Highly Compressed !!TOP!!.md deleted file mode 100644 index 98eb5a0b45d631038be4599689af2ea68376640d..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/3d Sex Villa 2 Full Version Highly Compressed !!TOP!!.md +++ /dev/null @@ -1,10 +0,0 @@ -

        3d Sex Villa 2 Full Version Highly Compressed


        DOWNLOADhttps://geags.com/2uCru4



        - -Aug 17, 2014 - 3D Sexvilla is the best interactive sex simulator for your PC. Sexy girls, extremely detailed and realistically modeled in 3D, ... Download 3d porn download. -Download 3D Sex Villa 2 - Sex Villa. -Download free and without registration: porn movie, Russian porn videos, porno cartoons, HD, 3D, porn games, ... -Download free 3D Sex Villa - sex villa - SexVilla 2 The Klub - sex villa in Russian - 3d Sexvilla 2 Klub Edition V037 Rus - 3D Sexvilla 2 ... Download 3D SexVilla 2 - sex villa - sex villa in Russian - 3d Sexvilla 2 - sex villa download torrent. -The Klub - sex villa game in Russian! 8a78ff9644
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Dell N5110 Windows 7 Home Premium X64 Recovery Disks Free Download ((NEW)).md b/spaces/quidiaMuxgu/Expedit-SAM/Dell N5110 Windows 7 Home Premium X64 Recovery Disks Free Download ((NEW)).md deleted file mode 100644 index aa218b72a9d23de56fda8061449e9a486fef2a91..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Dell N5110 Windows 7 Home Premium X64 Recovery Disks Free Download ((NEW)).md +++ /dev/null @@ -1,65 +0,0 @@ -
        -

        Dell N5110 Windows 7 Home Premium x64 Recovery Disks Free Download

        -

        If you have a Dell N5110 laptop that came with Windows 7 Home Premium x64 preinstalled, you may need to restore your operating system in case of a computer crash, hard drive replacement, or other serious software problem. Fortunately, you can download the recovery disks for free from Dell's website and use them to reinstall Windows 7 on your laptop.

        -

        Dell N5110 Windows 7 Home Premium x64 Recovery Disks free download


        Download Zip ✏ ✏ ✏ https://geags.com/2uCq61



        -

        In this article, we will show you how to download and use the Dell N5110 Windows 7 Home Premium x64 recovery disks for free. We will also explain why you may need them and what are the benefits of using them.

        -

        Why Do You Need Dell N5110 Windows 7 Home Premium x64 Recovery Disks?

        -

        Windows 7 is a popular operating system that offers many features and functions for users. However, it is not immune to errors, viruses, malware, or hardware failures that can cause your laptop to malfunction or stop working altogether. In such cases, you may need to reinstall Windows 7 from scratch to fix the problem and restore your laptop to its original state.

        -

        However, reinstalling Windows 7 can be tricky if you don't have the original installation media or a backup of your system. You may lose your personal files, settings, drivers, and applications that you installed on your laptop. You may also face compatibility issues with some devices or software that require specific drivers or updates.

        -

        That's why Dell provides recovery disks for its laptops that contain the exact image of the operating system that was preinstalled on your laptop when you bought it. These disks include all the drivers, updates, and applications that are compatible with your laptop model and hardware configuration. By using these disks, you can easily restore your laptop to its factory settings without losing any data or functionality.

        -

        -

        How to Download Dell N5110 Windows 7 Home Premium x64 Recovery Disks for Free?

        -

        Dell offers two ways to download the recovery disks for your laptop: using the Dell OS Recovery Tool or using the Dell Support website. Both methods are free and easy to use.

        -

        Using the Dell OS Recovery Tool

        -

        The Dell OS Recovery Tool is a software application that allows you to download either Microsoft Windows, Ubuntu, or Linux operating system recovery image that was preinstalled on your Dell computer. You can use this tool to create a bootable USB drive or DVD that contains the recovery image.

        -

        To use this tool, follow these steps:

        -
          -
        1. Download and install the Dell OS Recovery Tool from https://www.dell.com/support/home/drivers/osiso/recoverytool.
        2. -
        3. Launch the tool and enter your service tag or express service code of your laptop. You can find these numbers on a sticker at the bottom of your laptop or on the BIOS screen.
        4. -
        5. Select the operating system that you want to download. In this case, choose Windows 7 Home Premium x64.
        6. -
        7. Select the language and region of your operating system.
        8. -
        9. Select whether you want to create a bootable USB drive or DVD. If you choose USB drive, make sure you have a blank USB drive with at least 8 GB of space. If you choose DVD, make sure you have a blank DVD and a DVD burner.
        10. -
        11. Follow the instructions on the screen to download and create the recovery media.
        12. -
        -

        Using the Dell Support Website

        -

        The Dell Support website is another option to download the recovery disks for your laptop. You can access this website from any computer with an internet connection and download the recovery image as an ISO file. You can then burn this file to a DVD or copy it to a USB drive using a third-party tool.

        -

        To use this method, follow these steps:

        -
          -
        1. Go to https://www.dell.com/support/home/en-us/product-support/product/inspiron-15r-n5110/drivers.
        2. -
        3. Click on Drivers & Downloads tab.
        4. -
        5. Under Operating System, select Windows 7 Home Premium x64.
        6. -
        7. Under Category, select Operating System.
        8. -
        9. Click on Download next to Windows 7 Home Premium SP1 (English) - Recovery Image - This image will work with all Windows 7 versions (Home Basic/Home Premium/Professional/Ultimate).
        10. -
        11. Save the ISO file to your computer.
        12. -
        13. Burn the ISO file to a DVD or copy it to a USB drive using a third-party tool such as Rufus or PowerISO.
        14. -
        -
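If you want to double-check the download before burning it in step 13, you can compute the file's checksum and compare it against the one published for your image (if Dell provides one). Here is a minimal Python sketch for that; the ISO file name is only a placeholder for whatever name you saved the image under.

```python
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The file name below is only an example; use the name of the ISO you saved.
print(sha256_of("Win7_HomePremium_SP1_Recovery.iso"))
```

If the digest you get does not match the published value, download the image again before writing it to a DVD or USB drive.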

        How to Use Dell N5110 Windows 7 Home Premium x64 Recovery Disks?

        -

        Once you have created the recovery media using either method, you can use it to restore your laptop to its factory settings. To do this, follow these steps:

        -
          -
1. Back up any important data that you want to keep from your laptop to an external drive or cloud storage.
        2. -
        3. Insert the recovery media (USB drive or DVD) into your laptop.
        4. -
        5. Restart your laptop and press F12 key repeatedly during boot up until you see the Boot Menu screen.
        6. -
        7. Select USB Storage Device or CD/DVD Drive from the list depending on what type of media you are using.
        8. -
        9. Press Enter to boot from the recovery media.
        10. -
        11. Follow the instructions on the screen to select your language, keyboard layout, and agree to the license terms.
        12. -
        13. Select Recover from a drive option.
        14. -
        15. Select Just remove my files option if you want to perform a quick format of your hard drive or Fully clean the drive option if you want to perform a secure erase of your hard drive.
        16. -
        17. Select Recover button to start the recovery process.
        18. -
        19. Wait for the process to complete. It may take several minutes depending on the size of your hard drive and speed of your media.
        20. -
        21. When prompted, remove the recovery media and restart your laptop.
        22. -
        23. Your laptop will boot into Windows 7 Home Premium x64 as it was when you first bought it. You can then set up your user account, password, network settings, and other preferences as usual.
        24. - -

        -

        What are the Benefits of Using Dell N5110 Windows 7 Home Premium x64 Recovery Disks?

        -

        Using the Dell N5110 Windows 7 Home Premium x64 recovery disks has many advantages over other methods of reinstalling Windows 7 on your laptop. Some of these benefits are:

        -
          -
        • You can save time and money by not having to buy a new copy of Windows 7 or pay for a professional service to fix your laptop.
        • -
        • You can avoid compatibility issues with your laptop's hardware and software by using the exact image of the operating system that was preinstalled on your laptop.
        • -
        • You can ensure that your laptop is secure and up-to-date by using the latest drivers, updates, and applications provided by Dell.
        • -
        • You can restore your laptop to its original performance and functionality by removing any errors, viruses, malware, or unwanted programs that may have affected your system.
        • -
        • You can keep your personal files, settings, and preferences intact by backing them up before using the recovery disks and restoring them after the recovery process.
        • -
        -

        Conclusion

        -

        Dell N5110 Windows 7 Home Premium x64 recovery disks are a great solution for restoring your laptop to its factory settings in case of a serious software problem. You can download these disks for free from Dell's website and use them to reinstall Windows 7 on your laptop without losing any data or functionality. By using these disks, you can enjoy the features and benefits of Windows 7 on your laptop as it was when you first bought it.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Easy Iso Dis V.44 Base [EXCLUSIVE].md b/spaces/quidiaMuxgu/Expedit-SAM/Easy Iso Dis V.44 Base [EXCLUSIVE].md deleted file mode 100644 index 652783a5c497ae935dbd6837f8fcd43e53a4590e..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Easy Iso Dis V.44 Base [EXCLUSIVE].md +++ /dev/null @@ -1,76 +0,0 @@ -
        -

        How to Install Easy DIS v.44 Base for BMW Diagnostic Software

        -

        Easy DIS v.44 Base is a software package that allows you to run BMW diagnostic software on your PC. It is based on the original GT1 software that is used by BMW dealers and technicians. Easy DIS v.44 Base can be installed on a virtual machine using VMware and Daemon Tools. In this article, we will show you how to install Easy DIS v.44 Base step by step.

        -

        What You Need

        -
          -
        • Easy DIS v.44 BASE iso file[^1^]
        • -
        • Easy DIS v.44 Program iso file[^1^]
        • -
        • VMware Workstation version 6[^2^]
        • -
        • Daemon Tools[^2^]
        • -
        • A PC with enough disk space and memory to run a virtual machine
        • -
        -

        How to Install

        -
          -
        1. Open VMware and create a new virtual machine with the following settings: -
            -
          • Type: Typical
          • -
          • Guest operating system: Other - Other
          • -
          • Name: Any name you like
          • -
          • Network: Host Only
          • -
          • Disk size: 18.635 GB, allocate all disk space now
          • -
          -
        2. -
        3. Remove the sound adapter from the virtual machine settings.
        4. -
        5. Add two additional Ethernet adapters and assign them to VMnet1.
        6. -
        7. Uncheck drag and drop from Guest Isolation in the options tab.
        8. -
        9. Mount the Easy DIS v.44 BASE iso file in Daemon Tools.
        10. -
        11. Edit the CD drive of your virtual machine to use the Daemon Tools drive.
        12. -
        13. Start the virtual machine and quickly press F2 to enter the BIOS.
        14. -
        15. Set the CD-ROM to be the first boot device.
        16. -
        17. Save and exit the BIOS. The installation of GT1 will begin automatically.
        18. -
        19. When the installation is finished, you will be asked to eject the CD and restart the VM, but do not do that yet.
        20. -
        21. Eject the Easy DIS v.44 BASE iso file from Daemon Tools.
        22. -
        23. Mount the Easy DIS v.44 Program iso file in Daemon Tools.
        24. -
        25. Shut off and restart the virtual machine.
        26. -
        27. Return to the BIOS by pressing F2 and set the boot order back to default.
        28. -
        29. Save and exit the BIOS. The VM will boot up and ask you to select your language, vehicle, country, dealer number, and other information.
        30. -
        31. Enter 12345 for dealer number and make up the rest of the information as you like.
        32. -
        33. Click End > Quit > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > End (yes, that's a lot of clicks).
        34. -
        35. The installation is complete. You can now run DIS from your desktop or start menu.
        36. -

        How to Use DIS

        -

DIS stands for Diagnostic and Information System. It is software that allows you to perform various diagnostic and coding functions on your BMW. You can read and clear fault codes, view live data, program modules, reset service intervals, and more. To use DIS, you need to connect your car to your PC using a compatible interface cable. The most common ones are INPA K+DCAN for OBD2 cars and INPA K+D-CAN for older cars with a round 20-pin connector. You also need to configure the network settings of your virtual machine to match the interface cable.

        -

        Easy Iso Dis V.44 Base


        Downloadhttps://geags.com/2uCrzR



        -

        Connecting Your Car

        -
          -
        1. Plug the interface cable into your car's diagnostic port. The location of the port may vary depending on the model and year of your car. It is usually under the dashboard, near the steering wheel, or in the engine bay.
        2. -
        3. Plug the other end of the cable into your PC's USB port.
        4. -
        5. Turn on the ignition of your car, but do not start the engine.
        6. -
        7. Open VMware and select your virtual machine.
        8. -
        9. Click on Edit > Virtual Network Editor.
        10. -
        11. Select VMnet1 from the list of virtual networks.
        12. -
        13. Click on NAT Settings.
        14. -
        15. Enter 192.168.68.1 for Gateway IP address.
        16. -
        17. Enter 255.255.255.0 for Subnet mask.
        18. -
        19. Click OK > Apply > OK.
        20. -
        21. Click on Edit > Virtual Machine Settings.
        22. -
        23. Select Network Adapter 1 from the list of hardware devices.
        24. -
        25. Click on Advanced > Generate.
        26. -
        27. Note down the MAC address that is generated.
        28. -
        29. Click OK > OK.
        30. -
        31. Start the virtual machine.
        32. -
        33. Login with username Administrator and password EasyDIS44.
        34. -
        -
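As an optional sanity check before starting DIS, you can confirm from the host that the VMnet1 gateway address entered above is reachable. The short Python sketch below just sends one ping to that address; whether it answers depends on your VMware setup, so treat a failed ping as a hint to re-check the virtual network settings rather than as proof that something is broken.

```python
import platform
import subprocess

# Gateway address configured for VMnet1 in the steps above.
GATEWAY = "192.168.68.1"

def ping(host):
    """Send a single ping to 'host' and return True if it answers."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", count_flag, "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    if ping(GATEWAY):
        print("VMnet1 gateway reachable")
    else:
        print("No reply from VMnet1 gateway - check the virtual network settings")
```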

        Running DIS

        -
          -
        1. Double-click on the DIS icon on the desktop or go to Start > Programs > DIS v44 > DIS v44.
        2. -
        3. Select your language and click OK.
        4. -
        5. Select Diagnosis from the main menu.
        6. -
        7. Select your vehicle series and model from the list or enter your VIN number.
        8. -
        9. Select Test Schedule from the sub-menu.
        10. -
        11. Select Quick Test from the list of test options.
        12. -
        13. The software will scan your car and display a list of modules and their status. Green means no faults, yellow means faults present, and red means communication error.
        14. -
        15. You can click on each module to view more details, such as fault codes, live data, coding options, etc.
        16. -
        17. You can also perform other functions from the sub-menu, such as Service Functions, Coding Data, or Component Activation.
        18. -

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Eca Vrt Disk 2012 Dvd Iso Full.zip.md b/spaces/quidiaMuxgu/Expedit-SAM/Eca Vrt Disk 2012 Dvd Iso Full.zip.md deleted file mode 100644 index 45e44507c3318ee33384eb46e5d0ab452c94d94a..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Eca Vrt Disk 2012 Dvd Iso Full.zip.md +++ /dev/null @@ -1,12 +0,0 @@ -

        Eca Vrt Disk 2012 Dvd Iso Full.zip


        DOWNLOADhttps://geags.com/2uCsjZ



        -
        -June 20, 2021 - Vrt shared files: eca vrt 2009.part01.rar mediafire baixar eca vrt dvd . download . Eka Vrt Disk 2012.rar. eca vrt dvd 2012 iso torrent. Eka Vrt Disc 2012. -Eka Vrt Disk 2012.rar. -Eka Vrt Disk 2012.rar. eca vrt 2012 iso torrent. -Eca Vrt Disc 2012.part01.rar, eca vrt 2012 iso torrent. -Eca Vrt Disc 2012.part01.rar, eca vrt dvd. -Eka Vrt Disc 2012.part01.rar. -Eka Vrt Disk 8a78ff9644
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Free Download Lumion 7 Pro Full Software.md b/spaces/quidiaMuxgu/Expedit-SAM/Free Download Lumion 7 Pro Full Software.md deleted file mode 100644 index 009225101c162fc9c92d1b0b9d90e4d340215bed..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Free Download Lumion 7 Pro Full Software.md +++ /dev/null @@ -1,9 +0,0 @@ - -

OpenStudio is an open-source 3D modeling application and part of the open architectural software series. It is a free, Windows-based download. This program is used by designers for the creation of building construction or land development projects. OpenStudio lets users sketch or paint on the 3D models.

        -

ReCAD Pro is professional 2D drafting software that provides an easy tool for cutting and extracting numbers of polygons into a single file. You can easily design diagrams and models in seconds and create architectural details, building styles, industry drawings, architectural floor plans, house plans and many more.

        -

        Free Download Lumion 7 Pro Full Software


        Download Zip ❤❤❤ https://geags.com/2uCrVN



        -

Lumion 7 Crack + Serial Key is powerful software. It is used to design your products and services from 3D elements. This program also helps to manage different views for rotating, rendering, and measuring without any other software. It is used for structural design, building parts, and exterior design.

        -

Lumion 7 Crack + Serial Key Full Version is architectural software. It is very efficient and among the best software of its kind. It is used for working with your data and designing your architecture. This software is very powerful for all professionals and designers.

        -

Lumion 7 Crack + Serial Key Full Version is an application for modeling a 3D building. It is helpful for comparing two different projects. It also supports 3D rendering and provides the best quality. It is best for home building and it is better than previous versions.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Honda Cbr 600 Rr Pc40 Bedienungsanleitung Deutsch.md b/spaces/quidiaMuxgu/Expedit-SAM/Honda Cbr 600 Rr Pc40 Bedienungsanleitung Deutsch.md deleted file mode 100644 index 5ac17588bd72b23218f5c96982dc6435368bb82f..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Honda Cbr 600 Rr Pc40 Bedienungsanleitung Deutsch.md +++ /dev/null @@ -1,55 +0,0 @@ - -

Honda CBR 600 RR PC40: The Ultimate Owner's Manual

        - -

The Honda CBR 600 RR PC40 is a sporty motorcycle that impresses both on the road and on the race track. With a powerful engine, agile handling and an aerodynamic design, the Honda CBR 600 RR PC40 offers pure riding fun. But to get the full potential out of this bike, it is important to know how to operate, maintain and care for it. In this article you will learn everything you need to know about the Honda CBR 600 RR PC40: from the owner's manual and the technical data to the tips and tricks for optimal performance.

        -

        honda cbr 600 rr pc40 bedienungsanleitung deutsch


        Download Filehttps://geags.com/2uCq2P



        - -

The Owner's Manual of the Honda CBR 600 RR PC40

        - -

The owner's manual of the Honda CBR 600 RR PC40 is an indispensable document for every owner of this motorcycle. It contains all the important information about the bike's functions, settings and safety instructions. You can download the owner's manual of the Honda CBR 600 RR PC40 free of charge online or request a printed copy from your Honda dealer. The owner's manual of the Honda CBR 600 RR PC40 is available in several languages, including German.

        - -

The owner's manual of the Honda CBR 600 RR PC40 is divided into the following chapters:

        - -
          -
• Foreword: Here you will find a short introduction to the motorcycle, the warranty conditions and how to use the owner's manual.
• -
• Safety: Here the basic safety rules for riding and maintaining the motorcycle are explained.
• -
• Instruments and displays: Here the various instruments and displays on the dashboard and handlebars are introduced and their functions described.
• -
• Operation: Here you will learn everything about operating the motorcycle, such as starting, shifting, braking, steering and parking.
• -
• Maintenance: Here you are shown how to regularly check, clean and lubricate your motorcycle to ensure optimal performance and longevity.
• -
• Troubleshooting: Here you will find a list of common problems and possible solutions in case your motorcycle does not work properly.
• -
• Technical data: Here the technical data of your motorcycle, such as the dimensions, weight, power output, fuel consumption and tyre size, are given.
• -
• Index: Here you can search by keyword to quickly find the information you need.
        • -
        - -

The Technical Data of the Honda CBR 600 RR PC40

        - -

The Honda CBR 600 RR PC40 is a powerful motorcycle with a liquid-cooled inline four-cylinder engine with a displacement of 599 cc. The engine produces 88 kW (120 hp) at 13,500 rpm and has a maximum torque of 66 Nm at 11,250 rpm. The Honda CBR 600 RR PC40 has a six-speed gearbox with a chain final drive. The top speed is around 260 km/h.

        - -

The Honda CBR 600 RR PC40 has an aluminium twin-spar frame with an upside-down fork at the front and a Pro-Link shock absorber at the rear. The brakes are disc brakes with an ABS system. The tyres are sized 120/70 ZR17 at the front and 180/55 ZR17 at the rear. The seat height is 820 mm and the kerb weight is 186 kg.

        -

        - -

Tips and Tricks for the Honda CBR 600 RR PC40

        - -

The Honda CBR 600 RR PC40 is a motorcycle that is a lot of fun but also requires a lot of care. To keep your bike in good condition and improve its performance, you should follow these tips and tricks:

        - -
          -
• Regularly check your motorcycle's oil level, tyre pressure, brake fluid and chain tension.
• -
• Change the oil and oil filter every 12,000 km or once a year.
• -
• Change the spark plugs every 24,000 km or every two years.
• -
• Change the air filter every 18,000 km or every one and a half years.
• -
• Change the brake pads every 12,000 km or when worn.
• -
• Change the tyres every 10,000 km or when worn out.
• -
• Clean your motorcycle regularly with a soft cloth and a mild cleaning agent. Avoid aggressive chemicals or pressure washers.
• -
• Store your motorcycle in a dry and sheltered place. If you do not use your motorcycle for a longer period, fill up the tank, increase the tyre pressure, disconnect the battery and use a cover.
• -
• Run your motorcycle in carefully when it is new or after a repair. Avoid high revs, hard acceleration or braking for the first 1,000 km.
• -
• Always ride your motorcycle at an appropriate speed and with regard to the road conditions and traffic rules.
        • -
        - -
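If you like, you can turn the service intervals above into a simple reminder. The small Python sketch below is only an illustration: it assumes each item was serviced exactly on schedule, and the odometer reading is a made-up placeholder.

```python
# Service intervals (km) taken from the list above.
INTERVALS_KM = {
    "oil and oil filter": 12000,
    "spark plugs": 24000,
    "air filter": 18000,
    "brake pads": 12000,
    "tyres": 10000,
}

def km_until_service(odometer_km):
    """Kilometres left until each item is next due, assuming service at every full interval."""
    return {item: interval - (odometer_km % interval)
            for item, interval in INTERVALS_KM.items()}

# Example with a placeholder odometer reading of 23,500 km.
print(km_until_service(23500))
```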

Conclusion

        - -

The Honda CBR 600 RR PC40 is a sporty motorcycle that offers you a lot of riding fun. To get the most out of your bike, you should familiarise yourself with the owner's manual, know the technical data and follow the tips and tricks. That way you can enjoy your motorcycle for a long time and ride safely.

        -


        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Mardaani 2 Full Movie Fix Download Free.md b/spaces/quidiaMuxgu/Expedit-SAM/Mardaani 2 Full Movie Fix Download Free.md deleted file mode 100644 index c31c8bbf810c94c220ec40337878631119ce3713..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Mardaani 2 Full Movie Fix Download Free.md +++ /dev/null @@ -1,24 +0,0 @@ -
        -

        Mardaani 2 Full Movie Download Free: Watch Rani Mukerji's Thrilling Action

        -

        If you are a fan of action thrillers and Rani Mukerji, you might be interested in watching Mardaani 2, the sequel to the 2014 hit movie Mardaani. Mardaani 2 is a 2019 Bollywood Hindi language movie that follows the story of Shivani Shivaji Roy, a fearless cop who faces a ruthless serial killer who targets young women. In this article, we will tell you more about Mardaani 2 and how to download it for free in HD quality.

        -

        What is Mardaani 2 about?

        -

        Mardaani 2 is directed by Gopi Puthran, who also wrote the script for the first movie. It stars Rani Mukerji as Shivani Shivaji Roy, a senior inspector of the Mumbai Crime Branch. The movie is set in Kota, Rajasthan, where a 21-year-old psychopath named Sunny (played by Vishal Jethwa) kidnaps, rapes and murders young women. He also taunts Shivani by leaving clues and messages for her, challenging her to catch him. Shivani takes up the case and vows to bring him to justice. The movie is a cat-and-mouse game between Shivani and Sunny, who are both determined to outsmart each other.

        -

        Mardaani 2 full movie download free


        Download Zip » https://geags.com/2uCslm



        -

        What are the reviews of Mardaani 2?

        -

        Mardaani 2 received positive reviews from critics and audiences alike. It was praised for its gripping storyline, realistic portrayal of crimes against women, and powerful performances by Rani Mukerji and Vishal Jethwa. The movie also addressed some social issues such as gender inequality, corruption and media sensationalism. The movie was rated 7.3 out of 10 on IMDb and 3.5 out of 5 on Times of India. The movie was also a commercial success, earning over Rs. 67 crore at the box office.

        -

        How to download Mardaani 2 full movie for free?

        -

        If you want to watch Mardaani 2 full movie for free, you can find it on various websites that offer free movie downloads. However, you have to be careful because some of these websites may contain viruses or malware that can harm your device or data. We recommend you to download Mardaani 2 full movie from our website, which is 100% safe and secure. To download Mardaani 2 full movie from our website, follow these steps:

        -
          -
        1. Click on the download button below to start the download process.
        2. -
        3. Save the file to your preferred location on your device.
        4. -
        5. Enjoy watching Mardaani 2 full movie for free in HD quality.
        6. -
        - -Download Mardaani 2 Full Movie Free - -

        Conclusion

        -

        Mardaani 2 full movie download free is one of the best ways to watch Rani Mukerji's thrilling action. It is a movie that will keep you on the edge of your seat and make you appreciate the courage and dedication of women cops. It is also a movie that will make you aware of the atrocities that young women face in our society and inspire you to fight against them. Download Mardaani 2 full movie for free today and enjoy watching it with your friends and family!

        -


        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/r3gm/SoniTranslate_translate_audio_of_a_video_content/lib/infer_pack/transforms.py b/spaces/r3gm/SoniTranslate_translate_audio_of_a_video_content/lib/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/r3gm/SoniTranslate_translate_audio_of_a_video_content/lib/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or 
torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git 
a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/utils/__init__.py b/spaces/radames/UserControllableLT-Latent-Transformer/expansion/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/radames/transformers-js-svelte-example-app/assets/index-4438c50a.css b/spaces/radames/transformers-js-svelte-example-app/assets/index-4438c50a.css deleted file mode 100644 index 0bc19ee37c1a50f22627fc22a61c75b7ea8b55fb..0000000000000000000000000000000000000000 --- a/spaces/radames/transformers-js-svelte-example-app/assets/index-4438c50a.css +++ /dev/null @@ -1 +0,0 @@ -@import"https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500;600;700;800;900&family=Xanh+Mono&display=swap";*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-feature-settings:inherit;font-variation-settings:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}dialog{padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: 
;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.static{position:static}.mb-2{margin-bottom:.5rem}.mb-4{margin-bottom:1rem}.flex{display:flex}.min-h-screen{min-height:100vh}.w-full{width:100%}.max-w-xs{max-width:20rem}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.rounded{border-radius:.25rem}.border{border-width:1px}.border-gray-300{--tw-border-opacity: 1;border-color:rgb(209 213 219 / var(--tw-border-opacity))}.bg-gray-100{--tw-bg-opacity: 1;background-color:rgb(243 244 246 / var(--tw-bg-opacity))}.p-12{padding:3rem}.p-2{padding:.5rem}.text-center{text-align:center}.text-2xl{font-size:1.5rem;line-height:2rem}.text-5xl{font-size:3rem;line-height:1}.font-bold{font-weight:700}:root{--foreground-rgb: 0, 0, 0;--background-start-rgb: 214, 219, 220;--background-end-rgb: 255, 255, 255}@media (prefers-color-scheme: dark){:root{--foreground-rgb: 255, 255, 255;--background-start-rgb: 0, 0, 0;--background-end-rgb: 0, 0, 0}}body{font-family:Inter,sans-serif;color:rgb(var(--foreground-rgb));background:linear-gradient(to bottom,transparent,rgb(var(--background-end-rgb))) rgb(var(--background-start-rgb))}@media (prefers-color-scheme: dark){.dark\:bg-gray-800{--tw-bg-opacity: 1;background-color:rgb(31 41 55 / var(--tw-bg-opacity))}.dark\:text-black{--tw-text-opacity: 1;color:rgb(0 0 0 / var(--tw-text-opacity))}} diff --git a/spaces/raedeXanto/academic-chatgpt-beta/ArtsAcoustic.Reverb.VST.v1.2.2.Incl.Keygen-AiR utorrent Experience the Sound of a Highly Advanced Reverb Plugin.md b/spaces/raedeXanto/academic-chatgpt-beta/ArtsAcoustic.Reverb.VST.v1.2.2.Incl.Keygen-AiR utorrent Experience the Sound of a Highly Advanced Reverb Plugin.md deleted file mode 100644 index a05eb8656adac05a192145af8d69dbee7b400849..0000000000000000000000000000000000000000 --- 
a/spaces/raedeXanto/academic-chatgpt-beta/ArtsAcoustic.Reverb.VST.v1.2.2.Incl.Keygen-AiR utorrent Experience the Sound of a Highly Advanced Reverb Plugin.md +++ /dev/null @@ -1,139 +0,0 @@ -
        -

        ArtsAcoustic Reverb: A High Quality Algorithmic Reverb Plugin

        -

        If you are looking for a professional and versatile reverb plugin for your music production, you might want to check out ArtsAcoustic Reverb. This plugin is a high quality algorithmic reverb that can create realistic and smooth reverberation effects for any sound source. In this article, we will explain what ArtsAcoustic Reverb is, what features and benefits it offers, how to use it in your music production, and how to get it with utorrent and Keygen-AiR.

        -

        What is ArtsAcoustic Reverb?

        -

        ArtsAcoustic Reverb is a plugin that simulates the sound of a room or a space with reverberation. Reverberation is the effect of sound waves bouncing off walls, ceilings, floors, and other objects, creating a sense of depth and space. Reverberation is an essential element of music production, as it can enhance the realism, richness, and mood of any sound.

        -

        ArtsAcoustic.Reverb.VST.v1.2.2.Incl.Keygen-AiR utorrent


        Download >>> https://tinourl.com/2uL0hX



        -

        Unlike some other reverb plugins that use convolution, which is a process of sampling real spaces and applying them to sounds, ArtsAcoustic Reverb uses algorithmic reverb, which is a process of generating reverberation effects mathematically based on parameters such as room size, decay time, diffusion, damping, etc. Algorithmic reverb has some advantages over convolution reverb, such as having more control over the reverb parameters, being more flexible and creative, and being less CPU-intensive.
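To make the idea of algorithmic reverb more concrete, here is a tiny Python/NumPy sketch that builds a toy reverb tail from a few feedback comb filters. It only illustrates the general principle (delay lines plus feedback controlled by parameters such as delay time and decay); it is not the algorithm that ArtsAcoustic Reverb actually uses.

```python
import numpy as np

def comb_filter(x, delay_samples, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = np.zeros(len(x) + delay_samples)
    y[:len(x)] = x
    for n in range(delay_samples, len(y)):
        y[n] += feedback * y[n - delay_samples]
    return y

def toy_reverb(x, sample_rate=44100, decay=0.7):
    """Sum several comb filters with different delays to get a denser tail."""
    delays_ms = [29.7, 37.1, 41.1, 43.7]
    tails = [comb_filter(x, int(sample_rate * d / 1000.0), decay) for d in delays_ms]
    wet = np.zeros(max(len(t) for t in tails))
    for t in tails:
        wet[:len(t)] += t / len(tails)
    return wet

# Example: feeding an impulse through the toy reverb produces a decaying tail.
impulse = np.zeros(44100)
impulse[0] = 1.0
tail = toy_reverb(impulse)
```

Raising the decay value lengthens the tail, and changing the delay times changes the apparent size of the room, which is the same kind of control a full algorithmic reverb exposes through its parameters.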

        -

        Features and benefits of ArtsAcoustic Reverb

        -

        ArtsAcoustic Reverb has many features and benefits that make it a powerful and user-friendly reverb plugin. Some of them are:

        -
          -
- It has a high-quality sound that can compete with hardware reverbs.
- It has a simple and intuitive interface that allows you to adjust the reverb parameters easily.
- It has a large number of presets that cover various styles and genres of music.
- It has low CPU usage that does not affect your workflow.
- It supports VST, AudioUnit, and VST3 formats, which means you can use it with most DAWs (digital audio workstations).
        -

        How to use ArtsAcoustic Reverb in your music production

        -

        To use ArtsAcoustic Reverb in your music production, you need to have a DAW that supports VST, AudioUnit, or VST3 plugins. You also need to have ArtsAcoustic Reverb installed on your computer. Here are some steps to follow:

        -


        -
          -
1. Open your DAW and create a new project or load an existing one.
2. Add an audio track or a MIDI track with an instrument plugin.
3. Add ArtsAcoustic Reverb as an insert effect or a send effect on the track.
4. Select a preset from the preset menu or tweak the reverb parameters to your liking.
5. Adjust the dry/wet mix knob to balance the amount of direct sound and reverberated sound.
6. Play back the track and listen to how ArtsAcoustic Reverb enhances the sound.
        -

        You can also automate the reverb parameters or use modulation sources such as LFOs (low frequency oscillators) or envelopes to create dynamic and expressive reverberation effects.

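As a rough, hypothetical illustration of that kind of modulation, the sketch below drives a dry/wet crossfade with a slow sine LFO. The `lfo_rate_hz`, `base_mix`, and `depth` values are arbitrary examples, not settings taken from ArtsAcoustic Reverb.

```python
import numpy as np

def lfo_wet_dry_mix(dry, wet, sr=44100, lfo_rate_hz=0.5, base_mix=0.3, depth=0.2):
    """Crossfade between the dry and reverberated signals with a slow sine LFO."""
    n = min(len(dry), len(wet))
    t = np.arange(n) / sr
    mix = base_mix + depth * np.sin(2.0 * np.pi * lfo_rate_hz * t)  # time-varying wet amount
    mix = np.clip(mix, 0.0, 1.0)
    return (1.0 - mix) * dry[:n] + mix * wet[:n]
```

In a DAW you would draw the same shape as an automation curve on the dry/wet knob instead of computing it by hand.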
        -

        What is VST and how to install it?

        -

        VST stands for Virtual Studio Technology. It is a standard for audio plugins that allow you to add effects or instruments to your DAW. VST plugins are software programs that run inside your DAW and process or generate audio signals. There are thousands of VST plugins available online, both free and paid, that cover various types of effects and instruments.

        -

        To install VST plugins on your computer, you need to follow these steps:

        -
          -
1. Download the VST plugin file from the website of the developer or another source.
2. Unzip the file if it is compressed.
3. Copy the file or folder containing the file to your VST plugin folder. The location of this folder may vary depending on your operating system and DAW preferences. Usually, it is something like C:\Program Files\VstPlugins or C:\Program Files (x86)\VstPlugins on Windows or /Library/Audio/Plug-Ins/VST on Mac OS X.
4. Open your DAW and scan for new plugins or refresh the plugin list.
5. Find the plugin in your DAW's plugin browser and drag it onto a track or an effect slot.
        -

        What is utorrent and why do you need it?

        -

        utorrent is a popular torrent client that allows you to download files from peer-to-peer networks. Torrents are files that contain information about other files that are shared by users online. By using utorrent, you can download these files faster and more efficiently than using regular download methods.

        -

        You may need utorrent if you want to get ArtsAcoustic Reverb from a torrent site. Torrent sites are websites that host torrents for various types of files, including software, music, movies, games, etc. Some torrent sites may offer ArtsAcoustic Reverb for free or at a lower price than buying it from the official website. However, you should be careful when downloading files from torrent sites, as they may contain viruses, malware, or illegal content.

        -

        utorrent: A popular torrent client

        -

        utorrent is one of the most popular torrent clients in the world. It has many features and advantages that make it a reliable and convenient tool for downloading files from torrent sites. Some of them are:

        -
          -
- It is small in size and does not consume much memory or CPU resources.
- It has a simple and user-friendly interface that allows you to manage your downloads easily.
- It supports various protocols such as magnet links, DHT (distributed hash table), PEX (peer exchange), etc., which improve the speed and efficiency of downloads.
- It has advanced options such as bandwidth management, encryption, proxy support, etc., which allow you to customize your downloads according to your preferences.
- It has built-in features such as RSS feeds, media player, remote control app, etc., which enhance your downloading experience.
        -

        How to download and use utorrent to get ArtsAcoustic Reverb

        -

To download and use utorrent to get ArtsAcoustic Reverb from a torrent site, you need to follow these steps:

1. Download utorrent from its official website: https://www.utorrent.com/
2. Install utorrent on your computer by following the instructions on screen.
3. Find a torrent site that offers ArtsAcoustic Reverb. You can use search engines or online directories to find such sites.
4. Search for ArtsAcoustic.Reverb.VST.v1.2.2.Incl.Keygen-AiR on the torrent site.
5. Download the torrent file or copy the magnet link of ArtsAcoustic.Reverb.VST.v1.2.2.Incl.Keygen-AiR.
6. Open utorrent and add the torrent file or paste the magnet link into utorrent.
7. Choose a location where you want to save the downloaded files.
8. Wait for utorrent to download all the files from other users who are sharing them.
9. Once the download is complete, install and activate ArtsAcoustic Reverb as described in the next section.

        How to activate ArtsAcoustic Reverb with Keygen-AiR

        -

        Keygen-AiR is a tool that can generate serial numbers for ArtsAcoustic Reverb. A serial number is a code that you need to enter in the plugin to activate it and use it without any limitations. Keygen-AiR can help you get a serial number for ArtsAcoustic Reverb without buying it from the official website.

        -

        However, you should be aware that using Keygen-AiR may be illegal, unethical, or risky. You may violate the copyright laws or the terms of service of ArtsAcoustic Reverb. You may also expose your computer to viruses, malware, or other threats. Therefore, you should use Keygen-AiR at your own risk and discretion.

        -

        Keygen-AiR: A tool to generate serial numbers for ArtsAcoustic Reverb

        -

        Keygen-AiR is a small program that can create serial numbers for various software products, including ArtsAcoustic Reverb. It has a simple interface that allows you to select the product name, generate a serial number, and copy it to the clipboard. You can then paste the serial number in the plugin's registration window and activate it.

        -

        Keygen-AiR is usually included in the torrent file of ArtsAcoustic Reverb that you download from a torrent site. You can find it in a folder named Keygen or Crack. You may need to unzip it or run it as an administrator before using it.

        -

        How to run Keygen-AiR and enter the serial number in ArtsAcoustic Reverb

        -

        To run Keygen-AiR and enter the serial number in ArtsAcoustic Reverb, you need to follow these steps:

        -
          -
1. Open the folder where you saved the downloaded files of ArtsAcoustic Reverb.
2. Find the folder named Keygen or Crack and open it.
3. Double-click on the file named Keygen.exe or something similar.
4. A window will pop up with the name of Keygen-AiR and a list of products.
5. Select ArtsAcoustic Reverb from the list and click on Generate.
6. A serial number will appear in a box below. Copy it to the clipboard by clicking on Copy.
7. Open your DAW and load ArtsAcoustic Reverb on a track or an effect slot.
8. A registration window will appear asking you to enter your name and serial number.
9. Paste the serial number that you copied from Keygen-AiR into the box labeled Serial Number.
10. Type any name that you want into the box labeled Name.
11. Click on Register or OK.
12. The registration window will close and ArtsAcoustic Reverb will be activated.
        -

        Conclusion

        -

        In this article, we have explained what ArtsAcoustic Reverb is, what features and benefits it offers, how to use it in your music production, and how to get it with utorrent and Keygen-AiR. We hope that this article has been informative and helpful for you.

        -

        ArtsAcoustic Reverb is a high quality algorithmic reverb plugin that can create realistic and smooth reverberation effects for any sound source. It has a simple and intuitive interface that allows you to adjust the reverb parameters easily. It has a low CPU usage that does not affect your workflow. It supports VST, AudioUnit, and VST3 formats, which means you can use it with most DAWs.

        -

        VST is a standard for audio plugins that allow you to add effects or instruments to your DAW. VST plugins are software programs that run inside your DAW and process or generate audio signals. To install VST plugins on your computer, you need to download them from the website of the developer or another source, unzip them if they are compressed, copy them to your VST plugin folder, open your DAW and scan for new plugins or refresh the plugin list, and find them in your DAW's plugin browser and drag them onto a track or an effect slot.

        -

        utorrent is a popular torrent client that allows you to download files from peer-to-peer networks. Torrents are files that contain information about other files that are shared by users online. By using utorrent, you can download these files faster and more efficiently than using regular download methods. You may need utorrent if you want to get ArtsAcoustic Reverb from a torrent site. Torrent sites are websites that host torrents for various types of files, including software, music, movies, games, etc. Some torrent sites may offer ArtsAcoustic Reverb for free or at a lower price than buying it from the official website. However, you should be careful when downloading files from torrent sites, as they may contain viruses, malware, or illegal content.

        -

        Keygen-AiR is a tool that can generate serial numbers for ArtsAcoustic Reverb. A serial number is a code that you need to enter in the plugin to activate it and use it without any limitations. Keygen-AiR can help you get a serial number for ArtsAcoustic Reverb without buying it from the official website. However, you should be aware that using Keygen-AiR may be illegal, unethical, or risky. You may violate the copyright laws or the terms of service of ArtsAcoustic Reverb. You may also expose your computer to viruses, malware, or other threats. Therefore, you should use Keygen-AiR at your own risk and discretion.

        -

        FAQs

        -
          -
- What is the difference between algorithmic reverb and convolution reverb?
  Algorithmic reverb is a process of generating reverberation effects mathematically based on parameters such as room size, decay time, diffusion, damping, etc. Convolution reverb is a process of sampling real spaces and applying them to sounds (see the sketch after this list).
- What are some advantages of algorithmic reverb over convolution reverb?
  Some advantages of algorithmic reverb over convolution reverb are having more control over the reverb parameters, being more flexible and creative, and being less CPU-intensive.
- What are some disadvantages of algorithmic reverb over convolution reverb?
  Some disadvantages of algorithmic reverb over convolution reverb are being less realistic and natural sounding than real spaces, being more prone to artifacts such as ringing or metallic sounds, and being more dependent on the quality of the algorithm.
- What are some alternatives to utorrent?
  Some alternatives to utorrent are BitTorrent, BitComet, Deluge, Transmission, Vuze, etc.
- What are some risks of using Keygen-AiR?
  Some risks of using Keygen-AiR are violating the copyright laws or the terms of service of ArtsAcoustic Reverb, exposing your computer to viruses, malware, or other threats, and facing legal consequences or penalties.
        -
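To make the convolution answer above more concrete, here is a minimal, hypothetical sketch of the core operation: convolving a dry signal with a sampled impulse response. The impulse response here is just decaying random noise standing in for a real room capture.

```python
import numpy as np

def convolution_reverb(dry, impulse_response):
    """Apply a sampled room response to a dry signal - the core of convolution reverb."""
    wet = np.convolve(dry, impulse_response)
    return wet / (np.max(np.abs(wet)) + 1e-12)  # normalize to avoid clipping

# Stand-in impulse response: half a second of exponentially decaying noise
sr = 44100
ir = np.random.default_rng(0).standard_normal(sr // 2) * np.exp(-np.linspace(0.0, 6.0, sr // 2))
wet = convolution_reverb(np.ones(100), ir)      # a short burst played through the "room"
```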

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/ramiin2/AutoGPT/autogpt/json_utils/json_fix_general.py b/spaces/ramiin2/AutoGPT/autogpt/json_utils/json_fix_general.py deleted file mode 100644 index 7010fa3b9c1909de0e5a7f6ec13ca8aa418fe6c7..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/autogpt/json_utils/json_fix_general.py +++ /dev/null @@ -1,124 +0,0 @@ -"""This module contains functions to fix JSON strings using general programmatic approaches, suitable for addressing -common JSON formatting issues.""" -from __future__ import annotations - -import contextlib -import json -import re -from typing import Optional - -from autogpt.config import Config -from autogpt.json_utils.utilities import extract_char_position - -CFG = Config() - - -def fix_invalid_escape(json_to_load: str, error_message: str) -> str: - """Fix invalid escape sequences in JSON strings. - - Args: - json_to_load (str): The JSON string. - error_message (str): The error message from the JSONDecodeError - exception. - - Returns: - str: The JSON string with invalid escape sequences fixed. - """ - while error_message.startswith("Invalid \\escape"): - bad_escape_location = extract_char_position(error_message) - json_to_load = ( - json_to_load[:bad_escape_location] + json_to_load[bad_escape_location + 1 :] - ) - try: - json.loads(json_to_load) - return json_to_load - except json.JSONDecodeError as e: - if CFG.debug_mode: - print("json loads error - fix invalid escape", e) - error_message = str(e) - return json_to_load - - -def balance_braces(json_string: str) -> Optional[str]: - """ - Balance the braces in a JSON string. - - Args: - json_string (str): The JSON string. - - Returns: - str: The JSON string with braces balanced. - """ - - open_braces_count = json_string.count("{") - close_braces_count = json_string.count("}") - - while open_braces_count > close_braces_count: - json_string += "}" - close_braces_count += 1 - - while close_braces_count > open_braces_count: - json_string = json_string.rstrip("}") - close_braces_count -= 1 - - with contextlib.suppress(json.JSONDecodeError): - json.loads(json_string) - return json_string - - -def add_quotes_to_property_names(json_string: str) -> str: - """ - Add quotes to property names in a JSON string. - - Args: - json_string (str): The JSON string. - - Returns: - str: The JSON string with quotes added to property names. - """ - - def replace_func(match: re.Match) -> str: - return f'"{match[1]}":' - - property_name_pattern = re.compile(r"(\w+):") - corrected_json_string = property_name_pattern.sub(replace_func, json_string) - - try: - json.loads(corrected_json_string) - return corrected_json_string - except json.JSONDecodeError as e: - raise e - - -def correct_json(json_to_load: str) -> str: - """ - Correct common JSON errors. - Args: - json_to_load (str): The JSON string. 
- """ - - try: - if CFG.debug_mode: - print("json", json_to_load) - json.loads(json_to_load) - return json_to_load - except json.JSONDecodeError as e: - if CFG.debug_mode: - print("json loads error", e) - error_message = str(e) - if error_message.startswith("Invalid \\escape"): - json_to_load = fix_invalid_escape(json_to_load, error_message) - if error_message.startswith( - "Expecting property name enclosed in double quotes" - ): - json_to_load = add_quotes_to_property_names(json_to_load) - try: - json.loads(json_to_load) - return json_to_load - except json.JSONDecodeError as e: - if CFG.debug_mode: - print("json loads error - add quotes", e) - error_message = str(e) - if balanced_str := balance_braces(json_to_load): - return balanced_str - return json_to_load diff --git a/spaces/raphaelsty/games/app.py b/spaces/raphaelsty/games/app.py deleted file mode 100644 index 7b8e4858a5eafefc1a5e06a5182cc7454a061084..0000000000000000000000000000000000000000 --- a/spaces/raphaelsty/games/app.py +++ /dev/null @@ -1,283 +0,0 @@ -import json - -import streamlit as st -from annotated_text import annotated_text -from cherche import compose, qa, rank, retrieve, summary -from sentence_transformers import SentenceTransformer -from sklearn.feature_extraction.text import TfidfVectorizer -from transformers import pipeline - - -@st.cache(hash_funcs={compose.Pipeline: lambda _: None}, allow_output_mutation=True) -def loading_pipelines(): - """Create three pipelines dedicated to neural research. The first one is dedicated to game - retrieval. The second is dedicated to the question answering task. The third is dedicated to - the summarization task. Save pipelines as pickle file. - - >>> search = ( - ... tfidf(on = "game") + ranker(on = "game") | tfidf(on = ["game", "summary"]) + - ... ranker(on = ["game", "summary"]) + documents - ... ) - - """ - # Load documents - with open("games.json", "r") as documents_file: - documents = json.load(documents_file) - - # A first retriever dedicated to title - retriever_title = retrieve.TfIdf( - key="id", - on=["game"], - documents=documents, - tfidf=TfidfVectorizer( - lowercase=True, - min_df=1, - max_df=0.9, - ngram_range=(3, 7), - analyzer="char", - ), - k=30, - ) - - # A second retriever dedicated to title and also summary of games. - retriever_title_summary = retrieve.TfIdf( - key="id", - on=["game", "summary"], - documents=documents, - tfidf=TfidfVectorizer( - lowercase=True, - min_df=1, - max_df=0.9, - ngram_range=(3, 7), - analyzer="char", - ), - k=30, - ) - - # Load our encoder to re-rank retrievers documents. 
- encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2").encode - - # A ranker dedicated to title - ranker_title = rank.Encoder( - key="id", - on=["game"], - encoder=encoder, - k=5, - path="games_title.pkl", - ) - - # A ranker dedicated to title and summary - ranker_title_summary = rank.Encoder( - key="id", - on=["game", "summary"], - encoder=encoder, - k=5, - path="games_summary.pkl", - ) - - # Pipeline creation - search = ( - (retriever_title + ranker_title) | (retriever_title_summary + ranker_title_summary) - ) + documents - - # Index - search.add(documents) - return search - - -@st.cache(hash_funcs={compose.Pipeline: lambda _: None}, allow_output_mutation=True) -def write_search(query): - return search(query)[:5] - - -@st.cache(hash_funcs={compose.Pipeline: lambda _: None}, allow_output_mutation=True) -def loading_summarization_pipeline(): - summarizer = summary.Summary( - model=pipeline( - "summarization", - model="sshleifer/distilbart-cnn-12-6", - tokenizer="sshleifer/distilbart-cnn-12-6", - framework="pt", - ), - on=["game", "summary"], - max_length=50, - ) - - search_summarize = search + summarizer - return search_summarize - - -@st.cache(hash_funcs={compose.Pipeline: lambda _: None}, allow_output_mutation=True) -def write_search_summarize(query_summarize): - return search_summarize(query_summarize) - - -@st.cache(hash_funcs={compose.Pipeline: lambda _: None}, allow_output_mutation=True) -def loading_qa_pipeline(): - question_answering = qa.QA( - model=pipeline( - "question-answering", - model="deepset/roberta-base-squad2", - tokenizer="deepset/roberta-base-squad2", - ), - k=3, - on="summary", - ) - search_qa = search + question_answering - return search_qa - - -@st.cache(hash_funcs={compose.Pipeline: lambda _: None}, allow_output_mutation=True) -def write_search_qa(query_qa): - return search_qa(query_qa) - - -if __name__ == "__main__": - - st.markdown("# 🕹 Cherche") - - st.markdown( - "[Cherche](https://github.com/raphaelsty/cherche) (search in French) allows you to create a \ - neural search pipeline using retrievers and pre-trained language models as rankers. Cherche's main strength is its ability to build diverse and end-to-end pipelines." - ) - - st.image("explain.png") - - st.markdown( - "Here is a demo of neural search for video games using a sample of reviews made by [Metacritic](https://www.metacritic.com). \ - Starting the app may take a while if the models are not stored in cache." - ) - - # Will be slow the first time, you will need to compute embeddings. 
- search = loading_pipelines() - - st.markdown("## 👾 Neural search") - - st.markdown( - '```search = (tfidf(on = "title") + ranker(on = "title") | tfidf(on = ["title", "summary"]) + ranker(on = ["game", "summary"]) + documents)```' - ) - - query = st.text_input( - "games", - value="super smash bros", - max_chars=None, - key=None, - type="default", - help=None, - autocomplete=None, - on_change=None, - args=None, - kwargs=None, - ) - - if query: - - for document in write_search(query): - if document["rate"] < 10: - document["rate"] *= 10 - - st.markdown(f"### {document['game']}") - st.markdown(f"Metacritic Rating: {document['rate']}") - - col_1, col_2 = st.columns([1, 5]) - with col_1: - st.image(document["image"], width=100) - with col_2: - st.write(f"{document['summary'][:430]}...") - - st.markdown("## 🎲 Summarization") - - st.markdown( - '```search = (tfidf(on = "title") + ranker(on = "title") | tfidf(on = ["title", "summary"]) + ranker(on = ["game", "summary"]) + documents + summarization(on = "summary"))```' - ) - - st.markdown( - "Let's create a summay but it may take few seconds. Summarization models are not that fast using CPU. Also it may take time to load the summarization model if it's not in cache yet.." - ) - - query_summarize = st.text_input( - "summarization", - value="super smash bros", - max_chars=None, - key=None, - type="default", - help=None, - autocomplete=None, - on_change=None, - args=None, - kwargs=None, - ) - - if query_summarize: - search_summarize = loading_summarization_pipeline() - st.write(f"**{write_search_summarize(query_summarize)}**") - - st.markdown("## 🎮 Question answering") - - st.markdown( - '```search = (tfidf(on = "title") + ranker(on = "title") | tfidf(on = ["title", "summary"]) + ranker(on = ["game", "summary"]) + documents + question_answering(on = "summary"))```' - ) - - st.markdown( - "It may take few seconds. Question answering models are not that fast using CPU. Also it may take time to load the question answering model if it's not in cache yet." 
- ) - - query_qa = st.text_input( - "question", - value="What is the purpose of playing Super Smash Bros?", - max_chars=None, - key=None, - type="default", - help=None, - autocomplete=None, - on_change=None, - args=None, - kwargs=None, - ) - - if query_qa: - - search_qa = loading_qa_pipeline() - for document_qa in write_search_qa(query_qa): - - st.markdown(f"### {document_qa['game']}") - st.markdown(f"Metacritic Rating: {document_qa['rate']}") - - col_1, col_2 = st.columns([1, 5]) - with col_1: - st.image(document_qa["image"], width=100) - with col_2: - - annotations = document_qa["summary"].split(document_qa["answer"]) - - if document_qa["start"] == 0: - annotated_text( - ( - document_qa["answer"], - f"answer {round(document_qa['qa_score'], 2)}", - "#8ef", - ), - " ", - " ".join(annotations[1:]), - ) - - elif document_qa["end"] == len(document_qa["summary"]): - annotated_text( - " ".join(annotations[:-1]), - ( - document_qa["answer"], - f"answer {round(document_qa['qa_score'], 2)}", - "#8ef", - ), - ) - - else: - annotated_text( - annotations[0], - ( - document_qa["answer"], - f"answer {round(document_qa['qa_score'], 2)}", - "#8ef", - ), - annotations[1], - ) diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crackcorelvideostudioprox3gratuit.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crackcorelvideostudioprox3gratuit.md deleted file mode 100644 index 53b2667260b4a7db02f38da5541737563df012c9..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Crackcorelvideostudioprox3gratuit.md +++ /dev/null @@ -1,29 +0,0 @@ -
        -

        Crack Corel VideoStudio Pro X3 Gratuit: How to Get the Best Video Editing Software for Free

        - -

If you are looking for powerful and easy-to-use video editing software, you might have heard of Corel VideoStudio Pro X3. This software is designed to help you create professional videos in minutes, with enhanced masking and color grading, new smart video tools, optimized performance, and premium effects. You can edit 4K, HD, and 360° video with a full suite of creative editing tools. You can also turn your photos and videos into movies with VideoStudio.

        -

        crackcorelvideostudioprox3gratuit


        Download Ziphttps://urlgoal.com/2uCMNr



        - -

        However, Corel VideoStudio Pro X3 is not a cheap software. It costs $59.99 for a single license, which might be too expensive for some users. That's why some people are looking for a way to get Corel VideoStudio Pro X3 gratuit, or free, with a crack. A crack is a program that modifies the original software to bypass its security features and allow unlimited use without paying.

        - -

        Is it Safe to Use Crack Corel VideoStudio Pro X3 Gratuit?

        - -

        Before you decide to use crack Corel VideoStudio Pro X3 gratuit, you should be aware of the risks involved. Using a cracked software is illegal and unethical, as it violates the copyright and license agreement of the original software. You might also face legal consequences if you are caught using or distributing a cracked software.

        - -

        Moreover, using a cracked software can expose your computer to malware and viruses, as the crack might contain malicious code that can harm your system or steal your personal information. You might also experience performance issues, errors, crashes, or compatibility problems with your hardware or other software. You might also lose access to updates, support, or online features of the original software.

        -

        - -

        How to Get Corel VideoStudio Pro X3 Gratuit Legally?

        - -

        If you want to use Corel VideoStudio Pro X3 gratuit without risking your security or breaking the law, there are some legal ways to do so. One way is to use the free trial version of the software, which allows you to use all the features and functions for 30 days. You can download the free trial version from the official website of Corel.

        - -

        Another way is to use a discount coupon or a promo code that can reduce the price of the software significantly. You can find such coupons or codes from various sources online, such as blogs, forums, newsletters, or social media. However, you should be careful and verify the validity and reliability of the coupons or codes before using them.

        - -

        Conclusion

        - -

        Corel VideoStudio Pro X3 is a great video editing software that can help you create stunning videos with ease and creativity. However, it is not advisable to use crack Corel VideoStudio Pro X3 gratuit, as it can pose serious risks to your security and legality. Instead, you should use the free trial version or a discount coupon to get the software for free or at a lower price.

        -

        Conclusion

        - -

        Corel VideoStudio Pro X3 is a great video editing software that can help you create stunning videos with ease and creativity. However, it is not advisable to use crack Corel VideoStudio Pro X3 gratuit, as it can pose serious risks to your security and legality. Instead, you should use the free trial version or a discount coupon to get the software for free or at a lower price.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Audio Cd To Mp3 Converter Serial Number VERIFIED.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Audio Cd To Mp3 Converter Serial Number VERIFIED.md deleted file mode 100644 index a67534a8cfa549e61bf3e57049914a8f5023a6c4..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Audio Cd To Mp3 Converter Serial Number VERIFIED.md +++ /dev/null @@ -1,11 +0,0 @@ -

        Free Audio Cd To Mp3 Converter Serial Number


        DOWNLOAD →→→ https://urlgoal.com/2uCM44



        -
        -mp3 converter and audio converter, convert mp3 to wav, cd to mp3, flac to mp3, burn CD with FreeRIP MP3 Converter. With FreeRIP MP3 Converter you can convert mp3 to wav, cd to mp3, flac to mp3, burn CD with FreeRIP MP3 Converter. -Just select the audio file to convert, click the Start button and the program will start working. -Unlike other applications, FreeRIP MP3 Converter does not require registration after installation, so you can download for free and start converting. -It's really simple and fast . -What's new in this version: -Fixed an issue with disconnection at startup. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Download Air Strike 3d Full Version.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Download Air Strike 3d Full Version.md deleted file mode 100644 index 6745f900ead0ff00f08fc3873cc141f549a3bb0c..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Free Download Air Strike 3d Full Version.md +++ /dev/null @@ -1,11 +0,0 @@ -

        Free Download Air Strike 3d Full Version


        DOWNLOAD ……… https://urlgoal.com/2uCKGz



        -
        -January 15, 2022 - Fly a state-of-the-art helicopter equipped with the latest weapons and traverse 20 immense levels filled with enemies and challenging terrain. Air Strike 3D will take you into a breathtaking world with many enemies and obstacles waiting for you at every turn. -Key Features: - 20 huge and colorful levels filled with deadly dangers and challenging enemies. -- A variety of enemies such as zombies, robots and flying saucers. -- A variety of weapons such as a shotgun, machine gun and rocket launcher. -- Stunning 3D environments. -- Great graphics that allow you to immerse yourself in the action. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/relbert/Analogy/README.md b/spaces/relbert/Analogy/README.md deleted file mode 100644 index 3459acbdc84ffaddbc8f981134206f733ec3c7ff..0000000000000000000000000000000000000000 --- a/spaces/relbert/Analogy/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Analogy -emoji: 🐨 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/remilia/Ghostly/app.py b/spaces/remilia/Ghostly/app.py deleted file mode 100644 index ae2f68c236859525cdf4cc902bf15a9b2db7414a..0000000000000000000000000000000000000000 --- a/spaces/remilia/Ghostly/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import time - -import gradio as gr -from gradio.themes.utils.theme_dropdown import create_theme_dropdown - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='remilia/Ghostly') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `Ghostly` - To use this theme, set `theme='remilia/Ghostly'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/rktraz/art_style_classifier/README.md b/spaces/rktraz/art_style_classifier/README.md deleted file mode 100644 index 642fad5926be0851980757852b7bd9fe546c9f34..0000000000000000000000000000000000000000 --- a/spaces/rktraz/art_style_classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Art Style Classifier -emoji: 🐨 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/robinhad/qirimtatar-tts/tests/__init__.py b/spaces/robinhad/qirimtatar-tts/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/robmarkcole/fire-detection-from-images/app.py b/spaces/robmarkcole/fire-detection-from-images/app.py deleted file mode 100644 index bbfdd3b7cf5437006a52209b05c9f097ce927522..0000000000000000000000000000000000000000 --- 
a/spaces/robmarkcole/fire-detection-from-images/app.py +++ /dev/null @@ -1,28 +0,0 @@ -""" -Source: https://github.com/AK391/yolov5/blob/master/utils/gradio/demo.py -""" - -import gradio as gr -import torch -from PIL import Image - -model = torch.hub.load('ultralytics/yolov5', 'custom', 'best.pt') # force_reload=True to update - - -def yolo(im, size=640): - g = (size / max(im.size)) # gain - im = im.resize((int(x * g) for x in im.size), Image.ANTIALIAS) # resize - results = model(im) # inference - results.render() # updates results.ims with boxes and labels - return Image.fromarray(results.ims[0]) - - -inputs = gr.inputs.Image(type='pil', label="Original Image") -outputs = gr.outputs.Image(type="pil", label="Output Image") - -title = "YOLOv5" -description = "YOLOv5 demo for fire detection. Upload an image or click an example image to use." -article = "See https://github.com/robmarkcole/fire-detection-from-images" -examples = [['pan-fire.jpg'], ['fire-basket.jpg']] -gr.Interface(yolo, inputs, outputs, title=title, description=description, article=article, examples=examples).launch( - debug=True) \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/post_processing/bbox_nms.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/post_processing/bbox_nms.py deleted file mode 100644 index 4fcf57bb501de25adbba08d3fb5fe2cc8d00cd1c..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/post_processing/bbox_nms.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from mmcv.ops.nms import batched_nms - -from mmdet.core.bbox.iou_calculators import bbox_overlaps - - -def multiclass_nms(multi_bboxes, - multi_scores, - score_thr, - nms_cfg, - max_num=-1, - score_factors=None, - return_inds=False): - """NMS for multi-class bboxes. - - Args: - multi_bboxes (Tensor): shape (n, #class*4) or (n, 4) - multi_scores (Tensor): shape (n, #class), where the last column - contains scores of the background class, but this will be ignored. - score_thr (float): bbox threshold, bboxes with scores lower than it - will not be considered. - nms_cfg (dict): a dict that contains the arguments of nms operations - max_num (int, optional): if there are more than max_num bboxes after - NMS, only top max_num will be kept. Default to -1. - score_factors (Tensor, optional): The factors multiplied to scores - before applying NMS. Default to None. - return_inds (bool, optional): Whether return the indices of kept - bboxes. Default to False. - - Returns: - tuple: (dets, labels, indices (optional)), tensors of shape (k, 5), - (k), and (k). Dets are boxes with scores. Labels are 0-based. 
- """ - num_classes = multi_scores.size(1) - 1 - # exclude background category - if multi_bboxes.shape[1] > 4: - bboxes = multi_bboxes.view(multi_scores.size(0), -1, 4) - else: - bboxes = multi_bboxes[:, None].expand( - multi_scores.size(0), num_classes, 4) - - scores = multi_scores[:, :-1] - - labels = torch.arange(num_classes, dtype=torch.long, device=scores.device) - labels = labels.view(1, -1).expand_as(scores) - - bboxes = bboxes.reshape(-1, 4) - scores = scores.reshape(-1) - labels = labels.reshape(-1) - - if not torch.onnx.is_in_onnx_export(): - # NonZero not supported in TensorRT - # remove low scoring boxes - valid_mask = scores > score_thr - # multiply score_factor after threshold to preserve more bboxes, improve - # mAP by 1% for YOLOv3 - if score_factors is not None: - # expand the shape to match original shape of score - score_factors = score_factors.view(-1, 1).expand( - multi_scores.size(0), num_classes) - score_factors = score_factors.reshape(-1) - scores = scores * score_factors - - if not torch.onnx.is_in_onnx_export(): - # NonZero not supported in TensorRT - inds = valid_mask.nonzero(as_tuple=False).squeeze(1) - bboxes, scores, labels = bboxes[inds], scores[inds], labels[inds] - else: - # TensorRT NMS plugin has invalid output filled with -1 - # add dummy data to make detection output correct. - bboxes = torch.cat([bboxes, bboxes.new_zeros(1, 4)], dim=0) - scores = torch.cat([scores, scores.new_zeros(1)], dim=0) - labels = torch.cat([labels, labels.new_zeros(1)], dim=0) - - if bboxes.numel() == 0: - if torch.onnx.is_in_onnx_export(): - raise RuntimeError('[ONNX Error] Can not record NMS ' - 'as it has not been executed this time') - dets = torch.cat([bboxes, scores[:, None]], -1) - if return_inds: - return dets, labels, inds - else: - return dets, labels - - dets, keep = batched_nms(bboxes, scores, labels, nms_cfg) - - if max_num > 0: - dets = dets[:max_num] - keep = keep[:max_num] - - if return_inds: - return dets, labels[keep], inds[keep] - else: - return dets, labels[keep] - - -def fast_nms(multi_bboxes, - multi_scores, - multi_coeffs, - score_thr, - iou_thr, - top_k, - max_num=-1): - """Fast NMS in `YOLACT `_. - - Fast NMS allows already-removed detections to suppress other detections so - that every instance can be decided to be kept or discarded in parallel, - which is not possible in traditional NMS. This relaxation allows us to - implement Fast NMS entirely in standard GPU-accelerated matrix operations. - - Args: - multi_bboxes (Tensor): shape (n, #class*4) or (n, 4) - multi_scores (Tensor): shape (n, #class+1), where the last column - contains scores of the background class, but this will be ignored. - multi_coeffs (Tensor): shape (n, #class*coeffs_dim). - score_thr (float): bbox threshold, bboxes with scores lower than it - will not be considered. - iou_thr (float): IoU threshold to be considered as conflicted. - top_k (int): if there are more than top_k bboxes before NMS, - only top top_k will be kept. - max_num (int): if there are more than max_num bboxes after NMS, - only top max_num will be kept. If -1, keep all the bboxes. - Default: -1. - - Returns: - tuple: (dets, labels, coefficients), tensors of shape (k, 5), (k, 1), - and (k, coeffs_dim). Dets are boxes with scores. - Labels are 0-based. 
- """ - - scores = multi_scores[:, :-1].t() # [#class, n] - scores, idx = scores.sort(1, descending=True) - - idx = idx[:, :top_k].contiguous() - scores = scores[:, :top_k] # [#class, topk] - num_classes, num_dets = idx.size() - boxes = multi_bboxes[idx.view(-1), :].view(num_classes, num_dets, 4) - coeffs = multi_coeffs[idx.view(-1), :].view(num_classes, num_dets, -1) - - iou = bbox_overlaps(boxes, boxes) # [#class, topk, topk] - iou.triu_(diagonal=1) - iou_max, _ = iou.max(dim=1) - - # Now just filter out the ones higher than the threshold - keep = iou_max <= iou_thr - - # Second thresholding introduces 0.2 mAP gain at negligible time cost - keep *= scores > score_thr - - # Assign each kept detection to its corresponding class - classes = torch.arange( - num_classes, device=boxes.device)[:, None].expand_as(keep) - classes = classes[keep] - - boxes = boxes[keep] - coeffs = coeffs[keep] - scores = scores[keep] - - # Only keep the top max_num highest scores across all classes - scores, idx = scores.sort(0, descending=True) - if max_num > 0: - idx = idx[:max_num] - scores = scores[:max_num] - - classes = classes[idx] - boxes = boxes[idx] - coeffs = coeffs[idx] - - cls_dets = torch.cat([boxes, scores[:, None]], dim=1) - return cls_dets, classes, coeffs diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/mask2former_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/mask2former_head.py deleted file mode 100644 index 59047bdbb7939ba4fe7bcbdb0d0b165e408ed7be..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/mask2former_head.py +++ /dev/null @@ -1,430 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, build_plugin_layer, caffe2_xavier_init -from mmcv.cnn.bricks.transformer import (build_positional_encoding, - build_transformer_layer_sequence) -from mmcv.ops import point_sample -from mmcv.runner import ModuleList - -from mmdet.core import build_assigner, build_sampler, reduce_mean -from mmdet.models.utils import get_uncertain_point_coords_with_randomness -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead -from .maskformer_head import MaskFormerHead - - -@HEADS.register_module() -class Mask2FormerHead(MaskFormerHead): - """Implements the Mask2Former head. - - See `Masked-attention Mask Transformer for Universal Image - Segmentation `_ for details. - - Args: - in_channels (list[int]): Number of channels in the input feature map. - feat_channels (int): Number of channels for features. - out_channels (int): Number of channels for output. - num_things_classes (int): Number of things. - num_stuff_classes (int): Number of stuff. - num_queries (int): Number of query in Transformer decoder. - pixel_decoder (:obj:`mmcv.ConfigDict` | dict): Config for pixel - decoder. Defaults to None. - enforce_decoder_input_project (bool, optional): Whether to add - a layer to change the embed_dim of tranformer encoder in - pixel decoder to the embed_dim of transformer decoder. - Defaults to False. - transformer_decoder (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder. Defaults to None. - positional_encoding (:obj:`mmcv.ConfigDict` | dict): Config for - transformer decoder position encoding. Defaults to None. - loss_cls (:obj:`mmcv.ConfigDict` | dict): Config of the classification - loss. Defaults to None. 
- loss_mask (:obj:`mmcv.ConfigDict` | dict): Config of the mask loss. - Defaults to None. - loss_dice (:obj:`mmcv.ConfigDict` | dict): Config of the dice loss. - Defaults to None. - train_cfg (:obj:`mmcv.ConfigDict` | dict): Training config of - Mask2Former head. - test_cfg (:obj:`mmcv.ConfigDict` | dict): Testing config of - Mask2Former head. - init_cfg (dict or list[dict], optional): Initialization config dict. - Defaults to None. - """ - - def __init__(self, - in_channels, - feat_channels, - out_channels, - num_things_classes=80, - num_stuff_classes=53, - num_queries=100, - num_transformer_feat_level=3, - pixel_decoder=None, - enforce_decoder_input_project=False, - transformer_decoder=None, - positional_encoding=None, - loss_cls=None, - loss_mask=None, - loss_dice=None, - train_cfg=None, - test_cfg=None, - init_cfg=None, - **kwargs): - super(AnchorFreeHead, self).__init__(init_cfg) - self.num_things_classes = num_things_classes - self.num_stuff_classes = num_stuff_classes - self.num_classes = self.num_things_classes + self.num_stuff_classes - self.num_queries = num_queries - self.num_transformer_feat_level = num_transformer_feat_level - self.num_heads = transformer_decoder.transformerlayers.\ - attn_cfgs.num_heads - self.num_transformer_decoder_layers = transformer_decoder.num_layers - assert pixel_decoder.encoder.transformerlayers.\ - attn_cfgs.num_levels == num_transformer_feat_level - pixel_decoder_ = copy.deepcopy(pixel_decoder) - pixel_decoder_.update( - in_channels=in_channels, - feat_channels=feat_channels, - out_channels=out_channels) - self.pixel_decoder = build_plugin_layer(pixel_decoder_)[1] - self.transformer_decoder = build_transformer_layer_sequence( - transformer_decoder) - self.decoder_embed_dims = self.transformer_decoder.embed_dims - - self.decoder_input_projs = ModuleList() - # from low resolution to high resolution - for _ in range(num_transformer_feat_level): - if (self.decoder_embed_dims != feat_channels - or enforce_decoder_input_project): - self.decoder_input_projs.append( - Conv2d( - feat_channels, self.decoder_embed_dims, kernel_size=1)) - else: - self.decoder_input_projs.append(nn.Identity()) - self.decoder_positional_encoding = build_positional_encoding( - positional_encoding) - self.query_embed = nn.Embedding(self.num_queries, feat_channels) - self.query_feat = nn.Embedding(self.num_queries, feat_channels) - # from low resolution to high resolution - self.level_embed = nn.Embedding(self.num_transformer_feat_level, - feat_channels) - - self.cls_embed = nn.Linear(feat_channels, self.num_classes + 1) - self.mask_embed = nn.Sequential( - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, feat_channels), nn.ReLU(inplace=True), - nn.Linear(feat_channels, out_channels)) - - self.test_cfg = test_cfg - self.train_cfg = train_cfg - if train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - self.sampler = build_sampler(self.train_cfg.sampler, context=self) - self.num_points = self.train_cfg.get('num_points', 12544) - self.oversample_ratio = self.train_cfg.get('oversample_ratio', 3.0) - self.importance_sample_ratio = self.train_cfg.get( - 'importance_sample_ratio', 0.75) - - self.class_weight = loss_cls.class_weight - self.loss_cls = build_loss(loss_cls) - self.loss_mask = build_loss(loss_mask) - self.loss_dice = build_loss(loss_dice) - - def init_weights(self): - for m in self.decoder_input_projs: - if isinstance(m, Conv2d): - caffe2_xavier_init(m, bias=0) - - self.pixel_decoder.init_weights() - - for p in 
self.transformer_decoder.parameters(): - if p.dim() > 1: - nn.init.xavier_normal_(p) - - def _get_target_single(self, cls_score, mask_pred, gt_labels, gt_masks, - img_metas): - """Compute classification and mask targets for one image. - - Args: - cls_score (Tensor): Mask score logits from a single decoder layer - for one image. Shape (num_queries, cls_out_channels). - mask_pred (Tensor): Mask logits for a single decoder layer for one - image. Shape (num_queries, h, w). - gt_labels (Tensor): Ground truth class indices for one image with - shape (num_gts, ). - gt_masks (Tensor): Ground truth mask for each image, each with - shape (num_gts, h, w). - img_metas (dict): Image informtation. - - Returns: - tuple[Tensor]: A tuple containing the following for one image. - - - labels (Tensor): Labels of each image. \ - shape (num_queries, ). - - label_weights (Tensor): Label weights of each image. \ - shape (num_queries, ). - - mask_targets (Tensor): Mask targets of each image. \ - shape (num_queries, h, w). - - mask_weights (Tensor): Mask weights of each image. \ - shape (num_queries, ). - - pos_inds (Tensor): Sampled positive indices for each \ - image. - - neg_inds (Tensor): Sampled negative indices for each \ - image. - """ - # sample points - num_queries = cls_score.shape[0] - num_gts = gt_labels.shape[0] - - point_coords = torch.rand((1, self.num_points, 2), - device=cls_score.device) - # shape (num_queries, num_points) - mask_points_pred = point_sample( - mask_pred.unsqueeze(1), point_coords.repeat(num_queries, 1, - 1)).squeeze(1) - # shape (num_gts, num_points) - gt_points_masks = point_sample( - gt_masks.unsqueeze(1).float(), point_coords.repeat(num_gts, 1, - 1)).squeeze(1) - - # assign and sample - assign_result = self.assigner.assign(cls_score, mask_points_pred, - gt_labels, gt_points_masks, - img_metas) - sampling_result = self.sampler.sample(assign_result, mask_pred, - gt_masks) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label target - labels = gt_labels.new_full((self.num_queries, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_labels.new_ones((self.num_queries, )) - - # mask target - mask_targets = gt_masks[sampling_result.pos_assigned_gt_inds] - mask_weights = mask_pred.new_zeros((self.num_queries, )) - mask_weights[pos_inds] = 1.0 - - return (labels, label_weights, mask_targets, mask_weights, pos_inds, - neg_inds) - - def loss_single(self, cls_scores, mask_preds, gt_labels_list, - gt_masks_list, img_metas): - """Loss function for outputs from a single decoder layer. - - Args: - cls_scores (Tensor): Mask score logits from a single decoder layer - for all images. Shape (batch_size, num_queries, - cls_out_channels). Note `cls_out_channels` should includes - background. - mask_preds (Tensor): Mask logits for a pixel decoder for all - images. Shape (batch_size, num_queries, h, w). - gt_labels_list (list[Tensor]): Ground truth class indices for each - image, each with shape (num_gts, ). - gt_masks_list (list[Tensor]): Ground truth mask for each image, - each with shape (num_gts, h, w). - img_metas (list[dict]): List of image meta information. - - Returns: - tuple[Tensor]: Loss components for outputs from a single \ - decoder layer. 
- """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - mask_preds_list = [mask_preds[i] for i in range(num_imgs)] - (labels_list, label_weights_list, mask_targets_list, mask_weights_list, - num_total_pos, - num_total_neg) = self.get_targets(cls_scores_list, mask_preds_list, - gt_labels_list, gt_masks_list, - img_metas) - # shape (batch_size, num_queries) - labels = torch.stack(labels_list, dim=0) - # shape (batch_size, num_queries) - label_weights = torch.stack(label_weights_list, dim=0) - # shape (num_total_gts, h, w) - mask_targets = torch.cat(mask_targets_list, dim=0) - # shape (batch_size, num_queries) - mask_weights = torch.stack(mask_weights_list, dim=0) - - # classfication loss - # shape (batch_size * num_queries, ) - cls_scores = cls_scores.flatten(0, 1) - labels = labels.flatten(0, 1) - label_weights = label_weights.flatten(0, 1) - - class_weight = cls_scores.new_tensor(self.class_weight) - loss_cls = self.loss_cls( - cls_scores, - labels, - label_weights, - avg_factor=class_weight[labels].sum()) - - num_total_masks = reduce_mean(cls_scores.new_tensor([num_total_pos])) - num_total_masks = max(num_total_masks, 1) - - # extract positive ones - # shape (batch_size, num_queries, h, w) -> (num_total_gts, h, w) - mask_preds = mask_preds[mask_weights > 0] - - if mask_targets.shape[0] == 0: - # zero match - loss_dice = mask_preds.sum() - loss_mask = mask_preds.sum() - return loss_cls, loss_mask, loss_dice - - with torch.no_grad(): - points_coords = get_uncertain_point_coords_with_randomness( - mask_preds.unsqueeze(1), None, self.num_points, - self.oversample_ratio, self.importance_sample_ratio) - # shape (num_total_gts, h, w) -> (num_total_gts, num_points) - mask_point_targets = point_sample( - mask_targets.unsqueeze(1).float(), points_coords).squeeze(1) - # shape (num_queries, h, w) -> (num_queries, num_points) - mask_point_preds = point_sample( - mask_preds.unsqueeze(1), points_coords).squeeze(1) - - # dice loss - loss_dice = self.loss_dice( - mask_point_preds, mask_point_targets, avg_factor=num_total_masks) - - # mask loss - # shape (num_queries, num_points) -> (num_queries * num_points, ) - mask_point_preds = mask_point_preds.reshape(-1) - # shape (num_total_gts, num_points) -> (num_total_gts * num_points, ) - mask_point_targets = mask_point_targets.reshape(-1) - loss_mask = self.loss_mask( - mask_point_preds, - mask_point_targets, - avg_factor=num_total_masks * self.num_points) - - return loss_cls, loss_mask, loss_dice - - def forward_head(self, decoder_out, mask_feature, attn_mask_target_size): - """Forward for head part which is called after every decoder layer. - - Args: - decoder_out (Tensor): in shape (num_queries, batch_size, c). - mask_feature (Tensor): in shape (batch_size, c, h, w). - attn_mask_target_size (tuple[int, int]): target attention - mask size. - - Returns: - tuple: A tuple contain three elements. - - - cls_pred (Tensor): Classification scores in shape \ - (batch_size, num_queries, cls_out_channels). \ - Note `cls_out_channels` should includes background. - - mask_pred (Tensor): Mask scores in shape \ - (batch_size, num_queries,h, w). - - attn_mask (Tensor): Attention mask in shape \ - (batch_size * num_heads, num_queries, h, w). 
- """ - decoder_out = self.transformer_decoder.post_norm(decoder_out) - decoder_out = decoder_out.transpose(0, 1) - # shape (batch_size, num_queries, c) - cls_pred = self.cls_embed(decoder_out) - # shape (batch_size, num_queries, c) - mask_embed = self.mask_embed(decoder_out) - # shape (batch_size, num_queries, h, w) - mask_pred = torch.einsum('bqc,bchw->bqhw', mask_embed, mask_feature) - attn_mask = F.interpolate( - mask_pred, - attn_mask_target_size, - mode='bilinear', - align_corners=False) - # shape (batch_size, num_queries, h, w) -> - # (batch_size * num_head, num_queries, h*w) - attn_mask = attn_mask.flatten(2).unsqueeze(1).repeat( - (1, self.num_heads, 1, 1)).flatten(0, 1) - attn_mask = attn_mask.sigmoid() < 0.5 - attn_mask = attn_mask.detach() - - return cls_pred, mask_pred, attn_mask - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (list[Tensor]): Multi scale Features from the - upstream network, each is a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple: A tuple contains two elements. - - - cls_pred_list (list[Tensor)]: Classification logits \ - for each decoder layer. Each is a 3D-tensor with shape \ - (batch_size, num_queries, cls_out_channels). \ - Note `cls_out_channels` should includes background. - - mask_pred_list (list[Tensor]): Mask logits for each \ - decoder layer. Each with shape (batch_size, num_queries, \ - h, w). - """ - batch_size = len(img_metas) - mask_features, multi_scale_memorys = self.pixel_decoder(feats) - # multi_scale_memorys (from low resolution to high resolution) - decoder_inputs = [] - decoder_positional_encodings = [] - for i in range(self.num_transformer_feat_level): - decoder_input = self.decoder_input_projs[i](multi_scale_memorys[i]) - # shape (batch_size, c, h, w) -> (h*w, batch_size, c) - decoder_input = decoder_input.flatten(2).permute(2, 0, 1) - level_embed = self.level_embed.weight[i].view(1, 1, -1) - decoder_input = decoder_input + level_embed - # shape (batch_size, c, h, w) -> (h*w, batch_size, c) - mask = decoder_input.new_zeros( - (batch_size, ) + multi_scale_memorys[i].shape[-2:], - dtype=torch.bool) - decoder_positional_encoding = self.decoder_positional_encoding( - mask) - decoder_positional_encoding = decoder_positional_encoding.flatten( - 2).permute(2, 0, 1) - decoder_inputs.append(decoder_input) - decoder_positional_encodings.append(decoder_positional_encoding) - # shape (num_queries, c) -> (num_queries, batch_size, c) - query_feat = self.query_feat.weight.unsqueeze(1).repeat( - (1, batch_size, 1)) - query_embed = self.query_embed.weight.unsqueeze(1).repeat( - (1, batch_size, 1)) - - cls_pred_list = [] - mask_pred_list = [] - cls_pred, mask_pred, attn_mask = self.forward_head( - query_feat, mask_features, multi_scale_memorys[0].shape[-2:]) - cls_pred_list.append(cls_pred) - mask_pred_list.append(mask_pred) - - for i in range(self.num_transformer_decoder_layers): - level_idx = i % self.num_transformer_feat_level - # if a mask is all True(all background), then set it all False. 
- attn_mask[torch.where( - attn_mask.sum(-1) == attn_mask.shape[-1])] = False - - # cross_attn + self_attn - layer = self.transformer_decoder.layers[i] - attn_masks = [attn_mask, None] - query_feat = layer( - query=query_feat, - key=decoder_inputs[level_idx], - value=decoder_inputs[level_idx], - query_pos=query_embed, - key_pos=decoder_positional_encodings[level_idx], - attn_masks=attn_masks, - query_key_padding_mask=None, - # here we do not apply masking on padded region - key_padding_mask=None) - cls_pred, mask_pred, attn_mask = self.forward_head( - query_feat, mask_features, multi_scale_memorys[ - (i + 1) % self.num_transformer_feat_level].shape[-2:]) - - cls_pred_list.append(cls_pred) - mask_pred_list.append(mask_pred) - - return cls_pred_list, mask_pred_list diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/configs/_base_/datasets/coco_panoptic.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/configs/_base_/datasets/coco_panoptic.py deleted file mode 100644 index dbade7c0ac20141806b93f0ea7b5ca26d748246e..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/configs/_base_/datasets/coco_panoptic.py +++ /dev/null @@ -1,59 +0,0 @@ -# dataset settings -dataset_type = 'CocoPanopticDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadPanopticAnnotations', - with_bbox=True, - with_mask=True, - with_seg=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='SegRescale', scale_factor=1 / 4), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=data_root + 'annotations/panoptic_train2017.json', - img_prefix=data_root + 'train2017/', - seg_prefix=data_root + 'annotations/panoptic_train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=data_root + 'annotations/panoptic_val2017.json', - img_prefix=data_root + 'val2017/', - seg_prefix=data_root + 'annotations/panoptic_val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=data_root + 'annotations/panoptic_val2017.json', - img_prefix=data_root + 'val2017/', - seg_prefix=data_root + 'annotations/panoptic_val2017/', - pipeline=test_pipeline)) -evaluation = dict(interval=1, metric=['PQ']) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Frenzal Rhomb-Punch In The Face How to Play the Guitar Riff.md b/spaces/rorallitri/biomedical-language-models/logs/Frenzal Rhomb-Punch In The Face How to Play the Guitar Riff.md deleted file mode 100644 index a83e4cff0374f62fc96853062838ac2c857418a5..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Frenzal Rhomb-Punch In The Face How to Play the Guitar Riff.md +++ 
/dev/null @@ -1,6 +0,0 @@ -

        Frenzal Rhomb-Punch In The Face mp3


        Download » https://tinurll.com/2uzm8d



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Kitab Al Kimya Pdf 82 The Ultimate Guide to Tibb and Hikmat.md b/spaces/rorallitri/biomedical-language-models/logs/Kitab Al Kimya Pdf 82 The Ultimate Guide to Tibb and Hikmat.md deleted file mode 100644 index 0c1c2079f52ed203a64e7fea3ae337ff524b7bb3..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Kitab Al Kimya Pdf 82 The Ultimate Guide to Tibb and Hikmat.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Kitab Al Kimya Pdf 82


        Download Filehttps://tinurll.com/2uzm9N



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/utils/autocast.py b/spaces/rstallman/Mayfair-Partner-Music/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. - kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/sergiomar73/nlp-gpt3-zero-shot-classification-app/README.md b/spaces/sergiomar73/nlp-gpt3-zero-shot-classification-app/README.md deleted file mode 100644 index 383e9e1c5317bea825584027e90c64794c94f202..0000000000000000000000000000000000000000 --- a/spaces/sergiomar73/nlp-gpt3-zero-shot-classification-app/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Nlp Gpt3 Zero Shot Classification App -emoji: 🏢 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: unlicense ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shi-labs/OneFormer/oneformer/utils/box_ops.py b/spaces/shi-labs/OneFormer/oneformer/utils/box_ops.py deleted file mode 100644 index a2b62ad99ed1fc35cdb10a9e11acdeb0ff1abcc4..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/OneFormer/oneformer/utils/box_ops.py +++ /dev/null @@ -1,133 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Utilities for bounding box manipulation and GIoU. 
-""" -import torch, os -from torchvision.ops.boxes import box_area - - -def box_cxcywh_to_xyxy(x): - x_c, y_c, w, h = x.unbind(-1) - b = [(x_c - 0.5 * w), (y_c - 0.5 * h), - (x_c + 0.5 * w), (y_c + 0.5 * h)] - return torch.stack(b, dim=-1) - - -def box_xyxy_to_cxcywh(x): - x0, y0, x1, y1 = x.unbind(-1) - b = [(x0 + x1) / 2, (y0 + y1) / 2, - (x1 - x0), (y1 - y0)] - return torch.stack(b, dim=-1) - - -# modified from torchvision to also return the union -def box_iou(boxes1, boxes2): - area1 = box_area(boxes1) - area2 = box_area(boxes2) - - # import ipdb; ipdb.set_trace() - lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2] - rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2] - - wh = (rb - lt).clamp(min=0) # [N,M,2] - inter = wh[:, :, 0] * wh[:, :, 1] # [N,M] - - union = area1[:, None] + area2 - inter - - iou = inter / (union + 1e-6) - return iou, union - - -def generalized_box_iou(boxes1, boxes2): - """ - Generalized IoU from https://giou.stanford.edu/ - The boxes should be in [x0, y0, x1, y1] format - Returns a [N, M] pairwise matrix, where N = len(boxes1) - and M = len(boxes2) - """ - # degenerate boxes gives inf / nan results - # so do an early check - assert (boxes1[:, 2:] >= boxes1[:, :2]).all() - assert (boxes2[:, 2:] >= boxes2[:, :2]).all() - # except: - # import ipdb; ipdb.set_trace() - iou, union = box_iou(boxes1, boxes2) - - lt = torch.min(boxes1[:, None, :2], boxes2[:, :2]) - rb = torch.max(boxes1[:, None, 2:], boxes2[:, 2:]) - - wh = (rb - lt).clamp(min=0) # [N,M,2] - area = wh[:, :, 0] * wh[:, :, 1] - - return iou - (area - union) / (area + 1e-6) - - - -# modified from torchvision to also return the union -def box_iou_pairwise(boxes1, boxes2): - area1 = box_area(boxes1) - area2 = box_area(boxes2) - - lt = torch.max(boxes1[:, :2], boxes2[:, :2]) # [N,2] - rb = torch.min(boxes1[:, 2:], boxes2[:, 2:]) # [N,2] - - wh = (rb - lt).clamp(min=0) # [N,2] - inter = wh[:, 0] * wh[:, 1] # [N] - - union = area1 + area2 - inter - - iou = inter / union - return iou, union - - -def generalized_box_iou_pairwise(boxes1, boxes2): - """ - Generalized IoU from https://giou.stanford.edu/ - Input: - - boxes1, boxes2: N,4 - Output: - - giou: N, 4 - """ - # degenerate boxes gives inf / nan results - # so do an early check - assert (boxes1[:, 2:] >= boxes1[:, :2]).all() - assert (boxes2[:, 2:] >= boxes2[:, :2]).all() - assert boxes1.shape == boxes2.shape - iou, union = box_iou_pairwise(boxes1, boxes2) # N, 4 - - lt = torch.min(boxes1[:, :2], boxes2[:, :2]) - rb = torch.max(boxes1[:, 2:], boxes2[:, 2:]) - - wh = (rb - lt).clamp(min=0) # [N,2] - area = wh[:, 0] * wh[:, 1] - - return iou - (area - union) / area - -def masks_to_boxes(masks): - """Compute the bounding boxes around the provided masks - The masks should be in format [N, H, W] where N is the number of masks, (H, W) are the spatial dimensions. 
- Returns a [N, 4] tensors, with the boxes in xyxy format - """ - if masks.numel() == 0: - return torch.zeros((0, 4), device=masks.device) - - h, w = masks.shape[-2:] - - y = torch.arange(0, h, dtype=torch.float) - x = torch.arange(0, w, dtype=torch.float) - y, x = torch.meshgrid(y, x) - - x_mask = (masks * x.unsqueeze(0)) - x_max = x_mask.flatten(1).max(-1)[0] - x_min = x_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - y_mask = (masks * y.unsqueeze(0)) - y_max = y_mask.flatten(1).max(-1)[0] - y_min = y_mask.masked_fill(~(masks.bool()), 1e8).flatten(1).min(-1)[0] - - return torch.stack([x_min, y_min, x_max, y_max], 1) - -if __name__ == '__main__': - x = torch.rand(5, 4) - y = torch.rand(3, 4) - iou, union = box_iou(x, y) \ No newline at end of file diff --git a/spaces/siddh4rth/narrify/README.md b/spaces/siddh4rth/narrify/README.md deleted file mode 100644 index 6d66f9547f8fc357732941719532d403b83bf2ea..0000000000000000000000000000000000000000 --- a/spaces/siddh4rth/narrify/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Narrify -emoji: 📉 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/silentAw404/bot.py/README.md b/spaces/silentAw404/bot.py/README.md deleted file mode 100644 index 3b249b65c37f71aff4fec75535dbf3bbafea9349..0000000000000000000000000000000000000000 --- a/spaces/silentAw404/bot.py/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bot.py -emoji: 📈 -colorFrom: purple -colorTo: green -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/__init__.py b/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/__init__.py deleted file mode 100644 index 26e5749f5ff75472f2d9e67a2346633527d603d6..0000000000000000000000000000000000000000 --- a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/sstan_models/__init__.py +++ /dev/null @@ -1,44 +0,0 @@ -""" -Copyright (C) 2019 NVIDIA Corporation. All rights reserved. -Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). -""" - -import importlib -import torch - - -def find_model_using_name(model_name): - # Given the option --model [modelname], - # the file "sstan_models/modelname_model.py" - # will be imported. - model_filename = "sstan_models." + model_name + "_model" - modellib = importlib.import_module(model_filename) - - # In the file, the class called ModelNameModel() will - # be instantiated. It has to be a subclass of torch.nn.Module, - # and it is case-insensitive. - model = None - target_model_name = model_name.replace('_', '') + 'model' - for name, cls in modellib.__dict__.items(): - if name.lower() == target_model_name.lower() \ - and issubclass(cls, torch.nn.Module): - model = cls - - if model is None: - print("In %s.py, there should be a subclass of torch.nn.Module with class name that matches %s in lowercase." 
% (model_filename, target_model_name)) - exit(0) - - return model - - -def get_option_setter(model_name): - model_class = find_model_using_name(model_name) - return model_class.modify_commandline_options - - -def create_model(opt): - model = find_model_using_name(opt.model) - instance = model(opt) - print("model [%s] was created" % (type(instance).__name__)) - - return instance diff --git a/spaces/smangrul/Text-To-Image/README.md b/spaces/smangrul/Text-To-Image/README.md deleted file mode 100644 index 2f027afa9042f9a09c5cb58cfecb2a3430436505..0000000000000000000000000000000000000000 --- a/spaces/smangrul/Text-To-Image/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Text To Image -emoji: 🤗 -colorFrom: blue -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/sparanoid/milky-green-sovits-4/inference_main.py b/spaces/sparanoid/milky-green-sovits-4/inference_main.py deleted file mode 100644 index 80a470ea9146f1f75e785411dd5d3b6fade64b70..0000000000000000000000000000000000000000 --- a/spaces/sparanoid/milky-green-sovits-4/inference_main.py +++ /dev/null @@ -1,100 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import matplotlib.pyplot as plt -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - - - -def main(): - import argparse - - parser = argparse.ArgumentParser(description='sovits4 inference') - - # 一定要设置的部分 - parser.add_argument('-m', '--model_path', type=str, default="/Volumes/Extend/下载/G_20800.pth", help='模型路径') - parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='配置文件路径') - parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src"], help='wav文件名列表,放在raw文件夹下') - parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='音高调整,支持正负(半音)') - parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nyaru'], help='合成目标说话人名称') - - # 可选项部分 - parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False, - help='语音转换自动预测音高,转换歌声时不要打开这个会严重跑调') - parser.add_argument('-cm', '--cluster_model_path', type=str, default="/Volumes/Extend/下载/so-vits-svc-4.0/logs/44k/kmeans_10000.pt", help='聚类模型路径,如果没有训练聚类则随便填') - parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=1, help='聚类方案占比,范围0-1,若没有训练聚类模型则填0即可') - - # 不用动的部分 - parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50') - parser.add_argument('-d', '--device', type=str, default=None, 
help='推理设备,None则为自动选择cpu和gpu') - parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='噪音级别,会影响咬字和音质,较为玄学') - parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现') - parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='音频输出格式') - - args = parser.parse_args() - - svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path) - infer_tool.mkdir(["raw", "results"]) - clean_names = args.clean_names - trans = args.trans - spk_list = args.spk_list - slice_db = args.slice_db - wav_format = args.wav_format - auto_predict_f0 = args.auto_predict_f0 - cluster_infer_ratio = args.cluster_infer_ratio - noice_scale = args.noice_scale - pad_seconds = args.pad_seconds - - infer_tool.fill_a_to_b(trans, clean_names) - for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])]) - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale - ) - _audio = out_audio.cpu().numpy() - - pad_len = int(svc_model.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - audio.extend(list(_audio)) - key = "auto" if auto_predict_f0 else f"{tran}key" - cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}" - res_path = f'./results/old——{clean_name}_{key}_{spk}{cluster_name}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) - -if __name__ == '__main__': - main() diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/byte_level_bpe/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/byte_level_bpe/README.md deleted file mode 100644 index 657092660eae42d20f67647417623b8b8cb7b66c..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/byte_level_bpe/README.md +++ /dev/null @@ -1,88 +0,0 @@ -# Neural Machine Translation with Byte-Level Subwords - -https://arxiv.org/abs/1909.03341 - -We provide an implementation of byte-level byte-pair encoding (BBPE), taking IWSLT 2017 Fr-En translation as -example. 
- -## Data -Get data and generate fairseq binary dataset: -```bash -bash ./get_data.sh -``` - -## Model Training -Train Transformer model with Bi-GRU embedding contextualization (implemented in `gru_transformer.py`): -```bash -# VOCAB=bytes -# VOCAB=chars -VOCAB=bbpe2048 -# VOCAB=bpe2048 -# VOCAB=bbpe4096 -# VOCAB=bpe4096 -# VOCAB=bpe16384 -``` -```bash -fairseq-train "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --arch gru_transformer --encoder-layers 2 --decoder-layers 2 --dropout 0.3 --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --log-format 'simple' --log-interval 100 --save-dir "checkpoints/${VOCAB}" \ - --batch-size 100 --max-update 100000 --update-freq 2 -``` - -## Generation -`fairseq-generate` requires bytes (BBPE) decoder to convert byte-level representation back to characters: -```bash -# BPE=--bpe bytes -# BPE=--bpe characters -BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe2048.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe2048.model -# BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe16384.model -``` - -```bash -fairseq-generate "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --source-lang fr --gen-subset test --sacrebleu --path "checkpoints/${VOCAB}/checkpoint_last.pt" \ - --tokenizer moses --moses-target-lang en ${BPE} -``` -When using `fairseq-interactive`, bytes (BBPE) encoder/decoder is required to tokenize input data and detokenize model predictions: -```bash -fairseq-interactive "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --path "checkpoints/${VOCAB}/checkpoint_last.pt" --input data/test.fr --tokenizer moses --moses-source-lang fr \ - --moses-target-lang en ${BPE} --buffer-size 1000 --max-tokens 10000 -``` - -## Results -| Vocabulary | Model | BLEU | -|:-------------:|:-------------:|:-------------:| -| Joint BPE 16k ([Kudo, 2018](https://arxiv.org/abs/1804.10959)) | 512d LSTM 2+2 | 33.81 | -| Joint BPE 16k | Transformer base 2+2 (w/ GRU) | 36.64 (36.72) | -| Joint BPE 4k | Transformer base 2+2 (w/ GRU) | 35.49 (36.10) | -| Joint BBPE 4k | Transformer base 2+2 (w/ GRU) | 35.61 (35.82) | -| Joint BPE 2k | Transformer base 2+2 (w/ GRU) | 34.87 (36.13) | -| Joint BBPE 2k | Transformer base 2+2 (w/ GRU) | 34.98 (35.43) | -| Characters | Transformer base 2+2 (w/ GRU) | 31.78 (33.30) | -| Bytes | Transformer base 2+2 (w/ GRU) | 31.57 (33.62) | - - -## Citation -``` -@misc{wang2019neural, - title={Neural Machine Translation with Byte-Level Subwords}, - author={Changhan Wang and Kyunghyun Cho and Jiatao Gu}, - year={2019}, - eprint={1909.03341}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` - - -## Contact -Changhan Wang ([changhan@fb.com](mailto:changhan@fb.com)), -Kyunghyun Cho ([kyunghyuncho@fb.com](mailto:kyunghyuncho@fb.com)), -Jiatao Gu ([jgu@fb.com](mailto:jgu@fb.com)) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/audio/hubert_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/audio/hubert_dataset.py deleted file mode 100644 index 
f00fe301a64a8740ed3ce07e44f6774edb933926..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/audio/hubert_dataset.py +++ /dev/null @@ -1,358 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import itertools -import logging -import os -import sys -from typing import Any, List, Optional, Union - -import numpy as np - -import torch -import torch.nn.functional as F -from fairseq.data import data_utils -from fairseq.data.fairseq_dataset import FairseqDataset - -logger = logging.getLogger(__name__) - - -def load_audio(manifest_path, max_keep, min_keep): - n_long, n_short = 0, 0 - names, inds, sizes = [], [], [] - with open(manifest_path) as f: - root = f.readline().strip() - for ind, line in enumerate(f): - items = line.strip().split("\t") - assert len(items) == 2, line - sz = int(items[1]) - if min_keep is not None and sz < min_keep: - n_short += 1 - elif max_keep is not None and sz > max_keep: - n_long += 1 - else: - names.append(items[0]) - inds.append(ind) - sizes.append(sz) - tot = ind + 1 - logger.info( - ( - f"max_keep={max_keep}, min_keep={min_keep}, " - f"loaded {len(names)}, skipped {n_short} short and {n_long} long, " - f"longest-loaded={max(sizes)}, shortest-loaded={min(sizes)}" - ) - ) - return root, names, inds, tot, sizes - - -def load_label(label_path, inds, tot): - with open(label_path) as f: - labels = [line.rstrip() for line in f] - assert ( - len(labels) == tot - ), f"number of labels does not match ({len(labels)} != {tot})" - labels = [labels[i] for i in inds] - return labels - - -def load_label_offset(label_path, inds, tot): - with open(label_path) as f: - code_lengths = [len(line.encode("utf-8")) for line in f] - assert ( - len(code_lengths) == tot - ), f"number of labels does not match ({len(code_lengths)} != {tot})" - offsets = list(itertools.accumulate([0] + code_lengths)) - offsets = [(offsets[i], offsets[i + 1]) for i in inds] - return offsets - - -def verify_label_lengths( - audio_sizes, - audio_rate, - label_path, - label_rate, - inds, - tot, - tol=0.1, # tolerance in seconds -): - if label_rate < 0: - logger.info(f"{label_path} is sequence label. skipped") - return - - with open(label_path) as f: - lengths = [len(line.rstrip().split()) for line in f] - assert len(lengths) == tot - lengths = [lengths[i] for i in inds] - num_invalid = 0 - for i, ind in enumerate(inds): - dur_from_audio = audio_sizes[i] / audio_rate - dur_from_label = lengths[i] / label_rate - if abs(dur_from_audio - dur_from_label) > tol: - logger.warning( - ( - f"audio and label duration differ too much " - f"(|{dur_from_audio} - {dur_from_label}| > {tol}) " - f"in line {ind+1} of {label_path}. Check if `label_rate` " - f"is correctly set (currently {label_rate}). " - f"num. 
of samples = {audio_sizes[i]}; " - f"label length = {lengths[i]}" - ) - ) - num_invalid += 1 - if num_invalid > 0: - logger.warning( - f"total {num_invalid} (audio, label) pairs with mismatched lengths" - ) - - -class HubertDataset(FairseqDataset): - def __init__( - self, - manifest_path: str, - sample_rate: float, - label_paths: List[str], - label_rates: Union[List[float], float], # -1 for sequence labels - pad_list: List[str], - eos_list: List[str], - label_processors: Optional[List[Any]] = None, - max_keep_sample_size: Optional[int] = None, - min_keep_sample_size: Optional[int] = None, - max_sample_size: Optional[int] = None, - shuffle: bool = True, - pad_audio: bool = False, - normalize: bool = False, - store_labels: bool = True, - random_crop: bool = False, - single_target: bool = False, - ): - self.audio_root, self.audio_names, inds, tot, self.sizes = load_audio( - manifest_path, max_keep_sample_size, min_keep_sample_size - ) - self.sample_rate = sample_rate - self.shuffle = shuffle - self.random_crop = random_crop - - self.num_labels = len(label_paths) - self.pad_list = pad_list - self.eos_list = eos_list - self.label_processors = label_processors - self.single_target = single_target - self.label_rates = ( - [label_rates for _ in range(len(label_paths))] - if isinstance(label_rates, int) - else label_rates - ) - self.store_labels = store_labels - if store_labels: - self.label_list = [load_label(p, inds, tot) for p in label_paths] - else: - self.label_paths = label_paths - self.label_offsets_list = [ - load_label_offset(p, inds, tot) for p in label_paths - ] - assert ( - label_processors is None - or len(label_processors) == self.num_labels - ) - for label_path, label_rate in zip(label_paths, self.label_rates): - verify_label_lengths( - self.sizes, sample_rate, label_path, label_rate, inds, tot - ) - - self.max_sample_size = ( - max_sample_size if max_sample_size is not None else sys.maxsize - ) - self.pad_audio = pad_audio - self.normalize = normalize - logger.info( - f"pad_audio={pad_audio}, random_crop={random_crop}, " - f"normalize={normalize}, max_sample_size={self.max_sample_size}" - ) - - def get_audio(self, index): - import soundfile as sf - - wav_path = os.path.join(self.audio_root, self.audio_names[index]) - wav, cur_sample_rate = sf.read(wav_path) - wav = torch.from_numpy(wav).float() - wav = self.postprocess(wav, cur_sample_rate) - return wav - - def get_label(self, index, label_idx): - if self.store_labels: - label = self.label_list[label_idx][index] - else: - with open(self.label_paths[label_idx]) as f: - offset_s, offset_e = self.label_offsets_list[label_idx][index] - f.seek(offset_s) - label = f.read(offset_e - offset_s) - - if self.label_processors is not None: - label = self.label_processors[label_idx](label) - return label - - def get_labels(self, index): - return [self.get_label(index, i) for i in range(self.num_labels)] - - def __getitem__(self, index): - wav = self.get_audio(index) - labels = self.get_labels(index) - return {"id": index, "source": wav, "label_list": labels} - - def __len__(self): - return len(self.sizes) - - def crop_to_max_size(self, wav, target_size): - size = len(wav) - diff = size - target_size - if diff <= 0: - return wav, 0 - - start, end = 0, target_size - if self.random_crop: - start = np.random.randint(0, diff + 1) - end = size - diff + start - return wav[start:end], start - - def collater(self, samples): - # target = max(sizes) -> random_crop not used - # target = max_sample_size -> random_crop used for long - samples = [s for s in 
samples if s["source"] is not None] - if len(samples) == 0: - return {} - - audios = [s["source"] for s in samples] - audio_sizes = [len(s) for s in audios] - if self.pad_audio: - audio_size = min(max(audio_sizes), self.max_sample_size) - else: - audio_size = min(min(audio_sizes), self.max_sample_size) - collated_audios, padding_mask, audio_starts = self.collater_audio( - audios, audio_size - ) - - targets_by_label = [ - [s["label_list"][i] for s in samples] - for i in range(self.num_labels) - ] - targets_list, lengths_list, ntokens_list = self.collater_label( - targets_by_label, audio_size, audio_starts - ) - - net_input = {"source": collated_audios, "padding_mask": padding_mask} - batch = { - "id": torch.LongTensor([s["id"] for s in samples]), - "net_input": net_input, - } - - if self.single_target: - batch["target_lengths"] = lengths_list[0] - batch["ntokens"] = ntokens_list[0] - batch["target"] = targets_list[0] - else: - batch["target_lengths_list"] = lengths_list - batch["ntokens_list"] = ntokens_list - batch["target_list"] = targets_list - return batch - - def collater_audio(self, audios, audio_size): - collated_audios = audios[0].new_zeros(len(audios), audio_size) - padding_mask = ( - torch.BoolTensor(collated_audios.shape).fill_(False) - # if self.pad_audio else None - ) - audio_starts = [0 for _ in audios] - for i, audio in enumerate(audios): - diff = len(audio) - audio_size - if diff == 0: - collated_audios[i] = audio - elif diff < 0: - assert self.pad_audio - collated_audios[i] = torch.cat( - [audio, audio.new_full((-diff,), 0.0)] - ) - padding_mask[i, diff:] = True - else: - collated_audios[i], audio_starts[i] = self.crop_to_max_size( - audio, audio_size - ) - return collated_audios, padding_mask, audio_starts - - def collater_frm_label( - self, targets, audio_size, audio_starts, label_rate, pad - ): - assert label_rate > 0 - s2f = label_rate / self.sample_rate - frm_starts = [int(round(s * s2f)) for s in audio_starts] - frm_size = int(round(audio_size * s2f)) - if not self.pad_audio: - rem_size = [len(t) - s for t, s in zip(targets, frm_starts)] - frm_size = min(frm_size, *rem_size) - targets = [t[s: s + frm_size] for t, s in zip(targets, frm_starts)] - logger.debug(f"audio_starts={audio_starts}") - logger.debug(f"frame_starts={frm_starts}") - logger.debug(f"frame_size={frm_size}") - - lengths = torch.LongTensor([len(t) for t in targets]) - ntokens = lengths.sum().item() - targets = data_utils.collate_tokens( - targets, pad_idx=pad, left_pad=False - ) - return targets, lengths, ntokens - - def collater_seq_label(self, targets, pad): - lengths = torch.LongTensor([len(t) for t in targets]) - ntokens = lengths.sum().item() - targets = data_utils.collate_tokens( - targets, pad_idx=pad, left_pad=False - ) - return targets, lengths, ntokens - - def collater_label(self, targets_by_label, audio_size, audio_starts): - targets_list, lengths_list, ntokens_list = [], [], [] - itr = zip(targets_by_label, self.label_rates, self.pad_list) - for targets, label_rate, pad in itr: - if label_rate == -1: - targets, lengths, ntokens = self.collater_seq_label( - targets, pad - ) - else: - targets, lengths, ntokens = self.collater_frm_label( - targets, audio_size, audio_starts, label_rate, pad - ) - targets_list.append(targets) - lengths_list.append(lengths) - ntokens_list.append(ntokens) - return targets_list, lengths_list, ntokens_list - - def num_tokens(self, index): - return self.size(index) - - def size(self, index): - if self.pad_audio: - return self.sizes[index] - return 
min(self.sizes[index], self.max_sample_size) - - def ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - - order.append(self.sizes) - return np.lexsort(order)[::-1] - - def postprocess(self, wav, cur_sample_rate): - if wav.dim() == 2: - wav = wav.mean(-1) - assert wav.dim() == 1, wav.dim() - - if cur_sample_rate != self.sample_rate: - raise Exception(f"sr {cur_sample_rate} != {self.sample_rate}") - - if self.normalize: - with torch.no_grad(): - wav = F.layer_norm(wav, wav.shape) - return wav diff --git a/spaces/stomexserde/gpt4-ui/Examples/Blue Streak In Punjabi Dubbed(Bhola Te Mirza).md b/spaces/stomexserde/gpt4-ui/Examples/Blue Streak In Punjabi Dubbed(Bhola Te Mirza).md deleted file mode 100644 index 053b210aafefb8428557332172edd1c967fe9a04..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Blue Streak In Punjabi Dubbed(Bhola Te Mirza).md +++ /dev/null @@ -1,17 +0,0 @@ - -

        Blue Streak In Punjabi Dubbed(Bhola Te Mirza): A Hilarious Comedy Movie

        -

        Blue Streak is a 1999 American action comedy film starring Martin Lawrence as a jewel thief who poses as a cop to retrieve a diamond he stole and hid at a police station. The film was dubbed in Punjabi by Azizi Totay Official and released as Bhola Te Mirza, which means Bhola and Mirza, the names of the main characters in the Punjabi version.

        -

        Blue Streak In Punjabi Dubbed(Bhola Te Mirza)


        Download File ✪✪✪ https://urlgoal.com/2uI6Rb



        -

        The movie is full of funny dialogues, jokes and situations that will make you laugh out loud. The Punjabi dubbing adds a new flavor and charm to the original movie, making it more enjoyable and entertaining for the Punjabi audience. The movie is divided into two parts, which are available on Dailymotion[^1^] and SoundCloud[^2^]. You can also watch a clip of the movie on Facebook[^3^].

        -

        If you are looking for a fun and relaxing movie to watch with your friends or family, you should definitely check out Blue Streak In Punjabi Dubbed(Bhola Te Mirza). It is a comedy masterpiece that will keep you hooked till the end.

        Here are some of the best scenes and dialogues from Blue Streak In Punjabi Dubbed(Bhola Te Mirza):

        -
          -
        • When Bhola (Martin Lawrence) pretends to be a pizza delivery guy and enters the police station to find his diamond, he encounters a rude officer who insults him and his pizza. Bhola responds by saying "You don't know the taste of pizza, you only eat cow dung." The officer gets angry and tries to chase him, but Bhola escapes by throwing pizza at him.
        • -
        • When Bhola meets Mirza (Luke Wilson), a naive and honest cop who becomes his partner, he tries to impress him by showing off his fake police skills. He tells Mirza that he can tell if someone is lying by looking at their eyes. He then points at a random woman and says "She is lying. She is not pregnant, she is just fat." The woman hears him and slaps him hard.
        • -
        • When Bhola and Mirza are interrogating a suspect named Tulip (Dave Chappelle), who is also Bhola's former partner in crime, Bhola acts tough and threatens to shoot him if he doesn't cooperate. Tulip calls his bluff and says "You can't shoot me, you are a cop. And you are not even a real cop, you are a fake cop. And you are not even a good fake cop, you are a bad fake cop." Bhola gets annoyed and says "Shut up, you are a flower. And you are not even a real flower, you are a fake flower. And you are not even a good fake flower, you are a bad fake flower."
        • -
        -

        These are just some of the hilarious moments from Blue Streak In Punjabi Dubbed(Bhola Te Mirza). The movie is full of witty and clever humor that will make you laugh till your stomach hurts. So don't miss this opportunity to watch this amazing movie and have a great time.

        Blue Streak In Punjabi Dubbed (Bhola Te Mirza) is not only a comedy, but also a movie with a solid story and plenty of action. It shows how Bhola, a criminal, gradually changes his ways and becomes a better person. He learns to respect the law and his partner Mirza, who helps him in his mission. He also develops a romantic interest in a female cop named Janice (Nicole Ari Parker), who is unaware of his true identity.

        -

        The movie also has some thrilling action scenes, such as the car chase, the shootout, the helicopter escape, and the final confrontation between Bhola and Deacon (Peter Greene), the main villain who wants to steal the diamond. Its fast-paced, suspenseful plot keeps you on the edge of your seat, and a twist ending will surprise you and make you smile.

        -

        -

        Blue Streak In Punjabi Dubbed (Bhola Te Mirza) is a movie that has something for everyone: it will make you laugh, cry, cheer, and clap, and it will keep you entertained from start to finish. Don't miss it.

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/CRACK TSR Watermark Image Pro V3.5.7.8 Final Keygen - [SH].md b/spaces/stomexserde/gpt4-ui/Examples/CRACK TSR Watermark Image Pro V3.5.7.8 Final Keygen - [SH].md deleted file mode 100644 index 193f5c1318459e8d1bb6e8e8f15d7726f8f9d62c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/CRACK TSR Watermark Image Pro V3.5.7.8 Final Keygen - [SH].md +++ /dev/null @@ -1,21 +0,0 @@ -
        -

        How to Use TSR Watermark Image Pro to Protect Your Photos Online

        -

        If you are a photographer or a content creator who shares your photos online, you may want to protect your work from unauthorized use or theft. One way to do that is to add a watermark to your photos, which is a visible mark that identifies you as the owner of the image. A watermark can also help you promote your brand and attract more viewers to your website or social media accounts.

        -

        However, adding a watermark to each photo manually can be time-consuming and tedious, especially if you have hundreds or thousands of photos to process. That's why you need software that can batch watermark your photos quickly and easily. One of the best tools for this purpose is TSR Watermark Image Pro, a professional watermarking program that offers many features and options to customize your watermarks.
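        To see what batch watermarking involves at a basic level, here is a minimal sketch in Python using the Pillow library. It is only an illustration of the general idea, not how TSR Watermark Image Pro works internally; the folder names, the watermark text, and the font choice are hypothetical placeholders.

```python
# Minimal batch text-watermark sketch (illustrative only, not TSR Watermark Image Pro).
# Assumes a "photos" folder of JPG files and writes results to "watermarked".
from pathlib import Path
from PIL import Image, ImageDraw, ImageFont

WATERMARK_TEXT = "(c) Your Name"      # hypothetical owner text
in_dir, out_dir = Path("photos"), Path("watermarked")
out_dir.mkdir(exist_ok=True)
font = ImageFont.load_default()       # a real project would load a proper TTF font

for photo in in_dir.glob("*.jpg"):
    img = Image.open(photo).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Draw semi-transparent text near the bottom-right corner.
    draw.text((img.width - 200, img.height - 40), WATERMARK_TEXT,
              fill=(255, 255, 255, 128), font=font)
    Image.alpha_composite(img, overlay).convert("RGB").save(out_dir / photo.name, "JPEG")
```

        A dedicated tool like TSR Watermark Image Pro wraps this kind of loop in a graphical interface and adds the positioning, opacity, and output options described below.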

        -

        CRACK TSR Watermark Image Pro v3.5.7.8 Final Keygen - [SH]


        Download File ->>> https://urlgoal.com/2uIaWQ



        -

        In this article, we will show you how to use TSR Watermark Image Pro to add watermarks to your photos in a few simple steps. We will also explain some of the benefits and advantages of using this software over other watermarking tools.

        -

        Step 1: Download and Install TSR Watermark Image Pro

        -

        The first step is to download and install TSR Watermark Image Pro on your computer. You can get the software from the official website: https://www.watermark-image.com/. There are three versions available: Free, Professional, and Professional + Share. The Free version is for personal use only and has some limitations, such as adding an "Unregistered TSR Watermark Image" text to your watermarks. The Professional version costs $29.95 and offers more features and options, such as adding multiple watermarks, resizing and rotating images, uploading to WordPress and FTP, and more. The Professional + Share version costs $59.95 and includes all the features of the Professional version plus the ability to share your watermarked photos securely with other users.

        -

        After downloading the software, run the installer and follow the instructions on the screen. The installation process is fast and easy, and you can start using the software right away.

        -

        -

        Step 2: Add Your Photos

        -

        The next step is to add your photos that you want to watermark. You can do this by clicking on the "Add images" button on the main window of the software. You can also drag and drop your photos from your computer or from another folder. You can add individual photos or entire folders at once. The software supports various image formats, such as JPG, PNG, BMP, GIF, TIFF, etc.

        -

        After adding your photos, you will see them listed on the left side of the window. You can preview each photo by clicking on it. You can also sort, filter, rename, delete, or move your photos using the buttons on the toolbar.

        -

        Step 3: Add Your Watermark

        -

        The most important step is to add your watermark to your photos. You can do this by clicking on the "Watermark" tab on the right side of the window. Here you can choose from three types of watermarks: Text, Image, or 3D Text.

        -

        A text watermark is simple text that you type in the box below. You can customize the font, size, color, style, opacity, angle, position, alignment, and shadow of your text watermark using the options below. You can also add special characters or symbols using the "Insert" button.

        -

        An image watermark is an image file that you can select from your computer or from another folder. You can use any image format that is supported by the software. You can resize, rotate, crop, flip, or adjust the opacity of your image watermark using the options below. You can also choose where to place your image watermark on your photo using the "Position" option.

        -

        A 3D text watermark is text rendered with a 3D effect that makes it look more realistic and attractive. You can type your text in the box below and customize its font, size, color, style, opacity, angle, position, alignment, shadow, depth, perspective, and reflection using the options below.

        -

        You can add multiple watermarks to your photos by clicking on the "Add new watermark" button at the bottom of the window. You can also edit or delete any watermark by clicking on its name

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Cloud Cult You Were Born Mp3 Free ((FULL)) Download.md b/spaces/stomexserde/gpt4-ui/Examples/Cloud Cult You Were Born Mp3 Free ((FULL)) Download.md deleted file mode 100644 index 99775c32679150756d98254df927d06e1e6d93b0..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Cloud Cult You Were Born Mp3 Free ((FULL)) Download.md +++ /dev/null @@ -1,31 +0,0 @@ - -

        How to Download Cloud Cult You Were Born Mp3 for Free

        -

        If you are looking for a way to download Cloud Cult You Were Born Mp3 for free, you have come to the right place. Cloud Cult is an indie rock band from Minnesota that has been making music since 1995. Their song You Were Born was featured in the movie The Fault in Our Stars and has become one of their most popular tracks. In this article, we will show you how to download Cloud Cult You Were Born Mp3 for free using a simple and legal method.

        -

        Why Download Cloud Cult You Were Born Mp3 for Free?

        -

        Cloud Cult You Were Born Mp3 is a beautiful song that expresses the wonder and joy of life. It is a song that can inspire you to live your best life and appreciate the gift of being alive. The lyrics are uplifting and meaningful, and the melody is catchy and soothing. Some of the lyrics are:

        -

        Cloud Cult You Were Born Mp3 Free Download


        DOWNLOAD >>>>> https://urlgoal.com/2uIamM



        -
        -

        You were born into a strange world
        -Like a candle, you were meant to share the fire
        -I don't know where we come from, and I don't know where we go
        -But my arms were made to hold you, so I will never let you go
        -'Cause you were born to change this life
        -You were born to chase the light

        -
        -

        Downloading Cloud Cult You Were Born Mp3 for free can allow you to enjoy this song anytime and anywhere. You can listen to it on your phone, computer, or any other device that supports mp3 files. You can also share it with your friends and family who might appreciate this song as well. Downloading Cloud Cult You Were Born Mp3 for free can also save you money and time, as you don't have to pay for streaming services or buy CDs or vinyl records.

        -

        How to Download Cloud Cult You Were Born Mp3 for Free?

        -

        The easiest and safest way to download Cloud Cult You Were Born Mp3 for free is to use a website that offers free mp3 downloads of songs that are in the public domain or have a Creative Commons license. This means that the songs are free to use, share, and modify without infringing on the rights of the original artists. One such website is Free Music Archive, which is a project of the non-profit organization WFMU.

        -

        To download Cloud Cult You Were Born Mp3 for free from Free Music Archive, follow these steps:

        -
          -
        1. Go to https://freemusicarchive.org/ on your browser.
        2. -
        3. Type "Cloud Cult You Were Born" in the search box and hit enter.
        4. -
        5. You will see a list of results that match your query. Click on the one that says "You Were Born by Cloud Cult" under the album "Light Chasers".
        6. -
        7. You will be taken to a page where you can listen to the song online or download it as an mp3 file. To download it, click on the arrow icon next to the play button.
        8. -
        9. A pop-up window will appear asking you to choose a format and quality for your download. Choose mp3 and high quality (320 kbps).
        10. -
        11. Click on "Download" and wait for the file to be saved on your device.
        12. -
        13. Enjoy listening to Cloud Cult You Were Born Mp3 for free!
        14. -
        -

        Conclusion

        -

        Cloud Cult You Were Born Mp3 is a wonderful song that can brighten up your day and inspire you to live fully. You can download it for free from Free Music Archive using a simple and legal method. We hope this article was helpful and informative. If you liked it, please share it with your friends and leave us a comment below. Thank you for reading!

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Guitar Pro 6 Rev9626 Keygen Free [EXCLUSIVE].md b/spaces/stomexserde/gpt4-ui/Examples/Guitar Pro 6 Rev9626 Keygen Free [EXCLUSIVE].md deleted file mode 100644 index 65d78bdabe9adbc0052da42418a968fb75cfcb1c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Guitar Pro 6 Rev9626 Keygen Free [EXCLUSIVE].md +++ /dev/null @@ -1,135 +0,0 @@ - -

        Guitar Pro 6 Rev9626 Keygen Free: How to Download and Activate It

        -

        If you are a guitar enthusiast, you probably know how important it is to have good software that can help you create, edit and play guitar tabs. Guitar tabs are a simplified way of writing music notation for guitar, using numbers and symbols instead of notes and staves. They are easy to read and follow, and they can help you learn new songs, practice your skills, or compose your own music. A short example of what a tab looks like is shown below.
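        For instance, an open E minor chord could be written like this in tab form; each line is a string and each number is the fret to press (a generic illustration, not taken from any Guitar Pro file):

```
e|---0---
B|---0---
G|---0---
D|---2---
A|---2---
E|---0---
```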

        -

        One of the best tools for guitar tabs is Guitar Pro 6, a powerful and versatile program that can handle any fretted instrument with 4 to 8 strings, such as guitar, bass, banjo, ukulele, mandolin, etc. Guitar Pro 6 also supports other instruments like piano, drums, keyboards, brass, strings, etc., making it a complete music production tool.

        -

        guitar pro 6 rev9626 keygen free


        Download Zip ⚙⚙⚙ https://urlgoal.com/2uIbm3



        -

        However, Guitar Pro 6 is not free software. It costs $59.95 for a single license, which can be used on up to five computers. If you want to use it on more devices, you need to buy more licenses or upgrade to the Ultimate version, which costs $199.95. That's quite expensive for some people who just want to enjoy playing guitar.

        -

        Fortunately, there is a way to get the Guitar Pro 6 rev9626 keygen for free, which means you can download and activate the software without paying anything. In this article, we will show you how to do that step by step. But first, let's see what Guitar Pro 6 can do for you and why you need it.

        -

        What is Guitar Pro 6 and Why You Need It

        -

        Guitar Pro 6 Features and Benefits

        -

        Guitar Pro 6 is more than just a tab editor. It is a comprehensive piece of music software that offers many features and benefits for guitar players of all levels. Here are some of them:

        -
          -
        • Easy and intuitive interface: You can easily navigate through the tabs and buttons of the software, and customize your workspace according to your preferences. You can also display your tabs in different ways, such as full-screen, double-page, parchment-like, etc., and zoom in or out as you wish.
        • -
        • Powerful editing tools: You can create your own tabs from scratch or import them from various sources, such as MIDI files, ASCII files, MusicXML files, etc. You can also edit your tabs with many options, such as adding or deleting notes, changing pitch or duration, adding effects or articulations, transposing or tuning your instrument, etc.
        • -
        • Realistic sound engine: You can play and listen to your tabs with high-quality sound samples that simulate the real sound of your instrument. You can also choose from over 100 soundbanks that cover different genres and styles of music, such as rock, metal, jazz, blues , classical, etc. You can also adjust the volume, pan, reverb, chorus, and other parameters of each track to create your own sound mix.
        • -
        • Advanced playback features: You can play your tabs along with a metronome, a speed trainer, a loop mode, or a countdown. You can also use the virtual fretboard or keyboard to see the notes being played on your instrument. You can also sync your tabs with audio files or video files, and create your own backing tracks or karaoke tracks.
        • -
        • Online sharing and collaboration: You can export your tabs in various formats, such as PDF, PNG, GPX, GP5, etc., and share them with other users online. You can also access thousands of tabs from the Guitar Pro website or other websites, and download them to your software. You can also collaborate with other musicians online and exchange your tabs and feedback.
        • -
        -

        As you can see, Guitar Pro 6 is a must-have tool for any guitar player who wants to improve their skills, express their creativity, and have fun with their instrument. But how can you get it for free? Let's find out.

        -

        -

        How to Download Guitar Pro 6 Rev9626 for Free

        -

        Where to Find the Guitar Pro 6 Rev9626 Installer

        -

        The first step in getting the Guitar Pro 6 rev9626 keygen for free is to download the software's installer. The installer is a file that contains the program and its components, and it allows you to install the software on your computer.

        -

        The official website of Guitar Pro 6 does not offer the rev9626 version anymore, as it has been replaced by newer versions. However, you can still find the rev9626 installer on some third-party websites that host old versions of software. One of these websites is OldVersion.com, which has a large archive of old software versions for various platforms.

        -

        To download the Guitar Pro 6 rev9626 installer from OldVersion.com, follow these steps:

        -
          -
        1. Go to OldVersion.com and type "Guitar Pro" in the search box.
        2. -
        3. Select "Guitar Pro" from the list of results.
        4. -
        5. Scroll down to the "Windows" section and look for "Guitar Pro 6.0.7.9063". This is the rev9626 version of Guitar Pro 6.
        6. -
        7. Click on the green "Download Now" button next to it.
        8. -
        9. Save the file "guitar-pro-6-rev9063.exe" to your computer.
        10. -
        -

        Congratulations! You have successfully downloaded the Guitar Pro 6 rev9626 installer. Now let's see how to install it on your computer.

        -

        How to Install Guitar Pro 6 Rev9626 on Your Computer

        -

        The next step to get Guitar Pro 6 rev9626 keygen free is to install the software on your computer. The installation process is simple and straightforward, and it should not take more than a few minutes. Here are the steps to follow:

        -
          -
        1. Locate the file "guitar-pro-6-rev9063.exe" on your computer and double-click on it.
        2. -
        3. A window will pop up asking you to choose a language for the installation. Select your preferred language and click "OK".
        4. -
        5. A welcome screen will appear. Click "Next".
        6. -
        7. A license agreement screen will appear. Read the terms and conditions carefully and click "I Agree" if you accept them.
        8. -
        9. A destination folder screen will appear. Choose where you want to install Guitar Pro 6 on your computer and click "Next".
        10. -
        11. An additional tasks screen will appear. Choose whether you want to create a desktop shortcut or a start menu shortcut for Guitar Pro 6 and click "Next".
        12. -
        13. A ready to install screen will appear. Click "Install" to start the installation process.
        14. -
        15. A progress bar will show you how much time is left until the installation is complete. Wait patiently until it reaches 100%.
        16. -
        17. A completion screen will appear. Click "Finish" to exit the installer.
        18. -
        -

        Well done! You have successfully installed Guitar Pro 6 rev9626 on your computer. Now let's see how to activate it with a keygen.

        -

        How to Activate Guitar Pro 6 Rev9626 with Keygen

        -

        What is a Keygen and How It Works

        -

        A keygen is a small program that can generate valid registration codes for a piece of software. A registration code is a sequence of characters that can unlock the full features and functions of a program. Usually, a registration code is provided by the software developer when you buy a license for the software. However, a keygen can bypass this process and generate a registration code without paying anything.

        -

        A keygen works by using a mathematical algorithm that can produce valid codes for a specific software. The algorithm is based on the information that the software requires to generate a code, such as the user name, the computer ID, the serial number, etc. The keygen can then use this information to create a code that matches the criteria of the software.

        -

        However, using a keygen is neither legal nor ethical, as it violates the intellectual property rights of the software developer. It also exposes your computer to potential risks, such as viruses, malware, spyware, etc., that may be hidden in the keygen file. Therefore, we do not recommend using a keygen to activate Guitar Pro 6 rev9626 or any other software. We are providing this information for educational purposes only.

        -

        Where to Find the Guitar Pro 6 Rev9626 Keygen

        -

        The next step to get Guitar Pro 6 rev9626 keygen free is to download the keygen file. The keygen file is a small executable file that can run on your computer and generate a registration code for Guitar Pro 6 rev9626.

        -

        The official website of Guitar Pro 6 does not offer any keygen file, as it is illegal and unethical. However, you can still find the Guitar Pro 6 rev9626 keygen file on some third-party websites that host cracked software and tools. One of these websites is Crack4Download.com, which has a large collection of cracked software and tools for various platforms.

        -

        To download the Guitar Pro 6 rev9626 keygen file from Crack4Download.com, follow these steps:

        -
          -
        1. Go to Crack4Download.com and type "Guitar Pro 6" in the search box.
        2. -
        3. Select "Guitar Pro 6 rev9626 + crack serial keygen" from the list of results.
        4. -
        5. Scroll down to the "Download Mirror Link" section and click on one of the links provided. You may need to complete a captcha or a survey to access the link.
        6. -
        7. Save the file "guitar-pro-6-rev9626-keygen.zip" to your computer.
        8. -
        -

        Congratulations! You have successfully downloaded the Guitar Pro 6 rev9626 keygen file. Now let's see how to use it to generate a registration code.

        -

        How to Use the Guitar Pro 6 Rev9626 Keygen to Generate a Registration Code

        -

        The final step to get Guitar Pro 6 rev9626 keygen free is to use the keygen file to generate a registration code for Guitar Pro 6 rev9626. The registration code is a sequence of characters that can unlock the full features and functions of Guitar Pro 6 rev9626. Here are the steps to follow:

        -
          -
        1. Locate the file "guitar-pro-6-rev9626-keygen.zip" on your computer and extract it using a program like WinRAR or 7-Zip.
        2. -
        3. Open the folder "guitar-pro-6-rev9626-keygen" and double-click on the file "keygen.exe".
        4. -
        5. A window will pop up with the title "Guitar Pro 6 Key Generator". You will see two fields: "User ID" and "Key ID".
        6. -
        7. In the "User ID" field, enter any name you want. This will be your user name for Guitar Pro 6 rev9626.
        8. -
        9. In the "Key ID" field, enter any number you want. This will be your serial number for Guitar Pro 6 rev9626.
        10. -
        11. Click on the "Generate" button at the bottom of the window. A new field will appear below with the title "Offline Activation Key". This is your registration code for Guitar Pro 6 rev9626.
        12. -
        13. Copy the registration code and save it somewhere safe. You will need it later to activate Guitar Pro 6 rev9626.
        14. -
        -

        Well done! You have successfully used the Guitar Pro 6 rev9626 keygen to generate a registration code. Now let's see how to enter it in Guitar Pro 6 rev9626.

        -

        How to Enter the Registration Code in Guitar Pro 6 Rev9626

        -

        The last step to activate Guitar Pro 6 rev9626 with keygen is to enter the registration code in the software. The registration code is a sequence of characters that can unlock the full features and functions of Guitar Pro 6 rev9626. Here are the steps to follow:

        -
          -
        1. Launch Guitar Pro 6 rev9626 on your computer. A window will pop up with the title "Guitar Pro 6 Activation".
        2. -
        3. Click on the "Offline activation" button at the bottom of the window.
        4. -
        5. A new window will pop up with the title "Offline activation". You will see two fields: "User ID" and "Key ID".
        6. -
        7. In the "User ID" field, enter the same name you used to generate the registration code with the keygen.
        8. -
        9. In the "Key ID" field, enter the same number you used to generate the registration code with the keygen.
        10. -
        11. Click on the "Activate" button at the bottom of the window.
        12. -
        13. A new window will pop up with the title "Offline activation". You will see a field with the title "Request". This is a code that identifies your computer and your software.
        14. -
        15. Copy the request code and paste it in the keygen window, in the field with the title "Request".
        16. -
        17. Click on the "Generate" button at the bottom of the keygen window. A new field will appear below with the title "Activation". This is your activation code for Guitar Pro 6 rev9626.
        18. -
        19. Copy the activation code and paste it in the Guitar Pro 6 rev9626 window, in the field with the title "Activation".
        20. -
        21. Click on the "OK" button at the bottom of the window.
        22. -
        -

        Congratulations! You have successfully activated Guitar Pro 6 rev9626 with keygen. You can now enjoy all the features and functions of Guitar Pro 6 rev9626 without any limitations. Now let's see how to use Guitar Pro 6 rev9626 to create, edit and play guitar tabs.

        -

        How to Use Guitar Pro 6 Rev9626 to Create, Edit and Play Guitar Tabs

        -

        How to Open and Import Guitar Tabs in Guitar Pro 6 Rev9626

        -

        Guitar Pro 6 rev9626 allows you to open and import guitar tabs from various sources, such as MIDI files, ASCII files, MusicXML files, etc. You can also access thousands of tabs from the Guitar Pro website or other websites, and download them to your software. Here are some ways to open and import guitar tabs in Guitar Pro 6 rev9626:

        -
          -
        • Open a tab from your computer: To open a tab that is saved on your computer, go to File > Open and browse your folders to find the tab file. You can also use the keyboard shortcut Ctrl + O. Alternatively, you can drag and drop the tab file into Guitar Pro 6 rev9626.
        • -
        • Import a tab from another format: To import a tab that is in another format, such as MIDI, ASCII, MusicXML, etc., go to File > Import and choose the format you want to import. Then browse your folders to find the tab file. You can also use the keyboard shortcut Ctrl + I. Alternatively, you can drag and drop the tab file into Guitar Pro 6 rev9626.
        • -
        • Download a tab from the Guitar Pro website: To download a tab from the Guitar Pro website, go to File > Open from Guitar Pro website and browse the categories and subcategories to find the tab you want. You can also use the search box to look for a specific tab. Then click on the "Download" button next to the tab and save it to your computer. You can then open it in Guitar Pro 6 rev9626.
        • -
        • Download a tab from another website: To download a tab from another website, such as Ultimate Guitar, Songsterr, etc., go to the website and look for the tab you want. Then look for the option to download the tab in Guitar Pro format, usually with a .gp or .gpx extension. Then save it to your computer and open it in Guitar Pro 6 rev9626.
        • -
        -

        As you can see, there are many ways to open and import guitar tabs in Guitar Pro 6 rev9626. You can also create your own tabs from scratch or edit existing tabs with many options. Let's see how to do that.

        -

        How to Edit and Customize Guitar Tabs in Guitar Pro 6 Rev9626

        -

        Guitar Pro 6 rev9626 allows you to edit and customize guitar tabs with many options, such as adding or deleting notes, changing pitch or duration, adding effects or articulations, transposing or tuning your instrument, etc. You can also use various tools and features to enhance your tabs, such as chord diagrams, scales, lyrics, fingering, etc. Here are some ways to edit and customize guitar tabs in Guitar Pro 6 rev9626:

        -
          -
        • Add or delete notes: To add or delete notes in your tab, select the track and the measure where you want to add or delete notes. Then use the mouse or the keyboard to enter or erase notes on the tablature or the standard notation. You can also use the toolbar buttons or the menu options to add or delete notes.
        • -
        • Change pitch or duration: To change the pitch or duration of a note in your tab, select the note and use the mouse wheel or the arrow keys to move it up or down on the tablature or the standard notation. You can also use the toolbar buttons or the menu options to change the pitch or duration of a note.
        • -
        • Add effects or articulations: To add effects or articulations to a note in your tab, such as bends, slides, hammer-ons, pull-offs, vibrato, etc., select the note and use the toolbar buttons or the menu options to choose the effect or articulation you want. You can also use keyboard shortcuts to add effects or articulations quickly.
        • -
        • Transpose or tune your instrument: To transpose or tune your instrument in your tab, go to Track > Properties and choose the instrument you want to transpose or tune. Then use the "Transposition" or "Tuning" options to adjust the pitch or the tuning of your instrument. You can also use the "Capo" option to add a capo to your instrument.
        • -
        • Use tools and features: To use various tools and features to enhance your tab, such as chord diagrams, scales, lyrics, fingering, etc., go to Tools > Chord, Tools > Scale, Tools > Lyrics, Tools > Fingering, etc., and choose the tool or feature you want to use. You can also use the toolbar buttons or the menu options to access these tools and features.
        • -
        -

        As you can see, there are many ways to edit and customize guitar tabs in Guitar Pro 6 rev9626. You can also play and listen to your tabs with high-quality sound samples and advanced playback features. Let's see how to do that.

        -

        How to Play and Listen to Guitar Tabs in Guitar Pro 6 Rev9626

        -

        Guitar Pro 6 rev9626 allows you to play and listen to your tabs with high-quality sound samples that simulate the real sound of your instrument. You can also choose from over 100 soundbanks that cover different genres and styles of music, such as rock, metal, jazz, blues, classical, etc. You can also adjust the volume, pan, reverb, chorus, and other parameters of each track to create your own sound mix. Here are some ways to play and listen to guitar tabs in Guitar Pro 6 rev9626:

        -
          -
        • Play a tab: To play a tab in Guitar Pro 6 rev9626, go to File > Open and browse your folders to find the tab file. Then click on the "Play" button on the toolbar or press the spacebar on your keyboard. The tab will start playing from the beginning or from the current position of the cursor. You can also use the keyboard shortcuts F5, F6, F7, F8 to play from the start, stop, pause, or resume the playback.
        • -
        • Listen to a tab: To listen to a tab in Guitar Pro 6 rev9626, go to File > Open and browse your folders to find the tab file. Then click on the "Sound" button on the toolbar or press F2 on your keyboard. A window will pop up with the title "Sound". You can then choose the soundbank you want to use for each track of the tab. You can also adjust the volume, pan, reverb, chorus, and other parameters of each track. Then click on the "OK" button to close the window and listen to the tab.
        • -
        • Use advanced playback features: To use advanced playback features in Guitar Pro 6 rev9626, such as a metronome, a speed trainer, a loop mode, or a countdown, go to Play > Metronome , Play > Speed Trainer, Play > Loop, or Play > Countdown and choose the feature you want to use. You can then adjust the settings of each feature according to your preferences. For example, you can set the tempo, the time signature, the number of beats, the speed percentage, the loop start and end, the countdown duration, etc. Then click on the "Play" button to start the playback with the feature enabled.
        • -
        -

        As you can see, there are many ways to play and listen to guitar tabs in Guitar Pro 6 rev9626. You can also sync your tabs with audio files or video files, and create your own backing tracks or karaoke tracks. However, these features are beyond the scope of this article. If you want to learn more about them, you can check the official website of Guitar Pro 6 or the user manual of the software.

        -

        Conclusion

        -

        Guitar Pro 6 rev9626 is a great piece of software for guitar players of all levels who want to create, edit and play guitar tabs. It offers many features and benefits that can help you improve your skills, express your creativity, and have fun with your instrument. However, Guitar Pro 6 rev9626 is not free software. It costs $59.95 for a single license, which can be used on up to five computers.

        -

        If you want to get Guitar Pro 6 rev9626 keygen free, you need to download and install the software from a third-party website, and then activate it with a keygen file that can generate a valid registration code for the software. However, this is neither legal nor ethical, as it violates the intellectual property rights of the software developer. It also exposes your computer to potential risks, such as viruses, malware, spyware, etc., that may be hidden in the installer or the keygen file.

        -

        Therefore, we do not recommend using Guitar Pro 6 rev9626 keygen free to activate the software. We are providing this information for educational purposes only. If you like Guitar Pro 6 rev9626 and want to use it legally and ethically, you should buy a license from the official website of Guitar Pro 6 or from an authorized reseller. This way, you can support the development of Guitar Pro 6 and enjoy all its features and functions without any limitations or risks.

        -

        FAQs

        -

        What is Guitar Pro 6 rev9626?

        -

        Guitar Pro 6 rev9626 is an old version of Guitar Pro 6, a powerful and versatile software for guitar tabs. It was released in 2011 and it has been replaced by newer versions since then.

        -

        What are the system requirements and compatibility of Guitar Pro 6 rev9626?

        -

        Guitar Pro 6 rev9626 is compatible with Windows XP/Vista/7/8/10 and Mac OS X 10.4/10.5/10.6/10.7/10.8/10.9/10.10/10.11. It requires a minimum of 1 GB RAM, 256 MB free HD space, a sound card, and an internet connection for activation.

        -

        Where can I download Guitar Pro 6 rev9626 installer?

        -

        You can download Guitar Pro 6 rev9626 installer from some third-party websites that host old versions of software, such as OldVersion.com.

        -

        Where can I download Guitar Pro 6 rev9626 keygen?

        -

        You can download Guitar Pro 6 rev9626 keygen from some third-party websites that host cracked software and tools, such as Crack4Download.com.

        -

        How can I activate Guitar Pro 6 rev9626 with keygen?

        -

        You can activate Guitar Pro 6 rev9626 with keygen by using the keygen file to generate a registration code for the software, and then entering it in the software.

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Joe My Name Is Joe Full Album Zip.md b/spaces/stomexserde/gpt4-ui/Examples/Joe My Name Is Joe Full Album Zip.md deleted file mode 100644 index b45722d6342cc96aa2f1aed0303105c63176d73d..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Joe My Name Is Joe Full Album Zip.md +++ /dev/null @@ -1,28 +0,0 @@ - -Here is a possible title and article with html formatting for the keyword "Joe, My Name Is Joe Full Album Zip": - -

        Joe, My Name Is Joe Full Album Zip: A Review of the R&B Singer's Third Studio Album

        -

        Joe, My Name Is Joe Full Album Zip is a popular search term for fans of the American R&B singer Joe, who released his third studio album My Name Is Joe in 2000. The album was a commercial and critical success, selling over three million copies in the US and earning Joe four Grammy nominations. The album featured the hit singles "I Wanna Know", "Stutter", "Treat Her Like a Lady" and "I Believe In You".

        -

        The album showcased Joe's smooth vocals, catchy melodies and diverse influences, ranging from soul and funk to hip hop and gospel. The album also featured collaborations with rappers Mystikal and Nas, and with singer Mariah Carey. The album was praised for its production, songwriting and Joe's emotional delivery. Some of the standout tracks include:

        -

        Joe, My Name Is Joe Full Album Zip


        DOWNLOADhttps://urlgoal.com/2uI6wj



        -
          -
        • "Intro (My Name Is Joe)": A brief introduction that sets the tone for the album.
        • -
        • "Somebody Gotta Be On Top": A seductive mid-tempo track that showcases Joe's confident and playful side.
        • -
        • "Stutter": A funky and upbeat track that features a rap verse from Mystikal and samples Summer Madness by Kool & The Gang. The song was a huge hit, reaching number one on the Billboard Hot 100 and Hot R&B/Hip-Hop Songs charts.
        • -
        • "Table for Two": A romantic ballad that features a duet with R&B singer R. Kelly.
        • -
        • "I Wanna Know": A soulful and heartfelt track that expresses Joe's desire to know everything about his lover. The song was another hit, reaching number four on the Billboard Hot 100 and number two on the Hot R&B/Hip-Hop Songs charts.
        • -
        • "Treat Her Like a Lady": A groovy and upbeat track that encourages men to respect and appreciate their women.
        • -
        • "Get Crunk Tonight": A party anthem that features a rap verse from Nas and samples Funky President by James Brown.
        • -
        • "5 6 3 (Joe)": A smooth and sensual track that spells out Joe's name using numbers.
        • -
        • "Peep Show": A naughty and suggestive track that invites his lover to a private show.
        • -
        • "One Life Stand": A passionate and emotional track that declares his commitment to his lover.
        • -
        • "Stutter (Double Take Remix)": A remix of the original track that features a rap verse from Mystikal and samples Beenie Man's Who Am I.
        • -
        • "I Believe In You": A beautiful and uplifting track that features a duet with Mariah Carey and samples I Love You by Faith Evans.
        • -
        • "So Beautiful": A tender and sweet track that compliments his lover's beauty.
        • -
        • "Celebrate You": A joyful and festive track that celebrates his lover's birthday.
        • -
        • "Our Anthem": A patriotic and inspirational track that samples The Star-Spangled Banner by Francis Scott Key.
        • -
        • "Hello": A cover of the classic song by Lionel Richie.
        • -
        -

        Joe, My Name Is Joe Full Album Zip is a great way to enjoy this amazing album by one of the most talented R&B singers of his generation. You can download the album for free from various online sources such as Archive.org[^1^], Mphiphop.com[^2^] or YouTube.com[^3^]. Alternatively, you can stream or buy the album from digital platforms such as Spotify, Apple Music, Amazon Music or Tidal. Either way, you won't regret listening to this masterpiece by Joe.

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/sub314xxl/MusicGen-Continuation/tests/utils/__init__.py b/spaces/sub314xxl/MusicGen-Continuation/tests/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen-Continuation/tests/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/sub314xxl/MusicGen/audiocraft/utils/autocast.py b/spaces/sub314xxl/MusicGen/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. - kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/supertori/files/stable-diffusion-webui/javascript/textualInversion.js b/spaces/supertori/files/stable-diffusion-webui/javascript/textualInversion.js deleted file mode 100644 index 1103cf6fb1c0d9f0fd6f22dd3d66e8c9d1edbe6c..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/javascript/textualInversion.js +++ /dev/null @@ -1,17 +0,0 @@ - - - -function start_training_textual_inversion(){ - gradioApp().querySelector('#ti_error').innerHTML='' - - var id = randomId() - requestProgress(id, gradioApp().getElementById('ti_output'), gradioApp().getElementById('ti_gallery'), function(){}, function(progress){ - gradioApp().getElementById('ti_progress').innerHTML = progress.textinfo - }) - - var res = args_to_array(arguments) - - res[0] = id - - return res -} diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Appeon 6 5 Powerbuilder Crack 38.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Appeon 6 5 Powerbuilder Crack 38.md deleted file mode 100644 index 5789497398892f53ea74a8e3a2f8bc4884613779..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Appeon 6 5 Powerbuilder Crack 38.md +++ /dev/null @@ -1,9 +0,0 @@ -
        -

        You’ve downloaded the Appeon Powerbuilder Crack. You’ve also extracted it. Now, run the setup file. It should take up to a minute to show you a license key. You’ll see a dialog box. Enter the license key and click OK. Your installation is complete. For your convenience, this full version of the Appeon Powerbuilder 2019 Crack is going to be put on your desktop. Now click the icon on your desktop. You should see the main menu.

        -

        appeon 6 5 powerbuilder crack 38


        Downloadhttps://cinurl.com/2uEXMW



        -

        First of all, you need to uninstall the previous version of the program. You may have installed this application on your laptop. Go to the location where you have stored the Appeon Powerbuilder 2019 Crack.

        -

        PowerBuilder creates a universal application that can work on all platforms, whether it is a professional app or a simple site for kids. When you publish your software, it helps to be able to make it available on every platform, and PowerBuilder is by far the most convenient technology for doing that. Even today, most websites and web properties cannot be accessed as applications from every platform unless they are converted into one.

        -

        Appeon started as a technology-based company rather than a pure programming company, but it is already well known in the software field, even if you think of it as a small start-up. If you keep working toward your goals, PowerBuilder will not hold you back: you can make full use of its technology features and choices, just as you would with any other technology. When you take a look at the company's website, you will find the following features:

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Subhash Sharma Applied Multivariate Techniques Solution Manual.zip.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Subhash Sharma Applied Multivariate Techniques Solution Manual.zip.md deleted file mode 100644 index 537162818f613e06fab1d42321fa1a57bf751ab5..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Subhash Sharma Applied Multivariate Techniques Solution Manual.zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

        subhash sharma applied multivariate techniques solution manual.zip


        DOWNLOAD ✸✸✸ https://cinurl.com/2uEYNN



        - -Applied Multivariate Techniques Subhash Sharma University of South Carolina John Wiley & Sons, Inc. New York Chicheste... Author: Subhash Sharma ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Autodesk AutoCAD 2018 8.36 (x86x64) Keygen Crack Setup Free.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Autodesk AutoCAD 2018 8.36 (x86x64) Keygen Crack Setup Free.md deleted file mode 100644 index db3eaea4fcb48f5c7a1df6e63de8d42b0d75642c..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Autodesk AutoCAD 2018 8.36 (x86x64) Keygen Crack Setup Free.md +++ /dev/null @@ -1,161 +0,0 @@ -
        -

        Download Ben 10 Theme Song in Hindi: A Guide for Cartoon Fans

        - -

        If you are a fan of the popular animated series Ben 10, you might want to download the theme song in Hindi and enjoy it on your device. The theme song is catchy, energetic, and reflects the adventurous spirit of the show. In this article, we will show you how to download Ben 10 theme song in Hindi from various sources and play it on your device.

        -

        Autodesk AutoCAD 2018 8.36 (x86x64) Keygen Crack setup free


        Download Zip ⚙⚙⚙ https://urluss.com/2uCGeN



        - -

        What is Ben 10?

        - -

        Ben 10 is a cartoon series created by Man of Action Studios and produced by Cartoon Network Studios. It follows the adventures of Ben Tennyson, a 10-year-old boy who finds a mysterious device called the Omnitrix that allows him to transform into different aliens with amazing powers. Along with his cousin Gwen and his grandfather Max, he travels across the country and fights various villains and alien threats.

        - -

        The series debuted in 2005 and has spawned several sequels, spin-offs, movies, video games, and merchandise. The series has been dubbed in many languages, including Hindi, and has a huge fan base around the world.

        - -

        What is Ben 10 Theme Song in Hindi?

        - -

        Ben 10 theme song in Hindi is the Hindi version of the original theme song of the series. It is sung by Hanu Dixit, a singer, songwriter, music composer, producer, and filmmaker from Mumbai, India. He has covered many popular songs and themes in Hindi and other languages.

        - -

        The theme song in Hindi captures the essence of the show and introduces the main characters and their abilities. It also has catchy lyrics and music that will make you want to sing along. The theme song in Hindi is one minute long and has been uploaded on YouTube by Hanu Dixit.

        -

        - -

        How to Download Ben 10 Theme Song in Hindi?

        - -

        To download Ben 10 theme song in Hindi, you need to follow these steps:

        - -
          -
        1. Go to https://www.youtube.com/watch?v=VhEryx9JFno and watch the video of the theme song.
        2. -
        3. Click on the three dots icon below the video and select "Download". You can also use a YouTube downloader app or website to download the video.
        4. -
        5. Choose the quality and format of the video and click on "Download". The video will be saved on your device.
        6. -
        7. Use a video converter app or website to convert the video into an mp3 file. You can also use an online audio extractor tool to extract the audio from the video; a small command-line sketch for this step is shown just after this list.
        8. -
        9. Save the mp3 file on your device and enjoy listening to Ben 10 theme song in Hindi.
        10. -
        - -
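        Step 7 above mentions converting the downloaded video into an MP3 file. As one way to do that on your own machine, here is a minimal sketch that calls the ffmpeg command-line tool from Python; the input and output file names are placeholders, and it assumes ffmpeg is already installed on your system.

```python
import subprocess

# Extract the audio track from a local video file and encode it as MP3.
# "ben10_theme.mp4" and "ben10_theme.mp3" are placeholder file names.
subprocess.run(
    [
        "ffmpeg",
        "-i", "ben10_theme.mp4",   # the video file saved in step 5
        "-vn",                     # drop the video stream, keep only audio
        "-codec:a", "libmp3lame",  # encode the audio as MP3
        "-b:a", "192k",            # target audio bitrate
        "ben10_theme.mp3",
    ],
    check=True,  # raise an error if ffmpeg fails
)
```

        Any other converter that can read the downloaded video and write an MP3 file will work just as well.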

        How to Play Ben 10 Theme Song in Hindi?

        - -

        To play Ben 10 theme song in Hindi, you need to follow these steps:

        - -
          -
        1. Transfer the mp3 file to your device's music folder or library.
        2. -
        3. Use a music player app or website to play the mp3 file.
        4. -
        5. You can also set Ben 10 theme song in Hindi as your ringtone or notification sound by using a ringtone maker app or website.
        6. -
        7. You can also share Ben 10 theme song in Hindi with your friends and family by using a file sharing app or website.
        8. -
        - -

        Why Should You Download Ben 10 Theme Song in Hindi?

        - -

        There are many reasons why you should download Ben 10 theme song in Hindi, such as:

        - -
          -
        • You can enjoy listening to one of your favorite cartoon themes in your own language.
        • -
        • You can appreciate the talent and creativity of Hanu Dixit who has sung and produced the theme song in Hindi.
        • -
        • You can relive your childhood memories and nostalgia of watching Ben 10 on Cartoon Network.
        • -
        • You can have fun singing along with the lyrics and imitating the alien transformations.
        • -
        • You can support Hanu Dixit by visiting his YouTube channel and Instagram account and subscribing to them.
        • -
        - -

        Download Ben 10 theme song in Hindi is a great way to enjoy one of the best cartoon themes ever made. If you are a fan of Ben 10 and Hindi music, you should definitely give it a try. You will not regret it!

        -

        Download Ben 10 Theme Song in Hindi: Lyrics and Translation

        - -

        If you want to sing along with Ben 10 theme song in Hindi, you might want to know the lyrics and their meaning. The lyrics are simple and catchy, and they describe the main plot and characters of the show. In this article, we will provide you with the lyrics and translation of Ben 10 theme song in Hindi.

        - -

        Lyrics of Ben 10 Theme Song in Hindi

        - -

        Here are the lyrics of Ben 10 theme song in Hindi, as sung by Hanu Dixit:

        - -
        -Ek Alien device ka asar ho gaya
        -Use baandha jab kalaee pe gajab ho gaya
        -Yeh aam bachcha dekho kitana khaas ho gaya
        -He is Ben 10
        -
        -Anokhee shaktiyon ka ab yeh maalik ho gaya
        -Jab chaaha ek eliyan mein badal gaya
        -Upgrade, Four Arms, Diamond Head ya Ripjaws ban gaya
        -He is Ben 10
        -
        -Ho koee kitana bhee taakatavar
        -Nahin hai ise kisee ka bhee dar
        -Har dushman se ladata ye
        -Haan saaree duniya kee hiphaazat karata ye
        -Ben 10
        -
        -B-B-B-Ben
        -B-B-B-Ben
        -
        - -

        Translation of Ben 10 Theme Song in Hindi

        - -

        Here is the translation of Ben 10 theme song in Hindi, in English:

        - -
        -An Alien device had an effect
        -When he tied it on his wrist, it was amazing
        -This ordinary kid became so special
        -He is Ben 10
        -
        -He became the owner of unique powers
        -Whenever he wanted, he changed into an alien
        -Upgrade, Four Arms, Diamond Head or Ripjaws he became
        -He is Ben 10
        -
        -No matter how powerful anyone is
        -He is not afraid of anyone
        -He fights every enemy
        -Yes, he protects the whole world
        -Ben 10
        -
        -B-B-B-Ben
        -B-B-B-Ben
        -
        - -

        Download Ben 10 theme song in Hindi is a great way to enjoy one of the best cartoon themes ever made. If you are a fan of Ben 10 and Hindi music, you should definitely give it a try. You will not regret it!

        -

        Download Ben 10 Theme Song in Hindi: MP3 Sources and Quality

        - -

        If you want to download Ben 10 theme song in Hindi in MP3 format, you might want to know the best sources and quality for it. The MP3 format is a popular and widely used audio format that can be played on various devices and platforms. However, not all MP3 files are created equal, and some may have better sound quality and file size than others.

        - -

        One of the best sources to download Ben 10 theme song in Hindi in MP3 format is the Internet Archive, a non-profit digital library that offers free access to millions of books, movies, music, and more. You can find the theme song on this link: https://archive.org/details/tvtunes_4408. The theme song has a file size of 1.1 MB and a bitrate of 128 kbps, which is decent for an MP3 file.

        - -

        Another good source to download Ben 10 theme song in Hindi in MP3 format is Rytmp3.fun, a website that allows you to download MP3 files from YouTube videos. You can find the theme song on this link: https://ben-10-theme-song-hindi.rytmp3.fun/. The theme song has a file size of 2.52 MB and a bitrate of 320 kbps, which is excellent for an MP3 file.

        - -

        A third source to download Ben 10 theme song in Hindi in MP3 format is Rytmp3.fun, a website that allows you to download MP3 files from YouTube videos. You can find the theme song on this link: https://ben-10-original-themes-song-hindi.rytmp3.fun/. The theme song has a file size of 2.52 MB and a bitrate of 320 kbps, which is excellent for an MP3 file.

        - -

        Download Ben 10 Theme Song in Hindi: How to Choose the Best MP3 File

        - -

        If you want to download Ben 10 theme song in Hindi in MP3 format, you might want to know how to choose the best MP3 file for your needs. There are two main factors that affect the quality and size of an MP3 file: bitrate and compression.

        - -

        Bitrate is the amount of data that is encoded in each second of audio. It is measured in kilobits per second (kbps). The higher the bitrate, the better the sound quality and the larger the file size. The lower the bitrate, the worse the sound quality and the smaller the file size. A typical MP3 file has a bitrate of 128 kbps or higher.
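        To see how the bitrates and file sizes quoted in the previous section fit together, here is a small back-of-the-envelope calculation in Python; the roughly 66-second duration is an assumption based on the theme song being about one minute long.

```python
def mp3_size_mb(bitrate_kbps: float, duration_s: float) -> float:
    """Approximate MP3 file size: kilobits per second times seconds, converted to megabytes."""
    return bitrate_kbps * duration_s / 8 / 1024

duration = 66  # assumed length of the theme song in seconds (about one minute)
print(f"128 kbps: ~{mp3_size_mb(128, duration):.2f} MB")  # roughly 1.0 MB, close to the 1.1 MB quoted above
print(f"320 kbps: ~{mp3_size_mb(320, duration):.2f} MB")  # roughly 2.6 MB, close to the 2.52 MB quoted above
```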

        - -

        Compression is the process of reducing the size of an audio file by removing some of its data. There are two types of compression: lossy and lossless. Lossy compression reduces the quality of the audio by discarding some of its data that is less noticeable to human ears. Lossless compression preserves the quality of the audio by compressing it without losing any data. Note that MP3 itself is always a lossy format, so the quality of an MP3 file is governed mainly by its bitrate.

        - -

        To choose the best MP3 file for your needs, you need to consider your preferences and your device's capabilities. If you want high-quality sound and don't mind large file sizes, you should choose an MP3 file with a high bitrate (320 kbps). If you want small file sizes and can accept lower sound quality, you should choose an MP3 file with a low bitrate (64 kbps or lower).

        - -

        Download Ben 10 theme song in Hindi in MP3 format is a great way to enjoy one of the best cartoon themes ever made. If you are a fan of Ben 10 and Hindi music, you should definitely give it a try. You will not regret it!

        -

        Download Ben 10 Theme Song in Hindi: Video Sources and Quality

        - -

        If you want to download Ben 10 theme song in Hindi in video format, you might want to know the best sources and quality for it. The video format is a popular and widely used audiovisual format that can be played on various devices and platforms. However, not all video files are created equal, and some may have better picture quality and file size than others.

        - -

        One of the best sources to download Ben 10 theme song in Hindi in video format is YouTube, a video-sharing platform that offers free access to millions of videos, including music, movies, shows, and more. You can find the theme song on this link: https://www.youtube.com/watch?v=VhEryx9JFno. The theme song has a resolution of 720p HD and a file size of 9.6 MB, which is good for a video file.

        - -

        Another good source to download Ben 10 theme song in Hindi in video format is Internet Archive, a non-profit digital library that offers free access to millions of books, movies, music, and more. You can find the theme song on this link: https://archive.org/details/00OpeningThemeSongHindiX264720p. The theme song has a resolution of 720p HD and a file size of 4.7 MB, which is excellent for a video file.

        - -

        A third source to download Ben 10 theme song in Hindi in video format is YouTube, a video-sharing platform that offers free access to millions of videos, including music, movies, shows, and more. You can find the theme song on this link: https://www.youtube.com/watch?v=j_5MIuslcSQ. The theme song has a resolution of 480p SD and a file size of 6.4 MB, which is decent for a video file.

        - -

        Download Ben 10 Theme Song in Hindi: How to Choose the Best Video File

        - -

        If you want to download Ben 10 theme song in Hindi in video format, you might want to know how to choose the best video file for your needs. There are two main factors that affect the quality and size of a video file: resolution and compression.

        - -

        Resolution is the number of pixels that make up each frame of the video, usually written as width × height (for example, 1280×720 for 720p). The higher the resolution, the sharper and clearer the image and the larger the file size. The lower the resolution, the blurrier and fuzzier the image and the smaller the file size. A typical video file has a resolution of 480p SD or higher.
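        As a quick illustration of why higher resolutions produce larger files, the sketch below compares the raw pixel counts of the resolutions mentioned in this article.

```python
# Width x height of the resolutions mentioned above (16:9 frames).
resolutions = {"360p": (640, 360), "480p": (854, 480), "720p": (1280, 720)}

for name, (width, height) in resolutions.items():
    print(f"{name}: {width}x{height} = {width * height:,} pixels per frame")

# 720p has about four times as many pixels per frame as 360p,
# so the encoder needs far more data to keep the picture sharp.
```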

        - -

        Compression is the process of reducing the size of a video file by removing some of its data. There are two types of compression: lossy and lossless. Lossy compression reduces the quality of the video by discarding some of its data that is less noticeable to human eyes. Lossless compression preserves the quality of the video by compressing it without losing any data.

        - -

        To choose the best video file for your needs, you need to consider your preferences and your device's capabilities. Keep in mind that common video codecs are lossy, so picture quality mostly comes down to resolution and bitrate. If you want a high-quality picture and don't mind large file sizes, you should choose a video file with a high resolution (720p HD or higher). If you want small file sizes and can accept a lower-quality picture, you should choose a video file with a low resolution (360p SD or lower).
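        You can also compare the sources listed earlier by working out the average bitrate implied by their file sizes. Here is a rough sketch, again assuming the video is about one minute (60 seconds) long.

```python
def avg_bitrate_kbps(size_mb: float, duration_s: float) -> float:
    """Average bitrate implied by a file size: megabytes converted to kilobits, divided by the duration."""
    return size_mb * 1024 * 8 / duration_s

duration = 60  # assumed length of the theme song video in seconds
for source, size_mb in [("YouTube 720p", 9.6), ("Internet Archive 720p", 4.7), ("YouTube 480p", 6.4)]:
    print(f"{source}: about {avg_bitrate_kbps(size_mb, duration):.0f} kbps on average")
```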

        - -

        Download Ben 10 theme song in Hindi in video format is a great way to enjoy one of the best cartoon themes ever made. If you are a fan of Ben 10 and Hindi music, you should definitely give it a try. You will not regret it!

        -

        Download Ben 10 Theme Song in Hindi: The Ultimate Cartoon Theme

        - -

        Ben 10 theme song in Hindi is one of the best cartoon themes ever made. It is catchy, energetic, and reflects the adventurous spirit of the show. It is sung by Hanu Dixit, a talented and creative singer, songwriter, music composer, producer, and filmmaker from Mumbai, India. He has covered many popular songs and themes in Hindi and other languages.

        - -

        You can download Ben 10 theme song in Hindi in various formats, such as MP3 and video, from various sources, such as YouTube and Internet Archive. You can also choose the best quality and size for your needs, depending on your preferences and device's capabilities. You can also play Ben 10 theme song in Hindi on your device, set it as your ringtone or notification sound, share it with your friends and family, and sing along with the lyrics and translation.

        - -

        Download Ben 10 theme song in Hindi is a great way to enjoy one of the best cartoon themes ever made. If you are a fan of Ben 10 and Hindi music, you should definitely give it a try. You will not regret it!

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Face-Chinese/README.md b/spaces/svjack/ControlNet-Face-Chinese/README.md deleted file mode 100644 index 1ac48205680ebb4c4265adb859fdcf7d7e9f2b56..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Face-Chinese/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ControlNet Face Chinese -emoji: 💻 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/utils/env.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/utils/env.py deleted file mode 100644 index e3f0d92529e193e6d8339419bcd9bed7901a7769..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/utils/env.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This file holding some environment constant for sharing by other files.""" - -import os.path as osp -import subprocess -import sys -from collections import defaultdict - -import cv2 -import torch - -import annotator.uniformer.mmcv as mmcv -from .parrots_wrapper import get_build_config - - -def collect_env(): - """Collect the information of the running environments. - - Returns: - dict: The environment information. The following fields are contained. - - - sys.platform: The variable of ``sys.platform``. - - Python: Python version. - - CUDA available: Bool, indicating if CUDA is available. - - GPU devices: Device type of each GPU. - - CUDA_HOME (optional): The env var ``CUDA_HOME``. - - NVCC (optional): NVCC version. - - GCC: GCC version, "n/a" if GCC is not installed. - - PyTorch: PyTorch version. - - PyTorch compiling details: The output of \ - ``torch.__config__.show()``. - - TorchVision (optional): TorchVision version. - - OpenCV: OpenCV version. - - MMCV: MMCV version. - - MMCV Compiler: The GCC version for compiling MMCV ops. - - MMCV CUDA Compiler: The CUDA version for compiling MMCV ops. 
- """ - env_info = {} - env_info['sys.platform'] = sys.platform - env_info['Python'] = sys.version.replace('\n', '') - - cuda_available = torch.cuda.is_available() - env_info['CUDA available'] = cuda_available - - if cuda_available: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - devices[torch.cuda.get_device_name(k)].append(str(k)) - for name, device_ids in devices.items(): - env_info['GPU ' + ','.join(device_ids)] = name - - from annotator.uniformer.mmcv.utils.parrots_wrapper import _get_cuda_home - CUDA_HOME = _get_cuda_home() - env_info['CUDA_HOME'] = CUDA_HOME - - if CUDA_HOME is not None and osp.isdir(CUDA_HOME): - try: - nvcc = osp.join(CUDA_HOME, 'bin/nvcc') - nvcc = subprocess.check_output( - f'"{nvcc}" -V | tail -n1', shell=True) - nvcc = nvcc.decode('utf-8').strip() - except subprocess.SubprocessError: - nvcc = 'Not Available' - env_info['NVCC'] = nvcc - - try: - gcc = subprocess.check_output('gcc --version | head -n1', shell=True) - gcc = gcc.decode('utf-8').strip() - env_info['GCC'] = gcc - except subprocess.CalledProcessError: # gcc is unavailable - env_info['GCC'] = 'n/a' - - env_info['PyTorch'] = torch.__version__ - env_info['PyTorch compiling details'] = get_build_config() - - try: - import torchvision - env_info['TorchVision'] = torchvision.__version__ - except ModuleNotFoundError: - pass - - env_info['OpenCV'] = cv2.__version__ - - env_info['MMCV'] = mmcv.__version__ - - try: - from annotator.uniformer.mmcv.ops import get_compiler_version, get_compiling_cuda_version - except ModuleNotFoundError: - env_info['MMCV Compiler'] = 'n/a' - env_info['MMCV CUDA Compiler'] = 'n/a' - else: - env_info['MMCV Compiler'] = get_compiler_version() - env_info['MMCV CUDA Compiler'] = get_compiling_cuda_version() - - return env_info diff --git a/spaces/terfces0erbo/CollegeProjectV2/Download UPD Pdf Buku Osn Sma Dan Cara Mengerjakannya.md b/spaces/terfces0erbo/CollegeProjectV2/Download UPD Pdf Buku Osn Sma Dan Cara Mengerjakannya.md deleted file mode 100644 index f16e30ffb8fc027cc9ea8d0d05f4f290ede61aec..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Download UPD Pdf Buku Osn Sma Dan Cara Mengerjakannya.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Download Pdf Buku Osn Sma Dan Cara Mengerjakannya


        DOWNLOAD - https://bytlly.com/2uGjRJ



        -
        - 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/thu-coai/DA-Transformer/Dockerfile b/spaces/thu-coai/DA-Transformer/Dockerfile deleted file mode 100644 index a6e97da5d3576a75f9dfdba23228953db377e306..0000000000000000000000000000000000000000 --- a/spaces/thu-coai/DA-Transformer/Dockerfile +++ /dev/null @@ -1,24 +0,0 @@ -FROM python:3.9 - -RUN --mount=target=/root/packages.txt,source=packages.txt apt-get update && xargs -r -a /root/packages.txt apt-get install -y - -RUN pip install --no-cache-dir pip==22.3.1 && pip install --no-cache-dir datasets "huggingface-hub>=0.12.1" "protobuf<4" "click<8.1" - -WORKDIR /home/user/app - -RUN apt-get install -y git git-lfs ffmpeg libsm6 libxext6 cmake libgl1-mesa-glx && git lfs install - -RUN pip install --no-cache-dir Cython "gradio==3.37.0" "torch==1.10.1" jieba subword-nmt sacremoses transformers - -RUN git clone --recurse-submodules https://github.com/thu-coai/DA-Transformer.git && cd DA-Transformer && pip install -e . && cd dag_search && python3 setup.py build_ext --inplace && pip install -e . && cd ../.. - -RUN mkdir -p /home/user && chmod 777 /home/user - -USER user - -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -COPY . . - -CMD ["python3", "app.py"] diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Cubase 5 Full Crack (64-bit) A Complete Tutorial for Beginners.md b/spaces/tialenAdioni/chat-gpt-api/logs/Cubase 5 Full Crack (64-bit) A Complete Tutorial for Beginners.md deleted file mode 100644 index 4902a7565923ffed6f8381828fc9c273fa2cb892..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Cubase 5 Full Crack (64-bit) A Complete Tutorial for Beginners.md +++ /dev/null @@ -1,51 +0,0 @@ - -

        Cubase 5 Free Download Full Crack (PC)

        -

        Cubase 5 is one of the most popular digital audio workstations (DAWs) of our time. It was released by Steinberg in 2009 and has many features and tools for recording, mixing, and mastering audio. However, finding a working crack for Cubase 5 can be challenging, especially for the latest Windows versions. In this article, we will show you how to download and install Cubase 5 full version with crack for free on your PC.

        -

        What is Cubase 5?

        -

        Cubase 5 is a software that allows you to create, edit, and produce music on your computer. It has a user-friendly interface and a powerful engine that can handle complex projects and effects. Cubase 5 has many new features and updates compared to the previous versions, such as:

        -

        cubase 5 free download full version crack 64 bit


        DOWNLOADhttps://urlcod.com/2uK7Wk



        -
          -
        • LoopMash: a revolutionary virtual instrument that can create unique loops and grooves from any audio material.
        • -
        • VariAudio: a tool that can correct and alter the pitch of vocal and monophonic recordings.
        • -
        • VST Expression: a feature that makes working with instrument articulations easier and more realistic.
        • -
        • Signature Track and Tempo Track: two new track types that allow you to change the time signature and tempo of your project.
        • -
        • Native 32-bit and 64-bit support: Cubase 5 can run on both 32-bit and 64-bit systems, offering better performance and stability.
        • -
        -

        How to Download Cubase 5 Full Crack?

        -

        To download Cubase 5 full crack for free, you need to follow these steps:

        -
          -
        1. Turn off your internet connection.
        2. -
        3. Download Cubase 5 full crack from one of these links:
          --
          --
        4. -
        5. Extract the files using Winrar or any other software that can unzip compressed files.
        6. -
        7. Run the Installer.exe file and follow the instructions.
        8. -
        9. When finished, run Cubase 5 from your desktop or start menu.
        10. -
        -

        How to Install Cubase 5 Full Crack?

        -

        To install Cubase 5 full crack on your PC, you need to do these steps:

        -
          -
        1. Make sure you have installed and activated the .NET Framework 3.5 on your PC. If not, you can download it from here: https://dotnet.microsoft.com/download/dotnet-framework/net35-sp1
        2. -
        3. If you are using Windows 10 or later, you need to run Cubase 5 in compatibility mode as Windows 7 or Vista. To do this, right-click on the Cubase 5 shortcut, select Properties, go to the Compatibility tab, check the box that says "Run this program in compatibility mode for:", and choose Windows 7 or Vista from the dropdown menu.
        4. -
        5. Enjoy your free Cubase 5 full version with crack!
        6. -
        -

        Disclaimer

        -

        This article is for educational purposes only. We do not condone or support piracy or illegal downloading of software. If you like Cubase 5 and want to use it legally, please buy it from the official website: https://new.steinberg.net/cubase/

        -

        Cubase 5 Tips and Tricks

        -

        Now that you have installed Cubase 5 full crack on your PC, you might want to learn some tips and tricks to make the most out of it. Here are some of them:

        -
          -
        • To quickly create a new project, press Ctrl+N and choose a template from the list.
        • -
        • To zoom in and out of the timeline, use the G and H keys on your keyboard.
        • -
        • To split an audio or MIDI part, select it and press Alt+X.
        • -
        • To duplicate an audio or MIDI part, select it and press Ctrl+D.
        • -
        • To mute or solo a track, click on the M or S buttons on the track header.
        • -
        • To add an effect to a track, click on the E button on the track header and choose an effect from the list.
        • -
        • To open the mixer window, press F3.
        • -
        • To record audio or MIDI, arm the track by clicking on the R button on the track header and press * on your numeric keypad.
        • -
        • To edit audio or MIDI, double-click on the part and use the tools and functions in the editor window.
        • -
        • To export your project as an audio file, go to File > Export > Audio Mixdown and choose your settings.
        • -
        -

        Conclusion

        -

        Cubase 5 is a powerful and versatile DAW that can help you create professional-sounding music on your PC. However, it is not free and requires a license to use legally. If you want to try Cubase 5 for free, you can download and install Cubase 5 full crack from the links provided in this article. However, this is not recommended as it may cause problems with your system and violate the terms of use of Steinberg. Therefore, if you like Cubase 5 and want to support its development, please buy it from the official website. Thank you for reading this article and happy music making!

        ddb901b051
        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Get LFO Tool VST for Free (Legally and Safely).md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Get LFO Tool VST for Free (Legally and Safely).md deleted file mode 100644 index f98821f8a77cac9c162ca07b5bda88ffc6382898..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Get LFO Tool VST for Free (Legally and Safely).md +++ /dev/null @@ -1,30 +0,0 @@ - -

        LFO Tool VST Crack: Why You Should Avoid It and How to Get It Legally

        -

        LFO Tool is a versatile and powerful VST plugin that allows you to create custom LFO shapes and modulate any parameter in your DAW. Whether you want to add movement, groove, or expression to your sounds, LFO Tool can help you achieve it. However, if you are looking for a free or cracked version of LFO Tool, you might want to think twice before downloading it. In this article, we will explain why you should avoid LFO Tool VST crack and how to get it legally and safely.

        -

        lfo tool vst crack


        Download 🗹 https://urlcod.com/2uK696



        - -

        Why You Should Avoid LFO Tool VST Crack

        -

        There are many reasons why you should avoid LFO Tool VST crack and any other cracked software. Here are some of the main ones:

        -
          -
        • It is illegal. Downloading, installing, or using cracked software is a violation of the intellectual property rights of the developers and the distributors. You could face legal consequences such as fines or lawsuits if you are caught using cracked software.
        • -
        • It is unethical. Cracked software is a form of piracy that harms the software industry and the creators who invest their time, money, and effort into developing quality products. By using cracked software, you are depriving them of their rightful income and discouraging them from creating more innovative and useful software.
        • -
        • It is risky. Cracked software often comes with malware, viruses, or spyware that can infect your computer and compromise your security and privacy. You could lose your data, damage your system, or expose your personal information to hackers or cybercriminals.
        • -
        • It is unreliable. Cracked software often has bugs, glitches, or errors that can affect its performance and functionality. You could experience crashes, freezes, or compatibility issues that can ruin your workflow and creativity. You also won't be able to access updates, patches, or support from the developers or the distributors.
        • -
        -

        As you can see, using LFO Tool VST crack is not worth the hassle and the risk. You are better off getting it legally and safely.

        - -

        How to Get LFO Tool VST Legally and Safely

        -

        The best way to get LFO Tool VST legally and safely is to buy it from the official website of Xfer Records, the developer of LFO Tool. Here are some of the benefits of buying LFO Tool from Xfer Records:

        -
          -
        • You will get a high-quality and fully functional product that works as intended.
        • -
        • You will get lifetime updates and support from Xfer Records.
        • -
        • You will get a fair price that reflects the value and the features of LFO Tool.
        • -
        • You will support Xfer Records and encourage them to keep developing more awesome plugins.
        • -
        -

        To buy LFO Tool from Xfer Records, you just need to visit their website https://xferrecords.com/products/lfo-tool and click on the "Buy Now" button. You will be redirected to a secure checkout page where you can choose your payment method and complete your purchase. After that, you will receive an email with your download link and license key. You can then download and install LFO Tool on your computer and start using it right away.

        -

        - -

        Conclusion

        -

        LFO Tool is a great plugin that can enhance your music production and sound design. However, you should avoid using LFO Tool VST crack or any other cracked software for legal, ethical, and practical reasons. Instead, you should buy LFO Tool from Xfer Records and enjoy its benefits without any worries. By doing so, you will also support Xfer Records and appreciate their work.

        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Clinical Materia Medica by Farrington for Free and Enhance Your Homeopathic Skills.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Clinical Materia Medica by Farrington for Free and Enhance Your Homeopathic Skills.md deleted file mode 100644 index 6311b8e738816fd9945df3f152aa22758deb3828..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Clinical Materia Medica by Farrington for Free and Enhance Your Homeopathic Skills.md +++ /dev/null @@ -1,199 +0,0 @@ - -

        Clinical Materia Medica by Farrington: A Classic Homeopathy Book

        -

        If you are interested in learning more about homeopathy and its remedies, you may have heard of Clinical Materia Medica by E. A. Farrington. This book is considered one of the most authoritative and comprehensive works on homeopathic materia medica, written by a renowned homeopath and teacher in the 19th century. In this article, we will explore what this book is about, who wrote it, what are its main features, how to use it for homeopathic practice, and where to find it for free download.

        -

        What is Clinical Materia Medica?

        -

        Materia medica is a Latin term that means "medical material" or "medical matter". It refers to the collection of information about the sources, properties, effects, and uses of substances that are used for healing purposes. In homeopathy, materia medica is the study of homeopathic remedies, which are derived from natural substances such as plants, animals, minerals, or chemicals.

        -

        clinical materia medica farrington free download


        Download Zip ::: https://bltlly.com/2uOnaL



        -

        Definition and scope of materia medica

        -

        According to Samuel Hahnemann, the founder of homeopathy, materia medica is "a pure science of experience" that aims to discover the specific effects of each remedy on the human body and mind. Materia medica is based on the principle of "like cures like", which means that a substance that can cause certain symptoms in a healthy person can also cure similar symptoms in a sick person, if given in a diluted and potentized form.

        -

        Materia medica covers a wide range of topics, such as:

        -
          -
        • The origin, preparation, classification, and nomenclature of homeopathic remedies
        • -
        • The provings or experiments that test the effects of remedies on healthy volunteers
        • -
        • The symptoms or signs that indicate the use of a remedy for a particular condition or disease
        • -
        • The modalities or factors that modify or influence the action of a remedy
        • -
        • The relationships or comparisons between different remedies
        • -
        • The doses or potencies of remedies and how to administer them
        • -
        • The antidotes or substances that can neutralize or counteract the effects of remedies
        • -
        -

        Importance of clinical observation and experience

        -

        While materia medica is based on provings, it is also enriched by clinical observation and experience. Clinical observation refers to the careful examination and recording of the symptoms and changes that occur in patients who receive homeopathic treatment. Clinical experience refers to the accumulated knowledge and wisdom that homeopaths gain from treating various cases over time.

        -

        Clinical observation and experience are important for several reasons, such as:

        -
          -
        • They confirm or modify the results of provings by showing how remedies work in real-life situations
        • -
        • They reveal new symptoms or indications that were not observed in provings
        • -
        • They provide practical guidance on how to select, combine, repeat, or change remedies according to individual circumstances
        • -
        • They illustrate the principles and methods of homeopathy through examples and cases
        • -
        • They inspire confidence and trust in homeopathy as an effective system of medicine
        • -
        Who was E. A. Farrington?

        E. A. Farrington was the author of Clinical Materia Medica, and one of the most prominent and respected homeopaths of his time. He was born on January 1, 1847, in Williamsburg, Long Island, New York. He studied medicine under his brother, Harvey W. Farrington, and graduated from the Homoeopathic Medical College of Pennsylvania in 1866, and from the Hahnemann Medical College in 1868. He became a professor of materia medica at both colleges, and also taught at the Philadelphia Post-Graduate School of Homoeopathics. He was a prolific writer and lecturer, and published many articles and books on homeopathy, including Lesser Writings with Therapeutic Hints, Homoeopathy and Homoeopathic Prescribing, and The Homeopathic Heritage. He died on December 17, 1885, at the age of 38, leaving behind a legacy of excellence and dedication to homeopathy.

        -

        -

        Biography and achievements

        -

        Farrington was a brilliant student and a gifted teacher. He had a remarkable memory and a keen analytical mind. He was well-versed in classical languages, literature, history, philosophy, and science. He had a deep knowledge of homeopathy and its principles, as well as a wide experience in clinical practice. He was admired by his colleagues and students for his eloquence, clarity, humor, and generosity. He was also a devout Christian and a loving husband and father.

        -

        Some of his notable achievements are:

        -
          -
        • He was the first American homeopath to introduce the study of nosodes (remedies made from disease products) and sarcodes (remedies made from healthy animal tissues) into materia medica.
        • -
        • He was the first to classify remedies according to their natural families or groups, such as animal, vegetable, mineral, chemical, etc., and to compare their similarities and differences within each group.
        • -
        • He was the first to emphasize the importance of studying the general or constitutional effects of remedies, as well as their specific or local effects.
        • -
        • He was the first to use diagrams or charts to illustrate the relationships between remedies, such as complementary, antidotal, inimical, etc..
        • -
        • He was the editor of the American Homoeopathic Review, one of the leading journals of homeopathy in his time.
        • -
        • He was the founder and president of the International Hahnemannian Association, a society of homeopaths who adhered strictly to the teachings of Hahnemann.
        • -
        -

        Contributions to homeopathy and materia medica

        -

        Farrington's most significant contribution to homeopathy and materia medica was his book Clinical Materia Medica, which he published in 1880. This book is a collection of his lectures on various remedies that he delivered at the Hahnemann Medical College. It contains detailed descriptions of 218 remedies, covering their sources, provings, symptoms, modalities, relationships, doses, clinical cases, etc. It is written in a clear and lively style, with many anecdotes and illustrations that make it easy to read and remember. It is based on both provings and clinical observations, and reflects Farrington's extensive knowledge and experience in homeopathy. It is considered one of the most authoritative and comprehensive works on homeopathic materia medica ever written.

        -

        Some of the features that make Clinical Materia Medica by Farrington a classic homeopathy book are:

        -
          -
        • It follows a natural system of classification of remedies according to their families or groups, which helps to understand their similarities and differences better.
        • -
        • It emphasizes the general or constitutional effects of remedies over their specific or local effects, which helps to find the remedy that matches the totality of symptoms better.
        • -
        • It compares and contrasts different remedies within each group or family, which helps to differentiate them better.
        • -
        • It gives practical tips and suggestions on how to use remedies in various conditions or diseases.
        • -
        • It provides many examples and cases from his own practice or from other sources that illustrate the action and efficacy of remedies.
        • -
        • It uses diagrams or charts to show the relationships between remedies.
        • -
        • It covers many rare or new remedies that were not included in other materia medica books at that time.
        • -
        • It is written in a simple and engaging style that appeals to both beginners and experts in homeopathy.
        • -

        How to use Clinical Materia Medica by Farrington for homeopathic practice?

        -

        Clinical Materia Medica by Farrington is not only a valuable source of information, but also a useful tool for homeopathic practice. It can help homeopaths to improve their knowledge and skills in prescribing remedies, as well as to enhance their confidence and trust in homeopathy. However, to use this book effectively, one needs to follow some tips and suggestions, such as:

        -

        Tips and suggestions for reading and studying the book

        -

        Reading and studying Clinical Materia Medica by Farrington can be a rewarding and enjoyable experience, if one adopts the following habits:

        -
          -
        • Read the book systematically and thoroughly, starting from the introduction and proceeding to the different chapters according to the natural order of classification.
        • -
        • Read the book attentively and critically, paying attention to the details and nuances of each remedy, as well as to the general principles and methods of homeopathy.
        • -
        • Read the book repeatedly and regularly, revising and reviewing the remedies that have been learned, as well as learning new ones.
        • -
        • Read the book with an open and curious mind, seeking to understand the logic and rationale behind each remedy, as well as to appreciate the beauty and harmony of nature.
        • -
        • Read the book with a practical and clinical perspective, applying the knowledge gained from the book to real-life cases and situations.
        • -
        -

        Examples and cases from the book

        -

        One of the best ways to use Clinical Materia Medica by Farrington for homeopathic practice is to study the examples and cases that are given in the book. These examples and cases illustrate how Farrington or other homeopaths used the remedies in various conditions or diseases, and how they achieved successful results. They also show how to select, combine, repeat, or change remedies according to individual circumstances. Some of the examples and cases from the book are:

        - - - - - - - -
        Remedy | Condition or Disease | Example or Case
        Aconite | Fever | "A child has been exposed to cold; he is restless, tossing about; his face is red; his skin hot; he has a dry cough; he is thirsty. Aconite will cure."
        Belladonna | Sore throat | "A young lady had a sore throat. She complained of dryness in the throat; it was bright red; she had headache; her face was flushed; her eyes were bright; her pupils were dilated; she was restless. Belladonna cured her."
        Nux vomica | Indigestion | "A man had been eating freely of rich food; he had also been drinking wine. He was attacked with indigestion; he had nausea; he felt as if he would vomit; he had a bitter taste in his mouth; he had a headache; he was irritable. Nux vomica relieved him."
        Pulsatilla | Menstrual disorders | "A young girl had scanty and irregular menstruation; she was pale and anaemic; she had a mild and gentle disposition; she was easily moved to tears; she was fond of sympathy; she had a craving for fresh air. Pulsatilla regulated her menses."
        Sulphur | Skin eruptions | "A boy had a chronic eruption on his scalp; it was moist and offensive; it itched intolerably; he scratched it until it bled; he was dirty and neglected; he had a voracious appetite; he was always hungry. Sulphur cured him."
        -

        Benefits and limitations of the book

        -

        Clinical Materia Medica by Farrington has many benefits for homeopathic practice, such as:

        -
          -
        • It provides a comprehensive and reliable source of information on homeopathic remedies.
        • -
        • It helps to understand the nature and action of remedies better.
        • -
        • It helps to differentiate between similar remedies better.
        • -
        • It helps to find the remedy that matches the totality of symptoms better.
        • -
        • It helps to prescribe remedies more confidently and effectively.
        • -
        • It helps to appreciate the principles and methods of homeopathy better.
        • -
        • It helps to learn from the experience and wisdom of a master homeopath.
        • -
        However, Clinical Materia Medica by Farrington also has some limitations for homeopathic practice, such as:

        -
          -
        • -
        • It may not include some of the newer or lesser-known remedies that have been discovered or proved after Farrington's time.
        • -
        • It may not reflect some of the latest developments or innovations in homeopathy that have emerged since Farrington's time.
        • -
        • It may not suit some of the modern preferences or styles of homeopathic practice that differ from Farrington's approach.
        • -
        • It may contain some errors or inaccuracies that have been corrected or revised by later authors or editions.
        • -
        -

        Therefore, Clinical Materia Medica by Farrington should be used with caution and discretion, and supplemented by other sources of information and knowledge on homeopathy.

        -

        Where to find Clinical Materia Medica by Farrington for free download?

        -

        Clinical Materia Medica by Farrington is a public domain book, which means that it is not protected by copyright and can be freely copied, distributed, or reproduced. Therefore, it is possible to find this book for free download on various online platforms, such as:

        -

        Online sources and links for free download

        -

        Some of the online sources and links that offer Clinical Materia Medica by Farrington for free download are:

        - -

        Legal and ethical issues of free download

        -

        While it is legal to download Clinical Materia Medica by Farrington for free from public domain sources, it may not be ethical to do so without acknowledging the author and the source. It may also not be fair to the publishers and sellers who have invested time and money to produce and distribute the book. Therefore, it is advisable to follow some ethical guidelines when downloading this book for free, such as:

        -
          -
        • Cite the author and the source when using or quoting from the book.
        • -
        • Do not modify or alter the content or format of the book without permission.
        • -
        • Do not use the book for commercial purposes or profit.
        • -
        • Do not infringe on the rights or interests of other authors or publishers who have produced or published similar or related books.
        • -
        • Support the homeopathic community and industry by buying or donating to other homeopathic books or products.
        • -
        -

        Alternative options for accessing the book

        -

        If you are not comfortable with downloading Clinical Materia Medica by Farrington for free, or if you prefer other options for accessing the book, you can consider some alternatives, such as:

        -
          -
        • Borrowing the book from a library or a friend who has a copy.
        • -
        • Buying the book from a bookstore or an online retailer who sells new or used copies.
        • -
        • Subscribing to a digital library or a streaming service that offers access to the book online.
        • -
        • Joining a homeopathy course or a study group that uses the book as a reference or a textbook.
        • -
        -

        Conclusion

        -

        Clinical Materia Medica by Farrington is a classic homeopathy book that provides a comprehensive and authoritative source of information on homeopathic remedies. It is written by a renowned homeopath and teacher who had a deep knowledge and experience in homeopathy. It is based on both provings and clinical observations, and reflects the principles and methods of homeopathy. It is organized according to a natural system of classification of remedies according to their families or groups. It covers 218 remedies in detail, with descriptions, symptoms, modalities, relationships, doses, cases, etc. It is written in a clear and engaging style, with anecdotes and illustrations. It can be used as a valuable tool for homeopathic practice, as well as a source of information and knowledge on homeopathy. It can be found for free download on various online platforms, but it is also advisable to follow some ethical guidelines and to consider some alternative options for accessing the book. Clinical Materia Medica by Farrington is a book that every homeopath should read and study, as it is a treasure of homeopathic wisdom and practice.

        -

        FAQs

        -

        Here are some frequently asked questions about Clinical Materia Medica by Farrington and their answers:

        -
          -
        1. How many remedies are covered in Clinical Materia Medica by Farrington?
        2. -

          Clinical Materia Medica by Farrington covers 218 remedies in detail, and mentions many more in passing. The remedies are classified into 12 groups or families, such as animal, vegetable, mineral, chemical, nosode, sarcode, etc.

          -
        3. What is the difference between clinical materia medica and pure materia medica?
        4. -

          Pure materia medica is the study of homeopathic remedies based on provings or experiments on healthy volunteers. Clinical materia medica is the study of homeopathic remedies based on clinical observation and experience on sick patients. Clinical materia medica confirms or modifies the results of pure materia medica by showing how remedies work in real-life situations.

          -
        5. Who are some of the other authors or books on homeopathic materia medica?
        6. -

          Some of the other authors or books on homeopathic materia medica are:

          -
            -
          • Samuel Hahnemann: The founder of homeopathy and the author of Materia Medica Pura and The Chronic Diseases.
          • -
          • Constantine Hering: The father of American homeopathy and the author of Guiding Symptoms of Our Materia Medica.
          • -
          • James Tyler Kent: A prominent American homeopath and the author of Lectures on Homeopathic Materia Medica and Repertory of the Homeopathic Materia Medica.
          • -
          • William Boericke: A famous American homeopath and the author of Pocket Manual of Homeopathic Materia Medica and Therapeutics.
          • -
          • Cyrus Maxwell Boger: A renowned American homeopath and the author of A Synoptic Key of the Materia Medica and Boenninghausen's Characteristics and Repertory.
          • -
          -
        7. How to find the best remedy for a patient using Clinical Materia Medica by Farrington?
        8. -

          To find the best remedy for a patient using Clinical Materia Medica by Farrington, one needs to follow some steps, such as:

          -
            -
          • Take a detailed case history of the patient, noting down the symptoms, modalities, causes, history, personality, etc.
          • -
          • Analyze the case and find out the totality of symptoms, which is the sum of all the characteristic signs and sensations of the patient.
          • -
          • Compare the totality of symptoms with the symptoms of various remedies in Clinical Materia Medica by Farrington, using the natural system of classification or the diagrams or charts of relationships.
          • -
          • Select the remedy that matches the totality of symptoms most closely, considering also the general or constitutional effects of the remedy.
          • -
          • Prescribe the remedy in the appropriate dose and potency, according to the principles and methods of homeopathy.
          • -
          9. What are some of the advantages and disadvantages of using Clinical Materia Medica by Farrington over other sources of information on homeopathy?

          Some of the advantages and disadvantages of using Clinical Materia Medica by Farrington over other sources of information on homeopathy are:

          - - - - - - - - - - -
          Advantages | Disadvantages
          It is comprehensive and authoritative | It may not include some newer or lesser-known remedies
          It is based on both provings and clinical observations | It may not reflect some latest developments or innovations in homeopathy
          It follows a natural system of classification | It may not suit some modern preferences or styles of practice
          It emphasizes the general or constitutional effects | It may contain some errors or inaccuracies
          It compares and contrasts different remedies |
          It gives practical tips and suggestions |
          It provides many examples and cases |
          It is written in a clear and engaging style |

          -
          -
          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Last Island of Survival Mod APK with Unlimited Money and Resources.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Last Island of Survival Mod APK with Unlimited Money and Resources.md deleted file mode 100644 index 3693eccf3280ecac5b39ae758f8c3c78020305a7..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Last Island of Survival Mod APK with Unlimited Money and Resources.md +++ /dev/null @@ -1,20 +0,0 @@ - -


            The Last Island of Survival Mod APK: A Thrilling Survival Experience on a Deserted Island

            -

            Do you love survival games? Do you want to test your skills and strategies on a deserted island? Do you want to have unlimited resources and access to all characters in the game? If you answered yes to any of these questions, then you should try The Last Island of Survival Mod APK. This is a modified version of the original game that gives you many advantages and benefits. In this article, we will tell you everything you need to know about this mod apk, including its features, how to download and install it, how to play it, and some tips and tricks to help you survive.

            -

            the last island of survival mod apk


            Download ►►► https://bltlly.com/2uOmFk



            -

            What is The Last Island of Survival?

            -

            The Last Island of Survival is a survival game developed by HK HERO ENTERTAINMENT CO.,LIMITED. It is available for Android devices and has more than 10 million downloads on Google Play Store. The game is set in a post-apocalyptic world where you are one of the few survivors who managed to escape from a deadly virus outbreak. You find yourself stranded on a remote island with no civilization or resources. You have to survive by gathering materials, crafting tools and weapons, building shelters, hunting animals, fighting zombies and other players, and exploring the island.

            -

            What are the features of The Last Island of Survival Mod APK?

            -

            The Last Island of Survival Mod APK is a modified version of the original game that gives you many advantages and benefits over other players. Here are some of the features that you can enjoy with this mod apk:

            -

            Unlimited gems, money, and characters unlocked

            -

            With this mod apk, you don't have to worry about running out of gems or money in the game. You can use them to buy anything you want from the shop, such as weapons, clothes, accessories, vehicles, etc. You can also unlock all the characters in the game without spending any real money.

            -

            High-quality graphics and sound effects

            -

            The game has stunning graphics that make you feel like you are really on an island. You can see the details of the environment, such as trees, rocks, water, grass, etc. You can also hear realistic sound effects that add to the immersion of the game.

            -
            -
\ No newline at end of file diff --git a/spaces/tiedaar/economics_summary_grader/app.py b/spaces/tiedaar/economics_summary_grader/app.py deleted file mode 100644 index 3c55e4323df2f4a50bf4025727eb564045d51717..0000000000000000000000000000000000000000 --- a/spaces/tiedaar/economics_summary_grader/app.py +++ /dev/null @@ -1,25 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-import json
-
-data = open('source_dict.txt', 'r')
-source_dict = json.loads(data.read())
-
-
-
-def getScore(summary, chapter):
-    text = summary + '' + source_dict[chapter]
-    pipe1 = pipeline('text-classification', model='tiedaar/summary-longformer-wording', function_to_apply="none")
-    pipe2 = pipeline('text-classification', model='tiedaar/summary-longformer-content', function_to_apply="none")
-    return pipe1(text)[0]['score'], pipe2(text)[0]['score']
-
-demo = gr.Interface(
-    fn=getScore,
-    inputs=[gr.Textbox(lines=2, placeholder="Summary..."), gr.Dropdown(label = "Chapter", choices = list(source_dict.keys())),],
-    outputs=[gr.Number(label = "Wording Score"), gr.Number(label="Content Score")],
-    title="Automatic Summary Scorer",
-    description="Automatic Summary Scorer for OpenStax Macroeconomics Textbook",
-    article="This is an app which provides two scores for summaries of chapters in the OpenStax textbook on Macroeconomics. The source text can be found at https://openstax.org/books/principles-macroeconomics-ap-courses-2e/pages/1-key-concepts-and-summary"
-)
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Minipro Tl866cs Software Free Download !EXCLUSIVE!.md b/spaces/tioseFevbu/cartoon-converter/scripts/Minipro Tl866cs Software Free Download !EXCLUSIVE!.md deleted file mode 100644 index 84b436826e52e478261a0bb7ba5efedfc7aa9df6..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Minipro Tl866cs Software Free Download !EXCLUSIVE!.md +++ /dev/null @@ -1,25 +0,0 @@
-
Here is a possible title and article for your keyword:

-

            How to Download and Install Minipro TL866CS Software for Free

            -

            If you are looking for a reliable and easy-to-use EEPROM programmer, you might want to check out the Minipro TL866CS. This device supports more than 14,000 chip types, including 25-series Flash, OTP chips, 1.8V 25-series Flash, and many more. It also has a high-performance USB interface that communicates at up to 480 Mbps. In this article, we will show you how to download and install the Minipro TL866CS software for free on your computer.

            -

            Step 1: Download the Minipro TL866CS Software

            -

            The first step is to download the latest version of the Minipro TL866CS software from the official website. The software is compatible with Windows XP, Vista, 7, 8, and 10. You can use the following link to download the software: [^1^]. The file size is about 21MB and it is in RAR format.

            -

            Minipro Tl866cs Software Free Download


            Download File ——— https://urlcod.com/2uHwJC



            -

            Step 2: Extract the RAR File

            -

            After downloading the software, you need to extract the RAR file using a program like WinRAR or 7-Zip. You can right-click on the file and choose "Extract Here" or "Extract to minipro_setup" option. You will get a folder named "minipro_setup" that contains the setup files.

            -

            Step 3: Run the Setup File

            -

            Next, you need to run the setup file named "minipro_setup.exe" inside the folder. You will see a welcome screen that asks you to choose a language. You can select English or any other language you prefer. Then, click "Next" to continue.

            -

            You will see a license agreement screen that asks you to accept the terms and conditions. You can read the agreement and then check the box that says "I accept the agreement". Then, click "Next" to continue.

            -

            You will see a destination folder screen that asks you to choose where to install the software. You can use the default location or browse for another folder. Then, click "Next" to continue.

            -

            You will see a start menu folder screen that asks you to choose where to create shortcuts for the software. You can use the default location or browse for another folder. Then, click "Next" to continue.

            -

            You will see a ready to install screen that shows you a summary of your choices. You can review them and then click "Install" to start the installation process.

            -

            -

            The installation process will take a few minutes and you will see a progress bar that shows you how much time is left. When the installation is complete, you will see a finish screen that asks you to launch the software. You can check the box that says "Launch Minipro" and then click "Finish" to exit the setup.

            -

            Step 4: Connect Your Minipro TL866CS Programmer

            -

            The last step is to connect your Minipro TL866CS programmer to your computer using a USB cable. You should hear a sound that indicates that your device is recognized by your computer. You can also see a green LED on your programmer that indicates that it is powered on.

            -

            Now, you can open the Minipro software from your desktop or start menu shortcut. You will see a main window that shows you various options and features of your programmer. You can select your chip type, read, write, verify, erase, and program your chips using this software.

            -

            Conclusion

            -

            In this article, we have shown you how to download and install the Minipro TL866CS software for free on your computer. This software can help you program various types of chips using your Minipro TL866CS programmer. We hope this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below.

            -
            -
            \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/msvccompiler.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/msvccompiler.py deleted file mode 100644 index 00c630be5092469c01bac86a2a6a1e05515fcc62..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/msvccompiler.py +++ /dev/null @@ -1,683 +0,0 @@ -"""distutils.msvccompiler - -Contains MSVCCompiler, an implementation of the abstract CCompiler class -for the Microsoft Visual Studio. -""" - -# Written by Perry Stoll -# hacked by Robin Becker and Thomas Heller to do a better job of -# finding DevStudio (through the registry) - -import sys, os -from distutils.errors import ( - DistutilsExecError, - DistutilsPlatformError, - CompileError, - LibError, - LinkError, -) -from distutils.ccompiler import CCompiler, gen_lib_options -from distutils import log - -_can_read_reg = False -try: - import winreg - - _can_read_reg = True - hkey_mod = winreg - - RegOpenKeyEx = winreg.OpenKeyEx - RegEnumKey = winreg.EnumKey - RegEnumValue = winreg.EnumValue - RegError = winreg.error - -except ImportError: - try: - import win32api - import win32con - - _can_read_reg = True - hkey_mod = win32con - - RegOpenKeyEx = win32api.RegOpenKeyEx - RegEnumKey = win32api.RegEnumKey - RegEnumValue = win32api.RegEnumValue - RegError = win32api.error - except ImportError: - log.info( - "Warning: Can't read registry to find the " - "necessary compiler setting\n" - "Make sure that Python modules winreg, " - "win32api or win32con are installed." - ) - pass - -if _can_read_reg: - HKEYS = ( - hkey_mod.HKEY_USERS, - hkey_mod.HKEY_CURRENT_USER, - hkey_mod.HKEY_LOCAL_MACHINE, - hkey_mod.HKEY_CLASSES_ROOT, - ) - - -def read_keys(base, key): - """Return list of registry keys.""" - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - L = [] - i = 0 - while True: - try: - k = RegEnumKey(handle, i) - except RegError: - break - L.append(k) - i += 1 - return L - - -def read_values(base, key): - """Return dict of registry keys and values. - - All names are converted to lowercase. 
- """ - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - d = {} - i = 0 - while True: - try: - name, value, type = RegEnumValue(handle, i) - except RegError: - break - name = name.lower() - d[convert_mbcs(name)] = convert_mbcs(value) - i += 1 - return d - - -def convert_mbcs(s): - dec = getattr(s, "decode", None) - if dec is not None: - try: - s = dec("mbcs") - except UnicodeError: - pass - return s - - -class MacroExpander: - def __init__(self, version): - self.macros = {} - self.load_macros(version) - - def set_macro(self, macro, path, key): - for base in HKEYS: - d = read_values(base, path) - if d: - self.macros["$(%s)" % macro] = d[key] - break - - def load_macros(self, version): - vsbase = r"Software\Microsoft\VisualStudio\%0.1f" % version - self.set_macro("VCInstallDir", vsbase + r"\Setup\VC", "productdir") - self.set_macro("VSInstallDir", vsbase + r"\Setup\VS", "productdir") - net = r"Software\Microsoft\.NETFramework" - self.set_macro("FrameworkDir", net, "installroot") - try: - if version > 7.0: - self.set_macro("FrameworkSDKDir", net, "sdkinstallrootv1.1") - else: - self.set_macro("FrameworkSDKDir", net, "sdkinstallroot") - except KeyError as exc: # - raise DistutilsPlatformError( - """Python was built with Visual Studio 2003; -extensions must be built with a compiler than can generate compatible binaries. -Visual Studio 2003 was not found on this system. If you have Cygwin installed, -you can try compiling with MingW32, by passing "-c mingw32" to setup.py.""" - ) - - p = r"Software\Microsoft\NET Framework Setup\Product" - for base in HKEYS: - try: - h = RegOpenKeyEx(base, p) - except RegError: - continue - key = RegEnumKey(h, 0) - d = read_values(base, r"%s\%s" % (p, key)) - self.macros["$(FrameworkVersion)"] = d["version"] - - def sub(self, s): - for k, v in self.macros.items(): - s = s.replace(k, v) - return s - - -def get_build_version(): - """Return the version of MSVC that was used to build Python. - - For Python 2.3 and up, the version number is included in - sys.version. For earlier versions, assume the compiler is MSVC 6. - """ - prefix = "MSC v." - i = sys.version.find(prefix) - if i == -1: - return 6 - i = i + len(prefix) - s, rest = sys.version[i:].split(" ", 1) - majorVersion = int(s[:-2]) - 6 - if majorVersion >= 13: - # v13 was skipped and should be v14 - majorVersion += 1 - minorVersion = int(s[2:3]) / 10.0 - # I don't think paths are affected by minor version in version 6 - if majorVersion == 6: - minorVersion = 0 - if majorVersion >= 6: - return majorVersion + minorVersion - # else we don't know what version of the compiler this is - return None - - -def get_build_architecture(): - """Return the processor architecture. - - Possible results are "Intel" or "AMD64". - """ - - prefix = " bit (" - i = sys.version.find(prefix) - if i == -1: - return "Intel" - j = sys.version.find(")", i) - return sys.version[i + len(prefix) : j] - - -def normalize_and_reduce_paths(paths): - """Return a list of normalized paths with duplicates removed. - - The current order of paths is maintained. - """ - # Paths are normalized so things like: /a and /a/ aren't both preserved. - reduced_paths = [] - for p in paths: - np = os.path.normpath(p) - # XXX(nnorwitz): O(n**2), if reduced_paths gets long perhaps use a set. 
- if np not in reduced_paths: - reduced_paths.append(np) - return reduced_paths - - -class MSVCCompiler(CCompiler): - """Concrete class that implements an interface to Microsoft Visual C++, - as defined by the CCompiler abstract class.""" - - compiler_type = 'msvc' - - # Just set this so CCompiler's constructor doesn't barf. We currently - # don't use the 'set_executables()' bureaucracy provided by CCompiler, - # as it really isn't necessary for this sort of single-compiler class. - # Would be nice to have a consistent interface with UnixCCompiler, - # though, so it's worth thinking about. - executables = {} - - # Private class data (need to distinguish C from C++ source for compiler) - _c_extensions = ['.c'] - _cpp_extensions = ['.cc', '.cpp', '.cxx'] - _rc_extensions = ['.rc'] - _mc_extensions = ['.mc'] - - # Needed for the filename generation methods provided by the - # base class, CCompiler. - src_extensions = _c_extensions + _cpp_extensions + _rc_extensions + _mc_extensions - res_extension = '.res' - obj_extension = '.obj' - static_lib_extension = '.lib' - shared_lib_extension = '.dll' - static_lib_format = shared_lib_format = '%s%s' - exe_extension = '.exe' - - def __init__(self, verbose=0, dry_run=0, force=0): - super().__init__(verbose, dry_run, force) - self.__version = get_build_version() - self.__arch = get_build_architecture() - if self.__arch == "Intel": - # x86 - if self.__version >= 7: - self.__root = r"Software\Microsoft\VisualStudio" - self.__macros = MacroExpander(self.__version) - else: - self.__root = r"Software\Microsoft\Devstudio" - self.__product = "Visual Studio version %s" % self.__version - else: - # Win64. Assume this was built with the platform SDK - self.__product = "Microsoft SDK compiler %s" % (self.__version + 6) - - self.initialized = False - - def initialize(self): - self.__paths = [] - if ( - "DISTUTILS_USE_SDK" in os.environ - and "MSSdk" in os.environ - and self.find_exe("cl.exe") - ): - # Assume that the SDK set up everything alright; don't try to be - # smarter - self.cc = "cl.exe" - self.linker = "link.exe" - self.lib = "lib.exe" - self.rc = "rc.exe" - self.mc = "mc.exe" - else: - self.__paths = self.get_msvc_paths("path") - - if len(self.__paths) == 0: - raise DistutilsPlatformError( - "Python was built with %s, " - "and extensions need to be built with the same " - "version of the compiler, but it isn't installed." 
% self.__product - ) - - self.cc = self.find_exe("cl.exe") - self.linker = self.find_exe("link.exe") - self.lib = self.find_exe("lib.exe") - self.rc = self.find_exe("rc.exe") # resource compiler - self.mc = self.find_exe("mc.exe") # message compiler - self.set_path_env_var('lib') - self.set_path_env_var('include') - - # extend the MSVC path with the current path - try: - for p in os.environ['path'].split(';'): - self.__paths.append(p) - except KeyError: - pass - self.__paths = normalize_and_reduce_paths(self.__paths) - os.environ['path'] = ";".join(self.__paths) - - self.preprocess_options = None - if self.__arch == "Intel": - self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/GX', '/DNDEBUG'] - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/W3', - '/GX', - '/Z7', - '/D_DEBUG', - ] - else: - # Win64 - self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/GS-', '/DNDEBUG'] - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/W3', - '/GS-', - '/Z7', - '/D_DEBUG', - ] - - self.ldflags_shared = ['/DLL', '/nologo', '/INCREMENTAL:NO'] - if self.__version >= 7: - self.ldflags_shared_debug = ['/DLL', '/nologo', '/INCREMENTAL:no', '/DEBUG'] - else: - self.ldflags_shared_debug = [ - '/DLL', - '/nologo', - '/INCREMENTAL:no', - '/pdb:None', - '/DEBUG', - ] - self.ldflags_static = ['/nologo'] - - self.initialized = True - - # -- Worker methods ------------------------------------------------ - - def object_filenames(self, source_filenames, strip_dir=0, output_dir=''): - # Copied from ccompiler.py, extended to return .res as 'object'-file - # for .rc input file - if output_dir is None: - output_dir = '' - obj_names = [] - for src_name in source_filenames: - (base, ext) = os.path.splitext(src_name) - base = os.path.splitdrive(base)[1] # Chop off the drive - base = base[os.path.isabs(base) :] # If abs, chop off leading / - if ext not in self.src_extensions: - # Better to raise an exception instead of silently continuing - # and later complain about sources and targets having - # different lengths - raise CompileError("Don't know how to compile %s" % src_name) - if strip_dir: - base = os.path.basename(base) - if ext in self._rc_extensions: - obj_names.append(os.path.join(output_dir, base + self.res_extension)) - elif ext in self._mc_extensions: - obj_names.append(os.path.join(output_dir, base + self.res_extension)) - else: - obj_names.append(os.path.join(output_dir, base + self.obj_extension)) - return obj_names - - def compile( - self, - sources, - output_dir=None, - macros=None, - include_dirs=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - depends=None, - ): - - if not self.initialized: - self.initialize() - compile_info = self._setup_compile( - output_dir, macros, include_dirs, sources, depends, extra_postargs - ) - macros, objects, extra_postargs, pp_opts, build = compile_info - - compile_opts = extra_preargs or [] - compile_opts.append('/c') - if debug: - compile_opts.extend(self.compile_options_debug) - else: - compile_opts.extend(self.compile_options) - - for obj in objects: - try: - src, ext = build[obj] - except KeyError: - continue - if debug: - # pass the full pathname to MSVC in debug mode, - # this allows the debugger to find the source file - # without asking the user to browse for it - src = os.path.abspath(src) - - if ext in self._c_extensions: - input_opt = "/Tc" + src - elif ext in self._cpp_extensions: - input_opt = "/Tp" + src - elif ext in self._rc_extensions: - # compile .RC to .RES file - input_opt = src - output_opt = 
"/fo" + obj - try: - self.spawn([self.rc] + pp_opts + [output_opt] + [input_opt]) - except DistutilsExecError as msg: - raise CompileError(msg) - continue - elif ext in self._mc_extensions: - # Compile .MC to .RC file to .RES file. - # * '-h dir' specifies the directory for the - # generated include file - # * '-r dir' specifies the target directory of the - # generated RC file and the binary message resource - # it includes - # - # For now (since there are no options to change this), - # we use the source-directory for the include file and - # the build directory for the RC file and message - # resources. This works at least for win32all. - h_dir = os.path.dirname(src) - rc_dir = os.path.dirname(obj) - try: - # first compile .MC to .RC and .H file - self.spawn([self.mc] + ['-h', h_dir, '-r', rc_dir] + [src]) - base, _ = os.path.splitext(os.path.basename(src)) - rc_file = os.path.join(rc_dir, base + '.rc') - # then compile .RC to .RES file - self.spawn([self.rc] + ["/fo" + obj] + [rc_file]) - - except DistutilsExecError as msg: - raise CompileError(msg) - continue - else: - # how to handle this file? - raise CompileError("Don't know how to compile %s to %s" % (src, obj)) - - output_opt = "/Fo" + obj - try: - self.spawn( - [self.cc] - + compile_opts - + pp_opts - + [input_opt, output_opt] - + extra_postargs - ) - except DistutilsExecError as msg: - raise CompileError(msg) - - return objects - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - output_filename = self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - lib_args = objects + ['/OUT:' + output_filename] - if debug: - pass # XXX what goes here? - try: - self.spawn([self.lib] + lib_args) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def link( - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs) - (libraries, library_dirs, runtime_library_dirs) = fixed_args - - if runtime_library_dirs: - self.warn( - "I don't know what to do with 'runtime_library_dirs': " - + str(runtime_library_dirs) - ) - - lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries) - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - if target_desc == CCompiler.EXECUTABLE: - if debug: - ldflags = self.ldflags_shared_debug[1:] - else: - ldflags = self.ldflags_shared[1:] - else: - if debug: - ldflags = self.ldflags_shared_debug - else: - ldflags = self.ldflags_shared - - export_opts = [] - for sym in export_symbols or []: - export_opts.append("/EXPORT:" + sym) - - ld_args = ( - ldflags + lib_opts + export_opts + objects + ['/OUT:' + output_filename] - ) - - # The MSVC linker generates .lib and .exp files, which cannot be - # suppressed by any linker switches. The .lib files may even be - # needed! 
Make sure they are generated in the temporary build - # directory. Since they have different names for debug and release - # builds, they can go into the same directory. - if export_symbols is not None: - (dll_name, dll_ext) = os.path.splitext( - os.path.basename(output_filename) - ) - implib_file = os.path.join( - os.path.dirname(objects[0]), self.library_filename(dll_name) - ) - ld_args.append('/IMPLIB:' + implib_file) - - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - - self.mkpath(os.path.dirname(output_filename)) - try: - self.spawn([self.linker] + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - - else: - log.debug("skipping %s (up-to-date)", output_filename) - - # -- Miscellaneous methods ----------------------------------------- - # These are all used by the 'gen_lib_options() function, in - # ccompiler.py. - - def library_dir_option(self, dir): - return "/LIBPATH:" + dir - - def runtime_library_dir_option(self, dir): - raise DistutilsPlatformError( - "don't know how to set runtime library search path for MSVC++" - ) - - def library_option(self, lib): - return self.library_filename(lib) - - def find_library_file(self, dirs, lib, debug=0): - # Prefer a debugging library if found (and requested), but deal - # with it if we don't have one. - if debug: - try_names = [lib + "_d", lib] - else: - try_names = [lib] - for dir in dirs: - for name in try_names: - libfile = os.path.join(dir, self.library_filename(name)) - if os.path.exists(libfile): - return libfile - else: - # Oops, didn't find it in *any* of 'dirs' - return None - - # Helper methods for using the MSVC registry settings - - def find_exe(self, exe): - """Return path to an MSVC executable program. - - Tries to find the program in several places: first, one of the - MSVC program search paths from the registry; next, the directories - in the PATH environment variable. If any of those work, return an - absolute path that is known to exist. If none of them work, just - return the original program name, 'exe'. - """ - for p in self.__paths: - fn = os.path.join(os.path.abspath(p), exe) - if os.path.isfile(fn): - return fn - - # didn't find it; try existing path - for p in os.environ['Path'].split(';'): - fn = os.path.join(os.path.abspath(p), exe) - if os.path.isfile(fn): - return fn - - return exe - - def get_msvc_paths(self, path, platform='x86'): - """Get a list of devstudio directories (include, lib or path). - - Return a list of strings. The list will be empty if unable to - access the registry or appropriate registry keys not found. - """ - if not _can_read_reg: - return [] - - path = path + " dirs" - if self.__version >= 7: - key = r"%s\%0.1f\VC\VC_OBJECTS_PLATFORM_INFO\Win32\Directories" % ( - self.__root, - self.__version, - ) - else: - key = ( - r"%s\6.0\Build System\Components\Platforms" - r"\Win32 (%s)\Directories" % (self.__root, platform) - ) - - for base in HKEYS: - d = read_values(base, key) - if d: - if self.__version >= 7: - return self.__macros.sub(d[path]).split(";") - else: - return d[path].split(";") - # MSVC 6 seems to create the registry entries we need only when - # the GUI is run. - if self.__version == 6: - for base in HKEYS: - if read_values(base, r"%s\6.0" % self.__root) is not None: - self.warn( - "It seems you have Visual Studio 6 installed, " - "but the expected registry settings are not present.\n" - "You must at least run the Visual Studio GUI once " - "so that these entries are created." 
- ) - break - return [] - - def set_path_env_var(self, name): - """Set environment variable 'name' to an MSVC path type value. - - This is equivalent to a SET command prior to execution of spawned - commands. - """ - - if name == "lib": - p = self.get_msvc_paths("library") - else: - p = self.get_msvc_paths(name) - if p: - os.environ[name] = ';'.join(p) - - -if get_build_version() >= 8.0: - log.debug("Importing new compiler from distutils.msvc9compiler") - OldMSVCCompiler = MSVCCompiler - from distutils.msvc9compiler import MSVCCompiler - - # get_build_architecture not really relevant now we support cross-compile - from distutils.msvc9compiler import MacroExpander diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py deleted file mode 100644 index 0e9e3c9cd003f0b72cd1355ba06ccc31795b55a2..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/config/pyprojecttoml.py +++ /dev/null @@ -1,484 +0,0 @@ -""" -Load setuptools configuration from ``pyproject.toml`` files. - -**PRIVATE MODULE**: API reserved for setuptools internal usage only. -""" -import logging -import os -import warnings -from contextlib import contextmanager -from functools import partial -from typing import TYPE_CHECKING, Callable, Dict, Optional, Mapping, Union - -from setuptools.errors import FileError, OptionError - -from . import expand as _expand -from ._apply_pyprojecttoml import apply as _apply -from ._apply_pyprojecttoml import _PREVIOUSLY_DEFINED, _WouldIgnoreField - -if TYPE_CHECKING: - from setuptools.dist import Distribution # noqa - -_Path = Union[str, os.PathLike] -_logger = logging.getLogger(__name__) - - -def load_file(filepath: _Path) -> dict: - from setuptools.extern import tomli # type: ignore - - with open(filepath, "rb") as file: - return tomli.load(file) - - -def validate(config: dict, filepath: _Path) -> bool: - from . import _validate_pyproject as validator - - trove_classifier = validator.FORMAT_FUNCTIONS.get("trove-classifier") - if hasattr(trove_classifier, "_disable_download"): - # Improve reproducibility by default. See issue 31 for validate-pyproject. - trove_classifier._disable_download() # type: ignore - - try: - return validator.validate(config) - except validator.ValidationError as ex: - _logger.error(f"configuration error: {ex.summary}") # type: ignore - _logger.debug(ex.details) # type: ignore - error = ValueError(f"invalid pyproject.toml config: {ex.name}") # type: ignore - raise error from None - - -def apply_configuration( - dist: "Distribution", - filepath: _Path, - ignore_option_errors=False, -) -> "Distribution": - """Apply the configuration from a ``pyproject.toml`` file into an existing - distribution object. - """ - config = read_configuration(filepath, True, ignore_option_errors, dist) - return _apply(dist, config, filepath) - - -def read_configuration( - filepath: _Path, - expand=True, - ignore_option_errors=False, - dist: Optional["Distribution"] = None, -): - """Read given configuration file and returns options from it as a dict. - - :param str|unicode filepath: Path to configuration file in the ``pyproject.toml`` - format. - - :param bool expand: Whether to expand directives and other computed values - (i.e. 
post-process the given configuration) - - :param bool ignore_option_errors: Whether to silently ignore - options, values of which could not be resolved (e.g. due to exceptions - in directives such as file:, attr:, etc.). - If False exceptions are propagated as expected. - - :param Distribution|None: Distribution object to which the configuration refers. - If not given a dummy object will be created and discarded after the - configuration is read. This is used for auto-discovery of packages in the case - a dynamic configuration (e.g. ``attr`` or ``cmdclass``) is expanded. - When ``expand=False`` this object is simply ignored. - - :rtype: dict - """ - filepath = os.path.abspath(filepath) - - if not os.path.isfile(filepath): - raise FileError(f"Configuration file {filepath!r} does not exist.") - - asdict = load_file(filepath) or {} - project_table = asdict.get("project", {}) - tool_table = asdict.get("tool", {}) - setuptools_table = tool_table.get("setuptools", {}) - if not asdict or not (project_table or setuptools_table): - return {} # User is not using pyproject to configure setuptools - - if setuptools_table: - # TODO: Remove the following once the feature stabilizes: - msg = "Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*." - warnings.warn(msg, _BetaConfiguration) - - # There is an overall sense in the community that making include_package_data=True - # the default would be an improvement. - # `ini2toml` backfills include_package_data=False when nothing is explicitly given, - # therefore setting a default here is backwards compatible. - orig_setuptools_table = setuptools_table.copy() - if dist and getattr(dist, "include_package_data") is not None: - setuptools_table.setdefault("include-package-data", dist.include_package_data) - else: - setuptools_table.setdefault("include-package-data", True) - # Persist changes: - asdict["tool"] = tool_table - tool_table["setuptools"] = setuptools_table - - try: - # Don't complain about unrelated errors (e.g. tools not using the "tool" table) - subset = {"project": project_table, "tool": {"setuptools": setuptools_table}} - validate(subset, filepath) - except Exception as ex: - # TODO: Remove the following once the feature stabilizes: - if _skip_bad_config(project_table, orig_setuptools_table, dist): - return {} - # TODO: After the previous statement is removed the try/except can be replaced - # by the _ignore_errors context manager. 
- if ignore_option_errors: - _logger.debug(f"ignored error: {ex.__class__.__name__} - {ex}") - else: - raise # re-raise exception - - if expand: - root_dir = os.path.dirname(filepath) - return expand_configuration(asdict, root_dir, ignore_option_errors, dist) - - return asdict - - -def _skip_bad_config( - project_cfg: dict, setuptools_cfg: dict, dist: Optional["Distribution"] -) -> bool: - """Be temporarily forgiving with invalid ``pyproject.toml``""" - # See pypa/setuptools#3199 and pypa/cibuildwheel#1064 - - if dist is None or ( - dist.metadata.name is None - and dist.metadata.version is None - and dist.install_requires is None - ): - # It seems that the build is not getting any configuration from other places - return False - - if setuptools_cfg: - # If `[tool.setuptools]` is set, then `pyproject.toml` config is intentional - return False - - given_config = set(project_cfg.keys()) - popular_subset = {"name", "version", "python_requires", "requires-python"} - if given_config <= popular_subset: - # It seems that the docs in cibuildtool has been inadvertently encouraging users - # to create `pyproject.toml` files that are not compliant with the standards. - # Let's be forgiving for the time being. - warnings.warn(_InvalidFile.message(), _InvalidFile, stacklevel=2) - return True - - return False - - -def expand_configuration( - config: dict, - root_dir: Optional[_Path] = None, - ignore_option_errors: bool = False, - dist: Optional["Distribution"] = None, -) -> dict: - """Given a configuration with unresolved fields (e.g. dynamic, cmdclass, ...) - find their final values. - - :param dict config: Dict containing the configuration for the distribution - :param str root_dir: Top-level directory for the distribution/project - (the same directory where ``pyproject.toml`` is place) - :param bool ignore_option_errors: see :func:`read_configuration` - :param Distribution|None: Distribution object to which the configuration refers. - If not given a dummy object will be created and discarded after the - configuration is read. Used in the case a dynamic configuration - (e.g. ``attr`` or ``cmdclass``). 
- - :rtype: dict - """ - return _ConfigExpander(config, root_dir, ignore_option_errors, dist).expand() - - -class _ConfigExpander: - def __init__( - self, - config: dict, - root_dir: Optional[_Path] = None, - ignore_option_errors: bool = False, - dist: Optional["Distribution"] = None, - ): - self.config = config - self.root_dir = root_dir or os.getcwd() - self.project_cfg = config.get("project", {}) - self.dynamic = self.project_cfg.get("dynamic", []) - self.setuptools_cfg = config.get("tool", {}).get("setuptools", {}) - self.dynamic_cfg = self.setuptools_cfg.get("dynamic", {}) - self.ignore_option_errors = ignore_option_errors - self._dist = dist - - def _ensure_dist(self) -> "Distribution": - from setuptools.dist import Distribution - - attrs = {"src_root": self.root_dir, "name": self.project_cfg.get("name", None)} - return self._dist or Distribution(attrs) - - def _process_field(self, container: dict, field: str, fn: Callable): - if field in container: - with _ignore_errors(self.ignore_option_errors): - container[field] = fn(container[field]) - - def _canonic_package_data(self, field="package-data"): - package_data = self.setuptools_cfg.get(field, {}) - return _expand.canonic_package_data(package_data) - - def expand(self): - self._expand_packages() - self._canonic_package_data() - self._canonic_package_data("exclude-package-data") - - # A distribution object is required for discovering the correct package_dir - dist = self._ensure_dist() - - with _EnsurePackagesDiscovered(dist, self.setuptools_cfg) as ensure_discovered: - package_dir = ensure_discovered.package_dir - self._expand_data_files() - self._expand_cmdclass(package_dir) - self._expand_all_dynamic(dist, package_dir) - - return self.config - - def _expand_packages(self): - packages = self.setuptools_cfg.get("packages") - if packages is None or isinstance(packages, (list, tuple)): - return - - find = packages.get("find") - if isinstance(find, dict): - find["root_dir"] = self.root_dir - find["fill_package_dir"] = self.setuptools_cfg.setdefault("package-dir", {}) - with _ignore_errors(self.ignore_option_errors): - self.setuptools_cfg["packages"] = _expand.find_packages(**find) - - def _expand_data_files(self): - data_files = partial(_expand.canonic_data_files, root_dir=self.root_dir) - self._process_field(self.setuptools_cfg, "data-files", data_files) - - def _expand_cmdclass(self, package_dir: Mapping[str, str]): - root_dir = self.root_dir - cmdclass = partial(_expand.cmdclass, package_dir=package_dir, root_dir=root_dir) - self._process_field(self.setuptools_cfg, "cmdclass", cmdclass) - - def _expand_all_dynamic(self, dist: "Distribution", package_dir: Mapping[str, str]): - special = ( # need special handling - "version", - "readme", - "entry-points", - "scripts", - "gui-scripts", - "classifiers", - "dependencies", - "optional-dependencies", - ) - # `_obtain` functions are assumed to raise appropriate exceptions/warnings. - obtained_dynamic = { - field: self._obtain(dist, field, package_dir) - for field in self.dynamic - if field not in special - } - obtained_dynamic.update( - self._obtain_entry_points(dist, package_dir) or {}, - version=self._obtain_version(dist, package_dir), - readme=self._obtain_readme(dist), - classifiers=self._obtain_classifiers(dist), - dependencies=self._obtain_dependencies(dist), - optional_dependencies=self._obtain_optional_dependencies(dist), - ) - # `None` indicates there is nothing in `tool.setuptools.dynamic` but the value - # might have already been set by setup.py/extensions, so avoid overwriting. 
- updates = {k: v for k, v in obtained_dynamic.items() if v is not None} - self.project_cfg.update(updates) - - def _ensure_previously_set(self, dist: "Distribution", field: str): - previous = _PREVIOUSLY_DEFINED[field](dist) - if previous is None and not self.ignore_option_errors: - msg = ( - f"No configuration found for dynamic {field!r}.\n" - "Some dynamic fields need to be specified via `tool.setuptools.dynamic`" - "\nothers must be specified via the equivalent attribute in `setup.py`." - ) - raise OptionError(msg) - - def _expand_directive( - self, specifier: str, directive, package_dir: Mapping[str, str] - ): - with _ignore_errors(self.ignore_option_errors): - root_dir = self.root_dir - if "file" in directive: - return _expand.read_files(directive["file"], root_dir) - if "attr" in directive: - return _expand.read_attr(directive["attr"], package_dir, root_dir) - raise ValueError(f"invalid `{specifier}`: {directive!r}") - return None - - def _obtain(self, dist: "Distribution", field: str, package_dir: Mapping[str, str]): - if field in self.dynamic_cfg: - return self._expand_directive( - f"tool.setuptools.dynamic.{field}", - self.dynamic_cfg[field], - package_dir, - ) - self._ensure_previously_set(dist, field) - return None - - def _obtain_version(self, dist: "Distribution", package_dir: Mapping[str, str]): - # Since plugins can set version, let's silently skip if it cannot be obtained - if "version" in self.dynamic and "version" in self.dynamic_cfg: - return _expand.version(self._obtain(dist, "version", package_dir)) - return None - - def _obtain_readme(self, dist: "Distribution") -> Optional[Dict[str, str]]: - if "readme" not in self.dynamic: - return None - - dynamic_cfg = self.dynamic_cfg - if "readme" in dynamic_cfg: - return { - "text": self._obtain(dist, "readme", {}), - "content-type": dynamic_cfg["readme"].get("content-type", "text/x-rst"), - } - - self._ensure_previously_set(dist, "readme") - return None - - def _obtain_entry_points( - self, dist: "Distribution", package_dir: Mapping[str, str] - ) -> Optional[Dict[str, dict]]: - fields = ("entry-points", "scripts", "gui-scripts") - if not any(field in self.dynamic for field in fields): - return None - - text = self._obtain(dist, "entry-points", package_dir) - if text is None: - return None - - groups = _expand.entry_points(text) - expanded = {"entry-points": groups} - - def _set_scripts(field: str, group: str): - if group in groups: - value = groups.pop(group) - if field not in self.dynamic: - msg = _WouldIgnoreField.message(field, value) - warnings.warn(msg, _WouldIgnoreField) - # TODO: Don't set field when support for pyproject.toml stabilizes - # instead raise an error as specified in PEP 621 - expanded[field] = value - - _set_scripts("scripts", "console_scripts") - _set_scripts("gui-scripts", "gui_scripts") - - return expanded - - def _obtain_classifiers(self, dist: "Distribution"): - if "classifiers" in self.dynamic: - value = self._obtain(dist, "classifiers", {}) - if value: - return value.splitlines() - return None - - def _obtain_dependencies(self, dist: "Distribution"): - if "dependencies" in self.dynamic: - value = self._obtain(dist, "dependencies", {}) - if value: - return _parse_requirements_list(value) - return None - - def _obtain_optional_dependencies(self, dist: "Distribution"): - if "optional-dependencies" not in self.dynamic: - return None - if "optional-dependencies" in self.dynamic_cfg: - optional_dependencies_map = self.dynamic_cfg["optional-dependencies"] - assert isinstance(optional_dependencies_map, 
dict) - return { - group: _parse_requirements_list(self._expand_directive( - f"tool.setuptools.dynamic.optional-dependencies.{group}", - directive, - {}, - )) - for group, directive in optional_dependencies_map.items() - } - self._ensure_previously_set(dist, "optional-dependencies") - return None - - -def _parse_requirements_list(value): - return [ - line - for line in value.splitlines() - if line.strip() and not line.strip().startswith("#") - ] - - -@contextmanager -def _ignore_errors(ignore_option_errors: bool): - if not ignore_option_errors: - yield - return - - try: - yield - except Exception as ex: - _logger.debug(f"ignored error: {ex.__class__.__name__} - {ex}") - - -class _EnsurePackagesDiscovered(_expand.EnsurePackagesDiscovered): - def __init__(self, distribution: "Distribution", setuptools_cfg: dict): - super().__init__(distribution) - self._setuptools_cfg = setuptools_cfg - - def __enter__(self): - """When entering the context, the values of ``packages``, ``py_modules`` and - ``package_dir`` that are missing in ``dist`` are copied from ``setuptools_cfg``. - """ - dist, cfg = self._dist, self._setuptools_cfg - package_dir: Dict[str, str] = cfg.setdefault("package-dir", {}) - package_dir.update(dist.package_dir or {}) - dist.package_dir = package_dir # needs to be the same object - - dist.set_defaults._ignore_ext_modules() # pyproject.toml-specific behaviour - - # Set `py_modules` and `packages` in dist to short-circuit auto-discovery, - # but avoid overwriting empty lists purposefully set by users. - if dist.py_modules is None: - dist.py_modules = cfg.get("py-modules") - if dist.packages is None: - dist.packages = cfg.get("packages") - - return super().__enter__() - - def __exit__(self, exc_type, exc_value, traceback): - """When exiting the context, if values of ``packages``, ``py_modules`` and - ``package_dir`` are missing in ``setuptools_cfg``, copy from ``dist``. - """ - # If anything was discovered set them back, so they count in the final config. - self._setuptools_cfg.setdefault("packages", self._dist.packages) - self._setuptools_cfg.setdefault("py-modules", self._dist.py_modules) - return super().__exit__(exc_type, exc_value, traceback) - - -class _BetaConfiguration(UserWarning): - """Explicitly inform users that some `pyproject.toml` configuration is *beta*""" - - -class _InvalidFile(UserWarning): - """The given `pyproject.toml` file is invalid and would be ignored. - !!\n\n - ############################ - # Invalid `pyproject.toml` # - ############################ - - Any configurations in `pyproject.toml` will be ignored. - Please note that future releases of setuptools will halt the build process - if an invalid file is given. - - To prevent setuptools from considering `pyproject.toml` please - DO NOT include the `[project]` or `[tool.setuptools]` tables in your file. - \n\n!! 
- """ - - @classmethod - def message(cls): - from inspect import cleandoc - return cleandoc(cls.__doc__) diff --git a/spaces/tnrzk13/PneumoniaDetection/app.py b/spaces/tnrzk13/PneumoniaDetection/app.py deleted file mode 100644 index ed56dea579aa02e484e2882fc2e09dc89b34b76c..0000000000000000000000000000000000000000 --- a/spaces/tnrzk13/PneumoniaDetection/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -learn = load_learner('chestXray.pkl') - -categories = ('No Pneumonia', 'Pneumonia') - -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories, map(float,probs))) - -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -path = 'images/' -examples = [f'{path}Pneumonia 1.jpeg', f'{path}Pneumonia 2.jpeg', f'{path}Pneumonia 3.jpeg', f'{path}No Pneumonia 1.jpeg', f'{path}No Pneumonia 2.jpeg', f'{path}No Pneumonia 3.jpeg'] - -intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/layers/roi_align.py b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/layers/roi_align.py deleted file mode 100644 index 1036f962db0bfa9d053ab111be4536add3dc3860..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/layers/roi_align.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import torch -from torch import nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from maskrcnn_benchmark import _C -from apex import amp - -class _ROIAlign(Function): - @staticmethod - def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio): - ctx.save_for_backward(roi) - ctx.output_size = _pair(output_size) - ctx.spatial_scale = spatial_scale - ctx.sampling_ratio = sampling_ratio - ctx.input_shape = input.size() - output = _C.roi_align_forward( - input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - rois, = ctx.saved_tensors - output_size = ctx.output_size - spatial_scale = ctx.spatial_scale - sampling_ratio = ctx.sampling_ratio - bs, ch, h, w = ctx.input_shape - grad_input = _C.roi_align_backward( - grad_output, - rois, - spatial_scale, - output_size[0], - output_size[1], - bs, - ch, - h, - w, - sampling_ratio, - ) - return grad_input, None, None, None, None - - -roi_align = _ROIAlign.apply - - -class ROIAlign(nn.Module): - def __init__(self, output_size, spatial_scale, sampling_ratio): - super(ROIAlign, self).__init__() - self.output_size = output_size - self.spatial_scale = spatial_scale - self.sampling_ratio = sampling_ratio - - @amp.float_function - def forward(self, input, rois): - return roi_align( - input, rois, self.output_size, self.spatial_scale, self.sampling_ratio - ) - - def __repr__(self): - tmpstr = self.__class__.__name__ + "(" - tmpstr += "output_size=" + str(self.output_size) - tmpstr += ", spatial_scale=" + str(self.spatial_scale) - tmpstr += ", sampling_ratio=" + str(self.sampling_ratio) - tmpstr += ")" - return tmpstr diff --git a/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/meta_init_context.py b/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/meta_init_context.py deleted file mode 100644 index 
6cba6fff0fe21fe222c7ab38eae44a9784c0be9c..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/meta_init_context.py +++ /dev/null @@ -1,94 +0,0 @@ -from contextlib import contextmanager -import torch -import torch.nn as nn - -@contextmanager -def init_empty_weights(include_buffers: bool=False): - """Meta initialization context manager. - - A context manager under which models are initialized with all parameters - on the meta device, therefore creating an empty model. Useful when just - initializing the model would blow the available RAM. - - Args: - include_buffers (`bool`, *optional*, defaults to `False`): Whether or - not to also put all buffers on the meta device while initializing. - - Example: - ```python - import torch.nn as nn - - # Initialize a model with 100 billions parameters in no time and without using any RAM. - with init_empty_weights(): - tst = nn.Sequential(*[nn.Linear(10000, 10000) for _ in range(1000)]) - ``` - - - - Any model created under this context manager has no weights. As such you can't do something like - `model.to(some_device)` with it. To load weights inside your empty model, see [`load_checkpoint_and_dispatch`]. - - - """ - with init_on_device(torch.device('meta'), include_buffers=include_buffers) as f: - yield f - -@contextmanager -def init_on_device(device: torch.device, include_buffers: bool=False): - """Device initialization context manager. - - A context manager under which models are initialized with all parameters - on the specified device. - - Args: - device (`torch.device`): Device to initialize all parameters on. - include_buffers (`bool`, *optional*, defaults to `False`): Whether or - not to also put all buffers on the meta device while initializing. - - Example: - ```python - import torch.nn as nn - - with init_on_device(device=torch.device("cuda")): - tst = nn.Liner(100, 100) # on `cuda` device - ``` - """ - old_register_parameter = nn.Module.register_parameter - if include_buffers: - old_register_buffer = nn.Module.register_buffer - - def register_empty_parameter(module, name, param): - old_register_parameter(module, name, param) - if param is not None: - param_cls = type(module._parameters[name]) - kwargs = module._parameters[name].__dict__ - module._parameters[name] = param_cls(module._parameters[name].to(device), **kwargs) - - def register_empty_buffer(module, name, buffer): - old_register_buffer(module, name, buffer) - if buffer is not None: - module._buffers[name] = module._buffers[name].to(device) - if include_buffers: - tensor_constructors_to_patch = {torch_function_name: getattr(torch, torch_function_name) for torch_function_name in ['empty', 'zeros', 'ones', 'full']} - else: - tensor_constructors_to_patch = {} - - def patch_tensor_constructor(fn): - - def wrapper(*args, **kwargs): - kwargs['device'] = device - return fn(*args, **kwargs) - return wrapper - try: - nn.Module.register_parameter = register_empty_parameter - if include_buffers: - nn.Module.register_buffer = register_empty_buffer - for torch_function_name in tensor_constructors_to_patch.keys(): - setattr(torch, torch_function_name, patch_tensor_constructor(getattr(torch, torch_function_name))) - yield - finally: - nn.Module.register_parameter = old_register_parameter - if include_buffers: - nn.Module.register_buffer = old_register_buffer - for (torch_function_name, old_torch_function) in tensor_constructors_to_patch.items(): - setattr(torch, torch_function_name, old_torch_function) \ No newline at end of file diff --git 
a/spaces/ucalyptus/PTI/models/StyleCLIP/mapper/options/train_options.py b/spaces/ucalyptus/PTI/models/StyleCLIP/mapper/options/train_options.py deleted file mode 100644 index a365217f8b76d38aaef4a42b90152ec7a8e7bf1f..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/PTI/models/StyleCLIP/mapper/options/train_options.py +++ /dev/null @@ -1,49 +0,0 @@ -from argparse import ArgumentParser - - -class TrainOptions: - - def __init__(self): - self.parser = ArgumentParser() - self.initialize() - - def initialize(self): - self.parser.add_argument('--exp_dir', type=str, help='Path to experiment output directory') - self.parser.add_argument('--mapper_type', default='LevelsMapper', type=str, help='Which mapper to use') - self.parser.add_argument('--no_coarse_mapper', default=False, action="store_true") - self.parser.add_argument('--no_medium_mapper', default=False, action="store_true") - self.parser.add_argument('--no_fine_mapper', default=False, action="store_true") - self.parser.add_argument('--latents_train_path', default="train_faces.pt", type=str, help="The latents for the training") - self.parser.add_argument('--latents_test_path', default="test_faces.pt", type=str, help="The latents for the validation") - self.parser.add_argument('--train_dataset_size', default=5000, type=int, help="Will be used only if no latents are given") - self.parser.add_argument('--test_dataset_size', default=1000, type=int, help="Will be used only if no latents are given") - - self.parser.add_argument('--batch_size', default=2, type=int, help='Batch size for training') - self.parser.add_argument('--test_batch_size', default=1, type=int, help='Batch size for testing and inference') - self.parser.add_argument('--workers', default=4, type=int, help='Number of train dataloader workers') - self.parser.add_argument('--test_workers', default=2, type=int, help='Number of test/inference dataloader workers') - - self.parser.add_argument('--learning_rate', default=0.5, type=float, help='Optimizer learning rate') - self.parser.add_argument('--optim_name', default='ranger', type=str, help='Which optimizer to use') - - self.parser.add_argument('--id_lambda', default=0.1, type=float, help='ID loss multiplier factor') - self.parser.add_argument('--clip_lambda', default=1.0, type=float, help='CLIP loss multiplier factor') - self.parser.add_argument('--latent_l2_lambda', default=0.8, type=float, help='Latent L2 loss multiplier factor') - - self.parser.add_argument('--stylegan_weights', default='../pretrained_models/stylegan2-ffhq-config-f.pt', type=str, help='Path to StyleGAN model weights') - self.parser.add_argument('--stylegan_size', default=1024, type=int) - self.parser.add_argument('--ir_se50_weights', default='../pretrained_models/model_ir_se50.pth', type=str, help="Path to facial recognition network used in ID loss") - self.parser.add_argument('--checkpoint_path', default=None, type=str, help='Path to StyleCLIPModel model checkpoint') - - self.parser.add_argument('--max_steps', default=50000, type=int, help='Maximum number of training steps') - self.parser.add_argument('--image_interval', default=100, type=int, help='Interval for logging train images during training') - self.parser.add_argument('--board_interval', default=50, type=int, help='Interval for logging metrics to tensorboard') - self.parser.add_argument('--val_interval', default=2000, type=int, help='Validation interval') - self.parser.add_argument('--save_interval', default=2000, type=int, help='Model checkpoint interval') - - 
self.parser.add_argument('--description', required=True, type=str, help='Driving text prompt') - - - def parse(self): - opts = self.parser.parse_args() - return opts \ No newline at end of file diff --git a/spaces/unidiffuser-testing/unidiffuser-testing/libs/uvit_multi_post_ln_v1.py b/spaces/unidiffuser-testing/unidiffuser-testing/libs/uvit_multi_post_ln_v1.py deleted file mode 100644 index 3b81814f49f79fd8ea26a5d52e0ff8be9262c773..0000000000000000000000000000000000000000 --- a/spaces/unidiffuser-testing/unidiffuser-testing/libs/uvit_multi_post_ln_v1.py +++ /dev/null @@ -1,285 +0,0 @@ -import torch -import torch.nn as nn -import math -from .timm import trunc_normal_, DropPath, Mlp -import einops -import torch.utils.checkpoint -import torch.nn.functional as F - -if hasattr(torch.nn.functional, 'scaled_dot_product_attention'): - ATTENTION_MODE = 'flash' -else: - try: - import xformers - import xformers.ops - ATTENTION_MODE = 'xformers' - except: - ATTENTION_MODE = 'math' -print(f'attention mode is {ATTENTION_MODE}') - - -def timestep_embedding(timesteps, dim, max_period=10000): - """ - Create sinusoidal timestep embeddings. - - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - return embedding - - -def patchify(imgs, patch_size): - x = einops.rearrange(imgs, 'B C (h p1) (w p2) -> B (h w) (p1 p2 C)', p1=patch_size, p2=patch_size) - return x - - -def unpatchify(x, in_chans): - patch_size = int((x.shape[2] // in_chans) ** 0.5) - h = w = int(x.shape[1] ** .5) - assert h * w == x.shape[1] and patch_size ** 2 * in_chans == x.shape[2] - x = einops.rearrange(x, 'B (h w) (p1 p2 C) -> B C (h p1) (w p2)', h=h, p1=patch_size, p2=patch_size) - return x - - -def interpolate_pos_emb(pos_emb, old_shape, new_shape): - pos_emb = einops.rearrange(pos_emb, 'B (H W) C -> B C H W', H=old_shape[0], W=old_shape[1]) - pos_emb = F.interpolate(pos_emb, new_shape, mode='bilinear') - pos_emb = einops.rearrange(pos_emb, 'B C H W -> B (H W) C') - return pos_emb - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, L, C = x.shape - - qkv = self.qkv(x) - if ATTENTION_MODE == 'flash': - qkv = einops.rearrange(qkv, 'B L (K H D) -> K B H L D', K=3, H=self.num_heads).float() - q, k, v = qkv[0], qkv[1], qkv[2] # B H L D - x = torch.nn.functional.scaled_dot_product_attention(q, k, v) - x = einops.rearrange(x, 'B H L D -> B L (H D)') - elif ATTENTION_MODE == 'xformers': - qkv = einops.rearrange(qkv, 'B L (K H D) -> K B L H D', K=3, H=self.num_heads) - q, k, v = qkv[0], qkv[1], qkv[2] # B L H D - x = xformers.ops.memory_efficient_attention(q, k, v) - x = 
einops.rearrange(x, 'B L H D -> B L (H D)', H=self.num_heads) - elif ATTENTION_MODE == 'math': - with torch.amp.autocast(device_type='cuda', enabled=False): - qkv = einops.rearrange(qkv, 'B L (K H D) -> K B H L D', K=3, H=self.num_heads).float() - q, k, v = qkv[0], qkv[1], qkv[2] # B H L D - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - x = (attn @ v).transpose(1, 2).reshape(B, L, C) - else: - raise NotImplemented - - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Block(nn.Module): - - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, skip=False, use_checkpoint=False): - super().__init__() - self.norm1 = norm_layer(dim) if skip else None - self.norm2 = norm_layer(dim) - - self.attn = Attention( - dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm3 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - self.skip_linear = nn.Linear(2 * dim, dim) if skip else None - self.use_checkpoint = use_checkpoint - - def forward(self, x, skip=None): - if self.use_checkpoint: - return torch.utils.checkpoint.checkpoint(self._forward, x, skip) - else: - return self._forward(x, skip) - - def _forward(self, x, skip=None): - if self.skip_linear is not None: - x = self.skip_linear(torch.cat([x, skip], dim=-1)) - x = self.norm1(x) - x = x + self.drop_path(self.attn(x)) - x = self.norm2(x) - - x = x + self.drop_path(self.mlp(x)) - x = self.norm3(x) - - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, patch_size, in_chans=3, embed_dim=768): - super().__init__() - self.patch_size = patch_size - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - B, C, H, W = x.shape - assert H % self.patch_size == 0 and W % self.patch_size == 0 - x = self.proj(x).flatten(2).transpose(1, 2) - return x - - -class UViT(nn.Module): - def __init__(self, img_size, in_chans, patch_size, embed_dim=768, depth=12, - num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, pos_drop_rate=0., drop_rate=0., attn_drop_rate=0., - norm_layer=nn.LayerNorm, mlp_time_embed=False, use_checkpoint=False, - text_dim=None, num_text_tokens=None, clip_img_dim=None): - super().__init__() - self.in_chans = in_chans - self.patch_size = patch_size - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models - - self.patch_embed = PatchEmbed(patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim) - self.img_size = (img_size, img_size) if isinstance(img_size, int) else img_size # the default img size - assert self.img_size[0] % patch_size == 0 and self.img_size[1] % patch_size == 0 - self.num_patches = (self.img_size[0] // patch_size) * (self.img_size[1] // patch_size) - - self.time_img_embed = nn.Sequential( - nn.Linear(embed_dim, 4 * embed_dim), - nn.SiLU(), - nn.Linear(4 * embed_dim, embed_dim), - ) if mlp_time_embed else nn.Identity() - - self.time_text_embed = nn.Sequential( - nn.Linear(embed_dim, 4 * embed_dim), - nn.SiLU(), - nn.Linear(4 * embed_dim, embed_dim), - ) if mlp_time_embed else nn.Identity() - - self.text_embed = nn.Linear(text_dim, embed_dim) - self.text_out = 
nn.Linear(embed_dim, text_dim) - - self.clip_img_embed = nn.Linear(clip_img_dim, embed_dim) - self.clip_img_out = nn.Linear(embed_dim, clip_img_dim) - - self.num_text_tokens = num_text_tokens - self.num_tokens = 1 + 1 + num_text_tokens + 1 + self.num_patches - - self.pos_embed = nn.Parameter(torch.zeros(1, self.num_tokens, embed_dim)) - self.pos_drop = nn.Dropout(p=pos_drop_rate) - - self.in_blocks = nn.ModuleList([ - Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, norm_layer=norm_layer, use_checkpoint=use_checkpoint) - for _ in range(depth // 2)]) - - self.mid_block = Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, norm_layer=norm_layer, use_checkpoint=use_checkpoint) - - self.out_blocks = nn.ModuleList([ - Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, norm_layer=norm_layer, skip=True, use_checkpoint=use_checkpoint) - for _ in range(depth // 2)]) - - self.norm = norm_layer(embed_dim) - self.patch_dim = patch_size ** 2 * in_chans - self.decoder_pred = nn.Linear(embed_dim, self.patch_dim, bias=True) - - trunc_normal_(self.pos_embed, std=.02) - self.apply(self._init_weights) - - self.token_embedding = nn.Embedding(2, embed_dim) - self.pos_embed_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'pos_embed'} - - def forward(self, img, clip_img, text, t_img, t_text, data_type): - _, _, H, W = img.shape - - img = self.patch_embed(img) - - t_img_token = self.time_img_embed(timestep_embedding(t_img, self.embed_dim)) - t_img_token = t_img_token.unsqueeze(dim=1) - t_text_token = self.time_text_embed(timestep_embedding(t_text, self.embed_dim)) - t_text_token = t_text_token.unsqueeze(dim=1) - - text = self.text_embed(text) - clip_img = self.clip_img_embed(clip_img) - - token_embed = self.token_embedding(data_type).unsqueeze(dim=1) - - x = torch.cat((t_img_token, t_text_token, token_embed, text, clip_img, img), dim=1) - - num_text_tokens, num_img_tokens = text.size(1), img.size(1) - - pos_embed = torch.cat( - [self.pos_embed[:, :1 + 1, :], self.pos_embed_token, self.pos_embed[:, 1 + 1:, :]], dim=1) - if H == self.img_size[0] and W == self.img_size[1]: - pass - else: # interpolate the positional embedding when the input image is not of the default shape - pos_embed_others, pos_embed_patches = torch.split(pos_embed, [1 + 1 + 1 + num_text_tokens + 1, self.num_patches], dim=1) - pos_embed_patches = interpolate_pos_emb(pos_embed_patches, (self.img_size[0] // self.patch_size, self.img_size[1] // self.patch_size), - (H // self.patch_size, W // self.patch_size)) - pos_embed = torch.cat((pos_embed_others, pos_embed_patches), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - skips = [] - for blk in self.in_blocks: - x = blk(x) - skips.append(x) - - x = self.mid_block(x) - - for blk in self.out_blocks: - x = blk(x, skips.pop()) - - x = self.norm(x) - - t_img_token_out, t_text_token_out, token_embed_out, text_out, clip_img_out, img_out = x.split((1, 1, 1, num_text_tokens, 
1, num_img_tokens), dim=1) - - img_out = self.decoder_pred(img_out) - img_out = unpatchify(img_out, self.in_chans) - - clip_img_out = self.clip_img_out(clip_img_out) - - text_out = self.text_out(text_out) - return img_out, clip_img_out, text_out diff --git a/spaces/usbethFlerru/sovits-modelsV2/Extra-Quality-Deep-Hiarcs-14-Uci-Chess-Engine-35-BETTER.md b/spaces/usbethFlerru/sovits-modelsV2/Extra-Quality-Deep-Hiarcs-14-Uci-Chess-Engine-35-BETTER.md deleted file mode 100644 index 47d07bcb8947dd36ff8cf720ae8e49ab3b81166e..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/Extra-Quality-Deep-Hiarcs-14-Uci-Chess-Engine-35-BETTER.md +++ /dev/null @@ -1,82 +0,0 @@ -## [Extra Quality] Deep Hiarcs 14 Uci Chess Engine 35 - - - - - - ![\[Extra Quality\] Deep Hiarcs 14 Uci Chess Engine 35 ((BETTER))](https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSH2ORMsFk0gHwrVcIKWMzjpnT8TUqCC4Re4FcrlOLIbxe2VG6t8LDX7M-p) - - - - - -**LINK === [https://searchdisvipas.blogspot.com/?download=2txnOM](https://searchdisvipas.blogspot.com/?download=2txnOM)** - - - - - - - - - - - - - -# [Extra Quality] Deep Hiarcs 14 UCI Chess Engine 35: A Review - - - -If you are looking for a chess software that combines a powerful engine, a huge database, and a user-friendly interface, you might want to check out Deep Hiarcs 14 UCI Chess Engine 35. This is the latest version of the world-class chess program that has won multiple titles and championships over the years. In this article, we will review some of the features and benefits of this chess software and why it is worth your attention. - - - -## What is Deep Hiarcs 14 UCI Chess Engine 35? - - - -Deep Hiarcs 14 UCI Chess Engine 35 is a chess software that runs on PC Windows and Mac OS computers. It is compatible with UCI (Universal Chess Interface) protocol, which means it can be used with any chess GUI (Graphical User Interface) that supports UCI engines. It also comes with its own GUI, called HIARCS Chess Explorer, which offers a very advanced and intuitive chess database, analysis, and playing program. - - - -## What are the features of Deep Hiarcs 14 UCI Chess Engine 35? - - - -Deep Hiarcs 14 UCI Chess Engine 35 has many features that make it one of the best chess software available today. Some of these features are: - - - -- **Playing Strength:** Deep Hiarcs 14 UCI Chess Engine 35 has an estimated Elo rating of over 3300, which makes it one of the strongest chess engines in the world. It can challenge any human or computer opponent, from beginner to world champion level. It also has realistic levels for players of all strengths, with coach advice and rating estimation. - -- **Playing Style:** Deep Hiarcs 14 UCI Chess Engine 35 has a unique and human-like playing style, which makes it more enjoyable and instructive to play against. It can adapt to different positions and situations, and can play actively, aggressively, or solidly depending on the setting. It also has a high level of selectivity and combinational ability, which allows it to find deep and surprising moves. - -- **Database:** Deep Hiarcs 14 UCI Chess Engine 35 comes with a huge chess database that contains over 119 million positions from over 6 million games. The database covers all major chess events and openings from the past and present. It also supports major chess database formats, such as PGN, CBH, CBV, CTG, and HCE. You can easily search, sort, filter, edit, annotate, and export your games and databases. 
- -- **Analysis:** Deep Hiarcs 14 UCI Chess Engine 35 offers a very powerful and accurate analysis tool that can help you improve your chess skills and understanding. You can use it to analyze your own games or any position you want. It can provide you with multiple lines of evaluation, blunder check, opening book information, position learning, optimistic search, and more. You can also compare different engines or run engine matches to test their performance. - -- **Preparation:** Deep Hiarcs 14 UCI Chess Engine 35 can help you prepare for your opponents or tournaments by providing you with relevant information and suggestions. You can use it to create your own player repertoire or opening book based on your preferences and style. You can also learn from the games of top players or study specific openings or themes. - -- **Play:** Deep Hiarcs 14 UCI Chess Engine 35 can also be used as a playing partner or opponent for fun or training purposes. You can play against it at any level or time control you want. You can also play online against other human players or watch live games from top tournaments. - - - -## What are the benefits of Deep Hiarcs 14 UCI Chess Engine 35? - - - -Deep Hiarcs 14 UCI Chess Engine 35 has many benefits that make it a worthwhile investment for any chess enthusiast or professional. Some of these benefits are: - - - -- **Quality:** Deep Hiarcs 14 UCI Chess Engine 35 is a product of decades of research and development by Applied Computer Concepts Ltd, a leading company in the field of computer chess. It is based on the latest technology and algorithms that ensure its high 1b8d091108 - - - - - - - - - diff --git a/spaces/user238921933/stable-diffusion-webui/test/server_poll.py b/spaces/user238921933/stable-diffusion-webui/test/server_poll.py deleted file mode 100644 index 42d56a4caacfc40d686dc99668d72238392448cd..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/test/server_poll.py +++ /dev/null @@ -1,24 +0,0 @@ -import unittest -import requests -import time - - -def run_tests(proc, test_dir): - timeout_threshold = 240 - start_time = time.time() - while time.time()-start_time < timeout_threshold: - try: - requests.head("http://localhost:7860/") - break - except requests.exceptions.ConnectionError: - if proc.poll() is not None: - break - if proc.poll() is None: - if test_dir is None: - test_dir = "test" - suite = unittest.TestLoader().discover(test_dir, pattern="*_test.py", top_level_dir="test") - result = unittest.TextTestRunner(verbosity=2).run(suite) - return len(result.failures) + len(result.errors) - else: - print("Launch unsuccessful") - return 1 diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/index.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/index.md deleted file mode 100644 index 79768a1b0025eff1e19367abfdafb7ced34d224f..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/index.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -comments: true -description: Explore Ultralytics YOLOv8, a cutting-edge real-time object detection and image segmentation model for various applications and hardware platforms. -keywords: YOLOv8, object detection, image segmentation, computer vision, machine learning, deep learning, AGPL-3.0 License, Enterprise License ---- - -
-[badges: Ultralytics CI, YOLOv8 Citation, Docker Pulls, Run on Gradient, Open In Colab, Open In Kaggle]
            - -Introducing [Ultralytics](https://ultralytics.com) [YOLOv8](https://github.com/ultralytics/ultralytics), the latest version of the acclaimed real-time object detection and image segmentation model. YOLOv8 is built on cutting-edge advancements in deep learning and computer vision, offering unparalleled performance in terms of speed and accuracy. Its streamlined design makes it suitable for various applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs. - -Explore the YOLOv8 Docs, a comprehensive resource designed to help you understand and utilize its features and capabilities. Whether you are a seasoned machine learning practitioner or new to the field, this hub aims to maximize YOLOv8's potential in your projects - -## Where to Start - -- **Install** `ultralytics` with pip and get up and running in minutes   [:material-clock-fast: Get Started](quickstart.md){ .md-button } -- **Predict** new images and videos with YOLOv8   [:octicons-image-16: Predict on Images](modes/predict.md){ .md-button } -- **Train** a new YOLOv8 model on your own custom dataset   [:fontawesome-solid-brain: Train a Model](modes/train.md){ .md-button } -- **Explore** YOLOv8 tasks like segment, classify, pose and track   [:material-magnify-expand: Explore Tasks](tasks/index.md){ .md-button } - -## YOLO: A Brief History - -[YOLO](https://arxiv.org/abs/1506.02640) (You Only Look Once), a popular object detection and image segmentation model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. Launched in 2015, YOLO quickly gained popularity for its high speed and accuracy. - -- [YOLOv2](https://arxiv.org/abs/1612.08242), released in 2016, improved the original model by incorporating batch normalization, anchor boxes, and dimension clusters. -- [YOLOv3](https://pjreddie.com/media/files/papers/YOLOv3.pdf), launched in 2018, further enhanced the model's performance using a more efficient backbone network, multiple anchors and spatial pyramid pooling. -- [YOLOv4](https://arxiv.org/abs/2004.10934) was released in 2020, introducing innovations like Mosaic data augmentation, a new anchor-free detection head, and a new loss function. -- [YOLOv5](https://github.com/ultralytics/yolov5) further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking and automatic export to popular export formats. -- [YOLOv6](https://github.com/meituan/YOLOv6) was open-sourced by [Meituan](https://about.meituan.com/) in 2022 and is in use in many of the company's autonomous delivery robots. -- [YOLOv7](https://github.com/WongKinYiu/yolov7) added additional tasks such as pose estimation on the COCO keypoints dataset. -- [YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of YOLO by Ultralytics. As a cutting-edge, state-of-the-art (SOTA) model, YOLOv8 builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLOv8 supports a full range of vision AI tasks, including [detection](tasks/detect.md), [segmentation](tasks/segment.md), [pose estimation](tasks/pose.md), [tracking](modes/track.md), and [classification](tasks/classify.md). This versatility allows users to leverage YOLOv8's capabilities across diverse applications and domains. - -## YOLO Licenses: How is Ultralytics YOLO licensed? 
- -Ultralytics YOLO repositories like YOLOv3, YOLOv5, or YOLOv8 are available under two different licenses: - -- **AGPL-3.0 License**: See [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for details. -- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source requirements of AGPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license). - -Please note our licensing approach ensures that any enhancements made to our open-source projects are shared back to the community. We firmly believe in the principles of open source, and we are committed to ensuring that our work can be used and improved upon in a manner that benefits everyone. \ No newline at end of file diff --git a/spaces/vict0rsch/climateGAN/climategan/masker.py b/spaces/vict0rsch/climateGAN/climategan/masker.py deleted file mode 100644 index 9bbd3063c29f55d80d38fd41db8f5d534f62c6d3..0000000000000000000000000000000000000000 --- a/spaces/vict0rsch/climateGAN/climategan/masker.py +++ /dev/null @@ -1,234 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from climategan.blocks import ( - BaseDecoder, - Conv2dBlock, - InterpolateNearest2d, - SPADEResnetBlock, -) - - -def create_mask_decoder(opts, no_init=False, verbose=0): - if opts.gen.m.use_spade: - if verbose > 0: - print(" - Add Spade Mask Decoder") - assert "d" in opts.tasks or "s" in opts.tasks - return MaskSpadeDecoder(opts) - else: - if verbose > 0: - print(" - Add Base Mask Decoder") - return MaskBaseDecoder(opts) - - -class MaskBaseDecoder(BaseDecoder): - def __init__(self, opts): - low_level_feats_dim = -1 - use_v3 = opts.gen.encoder.architecture == "deeplabv3" - use_mobile_net = opts.gen.deeplabv3.backbone == "mobilenet" - use_low = opts.gen.m.use_low_level_feats - use_dada = ("d" in opts.tasks) and opts.gen.m.use_dada - - if use_v3 and use_mobile_net: - input_dim = 320 - if use_low: - low_level_feats_dim = 24 - elif use_v3: - input_dim = 2048 - if use_low: - low_level_feats_dim = 256 - else: - input_dim = 2048 - - super().__init__( - n_upsample=opts.gen.m.n_upsample, - n_res=opts.gen.m.n_res, - input_dim=input_dim, - proj_dim=opts.gen.m.proj_dim, - output_dim=opts.gen.m.output_dim, - norm=opts.gen.m.norm, - activ=opts.gen.m.activ, - pad_type=opts.gen.m.pad_type, - output_activ="none", - low_level_feats_dim=low_level_feats_dim, - use_dada=use_dada, - ) - - -class MaskSpadeDecoder(nn.Module): - def __init__(self, opts): - """Create a SPADE-based decoder, which forwards z and the conditioning - tensors seg (in the original paper, conditioning is on a semantic map only). - All along, z is conditioned on seg. First 3 SpadeResblocks (SRB) do not shrink - the channel dimension, and an upsampling is applied after each. Therefore - 2 upsamplings at this point. Then, for each remaining upsamplings - (w.r.t. spade_n_up), the SRB shrinks channels by 2. Before final conv to get 3 - channels, the number of channels is therefore: - final_nc = channels(z) * 2 ** (spade_n_up - 2) - Args: - latent_dim (tuple): z's shape (only the number of channels matters) - cond_nc (int): conditioning tensor's expected number of channels - spade_n_up (int): Number of total upsamplings from z - spade_use_spectral_norm (bool): use spectral normalization? 
- spade_param_free_norm (str): norm to use before SPADE de-normalization - spade_kernel_size (int): SPADE conv layers' kernel size - Returns: - [type]: [description] - """ - super().__init__() - self.opts = opts - latent_dim = opts.gen.m.spade.latent_dim - cond_nc = opts.gen.m.spade.cond_nc - spade_use_spectral_norm = opts.gen.m.spade.spade_use_spectral_norm - spade_param_free_norm = opts.gen.m.spade.spade_param_free_norm - if self.opts.gen.m.spade.activations.all_lrelu: - spade_activation = "lrelu" - else: - spade_activation = None - spade_kernel_size = 3 - self.num_layers = opts.gen.m.spade.num_layers - self.z_nc = latent_dim - - if ( - opts.gen.encoder.architecture == "deeplabv3" - and opts.gen.deeplabv3.backbone == "mobilenet" - ): - self.input_dim = [320, 24] - self.low_level_conv = Conv2dBlock( - self.input_dim[1], - self.input_dim[0], - 3, - padding=1, - activation="lrelu", - pad_type="reflect", - norm="spectral_batch", - ) - self.merge_feats_conv = Conv2dBlock( - self.input_dim[0] * 2, - self.z_nc, - 3, - padding=1, - activation="lrelu", - pad_type="reflect", - norm="spectral_batch", - ) - elif ( - opts.gen.encoder.architecture == "deeplabv3" - and opts.gen.deeplabv3.backbone == "resnet" - ): - self.input_dim = [2048, 256] - if self.opts.gen.m.use_proj: - proj_dim = self.opts.gen.m.proj_dim - self.low_level_conv = Conv2dBlock( - self.input_dim[1], - proj_dim, - 3, - padding=1, - activation="lrelu", - pad_type="reflect", - norm="spectral_batch", - ) - self.high_level_conv = Conv2dBlock( - self.input_dim[0], - proj_dim, - 3, - padding=1, - activation="lrelu", - pad_type="reflect", - norm="spectral_batch", - ) - self.merge_feats_conv = Conv2dBlock( - proj_dim * 2, - self.z_nc, - 3, - padding=1, - activation="lrelu", - pad_type="reflect", - norm="spectral_batch", - ) - else: - self.low_level_conv = Conv2dBlock( - self.input_dim[1], - self.input_dim[0], - 3, - padding=1, - activation="lrelu", - pad_type="reflect", - norm="spectral_batch", - ) - self.merge_feats_conv = Conv2dBlock( - self.input_dim[0] * 2, - self.z_nc, - 3, - padding=1, - activation="lrelu", - pad_type="reflect", - norm="spectral_batch", - ) - - elif opts.gen.encoder.architecture == "deeplabv2": - self.input_dim = 2048 - self.fc_conv = Conv2dBlock( - self.input_dim, - self.z_nc, - 3, - padding=1, - activation="lrelu", - pad_type="reflect", - norm="spectral_batch", - ) - else: - raise ValueError("Unknown encoder type") - - self.spade_blocks = [] - - for i in range(self.num_layers): - self.spade_blocks.append( - SPADEResnetBlock( - int(self.z_nc / (2**i)), - int(self.z_nc / (2 ** (i + 1))), - cond_nc, - spade_use_spectral_norm, - spade_param_free_norm, - spade_kernel_size, - spade_activation, - ) - ) - self.spade_blocks = nn.Sequential(*self.spade_blocks) - - self.final_nc = int(self.z_nc / (2**self.num_layers)) - self.mask_conv = Conv2dBlock( - self.final_nc, - 1, - 3, - padding=1, - activation="none", - pad_type="reflect", - norm="spectral", - ) - self.upsample = InterpolateNearest2d(scale_factor=2) - - def forward(self, z, cond, z_depth=None): - if isinstance(z, (list, tuple)): - z_h, z_l = z - if self.opts.gen.m.use_proj: - z_l = self.low_level_conv(z_l) - z_l = F.interpolate(z_l, size=z_h.shape[-2:], mode="bilinear") - z_h = self.high_level_conv(z_h) - else: - z_l = self.low_level_conv(z_l) - z_l = F.interpolate(z_l, size=z_h.shape[-2:], mode="bilinear") - z = torch.cat([z_h, z_l], axis=1) - y = self.merge_feats_conv(z) - else: - y = self.fc_conv(z) - - for i in range(self.num_layers): - y = self.spade_blocks[i](y, 
cond) - y = self.upsample(y) - y = self.mask_conv(y) - return y - - def __str__(self): - return "MaskerSpadeDecoder" diff --git a/spaces/victor/prompthero-openjourney/README.md b/spaces/victor/prompthero-openjourney/README.md deleted file mode 100644 index ff02d212a57655e855082d985a79dc21309d7f25..0000000000000000000000000000000000000000 --- a/spaces/victor/prompthero-openjourney/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Prompthero Openjourney -emoji: 🔥 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/video-p2p-library/Video-P2P-Demo/Video-P2P/train_tuneavideo.py b/spaces/video-p2p-library/Video-P2P-Demo/Video-P2P/train_tuneavideo.py deleted file mode 100644 index 7a21fafb1ed7169803172758d32976b4f8826209..0000000000000000000000000000000000000000 --- a/spaces/video-p2p-library/Video-P2P-Demo/Video-P2P/train_tuneavideo.py +++ /dev/null @@ -1,367 +0,0 @@ -import argparse -import datetime -import logging -import inspect -import math -import os -from typing import Dict, Optional, Tuple -from omegaconf import OmegaConf - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint - -import diffusers -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import AutoencoderKL, DDPMScheduler, DDIMScheduler -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version -from diffusers.utils.import_utils import is_xformers_available -from tqdm.auto import tqdm -from transformers import CLIPTextModel, CLIPTokenizer - -from tuneavideo.models.unet import UNet3DConditionModel -from tuneavideo.data.dataset import TuneAVideoDataset -from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline -from tuneavideo.util import save_videos_grid, ddim_inversion -from einops import rearrange - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.10.0.dev0") - -logger = get_logger(__name__, log_level="INFO") - - -def main( - pretrained_model_path: str, - output_dir: str, - train_data: Dict, - validation_data: Dict, - validation_steps: int = 100, - trainable_modules: Tuple[str] = ( - "attn1.to_q", - "attn2.to_q", - "attn_temp", - ), - train_batch_size: int = 1, - max_train_steps: int = 500, - learning_rate: float = 3e-5, - scale_lr: bool = False, - lr_scheduler: str = "constant", - lr_warmup_steps: int = 0, - adam_beta1: float = 0.9, - adam_beta2: float = 0.999, - adam_weight_decay: float = 1e-2, - adam_epsilon: float = 1e-08, - max_grad_norm: float = 1.0, - gradient_accumulation_steps: int = 1, - gradient_checkpointing: bool = True, - checkpointing_steps: int = 500, - resume_from_checkpoint: Optional[str] = None, - mixed_precision: Optional[str] = "fp16", - use_8bit_adam: bool = False, - enable_xformers_memory_efficient_attention: bool = True, - seed: Optional[int] = None, -): - *_, config = inspect.getargvalues(inspect.currentframe()) - - accelerator = Accelerator( - gradient_accumulation_steps=gradient_accumulation_steps, - mixed_precision=mixed_precision, - ) - - # Make one log on every process with the configuration for debugging. 
- logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if seed is not None: - set_seed(seed) - - # Handle the output folder creation - if accelerator.is_main_process: - # now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S") - # output_dir = os.path.join(output_dir, now) - os.makedirs(output_dir, exist_ok=True) - os.makedirs(f"{output_dir}/samples", exist_ok=True) - os.makedirs(f"{output_dir}/inv_latents", exist_ok=True) - OmegaConf.save(config, os.path.join(output_dir, 'config.yaml')) - - # Load scheduler, tokenizer and models. - noise_scheduler = DDPMScheduler.from_pretrained(pretrained_model_path, subfolder="scheduler") - tokenizer = CLIPTokenizer.from_pretrained(pretrained_model_path, subfolder="tokenizer") - text_encoder = CLIPTextModel.from_pretrained(pretrained_model_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(pretrained_model_path, subfolder="vae") - unet = UNet3DConditionModel.from_pretrained_2d(pretrained_model_path, subfolder="unet") - - # Freeze vae and text_encoder - vae.requires_grad_(False) - text_encoder.requires_grad_(False) - - unet.requires_grad_(False) - for name, module in unet.named_modules(): - if name.endswith(tuple(trainable_modules)): - for params in module.parameters(): - params.requires_grad = True - - if enable_xformers_memory_efficient_attention: - if is_xformers_available(): - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - if gradient_checkpointing: - unet.enable_gradient_checkpointing() - - if scale_lr: - learning_rate = ( - learning_rate * gradient_accumulation_steps * train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - if use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "Please install bitsandbytes to use 8-bit Adam. 
You can do so by running `pip install bitsandbytes`" - ) - - optimizer_cls = bnb.optim.AdamW8bit - else: - optimizer_cls = torch.optim.AdamW - - optimizer = optimizer_cls( - unet.parameters(), - lr=learning_rate, - betas=(adam_beta1, adam_beta2), - weight_decay=adam_weight_decay, - eps=adam_epsilon, - ) - - # Get the training dataset - train_dataset = TuneAVideoDataset(**train_data) - - # Preprocessing the dataset - train_dataset.prompt_ids = tokenizer( - train_dataset.prompt, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt" - ).input_ids[0] - - # DataLoaders creation: - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=train_batch_size - ) - - # Get the validation pipeline - validation_pipeline = TuneAVideoPipeline( - vae=vae, text_encoder=text_encoder, tokenizer=tokenizer, unet=unet, - scheduler=DDIMScheduler.from_pretrained(pretrained_model_path, subfolder="scheduler") - ) - validation_pipeline.enable_vae_slicing() - ddim_inv_scheduler = DDIMScheduler.from_pretrained(pretrained_model_path, subfolder='scheduler') - ddim_inv_scheduler.set_timesteps(validation_data.num_inv_steps) - - # Scheduler - lr_scheduler = get_scheduler( - lr_scheduler, - optimizer=optimizer, - num_warmup_steps=lr_warmup_steps * gradient_accumulation_steps, - num_training_steps=max_train_steps * gradient_accumulation_steps, - ) - - # Prepare everything with our `accelerator`. - unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - unet, optimizer, train_dataloader, lr_scheduler - ) - - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu and cast to weight_dtype - text_encoder.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / gradient_accumulation_steps) - # Afterwards we recalculate our number of training epochs - num_train_epochs = math.ceil(max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("text2video-fine-tune") - - # Train! - total_batch_size = train_batch_size * accelerator.num_processes * gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if resume_from_checkpoint: - if resume_from_checkpoint != "latest": - path = os.path.basename(resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(output_dir, path)) - global_step = int(path.split("-")[1]) - - first_epoch = global_step // num_update_steps_per_epoch - resume_step = global_step % num_update_steps_per_epoch - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, num_train_epochs): - unet.train() - train_loss = 0.0 - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert videos to latent space - pixel_values = batch["pixel_values"].to(weight_dtype) - video_length = pixel_values.shape[1] - pixel_values = rearrange(pixel_values, "b f c h w -> (b f) c h w") - latents = vae.encode(pixel_values).latent_dist.sample() - latents = rearrange(latents, "(b f) c h w -> b c f h w", f=video_length) - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each video - timesteps = torch.randint(0, noise_scheduler.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["prompt_ids"])[0] - - # Get the target for loss depending on the prediction type - if noise_scheduler.prediction_type == "epsilon": - target = noise - elif noise_scheduler.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.prediction_type}") - - # Predict the noise residual and compute loss - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Gather the losses across all processes for logging (if we use distributed training). 
- avg_loss = accelerator.gather(loss.repeat(train_batch_size)).mean() - train_loss += avg_loss.item() / gradient_accumulation_steps - - # Backpropagate - accelerator.backward(loss) - if accelerator.sync_gradients: - accelerator.clip_grad_norm_(unet.parameters(), max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - accelerator.log({"train_loss": train_loss}, step=global_step) - train_loss = 0.0 - - if global_step % checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - if global_step % validation_steps == 0: - if accelerator.is_main_process: - samples = [] - generator = torch.Generator(device=latents.device) - generator.manual_seed(seed) - - ddim_inv_latent = None - if validation_data.use_inv_latent: - inv_latents_path = os.path.join(output_dir, f"inv_latents/ddim_latent-{global_step}.pt") - ddim_inv_latent = ddim_inversion( - validation_pipeline, ddim_inv_scheduler, video_latent=latents, - num_inv_steps=validation_data.num_inv_steps, prompt="")[-1].to(weight_dtype) - torch.save(ddim_inv_latent, inv_latents_path) - - for idx, prompt in enumerate(validation_data.prompts): - sample = validation_pipeline(prompt, generator=generator, latents=ddim_inv_latent, - **validation_data).videos - save_videos_grid(sample, f"{output_dir}/samples/sample-{global_step}/{prompt}.gif") - samples.append(sample) - samples = torch.concat(samples) - save_path = f"{output_dir}/samples/sample-{global_step}.gif" - save_videos_grid(samples, save_path) - logger.info(f"Saved samples to {save_path}") - - logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - - if global_step >= max_train_steps: - break - - # Create the pipeline using the trained modules and save it. 
- accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = accelerator.unwrap_model(unet) - pipeline = TuneAVideoPipeline.from_pretrained( - pretrained_model_path, - text_encoder=text_encoder, - vae=vae, - unet=unet, - ) - pipeline.save_pretrained(output_dir) - - accelerator.end_training() - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--config", type=str, default="./configs/tuneavideo.yaml") - args = parser.parse_args() - - main(**OmegaConf.load(args.config)) diff --git a/spaces/vih-v/Image_Face_Upscale_Restoration-GFPGAN/app.py b/spaces/vih-v/Image_Face_Upscale_Restoration-GFPGAN/app.py deleted file mode 100644 index 0f07e5655a0f9922a6eafcc72cda38b4ecddca89..0000000000000000000000000000000000000000 --- a/spaces/vih-v/Image_Face_Upscale_Restoration-GFPGAN/app.py +++ /dev/null @@ -1,134 +0,0 @@ -import os - -import cv2 -import gradio as gr -import torch -from basicsr.archs.srvgg_arch import SRVGGNetCompact -from gfpgan.utils import GFPGANer -from realesrgan.utils import RealESRGANer - -os.system("pip freeze") -# download weights -if not os.path.exists('realesr-general-x4v3.pth'): - os.system("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth -P .") -if not os.path.exists('GFPGANv1.2.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.2.pth -P .") -if not os.path.exists('GFPGANv1.3.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P .") -if not os.path.exists('GFPGANv1.4.pth'): - os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P .") - - -torch.hub.download_url_to_file( - 'https://thumbs.dreamstime.com/b/tower-bridge-traditional-red-bus-black-white-colors-view-to-tower-bridge-london-black-white-colors-108478942.jpg', - 'a1.jpg') -torch.hub.download_url_to_file( - 'https://media.istockphoto.com/id/523514029/photo/london-skyline-b-w.jpg?s=612x612&w=0&k=20&c=kJS1BAtfqYeUDaORupj0sBPc1hpzJhBUUqEFfRnHzZ0=', - 'a2.jpg') -torch.hub.download_url_to_file( - 'https://i.guim.co.uk/img/media/06f614065ed82ca0e917b149a32493c791619854/0_0_3648_2789/master/3648.jpg?width=700&quality=85&auto=format&fit=max&s=05764b507c18a38590090d987c8b6202', - 'a3.jpg') -torch.hub.download_url_to_file( - 'https://i.pinimg.com/736x/46/96/9e/46969eb94aec2437323464804d27706d--victorian-london-victorian-era.jpg', - 'a4.jpg') - -# background enhancer with RealESRGAN -model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') -model_path = 'realesr-general-x4v3.pth' -half = True if torch.cuda.is_available() else False -upsampler = RealESRGANer(scale=4, model_path=model_path, model=model, tile=0, tile_pad=10, pre_pad=0, half=half) - -os.makedirs('output', exist_ok=True) - - -# def inference(img, version, scale, weight): -def inference(img, version, scale): - # weight /= 100 - print(img, version, scale) - try: - extension = os.path.splitext(os.path.basename(str(img)))[1] - img = cv2.imread(img, cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = 'RGBA' - elif len(img.shape) == 2: # for gray inputs - img_mode = None - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - else: - img_mode = None - - h, w = img.shape[0:2] - if h < 300: - img = cv2.resize(img, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4) - - if version == 'v1.2': - face_enhancer = GFPGANer( - model_path='GFPGANv1.2.pth', upscale=2, arch='clean', 
channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'v1.3': - face_enhancer = GFPGANer( - model_path='GFPGANv1.3.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'v1.4': - face_enhancer = GFPGANer( - model_path='GFPGANv1.4.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'RestoreFormer': - face_enhancer = GFPGANer( - model_path='RestoreFormer.pth', upscale=2, arch='RestoreFormer', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'CodeFormer': - face_enhancer = GFPGANer( - model_path='CodeFormer.pth', upscale=2, arch='CodeFormer', channel_multiplier=2, bg_upsampler=upsampler) - elif version == 'RealESR-General-x4v3': - face_enhancer = GFPGANer( - model_path='realesr-general-x4v3.pth', upscale=2, arch='realesr-general', channel_multiplier=2, bg_upsampler=upsampler) - - try: - # _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True, weight=weight) - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - except RuntimeError as error: - print('Error', error) - - try: - if scale != 2: - interpolation = cv2.INTER_AREA if scale < 2 else cv2.INTER_LANCZOS4 - h, w = img.shape[0:2] - output = cv2.resize(output, (int(w * scale / 2), int(h * scale / 2)), interpolation=interpolation) - except Exception as error: - print('wrong scale input.', error) - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - else: - extension = 'jpg' - save_path = f'output/out.{extension}' - cv2.imwrite(save_path, output) - - output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB) - return output, save_path - except Exception as error: - print('global exception', error) - return None, None - - -title = "" -description = r""" -""" -article = r""" - -""" -demo = gr.Interface( - inference, [ - gr.inputs.Image(type="filepath", label="Input"), - # gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4', 'RestoreFormer', 'CodeFormer'], type="value", default='v1.4', label='version'), - gr.inputs.Radio(['v1.2', 'v1.3', 'v1.4'], type="value", default='v1.4', label='Версия'), - gr.inputs.Number(label="Кратность увеличения", default=2), - # gr.Slider(0, 100, label='Weight, only for CodeFormer. 
0 for better quality, 100 for better identity', default=50) - ], [ - gr.outputs.Image(type="numpy", label="Output (The whole image)"), - gr.outputs.File(label="Download the output image") - ], - title=title, - description=description, - article=article, - # examples=[['AI-generate.jpg', 'v1.4', 2, 50], ['lincoln.jpg', 'v1.4', 2, 50], ['Blake_Lively.jpg', 'v1.4', 2, 50], - # ['10045.png', 'v1.4', 2, 50]]).launch() - examples=[]) - -demo.queue(concurrency_count=4) -demo.launch() \ No newline at end of file diff --git a/spaces/vonbarnekowa/stable-diffusion/ldm/modules/diffusionmodules/__init__.py b/spaces/vonbarnekowa/stable-diffusion/ldm/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/vumichien/canvas_controlnet/ldm/modules/encoders/modules.py b/spaces/vumichien/canvas_controlnet/ldm/modules/encoders/modules.py deleted file mode 100644 index 2d78c66849c3407775172280f6dbb7906213ac7a..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/ldm/modules/encoders/modules.py +++ /dev/null @@ -1,213 +0,0 @@ -import torch -import torch.nn as nn -from torch.utils.checkpoint import checkpoint - -from transformers import T5Tokenizer, T5EncoderModel, CLIPTokenizer, CLIPTextModel - -import open_clip -from ldm.util import default, count_params - - -class AbstractEncoder(nn.Module): - def __init__(self): - super().__init__() - - def encode(self, *args, **kwargs): - raise NotImplementedError - - -class IdentityEncoder(AbstractEncoder): - - def encode(self, x): - return x - - -class ClassEmbedder(nn.Module): - def __init__(self, embed_dim, n_classes=1000, key='class', ucg_rate=0.1): - super().__init__() - self.key = key - self.embedding = nn.Embedding(n_classes, embed_dim) - self.n_classes = n_classes - self.ucg_rate = ucg_rate - - def forward(self, batch, key=None, disable_dropout=False): - if key is None: - key = self.key - # this is for use in crossattn - c = batch[key][:, None] - if self.ucg_rate > 0. and not disable_dropout: - mask = 1. - torch.bernoulli(torch.ones_like(c) * self.ucg_rate) - c = mask * c + (1-mask) * torch.ones_like(c)*(self.n_classes-1) - c = c.long() - c = self.embedding(c) - return c - - def get_unconditional_conditioning(self, bs, device="cuda"): - uc_class = self.n_classes - 1 # 1000 classes --> 0 ... 999, one extra class for ucg (class 1000) - uc = torch.ones((bs,), device=device) * uc_class - uc = {self.key: uc} - return uc - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class FrozenT5Embedder(AbstractEncoder): - """Uses the T5 transformer encoder for text""" - def __init__(self, version="google/t5-v1_1-large", device="cuda", max_length=77, freeze=True): # others are google/t5-v1_1-xl and google/t5-v1_1-xxl - super().__init__() - self.tokenizer = T5Tokenizer.from_pretrained(version) - self.transformer = T5EncoderModel.from_pretrained(version) - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.max_length = max_length # TODO: typical value? 
- if freeze: - self.freeze() - - def freeze(self): - self.transformer = self.transformer.eval() - #self.train = disabled_train - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens) - - z = outputs.last_hidden_state - return z - - def encode(self, text): - return self(text) - - -class FrozenCLIPEmbedder(AbstractEncoder): - """Uses the CLIP transformer encoder for text (from huggingface)""" - LAYERS = [ - "last", - "pooled", - "hidden" - ] - def __init__(self, version="openai/clip-vit-large-patch14", device="cuda", max_length=77, - freeze=True, layer="last", layer_idx=None): # clip-vit-base-patch32 - super().__init__() - assert layer in self.LAYERS - self.tokenizer = CLIPTokenizer.from_pretrained(version) - self.transformer = CLIPTextModel.from_pretrained(version) - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.max_length = max_length - if freeze: - self.freeze() - self.layer = layer - self.layer_idx = layer_idx - if layer == "hidden": - assert layer_idx is not None - assert 0 <= abs(layer_idx) <= 12 - - def freeze(self): - self.transformer = self.transformer.eval() - #self.train = disabled_train - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer=="hidden") - if self.layer == "last": - z = outputs.last_hidden_state - elif self.layer == "pooled": - z = outputs.pooler_output[:, None, :] - else: - z = outputs.hidden_states[self.layer_idx] - return z - - def encode(self, text): - return self(text) - - -class FrozenOpenCLIPEmbedder(AbstractEncoder): - """ - Uses the OpenCLIP transformer encoder for text - """ - LAYERS = [ - #"pooled", - "last", - "penultimate" - ] - def __init__(self, arch="ViT-H-14", version="laion2b_s32b_b79k", device="cuda", max_length=77, - freeze=True, layer="last"): - super().__init__() - assert layer in self.LAYERS - model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version) - del model.visual - self.model = model - - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.max_length = max_length - if freeze: - self.freeze() - self.layer = layer - if self.layer == "last": - self.layer_idx = 0 - elif self.layer == "penultimate": - self.layer_idx = 1 - else: - raise NotImplementedError() - - def freeze(self): - self.model = self.model.eval() - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - tokens = open_clip.tokenize(text) - z = self.encode_with_transformer(tokens.to(self.device)) - return z - - def encode_with_transformer(self, text): - x = self.model.token_embedding(text) # [batch_size, n_ctx, d_model] - x = x + self.model.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.model.ln_final(x) - return x - - def 
text_transformer_forward(self, x: torch.Tensor, attn_mask = None): - for i, r in enumerate(self.model.transformer.resblocks): - if i == len(self.model.transformer.resblocks) - self.layer_idx: - break - if self.model.transformer.grad_checkpointing and not torch.jit.is_scripting(): - x = checkpoint(r, x, attn_mask) - else: - x = r(x, attn_mask=attn_mask) - return x - - def encode(self, text): - return self(text) - - -class FrozenCLIPT5Encoder(AbstractEncoder): - def __init__(self, clip_version="openai/clip-vit-large-patch14", t5_version="google/t5-v1_1-xl", device=torch.device('cuda' if torch.cuda.is_available() else 'cpu'), - clip_max_length=77, t5_max_length=77): - super().__init__() - self.clip_encoder = FrozenCLIPEmbedder(clip_version, device, max_length=clip_max_length) - self.t5_encoder = FrozenT5Embedder(t5_version, device, max_length=t5_max_length) - print(f"{self.clip_encoder.__class__.__name__} has {count_params(self.clip_encoder)*1.e-6:.2f} M parameters, " - f"{self.t5_encoder.__class__.__name__} comes with {count_params(self.t5_encoder)*1.e-6:.2f} M params.") - - def encode(self, text): - return self(text) - - def forward(self, text): - clip_z = self.clip_encoder.encode(text) - t5_z = self.t5_encoder.encode(text) - return [clip_z, t5_z] - - diff --git a/spaces/wisnuarys15/rvc-wisnu5/infer_pack/models_onnx.py b/spaces/wisnuarys15/rvc-wisnu5/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/wisnuarys15/rvc-wisnu5/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - 
filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - 
gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], 
device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - 
weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 
3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, 
fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/wwwwwwww2/bingo/src/components/tailwind-indicator.tsx b/spaces/wwwwwwww2/bingo/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/wwwwwwww2/bingo/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
-    <div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white">
-      <div className="block sm:hidden">xs</div>
-      <div className="hidden sm:block md:hidden">sm</div>
-      <div className="hidden md:block lg:hidden">md</div>
-      <div className="hidden lg:block xl:hidden">lg</div>
-      <div className="hidden xl:block 2xl:hidden">xl</div>
-      <div className="hidden 2xl:block">2xl</div>
-    </div>
            - ) -} diff --git a/spaces/xdecoder/Demo/xdecoder/backbone/swin.py b/spaces/xdecoder/Demo/xdecoder/backbone/swin.py deleted file mode 100644 index ed66e670a10762d7faf1e16bb2d6d80691182aca..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Demo/xdecoder/backbone/swin.py +++ /dev/null @@ -1,892 +0,0 @@ -# -------------------------------------------------------- -# Swin Transformer -# Copyright (c) 2021 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Ze Liu, Yutong Lin, Yixuan Wei -# -------------------------------------------------------- - -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation/blob/main/mmseg/models/backbones/swin_transformer.py -import logging -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from detectron2.modeling import Backbone, ShapeSpec -from detectron2.utils.file_io import PathManager - -from .registry import register_backbone - -logger = logging.getLogger(__name__) - - -class Mlp(nn.Module): - """Multilayer perceptron.""" - - def __init__( - self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0 - ): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class WindowAttention(nn.Module): - """Window based multi-head self attention (W-MSA) module with relative position bias. - It supports both of shifted and non-shifted window. - Args: - dim (int): Number of input channels. - window_size (tuple[int]): The height and width of the window. - num_heads (int): Number of attention heads. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set - attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0 - proj_drop (float, optional): Dropout ratio of output. 
Default: 0.0 - """ - - def __init__( - self, - dim, - window_size, - num_heads, - qkv_bias=True, - qk_scale=None, - attn_drop=0.0, - proj_drop=0.0, - ): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads) - ) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - trunc_normal_(self.relative_position_bias_table, std=0.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - """Forward function. - Args: - x: input features with shape of (num_windows*B, N, C) - mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None - """ - B_, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B_, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = q @ k.transpose(-2, -1) - - relative_position_bias = self.relative_position_bias_table[ - self.relative_position_index.view(-1) - ].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1 - ) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute( - 2, 0, 1 - ).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - - return x - - -class SwinTransformerBlock(nn.Module): - """Swin Transformer Block. - Args: - dim (int): Number of input channels. - num_heads (int): Number of attention heads. - window_size (int): Window size. - shift_size (int): Shift size for SW-MSA. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. 
Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__( - self, - dim, - num_heads, - window_size=7, - shift_size=0, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - act_layer=nn.GELU, - norm_layer=nn.LayerNorm, - ): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, - window_size=to_2tuple(self.window_size), - num_heads=num_heads, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - attn_drop=attn_drop, - proj_drop=drop, - ) - - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop - ) - - self.H = None - self.W = None - - def forward(self, x, mask_matrix): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - mask_matrix: Attention mask for cyclic shift. - """ - B, L, C = x.shape - H, W = self.H, self.W - assert L == H * W, "input feature has wrong size" - - # HACK model will not upsampling - # if min([H, W]) <= self.window_size: - # if window size is larger than input resolution, we don't partition windows - # self.shift_size = 0 - # self.window_size = min([H,W]) - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # pad feature maps to multiples of window size - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - attn_mask = mask_matrix - else: - shifted_x = x - attn_mask = None - - # partition windows - x_windows = window_partition( - shifted_x, self.window_size - ) # nW*B, window_size, window_size, C - x_windows = x_windows.view( - -1, self.window_size * self.window_size, C - ) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class PatchMerging(nn.Module): - """Patch Merging Layer - Args: - dim (int): Number of input channels. - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - """ - - def __init__(self, dim, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) - self.norm = norm_layer(4 * dim) - - def forward(self, x, H, W): - """Forward function. 
- Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. - """ - B, L, C = x.shape - assert L == H * W, "input feature has wrong size" - - x = x.view(B, H, W, C) - - # padding - pad_input = (H % 2 == 1) or (W % 2 == 1) - if pad_input: - x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2)) - - x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C - x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C - x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C - x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C - x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C - x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C - - x = self.norm(x) - x = self.reduction(x) - - return x - - -class BasicLayer(nn.Module): - """A basic Swin Transformer layer for one stage. - Args: - dim (int): Number of feature channels - depth (int): Depths of this stage. - num_heads (int): Number of attention head. - window_size (int): Local window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - attn_drop (float, optional): Attention dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - dim, - depth, - num_heads, - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop=0.0, - attn_drop=0.0, - drop_path=0.0, - norm_layer=nn.LayerNorm, - downsample=None, - use_checkpoint=False, - ): - super().__init__() - self.window_size = window_size - self.shift_size = window_size // 2 - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList( - [ - SwinTransformerBlock( - dim=dim, - num_heads=num_heads, - window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop, - attn_drop=attn_drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - ) - for i in range(depth) - ] - ) - - # patch merging layer - if downsample is not None: - self.downsample = downsample(dim=dim, norm_layer=norm_layer) - else: - self.downsample = None - - def forward(self, x, H, W): - """Forward function. - Args: - x: Input feature, tensor size (B, H*W, C). - H, W: Spatial resolution of the input feature. 
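            Returns:
                x, H, W: output features of this stage and their spatial resolution (before downsampling).
                x_down, Wh, Ww: features and resolution after the PatchMerging downsample, or the same
                    x, H, W when this stage has no downsample layer.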
- """ - - # calculate attention mask for SW-MSA - Hp = int(np.ceil(H / self.window_size)) * self.window_size - Wp = int(np.ceil(W / self.window_size)) * self.window_size - img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1 - h_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - w_slices = ( - slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None), - ) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition( - img_mask, self.window_size - ) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill( - attn_mask == 0, float(0.0) - ).type(x.dtype) - - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, attn_mask) - else: - x = blk(x, attn_mask) - if self.downsample is not None: - x_down = self.downsample(x, H, W) - Wh, Ww = (H + 1) // 2, (W + 1) // 2 - return x, H, W, x_down, Wh, Ww - else: - return x, H, W, x, H, W - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding - Args: - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. Default: None - """ - - def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None): - super().__init__() - patch_size = to_2tuple(patch_size) - self.patch_size = patch_size - - self.in_chans = in_chans - self.embed_dim = embed_dim - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - """Forward function.""" - # padding - _, _, H, W = x.size() - if W % self.patch_size[1] != 0: - x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1])) - if H % self.patch_size[0] != 0: - x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0])) - - x = self.proj(x) # B C Wh Ww - if self.norm is not None: - Wh, Ww = x.size(2), x.size(3) - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww) - - return x - - -class SwinTransformer(nn.Module): - """Swin Transformer backbone. - A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` - - https://arxiv.org/pdf/2103.14030 - Args: - pretrain_img_size (int): Input image size for training the pretrained model, - used in absolute postion embedding. Default 224. - patch_size (int | tuple(int)): Patch size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - depths (tuple[int]): Depths of each Swin Transformer stage. - num_heads (tuple[int]): Number of attention head of each stage. - window_size (int): Window size. Default: 7. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4. - qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): Dropout rate. 
- attn_drop_rate (float): Attention dropout rate. Default: 0. - drop_path_rate (float): Stochastic depth rate. Default: 0.2. - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - ape (bool): If True, add absolute position embedding to the patch embedding. Default: False. - patch_norm (bool): If True, add normalization after patch embedding. Default: True. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. - """ - - def __init__( - self, - pretrain_img_size=224, - patch_size=4, - in_chans=3, - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4.0, - qkv_bias=True, - qk_scale=None, - drop_rate=0.0, - attn_drop_rate=0.0, - drop_path_rate=0.2, - norm_layer=nn.LayerNorm, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - use_checkpoint=False, - ): - super().__init__() - - self.pretrain_img_size = pretrain_img_size - self.num_layers = len(depths) - self.embed_dim = embed_dim - self.ape = ape - self.patch_norm = patch_norm - self.out_indices = out_indices - self.frozen_stages = frozen_stages - - # split image into non-overlapping patches - self.patch_embed = PatchEmbed( - patch_size=patch_size, - in_chans=in_chans, - embed_dim=embed_dim, - norm_layer=norm_layer if self.patch_norm else None, - ) - - # absolute position embedding - if self.ape: - pretrain_img_size = to_2tuple(pretrain_img_size) - patch_size = to_2tuple(patch_size) - patches_resolution = [ - pretrain_img_size[0] // patch_size[0], - pretrain_img_size[1] // patch_size[1], - ] - - self.absolute_pos_embed = nn.Parameter( - torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]) - ) - trunc_normal_(self.absolute_pos_embed, std=0.02) - - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [ - x.item() for x in torch.linspace(0, drop_path_rate, sum(depths)) - ] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer( - dim=int(embed_dim * 2 ** i_layer), - depth=depths[i_layer], - num_heads=num_heads[i_layer], - window_size=window_size, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=drop_rate, - attn_drop=attn_drop_rate, - drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchMerging if (i_layer < self.num_layers - 1) else None, - use_checkpoint=use_checkpoint, - ) - self.layers.append(layer) - - num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)] - self.num_features = num_features - - # add a norm layer for each output - for i_layer in out_indices: - layer = norm_layer(num_features[i_layer]) - layer_name = f"norm{i_layer}" - self.add_module(layer_name, layer) - - self._freeze_stages() - - def _freeze_stages(self): - if self.frozen_stages >= 0: - self.patch_embed.eval() - for param in self.patch_embed.parameters(): - param.requires_grad = False - - if self.frozen_stages >= 1 and self.ape: - self.absolute_pos_embed.requires_grad = False - - if self.frozen_stages >= 2: - self.pos_drop.eval() - for i in range(0, self.frozen_stages - 1): - m = self.layers[i] - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. 
- Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - - def _init_weights(m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=0.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - - def load_weights(self, pretrained_dict=None, pretrained_layers=[], verbose=True): - model_dict = self.state_dict() - pretrained_dict = { - k: v for k, v in pretrained_dict.items() - if k in model_dict.keys() - } - need_init_state_dict = {} - for k, v in pretrained_dict.items(): - need_init = ( - ( - k.split('.')[0] in pretrained_layers - or pretrained_layers[0] == '*' - ) - and 'relative_position_index' not in k - and 'attn_mask' not in k - ) - - if need_init: - # if verbose: - # logger.info(f'=> init {k} from {pretrained}') - - if 'relative_position_bias_table' in k and v.size() != model_dict[k].size(): - relative_position_bias_table_pretrained = v - relative_position_bias_table_current = model_dict[k] - L1, nH1 = relative_position_bias_table_pretrained.size() - L2, nH2 = relative_position_bias_table_current.size() - if nH1 != nH2: - logger.info(f"Error in loading {k}, passing") - else: - if L1 != L2: - logger.info( - '=> load_pretrained: resized variant: {} to {}' - .format((L1, nH1), (L2, nH2)) - ) - S1 = int(L1 ** 0.5) - S2 = int(L2 ** 0.5) - relative_position_bias_table_pretrained_resized = torch.nn.functional.interpolate( - relative_position_bias_table_pretrained.permute(1, 0).view(1, nH1, S1, S1), - size=(S2, S2), - mode='bicubic') - v = relative_position_bias_table_pretrained_resized.view(nH2, L2).permute(1, 0) - - if 'absolute_pos_embed' in k and v.size() != model_dict[k].size(): - absolute_pos_embed_pretrained = v - absolute_pos_embed_current = model_dict[k] - _, L1, C1 = absolute_pos_embed_pretrained.size() - _, L2, C2 = absolute_pos_embed_current.size() - if C1 != C1: - logger.info(f"Error in loading {k}, passing") - else: - if L1 != L2: - logger.info( - '=> load_pretrained: resized variant: {} to {}' - .format((1, L1, C1), (1, L2, C2)) - ) - S1 = int(L1 ** 0.5) - S2 = int(L2 ** 0.5) - absolute_pos_embed_pretrained = absolute_pos_embed_pretrained.reshape(-1, S1, S1, C1) - absolute_pos_embed_pretrained = absolute_pos_embed_pretrained.permute(0, 3, 1, 2) - absolute_pos_embed_pretrained_resized = torch.nn.functional.interpolate( - absolute_pos_embed_pretrained, size=(S2, S2), mode='bicubic') - v = absolute_pos_embed_pretrained_resized.permute(0, 2, 3, 1).flatten(1, 2) - - need_init_state_dict[k] = v - self.load_state_dict(need_init_state_dict, strict=False) - - - def forward(self, x): - """Forward function.""" - x = self.patch_embed(x) - - Wh, Ww = x.size(2), x.size(3) - if self.ape: - # interpolate the position embedding to the corresponding size - absolute_pos_embed = F.interpolate( - self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic" - ) - x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C - else: - x = x.flatten(2).transpose(1, 2) - x = self.pos_drop(x) - - outs = {} - for i in range(self.num_layers): - layer = self.layers[i] - x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww) - - if i in self.out_indices: - norm_layer = getattr(self, f"norm{i}") - x_out = norm_layer(x_out) - - out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous() - outs["res{}".format(i + 2)] = out - - if len(self.out_indices) == 0: - outs["res5"] = x_out.view(-1, H, W, 
self.num_features[i]).permute(0, 3, 1, 2).contiguous() - - - return outs - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(SwinTransformer, self).train(mode) - self._freeze_stages() - - -class D2SwinTransformer(SwinTransformer, Backbone): - def __init__(self, cfg, pretrain_img_size, patch_size, in_chans, embed_dim, - depths, num_heads, window_size, mlp_ratio, qkv_bias, qk_scale, - drop_rate, attn_drop_rate, drop_path_rate, norm_layer, ape, - patch_norm, out_indices, use_checkpoint): - super().__init__( - pretrain_img_size, - patch_size, - in_chans, - embed_dim, - depths, - num_heads, - window_size, - mlp_ratio, - qkv_bias, - qk_scale, - drop_rate, - attn_drop_rate, - drop_path_rate, - norm_layer, - ape, - patch_norm, - out_indices, - use_checkpoint=use_checkpoint, - ) - - self._out_features = cfg['OUT_FEATURES'] - - self._out_feature_strides = { - "res2": 4, - "res3": 8, - "res4": 16, - "res5": 32, - } - self._out_feature_channels = { - "res2": self.num_features[0], - "res3": self.num_features[1], - "res4": self.num_features[2], - "res5": self.num_features[3], - } - - def forward(self, x): - """ - Args: - x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``. - Returns: - dict[str->Tensor]: names and the corresponding features - """ - assert ( - x.dim() == 4 - ), f"SwinTransformer takes an input of shape (N, C, H, W). Got {x.shape} instead!" - outputs = {} - y = super().forward(x) - for k in y.keys(): - if k in self._out_features: - outputs[k] = y[k] - return outputs - - def output_shape(self): - feature_names = list(set(self._out_feature_strides.keys()) & set(self._out_features)) - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in feature_names - } - - @property - def size_divisibility(self): - return 32 - - -@register_backbone -def get_swin_backbone(cfg): - swin_cfg = cfg['MODEL']['BACKBONE']['SWIN'] - - pretrain_img_size = swin_cfg['PRETRAIN_IMG_SIZE'] - patch_size = swin_cfg['PATCH_SIZE'] - in_chans = 3 - embed_dim = swin_cfg['EMBED_DIM'] - depths = swin_cfg['DEPTHS'] - num_heads = swin_cfg['NUM_HEADS'] - window_size = swin_cfg['WINDOW_SIZE'] - mlp_ratio = swin_cfg['MLP_RATIO'] - qkv_bias = swin_cfg['QKV_BIAS'] - qk_scale = swin_cfg['QK_SCALE'] - drop_rate = swin_cfg['DROP_RATE'] - attn_drop_rate = swin_cfg['ATTN_DROP_RATE'] - drop_path_rate = swin_cfg['DROP_PATH_RATE'] - norm_layer = nn.LayerNorm - ape = swin_cfg['APE'] - patch_norm = swin_cfg['PATCH_NORM'] - use_checkpoint = swin_cfg['USE_CHECKPOINT'] - out_indices = swin_cfg.get('OUT_INDICES', [0,1,2,3]) - - swin = D2SwinTransformer( - swin_cfg, - pretrain_img_size, - patch_size, - in_chans, - embed_dim, - depths, - num_heads, - window_size, - mlp_ratio, - qkv_bias, - qk_scale, - drop_rate, - attn_drop_rate, - drop_path_rate, - norm_layer, - ape, - patch_norm, - out_indices, - use_checkpoint=use_checkpoint, - ) - - if cfg['MODEL']['BACKBONE']['LOAD_PRETRAINED'] is True: - filename = cfg['MODEL']['BACKBONE']['PRETRAINED'] - with PathManager.open(filename, "rb") as f: - ckpt = torch.load(f, map_location=cfg['device'])['model'] - swin.load_weights(ckpt, swin_cfg.get('PRETRAINED_LAYERS', ['*']), cfg['VERBOSE']) - - return swin \ No newline at end of file diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/inceptionv4.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/inceptionv4.py deleted file mode 
100644 index b14916f140712298866c943ebdb4ebad67d72fc4..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/inceptionv4.py +++ /dev/null @@ -1,381 +0,0 @@ -from __future__ import division, absolute_import -import torch -import torch.nn as nn -import torch.utils.model_zoo as model_zoo - -__all__ = ['inceptionv4'] -""" -Code imported from https://github.com/Cadene/pretrained-models.pytorch -""" - -pretrained_settings = { - 'inceptionv4': { - 'imagenet': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/inceptionv4-8e4777a0.pth', - 'input_space': 'RGB', - 'input_size': [3, 299, 299], - 'input_range': [0, 1], - 'mean': [0.5, 0.5, 0.5], - 'std': [0.5, 0.5, 0.5], - 'num_classes': 1000 - }, - 'imagenet+background': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/inceptionv4-8e4777a0.pth', - 'input_space': 'RGB', - 'input_size': [3, 299, 299], - 'input_range': [0, 1], - 'mean': [0.5, 0.5, 0.5], - 'std': [0.5, 0.5, 0.5], - 'num_classes': 1001 - } - } -} - - -class BasicConv2d(nn.Module): - - def __init__(self, in_planes, out_planes, kernel_size, stride, padding=0): - super(BasicConv2d, self).__init__() - self.conv = nn.Conv2d( - in_planes, - out_planes, - kernel_size=kernel_size, - stride=stride, - padding=padding, - bias=False - ) # verify bias false - self.bn = nn.BatchNorm2d( - out_planes, - eps=0.001, # value found in tensorflow - momentum=0.1, # default pytorch value - affine=True - ) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - x = self.conv(x) - x = self.bn(x) - x = self.relu(x) - return x - - -class Mixed_3a(nn.Module): - - def __init__(self): - super(Mixed_3a, self).__init__() - self.maxpool = nn.MaxPool2d(3, stride=2) - self.conv = BasicConv2d(64, 96, kernel_size=3, stride=2) - - def forward(self, x): - x0 = self.maxpool(x) - x1 = self.conv(x) - out = torch.cat((x0, x1), 1) - return out - - -class Mixed_4a(nn.Module): - - def __init__(self): - super(Mixed_4a, self).__init__() - - self.branch0 = nn.Sequential( - BasicConv2d(160, 64, kernel_size=1, stride=1), - BasicConv2d(64, 96, kernel_size=3, stride=1) - ) - - self.branch1 = nn.Sequential( - BasicConv2d(160, 64, kernel_size=1, stride=1), - BasicConv2d(64, 64, kernel_size=(1, 7), stride=1, padding=(0, 3)), - BasicConv2d(64, 64, kernel_size=(7, 1), stride=1, padding=(3, 0)), - BasicConv2d(64, 96, kernel_size=(3, 3), stride=1) - ) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - out = torch.cat((x0, x1), 1) - return out - - -class Mixed_5a(nn.Module): - - def __init__(self): - super(Mixed_5a, self).__init__() - self.conv = BasicConv2d(192, 192, kernel_size=3, stride=2) - self.maxpool = nn.MaxPool2d(3, stride=2) - - def forward(self, x): - x0 = self.conv(x) - x1 = self.maxpool(x) - out = torch.cat((x0, x1), 1) - return out - - -class Inception_A(nn.Module): - - def __init__(self): - super(Inception_A, self).__init__() - self.branch0 = BasicConv2d(384, 96, kernel_size=1, stride=1) - - self.branch1 = nn.Sequential( - BasicConv2d(384, 64, kernel_size=1, stride=1), - BasicConv2d(64, 96, kernel_size=3, stride=1, padding=1) - ) - - self.branch2 = nn.Sequential( - BasicConv2d(384, 64, kernel_size=1, stride=1), - BasicConv2d(64, 96, kernel_size=3, stride=1, padding=1), - BasicConv2d(96, 96, kernel_size=3, stride=1, padding=1) - ) - - self.branch3 = nn.Sequential( - nn.AvgPool2d(3, stride=1, padding=1, count_include_pad=False), - BasicConv2d(384, 96, kernel_size=1, stride=1) - ) - - def forward(self, x): - x0 = 
self.branch0(x) - x1 = self.branch1(x) - x2 = self.branch2(x) - x3 = self.branch3(x) - out = torch.cat((x0, x1, x2, x3), 1) - return out - - -class Reduction_A(nn.Module): - - def __init__(self): - super(Reduction_A, self).__init__() - self.branch0 = BasicConv2d(384, 384, kernel_size=3, stride=2) - - self.branch1 = nn.Sequential( - BasicConv2d(384, 192, kernel_size=1, stride=1), - BasicConv2d(192, 224, kernel_size=3, stride=1, padding=1), - BasicConv2d(224, 256, kernel_size=3, stride=2) - ) - - self.branch2 = nn.MaxPool2d(3, stride=2) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - x2 = self.branch2(x) - out = torch.cat((x0, x1, x2), 1) - return out - - -class Inception_B(nn.Module): - - def __init__(self): - super(Inception_B, self).__init__() - self.branch0 = BasicConv2d(1024, 384, kernel_size=1, stride=1) - - self.branch1 = nn.Sequential( - BasicConv2d(1024, 192, kernel_size=1, stride=1), - BasicConv2d( - 192, 224, kernel_size=(1, 7), stride=1, padding=(0, 3) - ), - BasicConv2d( - 224, 256, kernel_size=(7, 1), stride=1, padding=(3, 0) - ) - ) - - self.branch2 = nn.Sequential( - BasicConv2d(1024, 192, kernel_size=1, stride=1), - BasicConv2d( - 192, 192, kernel_size=(7, 1), stride=1, padding=(3, 0) - ), - BasicConv2d( - 192, 224, kernel_size=(1, 7), stride=1, padding=(0, 3) - ), - BasicConv2d( - 224, 224, kernel_size=(7, 1), stride=1, padding=(3, 0) - ), - BasicConv2d( - 224, 256, kernel_size=(1, 7), stride=1, padding=(0, 3) - ) - ) - - self.branch3 = nn.Sequential( - nn.AvgPool2d(3, stride=1, padding=1, count_include_pad=False), - BasicConv2d(1024, 128, kernel_size=1, stride=1) - ) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - x2 = self.branch2(x) - x3 = self.branch3(x) - out = torch.cat((x0, x1, x2, x3), 1) - return out - - -class Reduction_B(nn.Module): - - def __init__(self): - super(Reduction_B, self).__init__() - - self.branch0 = nn.Sequential( - BasicConv2d(1024, 192, kernel_size=1, stride=1), - BasicConv2d(192, 192, kernel_size=3, stride=2) - ) - - self.branch1 = nn.Sequential( - BasicConv2d(1024, 256, kernel_size=1, stride=1), - BasicConv2d( - 256, 256, kernel_size=(1, 7), stride=1, padding=(0, 3) - ), - BasicConv2d( - 256, 320, kernel_size=(7, 1), stride=1, padding=(3, 0) - ), BasicConv2d(320, 320, kernel_size=3, stride=2) - ) - - self.branch2 = nn.MaxPool2d(3, stride=2) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - x2 = self.branch2(x) - out = torch.cat((x0, x1, x2), 1) - return out - - -class Inception_C(nn.Module): - - def __init__(self): - super(Inception_C, self).__init__() - - self.branch0 = BasicConv2d(1536, 256, kernel_size=1, stride=1) - - self.branch1_0 = BasicConv2d(1536, 384, kernel_size=1, stride=1) - self.branch1_1a = BasicConv2d( - 384, 256, kernel_size=(1, 3), stride=1, padding=(0, 1) - ) - self.branch1_1b = BasicConv2d( - 384, 256, kernel_size=(3, 1), stride=1, padding=(1, 0) - ) - - self.branch2_0 = BasicConv2d(1536, 384, kernel_size=1, stride=1) - self.branch2_1 = BasicConv2d( - 384, 448, kernel_size=(3, 1), stride=1, padding=(1, 0) - ) - self.branch2_2 = BasicConv2d( - 448, 512, kernel_size=(1, 3), stride=1, padding=(0, 1) - ) - self.branch2_3a = BasicConv2d( - 512, 256, kernel_size=(1, 3), stride=1, padding=(0, 1) - ) - self.branch2_3b = BasicConv2d( - 512, 256, kernel_size=(3, 1), stride=1, padding=(1, 0) - ) - - self.branch3 = nn.Sequential( - nn.AvgPool2d(3, stride=1, padding=1, count_include_pad=False), - BasicConv2d(1536, 256, kernel_size=1, stride=1) - ) - - def 
forward(self, x): - x0 = self.branch0(x) - - x1_0 = self.branch1_0(x) - x1_1a = self.branch1_1a(x1_0) - x1_1b = self.branch1_1b(x1_0) - x1 = torch.cat((x1_1a, x1_1b), 1) - - x2_0 = self.branch2_0(x) - x2_1 = self.branch2_1(x2_0) - x2_2 = self.branch2_2(x2_1) - x2_3a = self.branch2_3a(x2_2) - x2_3b = self.branch2_3b(x2_2) - x2 = torch.cat((x2_3a, x2_3b), 1) - - x3 = self.branch3(x) - - out = torch.cat((x0, x1, x2, x3), 1) - return out - - -class InceptionV4(nn.Module): - """Inception-v4. - - Reference: - Szegedy et al. Inception-v4, Inception-ResNet and the Impact of Residual - Connections on Learning. AAAI 2017. - - Public keys: - - ``inceptionv4``: InceptionV4. - """ - - def __init__(self, num_classes, loss, **kwargs): - super(InceptionV4, self).__init__() - self.loss = loss - - self.features = nn.Sequential( - BasicConv2d(3, 32, kernel_size=3, stride=2), - BasicConv2d(32, 32, kernel_size=3, stride=1), - BasicConv2d(32, 64, kernel_size=3, stride=1, padding=1), - Mixed_3a(), - Mixed_4a(), - Mixed_5a(), - Inception_A(), - Inception_A(), - Inception_A(), - Inception_A(), - Reduction_A(), # Mixed_6a - Inception_B(), - Inception_B(), - Inception_B(), - Inception_B(), - Inception_B(), - Inception_B(), - Inception_B(), - Reduction_B(), # Mixed_7a - Inception_C(), - Inception_C(), - Inception_C() - ) - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.classifier = nn.Linear(1536, num_classes) - - def forward(self, x): - f = self.features(x) - v = self.global_avgpool(f) - v = v.view(v.size(0), -1) - - if not self.training: - return v - - y = self.classifier(v) - - if self.loss == 'softmax': - return y - elif self.loss == 'triplet': - return y, v - else: - raise KeyError('Unsupported loss: {}'.format(self.loss)) - - -def init_pretrained_weights(model, model_url): - """Initializes model with pretrained weights. - - Layers that don't match with pretrained layers in name or size are kept unchanged. - """ - pretrain_dict = model_zoo.load_url(model_url) - model_dict = model.state_dict() - pretrain_dict = { - k: v - for k, v in pretrain_dict.items() - if k in model_dict and model_dict[k].size() == v.size() - } - model_dict.update(pretrain_dict) - model.load_state_dict(model_dict) - - -def inceptionv4(num_classes, loss='softmax', pretrained=True, **kwargs): - model = InceptionV4(num_classes, loss, **kwargs) - if pretrained: - model_url = pretrained_settings['inceptionv4']['imagenet']['url'] - init_pretrained_weights(model, model_url) - return model diff --git a/spaces/xiangdy/chatGPT/assets/Kelpy-Codos.js b/spaces/xiangdy/chatGPT/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/xiangdy/chatGPT/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. 
-// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/xuetao/bingo3/src/components/external-link.tsx b/spaces/xuetao/bingo3/src/components/external-link.tsx deleted file mode 100644 index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/components/external-link.tsx +++ /dev/null @@ -1,30 +0,0 @@ -export function ExternalLink({ - href, - children -}: { - href: string - children: React.ReactNode -}) { - return ( - - {children} - - - ) -} diff --git a/spaces/xxie92/antibody_visulization/diffab/tools/dock/base.py b/spaces/xxie92/antibody_visulization/diffab/tools/dock/base.py deleted file mode 100644 index 68e52b6d33bf45433fccaffe5481af9d21f15bc4..0000000000000000000000000000000000000000 --- a/spaces/xxie92/antibody_visulization/diffab/tools/dock/base.py +++ /dev/null @@ -1,28 +0,0 @@ -import abc -from typing import List - - -FilePath = str - - -class DockingEngine(abc.ABC): - - @abc.abstractmethod - def __enter__(self): - pass - - @abc.abstractmethod - def __exit__(self, typ, value, traceback): - pass - - @abc.abstractmethod - def set_receptor(self, pdb_path: FilePath): - pass - - @abc.abstractmethod - def set_ligand(self, pdb_path: FilePath): - pass - - @abc.abstractmethod - def dock(self) -> List[FilePath]: - pass diff --git a/spaces/yejijue/img-to-music/share_btn.py b/spaces/yejijue/img-to-music/share_btn.py deleted file mode 100644 index 351a8f6252414dc48fd9972867f875a002731c19..0000000000000000000000000000000000000000 --- a/spaces/yejijue/img-to-music/share_btn.py +++ /dev/null @@ -1,104 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function 
uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - async function getOutputMusicFile(audioEL){ - const res = await fetch(audioEL.src); - const blob = await res.blob(); - const audioId = Date.now() % 200; - const fileName = `img-to-music-${{audioId}}.wav`; - const musicBlob = new File([blob], fileName, { type: 'audio/wav' }); - console.log(musicBlob); - return musicBlob; - } - - async function audioToBase64(audioFile) { - return new Promise((resolve, reject) => { - let reader = new FileReader(); - reader.readAsDataURL(audioFile); - reader.onload = () => resolve(reader.result); - reader.onerror = error => reject(error); - - }); - } - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgEl = gradioEl.querySelector('#input-img img'); - const prompts = gradioEl.querySelector('#prompts_out textarea').value; - const outputMusic = gradioEl.querySelector('#music-output audio'); - const outputMusic_src = gradioEl.querySelector('#music-output audio').src; - const outputMusic_name = outputMusic_src.split('/').pop(); - let titleTxt = outputMusic_name; - //if(titleTxt.length > 100){ - // titleTxt = titleTxt.slice(0, 100) + ' ...'; - //} - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputMusic){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const inputFile = await getInputImgFile(inputImgEl); - const urlInputImg = await uploadFile(inputFile); - const musicFile = await getOutputMusicFile(outputMusic); - const dataOutputMusic = await uploadFile(musicFile); - - const descriptionMd = `#### Input img: - - -#### Prompts out: -${prompts} - -#### Music: - - -`; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/yeqingmei123/face-test/e4e/utils/common.py b/spaces/yeqingmei123/face-test/e4e/utils/common.py deleted file mode 100644 index b19e18ddcb78b06678fa18e4a76da44fc511b789..0000000000000000000000000000000000000000 --- a/spaces/yeqingmei123/face-test/e4e/utils/common.py +++ /dev/null @@ -1,55 +0,0 @@ -from PIL import Image -import matplotlib.pyplot as plt - - -# Log images -def log_input_image(x, opts): - return tensor2im(x) - - -def tensor2im(var): - # var 
shape: (3, H, W) - var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = ((var + 1) / 2) - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype('uint8')) - - -def vis_faces(log_hooks): - display_count = len(log_hooks) - fig = plt.figure(figsize=(8, 4 * display_count)) - gs = fig.add_gridspec(display_count, 3) - for i in range(display_count): - hooks_dict = log_hooks[i] - fig.add_subplot(gs[i, 0]) - if 'diff_input' in hooks_dict: - vis_faces_with_id(hooks_dict, fig, gs, i) - else: - vis_faces_no_id(hooks_dict, fig, gs, i) - plt.tight_layout() - return fig - - -def vis_faces_with_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face']) - plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input']))) - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']), - float(hooks_dict['diff_target']))) - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target']))) - - -def vis_faces_no_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face'], cmap="gray") - plt.title('Input') - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target') - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output') diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/lib/tools.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/lib/tools.py deleted file mode 100644 index ea368694b904bd7ad340da4a2cb93bacb403a99e..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/PIPNet/lib/tools.py +++ /dev/null @@ -1,189 +0,0 @@ -import cv2 -import sys -import os - - -def make_pipnet(): - cmds = [ - "cd ./third_party/PIPNet/FaceBoxesV2/utils/ && chmod +x ./make.sh " - "&& bash ./make.sh " - "&& cd - ", - ] - for cmd in cmds: - os.system(cmd) - print('[PIPNet.lib.tools] nms .o file built successfully.') - - -make_pipnet() - - -from math import floor -from third_party.PIPNet.FaceBoxesV2.faceboxes_detector import * - -import torch -import torch.nn.parallel -import torch.utils.data -import torchvision.transforms as transforms -import torchvision.models as models - -from third_party.PIPNet.lib.networks import * -from third_party.PIPNet.lib.functions import * -from third_party.PIPNet.reverse_index import ri1, ri2 - - -make_abs_path = lambda fn: os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__)), fn)) - - -class Config: - def __init__(self): - self.det_head = "pip" - self.net_stride = 32 - self.batch_size = 16 - self.init_lr = 0.0001 - self.num_epochs = 60 - self.decay_steps = [30, 50] - self.input_size = 256 - self.backbone = "resnet101" - self.pretrained = True - self.criterion_cls = "l2" - self.criterion_reg = "l1" - self.cls_loss_weight = 10 - self.reg_loss_weight = 1 - self.num_lms = 98 - self.save_interval = self.num_epochs - self.num_nb = 10 - self.use_gpu = True - self.gpu_id = 3 - - -def get_lmk_model(): - - cfg = Config() - - resnet101 = models.resnet101(pretrained=cfg.pretrained) - net = Pip_resnet101( - resnet101, - cfg.num_nb, - num_lms=cfg.num_lms, - input_size=cfg.input_size, - net_stride=cfg.net_stride, - ) - - if cfg.use_gpu: - device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - else: - device = torch.device("cpu") - net = net.to(device) - - weight_file = 
make_abs_path('../../../weights/PIPNet/epoch59.pth') - state_dict = torch.load(weight_file, map_location=device) - net.load_state_dict(state_dict) - - detector = FaceBoxesDetector( - "FaceBoxes", - make_abs_path("../../../weights/PIPNet/FaceBoxesV2.pth"), - use_gpu=torch.cuda.is_available(), - device=device, - ) - return net, detector - - -def demo_image( - image_file, - net, - detector, - input_size=256, - net_stride=32, - num_nb=10, - use_gpu=True, - device="cuda:0", -): - - my_thresh = 0.6 - det_box_scale = 1.2 - net.eval() - preprocess = transforms.Compose( - [ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ] - ) - reverse_index1, reverse_index2, max_len = ri1, ri2, 17 - # image = cv2.imread(image_file) - image = image_file - image_height, image_width, _ = image.shape - detections, _ = detector.detect(image, my_thresh, 1) - lmks = [] - for i in range(len(detections)): - det_xmin = detections[i][2] - det_ymin = detections[i][3] - det_width = detections[i][4] - det_height = detections[i][5] - det_xmax = det_xmin + det_width - 1 - det_ymax = det_ymin + det_height - 1 - - det_xmin -= int(det_width * (det_box_scale - 1) / 2) - # remove a part of top area for alignment, see paper for details - det_ymin += int(det_height * (det_box_scale - 1) / 2) - det_xmax += int(det_width * (det_box_scale - 1) / 2) - det_ymax += int(det_height * (det_box_scale - 1) / 2) - det_xmin = max(det_xmin, 0) - det_ymin = max(det_ymin, 0) - det_xmax = min(det_xmax, image_width - 1) - det_ymax = min(det_ymax, image_height - 1) - det_width = det_xmax - det_xmin + 1 - det_height = det_ymax - det_ymin + 1 - - # cv2.rectangle(image, (det_xmin, det_ymin), (det_xmax, det_ymax), (0, 0, 255), 2) - - det_crop = image[det_ymin:det_ymax, det_xmin:det_xmax, :] - det_crop = cv2.resize(det_crop, (input_size, input_size)) - inputs = Image.fromarray(det_crop[:, :, ::-1].astype("uint8"), "RGB") - inputs = preprocess(inputs).unsqueeze(0) - inputs = inputs.to(device) - ( - lms_pred_x, - lms_pred_y, - lms_pred_nb_x, - lms_pred_nb_y, - outputs_cls, - max_cls, - ) = forward_pip(net, inputs, preprocess, input_size, net_stride, num_nb) - lms_pred = torch.cat((lms_pred_x, lms_pred_y), dim=1).flatten() - tmp_nb_x = lms_pred_nb_x[reverse_index1, reverse_index2].view(98, max_len) - tmp_nb_y = lms_pred_nb_y[reverse_index1, reverse_index2].view(98, max_len) - tmp_x = torch.mean(torch.cat((lms_pred_x, tmp_nb_x), dim=1), dim=1).view(-1, 1) - tmp_y = torch.mean(torch.cat((lms_pred_y, tmp_nb_y), dim=1), dim=1).view(-1, 1) - lms_pred_merge = torch.cat((tmp_x, tmp_y), dim=1).flatten() - lms_pred = lms_pred.cpu().numpy() - lms_pred_merge = lms_pred_merge.cpu().numpy() - lmk_ = [] - for i in range(98): - x_pred = lms_pred_merge[i * 2] * det_width - y_pred = lms_pred_merge[i * 2 + 1] * det_height - - # cv2.circle( - # image, - # (int(x_pred) + det_xmin, int(y_pred) + det_ymin), - # 1, - # (0, 0, 255), - # 1, - # ) - - lmk_.append([int(x_pred) + det_xmin, int(y_pred) + det_ymin]) - lmks.append(np.array(lmk_)) - - # image_bgr = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) - # cv2.imwrite("./1_out.jpg", image_bgr) - - return lmks - - -if __name__ == "__main__": - net, detector = get_lmk_model() - demo_image( - "/apdcephfs/private_ahbanliang/codes/Real-ESRGAN-master/tmp_frames/yanikefu/frame00000046.png", - net, - detector, - ) diff --git a/spaces/yiningmao/metaphor-detection-baseline/modeling.py b/spaces/yiningmao/metaphor-detection-baseline/modeling.py deleted file mode 
100644 index d107593339e42c1eeb0ada4fb39393ff6832f5a2..0000000000000000000000000000000000000000 --- a/spaces/yiningmao/metaphor-detection-baseline/modeling.py +++ /dev/null @@ -1,403 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn - -from utils import Config -from transformers import AutoTokenizer, AutoModel - - -class AutoModelForSequenceClassification(nn.Module): - """Base model for sequence classification""" - - def __init__(self, args, Model, config, num_labels=2): - """Initialize the model""" - super(AutoModelForSequenceClassification, self).__init__() - self.num_labels = num_labels - self.encoder = Model - self.config = config - self.dropout = nn.Dropout(args.drop_ratio) - self.classifier = nn.Linear(config.hidden_size, num_labels) - self.logsoftmax = nn.LogSoftmax(dim=1) - - self._init_weights(self.classifier) - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Embedding)): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - def forward( - self, - input_ids, - target_mask=None, - token_type_ids=None, - attention_mask=None, - labels=None, - head_mask=None, - ): - """ - Inputs: - `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the word token indices in the vocabulary - `target_mask`: a torch.LongTensor of shape [batch_size, sequence_length] with the mask for target wor. 1 for target word and 0 otherwise. - `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices - selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). - `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. - It's a mask to be used if the input sequence length is smaller than the max input sequence length in the current batch. - It's the mask that we typically use for attention when a batch has varying length sentences. - `labels`: optional labels for the classification output: torch.LongTensor of shape [batch_size, sequence_length] - with indices selected in [0, ..., num_labels]. - `head_mask`: an optional torch.Tensor of shape [num_heads] or [num_layers, num_heads] with indices between 0 and 1. - It's a mask to be used to nullify some heads of the transformer. 1.0 => head is fully masked, 0.0 => head is not masked. 
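            Returns:
                the NLL loss (a scalar tensor) when `labels` is provided; otherwise per-class
                log-probabilities of shape [batch_size, num_labels] computed from the pooled output.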
- """ - outputs = self.encoder( - input_ids, - token_type_ids=token_type_ids, - attention_mask=attention_mask, - head_mask=head_mask, - ) - pooled_output = outputs[1] - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - logits = self.logsoftmax(logits) - - if labels is not None: - loss_fct = nn.NLLLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - return loss - return logits - - -class AutoModelForTokenClassification(nn.Module): - """Base model for token classification""" - - def __init__(self, args, Model, config, num_labels=2): - """Initialize the model""" - super(AutoModelForTokenClassification, self).__init__() - self.num_labels = num_labels - self.bert = Model - self.config = config - self.dropout = nn.Dropout(args.drop_ratio) - self.classifier = nn.Linear(config.hidden_size, num_labels) - self.logsoftmax = nn.LogSoftmax(dim=1) - - self._init_weights(self.classifier) - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Embedding)): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - def forward( - self, - input_ids, - target_mask, - token_type_ids=None, - attention_mask=None, - labels=None, - head_mask=None, - ): - """ - Inputs: - `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the word token indices in the vocabulary - `target_mask`: a torch.LongTensor of shape [batch_size, sequence_length] with the mask for target wor. 1 for target word and 0 otherwise. - `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices - selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). - `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. - It's a mask to be used if the input sequence length is smaller than the max input sequence length in the current batch. - It's the mask that we typically use for attention when a batch has varying length sentences. - `labels`: optional labels for the classification output: torch.LongTensor of shape [batch_size, sequence_length] - with indices selected in [0, ..., num_labels]. - `head_mask`: an optional torch.Tensor of shape [num_heads] or [num_layers, num_heads] with indices between 0 and 1. - It's a mask to be used to nullify some heads of the transformer. 1.0 => head is fully masked, 0.0 => head is not masked. 
- """ - outputs = self.bert( - input_ids, - token_type_ids=token_type_ids, - attention_mask=attention_mask, - head_mask=head_mask, - ) - sequence_output = outputs[0] # [batch, max_len, hidden] - target_output = sequence_output * target_mask.unsqueeze(2) - target_output = self.dropout(target_output) - target_output = target_output.sum(1) / target_mask.sum() # [batch, hideen] - - logits = self.classifier(target_output) - logits = self.logsoftmax(logits) - - if labels is not None: - loss_fct = nn.NLLLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - return loss - return logits - - -class AutoModelForSequenceClassification_SPV(nn.Module): - """MelBERT with only SPV""" - - def __init__(self, args, Model, config, num_labels=2): - """Initialize the model""" - super(AutoModelForSequenceClassification_SPV, self).__init__() - self.num_labels = num_labels - self.encoder = Model - self.config = config - self.dropout = nn.Dropout(args.drop_ratio) - self.classifier = nn.Linear(config.hidden_size * 2, num_labels) - self.logsoftmax = nn.LogSoftmax(dim=1) - - self._init_weights(self.classifier) - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Embedding)): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - def forward( - self, - input_ids, - target_mask, - token_type_ids=None, - attention_mask=None, - labels=None, - head_mask=None, - ): - """ - Inputs: - `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the word token indices in the vocabulary - `target_mask`: a torch.LongTensor of shape [batch_size, sequence_length] with the mask for target wor. 1 for target word and 0 otherwise. - `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices - selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). - `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. - `labels`: optional labels for the classification output: torch.LongTensor of shape [batch_size, sequence_length] - with indices selected in [0, ..., num_labels]. - `head_mask`: an optional torch.Tensor of shape [num_heads] or [num_layers, num_heads] with indices between 0 and 1. - It's a mask to be used to nullify some heads of the transformer. 1.0 => head is fully masked, 0.0 => head is not masked. 
- """ - outputs = self.encoder( - input_ids, - token_type_ids=token_type_ids, - attention_mask=attention_mask, - head_mask=head_mask, - ) - sequence_output = outputs[0] # [batch, max_len, hidden] - pooled_output = outputs[1] # [batch, hidden] - - # Get target ouput with target mask - target_output = sequence_output * target_mask.unsqueeze(2) # [batch, hidden] - - # dropout - target_output = self.dropout(target_output) - pooled_output = self.dropout(pooled_output) - - # Get mean value of target output if the target output consistst of more than one token - target_output = target_output.mean(1) - - logits = self.classifier(torch.cat([target_output, pooled_output], dim=1)) - logits = self.logsoftmax(logits) - - if labels is not None: - loss_fct = nn.NLLLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - return loss - return logits - - -class AutoModelForSequenceClassification_MIP(nn.Module): - """MelBERT with only MIP""" - - def __init__(self, args, Model, config, num_labels=2): - """Initialize the model""" - super(AutoModelForSequenceClassification_MIP, self).__init__() - self.num_labels = num_labels - self.encoder = Model - self.config = config - self.dropout = nn.Dropout(args.drop_ratio) - self.args = args - self.classifier = nn.Linear(config.hidden_size * 2, num_labels) - self.logsoftmax = nn.LogSoftmax(dim=1) - - self._init_weights(self.classifier) - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Embedding)): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - def forward( - self, - input_ids, - input_ids_2, - target_mask, - target_mask_2, - attention_mask_2, - token_type_ids=None, - attention_mask=None, - labels=None, - head_mask=None, - ): - """ - Inputs: - `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the first input token indices in the vocabulary - `input_ids_2`: a torch.LongTensor of shape [batch_size, sequence_length] with the second input token indicies - `target_mask`: a torch.LongTensor of shape [batch_size, sequence_length] with the mask for target word in the first input. 1 for target word and 0 otherwise. - `target_mask_2`: a torch.LongTensor of shape [batch_size, sequence_length] with the mask for target word in the second input. 1 for target word and 0 otherwise. - `attention_mask_2`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1] for the second input. - `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices - selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). - `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1] for the first input. - `labels`: optional labels for the classification output: torch.LongTensor of shape [batch_size, sequence_length] - with indices selected in [0, ..., num_labels]. - `head_mask`: an optional torch.Tensor of shape [num_heads] or [num_layers, num_heads] with indices between 0 and 1. - It's a mask to be used to nullify some heads of the transformer. 1.0 => head is fully masked, 0.0 => head is not masked. 
- """ - # First encoder for full sentence - outputs = self.encoder( - input_ids, - token_type_ids=token_type_ids, - attention_mask=attention_mask, - head_mask=head_mask, - ) - sequence_output = outputs[0] # [batch, max_len, hidden] - - # Get target ouput with target mask - target_output = sequence_output * target_mask.unsqueeze(2) - target_output = self.dropout(target_output) - target_output = target_output.sum(1) / target_mask.sum() # [batch, hidden] - - # Second encoder for only the target word - outputs_2 = self.encoder(input_ids_2, attention_mask=attention_mask_2, head_mask=head_mask) - sequence_output_2 = outputs_2[0] # [batch, max_len, hidden] - - # Get target ouput with target mask - target_output_2 = sequence_output_2 * target_mask_2.unsqueeze(2) - target_output_2 = self.dropout(target_output_2) - target_output_2 = target_output_2.sum(1) / target_mask_2.sum() - - logits = self.classifier(torch.cat([target_output_2, target_output], dim=1)) - logits = self.logsoftmax(logits) - - if labels is not None: - loss_fct = nn.NLLLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - return loss - return logits - - -class AutoModelForSequenceClassification_SPV_MIP(nn.Module): - """MelBERT""" - - def __init__(self, args, Model, config, num_labels=2): - """Initialize the model""" - super(AutoModelForSequenceClassification_SPV_MIP, self).__init__() - self.num_labels = num_labels - self.encoder = Model - self.config = config - self.dropout = nn.Dropout(args.drop_ratio) - self.args = args - - self.SPV_linear = nn.Linear(config.hidden_size * 2, args.classifier_hidden) - self.MIP_linear = nn.Linear(config.hidden_size * 2, args.classifier_hidden) - self.classifier = nn.Linear(args.classifier_hidden * 2, num_labels) - self._init_weights(self.SPV_linear) - self._init_weights(self.MIP_linear) - - self.logsoftmax = nn.LogSoftmax(dim=1) - self._init_weights(self.classifier) - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Embedding)): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - def forward( - self, - input_ids, - input_ids_2, - target_mask, - target_mask_2, - attention_mask_2, - token_type_ids=None, - attention_mask=None, - labels=None, - head_mask=None, - ): - """ - Inputs: - `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the first input token indices in the vocabulary - `input_ids_2`: a torch.LongTensor of shape [batch_size, sequence_length] with the second input token indicies - `target_mask`: a torch.LongTensor of shape [batch_size, sequence_length] with the mask for target word in the first input. 1 for target word and 0 otherwise. - `target_mask_2`: a torch.LongTensor of shape [batch_size, sequence_length] with the mask for target word in the second input. 1 for target word and 0 otherwise. - `attention_mask_2`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1] for the second input. - `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices - selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). 
- `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1] for the first input. - `labels`: optional labels for the classification output: torch.LongTensor of shape [batch_size, sequence_length] - with indices selected in [0, ..., num_labels]. - `head_mask`: an optional torch.Tensor of shape [num_heads] or [num_layers, num_heads] with indices between 0 and 1. - It's a mask to be used to nullify some heads of the transformer. 1.0 => head is fully masked, 0.0 => head is not masked. - """ - - # First encoder for full sentence - outputs = self.encoder( - input_ids, - token_type_ids=token_type_ids, - attention_mask=attention_mask, - head_mask=head_mask, - ) - sequence_output = outputs[0] # [batch, max_len, hidden] - pooled_output = outputs[1] # [batch, hidden] - - # Get target ouput with target mask - target_output = sequence_output * target_mask.unsqueeze(2) - - # dropout - target_output = self.dropout(target_output) - pooled_output = self.dropout(pooled_output) - - target_output = target_output.mean(1) # [batch, hidden] - - # Second encoder for only the target word - outputs_2 = self.encoder(input_ids_2, attention_mask=attention_mask_2, head_mask=head_mask) - sequence_output_2 = outputs_2[0] # [batch, max_len, hidden] - - # Get target ouput with target mask - target_output_2 = sequence_output_2 * target_mask_2.unsqueeze(2) - target_output_2 = self.dropout(target_output_2) - target_output_2 = target_output_2.mean(1) - - # Get hidden vectors each from SPV and MIP linear layers - SPV_hidden = self.SPV_linear(torch.cat([pooled_output, target_output], dim=1)) - MIP_hidden = self.MIP_linear(torch.cat([target_output_2, target_output], dim=1)) - - logits = self.classifier(self.dropout(torch.cat([SPV_hidden, MIP_hidden], dim=1))) - logits = self.logsoftmax(logits) - - if labels is not None: - loss_fct = nn.NLLLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - return loss - return logits diff --git a/spaces/yongjae/whisper-webui/src/segments.py b/spaces/yongjae/whisper-webui/src/segments.py deleted file mode 100644 index ec2650dceade5d0b2022264f6419115eab085aea..0000000000000000000000000000000000000000 --- a/spaces/yongjae/whisper-webui/src/segments.py +++ /dev/null @@ -1,55 +0,0 @@ -from typing import Any, Dict, List - -import copy - -def merge_timestamps(timestamps: List[Dict[str, Any]], merge_window: float = 5, max_merge_size: float = 30, padding_left: float = 1, padding_right: float = 1): - result = [] - - if len(timestamps) == 0: - return result - if max_merge_size is None: - return timestamps - - if padding_left is None: - padding_left = 0 - if padding_right is None: - padding_right = 0 - - processed_time = 0 - current_segment = None - - for i in range(len(timestamps)): - next_segment = timestamps[i] - - delta = next_segment['start'] - processed_time - - # Note that segments can still be longer than the max merge size, they just won't be merged in that case - if current_segment is None or (merge_window is not None and delta > merge_window) \ - or next_segment['end'] - current_segment['start'] > max_merge_size: - # Finish the current segment - if current_segment is not None: - # Add right padding - finish_padding = min(padding_right, delta / 2) if delta < padding_left + padding_right else padding_right - current_segment['end'] += finish_padding - delta -= finish_padding - - result.append(current_segment) - - # Start a new segment - current_segment = copy.deepcopy(next_segment) - - # Pad the segment - 
current_segment['start'] = current_segment['start'] - min(padding_left, delta) - processed_time = current_segment['end'] - - else: - # Merge the segment - current_segment['end'] = next_segment['end'] - processed_time = current_segment['end'] - - # Add the last segment - if current_segment is not None: - current_segment['end'] += padding_right - result.append(current_segment) - - return result \ No newline at end of file diff --git a/spaces/yonikremer/grouped-sampling-demo/available_models.py b/spaces/yonikremer/grouped-sampling-demo/available_models.py deleted file mode 100644 index 76cbf38669aaa9858d8bcfb4a56004bd4d5fdc84..0000000000000000000000000000000000000000 --- a/spaces/yonikremer/grouped-sampling-demo/available_models.py +++ /dev/null @@ -1,10 +0,0 @@ -AVAILABLE_MODELS = ( - "facebook/opt-iml-max-1.3b", - "facebook/opt-iml-max-30b", - "gpt2", - "chavinlo/alpaca-13b", - "decapoda-research/llama-7b-hf", - "decapoda-research/llama-13b-hf", - "decapoda-research/llama-30b-hf", - "decapoda-research/llama-65b-hf", -) diff --git a/spaces/ysharma/Gradio-demo-streaming/app.py b/spaces/ysharma/Gradio-demo-streaming/app.py deleted file mode 100644 index c50a1bdf436d4adb4c5a385ebec5fa3e941e543f..0000000000000000000000000000000000000000 --- a/spaces/ysharma/Gradio-demo-streaming/app.py +++ /dev/null @@ -1,89 +0,0 @@ -import time -import gradio as gr -import os -import json -import requests - -#Streaming endpoint -API_URL = os.getenv("API_URL") + "/generate_stream" - -def predict(inputs, top_p, temperature, top_k, repetition_penalty, history=[]): - if not inputs.startswith("User: "): - inputs = "User: " + inputs + "\n" - payload = { - "inputs": inputs, #"My name is Jane and I", - "parameters": { - "details": True, - "do_sample": True, - "max_new_tokens": 100, - "repetition_penalty": repetition_penalty, #1.03, - "seed": 0, - "temperature": temperature, #0.5, - "top_k": top_k, #10, - "top_p": top_p #0.95 - } - } - - headers = { - 'accept': 'text/event-stream', - 'Content-Type': 'application/json' - } - - history.append(inputs) - # make a POST request to the API endpoint using the requests.post method, passing in stream=True - response = requests.post(API_URL, headers=headers, json=payload, stream=True) - token_counter = 0 - partial_words = "" - # loop over the response data using the iter_lines method of the response object - for chunk in response.iter_lines(): - # check whether each line is non-empty - if chunk: - # decode each line as response data is in bytes - partial_words = partial_words + json.loads(chunk.decode()[5:])['token']['text'] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter+=1 - yield chat, history #{chatbot: chat, state: history} #[(partial_words, history)] - -def reset_textbox(): - return gr.update(value='') - -title = """

            <h1 align="center">🔥Streaming your 🤖Chatbot output with Gradio🚀</h1>
            """ -description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form: -``` -User: -Assistant: -User: -Assistant: -... -``` -In this app, you can explore the outputs of a 20B large language model. -""" - -with gr.Blocks(css = """#col_container {width: 700px; margin-left: auto; margin-right: auto;} - #chatbot {height: 400px; overflow: auto;}""") as demo: - gr.HTML(title) - with gr.Column(elem_id = "col_container"): - chatbot = gr.Chatbot(elem_id='chatbot') #c - inputs = gr.Textbox(placeholder= "Hi my name is Joe.", label= "Type an input and press Enter") #t - state = gr.State([]) #s - b1 = gr.Button() - - #inputs, top_p, temperature, top_k, repetition_penalty - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider( minimum=-0, maximum=1.0, value=0.95, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( minimum=-0, maximum=5.0, value=0.5, step=0.1, interactive=True, label="Temperature",) - top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",) - repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", ) - - - inputs.submit( predict, [inputs, top_p, temperature, top_k, repetition_penalty, state], [chatbot, state],) - b1.click( predict, [inputs, top_p, temperature, top_k, repetition_penalty, state], [chatbot, state],) - b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - - gr.Markdown(description) - demo.queue().launch(debug=True) diff --git a/spaces/zhang-wei-jian/docker/node_modules/picomatch/lib/scan.js b/spaces/zhang-wei-jian/docker/node_modules/picomatch/lib/scan.js deleted file mode 100644 index e59cd7a1357b184b0c70409b161fcf0ce5433af1..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/picomatch/lib/scan.js +++ /dev/null @@ -1,391 +0,0 @@ -'use strict'; - -const utils = require('./utils'); -const { - CHAR_ASTERISK, /* * */ - CHAR_AT, /* @ */ - CHAR_BACKWARD_SLASH, /* \ */ - CHAR_COMMA, /* , */ - CHAR_DOT, /* . */ - CHAR_EXCLAMATION_MARK, /* ! */ - CHAR_FORWARD_SLASH, /* / */ - CHAR_LEFT_CURLY_BRACE, /* { */ - CHAR_LEFT_PARENTHESES, /* ( */ - CHAR_LEFT_SQUARE_BRACKET, /* [ */ - CHAR_PLUS, /* + */ - CHAR_QUESTION_MARK, /* ? */ - CHAR_RIGHT_CURLY_BRACE, /* } */ - CHAR_RIGHT_PARENTHESES, /* ) */ - CHAR_RIGHT_SQUARE_BRACKET /* ] */ -} = require('./constants'); - -const isPathSeparator = code => { - return code === CHAR_FORWARD_SLASH || code === CHAR_BACKWARD_SLASH; -}; - -const depth = token => { - if (token.isPrefix !== true) { - token.depth = token.isGlobstar ? Infinity : 1; - } -}; - -/** - * Quickly scans a glob pattern and returns an object with a handful of - * useful properties, like `isGlob`, `path` (the leading non-glob, if it exists), - * `glob` (the actual pattern), `negated` (true if the path starts with `!` but not - * with `!(`) and `negatedExtglob` (true if the path starts with `!(`). - * - * ```js - * const pm = require('picomatch'); - * console.log(pm.scan('foo/bar/*.js')); - * { isGlob: true, input: 'foo/bar/*.js', base: 'foo/bar', glob: '*.js' } - * ``` - * @param {String} `str` - * @param {Object} `options` - * @return {Object} Returns an object with tokens and regex source string. 
- * @api public - */ - -const scan = (input, options) => { - const opts = options || {}; - - const length = input.length - 1; - const scanToEnd = opts.parts === true || opts.scanToEnd === true; - const slashes = []; - const tokens = []; - const parts = []; - - let str = input; - let index = -1; - let start = 0; - let lastIndex = 0; - let isBrace = false; - let isBracket = false; - let isGlob = false; - let isExtglob = false; - let isGlobstar = false; - let braceEscaped = false; - let backslashes = false; - let negated = false; - let negatedExtglob = false; - let finished = false; - let braces = 0; - let prev; - let code; - let token = { value: '', depth: 0, isGlob: false }; - - const eos = () => index >= length; - const peek = () => str.charCodeAt(index + 1); - const advance = () => { - prev = code; - return str.charCodeAt(++index); - }; - - while (index < length) { - code = advance(); - let next; - - if (code === CHAR_BACKWARD_SLASH) { - backslashes = token.backslashes = true; - code = advance(); - - if (code === CHAR_LEFT_CURLY_BRACE) { - braceEscaped = true; - } - continue; - } - - if (braceEscaped === true || code === CHAR_LEFT_CURLY_BRACE) { - braces++; - - while (eos() !== true && (code = advance())) { - if (code === CHAR_BACKWARD_SLASH) { - backslashes = token.backslashes = true; - advance(); - continue; - } - - if (code === CHAR_LEFT_CURLY_BRACE) { - braces++; - continue; - } - - if (braceEscaped !== true && code === CHAR_DOT && (code = advance()) === CHAR_DOT) { - isBrace = token.isBrace = true; - isGlob = token.isGlob = true; - finished = true; - - if (scanToEnd === true) { - continue; - } - - break; - } - - if (braceEscaped !== true && code === CHAR_COMMA) { - isBrace = token.isBrace = true; - isGlob = token.isGlob = true; - finished = true; - - if (scanToEnd === true) { - continue; - } - - break; - } - - if (code === CHAR_RIGHT_CURLY_BRACE) { - braces--; - - if (braces === 0) { - braceEscaped = false; - isBrace = token.isBrace = true; - finished = true; - break; - } - } - } - - if (scanToEnd === true) { - continue; - } - - break; - } - - if (code === CHAR_FORWARD_SLASH) { - slashes.push(index); - tokens.push(token); - token = { value: '', depth: 0, isGlob: false }; - - if (finished === true) continue; - if (prev === CHAR_DOT && index === (start + 1)) { - start += 2; - continue; - } - - lastIndex = index + 1; - continue; - } - - if (opts.noext !== true) { - const isExtglobChar = code === CHAR_PLUS - || code === CHAR_AT - || code === CHAR_ASTERISK - || code === CHAR_QUESTION_MARK - || code === CHAR_EXCLAMATION_MARK; - - if (isExtglobChar === true && peek() === CHAR_LEFT_PARENTHESES) { - isGlob = token.isGlob = true; - isExtglob = token.isExtglob = true; - finished = true; - if (code === CHAR_EXCLAMATION_MARK && index === start) { - negatedExtglob = true; - } - - if (scanToEnd === true) { - while (eos() !== true && (code = advance())) { - if (code === CHAR_BACKWARD_SLASH) { - backslashes = token.backslashes = true; - code = advance(); - continue; - } - - if (code === CHAR_RIGHT_PARENTHESES) { - isGlob = token.isGlob = true; - finished = true; - break; - } - } - continue; - } - break; - } - } - - if (code === CHAR_ASTERISK) { - if (prev === CHAR_ASTERISK) isGlobstar = token.isGlobstar = true; - isGlob = token.isGlob = true; - finished = true; - - if (scanToEnd === true) { - continue; - } - break; - } - - if (code === CHAR_QUESTION_MARK) { - isGlob = token.isGlob = true; - finished = true; - - if (scanToEnd === true) { - continue; - } - break; - } - - if (code === 
CHAR_LEFT_SQUARE_BRACKET) { - while (eos() !== true && (next = advance())) { - if (next === CHAR_BACKWARD_SLASH) { - backslashes = token.backslashes = true; - advance(); - continue; - } - - if (next === CHAR_RIGHT_SQUARE_BRACKET) { - isBracket = token.isBracket = true; - isGlob = token.isGlob = true; - finished = true; - break; - } - } - - if (scanToEnd === true) { - continue; - } - - break; - } - - if (opts.nonegate !== true && code === CHAR_EXCLAMATION_MARK && index === start) { - negated = token.negated = true; - start++; - continue; - } - - if (opts.noparen !== true && code === CHAR_LEFT_PARENTHESES) { - isGlob = token.isGlob = true; - - if (scanToEnd === true) { - while (eos() !== true && (code = advance())) { - if (code === CHAR_LEFT_PARENTHESES) { - backslashes = token.backslashes = true; - code = advance(); - continue; - } - - if (code === CHAR_RIGHT_PARENTHESES) { - finished = true; - break; - } - } - continue; - } - break; - } - - if (isGlob === true) { - finished = true; - - if (scanToEnd === true) { - continue; - } - - break; - } - } - - if (opts.noext === true) { - isExtglob = false; - isGlob = false; - } - - let base = str; - let prefix = ''; - let glob = ''; - - if (start > 0) { - prefix = str.slice(0, start); - str = str.slice(start); - lastIndex -= start; - } - - if (base && isGlob === true && lastIndex > 0) { - base = str.slice(0, lastIndex); - glob = str.slice(lastIndex); - } else if (isGlob === true) { - base = ''; - glob = str; - } else { - base = str; - } - - if (base && base !== '' && base !== '/' && base !== str) { - if (isPathSeparator(base.charCodeAt(base.length - 1))) { - base = base.slice(0, -1); - } - } - - if (opts.unescape === true) { - if (glob) glob = utils.removeBackslashes(glob); - - if (base && backslashes === true) { - base = utils.removeBackslashes(base); - } - } - - const state = { - prefix, - input, - start, - base, - glob, - isBrace, - isBracket, - isGlob, - isExtglob, - isGlobstar, - negated, - negatedExtglob - }; - - if (opts.tokens === true) { - state.maxDepth = 0; - if (!isPathSeparator(code)) { - tokens.push(token); - } - state.tokens = tokens; - } - - if (opts.parts === true || opts.tokens === true) { - let prevIndex; - - for (let idx = 0; idx < slashes.length; idx++) { - const n = prevIndex ? 
prevIndex + 1 : start; - const i = slashes[idx]; - const value = input.slice(n, i); - if (opts.tokens) { - if (idx === 0 && start !== 0) { - tokens[idx].isPrefix = true; - tokens[idx].value = prefix; - } else { - tokens[idx].value = value; - } - depth(tokens[idx]); - state.maxDepth += tokens[idx].depth; - } - if (idx !== 0 || value !== '') { - parts.push(value); - } - prevIndex = i; - } - - if (prevIndex && prevIndex + 1 < input.length) { - const value = input.slice(prevIndex + 1); - parts.push(value); - - if (opts.tokens) { - tokens[tokens.length - 1].value = value; - depth(tokens[tokens.length - 1]); - state.maxDepth += tokens[tokens.length - 1].depth; - } - } - - state.slashes = slashes; - state.parts = parts; - } - - return state; -}; - -module.exports = scan; diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/visualization/grad.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/visualization/grad.py deleted file mode 100644 index d766fa5b0add79e64efccb1d9b0511c4b904cc93..0000000000000000000000000000000000000000 --- a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/visualization/grad.py +++ /dev/null @@ -1,117 +0,0 @@ -""" -@Date: 2021/11/06 -@description: -""" -import cv2 -import numpy as np -import torch -import matplotlib.pyplot as plt - -from utils.conversion import depth2xyz - - -def convert_img(value, h, need_nor=True, cmap=None): - value = value.clone().detach().cpu().numpy()[None] - if need_nor: - value -= value.min() - value /= value.max() - value.min() - grad_img = value.repeat(int(h), axis=0) - - if cmap is None: - grad_img = grad_img[..., np.newaxis].repeat(3, axis=-1) - elif cmap == cv2.COLORMAP_PLASMA: - grad_img = cv2.applyColorMap((grad_img * 255).astype(np.uint8), colormap=cmap) - grad_img = grad_img[..., ::-1] - grad_img = grad_img.astype(np.float32) / 255.0 - elif cmap == 'HSV': - grad_img = np.round(grad_img * 1000) / 1000.0 - grad_img = grad_img[..., np.newaxis].repeat(3, axis=-1) - grad_img[..., 0] = grad_img[..., 0] * 180 - grad_img[..., 1] = 255 - grad_img[..., 2] = 255 - grad_img = grad_img.astype(np.uint8) - grad_img = cv2.cvtColor(grad_img, cv2.COLOR_HSV2RGB) - grad_img = grad_img.astype(np.float32) / 255.0 - return grad_img - - -def show_grad(depth, grad_conv, h=5, show=False): - """ - :param h: - :param depth: [patch_num] - :param grad_conv: - :param show: - :return: - """ - - direction, angle, grad = get_all(depth[None], grad_conv) - - # depth_img = convert_img(depth, h) - # angle_img = convert_img(angle[0], h) - # grad_img = convert_img(grad[0], depth.shape[-1] // 4 - h * 2) - depth_img = convert_img(depth, h, cmap=cv2.COLORMAP_PLASMA) - angle_img = convert_img(angle[0], h, cmap='HSV') - - # vis_grad = grad[0] / grad[0].max() / 2 + 0.5 - grad_img = convert_img(grad[0], h) - img = np.concatenate([depth_img, angle_img, grad_img], axis=0) - if show: - plt.imshow(img) - plt.show() - return img - - -def get_grad(direction): - """ - :param direction: [b patch_num] - :return:[b patch_num] - """ - a = torch.roll(direction, -1, dims=1) # xz[i+1] - b = torch.roll(direction, 1, dims=1) # xz[i-1] - grad = torch.acos(torch.clip(a[..., 0] * b[..., 0] + a[..., 1] * b[..., 1], -1+1e-6, 1-1e-6)) - return grad - - -def get_grad2(angle, grad_conv): - """ - :param angle: [b patch_num] - :param grad_conv: - :return:[b patch_num] - """ - angle = torch.sin(angle) - angle = angle + 1 - - angle = torch.cat([angle[..., -1:], angle, angle[..., :1]], dim=-1) - grad = grad_conv(angle[:, None]) # [b, patch_num] -> [b, 1, patch_num] - # grad = 
torch.abs(grad) - return grad.reshape(angle.shape[0], -1) - - -def get_edge_angle(direction): - """ - :param direction: [b patch_num 2] - :return: - """ - angle = torch.atan2(direction[..., 1], direction[..., 0]) - return angle - - -def get_edge_direction(depth): - xz = depth2xyz(depth)[..., ::2] - direction = torch.roll(xz, -1, dims=1) - xz # direct[i] = xz[i+1] - xz[i] - direction = direction / direction.norm(p=2, dim=-1)[..., None] - return direction - - -def get_all(depth, grad_conv): - """ - - :param grad_conv: - :param depth: [b patch_num] - :return: - """ - direction = get_edge_direction(depth) - angle = get_edge_angle(direction) - # angle_grad = get_grad(direction) - angle_grad = get_grad2(angle, grad_conv) # signed gradient - return direction, angle, angle_grad diff --git a/spaces/zhuyuheng/IMossGPT/modules/utils.py b/spaces/zhuyuheng/IMossGPT/modules/utils.py deleted file mode 100644 index e1516e1fad4761787070d24e867bea57d86ac9ed..0000000000000000000000000000000000000000 --- a/spaces/zhuyuheng/IMossGPT/modules/utils.py +++ /dev/null @@ -1,548 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import json -import os -import datetime -import hashlib -import csv -import requests -import re -import html -import sys -import subprocess - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -import mdtex2html -from markdown import markdown -from pygments import highlight -from pygments.lexers import get_lexer_by_name -from pygments.formatters import HtmlFormatter -import pandas as pd - -from modules.presets import * -from . import shared -from modules.config import retrieve_proxy - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - -def predict(current_model, *args): - iter = current_model.predict(*args) - for i in iter: - yield i - -def billing_info(current_model): - return current_model.billing_info() - -def set_key(current_model, *args): - return current_model.set_key(*args) - -def load_chat_history(current_model, *args): - return current_model.load_chat_history(*args) - -def interrupt(current_model, *args): - return current_model.interrupt(*args) - -def reset(current_model, *args): - return current_model.reset(*args) - -def retry(current_model, *args): - iter = current_model.retry(*args) - for i in iter: - yield i - -def delete_first_conversation(current_model, *args): - return current_model.delete_first_conversation(*args) - -def delete_last_conversation(current_model, *args): - return current_model.delete_last_conversation(*args) - -def set_system_prompt(current_model, *args): - return current_model.set_system_prompt(*args) - -def save_chat_history(current_model, *args): - return current_model.save_chat_history(*args) - -def export_markdown(current_model, *args): - return current_model.export_markdown(*args) - -def load_chat_history(current_model, *args): - return current_model.load_chat_history(*args) - -def set_token_upper_limit(current_model, *args): - return current_model.set_token_upper_limit(*args) - -def set_temperature(current_model, *args): - current_model.set_temperature(*args) - -def set_top_p(current_model, *args): - current_model.set_top_p(*args) - -def set_n_choices(current_model, *args): - current_model.set_n_choices(*args) - -def set_stop_sequence(current_model, *args): - current_model.set_stop_sequence(*args) - -def set_max_tokens(current_model, 
*args): - current_model.set_max_tokens(*args) - -def set_presence_penalty(current_model, *args): - current_model.set_presence_penalty(*args) - -def set_frequency_penalty(current_model, *args): - current_model.set_frequency_penalty(*args) - -def set_logit_bias(current_model, *args): - current_model.set_logit_bias(*args) - -def set_user_identifier(current_model, *args): - current_model.set_user_identifier(*args) - -def set_single_turn(current_model, *args): - current_model.set_single_turn(*args) - -def handle_file_upload(current_model, *args): - return current_model.handle_file_upload(*args) - -def like(current_model, *args): - return current_model.like(*args) - -def dislike(current_model, *args): - return current_model.dislike(*args) - - -def count_token(message): - encoding = tiktoken.get_encoding("cl100k_base") - input_str = f"role: {message['role']}, content: {message['content']}" - length = len(encoding.encode(input_str)) - return length - - -def markdown_to_html_with_syntax_highlight(md_str): - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("text", stripall=True) - - formatter = HtmlFormatter() - highlighted_code = highlight(code, lexer, formatter) - - return f'
            <pre><code class="{lang}">{highlighted_code}</code></pre>
            ' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - if inline_code_pattern.search(non_code): - result.append(markdown(non_code, extensions=["tables"])) - else: - result.append(mdtex2html.convert(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"\n```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - result += ALREADY_CONVERTED_MARK - return result - - -def convert_asis(userinput): - return ( - f'

            <p style="white-space:pre-wrap;">{html.escape(userinput)}</p>

            ' - + ALREADY_CONVERTED_MARK - ) - - -def detect_converted_mark(userinput): - try: - if userinput.endswith(ALREADY_CONVERTED_MARK): - return True - else: - return False - except: - return True - - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def save_file(filename, system, history, chatbot, user_name): - logging.debug(f"{user_name} 保存对话历史中……") - os.makedirs(os.path.join(HISTORY_DIR, user_name), exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR, user_name, filename), "w") as f: - json.dump(json_s, f) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR, user_name, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.debug(f"{user_name} 保存对话历史完毕") - return os.path.join(HISTORY_DIR, user_name, filename) - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.debug(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = sorted_by_pinyin(files) - if files == []: - files = [""] - logging.debug(f"files are:{files}") - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False, user_name=""): - logging.debug(f"从用户 {user_name} 中获取历史记录文件名列表") - return get_file_names(os.path.join(HISTORY_DIR, user_name), plain) - - -def load_template(filename, mode=0): - logging.debug(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices - ) - - -def get_template_names(plain=False): - logging.debug("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - logging.debug(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_textbox(): - logging.debug("重置文本框") - return gr.update(value="") - - -def reset_default(): - default_host = 
shared.state.reset_api_host() - retrieve_proxy("") - return gr.update(value=default_host), gr.update(value=""), "API-Host 和代理已重置" - - -def change_api_host(host): - shared.state.set_api_host(host) - msg = f"API-Host更改为了{host}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - retrieve_proxy(proxy) - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if s is None: - return "" - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - try: - with retrieve_proxy(): - response = requests.get("https://ipapi.co/json/", timeout=5) - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if data["reason"] == "RateLimited": - return ( - i18n("您的IP区域:未知。") - ) - else: - return i18n("获取IP地理位置失败。原因:") + f"{data['reason']}" + i18n("。你仍然可以使用聊天功能。") - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = i18n("您的IP区域:") + f"{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: - return n - i - 1 - total = total - lst[i] - return 1 - - -def start_outputing(): - logging.debug("显示取消按钮,隐藏发送按钮") - return gr.Button.update(visible=False), gr.Button.update(visible=True) - - -def end_outputing(): - return ( - gr.Button.update(visible=True), - gr.Button.update(visible=False), - ) - - -def cancel_outputing(): - logging.info("中止输出……") - shared.state.interrupt() - - -def transfer_input(inputs): - # 一次性返回,降低延迟 - textbox = reset_textbox() - outputing = start_outputing() - return ( - inputs, - gr.update(value=""), - gr.Button.update(visible=False), - gr.Button.update(visible=True), - ) - - - -def run(command, desc=None, errdesc=None, custom_env=None, live=False): - if desc is not None: - print(desc) - if live: - result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - raise RuntimeError(f"""{errdesc or 'Error running command'}. -Command: {command} -Error code: {result.returncode}""") - - return "" - result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - message = f"""{errdesc or 'Error running command'}. 
- Command: {command} - Error code: {result.returncode} - stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else ''} - stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else ''} - """ - raise RuntimeError(message) - return result.stdout.decode(encoding="utf8", errors="ignore") - -def versions_html(): - git = os.environ.get('GIT', "git") - python_version = ".".join([str(x) for x in sys.version_info[0:3]]) - try: - commit_hash = run(f"{git} rev-parse HEAD").strip() - except Exception: - commit_hash = "" - if commit_hash != "": - short_commit = commit_hash[0:7] - commit_info = f"{short_commit}" - else: - commit_info = "unknown \U0001F615" - return f""" - Python: {python_version} -  •  - Gradio: {gr.__version__} -  •  - Commit: {commit_info} - """ - -def add_source_numbers(lst, source_name = "Source", use_source = True): - if use_source: - return [f'[{idx+1}]\t "{item[0]}"\n{source_name}: {item[1]}' for idx, item in enumerate(lst)] - else: - return [f'[{idx+1}]\t "{item}"' for idx, item in enumerate(lst)] - -def add_details(lst): - nodes = [] - for index, txt in enumerate(lst): - brief = txt[:25].replace("\n", "") - nodes.append( - f"
            <details><summary>{brief}...</summary><p>{txt}</p></details>
            " - ) - return nodes - - -def sheet_to_string(sheet, sheet_name = None): - result = [] - for index, row in sheet.iterrows(): - row_string = "" - for column in sheet.columns: - row_string += f"{column}: {row[column]}, " - row_string = row_string.rstrip(", ") - row_string += "." - result.append(row_string) - return result - -def excel_to_string(file_path): - # 读取Excel文件中的所有工作表 - excel_file = pd.read_excel(file_path, engine='openpyxl', sheet_name=None) - - # 初始化结果字符串 - result = [] - - # 遍历每一个工作表 - for sheet_name, sheet_data in excel_file.items(): - - # 处理当前工作表并添加到结果字符串 - result += sheet_to_string(sheet_data, sheet_name=sheet_name) - - - return result - -def get_last_day_of_month(any_day): - # The day 28 exists in every month. 4 days later, it's always next month - next_month = any_day.replace(day=28) + datetime.timedelta(days=4) - # subtracting the number of the current day brings us back one month - return next_month - datetime.timedelta(days=next_month.day) - -def get_model_source(model_name, alternative_source): - if model_name == "gpt2-medium": - return "https://huggingface.co/gpt2-medium" - -def refresh_ui_elements_on_load(current_model, selected_model_name): - return toggle_like_btn_visibility(selected_model_name) - -def toggle_like_btn_visibility(selected_model_name): - if selected_model_name == "xmchat": - return gr.update(visible=True) - else: - return gr.update(visible=False) diff --git a/spaces/zxy666/bingo-chatai666/src/app/page.tsx b/spaces/zxy666/bingo-chatai666/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/zxy666/bingo-chatai666/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
            - <DynamicComponentWithNoSSR /> - </> - ) -}
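
A side note on the `merge_timestamps` helper from the deleted `src/segments.py` above: the sketch below shows how its merge window, size cap, and padding interact. The import path, the input values, and the printed result are illustrative assumptions — the output shown was traced by hand from the deleted implementation, not produced by running the space.

```python
from src.segments import merge_timestamps  # import path assumed from the deleted file's location

# Three hypothetical Whisper segments: the first two are close together,
# the third is far away.
timestamps = [
    {"start": 1.0, "end": 3.0},
    {"start": 4.0, "end": 6.0},    # 1 s gap <= merge_window, so it joins the first segment
    {"start": 20.0, "end": 22.0},  # 14 s gap > merge_window, so it starts a new segment
]

merged = merge_timestamps(
    timestamps,
    merge_window=5,      # maximum silence (seconds) tolerated between merged segments
    max_merge_size=30,   # stop merging once the combined span would exceed this many seconds
    padding_left=1,      # extra audio kept before each merged segment
    padding_right=1,     # extra audio kept after each merged segment
)

# Hand-tracing the deleted code gives:
# [{'start': 0.0, 'end': 7.0}, {'start': 19.0, 'end': 23.0}]
print(merged)
```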